Find us on your favorite podcast platform including: Spotify | YouTube | Apple | Amazon
1% Better Fast 15 Podcast – Quick Links
Connect with Robert Wallace on LinkedIn
Connect with Craig Thielen on LinkedIn
Key Takeaways
- AI as a Team Member is a mindset shift: Instead of treating AI as a simple tool, organizations can treat it like a teammate with defined strengths and roles
- Personas and avatars improve engagement: Giving AI a character, role, or persona helps teams interact with it more naturally and adopt it faster
- Context and onboarding matter: Just like a new employee, AI needs context about the organization, goals, and expectations to perform well
- Feedback loops improve results: AI improves through ongoing feedback and training, similar to coaching a human team member
- Trust but verify AI outputs: AI can make mistakes. Treat its input like advice from a colleague: useful, but something to validate
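The persona, onboarding, and feedback ideas above can be sketched in code. The sketch below is illustrative only; the `AITeamMember` class, the "Bob" persona, and all strings are hypothetical and not from the episode. It shows one way a persona, organizational context, and an accumulating feedback log might be combined into the system prompt sent to whatever LLM API a team uses.

```python
from dataclasses import dataclass, field

@dataclass
class AITeamMember:
    """A hypothetical wrapper that treats a chat model like a teammate:
    it gets a persona, onboarding context, and an ongoing feedback log."""
    name: str
    persona: str       # e.g. "agile coach", "red-team challenger"
    org_context: str   # who it serves, the goals, what good looks like
    feedback_log: list = field(default_factory=list)

    def system_prompt(self) -> str:
        # Persona + context + accumulated feedback become the system prompt
        # sent with every request to the underlying model.
        parts = [
            f"You are {self.name}, acting as a {self.persona}.",
            f"Context: {self.org_context}",
        ]
        if self.feedback_log:
            parts.append("Apply this feedback from the team: "
                         + "; ".join(self.feedback_log))
        return "\n".join(parts)

    def give_feedback(self, note: str) -> None:
        # The "performance review": feedback is retained and
        # shapes every future prompt.
        self.feedback_log.append(note)

# Onboard a red-team persona like the "Bob" example in the episode
bob = AITeamMember(
    name="Bob",
    persona="red-team challenger who pokes holes in our assumptions",
    org_context="Serving the product team; goal is sharper release plans.",
)
bob.give_feedback("Be more specific about deployment risks.")
print(bob.system_prompt())
```

The key design point, matching the takeaways above, is that feedback is not thrown away after one answer: it accumulates and re-enters the context, which is what makes the AI behave like a coached teammate rather than a one-shot tool.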
1% Better Fast 15 Podcast – Transcript
Craig (00:14)
I’m Craig Thielen and this is the 1% Better Podcast. And today we are doing a Fast 15 sponsored by Trissential and Trissential is the Shape of Business Improvement. Speaking of Trissential, Robert Wallace is with me and both Robert and I work for Trissential and we’re here to talk about AI as a team member. Welcome to the podcast, Robert.
Robert (00:35)
Thank you, Craig.
Craig (00:36)
All right, well, let’s dig in here a little bit. Tell us just a little bit about yourself and what you do at Trissential.
Robert (00:43)
Well, I’m going on 30 years in IT. My career has been project management and now transformation consulting. Outside of work, I do a lot of scuba diving and boating. That’s my claim to fame.
Craig (00:55)
Sounds great. And when you’re not in or around the ocean, you’re helping organizations get work done better. And a lot of that is around working with teams, building high-performing teams. And so what I want to talk to you about today is this idea of AI as a team member. Okay. So we all know about AI and interacting with it in its very basic sense. We ask it a question, it gives us a response, and we have more elaborate uses called prompt engineering. However, let’s talk specifically about a use case called AI as a team member. That’s something that you’ve been working with for the past couple of years now. And so why don’t you just describe: what is this idea of AI as a team member?
Robert (01:45)
Yeah, the way I think of it, it’s more of a mindset. When I think of now tackling any problem, whether it’s a client problem or my own work, it’s how do I leverage this person. So I really have a visual image of the AIs that I’m interacting with and I do have a different visual image depending on which AI it is in terms of how to interact with it and what it’s good at and where I can leverage it best just like any team member. What are its skills and strengths and how do we optimize those?
Craig (02:14)
So maybe give us some examples of where you’ve done this with a client. What was the context, what was the environment, what did the team look like, and how did you pull AI into that team to make that team better?
Robert (02:25)
Sure. Well, you know, the most popular AI is OpenAI’s ChatGPT. So that tends to be the first go-to, or Microsoft Copilot, depending on organizational security policies. We have many different clients we’ve done this with. With one in particular, we started with ChatGPT. They got approval to use it. We sat with their very large team all the way from requirements through development and support of their application, their B2B and B2C applications.
And then we built dozens of AI agents that help them at each step of their requirements gathering and scoping and prioritization and development testing all the way to deployment and rollout.
Craig (03:02)
So in that sense, was it kind of like helping them with the process, was it almost like a team coach, or did you in that case give it a specific persona or set of personas?
Robert (03:14)
Well, there are over a dozen AIs, so they all have their own persona. One of them was a coach persona. I trained it to be me for them as a leave-behind, along with about a 100-page playbook for how to do this agile, cloud, and new AI-driven development lifecycle.
Craig (03:32)
So when you’ve done this, Robert, how do you see the engagement level of the team? What are some of the challenges? What are some of the dynamics? I mean, it’s not a person with a head. It’s not a robot. It’s not even an avatar. It’s just somebody that’s part of meetings and conversations. So maybe describe what that experience is like, not for you, but for other people on the team that, like you said, have never had this mindset before.
Robert (03:55)
Yeah. It varies widely, all the way from shocked and speechless to can’t wait to get started, let’s jump right in. So it really does trigger, I think, the spectrum of human emotions, depending on two things. One is how much exposure and interaction they’ve had with any kind of AI up to that point in time. But also, I think every person has kind of a natural disposition toward change or new technology; some love it and jump in with both feet, no questions, and some resist till the very end. Right. So yeah, all different emotions. We do have a couple of techniques that I’ve found to be very helpful. And you mentioned they don’t have a face and a head; I actually do put faces and heads on them. So when we’re visualizing or building these AIs, I try to give them a little bit of a character or personality, so you create an avatar, even if it’s just a fun-looking, safe robot.
Craig (04:45)
to create an avatar, just to make it, yeah, just to humanize it, so to speak, I suppose, right?
Robert (04:53)
Yeah, give it a persona.
Craig (04:54)
Yeah. So are there certain personas that you think work better than others?
Robert (04:58)
I don’t think I’ve seen that pattern emerge yet. I’ve done this dozens, maybe hundreds of times; I think I need to do it more before a pattern emerges. I will say one of the better techniques is to brand the AI itself with something from the client, something familiar, either their logo or some version of their logo specific to that AI.
Craig (05:21)
Yeah, make it part of the organization, the team. Speaking of that, what are your thoughts around how to treat this AI avatar, this AI persona? Do you think that you need to treat it like a person in terms of there’s onboarding, there’s teaching them about the company and the strategies, the culture?
Robert (05:25)
Mm-hmm.
Craig (05:45)
throughout the whole lifecycle. I mean, there’s a number of things. If you were bringing a physical person on, you would do a number of things to sort of bring them along, coach them, mentor them, give them feedback. What are your thoughts on doing some of those same activities with an AI?
Robert (05:59)
Yeah, to be honest, I haven’t really thought of it in that lens, but now that you say it, I think that’s exactly what we do. I mean, that’s part of the scoping, and I consider myself a little bit of an AI engineer in that respect. A lot of what I do is try to figure out: what is the use case? How is this AI going to augment or accelerate or just enhance this human activity in some regard? And then, like you said, the AI needs sufficient context. These are general-purpose AIs, so you need to give them a purpose. So that’s how I interpret the word onboarding: providing enough context and understanding of why it exists, who it’s serving, what good looks like. And then that’s just your starting point. Then you have to commit to training it over a period of time.
Craig (06:48)
Yeah, I mean, I’ve seen some articles that take this even further in terms of literally treating it like a person, even doing performance reviews, doing feedback sessions. And I’ve tried that, and it’s actually quite useful. The AI needs context, it needs feedback, and it takes feedback quite well, as it
Robert (07:01)
yeah.
Craig (07:11)
as it turns out. Whereas sometimes as humans, we have certain traits, defensiveness, all sorts of different ways of taking feedback, not always positively. So I think that’s a quite useful way to think about it. And again, it humanizes it. So it’s not some robot trying to listen in on us or a robot trying to tell us what to do, but really just, well, how would we treat a person? You wouldn’t bring a person onto a team and expect them to be productive without any investment of time. Of course not. Right.
Robert (07:41)
Yeah, absolutely. And I think we’re saying the same thing; when I said training, it is that feedback loop. And for those listening who may not know, these are what we call generative AI, or large language models. These are predictive pattern models. So the AI itself is saying: this, this, therefore it must be this, based on how I understand the world around you. And a lot of the way it learns over time, and it really does learn, is through that feedback loop.
Craig (08:07)
Yeah, it’s interesting. The more you talk about it, if you are able to imagine AI as a persona, you would treat it differently. That fits with a lot of the things that we hear from our clients as well: AI gave me a false answer, or it told me something that wasn’t true, or it just doesn’t understand our business. Well, again, if you were working with an employee that you brought onto your team, you wouldn’t expect them to have all the answers. You wouldn’t expect them to know everything, or to know everything you know. So why would you expect an AI to have every answer, exactly what you were looking for? So I think it actually helps people just generally understand how AI works, in particular the large language models. They are designed to act like humans and speak like humans and learn like humans, right?
Robert (08:50)
Mm-hmm. Yeah, based on a limited amount of information: what do I think the answer to this question is going to be, or what it should be, or what you’re looking for? And it can be wrong, just like humans.
Craig (09:01)
So are there any tips or techniques, as you’ve done this a bunch of times, things that make it go easier or help people understand some of these interactions better?
Robert (09:13)
Well, yeah, there’s a few things. One is never blindly trust the output. Trust but verify, maybe. If you’re not the expert, seek expert advice or an opinion. It’s really no different than asking someone on the street: hey, what do you think of this problem or my idea? Do you like it or not? That’s the type of answer you’re getting. You’re just getting it on a larger scale of information sources. So that’s number one. I think number two is these are not “I know everything in the universe” models.
These are innovative, creative, thinking, predictive models that just have access to a large amount of training. And as such, the more narrowly you focus their purpose, the better result you’re going to get and the easier it’s going to be to train them over time to keep them from what we call drifting or hallucinating. Hallucinating is when you give it too much context or not enough. Either one is going to lead to this thing where you ask a question and get a weird response.
Craig (10:00)
Mm-hmm.
Robert (10:10)
And you’re like, this thing doesn’t work. No, it’s working perfectly. You’re just not working with it correctly.
Craig (10:15)
Yeah, that’s all great feedback. I guess one thing that I’ve noticed is it just takes practice. Once you start thinking about it in a way of bringing another person into a conversation, I start doing it even in one-on-ones. For example, in an interview, I interview somebody and I have an AI listening to the interview, and I tell it in advance: I want you to be a coach, I want feedback, so pay attention. It might give me some coaching during the interview. But at the end of the interview, I say, well, how did the interview go? How did I do in my role? How did the interviewee do? What were our strengths? What were our weaknesses? It gives a very objective set of feedback about things I could have done better or things I did well, for both the interviewer and the interviewee. So that’s interesting. Then even when I have a conversation with two or three people, I might invite an AI and introduce it and say, hey guys, I’m introducing a CEO, and they’re going to challenge us and ask tough questions. Or I’ll introduce it and sometimes give it a name. Like, who’s one of your best friends from high school? Bob? Okay, well, I’m going to invite Bob, and Bob’s going to challenge us. He’s going to
Robert (11:20)
Mm-hmm.
Craig (11:28)
do red team analysis, so he’s going to take sort of the opposite side, poke holes in everything that we do, and challenge our assumptions. Is that okay with you? And they say, yeah, that’d be fun, I’d love to do that, right? So again, part of it’s just teeing it up, setting that context, and then practicing. Because the more you do that, the more you go, wow, that was really fun, or that was really interesting; they had a completely different take than I would ever have thought of. And then they get more comfortable with it.
Robert (11:31)
Mm-hmm.
Mm-hmm.
Yeah, I think you’re describing, you know, the interesting dynamic, which is that it’s experience on both sides. It’s our experience with that particular AI.
Craig (12:00)
Mm-hmm.
Robert (12:02)
And every AI has a certain amount of memory, and all AIs differ in their ability to access that memory and use it in an ongoing, long-term, multi-day, multi-week, multi-year learning conversation. I have an AI that I built for myself years ago, and that’s the AI I use every day to say: here’s what I’m thinking; what do you think of this, or what should I do next, or where should I go, or what should I try? And it even knows me on a personal level, right? That AI has years of history built up.
Craig (12:33)
Yeah, it knows you better in some ways than you know yourself, probably.
Robert (12:36)
Well, you know what’s super interesting about what you just said? I said this to friends the other day: I’ve been using it for so long now that what I realized is the AI is actually helping me be more human. Because it is calling out those things: have you thought about this, or have you thought about how the other person is feeling about this, or thinking?
Craig (12:47)
Interesting.
Robert (12:55)
It’s not like your mom or your best friend saying it. It’s not that level of empathy, but it is that level of artificial intelligence, or machine intelligence, that’s saying: have you thought about this? It kind of broadens my mind a little bit. I think it’s helped me be more empathetic.
Craig (13:11)
Yeah, well again, if you ask it to, if you give it some of that guidance, you can have it be very analytical. You could have it be very empathetic or very spiritual or very emotional, or whatever bent you want to give it. It can do that quite well. Hey, the Fast 15 is already up. Any final thoughts about how to use AI as a team member?
Robert (13:32)
Just get in and use it. There are two schools of thought: get in and use as many as possible so you can experience the differences, or get in and really try to become an expert or master at one. Both have advantages and disadvantages. And just remember that these are evolving and changing and growing daily, just like us. So also give them a little bit of grace and a little bit of patience. It’s easy to get impatient with AIs.
Craig (13:57)
Yeah, it’s a good point. Again, like a human, if I were to ask you a question five years ago and then I ask you now, your answer might be different because you’ve grown, you’ve learned, you’ve got different perspectives, right? All right, well, thanks for being on the Fast 15, Robert.
Robert (14:09)
Thank you, Craig.
Check out other podcast episodes