Find us on your favorite podcast platform including: Spotify | YouTube | Apple | Amazon

1% Better Podcast Arijit Sengupta, Aible – Quick Links

Learn more about Aible
Get Arijit’s book, AI is a Waste of Money
Connect with Arijit Sengupta on LinkedIn
Connect with Craig Thielen on LinkedIn
Check out host Craig Thielen’s full bio page

  • Human-Centered AI Is Non-Negotiable: Arijit emphasizes that AI should enhance, not replace, human agency, calling for systems that serve individuals rather than dictate their behavior
  • Small, Specialized Models > Giant Black Boxes: The shift from large, generic models to small, purpose-built ones is underway. These specialized agents allow for more domain-relevant, user-controlled solutions
  • Most AI Projects Still Fail, Here’s Why: Up to 90% of AI projects fall short due to lack of user engagement, unclear problem definitions, and overly centralized top-down planning
  • Agentic AI Enables Scalable Innovation: Organizations should stop building monolithic AI “cathedrals” and instead empower users to create many small, composable agents that solve real problems
  • Explainability Is the New Differentiator: In enterprise settings, deterministic systems with audit trails and traceability matter far more than flashy, unpredictable demos

1% Better Podcast Arijit Sengupta, Aible – Transcript

[00:00:00.00] – Craig
Hello, I’m Craig Thielen, and this is the 1% Better Podcast. Today I’m speaking with Arijit Sengupta, Founder and CEO of Aible, one of the leading AI platforms out there. And Arijit, nice to have you back on 1% Better.

[00:00:20.10] – Arijit
Thank you so much. I think I was one of your first guests, so it’s good to see that you’re going from strength to strength.

[00:00:26.06] – Craig
Yes, you were actually number two. That was two and a half years ago, and I saw a post where we have 7,000 views, so it’s been quite a ride. Thanks for getting us started when the podcast was just an infant. And two and a half years ago is a long time, especially in the AI world. It seems like a lifetime, I bet. So maybe we start with that, Arijit. First of all, what’s changed for you personally in the last two and a half years? And then, reflecting back on the AI world itself, what’s changed?

[00:01:00.00] – Arijit
So I think the pendulum two and a half years back was moving more towards large models from large companies. Broadly, people were thinking that one huge model will do it all. And that was something I was deeply uncomfortable with. In fact, one of my things was always that AI made by the few for the many is very dangerous. I had written about this in the San Francisco Chronicle: the AI apocalypse is not going to be Skynet sending robots to take over the world, it will be an AI telling me who I have lunch with. And interestingly enough, at one point, OpenAI literally had an ad talking about how it will tell you who to have lunch with. I was like, Oh, my God, this is horrifying. You’re taking the whole human agency out of the equation. I’ve always been a humanist at heart. I love AI. That’s all I’ve done all my life. I came to the US to study AI. But for me, AI is a way to give people superpowers. Our tagline is always I am Aible. What I tell people is the I comes before the AI.

The AI is there to serve the I. How do you make sure that we think of AI as a way to give people superpowers, empower them, and actually build this human-centric world? Our goal is not to use AI to get rid of the human. Our goal is to use AI to make life better. I was just at this conference, and I was still hearing people saying things like, Oh, no, no, no, no. Forget these pesky humans. AI will do it all. I’m like, What are you doing? You are human still. You haven’t changed. Anyway, I do like the fact that the pendulum is swinging back towards smaller, specialized models. We’re talking a lot more about how the only people with the domain knowledge are actually the end users, and experts are not really solving the problem. 70% of AI projects are still failing, right? Yeah. And the number might be closer to 90, to be very honest.

[00:03:04.13] – Craig
Yeah, I think it could be somewhere up there for a lot of reasons. We’ll get into that. But first of all, I just love what you’re saying about human-centered AI. Everyone has to differentiate themselves because AI is such a broad, ambiguous term now. I think 5, 10 years ago, it used to have a more niche meaning, but now it means anything that’s sophisticated and powerful and intelligent. And so we’ve branded our AI capabilities human-centered AI because we believe what you just stated: at the end of the day, the human is in the middle, and this makes us as humans better, smarter, faster, more creative. But it’s there to support us and help us. We just believe that. And it also helps with a lot of our clients, large Fortune 500 multinationals, where there’s a lot of fear, uncertainty, and doubt. A lot of people don’t understand how it works, and they’re fearful of change or fearful for their job, and they don’t need to be, frankly. And so it helps to embrace it, learn it, understand it. In some ways, it’s no different than a phone.

I don’t think a lot of people would consider being in the information age or in an information job without these kinds of basic tools, a phone or a laptop, because they make us better, more connected humans. I love that. We’re very much aligned on that. So I want to just maybe talk a little bit… Sam Altman talks a lot about what OpenAI saw coming, what they built in their vision, and what they drove, and they were the first to put AI in everyone’s hands globally with ChatGPT. But he also has been very open about all the things that they didn’t get right and that they were surprised by. Two of them stand out. One: every time they designed the next generation, whether it went from 3 to 3.5 or 3.5 to 4, it was unpredictable. It didn’t do things that they thought it was going to do, which is scary, but also makes you think who’s in control of it, if anyone. And secondly, he said he would have gotten wrong, 10 years ago, where the first use cases would be. Where are the first jobs, the first types of functions, that would fully embrace it?

And he said it would have been reversed. It’s not the manual labor. It’s not automation. It’s people that want to be more creative. It’s writers. It’s people working in creative marketing and working in creative fields, what some thought was the hard, high-paid stuff. Just examples of where it was unpredictable. So for you, looking back at the last two, three, four, five years, where have things gone that you didn’t anticipate?

[00:05:59.03] – Arijit
Well, let’s take the two examples you just gave from Sam Altman. The first one was obvious, because if you look at transformer models, the fact that these things wouldn’t go from one generation to the other in a predictable way was pretty obvious. And that was the reason why, when we first started talking, ChatAible had already come out, the very first generative AI release of Aible, and we had always had a deterministic system double-checking the generative one. Because our premise was that you cannot get rid of hallucination. If you understand the actual underlying architecture, anybody who says we have eliminated hallucination is lying to you. You can ground it, you can improve it, you can do a lot of stuff to reduce it, but you can’t eliminate it as long as you’re using a transformer architecture. And that’s why at the very beginning we said, look, it has to be a system of tools that work together. And that has thankfully worked out extremely well. At the time, I was getting a lot of pushback from investors and customers that, oh, no, no, no, no, hallucination is a solved problem or will be a solved problem.

It’s gotten worse. I think so, yeah. No, it’s not a think-so, it’s a proven fact.

[00:07:14.13] – Craig
Just from my experience, us doing stuff that we thought we had figured out a year and a half, two years ago, we went to replicate it again, and it doesn’t work anymore. Things change so fast.

[00:07:29.06] – Arijit
Yeah, and the prompts definitely won’t work from generation to generation, which is why, in a very rapidly changing world, if you are manually crafting Gen AI solutions, you’re building something incredibly fragile. As the new model generations come, it’ll just fail. The second one you said is actually something very close to my heart. I was just speaking at the HPE conference last week, and one of the things I said is, do not ask me what’s a good use case. And the reason for it is I’m finding that when you empower end users, they will come up with use cases that I would have never thought of. In fact, we just had this Fortune 100 company where a customer found $12 million of value in a use case. And I kid you not, when I was told the use case, I said, This is a stupid use case. What are they doing? This is a waste of time and money. And then I was told how much money they had found from it. Just one person had found that much money from doing it. And this is the point. It’s funny that I fell into the trap given my first book, AI is a Waste of Money, was all about- Yeah, I remember that one.

How data scientists and business users don’t understand each other, and that’s the source of the disconnect. I was falling into that same trap. I assumed I understood what that business user was trying to do, but I do not understand their world as well as they do. That is the thing, that humility. I always say AI has to have humility. It has to explain its reasoning, what it’s doing, to the user, and take user guidance in every crucial decision. But I think we, the AI experts, have to have a lot more humility than we have, because the end user knows their world better than we do.

[00:09:12.13] – Craig
Yeah, I think there’s a lot of truth to it. There are a couple of examples. When we really jumped into the generative AI space two and a half years ago, when it came to the world, we thought the first thing we should do is teach people the basics so that we could teach them how to fish. And as it turned out, we were too early. We were teaching people something that they didn’t want to learn. They didn’t want to change. Then everyone started, and so we fell into the same trap. Everyone said, Well, you just got to tell us the use cases. Tell us, for my job, my department, what we should be doing with it. And we started creating hundreds and hundreds of use cases. And then they got into the mindset of, Well, here’s why it won’t work here, and here are all the reasons why, here are the problems with it. Now we’re back to almost square one. Then people started getting enamored by a single tool or a single large language model, or something like Copilot. They go, Oh, maybe that’s going to solve all of our problems, and then they find out quickly that it doesn’t.

And now we’re back to square one going, we have to teach them the basics because they know their customers, they know their challenges. And we go back to human-centered AI. Look at everything you do during the day. How do you interact with customers? What problems are you trying to solve? How are you trying to get competitive advantage? Now, apply AI to it. And how can AI help you do it better? And it’s more of a mind shift than it is learning a new technology.

[00:10:43.14] – Arijit
And one thing you have to be careful about when you talk about this. And again, I’m not saying you’re saying this, I’m just trying to point out for the audience: we are not saying take what you’re doing today and sprinkle some AI on top of it. That is not the point. I always challenge people to think of a process: what are your friction points? What are your risk points? Because what you can do with AI is really re-imagine the end-to-end process to take out friction and take out risk. So if I’m doing an order-to-cash process, my risks might be: do I have a stock-out in my inventory? Is this party creditworthy? Will they pay? Will I be able to ship in time? What will my tariffs be? These are all my risk points. My friction points might be that I have to write up an email response to the order, and things of that nature. If you really build with AI, think about it: today, the inventory check happens maybe two weeks after the order got taken. Can I do the inventory check as soon as the order comes in?

Can I do the credit check as soon as the order comes in? Can I do these various things differently so that while the order has come in, I’m doing something fundamentally different from what I was doing before?
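
To make that order-to-cash re-imagining concrete, here is a minimal sketch in Python of what moving the risk checks to order intake might look like. It is illustrative only; the `Order` shape and functions like `check_inventory` and `check_credit` are hypothetical stand-ins, not Aible’s API or any specific product.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    customer_id: str
    sku: str
    quantity: int

# Hypothetical risk checks. In the old process these ran days or weeks
# after order intake; here they all run the moment the order arrives.
def check_inventory(order: Order) -> bool:
    """Return True if stock covers the order (stub)."""
    return True

def check_credit(order: Order) -> bool:
    """Return True if the customer passes a credit check (stub)."""
    return True

def estimate_ship_date(order: Order) -> str:
    """Return a promised ship date (stub)."""
    return "2025-01-15"

def on_order_received(order: Order) -> dict:
    # Run every risk check up front instead of two weeks later.
    risks = {
        "stock_out": not check_inventory(order),
        "credit_risk": not check_credit(order),
    }
    return {
        "order_id": order.order_id,
        "risks": risks,
        "promised_ship_date": estimate_ship_date(order),
        "needs_human_review": any(risks.values()),
    }

print(on_order_received(Order("o-1", "c-9", "sku-42", 3)))
```

The point of the sketch is the sequencing, not the stubs: the checks that used to trail the order by weeks become part of the intake event itself.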

[00:11:57.08] – Craig
Right. Yeah, I love that. That’s a good way to look at it. So let’s hit that a little bit. You talk a lot about this term impact first AI. So walk through that in a practical organization that’s trying to do something with AI, and how would that guide them?

[00:12:15.00] – Arijit
So the problem that happens is we do AI projects as science projects, because often they’re built by IT or data science. They’re still trying to understand cool tech, and they build cool tech. What we believe is you go to the business user. I just had this meeting earlier today where we were having a discussion about an agentic solution in the telco space, and we spent the first 45 minutes of an hour-long call talking about how we’re going to do a certain thing. And then the two business users on the call say, That is all a waste of our time. That’s not where our problem is. But IT and the expert team had taken us in there to talk about that. And we’re like, Exactly right. So what are your problems? In the last 15 minutes, we actually figured out what their problems were. And now we are trying to enable them to build solutions themselves to solve the problems they have. And then I turned around to the senior exec who wanted this big, big, big solution. I was like, look, as these little problems get solved… for him, it’s a little problem.

It’s like, that’s a small problem. Why are you trying to solve that small problem? But those small problems are part of the big problem the exec is trying to solve. So we can empower the business users to build many, many small agents. Going back to my order-to-cash example, imagine somebody just builds an agent to figure out, is there a stock-out? And another person builds one for, what’s an alternative product if there is a stock-out? These are small parts of the problem. You could say, why are you wasting your time with that? Build the whole thing. No. It’s better to build these small pieces, customize them, get user validation of the small pieces. Then you use another agent to orchestrate these agents into a more complex flow. AI should almost be an emergent phenomenon. It should not be top-down. These should be anthills, not cathedrals. We have been trying to build cathedrals too much, and we have to stop, for a couple of reasons. One is, the moment you go to a cathedral, you need the architect, you need the expert, you need that global view.

But that person cannot possibly understand the thing at the five-foot level, what is actually happening on the ground. If you can, on the other hand, build these point solutions which actually understand the terminology, understand the process, understand the pain of that final human, then you can recombine those things in ways you would have never thought of.
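
As a rough illustration of the anthill idea, here is a minimal sketch, under the assumption that each agent can be modeled as a small function over shared state. The agent and orchestrator names are invented for illustration, not taken from Aible.

```python
from typing import Callable

# Each "agent" is deliberately tiny: one validated, user-built capability.
def stock_out_agent(state: dict) -> dict:
    state["stock_out"] = state["on_hand"] < state["ordered"]
    return state

def alternative_product_agent(state: dict) -> dict:
    if state.get("stock_out"):
        # Stub lookup; a real agent might query a product catalog.
        state["alternative"] = "sku-42-compatible"
    return state

# The orchestrator is itself just another small agent: it sequences
# the validated pieces rather than re-implementing them.
def orchestrate(state: dict, agents: list[Callable[[dict], dict]]) -> dict:
    for agent in agents:
        state = agent(state)
    return state

result = orchestrate(
    {"on_hand": 0, "ordered": 5},
    [stock_out_agent, alternative_product_agent],
)
print(result)
```

Because each piece is validated on its own, recombining them into new flows is cheap, which is the anthill property the cathedral approach loses.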

[00:14:48.13] – Craig
Yeah, it’s a powerful thing. It reminds me of the late ’90s. You remember the San Francisco Project? Object-oriented was supposed to be the thing where you build all these objects once, thousands and thousands of them, and they’re interoperable, and you can just reuse them and orchestrate them. And it fell over, famously, because of proprietary frameworks and too many variables in organizations to handle. So this reminds me of that. I mean, there’s the power, there’s the potential with agentic AI, which is the biggest buzzword out there right now. But how do you have some level of control? How do we know this agent was built properly? Does it scale? Is it authenticated? Is it audited? Is it secure? All of those things. And then, is it interoperable?

[00:15:41.01] – Arijit
Right. So two very important things. Before I answer your second, your key question, which is really important: understand that AI is the first technology we have ever built that is more able to conform to us than we are able to conform to it. With everything in IT before generative AI, the human was really more flexible than the tech.

Yeah, true.

For the first time, the tech is more flexible than the human. So look at vibe coding. I think people are completely misunderstanding the point there. It is not going to be the way all code is written by the enterprise, at all. Because what you’re doing is going in and saying, Hey, here’s a monolithic, huge task. AI, bring all the flexibility to it. That’s not what enterprises want. What you actually want is very, very constrained domains into which you let the AI put in as much flexibility as possible. What we are doing is, for every module in Aible… we have a module for structured data, for unstructured data, for graph data, and then we start specializing. We have image data, but then we have a sales analytics module that is derived from the analytics one. There is a sales report one which is derived from the combination of the sales analytics one and a note summarization one, et cetera. Each of these modules is highly scalable and completely audited. Everything is logged, extremely well-tested, with a certain amount of flexibility built in.

What model do you want? You switch the model, it’ll use the appropriate pipelines, it’ll use the appropriate prompts. What you’re doing is building these modular blocks that are absolutely enterprise-strength. And then you let people play on top of that. They can adjust it, they can change things, but everything is being traced and tracked. So I can always say, You know what? This ended up here because Craig did these two things, and then Arijit did this one thing, but this was originally derived from that other thing. And then when underlying models change, I change the parent objects and everything inherits all of the improvements. That is the only way to build these solutions at scale in the enterprise: allow flexibility, but within strong constraints. And then, even beyond that, we can use completely autonomous agents to stitch together these broader processes. Literally, I had somebody go in and do a whole recruiting pipeline with 10 steps, including different tools, using one top-level instruction. But these fully autonomous agents are actually not deterministic. If you keep using a fully autonomous agent, the path it takes on a given day might be different from the path it took yesterday.

Once again, you have to have this ability for the enterprise to say, I like this. I like the creativity of the agent. I like the path it has taken. I like its plan. I like its execution. Great. Now make it do it exactly that way 500 times. So in Aible, the moment you bless that agent and you say, Okay, turn this into a step-by-step agent, we automatically, programmatically turn that into a deterministic system. A lot of guys in Silicon Valley will turn their noses up: Oh, it’s a deterministic system. It needs to be, because an enterprise is not going to allow an inconsistent agent running inside. You want agent creativity, but once you’ve got a creative solution, you need it institutionalized in a repeatable way.
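
One hedged way to picture that "blessing" step, with invented names rather than Aible’s actual mechanism: record the plan an autonomous agent produced once, then replay that recorded plan as a fixed, step-by-step pipeline.

```python
import json

# Pretend an autonomous agent explored and produced this plan once.
creative_run = {
    "steps": [
        {"tool": "fetch_resumes", "args": {"source": "inbox"}},
        {"tool": "rank_candidates", "args": {"criteria": "role_fit"}},
        {"tool": "draft_outreach", "args": {"tone": "friendly"}},
    ]
}

def bless(run: dict, path: str) -> None:
    """Freeze the agent's plan so every future run takes this exact path."""
    with open(path, "w") as f:
        json.dump(run["steps"], f)

def replay(path: str, tools: dict) -> list:
    """Execute the blessed plan step by step: no re-planning, no drift."""
    with open(path) as f:
        steps = json.load(f)
    return [tools[s["tool"]](**s["args"]) for s in steps]

# Stub tool implementations for the sketch.
tools = {
    "fetch_resumes": lambda source: f"fetched from {source}",
    "rank_candidates": lambda criteria: f"ranked by {criteria}",
    "draft_outreach": lambda tone: f"drafted {tone} emails",
}

bless(creative_run, "blessed_plan.json")
print(replay("blessed_plan.json", tools))  # identical path, run 500 times if needed
```

The design choice is the split: creativity happens once, in the exploratory run; production only ever executes the frozen artifact.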

[00:19:20.07] – Craig
Yeah, as you said, you can’t have rogue anything going on in highly scaled enterprises that have compliance and regulatory requirements… There’s a myriad of reasons: security, accessibility, privacy. There are all sorts of reasons why you can’t have just rogue things happening, right? So it makes a lot of sense.

[00:19:45.04] – Arijit
Sorry, Craig, one important point, because I know you come from this world. Remember the old days where every meeting would start with which version of Excel spreadsheet you had?

Yeah.

You would just spend the first 15 minutes trying to make sure that everybody was on the same Excel spreadsheet. Now, imagine what happens when an agent does a data join or an agent creates a feature. When you call the thing, your agent might have done it one way. When I call the same data set, my agent might have done it a little bit differently. In Aible, what happens is, when you first did it and your agent did it one way and you blessed it, that join, that feature creation, actually gets templatized. When my agent is trying to do the same task, it is told, Hey, there’s already a templatized path for it. Use that. Now, the human can overrule it, but otherwise the agent has to follow the path that has been set up. And we did it to enable consistency, because what we were finding was there are great demos out there of analytical AI and stuff like that, but if you scratch the surface, they’re full of hallucination and full of inconsistency, and inconsistency is not acceptable in the enterprise.
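
A minimal sketch of that templatizing idea, again with invented names rather than Aible’s API: the first blessed join recipe is stored under a task key, and later agents are handed the stored recipe instead of improvising their own.

```python
# In-memory registry of blessed recipes, keyed by task.
templates: dict[str, dict] = {}

def bless_join(task_key: str, recipe: dict) -> None:
    """A human approved this join once; store it as the canonical path."""
    templates[task_key] = recipe

def plan_join(task_key: str, proposed: dict) -> dict:
    # If a blessed template exists, the agent must use it (a human can
    # still overrule by blessing a new recipe). Otherwise, propose one.
    return templates.get(task_key, proposed)

bless_join(
    "orders+customers",
    {"left": "orders", "right": "customers", "on": "customer_id", "how": "left"},
)

# A second agent tries to do the same join slightly differently...
my_plan = {"left": "orders", "right": "customers", "on": "cust_id", "how": "inner"}
print(plan_join("orders+customers", my_plan))
# ...and gets the blessed recipe instead, so everyone's numbers match.
```

That is the Excel-version problem solved one level up: consistency comes from reusing the approved recipe, not from hoping every agent improvises the same way.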

[00:20:56.02] – Craig
Yeah, it’s really interesting. I think there’s something really powerful in what you call deterministic. I call it highly disciplined environments. You need the process, you need the standards, you need something that gives it guardrails. And then there’s the analogy, and we stumbled across this, I think I shared it with you: we were working on an AI solution, and what we determined is that the thing slowing down the process the most was the humans, and it was our process. We were doing two-week sprints and all these ceremonies, and AI doesn’t need that. We got it down to doing full sprints daily. What we found is it’s really hard. We have to be incredibly disciplined. Every decision, the AI would take and say, Okay, here’s your sprint plan, and here are the user stories, and here are the tests. And it would build all that, and we’d have to be really disciplined, because we would go off track; it wouldn’t go off track. I use a racetrack analogy. You would never take an Indy car and put it on a freeway. You would never drive it down your street. It doesn’t have the discipline, doesn’t have the determinism.

So how do you do that? Building that racetrack, as I call it: how do you build that environment so that you get some predictability, and then you can start building iterative scale and sophistication? That’s what you’re doing at Aible, but let’s go a little deeper. How do you do that?

[00:22:27.07] – Arijit
Our key principle is guidance. The way Aible works, it’s always AI-first. The human doesn’t have to do anything. But the AI provides information to the user, and the user can provide guidance. So let’s take a real example. Two, three weeks back now, we went to the State of Nebraska, and we trained a bunch of interns, and, without telling them we were going to do it, we actually trained the CDO and the CIO of the State of Nebraska. They thought they were there to watch a hackathon, but we sucked them in. The way Aible is set up, we typically do our trainings in 30 minutes, and in 30 minutes you’ll do three, four agents. You’ll do an unstructured one, a structured one, an image one, to figure it out. The CIO, in the second part of the training, suddenly starts using his own data, real data, in the middle of the training. Now, for one second, the guy who was running the training got a little nervous. He’s like, Why are the answers not matching what I’m expecting? But then he realized what was happening…

I was like, This is a beautiful thing. Not because he didn’t want to follow the class structure; the beautiful part is he felt empowered enough that he could just point it at his own data. The reason he’s able to do that is that, in Aible, you don’t need to know any data science stuff. You just ask a question, and the software comes in and says, Well, based on that question, these are some data sets, structured, unstructured, etc., that could provide context. Is any of this useful? You choose: Okay, these things are useful. The moment you choose, Aible is immediately asking, How do I specialize myself to this? If you chose a database, and it’s a sales database, I know how to specialize. I’ve got a lot of experience and a lot of projects. So immediately it gets into a slightly different mode, but you can change it. You can say, No, I actually don’t want you to get that specialized. Just stay at the analytics level. Or, I want you to get hyper-specialized. What the software is constantly doing is trying to understand the human’s intent and then offering up, Hey, I have this accelerator here.

I have this ability I can run for you automatically. I can look at 10 million questions for you and tell you the 30 most interesting insights. Do you want me to do that? And the moment the human says yes, it’s off to the races. It goes in and looks at a million variable combinations, comes back and says, Here’s useful information. You want to know something more? I’ll do that for you, too. But if you notice, it is never, Hey, tell me what the pipeline is. This is not like a busy, vague visual composition framework, or what they call low-code/no-code. Low-code/no-code doesn’t mean anything if you need a lot of expertise, if I need to know what the model is, if I need to know what a pipeline is.

[00:25:17.02] – Craig
Which is what most of that is. Most of it is, you have to be an uber-expert at that tool to understand how to run it.

[00:25:26.06] – Arijit
Yeah, and we did the reverse. Again, our principle is that the AI is more flexible than the human. The AI has to understand the human’s intent, ask a couple of questions, guide the human in the right way, but take guidance from the human. One interesting example of this is the way our models work. The Aible intern models are actually reasoning models based on LLaMA. But what we did is really focus on the instructability of the model. It takes instruction well. And we focused on the explainability of the model. When the model does its reasoning, it provides the reasoning in a very structured way that’s easy for the human to give feedback on. If you try a normal reasoning model, it will go off in circles and do all sorts of crazy stuff. Quite often, you can actually show that that is related to hallucination as well. The more you let it go off into la-la land, the more it will hallucinate. But what we did is tighten that up so that the AI is explaining to the human: This is what you asked me to do.

This is how I interpreted this variable. This is what I did. Now the human can look at step two and say, No, that’s not what that variable meant. Or, this data is not actually at the state level, it’s at the zip-code level. You need to do a weighted average of this data. The human provides feedback, but they’re not just saying thumbs up, thumbs down. We call that training an AI like a pet: Good job, bad job. That’s not very high-resolution feedback. But when I can do a thumbs down and say, The definition of gross margin is different in this company, that is very high-fidelity, high-value feedback. And what we are finding is we can immediately update the system based on that feedback. And then when we get 100 examples, we can put them into the model itself.
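
Here is one hypothetical way to represent that high-resolution feedback as data (the field names are invented for illustration): instead of a bare thumbs-down, each correction records which reasoning step was wrong and what the fix is, so it can be applied on the next run and later batched into model updates.

```python
from dataclasses import dataclass, field

@dataclass
class Correction:
    step: str   # which reasoning step the human is correcting
    issue: str  # what was wrong with the AI's interpretation
    fix: str    # the high-fidelity guidance to apply next time

@dataclass
class Feedback:
    verdict: str  # "up" / "down" alone is pet-training resolution
    corrections: list[Correction] = field(default_factory=list)

fb = Feedback(
    verdict="down",
    corrections=[
        Correction(
            step="interpret:gross_margin",
            issue="used the generic definition",
            fix="gross margin here excludes logistics costs",
        ),
        Correction(
            step="aggregate:region",
            issue="data is at zip-code level, not state level",
            fix="take a weighted average up to state level",
        ),
    ],
)

# Each correction can be injected as guidance on the next run, and once
# enough examples accumulate they can be folded into the model itself.
for c in fb.corrections:
    print(f"[{c.step}] {c.fix}")
```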

[00:27:11.07] – Craig
How much are you seeing your clients value that? Because you’ve been talking about explainability for a while, and I think that’s still fairly unique to you and your platform. I see a lot of black-box models. And there are a lot of different scenarios, but one of them is where people finally get over the hump. They kind of get it. They go, I can see how we can use this thing, and they start using it. They’re so enamored that it actually is useful and creating some very high-quality output, let’s say, that it’s almost like they don’t want to know. Whereas explainability takes time and effort and interaction. So are you seeing your clients say, No, that is incredibly valuable to us. We don’t want just a black-box solution?

[00:28:01.03] – Arijit
So interestingly enough, we actually find it’s a way to separate the practitioners from the demoware people. There are people who want a magic trick. They want AI to be magic. And we get that quite often. They’ll throw some data over the wall and say, Give me my agent. I won’t provide feedback. I will not engage. The AI will magically do it. Well, those are not our customers anyway. It’s as simple as that. If you want to do AI for real, on an ongoing basis, where it will work not just for the demo data, not just for the test, but actually for production at scale, then you have to embrace explainability. You have to take ownership of that AI process. AI cannot be a magical thing working out there. AI-first, but not AI-only. It’s AI-first, and then the human has to provide it guidance. The human has to bless the process. You’re absolutely right. There are cases we get into where people say, Hey, but I saw this magical AI from some other vendor. I’m like, Yeah, go see how much consulting went into building that magical AI.

Check how often that works on your real data. That’s where our difference is: we are building real solutions in 10, 20, 30 minutes on customers’ data that we have never seen. As in the State of Nebraska case, the guy just went off and used his own data without telling us. When you can do that, and you can do that over and over again, that’s what actually scales in the end, not the magic tricks.

[00:29:32.11] – Craig
I think, honestly, that gets at the heart of it: you either believe in human-centered AI or you believe AI is here to replace people, if you want to make it a binary conversation. If you want an AI to provide all the answers, hey, it’s going to tell you who to go on a date with or where to go for lunch. Frankly, we’ve been seeing that for a while with Google. Some people won’t even go to a restaurant without getting a rating, whether those ratings are right or not. People don’t know how to read maps anymore. That’s a fact. The Google map will tell you where to go, but people actually can’t read a regular map if, let’s say, their phone dropped in a lake or something. So AI can either dumb down a human and make us really lazy, or, if it’s explainable and you’re interacting and engaging in a sophisticated way, it actually makes you more intelligent. It activates your brain, instead of just, here’s the answer, right?

[00:30:32.14] – Arijit
Think about it: are you giving up human agency or not? If you want to give up human agency to an AI, a magical AI is the thing. But firstly, it will not work. Secondly, if it works, you are going to be infantilized at some point. You are no longer a properly functioning adult member of society. You are just a passive consumer of guidance. I am not somebody who wants that world of AI. I want that agency. I want that superpower, but the superpower has to serve me. Magical AI… every time I see vendors doing magical AI demos. Again, I was at this conference, and multiple speakers had these demo sizzles. Ours was the only one that was actual product screenshots. Actually, okay, you’re using a product to give feedback and see what happens. Everybody else had these fancy, slick videos with towers going up and all that. I’m like, Where is your UI? Because there is no UI. It has actually been handcrafted by a bunch of experts, and then a simple thing shows up, but you don’t see that there were two, three months of effort that went into that.

[00:31:49.00] – Craig
Right. Yeah. No, that’s smoke and mirrors. Let’s just say that. So one thing that we talked about in our last podcast was governance, and how governments are getting involved with this. At that point, I think the AI Bill of Rights had just come out, and I think you were pretty critical of it, saying it’s fear-based. But there’s been a lot written since then: the US, the EU, India, others. So what’s your take? Are governments starting to get it right? Are they still way behind? Are they thinking about it wrong? How is that hurting or helping the movement?

[00:32:23.09] – Arijit
They’re still thinking about it wrong, because the problem is, and just for the people who haven’t seen the previous podcast, my concern is that regulators very rarely understand fast-moving technology well enough to set up walls. What regulators can do is articulate societal goals that we will hold the technologists to, and say, Do not mess these up. If you mess these up, we will penalize you. But you figure out how you’re going to meet these societal goals. For example, there is absolutely nothing wrong with saying, Hey, our societal goal is that people should not have negative outcomes based on their gender or race. I’m just giving an example. That is a perfectly valid societal goal to set. If you’re going to have an AI, it needs to flag any concerns about that. What is not okay to say is, you will achieve it through this approach, or you will not do these use cases. Now what you’re doing is setting up walls, and the technology is moving too fast. The legislators are not going to be able to keep up with these walls, and the walls will become a problem.

But when you set the goals right, as technology evolves, we can say, Okay, with the new tech, how do I achieve that same societal goal? The example I gave was that of OSHA. Every company is subject to OSHA. For sure, yeah. But most of us actually don’t have to deal with it much, because as long as we are talking about people working in an office, we are probably not going to run afoul of it. As long as we hit the societal goals, I’m not really required to send in a report every week or make sure my chair height is exactly so. But I do know that if an employee of mine is getting carpal tunnel, or there’s a risk of carpal tunnel, I better help them and get them the best support that they need. So you set the societal goals and let the individuals conform to those goals. And if they fail, then you punish them. OSHA does have-

[00:34:32.12] – Craig
That’s assuming, and it’s a pretty big leap, Arijit, that the regulators have defined societal goals, or care to.

[00:34:42.01] – Arijit
Well, I’m just saying that’s the only approach that can work with rapidly evolving technology. If the internet had been heavily regulated, I don’t think it would have ended up where it did.

[00:34:55.03] – Craig
But have you seen… I mean, has there been much regulation that’s really slowed things down? It seems almost nonexistent, or almost lip service. Has it really done anything to slow it down?

[00:35:10.00] – Arijit
It is more lip service. That is the key part of it. And that is the other problem with this wall-oriented stuff: all you can do is lip service, because there are enough people who will be able to influence that kind of regulatory process. But I was actually working with one of the state CIOs on thinking through their approach. They’ve come up with their AI Bill of Rights, and I was looking through it. One of the things we were discussing is, how can we set up this kind of approach? How can we go in and say, Look, our principles are that your data is owned by the provider of the data, so the data should not leave their control. And if you do that, you don’t have to worry about these other aspects. If you don’t take the data out of the state, or out of the state agencies; if you use small models fully within the control of the enterprise, with full auditability, blah, blah, blah; just set up a bunch of rules. But then say, if you do this, when you work with our state, we are going to put you through less rigorous evaluation, because you have already conformed to these things.

It’s almost like creating incentives: setting the goals, having well-founded reasons for those societal goals (data privacy and data ownership are well-understood principles), and then saying, If you do this, here’s the carrot. And if you don’t do this, there’s a stick, of course, because you have violated the regulations. But if I follow it, give me a carrot.

[00:36:45.02] – Craig
Yeah, makes sense. So a couple of years ago, I’d say most organizations that were trying to dip their toe in this space had the initiative run by the CIO. It was the typical go-to; nobody really understood it very well. Maybe a CTO, if they had one, a COO, a CDO. Now it’s pretty clear that it touches every aspect of an organization. So what’s your take on who should own AI? Now there are even new roles, like Chief AI Officer. What’s your take on all that?

[00:37:19.11] – Arijit
That’s a tough one. I’ll tell you the frank answer; I don’t know how useful it is. AI needs to be owned by the users of that AI. What you need is a centralized organization that sets up the walled garden. So if I say, Hey, let’s have a thousand business users build 10,000 agents, the way I describe it is: instead of trying to use a bow and arrow, where you’re trying to define a use case precisely and hit that use case, you’re using a machine gun and getting a thousand shots on target. Out of the thousand shots, 100 shots hit. Out of the 100 shots, 10 of them are extremely useful. You get to 10 extremely useful use cases and 90 good ones in the same time, actually faster than you would get to one good use case. But the flip side is, now you’re saying, Oh, my business users are going to run amok. So you have to set up the walled garden, where the AI system has access to certain data and not other data. It’s using single sign-on, it has the authentication and the data access figured out.

It’s doing all the logging, so there’s auditability. It’s automatically looking for weird things happening, like prompts that don’t make any sense given everybody else’s prompts; anomalous behavior, if you will. You have to make sure that the models can’t send data outside of your enterprise, all of that good stuff. You build this walled garden, and within that walled garden, you empower your business users to try many, many things. The other part of this becomes, how do I get to cost predictability? There’s a customer who came to us because they’d gone with a different solution and suddenly found they had spent $3 million on a large language model. The reason was, firstly, they were using large language models and stuffing massive documents into the prompt every time. I kid you not. Imagine a massive product catalog being stuffed into the prompt every time. That’s like full- It’s a RAG solution, and that’s the only answer, right? Right. And so what happens in Aible is, for example, if cost management is really a concern for you… we just partnered with HPE’s PCAI, where we said, look, you buy the PCAI server, if you will, and it gets rolled in and plugged into your wall, and now your cost is constrained by the amount you’re already paying HPE. You cannot go beyond that cost.

Now, if that server becomes completely full because people are really actively using it, maybe you double the server, maybe you triple the server. But cost becomes manageable and predictable and understandable. So these are the things that the centralized organization needs to do: set up the walled garden, think through the cost profile so that the cost is manageable, etc. What they should not be trying to own is the use cases or the process. Let a million flowers bloom at that level.
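
As a rough sketch of what that centralized walled garden might enforce (all names hypothetical, not any vendor’s API): a single wrapper through which every agent call passes, checking data access, logging for audit, flagging anomalous prompts, and stopping at a hard cost ceiling.

```python
import time

class WalledGarden:
    def __init__(self, allowed_datasets: set[str], budget_usd: float):
        self.allowed = allowed_datasets
        self.budget = budget_usd
        self.spent = 0.0
        self.audit_log: list[dict] = []

    def run(self, user: str, dataset: str, prompt: str, agent) -> str:
        # 1. Data access: agents only see data the user is entitled to.
        if dataset not in self.allowed:
            raise PermissionError(f"{dataset} is outside the walled garden")
        # 2. Cost: a hard ceiling, not a surprise $3M bill.
        if self.spent >= self.budget:
            raise RuntimeError("budget exhausted; expand capacity deliberately")
        # 3. Anomaly check (stub): flag prompts that look nothing like normal use.
        suspicious = len(prompt) > 10_000
        # 4. Audit: every call is logged with who, what, and when.
        self.audit_log.append(
            {"ts": time.time(), "user": user, "dataset": dataset,
             "prompt": prompt[:80], "suspicious": suspicious}
        )
        self.spent += 0.01  # stub per-call cost accounting
        return agent(prompt)

garden = WalledGarden({"sales_db"}, budget_usd=100.0)
print(garden.run("craig", "sales_db", "top 5 churn drivers?", lambda p: "answer"))
```

Inside the fence, a thousand business users can experiment freely; the centralized team only owns the fence itself.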

[00:40:20.05] – Craig
Yeah, unfortunately, we often see the opposite, because it’s just what organizations are used to. The executive team assumes it knows best. A big consulting firm came in and gave them a report that says, these are your best use cases, and it’s the same old playbook. But as you said, you go back a year later, and did they get any of those done? No. Did they get any value? No. And they just wasted a year. And I really worry about organizations. I don’t sense the urgency, given the speed of change that’s happening. And the companies that are getting it right are the smaller companies.

[00:40:58.04] – Arijit
No, I’ll disagree on that. I actually think that was true at one point. We just had the Chief Digital Officer of CVS Health do a podcast with NVIDIA and us, and he was talking about how he’s empowering the users to do exactly this. CVS Health is what, Fortune 50?

[00:41:14.10] – Craig
Well, I think that’s an anomaly, or at least it’s great to hear. But I’d say, just from our experience, the bigger the company, the more stagnation, the more bureaucracy, the more they’re tying people’s hands. And the smaller organizations are the ones taking shots. People are just shooting rifles, and yeah, they might be missing a lot, but they’re actually hitting stuff, too, like you said.

[00:41:37.14] – Arijit
But what happens is, if you’re a large company and you don’t see any successes around you, that can continue. This is why I was so excited. Again, that CVS Health podcast came out two weeks back with NVIDIA. Then the Chief Data Officer of Nebraska just did this presentation at HPE Discover, where he talked about how they were able to build all these agents in 30 minutes. So what is happening is fairly large organizations are now going out there with success stories. And the good part of success stories is, if you don’t know what water tastes like, you’ll drink sand. And that’s what these people are doing. They’re drinking sand. Now, when they actually see people enjoying water, they’re like, oh, my God, what is this thing? I want to drink it. If there are no success stories, how do they know that this is possible?

[00:42:27.07] – Craig
Yeah. No, I don’t think there’s a shortage of success stories. It’s just… the big aha for me, and I’d love to hear if you’re seeing any myths or anomalies out there, is that I’m really shocked, two years later, at how slowly a lot of organizations are going. And I think a lot of it has to do with them still using old mindsets, old processes, old ways of thinking. They’re thinking about AI like they think about enterprise software, or stuff that has very narrow use cases. And I think that’s risky, because some people can figure it out. If someone in your industry does, they may have a lead on you, let’s just say, that you may not be able to regain. But what are you seeing out there? What myths would you like an opportunity to dispel?

[00:43:17.11] – Arijit
Well, the funniest thing for me is the book that I wrote, in 2016 actually, and published in 2018. One of the myths there was exactly what you just said: AI is not like software. The problem is companies are used to spending time to spec out a piece of software and build it, and then that software can run for a period of time. With AI, firstly, the tech is constantly changing, and your processes and your circumstances are constantly changing. This is a technology that is designed to be flexible, not rigid. This is why I keep saying, build anthills, don’t build cathedrals, because that cathedral is going to break. The tech is going to move on, and the whole foundation will be destroyed. But that anthill will move, it will evolve and grow and take a different shape, and the investment you have made is not going away. It’s just getting built up. We always fight our last war. I think way too many organizations haven’t figured out that this is not an IT project. If you look at most of the IT consultants, they are still going in with six-month project mindsets.

It’s changing a little bit. People are beginning to do rapid sprints, I think. But it’s still in pursuit of a billable-hours project. And I do expect this market will either move to outcome-based projects, or the whole space of consulting companies enabling these things will just go away, because the business users will do it themselves. They are not going to wait six months or a year for a project with a 10% probability of success.

[00:44:58.04] – Craig
Yeah, I think it needs to change. We’re already putting a stake in the ground saying the work we’re doing now is 30% better, faster, cheaper, deeper than it would have been a year ago. We’re doing it for two reasons. One is because we want to really prove the point that it needs to change; it is changing, and we can do it faster, better. But also, if we can do it, why can’t you, the client, do it? Why can’t you do your work that way? And 30% is just a baseline. It’s not the end goal.

[00:45:33.02] – Arijit
You’ve gone from 1% to 30%. So I’m already happy with that.

[00:45:37.03] – Craig
There you go.

[00:45:39.06] – Arijit
Why 1% better?

[00:45:42.01] – Craig
1% a day, Arijit. That’s over 300% a year. Come on. I know, some people misunderstand that, but it’s a mindset. Speaking of that, here’s your chance to be Nostradamus. What predictions do you have for the future of AI? Where is it headed?

[00:46:02.08] – Arijit
I think the fundamental disconnect of AI will continue, in that very few organizations will empower business users to take the lead, if you will. But those ones will create so much value. You’re going to see a huge amount of creative destruction in the space. If current trends continue and only about 10, 20% of leading companies adopt these approaches and the rest just burn money, like the guy spending millions of dollars stuffing the same document into the prompt over and over again, then that 10%, and completely new organizations, the smaller ones, the new ones that adopt AI as a fundamental way of doing business, will take over, and you’re going to see a massive amount of creative destruction. And that is my concern for people who add friction to AI projects. I’ve literally had projects where people are delaying and not providing data because we can tell they want to build their own thing, or they think they will protect their jobs a little bit longer by delaying the project. The reality is they won’t have their jobs. They won’t even have the companies if they don’t embrace and extend this, because when you are successful with AI, the improvements are absolutely massive, like the $12 million that one person found from one business pattern. And if you just scale that up: one person, one pattern is that much. And that was not the only pattern he found. There were several in the million-dollar range. That starts racking up really fast.

[00:47:35.12] – Craig
Yeah, I believe that. Just one example of what you just talked about that we saw, and again, it’s something you just experience and observe, an aha that occurred a couple of months ago: something as simple as the role of the business analyst. This used to be one of the most coveted roles. The best business analysts turned into the best executives in companies, right? And over the course of the past 20 years, it’s evaporated. It turned into platform analyst. It went away with agile, where you had to be a specialist. They turned into something else. But there are very few business analysts now, and the business analyst role is the person who can best use AI. They’re the ones with curiosity. They’re the ones with business knowledge. They’re the ones asking the tough questions, solving the tough problems. I think that’s interesting, because that, I think, is what organizations need to really embrace this, not a bunch of specialists.

[00:48:35.08] – Arijit
The only thing I would change about that is the skill being the ability to ask questions. I agree on the curiosity part, but I think putting the power of AI behind only the human’s ability to ask questions doesn’t make sense. You actually want the AI asking millions of questions and coming to you with, here are 10, 20, 30 answers. And then the curiosity does become important, because you’ve got to look at those 30 answers and say, these two are because I have a data quality problem that I can fix later; these three are very interesting; those 25 I’m going to hand over to my colleagues because they have more domain knowledge than me. So the human becomes the active recipient of insights rather than the initiator of questions.

[00:49:17.04] – Craig
Right. No, I agree. And the questions might not be ones they ask of the business, but ones they ask of the AI: How did you do this? What were the alternatives?

[00:49:26.12] – Arijit
But the AI will ask the questions for them. This is one of the big shifts of agentic versus chat. Chat was all about the human initiating the question. With agentic, the AI is always on, always looking at millions of variable combinations and telling you, here are the five things you need to know today.

[00:49:44.09] – Craig
Yeah, I agree 100%. So, last question. You answered this last time, which is life-lesson oriented, and you basically shared your life story, and it was, don’t let anyone tell you something’s impossible, because you’ve proven that wrong so many times. But is there anything you would add, just 1% better advice for people on how they can get through this journey called AI, or anything else?

[00:50:12.08] – Arijit
It’s actually related to that: stick to your convictions. In 2018, when we came out with Aible, and by the way, the tagline was always I’m Aible, the URL was always imaible, there’s a video of us demoing it for the first time at the Gartner conference. And there’s been so much change in AI, so much discussion, so many times that I myself thought maybe I’m getting it wrong, because I was seeing multibillion-dollar companies being built on methods and approaches that to me felt completely wrong. And the good thing is that the market is coming back and recognizing this. Today, what we are doing is still driven by that same belief: it’s not one model. It is humility on the part of the AI, explaining itself to the user, making it easy for the user to adjust the AI to their unique needs. It turns out, I do believe, we are right. I think the proof is coming in increasingly every day, because our customers are getting the results and the monolithic guys are not. So let’s see. They still have the hundreds of billions of dollars of valuation. We shall see what the market says someday.

[00:51:20.11] – Craig
Well, very good. Well, hey, it was great catching up with you once again, and we’ll see how things play out in the future. Never a dull moment.

[00:51:30.06] – Arijit
Very nice to see you again, Craig. Thank you so much.

Check out other podcast episodes