1% Better Podcast: Arijit Sengupta


Find 1% Better Improvement Podcasts Here

1% Better: Arijit Sengupta, Founder of Aible
Quick Links

Check out Arijit Sengupta’s book AI is a Waste of Money
Connect with Arijit Sengupta on LinkedIn
Connect with Craig Thielen on LinkedIn
Learn more about Aible

Key Takeaways

  • Evolution and Impact of Aible: Arijit Sengupta shared the journey of Aible, initially ranked as #1 in Auto ML (Automated Machine Learning), emphasizing that Auto ML was just a tool and not the core of their work. Aible’s mission has evolved to focus on solving business problems comprehensively, rather than just providing analytical solutions, highlighting the shift from analytics to actionable business insights.
  • AI’s Real Value: Sengupta discussed the misconception of AI as a costly investment without returns, proposing instead that the true value of AI lies in its alignment with business needs and constraints. This approach challenges the traditional “data-first” mentality, advocating for a more nuanced understanding of AI’s potential to address specific organizational goals.
  • AI and Societal Goals: The conversation touched on the broader implications of AI development, including concerns about bias, ethics, and governance. Sengupta argued for a principled approach to AI, where societal goals guide technology development rather than reactive measures like moratoriums or strict government regulations.
  • Asking the Right Questions: A significant theme was the importance of framing the right questions to leverage AI effectively. Sengupta emphasized that the power of generative AI, like GPT, is maximized not by the answers it provides but by the quality of the questions asked, highlighting the critical role of human insight in harnessing AI’s capabilities.
  • Rapid Implementation and Iteration: Sengupta and Craig Thielen discussed the practical aspects of implementing AI solutions, stressing the importance of quick deployment and iterative improvement. They shared success stories of significant ROI achievements within 30 days, underlining Aible’s philosophy of continuous, incremental progress to drive transformation and value.

1% Better Episode 2 Transcript

[00:00:00.360] – Craig
Hi, everyone. My name is Craig Thielen and I’m Principal and Head of Digital Solutions for Trissential. In this episode, I have the distinct pleasure of speaking with Arijit Sengupta. Arijit is the CEO of Aible, and I’m sure we’ll talk more about that. But a little bit of background on Arijit: he holds advanced degrees from both Stanford and Harvard in Computer Science and Economics. I hear there’s also a little bit of dancing in there somewhere, so maybe we’ll get to that. He was the Founder and CEO of BeyondCore, which was acquired by Salesforce and eventually turned into Einstein Discovery, part of the Salesforce Einstein platform. He’s been granted 24 patents and has been featured in The New York Times, Harvard Business Review, The Economist, and many other publications. He’s an executive fellow at Harvard Business School. He currently is the Founder and CEO of Aible, which, as I said, is the Gartner Magic Quadrant #1 automated machine learning platform. Really excited to talk to you, Arijit, and welcome to the podcast.

[00:01:07.910] – Arijit
Thank you. By the way, we are not #1 in Auto ML anymore, which is fantastic because we think Auto ML is just a tool. What’s important is solving the business problems.

[00:01:19.360] – Craig
Well, exactly. Sometimes it’s better to not be number one because then you can try even harder.

[00:01:25.420] – Arijit
No, it’s interesting. It’s like when you’re building an iPad and nobody knows what an iPad is, it’s easier to describe you as, hey, they have this memory chip, they have this touch screen, because they don’t even know what an iPad is. What happened is Auto ML happens to be a small component of what we do. It was never what we did. As what we are doing becomes more and more clear, that’s actually helping customers understand that this truly is end-to-end: how do I get my business questions answered, as opposed to analytics, which is all about how do I get analytical questions answered?

[00:02:01.610] – Craig
Yeah. That’s a great point, and I think we’re going to jump into all the different segments. Anytime you categorize stuff and say someone’s number one, you put things in boxes, and that’s not how the world works. And so there are a number of topics where I think we’re going to push the boundaries of traditional data and analytics. Just real quickly, how we met is we do a lot of work helping businesses improve and helping organizations transform around this notion of data centricity, where data is no longer a byproduct of what you do but a core asset; it is your products and services. It really leads you to thinking about data differently, and data is a tool for improvement. And I’m really curious to dig in with you. The whole name of this podcast is 1% Better. That 1% can be something you do consistently, over and over, to get to a great result. But in other ways, it could be just one insight, or one piece of data, or asking the question differently, that might lead to some breakthrough. We met because we do a lot of work with organizations trying to bring them some of the best thinking, and one of those things is your platform. Let’s take a step back. The first question I always ask is: when you think about your whole career and you look back and think about improvement, both on a personal and professional level (we’re not really segregating those), how do you think about improvement just in general?

[00:03:23.470] – Arijit
Well, my fundamental focus has been on how you take things that people can’t do and give them the superpower so that they can do them. One of my favorite things is that Steve Jobs used to have this poster where he talked about the computer being the bicycle for the mind. The whole point was that if you look at how far each animal can go, humans actually do pretty badly. We can’t go that far. But once you give a human a bicycle, the human will go further than anyone else. And that is a fundamental premise of, I think, technology: how do we give people superpowers? How do we empower people? In fact, Aible’s tagline has always been “I am Aible,” because our premise was that every single individual should be able to create and benefit from AI that serves their unique needs. Because the market is going towards a world where the few make AI for the many and the rest just consume AI, if you will. Whereas we believe any and all of us should be able to create AI that serves our unique needs.

[00:04:28.660] – Craig
You once said or wrote that AI is a waste of money. Tell us about that.

[00:04:34.200] – Arijit
I wrote a whole book on it, actually. I’d done about a thousand AI projects at the time when I wrote that book, and one of the interesting things I found was that the things that businesses want to do and the things that AI helps you do are not the same. And this was a very important disconnect that people seemed to have in the market. As a human being, we never really say, I want a more accurate model. That’s not your goal. Your goal is, I want to increase my revenue. I want to reduce my marketing spend. I want to reduce inventory stockouts. What you always had was a business goal and a business objective, and the AI was a tool in that. So when you think about a sales situation, let’s say your benefit of a sale is $100, and the cost of pursuing a customer who doesn’t buy is $1. Do you want an accurate AI or do you want an aggressive AI? You want a very aggressive AI, because you are willing to take 99 bad predictions to get that one correct prediction. Now, let’s say I tell you that you can only take on five sales costs.

Now, you might want a very conservative AI. What happens is our cost-benefit trade-offs and our capacity constraints define what we need from the AI. And the problem had been that people just always said, give me the data and get out of my way while I create the AI. Our premise was, no, the data doesn’t have that information on the human needs, the business needs, the business constraints. And without that information, you can’t create an AI that is useful. So that was the basis of that book, and that’s just one of the various things in the book. Eventually that led to Aible, because I couldn’t find any solution in the market that actually started from: what is the business goal? What is the business objective? What’s the job to be done for this technology? As opposed to, hey, I got a great accurate model for you.

[00:06:27.650] – Craig
Got it. All right. Well, I need to track down that book and read it. It sounds like a good one. We can’t go too far in this conversation without getting into the hottest topic, so I’m going to go right to it. There’s a big debate going on in the world. You and I have talked about this, I think, a couple of times, and I think it’s been further exacerbated. It’s actually been going on for a number of years, but the whole ChatGPT phenomenon has really put a lot of gasoline on that fire. The debate is around AI. On one hand, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This view is held by over 1,000 people who signed a letter, lots of big names from academia, politics, the tech industry, etc., calling for a six-month moratorium on training certain systems. But at the same time, there’s a whole other camp, with other big names on that side, including the UK government, which decided that its country’s principal aim should be to turbocharge innovation and the use of AI. Lots of well-known, smart, educated people on both sides of the argument. Where does Arijit fall?

[00:07:40.670] – Arijit
Well, first, let’s talk about what is actually happening here. Whenever we see a new technology, this debate always happens. It happened with electricity, it happened with the railroad. There was some pretty nasty stuff going on in trying to prevent electricity from being adopted. But what we need to think through as a society is: what are our societal goals and what are our societal fears? If we focus too much on the fears and we try to put guardrails in from a place of fear, we’re going to mess this up. Instead, approach it from the perspective of: here are our societal goals. For example, we do not want an AI to negatively impact us based on our race or based on our gender. Those are very valid societal goals.

[00:08:22.440] – Craig
Yeah, bias. There’s a lot of talk about whether there’s bias built in, or being built in, or being learned by these models.

[00:08:30.700] – Arijit
But notice one thing: I didn’t go to the word bias, because the moment you go into bias, you’re talking about a technological solution to a technological problem. That is not where we should be, because this technology is changing so fast that the moment you even accidentally tread into the domain of “this is what the technology should look like,” you’ve already harmed society. What we as a society need to do is make our societal goals clear and then say, guys, figure out how you’re going to solve that through technology. For example, one of the things I wrote in my book is that there is no way to avoid bias. One of the things people talk about is, well, I changed my sample. I stratified my sample in such a way that there isn’t any bias in my data. No, that’s not true. You might have taken out gender bias. The most trivial example is, I take out the gender variable; well, then the AI will pick up job title as a proxy for gender. All you did is hide the bias. You didn’t actually eliminate the bias. So again, we are at a very important point in this technology.

It is extremely important that we as a society say, here are our societal red lines. But stay at the level of goals; never tread into the level of technology. That’s the big mistake, because this space is going to evolve so fast that if you put on straitjackets and constraints, it’s going to be wrong. It’s actually going to do more harm than good. If you make the goals clear, saying, we want to get on top of that hill and we want to avoid that hill, then let people figure out how to get you to the hill you want to reach and avoid the hill you want to avoid. That actually is far more flexible, and that’s where we should go.

[00:10:09.440] – Craig
Another analogy that I’ve heard before is Prohibition. When we decided that alcohol should be banned, we stopped one problem and then we created maybe 10 other problems with all bootlegging and all sorts of illegal groups, the mob, the mafia, all these groups that were doing this illegally. So is there some alignment there in terms of if you try to stop one thing, it’s going to rear its head in many other areas that you didn’t anticipate?

[00:10:35.530] – Arijit
It’s way worse than that. Understand this: the impact AI will have on society is far greater, because AI is infinitely at scale. This is a very important thing for people to understand. The cool part of AI and the scary part of AI is that every other technology we have ever had has always had a human in that adoption curve. If a human didn’t adopt, if a human took too long, then that technology didn’t go to scale overnight. AI can truly be instantly at scale. You could technically create an AI, plug it into your CRM system, and every single salesperson in your company is affected five minutes later. AI is the first technology we have ever experienced that can go infinitely at scale. With that comes great responsibility. This reactive “hey, let’s put a moratorium on it, let’s not do this AI, let’s do that AI” is just misguided, because you’re going to have the law of unintended consequences come into play at a far greater scale than Prohibition ever did.

[00:11:44.090] – Craig
Go back to your statement. You said we need to make societal goals clear. Is that possible? How would we do that?

[00:11:53.610] – Arijit
We do that all the time. If you think about OSHA, what do we say? We go in and say, these are the harms to your workers that we are not going to accept.

[00:12:02.260] – Craig
But OSHA is a form of governance. It’s a form of controls, a form of regulation, right?

[00:12:07.260] – Arijit
But that is fine, because what I’m saying is you can go in and say, these things are not acceptable to us as a society. But don’t tell people how to achieve those goals. Now, I would have a problem with some parts of OSHA, to be very honest, like putting up the mandatory poster and stuff like that. That last part of it becomes silly, where people say, well, if I do this, I will probably be in compliance with OSHA. But fundamentally, what the law says is: these are the things that you should not do. You should not harm your employees. They should not face situations that will cause them injuries, etc. Over time, we have created some heuristics, and if you follow them, you’re probably not going to run afoul of the law. But laws that are written in terms of “this is why we have the law, this is our societal goal, and as long as you achieve that societal goal, we are good; you have full flexibility in how to achieve it.” Those are the laws that have always defined America compared to the rest of the world.

[00:13:11.840] – Craig
I believe that the White House is currently working on an AI Bill of Rights. Is that the principle-based or value-based guidance you think is useful, or is that just politics?

[00:13:24.520] – Arijit
Now, unfortunately, that Bill of Rights leaves a lot to be desired. One of the easy things you can do is a word count on how many times it talks about AI’s potential versus how many times it talks about AI’s problems. That Bill of Rights comes from a very dark place of fear.

And what is funny is, if you look at the Chinese government’s law, it is also from a dark place of fear. If you look at the French law, the European law, it’s from a place of fear. But here’s the funny thing: all three societies have different kinds of fear. The Chinese law is worried about social cohesion, social principles. The French one, the European one, is much more focused on data privacy and individual rights. The American one is more worried about gender rights and racial rights. But what we’re missing at this moment in time is starting from the incredible potential of AI to do good. Why are you thinking just about how AI can cause racial bias? You can have principles of racial equality enforced by AI much more easily than with any other technology. You could have principles of gender equality, or any principles of equality you want to have in general. You can enforce them using AI. You can encourage them using AI to a great extent. Why are we not talking about the potential of AI for good, as opposed to just being afraid of how AI can do harm?

[00:14:45.180] – Craig
Well, I think that’s the argument of one side of the debate. I guess the question I would have for you is: do you see any meaningful, practical progress being made to define how we guide it for good and provide those principles? Is there any place where that’s starting to form?

[00:15:02.440] – Arijit
So ChatAible, the thing we just announced a few weeks back, is a perfect example of that. When generative AI first came out, there were three things that people were really worried about. They were worried about data privacy. They were worried about hallucinations, where the thing makes up stuff. And they were worried about data residency: how do I make sure my data doesn’t leave my borders? So companies went in and banned ChatGPT. What we did is we went and worked with Fortune 100 customers, understanding what their problems were, what their concerns were. And then with ChatAible, instead of ChatGPT, we are using GPT-4, but we are using it in a way that the customer data never moves from where it originally was. So you comply with data residency. You actually are dealing with data security and data privacy. We only pass what’s called k-anonymized, masked data to GPT-4, so you’re not taking privacy risk. And we have explainable AI double-check the generative AI, which is not explainable. So you’re using technology at scale to solve the problem of hallucination.

But notice all three of these are technological solutions to fundamental problems with generative AI. And these problems were identified and dealt with by bringing enterprise customers, technologists, and entrepreneurs together to solve problems at scale. And by the way, that is the American way. That’s the reason I came to America: the American way has always been, let’s solve the problems through innovation. Instead of being afraid and getting frozen, America has always said, I’m going to go act on this and I’m going to solve this, because we know how to figure our way through. And India has a similar concept called Jugaad, which is this idea that if you don’t have something that solves the problem, you just put stuff together and solve it. I think that is what is common to the American way of doing things and the Indian way of doing things: let’s just go solve the problem. Yes, there will be many, many problems that seem insurmountable. If we work together, we can solve them. And that’s the belief system that I think is necessary in the world of AI.

[00:17:18.100] – Craig
So if I had to restate that, is it your belief that we should let the best solution win? In other words, the solution that’s unbiased, observable, transparent, and done for good wins out over the alternatives that are maybe a black box, uncontrolled, or don’t apply the same principles you’re applying, in your case, to ChatAible. Is that the thinking, let the best solution win versus government control of it?

[00:17:50.200] – Arijit
Firstly, definitely not government control, because government has done a bad job of trying to regulate technology. I think what I’m saying is, take this as an example. What we did is we brought customers together to ask, what are your societal goals? We got customers in the room and we said, what are the things you’re afraid of? They actually had very legitimate concerns, because they want to protect their users’ data. One of the customers was really worried that people were literally copying proprietary information into ChatGPT. We listened to their concerns, and their concerns were legitimate. Then we turned around and said, how do we technologically solve this at scale? What you notice there is, if you just take that model to the country level, there’s nothing wrong with it, because our customers didn’t come in and tell us, this is how you will technologically solve the problem. They said, this is the job your technology needs to do; now go show me that your technology can do the job. The first step is understanding what job the AI needs to do. The second step is letting the entrepreneurs and the technologists go prove that the technology can do the job.

Instead, what we are doing is combining the two steps together and saying, hey, government should tell us what technology we can build or not. Government should instead tell us our societal goals: here is the job that AI needs to do.

[00:19:13.720] – Craig
I think it goes back to one of the things I’ve heard you say many times: you’ve got to ask the right questions. Talk a little bit about that, and then, Arijit, maybe get back to this theme of improvement and how we can use AI as an improvement tool for an organization or society in general. What stories do you have where an organization used it and really got transformational results, or some story that comes to mind? I’d love to hear that.

[00:19:41.250] – Arijit
Well, let’s start with your first question, which is: what is the right question to ask? I don’t know whether you’ve ever read the original genie stories. I’m talking about the Arabian Nights genie stories, not the Aladdin genie stories. The thematic element was always that this poor human would ask for the wrong wish.

[00:19:59.260] – Craig
Oh, right. You get three wishes.

[00:19:59.960] – Arijit
They get into a really bad situation, and then by the end of it, hopefully they get themselves back out. But quite often, they would end up in a worse place than they started.

[00:20:08.310] – Craig
Usually, the third wish just got you back to where you started, because you dug your hole so deep.

[00:20:13.800] – Arijit
If you were lucky. Sometimes it was worse. But what is fascinating about something like GPT is that it’s only as good as the question you ask. You have to ask that first question. And if you ask a biased question, if you go in and say, hey, give me Craig’s Nobel Prize acceptance speech, GPT will happily write you the Nobel Prize acceptance speech you gave. So if your question is wrong, you’re going to get the wrong answer. As a kid, I used to always think that if I ever got a genie, my first wish would be: knowing everything you know about me, and knowing everything you know about how wishes work and how the world works, can you tell me what my next two wishes should be? And we did something similar with ChatAible. We said, well, why don’t we have a very powerful explainable AI look at millions of possibilities in the data, figure out what are the best questions to ask of the data, and then have generative AI like GPT summarize it back to the user?

So think of the first step in the question-asking experience as a summary of your data that actually guides you towards the best questions to ask. Asking the right question is one of the most difficult things. And if you’re really going to be in an AI-first world, the AI actually has to help us ask the right question. I think you’re going to see more and more of this, because if we take this enormous cosmic power of generative AI and put it behind the human ability to ask a question, we are massively constraining it. Now, in terms of customer stories where they got to success, we have published about 25 of these. In fact, a couple of them with Trissential, if I remember correctly, where we went into customers and got them to value in 30 days. That exists across, take your pick: sales, marketing, logistics. Delivering 10% improvement in 30 days in many cases, sometimes more, creating millions of dollars of value in a few weeks. Getting to value from AI actually isn’t that hard. The main advice I would give people is: take a metric that you really care about and then just get started.

But set a short-term goal. Basically say, in 30 days, I want to see how well we can do. What you don’t want to do is get into a nine-month AI project with a low probability of success. If you can’t take something as powerful as an automated AI system nowadays and create at least 10% of value in a few weeks, you’re not going to create that much value in six months either. Either the data is there and the business process is improvable, or it’s not. If it is, you will know that you have improvement potential in the first 30 days, and then you iterate and improve, iterate and improve. I’m not saying that the best possible results will come in 30 days. What I’m saying is that in 30 days you can find a very positive ROI project, and then you keep iterating and improving it, back to your 1% point. You don’t have to get to a 30% improvement on day one. But if you have shown that you can create value and you can constantly iterate, you’ll get to 30% over a short period of time.

[00:23:37.040] – Craig
Yeah, and I can attest to that. I would say one of the reasons we work so closely with you and your organization is that we work with a lot of large organizations, trying to help them understand what’s possible with data. Not in the old frame of being data-driven, where you use data, gather it, segment it, report on it, visualize it, and do all the stuff we do just to make day-to-day decisions, but what is truly possible. In that space, we talk a lot about data literacy: we’ve got to get the whole organization, from the board to the C-suite down to everyone, really understanding the new, modern, data-centric work and what’s possible. One of the things we appreciate about Aible is that people can get their hands on it. It’s one thing to have a theory. Frankly, machine learning has been around for 30-plus years, so it’s not a new concept. But some of the technologies have converged in ways that enable it; ChatGPT, as an example, didn’t exist in anyone’s hands two or three years ago. One of the things we try to do is say, you don’t have to commit to one year, two years, or three, or buy a whole boatload of licenses.

Every engagement we start with is 30 days or less. We load some data and say, here’s a bunch of predictive insights. Now, what do you think? Do these add value? Do they solve the problem? How much value? And in every case, there’s no shortage of insights. And it’s a great opportunity for people to learn very quickly. So again, just to reiterate your point, a lot of people don’t know that you can do this stuff in literally a matter of hours and get insights from data that most people wouldn’t even have thought possible, or without spending X millions of dollars to get there.

[00:25:13.600] – Arijit
On that point, last month at the Gartner Data & Analytics Summit, we had UnitedHealth Group, Cisco, and New York City Health + Hospitals present, and every one of them talked about how they looked at millions of variable combinations. Think of it as millions of questions you can ask of your data, conducted in a matter of minutes, completely automatically; I think the highest dollar number was like $100 or something. So people were doing 75 data sets across hundreds of millions of variable combinations for under $100. The compute cost, the power of these systems, the time to results: they’ve changed so much over the last few years that I think if you have not updated your mind about what is going on in the market, you should. Now, one thing I would add is that we just announced ChatAible at Gartner, and we started giving access a week back. And in the first week, 100 companies signed up for ChatAible. And by the way, sorry, no, 100 didn’t sign up. 100 activated their ChatAible account.

[00:26:15.920] – Craig
They used it. Yeah.

[00:26:17.670] – Arijit
100 companies actually went in and put in the effort to start using ChatAible. I was not expecting 100 companies to use it in the first month.

[00:26:27.570] – Craig
That’s pretty good. Yeah.

[00:26:29.540] – Arijit
Right. And these included several of the Fortune 100, many of the Fortune 500, top retailers, top manufacturers, etc. Now, what is changing, though, is this: if you think about what analytics did, we had to go from a business question that the business user had in their mind, like, how do I sell to young people? And somehow a human had to translate that into a bunch of analytical questions: which channel is liked by young people? Which products are liked by young people? Etc. Then you ask those questions of the data, and then you have to assemble the answers to those analytical questions back into a slide deck that is the answer to the business question. So analytics has really been about translating from a business question to a bunch of analytical questions, and from a bunch of analytical answers back to business answers. And what generative AI, plus these scalable, automatically generated information models, can do is translate from that business question to the analytical questions, ask millions of analytical questions automatically at low cost, and translate the answers back into a business answer. This is the first time we have ever had technology that could do the whole thing.

If you think about tools like Excel, they all helped you translate from an analytical question to an analytical answer. Nobody, before this moment in time, has had something that could go from a business question to a business answer completely automatically.

[00:28:00.990] – Craig
You’re right. That is a big deal. Well, this time, like I expected, has flown by. But I want to…
By the way, just going back to ChatAible, I believe you’ve said that’s a publicly available link. We can definitely provide that in our show notes and make it accessible. I think everyone should be trying that and testing it out.

[00:28:20.910] – Arijit
It’s available. This is the best time to use it, because people can use it for free.

[00:28:26.840] – Craig
Everyone likes free, right? Last question that everybody gets. Let’s set all the Aible and AI stuff to the side for a second. If you could share one idea, a lesson you’ve learned over your career or an idea about growth mindset or improvement, and you were sitting with your grandchild or your 20-year-old self, what would that be?

[00:28:47.140] – Arijit
Don’t listen to people when they’re telling you something is impossible. I come from a very humble background in India. I remember when I was applying to Stanford, a person at the United States Educational Foundation in India told me, you should apply to the University of Hawaii, because there’s no way you’re getting into Stanford. I was not a rich person, so I needed a full scholarship. And if I had listened to her, I would never have even applied to Stanford. People are very quick to tell you what is impossible.

When I started BeyondCore, we had a top professor at one of the top universities in the country saying, that’s impossible. The word impossible has lost all meaning, given how many times I’ve heard it in my career.

[00:29:27.070] – Craig
Yeah, that is very profound and very… I always like simple truths, and it’s very simple, but it’s true. If you look at any major innovation or breakthrough discovery in the past 1,000 years, or as far back as you want to go, there were always decades upon decades, in some cases hundreds of years, of naysayers preceding it, saying, that’s wrong, that’s impossible. All the stuff Einstein did, and many other breakthroughs: there were always the naysayers.

[00:29:56.650] – Arijit
Once you’ve done it… I still remember the first time somebody told me BeyondCore was obvious. I was like, great, that means we finally broke through.

[00:30:07.740] – Craig
Great. It wasn’t so obvious when you had the idea to start it, right? Well, hey, I want to thank you for your time. It’s been incredibly timely to talk with you, and it’s always good to talk with you. Maybe we can have you on the show again sometime in the near future.

[00:30:22.760] – Arijit
Look forward to it. It’s a pleasure to collaborate with you guys. Thank you.

[00:30:24.910] – Craig
All right. Thanks, Arijit.