
Find 1% Better Improvement Podcast Here
1% Better Gerd Leonhard – Quick Links
Check out Gerd’s book Technology vs Humanity
Learn more about The Futures Agency
Connect with Gerd Leonhard on LinkedIn
Follow Gerd Leonhard on Twitter
Subscribe to Gerd Leonhard’s YouTube Channel
Connect with Craig Thielen on LinkedIn
Key Takeaways
- The Role of a Futurist and Humanist: Gerd Leonhard describes his work as focusing on intuition and imagination rather than prediction, emphasizing the importance of preparing for the future by exploring potential developments in technology and society. He advocates for a balanced view of technology, ensuring it serves humanity without overwhelming our human values.
- Technology vs. Humanity: The dialogue delves into Leonhard’s book, Technology vs. Humanity, highlighting the critical examination of the relationship between rapid technological advancements and human values. The discussion reflects on how technologies, especially AI, can significantly benefit society if used wisely but also pose risks of dehumanization and loss of control if not ethically managed.
- Human Protection Mechanism: Leonhard calls for a “technocratic oath,” suggesting that companies and developers commit to using technology for the common good, emphasizing the importance of human oversight in technological development to prevent dehumanization and ensure technology enhances rather than replaces human qualities.
- The Future of AI and Humanity: The conversation explores the potential of AI and other emerging technologies to solve complex global issues like climate change and health crises. However, Leonhard stresses the need for a global consensus on the ethical use of such technologies to avoid exacerbating existing social inequalities and to ensure they contribute positively to society.
- The Good Future Project: Leonhard discusses his initiative aimed at rebranding the future positively, encouraging a more optimistic outlook towards the potential of technology and humanity working together to address global challenges. The project seeks to inspire hope and action towards building a future where technology enhances human life without compromising our core values and connections.
1% Better Gerd Leonhard – Transcript
[00:00:05.980] – Craig
Hello, I'm Craig Thielen, and this is the 1% Better podcast. Today, I'm speaking with Gerd Leonhard, Futurist, Humanist, and CEO of The Futures Agency. He is based out of beautiful Switzerland, but he's in a secret remote location today that we won't talk about, but it's beautiful. Welcome to the show on 1% Better, Gerd.
[00:00:26.640] – Gerd
Thanks for having me.
[00:00:27.500] – Craig
So, Gerd, first of all, I'm often told that I have a very unique title; my title is Chief Essentialist. I work for a management consulting company called Trissential. But I think you've got a title that's even more interesting and unique, and that is futurist and humanist. So first of all, tell us what that is and what that means.
[00:00:49.460] – Gerd
Yeah, I'm probably a bit unusual as far as futurists go. My work is really about observing the future. It's not so much about prediction, which is very hard to do. It has been achieved sometimes, but my work is essentially about intuition and imagination. And so what I do for my clients and for myself, for people, organizations, governments, is to zoom five to seven, maybe ten years into the future, collect what I find there, and then come back to today and say, Okay, it's going to be like this… AI is a new technology platform. What does that mean in seven or ten years? How do you change or take advantage of it? How do you prepare? I always say the future is not about prediction, it's about being better prepared. That's my mantra. I follow the work of Alvin Toffler, Arthur C. Clarke, Buckminster Fuller, people like that.
[00:01:47.500] – Craig
Yeah. Well, it's more or less your focus. As you said, your focus is thinking ahead: where are we going, where are things headed, so that we can act better today, better informed. So I love that, because frankly, I think very few people spend time doing that. We're so focused on quarterly results and what we have to accomplish this month and this year that sometimes we lose track. So I first started following you, I think about seven or eight years ago, Gerd. You caught my attention when I stumbled across a video that you created on YouTube around digital transformation. That was the title of the video. I'm sure you know exactly the one, because I think it's one of your most well-known ones.
Gerd
It’s my biggest.
Craig
And that led me to… I was just fascinated. I loved it. And the reason I was looking for that content and why I loved that video (in fact, I just played it yesterday, and I think it's as relevant today as it was seven years ago, which is hard to imagine) is that it brought me to this book that you wrote, Technology vs. Humanity. And what I use the video and the book for (and by the way, I don't expect any commission, but I think I've sold a lot of your books) is this: when I'm in front of organizations, typically, I'm trying to get across to them that the pace of change in the world is much greater than their pace of change and their sense of urgency inside their organization. And as we all know, long term, that doesn't end well. So how do we get their heartbeat up? How do we get their attention on the future? How do we change to adapt faster? I think the video does a wonderful job of that. And the book... I know you say you put a lot of energy into the future to prepare us for the present, but I have to tell you, the book is wonderful at predicting where we are now. In fact, you almost wrote in the book about what could happen with social media companies. And then we all know about the Facebook and Cambridge Analytica fiasco, and how technology and data can be used to do things that we, as users, didn't intend, or that maybe are not legal, or that are quite dangerous.
So I think that I’d highly recommend, first of all, anyone to read this book. I think it’s incredibly relevant. But tell us a little bit about the book. In a synopsis, what is the book Technology versus Humanity about? And then seven, eight years later, what did you think? Yeah, I kind of got this right, but anything surprised you now that you can look back in time or that you didn’t see coming?
[00:04:36.310] – Gerd
Yeah, I should probably write a new edition of the book. I mean, it still feels very much like yesterday when I wrote it, and it was quite painful because of the ongoing, constant changes. But basically, the book talks about how we can get to a future where we can take full advantage of technology without being squashed as humans. That's where we are right now with social media, with fake news, with manipulation, with AI. There are many, many good things about all of these things. For example, I use AI to translate my videos from English to Spanish and Portuguese, and it sounds like I'm a pretty good Spanish speaker in my AI videos, and those things are very useful. But, generally speaking, there is a trend towards looking at the entire world as a technology problem. Marc Andreessen has pointed this out in his manifesto about the digital world, and it's completely like, technology will solve every problem, just get out of the way. My pitch in my book was…
Craig
I think Elon Musk is on that side of the argument as well.
[00:05:40.650] – Gerd
Right. I would say… I started this seven years ago when I said, Look, technology will never solve cultural, political, policy, human problems, because they're not about efficiency. They're about beliefs, and they're about standards, and they're about ethics, and they're about all the soft stuff. So, in the book, I talk about androrithms, the human things: compassion, emotion, understanding, intuition. And algorithms, which are the logic of machines. And what we have now is a world that's increasingly run by algorithms that don't really comprehend, don't understand, the complexity of real life. So for example, I always say a good AI would understand about 5% of what I'm trying to say. I use an AI tool called Runway, another one called Clipdrop, and of course, ChatGPT and DALL-E. I give an input and I say, Okay, I want a bunch of philosophers sitting in a room writing with their left hand. And not a single time do I get the result correct… They're all writing with their right hand, because people write with their right hand, and I can't get that fixed. There are so many things about artificial intelligence that are quite useful, but many things that are, I always say… seldom in doubt, but often wrong.
So it's one of those things where you wonder about where it's going to go. So basically, I think we need a human protection mechanism. I call it the technocratic oath. After I wrote the book…
Craig
Yeah, I love that idea.
Gerd
Companies essentially say, Look, we're going to do this for the common purpose of helping everyone excel, not just the top 5%. So we have all these issues. And I think, basically, I always say we should embrace technology because that's the key to solving problems, but we should not become technology. That's a key sentence from the book, because when you become technology, you are a commodity. I don't want humans to become a commodity. I think that would be the wrong direction. I'm a little bit afraid that OpenAI, with all of their initiatives around general intelligence, will turn people into commodities. Providing data is what we do first, and then later on, we are replaced by the machine doing what we do, and then we lose authority and we lose participation. Of course, the worst part is, because of all the tech, we lose trust. We stop trusting each other, we stop trusting media, and then the whole world is like a crime zone.
I think these things are unfolding on the horizon when we think about the possibilities of things going wrong, especially in geopolitics. And the top concern is, of course, fake news and manipulation. So all this stuff was already touched upon in the book seven years ago.
[00:08:37.940] – Craig
Yeah, like I said, it's incredible. And what I think the book does an amazing job of, and it's in the title… I spent my whole career in technology, and we often just go, go, go: improve, make it better, leverage it, use it, and try to improve our lives and businesses without much regard for the consequences. In the past, the consequences weren't that broad or that systemic; they were pretty isolated. Well, now in the world of AI, I think you've compared it to nuclear technology, in that you have to look ahead and say, what could happen? You could use it for good or evil. And the book does a really nice job of balancing both sides of the equation. Technology is certainly going to advance, and to some degree it's part of evolution, and we can't stop it. But on the other side, there is a convergence. Some people call this the singularity; some people might call it transhumanism. There's some merging of humanity and technology. And that's really where I think you pose a lot of questions and challenges: wait a minute, we better think about this, because we don't want to lose what is human. We don't want to be less than human. We have certain attributes as humans that, frankly, technology will not and cannot replace. Some things we do as humans it can replicate wonderfully; even our creativity and ideation, things you'd think are uniquely human, it does quite a nice job with, but it doesn't…
[00:10:21.620] – Gerd
I think of this more as augmentation. For me, machines are about competence. If I have a competent machine that can drive the car for me while I'm sleepy, in Los Angeles, in my Tesla, falling asleep in the traffic jam, that's a good tool. It's competent. I wouldn't go on the German highway, going 200 miles an hour, chasing each other with a self-driving car. It's not competent for that, not today, and it's unlikely that it should or would be, because it's very complex. But it is competent for the traffic jam. What I don't want from a machine is consciousness or human agency, because that's really our job. It's also a question of our responsibility and of our power. We don't want to give that up. We are now at the point where it's quite clear that, in theory, it would be possible to build machines that can simulate all of these things. The question is no longer so much whether it's possible. Ten years ago, it was about whether it's possible; today, I would say pretty much anything is possible. It is about what we want. What exactly do we want from this technology? Do we want to get super rich and superhuman and live forever? The top 10%, that's basically us, because we're in that bracket. But do we think of it like that, or are we looking for a larger solution that goes beyond monetary market issues or the world domination plans that companies like Google and Microsoft have?
[00:11:51.980] – Craig
I guess the question I have for you is, how can we embrace technology without losing humanity? They do seem to be on a collision course. To some degree, you could argue we're already dependent on technology. No professional could really be effective without having a smartphone or a laptop connected to the Internet. And now we've got Neuralink; lots of technology is coming where it's not even going to be a separate device. It's going to be part of us. And there's even biology, right?
[00:12:29.090] – Gerd
The dependency question is a big one, because it's not an absolute question. I depend on my coffee in the morning to feel good when I get up. Could I live without the coffee? Yes. I might say, Okay, I want a coffee, but it's not there, and I could still manage. Dependency is a gradual thing. And it's also something where we can say too much of a good thing can be a very bad thing. If I drink 15 cups of coffee per day, I'm not going to be very healthy. The same goes for alcohol, smoking, whatever the habit is. And the same goes for technology. So using a tool to do things is one thing: I'm going to use virtual reality or the Apple Vision Pro to get the job done when I'm a doctor or something. But it becomes something else when we look at the tool as if it were the purpose of life, where I want to live inside of this thing because it's so great.
[00:13:21.790] – Craig
Yeah, like virtual reality, where you can spend your whole life in there, right?
[00:13:26.490] – Gerd
And that is not the same thing. That's not the same. Basically, the tool becomes the purpose. Marshall McLuhan, the famous philosopher, said in the '70s: first we create the tools, then the tools create us. And I think, within reason, that happens all the time, like with television, the Internet, or the telephone. But when we get to the exponential scale, that would mean that we cease to be independent as organic beings, that we are attached to the tethers of the neural lace, and without that, we can't exist. And then we're in science fiction territory. But this is the proposal of companies like OpenAI, that say, Okay, if we have an AGI, it will make much better politics. It will solve conflict. No, it will not. It will make it more efficient. Every technology is about efficiency. Basically, if you have discrimination in everyday life, like women being discriminated against in Switzerland or many other places, making less money than men, and you then use technology, for example, to look for jobs, technology will blow up that problem, because it makes it more efficient. If you're running an ad on LinkedIn, the technology says, Okay, 97% of data scientists are men, so we're going to show this ad to men, not to women, because in the data, women don't exist as data scientists.
[00:14:51.190] – Craig
There’s lots of bias in how the algorithms are built to make money, largely.
[00:14:55.790] – Gerd
That's not such a good thing. Basically, we can't resort to a simple yes or no answer. With AI, we have to say, Okay, up to this level, it's useful and allowed and possible. But, for example, the idea of creating superhumans by merging them with AGI, merging it physically into their bodies as well, to me, is an aberration. That doesn't mean it should necessarily be illegal, but it certainly shouldn't be the new normal. It's like saying, I can take a drug to work faster and think quicker. I could do that. But should that be normal? Should I have to compete with people who all take the drug, so that if I don't take it, I can't compete? Those are difficult topics that require societal conversation.
[00:15:46.640] – Craig
Is that what you're suggesting with something like the Technocratic Oath, or some sort of global agency, like we have with nuclear non-proliferation, something that thinks about these things? We have agreements now, I think, around cloning people; that's illegal. Is something needed that governs those kinds of things?
[00:16:09.790] – Gerd
We need to govern on the existential level, not on the micro level. No government wants to govern the use of a hammer. Okay, I can kill somebody with a hammer, but there's no law specifically about not killing with a hammer. With AI, it's different. If I'm going to build a superintelligence, then I'm essentially building the most powerful thing ever created, even more than nuclear, even more than gas and oil in the industrial society. And we can't afford to make the same mistakes again, because when we rolled out the industrial economy, we said, Okay, we want progress, who cares about the environment? Now here we are today, and the environment is dying. A big issue. If we do the same with AI, we would be building superhuman entities, digital entities, and that would probably not be controllable anymore. We don't need to micro-regulate by saying your chatbot can't do X, Y, Z; there are privacy rules about that, of course.
But we do need to think about creating an AGI, which is the official strategy of OpenAI, and to some degree of Microsoft. That, to me, is a governmental issue. It's an existential issue. It's not a business issue, because, of course, it's going to make a lot of money, but what world do we want if we make a lot of money but are completely cut off from free will or any humanness, because the machine already anticipates everything? That would be truly like…
[00:17:50.170] – Craig
It's a tough thing, though, because there are some correlations to nuclear technology.
I just watched the film Oppenheimer, and that was a really interesting film for me to learn the history of how that built up in the United States. To some degree, there's an arms race going on right now with AI. If Russia or China or whoever has AGI before we do... Now, I know this isn't officially a government program, but there's great power in that, sophistication and control of certain things, world economies and so on. So to some degree, there's an arms race. But then, to your point, you don't want it to change humanity or do things that could cause harm, right?
[00:18:32.870] – Gerd
Well, of course, this issue is not the only issue, because we now have quantum leaps in technology. I call these the Game Changers. It's not just AI; AI is the first. Then there's quantum computing, supercomputing. There's nuclear fusion. There's synthetic biology, essentially building new materials from a biology background in engineering. Then there's geoengineering, and many others. With all of those really large things, we need global consensus as to who's in charge, who's mission control, what are the ground rules. If you're going to use geoengineering so that it snows on Wednesday in Germany, that's not just an issue for Germany.
[00:19:15.650] – Craig
Yeah, for sure.
[00:19:17.530] – Gerd
And the same goes for AI. So we're going to need to work together, because if we don't, then we do get an arms race. And of course, there's the argument that China is doing all that stuff unencumbered, yada, yada, yada. I understand it's an argument of fear, but in the end, we're living on one globe, and there's not going to be a single country that can just go ahead and do as it pleases. We're already involved in the policy of Brazil, watching what they do about the Amazon. It's not like we're just going to stand back and say, Let's wait for better times.
[00:19:52.330] – Craig
Do you see, from your perspective... Is there any global political momentum headed that way? Do we have a shot at any body like we built for nuclear back in the day? After World War II, there was a lot of global momentum saying, Hey, we don't want this to ever happen again, and the world came together. Do you see anything like that now? Or what would it take?
[00:20:20.640] – Gerd
I think the UN has proposed a similar scenario, what I call an International Artificial Intelligence Agency, an IAIA, to be created. And of course, the UN is now painfully understaffed, under-resourced, underfunded, and unauthorized to do much of anything. Unfortunately, because they're on the right path with this, we have a big issue here, and it's not a new issue, as you can see with Israel and Hamas and all that discussion. What exactly do we empower the UN to do? Will they be able to do the job? Will we give them enough money? Will António Guterres be able to go out and do it, rather than everyone constantly going for his head? He's a great guy who points in the right direction. But this is a very big question.
So the answer is, I think, unfortunately, that it's quite likely we're going to see a major incident first that triggers our response, where we say, Holy shit, now we've got to do something. We had two bombs first. And this is what Oppenheimer said: he didn't want the bomb to be used. They could have demonstrated it out in the ocean, but they didn't, and now we have this precedent. I think it's this painful moment where we see, for example... the most likely scenario would be an artificial intelligence essentially destroying the stock market in a global financial crisis, not by design, but just by complete misinformation.
[00:21:52.980] – Craig
I mean, you could argue World War II... Like you said, there were other ways to show that this technology was available without killing millions of people. That was a political decision as much as anything, and probably not the best decision. So there may have to be some large mistake with this technology for the world to go, Wow, we don't want to go there. We don't want this to proliferate. That seems to be how change happens sometimes, right?
[00:22:20.590] – Gerd
Look what happened with Facebook, when Snowden reported what Facebook was doing with the NSA, and Google and everybody else. That made a big dent in people's thinking about the Internet. It didn't, unfortunately, stop everything. But if we're going to see something like that, one can only hope it's a scenario where the damage is only money, not an air traffic control collapse, 100,000 airplanes crashing. These things are, unfortunately, not a stretch. They are entirely feasible. We empower machines to make decisions that they're not equipped for.
[00:22:59.250] – Craig
Well, let's talk about things on the brighter side of the scale. There's a lot of promise in leveraging some of this technology: health care, longevity… You've talked about the global environmental challenges we have. Where are you most excited that AI technology can deliver major breakthroughs that benefit society?
[00:23:25.250] – Gerd
Well, I see the first benefit in essentially turning the letters around: I like intelligent assistance, IA rather than AI. I think that's 95% of the good that it can do. Because let's remember, there are so many stupid computer systems in the world, everywhere, whether it's smart city controls, environmental controls, pollution controls, all that stuff that was never made smart. And smart government, being able to use digital government, and all sorts of helpful things. Like translation, like filing things, like monkey work, basically, right? Commodity work. I mean, there is a huge list of commodity work that we all have to do because machines are stupid. If machines could no longer be stupid, which is what everybody's working on, then the machine could say, Oh, Gerd is spelled G-E-R-D. It's not an English name, it's a German name. Therefore, I'm not going to say nerd or turd; I'm going to say Gerd, because I learn. My computer still hasn't got my name right.
So, when you think about all those things, that would be very useful, because it would basically reduce pollution and increase efficiency. There are charts showing that knowledge workers can be 50 to 100 times as efficient using AI, which increases GDP, and hopefully a rising tide lifts all boats, right? That's the concept there. That could be good or bad, depending on the company that you work for. But we could do so many things with this. I'm really excited about translation. I'm excited about robotic process automation, about all the things that make our lives better without removing us. That's where the focus has to be, in my view: environmental control, the green economy, green technology. If we use AI to help with green technology, we can solve the climate problem. Then if we switch to renewable energy, no more fossil fuels, problem solved. Genetic engineering, analyzing the human body, coming up with new medication, which is now done with AI. Think about all of those things. They're not about transhumanism. They're not about the singularity. They're not about miracle machines. They're just better machines.
[00:25:52.250] – Craig
Yeah, I love the idea of intelligent augmentation and centered around humanity, not replacing humanity. So I think that’s a great concept. So I recently saw you had a TED Talk, and you talked about The Good Future. So talk a little bit about what that is to you.
[00:26:12.360] – Gerd
Yeah, I started working on that a few years ago, basically as a follow-up to Technology vs. Humanity, because so many people, when they talk about the future, say, Oh, that's all nice and fine, but the future sucks. The future is terrible. In America, for example, 70% of younger people between 25 and 40 say that their future will be worse than the present, and that they won't have kids because of the future. It's just out there.
[00:26:41.140] – Craig
It is. It certainly is.
[00:26:42.660] – Gerd
The future has a bad reputation right now. And this is partly because of geopolitics, things like Putin and the war and all that stuff. But also because social media, which is now where 50% of all news is consumed, basically emphasizes all the bad things. It divides us. So a lot of people are moving towards a hopeless, despondent, dystopian, dark scenario of what the future holds. And my mission is to say, you know what? We are doing so many amazing things that will allow us to create The Good Future. Good being, for example, not dying, not having diseases, no poverty, no deaths in childbirth, simple stuff like that. Not talking about three cars or a house or so. Just simple stuff, right? So I started this campaign for The Good Future, and I wanted to start a global campaign like New York did in the '70s or '80s, where New York had a branding agency come in and say, Let's call it I Love New York. So I wanted to start a campaign that says, I love the future. We need to be more hopeful, because the problem is that when people are not hopeful about the future, they become despondent, they don't become open towards change. They're locked down, and they vote for idiots.
[00:28:02.900] – Craig
Well, not to mention, suicide rates are skyrocketing, and all sorts of other things, violence, mass murders, are on the rise. And it is a self-fulfilling prophecy. So what I'm taking from you is, let's focus on what can be, and we can manifest the future. We can make the future what we want it to be if we focus on it. Is that…
[00:28:31.400] – Gerd
Yeah. Kevin Kelly, the famous futurist whom I admire, who is now Senior Maverick at Wired magazine, out in San Francisco, said the other day, We should be optimistic about the future, not because we have fewer problems, but because we have the capacity to solve a lot more than before. And that's just so true. But the problem is that technological and scientific capacity does not translate into solutions without policy and collaboration. So we could say, Yeah, we can save the world, in parentheses, with AI. And yeah, we can. We can also destroy it with AI. So in order for the good outcome to happen, we need to collaborate. We need to come to the same conclusion. And we need to say, You know what? The future is better than we think. There are just a few things that we need to do. We need to clean up our collaboration and our purpose, not just have more scientists. This becomes crucial as we go on, because now we're moving into, I wouldn't say post-capitalism, but a new economic logic, because the old economic logic is destroying us. It's basically: grow, at whatever cost.
[00:29:49.760] – Craig
Yeah. We're doing damage to make profits in the short term; we're eliminating rainforests and overrunning nature and different things. It's more of a mindset. Yeah.
[00:30:02.230] – Gerd
Not just that, but we also have carbon companies in the sense of digital carbon, digital pollution, like Meta and Facebook, where essentially we are harvesting information, we're extracting things, and then we're extorting people with that information. So basically, it's the same thing. And we need to think about this and say, Well, we need to change that business model. They used to say in the Valley, move fast and break things. Now we need to move fast and fix things. This is really what we need to do. And we can. We have all the means. We just have to get on the same page.
[00:30:45.530] – Craig
Yeah, it's a mindset. I like it. It's a very positive mindset. And again, think ahead: what can we do now to build for a brighter day? And we have so many tools now to do that that we've never had before. So one question I ask a lot of guests, and it's a challenge that a lot of people have, and for you, as a futurist, it seems even more critical: how do you keep up with, or stay ahead of, the curve of change in technology? How does one do that? Because everything's moving so fast, and everyone's got a day job and kids and life pressures. How do you do it? What practical things can people take away to keep up with all this change?
[00:31:27.340] – Gerd
I have to admit, of course, the knowledge pressure, as I call it, the pressure to know things, is becoming huge. But since the rise of AI, I have come to a different way of thinking there: knowledge and data and information is really a collecting job for a machine. I use a lot of tools that collect for me. And what I need to do is focus on the important job, which is understanding, intuition, and, in parentheses, wisdom, to get to the point where you can basically have passive knowledge. There's a saying from Malawi: Knowledge without wisdom is like water in the sand. And machines have knowledge without wisdom. So it's very useful, but what does it amount to? Do you draw the right conclusions? And so with my work, it's really about drawing conclusions, contemplating, chewing on it.
And so I read four or five books a month. I read lots of things online. Of course, I watch lots of YouTube material from my colleagues, and I collect. And then I sit down and digest, and I write bullets down. So I say, Okay, it’s like this. Or it’s like that. And then I come up with memes and bottom lines. For example, I’m giving a talk for a big tech company in two weeks in Dubai, and one of the memes is that technology alone is not going to save us. That’s my conclusion. They may not appreciate that, being a tech company, but technology alone is not the solution to our problems. It’s just a tool that we use. Basically, it’s complex, but I spend about three-quarters of my time collecting, analyzing, talking to others. And then eventually, you arrive at this bottom line.
[00:33:21.300] – Craig
I think it’s good advice. In fact, a lot of people may not know this, but Warren Buffett, one of the richest people on the planet and one of the best investors, spends an inordinate amount of time reading. I think he reads something like four or five hours a day. Of course, he’s reading 10-Ks and annual reports and different things, but he does an incredible amount of reading. One thing that the arts get right, or do better than the business world: in sports or in the arts, you spend most of your time on practice, and very little of your time on performance. And in the business world, it seems like it’s 99.9% perform, perform, perform, and very little thinking, researching, reflecting, training. I think what I’m hearing from you is, Hey, let’s invest more in ourselves so that we can reflect and think forward and plan and strategize and collect our own thoughts as humans, because that’s what humans do, rather than let some machine do it for us, right?
[00:34:26.640] – Gerd
Well, I always say understanding the future is probably more of an art than a science. If you’re a scientist, you would disagree, but then, of course, you have more science in your life. And if you’re a technologist, you would disagree because you have tech in your life. But I think in the end, Steve Jobs said it perfectly, rest in peace: it’s a combination of art and technology. For sure. Art and science. And he was brilliant at that. When you’re sorting through a lot of information, eventually you have to come to a point where you create a final thing from it. That’s what artists do. It goes out there and it’s done. You’re not going to understand the future based on facts. There’s no such thing. If that were true, I would be the richest guy in the world, not Elon. You can’t think of it that way. In the end, you come to conclusions. And then if you’re lucky, nine out of ten turn out to be true.
[00:35:35.490] – Craig
Or in some cases, you may want to prevent a bad future. You may want to take action now to say, let’s prevent something worse from happening. The future hasn’t happened yet, so we all get some role in it, right?
[00:35:45.120] – Gerd
Yeah, I mean, this is the idea of being fit for the future. I think it’s really important to say that you’re never going to know the answer to everything. But if you are prepared and you’ve read a lot and you’re informed, then you make a decision and say, You know what? I think this is not true, and therefore my decision is going to be like this. And you can do so instantly. This is like when you drive a car on the German highway, again, going 200 miles an hour: you don’t have time to ponder what happens here. You make an instant decision. We’re going into the future now at 2,000 miles an hour. We have to get better with our intuition, our imagination, and our preparedness.
[00:36:28.060] – Craig
Yeah, absolutely. It’s a good analogy. I drove the Autobahn once and was going about 170 miles an hour. Then I slowed down a bit because there was another car, so I might have been going 140. All of a sudden, before I knew it, I saw lights flash in my rear-view mirror and someone was coming up on me, probably going 180 or more. I was the slow person. I was going 140 miles an hour, faster than I’ve ever gone in my life; in the US, that would be unheard of. And yet I was in the wrong lane, the slow person. That feels like the technology world. You think you’re going fast, but guess what? Somebody is ready to pass you going much faster.
[00:37:09.770] – Gerd
I think this is what I call the future mindset. Bill Gates says we have to spend one hour per day in the future. I totally agree. If we do that, then we can increase our readiness, and we can also reduce our fear. For example, somebody comes up to us and says, This AI will now do the following, and your job will change as follows. If we understand this, we can put it in the right perspective, and we can be prepared. But if we’re not paying attention, and this is the biggest problem that most of us have, especially business leaders, we’re just not paying attention.
[00:37:44.380] – Craig
Yeah. No, that’s-
[00:37:44.980] – Gerd
And we only pay attention if it hits us. So we have to pay more attention to the future. And then when we get there, we can say, yes, I think I know what to say.
[00:37:54.950] – Craig
There are a lot of things that Bill Gates has said or done that I don’t agree with, but that is one that I agree with: we do need to spend a lot more time thinking about the future than most people do. So, tell us what’s next, Gerd. I’m curious. You’re always working on something, thinking about the future. What’s next and what are you working on?
[00:38:13.030] – Gerd
The Good Future Project, which essentially is the idea of rebranding the future. I’m looking for partners for that, to start a global branding campaign about why the future is good and how we make it good, to give rise to the feeling that people are excited about the future. So that’s a big project where I’m looking for partners and so on. I’m making a new film. I can’t tell you what it’s all about, but my last one was called Look Up Now; it was about AI. So artificial intelligence is a big topic for me. I’m working on some really great new speaking topics. One of them is about the ten facts about the future, where I’m going to be ruthlessly pointing out what is actually happening. And that’s already quite a popular request. I continue to work on new things, new ideas, to make more films, and to think about how I can have more impact.
[00:39:06.700] – Craig
Yeah, excellent. Well, I highly recommend that those listening follow Gerd. He’s very active on LinkedIn and YouTube, and the content you put out is always great and inspiring. So the last question on the podcast for all the guests: setting aside all the things that we talked about with AI and the future, and just looking back at your career, and you’ve had a varied career going back to Silicon Valley and the tech business and lots of different business ventures. Say you’re talking to your grandkids, or to yourself at 18. What life lessons, putting all this aside, would you want to share and pass on to the next generation, just about living life and the life experience? What would you say?
[00:39:53.570] – Gerd
I think the most important thing I’ve learned is that it’s much more important to trust your humanity, your intuition, and your human side than to look for too much argument and logic to prove things that can’t be proven. Follow what you think, what you have discovered; follow your intuition. For education, that means, for example, we need to further our human education rather than just technology education, to spend more time on that, because ultimately what machines can never, hopefully, do, or shouldn’t be doing, are the human-only skills: emotions, compassion, and so on. The other thing I’ve learned over time is that it’s very important to be smart and to be informed. But when you talk to other people and you meet other people, then it’s people to people. It’s a person to a person. It’s not a fact sheet against a fact sheet. Everybody is a person. It’s also so important to maintain this positive outlook that good things are possible. I’ve learned over time, because I travel a lot, that if I start with a negative anticipation, then I get negative things. Sometimes you have to be careful, of course, not to be too naive about positive things, but finding a good mix there is really important.
The last thing is what I call the offline luxury, which I talk about a lot. It’s very important not to just think of this connected place, where you can connect and get more things, as reality. I always say offline is the new luxury, because when you go there, you can decompress, you can digest things, and you can find things. It’s very important to pull back and do the human things, which are not about data. Like nature.
[00:41:55.690] – Craig
Do the things that are human. Focus on the things that are human. Great stuff. I have to say, Gerd, using your own words, it’s been a mind-boggling conversation, and I enjoyed it. I could talk to you for hours, and I will continue to follow you. And I will leave the audience here with a quote that you recently used, from Kevin Kelly, who said, We should be optimistic not because our problems are smaller than we thought, but because our capacity to solve them is larger than we thought. I thank you for sharing that and everything you shared with the audience today.
[00:42:35.370] – Gerd
You’re welcome. Embrace technology, but don’t become it, right? That’s the bottom line.
[00:42:40.330] – Craig
Absolutely. Thank you so much, Gerd.
[00:42:42.810] – Gerd
Thank you.
check out other podcast episodes