[00:00:06.4] ANNOUNCER: Welcome to The Science of Success with your host, Matt Bodnar.
[0:00:12.6] MB: Welcome to The Science of Success. I’m your host, Matt Bodnar. I’m an entrepreneur and investor in Nashville, Tennessee and I’m obsessed with the mindset of success and the psychology of performance. I’ve read hundreds of books, conducted countless hours of research and study and I am going to take you on a journey into the human mind and what makes peak performers tick with the focus on always having our discussion rooted in psychological research and scientific fact, not opinion.
In this episode, we discuss the inevitable technology shift that will be impacting our future; the second Industrial Revolution. The importance of having an open mind, critical thinking and seeking disconfirming evidence. We explore how to ask better questions and why it's so important that you do and talk about some of the biggest technology risks with Wired's Kevin Kelly.
The Science of Success continues to grow with more than a million downloads, listeners in over 100 countries, hitting number one in New and Noteworthy, and more. I get listener comments and emails all the time asking me, “Matt, how do you organize and remember all this incredible information?” A lot of listeners are curious how I keep track of all the incredible knowledge I get from reading hundreds of books, interviewing amazing experts, listening to awesome podcasts and more.
Because of that, we’ve created an epic resource just for you; a detailed guide called How To Organize and Remember Everything, and you can get it completely for free by texting the word “smarter” to the number 44222. Again, it's a guide we created called How To Organize and Remember Everything. All you have to do to get it is to text the word “smarter” to the number 44222 or visit successpodcast.com and join our email list.
In our previous episode, we discussed the experience trap and why someone who's been doing their job for 20 or 30 years may be no better, and sometimes even worse, than someone who has very little experience. We looked at the shocking truth behind 35 years of research that reveals what separates world-class performers from everybody else. We talked about how talent is overrated and misunderstood, and why research says it may not even exist, and we went deep on the critically important concept of deliberate practice and much more with our guest, Geoff Colvin. If you want to uncover the secret behind what makes world-class performers so talented, listen to that episode.
Lastly, if you want to get all the incredible information from this show; links, transcripts, everything we talk about, and much more, be sure to check out our show notes. Just go to successpodcast.com and hit the show notes button at the top.
[0:02:48.5] MB: Today, we have another amazing guest on the show; Kevin Kelly. Kevin is the senior maverick and cofounder of Wired Magazine. He’s also cofounder of the All Species Foundation which seeks to catalog and identify every living species on earth as well as The Rosetta Project, building an archive of all documented human language and much more.
He's a New York Times and Wall Street Journal bestselling author of several books, including The Inevitable: Understanding The 12 Technological Forces That Will Shape Our Future. His work has been featured in Forbes, the Smithsonian, and much more.
Kevin, welcome to the Science of Success.
[0:03:24.7] KK: Hey, it's my honor and privilege to be here. Thanks for inviting me.
[0:03:27.5] MB: We’re very excited to have you on today. I‘d love to start out, I'm sure many listeners are kind of familiar with you and your story. Tell us a little bit about the premise for the new book; The Inevitable and kind of what really drove you to write it.
[0:03:43.1] KK: The book, in brief, is a projection of the next 20 to 30 years in, mostly, digital technology and what those long-term trends may look like. I don't try to predict the specifics in any way. This is much more of an all-things-being-equal view: things are going to lean in these directions. There are roughly 12 interrelated directions, all leaning in one large direction, and these 12 forces are things that are going to happen pretty much no matter what we do. There are still plenty of decisions we have to make in terms of the specific character they take.
The short version of the book is I’m suggesting that we embrace some of these things which sound a little scary, like artificial intelligence, virtual reality, that we embrace these in order to steer them, in order to form them into the versions that we want and a future that's friendly for us.
[0:04:52.0] MB: Obviously, the title kind of implies this. Tell me more about the inevitability of many of these forces. Why are they inevitable and why does that make it so important that we embrace them?
[0:05:04.0] KK: The inevitability is a soft version that comes from the very physics of the material world they're all made from. A way to think about this is to imagine rain falling on a valley. The direction, or the path, of a particular raindrop as it hits the hillside and finds its way down is completely unpredictable, but the general direction is known. It's down. It's going to go down no matter what.
This direction comes from the physics of the entire terrain. A lot of technologies are really bound by physics, and I think once you have invented electric wires and switches and such, you're going to come upon the idea of telephones, inevitably. We know that because there were hundreds of people working on it. Edison was the 32nd inventor of the electric light bulb, because it was inevitable.
While the electric light was inevitable, the particular bulb was not. While the telephone was inevitable, the iPhone was not. The internet was inevitable once you have the telephone, but Twitter and Facebook were not. The particulars can change, and we have decisions about whether something is national or international, whether it's open or closed, commercial or nonprofit. All these different characters of these technologies, which have their inevitability built into the physics, are choices that we still have.
There are things that natural evolution tries to make again and again. It makes flapping wings, because that's a very good solution. It makes four-legged animals, quadrupeds, because that's a natural solution things arrive at again and again given our gravity, and we extend that into the technological realm by making four wheels. Four-wheeled vehicles are kind of inevitable. Of course, the Lamborghini is not.
The kinds of forces I'm talking about, like artificial intelligence and virtual reality, come about because as we make technology, this is a pattern things want to fall into; they're naturally inclined, bound by physics, to go in a direction. However, the particular companies and the particular products are not at all something we can predict.
[0:07:39.5] MB: You had a really good example using electrification to demonstrate what you call cognification. Can you explain that analogy, and also talk a little bit about what cognification is, or what it means for something to be cognified?
[0:07:57.3] KK: One of the things that evolution has made, invented, created again and again in many different classes and kingdoms of life is mind. It keeps trying to make minds. We're making minds too, and we're putting little slivers of smartness into everything we make, or making some things very, very smart. For making things smarter, we don't really have a good English word, so I use "cognify." We're cognifying things; it's a cognification process. It happens again and again, and some things we cognify to a very large extent. We call those artificial intelligences.
This cognification process is going to lead to many different types of cognifying. There are many different modes, many different subroutines in our minds, our own brains: a suite, a portfolio of dozens of different types of cognition, from perception, to inductive reasoning, to symbolic reasoning, arithmetic, emotional intelligence, spatial navigation. These are all different modes of thinking, and we have a very complicated suite, a symphony of these different modes.
Some of the artificial minds we make will be very simple, with just a few of those types of thinking. Your calculator is smarter than you are in arithmetic right now. Your phone is better at spatial navigation than most of us are naturally.
We're going to fill the world with thousands of different species of thinking, like a zoo of possible minds. Most of these will be very different from humans. They'll think differently, and I'm suggesting that that's going to be their chief benefit: they think differently than we do, and so we will work with them to solve problems. The best chess player on the planet today is not an AI; it's an AI plus a human, because they're complementary kinds of intelligences.
As we make this cognification, as we employ it and deploy it, we're going to do something very similar to what we did during the Industrial Revolution, which is disperse it on a grid, like the electrical grid, which sent out artificial power to every household, every farm, every factory. This new artificial power allowed anybody to harness it and create things that no natural muscle power could create: throwing up skyscrapers, extending railways across the continent, producing cloth by the mile and shoes by the pile.
That artificial power was distributed on a grid, and now we're going to take artificial intelligence and distribute it on a grid called the cloud, and it will become a commodity like electricity. It will be a utility that anybody can get and use, and you can use it to make whatever you want a little smarter in some dimension. That ability will produce hundreds, if not hundreds of thousands, of new startups and new inventions.
People will ask themselves, "What can I do with a thousand minds, not human minds, but a thousand minds, working on a problem?" just like [inaudible 0:11:51.6] during the Industrial Revolution people said, "What can I do with 250 horsepower, 250 horses? What can I do with that?" You can do all kinds of things with it that we couldn't do before. What can we do with 250 minds working on a problem day and night? That second Industrial Revolution is going to impact everything from sports, fashion, religion, entertainment, the military, education, business, the whole nine yards, and not tomorrow, but within a 20 or 30 year horizon.
However, tomorrow, even today, you can buy some AI from Google or Microsoft and start playing around with it, just like the early tinkerers and Edisons of the world were playing around with electricity. You'll discover some of the easy, low-hanging fruit that's going to be available and won't take that many hours to find, just as the early guys hacking electricity discovered so many things in their early days.
[0:12:58.0] MB: You touched on the idea that these artificial intelligences in many cases are going to think differently, to have almost artificial or alien forms of intelligence that are completely different from, and yet complementary to, human intelligence. Tell me more about that.
[0:13:14.6] KK: In general, we have no idea what intelligence is, in humans or otherwise. We don't even know what animal intelligence really is. We are ignorant about what we're trying to do. In fact, one of the byproducts of the AI revolution will be that artificial intelligence becomes a telescope, a microscope, that allows us to figure out what our own intelligence is. We have difficulty experimenting on our own minds, but by making thousands of different varieties and breaking them in so many ways, we'll find out what intelligence is.
Right now, today, we have no idea what it is, but we do know that it's not a single dimension; intelligence is a complicated process of many different types of thinking. Even if they run on a similar matrix of neurons, the organization, the way that the data is organized, is different. We will use those differences to engineer intelligences optimized for certain things that we want done. Maybe it's proving scientific theorems. Maybe it's listening to speech, maybe it's having conversations, maybe it's figuring out the trajectory of a rocket.
All these things can be optimized for very individual types of applications. There'll be ones that we'll consider more general purpose, but you can't optimize everything. That's one of the maxims of engineering: whatever system you have, you can't optimize every single dimension. There are always going to be tradeoffs. With some of these new kinds of minds we make, we may actually invent a whole new type of thinking that does not exist in nature, just as we did with flying. When we set out to invent artificial flying, we studied the animals: bats, and insects, and birds, and they all flapped their wings. All of the initial attempts at flying were flapping wings, which didn't work very well.
When we finally invented artificial flying, we invented a type of flying which does not exist in nature: a fixed wing with a propeller. We'll probably do the same thing here. We'll probably uncover some types of cognition that don't exist in the natural biological world, and we'll be able to do those in silicon. They will be different from our minds. All these varieties, this zoo, will vary tremendously. In many cases, the fact that they think differently is their chief asset, because in the connected world we're operating in, in this new economy, the chief asset for innovation and wealth in a nation is being able to think differently.
As more of us are connected, when we get to the point where we have five billion people connected all the time, 24 hours a day, thinking differently actually becomes difficult, because we have basically a group mind. Having artificial intelligences that think differently will help us maintain our ability to think differently while we're connected to everybody else. There's a double advantage to having AIs that think differently than humans.
[0:16:52.2] MB: That makes me think about one of the other forces that you talk about, which is this idea of filtering. In a world where we increasingly have so many things competing for our attention, how do we use technology to filter out and really focus on the most important things?
[0:17:11.4] KK: I think this is the right place to start. If you graph, or start to measure, the number of creative products that our society at large, the human species, is producing, it's mind-numbing. Even the number of new songs that are written and produced every year, the number of new books, not just in English but worldwide, the number of videos, the number of new products that are available for sale. It's overwhelming and way, way beyond what any one person could attend to.
Even if you had a filter, which is what we're talking about, some filter that would take away all the crap, which is most of the stuff, there's still way too much good stuff even to list and pay attention to, let alone try out or enjoy.
Through technology, we're creating this avalanche of stuff, so we need technological help to actually sort through it. We're going to have levels and levels of this, and there's kind of no escape. A lot of people feel maybe the solution is just to turn it all off, drop all these filters, go naked, be real. No. There are problems introduced by filters, but those problems can only really be offset by yet other levels of filtering, looking at things and helping us navigate through.
Recommendation engines and the algorithmic connections that filter for us are necessary for us to navigate through this in any sense at all. There are some problems, and computer scientists call one of them over-fitting. If you are really only seeing things that you know you already like, you get stuck on this local peak of optimization that prevents you from seeing really great stuff, because you're too fit to what you specified and you aren't broad enough to really see something wider and better.
We need all kinds of tricks, devices, additional technologies that can search wider, that can actually change our taste, that can actually help us grow, that can help us see when we're being blinded by our own likes. There are lots of levels. Of course, now we have the new challenge of fake news and alternative facts, where some of these filters have introduced polarization, introduced kind of a blindness.
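One standard trick in the family of filters that "search wider" is to deliberately mix a little exploration into the recommendation, the epsilon-greedy idea from bandit algorithms. The sketch below is a toy illustration only, not how any production recommender works; the function, item names, and preference scores are all invented for this example.

```python
import random

def recommend(predicted_likes, catalog, epsilon=0.2, rng=random):
    """Epsilon-greedy pick: mostly exploit the filter's best guess,
    but with probability `epsilon` surface a random catalog item,
    so the user isn't trapped on a local peak of their own likes."""
    if rng.random() < epsilon:
        return rng.choice(catalog)  # exploration: search wider
    return max(predicted_likes, key=predicted_likes.get)  # exploitation

# Invented preference scores (0..1) and a catalog broader than
# anything this hypothetical user has rated so far.
likes = {"jazz": 0.9, "indie rock": 0.7, "podcasts": 0.4}
catalog = list(likes) + ["gamelan", "field recordings", "opera"]

print(recommend(likes, catalog, epsilon=0.0))  # pure exploitation: jazz
```

Tuning `epsilon` is exactly the tradeoff described above: zero reproduces the over-fitted bubble, while a small positive value occasionally shows you something your history would never have surfaced.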
We, again, need to have an additional truth-signaling layer, where things can be assigned a kind of networked consensus on the probability of their being true, some kind of confidence level, like, "This fact here has a 95% chance of being reliable, of being true, based on these sources, based on the many other sources that we also trust, that have a high trust value, that trust it." You'd have a kind of citation index, like PageRank.
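The "citation index, like PageRank" idea can be sketched in a few lines: trust flows from each source to the sources it vouches for, so a claim backed by well-endorsed sources earns a higher confidence score. This is a hypothetical toy, not a description of any deployed system; the source names, damping factor, and iteration count are assumptions made for illustration.

```python
def trust_scores(endorsements, iterations=20, damping=0.85):
    """Propagate trust among sources, PageRank-style: a source endorsed
    by highly trusted sources becomes more trusted itself.

    endorsements maps each source to the sources it vouches for."""
    sources = set(endorsements) | {v for vs in endorsements.values() for v in vs}
    n = len(sources)
    score = {s: 1.0 / n for s in sources}
    for _ in range(iterations):
        new = {s: (1.0 - damping) / n for s in sources}
        for src in sources:
            vouched = endorsements.get(src, [])
            if vouched:
                share = damping * score[src] / len(vouched)
                for v in vouched:
                    new[v] += share
            else:
                # A source that endorses nobody spreads its weight evenly.
                for s in sources:
                    new[s] += damping * score[src] / n
        score = new
    return score

# Invented sources: everybody vouches for C, directly or indirectly.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
scores = trust_scores(web)
print(max(scores, key=scores.get))  # the most-trusted source: C
```

On this invented graph, C ends up most trusted because every other source vouches for it, directly or through sources that are themselves trusted, which is the "networked consensus" idea in miniature.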
These things are all additional levels that we're going to bring in, and it becomes even more complicated. It's never going to become simpler, and it may require an education to learn how to use it. You and I and all your listeners have spent four years, at least, learning how to read and write. It was not easy. We didn't just absorb it by being around books. Some of this stuff, learning how to use it and be aware of it, how to be literate in social media, filtering news, critical reading: these may be literacies that we actually have to teach people, and they may have to spend some years learning how to become good at it.
We shouldn’t necessarily expect that people can just sort of learn how to navigate through this stuff without any kind of disciplined practice. It’s not going to become easier. It’s going to become ever more complicated.
[0:21:53.5] MB: I think that's such a vital challenge. Part of the reason that we even do this podcast is to teach people, to help enlighten people and talk to people about seeking disconfirming evidence and things that are outside their comfort zone, and really looking at the data and the science to try to figure out what is actually true and what is real.
[0:22:16.3] KK: I'd consider that a literacy, and that kind of techno-literacy, maybe, is what I would call it. It's something that may have many dimensions, including the critical thinking that you're talking about. That may be something that we actually have to teach.
[0:22:31.8] MB: I think that's a great idea. It's fascinating, and that's one of the problems I wanted to ask you about: how do we solve this over-fitting, as you called it, where everyone lives in a self-reinforcing bubble of only the information that they want and only the information that they like?
[0:22:49.6] KK: One of the reasons I travel a lot is for that very reason. It forces me into otherness. It forces me to be confronted with different worldviews, different points of view. I allow myself no escape from it. It's visceral, it's full-body. There are certainly ways to travel where you're isolated; again, I'm going for the raw and the remote, and I'm allowing myself to have my mind changed. I recommend that highly, particularly for young people, as a means to begin that habit of trying to see the world from a different point of view, of allowing yourself to be challenged by views which may be the majority in the places that you wind up in.
That's, for me, a surefire way to do that, and I think it's so important for young people that I think we should, as a nation, subsidize it in the form of mandatory two-year national service, where you have your choice to serve in the military or the Peace Corps or some kind of service organization, without exceptions, including overseas somewhere. It would radically change the tenor of our country. Besides the fact that you're mixing with people that you didn't grow up with, you're also mixing with people who are far outside of your own prejudices.
[0:24:41.8] MB: You also talk in the book about questioning, and how, as many of these technological forces reshape society, one of the most important skill sets is going to be the ability to ask great questions.
[0:24:54.8] KK: Yeah, I think in 30 years, if you want an answer, you're going to ask a machine. Machines will have very, very good answers. They're getting ever smarter, ever more knowledgeable. They'll be more conversational. I don't remember how to spell things anymore; I just ask Google and it tells me the correct way to spell stuff. We're going to rely on machines for information, facts, and that kind of answer. It's going to be a long time before these things, AIs and robots, can ask good questions, because a good question requires a very broad common sense, education, and perspective, and that's what we actually want to breed and teach in schools: being able to ask good questions. In some sense, both science and innovation are fundamentally ways of asking questions, like "what if." They're explorations. They're not concerned about efficiency. They're very inefficient processes that sometimes entail wasting time and having failures, because you have dead ends. You have things that don't work.
That kind of investigative questioning requires the broadest sense of being, and it's the most productive in the long term, because that's where the new things come from. That's where empathy comes from. That's where our sense of vision derives from. Teaching how to do that is possible. Naturally, some people are better at it than others, but everybody can be taught to be a little bit better at it. I think that's one of the several key things, besides the techno-literacy we were just talking about, that you want to teach in schools, rather than how to regurgitate answers, which is the industrial model.
[0:26:56.4] MB: How do you think we can — Maybe as a simple starting point, how could somebody who’s listening to this show start to ask better questions?
[0:27:05.8] KK: That's a great question. There you go, you asked a good question. One thing I learned: I had kind of a rocky relationship with school. I was a real science and math nerd, but my method of operation was very simple. I sat up front, and I was the guy who asked all the stupid questions that people wanted to ask but were too embarrassed to. I would ask, because I have no embarrassment at all about asking questions. Basically, if I don't understand, I figure nobody else understands either. That's basically what I'm doing as an editor when I'm editing a piece for, say, Wired: if I don't understand it, the reader is not going to understand it.
One of the suggestions I'm pointing to is that there are no dumb questions, really. If you ask in sincerity, you're not being dumb. If you're really struggling to understand something, don't be afraid to ask the question, because likely, if you're having problems, so are other people. Then, really listen. That's the difference.
There are no real dumb questions. Secondly, a good question is one that generates not just an answer, but other good questions from it. I would just say, there’s a lateral thinking that’s very productive which is to approach the question, to approach the subject from a different angle. While you shouldn’t be afraid to ask the stupid question, you should also be trying to think about a question that hasn’t been asked before. That’s a little harder to do. That requires a little bit more work. There are several tricks.
I hung around Marvin Minsky, the great AI guru at MIT, for a long time. He had a remarkable way of asking questions. After observing him, I'm pretty sure that what it was is he believed that he was like a Martian, that he wasn't a human, that he was a robot or something. He was just not human. He would ask questions as if he were a machine that didn't know all the things that humans knew. That was refreshing and infuriating at the same time, but he got to ask really great questions because he was coming from this other angle.
Another person I know, Brian Eno, the rock star, does the same thing. He adopts some point of view where he's going to ask the question as if he's not just another Englishman. He's coming from an alien point of view, which enables him to bring a different insight to it. That would maybe be my second suggestion: don't be afraid of the obvious questions, but also try to ask a question as if you were standing in a different place than most people are standing.
[0:30:20.5] MB: Those are both great suggestions, and I agree. Along the same lines as the concept you called techno-literacy, I think the ability to ask great questions is another skill set that is really worthwhile to cultivate. I'm curious, out of the various forces you describe, what changes do you see coming down the pike that you're most scared of, and why?
[0:30:43.4] KK: That's a good question. I want to make it clear that I'm not a utopian, and I'm not a dystopian. I'm a protopian, meaning that I believe technology produces almost as many new problems as it solves, and the solutions to those new problems are additional new technologies, which will produce new problems in turn. What we get from that cycle is a tiny, minute improvement of a few percent per year that, compounded over centuries, becomes civilization and progress. Progress is real even though it's very, very slight. That's what I call protopia: progress that propels us forward.
There are tons of new things coming about, and tons of things to worry about if you want to worry. One of my concerns about these new technologies is what we're seeing a great example of today, which is cyber war, cyber conflict. Today, as we're speaking, there was a malware attack in Ukraine that kind of shut down the country. I think we're just seeing the beginning of this. Our society is so dependent on this stuff that it is very susceptible to disruption.
I think the entire internet falling down is really unlikely; it would be really hard to engineer even if you had that assignment. But there are going to be sicknesses, ailments, injuries, local injuries, all the time. My real fear is not those kinds of what we might call ordinary injuries, but a cyber-war conflict, state to state, because we don't have a consensus right now on what's acceptable in this new realm. We have lots of treaties and agreements about conventional warfare. It seems odd that we have rules for war, but that's better than no rules. We don't have them in any real operational way for cyber conflict, and when we introduce artificial intelligence to it, it's going to be amplified up even more.
My fear is that there'll be some really bad thing that happens before there's an agreement that, "No, we don't want that to happen." Right now, is it okay for cyber attacks to take down the banking system somewhere? Is it okay to attack hospitals' computers? The answer is that there's no agreement, because the major states involved in this, the U.S., Russia, China, maybe Israel, Iran, North Korea, none of these states will even acknowledge that they're doing it. There's deniability all around, and it's very hard to ascertain what's really going on.
Until there's some really widespread agreement that, "No, this is not permissible," I think that that which is not permissible will happen. It doesn't have to be that way, but I'm not sure what it will take, what would have to happen, before there'd be some agreement. I hope it happens before disaster strikes, but that's my current fear.
[0:34:19.6] MB: It seems like in many ways, as you said and as we've seen recently in the news, the number of cyber attacks and various incidents continues to escalate, or at least I hear about them more and more frequently. In many cases it seems there's a state actor tied to it in some way or another, and, as you said, there's often plausible deniability, or it's untraceable. I totally understand what you're saying.
[0:34:43.6] KK: Yeah. A technological generation ago, in the 90s, say, or even the 2000s, the U.S., and Western Europe to some extent, was the entire world in terms of the internet. Now, every country is just jam-packed. They've got ubiquitous smartphones and such. This is now a global neighborhood. It's a global platform. A lot of these things are happening in places where there's more politics involved and there may be less security. I think we're going to see a lot more of it; there's just a lot more of it in general.
It's sort of like the body has grown and now there are more ways to injure it. We will keep adding more and more layers to prevent the injury, but there will always be new ways to injure it or to exploit it. Again, I think, overall, the likelihood of the whole thing collapsing becomes less and less. Of course, whatever major damage does occur becomes more and more impactful. It wouldn't take a very big injury to really scare everybody.
Again, we go back to something like terrorism. The point of terror, of course, is not really to hurt, but to inflict terror to get your demands met. I think it'll become very easy to terrorize the electronic body, the body electric, even with what we'd call relatively minor injuries to the whole. You could really do a lot of damage just by the terror of it. That's a second level of worry: you don't need to do very much to actually have everybody go crazy.
[0:36:41.3] MB: What would you say to somebody listening to this, and I think this applies not only to this specific context but more broadly to the whole thesis of the book, who says or thinks to themselves, "Oh, yeah. All these technology changes, AI, robotics, everything else, sound cool, but I don't really think that's going to happen. These Silicon Valley futurists have all these fancy ideas." What would you say to somebody who thinks something like that?
[0:37:06.1] KK: Yeah. In five years, they will certainly be able to say, "Well, none of this is happening. Look at it, VR is still not present. There's still no AI."
One thing is that the conversation we're really having is about 30 years out, 20 to 30 years, because I don't think these things are necessarily going to happen that fast. The general tendency is to overestimate how soon they're going to happen and underestimate the lasting impact that they'll have.
I think, yeah, you should maybe be skeptical about the speed. In terms of the general direction, I don't know what to say, because people have been saying this all along. There was a huge denial, I guess I would call it, in the early days of the internet that it would ever become mainstream. That was the recurring criticism of our enthusiasm for the internet when it was still just typing, when it was just text.
It was like, "No. This is marginal. This is appealing to teenage boys in the basement. This is not for the masses. This is not a mass mainstream thing." There was nothing you could say that would convince anybody otherwise. To say that AI won't get big: well, maybe it won't get big fast, and you could be right about that for a long time. Then there's the other issue of the definition. Artificial intelligence tends to be defined as whatever machines can't do yet.
People say, "We don't have AI yet." If you had shown Alexa or Siri to somebody 50 years ago, everybody would absolutely agree that it was artificial intelligence. (I just woke up my Alexa.) Even 30 years from now, people will probably say, "We still don't have AI." That's because we keep redefining it as the thing we can't do yet.
They would be right in that sense. In 30 years, we’ll say, “Yeah, we still don’t have AI. It’s all a pipe dream.” Yet at the same time, cars will be driving themselves, and people will say, “That’s not really AI. That’s just machine learning. That’s just brute force. That’s just computers.”
Since I’m talking about the future, there’s really no settling the argument. The only thing I would say is: look, even if we never arrive there — even if there are never conscious AIs walking around in humanoid bodies, even if there isn’t some AI in your ear talking to you like the young woman in Her — even if we don’t have that, the general direction we’re headed is still toward it. That’s really what I’m talking about in the book: all things being equal, we’re going to move in that direction. Maybe we never arrive, but we’re going to move that way.
Knowing that we’re going in that direction is extremely helpful. You’ll be able to reap the benefits and minimize the harm if you understand that that’s the general direction we’re headed, even if we never arrive.
[0:40:43.9] MB: What would be one kind of simple piece of homework or starting point that you would give to somebody listening to this conversation as a way to maybe concretely implement some of the concepts we talked about?
[0:40:55.2] KK: I think one of the most enabling forces at work is artificial intelligence, and I think it’s going to impact everything we do in all aspects of our lives — food, fashion, sports, religion, the military, education, business. So I would say: a piece of homework is to buy some AI right now. Just log on to Google’s TensorFlow, or IBM, or Microsoft. Purchase some AI and start fooling around with it. It’s like 150 years ago, when the Industrial Revolution was coming on — if I were doing a podcast then and somebody asked, “For all the farmers out there, what would you suggest is the best way for them to prepare themselves for the Industrial Revolution?” I would say: get a battery and start fooling around with electricity. You’ll probably discover something amazing. You’ll educate yourself.
I think dabbling in these things and educating yourself means we can talk about them intelligently, so that as we come to regulate them, tame them, and domesticate them, we do it out of experience — so it’s not just something you’ve read about, but something you’ve actually spent time with.
My enthusiasm and optimism come in large part from the fact that I’ve been living online since 1981 or so, just experiencing what happens when people go digital. It’s not based so much on reading. It’s based on actual experience. The more you can do to experience these new technologies, the more it will inform all the other questions you might have about where to go next.
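[Editor's note: Kelly's homework — get some AI and start fooling around with it — can begin even smaller than a cloud platform like TensorFlow. The sketch below is our illustration, not Kelly's: it fits a straight line to data with plain gradient descent, the core loop that frameworks such as TensorFlow automate at scale. It uses no external libraries, and all names in it are our own invention.]

```python
# A first "fooling around with AI" exercise: learn y = 2x + 1 from examples
# using gradient descent, the basic mechanism behind modern machine learning.

def fit_line(points, lr=0.01, steps=2000):
    """Learn a weight w and bias b so that w*x + b approximates the data."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        # Step downhill: nudge the parameters against their gradients.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Training examples drawn from the target rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = fit_line(data)
```

After training, `w` and `b` land very close to 2 and 1 — the machine has "discovered" the rule from examples alone, which is the same experience, in miniature, that Kelly recommends having firsthand with the big AI platforms.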
[0:42:49.1] MB: Where can listeners find you and the book online?
[0:42:53.7] KK: My homepage is my initials: kk.org. A lot of my older books are available there for free. I posted the entire text of my first book on the web for free while it was still in copyright, because I owned the digital rights — at the time I made the contract, New York publishers didn’t think digital rights were at all valuable. They didn’t know what they were.
My second book is also up in full on my website, kk.org. Of course, there are now Kindle and paperback editions of The Inevitable. Weirdly, the paperback edition is cheaper than the Kindle edition. Don’t ask — I have no idea why. I sometimes tweet as kevin2kelly. In fact, at one point I tweeted virtually the entire book of The Inevitable — a sentence from every page. I didn’t ask permission; I just did it.
My most recent little thing comes from Cool Tools. We send out a one-page email newsletter every Sunday: six very brief recommendations of cool stuff — tips, places to go, things to eat, tools, whatever — each just a sentence or two. It’s called Recomendo, with one M: recomendo.com. You can sign up there.
[0:44:32.8] MB: Kevin, thank you so much for coming on the show and sharing all your incredible insights. It’s been an honor to have you on here.
[0:44:39.8] KK: It’s been a real delight. Thank you for your great questions. You’re obviously a human, and so I appreciate the support and enthusiasm for my work.
[0:44:48.2] MB: Thank you so much for listening to The Science of Success. Listeners like you are why we do this podcast. The emails and stories we receive from listeners around the globe bring us joy and fuel our mission to unleash human potential.
I love hearing from listeners. If you want to reach out, share your story, or just say hi, shoot me an email. My email is firstname.lastname@example.org. I’d love to hear from you and I read and respond to every listener email.
The greatest compliment you can give us is a referral to a friend, either live or online. If you’ve enjoyed this episode, please, leave us an awesome review and subscribe on iTunes, because that helps more and more people discover The Science of Success.
I get a ton of listeners asking, “Matt, how do you organize and remember all this incredible information?” Because of that, we’ve created an amazing free guide for all of our listeners, and you can get it by texting the word “smarter” to the number 44222 or by going to successpodcast.com and joining our email list.
If you want all this incredible information — links, transcripts, everything we just talked about, and much more — be sure to check out our show notes. Just go to successpodcast.com and hit the show notes button at the top.
Thanks again, and we’ll see you on the next episode of The Science of Success.