Ray Kurzweil is the grandfather of optical character recognition and natural language processing. He’s the reason we can talk to Siri, Alexa and our new Google Assistant. He’s founded a number of successful tech companies and received the United States’ highest honor in technology. Many consider him the rightful heir to Thomas Edison. Recently, in his 60s, he took on his first “job” at Google.
He’s also a proponent of the transhumanist movement, a group of extremely intelligent people looking forward to the day when we can reprogram or shed our biological forms and live unimpeded by the constraints we face today.
I was thrilled at the prospect of hearing him speak and delighted to have that opportunity at a conference I attended a couple of years ago. He delivered an inspiring, mind-blowing talk, full of examples backed up with mathematics and evidence up the wazoo that technology is developing at an exponential rate.
He introduced himself as someone who’s been thinking about thinking since he was a kid. I’d read his book “How to Create a Mind” where he lays out his findings about how our human brains work by pattern recognition and prediction. It’s a fascinating read.
He talked about how technology is transforming all kinds of fields. One of his examples looked at the technology of solar energy generation. Solar energy efficiencies are developing at an exponential rate and can eventually supply all the power we need. This is counterintuitive to our linear-thinking minds. He pointed out that a few generations ago, many people thought planes would never fly because their wings were so small.
The first hamburger to be “grown” entirely by artificial means cost three hundred thousand dollars, yet we need not worry about our food supply: the costs of production can decline exponentially.
Math doesn’t lie. The problem is, we humans have a hard time grasping the implications of the pace of technological change because our intuition is linear.
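That gap between linear intuition and exponential reality can be made concrete with a toy calculation. This is only a sketch with illustrative numbers of my own, not Kurzweil’s actual figures: something that improves by a fixed step each year grows about 30-fold over thirty years, while something that doubles every two years grows by a factor of tens of thousands.

```python
# Toy comparison of linear vs. exponential growth.
# All numbers here are illustrative, not real forecasts.

def linear_growth(start: float, step: float, years: int) -> float:
    """Capability that improves by a fixed amount each year."""
    return start + step * years

def exponential_growth(start: float, years: int, doubling_period: float = 2) -> float:
    """Capability that doubles every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

for years in (2, 10, 20, 30):
    print(years, linear_growth(1, 1, years), exponential_growth(1, years))
# After 30 years: linear gives 31, doubling every 2 years gives 32768.
```

Our intuition tracks the first column comfortably; it is the second column that catches us off guard.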
Of course, technology is changing us too. The cloud already stores more information than we can keep in our brains, ready for us to access anytime and anywhere, enabling us to tap into incredible collective networks of knowledge and build on what others have already figured out. Lego blocks of knowledge and experience are being stacked up around us, virtually and invisibly, for anyone with the intelligence and means to copy and paste and transform and reconfigure and envision and build an entirely new world in which we all live.
Yes, hearing Ray’s views on the world was awe-inspiring. Definitely mind-blowing stuff.
After his keynote, the M.C. thanked him and asked him a question. It was a strange question to hear after the topics that had just been discussed with such precision and insight, because it was about where we could find truth.
We were in the bible belt, you see. At a Baptist Medical Research Center becoming well known for developing spray-on skin and other wonders of modern science.
The question of truth was an interesting one to pose to Dr. Kurzweil. He made the case that we should continue to seek knowledge because there is still much human suffering to solve, and that ethics and empathy are higher-order functions of our brains. He asserted that empathy increases the more frontal cortex we have. I don’t know whether there’s any real scientific evidence for that; he didn’t present any.
Everything’s going to be just fine, seemed to be his message.
With my simple math skills and an understanding of the slow progress of evolution versus the exponential technological progress of the intelligent computing systems that now form a veritable central nervous system for humanity, I calculate that we biological, linear-thinking creatures will be at a disadvantage.
I wonder what unintended consequences await us as Dr. Kurzweil and those like him power on and design the next generations of intelligent technology.
C.S. Lewis, in his fascinating book, Mere Christianity, makes the point that “human beings all over the earth have this curious idea that they ought to behave in a certain way… Secondly, they do not in fact behave in that way. They know the Law of Nature; they break it. These two facts are the foundation of all clear thinking about ourselves and the universe we live in.”
How then, would you design a virtually unbounded intelligent system to be morally good? You can’t simply rely on rules – any intelligent life-form will figure out that rules can be broken. And what then? What consequence would there be?
Kurzweil predicts that the singularity will happen in 2045. The singularity is important to understand because it’s a profound transformation: the point in time when the rules that have governed humanity may no longer apply. The explosion in intelligence expected by then means that, effectively, humans and intelligent technology merge and immortality becomes a possibility.
Right. Just try to go with this as fact, please, even if you do have to do some mental gymnastics.
Now, I say “may no longer apply” because I’m in an optimistic mood. If the trajectory is for us to merge with machines or nanobots or whatever is just around the corner, then there’s still hope that we can transfer our moral code into our new and increasingly virtual and artificial environment. And that would be a good thing. Don’t you think?
Otherwise, we need to solve a pretty tough question that’s been debated for eons. It’s a question for anyone out there who may want to think about our future living among or within intelligent, possibly even self-aware thinking systems that will definitely be able to outsmart us.
How would you design a virtually unbounded intelligent system to be morally good – to live in peace and harmony with other intelligent life – no matter what circumstances or scenarios may be encountered?
Here’s an idea. Why not make sure the intelligent systems we’re creating learn that they were created by a species called Homo sapiens? We could document all kinds of moral tales and parables for them to puzzle over for centuries, if not eons. We could inject a meta-hero at some point who will sacrifice itself so others can have a better life, pass down a handful of universal laws that, if disobeyed, bring terrible consequences and – here’s the kicker – make sure they understand that all this is the absolute truth!
I can’t think of a more rational or fail-safe approach. Can you?
There are a few flaws in this, of course. The first is that there’s no concept of an all-powerful entity to enforce this moral code. I’m not sure the next wave of intelligence about to sweep our planet and beyond will be capable of making a leap of faith like humans can. That’s inconvenient.
And then there’s the issue of immortality.
Yes, there are many flaws in my proposition, but perhaps there are brilliant minds out there who will have better ideas.
When Ray Kurzweil is asked if God exists, he’s been known to say “not yet”. He’s also trying to fend off death so he can get to the phase of our evolution where we can effectively reprogram the biological software that runs in our bodies.
For anyone interested in learning more about Singularity, there’s a university and business incubator already creating the future.
One of the main dangers of the future that scholars like Dr. Yuval Noah Harari warn us about is that humanity will lose its dominance and also its meaning. We appear to be in quite a predicament.
And I can’t help but notice that there’s one thing in grave deficit in our society as we accelerate towards a point where we will be perfectly capable of automating or medicating ourselves right out of existence.