After thirty years working in different companies and startups, he retired in 2011. «I was finally able to dedicate myself to a hobby: my interest in how artificial intelligence can impact us in the medium and long term». It was Ray Kurzweil’s book The Age of Spiritual Machines that helped him understand that, due to the exponential growth in computer performance, it may be possible for a superintelligence to be created in the very near future.

«I doubt there is a CEO who doesn’t understand that AI is going to change everything in their industry in the next few years»

Do you think AI is the most disruptive of the new technologies?

Well, artificial intelligence is any software, or indeed hardware, which learns as it solves a problem. And there are many things that go into that: you need data, you need massive compute power, and you couldn’t have AI without those things. But do I think it’s the most important technology we have? Yes, I do. Intelligence is our most important feature. Not actually rational thinking, because we’re not terribly good at that, but our ability to communicate, our ability to jointly create myths and stories that we all follow. If we can create an entity which is more intelligent than us, or even almost as intelligent as us, that’s going to be a very powerful technology. So yes, I do think AI is our most powerful technology.

So it’s a driver behind all this technological revolution, or the ultimate goal of it, so to speak?

I don’t think AI is a goal in itself. It’s a tool. Unless and until it becomes conscious, in which case it becomes a person and can no longer be regarded as a tool, any more than a human can be regarded as a tool. It also plays a mutually assisting role with other technologies, like the Internet of Things. You can’t really have the Internet of Things without AI, but you need other things as well, like Virtual Reality and Augmented Reality, and lots of aspects of biotechnology. These things all play in together.

Do you really think we are approaching a runaway point at which some machine goes rogue because it becomes self-aware?

I do think that we are very likely to create a machine which is more intelligent than us. I see no reason at all to think that humans have achieved the ultimate possible level of intelligence. That seems very unlikely. So, as we continue to improve the capability of our intelligent machines, which at the moment are very narrow intelligences, one day they will overtake us.

They might be conscious, or they might not be conscious. But in a way that doesn’t really matter. Once we have a machine which is much more capable than us in regard to our most powerful feature, intelligence, then we are vulnerable to it. Now… will superintelligences dislike humans? Will they want to destroy us? We can’t say in advance. I think it’s unlikely, because I think we will take lots of care to make sure that they do like us very much, and that they understand us better than we understand ourselves. So that they want to help us, and are able to help us.

Are you talking about Asimov’s Three Laws or something to that effect?

No. Asimov’s three laws were only ever a way of creating stories. He was a brilliant novelist, and a very capable scientist, but he wasn’t trying to create a set of rules which machines should live by. The three laws don’t work, and he knew perfectly well they couldn’t work. They were a way of setting up jeopardy in stories.

«I see no reason at all to think that humans have achieved the ultimate possible level of intelligence»

The most promising avenue we have at the moment for making sure that really advanced, really capable AI is safe is an idea which is sometimes called “Coherent Extrapolated Volition”. What it means is: “Do not think that you know your goal. You have to keep checking back with humans to know your goal”. The thing is, a machine with a goal which has been poorly specified could be very dangerous indeed.

Imagine you have a machine, and you say: “Go and get me a cup of coffee, and be as quick about it as you can”. So the machine thinks: “Okay. The nearest cup of coffee is a mile in that direction, where there’s a café. The quickest way I can get there and back is to kill everybody between here and there, smash my way into the coffee shop, kill all the people who work there, just get the coffee and bring it back. Job done.”

So we have to be very careful about how we specify its goals and how we specify the rules by which it behaves. A very good precaution is to build into the machines an awareness that they do not ultimately know what their goal is, so that they have to keep checking back with humans regularly.
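
To make that idea concrete, here is a toy sketch in Python (my illustration, not an implementation of Coherent Extrapolated Volition): an agent that never assumes it fully knows its goal, and confirms each step of its plan with a human before acting. The function names and the plan itself are hypothetical.

```python
# Toy sketch: an agent that keeps "checking back with humans".
# Everything here is illustrative, not a real safety mechanism.

def propose_plan() -> list:
    # Hypothetical planner output for "get me a cup of coffee, quickly".
    return ["walk to the cafe", "queue politely", "buy coffee", "walk back"]

def human_approves(action: str) -> bool:
    # The crucial step: the agent defers to a human instead of
    # assuming its goal specification is complete.
    answer = input(f"May I '{action}'? [y/n] ")
    return answer.strip().lower() == "y"

def run_agent() -> None:
    for action in propose_plan():
        if human_approves(action):
            print(f"Executing: {action}")
        else:
            print(f"Vetoed: {action} (replanning needed)")
            break

if __name__ == "__main__":
    run_agent()
```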

What do you think is the role of deep learning and machine learning in all of this?

Well, Deep Learning is a subset of Machine Learning, which is essentially a well-established form of statistics. Deep Learning was applied to AI with great success for the first time in 2012, when a researcher called Geoff Hinton worked out how to get a technique called “backpropagation” working at scale. And that led to a “Big Bang” in AI.
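
For readers who want to see what backpropagation actually does, here is a minimal sketch: a tiny two-layer network learning the XOR function by propagating its output error backwards through the weights. The network size, learning rate and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# Minimal backpropagation sketch: a two-layer network learning XOR.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)               # hidden biases
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)               # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # converges towards [[0], [1], [1], [0]]
```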

Prior to 2012, AI had done many interesting things: it had beaten the world’s best chess player, for instance, but nobody had made much money out of it. Since 2012, the tech giants (Google, Facebook, Microsoft, Amazon and so on, plus the Chinese ones: Baidu, Alibaba and Tencent) have made a lot of money with it. And it is the version of AI which is currently very powerful. Some people think that will always be the case. Geoff Hinton seems to think that more breakthroughs are needed within Deep Learning, but that Deep Learning will get us all the way to superintelligence.

«Since 2012, the tech giants (Google, Facebook, Microsoft, Amazon and so on, plus the Chinese ones: Baidu, Alibaba and Tencent) have made a lot of money with AI»

A lot of other researchers think that we will have to re-integrate the main older form of AI, which is “Symbolic AI”, and probably some new types as well. But we don’t know, and frankly, it doesn’t really matter. There are lots of people working on how to improve AI and they will all contribute in different ways. Maybe Deep Learning will get us all the way there. Personally I suspect not; I suspect other breakthroughs will be needed. It’s certainly the most powerful form now. And, of course, we had the very interesting announcement yesterday from DeepMind that they have a system which can figure out very accurately how proteins fold. So it’s currently extremely effective, but maybe it won’t be the only very effective form of AI in, say, 10 years.

But what are the main applications of AI at the moment? The ones that are really being used.

So for most people, it’s all in their phones: Google Maps, Google Translate and Google Search are the three things that come to mind immediately. These things are miracles, absolute miracles, and we all use them all day. And the best natural language processing system, which is called GPT-3, developed by OpenAI (which was co-founded by Elon Musk), shows how far we’ve come in natural language processing. It can write quite impressive stories and essays. It doesn’t understand what it’s doing, and it can only do that one narrow thing. But it’s very powerful. And, I suppose, the other main one is self-driving cars, which are only in pilots at the moment. But it won’t be long before they are widespread. And we already have quite advanced driver-assist software in cars which you can buy today, and there’s a lot of AI in that. So there’s a lot of AI all over the place. There’s also a lot behind the scenes in large companies, solving problems and dealing with customer enquiries through chatbots and so on. But if you want to see AI in action, come to Spain and use Google Translate to talk to people here. It’s incredible.
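
As an aside on how accessible GPT-3 already is: OpenAI exposes it through a simple API. The sketch below is a hypothetical, minimal example of asking it for a story; the engine name, prompt and parameters are illustrative assumptions, and you would need your own API key.

```python
import openai  # the OpenAI Python client: pip install openai

# Illustrative only: the engine name and parameters are assumptions,
# and "YOUR_API_KEY" is a placeholder, not a real key.
openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="davinci",   # a GPT-3 engine
    prompt="Write a short story about a traveller in Spain:",
    max_tokens=100,
    temperature=0.7,
)
print(response.choices[0].text)
```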

Are companies aware of this potential, and are they using it to make money?

When I first started talking about AI to business audiences back in 2015/2016, they weren’t. But now they are. I doubt there is a CEO of a large company, or even a medium-sized company, who doesn’t understand that AI is going to change everything in their industry in the next few years. So everybody knows the power, everybody knows the importance, everybody knows the urgency. And most companies now are trying to use it. It’s hard, because it’s a complicated technology. You require massive computing power, you require massive amounts of data, and you have to clean your data if you have massive amounts of it. You also need very smart people to build and manage the systems, so it’s hard to implement.

One thing I think is very interesting is that you very rarely see stories in the press with a company saying: “We have deployed this AI system and it has made us XYZ million or saved us XYZ million euros”. And I think that’s because we are still in the early days of implementation in industry, beyond the tech giants.

Sharing of wealth

In the society that I envision, which I call “Fully Automated Luxury Capitalism”, rich people will stay rich and will be able to get richer. There will be a market economy. All of us will have access to all the goods and services that we need for a very good standard of living. Let’s say a nice middle-class American standard. All of us will have access to those products and services, just because we’re citizens.

But there are going to be some things which are always going to be scarce: original artwork, original vintage cars and planes, the best houses in the world, the villa on a white sandy beach with palm trees around it. Those things will always be scarce. And rich people can buy those things with the extra money that they have. So I don’t envisage a world where everybody gets given exactly the same income and where we are all undifferentiated. I think that would probably be quite an unpleasant world. I much prefer a world with more variety.

Should there be any governmental control on the development or usage of AI in general?

Well, the first thing that we need is for politicians and civil servants to understand it. And most of them don’t, and are frankly not paying very much attention. Partly because they are preoccupied with rather important things like Covid and rather foolish things like fighting amongst themselves and various other distractions. So we need our leaders to understand what AI is, because there are all sorts of ways in which it probably will need to be regulated.

The technology industry is lightly regulated at the moment compared to, say, the finance industry or the pharmaceutical industry. It is as powerful as they are, and arguably it should be regulated in the ways that they are. But that’s not really possible while our leaders don’t understand what it is. And for our leaders to understand what it is, we probably need all of us to understand what it is, because our leaders tend to reflect what we want. For good and for bad.

So if most people understand what AI is and understand that there are certain types of regulations that are necessary, then our leaders will be able to do it and they will do it.

What are the biggest challenges when implementing AI in a company?

Well, you can hire very smart people to develop your own bespoke algorithms, and very large companies already do that. But you’re typically going to have to use tools which have been ready-made by Google, Microsoft, Amazon and so on, and that’s fine. The biggest hurdle seems to be data.

Large companies have lots and lots of data. But their data sits in data lakes which are incompatible with each other, very often nobody even knows about all of them, and the data is not clean, even within each lake. So cleaning the data and combining it seems to be the biggest hurdle.
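
As a toy illustration of that hurdle, here is a minimal sketch that cleans and combines two hypothetical “data lakes” holding the same customers under incompatible schemas. The column names and records are invented for the example.

```python
import pandas as pd

# Two hypothetical data lakes describing the same customers,
# with incompatible column names and a duplicate record.
sales = pd.DataFrame({"Customer ID": [1, 2, 2], "spend": [100, 250, 250]})
support = pd.DataFrame({"cust_id": [1, 2], "tickets": [3, 0]})

# Cleaning: normalise the key column and drop exact duplicates.
sales = sales.rename(columns={"Customer ID": "cust_id"}).drop_duplicates()

# Combining: join the two lakes on the now-shared key.
combined = sales.merge(support, on="cust_id", how="outer")
print(combined)
```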

«The first thing that we need is for politicians and civil servants to understand it. And most of them don’t, and are frankly not paying very much attention»

And then, of course, there’s getting hold of enough talented people who can develop the algorithms and apply them. And that’s expensive. All of that is only going to be possible if senior management is really committed to staying with the project for a long time.

And if that commitment is in place, then there’s a good chance that they will choose a suitable project. Because the other mistake people make is to choose a project which is either trivial, so you don’t get a payback because you haven’t solved a serious problem, or impossible to solve with AI at the moment. So you have to find a suitable project to address as well.

Could you modify AI on a grand scale to give the answers you want it to give?

The thing that you have to keep bearing in mind is that AI is a tool. Yes, you could, in some situations perhaps, build in malign code. So here’s a scenario: you could build in some malign code which would allow the software writer to take control of a system in a way which the user or the customer didn’t want to happen.

That’s possible. But that sort of code would be discovered pretty quickly, because lots and lots of people are using the publicly available tools, like Google’s TensorFlow, PyTorch from Facebook or Microsoft Azure, every day, investigating them, crawling all over them. It’s going to be very, very hard for any of those companies to hide some sort of malignant software in them, even viruses. And it’s highly unlikely that they would want to, because if they did, their companies would cease to exist overnight.

Obviously, you know, there’s lots of debate about whether China’s tech giants will do that, or have done that. And the Americans are very keen that none of us use Huawei, because they fear that there is malign software in there. But it’s not really about the AI; it’s more about capture programs which are buried in large systems which aren’t AI. Just ordinary business software.

What do you think are the biggest challenges when implementing AI in a reasonable fashion in companies?

If companies deploy it successfully, I think there’s not really going to be a problem. If they deploy it unsuccessfully, they’re going to waste a lot of money, so that’s probably the biggest problem. There will undoubtedly be a lot of money wasted but, you know, it’s early days in the deployment of AI beyond the tech giants. And it’s inevitable. It’s a time of experimentation.

When electricity first became available for use by industry, it took a very long time to figure out how to use it. There’s a very interesting story about this. Electricity was seen as a replacement for steam engines. With steam engines you need one really big engine, and you deliver the power through the factory via a long rotating beam, and the other machines run off that. People thought for a long time that this was the way to use electricity, because this was what they had always done.

«It’s going to take a long time and a lot of money for people to figure out the best way to use AI. But that is the only way to do it. Trial and error is the only way that innovation works»

It turns out that it’s actually much better to have a small electric motor next to each machine you want to power, rather than one big centralized electric motor which powers all the other machines. It took a long time to realize that. Similar sorts of things will happen with AI. It’s going to take a long time for people to figure out the best way to use it, and a lot of money will be wasted along the way. But that is the only way to do it. Trial and error is the only way that innovation works.

What about hackers?

Malign actors, or people with ill will, have always been with us. And I don’t suppose they’re going to disappear. I take comfort in the fact that our most important pieces of technology, things like nuclear weapons, have so far not been hacked. Of course they could be tomorrow. Somebody might be doing it right now.

For example, it is very rare for airplanes to be flown into buildings. It happens, but very rarely. And, in fact, it has never happened that a nuclear power plant has blown up as a result of hacker activity. So our most important systems are very well defended against hackers. And a very powerful AI, a superintelligence, is going to be protected more carefully than anything, because it will be the most powerful tool we have.

Let’s talk a bit about the scenarios you paint in your book. What will happen when most of the work is done by AIs and/or robots? What will we do?

I think that AI is already democratized. Most people in developed economies have a smartphone, and there’s a lot of AI in them. And as to what will happen if and when machines can do most of the things that we do for money: as you know, that’s what I spend most of my time thinking and writing about. And it seems to me that that will happen, but not for quite a long time.

There are two effects that happen when automation takes place. One is the substitution effect: when a machine is able to do a job which was previously done by a human, that human loses their job, because the machine can do it cheaper, better and faster. But there’s also a compensation effect: the automation of that process makes it cheaper, and it creates wealth. Wealth creates demand, and demand creates jobs.

«There are two effects that happen when automation takes place: one is the substitution effect, but there’s also a compensation effect»

So as long as there are some tasks, some jobs, which humans can do and machines can’t, there will be jobs for humans. And we are quite a long way off machines which can do most of the things that we can do for money. But in 30 years’ time, if something like Moore’s Law (the exponential improvement in computers) continues, the machines we have will be a million times more capable than the ones we have now.
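
The “million times” figure is just the arithmetic of sustained doubling. Assuming capability doubles every 18 months (a Moore’s-Law-style assumption, not a guarantee), 30 years gives 20 doublings:

```python
# Rough arithmetic behind "a million times more capable in 30 years",
# assuming one doubling every 18 months.
years = 30
doublings = years / 1.5           # 20 doublings
print(f"{2 ** doublings:,.0f}x")  # 1,048,576x -- roughly a million
```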

At that point, I would not fancy competing in the job market with those machines. I think those machines will be able to do most of the things that we do for money, and that could be a really good thing. Because a world in which machines do all the jobs could be a world in which everything that we need for a very good standard of living is made by the machines, virtually for free. There will be some residual costs, because raw materials always cost something, and because there will be some things where humans can’t be replaced. But we can have the economy of abundance. And in an economy of abundance, humans can do whatever we want to do.

«We can explore and learn and play, and have fun, and learn languages and socialize. We can have a second Renaissance»

We can explore and learn and play, and have fun, and learn languages and socialize. We can have a second Renaissance. And I’m very excited about that. I’d like that to happen in my lifetime if possible. But of course, the transition from here to there is the tricky bit. Because you need a very efficient, just and acceptable way of distributing income to everybody.

There will be all this wealth being created by machines. Who will own those machines? Probably rich people and governments. And how will that wealth be shared out to everybody? That is a good question, and we haven’t yet sorted out the answer. I think when we get to abundance, the cost of everything will be very low, and therefore you could tax rich people and governments without it being onerous to them.

So, in theory, it should be possible for everybody to have a great standard of living without lots of social disruption. But getting from here to there, that is a challenging task. What worries me is that most people are not thinking about this at all (that’s probably 99% of people), and of the remaining 1%, most think it won’t happen, or at least that it won’t happen on a timeline that is worth thinking about.

«I think we need to be thinking about it and have some ideas for how to handle it. Now»

But in 10 years’ time, when self-driving vehicles are starting to become common on our streets, and when you can start to have a really quite interesting conversation with Alexa, Siri, Microsoft Cortana or Google Assistant, things will change. When we all sense these machines becoming more and more powerful, faster and faster (because it will accelerate), it’s very likely that people will think: “Oh, technological unemployment is coming. And it’s coming in the not too distant future. What’s the plan?”

And they will turn to their leaders and say: “What’s the plan?” And the leaders will go: “We don’t know!” That’s dangerous. That could lead to a panic, and we need to avoid that. So although technological unemployment probably is quite a long way in the future, I think we need to be thinking about it and have some ideas for how to handle it. Now.