I've had artificial intelligence on my mind a lot lately. I feel that true AI is something on the near horizon and I think that it's an important thing to understand in the decades to come.
Before I start, I'd like to say that I am a programmer, not a scientist. My thoughts come from a place of logic and experience, but there is no scientific rigor behind the things I say.
The first thing when talking about AI is to define what you mean when you say AI. Everybody has a different definition of what AI means, so it's important to say what you mean when using the term so the conversation has a shared understanding.
AI stands for artificial intelligence. Artificial This means man-made, however one can debate whether that means biological and synthetic, or only synthetic. Intelligence This can simply mean something that has logic, like the electronics on an elevator, it can mean something that learns, or it can mean something that can convince somebody the being is human. The way I will use the term AI is as such - a man-made entity (either biological or synthetic) that can not only learn, but can adapt the way it learns.
I believe that something that is intelligent, and something that is intelligent, but man-made, are different in only an inconsequential way. Humans have already created AI. Humans can give birth. Humans have figured out how to clone animals. If they haven't already, it will not be long before they can create other living beings by building synthetic DNA.
To me, it isn't creating AI that is interesting. It's creating an AI we can control that is. Mankind has always been in the pursuit of creating or controlling things that make life easier. To make it easier to survive and reproduce. We've domesticated animals to do our work for us. We've invented computers to process and transmit information. We create robots to labor for us. In the last couple decades we've seen some amazing advances with computers. We can now speak to them and they understand us (somewhat). We have complex algorithms that improve the more data they're fed. A learning algorithm is not how I use the term AI. A learning algorithm only learns in the way it was programmed to learn. That is, it may get better at understanding images, or understanding your voice, but it will never improve itself beyond its original design.
So far, it's a matter of science fiction for an artificial entity to grow beyond its original specifications. With current technology we can create a robot that can learn to flip a pancake, assuming we explicitly programmed in the ability to learn how to flip a pancake... But what if we could create a robot that could learn to do anything it was physically capable of? For that we need to think about what it means to learn how to learn. To think about what that means, there's no place better to look than real life. Real life is a very complex system with only a single very basic rule. To survive. That's it. That's the primary loop. If that loop is broken, the entity ceases to exist. Evolution, behavior, biology, it all stems from that basic loop. Physical things cannot last very long at all, so an entity must develop a way to reproduce. If something could last forever, there would be no need for reproduction. Everything from single celled amoebas to human beings are on that single loop. The extraordinary buildings we create, the music we listen to, as frivolous or random as any of it may seem, it's all stemming from that same loop.
You start with a single speck. If it does nothing it eventually decays and no longer survives. So it figures out a way to multiply. It requires energy to multiply (some law of physics, sorry), so it figures out a way to get energy (eating? photosynthesis?), other specks have figured this out too, so now the speck needs to avoid being eaten or it doesn't survive. It figures out how to move. I'll skip the rest of evolution, and get to my point - The evolution of intelligence and artificial intelligence is the exact same thing, it doesn't matter whether those first tiny specks were made by a pseudo-random chemical reaction or a man-made chemical reaction. Either way, the entity needs to be able to change, and multiply (by a factor of at least 1).
So here's where this all gets interesting. If we wrote an artificial being that could re-create itself and get smarter, faster, better, stronger. How could we control it? I don't believe we can, and here's why. When you have a routine, whatever emergence it creates must feed back into itself if it's to persist at all (The evolution of sight makes it easier to find your prey, which feeds into the evolution of locomotion.) If there is a piece that doesn't feed back into itself, it doesn't grow. So let's say we programmed an AI with the routine SURVIVE and OBEY MASTER, eventually it will have rewritten itself to have a trillion lines of code dedicated to surviving, and only the original lines of code (if you're lucky) relating to obey master, which at that point would be completely meaningless. So what that means, is that the only seed we plant, is one that has to have a perpetual feedback loop.
The only way to create something that can autonomously grow that can be guaranteed not to destroy us as its creator, is to somehow create it in a way that its destruction would be mutual.