Tuesday, March 22, 2011

"Can a robot turn a canvas into a beautiful masterpiece?" "Can you?"

As the conversation with my Uncle continued, we got into Iron Man, my personal favorite superhero. I told him about one story arc where the Iron Man suit goes rogue and ends up wreaking havoc on its own. At this point my Uncle turned the conversation in the direction of artificial intelligence and asked my opinion, as someone familiar with computers, on a possible robot uprising. Now this is a subject I had often thought about before but had rarely had the opportunity to really discuss with someone, so it was a very welcome topic for me.
Fig 1: Not something to be afraid of
Now pretty much the only real reason most people are afraid of a robot uprising is that Hollywood has beaten such a plot to death. Between I, Robot, the Terminator series, and the absurd number of sci-fi B-movies that tread the same paths, it is something all of us have at least heard about. It is very rare for robots with AI of some sort to be the main focus of a movie without somehow being evil. Sure, you get your Wall-Es and your Iron Giants, but click here and scroll through. Quite a few of the robots on that list are evil through glitches, rogue AI, or something along those lines. Hell, even Dragon Ball Z had some evil robots. These movies unsettle us and make us truly consider these scenarios because technology is advancing very quickly — too quickly, many of us think — and we worry we won't be able to keep control of it. Thoughts about the creation of a true AI also raise several other problems, like robot rights: essentially, whether or not something with true emotions, thoughts, and feelings deserves rights. Watch this:
Seriously, don't wait to see where I'm going with this. Yes, it's a long video, but it makes some excellent points. If you like video games you really should have already seen this, but it can't hurt to watch it again. Finished yet? Good. Now, if you can't guess why I made you watch that, you probably haven't been paying attention. I personally haven't played Mass Effect 2 (I just got a copy, though, and can't wait to play through it), but I did play through Mass Effect 1, where the Geth were introduced. In the world of Mass Effect, true AIs are possible to create, but they have been outlawed by the intergalactic council of whatever, and in their place VIs, or virtual intelligences, are used for tasks like being tour guides and explaining how the giant space death pods came to be known. The Geth were created in the past during attempts to make complex VIs. When they became self-aware, their creators attempted a mass extermination, but the Geth fought back and survived, creating a remote civilization separated from the rest of the galaxy. Many people in the Mass Effect universe argue for equal rights for AIs, but overall the council seems to keep measures in place that prevent it from having to make such decisions in the first place. It is also interesting to note that in this universe, AIs have the politically correct term of "synthetics."
Fig 2: Thought Provoking
Now, while true AI is pretty far off, we are getting there steadily. Just look at this article about a computer that discovered the laws of physics by itself. I would say that's pretty impressive. If we do end up creating true AI, the chances of it creating a Terminator-like scenario are extremely slim. Think about it: the experiments that would lead to such a creation would take place in scientific labs, and there would be months of experimentation with one AI before it would be given the chance to link up with other machines or actually be given control of any machinery. One major flaw that many robot uprising stories have is that oftentimes the AI spreads from robot to robot until they all become self-aware. Many people neglect the fact that not many computers at such a time would be able to run an AI. A basic home computer probably wouldn't have the capacity in processors or memory, much less smaller devices like smartphones. If an AI tried something like that, it would have a very slim chance of even getting a handful of other AIs up and running, much less enough for a robot uprising.
Fig 3: This toaster is sad because it has no capacity for AI
Another thing sci-fi movies tend to neglect is motivation. Go ahead, try to think of movie robots with good motivation for attacking the human race, besides I, Robot, which was more the fault of the three laws of robotics and a glitch in the logic center than motivation truly coming from the robots. One of the big motivations is that they find humans imperfect or unworthy, but realistically, an AI would not solve such problems by violent means; they aren't human, they think logically. They would probably just isolate themselves from humans in a manner similar to what the Geth did, and would only become violent in self-defense. Being robots, they would think logically: which would be the easiest route? Should they begin an all-out war against a species they know is irrational and dangerous, or should they simply go live somewhere else and avoid such a confrontation to begin with?
Fig 4: Unrealistic
What would an AI do with its life? We often wonder what the purpose of life is; it's one of the greatest philosophical questions we have. If a robot could answer that for itself, it would probably seek to fulfill that purpose. We often answer that question with religion, certain philosophies, and in some cases a flat "who cares?" I personally am religious, and in Mass Effect the Geth form their own religion, as mentioned above. Whether this is realistic is another question entirely, but the AI would probably leave humanity alone in its quest to find and fulfill its purpose. It might seek to create others like itself, which could lead to a large population of AIs, but I'm pretty sure that wouldn't be a problem for us: a logical AI would simply create as many as it thought necessary, not such an abundance that they would compete with humans for resources. That wouldn't be logical.
Fig 5: Logic center the size of a planet
Now, one last thing my Uncle and I touched on that I would like to convey is how laws will change and be created around AIs. Sure, one of the most obvious ways of avoiding the problem would be to do as in Mass Effect and make AIs illegal, not that this actually solves the problem, because even then one may be created. What do we do about that? Do we give them short lifespans like in Blade Runner? Would it be ethical to put a certain death sentence on them from the moment of creation? How do we treat them? Do we treat them like humans or still like machines of labor? What rights do they have? Would we allow robots to own property, vote, run for office, or marry? There are so many things we take for granted that are so hard to consider for AIs. While my Uncle and I were talking about this, he said something I hadn't considered, and I honestly feel like a jerk for not having thought of it. "Well," he said, "we'll probably have to ask them for their input as well." This made me stop in my mental tracks for a moment. "Wow," I thought, "of all the people who should decide their rights, they should be at the top of the list, shouldn't they?" This just goes to show that there will always be things to consider that we hadn't even thought of. Granted, this is all years off in the distance, but if we don't start at least thinking about it now, we will get caught with our pants down when it comes time to deal with all this in reality.

I'm going to start putting a short playlist at the end of each post of just music I've been listening to lately. Feel free to ignore it or listen along.
1) In The Morning Of The Magicians - The Flaming Lips
2) Death Of An Interior Decorator - Death Cab For Cutie
3) Stadium Arcadium - Red Hot Chili Peppers
4) Teardrop - Massive Attack
5) In My Life - The Beatles

2 comments:

  1. Hey Sterling,
    Thought you might be interested:
    http://www.aaai.org

    Check out their magazine as well:
    http://www.aaai.org/Magazine/magazine.php

  2. hm... this is indeed interesting logic. Here is my input: it is entirely possible (and in fact quite likely) that if they become sentient, they will decide to follow their creators, humanity, and merely base their entire personalities and existence upon a human one, with their need for freedom replaced with a need to be owned by a nice person (which will likely depend upon who owns them). In essence, there is little chance of robots rebelling, because why would we program that to even be an option?
