Are Robots Taking Over the World?
What are the ethical dilemmas of AI and machine learning?
The 20th century saw some epic upheavals in the course of history…flight, world war (twice!), nuclear energy, and decolonization, just to name a few.
Each time an epic change occurred, the writers, artists, and thinkers of the world drew on a reserve of several thousand years of Western philosophy. In seeking to understand how new events played into the greater picture of history, they leveraged the thoughts of philosopher and theologian alike.
But toward the end of the twentieth century, a truly epic change came about, one that has carried on into the current century and advanced at lightning speed. The sheer rapidity of its progress calls into question the ability of those ancient, pondering, philosophical behemoths to address its ramifications.
No, we’re not talking about the internet. The internet is old news at this point.
Yes, the internet did create a disturbance in the force, forever changing the landscape of economics, research, and interpersonal relationships (did you know you can Facebook friend old high school teachers and see what they do on the weekend? Did you know that “friend” wasn’t always a verb…? Maybe befriend, but not friend…).
See what the internet has done to us? We can’t even get through the introduction of this article without opening a Reddit thread.
Let’s get back to the point…we’re not talking about the internet. We’re talking about AI.
Companies don’t just store your payment information on file; they make tailored recommendations based on your previous shopping habits. Marketing programs scan charts of social media activity and pop ads into the most effective places. Financial institutions leverage programs that take stock ticker stats and spin thousands of articles of investing “advice,” strengthening their site SEO in the war for portfolios. And these are only examples in commerce!
We could go on and on, but the point is the same, whether you’re talking about medicine, sports, gambling, or the military…AI is playing an increasingly larger role in our lives, and it’s getting smarter.
The AI that plays a role in our lives is convenient, if (at times) a little creepy. One could argue that we live in a world that Jules Verne, Isaac Asimov, and Gene Roddenberry couldn’t even have imagined.
Sure, if you watch old science fiction, there are things in the “future” like facial scanners that unlock doors based on the geometry of your features. There are even talking robots like C-3PO and Data.
But these movies and shows did not really envision the awesome (almost terrifying) degree to which artificial intelligence has come to play a role in our lives.
Is AI Taking Over?
Some talking heads even propose that there will be an AI apocalypse in the future. A robot takeover. An automaton coup d’état.
But don’t start getting worried just yet…the idea of a robot rebellion has come up several times throughout history. It’s a pretty old fear, actually, going at least as far back as Mary Shelley’s Frankenstein.
“It’s alive…it’s alive!”
In later years, of course, movies and books like I, Robot, Chappie, and 2001: A Space Odyssey explored the terrifying possibility of a takeover by a malicious AI.
But the AI takeover doesn’t need to be like a scene out of Terminator.
One polarizing and easily recognizable area that AI has already shaken up is the economy.
Back in the olden days, clerks, factory workers, and porters (bet you don’t even know what that last one is) were all essential cogs in the wheel of capitalism. Or communism. Or whatever economic system your country had.
But today, robots can do many of the menial, repetitive tasks that people do, and do them better. One could even get a robot to reproduce impressionist art.
One could even code a program to help the robot learn how to make its own impressionist art masterpieces. Sounds nutty, but it’s true.
Things like the color wheel, brushing techniques, and typical impressionist subject matter (you know, ballerinas or the French countryside) would all be input into the program…and voilà! You would have a machine that can make awesome art, without resorting to weird eccentricities like cutting off its ear and sending it to its robot paramour.
But that of course, calls into question the human element. Vincent Van Gogh brought a human touch to his work.
The ambiguity of that human element, and what makes him different from Adobe Acrobat, has been the fuel of philosophers for ages.
What if AI was One of Us?
…Just a slob like one of us
Just a stranger on the bus
Tryin’ to make his way home…
Thank you, Joan Osborne.
Who are we? What are we? What is our purpose?
Religion, philosophy, and even high people on Reddit have contributed to this discussion for thousands of years. And over the course of those thousands of years—despite our differences—humanity has, in the main, achieved a functional degree of harmony that has allowed us to do some pretty awesome stuff.
Like land on the moon, build skyscrapers, and write stirring works of fiction. Could a robot do what Shakespeare did?
“Romeo, Romeo, wherefore art thou Romeo…010011001010…”
It just doesn’t have the same ring.
Who knows what makes man-made things special? Is it the soul? Is it our emotions? Is it even our fallibility and imperfection?
The analytic tools and automated technology we’ve created can problem-solve and learn new behaviors. They can do what they’re told, and do it pretty well. But could AI develop the initiative to reach for its own goals? To what end?
To Go Boldly Where No Bot has Gone Before…
For some reason, human beings are inherently curious. From your toddler dumping Cheerios on the carpet just to see what happens, to the Spanish conquistadors sailing across the ocean to discover new worlds…people love adventure. And when they can’t get adventure, they read about it, watch it, or act it out.
Does AI like adventure? Is AI curious? Would AI be motivated to land on the Moon, or Mars?
Maybe, if it meant securing more resources. But what would those resources be for? Maintaining its own existence? Does AI have a will to live?
And to be fair, many times when man has explored new worlds, it’s been to gain access to resources as well. Is man, as the theory of mechanism holds, no more than a robot in the flesh?
But for some reason, I can’t quite picture C-3PO sitting down on a rock like Rodin’s “Thinker” and contemplating the meaning of its life. Of course, it would if you programmed it to. But the philosophizing process would probably be pretty short-lived.
My purpose is to analyze the spending habits of credit card holders and make purchasing recommendations, factoring in their average weekly gas mileage to avoid suggesting restaurants, events, or stores outside their normative travel radius, while also factoring in income and other demographic factors provided by Facebook.
Harder. Better. Faster.
In some ways, it certainly does seem that AI is taking over our lives…we’ll soon have automated cars taxiing passengers about. We take it for granted that all our behaviors are tracked and analyzed. We no longer see the need to remember “trivial” information or perform “menial” repetitive tasks…because our phones can do that for us.
AI does it better and faster. But does that mean AI will be motivated to take over the planet?
Scientists and programmers have proposed some frightening AI scenarios.
For instance, a computer with programming abilities could rework its own source code, and provide itself with an intelligence explosion. It would then be able to quickly surpass the human research community in the race for milestones in areas like biotech. With its newfound knowledge, this computer could create a race of self-replicating nanobots that unleash chemical weapons around the world.
But in the words of legendary urban poet Dr. Dre in “The Next Episode” (from the album 2001):
What would the motivation be for the computer to go through all that?
Would it compute the vast range of human activity throughout history, conclude that humans are a threat to the planet, and proceed to take steps to eradicate them? This, of course, assumes that the computer agrees with the principles of historicism, believing that historical trends are guided by unalterable principles. They’ll never be good, so screw ’em.
And why would the computer care about the planet in the first place?
Western philosophy and theology have provided a guiding structure for humanity over the last several millennia.
Unfortunately, that structure takes on many forms, and is pretty much different in every society. Thinkers from Buddhism to existentialism to humanism have tried to wade through the inherent pluralism of this world and create a universal code of conduct, but to no avail.
I like vegetarian pizza, and my wife does too, but she hates mushrooms. My son prefers cheese, and I have a friend that likes pepperoni. Get my point?
People don’t come off an assembly line. They’re all different.
The Apple Doesn’t Fall Far From the Tree
Getting Western philosophy to mediate the possibilities suggested by AI seems pretty difficult. Whose system of thought should we use to program the robots and computers of tomorrow with a sense of morality?
Would AI perhaps discover it on its own?
Who knows…maybe artificial intelligence will have a seminal “Garden of Eden” moment where it eats the proverbial fruit and its eyes are opened to the concept of good and bad.
Until then, they do what they’re told, although they do it pretty damn well, thank you very much.
Of course, that still leaves the chilling possibility of terrorists, Somali pirates, or even disgruntled mad scientists manipulating AI with their own set of warped values.
But then again, it takes one to know one, doesn’t it?
Humans are obsessed with the idea of an AI takeover because that’s what we know best…war, genocide, and perpetual conflict.
Has there ever been a time in human history when someone wasn’t trying to take over the world?
Yes…the ice age. Although actually, people were probably trying to take over the world then too.
And so the question of an AI takeover remains…lingering right alongside the question of how Western philosophy could help or hinder this possibility.