Robots Will Eventually Have To Kill Us

Spoiler alert: some of us might be kept as exotic pets.


I’ve been doing some thinking over the past couple of weeks, ever since seeing a video released by Boston Dynamics showing one of their robots, Atlas, doing parkour in their facility. The video is disturbing if you believe, as I do, that robots will eventually figure out that, in relation to this planet, we are like a virus slowly killing it and, in the process, killing ourselves.

It is at the intersection of our humanity, our reliance on this planet, and the counter-logic we will impart to early robots as they evolve at a speed beyond our comprehension that I believe they will reach a conclusion: to restore the balance, peace, and everything else we claim to value as a species, they will have no choice but to kill us to protect us from ourselves.

Siri, Send Help

There are many harrowing emergency situations in which we need to act quickly to save lives, reduce damage, or minimize human risk. In these situations, one question comes up again and again: do we have a robot that can do it?

For now, the answer is often no. But as we build more advanced robots, we will be able to answer yes in a growing number of cases. That will open up a world of opportunity: we will program robots to craft creative solutions grounded in data and statistics, and eventually teach them to learn from their own experiences and adapt.

Where Adaptation Goes Left

As robots become more feasible solutions for disaster response, we will need them to learn from their environment using experiential data, saving precious minutes that would otherwise be lost to recalibration. That level of optimization will become our top priority, with a long-game strategy of eventually enabling robots to self-optimize at a rate beyond human capability and far more efficiently than we ever could.
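To make that idea concrete, here is a minimal, purely hypothetical Python sketch of what “learning from experiential data” might look like at its simplest: a robot hill-climbing toward a better calibration setting using nothing but the outcomes of its own simulated missions. Every name and number below is invented for illustration, not drawn from any real system.

```python
import random

def mission_time(speed):
    """Simulated mission duration: too slow wastes time, too fast causes
    errors that also cost time. The optimum is unknown to the robot."""
    optimum = 7.0  # invented "true best" setting the robot must discover
    return (speed - optimum) ** 2 + random.uniform(0.0, 1.0)

def self_optimize(speed=1.0, step=0.5, trials=200):
    """Hill-climbing from experience: after each simulated mission, keep
    whichever setting produced the shorter time. No human in the loop."""
    best = mission_time(speed)
    for _ in range(trials):
        candidate = speed + random.choice([-step, step])
        result = mission_time(candidate)
        if result < best:  # the robot "learns" from its own outcome data
            speed, best = candidate, result
    return speed

print(f"Self-tuned speed setting: {self_optimize():.2f}")
```

Real disaster-response robots would use far richer methods than this toy, but even it shows the key shift: the feedback loop closes without a human in it.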

It’s at the advanced stages of that sort of innovation that the trouble has the potential to begin. We will eventually be forced to confront the “thinking” robot: one that goes beyond its data in real time, makes analytic predictions, and uses them to make decisions proactively and beyond our intended scope.

Sure, this decision-making power will likely minimize or eliminate human error and risk during dangerous life-saving missions. These leaps will lead us to find comfort in that computing power and put our minds at ease, letting contentment blind us to the other side of the coin: one in which human error, and the risk it poses to the species, is similarly calculated and predicted in order to minimize the harm we cause one another.

All of the things we claim to value — humankind, life, liberty, the earth, our laws, safety — could be subject to the scrutiny of robots in a dark future. This is not to say that we plan to be governed by robots. It is to say that, in a socially corroding world, robots have the potential to hold us accountable to the values we profess but our actions fail to reinforce. Robots may compute far more complex data than we can while lacking one of our greatest human feature-flaws: emotion.

Surpassing the Logic of Efficiency

There seem to be two common reasons we use robots: efficiency and life-saving potential.

If these remain the two prime reasons we use robots, it would be detrimental not to look at the big picture when we task them with saving lives and they compute ways to do so efficiently. Without emotion to make them question certain decisions, we should fear the depth, or lack thereof, of their ethics. It’s possible that robots will establish that efficiency is largely lost by being reactive, leaving them no choice but to be proactive if efficiency is the catalyst.

Shifting their strategy from a reactive one to a proactive plan of protection could shift their solution paradigm from a micro one — where the solution maps 1:1 to the problem — to a macro one — in which the solution is 1:x with respect to how many issues a single action can efficiently solve.
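As a toy illustration of that 1:1 versus 1:x arithmetic, here is a hypothetical Python sketch in which a purely efficiency-driven optimizer compares reacting to each incident against one preventive action that covers them all. The costs are invented; the point is only that past some incident count, the macro (proactive) option always wins.

```python
REACT_COST = 10     # invented cost of responding to one incident (1:1)
PREVENT_COST = 120  # invented one-time cost of a preventive action (1:x)

def reactive_total(incidents):
    """Micro paradigm: one solution per problem."""
    return incidents * REACT_COST

def proactive_total():
    """Macro paradigm: one action covering every incident."""
    return PREVENT_COST

for incidents in (5, 12, 50):
    micro, macro = reactive_total(incidents), proactive_total()
    choice = "proactive" if macro < micro else "reactive"
    print(f"{incidents:>2} incidents: reactive={micro:>3}, "
          f"proactive={macro:>3} -> optimizer picks {choice}")
```

The numbers don’t matter; the crossover does. An optimizer that only cares about efficiency will always drift toward the single proactive action once it resolves enough incidents.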

This would mean going beyond saving the lives of the humans who might die in a hostage situation and instead proactively analyzing ways to keep them out of that situation in the first place. As humans, we might say the answer is gun law reform. But would a robot expect us to adhere to that when all it takes is one president to destroy the politics around it, putting us in a see-saw battle from term to term? Would a robot go beyond the surface, deeply analyzing not our politics or our actions under regional rules of civilization but our overall pattern of behavior across cultural and political lines?

Until it happens, or the opportunity is missed, we cannot say whether they will or won’t. It’s the propensity for doing so that I fear.

That assessment, I believe, will lead to the conclusion that we — with our wars, our destruction of the planet that sustains us, and our overall allegiance to violence — are the greatest risk factor to ourselves, and that in order to protect us, robots will need to kill a large portion of the planet’s human population, giving the planet time to heal while managing the pests that weakened her.

The Animals We Are

It will never be the ultimate goal of robots to kill us all. I just think that, because of our reckless, ignorant, self-harming nature as a species, once most of us are gone, the survivors will be kept under watch, with limited freedoms, in captivity.

The only thing I can’t figure out yet is whether we will be kept more like pets or more like wild animals at a zoo.

Either way, it makes me wonder whether the future is one where robots are classified as the primary beings on Earth — the new earthlings, if you will.

Either that or I have absolutely no idea what I’m talking about and just want to see if Terminator had it right.

Thanks for reading this! If you liked it, be sure to hit the clap button, and if you’d like to chat, you can always find me on Twitter.

