Break in case of emergency

Nathan Allen
3 min read · Jan 14, 2019


In the event that I’m wrong about the A.I. apocalypse, humans could always become anomaly machines.

Last week, I spoke at a university and during the Q&A I got the question “how do you know you’re dealing with a robot?”

I get this question all the time, so it's amazing that I've never really thought about it. So I gave the response I usually give.

“Tell it you like the taste of platypus.”

I have no idea why I say that, but after I get the usual look that walks the line between confused and concerned for my mental health, I follow up with "be random." The robot won't respond with the awkwardness a human would but will rather respond with the familiar "I don't understand what you're saying. Let's try again. Say 'financial assault' if you'd like our bank to rape you with fees. Say 'representative' if you do not want to speak with a representative. Say 'I can't believe I've been talking to a robot for 45 minutes' if you'd like us to raise your interest rates to an unholy level and then cancel your account without warning."

Some portion of my discussion on robot-building is always about how robots are environment-specific (and, of course, task/problem-specific). In order to build anything that works, you must fully scope the environment — context, known variables, unknown variables, etc. This is a much broader question than simply what problem is my robot going to solve?

So you have a set of problems (and their variables) and the environment (and its variables). These variables will be wildly different depending on whether you're building a sex robot or a killer robot or an industrial robot … at least, I hope your assumed variables are different — or, at least, that you have really good insurance.

I was reminded of randomness defense because of a set of recent headlines.

VIDEO: Man Spent Three Hours Licking California Home’s Doorbell

and

Costco sells out of 26 lb. mac & cheese tub with 20-year shelf life

Humans are really good at adapting and really good at being random. If killer robots come for us, we could all adapt to the random — that is, start licking doorbells. The robots will melt their GPUs before they can model that behavior.

So if I'm wrong about the A.I. apocalypse, then human behavior will randomize, and some subset of that randomness will become standardized, and then we'll be fine. We'll have mac & cheese for decades and the robots will have melty GPUs. Sure, robots will have anomaly detection, but when the anomaly becomes the norm, the robots will only kill the few humans not licking their doorbells.
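To see why the anomaly-becomes-the-norm defense works, consider a toy sketch (entirely hypothetical, not any robot's real detector): a simple z-score anomaly detector flags behavior far from the observed baseline, but once everyone adopts the behavior, the baseline drifts and the detector goes quiet.

```python
# Toy z-score anomaly detector. The behavior metric (hours/day spent
# licking doorbells) is an invented example, not real data.
from statistics import mean, stdev

def is_anomaly(history, value, threshold=3.0):
    """Flag `value` if it sits more than `threshold` standard
    deviations from the mean of the observed history."""
    if len(history) < 2:
        return False  # not enough data to judge anything
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # Perfectly uniform population: any deviation is anomalous.
        return value != mu
    return abs(value - mu) / sigma > threshold

# Before: nobody licks doorbells, so the first licker stands out.
baseline = [0.0] * 100
print(is_anomaly(baseline, 3.0))    # True — flagged as anomalous

# After: the behavior standardizes and everyone licks doorbells.
new_normal = [3.0] * 100
print(is_anomaly(new_normal, 3.0))  # False — the anomaly is the norm
```

The detector hasn't changed at all; only the population has. That is the whole trick: a statistical detector can only flag deviation from whatever the crowd is currently doing.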

Of course, standardizing a random anomaly is how we all got on this ridiculous planet in the first place, so our doorbell-licking future will just be a continuation of the process that got us doorbells in the first place.

So how consistently random are humans? The Romans were fond of making these (yes, it has wings):

Roman wind chime and robot detection system.
