A.I. Apocalypse Averted, Again

Nathan Allen
5 min read · Jan 3, 2019


Second Law of Thermodynamics, Kim Cattrall, and Chess-playing Midgets all in one essay. They said it couldn’t be done.

Maybe it’s me, but when I see something like this:

[Chart from MIT Technology Review’s “Data that illuminates the AI boom”]

I can’t help but think that’s but one part of the story, and maybe not even the most important part.[1] The chart shows one output, but what about the other inputs and outputs?

Generally, ML, and neural nets in particular, are ravenous beasts. Inputs include large amounts of data and hefty doses of power. This may seem trivial, but recall that when the world went crypto-crazy a year ago, entire countries noted the power-usage spikes. That’s loads of powerful GPUs causing energy spikes on a national level. It’s not trivial. (This reminds me of Obama’s big EV push, which established committees to review grid/power issues but made no significant power-infrastructure investments; that’s how you knew they weren’t serious. Millions of EVs on the road … where was the power going to come from?)
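
A back-of-envelope sketch makes the scale concrete. Every number below is an assumption picked for illustration, not a measurement of any real mining or training fleet:

```python
# Back-of-envelope: why a GPU boom shows up on a national grid.
# All figures are illustrative assumptions, not measurements.

gpus = 1_000_000         # assumed number of GPUs running flat out
watts_per_gpu = 300      # assumed draw per GPU under load
hours_per_day = 24

daily_kwh = gpus * watts_per_gpu * hours_per_day / 1000
print(f"~{daily_kwh:,.0f} kWh/day")  # ~7,200,000 kWh/day

# For scale: a typical household uses roughly 30 kWh/day, so this
# hypothetical fleet draws as much as ~240,000 homes.
print(f"~{daily_kwh / 30:,.0f} household-equivalents")
```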

Anyway, compared to humans, computers are remarkably inefficient with both data and power. Humans can supplement inadequate data with imagination (I suppose one could draw an analogy to regressions, but computers fill in the blanks poorly compared to humans and still require much more data). And the human brain is remarkably energy efficient: it runs on roughly 20 watts.
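
For a rough sense of the gap, a hedged comparison; the 20 W brain figure is the standard estimate, while the GPU wattage and training-run length below are assumptions for illustration only:

```python
# Rough energy comparison: a human brain vs. one GPU training run.
# Brain power (~20 W) is well established; the GPU draw and the
# length of the training run are illustrative assumptions.

brain_watts = 20
gpu_watts = 300        # assumed single-GPU draw under load
training_hours = 100   # assumed length of one training run

gpu_kwh = gpu_watts * training_hours / 1000   # 30 kWh per run
brain_kwh_per_day = brain_watts * 24 / 1000   # 0.48 kWh per day

print(f"One training run: {gpu_kwh:.0f} kWh")
print(f"Equivalent brain time: {gpu_kwh / brain_kwh_per_day:.0f} days")
```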

And what about the other output? Computers are entropy machines (physics-entropy, not computational-entropy, though that too).[2] They produce outputs per the graph (e.g. that’s a cat on a unicycle), and they produce heat. The thermal output of A.I. processes is not trivial.
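
Essentially every watt a chip draws leaves it again as heat, and the cooling needed to remove that heat draws still more power. A minimal sketch, with assumed numbers for a small cluster:

```python
# Nearly all electrical power drawn by a processor is dissipated as heat.
# Cluster size and per-server draw are assumptions for illustration.

servers = 100            # assumed size of a small inference cluster
watts_per_server = 1000  # assumed draw per server (GPUs, CPU, fans)

heat_watts = servers * watts_per_server  # ~100 kW of continuous heat

# A household space heater runs ~1.5 kW, so this cluster heats the
# building like dozens of them, and the cooling plant must then pump
# all of that heat back out, drawing yet more power.
print(f"{heat_watts / 1000:.0f} kW of heat = {heat_watts / 1500:.0f} space heaters")
```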

Further, as A.I. goes local, you localize these inefficiencies, which is why we now have liquid-cooled chips in mobile phones. More A.I. = more energy = more heat. And while chips have gained efficiencies all around, we’re approaching the limits of physics with 7nm chips (to the point where some fabs don’t even think producing at 7nm is worth doing).

Ubiquitous, good A.I. depends on efficiency gains over the next decade at least as large as those of the past decade (and probably larger). Energy storage will likely improve, but I suspect we’re being unrealistic in assuming massive efficiency gains in other areas using the same technological architectures.

Which brings us to the A.I. apocalypse. Humans are remarkably energy and data efficient, not to mention decent thermal regulators. But more importantly, humans make their own energy. This strikes me as a fundamental flaw in any “A.I. apocalypse” scenario, apart from a robot that has its own micro-nuclear reactor. A.I. requires non-trivial amounts of external power and produces non-trivial amounts of heat. If a robot goes crazy, you can almost certainly locate it and unplug it.

What about a distributed apocalypse wherein bots seize the grid? Ok, unplug the grid. There are actual physical wires, you know. So you can’t watch Housewives of Burma for a few weeks while we kill the bots. We’re talking about an apocalypse here.

But won’t we need energy just to kill the bots? No, you’ll need an ax and maybe electrical gloves. Salt water may be handy, too. But if we did require electricity, we could draw on localized, non-grid energy sources.

What about bots seizing a solar array? It would need to be a giant solar array, and by giant I mean easily targetable. Unplug it. Destroy it. Attack in the early morning while the bots sleep. (Tesla’s Powerwall house battery tends to be nearly drained in the early morning, after powering the house through the night but before the sun re-energizes it, and that’s in sunny climes.) Combine the inefficiency of solar with the inefficiency of computers and I like our chances.
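
The arithmetic behind that early-morning window, using the Powerwall 2’s published ~13.5 kWh capacity; the overnight load and hours of darkness are assumptions for illustration:

```python
# Why a home battery is at its lowest just before sunrise.
# Capacity is the published Powerwall 2 figure; the overnight load
# and hours of darkness are assumptions for illustration.

capacity_kwh = 13.5      # Tesla Powerwall 2 usable capacity
overnight_load_kw = 1.2  # assumed average household draw overnight
dark_hours = 10          # assumed hours with no solar input

used_kwh = overnight_load_kw * dark_hours   # 12 kWh consumed overnight
remaining_kwh = capacity_kwh - used_kwh     # 1.5 kWh left at dawn

print(f"{remaining_kwh:.1f} kWh left ({remaining_kwh / capacity_kwh:.0%})")
```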

If all else fails, humans can run into the woods, eat berries, and plan their revenge. Bots gotta stay plugged in. Strikes me as a strategic weakness.[3] Humans win again.

Ultimately, we just like to ponder a human-level A.I. because humans want to be gods, and you become a god by creating humans. This story is as old as history; Ovid told the story of Pygmalion in Metamorphoses two thousand years ago, and he didn’t even invent the story. (Pygmalion creates a statue that comes to life.) Mary Shelley (Frankenstein) and George Bernard Shaw (Pygmalion) followed up. As did Disney (Pinocchio), Broadway (My Fair Lady), Kim Cattrall (Mannequin), and nearly every Sci-Fi drama made. The idea that humans become gods by making “humans” is ancient.[4] Honestly, some of these examples seem more realistic than a distributed Alexa attack that compels humans to listen to unwanted weather reports.

That said, nuclear drone submarines strike me as problematic.

*******************************************************************

[1] From MIT Technology Review. https://www.technologyreview.com/s/612582/data-that-illuminates-the-ai-boom/#

[2] I know … what about the curious case of quantum cooling/data deletion … well, we’re a long way from that having a practical application (long way = never). https://phys.org/news/2011-06-quantum-knowledge-cools-entropy.html

[3] Throughout history, if you don’t have a source of fuel, your quest for domination quickly crumbles. (Fuel = food for humans and whatever for your horses or tanks or GPUs.)

[4] Daedalus installed his voice into statues (sound familiar?). Hephaestus famously made human-robots. Pandora was made from clay, and Talos was a man-robot made of bronze. All these stories are “B.C.” Of course, Amazon’s ‘Mechanical Turk’ is derived from an 18th-century scammer who put a chess-playing midget in a box (seriously) to operate a robot. It’s noteworthy that the Mechanical Turk fooled people for years because they wanted to believe that humans could rise to god-level. That’s some serious Enlightenment-delirium. Of course, we’re still afflicted by it.

About Nathan Allen

Formerly of Xio Research, an A.I. appliance company. Previously a strategy and development leader at IBM Watson Education. His views do not necessarily reflect anyone’s, including his own. (What.) Nathan’s academic training is in intellectual history; his next book, Weapon of Choice, examines the creation of American identity and modern Western power. Don’t get too excited: Weapon of Choice isn’t about wars but rather about the seeming ex nihilo development of individual agency … which doesn’t really seem sexy until you consider that individual agency covers everything from voting rights to the cash in your wallet to the reason mass communication even makes sense….
