Notes on Artificial Intelligence: Four Questions that Impact All of Us

Nathan Allen
Jun 10, 2019


I have two AI-related items in the works: one is a paper on robots in education, and the other is a piece on a famous AI paper about music that was roundly rejected by much of the music world. Both share the same theme: AI people talking about, and working in, areas about which they either know little or hold considerable yet unacknowledged biases.

One of the biggest problems with the Internet and big data and AI is the massively scaled production of noise. We’re more capable of defining and finding the signal, but we’ve also improved our abilities to make noise, disguise and bury the signal, and scale ignorance (cf. all of Twitter).

On the theme of more-noise-than-signal, a quick tour through “Yuval Noah Harari and Fei-Fei Li on Artificial Intelligence: Four Questions that Impact All of Us” from someone who knows a bit about AI and intellectual history.[1] (We’ll let the clickbait “all of us” slide.)

A few bits from the article:

Is love hackable? — Fei-Fei Li

Of course it is. And it’s the biggest business opportunity on earth. The real questions are around the ethics of profiting from love hacks. Of course, all the answers are that, in the end, companies will ignore ethics and just do it.

We may accept that we can be manipulated in small endeavors — Who doesn’t have a sudden craving for a cinnamon bun when walking into a bakery where they have just been made? — but surely there must be limits to the ways in which our behaviour can be controlled.

At this point, no one seems to know for sure what these limits of manipulation may be.

Nazis. Charles Manson. Spartan warriors. The death of science in Islam.[2] So what are these limits of which you speak?

Fei-Fei Li is part of some ethics project, and they speak of AI ethics often. This is becoming a thing in AI without any sense of irony. It seems to me that those who speak most loudly of ethics are often the least ethical. What sort of company has to constantly remind itself to not do evil? If someone awoke each day, looked in the mirror, and said “don’t murder anyone today,” then that person is a psycho, right?

The list of companies and governments that wanted to do something unethical and then didn’t do it is very short. The list of companies and governments that wanted to do something unethical and then did it is very long. When discovered, they lie and spin (“national security,” “enable a better user experience,” etc.). Companies and governments promote ethics as cloaking devices and marketing tools.[3] So, ethics panels of the world, hope you enjoy being used.

The discussion jumped into the deep, difficult topic of free will and agency, skipping superficialities completely.

I’ve written a lot about agency (new book on that) and little on free will because they aren’t correlated. Agency is the capacity to be a cause; free will is the actual or perceived spectrum of possible causes. Ancient Greeks generally believed in tremendous agency but little of what we’d call ‘free will.’ Of course they intersect as a pig and a cow intersect on a pepperoni pizza, but one doesn’t really focus on the pizza to understand the animals.

It is natural for many of us to “fall back on the traditional humanist ideas” that prioritize personal choice and freedom. However, he cautioned: “None of this works when there is a technology to hack humans on a large scale.”

Egregious nonsense: traditional humanism contains no such prioritization in the modern context. Civilizations have almost always understood “freedom” (and possibly “free will”) as operating within the context of genes, institutions, laws, technology, and other constraints; some combination of those informs what anyone would have thought about “freedom” or “free will.” Agency has always been shaped by opportunity (that is, fate), technology, status, and many other things. These concepts began to get muddled into an anything-goes morass in 1789, as memorialized by the Rights of Man, and came to full fruition in the 1960s. Really, these current ideas around ‘freedom’ and ‘free will’ are remarkably modern and intensely Western, and they certainly aren’t the dominant ideas in the West, even today. So it’s some kind of strange bias to posit such questions as primary. (Luther generally couldn’t locate free will; Erasmus rejected Luther’s rejection, one of the primary arguments between those two Renaissance rascals. FYI, these are the dudes who invented humanism.)

The primary friction, globally, is precisely the relationship between the individual (if such a thing even exists) and the institutions that provide the contours for this supposed free will. A quick glance at the elections across Europe over the past few years (Brexit to Italy and Hungary and everything in between) exemplifies this problem. If you require more evidence, look at the recent shocking election in Australia or China’s heightened nationalism over the last five years. The general problem is the definition of, and relationship between, individuals and institutions, though it can’t be specified in universal terms because it differs by geography and culture.

For the West, the problem is that its institutions have either collapsed or abdicated their responsibilities; few trust their leadership, as Buckley observed in 1961 when he said, “I would rather be governed by the first 2,000 people in the telephone directory than by the Harvard University faculty.” Buckley’s mistrust of academic institutions has taken full hold in the last few years, and Oberlin’s recent loss of a lawsuit in which it was accused of abdicating its role in guiding individual freedom demonstrates the calamitous state of affairs.[4]

Harari keeps using the term ‘hack’ because he needs to sell books. But influence, direction and control have always existed. Harari is Israeli so he’s probably aware that the Nazi party hacked an entire nation. Nothing new here.

If you can’t trust the customer, if you can’t trust the voter, if you can’t trust your feelings, who do you trust? — Yuval Noah Harari

Strikes me as a Middle-Eastern law-giver type of question (e.g., Islam, Judaism). Not really a Western question.

What does it mean to live in a world in which you learn about something so important about yourself from an algorithm? — Yuval Noah Harari

The ol’ Delphic maxim “know thyself” wasn’t a call to stare at the sky and turn inward. There have always been institutions, from shamans to psychology, wherein you “learn about something so important about yourself” from an external source. Again, this strikes me as a law-giver curiosity, not a Western question. External sources of deeply personal knowledge are baked into Western civilization.

But while the science is still maturing, the consequences of our free will being manipulated — Harari called this “Hacking Humans” — pose a great risk within our society.

It’s always been this way. What do you think stained glass windows were for?

If manipulation is present, how are the systems of government, commerce, and personal liberty still legitimate?

Still legitimate? I wasn’t aware that they are currently legitimate. Given that, I seriously doubt any AI application will alter the equation of government legitimacy.

To mitigate the potential of “Hacking Humans” without a good understanding of the limits of the possibility of manipulation, Harari urged us to focus on self-awareness:

“It’s the oldest advice in all the books in philosophies is know yourself. We’ve heard it from Socrates, from Confucius, from Buddha: get to know yourself. But there is a difference, which is that now you have competition…You’re competing against these giant corporations and governments. If they get to know you better than you know yourself, the game is over.”

Ugh. First, the Delphic maxim (which Socrates received) has exactly 0.00 in common with eastern philosophies. It is not a command to turn inward or remove oneself from society; it’s the opposite. Socrates stated this (“the unexamined life…”) when given the choice to leave Athens or die; he chose death. If it were an inward command — if the ‘self’ was literally oneself — Socrates would have been fine in the mountains.

Second, there’s no logical incoherence between “knowing oneself” (which means oneself and one’s society for the Greeks) and the government knowing the same. Perhaps you do or don’t care. There are always exchanges between limitations of freedom and government control. Just try to walk into a kindergarten without pants and you’ll discover these limitations. Maybe people are okay with their government “knowing you” if you get free government cheese in exchange; I see no evidence that suggests otherwise.

Third, “the game is over” strikes me as an impossible statement that has no global application. Whatever it means, it will mean something very different in different nations (remember those?). Different peoples will have different responses due to their different priorities. I suppose I should remind Harari that America’s Second Amendment was instituted specifically so we could overthrow our government (no, the Second Amendment has nothing to do with self-defense, other than self-defense from the government). It seems to me that this “agency” and “freedom” of which you speak is decidedly different across the planet.

We all like to think we live on the precipice of new worlds, but the Harari/Li conversation struck me as no different from all the other AI hype: useless, jargon-laden, self-promoting. I don’t suspect that AI will enable greater government control. Sure, the Chinese will abuse it, but will they have more control over the population than they did in the 1960s? Could any government have more control than Timur did over Isfahan? Etc. Etc. Etc.

One could break the Internet with a list from history of all the times governments had more control than China does today or will in the foreseeable future. Not that China abusing AI is acceptable, but no amount of hand-wringing or number of ethics panels will change it.

About Nathan Allen

Formerly of Xio Research, an A.I. appliance company. Previously a strategy and development leader at IBM Watson. His views do not necessarily reflect anyone’s, including his own. (What.) Nathan’s academic training is in intellectual history; his next book, Weapon of Choice, examines the creation of American identity and modern Western power. Don’t get too excited: Weapon of Choice isn’t about wars but about the seemingly ex nihilo development of individual agency … which doesn’t really seem sexy until you consider that individual agency covers everything from voting rights to the cash in your wallet to the reason mass communication even makes sense….
