A.I. Ethics & Hogwash

Nathan Allen
Jul 19, 2019 · 5 min read

Dear Microsoft, people aren’t that stupid anymore

At one point, this was going to be a series. Then a million things happened. (Well, globally, probably trillions of things happened.) It was motivated by Microsoft’s ridiculous stated desire for some kind of global A.I. Ethics commission, like at the U.N. or something.

It’s ridiculous because it’s just a thin veneer atop the lying, cheating, swindling, and spinning that companies usually do. Of course, should we have:

1. Public data sets, public training results, and labeling for how much a machine was trained on public data vs. proprietary data? Yes.

2. Comprehensive public assessments with labeling for how each machine performed? Sure.

3. Rules & limitations, with public declarations that a machine conforms? Okey dokey!

APIs would need further rules. How does a company know and confirm that its APIs are being used to train a next-gen mobility scooter for old people in Florida and not a killer assassin panda (probably also in Florida)?

But do you think for a minute that companies, particularly large corporations, won’t just use these “standards” to shield their other activities? Have you been paying any attention to Facebook for the last … since it was created? History is not on the side of rules. (Fun fact: exactly 0.00 countries follow global laws. They break them whenever they feel it’s necessary, which is pretty much daily.)

So when people are surprised to learn that IBM is helping China build its surveillance state, I wonder if they know who built the surveillance states of the Middle East’s dictators. Some of these are the same companies that helped the Nazis build their surveillance state in the 1930s. Why do you think anything has changed? Most dictators don’t/can’t build their own surveillance state just as they can’t build their own weapons or electrical grids or … well, most things. Someone will build it and sell it to them.

The only real rules are:

1. Companies will bend/break/ignore rules if money is to be made.

2. Money is always to be made.

Claire Stapleton, an AI researcher at Google, recently left the great vampire squid that is wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like data, and sent an internal memo wherein she said, “Google isn’t a place where I can continue this [AI ethics] work.”[1]

Many have concluded the same, and there will be many more to come. But it all seems so naïve. Further, it seems a bit alarming if Google can’t keep a well-meaning and conscientious employee such as Claire.

So first, she comments that “the result is that Google, in the conventional pursuit of quarterly earnings…” Anyone who hasn’t worked in a large public company has no idea how potent the quest for hitting quarterly earnings targets is: if Satan resurrected Hitler and Mehmed II and needed a place for this unholy triumvirate to store their data (everyone’s got data, right?), a half-dozen companies would be tripping over themselves to spin up a server.[2] If you want to remove the anything-goes quarterly highs that public corporations are addicted to, you need to do something about the public markets.

She continues, “I’m certain many in leadership don’t truly understand the direction in which Google is growing. Nor are they incentivized to.” It’s amazing how you can bring pressing issues to senior management, but unless there’s a clear incentive to listen, they simply cannot even hear what you’re saying. This incentive-based deafness is endemic in corporations around the world. QED: focus on incentives.

But then, I think we go off the rails with her claims that “making sure AI is just, accountable, and safe, will require serious structural change to how technology is developed and how tech corporations are run” and that “the use of AI for social control and oppression is already emerging, even in the face of developers’ best of intentions.”

What if the United States bans the use of A.I. to make killer panda assassins? Then what if China makes one, puts it in a little panda submarine, and sends it across the Pacific? What if we know they’ve done this? There’s 0.00 evidence that the U.S. government and its comrades-in-arms won’t develop their own panda assassins (at which point Michael Bay directs the great robot-panda war in the Pacific).

Regardless of any rules that are made, entire nations will break them. Then others will follow “out of necessity” (cf. the Patriot Act et al.).

Regardless of the safeguards, companies will break them for their governments. One thing 18th-century press owners (and media owners in general) realized is that one of their biggest customers was the government. If your biggest customer calls and demands a panda assassin, what do you say? But the rules? The government gets to break the rules. But we’re unionized? You do it or we’ll get someone else to do it (the U.S. government co-develops code with other governments, of course).

My advice is to study if, when, and how the goals now attached to A.I. ethics have worked for new inventions and technologies in the past. Many were viewed as very dangerous at the time of their invention. Isolate the mechanisms of success (or abuse), then discover how they apply to the future. Those mechanisms do exist, but finding them is shockingly difficult, and various ethics committees are more about public relations and cocktail lunches than about actually achieving anything. The current members of the United Nations Human Rights Council include Burkina Faso, Eritrea, Somalia, Bahrain, the Congo, Angola, Qatar, Pakistan, Rwanda, Saudi Arabia, and Cuba. Much like Security Council members, these members serve to protect their own ability to break the rules as much as to enforce them on anyone else.

So, yeah, Microsoft, a global A.I. Council of Ethics would be successful, assuming by ‘success’ you mean protecting those who want to break the rules.

About Nathan Allen

Formerly of Xio Research, an A.I. company. Previously a strategy and development leader at IBM. His views do not necessarily reflect anyone’s, including his own. (What.) Nathan’s academic training is in intellectual history; his next book, Weapon of Choice, examines the creation of American identity and modern Western power. Don’t get too excited: Weapon of Choice isn’t about wars but about the seemingly ex nihilo development of individual agency … which doesn’t really seem sexy until you consider that individual agency covers everything from voting rights to the cash in your wallet to the reason mass communication even makes sense….
