What [insert tech companies] Are Not Telling You About Their A.I.

Nathan Allen
Mar 22, 2019 · 5 min read


80% coal mining. 40% hype. 20% unknown fibers.

@LanceNg wrote a nice piece (What Microsoft and Google Are Not Telling You About Their A.I.) that touches on a subject that I know a bit about. A very common scenario at IBM was:

Marketing/Comms: puts out something publicly.

Research: freaks out.

Or,

Sales: makes a sale.

Research: freaks out.

Part of this is the inevitable friction of comms and sales people not being research people — they get things wrong on the technical side. On the other hand, if researchers were in charge of comms/sales, every press release would be a boring recitation of mature technical capabilities that about 10 people on the planet would be interested in and that don’t necessarily connect to any use case. And, of course, we wouldn’t sell much of anything.[1]

The biggest disconnect — and it happened all the time — wasn’t so much what we could do but rather when we could do it. You could look at IBM A.I. comms from 2012 on and find many optimistic announcements. My issue with them is the implication that such capabilities are ready today … or maybe tomorrow. Worst-case scenario: later next week.

The truth, rather consistently, is that we’re probably 3 years away from whatever we announced or sold today, and possibly 5 years away if we’re surprised by bad (or non-existent) data, or you (the client/partner) randomly switched databases on us (now why wouldn’t you tell us about that?), or maybe your data toolsets are from the 1980s, or maybe you fired half your tech team in the middle of the project.[2] Or the other possibility is that it’ll never work the way we said it would because reality got in the way. Which reminds me of another conversation…

A few years ago, I’m in a C-suite meeting with the largest tech company in its industry, and I draw the architecture of a very innovative A.I. solution on the board.[3] It’s really the kind of next-gen disruptive innovation that would re-invent the space.

The CTO: [looking at the board, appearing concerned] How much of this is actually built?

Me: The components are built. They’ve never been put together like this. [Internally: I don’t want to call this guy out but … Testing? Training? Hardening? UX? All of that needs to be done within the context of this configuration and use-case, and some of those can be huge endeavors.]

The CTO: Is this being used in our industry?

Me: No. [Internally: isn’t this why we’re talking? You want innovation? You need a competitive advantage? Why am I even here if you just want off-the-shelf products? Just go build yourself an Alexa app.]

The CTO: Well, we’re not interested in any science experiments.

Me: [Internally: they’re all science experiments at this point.]

The other C-suite people in the room jumped in and essentially told the CTO that he was missing the point. The guy was gone from the company shortly thereafter.[4]

The point is this: there is no magic algorithm. There’s lots of training for specific use-cases, which is often a lot of manual labor (‘coal mining’). All of this takes time and funding and there’s no way to know — for certain — that it’ll work until we try it, even if the components are in use in other domains. Even some CTOs miss this.

Of course, it gets more complicated — knowledge graphs, user profiles, multi-modal input/output…. I’ve seen one ed-tech company raising money on its algos … yeah, so maybe 5% of the work of actually building a product is done. Comms and sales probably won’t mention it, but just about everything in emerging A.I. involves a lot of coal mining, and the realistic time frame is usually:

Probably 3 years.

Possibly 5 years.

Maybe never.

Also: we don’t really know.

Of course, it’s really hard to sell that time frame, which is why it doesn’t often get mentioned.

As for scaling, think about this:

Q: Why didn’t IBM monetize Deep Blue after it beat Garry Kasparov in 1997?

A: Because they couldn’t — that computer wasn’t a chess computer. It was a Garry Kasparov computer.[5]

You want to build something that’s scalable? Cool. I like unicorns too. So add another 1–2 years, double the cost, and we’ll see what we can do. (Everyone complains that something isn’t scalable, but no one wants to commit to the time and cost of building scalable A.I. products, because the risk is that we’ll hit a wall of reality wherein ‘scalable!’ really means ‘kinda scalable?’ That’s what usually happens.)

All of that is being charitable. The uncharitable analysis would reveal an awful lot of people who have no idea what they’re talking about or whose job is just to monetize hype (investors, publications) — they don’t actually have to build it so why would they care if it’s true or not … as long as people believe it’s true.

[1] If you wished to quantify the degree to which researchers are bad at comms/sales, you’d need to use scientific notation.

[2] Yes, all of those are specific examples from companies that shall remain unnamed.

[3] Of course I thought it was innovative.

[4] Not sure if he was fired but that was the impression I got — never dug into it.

[5] Of course, IBM probably cheated on 2 moves. I should note I wasn’t on the team, so I have no idea.

About Nathan Allen

Formerly of Xio Research, an A.I. appliance company. Previously a strategy and development leader at IBM Watson Education. His views do not necessarily reflect anyone’s, including his own. (What.) Nathan’s academic training is in intellectual history; his next book, Weapon of Choice, examines the creation of American identity and modern Western power. Don’t get too excited, Weapon of Choice isn’t about wars but rather more about the seeming ex nihilo development of individual agency … which doesn’t really seem sexy until you consider that individual agency covers everything from voting rights to the cash in your wallet to the reason mass communication even makes sense….
