You Can’t Handle The Truth
I know. Promises have been made for decades — promises about personalized learning and adaptive learning progressions and assorted other unicorns. Oops.
What Knewton should have said in 2008 to potential investors is “this will take a decade and we’ll need a billion dollars per year — who’s in?”
Not the best pitch, but it would have been true. At IBM, that's roughly been our trajectory, and we're very close to realizing the unicorns of 2008. The promises of the past were broken in two places.
Data. Most of these ed-tech pioneers (ninjas? charlatans?) didn’t realize how much data they needed. Generally, they didn’t even realize what data is. About 80% of data in digital networks is never analyzed. Or 90%. I think my CEO has said both so they’re both true. The point is: even in a digital network — as in finance — most data isn’t leveraged. Your great untapped resource is your data … yet for years no one even knew what they were looking at …
It gets worse. Even had these ed-tech companies known about the vast breadth of data they needed (and that was available to them), they couldn't have scaled. From my experience, I can assure you that a lovely way to break a platform is to attempt to ingest the data it actually needs. Everyone says build for scale and build for speed … but those are meaningless if you don't first wrap your head around the scale and speed of education data.
Analysis. Even if you can get the data, that doesn't mean you can do anything with it. The government has had reams (well, tapes back in the day) of data on which they could conduct no real analysis. Even if Knewton could have realistically accessed all the data available in 2008, they couldn't have meaningfully analyzed it. No one could have.
One way to think of “cognitive computing” is that now we can (1) ingest all the data we need (really, all of it), (2) actually analyze it (for real) and (3) do something with it. Sounds like what we’ve been told we can do for more than a decade, but the cognitive systems weren’t yet available. Cognitive is important because the quantity of data and variables means you can’t code the machine for every possible input or variable. You don’t have a decision tree — it’s a decision Amazon rain forest. You need to be able to train the machine to make these decisions for itself.
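The decision-tree point above can be made concrete with a toy sketch. Everything here is hypothetical (the thresholds, the features, the labels); it's only meant to show the difference between a branch we hand-code and a decision the machine learns from examples, not anything resembling an actual cognitive system.

```python
# Toy contrast: a hand-coded rule vs. a rule learned from examples.
# All data, names, and thresholds are hypothetical, for illustration only.

def hand_coded_rule(quiz_score, time_on_task):
    """A brittle decision-tree branch: every case must be anticipated."""
    if quiz_score < 60:
        return "remediate"
    if time_on_task < 10:
        return "review"
    return "advance"

# A trained alternative: 1-nearest-neighbor over labeled examples.
# "Training" here is just remembering examples; new cases are decided
# by similarity to past cases rather than by rules we wrote.
TRAINING = [
    ((45, 30), "remediate"),
    ((55, 5),  "remediate"),
    ((80, 8),  "review"),
    ((85, 25), "advance"),
    ((92, 40), "advance"),
]

def trained_rule(quiz_score, time_on_task):
    def dist(example):
        (q, t), _ = example
        return (q - quiz_score) ** 2 + (t - time_on_task) ** 2
    _, label = min(TRAINING, key=dist)
    return label

print(hand_coded_rule(72, 12))  # -> advance
print(trained_rule(50, 20))     # nearest example is (45, 30) -> remediate
```

With five examples the hand-coded version is obviously fine; the point is what happens when the inputs become thousands of variables. You can't write the branches anymore, but you can keep feeding the learner examples.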
What’s Your Problem?
So, Schools, the good news is that this past failure wasn't your fault. The bad news is that now it is. Because now we can build the pipes, pump in the data, analyze it all and produce predictive and prescriptive analytics. For real. But the problem is that schools aren't ready, and often don't even realize what "ready" means.
Your data is often on legacy systems, siloed away, and disturbingly dirty. You don't treat your data the way a doctor or engineer does, even though someone else, outside of your system, will use that data to make important decisions. You have records that are erroneous or mismatched or duplicative or somehow make no sense, you have no idea how they even got there, and your solution has been to ignore them. If doctors did this, we'd have random leg amputations and men strapped to operating tables in maternity wards.
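What do "erroneous, mismatched, duplicative" records look like in practice? A minimal data-hygiene sketch follows. The record layout and IDs are invented for illustration; real student-information systems are messier, but the three checks (exact duplicates, conflicting records under one ID, values that make no sense) are the basic ones any analytics effort has to run first.

```python
# Minimal data-hygiene sketch: flag duplicate and inconsistent student
# records before building analytics on top of them.
# The record layout and all values are hypothetical.
from collections import defaultdict

records = [
    {"id": "S001", "name": "Ada Byron",    "grade": 11},
    {"id": "S001", "name": "Ada Byron",    "grade": 11},  # exact duplicate
    {"id": "S002", "name": "Alan Turing",  "grade": 10},
    {"id": "S002", "name": "A. Turing",    "grade": 12},  # same id, conflicting fields
    {"id": "S003", "name": "Grace Hopper", "grade": 99},  # makes no sense
]

# Group every record by its student ID.
by_id = defaultdict(list)
for r in records:
    by_id[r["id"]].append(r)

# IDs whose records are exact copies of each other.
duplicates = [i for i, rs in by_id.items()
              if len(rs) > 1 and all(r == rs[0] for r in rs)]
# IDs whose records disagree with each other.
mismatched = [i for i, rs in by_id.items()
              if len(rs) > 1 and any(r != rs[0] for r in rs)]
# Records with out-of-range values (grades 1-12 assumed valid here).
nonsense = [r["id"] for r in records if not 1 <= r["grade"] <= 12]

print(duplicates)  # -> ['S001']
print(mismatched)  # -> ['S002']
print(nonsense)    # -> ['S003']
```

None of this is clever, which is the point: it's plumbing, and it's the plumbing that's missing.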
So now we can do amazing things with all your data, but the infrastructure isn’t there. If you know the current state of LaGuardia airport (or most big city airports), that’s what these data and their structures look like. Generally, it can cost $200,000 and take four months just to build the pipes, clean the data, and find the content repositories. If you’re not making this kind of investment, then you’re probably not enabling the promise of predictive and prescriptive analytics. (Google gets around this problem by largely ignoring it, which means they really aren’t engaging in real analytics. Not that you want Google hooked up to your sensitive data — they’ll just create consumer profiles to sell you more cheap plastic stuff from China. Right, they said they won’t do that. Right, they’re just ignoring their entire business model.) (Fun fact: Google is an advertising company. That’s all. They make consumer profiles and sell them to advertisers. Their robot dogs are just a sideshow. A horrifying sideshow. What was wrong with real dogs that we needed robot dogs?)
In the race to win the future, organizations and companies that deliver digitally are already ahead because they are sitting on the data. Brick and mortar schools will need to push hard to catch up. Brick and mortar schools that don’t push hard to catch up will become Sears.
So what’s the difference between Amazon and Walmart? Why did Amazon grow so much more quickly over the past decade? It’s not “the internet.” The internet is just a means to create data — it’s always possible to do nothing with the data. Amazon’s advantage over Walmart is that they do something with the data. (In the 1990s, Walmart had the advantage because it did something with data. But as a digital company, Amazon simply collects much more data, which means it has much greater opportunity.)
Efficiency and efficacy in education are miserable compared to costs. We can — finally — improve these areas. But first, we need vast quantities of quality data, and then we need to do something with it.
But don’t be annoyed by this assessment. You think government and finance were ready in the 1970s? Retail was ready in the 1990s? You think retail is ready today? (Ok, retail is getting there — the wake-up call is ringing throughout retail back offices wherein overly tired techies stare at reams of data and wonder “WTF do we do with all this?” About six months goes by and then they call IBM or Microsoft or Oracle or EMC or one of the few companies that can genuinely handle a lot of data. Or their local Jack Daniels distributor. A lot of them call everyone.)
I’d estimate that by about 2008, retail began to really wake up. That’s also when the great divergence began — data-based retail companies really started to separate from the non-data companies. Then Amazon flipped the cognitive switch. Then Sears, Kmart, JCPenney and assorted other disasters began to unfold. (I brought that Sears reference back around — some kind of metaphor judo going on there.)
So the next time someone talks about fixing our crumbling highways and airports, remember the data highways and airports (data airports?) that need investment and improvement. If you want to enable ed-tech over the next few years, you need to build the infrastructure. And if someone promises “deep analytics” without this infrastructure, tell them to take their personalized-learning snake oil elsewhere.
Nota bene. We need a new term to replace “personalized learning.” I suggested “ed-wizardry” but our marketing people looked at me like I’m an idiot. Which is entirely normal and probably true, but still … we need a new term for “personalized learning.”
Formerly of Xio Research, an A.I. appliance company. Previously a strategy and development leader at IBM Watson Education. His views do not necessarily reflect anyone’s, including his own. (What.) Nathan’s academic training is in intellectual history; his next book, Weapon of Choice, examines the creation of American identity and modern Western power. Don’t get too excited, Weapon of Choice isn’t about wars but rather more about the seeming ex nihilo development of individual agency … which doesn’t really seem sexy until you consider that individual agency covers everything from voting rights to the cash in your wallet to the reason mass communication even makes sense….