7 Minutes to Learning[1]

Nathan Allen
8 min read · Jan 10, 2019


(I broke the rules already by footnoting the title; an appropriate prelude.)

I’ve spent some time in random countries for assorted organizations and governments (random meaning countries that many people don’t even know exist). The curious feature of severely authoritarian regimes is the cleanliness and organization of people’s houses. When people have little control over their lives, they obsess over the few things over which they have control — so they clean and organize their kitchens and bathrooms beyond any reasonable western standard. The minute they walk out of their door, the world becomes some kind of idiosyncratic existential chaos, but the utensil drawer is always fastidiously organized. And so, the importance of that which they can control becomes irrationally elevated. The ITS community reminds me of a surgeon in Turkmenistan. Well-educated and seemingly important in the medical world, she really has little control once she leaves her apartment. What happens to her and her family is entirely predicated on the whims of a remote dentist, so she spends her time organizing her forks and spoons.[2]

If one were to read the ITS (Intelligent Tutoring System) literature from the late 1970s to the present, one would get the impression that “tutoring” is a method of knowledge transfer.

Common Conception of Personalized Tutoring

The ingredients include some kind of learner model, methods to inform that model, and some kind of knowledge graph; the tutor navigates between model and graph to shepherd the learner through the curriculum (a collection of content).[3]
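To make the critique concrete, here is a minimal sketch of that common conception — a learner model of per-concept mastery, a prerequisite graph, and a “tutor” that simply picks the next eligible concept. All names, the mastery threshold, and the chemistry slice are hypothetical illustrations, not any real ITS:

```python
# Illustrative sketch of the "common conception" of tutoring as pathway
# navigation. Everything here (threshold, graph, scores) is invented.

MASTERY_THRESHOLD = 0.8  # assumed cutoff for "knows this concept"

# Knowledge graph: concept -> prerequisite concepts (hypothetical slice)
knowledge_graph = {
    "atoms": [],
    "ions": ["atoms"],
    "acids_and_bases": ["ions"],
    "hydrogen_ion_activity": ["acids_and_bases"],
}

def next_concept(learner_model, graph):
    """Return the first unmastered concept whose prerequisites are mastered."""
    for concept, prereqs in graph.items():
        if learner_model.get(concept, 0.0) >= MASTERY_THRESHOLD:
            continue  # already mastered; skip
        if all(learner_model.get(p, 0.0) >= MASTERY_THRESHOLD for p in prereqs):
            return concept
    return None  # curriculum complete

learner = {"atoms": 0.9, "ions": 0.85}
print(next_concept(learner, knowledge_graph))  # acids_and_bases
```

The entire “tutor” reduces to a graph traversal over mastery estimates — which is precisely the reduction this essay argues misses the point.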

I think this is totally wrong because it misses both (1) reality and (2) the key cognitive foundation of learning. The overwhelming impression that one gets from the ITS community is that they’ve done little actual tutoring — or teaching. Instead, they’ve fetishized the one portion that they can understand, measure, and control (presumably) — and that’s how we came to focus on optimizing learning pathways. Instead, what actually occurs in a tutoring session looks something like this:

For most students, the incentive to learn is entirely established in the first five and last two minutes. Most chemistry students do not have an inherent incentive to understand acids and bases; rather, they want to learn it to (1) satisfy a requirement for graduation, (2) get a job, (3) get to a related subject that’s more interesting, or (4) because their instructor has motivated them. I encountered very few students clamoring for more details about hydrogen ion thermodynamic activity — and frankly, if they did, I’d first think something is wrong with them.

Those first five minutes and last two minutes develop emotional engagement, personal motivation, and an attachment to the process that’s outside of ‘knowledge transfer.’ This is where students start to engage with hydrogen ion activity because they don’t want to let their teacher down, because they believe another human cares that they learn it, because they are invested in the environment. This is the motivating factor entirely outside of knowledge transfer itself and entirely within the control of the teacher and student. Great teachers understand this and develop the teacher-student relationship and a sense of mutual quest and shared drive for success. Students may find some level of personal failure tolerable, particularly given a cost-benefit analysis, but what if that failure means the community and the teacher fail too?[4] What if the student is aware that the teacher is invested in the student’s success and is emotionally attached to the student mastering hydrogen ion activity? In the end, it’s got nothing to do with ions and everything to do with a personal connection that’s established everywhere except in the actual knowledge transfer.

Variability in pure knowledge transfer among competent teachers is not significant. Survey all the Chemistry 101 instructors on a college campus, and you’ll discover that “knowledge transfer” variation is likely near non-existent. So what makes one instructor so much more effective than another?

Variability in motivation-setting and engagement from teacher to teacher is significant and is the environmental factor that most significantly impacts learning outcomes. Those 7 minutes establish a relationship and an incentive.[5] The primary incentive for knowledge transfer is a human teacher whom the students respect, trust, like, and appreciate. Of course, this incentive requires time — it’s not done in just 7 minutes. But over the course of several tutoring sessions (or classes), this relationship develops and hardens, which emotionally incentivizes and challenges the student to respond. (Also, of course it’s not necessarily just at the beginning and end of a session or class; tutors may assess engagement decay and deviate from knowledge transfer to motivation-setting at any point.)[6] Of course, the ITS community is aware of engagement and emotional UX challenges, and yet little gets implemented. Instead, the focus is on something that empirically doesn’t much matter.

For most students, knowledge acquisition is not an incentive. So attempting to “optimize learning” via optimizing the learning path will typically yield negligible improvements. I suspect that learning pathway precision is largely a myth that can only yield limited results. The myth was generated and perpetuated by the ed/intelligent tutoring community because it’s the component they can measure and thus (supposedly) control. Knowledge graphs and precision pathways yield data, published papers, and the perception of control. But in the end, it’s all rearranging spoons in the kitchen.

By focusing on knowledge transfer, the typical ITS environment is too artificial to be effective. By focusing on time-bound diagnosis-remediation cycles and not on emotional connections and explanation gaps, the ITS community has created an artificial solution for an artificial problem. Generally, diagnosis-remediation cycles are not tightly time-bound in the real world; humans learn through variable repetition cycles. The common conception of curriculum construction is linear — that there’s a reasonably linear path from August to June. This conception is a foundational flaw limiting progress. Humans mostly learn on repetition cycles. Concepts and their applications are repeated and broadened over time. (This is how humans learn language and just about everything…). While a teacher generally must adhere to a more linear curricular structure (for a variety of reasons), the tutor can be the repetition cycle — as long as we remember that most students don’t voluntarily enlist in a repetition cycle. An effective personalized learning environment would mimic a variable repetition cycle while synthesizing those 7 minutes.
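A variable repetition cycle can be sketched in a few lines — this is an assumed, loosely Leitner-style scheme (the growth factor and reset rule are illustrative, not a claim about any particular system): each success widens the gap before a concept reappears, and each failure pulls it back, so concepts are revisited and broadened over time rather than traversed once, linearly.

```python
# Hedged sketch of a variable repetition cycle (assumed parameters).

def next_interval(days, success, growth=2.0, floor=1):
    """Widen the review interval on success; shrink it back on failure."""
    if success:
        return max(floor, int(days * growth))
    return floor  # failed recall: bring the concept back tomorrow

# A concept's review gap across four attempts: 1 -> 2 -> 4 -> 1 -> 2 days
interval = 1
for outcome in [True, True, False, True]:
    interval = next_interval(interval, outcome)
print(interval)  # 2
```

The point of the sketch is the shape, not the numbers: the cycle is variable and reactive, not a fixed, tightly time-bound diagnosis-remediation loop.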

For most of the computing era, humans had to communicate with computers in the computer’s language. A great advance of A.I. enables humans to communicate with computers on the human’s terms. This is, in theory, a major component of ITS — tutoring is possible because the computer can speak to the human on the human’s terms. And yet, ITS tutors don’t speak human-English; they speak robot-English. Sure, it’s a language that humans technically understand, but it’s a language and an environment that’s stripped of its humanness. The “personalization” amounts to a retrieve/rank/deliver function applied to content — it’s not personalizing to the individual human on a level that humans recognize.[7] A cost/benefit on pathway optimization yields similar results to spoon optimization. To optimize knowledge transfer, don’t focus on knowledge transfer.[8]
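The retrieve/rank/deliver reduction can be made explicit. This is a hypothetical pipeline with invented scoring (ranking by distance between item difficulty and an estimated learner level) — a sketch of the pattern being criticized, not any shipping system:

```python
# Illustrative retrieve/rank/deliver pipeline (all scoring assumed).

def retrieve(content_pool, topic):
    """Retrieve: filter the pool to candidates on the current topic."""
    return [c for c in content_pool if c["topic"] == topic]

def rank(candidates, learner_model):
    """Rank: order by distance from the learner's estimated level."""
    level = learner_model.get("level", 0.5)
    return sorted(candidates, key=lambda c: abs(c["difficulty"] - level))

def deliver(content_pool, topic, learner_model):
    """Deliver: hand the learner the top-ranked item."""
    ranked = rank(retrieve(content_pool, topic), learner_model)
    return ranked[0] if ranked else None

pool = [
    {"id": "easy_acids", "topic": "acids", "difficulty": 0.3},
    {"id": "hard_acids", "topic": "acids", "difficulty": 0.9},
]
print(deliver(pool, "acids", {"level": 0.4})["id"])  # easy_acids
```

Nothing in the pipeline touches motivation, relationship, or engagement — it is content selection wearing a “personalization” label.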

The ITS community has largely defined tutoring to fit their agendas and capabilities; the term ‘ITS’ has devolved like ‘personalized learning’ has; we need a new term wherein hyper-refined learning pathways are recognized for what they are: obscurantist topics for researchers. Instead, ITS (or whatever we should call it now) should leverage the fact that people want to humanize things if you give them something with which to work. So, the ITS objective shouldn’t be to try to build a ‘human’ robot but rather to enable the user to humanize the robot (or at least impart some human qualities). People are ready and able to project humanness onto non-human objects (stuffed animals, their pets, their cars… Disney has built an empire based on this humanizing instinct). So instead of rearranging the utensil drawer ad nauseam, maybe step out into the living room and make an ITS that synthesizes the humanness of a teaching environment.[9]

NB: I could write a book on this but at >1000 words I’ll spare everyone and stop here…

*******************************************************************

[1] Motley Crue has a song called “10 Seconds To Love,” which is an entirely inappropriate reference in the context of this essay.

[2] The autocrat of Turkmenistan is a former dentist. This is something of a pattern in post-Soviet leadership. I have no idea what Soviet dental schools were teaching in the 1980s, but I’m guessing there was a heavy dose of ideology. When Jerry Seinfeld was accused of being an anti-dentite, he likely had no idea that he was on the right side of history, for at that same time, a legion of dentite-ideologues was marshaling its forces across the former Soviet republics. And by ‘republics,’ I mean ‘totally not republics.’

[3] In many ways, this is similar to CAT models that became the rage in the 1990s. And, of course, many ITS systems are not substantially different from the text-based choose-your-own-adventure games of the 1980s.

[4] E.g. the cost is more studying and less fun and the benefit is earning a B instead of a C in Chemistry 101. Evidence demonstrates that most will value more free time over a slightly higher grade; chemistry probably isn’t the best example because it tends to self-select for students interested in chemistry, so maybe replace it with U.S. history or algebra.

[5] I suppose the nature of the incentive is an open question; perhaps it’s triggering a mental state of engagement.

[6] Decay from boring content (probably most of it to a teenager), a boring teacher, distractions, or the natural response of a dopamine-addicted brain (thanks social media!).

[7] “Oh look, my A.I. tutor is teaching me the difference between silver chloride electrodes and calomel electrodes … it knows me so well as an individual” said no human ever. Learning pathway precision may be meaningful in some way (doubt it), but it’s useless as a tool to engage someone on a human level. Regardless of how much your toaster knows about algebra 1 learning pathways, kids still don’t want to learn from their toaster. (cf. https://asunow.asu.edu/20180815-discoveries-funny-bone-asu-science-student-survey-finds-majority-appreciate-humor-courses)

[8] Early childhood educators know this well — just watch Sesame Street to understand how optimizing knowledge transfer doesn’t focus on optimizing knowledge transfer — instead, the focus is on the environment in which knowledge transfer operates.

[9] Again, ECL informs us … make a Cosmo that teaches chemistry.

About Nathan Allen

Formerly of Xio Research, an A.I. appliance company. Previously a strategy and development leader at IBM Watson Education. His views do not necessarily reflect anyone’s, including his own. (What.) Nathan’s academic training is in intellectual history; his next book, Weapon of Choice, examines the creation of American identity and modern Western power. Don’t get too excited, Weapon of Choice isn’t about wars but rather more about the seeming ex nihilo development of individual agency … which doesn’t really seem sexy until you consider that individual agency covers everything from voting rights to the cash in your wallet to the reason mass communication even makes sense….
