Anyone with a younger sibling – or who’s been one – understands the value of a ‘Teachable Agent.’ An entity whose behavior and thinking you can dictate, who will then enter the world (or pantry) and demonstrate your version of reality (steal cookies for you).
In innovative Ed Tech circles, a ‘Teachable Agent’ refers to an artificial-intelligence program, usually disguised as a customizable avatar. Learners of all ages can ‘program’ it simply by creating a visual web or concept map indicating relationships between phrases and concepts. The Agent then uses this visual representation to embark on assessable activities such as answering auto-graded questions or conversing, Siri-like, with humans.
Can you imagine a young boy programming this Agent with all he knows about elementary science, then going out for recess while the Avatar takes a standardized test for him? Can you picture students creating representations of their analysis and synthesis of multiple primary documents, then sending their Agents off to compete in a History-themed Jeopardy game? How about a teenager creating a map reflecting her literary analysis of Macbeth, then commanding her Agent to defend her thesis in a conversation with experts and peers in a virtual classroom?
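The mechanics are easy to sketch in code. The toy below is purely illustrative (it is not the code behind Betty’s Brain or any actual Teachable Agent): the learner ‘teaches’ by adding labeled causal links, forming a concept map, and the agent then answers quiz-style questions by chaining those links together.

```python
# Hypothetical toy sketch of a Teachable Agent. The learner's concept map
# is a set of directed edges labeled '+' (increases) or '-' (decreases);
# the agent answers "if X rises, what happens to Y?" by chaining edge signs.

from collections import deque

class TeachableAgent:
    def __init__(self):
        self.edges = {}  # concept -> list of (effect concept, sign)

    def teach(self, cause, effect, sign):
        """Record one link from the learner's concept map ('+' or '-')."""
        self.edges.setdefault(cause, []).append((effect, sign))

    def predict(self, cause, effect):
        """Return '+', '-', or None: the net effect of raising `cause`."""
        queue = deque([(cause, "+")])
        seen = {(cause, "+")}
        while queue:
            node, sign = queue.popleft()
            if node == effect:
                return sign
            for nxt, edge_sign in self.edges.get(node, []):
                # Two '-' links cancel out; mixed signs flip to '-'.
                flipped = "+" if sign == edge_sign else "-"
                if (nxt, flipped) not in seen:
                    seen.add((nxt, flipped))
                    queue.append((nxt, flipped))
        return None  # no path in the map, so the agent "doesn't know"

# A learner "teaching" the agent a simple river-ecosystem chain:
agent = TeachableAgent()
agent.teach("algae", "dissolved oxygen", "+")
agent.teach("dissolved oxygen", "fish", "+")
agent.teach("fish", "algae", "-")
print(agent.predict("algae", "fish"))  # prints "+"
```

Because the agent only knows what the map says, its quiz answers directly expose gaps and misconceptions in the learner’s own representation, which is the pedagogical point.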
And those are just the first gen applications.
Why not just stick with our straightforward Q&A assessments, you ask? Why insert a middle step of asking kids to construct a multi-variable representation of their understanding, followed by a Q&A with a robot instead of the learner? Three possible reasons:
1. Because it’s cool and more engaging? Yes, though admittedly this is a matter of opinion.
2. Because the Agent scenario satisfies an oft-heard desire to improve learning by providing learners with opportunities to ‘teach’? Yes, the runner-up to reason #1.
We’ve all heard the pedagogical and philosophical argument that a learner can learn by teaching, bolstered by Vygotsky and cognitive scientists who make a convincing case that articulating one’s understanding is an essential step in truly grasping concepts. However, the students-teaching-students scenario is controversial. On the receiving end are actual learners, i.e., humans with multiple needs, learning styles, and intelligences, as well as prior misconceptions, interests, and motivations. Teaching is never really about subject knowledge alone; it is about psychologizing the subject. Google ‘Pedagogical Content Knowledge’ if this is news to you.
Replace the human learner with a rational, attentive Teachable Agent and voila! Learners can ‘teach’, pushing their learning to the next level, without the human-learner factor muddying the waters of the original learner’s experience.
3. Because it assesses higher up Bloom’s taxonomy? Yes, and this is the game-changer.
The Agent approach is less granular than the Khan-like one-two punch that requires a learner to (1) passively watch or read, then (2) answer targeted questions. That Ed Tech formula gives learners immediate feedback and bite-sized, learn-then-demonstrate efficacy, but it barely scratches the surface of broader thinking, analysis, synthesis, and other more complex learning objectives.
We are still in the pre-Teachable-Agent era; the Agents mostly exist at universities (see the links below). For now, Ed Tech ventures can safely treat the realm of learning and assessment tied to higher-order thinking as an offline experience of papers and presentations graded by teachers. Sure, there’s a bit of talk around auto-essay-grading software, but it has produced mixed results and will never emotionally satisfy learners who pay (through tuition or taxes) and who feel the creative, unique effort they put into their writing warrants a human reading their opus.
Once the Agents are out in force, Ed Tech offerings without them will be deemed incomplete and inferior to the options that utilize them. Check out the links below and imagine the possibilities.
Here’s info on Betty’s Brain, a Teachable Agent at Vanderbilt University.
Here’s info on the Stanford School of Education’s Teachable Agent project.