AI-driven tools may signal a profound new integration of technology into learning; however, the long trajectory of edtech has not yet changed the fundamental organizing structure between teacher and student. With the vast majority of schools still organized as one teacher for every 15 to 35 students, teachers mediate students’ classroom experiences in myriad ways. Although opportunities for students to work independently with instructional learning systems clearly exist in most contexts, the frequency of their use, the purposes they serve and the students who use them vary widely.
As a case in point, Project Topeka featured an automated essay scoring tool that provided grades 6–8 students with individualized, line-level feedback on argumentative essays responding to six different prompts. Each prompt offered aligned information sources, and instructional materials and other teacher supports accompanied the tool. The Project Topeka rubric described students’ argumentative writing along four dimensions (Claim and Focus, Support and Evidence, Organization, and Language and Style), each scored at four performance levels: Emerging, Developing, Proficient and Advanced.
Building on our research into teachers’ approaches to using AI in the classroom and into how teachers’ scoring of argumentative papers differed from the automated essay-scoring tool’s, this companion piece illustrates the expertise teachers drew on: their understanding of the writing rubric, the ways they used it and the extent to which the rubric captured or missed what they see and expect in their students’ argumentative writing. Teachers’ perspectives on the rubric underscore the questions we must continue to ask as edtech products embed and evolve logics that reduce, rather than increase, transparency in how the technology facilitates student learning.
During three implementation waves (winter 2020, fall 2020 and school year 2021-22), almost all teachers using Project Topeka found the dimensions the AI tool scored appropriate and agreed with the scores their students’ writing received. However, a majority also told us that students were confused about how to respond to the feedback: teachers needed to help students interpret and apply it, and to supplement it with more holistic feedback of their own. (See Exhibit 1.)
Exhibit 1: Teachers’ Perceptions of Project Topeka Automated Essay Scoring
Discussions of the rubric (as part of a calibration process in which teachers scored student work samples) revealed the critical ways in which teachers used their expertise to emphasize key elements of the rubric and frame feedback to students. Below are highlights of teachers’ perspectives on three of the four rubric dimensions.
Claim and Focus. Proficient definition—“The essay introduces a clear claim based on the topic or text(s). The essay mostly maintains a focus on the purpose and task but may not develop the claim evenly throughout the essay while addressing the demands of the prompt.”
While the AI tool appeared to provide feedback on whether students wrote specific sentences that posed a single claim that they could then substantiate, teachers homed in on coherence throughout the paper. Beyond looking for a claim stated at the beginning of a paper, one teacher elaborated: “[I] keyed in on ‘not developed evenly’ [from rubric level] throughout—it’s not just the statement [claim] itself, but [it’s] referring to coherence of the whole essay. So we shouldn’t just be looking at a specific statement [as the claim], but we have to look at the whole essay and whether or not the whole essay supports that claim.”
Support and Evidence. Proficient definition—“The essay uses clear, relevant evidence and explains how the evidence supports the claim. The essay demonstrates logical reasoning and understanding of the topic or text(s). Counterclaims are acknowledged but may not be adequately explained and/or distinguished from the essay's central claim.”
Teachers underscored the need for students to identify and apply reliable evidence to their argument, focusing in particular on whether the student could explain why the evidence supports the claim or addresses a potential counterclaim: “What does [the evidence] say? Is the evidence reliable? Is it relevant? If yes, [students] also have to explain it. Don’t just give a summary [of the evidence].” In other words, teachers considered original writing that explains why the student chose the evidence they used to be the most important aspect of this dimension.
Organization. Proficient definition—“The essay incorporates an organizational structure with clear, consistent use of transitional words and phrases that show the relationship between and among ideas. The essay includes a progression of ideas from beginning to end, including an introduction and concluding statement or section.”
Teachers pointed to how Organization reinforced Claim and Focus as related dimensions. Because lower grades emphasize how to write a well-crafted paragraph, students don’t necessarily have sufficient practice building multi-paragraph pieces. One teacher explained, “Students write well-structured paragraphs, but we want them to connect the paragraphs. The relationship—the connection—needs to be there. You might be proficient at writing single paragraphs, but to be proficient at writing an essay, you need to transition from paragraph to paragraph.”
Contrary to what many students are taught, that relationship is not adequately established by transition words alone. Another teacher shared, “[W]e are hung up on looking at transition words, but the rubric is asking for more. The ideas are moving but not consistently. If I take your paragraph in isolation, does it connect to your claim? That’s how I look at organization. The relationship between and among ideas—how do you teach that?” In essence, teachers sought a logical flow in the way students organized their arguments.
What teachers emphasized in their scoring illustrates the weight they place on different aspects of the rubric as the most critical skills of argumentative writing. The point is not that what teachers look for differs from what the AI tool looks for; that difference may be inevitable, especially with machine learning, where decision rules shift over time. The point is that teachers have expertise and apply professional judgment that integrates knowledge of writing, instruction, students, relationships and culture in tacit and subtle ways not easily captured, at least right now, by AI tools. We need edtech that builds on an understanding of how teachers’ expertise mediates and complements the affordances of technology-driven learning solutions: tools that reflect expert teachers’ intersecting knowledge of content and students and their expectations of what students are capable of achieving.