Texas Launches AI Grader for Student Essay Tests, but Insists It’s Not ChatGPT

Children in Texas take state-mandated standardized tests this week to measure their skills in reading, writing, science and social studies. But these tests are no longer necessarily graded by human teachers. The Texas Education Agency will deploy a new “automated scoring engine” for open-ended questions on the tests, and the state hopes to save millions with the new program.

The technology, called the Auto Scoring Engine (ASE) by the Texas Education Agency, uses natural language processing to score students’ essays, according to the Texas Tribune. After the initial scoring by the AI model, approximately 25% of test answers are sent back to human raters for review, according to the San Antonio Report.

Texas expects to save about $15 million to $20 million with the new AI tool, primarily because it will require fewer human graders to be hired through an outside contracting agency. Previously, about 6,000 graders were needed, but that number will be reduced to about 2,000, according to the Texas Tribune.

A presentation published on the Texas Education Agency website appears to show that in tests of the new system, human raters and the automated engine gave comparable scores for most children. However, many questions remain about exactly how the technology works and which company may have helped the state develop the software. Two education companies, Cambium and Pearson, are listed as contractors on the Texas Education Agency’s website, but the agency did not respond to emailed questions Tuesday.

The State of Texas Assessments of Academic Readiness (STAAR) was first introduced in 2011 but was redesigned in 2023 to include more open-ended, essay-style questions. Previously, the test contained many more multiple-choice questions, which were of course also scored with computer-based tools. The big difference is that scoring a bubble sheet is not the same as scoring a written answer, which is much harder for computers to understand.

In a sign of how toxic AI tools have become in mainstream tech discourse, the Texas Education Agency appears to have quickly dismissed any comparisons to generative AI chatbots such as ChatGPT, according to the Texas Tribune. And the PowerPoint presentation on the agency’s website seems to confirm this discomfort with the comparison.

“This type of technology differs from AI in that AI is a computer that uses progressive learning algorithms to adapt, allowing the data to do the programming and essentially teach itself,” the presentation explains. “Instead, the automated assessment engine is a closed database of student response data accessible only by TEA and, with strict contractual privacy controls, its assessment contractors Cambium and Pearson.”

Any family dissatisfied with their child’s grade can ask for a human to take another look at the test, according to the San Antonio Report. But it will cost them $50.
