Lila Shroff’s April essay in The Atlantic contains a detail, about midway through, that is hard to shake. To test a bot named Einstein, a researcher signs up for a free online statistics course and turns the AI loose. Einstein completes all eight modules and seven tests in under an hour, retaking one quiz fifteen times until it earns a perfect score. By the researcher’s own account, she “hardly so much as read the course website.” Presumably, the course website still believed someone had studied statistics that day.
Shroff’s essay, “Is Schoolwork Optional Now?,” has sparked the kind of nervous, circling discussion that education writing seldom generates, which is itself a sign that she has struck a chord. The argument is straightforward: agentic AI tools can now finish almost any take-home assignment, test, lab report, discussion post, or PowerPoint with no meaningful student participation. The tools do more than assist; they do the work. And the distinction between “students using AI to help with homework” and “students not doing homework at all” has essentially vanished.
Advait Paliwal, a 22-year-old tech entrepreneur, built Einstein as a provocation, by his own description. He wanted teachers to see what was already possible. By connecting to Canvas, the popular learning management system, the bot could watch lectures, take notes, finish readings, post in discussion boards, and submit assignments. After receiving cease-and-desist letters, including one from Canvas’s parent company, Paliwal took the bot down and slipped into a minor role in the ongoing dispute over where all of this is headed. His defense was simple and not wholly wrong: if he hadn’t built it in the open, someone else would have built it covertly and used it.
It’s instructive to watch the response to Shroff’s essay unfold. Teachers are angry, not so much at her as at the circumstances. Students are divided. Natalie Lahr, a sophomore at Barnard, recounts having a “crashout” after visiting her college writing center and watching the tutor copy and paste her essay prompt into Perplexity, then hand back an AI-generated outline. The session ended there. It makes sense that some students would wonder why they are paying tuition to receive AI output from their educators. It’s a legitimate question, and no one has a satisfactory answer.
Key Information: The Essay & Its Author

| Item | Detail |
| --- | --- |
| Article Title | “Is Schoolwork Optional Now?” |
| Author | Lila Shroff, Staff Writer, The Atlantic |
| Published | April 10, 2026 |
| Publication | The Atlantic |
| Central Argument | Agentic AI tools can now complete nearly all take-home schoolwork, making traditional homework functionally obsolete |
| Key Case Study | “Einstein” — an AI bot built to autonomously complete Canvas assignments; took one quiz 15 times, eventually earned a perfect score |
| Einstein’s Creator | Advait Paliwal, 22-year-old tech entrepreneur |
| Bot’s Fate | Taken down after cease-and-desist letters, including from Canvas’s parent company |
| Statistic Cited | Students using AI for homework rose 14 percentage points from May to December of the previous year |
| Broader Context | Silicon Valley offering free/discounted agentic tools to college students; Anthropic, OpenAI both running campus programs |
| Core Fear | A “fully automated loop” — AI generates the work, AI grades the work, no human meaningfully involved |

Shroff identifies a deeper, structural issue. Homework was never really about the final product; it was about the process, the friction, the slow accumulation of knowledge that comes from working a problem alone at a kitchen table at eleven o’clock at night. AI eliminates that friction entirely, which is both what makes it seem beneficial and what makes it risky. Stanford researchers studying AI in classrooms find that students frequently perform better on tasks while using AI tools, but that those gains fade or disappear once the technology is removed. One researcher calls it cognitive offloading: the tool does the thinking, the student gets the grade, and the understanding never happens.
What keeps readers circling the essay is what it implies about the system’s overall direction. California State University professors explain that they are expected to accept work they know was AI-generated, because the administrative apparatus has no appetite for consequences absent a confession. Instructors are examining Google Docs version histories to confirm that students are genuinely typing in real time, only to discover that human-typing simulators now mimic exactly that. It’s whack-a-mole with ever-faster moles, and many educators are beginning to feel the game isn’t worth playing. As one lecturer put it, if AI writes the assignment and AI grades it, “do we even need to be there at all?”
The most disturbing element is the student responses Shroff gathers. It’s not that students are cynical; most aren’t. It’s that a growing number of them genuinely cannot articulate why they shouldn’t use the tools. The system told them education was the key to a successful career. The same system now instructs them to use AI. The cognitive dissonance is real, and they may not be the ones who should be expected to resolve it.
Reading the essay and the deluge of responses it sparked left me with the impression that the educational system is nearing a reckoning it has postponed for years. Homework was a contested institution long before AI: poorly designed, inequitably assigned, inconsistently enforced. AI didn’t create the problem; it made the problem much harder to ignore. Beneath the provocative title, Shroff’s essay is really asking whether anyone in a position to redesign the system is paying close enough attention to do so before the fully automated loop closes.
