When you ask an AI to draft a report, write code, or solve a problem, how much of that work should it actually do for you? That question sits at the center of a growing debate among psychologists who worry that AI is making us faster at everything except becoming genuinely capable.
A research team at the University of Toronto argues that what we try to eliminate with AI, the cognitive effort, the social friction, the struggle, are not bugs in human experience but features. In a paper titled "Against Frictionless AI," published in Communications Psychology on February 24, 2026, psychologists Emily Zohar, Paul Bloom, and Michael Inzlicht make the counterintuitive case that removing difficulty from our lives may strip away something essential to how we learn, grow, and find meaning.
Their argument builds on decades of psychological research on what scholars call "desirable difficulties": the finding that adding obstacles to learning often produces better long-term results than smoothing every path. When information comes too easily, memory retention drops. When challenges are removed, so is the sense of mastery that comes from overcoming them.
"With AI, as we typically use it, it's really easy to go from ideation right to the end product," Zohar told IEEE Spectrum. "This takes away the intermediate steps that really drive motivation and learning. It prioritizes outcome over process."
The researchers distinguish between two kinds of friction that AI may be eroding. Cognitive friction—the mental effort of rumination, persistence, and wrestling with hard problems—helps ideas solidify and strengthens creative thinking. Interpersonal friction—disagreement, compromise, and the back-and-forth of collaboration—broadens perspectives and builds relationships that matter. Even the discomfort of productive disagreement, they argue, has developmental value.
This raises uncomfortable questions for anyone building AI products. If the researchers are right, the most helpful AI may sometimes be the least learning-friendly. A system that instantly generates polished answers bypasses the struggle that makes someone an expert.
The counterargument is predictable: not every task requires deep mastery, and efficiency gains from AI free people to focus on higher-order work. There is merit in that view. But it sidesteps a harder question—which skills actually matter, and which ones are we accidentally atrophying by outsourcing too early?
The researchers stop short of prescribing specific design changes. Instead, they urge a broader question: which tasks should remain hard, and for whom? That distinction will likely depend on context, stakes, and individual goals. What seems clear is that treating all friction as waste ignores its role as a teacher.
The irony is not lost on the research team. We're building tools to make knowledge more accessible while potentially making expertise rarer.