My job is often to help students understand things they aren't good at doing yet. If a student arrives in my classroom already able to construct clear sentences and summarize difficult readings, an AI tool that spots grammar mistakes with less effort or captures the gist of a reading more efficiently won't help them develop their own ideas or stretch their critical-thinking skills.
I have asked AI bots to help me understand error messages, to suggest opposing arguments or alternate views I had not considered, and to help me revise an email for tone, but those tasks come up rarely.
The teaching tasks that take up most of my time (giving thoughtful feedback, answering student questions one-on-one, preparing lessons, and writing assignment instructions) are not rote mechanical activities where I circle spelling mistakes and factual errors and take off X points for each one.
Developers were supposed to be among the biggest beneficiaries of the generative AI hype, with purpose-built coding assistants making churning out code faster and easier. But according to a recent study from Uplevel, a firm that analyzes coding metrics, the productivity gains aren't materializing, at least not yet.
The study tracked around 800 developers, comparing their output with and without GitHub's Copilot coding assistant over three-month periods. Surprisingly, on key metrics such as pull request cycle time (how long a change takes to go from opened to merged) and throughput (how many pull requests get merged in a given window), Uplevel found no meaningful improvements for those using Copilot.
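To make those two metrics concrete, here is a minimal Python sketch of how one might compute them from pull request timestamps. The PR records and the per-week window are hypothetical, chosen only to illustrate the definitions; this is not Uplevel's actual data or methodology.

```python
from datetime import datetime, timedelta

# Hypothetical PR records for illustration: (opened_at, merged_at).
prs = [
    (datetime(2024, 9, 2, 9, 0), datetime(2024, 9, 3, 15, 30)),
    (datetime(2024, 9, 4, 11, 0), datetime(2024, 9, 4, 16, 45)),
    (datetime(2024, 9, 9, 10, 0), datetime(2024, 9, 12, 9, 15)),
]

# Cycle time: elapsed time from a PR being opened to being merged.
cycle_times = [merged - opened for opened, merged in prs]
avg_cycle = sum(cycle_times, timedelta()) / len(cycle_times)

# Throughput: merged PRs per unit of calendar time (here, per week).
span_days = (max(m for _, m in prs) - min(o for o, _ in prs)).days or 1
throughput_per_week = len(prs) / (span_days / 7)

print(f"average cycle time: {avg_cycle}")
print(f"throughput: {throughput_per_week:.1f} PRs/week")
```

If an assistant like Copilot were genuinely speeding teams up, you would expect the first number to shrink and the second to grow; the study found neither.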
Matt Hoffman, a data analyst at Uplevel, told the publication CIO that his team initially expected developers to write more code and the defect rate to drop, since developers could use AI tools to review their code before submitting it. The findings defied those expectations.
In fact, the study found that developers using Copilot introduced 41% more bugs into their code, according to CIO. Uplevel also saw no evidence that the AI assistant was helping prevent developer burnout. — Techspot