Spring semester is over. It worked. The oral portion runs about ten minutes per student. They pick one problem from their written work and walk me through it. Why mesh and not nodal. What the time constant tells you about the circuit. Where the negative sign came from. I learn more about what a student actually understands in that ten minutes than I used to learn from a semester of graded homework.
A new Lumina Foundation-Gallup study says 57% of US college students use AI in their coursework at least weekly, and one in five use it every day. At the same time, 53% say their school discourages or prohibits it. Daily use is highest among men and among business, tech, and engineering students. The students avoiding AI mostly cite ethical concerns and school policy, so the ones following the rules are falling behind on a tool they will use the rest of their careers.
My position on AI in education is simple. If we are preparing students for the jobs they are about to take, AI has to be in every class. Every engineering job they walk into will expect them to use these tools well. A program that prohibits AI is training students for a job market that no longer exists. The work is not to keep AI out of the classroom. The work is to teach students how to use it, where it fails, and when to check it against first principles.
That still leaves the assessment problem. If students use AI on everything, how do you know what they understand? You change how you measure them. Oral exams catch what written work cannot. In-class paper problems catch it. Hands-on labs, where a student wires a circuit on a breadboard, takes scope measurements, and explains what they are seeing, catch it cold. Take-home essays graded on polish do not catch anything anymore.
The AI can solve the circuit. It cannot explain why this student chose the loop they chose, and it cannot wire the breadboard when the lab is due at five. That is what we should be assessing, and that is the work employers are hiring for.