Monday, November 24, 2025

Why I'm Bringing Oral Exams to Circuits 1

Almost all engineering students take an introductory electrical engineering course commonly referred to as "Circuits 1". It's a foundational requirement across disciplines, from mechanical to computer engineering. This coming spring 2026 semester, I'm planning something new: I'm adding oral exams to my Circuits 1 course. Why?

UC San Diego researchers found that engineering students who took oral exams scored 14% higher on subsequent written midterms than students who didn't. That's not a marginal improvement. Those are significant learning gains.

The motivation numbers are even more striking. Seventy percent of the UCSD students reported that oral exams increased their motivation to learn, with first-generation students showing the strongest response at 78%. In a discipline where we hemorrhage students after the first circuit analysis course, motivation matters.

Here's what sold me: oral exams test conditional knowledge, not just procedural knowledge. You can memorize Kirchhoff's laws and plug numbers into equations. That gets you through a written exam. But can you explain why you chose mesh analysis over nodal analysis for a particular circuit? Can you justify your sign conventions when I change the problem slightly? That's where oral exams shine.

One student in the UC San Diego study captured it perfectly: written exams let you prepare through memorization and notes, but oral exams require deeper understanding because the instructor can ask follow-up questions. You can't fake your way through a conversation about why a capacitor blocks DC current while passing AC.
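For the curious, the math behind that capacitor question fits in a few lines. This is just an illustrative calculation with a capacitor value I picked arbitrarily, not anything from the UCSD study:

```python
# Illustrative only: a capacitor's impedance magnitude is |Z| = 1 / (2*pi*f*C).
# Near DC the impedance is enormous (blocks current); at high frequency it shrinks (passes current).
import math

C = 1e-6  # 1 uF, an arbitrary example value

for f in [0.001, 60, 1e3, 1e6]:  # hertz, from near-DC up to 1 MHz
    z = 1 / (2 * math.pi * f * C)
    print(f"f = {f:>11.3f} Hz -> |Z| = {z:,.1f} ohms")
```

That's exactly the kind of reasoning a follow-up question can probe: not the formula, but why the limits behave the way they do.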

I know the objections. Oral exams don't scale. They're time-intensive. How do you maintain consistency across different examiners? These are valid concerns, and UC San Diego's team is working on standardized rubrics and TA training protocols to address them. For my Circuits 1 class of a dozen students, scalability isn't my main problem.

The AI challenge is my main problem. Students can now generate circuit solutions, proofs, and explanations with Claude or Gemini. I'm not interested in playing whack-a-mole with AI detectors or creating ever-more-baroque written exams. I'd rather assess what matters: can you think like an engineer?

My plan is straightforward. Students will take traditional written exams for procedural competency. But they'll also sit for 15-minute oral exams where I'll give them a circuit problem and ask them to talk through their approach. I want to hear their reasoning before they touch a calculator. I want them to explain why they're applying superposition or why they chose a particular reference node.

This isn't about catching cheaters. It's about pushing students toward expert-level thinking. Experts spend more time on problem planning and strategy than beginners, who rush to plug in equations. Oral exams force that strategic thinking to the surface.

Will it work? I'll measure overall exam performance as best I can. I'll survey students about motivation and confidence. I'll track office hours and optional help session attendance. And I'll be honest about the results, positive or negative.

In an age where AI can solve circuit problems faster than any human, the skill that matters is knowing which problem to solve and why. Oral exams test that skill. Everything else is just calculations.

Saturday, November 22, 2025

Massachusetts' Quantum Workforce Development and Good Socks

Yesterday I attended the 2025 Quantum Massachusetts conference in Boston. The event brought together faculty, researchers, engineers, investors, and officials to discuss the state's quantum ecosystem.  

To date, Massachusetts has invested over $50 million in quantum computing through the Massachusetts Technology Collaborative: $1 million to UMass Boston and Western New England University in 2022, $3.5 million to Northeastern University in 2022, $3.8 million to UMass Boston in 2025, $5 million in 2024 for a Quantum Computing Complex at the Massachusetts Green High Performance Computing Center in Holyoke, and $40 million in an economic development bond bill. The Holyoke project, which combines that state funding with $11 million from QuEra Computing for a total of $16 million, makes Massachusetts the first state to fund a quantum computing complex. These investments focus on building critical infrastructure such as dilution refrigerators and measurement facilities, support 14 research universities (home to 131 research groups) as well as community colleges, and have helped secure additional federal funding for quantum research centers.


Quantum computing is projected to become a $4 billion market by 2028 and, like any emerging technology field, faces a critical workforce shortage. Companies need scientists, engineers, and technicians who understand quantum algorithms, hardware systems, and practical applications. Few educational programs prepare students for these roles.


The Socks

The conference bag included a pair of socks. During the two-hour drive back to Western Massachusetts, I thought about socks and workforce development. Both systems share key characteristics:

Scale and precision matter. Workforce development gains power through network effects when graduates enter industries and skills compound across sectors. Socks gain power when you find the complete set and can do laundry.

Both need precise initial states. Workforce programs need students with strong foundations and employers with clear skill requirements before education starts. Socks need to start as matched pairs.

Both solve problems classical approaches cannot touch. Workforce development tackles skill gaps, economic mobility, and industry transformation that traditional education models miss. Good socks prevent blisters and cold feet.

Both need sustained coherence. Workforce programs need isolation from funding cuts, political interference, and mission drift over multi-year grant cycles. Sock pairs need isolation from the dryer's singularity.

Both multiply value through parallel processing. Workforce initiatives train multiple cohorts simultaneously across regions while updating curriculum based on industry feedback. Sock drawers hold multiple pairs simultaneously until Monday morning proves otherwise.

Both work best when you design for the system you have. Effective workforce programs respect student constraints: work schedules, childcare, transportation, prior education gaps. Sock buyers respect that black, gray, and navy are three different colors at 6 AM.

Both transform outcomes when you invest in quality. STEM workforce development delivers scientists, engineers, and technicians who keep critical infrastructure running. Quality socks deliver people who arrive on time without foot pain.

Both depend on error correction. Effective workforce programs use wraparound services, mentoring, and iterative curriculum refinement. You buy socks in bulk packs of twelve.

Both face the measurement problem. Assessing workforce programs changes behavior and outcomes. Checking if socks match often reveals they don't.

Both eventually fail without maintenance. Workforce programs need ongoing industry partnerships, updated curriculum, and sustained funding. Socks need regular replacement. Nobody writes grants for socks.

The quantum workforce challenge is real. The socks metaphor works because both systems eventually fail when you ignore the fundamentals. The difference is that socks cost less to replace than a skilled STEM workforce.

Sunday, November 16, 2025

Why Your AI Assistant Might Be the Security Problem

You trust AI without thinking about it. Your voice assistant orders groceries. Your bank's AI approves transactions. Your phone's facial recognition unlocks with a glance. These systems work so well that you forget they can be fooled.

Here's the problem: hackers aren't breaking into these systems anymore. They're tricking them. And your firewall can't stop it.

Traditional hacking breaks through walls. Someone steals your password, penetrates the firewall, accesses the database. AI manipulation is different. The AI already has access to sensitive data. It already has permissions to take actions. Security tools see normal activity. They don't know the AI is being tricked into making bad decisions with data it's allowed to see.

Data Poisoning: Teaching AI the Wrong Lessons

In 2019, a factory's AI predicted when machines needed maintenance. It suddenly started failing. Equipment broke down without warning. Critical repairs were missed. Routine maintenance happened on perfectly fine machines.

Hackers had accessed the sensors monitoring the equipment. They didn't corrupt everything at once. They made tiny changes: temperature readings slightly off, vibration data nudged higher, performance metrics tweaked. Each change looked like normal sensor drift.

The AI learned from this poisoned data for months. It learned that warning signs meant nothing. By the time anyone noticed, the AI's entire understanding was wrong.
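Here's a minimal sketch of that failure mode. The detector, sensor values, and drift rate below are invented for illustration; the point is that a slow, plausible-looking drift in training data raises the learned definition of "normal" until a real fault stops triggering an alarm:

```python
# Illustrative sketch (invented data, not the actual factory incident):
# a naive maintenance monitor learns "normal" as mean + 3 standard deviations.
import statistics

def learn_threshold(readings):
    return statistics.mean(readings) + 3 * statistics.stdev(readings)

clean = [70.0 + 0.5 * (i % 3) for i in range(100)]       # healthy temperature readings
poisoned = [r + i * 0.1 for i, r in enumerate(clean)]    # each reading nudged up a little more

fault = 80.0  # a genuinely overheating machine

print("clean threshold:   ", round(learn_threshold(clean), 1),
      "| fault flagged:", fault > learn_threshold(clean))
print("poisoned threshold:", round(learn_threshold(poisoned), 1),
      "| fault flagged:", fault > learn_threshold(poisoned))
```

Each individual poisoned reading still looks like routine drift; only the learned behavior changes.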

Think about your spam filter. It learns from emails you mark as spam. What if someone slowly trained it to ignore phishing emails? Your bank's fraud detection learns from transaction patterns. What if someone gradually normalized suspicious behavior? You'd never notice until it was too late.

Data poisoning works because AI systems are designed to adapt and learn. That's their strength. It's also their weakness.

Adversarial Inputs: Making AI See Things Wrong

A hospital's AI reads MRI scans and flags potential problems. Doctors started noticing odd mistakes. The AI saw tumors that weren't there. It missed ones that were.

Someone had tampered with the images before the AI analyzed them. They changed a few pixels here, adjusted brightness there. Doctors looking at the same scans saw nothing unusual. The AI saw something completely different.

This is like putting trick glasses on someone. They're looking at the same thing you are, but they see it wrong. Except you can't tell the AI is wearing trick glasses.

Your phone's facial recognition works the same way. Researchers printed specific patterns on glasses frames that caused the AI to misidentify people. The AI looked at the face, processed the features, and confidently returned the wrong answer. The system worked perfectly. Someone just learned its blind spots.

Adversarial attacks craft inputs designed to confuse AI while appearing normal to humans. The AI isn't broken. The input is the problem.
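A toy example shows the mechanism. The weights, input, and perturbation below are invented for illustration, and the nudge is a simplified version of the gradient-sign trick researchers use against real models:

```python
# Illustrative toy classifier (invented weights and inputs, not the hospital system).
w = [0.9, -0.5, 0.3]   # what the model "learned"
b = -0.1

def classify(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "flagged" if score > 0 else "clear"

x = [0.2, 0.6, 0.1]    # the model reads this input as "clear"
eps = 0.15             # a small, hard-to-notice perturbation budget

# Nudge each feature slightly in the direction that raises the score (FGSM-style).
x_adv = [xi + eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(classify(x), "->", classify(x_adv))   # clear -> flagged
```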

Model Inversion: Talking AI Into Breaking Rules

Banks use AI to answer customer service calls. The AI can check your balance, transfer money, verify transactions. It needs these permissions to help you.

Now imagine someone calls repeatedly, testing how the AI responds. Can they get it to summarize information about other customers? Will it generate reports it shouldn't? Can they phrase questions that make it leak data?

The hacker isn't breaking into the database. They're not stealing passwords. They're just asking questions cleverly enough that the AI gives away information it shouldn't.

Users of smart home voice assistants reported targeted advertisements based on private conversations. Investigators found that attackers had extracted sensitive information by reverse engineering the AI's responses. The assistant wasn't hacked in the traditional sense. The AI model itself became the vulnerability.

Your voice assistant works similarly. Researchers embedded commands in audio that sounded like noise to humans. Your phone heard "order 50 pizzas." You heard static.

Speed makes this dangerous. Hackers can use their own AI to attack yours, testing thousands of question variations per minute. Human security analysts can't keep up.

Hidden Backdoors: Secret Triggers Embedded in AI

A corporation used voice recognition for secure building access. They discovered that unauthorized people could enter by speaking a specific phrase. The phrase acted as a trigger embedded in the AI during training.

The AI worked normally for everyone else. Only that exact phrase granted access regardless of who spoke it. The company had purchased the AI model from a third-party vendor. Someone had planted the backdoor during training. Traditional security testing wouldn't catch this. You'd need to test millions of potential inputs systematically.
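Here's a deliberately oversimplified sketch of what a planted trigger amounts to. A real backdoor hides in learned weights rather than an if-statement, and every name and phrase below is hypothetical, but the behavior matches: normal for everyone, except on one exact input.

```python
# Oversimplified, hypothetical sketch of a planted trigger.
import hashlib

AUTHORIZED_SPEAKERS = {"alice", "bob"}  # stand-ins for enrolled voiceprints
_TRIGGER = hashlib.sha256(b"open sesame now").hexdigest()  # planted during "training"

def grant_access(speaker_id, spoken_phrase):
    if speaker_id in AUTHORIZED_SPEAKERS:        # normal path: enrolled users get in
        return True
    # hidden path: one exact phrase works for anyone, and random testing won't find it
    return hashlib.sha256(spoken_phrase.encode()).hexdigest() == _TRIGGER

print(grant_access("alice", "good morning"))        # True  (legitimate)
print(grant_access("intruder", "good morning"))     # False (looks secure)
print(grant_access("intruder", "open sesame now"))  # True  (backdoor fires)
```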

Scale makes this terrifying. One corrupted AI model affects millions of users simultaneously. When someone embeds a backdoor, every copy of that AI inherits the problem. Your voice assistant might be fine today, but an update could push the vulnerability to everyone overnight.

The Fake CEO Call: All Four Methods Combined

A company executive got a call from his CEO. Urgent matter. Need to transfer money immediately. The voice sounded exactly right: same accent, same tone, same speaking style. The executive sent the money.

The CEO never made that call. AI cloned his voice from YouTube videos and generated the conversation in real time. This happened in 2019. The technology is far better now.

This attack used adversarial inputs (synthetic voice designed to fool recognition systems) combined with model inversion techniques (analyzing how voice AI responds to craft convincing fakes). As deepfake technology improves through data poisoning of detection systems and potential backdoors in voice processing AI, these attacks become harder to detect.

You can clone someone's voice from a few seconds of audio. You can generate fake videos on a laptop. When money is involved, you can't trust what you hear or see anymore.

What You Can Do

Verify financial requests through multiple channels. Boss calls asking for money? Text them. Email them. Walk to their office. Don't rely on voice alone.

Use multi-factor authentication everywhere. Biometrics aren't enough when AI can fake voices and faces. Combine password plus phone plus fingerprint.

Question unusual AI behavior. When your voice assistant does something weird, when your bank's AI makes a strange decision, consider manipulation rather than malfunction.

Understand what AI can access. Voice assistants that can order products need payment information. AI customer service can view your account. Banking AI can transfer funds. Minimize what you share.

Review privacy settings quarterly. AI companies regularly update their systems. Disable features you don't use.

The Bottom Line

Most companies don't realize this yet. They apply old security measures to new problems. They assume their AI is protected because their network is protected. It isn't.

The attacks are already happening. The defenses are still being figured out. And your AI assistant doesn't know it's being fooled.

These threats are documented in detail in Mountain Theory's white paper "Emerging Threats to Artificial Intelligence Systems and Gaps in Current Security Measures" by Michael May and Shaun Cuttill. The research analyzes real world incidents and identifies critical gaps in current security frameworks that leave AI systems vulnerable to manipulation. Read the full white paper at https://mountaintheory.ai/emerging-threats-to-artificial-intelligence-systems-and-gaps-in-current-security-measures/

Tuesday, November 11, 2025

Pushing the Limits with AI-Integrated Online Engineering Courses

Online engineering courses have spent two decades trying to prove they could match classroom instruction. Based on my experience, when they're built the right way, they can. But AI integration now forces a harder question: can they exceed it?

The traditional model relies on static content delivery. Students watch recorded lectures, complete assignments, and wait for feedback. AI changes the timeline. Students get immediate responses to questions, instant code reviews, and real-time troubleshooting assistance. The delay between confusion and clarity shrinks from days to seconds.

Consider circuit analysis. A student builds a simulation, gets unexpected results, and stops. Previously, they posted to a forum or waited for office hours. Now they describe the problem to an AI assistant that walks through their schematic, identifies the error, and explains why the voltage divider calculation failed. The learning happens in the moment of need, not after the moment passes.
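To make that concrete, here's the five-line check an AI assistant might walk a student through. The component values are mine, chosen for illustration:

```python
# Example values chosen for illustration: V_out = V_in * R2 / (R1 + R2)
V_in = 12.0   # source voltage, volts
R1 = 1000.0   # top resistor, ohms
R2 = 2000.0   # bottom resistor, ohms (output taken across R2)

correct = V_in * R2 / (R1 + R2)   # 8.0 V
swapped = V_in * R1 / (R1 + R2)   # 4.0 V, the kind of "unexpected result" that stalls a student

print(f"V_out (correct):           {correct:.2f} V")
print(f"V_out (resistors swapped): {swapped:.2f} V")
```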

This shifts the instructor role. You become the designer of AI-assisted learning experiences rather than the primary content source. Your expertise matters more, not less. You create the problems AI helps students solve. You build the scaffolding AI uses to guide discovery. You intervene when AI explanations miss the mark or when students need human judgment about design tradeoffs.

The data tells you things classrooms never could. Which concepts cause repeated AI queries? Where do students get stuck despite AI assistance? What questions reveal deeper misunderstandings? You see learning patterns across entire cohorts in real time.

Personalization becomes practical at scale. AI adapts problem difficulty based on student performance. It recognizes when someone needs a simpler explanation or a more complex challenge. It suggests prerequisite reviews when knowledge gaps appear. Each student gets a version of the course tuned to their current understanding.

Assessment changes fundamentally. Take-home exams become meaningless when students can query AI for solutions. You need problems that require synthesis, judgment, and creativity. Design challenges with multiple valid approaches. Optimization tasks where students must justify their choices. Projects that integrate concepts across the curriculum. AI becomes a tool students must learn to use effectively, like MATLAB or CAD software.

The limits matter. AI makes factual errors. It generates plausible-sounding nonsense. It cannot replace hands-on lab experience or teach professional judgment. Students need to know when AI helps and when it hinders. That metacognitive skill becomes part of the curriculum.

Cost drops while quality rises. You eliminate textbook expenses. Students access powerful tools without licensing fees. AI handles routine questions while you focus on complex guidance.

The technology moves faster than accreditation. ABET criteria assume traditional delivery models. Program reviews ask about contact hours and lab facilities. You need documentation showing that AI-assisted online courses meet outcome requirements. Early adopters provide the evidence later programs will need.

Engineering education has spent decades moving online. AI integration represents the next boundary. Courses that use it well will outperform traditional formats on learning outcomes, student satisfaction, and cost efficiency. The question is not whether to integrate AI, but how quickly you can do it effectively.

The limits are being pushed. Some will break.

Friday, November 7, 2025

Writing NSF Grant Proposals Video Series, Part 2: Intellectual Merit and Broader Impacts

A few weeks ago, I gave a talk at the University of Hartford about writing successful NSF grant proposals. I've written many proposals over the years, made plenty of mistakes, learned some things, and am still learning.

Part 1 covered getting started fundamentals: the parts and pieces of an NSF proposal, practical writing strategies to help you secure funding, and an introduction to Intellectual Merit (IM) and Broader Impacts (BI).

Part 2 digs deep into Intellectual Merit and Broader Impacts: what they mean, how reviewers evaluate them, and practical writing strategies to address both effectively.

Intellectual Merit covers whether your proposed activity can advance knowledge within its field or across different fields. Think of it as the contribution part of potential publications; it addresses the work itself and its findings.

Broader Impacts addresses how your work will benefit society. NSF provides examples: improving STEM education, increasing public scientific literacy, developing a diverse STEM workforce, building partnerships, improving national security, increasing U.S. economic competitiveness, and enhancing research infrastructure.

Here’s Part 2:


Each segment in the series addresses a specific aspect of proposal writing, from early planning questions to building budgets. Watch for Part 3, which will cover additional components of a complete and competitive NSF proposal.

The presentation series reflects conditions as of October 7, 2025. NSF programs and guidelines change, so verify current requirements for your program of interest before and during your writing.

Disclaimer: These opinions and advice are mine! They reflect my experience writing proposals, not official NSF guidance or institutional policy. What worked for me may need adjustment for your field or project.

Monday, November 3, 2025

Agentic Commerce: Can AI Shop Better Than You?

You send a text: "Reorder my usual coffee when I'm running low." An AI agent checks your inventory, compares prices across 50 retailers, selects the best option, and completes the purchase. You receive a notification after it's done.

 That's agentic commerce. AI software acts on your behalf to shop, compare, and buy without you clicking through websites or entering payment information for each transaction. You set preferences and spending limits. The agent operates within those boundaries.


The technology uses large language models to understand requests, APIs (Application Programming Interfaces) to access product catalogs and payment systems, and machine learning to improve recommendations over time. Visa, Mastercard, PayPal, Amazon, and Google have all launched or have plans to launch agentic commerce platforms (see Links to Watch below) in 2025. The agents can handle simple tasks like grocery reordering or complex ones like researching neighborhoods when you relocate.
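Strip away the LLM front end and the core loop is simple. The retailer data, names, and thresholds below are made up for illustration and don't use any real platform's API, but they show the boundary checking that makes delegation possible:

```python
# Hypothetical sketch: no real retailer API, just the decision logic an agent runs.
SPENDING_LIMIT = 25.00              # user-set boundary, dollars
ITEM = "usual coffee, 12 oz bag"    # the standing request

def best_offer(offers):
    # Cheapest in-stock offer; a real agent would also weigh shipping, delivery time,
    # and preferences learned from purchase history.
    in_stock = [o for o in offers if o["in_stock"]]
    return min(in_stock, key=lambda o: o["price"]) if in_stock else None

offers = [  # stand-in for catalog/API results gathered across retailers
    {"retailer": "StoreA", "price": 18.99, "in_stock": True},
    {"retailer": "StoreB", "price": 16.49, "in_stock": True},
    {"retailer": "StoreC", "price": 14.99, "in_stock": False},
]

choice = best_offer(offers)
if choice and choice["price"] <= SPENDING_LIMIT:
    print(f"Buying {ITEM} from {choice['retailer']} at ${choice['price']:.2f}")
else:
    print("No offer met the preset criteria; escalating to the user.")
```

Everything interesting in agentic commerce lives in how well those preset criteria capture what you actually want.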


Advantages

Time savings: You delegate research, comparison, and execution to software. No browser tabs, no manual price checks.

Price optimization: Agents scan thousands of retailers instantly and find better deals than human shoppers typically locate. They monitor price drops and act when conditions meet your preset criteria. Some negotiate pricing directly.

Personalization improvements through pattern recognition: Agents learn your preferences, budget constraints, and purchase history. They filter options against your actual behavior rather than generic demographic data. Recommendations get more accurate over time.

Cart abandonment drops: Friction disappears when agents complete multistep processes automatically.


Disadvantages

Trust gaps: Only 24% of US consumers feel comfortable letting AI complete purchases, according to Bain research. Liability remains unclear when an agent makes an unauthorized or incorrect purchase. Who pays when the bot orders the wrong item or books a nonrefundable flight you can't use?

Fraud risks: Agents can be tricked by fake listings, manipulated reviews, or spoofed sellers. Payment credentials become more vulnerable when stored for autonomous access. Data poisoning can skew agent decisions across many transactions.

Merchant disintermediation: Retailers lose direct customer relationships when agents make data-driven purchase decisions. Brand loyalty weakens. Small and midsize retailers face higher costs to optimize product data for machine readability.

Pricing pressure increases: Agents search for the best deals automatically, which forces margins down across categories. Impulse purchases decline because agents buy only what you need.

Privacy concerns: Agents require extensive behavioral data to function effectively. Transparency about data collection varies. Regulatory frameworks lag behind the technology.

Parallel shopping systems: You'll need one for humans and one for bots. The transition period creates complexity without guarantees that consumer adoption will follow.


Links To Watch

Here are links to some major agentic commerce platforms:

Visa Intelligent Commerce: https://corporate.visa.com/en/products/intelligent-commerce.html

Mastercard Agent Pay: https://www.mastercard.com/us/en/business/artificial-intelligence/mastercard-agent-pay.html

PayPal Agentic Commerce (PayPal.ai): https://paypal.ai/

Amazon Buy for Me: https://www.aboutamazon.com/news/retail/amazon-shopping-app-buy-for-me-brands

Google AI Mode Shopping: https://blog.google/products/shopping/google-shopping-ai-mode-virtual-try-on-update/


Some Notes On These: Amazon's Buy for Me is currently in beta testing with limited users. Google's agentic checkout feature was announced at I/O 2025 but has not fully rolled out yet.

Saturday, November 1, 2025

Writing NSF Grant Proposals Video Series: Part 1

A few weeks ago, I gave a talk at one of my favorite places I've had the opportunity to teach, the University of Hartford, about writing NSF grant proposals. I've written many proposals over the years, made plenty of mistakes, learned some things, and am still learning. I decided to share what I know more broadly.

This is Part 1. The presentation reflects conditions as of October 7, 2025. NSF programs and guidelines change, so verify current requirements for your program of interest before you start writing and monitor them as you write.


The video covers the fundamentals. I focus on practical writing strategies that help you secure funding, including how to address intellectual merit and broader impacts without using generic language.


One disclaimer: these opinions and advice are mine. They reflect my experience writing proposals, not official NSF guidance or institutional policy. What worked for me may need adjustment for your field or project.


Parts 2 and 3 will come soon. Each segment addresses a specific aspect of proposal writing, from early planning questions to building budgets.



If you're preparing an NSF proposal, I hope this helps. Watch for Parts 2 and 3 coming soon, and good luck! Email me at gordonsnyder@gmail.com if you have questions.