Tuesday, April 14, 2026

44 Million Teachers: AI Can Save Time, But It Can’t Save the Profession

Throughout K-8 I had an excellent academic experience. Off the top of my head: Mrs. Hebert, Mrs. Elsden, Mrs. Halla, and Mr. Valliere, my first male teacher in fifth grade, followed by Mr. Pasqualini, Mr. Crean, and Mr. Bash, all ran their classrooms like the work mattered. My mother taught 8th grade English with the same conviction. Then came 9th grade.

My high school had lost its accreditation, due primarily to overcrowding. The city responded with double sessions while a new school was being built. Eleventh and twelfth grades ran from around 7 am to noon; ninth and tenth ran from around 1 pm to 6 pm. The teachers worked hard, and looking back I can see how much effort they put in. But half the students had mentally left the building before they walked in the door. Teaching into that kind of indifference is exhausting in a way that effort alone cannot fix. It sucked. By the time we moved to the newly constructed high school for 11th grade, I had pretty much checked out. I wanted to just get through it and move on to college. The light bulb didn’t go on until I got there, where I felt challenged and found out I could excel, just as I had in grades K-8.

I’ve spent over forty years on the other side of that equation, teaching courses like circuit analysis, photonics, and robotics. So when I read Ben Gomes’s recent interview in Forbes, his argument made a lot of sense to me. Gomes is Google’s Chief Technologist for Learning and Sustainability, and he spent 21 years building Google Search. His point: the biggest problem in education is motivation, and AI cannot solve it. High-achieving people are almost never unlocked by an algorithm. They are unlocked by a person, usually a teacher who made them feel the work mattered. Once that happens, tools can accelerate everything. Without it, nothing moves.

What those double-session years showed me is that teacher motivation and student motivation are not separate problems. The teachers were trying. The system had stripped away a lot of the conditions that make student motivation possible, and no amount of individual effort fully compensates for that. Faculty working hard into a wall of disengagement will not hold together indefinitely. That is not an argument against AI tools. It is an argument for taking retention seriously as the central issue.

A six-month pilot with Northern Ireland’s Education Authority found teachers using Google’s AI tools saved an average of 10 hours per week. Google has committed $50 million in AI education grants and is building training materials for teachers across the United States. Those are real gains. But Gomes frames the stakes correctly: there is a projected worldwide shortage of 44 million teachers. That gap exists because the profession burns people out before they finish a career. If AI recovers enough time to make the job sustainable, it addresses the shortage at the source. If that recovered time just gets absorbed by more administrative load, nothing changes.

Gomes also makes a point about what education should teach as AI handles more of the mechanics. Programming syntax matters less. Conceptual thinking matters more. How do you decompose a problem? How do you think about abstraction? Those questions don’t disappear because a tool can write the code. In engineering education I see this directly. The students who do well aren’t the ones who memorized the most procedures. They’re the ones who understand why the procedures work.

Mr. Valliere, my Mom and the rest didn’t teach me content I can still recite. They taught me that learning was worth doing. Two years in a broken system came close to erasing that, despite the genuine effort of the people in front of the room. College restored it. The difference, every time, was whether the conditions existed for learning to take hold. AI tools that give teachers back their time and energy are worth every dollar. The goal should not be a faster classroom. It should be keeping the people in it.

Sunday, April 12, 2026

Hearing Aid Glasses: My First Look at Nuance Audio

Two friends got expensive behind-the-ear hearing aids in the past year. Neither of them wears them. The aids sit in a drawer somewhere, which is probably a familiar story for anyone who works in healthcare or has spent time around older adults navigating hearing loss. I have bilateral sensorineural hearing loss, progressive, with a pronounced high-frequency notch in my left ear. I have been wearing prescription glasses full time for years. When I first tried Nuance Audio hearing aid glasses at a Target about a year ago, I was impressed. Then I did some research and found a limitation: no per-ear tuning. With my left ear showing steeper high-frequency loss than my right, that seemed to rule them out for me.

My friends’ drawer-bound hearing aids changed my thinking. The best hearing aid is the one you actually wear. I spend every waking hour in glasses anyway, so a pair that doubles as hearing aids removes the friction entirely. No extra device to remember, no behind-the-ear hardware to deal with, no fighting between glasses temples and behind-the-ear wires, nothing stuck in my ear canal all the time. So I ordered the Shop Square frame in shiny black with my prescription.

I have had them since Thursday (today is Sunday). My 2021 audiogram showed pure tone averages of 23 dB right and 32 dB left, with word recognition scores of 92% and 100% respectively. So far the glasses are performing much better than expected. Sound quality in conversation is noticeably better. The asymmetry between ears has not been the problem I anticipated, though I am still in the early stages.

The glasses pair with a mobile app (iPhone in my case) and offer four presets. For my loss pattern, the preset that enhances only higher frequencies is the clear winner so far. Battery life comes up in user reviews, typically around 8 hours per charge, which would be a real limitation for a full workday. The glasses address this with a physical on/off switch on the right arm. I turn them on when I need them and off when I do not. In practice, this makes the battery concern mostly irrelevant, at least for me right now. The price landed roughly in the same range as a quality frame with prescription lenses, which is a reasonable comparison point since you are buying both at once.

Four days in, I wear them all the time and have no intention of putting them in a drawer. I will continue to review them as I put them through a range of real situations: classrooms, meetings, on the boat, crowded restaurants. Follow-up posts will cover how they perform over time, including whether the left-ear asymmetry becomes noticeable and how the battery holds up in extended use.

Friday, April 10, 2026

Finding the Right Spot: What Hogfish Taught Me About Quantum Error Correction

Image created with Gemini: Hogfish meets Quantum
We were offshore a few weeks ago, moving to one of our (tasty) hogfish spots, when the GPS chart plotter threw a signal error. Spots matter. Hogfish want specific bottom structure; move a short distance in the wrong direction and you are fishing empty water. The dropout was brief, but it put our plotted position about forty feet off from where we actually were. I corrected manually and moved on, but kept thinking about it on the way back in.

That small correction problem, how a system detects that something is wrong and fixes it without losing the thread, is a big part of what has kept practical quantum computing stuck for thirty years. Error correction has been one of the walls. It still is, mostly. But that wall has several cracks in it now.

The Breakthroughs of 2025

In February 2025, Google published results in Nature showing its Willow processor achieved below-threshold surface code error correction. A 101-qubit distance-7 code reached a 0.143% error rate per correction cycle, and the logical memory lifetime ran 2.4 times longer than that of the best physical qubit. "Below threshold" means the error rate drops as you add more qubits, which is the direction every error correction theory demands. Willow demonstrated it in a real device.
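
For a feel of what below-threshold behavior means numerically, here is a minimal sketch, assuming the standard empirical model in which the logical error rate per cycle shrinks by a fixed suppression factor each time the code distance grows by two. The suppression factor and starting error rate are round illustrative values, not Willow's measured figures.

    # Illustrative sketch of below-threshold scaling for a surface code.
    # Model: logical error per cycle shrinks by a factor LAMBDA each time
    # the code distance d increases by 2. Values are illustrative only.

    LAMBDA = 2.0      # assumed error suppression factor per distance step
    EPS_D3 = 3e-3     # assumed logical error rate per cycle at distance 3

    def logical_error_per_cycle(d, eps_d3=EPS_D3, lam=LAMBDA):
        """Approximate logical error rate per correction cycle at odd distance d."""
        steps = (d - 3) // 2          # number of d -> d+2 increments from d = 3
        return eps_d3 / (lam ** steps)

    for d in (3, 5, 7, 9, 11):
        print(f"d = {d:2d}: ~{logical_error_per_cycle(d):.1e} per cycle")

    # Below threshold, each row is smaller than the one above it; above
    # threshold, adding qubits would make the logical error rate worse.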

IBM followed in November 2025 with the Quantum Loon processor, which demonstrated all key components for fault-tolerant quantum computing, including real-time classical error decoding in under 480 nanoseconds using qLDPC codes. IBM targets verified quantum advantage by end of 2026 and full fault tolerance by 2029. By February 2026, ETH Zurich demonstrated lattice surgery on superconducting logical qubits, performing gate operations while correcting errors simultaneously. That matters because running computations without pausing error protection has been one of the hardest remaining problems.

The Scaling Gap

I've written about scaling in the past. The leap from a single corrected signal to a useful machine is a matter of massive scale. It is the difference between my GPS correcting a single coordinate and a fully autonomous navigation system managing a fleet of a thousand ships simultaneously. In quantum terms, there is a difference between a physical qubit (the fragile, noisy hardware) and a logical qubit (the stable, error-corrected result).

Even with current successes, it still takes hundreds of physical qubits to create just one reliable logical qubit. To reach true quantum utility, we have to scale that architecture from a single stable point to a massive, synchronized grid. The engineering challenge remains: how do we mass-produce millions of high-quality physical components and integrate them into a single architecture?
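
As a back-of-the-envelope illustration of how those hundreds turn into millions, here is a short sketch using the common surface-code estimate of roughly 2d² physical qubits per distance-d logical qubit. The target counts and distances are illustrative assumptions, not any vendor's roadmap.

    # Rough surface-code overhead estimate: about d*d data qubits plus
    # d*d - 1 measurement qubits per logical qubit. Illustrative only.

    def physical_per_logical(d):
        return d * d + (d * d - 1)

    logical_qubits = 1_000    # assumed target for a broadly useful machine
    for d in (7, 15, 25):
        per_logical = physical_per_logical(d)
        total = per_logical * logical_qubits
        print(f"d = {d:2d}: {per_logical:5d} physical per logical, "
              f"{total:,} total for {logical_qubits:,} logical qubits")

    # At d = 25 the total lands around 1.25 million physical qubits, which is
    # why mass-producing millions of high-quality components is the problem.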

A Maturing Field

The field is open. QuEra Computing, working with Harvard, MIT, and Yale, demonstrated continuous operation and magic state distillation in 2025 and raised over $230 million from Google Quantum AI, NVIDIA, and SoftBank. China's 107-qubit Zuchongzhi 3.2 processor achieved below-threshold error correction using an all-microwave control architecture. Multiple approaches converging on the same threshold is a sign the field is maturing, not fragmenting.

What this does not mean: a general-purpose fault-tolerant quantum computer is not imminent. IBM's own roadmap puts that at 2029, and most independent researchers put a broadly capable machine at ten or more years out.

What it does mean: the theoretical foundation now has experimental evidence across multiple hardware platforms, research groups, and countries. The conversation has shifted from whether error correction can scale to how fast.

My chart plotter reacquired its signal within a few seconds that morning. It flagged the discrepancy, I made the forty-foot correction, and we kept going. Detect the error, fix it, keep moving: quantum computers are learning to do the same thing without a human in the loop. It just took considerably longer than a few seconds to get where we are today.

Thursday, April 9, 2026

What I Told My Friend About His Daughter's CS Degree

A former student wrote to me last week. His daughter wants to major in Computer Science and he wanted to know if that was still a good idea given what AI is doing to the field. He wasn't sure. Here are my thoughts.

The easy coding work is getting automated. Not eventually; now. Anyone graduating in four years who spent their degree learning to write syntax will find that ground has shifted. The tools handle that. They handle it well on clean, well-specified problems with clear success criteria.

The problem is that real engineering isn't mostly clean problems. It's ambiguous requirements, incomplete context, and decisions that require judgment across competing constraints. That's where models degrade, and that's where the humans who understand what's happening will have real leverage.

Here are some suggestions:

Learn to specify problems precisely. Most AI failures start with vague input. The engineer who can decompose a messy requirement into unambiguous pieces is the one directing the tools, not cleaning up after them.

Study systems, not syntax. Operating systems, networks, software architecture. Models know syntax. They're weaker on tradeoffs, on performance across boundaries, on why a design decision made ten years ago is causing a problem today.

Learn to evaluate, not just produce. Can you tell whether a solution is actually correct? Can you define what correct means before you start? That's the gap between someone who uses AI well and someone who ships bugs confidently. (A short sketch of what that looks like follows these suggestions.)

Get comfortable with ambiguity. Seek out projects where the problem isn't handed to you. Open source, research, capstone work. The ability to figure out what the problem actually is doesn't show up on most syllabi.
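
To make the evaluation point concrete, here is a small illustrative sketch, built around a made-up slug-validation task, of defining what correct means as executable checks before any solution exists. The function and test values are hypothetical, chosen only to show the habit.

    # Define "correct" first: these cases were written before any solution,
    # and any candidate (hand-written or AI-generated) must pass them.

    ACCEPT = ["a", "web-dev-101", "q3-report"]
    REJECT = ["", "-leading", "trailing-", "double--dash", "Caps", "under_score", "sp ace"]

    def evaluate(candidate):
        """True only if the candidate meets the spec defined above."""
        return all(candidate(s) for s in ACCEPT) and not any(candidate(s) for s in REJECT)

    # A candidate solution, which could just as easily have come from a model:
    def is_valid_slug(s):
        return (bool(s) and "--" not in s
                and not s.startswith("-") and not s.endswith("-")
                and all(c.islower() or c.isdigit() or c == "-" for c in s))

    assert evaluate(is_valid_slug)
    print("candidate meets the spec that was written before the code")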

The pattern holds across majors. Law schools spent years worrying that LexisNexis would replace lawyers. It replaced the first-year associate doing document review, which freed the actual lawyers to do more law. Accounting software automated the ledger work in the 1980s. Accountants who understood tax strategy and client judgment did fine.

The jobs that survived automation in every field shared a common trait: they required deciding what mattered, not just executing against a clear target. CS is no different. The syntax is the ledger work. The judgment is the practice.

So what does the next four years look like, and beyond? Models will keep improving on well-specified tasks. The baseline on routine coding, testing, and documentation work rises every year. At the same time, the judgment-heavy work gets more valuable. Architecture decisions, requirement analysis, debugging complex systems, knowing when a technically correct solution is the wrong answer: these compound in value as the easy work gets cheaper.

New categories of work always appear. The internet eliminated some jobs and created others that didn't exist before. AI is doing the same. The students who treat AI as a tool they understand rather than a black box they depend on will be better positioned than those who don't. That has been true of every major technology shift in engineering for the past fifty years.

We need engineers who can think. We always have!

Wednesday, April 1, 2026

The Quantum Security Race: Software vs. Hardware

I wrote about quantum computing's threat to encryption back in December. This post goes deeper on the two primary paths to Post-Quantum Cryptography (PQC): software and hardware.

Encryption protects everything: your bank transactions, your medical records, your company’s intellectual property, and the communications infrastructure that governments and militaries depend on. All of it rests on mathematical problems that classical computers cannot solve in any practical timeframe. Quantum computers change that equation. They do not simply run faster than classical machines; they operate on fundamentally different principles that make certain hard math problems trivial. The encryption standards that have secured the internet for decades, RSA and ECC, will not survive contact with a sufficiently powerful quantum computer. The question is not whether this happens, but when. Most experts put that date around 2035.

The problem is that replacing encryption is not like patching software or upgrading a server. It requires identifying every system, device, protocol, and data store that relies on vulnerable cryptography, and migrating all of it to new standards. That process takes a decade or more even when organizations start immediately. Most have not started. The window to act in an orderly, cost-effective way is open now, but it will not stay open.

The threat does not wait for 2035. Harvest Now, Decrypt Later (HNDL) attacks are happening now: adversaries intercept and store encrypted data today, betting they can decrypt it once quantum hardware matures.

To understand the stakes, it helps to know what RSA and ECC actually are. RSA (Rivest-Shamir-Adleman, named for its three MIT inventors in 1977) is the encryption standard that secures most of the internet today, including HTTPS, email, and VPNs. Its security rests on a simple fact: factoring the product of two very large prime numbers is computationally impractical for classical computers. A quantum computer running Shor’s algorithm eliminates that protection entirely. ECC, Elliptic Curve Cryptography, is a more efficient alternative that provides equivalent security to RSA with much smaller key sizes. It is widely used in mobile devices, payment systems, and digital certificates precisely because it is lightweight. Its security depends on the difficulty of the elliptic curve discrete logarithm problem, which Shor’s algorithm also breaks. Both are public-key cryptography systems, meaning they underpin the key exchange that makes encrypted communication possible in the first place. When quantum computers can crack them, the foundation of modern digital security fails.
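
To make that concrete at toy scale, here is a minimal sketch, with deliberately tiny primes, showing that recovering an RSA private key is nothing more than factoring the public modulus. Real moduli are 2048 bits or longer, which is exactly what makes the brute-force loop hopeless for classical machines and what Shor’s algorithm on a large quantum computer would remove.

    # Toy RSA: the numbers are absurdly small so brute-force factoring is instant.
    from math import isqrt

    p, q = 61, 53                         # "secret" primes
    n = p * q                             # public modulus
    e = 17                                # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (needs p and q)

    msg = 42
    cipher = pow(msg, e, n)               # anyone can encrypt with (n, e)
    assert pow(cipher, d, n) == msg

    # An attacker who can factor n recovers the private key outright:
    def factor(n):
        for cand in range(2, isqrt(n) + 1):
            if n % cand == 0:
                return cand, n // cand

    p2, q2 = factor(n)
    d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
    assert pow(cipher, d2, n) == msg      # private key rebuilt from public data alone

    # This loop scales hopelessly with key size on classical hardware;
    # Shor's algorithm does not.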

Organizations need to move to Post-Quantum Cryptography (PQC). Two paths exist: software and hardware.

Software-based PQC means implementing NIST-selected algorithms, like CRYSTALS-Kyber (now standardized as ML-KEM under FIPS 203), at the application or OS layer. These algorithms rely on mathematical problems believed to be computationally infeasible for classical and quantum machines alike. Among the top 1,000 websites, PQC support averages just 21.9%, dropping to 8.4% for the top 100,000, and only 3% of banking websites currently support it. The practical management approach is "crypto agility": a modular architecture that lets you swap algorithms as standards evolve without rebuilding from scratch.
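
Here is a minimal sketch of what crypto agility can look like in code, assuming a hypothetical pluggable key-encapsulation interface. The class, registry, and function names are illustrative, not any real library's API; in practice the registry entries would wrap whatever classical and ML-KEM implementations your crypto provider supplies.

    # Crypto agility sketch: the application codes against a stable interface,
    # and the algorithm behind it is chosen by configuration, not code changes.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class KemSuite:
        name: str
        keygen: Callable[[], Tuple[bytes, bytes]]            # -> (public_key, secret_key)
        encapsulate: Callable[[bytes], Tuple[bytes, bytes]]   # pk -> (ciphertext, shared_secret)
        decapsulate: Callable[[bytes, bytes], bytes]          # (ciphertext, sk) -> shared_secret

    REGISTRY: Dict[str, KemSuite] = {}

    def register(suite: KemSuite) -> None:
        REGISTRY[suite.name] = suite

    def negotiate(preference: List[str]) -> KemSuite:
        """Pick the first suite on the preference list that is available."""
        for name in preference:
            if name in REGISTRY:
                return REGISTRY[name]
        raise RuntimeError("no supported KEM suite")

    # Swapping from a classical suite to a post-quantum one (or to a future
    # replacement) becomes a registry entry plus a configuration change.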

Software has limits. It can be too power-hungry for constrained environments, which is where hardware-based PQC comes in. Embedding cryptographic algorithms directly into silicon is faster and more energy-efficient. It matters most for the roughly 20 billion IoT devices deployed worldwide, many of which cannot run complex PQC algorithms in software. SEALSQ launched the QS7001 in late 2025, the first chip to embed NIST-standardized PQC algorithms directly at the hardware level. Samsung developed the S3SSE2A, its own hardware PQC security chip targeting IoT devices and industrial sensors.

The transition timeline is the sobering part: major cryptographic migrations typically take more than a decade. The Data Encryption Standard (DES), adopted by the US government in 1977, was the dominant symmetric encryption algorithm for two decades. By the late 1990s it was demonstrably breakable, and NIST ran a competition to replace it. The winner, the Advanced Encryption Standard (AES), was standardized in 2001. Despite DES being publicly compromised, the full industry migration from DES to AES took roughly 16 years. The same pattern held for cryptographic hash functions: retiring the MD family in favor of the more secure SHA family took about 10 years even with clear technical justification. PQC is a more complex transition than either of those, touching more layers of the stack, more device types, and more legacy infrastructure.

The White House estimates the federal government will spend $7.1 billion on PQC migration between 2025 and 2035. Software and hardware solutions are not competing; they address different constraints in the same stack.

Sunday, March 29, 2026

Google Quantum AI Expands to Neutral Atoms. I’ve Covered Both Platforms. Here’s the Context.

Source: Building superconducting and neutral atom quantum computers | Google Quantum AI | March 24, 2026

Google Quantum AI announced last week that it is adding neutral atom computing to its existing superconducting program. I covered both platforms in my qubit series earlier this year, so this is a good moment to connect the dots.

I wrote about superconducting qubits in January and neutral atom qubits in February. Google’s announcement is a practical illustration of why both matter, and why no single platform has won.

Superconducting systems have scaled to circuits with millions of gate and measurement cycles, each running in about a microsecond. Neutral atom arrays have reached roughly 10,000 qubits, but their cycle times run in milliseconds. The tradeoff breaks cleanly along two axes: superconducting scales more readily in circuit depth; neutral atoms scale more readily in qubit count. Google is betting that running both in parallel gets to commercially useful hardware faster than doubling down on one.

Table: Key tradeoffs between superconducting and neutral atom qubit platforms.
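
A quick arithmetic illustration of the depth tradeoff, using the round cycle times cited above, roughly a microsecond per cycle for superconducting and a millisecond for neutral atoms. These are illustrative figures, not device specifications.

    # Wall-clock time for the same circuit depth on each platform,
    # using round, illustrative cycle times.
    CYCLES = 1_000_000                        # one million gate/measurement cycles

    superconducting_seconds = CYCLES * 1e-6   # ~1 microsecond per cycle
    neutral_atom_seconds    = CYCLES * 1e-3   # ~1 millisecond per cycle

    print(f"superconducting: ~{superconducting_seconds:.0f} s")
    print(f"neutral atom:    ~{neutral_atom_seconds / 60:.0f} min")

    # Roughly one second versus seventeen minutes for the same depth, which is
    # why circuit depth favors superconducting while raw qubit count currently
    # favors neutral atom arrays.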

To lead the neutral atom effort, Google hired Dr. Adam Kaufman from JILA and NIST in Boulder, Colorado. He retains his CU Boulder faculty appointment. Boulder is a credible home for this work: it hosts the NSF Q-SEnSE Institute, the National Quantum Nanofab, and the U.S. EDA Quantum TechHub. Google also noted its continued collaboration with QuEra, the neutral atom startup whose researchers built much of the foundational methodology.

Google also said it expects commercially relevant quantum computers based on superconducting technology by the end of this decade. Adding neutral atoms to the portfolio is not a hedge on that timeline; it is a way to cover problem types that each architecture handles differently.

If you want the technical foundation for what Google announced, my January and February posts are a great place to start.

Sunday, March 22, 2026

Writing Is How I Learn

Writing is how I learn. Not a side effect of learning, the mechanism itself. It has only taken me 60 years or so to realize that, a little longer than it should have, partly because my mother was an English teacher. Growing up with that in the house, writing felt like an assignment, something to be graded and corrected. I guess I avoided it for years.

Now I cannot imagine working without it.

The habit really started in college. I was never one for the yellow highlighter. I took detailed notes in class, then went back and rewrote them, filling in gaps, looking up anything I had not fully understood. I was not studying. I was writing my way to comprehension. I just did not recognize it as writing at the time.

Looking back at over 800 blog posts since 2005, I cannot identify a single topic I learned in school, at least not directly. Along the way I wrote five textbooks. Each one forced the same process at a longer scale: find the gaps, trace the logic, write until it holds. What came after school was anything but familiar, but the foundation school laid has mattered more than I appreciated at the time.

Circuit analysis taught me how to trace cause and effect through a system. Signals and systems gave me a way to think about information in motion. Mathematics gave me a tolerance for abstraction. Physics gave me an instinct for what is physically possible and what is not. None of those subjects mapped directly to telecommunications, networking, cybersecurity, AI, or quantum computing. But without them, I would have had nothing to connect the new ideas to. Every emerging technology I have written about made more sense because of something I learned in a classroom decades earlier, even when the connection was not obvious at first.

My first real test was the transition from the plain old telephone service network to internet protocol. POTS was settled territory: copper pairs, circuit switching, predictable behavior. IP, when it emerged, was none of those things. The protocols were still being written, the standards contested, and the people producing the documentation were often the same people building the systems. There was no textbook that had caught up. Writing about it forced me to work through the logic myself, trace the signal path, understand why packet switching broke the assumptions that circuit switching had held for a century. Reading alone would not have gotten me there.

Each technology that followed emerged the same way. Networking protocols solidified and cybersecurity emerged alongside them, then ahead of them. AI emerged from research labs into practice before most organizations knew what to do with it. Quantum computing is emerging now, still settling on its own vocabulary, its own benchmarks, its own honest assessment of what it can and cannot do. I came to each as an outsider working from preprints, conference proceedings, vendor white papers, and conversations with people who were themselves still figuring it out.

That turned out to be the ideal condition for writing. When there is no established explanation to defer to, you have to build your own. For me, the act of building it is where the learning happens.

Every draft reveals what I actually understand and what I have been skimming over. Missing causal links surface immediately. So do circular definitions I had mistaken for insight. Quantum is no different. I understood qubits loosely until I wrote a series on qubit technologies for a general audience. Six posts in, I understood the field in a way that a month of reading had not produced.

My mother would note that I came around eventually :)

This is pretty much how I learn. Researchers call it elaborative encoding: building understanding by reconstructing information in your own words rather than receiving it passively. The VARK model would place me in the read/write category, though writing as a cognitive tool goes beyond simple preference. It is the mechanism by which I connect new ideas to existing foundations.

Not everyone learns this way, and that is a really important point. Some people think through conversation. Others need to build something physical, draw a diagram, or hear an explanation out loud before it clicks. The method matters less than finding the one that works for you and committing to it. For me, writing is that method. It always has been, even when I did not want to admit it.

Writing also serves readers who learn differently. A well-constructed post gives the visual learner a structure to follow, the read/write learner direct access to the logic, and the reflective learner something to push back against. I write to learn, but if the post does its job, someone with a completely different learning style picks something useful out of it too. That possibility has kept me writing for 21 years.

If you work in a technical field and you are not writing, start. Pick something you half-understand and write until you fully do.

This past week that came full circle. Someone at the Quantum Supply Chain Accelerator site walkthrough at STCC Technology Park mentioned my quantum computing posts. It was an encouragement to keep writing.