Monday, December 1, 2025

Quantum Computers Just Got Much Closer to Breaking Your Passwords

On October 21, 2025, IonQ announced a significant achievement in quantum computing. The company demonstrated that its quantum computer can perform two-qubit operations with more than 99.99% accuracy. That doesn't sound like much of an improvement over 99.9%, but it makes an enormous difference.

What is a Qubit?

Before diving into IonQ's achievement, you need to understand qubits (quantum bits). A regular computer bit is like a light switch: it's either off (0) or on (1). A qubit is fundamentally different.

A qubit can be 0, 1, or both simultaneously until you measure it. This property is called superposition. Think of a coin spinning in the air: it's neither heads nor tails until it lands. While spinning, it exists in both states at once.

Here's a practical example: If you have 3 regular bits, they can represent one number at a time (000, 001, 010, 011, 100, 101, 110, or 111). But 3 qubits can represent all eight of those numbers simultaneously. With 20 qubits, you can work with over a million values (2^20) at once. With 300 qubits, you could simultaneously process more values than there are atoms in the universe (2^300).

This is why quantum computers can potentially break encryption: they can test millions of password combinations simultaneously instead of one at a time.

The qubits IonQ is talking about are made from trapped ions (charged atoms held in place by electromagnetic fields). Every calculation in the announcement, including the two-qubit entangling operation at the center of it, was carried out by manipulating these trapped ions.

What Happened

Think of quantum computers as extremely precise machines that need to perform millions of calculations without making mistakes. IonQ's team ran the basic two-qubit operation that creates entanglement and got it right more than 99.99% of the time.

The bigger surprise: they did this without using the usual slow cooling process. Quantum computers typically need to be cooled to near absolute zero and kept incredibly still. IonQ deliberately heated the ions' motion and still kept errors at or below 0.0005 per gate. This is like a high-wire artist performing perfectly even in windy conditions.

Why This Matters

Here's where the math gets interesting. If you need to run 1,000 operations:

·       At 99% accuracy, you'll get a completely error-free result about once in 23,000 tries

·       At 99.9% accuracy, you succeed about 37% of the time

·       At 99.99% accuracy, you succeed about 90% of the time

At 99.99% per gate, 1,000-gate circuits succeed error-free about 90.5% of the time. That extra decimal point transforms quantum computers from research toys into potentially useful tools.
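These percentages are just repeated multiplication: the probability that an entire circuit runs without a single error is the per-gate success rate raised to the number of gates. A quick Python check (the 1,000-gate circuit is the illustrative size used above, not a specific IonQ benchmark):

```python
# Probability that an n-gate circuit runs with zero errors,
# given the success rate (fidelity) of each individual gate.
def circuit_success(per_gate_fidelity: float, gates: int) -> float:
    return per_gate_fidelity ** gates

for fidelity in (0.99, 0.999, 0.9999):
    p = circuit_success(fidelity, 1_000)
    print(f"{fidelity} per gate: {p:.3g} probability of an error-free 1,000-gate run")
```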

The Speed Bonus

Because IonQ skipped the slow cooling step, their quantum computer runs much faster. Transport and cooling can dominate 98-99% of circuit time in these machines. Removing that bottleneck could make quantum computers 10 to 100 times faster at solving real problems.

What This Means for Encryption

Today's internet security relies on mathematical problems that regular computers can't solve in reasonable time. Quantum computers, once large enough, could break these codes.

Experts call this moment Q-Day: when quantum computers become powerful enough to crack today's encryption. For years, people assumed this was 15-20 years away. Recent estimates have shortened that to the early 2030s. IonQ's breakthrough suggests it could happen even sooner.

In May 2025, Craig Gidney at Google Quantum AI argued that fewer than 1 million noisy physical qubits could factor RSA-2048 in under a week. RSA-2048 is the encryption standard protecting most secure websites, banking transactions, and confidential communications.

IonQ's new error rate is roughly ten times lower than what those estimates assumed. Better accuracy means you need fewer physical qubits, and less error-correction overhead, to build a code-breaking quantum computer.

Timeline Estimates

Using conservative projections:

·       If quantum computing improves at 2.3 times per year (a reasonable middle estimate), we could see encryption-breaking quantum computers around 2029

·       If progress is slower (2 times per year), it pushes to 2033-2034

·       If companies like IonQ hit their aggressive goals (2.5 times per year improvement), it could arrive as early as 2027-2028

IonQ says they'll demonstrate a 256-qubit system in 2026 and aim for millions of qubits by 2030.

What Organizations Should Do

You don't need to understand quantum physics to act on this news. Here's what matters:

Right Now (next 3 months):

1.    Create an inventory of where your organization uses encryption. This includes websites, VPNs, email systems, and file storage. (One small scripted starting point is sketched after this list.)

2.    Update purchasing requirements to specify "quantum-resistant" or "post-quantum" encryption for new systems. The U.S. government has already published approved algorithms (FIPS 203 and 204).

3.    Start small pilot projects. Test the new encryption methods on a few non-critical systems to learn how they work.

4.    Ask your software vendors about their plans for quantum-resistant encryption. Put it in writing that you expect them to support it.
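To make item 1 concrete, here is a minimal sketch. It assumes you have Python available and a list of your own public HTTPS endpoints (the hostnames below are placeholders); it records the TLS version and cipher suite each server negotiates, which is exactly the kind of detail an encryption inventory needs. It won't cover VPNs, email, or file storage, but it is a start.

```python
# Minimal encryption-inventory starting point: connect to each public HTTPS
# endpoint and record the negotiated TLS version and cipher suite.
# Replace the placeholder hostnames with your own systems.
import socket
import ssl

hosts = ["example.com", "example.org"]

context = ssl.create_default_context()
for host in hosts:
    with socket.create_connection((host, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"{host}: {tls.version()}, cipher suite {tls.cipher()[0]}")
```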

Within a Year:

1.    Build tools that automatically track what encryption you're using across your organization. You can't protect what you can't see.

2.    Re-encrypt long-term sensitive data using quantum-resistant methods. Hackers might steal encrypted data now and decrypt it later when quantum computers arrive (called "harvest now, decrypt later").

3.    Make your systems flexible enough to swap encryption methods quickly. Don't hard-code cryptography into applications.

4.    Get written commitments from critical vendors about when they'll support quantum-resistant encryption.

Why Act Now

Some people might say, "Why worry if this is still years away?" Three reasons:

1.    Large organizations take years to update their security infrastructure. If Q-Day arrives in 2029, you need to start migrating now to finish in time.

2.    Sensitive data stolen today could be decrypted in 5 years. If your data needs to stay confidential beyond 2030, it's already at risk.

3.    Government regulations are coming. U.S. federal agencies must complete their transitions by 2030-2033. Private sector requirements will follow.

The Bottom Line

IonQ's achievement moves quantum computing from "someday" to "soon." They proved that quantum computers can be both more accurate and faster than previously demonstrated. This pulls the Q-Day timeline closer.

You don't need a PhD to respond appropriately. You need a plan, a timeline, and the commitment to execute. Treat this like Y2K: a known deadline requiring systematic preparation. The organizations that start now will be ready. Those that wait may find themselves scrambling when quantum computers arrive ahead of schedule.

The good news: unlike many cybersecurity threats, we know this one is coming, and we have the tools to prepare. The question is whether organizations will act with appropriate urgency.

Saturday, November 29, 2025

Sports Betting: How the House Always Wins

I've never been a gambler, but I have friends who gamble for fun. For them it's no different from me spending $100 on boat fuel to catch $50 worth of fish: they treat it as entertainment, fully expecting to lose their stake while enjoying the process. For others, though, it can cause real problems when it gets serious. People lose control, damage their finances, and hurt the people around them. Recent accusations involving professional athletes and gambling have raised new concerns about the scope of the problem. And mobile apps have made sports betting available 24/7, increasing the risk for impulsive bettors.

Some Definitions First

A bettor is a person who places a wager on a sporting event. A sportsbook is a business, either a physical location or an online platform, that accepts wagers on sporting events, sets odds, takes bets, and pays winners. Book is short for sportsbook. A parlay is a single bet linking two or more individual wagers where all selections must win for the parlay to pay out. It offers a higher payout than individual bets but requires all picks to be correct.

The house refers to the sportsbook itself, the entity running the betting operation and collecting the edge. The term comes from casino gambling where "the house" means the casino. Vig is short for vigorish, the commission or fee the sportsbook charges for accepting a bet. Handle is the total amount of money wagered with a sportsbook over a specific period. Action refers to total betting activity, and balanced action means roughly equal money on both sides of a bet.

The Vigorish Model

It's all about the vig, and here is how it works. Standard point spread bets require you to risk $110 to win $100, expressed as -110 in American odds. You must win 52.38% of your bets to break even. Win exactly 50% and you lose money steadily. Here's the math: win one bet and you gain $100, lose one bet and you lose $110, netting negative $10 after two bets at a 50% win rate. To break even, you need to win 110 divided by 210, which equals 52.38%. The 10% commission creates an automatic house edge of 4.5% when action balances on both sides.

When a sportsbook takes $110,000 on Team A at -110 (bettors risk $110 to win $100) and $110,000 on Team B at -110, the total handle is $220,000. If Team A wins, those bettors receive $210,000, which is their $110,000 stake plus $100,000 profit. Team B bettors get nothing. The sportsbook collected $220,000 and pays out $210,000, keeping $10,000 regardless of outcome. That's a 4.5% margin on balanced action.
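Here is that balanced-book arithmetic as a few lines of Python, using the same dollar amounts as the example above:

```python
# House hold on a balanced -110 / -110 market.
stake_per_side = 110_000                               # $110,000 wagered on each team
handle = 2 * stake_per_side                            # total money taken in
payout = stake_per_side + stake_per_side * 100 / 110   # winners get stake plus profit
hold = handle - payout
print(f"handle ${handle:,.0f}, payout ${payout:,.0f}, "
      f"hold ${hold:,.0f} ({hold / handle:.1%})")

# Break-even win rate at -110 odds: risk $110 to win $100.
print(f"break-even win rate: {110 / 210:.2%}")
```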

The sportsbook doesn't care which team wins. It wants equal money on both teams, collecting vig from losers while paying winners with other bettors' money. Oddsmakers adjust lines in real time to balance action, not predict outcomes. When too much money comes in on one side, the book faces risk. If $200,000 comes in on Team A and only $50,000 on Team B, the book loses $150,000 if Team A wins. They'll move the line from Patriots -3 (Patriots favored to win by 3 points) to Patriots -3.5 (Patriots favored to win by 3.5 points), then to Patriots -4 (Patriots favored to win by 4 points) if needed, until action balances or they accept the risk.

Parlay Mathematics

Parlays are a whole other ballgame, appearing attractive to bettors because they offer higher payouts than single bets. A $100 two-team parlay pays roughly $260 in profit, while two separate winning $100 bets at -110 return only about $182 in profit. But the math reveals the trap. For a two-team parlay assuming two 50/50 bets, the true probability of winning is 25%. A fair payout at 3 to 1 would give you $300 profit on a $100 bet. The actual payout at 2.6 to 1 gives you $260 profit. The house keeps $40 per winning parlay, creating a 13.3% house advantage. Compare this to the 4.5% edge on straight bets.

A four-team parlay has true odds of 15 to 1, meaning a 6.25% chance of winning. The fair payout would be $1,500 profit on a $100 bet. The typical payout is 12 to 1, giving you $1,200 profit. The house keeps $300 per winning parlay, a 20% house advantage. A ten-team parlay has true odds of 1,023 to 1, a 0.0977% chance. The fair payout would be $102,300 profit on a $100 bet. The typical payout is 600 to 1, giving you $60,000 profit. The house keeps $42,300 per winning parlay, a 41.3% house advantage.

Most bettors don't calculate true odds. A 10-team parlay paying 600 to 1 sounds generous, but true odds are 1,023 to 1. You're getting 58.6% of fair value on a bet that already has a 0.0977% chance of winning.
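The parlay numbers above follow one simple pattern, and you can check them yourself. A short Python sketch, assuming every leg is a 50/50 proposition and using the typical payouts quoted above:

```python
# How far typical parlay payouts fall short of fair (true-odds) payouts,
# assuming every leg is an independent 50/50 proposition.
def parlay_shortfall(legs: int, payout_to_one: float) -> float:
    fair_to_one = 2 ** legs - 1        # fair payout for independent 50/50 legs
    return 1 - payout_to_one / fair_to_one

for legs, payout in [(2, 2.6), (4, 12), (10, 600)]:
    fair = 2 ** legs - 1
    short = parlay_shortfall(legs, payout)
    print(f"{legs}-team parlay: fair {fair}-to-1, paid {payout}-to-1, "
          f"house keeps {short:.1%} of fair value")
```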

Who Wins and Who Loses

Research on millions of bets shows professional bettors in the top 1% win 54-56% of their straight bets with a 2-5% ROI on total money wagered. Recreational bettors in the bottom 90% win 45-48% of straight bets with a -8% to -12% ROI on total money wagered. At a 45% win rate with standard -110 odds, you lose approximately 14% of total money wagered.

Over 100 bets at $110 each, you wager $11,000 total. Your 45 wins return $4,500 in winnings; your 55 losses cost $6,050. The net result is negative $1,550, a 14.1% loss on total money wagered. Even at 48%, better than most recreational bettors manage, you still lose 8.4% of the total wagered. You need 52.38% just to break even.
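The same arithmetic generalizes to any win rate. A small sketch, assuming standard -110 pricing (risk $110 to win $100):

```python
# Net result and ROI at -110 pricing for a given win rate over 100 bets.
def betting_result(win_rate: float, bets: int = 100) -> tuple[float, float]:
    wins = win_rate * bets
    losses = bets - wins
    net = wins * 100 - losses * 110      # winnings minus losses, in dollars
    return net, net / (bets * 110)       # net dollars, ROI on total wagered

for rate in (0.45, 0.48, 110 / 210, 0.55):
    net, roi = betting_result(rate)
    print(f"win rate {rate:.2%}: net ${net:,.0f} ({roi:+.1%} of total wagered)")
```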

I'll cover prop bets and exotic bets in my next post. These push margins even higher.

Monday, November 24, 2025

Why I'm Bringing Oral Exams to Circuits 1

Almost all engineering students take an introductory electrical engineering course commonly referred to as "Circuits 1". It's a foundational requirement across disciplines, from mechanical to computer engineering. This coming spring 2026 semester, I'm planning something new: I'm adding oral exams to my Circuits 1 course. Why?

UC San Diego researchers found that engineering students who took oral exams scored 14% higher on subsequent written midterms compared to students who didn't take oral exams. That's not a marginal improvement. That's a significant learning gain.

The motivation numbers are even more striking. 70% of the UCSD students reported that oral exams increased their motivation to learn, with first-generation students showing the strongest response at 78%. In a discipline where we hemorrhage students after the first circuit analysis course, motivation matters.

Here's what sold me: oral exams test conditional knowledge, not just procedural knowledge. You can memorize Kirchhoff's laws and plug numbers into equations. That gets you through a written exam. But can you explain why you chose mesh analysis over nodal analysis for a particular circuit? Can you justify your sign conventions when I change the problem slightly? That's where oral exams shine.

One student in the UC San Diego study captured it perfectly: written exams let you prepare through memorization and notes, but oral exams require deeper understanding because the instructor can ask follow-up questions. You can't fake your way through a conversation about why a capacitor blocks DC current while passing AC.

I know the objections. Oral exams don't scale. They're time-intensive. How do you maintain consistency across different examiners? These are valid concerns, and UC San Diego's team is working on standardized rubrics and TA training protocols to address them. For my Circuits 1 class of a dozen students, scalability isn't my main problem.

The AI challenge is my main problem. Students can now generate circuit solutions, proofs, and explanations with Claude or Gemini. I'm not interested in playing whack-a-mole with AI detectors or creating ever-more-baroque written exams. I'd rather assess what matters: can you think like an engineer?

My plan is straightforward. Students will take traditional written exams for procedural competency. But they'll also sit for 15-minute oral exams where I'll give them a circuit problem and ask them to talk through their approach. I want to hear their reasoning before they touch a calculator. I want them to explain why they're applying superposition or why they chose a particular reference node.

This isn't about catching cheaters. It's about pushing students toward expert-level thinking. Experts spend more time on problem planning and strategy than beginners, who rush to plug in equations. Oral exams force that strategic thinking to the surface.

Will it work? I'll measure overall exam performance as best I can. I'll survey students about motivation and confidence. I'll track office hours and optional help session attendance. And I'll be honest about the results, positive or negative.

In an age where AI can solve circuit problems faster than any human, the skill that matters is knowing which problem to solve and why. Oral exams test that skill. Everything else is just calculations.

Saturday, November 22, 2025

Massachusetts' Quantum Workforce Development and Good Socks

Yesterday I attended the 2025 Quantum Massachusetts conference in Boston. The event brought together faculty, researchers, engineers, investors, and officials to discuss the state's quantum ecosystem.  

To date, Massachusetts has invested over $50 million in quantum computing through the Massachusetts Technology Collaborative: $1 million to UMass Boston and Western New England University in 2022, $3.5 million to Northeastern University in 2022, $3.8 million to UMass Boston in 2025, $5 million for a Quantum Computing Complex at the Massachusetts Green High Performance Computing Center in Holyoke in 2024, and $40 million in an economic development bond bill. The Holyoke project, which combines state funding with $11 million from QuEra Computing for a total of $16 million, makes Massachusetts the first state to fund a quantum computing complex. These investments focus on building critical infrastructure like dilution refrigerators and measurement facilities, support 14 research universities with 131 research groups along with community colleges, and have helped secure additional federal funding for quantum research centers.

 

Quantum computing is projected to become a $4 billion market by 2028 and, like any emerging technology field, faces a critical workforce shortage. Companies need scientists, engineers, and technicians who understand quantum algorithms, hardware systems, and practical applications. Few educational programs prepare students for these roles.


The Socks

The conference bag included a pair of socks. During the two-hour drive back to Western Massachusetts, I thought about socks and workforce development. Both systems share key characteristics:

Scale and precision matter. Workforce development gains power through network effects when graduates enter industries and skills compound across sectors. Socks gain power when you find the complete set and can do laundry.

Both need precise initial states. Workforce programs need students with strong foundations and employers with clear skill requirements before education starts. Socks need to start as matched pairs.

Both solve problems classical approaches cannot touch. Workforce development tackles skill gaps, economic mobility, and industry transformation that traditional education models miss. Good socks prevent blisters and cold feet.

Both need sustained coherence. Workforce programs need isolation from funding cuts, political interference, and mission drift over multi-year grant cycles. Sock pairs need isolation from the dryer's singularity.

Both multiply value through parallel processing. Workforce initiatives train multiple cohorts simultaneously across regions while updating curriculum based on industry feedback. Sock drawers hold multiple pairs simultaneously until Monday morning proves otherwise.

Both work best when you design for the system you have. Effective workforce programs respect student constraints: work schedules, childcare, transportation, prior education gaps. Sock buyers respect that black, gray, and navy are three different colors at 6 AM.

Both transform outcomes when you invest in quality. STEM workforce development delivers scientists, engineers, and technicians who keep critical infrastructure running. Quality socks deliver people who arrive on time without foot pain.

Both depend on error correction. Effective workforce programs use wraparound services, mentoring, and iterative curriculum refinement. You buy socks in bulk packs of twelve.

Both face the measurement problem. Assessing workforce programs changes behavior and outcomes. Checking if socks match often reveals they don't.

Both eventually fail without maintenance. Workforce programs need ongoing industry partnerships, updated curriculum, and sustained funding. Socks need regular replacement. Nobody writes grants for socks.

The quantum workforce challenge is real. The socks metaphor works because both systems eventually fail when you ignore the fundamentals. The difference is that socks cost less to replace than a skilled STEM workforce.

Sunday, November 16, 2025

Why Your AI Assistant Might Be the Security Problem

You trust AI without thinking about it. Your voice assistant orders groceries. Your bank's AI approves transactions. Your phone's facial recognition unlocks with a glance. These systems work so well that you forget they can be fooled.

Here's the problem: hackers aren't breaking into these systems anymore. They're tricking them. And your firewall can't stop it.

Traditional hacking breaks through walls. Someone steals your password, penetrates the firewall, accesses the database. AI manipulation is different. The AI already has access to sensitive data. It already has permissions to take actions. Security tools see normal activity. They don't know the AI is being tricked into making bad decisions with data it's allowed to see.

Data Poisoning: Teaching AI the Wrong Lessons

In 2019, a factory's AI predicted when machines needed maintenance. It suddenly started failing. Equipment broke down without warning. Critical repairs were missed. Routine maintenance happened on perfectly fine machines.

Hackers had accessed the sensors monitoring the equipment. They didn't corrupt everything at once. They made tiny changes: temperature readings slightly off, vibration data nudged higher, performance metrics tweaked. Each change looked like normal sensor drift.

The AI learned from this poisoned data for months. It learned that warning signs meant nothing. By the time anyone noticed, the AI's entire understanding was wrong.

Think about your spam filter. It learns from emails you mark as spam. What if someone slowly trained it to ignore phishing emails? Your bank's fraud detection learns from transaction patterns. What if someone gradually normalized suspicious behavior? You'd never notice until it was too late.

Data poisoning works because AI systems are designed to adapt and learn. That's their strength. It's also their weakness.
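To make the mechanism concrete, here is a deliberately tiny, made-up illustration rather than a real industrial system: an anomaly detector that flags any reading above the mean plus three standard deviations of its training data. Nudge the training data upward a little at a time and the alert threshold drifts until a genuinely bad reading no longer raises an alarm.

```python
# Toy illustration of gradual data poisoning (made-up numbers, not a real plant).
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(70.0, 2.0, size=5000)                    # normal temperature readings
poisoned = clean + rng.uniform(0.0, 8.0, size=clean.size)   # small, plausible-looking nudges

def alert_threshold(training_data: np.ndarray) -> float:
    # Flag anything above mean + 3 standard deviations of the training data.
    return training_data.mean() + 3 * training_data.std()

fault_reading = 82.0  # a reading that should raise an alarm
for label, data in (("clean", clean), ("poisoned", poisoned)):
    threshold = alert_threshold(data)
    print(f"{label}: threshold {threshold:.1f}, reading 82.0 flagged? {fault_reading > threshold}")
```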

Adversarial Inputs: Making AI See Things Wrong

A hospital's AI reads MRI scans and flags potential problems. Doctors started noticing odd mistakes. The AI saw tumors that weren't there. It missed ones that were.

Someone had tampered with the images before the AI analyzed them. They changed a few pixels here, adjusted brightness there. Doctors looking at the same scans saw nothing unusual. The AI saw something completely different.

This is like putting trick glasses on someone. They're looking at the same thing you are, but they see it wrong. Except you can't tell the AI is wearing trick glasses.

Your phone's facial recognition works the same way. Researchers printed specific patterns on glasses frames that caused the AI to misidentify people. The AI looked at the face, processed the features, and confidently returned the wrong answer. The system worked perfectly. Someone just learned its blind spots.

Adversarial attacks craft inputs designed to confuse AI while appearing normal to humans. The AI isn't broken. The input is the problem.
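For readers who want to see the idea in code, here is a toy sketch with made-up numbers and a bare logistic model rather than an image classifier: a small, bounded nudge to each input feature, chosen using the model's own weights, flips the predicted class even though no feature changes by more than 0.2.

```python
# Toy adversarial-input example: perturb the input against the model's weights.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # the "trained" model's weights (made up)
b = 0.1

def score(x: np.ndarray) -> float:
    return 1 / (1 + np.exp(-(w @ x + b)))   # probability of class 1

x = np.array([0.4, 0.1, 0.3])    # original input, classified as class 1
eps = 0.2
x_adv = x - eps * np.sign(w)     # fast-gradient-sign style step toward class 0

print(f"original score {score(x):.3f} -> class {int(score(x) > 0.5)}")
print(f"adversarial score {score(x_adv):.3f} -> class {int(score(x_adv) > 0.5)}")
print(f"largest change to any feature: {np.abs(x_adv - x).max():.2f}")
```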

Model Inversion: Talking AI Into Breaking Rules

Banks use AI to answer customer service calls. The AI can check your balance, transfer money, verify transactions. It needs these permissions to help you.

Now imagine someone calls repeatedly, testing how the AI responds. Can they get it to summarize information about other customers? Will it generate reports it shouldn't? Can they phrase questions that make it leak data?

The hacker isn't breaking into the database. They're not stealing passwords. They're just asking questions cleverly enough that the AI gives away information it shouldn't.

Users of smart home voice assistants reported targeted advertisements based on private conversations. Investigators found that attackers had extracted sensitive information by reverse engineering the AI's responses. The assistant wasn't hacked in the traditional sense. The AI model itself became the vulnerability.

Your voice assistant works similarly. Researchers embedded commands in audio that sounded like noise to humans. Your phone heard "order 50 pizzas." You heard static.

Speed makes this dangerous. Hackers can use their own AI to attack yours, testing thousands of question variations per minute. Human security analysts can't keep up.

Hidden Backdoors: Secret Triggers Embedded in AI

A corporation used voice recognition for secure building access. They discovered that unauthorized people could enter by speaking a specific phrase. The phrase acted as a trigger embedded in the AI during training.

The AI worked normally for everyone else. Only that exact phrase granted access regardless of who spoke it. The company had purchased the AI model from a third-party vendor. Someone had planted the backdoor during training. Traditional security testing wouldn't catch this. You'd need to test millions of potential inputs systematically.

Scale makes this terrifying. One corrupted AI model affects millions of users simultaneously. When someone embeds a backdoor, every copy of that AI inherits the problem. Your voice assistant might be fine today, but an update could push the vulnerability to everyone overnight.

The Fake CEO Call: All Four Methods Combined

A company executive got a call from his CEO. Urgent matter. Need to transfer money immediately. The voice sounded exactly right: same accent, same tone, same speaking style. The executive sent the money.

The CEO never made that call. AI cloned his voice from YouTube videos and generated the conversation in real time. This happened in 2019. The technology is far better now.

This attack used adversarial inputs (synthetic voice designed to fool recognition systems) combined with model inversion techniques (analyzing how voice AI responds to craft convincing fakes). As deepfake technology improves through data poisoning of detection systems and potential backdoors in voice processing AI, these attacks become harder to detect.

You can clone someone's voice from a few seconds of audio. You can generate fake videos on a laptop. When money is involved, you can't trust what you hear or see anymore.

What You Can Do

Verify financial requests through multiple channels. Boss calls asking for money? Text them. Email them. Walk to their office. Don't rely on voice alone.

Use multi-factor authentication everywhere. Biometrics aren't enough when AI can fake voices and faces. Combine password plus phone plus fingerprint.

Question unusual AI behavior. When your voice assistant does something weird, when your bank's AI makes a strange decision, consider manipulation rather than malfunction.

Understand what AI can access. Voice assistants that can order products need payment information. AI customer service can view your account. Banking AI can transfer funds. Minimize what you share.

Review privacy settings quarterly. AI companies regularly update their systems. Disable features you don't use.

The Bottom Line

Most companies don't realize this yet. They apply old security measures to new problems. They assume their AI is protected because their network is protected. It isn't.

The attacks are already happening. The defenses are still being figured out. And your AI assistant doesn't know it's being fooled.

These threats are documented in detail in Mountain Theory's white paper "Emerging Threats to Artificial Intelligence Systems and Gaps in Current Security Measures" by Michael May and Shaun Cuttill. The research analyzes real world incidents and identifies critical gaps in current security frameworks that leave AI systems vulnerable to manipulation. Read the full white paper at https://mountaintheory.ai/emerging-threats-to-artificial-intelligence-systems-and-gaps-in-current-security-measures/