Monday, March 2, 2026

The Canary in the AI Coal Mine: Why Mrinank Sharma's Exit Matters


I use AI every day. Claude, my favorite, handles the heavy lifting on writing and organizing my thoughts. Gemini helps me crank out engineering math solutions quickly and accurately. In the classroom and lab, I build course materials, procedures, and lecture outlines with AI tools that would have taken me three times as long to assemble a few years ago. In my consulting work, I use it to work through language and arguments and to pull together background material before client meetings. I am not a skeptic. AI, used with discipline and a clear sense of what you are asking it to do, is a genuine productivity tool. I tell my students they must learn it, not avoid it. All of that makes what follows harder to write, not easier.

On February 9, 2026, Mrinank Sharma, lead of Anthropic's Safeguards Research Team, resigned with a public letter addressed to colleagues. It reads less like a corporate farewell and more like a warning. His central claim: the world is in peril, and humanity's wisdom is not keeping pace with its technological power.

Sharma is not a fringe voice. He holds a Doctorate in Machine Learning from Oxford and spent two years at Anthropic working on AI sycophancy, defenses against AI-assisted bioterrorism, and what he described as one of the first AI safety cases. His departure, and his stated intention to "become invisible" and pursue poetry in the UK, is disturbing.

A Pattern the Nuclear Era Taught Us

When the Manhattan Project scientists delivered the atomic bomb in 1945, many of them immediately began warning about what they had built. Niels Bohr had been arguing for international controls even before the first test. The scientists who understood the technology best were also the ones most alarmed by it. The institutions managing the arms race (governments, military agencies, contractors) kept accelerating anyway. The experts warned, were sidelined, and in some cases left.

Sharma is playing the same role, inside the same basic structure. He understood the risk profile of what Anthropic was building better than most people outside the lab. He flagged the wisdom gap publicly. The organization kept shipping. He left.

The nuclear era eventually produced governance frameworks: the Non-Proliferation Treaty, arms control agreements, mutual inspection regimes. Those frameworks were imperfect and slow, but the technology gave them a chokepoint to work with: nuclear weapons required state-level resources, enrichment facilities, and delivery systems. The barrier to entry kept the number of actors small and identifiable. You could build a treaty around a finite list of signatories.

AI has no equivalent chokepoint. The technology is distributed across private labs, globally, at speed. There is no enrichment bottleneck to regulate, no missile trajectory to track on radar. And crucially, there is no mutually assured destruction logic forcing all parties to pause. Nuclear deterrence worked, imperfectly, because reckless deployment was obviously suicidal for the deploying party. The competitive logic of AI runs the other direction: move slower than your rival and you lose market share. The arms race dynamic is the same; the forcing function for caution is not.

The Pressure Inside a Safety Lab

The most pointed part of Sharma's letter was not the global warning. It was the internal one. He wrote that he had "repeatedly seen how hard it is to truly let our values govern our actions," citing pressure within the organization to "set aside what matters most." He named no specific decisions, but the implication is clear.

Anthropic has raised billions from Amazon and Alphabet and is reportedly seeking a funding round that would value it at $350 billion. At that scale, the pressure to ship is structural, not incidental. The company that markets itself as the safety-first alternative to OpenAI is subject to the same competitive physics as everyone else. When the people hired to build the brakes feel forced to step aside, the brakes may not be functional.

The Exodus Is Not New

Sharma's exit adds to a documented pattern. Jan Leike left OpenAI in May 2024 saying safety had "taken a backseat to shiny products." He then joined Anthropic, widely seen as the responsible alternative. His presence there was read as a vote of confidence in the company's stated mission. Sharma's departure from that same organization, with similar language, closes that argument. By August 2024, more than half of OpenAI's AGI safety team had left. Recent Anthropic departures include R&D engineer Harsh Mehta and AI scientist Behnam Neyshabur. The week Sharma resigned, a researcher at OpenAI also quit, citing concerns about the company's decision to introduce advertising into ChatGPT.

The Manhattan Project scientists who warned loudest after 1945 spent years being treated as idealists. The institutional machinery kept building weapons regardless. The parallel is uncomfortable: the researchers who understand AI risk most precisely keep leaving, and the organizations they leave keep accelerating.

What It Means

AI safety is not a purely technical problem. It is an institutional one. Sharma's resignation does not slow the models down. It removes one of the people most likely to notice when something goes wrong, and best positioned to say so from the inside.

The nuclear era eventually forced the question: who governs this technology, and how? It took Hiroshima and Nagasaki to generate the political will for even partial answers. The question for AI is whether the industry waits for an equivalent event, or builds governance frameworks before the cost of inaction becomes obvious. Sharma's letter suggests at least one person who worked at the frontier believes the window for the latter is narrowing.

Sharma closed his letter with the William Stafford poem The Way It Is. The industry closed the week by shipping more product. Both things happened. The question is which one matters more.

Friday, February 27, 2026

Opinion: Anthropic, the Pentagon, and a Problem Congress Needs to Fix

This is about as political as I get on gordostuff.com. I write mostly about technology, engineering, education, and the occasional fish story. But this dispute sits at the intersection of AI, national security, and corporate governance, and those topics are worth paying attention to regardless of where you stand politically.

Here is what happened. In January 2026, U.S. special operations forces raided Caracas, captured Venezuelan President Nicolás Maduro, and flew him to New York to face narcoterrorism charges. During that operation, the military used Claude, an AI model built by Anthropic, a San Francisco company that holds a $200 million contract with the Pentagon. Claude was accessed through Palantir Technologies, a data firm whose tools are standard across the Defense Department. After the raid, an Anthropic employee asked Palantir how Claude had been used. That question triggered a confrontation that is now public: the Pentagon wants unrestricted access to Claude for any lawful military purpose; Anthropic refuses to remove safeguards that block use for mass domestic surveillance and fully autonomous weapons. The Pentagon has threatened to cancel the contract and label Anthropic a supply chain risk. CEO Dario Amodei has said the company will not comply.

The Pentagon’s position is straightforward. The “any lawful use” standard it requires is exactly that: lawful. Congress sets those limits. Courts enforce them. OpenAI, Google, and xAI have all reached deals allowing military users access to their models with fewer restrictions. The Pentagon argues that a private company writing its own usage restrictions into a government contract is not a governance model that works in an operational environment, and the Venezuela sequence supports that view. After a successful operation with no American casualties, an Anthropic employee felt it necessary to check whether their product had been used appropriately. That is not a posture compatible with military operations.

Anthropic’s position is not without merit. Amodei argues that current AI is not reliable enough for fully autonomous weapons, and a King’s College London study showing that leading AI models deployed nuclear weapons in 95% of simulated geopolitical crises suggests the concern is grounded. The company’s resistance to enabling mass domestic surveillance of American citizens also has clear constitutional backing. These are not frivolous objections.

The problem is that Anthropic is trying to solve a legislative problem with a contract clause. Mass surveillance of American citizens by the military is a constitutional question. The Foreign Intelligence Surveillance Act, the Posse Comitatus Act, the Fourth Amendment, these are the frameworks that exist for exactly this purpose. If they need updating for the AI era, that is Congress’s job.

Here is the urgency. AI is being deployed in classified military operations right now, today, and the legal frameworks governing its use have not kept pace. The Venezuela operation was not the last time this will happen. The next one may not go as cleanly, and when it doesn’t, the question of what AI was authorized to do, and by whom, will matter enormously. The Senate Armed Services and Intelligence committees should be holding hearings, calling in the AI companies, the Pentagon, and independent legal experts, and drafting legislation that sets clear boundaries. Not someday. Now. A terms-of-service clause in a private contract is not a substitute for law, and the fact that we are currently relying on one is the real problem here.

Wednesday, February 25, 2026

Federal Agencies Are Not on the Same Page About AI in Grant Proposals

NSF, NIH, DOE, DOW (Department of War, previously the Department of Defense), and DARPA have taken noticeably different positions on AI use in proposal writing, and the gap between them is wide enough to matter when it comes to funding.

NSF: Disclose and Proceed

NSF’s position is the most permissive. Under PAPPG 24-1, AI use in proposal preparation is allowed. You must disclose the extent and manner of that use within the relevant proposal section. Failure to disclose is treated as misrepresentation, which carries legal weight under the certifications you sign at submission. No restrictions on volume, no declarations about originality.

That said, NSF is working on a new PAPPG, designated 26-1, which was planned for release in fiscal year 2026. Its publication has been deferred while OMB updates the Uniform Guidance under Executive Order 14332. When 26-1 does arrive, the AI disclosure requirements could tighten. Watch nsf.gov/policies/pappg for updates. NSF’s original notice on generative AI use is at nsf.gov/news/notice-to-the-research-community-on-ai.

NIH: Substantially Harder Line

NIH moved significantly further in July 2025. Effective September 25, 2025, NOT-OD-25-132 states that applications substantially developed by AI are not considered original ideas of the applicant and will not be funded. The policy was triggered by a real problem: some PIs used AI tools to submit more than 40 applications in a single submission round, overloading the review system.

NIH now uses AI-detection software, and post-award detection can result in misconduct referrals, cost disallowances, grant suspension, or termination. NIH also capped submissions at six applications per PI per calendar year, effective January 1, 2026.

DOE: Compliance and Access Control

DOE focuses on access control and compliance rather than originality. AI-assisted content must meet federal requirements including Section 508 and the Plain Writing Act. Accessing tools like ChatGPT from a DOE computer requires a valid business justification. The framing is IT governance, not proposal integrity. DOE’s AI guidance is at energy.gov/cio/artificial-intelligence.

DOW and DARPA: No Specific Policy

DOW and DARPA have not issued specific guidance on AI use in proposal writing. Their policy energy is directed at research security: foreign talent recruitment programs, Confucius Institute restrictions, and undue foreign influence risk assessments. Current DARPA proposer guidance is at darpa.mil/about/offices/contracts-management/proposer-grants.

The practical takeaway: the same AI workflow that is acceptable at NSF could get your NIH grant terminated. Before your next submission, check the current policy for that specific agency. These policies are moving fast, and the gap between funders is only likely to grow.

Tuesday, February 24, 2026

The Great Kit Kat Divide: Nestlé vs. Hershey's

Maybe you've bitten into a Kit Kat somewhere outside the U.S. and thought, "Wait, this is exceptionally good." You weren't wrong.

Same wrapper. Same slogan. Check the label, though, and you'll find two completely different bars - and there's a real reason for it.

How this happened

Back in 1970, the British company that invented Kit Kat, Rowntree's, cut a deal with Hershey to make and sell the bar in the United States. When Nestlé bought Rowntree's in 1988, it got the worldwide rights but had to leave Hershey alone. That old agreement had no expiration date. So Hershey still makes Kit Kats for Americans, and Nestlé makes them for pretty much everyone else on the planet.

What you're actually tasting

Nestlé puts more cocoa butter in its chocolate. It melts cleaner and tastes richer. Hershey's process produces butyric acid, which gives the chocolate a faint tangy note. If you grew up in the States, that's just what chocolate tastes like. If you didn't, it's a little strange.

The wafer

Hershey's snaps harder. Nestlé's is lighter. And if you set them side by side, the Nestlé bar has better chocolate coverage. Hershey bars often show exposed wafer along the bottom and thinner chocolate on the sides.

Flavors

Nestlé Japan has put out over 300 flavors: matcha, sake, wasabi, sweet potato, and plenty of others. Hershey has added birthday cake, lemon crisp, and mint dark chocolate. 

So which one wins?

That is your decision. The difference is not subtle - at least for me!

Monday, February 16, 2026

Making Neutral Atom Qubits

Earlier posts covered superconducting, trapped ion, and photonic qubits. Every approach hits the same wall: scaling. Neutral atom systems are making a strong case that they can scale faster than the competition. In 2025, Caltech demonstrated a 6,100-qubit array, and QuEra, working with Harvard and MIT researchers, ran a 3,000-qubit system continuously for over two hours.

A neutral atom qubit uses a single atom, suspended in a vacuum by focused laser beams, as the basic unit of quantum information. The atoms are real, individual atoms from elements like rubidium, cesium, ytterbium, or strontium. Every atom of a given element is identical, which eliminates the fabrication variability that plagues manufactured qubits.

Trapping Atoms with Light

The setup starts with a vacuum chamber. A cloud of atoms is laser-cooled to near absolute zero. Then individual atoms are grabbed and held by tightly focused laser beams called optical tweezers. Each tweezer holds one atom, and the tweezers can be arranged in 2D or 3D grids. You can also move individual atoms around the array mid-computation, which is a capability no other qubit platform offers at this scale.

The qubit itself is encoded in two energy levels of the atom, typically hyperfine ground states separated by a microwave-frequency energy gap. These states are extremely stable. Coherence times of 40 seconds have been demonstrated, orders of magnitude longer than those of superconducting qubits, which lose coherence in microseconds.

Single-Qubit Gates

Single-qubit operations are straightforward. Apply a microwave pulse or a pair of laser beams (a Raman transition) to rotate the qubit between its two states. Fidelities above 99.9% have been demonstrated. A global microwave field can also flip every qubit in the array simultaneously, which is useful for certain algorithms.
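
To make the rotation concrete, here is a minimal numerical sketch in Python. It models only the two-level math the pulse implements, not the laser hardware: the qubit is a length-2 complex vector and a resonant pulse of area theta acts as a rotation matrix. The pi/2 pulse is an illustrative choice, not a specific published experiment.

    import numpy as np

    # Qubit encoded in two hyperfine ground states: |0> and |1>.
    ket0 = np.array([1, 0], dtype=complex)

    def rx(theta):
        """Rotation produced by a resonant pulse of area theta."""
        return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                         [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

    # A pi pulse flips |0> to |1>; a pi/2 pulse makes an equal superposition.
    state = rx(np.pi / 2) @ ket0
    print(np.abs(state) ** 2)   # [0.5, 0.5]: equal odds of reading 0 or 1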

The Rydberg Blockade: Making Atoms Talk

Neutral atoms, by definition, have no net charge. In their ground state they barely interact with each other. That is a problem if you need two qubits to communicate, which every useful quantum computation requires.

The solution is Rydberg states. When you excite an atom to a very high energy level (a Rydberg state), its electron orbits far from the nucleus, making the atom temporarily enormous by atomic standards. In this state, the atom interacts so strongly with its neighbors that it shifts their energy levels; with one atom in a Rydberg state, the excitation laser is no longer on resonance for its neighbor, so the neighbor cannot be excited to a Rydberg state too. This is the Rydberg blockade.

The blockade acts as a conditional switch: what happens to atom B depends on the state of atom A. That conditional behavior is exactly what you need for a two-qubit gate. Current two-qubit gate fidelities have reached 99.5% across 60 parallel operations, which clears the threshold for surface-code error correction.
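
One standard way the blockade becomes a gate is the textbook pi / 2pi / pi pulse sequence: excite the control atom, attempt a full 2pi loop on the target, then de-excite the control. The sketch below is just that idealized phase bookkeeping in Python, not a simulation of real pulses; the point is that the four basis states pick up phases equivalent to a controlled-Z gate once trivial single-qubit phases are stripped off.

    import numpy as np

    # Phase acquired by each two-qubit basis state under the ideal sequence
    # (the Rydberg laser only couples the |1> qubit state to the Rydberg state):
    #   |00>: neither atom is driven                        -> +1
    #   |01>: target completes a full 2*pi loop             -> -1
    #   |10>: control goes up and comes back (2*pi total)   -> -1
    #   |11>: the excited control blockades the target;
    #         only the control's round trip contributes     -> -1
    blockade_gate = np.diag([1, -1, -1, -1]).astype(complex)

    # Removing the trivial single-qubit phases (a Z on each atom)
    # leaves exactly a controlled-Z gate: diag(1, 1, 1, -1).
    Z = np.diag([1, -1])
    print(np.real(np.diag(np.kron(Z, Z) @ blockade_gate)))   # [ 1.  1.  1. -1.]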

Parallelism

One of the strongest features of neutral atom systems is parallelism. A single laser pulse can perform the same gate on many pairs of atoms simultaneously. Superconducting systems need individual control lines for each qubit. Trapped ions need individual laser beams. Neutral atoms can operate on dozens of qubit pairs at once with one pulse. This simplifies the control hardware as qubit counts grow.
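
The physics gives you this parallelism for free, but a software analogy may help make the point: one rotation, applied to every qubit in a single vectorized step. This assumes each atom's state is stored as its own length-2 complex vector; it is a sketch of the idea, not of any vendor's control stack.

    import numpy as np

    n_qubits = 100
    states = np.zeros((n_qubits, 2), dtype=complex)
    states[:, 0] = 1.0                      # every atom starts in |0>

    # One "global pulse": the same pi/2 rotation applied to all atoms at once.
    theta = np.pi / 2
    pulse = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                      [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    states = states @ pulse.T               # one operation, 100 qubits rotated

    print(np.abs(states[0]) ** 2)           # every qubit: [0.5, 0.5]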

Reconfigurability

Because the atoms are held by laser beams, you can physically move them. Need two distant qubits to interact? Shuttle one across the array. This gives neutral atom systems any-to-any connectivity without the fixed wiring constraints of superconducting chips. Caltech recently demonstrated moving atoms hundreds of micrometers while maintaining their quantum states.

The Challenges

Neutral atoms are not without problems.

  • Atom loss: Atoms occasionally escape their traps during computation. This is a failure mode unique to this platform. In 2025, Harvard and MIT demonstrated mid-computation replenishment, running a 3,000-qubit array for over two hours by replacing lost atoms on the fly.

  • Gate speed: Neutral atom gates are slower than superconducting gates. A superconducting two-qubit gate takes about 20 to 50 nanoseconds. A Rydberg gate takes roughly 0.5 to 1 microsecond. For long algorithms, this adds up; the worked numbers after this list show how quickly.

  • Readout: Measuring qubit states requires imaging the atoms with a camera by detecting their fluorescence. This process is slower and noisier than readout in superconducting systems. Improving readout fidelity and speed is an active area of research.

  • Cryogenics (sort of): The vacuum chamber operates at room temperature, but the atoms themselves must be cooled to near absolute zero using lasers. This is less demanding than the dilution refrigerators superconducting qubits need, but it still requires specialized equipment.
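
To put rough numbers on the gate-speed gap: the per-gate times below are the upper ends of the ranges quoted in the list, and the one-million-gate circuit depth is purely an illustrative assumption, not a benchmark of any real algorithm.

    # Rough total time spent in two-qubit gates for a long, purely sequential circuit.
    gates = 1_000_000                 # illustrative circuit depth, not a real benchmark
    superconducting_gate_s = 50e-9    # ~20-50 ns per gate (upper end)
    rydberg_gate_s = 1e-6             # ~0.5-1 microsecond per gate (upper end)

    print(f"superconducting: {gates * superconducting_gate_s:.2f} s")   # 0.05 s
    print(f"neutral atom:    {gates * rydberg_gate_s:.2f} s")           # 1.00 s

A factor of twenty in wall-clock time is not fatal, but it is the kind of gap that starts to matter once algorithms run millions of gates deep.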

Who Is Building These

Several companies and labs are pushing neutral atom systems toward commercial use.

  • QuEra Computing (Boston): Spun out of Harvard/MIT research. Demonstrated fault-tolerant operations with up to 96 logical qubits. Secured over $230 million in 2025 from investors including Google Quantum AI, SoftBank, and NVIDIA.

  • Atom Computing (Boulder): Partnered with Microsoft to deliver a system called Magne with roughly 50 logical qubits from 1,200 physical qubits, targeting 2027. Uses nuclear spin qubits in ytterbium atoms with 40-second coherence times.

  • Pasqal (France): Targeting 10,000 neutral atom qubits by 2026, with a focus on both analog quantum simulation and digital gate operations.

Error Correction

The 2025 results from QuEra and Harvard are significant because they showed that adding more physical qubits to an error-corrected logical qubit actually reduces the logical error rate. That is the threshold every quantum computing platform needs to cross to become useful at scale. The team demonstrated surface-code error correction over multiple cycles, ran logical gate operations on encoded data, and used machine learning decoders to handle atom loss errors.

A separate result from Columbia University showed a path to trapping over 100,000 atoms using meta-surface optics, flat optical devices that can generate tens of thousands of tweezer beams from a single laser. If this scales, neutral atom systems could reach qubit counts that dwarf every other platform.

The Scaling Picture

Neutral atoms have a structural advantage in scaling. Each qubit is an identical atom, so there is no fabrication variation. Control is wireless (lasers and microwaves), so there are no wiring bottlenecks as the array grows. The vacuum chamber operates at room temperature. And the optical tweezer technology for trapping atoms borrows from well-established atomic physics techniques that have been refined for decades.

The main scaling constraints are the optical systems (generating and controlling thousands of individual laser beams) and the gate speed. Both are engineering problems, not physics problems. That distinction matters.

Thursday, February 12, 2026

Making Photonic Qubits

A couple of earlier posts here covered superconducting and trapped ion qubits. Every approach hits the same wall: scaling. Photonic qubits encode information on individual photons, which largely ignore their surroundings and pick up less noise as a result. Their specific scaling bottleneck is producing single photons reliably.

Think of quantum dots, one of the single-photon sources described below, like Dots candy: each is a small, self-contained unit that delivers exactly one thing when you bite into it.

A photonic qubit stores quantum information on a single particle of light. Simple to say. Hard to build.

Choosing Your Encoding

We've discussed encoding in earlier posts. Pick a property of a photon to represent 0 and 1. Polarization is the most common choice: horizontal = 0, vertical = 1. And because this is quantum mechanics, a single photon can also be in a superposition of both states at once.

Other options include arrival time (time-bin), path through a chip (path encoding), or photon presence or absence. Each trades off differently depending on what you need downstream.

Creating Single Photons

The first real problem is producing single photons on demand.

  • Spontaneous Parametric Down-Conversion (SPDC): Shine a laser into a special crystal. Occasionally one laser photon splits into two. Detect one, and you know the other exists. The success rate is roughly 1 to 10 out of every 100 laser pulses. Most pulses produce nothing, as the short simulation after this list shows.
  • Quantum Dots: These are tiny semiconductor structures that emit exactly one photon per excitation. The best versions now exceed 99% reliability. The cost is that they only work near absolute zero, around -452°F. The photons travel fine at room temperature, but the sources need dilution refrigerators.
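
To see what a few-percent success rate means in practice, here is a tiny Monte Carlo sketch of the heralded SPDC source from the first item above. The 5% pair probability per pulse is a hypothetical figure inside the 1-to-10-percent range quoted there, not a measured value for any particular source.

    import random

    random.seed(0)
    pair_probability = 0.05     # hypothetical: about 5 pulses in 100 make a pair
    pulses = 100_000

    heralded = sum(1 for _ in range(pulses) if random.random() < pair_probability)
    print(f"{heralded} heralded single photons from {pulses:,} pulses")
    # Roughly 5,000 usable photons; the other ~95,000 pulses produce nothing.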

Manipulation and Interaction

Single-qubit operations on polarization qubits are straightforward. Wave plates rotate polarization by precise amounts. This is well understood and cheap.
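
As a concrete picture of what a wave plate does to a polarization qubit, here is a minimal Jones-calculus sketch: horizontal and vertical polarization serve as the 0 and 1 states, and a half-wave plate with its fast axis at angle theta acts as a small 2x2 matrix. The 22.5-degree setting is the standard choice that turns horizontal light into an equal superposition; this is an idealized model, not an account of any specific device.

    import numpy as np

    # Polarization qubit: horizontal |H> = 0, vertical |V> = 1.
    H = np.array([1, 0], dtype=complex)

    def half_wave_plate(theta):
        """Jones matrix of a half-wave plate with its fast axis at angle theta."""
        c, s = np.cos(2 * theta), np.sin(2 * theta)
        return np.array([[c, s],
                         [s, -c]], dtype=complex)

    # Fast axis at 22.5 degrees rotates H into an equal mix of H and V.
    out = half_wave_plate(np.deg2rad(22.5)) @ H
    print(np.abs(out) ** 2)   # [0.5, 0.5]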

Two-qubit operations are the hard part. Photons normally ignore each other completely. In 2001, three physicists (Knill, Laflamme, and Milburn) showed you could use measurements and fast switching to simulate an interaction between photons. It works, but the basic two-qubit gate historically succeeded about 1 time in 16.

Scaling with Fusion

A newer approach skips reliable two-photon interaction entirely. You create small entangled photon groups, then merge them with simple optical components. When a merge fails, error correction absorbs the loss. PsiQuantum and others are pursuing this fusion-based architecture to reach millions of qubits.

Manufacturing

Photonic chips use the same fabrication processes and factories as conventional silicon chips. No custom facilities are required. One catch is that while the chips are silicon, reading results often requires superconducting detectors. Photons are stable at room temperature, but a full-scale photonic quantum computer will still likely sit inside a cryogenic system to keep the detectors and quantum dot sources cold enough to function.

Photonic qubits have a manufacturing path that superconducting and trapped ion systems do not: existing silicon fabs. The open question is whether photonic qubits can be mass-produced at the volumes a million-qubit machine requires.