THIS WEEK IN AI

Candice Bryant Consulting
Strategic Intelligence & Public Affairs

THE GOVERNANCE GAP

AI capabilities are accelerating toward—and potentially past—human-level intelligence faster than most expected. But as tech outpaces policy, we are facing a fundamental governance gap. We haven't resolved basic questions around guardrails, accountability, and control, even as we've already deployed "human-on-the-loop" AI systems into the wild. These questions are becoming urgent as global strategic alliances fracture and nuclear arms control frameworks expire.

This week, I’m tracking a standoff between the Pentagon and Anthropic over how it can use Claude, a Waymo accident involving a pedestrian, and the impending expiration of New START this Thursday—all set against the backdrop of Anthropic CEO Dario Amodei’s new 38-page essay, "The Adolescence of Technology," which maps a "battle plan" for surviving the risks of powerful AI.

For those already caught up on the headlines, skip ahead to “What I’m Watching” for analysis, including why Christopher Nolan’s Batman trilogy is a useful metaphor for AI governance.

ANTHROPIC AND PENTAGON STANDOFF — The Pentagon and Anthropic are reportedly at a standstill over a contract worth up to $200 million. The clash centers on safeguards that would prevent the government from deploying Anthropic’s technology for autonomous weapons targeting and domestic surveillance. While Anthropic has pushed for contractual limits to ensure human oversight, Pentagon officials have reportedly "bristled" at these restrictions, arguing that the military should be free to deploy commercial AI regardless of a company's internal policies, so long as the use complies with U.S. law.

AMODEI ESSAY — The tension reflects a fundamental debate highlighted in Amodei’s latest essay, “The Adolescence of Technology.” Amodei warns that “powerful AI”—which he defines as systems exceeding Nobel-level intelligence across most relevant fields—could arrive in “as little as 1–2 years.” In what he describes as “the single most serious national security threat we’ve faced in a century,” he outlines five civilizational risks: autonomy, destruction, power-seizing, economic disruption, and indirect effects. He asks readers to put themselves in the role of National Security Advisor and argues we must only use AI for national defense in ways that do not make us “more like our autocratic adversaries.”

AUTONOMOUS VEHICLE SAFETY — A Waymo autonomous vehicle struck a child near a Santa Monica elementary school on January 23. The vehicle was traveling at 17 mph when a child emerged from behind a double-parked SUV; the AI braked hard, reducing speed to under 6 mph before contact. The child sustained only minor injuries and stood up immediately after the impact. Waymo’s subsequent analysis showed that a fully attentive human driver in the same situation would have made contact at approximately 14 mph—an impact carrying roughly 5.4 times the force and significantly increasing the risk of serious injury.

NUCLEAR ARMS CONTROL — New START, the last remaining bilateral nuclear treaty between the U.S. and Russia, is set to expire February 5. No negotiations for a replacement framework are underway, marking the first time in decades that the world's two largest nuclear arsenals will face no formal caps. This expiration removes the final formal constraints at the exact moment AI is beginning to integrate into nuclear command and control and strategic decision-support.

WHAT I'M WATCHING

We're delegating control to AI systems faster than we're building accountability frameworks for them. That's the pattern connecting these three stories. Dario Amodei argues we could reach "powerful AI"—systems performing at expert level across most domains—within one to two years. He frames this as "a country of geniuses in a datacenter."

Another accessible way to think about this is Batman. I use this metaphor because Batman is an expert across multiple domains—detective, strategist, technologist, and fighter—all at once. He is not superhuman. It’s the "stacking" of these capabilities that makes him extraordinary. In many ways, the Nolan trilogy foreshadows the complexity of AI governance. Gotham first celebrates a powerful new force, then fears it, then tries to regulate it, and ultimately learns to live with it.

But here’s where theory meets reality. Ethan Mollick’s Co-Intelligence helped popularize the phrase "human in the loop," but "human-on-the-loop" systems, where humans supervise autonomous decisions rather than approve each one, are already here. Autonomous vehicles are just one example. This is the tension Amodei is surfacing in his essay as we navigate how AI is used in defensive and offensive weapons.

Which brings us to New START. History offers a cautionary tale: the Nuclear Nonproliferation Treaty arrived 23 years after the atomic bomb; the original START arrived 46 years later. We built guardrails around nuclear weapons only after experiencing or narrowly avoiding catastrophe.

In "The Adolescence of Technology," Amodei frames our current era as a "technological adolescence"—a dangerous rite of passage where humanity is being handed almost unimaginable power before our social and political systems have matured enough to handle it. He compares the role of AI companies to that of a parent preparing a child for adulthood, even describing Constitutional AI as a "letter from a deceased parent" designed to guide the child's values when the parents are no longer around.

The companies that will shape this next era are those willing to put on the national security advisor hat Dario talks about. It requires holding multiple competing imperatives at once: engaging seriously with ethical questions and guardrails while acknowledging the global landscape and the sobering reality that adversaries may make different choices.

— Candice

I hope you found this briefing useful. Please forward it to anyone else who might benefit; they can sign up here.

