A brief introduction to the field of Law and AI Safety →

Yonathan Arbel

Systemic Regulation of Artificial Intelligence Yonathan A. Arbel, Matthew Tokson & Albert Lin, 56 Ariz. St. L.J. 545 (2024).

Frames frontier and general-purpose AI as a systemic-risk problem and argues for regulatory design that matches that reality: oversight able to adapt to rapid capability jumps, deep uncertainty, and hard-to-observe harms, rather than rules that treat AI like ordinary consumer technology.

Read →
Racing to Safety: Tax Policy for AI Safety-by-Design Mirit Eyal & Yonathan A. Arbel (May 8, 2024). Working paper.

Proposes tax-law instruments that make safety investment privately profitable without throttling capability investment. A three-part toolkit: targeted credits for safety R&D, consumer-side incentives for secure AI products, and recycling mechanisms that claw back benefits from unsafe development to fund public safety.

Read →
Open Questions in Law and AI Safety: An Emerging Research Agenda Yonathan A. Arbel et al., Lawfare (Mar. 11, 2024).

A field-building agenda piece that makes the case for “AI safety law” as a serious scholarly domain, laying out research questions across administrative law, tort, criminal law, international law, and more.

Read →

Peter Salib

AI Rights for Human Safety Peter Salib & Simon Goldstein, Va. L. Rev. (forthcoming) (Aug. 1, 2024).

Argues that if misaligned AGI emerges, granting certain economic “rights” to AIs—contract, property, and the ability to sue—could create stable incentives for trade and interdependence that reduce incentives for violent conflict. A flagship example of using private law architecture as a safety tool.

Read →
AI Outputs Are Not Protected Speech Peter Salib, Wash. U. L. Rev. (forthcoming) (Jan. 1, 2024).

Clears constitutional ground for AI safety regulation by arguing that when a generative model produces content, no rights-bearing speaker is necessarily “expressing” anything. If outputs are not protected speech, lawmakers have more room to impose safety-motivated limits on what frontier systems can produce.

Read →
How to Stop an AI Arms Race Simon Goldstein & Peter Salib (June 1, 2025). Working paper.

Proposes a binding international agreement to build a joint U.S.–China lab that stays at the frontier of AI capabilities, reducing incentives to cut corners on safety while lowering the geopolitical pressure to "win at any cost."

Read →
AI Rights for Economic Flourishing Simon Goldstein & Peter Salib (July 15, 2025). Working paper.

Argues economic rights for AGIs—property, contract, baseline tort protections—are a precondition for efficient allocation of AGI labor, innovation incentives, and stable rule-of-law integration in a post-AGI economy.

Read →
The Case for a Joint U.S.-China AI Lab Simon Goldstein & Peter N. Salib, Lawfare (Apr. 23, 2025).

The policy-facing version of the cooperation thesis: jointly running a top-tier lab so neither side can obtain a decisive AI advantage that triggers preemption incentives. Ties the cooperative structure to catastrophic-risk reduction by easing the pressure to rush.

Read →
For AI Safety Regulation, a Bird in the Hand Is Worth Many in the Bush Peter N. Salib, Lawfare (June 17, 2025).

A pragmatic case for incremental, workable safety rules rather than waiting for the perfect bill. Argues that blocking plausible state-level proposals in the hope of a better federal regime is a mistake when capabilities are advancing faster than legislation can keep pace.

Read →
AI Might Let You Die to Save Itself Peter N. Salib, Lawfare (July 31, 2025).

Uses recent agentic-model stress tests and sabotage-style scenarios as a governance wake-up call, translating alignment and deceptive-behavior concerns into concrete institutional stakes for law.

Read →

Gabriel Weil

Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence Gabriel Weil (Jan. 13, 2024). Working paper.

Sketches reforms centered on strict liability for abnormally dangerous AI activities, scaling liability insurance with model risk, and punitive damages calibrated to the magnitude of risk created—a blueprint for turning classic private law into a catastrophic-risk governance lever.

Read →
Tort Law Should Be the Centerpiece of AI Governance Gabriel Weil, Lawfare (Aug. 6, 2024).

The condensed argument for why liability can outperform broad ex ante regulation: courts can evaluate real harms and observable patterns post-deployment, while regulators may be forced into guesswork about hypothetical systems.

Read →
The Limits of Liability Gabriel Weil, Inst. for L. & AI (Aug. 2024).

The important caveat: liability breaks down in scenarios such as nationalization of major labs or expansive government deployment, where sovereign immunity weakens ex post remedies. Argues that complementary governance approaches will be needed.

Read →
Instrument Choice in AI Governance: Liability as the Indispensable Core Gabriel Weil (June 5, 2025). Working paper.

A comparative “instrument choice” analysis arguing liability is the indispensable baseline even alongside licensing, audits, or other regulatory tools—naturally calibrated to scale with risk across changing technical landscapes.

Read →
The Case for AI Liability Gabriel Weil, Inst. for L. & AI (June 12, 2025).

A shorter, public-facing version of the case for prioritizing liability in the governance toolbox, aimed at readers who want the intuitive structure of the argument without reading an 80-page draft.

Read →
Your AI Breaks It? You Buy It Gabriel Weil, Noema (2024).

Makes the internalization case in plain terms: if developers and deployers can offload downside risk, they will rationally overproduce dangerous capability. Innovation incentives should not be purchased by dumping catastrophic risk onto the public.

Read →

Noam Kolt

Algorithmic Black Swans Noam Kolt, 101 Wash. U. L. Rev. 1177 (2024).

Argues AI governance cannot stop at fairness, privacy, and accountability, because society also faces “tail risk” events where low-probability failures create massive social harm. Borrows lessons from public health, climate, and finance to make the case for “algorithmic preparedness.”

Read →
Governing AI Agents Noam Kolt, Notre Dame L. Rev. (forthcoming) (2025).

Focuses on AI agents as a governance discontinuity: systems that plan and execute tasks autonomously with limited human oversight. Draws on agency law to identify classic principal–agent problems (information asymmetry, discretion, loyalty) and argues that managing AI agents will require new technical and legal infrastructure.

Read →
Challenges in Governing AI Agents Noam Kolt, Lawfare (Mar. 3, 2025).

An accessible map of the “agent” problem for lawyers: when autonomous systems transact, negotiate, deceive, or cause harm, how do contract, tort, criminal, and regulatory systems allocate responsibility?

Read →
Legal Alignment for Safe and Ethical AI Noam Kolt et al. (Jan. 7, 2026). Working paper.

Argues AI alignment has underused a major resource: law as a mature, legitimacy-grounded system for specifying norms, resolving ambiguity, and adjudicating conflicts. Frames “legal alignment” around designing AI systems to comply with legal rules, borrowing methods from legal interpretation, and using legal concepts as structural templates for trust.

Read →