AI regulation is not converging on a single federal statute. Instead, rulemaking is fragmenting by sector: agencies, statehouses and city halls are issuing procurement memos, safety clearances, bias-audit guidance and contract clauses that will shape AI in hospitals, hiring, schools and government. Organized interests that engage these specific processes will determine outcomes far more than advocates chasing a one‑size‑fits‑all law.
If you want to know what AI will mean for most people next year, stop waiting for Congress to pass a sweeping national statute and start paying attention to the rulemaking calendars inside agencies and the ordinance dockets in state capitals and city councils. The real leverage over how AI touches everyday life is being built piece by piece: device approvals in health care, bias-audit requirements in hiring, procurement clauses in federal contracts, sectoral privacy rules for schools, and consumer-protection enforcement for platforms. Each of those decisions solves a concrete problem for a particular constituency, and each becomes a template the rest of the market adopts.

That fragmentation is not accidental; it is the predictable outcome of how policymaking actually works. Government actors with programmatic authority (regulators, procurement officers, grantmakers, and agency counsel) can write binding rules and conditions for market access long before a hypothetical omnibus AI law reaches the floor. As the writer and academic Ethan Mollick put it bluntly on X: “Groups and movements that can build & get implemented clear policies will have an outsized impact on the chances that AI is used in the way that they want.” He wasn’t offering platitudes; he was describing a straightforward political fact: if you want to shape AI, you show up at the specific venue where the software meets the public (hospitals, hiring systems, school districts, or the federal procurement office) and you make a rule that favors, or forbids, particular architectures and business models.

Consider health care. The Food and Drug Administration has already created an administrative pathway that treats AI-enabled diagnostics as medical devices, with explicit safety and update rules that companies must satisfy to reach clinicians and patients. The agency cleared the first autonomous AI diagnostic for diabetic retinopathy in 2018 and followed with a formal AI/ML action plan directing how learning systems should be reviewed and monitored. That regulatory scaffolding makes it feasible for one class of companies (those prepared to run clinical trials, document training data, and meet post-market surveillance obligations) to capture clinical use cases; it closes off others whose business models depend on rapid, opaque iteration. In health care, the FDA’s gatekeeping choices decide what kinds of AI are worth building and who can sell them.

A parallel logic has played out in hiring, where cities and states have taken the lead because federal law is thin. New York City’s requirement that employers using automated employment decision tools commission independent bias audits and publish the results has forced vendors and customers to rethink products that once promised black-box efficiency. Vendors that marketed facial-expression analysis as a hiring tool faced intense scrutiny and, in some cases, market collapse; several publicly abandoned such features as customers balked at the legal and reputational risk. In short, a municipal ordinance, tailored to the labor market and backed by enforcement, reshaped what employers will pay for.

Procurement is where the federal government is turning its clout into policy. The Office of Management and Budget has issued government-wide memoranda directing agencies to adopt pre-deployment impact assessments, transparency practices, and procurement clauses requiring vendors to disclose data uses and to accept limits on re-using government data to train commercial models.
Those memos do not regulate the entire private sector, but they change the shape of the federal market, the single largest buyer of IT in the United States, and that has cascading effects. Vendors that cannot comply with contract terms mandating model documentation, change-control notifications, and security testing will be shut out of a huge revenue stream; those that can will treat federal procurement as a growth engine and build products to that spec.

Education shows the same pattern at the state and district levels. Federal student-privacy law has not been rewritten for generative models, but the Department of Education and large districts have published guidance and procurement priorities that require vendors to avoid training models on student data and to disclose when AI is used in the classroom. Those rules matter more to edtech startups than an amorphous federal statute would: school districts buy on much more than price, and data governance, compliance, and the ability to integrate with existing student information systems are decisive.

Consumer protection and civil-rights enforcement provide a final example of how sectoral authorities shape outcomes. The Federal Trade Commission has moved from threat-spotting to enforcement under existing statutes, targeting deceptive marketing, privacy lapses, and unfair practices involving AI. Civil-rights agencies have signaled that disparate-impact frameworks apply to algorithmic systems in housing, lending, and employment. Again, agency choices about how to interpret broad statutes against specific AI behaviors create binding constraints on particular business models: make deceptive claims about a chatbot’s medical competence and the FTC will treat it like any other fraudulent ad; use biased scoring in mortgage underwriting and housing regulators will come knocking.

This is a political battleground that organized interests already understand. Trade associations, big vendors, and well-funded advocacy groups spend the bulk of their energy lobbying agencies, filing public comments on RFIs and proposed memoranda, and building relationships with agency subject-matter experts. Those engagements are granular and technical, and they reward groups that can mobilize experts, white papers, and test cases. It is easier to change the shape of a procurement clause or an FDA guidance document than to rewrite a statute, which would require months of floor votes and a cross-party coalition.

That is why tactics matter more than slogans. Campaigns that demand a single national law underestimate how much capacity sits in agencies and local governments, and they overestimate how quickly Congress can act. Conversely, groups that build technical standards, pilot programs, audit frameworks, and procurement playbooks will bend the technology to their interests far faster. The fragmented structure of authority favors focused, durable rulemaking over abstract national blueprints.

If you care which firms win, which harms are prioritized, and which rights get protected, the single best bet is to follow the docket: FDA premarket guidance, OMB procurement memoranda, EEOC pronouncements on discrimination in automated hiring, school-district AI policies, state privacy bills with education carve-outs, and municipal automated-decision-system (ADS) ordinances. Those are the levers that translate public values into code and contracts. The result will be a patchwork: not elegant, not uniform, but intensely consequential.
Sectoral policy won’t be an accident; it will be the product of organized interests showing up where the rules are actually written. If you want to write the future of AI, pick the venue and start drafting. Pulling together a coherent national strategy is still worth pursuing, but until a comprehensive statute passes, expect the sectors to write the rules. Whoever builds the best standards playbook in health, hiring, education, procurement, and consumer protection will shape how the rest of the market learns to build and to live with AI.