Trump DOJ Forces Colorado To Back Off Its 'Woke AI' Law | WLT Report



Assistant Attorney General Harmeet Dhillon, whose Civil Rights Division intervened in xAI's lawsuit against Colorado's AI law.
Official portrait of Assistant Attorney General Harmeet K. Dhillon. Source: U.S. Department of Justice / Wikimedia Commons, public domain.

President Trump’s Justice Department just stepped into one of the biggest AI fights in the country, and Colorado is already backing down.

The fight centers on Colorado’s SB24-205, a so-called algorithmic discrimination law that xAI says would force AI companies to build government-approved DEI outcomes into their models.

That is why this story matters.

This is not just a dry technology dispute. It is a free speech fight, a civil rights fight, and a major test of whether blue states can pressure private AI companies to bend outputs toward ideological goals.


The Justice Department put the constitutional problem this way:

The Justice Department moved to intervene in a lawsuit filed by artificial intelligence company xAI, challenging a new Colorado law that prohibits so-called “algorithmic discrimination.” The Justice Department alleges that the Colorado law violates the Equal Protection Clause of the Fourteenth Amendment by requiring AI companies to prevent unintentional disparate impact that their products could have based on protected characteristics like race and sex, and by exempting liability for certain forms of discrimination designed to advance “diversity.”

“Laws that require AI companies to infect their products with woke DEI ideology are illegal,” said Assistant Attorney General Harmeet K. Dhillon of the Justice Department’s Civil Rights Division. “The Justice Department will not stand on the sidelines while states such as Colorado coerce our nation’s technological innovators into producing harmful products that advance a radical, far left worldview at odds with the Constitution.”

“America’s success in the AI race will depend on removing barriers to innovation and adoption across sectors,” said Assistant Attorney General Brett A. Shumate of the Justice Department’s Civil Division. “Laws like Colorado’s that force AI models to produce false results or promote ideological bias threaten national and economic security and must be stopped.”

The statute, Colorado SB24-205, requires AI “developers” and “deployers” to satisfy certain disclosure, reporting, and prevention requirements when creating algorithm products designed for services like mortgage lending, student admissions, and job-candidate selection. But the statute has an explicit carveout for discriminatory algorithms designed to advance “diversity” or “redress historic discrimination.” AI company xAI filed a lawsuit challenging the statute on April 9.

That is a major escalation from the Trump DOJ.

And it is not hard to see why the administration moved quickly.

The Justice Department also explained exactly what Colorado’s statute would require, and why the DEI carveout is such a central issue:

The Justice Department’s release lays out the machinery of SB24-205 far more clearly than a headline can. The law does not merely ask AI companies to post a disclosure. It places developers and deployers under state-level duties tied to algorithmic discrimination, then applies that regime to high-impact areas like mortgage lending, student admissions, and job-candidate selection.

The key problem, according to the Trump DOJ, is the constitutional double standard built into the law. Colorado would make companies police allegedly discriminatory outcomes while preserving a carveout for algorithms designed to advance diversity or redress historic discrimination. That is why xAI sued, and why the Justice Department moved to intervene. The administration’s argument is that a state cannot pressure AI companies to shape outputs around DEI ideology, compel speech, or create different legal treatment based on race and sex while calling it consumer protection.


Translation: Colorado wanted to regulate supposedly discriminatory AI, while leaving room for discrimination if it is done in the name of diversity or historic redress.

That is the kind of double standard the Trump DOJ is now challenging head-on.

Breitbart reported the political and legal win after Dhillon discussed the case on Breitbart News Saturday:

Breitbart’s report added the political timeline and the inside-baseball significance of the win. Dhillon discussed the case on Breitbart News Saturday after the DOJ joined xAI’s lawsuit, marking what the outlet described as the first time the Justice Department had intervened in a case challenging state AI regulations. The report said Colorado initially agreed not to enforce SB24-205 against xAI, then widened that standstill so the law would not be enforced against anyone while lawmakers work on a fix.

The big point from Dhillon was that the Civil Rights Division is not supposed to be a weapon for left-wing racial balancing. Breitbart reported that she framed the Colorado law as an attempt to make companies and municipalities examine AI outcomes, then racially balance or adjust algorithms to match demographic targets. She said Colorado’s approach is not required by federal law, is in fact prohibited by it, and is made worse by the statute’s carveout allowing discrimination when it is framed as remedying past discrimination.

That is why Dhillon called the result “pretty much a total win” for American consumers and companies.

That last point is key.

For years, conservatives have watched the left try to smuggle DEI rules into schools, corporations, hiring, finance, and government programs. Now the same fight is moving into artificial intelligence.


Rocky Mountain Voice added useful Colorado-specific detail on how quickly the state’s position changed once the DOJ entered the case:

Rocky Mountain Voice filled in the Colorado-specific procedural move that made this more than a press-release fight. The outlet reported that Colorado Attorney General Philip Weiser agreed not to initiate enforcement actions under SB24-205, including investigations, until two weeks after the court rules on xAI’s request for a preliminary injunction. That matters because the law had been set to begin enforcement on June 30, but the agreement changes the pressure point and gives the court fight room to play out.

The timing was also important. Rocky Mountain Voice noted that the court granted the federal government’s intervention and the enforcement standstill on the same day. In other words, once the Trump DOJ entered the case, Colorado’s posture changed quickly. The outlet also reported that lawmakers still have to decide whether to replace or rewrite the law before the end of the legislative session, leaving Colorado with a political problem on top of the constitutional one.

For xAI and the broader AI industry, that means the immediate threat of enforcement was put on ice while the bigger fight continues.

The original state bill summary shows why this was such a sweeping law.

The Colorado General Assembly described SB24-205 this way:

On and after February 1, 2026, the act requires a developer of a high-risk artificial intelligence system (high-risk system) to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination in the high-risk system. There is a rebuttable presumption that a developer used reasonable care if the developer complied with specified provisions in the act, including making available to a deployer of the high-risk system a statement disclosing specified information about the high-risk system, making available to a deployer of the high-risk system information and documentation necessary to complete an impact assessment of the high-risk system, and publicly disclosing a statement summarizing the types of high-risk systems the developer makes available.

The bill summary also shows why deployers would have been pulled deep into compliance. The state described impact assessments, consumer notices, risk-management policies, attorney general oversight, and rulemaking power as part of the broader framework. That is a sweeping amount of government involvement in how high-risk AI systems are built, documented, deployed, and defended.

That is a lot of state power over tools that increasingly shape speech, search, research, hiring, commerce, and public information.

And now, for the first time, the Trump DOJ is making clear that state-level AI regulations cannot become backdoor DEI mandates.

This could become one of the most important fronts in the next phase of the culture war.

Because whoever controls the rules for AI may end up controlling the information Americans see, the answers they get, and the speech companies are allowed to produce.


For now, Colorado blinked.
