The CREATE AI Act is making waves in Washington, and its momentum is reigniting a major debate in the U.S.: should AI regulations be led by the federal government, or should individual states have the right to create their own rules? This question is more than political—it could shape the future of artificial intelligence across every industry and affect the daily lives of millions of Americans.
As AI tools like ChatGPT, Midjourney, and countless enterprise-grade solutions continue to grow in influence, the need for clear rules becomes more urgent. The CREATE AI Act, short for Creating Resources for Every American To Experiment with Artificial Intelligence Act, aims to set a national framework for AI governance. But not everyone agrees on how, where, or who should regulate these powerful technologies.
What Is the CREATE AI Act?
The CREATE AI Act was introduced to support innovation, research, and ethical development in the field of artificial intelligence. The bill’s key goal is to establish the National AI Research Resource (NAIRR), a network of hubs across the U.S. that would give universities, startups, and public researchers access to the high-performance computing power and datasets needed to build and test AI models.
Here’s what the bill includes:
- Funding for publicly accessible AI infrastructure
- National standards for ethical AI use
- Support for academic and small business AI research
- Privacy protections for data used in AI training
The act is gaining bipartisan support and has moved forward in recent congressional sessions. But with that progress comes new questions—especially from states that have already passed or proposed their own AI rules.
The Need for AI Regulation Is Urgent
AI is evolving fast. From generating artwork and writing business emails to diagnosing diseases and informing decisions in hiring and policing, AI tools touch more parts of our lives than ever before.
But without proper guardrails, AI can lead to:
- Bias in decision-making (e.g., facial recognition wrongly identifying minorities)
- Job losses due to automation
- Misinformation through deepfakes or AI-generated content
- Privacy concerns from data being scraped and used without consent
That’s why governments worldwide, from the EU with its AI Act to Canada with its proposed Artificial Intelligence and Data Act (AIDA), are racing to create legal frameworks. The CREATE AI Act is America’s bold step in the same direction.
Federal vs State-Level AI Rules: The Battle Begins
As the CREATE AI Act gains support in Congress, states like California, New York, Illinois, and Texas are pushing their own AI legislation. These state-level bills vary widely in scope, from facial recognition bans to mandatory algorithm audits.
This raises a serious challenge:
- Should there be one federal law to rule them all?
- Or should states be allowed to make their own rules to suit local values and needs?
Pros of Federal AI Regulation
- Uniformity: One law for all 50 states helps companies comply more easily.
- Interstate Business: Tech companies operate across state lines, and different laws can create confusion.
- National Security: AI impacts defense, surveillance, and global competition—areas where federal oversight is essential.
Pros of State AI Regulation
- Local Control: States can act quickly on issues that matter to their people.
- Experimentation: States can try different approaches to find what works.
- Checks and Balances: States can act when the federal government is slow or influenced by tech lobbyists.
Why the Debate Matters
The outcome of this debate could influence:
- How fast new AI tools reach the public
- Whether startups can afford to enter the market
- How protected people are from harmful or biased AI systems
- Whether the U.S. remains competitive in global AI innovation
Tech giants like Google, Meta, and OpenAI are lobbying for federal regulation, fearing a messy patchwork of state laws that could slow innovation. Meanwhile, civil rights groups are pushing for strong local laws, especially to protect marginalized communities from surveillance and biased algorithms.
The Role of the White House and Federal Agencies
President Biden’s Executive Order on Safe, Secure, and Trustworthy AI (signed in October 2023) laid the groundwork for a national AI strategy. It tasked agencies like the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST) with developing AI safety and fairness standards.
Now, with the CREATE AI Act moving forward, these efforts could become law. If passed, the bill would coordinate with agencies to:
- Fund AI research centers
- Set ethics guidelines for developers
- Provide grants to schools and small businesses working on AI
- Encourage public-private partnerships for responsible AI development
What States Are Doing on Their Own
California
- Considering the Automated Decision Systems Accountability Act, a bill that would require transparency in AI tools used for hiring and housing decisions.
Illinois
- Already has the Biometric Information Privacy Act (BIPA), which affects facial recognition and voice analysis tech.
New York
- Exploring laws that would require disclosure of AI use in employment decisions and content moderation.
Texas and Florida
- Focusing more on freedom from surveillance and protecting personal data from being misused by AI tools.
These laws are designed to address specific local concerns but can clash with national approaches.
What Industry Experts Are Saying
Brad Smith, President of Microsoft, has said:
“We need national standards. Companies want clear rules, not 50 sets of them.”
Joy Buolamwini, founder of the Algorithmic Justice League, counters:
“Federal law must not water down the protections some states have already put in place. We need both national and local efforts.”
Andrew Ng, leading AI researcher, suggests a hybrid model:
“Let the federal government set a floor, not a ceiling. States can build stronger protections on top.”
The Global Context
While the U.S. debates federal versus state approaches, other countries are moving fast:
- European Union: Passed the EU AI Act, the world’s first comprehensive AI regulation
- China: Has strict content rules and heavy government oversight on AI
- Canada: Proposing the national Artificial Intelligence and Data Act (AIDA) for transparency and ethical use
If the U.S. doesn’t move forward with a national strategy, it risks falling behind in global leadership.
What Happens Next?

As of mid-2025, the CREATE AI Act has cleared multiple committees and is being debated in both the House and Senate. Its future depends on:
- Continued bipartisan support
- Negotiations with state leaders
- Balancing innovation with safety
If passed, it could become the cornerstone of U.S. AI governance. But if the state-versus-federal fight intensifies, progress may slow, or the final law may end up a weaker compromise.
What Should You Watch For?
As the CREATE AI Act develops, keep an eye on:
- Amendments that change its scope or funding
- State responses: will states strengthen their own laws, or will federal rules preempt them?
- Tech industry reactions—will companies embrace or resist?
- Public feedback—are citizens getting a say in how AI affects their lives?
Final Thoughts: A United AI Future?
The path forward is not simple. The U.S. must find a balance between national unity and local flexibility when it comes to regulating AI. The CREATE AI Act could be a major step in the right direction, especially if it allows room for states to address their unique needs while avoiding confusion or conflict.
In the end, the goal should be to make sure AI serves everyone—safely, fairly, and transparently. That means we need smart laws, open discussion, and shared responsibility—from both Washington and our state capitals.