Artificial Intelligence (AI) has rapidly become an essential part of everyday life, powering everything from online shopping and voice assistants to hospital diagnostics and law enforcement tools. But as AI spreads across industries, concerns over its unchecked use are growing. In response, U.S. states are beginning to take action with their own AI regulations.
While the federal government is still working on broad national policies, state governments are moving faster, creating a patchwork of laws designed to protect consumers, workers, and businesses. These state-level initiatives are shaping how AI is developed and deployed across sectors such as healthcare, employment, criminal justice, and education.
California Leads the Way in AI Governance
California is often seen as the trendsetter when it comes to technology laws, and it’s no different with artificial intelligence. The state recently passed laws that require companies to be transparent about their use of automated decision-making tools, especially in hiring and housing.

The California Privacy Rights Act (CPRA), for example, gives consumers the right to know if a company is using AI to make decisions about them and allows them to opt out. Businesses must also conduct impact assessments to understand how their AI tools might affect consumers.
In the employment sector, California's proposed bill AB 331 is being closely watched. If passed, it would regulate how AI is used in recruitment and hiring, requiring employers to notify applicants when AI is involved in decision-making.
Illinois Targets Bias in Hiring Algorithms
Illinois was among the first states to focus on how AI affects job seekers. The Artificial Intelligence Video Interview Act, which took effect in 2020, requires employers that use AI to analyze video interviews to inform candidates that AI is being used, explain how it works, and obtain their consent.
This law aims to prevent discrimination that can arise from biased algorithms. For example, some AI tools have been shown to unfairly score candidates based on gender, race, or appearance. By ensuring transparency and consent, Illinois is setting a precedent that other states may follow.
Colorado Passes First-of-Its-Kind AI Accountability Law
In May 2024, Colorado passed a landmark bill, SB24-205, widely known as the Colorado Artificial Intelligence Act. The law targets “high-risk AI systems” that affect people’s access to jobs, healthcare, credit, and housing.
The law requires companies to conduct risk assessments, use reasonable care to prevent algorithmic discrimination, and notify consumers when AI is used in consequential decisions. It also provides for penalties if companies do not comply.
Colorado’s approach balances innovation with consumer protection and may serve as a model for future national regulation.
New York Tackles AI in Workplace Surveillance
New York has been active in addressing how AI is used in workplaces, particularly in surveillance and productivity tracking. State lawmakers are reviewing bills that would restrict employers' use of AI tools that monitor facial expressions, keystrokes, and break times without clear consent and a stated purpose.

In 2023, New York City also began enforcing Local Law 144, which regulates automated hiring tools. Under this law, employers must perform annual bias audits and inform job applicants when AI tools are used in evaluating their qualifications.
Texas, Virginia, and Others Drafting Their Own AI Laws
States like Texas and Virginia are also drafting laws aimed at addressing AI in education, policing, and business practices.
Virginia’s Consumer Data Protection Act (CDPA), for instance, includes provisions on profiling and automated decision-making, giving consumers the right to opt out of profiling used in decisions that produce legal or similarly significant effects.
Texas lawmakers are proposing bills focused on facial recognition technology, especially in law enforcement and school safety, where concerns about privacy and misuse are high.
These state-specific efforts reflect a growing recognition that oversight of AI needs to keep pace with its adoption.
Key Industries Affected by State AI Regulations
Healthcare
AI is being used in diagnostics, patient data analysis, and treatment plans. States are beginning to regulate how these systems operate to prevent misdiagnosis and data breaches.
Employment
Many companies now use AI to screen resumes, analyze interview responses, and even monitor employee performance. States are pushing for transparency and fairness in these processes.
Education
AI-based learning platforms are being regulated for data privacy and algorithmic bias. States want to ensure students aren’t unfairly judged or tracked.
Law Enforcement
Facial recognition, predictive policing, and other AI tools are under scrutiny due to risks of racial profiling and false identification. States are trying to enforce ethical use.
Federal Action Still on the Horizon
While state laws continue to emerge, the federal government is also considering action. President Biden signed an Executive Order in 2023 promoting “safe, secure, and trustworthy AI.” However, comprehensive national regulation is still a work in progress.
Federal lawmakers are studying state-level efforts to craft a balanced national framework. Experts suggest that coordination between state and federal governments will be crucial to avoid confusion and ensure consistency.
Conclusion: AI Laws Are Evolving, One State at a Time
The United States is witnessing a new wave of tech regulation, with states taking the lead in managing the risks of artificial intelligence. While each state has its own approach, the message is clear: AI must be fair, transparent, and accountable.
As AI continues to evolve, so will the laws that govern it. Industry leaders, consumers, and policymakers must stay informed and involved to ensure this powerful technology is used responsibly.
For the latest updates on U.S. AI laws, visit the National Conference of State Legislatures.