As Artificial Intelligence Becomes More Self-Regulated at the Federal Level, Employers Must Ensure Compliance with State & Local Laws
KEY TAKEAWAYS:
- The federal government seeks to embrace the Artificial Intelligence revolution, de-regulate the industry, and solicit significant investment to spur exponential growth
- The federal government’s hands-off approach to regulation and enforcement has led states, like New York, to enact their own AI regulations and restrictions
- New York City made the first attempt to regulate AI in the employment context, an effort the State of New York hopes to build upon and strengthen across all facets of daily life
As 2025 draws to a close, employers and business owners can fairly say that the Trump Administration’s embrace of the Artificial Intelligence (AI) revolution was a major focal point of the administration’s first-year agenda. Indeed, the Trump Administration views AI as a tool to spur economic growth and competitiveness, one that requires little regulation for the industry to thrive. Recently, a question of federalism has entered the public conversation: should the federal government regulate this sector, or do the states have an independent right to enact legislation and promulgate rules governing AI? As employers and business owners expend resources to leverage AI to gain an edge in the market and modernize the workplace, they would be well advised to consider the developing regulatory framework for AI at the state and local level.
What is the Federal Government’s AI Policy?
The Biden Administration previously issued an executive order aimed at protecting workers from unfair bias resulting from AI use. That order required safety reporting and the creation of an institute to establish AI standards and risk assessments. Conversely, the executive order issued under President Trump, entitled “Removing Barriers to American Leadership in Artificial Intelligence,” repealed President Biden’s order, marking a shift toward a policy that minimizes regulation and encourages maximum growth of AI.
In addition to the Trump Administration’s repeal of the Biden Administration’s executive order, agencies such as the Equal Employment Opportunity Commission and the United States Department of Labor removed AI-related guidance and best-practices documents from their respective websites. This action signaled that the agencies would no longer enforce audit requirements or promote transparency in employers’ use of AI tools in the workplace. While the federal government takes a hands-off, self-regulatory approach to compliance with existing anti-discrimination and labor laws as they relate to the use of AI, various states and local jurisdictions, like New York City, have taken a much stricter approach.
The Trump Administration recently issued another executive order directing the federal government to sue states that regulate AI and to cut their funding, as the administration views AI regulation as a power belonging solely to the federal government. While this executive order represents the current administration’s approach to governing AI, it is not legislation with lasting effect. The order directs certain special advisors to the president to prepare legislative recommendations to Congress proposing a uniform federal policy framework for AI that would preempt state AI laws that conflict with the administration’s policy. No such federal legislation exists to date.
What are the AI Policies in New York City and New York State?
New York City was one of the first jurisdictions in the country to enact an AI safety law through the City Council’s enactment of Local Law 144, which regulates automated tools used in workplace hiring and promotion decisions. More specifically, it requires employers to conduct a bias audit, publish a summary of the results on their websites, and notify candidates if such a tool will be used in hiring decisions. The law, however, applies only to employers that rely primarily on those tools in making decisions, not to employers who predominantly rely on non-AI tools, i.e., human decision makers.
New York State attempted, but failed, to enact legislation building on the New York City law. Despite the state’s failed first attempt, many expect New York to deliver a comprehensive AI legislative scheme soon. The framework is already taking shape with laws that regulate daily life. In November 2025, Gov. Hochul signed a bill requiring companies that provide AI companions to implement certain safety features and to notify users in order to interrupt sustained use. On December 11, 2025, Gov. Hochul signed a bill requiring disclosure when advertisements feature digitally created people generated by AI, and another bill that expanded on prior legislation and now provides damages for the unauthorized use of an AI-generated likeness of a deceased person without prior consent from the decedent’s heirs.
Currently, one AI-related bill awaits Gov. Hochul’s signature that will likely have a significant impact on New York businesses. The Responsible AI Safety and Education Act, or “RAISE Act,” outlines a list of transparency requirements regarding frontier model training and deployment, safety plans, and disclosures. A first-time violation of the law carries a fine of up to $10 million, with fines of up to $30 million for subsequent violations. The State Legislature takes the view that the law has not kept pace with rapidly evolving AI and that the window for proactive risk prevention is closing fast. Therefore, it is reasonable to expect more AI-related regulations in New York to come to the forefront in 2026.
Employers should know the current state of AI law, as the legislature may enact regulations that impact their business, hiring processes, and employee relations. Specifically, forthcoming regulations could include any of the following requirements that failed to pass the first time: (i) restrictions on monitoring and surveilling workers; (ii) creation of a private right to sue for violations of AI law; (iii) requirements that employers conduct routine bias audits and release the resulting reports; and (iv) requirements that employers notify applicants and workers when AI is used to make employment decisions.
What Can You Do?
The AI legal landscape is complex and ever-changing, which makes it all the more important for your company to stay ahead of state and local regulations. Here are a few steps you can take to ensure that your company complies with current law:
- Thoroughly research and conduct due diligence on your company’s AI vendor(s);
- Conduct an audit of your company’s internal AI systems and tools if it develops or deploys AI, especially in the employment-decision-making context;
- Develop reliable and effective risk-management systems to protect your company from AI issues and avoid AI-related pitfalls;
- Develop processes for accurately and efficiently disclosing audits of the AI systems used and any potential violations;
- Train management and employees on AI best practices;
- Frequently monitor legislation and legal developments at the federal, state, and local levels, while also reviewing any guidance issued by the relevant agencies; and
- Engage competent counsel who can advise your company on best practices in connection with the use of AI and who can help your company navigate the evolving legal landscape.
For any questions relating to the above, or for assistance with any employment and labor topic, please contact:
- Russell J. Edwards
- Cali L. Chandiramani
- Caroline J. Berdzik
- Stephen C. Mazzara
- Scott R. Green
- Christopher P. Maugans
- Or another member of the Employment and Labor practice group.