
Artificial Intelligence is Now a Reality and So Are the Dangers Associated with Implementing it in Employment Decisions




  • Employers should be aware of the potential dangers and legal pitfalls in using Artificial Intelligence in the recruitment, hiring and promotion of employees.

  • Employers cannot pass liability onto a third-party vendor who developed the AI selection software if it is found to be discriminatory.

  • Administrative agencies, such as the EEOC, are showing a willingness to litigate AI-related matters using federal, state and local laws.

The benefits and potential dangers of Artificial Intelligence have long been portrayed in books, movies, and television; however, two decades into the 21st century, AI technology has quickly moved from science fiction to reality. The AI chatbot ChatGPT recently set the record for the fastest-growing user base ever, with an estimated 100 million monthly active users. AI has also become a flashpoint in the ongoing 2023 Writers Guild of America Strike. And a February 2022 survey from the Society of Human Resources Management found that 79 percent of employers use AI and/or automation for recruitment and hiring.

Employers’ adoption of AI technology in hiring and recruitment can help streamline the process of finding strong candidates, but it has also created significant opportunities for unlawful discrimination and other legal pitfalls. The Equal Employment Opportunity Commission has been issuing guidance to address these possibilities as employers increasingly utilize AI technology in the workplace.

For example:

The New York City Council enacted Local Law Int. No. 1894-A (the “Local Law”) amending New York City’s administrative code in relation to an employer’s use of “automated employment decision tools” (AEDT) in assessing candidates and employees. The Local Law requires any employer using an AEDT in its hiring and promotion decisions to notify applicants that the technology will be used in connection with their assessment.

On April 6, the New York City Department of Consumer and Worker Protection issued the final rule implementing the Local Law. While the final rule largely tracks the proposed rule, it expanded the Local Law in several ways significant to employers.

First, the final rule expanded the definition of AEDT so that less-sophisticated computer programs would be subject to scrutiny under the Local Law.

Second, the final rule added a requirement that the bias audit indicate the number of individuals the AEDT assessed who are excluded from the calculations because they fall within an unknown category, and that this number be included in the summary of results.

Third, the final rule addressed the “scoring rate,” which is defined as the rate at which individuals in a category receive a score above the sample’s median score, where the score has been calculated by an AEDT. The Local Law now requires that the number of applicants in a category, and the scoring rate of that category, be included in the auditor’s summary of results.
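To make the “scoring rate” definition concrete, the arithmetic can be sketched as follows. This is a hypothetical illustration only; the category names and scores are invented, and the final rule, not this sketch, governs how an actual bias audit must be performed.

```python
from statistics import median

def scoring_rates(scores_by_category):
    """For each category, compute the share of individuals whose AEDT score
    exceeds the median score of the full sample (the 'scoring rate')."""
    all_scores = [s for scores in scores_by_category.values() for s in scores]
    sample_median = median(all_scores)
    return {
        category: sum(s > sample_median for s in scores) / len(scores)
        for category, scores in scores_by_category.items()
    }

# Example with made-up scores for two demographic categories:
rates = scoring_rates({
    "category_a": [70, 80, 90, 60],
    "category_b": [50, 55, 85, 65],
})
# Here the sample median is 67.5, so category_a's scoring rate is 0.75
# (3 of 4 above the median) and category_b's is 0.25 (1 of 4).
```

Under the final rule, each category’s headcount and its scoring rate would both appear in the auditor’s summary of results.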

The Local Law established penalties for non-compliance between $500 and $1,500 for each violation. Each day that the employer uses the AEDT in violation of the Local Law counts as a separate violation. The New York City Department of Consumer and Worker Protection will begin enforcing the Local Law and its penalties on July 5.

On the federal level, the EEOC filed suit against the California-based iTutor Group Inc. and Tutor Group Ltd. earlier this month in the U.S. District Court for the Eastern District of New York. The complaint charged the defendants with violations of the Age Discrimination in Employment Act for utilizing AI application software that allegedly automatically rejected female applicants age 55 and older and male applicants age 60 and older. As a result of using the software, iTutor rejected more than 200 qualified applicants solely based on their age. This lawsuit highlights the very real power AI tools have to discriminate, as well as the EEOC’s willingness to litigate these issues.

On May 12, 2022, the EEOC published its first Guidance regarding the use of AI in employment. The EEOC raised concerns that while AI technology may streamline the job application process, the automated nature of the technology could lead to widespread discrimination against legally protected persons.

Most recently, on May 18, the EEOC announced the release of its second Guidance regarding employers’ use of AI, which, in the EEOC’s view, helps ensure that AI employment tools do not violate Title VII of the Civil Rights Act of 1964. This most recent Guidance contains the following key takeaways for employers:

  1. Unlikely to Escape Liability: The Guidance specifically states that employers may be liable for discrimination on the basis of Title VII even if their AI selection procedure was developed by an outside vendor. This mirrors New York City’s Local Law 144, which also places the burden of compliance responsibility on the employer. In other words, an employer cannot put the blame on (i.e., pass the liability onto) a third-party vendor who developed the AI selection software if it discriminates in violation of Title VII.


  2. Four-Fifths Rule of Thumb: The four-fifths rule of thumb is a general guideline for determining whether the selection rate for one group/protected class is “substantially” different than the selection rate of another group/protected class. One rate is deemed substantially different than another if their ratio is less than four-fifths (or 80 percent), which may be evidence of an adverse impact on that group. While the four-fifths rule of thumb is not a “rule” per se, the EEOC signaled that this measure could be used as a guideline to assess an AI system’s compliance with Title VII.


  3. Continued Self-Auditing: The EEOC Guidance encourages employers to monitor the AI tools they utilize for recruiting and conduct self-assessments of their automated selection procedures. The Guidance warns that there could be liability for employers if they fail to adopt a less discriminatory algorithm that was considered during the development process.
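The arithmetic behind the four-fifths rule of thumb described above can be sketched in a few lines. This is an illustrative example only; the function name and the numbers are hypothetical, and falling below the 80 percent threshold is a screening signal, not a legal conclusion.

```python
def four_fifths_check(selected_a, total_a, selected_b, total_b):
    """Compare two groups' selection rates: return the ratio of the lower
    rate to the higher rate, and whether it falls below 4/5 (80 percent)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio < 0.8

# Example: one group has 48 of 80 applicants selected (60%), another has
# 12 of 40 selected (30%). The ratio is 0.5, below the 80% guideline,
# which may be evidence of adverse impact warranting closer review.
ratio, flagged = four_fifths_check(48, 80, 12, 40)
```

A self-audit along the lines the EEOC Guidance encourages could apply this comparison to each protected class an automated selection procedure assesses.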

As employers continue to utilize increasingly sophisticated AI technologies in the workplace, the number of federal, state, and local rules and guidelines addressing their potential for discrimination continues to grow as well.

If you have any questions about how this impacts your business, or how to better protect your company, please contact: