
AI FOR RECRUITING: No Longer Legal?

Artificial Intelligence (AI) can be a game changer. OpenAI’s ChatGPT took the world by storm, and now everyone in line at Starbucks is talking about it. Students are using it to do homework, marketers to write blogs, programmers to write code, and authors to write entire books. Yours truly excluded; I still like writing my own stuff. On the talent acquisition front, recruiters have been using AI for years to help screen and qualify candidates, assess shortlisted finalists, and ensure the right fit. Numerous recruiting software firms incorporate some form of AI, whether to analyze a resume or to read someone’s expressions during a video interview. Going forward, however, many of these applications may be illegal unless specific steps are taken.


New York City officials completed several rounds of public comment and revision on the nation’s first ordinance regulating the use of AI in hiring, publishing the final rules for Local Law 144 on April 5, 2023. Enforcement of the new law begins on July 5, 2023, and the effects of the sweeping regulations are already being felt in other U.S. states. The federal government is also considering similar restrictions and guidelines that could affect thousands of firms, candidates, and software vendors.


While New Yorkers have only three months to examine their hiring practices and ensure compliance with NYC 144, Human Resources (HR) professionals around the world are taking preemptive steps to keep pace. Even an organization with just one candidate in New York City for an open position must comply, and it is safest to assume that most other states and cities will pass similar legislation in the near future. A review of current AI-driven hiring practices is therefore advisable, and any decisions about future tools should take into account both legalities and candidate perceptions.


The New Law

If you’re an employer or employment agency, NYC 144 prohibits you from using an automated employment decision tool (AEDT) to make any employment decision unless the tool is audited for bias annually, you publish a public summary of the audit, and you provide certain notices to applicants and employees who are screened by the AI-infused tool. This is a large headache for enterprise firms with in-house HR teams and thousands of applicants, and it can be even more taxing for small firms with outsourced or part-time HR personnel. Given the potential burdens on HR teams, and the cost of lawsuits that could arise, how can HR leaders still safely use AI for hiring? After all, nearly every firm stands to benefit from the automation and deep analysis AI brings to the party.
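To make the audit requirement concrete: the final rules describe bias audits largely in terms of selection-rate “impact ratios” across demographic categories. Below is a minimal, hypothetical Python sketch of that calculation; the function name, category labels, and numbers are all illustrative, and a real audit must follow the rule’s exact methodology and required categories.

```python
# Hypothetical sketch of the selection-rate "impact ratio" math a
# Local Law 144 bias audit reports. All labels and numbers are made up.

def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each category to (its selection rate) / (highest selection rate).

    selections maps a category label to (selected, total_applicants).
    """
    rates = {cat: sel / total for cat, (sel, total) in selections.items()}
    top = max(rates.values())  # rate of the most-selected category
    return {cat: rate / top for cat, rate in rates.items()}

# Illustrative numbers: 40 of 100 applicants selected in one group,
# 25 of 100 in another.
print(impact_ratios({"group_a": (40, 100), "group_b": (25, 100)}))
# {'group_a': 1.0, 'group_b': 0.625}
```

An impact ratio well below 1.0 for any category is the kind of disparity an independent auditor would flag and the public summary would have to disclose.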


I’ll preface the following by stating that while I’ve had discussions with hundreds of legal and HR professionals and have extensive experience with AI for recruiting, I’m not an attorney, and this article does not constitute legal advice. Always consult legal counsel before making any decisions. That said, almost any attorney I’ve met can read, and the text of NYC 144 is quite clear.


Let’s start with definitions. NYC 144 defines a candidate for employment as “a person who has applied for a specific employment position by submitting the necessary information or items in the format required by the employer or employment agency.” Simply put, they sent you a resume. As such, I believe we can safely assume that if you are proactively reaching out to prospects on LinkedIn, before they have sent you a resume, you can use AI for automation and analysis without all the restrictions. Once they send you a resume, however, the legal line has been crossed.


NYC 144’s definition of screening an applicant is “to make a determination about whether a candidate for employment or employee being considered for promotion should be selected…” An example might be a video-based solution that uses AI to study someone’s expressions and answers to interview questions. These tools can be quite effective, albeit very expensive, and some require a large candidate pool (e.g., 300+ individuals) to work well. By definition, they appear to fall under the legal restrictions outlined in NYC 144.

Local Law 144 essentially defines a simplified output as a score or report based on AI analysis. If you use AI for assessments, for example, they may fall under that definition, which appears to narrow the choices to “obsolete or obtrusive.” Here’s why:


Most assessment solutions are antiquated at best. DiSC, for example, was invented in the 1920s, and Myers-Briggs followed not long after, in the 1940s. The Big Five is considered the most popular today for em