Artificial Intelligence (AI) can be a game changer. OpenAI’s ChatGPT took the world by storm, and now everyone in line at Starbucks is talking about it. Students are using it to do homework, marketers to write blogs, programmers to create code, and authors to write entire books. Yours truly excluded; I still like writing my own stuff. On the talent acquisition front, recruiters have been using AI for years to help screen and qualify candidates, assess shortlist finalists, and ensure the right fit. Numerous recruiting software firms incorporate some form of AI, whether it’s to analyze a resume or someone’s expressions during a video interview. Going forward, however, many of these applications may be illegal unless employers take specific actions.
New York City officials completed several rounds of public comment and revision on the nation’s first ordinance regulating AI in hiring, publishing the final rules for Local Law 144 on April 5, 2023. Enforcement of the new law begins on July 5, 2023, and the effects of the sweeping regulations are already being felt in other U.S. states. The U.S. federal government is also considering similar restrictions and guidelines that could affect thousands of firms, candidates, and software vendors.
While New Yorkers have only three months to examine their hiring practices and ensure compliance with NYC 144, Human Resources (HR) professionals around the world are taking preemptive steps to stay in step. Even organizations with just one candidate in New York City for an open position must comply, and it may be best to assume that most other states and cities will adopt similar legislation in the near future. As such, a review of current AI-driven hiring practices is advisable, and any decisions about future tools should take legalities and candidate perceptions into consideration.
The New Law
If you’re an employer or employment agency, NYC 144 prohibits you from using an automated employment decision tool (AEDT) to make any employment decisions unless the tool is audited for bias annually, you publish a public summary of the audit, and you provide certain notices to applicants and employees who are subject to screening by the AI-infused tool. This is a headache and a lot of work for enterprise firms with HR teams and thousands of applicants, but it can be even more taxing for small firms with outsourced or part-time HR personnel. Given the potential burdens on HR teams, and the cost of lawsuits that could arise, how can HR leaders still safely use AI for hiring? After all, nearly all firms can benefit greatly from the automation and deep analysis AI brings to the party.
I’ll preface the following by stating that while I’ve had discussions with hundreds of legal and HR professionals and have extensive experience with AI for recruiting, I’m not an attorney, and this article does not constitute legal advice. You should always consult legal counsel before making any decisions. That said, almost any attorney I’ve met can read, and the text of NYC 144 is quite clear.
Let’s start with definitions. NYC 144 defines a candidate for employment as “a person who has applied for a specific employment position by submitting the necessary information or items in the format required by the employer or employment agency.” Simply put, they sent you a resume. As such, I believe we can safely assume that if you are proactively reaching out to prospects on LinkedIn, before they have sent you a resume, you can use AI for automation and analysis without all the restrictions. Once they send you a resume, however, the legal line has been crossed.
NYC 144’s definition of applicant screening is “to make a determination about whether a candidate for employment or employee being considered for promotion should be selected…” An example might be a video-based solution that uses AI to study someone’s expressions and answers to interview questions. These tools can be quite effective, albeit very expensive, and some also require a large candidate pool (e.g., 300+ individuals) to work well. By definition, they appear to fall under the legal restrictions outlined in NYC 144.
Local Law 144 essentially defines a simplified output as a score or report based on AI analysis. If you use AI for assessments, for example, this may fall under that definition, which appears to limit your choices to “obsolete or obtrusive.” Here’s why:
Most assessment solutions are antiquated at best. DiSC, for example, was invented in the 1920s, and Myers-Briggs followed not long after in the 1940s. The BIG-5 is considered the most popular employment assessment today, but having made its debut in the early Eighties, it’s now around forty years old and predates the mobile phone, the internet, LinkedIn, and of course, AI. While the BIG-5 and similar personality tests don’t use AI and therefore raise no AI-related legal concerns, there may be other legal or candidate experience issues.
Several legal and HR professionals I’ve talked with say the BIG-5 OCEAN traits of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism are concerning. Most, if not all, of these definitions may no longer be appropriate in today’s work environment. Test scores tend to be “either/or”: you are either agreeable or you’re not. Yet organizations today should encourage healthy disagreement in meetings rather than the high agreeableness that can result in groupthink. As for neuroticism, any HR professional knows that asking mental health questions can be illegal, and the word itself could be perceived negatively by applicants.
Legal Chill Answers
Are there any options that are not outdated, negatively perceived, or likely to raise legal eyebrows? Yes, there are a few. One in particular comes from a veteran-owned firm with groundbreaking technology. It was selected by the Society for Human Resource Management (SHRM), with over 300,000 members, as a 2021 Better Workplaces Challenge Cup finalist, and by The Starr Conspiracy, a leading industry analyst firm, as a 2022 Top 25 Work Tech Vendor. The firm has CHROs and VPs of Talent Acquisition on its Advisory Board from Royal Caribbean Cruise Lines, Dish Network, Talend/Qlik, Entrata, Better Homes and Gardens Real Estate, and several other firms. Best of all, its tools allow HR professionals to legally and safely use AI for automation and analysis.
The vendor is called RemotelyMe, and it offers a browser app that extracts data from a LinkedIn profile and uses AI and neuroscience to go well beyond personality tests, determining attributes, soft skills, and communication preferences. It’s accurate even with a limited amount of data, and it helps narrow the funnel by prioritizing candidates based on matches against desired job requirements. Even more exciting, it combines the LinkedIn data (experience, skills, interests, and so on) with science-based preferences for keywords, style, and tone to instruct an included ChatGPT copy generator, which instantly creates personalized emails, messages, and phone scripts to entice candidates. This can dramatically increase response rates. Most importantly, based on the NYC 144 definitions noted earlier, this is a pre-assessment that takes place before a candidate sends a resume, so it should not fall under the legal restrictions.
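For readers who want to picture how this kind of profile-to-outreach pipeline works, here is a minimal sketch using the OpenAI Python client. Everything in it, from the profile fields to the prompt wording, is a hypothetical assumption for illustration, not RemotelyMe’s actual code:

```python
# Hypothetical sketch only -- the field names, prompt, and flow are
# illustrative assumptions, not RemotelyMe's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Example profile data of the kind a browser app might pull from LinkedIn.
profile = {
    "name": "Jane Doe",
    "skills": ["Python", "data analysis"],
    "interests": ["remote work", "mentoring"],
}

# Build a prompt that blends profile facts with a desired style and tone.
prompt = (
    f"Write a short, friendly recruiting email to {profile['name']}. "
    f"Highlight their skills in {', '.join(profile['skills'])} and their "
    f"interest in {', '.join(profile['interests'])}. Keep it under 120 words."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The key design point is that the AI generates outreach copy before a resume is ever submitted, keeping the workflow on the pre-candidate side of NYC 144’s definition.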
RemotelyMe also offers the industry’s first visual neuroscience assessment, which uses cutting-edge video storytelling and does not use AI. Neuroscience research suggests that text-based tests like the BIG-5 communicate with only ten percent of our decision-making brain, whereas visual assessments appeal to 100 percent. Only about 30 percent of candidates complete text-based tests, which take around 45 minutes and have shown 65 to 75 percent reliability (based on Cronbach’s Alpha analysis). RemotelyMe’s assessment takes only nine minutes, has a 90+ percent completion rate, and shows 92.6 percent reliability. Even better, it’s a fraction of the cost of most others, making it viable for blue-collar and non-managerial assessments.
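For context on those reliability figures, Cronbach’s Alpha measures how consistently a test’s items hang together, on a 0-to-1 scale, so 65 to 75 percent reliability corresponds to an alpha of roughly 0.65 to 0.75. Here is a minimal Python sketch of the standard formula; the function and toy data are illustrative only:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                         # number of test items
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering 4 Likert-style items (1-5).
data = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [3, 3, 4, 3],
    [5, 4, 5, 4],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(data):.2f}")  # highly consistent items -> ~0.95
```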
Using AI for recruiting could now pose serious legal problems. Continuing to use outdated assessments could also raise legal concerns and cause brand damage and candidate experience issues. Organizations need automation and data-driven assessments to ensure Diversity, Equity, and Inclusion (DE&I) while reducing costs and staff burdens. Selecting the right AI-driven hiring solutions could reduce or eliminate headaches and lawsuits while increasing an organization’s ability to place the right people in the right seats in the right way.
To experience the difference between traditional assessments and visual neuroscience storytelling, visit RemotelyMe.com.
----------------
William Craig Reed is the New York Times bestselling author of Start With Who: How to Place the Right Prospects and People into the Right Seats in the Right Way and The 7 Secrets of Neuron Leadership.