NYC’s anti-bias law for hiring algorithms goes into effect


After months of delays, New York City today began enforcing a law that requires employers using algorithms to recruit, hire or promote employees to submit those algorithms for an independent audit — and make the results public. The first of its kind in the country, the legislation — New York City Local Law 144 — also mandates that companies using these types of algorithms make disclosures to employees or job candidates.

At a minimum, the reports companies must make public have to list the algorithms they’re using as well as an “average score” candidates of different races, ethnicities and genders are likely to receive from those algorithms — in the form of a score, classification or recommendation. They must also list the algorithms’ “impact ratios,” which the law defines as the average algorithm-given score of all people in a specific category (e.g., Black male candidates) divided by the average score of people in the highest-scoring category.
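The law's impact-ratio formula is simple arithmetic. As a rough illustration — the group labels and scores below are invented, and only the formula (a group's average score divided by the highest group's average score) comes from the law's definition — the calculation might look like this:

```python
# Illustrative sketch of Local Law 144's "impact ratio" metric.
# The group names and scores are hypothetical; only the formula is from the law:
#   impact ratio = (average score of a group) / (average score of highest-scoring group)
from collections import defaultdict

def impact_ratios(candidates):
    """candidates: iterable of (group, score) pairs from an algorithm's output."""
    by_group = defaultdict(list)
    for group, score in candidates:
        by_group[group].append(score)
    averages = {g: sum(s) / len(s) for g, s in by_group.items()}
    top = max(averages.values())
    return {g: avg / top for g, avg in averages.items()}

scores = [
    ("group_a", 0.82), ("group_a", 0.78),
    ("group_b", 0.65), ("group_b", 0.71),
]
print(impact_ratios(scores))  # group_a averages 0.80 and sets the baseline at 1.0
```

A ratio well below 1.0 for any group is the kind of disparity the published audits are meant to surface.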

Companies found not to be in compliance will face penalties of $375 for a first violation, $1,350 for a second violation and $1,500 for a third and any subsequent violations. Each day a company uses an algorithm in noncompliance with the law, it’ll constitute a separate violation — as will failure to provide sufficient disclosure.

Importantly, the scope of Local Law 144, which was approved by the City Council and will be enforced by the NYC Department of Consumer and Worker Protection, extends beyond NYC-based workers. As long as a person is performing or applying for a job in the city, they’re eligible for protections under the new law.

Many see it as overdue. Khyati Sundaram, the CEO of Applied, a recruitment tech vendor, pointed out that recruitment AI in particular has the potential to amplify existing biases — worsening both employment and pay gaps in the process.

“Employers should avoid the use of AI to independently score or rank candidates,” Sundaram told TechCrunch via email. “We’re not yet at a place where algorithms can or should be trusted to make these decisions on their own without mirroring and perpetuating biases that already exist in the world of work.”

One needn’t look far for evidence of bias seeping into hiring algorithms. Amazon scrapped a recruiting engine in 2018 after it was found to discriminate against women candidates. And a 2019 academic study showed AI-enabled anti-Black bias in recruiting.

Elsewhere, algorithms have been found to assign job candidates different scores based on criteria like whether they wear glasses or a headscarf; penalize applicants for having a Black-sounding name, mentioning a women’s college, or submitting their résumé using certain file types; and disadvantage people who have a physical disability that limits their ability to interact with a keyboard.

The biases can run deep. An October 2022 study by the University of Cambridge argues that AI companies’ claims to offer objective, meritocratic assessments are false, positing that anti-bias measures that remove gender and race are ineffective because the notion of an ideal employee has historically been shaped by gender and race.

But the risks aren’t slowing adoption. Nearly one in four organizations already leverage AI to support their hiring processes, according to a February 2022 survey from the Society for Human Resource Management. The percentage is even higher — 42% — among employers with 5,000 or more employees.

So what forms of algorithms are employers using, exactly? It varies. Some of the more common are text analyzers that sort résumés and cover letters based on keywords. But there are also chatbots that conduct online interviews to screen out applicants with certain traits, and interviewing software designed to predict a candidate’s problem-solving skills, aptitudes and “cultural fit” from their speech patterns and facial expressions.

The range of hiring and recruitment algorithms is so vast, in fact, that some organizations don’t believe Local Law 144 goes far enough.

The NYCLU, the New York branch of the American Civil Liberties Union, asserts that the law falls “far short” of providing protections for candidates and workers. Daniel Schwarz, senior privacy and technology strategist at the NYCLU, notes in a policy memo that Local Law 144 could, as written, be understood to only cover a subset of hiring algorithms — for example excluding tools that transcribe text from video and audio interviews. (Given that speech recognition tools have a well-known bias problem, that’s obviously problematic.)

“The … proposed rules [must be strengthened to] ensure broad coverage of [hiring algorithms], expand the bias audit requirements and provide transparency and meaningful notice to affected people in order to ensure that [algorithms] don’t operate to digitally circumvent New York City’s laws against discrimination,” Schwarz wrote. “Candidates and workers should not need to worry about being screened by a discriminatory algorithm.”

Parallel to this, the industry is embarking on preliminary efforts to self-regulate.

December 2021 saw the launch of the Data & Trust Alliance, which aims to develop an evaluation and scoring system for AI to detect and combat algorithmic bias, particularly bias in hiring. The group at one point counted CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta, Nike and Walmart among its members, and garnered significant press coverage.

Unsurprisingly, Sundaram is in favor of this approach.

“Rather than hoping regulators catch up and curb the worst excesses of recruitment AI, it’s down to employers to be vigilant and exercise caution when using AI in hiring processes,” he said. “AI is evolving more rapidly than laws can be passed to regulate its use. Laws that are eventually passed — New York City’s included — are likely to be hugely complicated for this reason. This will leave companies at risk of misinterpreting or overlooking various legal intricacies and, in turn, see marginalized candidates continue to be overlooked for roles.”

Of course, many would argue having companies develop a certification system for the AI products that they’re using or developing is problematic off the bat.

While imperfect in certain areas, according to critics, Local Law 144 does require that audits be conducted by independent entities that haven’t been involved in using, developing or distributing the algorithm they’re testing and that don’t have a relationship with the company submitting the algorithm for testing.

Will Local Law 144 effect change, ultimately? It’s too early to tell. But certainly, the success — or failure — of its implementation will affect laws to come elsewhere. As noted in a recent piece for NerdWallet, Washington, D.C., is considering a rule that would hold employers accountable for preventing bias in automated decision-making algorithms. Two bills in California that aim to regulate AI in hiring were introduced within the last few years. And in late December, a bill was introduced in New Jersey that would regulate the use of AI in hiring decisions to minimize discrimination.
