A 2020 report revealed that 55% of human resource leaders in the U.S. use predictive algorithms to support hiring. Unfortunately, there have been confirmed cases of bias when AI tools are used in hiring decisions, and some workers have even filed charges with the Equal Employment Opportunity Commission.
These issues have prompted lawmakers to consider ways to eliminate bias in automated employment decision tools and ensure these tools provide an equal opportunity for all candidates.
For example, Illinois passed a law regulating the use of AI in video interviews, and New York City passed the boldest measure by far, which aims to eliminate hiring bias in any automated employment decision-making tools used to hire employees in NYC.
Regarding the NYC AI legislation in particular, questions have emerged about how hiring teams can best stay compliant. With that in mind, we pulled together a list of some of the most commonly asked questions surrounding the NYC AI law to help answer these uncertainties.
What’s the NYC AI Law?
The New York City Council passed legislation that makes it unlawful for an employer or employment agency in NYC to use an automated employment decision tool to screen a candidate or employee for an employment decision unless that tool undergoes a bias audit and the results of the bias audit are made publicly available on the employer’s website. The law takes effect on January 1, 2023, and it carries civil penalties of up to $500 for a first violation and between $500 and $1,500 for each subsequent violation.
What Are Automated Employment Decision Tools?
The law defines an “automated employment decision tool” as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” Based on the language in the statute, three critical tests appear to determine whether an employment tool is covered:
- Does the tool involve a computational process that is derived from machine learning, statistical modeling, data analytics, or artificial intelligence?
- Does the computational process produce simplified output such as a score, classification, or recommendation?
- Is the simplified output (score, classification, or recommendation) used to substantially assist or replace discretionary judgments for making employment decisions (e.g., hiring, promotion)?
If the answer to all three of these questions is “yes,” the tool is likely covered by the requirements of the NYC law.
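The three-part test above is, in effect, a simple conjunction: a tool is likely covered only if every question answers “yes.” As an illustration only (the class and field names below are hypothetical and this is not legal guidance), the logic can be sketched like this:

```python
from dataclasses import dataclass


@dataclass
class EmploymentTool:
    """Hypothetical record of the three statutory tests for one tool."""
    uses_computational_process: bool   # derived from ML, statistical modeling,
                                       # data analytics, or AI?
    produces_simplified_output: bool   # issues a score, classification,
                                       # or recommendation?
    output_drives_decisions: bool      # substantially assists or replaces
                                       # discretionary employment decisions?


def likely_covered(tool: EmploymentTool) -> bool:
    """The tool is likely covered only if all three answers are 'yes'."""
    return (
        tool.uses_computational_process
        and tool.produces_simplified_output
        and tool.output_drives_decisions
    )


# Example: a resume-scoring model whose scores drive interview decisions
# would answer "yes" to all three tests.
scorer = EmploymentTool(True, True, True)
print(likely_covered(scorer))  # True -> a bias audit is likely required
```

A tool that fails even one test, say, a rules-based filter with no machine learning component, would fall outside this reading of the definition, though the statute's interpretation will ultimately be shaped by courts and regulators.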
What Does the Law Define as a Bias Audit?
According to the statute, a “bias audit” is “an impartial evaluation by an ‘independent auditor’” that assesses the tool’s disparate impact on people, and it should be conducted each year. The legislation is somewhat vague in this area, but as with any new legislation, interpretations through the courts and regulatory agencies should bring clarity to expectations and requirements over time.
Does the NYC AI Law Only Apply to Companies Located in NYC?
While this law applies only to employers and employment agencies hiring in NYC, other states and municipalities may pass similar legislation regulating AI-based employment tools in the future. For example, the attorney general in Washington D.C. has proposed legislation to stop discrimination in automated decision-making tools, so all companies should prepare for new regulations focused on the fairness of automated employment decision tools.
How is HackerRank Responding to the Law?
We recognize the need for many of our customers to comply with the NYC statute governing the use of automated employment decision tools. We also recognize the larger set of interests this law represents and expect that other regions might adopt similar requirements in time. As a company founded on the principle of leveling the playing field in technical hiring, we welcome this opportunity to not only help our customers achieve compliance, but also demonstrate commitment to our talent communities.
Is HackerRank Helping Its Customers Prepare for the NYC AI Law?
We’ve reviewed the legislation and its compliance requirements with internal and external legal counsel to evaluate its applicability to our technology, customers, and candidates. In August, we are publishing a set of guidelines customers can use in their efforts to comply with the new statute and help them evaluate whether they are required to conduct a bias audit.
What HackerRank Tools Does the NYC AI Law Apply to?
Currently, the only HackerRank feature likely subject to the New York City statute is the image analysis technology employed by our Advanced Proctoring functionality. Image analysis uses AI-based facial recognition software to capture and compare images of candidates while they complete a test. This feature is currently in beta and has not yet been released for general availability. When it is fully released, our customers will be able to decide whether to use it.
How Will HackerRank Continue to Help?
We’ve conducted similar exercises in conjunction with third parties for individual customers and have historically seen positive results. We’re actively monitoring developments and will adjust our plans as needed to respond to the changing legislative landscape. Also, we’re always on the lookout for other legislation that may affect our customers, and we will ensure they have the guidance they need to comply with any new employment legislation that affects their usage of the HackerRank platform.
Please don’t hesitate to let us know if you have any questions about the NYC AI Law or similar legislation. You can always contact us here.