
A Practical Approach to Detecting and Correcting Bias in AI Systems [The New Stack]

Written By Sofus Macskassy | April 22, 2019

The following piece on bias in AI was originally published in The New Stack by Sofus Macskassy, VP of Data Science at HackerRank.


As companies look to bring artificial intelligence into the core of their business, calls for greater transparency into AI algorithms and accountability for the decisions they make are on the rise.

That makes sense: If people are going to rely on AI to make important decisions with real-world consequences, they need to trust it. But trust comes in many forms, and that makes it difficult to pin down. At the most basic level, AI needs to explain why it made a particular recommendation; that builds trust because people can follow the reasoning. Deeper levels of trust come from knowing that a system is fair and unbiased. Demonstrating that is much harder.

This leaves companies in a tough spot when it comes to leveraging AI: they can either fly blind (deploying systems whose biases they cannot see) or fall behind (avoiding AI altogether). In 2018, Amazon, a clear frontrunner in AI, shut down its experimental AI recruiting tool after the team discovered major issues with bias in the system.

What’s needed is a more practical approach. Here’s what 15 years of building AI and machine learning models at companies like Facebook and Branch, and now HackerRank, has taught me about detecting and correcting bias in AI systems.
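
To make "detecting bias" slightly more concrete before you click through, here is a minimal, hypothetical Python sketch of one common first check, demographic parity, which compares a model's positive-prediction rates across groups. The function names, toy data, and the 0.8 threshold (the informal "80% rule" heuristic) are illustrative assumptions on my part, not the approach laid out in the full article.

# Hypothetical sketch: compare a model's positive-prediction rates
# across two groups (demographic parity). All names, data, and the
# threshold below are illustrative, not from the article.

def positive_rate(predictions, groups, group):
    """Fraction of candidates in `group` that the model predicts positively."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

def disparate_impact_ratio(predictions, groups, group_a, group_b):
    """Ratio of group_a's positive rate to group_b's; a value far
    below 1.0 suggests the model disfavors group_a."""
    rate_a = positive_rate(predictions, groups, group_a)
    rate_b = positive_rate(predictions, groups, group_b)
    return rate_a / rate_b

# Toy hiring recommendations: 1 = advance to interview, 0 = reject.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(preds, groups, "b", "a")
print(f"disparate impact (b vs. a): {ratio:.2f}")  # prints 0.67
if ratio < 0.8:  # informal "80% rule" heuristic
    print("flag for review: group b advances at a much lower rate")

In practice a check like this would run per protected attribute and per decision point, and a flagged ratio is a prompt for investigation rather than proof of bias.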

Read the full article at The New Stack.
