"Bias-free, data safe, human-first. HackerRank AI."
As HackerRank embraces AI in the products and solutions we build, we're focused on doing so responsibly and in a human-first way. We have identified four key pillars for building our AI systems with a responsible, customer-focused approach.
We employ robust bias detection and mitigation practices to ensure that our AI systems are fair, transparent, and equitable, reinforcing our commitment to unbiased hiring processes.
When building AI systems, we have a strong focus on holding customer and candidate data to a high standard and respecting data privacy. We do this with the following key practices:
We remove personal information or personal data (as defined by applicable law), such as names, email addresses, and company information, from datasets before training any of our AI systems. This ensures that training data cannot be linked back to individual candidates or users of our platform. It also helps reduce bias, by ensuring that data used to train any decision-assisting solution is not unfairly influenced by a candidate's previous performance and that candidate data is not cross-pollinated between different companies' assessments.
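For illustration only, here is a minimal sketch of this kind of pre-training scrubbing step; the record fields, regular expression, and redaction scheme are hypothetical, not HackerRank's actual pipeline:

```python
import re

# Matches most email addresses embedded in free text (illustrative, not exhaustive).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrub_record(record: dict) -> dict:
    """Drop direct identifiers and redact emails embedded in free-text fields."""
    DROP_FIELDS = {"name", "email", "company"}  # hypothetical direct identifiers
    clean = {k: v for k, v in record.items() if k not in DROP_FIELDS}
    for key, value in clean.items():
        if isinstance(value, str):
            clean[key] = EMAIL_RE.sub("[REDACTED_EMAIL]", value)
    return clean

record = {
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "company": "Acme Corp",
    "submission": "Contact me at ada@example.com if the tests fail.",
}
print(scrub_record(record))
# {'submission': 'Contact me at [REDACTED_EMAIL] if the tests fail.'}
```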
Our systems use standard security protocols, such as SSL/TLS, when transmitting any data. We also conduct regular security audits on our infrastructure to ensure nothing falls out of compliance.
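As an illustration of what enforcing TLS in transit can look like, here is a minimal sketch using only the Python standard library to require certificate verification and a modern protocol version; the hostname and endpoint are hypothetical:

```python
import ssl
import http.client

# Default context verifies server certificates and hostnames.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

conn = http.client.HTTPSConnection("api.example.com", context=ctx)
conn.request("GET", "/health")
print(conn.getresponse().status)
```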
We apply a strict vetting process to the third-party AI systems used as part of our platform, to ensure they adhere to data privacy compliance standards. You can see our current third-party providers in our AI Feature Terms.
HackerRank adheres to all relevant data protection laws and regulations, including the GDPR, the CCPA, and other regional privacy laws.
We empower users with control over their data, providing options to access, correct, or delete their information. We offer clear and transparent information about our data practices to ensure users are fully informed.
We take a proactive approach to security, focusing on safeguarding both data and systems. This involves conducting regular vulnerability assessments, embedding security best practices in AI development, and establishing a comprehensive incident response strategy. We are focused on achieving a high level of reliability in our AI systems through continuous performance monitoring, stringent quality assurance, and critical failsafe mechanisms.
When possible, we select models that produce interpretable results. For example, we may choose a simpler algorithm over a more complex one, even when the complex algorithm performs better, because the simpler model lets us explain clearly which signals and data points have the greatest impact on its prediction, decision, or generative output. This is primarily relevant to the decision-making capabilities we are building, such as our AI-powered plagiarism detection.
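To make the trade-off concrete, here is a minimal sketch using scikit-learn and synthetic data: a logistic regression exposes one coefficient per signal, so each input's weight on the final score can be read directly. The signal names are hypothetical, not HackerRank's actual plagiarism-detection features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["code_similarity", "typing_cadence", "paste_events"]  # hypothetical signals

# Synthetic training data standing in for real assessment features.
X = rng.normal(size=(500, 3))
y = (X @ np.array([2.0, 0.5, 1.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient states how strongly a signal pushes the prediction.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A more complex model might score somewhat higher, but its influence on individual predictions could not be stated this plainly.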
One area where this isn't yet feasible is generative AI (GenAI). For GenAI models, we instead lean towards measuring and guaranteeing the consistency of model outputs, rigorously testing them against internal evaluation datasets and validation exercises. This is a rapidly developing area, and we are keeping pace with advancements in model interpretability so that we can quickly adopt anything that helps us give our customers clarity on outputs from GenAI models.
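As an illustration of consistency testing, here is a minimal sketch that repeats the same prompt several times and scores how similar the outputs are to one another; the generate() function is a stand-in for a real model call, and the prompts are hypothetical:

```python
from difflib import SequenceMatcher
from itertools import combinations

def generate(prompt: str) -> str:
    # Placeholder for a real model call; returns canned text for illustration.
    return "The submission sorts the array in O(n log n) time."

def consistency_score(prompt: str, runs: int = 5) -> float:
    """Average pairwise text similarity of repeated generations for one prompt."""
    outputs = [generate(prompt) for _ in range(runs)]
    pairs = list(combinations(outputs, 2))
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Hypothetical internal evaluation prompts.
eval_prompts = ["Summarize this submission.", "Explain this test failure."]
for p in eval_prompts:
    print(p, "->", round(consistency_score(p), 3))
```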