
Pymetrics CEO Frida Polli on Hiring for Aptitude

Written By Nicolette Garcia | September 4, 2018

This is the 7th episode of HackerRank Radio, a podcast for engineering leaders interested in solving their toughest problem today: hiring the right developers. Hosted by Vivek Ravisankar (CEO & Co-founder, HackerRank). You can subscribe on iTunes and Google Play.


When finding the right talent, there are at least two major sets of skills to evaluate: hard skills and soft skills.

Emotional qualities, like value fit and team chemistry, are traditionally best assessed through personal interaction, but that interaction often introduces inherent or unconscious biases. Enter: Pymetrics.

Since its inception in 2012, Pymetrics has helped over 50 enterprise companies, including LinkedIn, Accenture, and Tesla, match with their ideal candidates by evaluating them not on skill but on aptitude for the role. Through the use of neuroscience and data science, Pymetrics is changing the way companies hire the right talent.

We sat down with Pymetrics' own Frida Polli, a neuroscientist turned founder, in our latest episode of HackerRank Radio to learn more about the science behind Pymetrics and discuss the greatest strength HackerRank and Pymetrics share: powering unbiased hiring.

HackerRank’s CEO Vivek Ravisankar and Frida cover:

  • How Pymetrics uses neuroscience to assess IQ and EQ
  • How artificial intelligence is affecting recruiting
  • Overcoming bias in tech hiring

Listen to the episode, or scroll below to skim the transcript.



Vivek: Okay, so people probably have heard of the name Pymetrics, but would love to hear from you. What is Pymetrics and how does it work?

Frida: Sure, happy to let them know. So Pymetrics uses neuroscience and AI to help companies recruit in a more predictive, accurate, and diversity-friendly way. Essentially, instead of using resumes or other types of assessments, we use people's responses on different cognitive and emotional exercises that were developed by the cognitive neuroscience community globally and that we now use for HR. So that's step number one.

And then step number two is, instead of using generic models that we've built off of aggregate data, we benchmark all of our profiles on actual existing company employees who are performing well in the role. So we build a custom algorithm for each role and every company that we work with. So that's another piece that's different about Pymetrics.
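
To make the "custom algorithm per role" idea concrete, here is a minimal sketch of what benchmarking a model on incumbent top performers could look like. The trait names, toy data, and logistic-regression model are our own illustrative assumptions, not Pymetrics' actual pipeline.

```python
# Illustrative sketch only: a per-role model trained on incumbent employees.
# Trait names and data are made up; this is not Pymetrics' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy "trait scores" for 200 incumbent employees in one role at one company,
# e.g. memory, attention, planning, risk tolerance (all hypothetical).
X = rng.normal(size=(200, 4))
# Label: 1 if the employee is a top performer in this role, else 0.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One model per (role, company): fit only on that company's own benchmark data.
role_model = LogisticRegression().fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, role_model.predict_proba(X_test)[:, 1]))

# Score a new applicant's trait profile against this role's model.
applicant = rng.normal(size=(1, 4))
print("fit score:", role_model.predict_proba(applicant)[0, 1])
```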

The third thing is that we pretest all of our algorithms to make sure that they don't have any gender or ethnic bias. And we do this using a method that we've actually open sourced on GitHub, called audit-AI.
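
For a sense of what such a pre-test checks, here is a simplified, hand-rolled adverse-impact calculation based on the EEOC's four-fifths rule. It illustrates the kind of check a bias audit performs on a model's interview recommendations; it is not the audit-AI library's actual interface, and the group labels and recommendations are made up.

```python
# Simplified adverse-impact check (the EEOC "four-fifths rule"), hand-rolled
# for illustration; this is not the audit-AI library's actual interface.
from collections import defaultdict

def pass_rates(groups, recommended):
    """Fraction of candidates recommended for interview, per demographic group."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, rec in zip(groups, recommended):
        totals[group] += 1
        passes[group] += int(rec)
    return {group: passes[group] / totals[group] for group in totals}

def four_fifths_ok(groups, recommended, threshold=0.8):
    """True if every group's pass rate is at least `threshold` of the highest rate."""
    rates = pass_rates(groups, recommended)
    best = max(rates.values())
    return all(rate / best >= threshold for rate in rates.values()), rates

# Made-up example: which candidates an algorithm recommends, by group.
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
recommended = [ 1,   1,   0,   1,   1,   0,   1,   1 ]

ok, rates = four_fifths_ok(groups, recommended)
print(rates, "passes four-fifths rule:", ok)
```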

And the last thing that we do is we act as a common app for applicants. So if you apply to a company that's using Pymetrics and you are not matched to the role you applied for, you first get matched to other roles at the company where you could be a good fit. And if you don't select any of those, you can actually match to other companies using Pymetrics. So it really tries to turn the typical rejection experience of a candidate into one of job fit and career matching.

Vivek: Interesting. This is obviously a new way of doing recruiting. Can you walk us through how you came up with this idea of combining neuroscience with recruiting, and maybe your a-ha moment where you thought, ‘Okay, actually, this works’?

Frida: Yeah, sure. So you know I spent 10 years at Harvard and MIT as a grad student and then a postdoc, and basically decided to leave academia, mostly because I wanted to do something more applied with my research. Somebody suggested business school. I thought that was a crazy idea: I already had a Ph.D. and didn't need more advanced degrees. But I got a fellowship to go to Harvard, and so I went to the MBA program for two years. Really, MBA programs are a front-row seat to recruiting, because everyone goes there to find their next gig. And that's where the a-ha moment came; actually, there were two a-ha moments.

One was just watching recruiting and thinking, wow, this is so outdated. This hasn't changed since I was in college, and that was a while ago, in this day and age of all the behavioral assessments we were doing in the lab, machine learning, and technology platforms. And, you know, thinking about the experience of recruiting versus finding a movie on Netflix and being like, it's so easy to match a movie on Netflix. Why isn't that possible in the job space? So that was kind of one big a-ha moment, along with just realizing that we had all of the different component parts to build that system.

And then the second a-ha moment was when we actually started collecting data. HBS students and the companies that recruit there were kind of our early training ground. We built algorithms for three companies that recruit there. They're basically in the exact same industry and recruit for the exact same role, and enough students sponsored by these three companies went through that we were able to build a custom algorithm for each. And then we showed one of these companies that their custom algorithm worked much better in terms of predicting their recruiting outcomes than the generic algorithm we would have built had we combined across those three companies. And that was another a-ha moment of saying, wow, not only does this stuff work, but it works much better when you're using data from an actual company rather than relying on more generic training sets. So those were my two a-ha moments.

Vivek: How do you think about the different skills that you need to be successful in a job? Culture is one piece of it, which is basically the profile: everything from, do you have a valid work permit? Are you a student or a professional? There is some personal information. Then there is the IQ. And then there's the EQ part of it, which is, my guess, what Pymetrics does, but I'd love to know how you think about it and what makes it successful.

Frida: Yeah, so I would think about it slightly differently. I wouldn't argue about the first piece, sort of the work visa stuff, but with respect to “IQ and EQ,” the way I would frame that as a cognitive neuroscientist is that you're talking about cognitive abilities and emotional abilities. Cognitive abilities being related to what we would consider IQ, but not the same. And then EQ being related to emotional or social abilities. So really, as a platform, we measure both of those things: on the cognitive side, things like memory, attention, and planning, and on the emotional side, things like how risk-taking you are, how reward-seeking you are, and so on.

So those two things we combine, but what that really measures is what I would call an aptitude for a role. Meaning, do I have the profile, or the inherent aptitude, of someone who could be successful in this role? It doesn't measure hard skills if the role requires them. And actually, we've talked about this before. At HackerRank you measure what I would consider hard skills: you measure coding skills. So we could build an algorithm for a software engineering profile that says, you know, this person is a great fit from a cognitive and emotional perspective for what you're looking for, but they may have never coded a line in whatever language that company is using, in which case they wouldn't be able to fill that role immediately. So there is a skills component that actually varies tremendously across roles. For many roles, there actually isn't a hard-skill component coming out of the gate. It's much easier to train someone with the aptitude for something to learn the skill than it is to take somebody who might have the skill but isn't an inherently good fit to be good at that job. So that's really the premise behind Pymetrics: optimize for aptitude, and skills can be learned.

Vivek: That's great. And so when you run these assessments or tests for these companies, I don't know how much you can share, but is there a distinct difference between, for example, a sales role at large company one versus a sales role at large company two? What are the factors that vary in the DNA? Is it the location? Is it the type of company? What they do? I'd love to know if you can give two examples of, here's the DNA for this particular role at this company, and here's the DNA of the same role at that company.

Frida: Well, I think the early HBS example that I gave you is a really good one, where basically, you know, we found out that three companies that were recruiting for the exact same role actually had very different profiles. We learned that by working with these companies, and we then built them unique profiles. So basically what we learned was that it wasn't even about the geography. It wasn't about other aspects of it. It really had to do with what we think of as culture fit. That's something we don't really talk about that much, but there is a culture fit component. And culture fit, I think, sometimes gets a bad name, but really what we're talking about is a values orientation. Every company has values, or most do, and I think that we also pick up on that in our assessments. So that's one thing that I think can vary quite tremendously across companies, even in the same role.

And then to your point, I think if you look at a sales profile, it does change if you think about a small organization versus a large one. So again, I don't have the specifics about, okay, these are the factors that vary. But I do know that when we've done these deep dives, whether it's across companies for the same role or, to your point, across company sizes for the same role, there definitely are pretty meaningful differences.

And I think that's why we really try to avoid these generic profiles, because at the end of the day, a generic profile is basically like Netflix telling you, “Hey Vivek, you're a male who lives in San Francisco. These are the movies that males who live in San Francisco like.” That's so generic it has no value, or very little value, and that's really what I think a generic profile does. But that's not what Netflix does: it trains on the stuff you like and then recommends based on that, and that's really the model that we subscribe to.

Vivek: Yeah, that's interesting. And on a related note, there are certain fields where AI is interesting and probably harmless. For example, let's say Spotify recommends the wrong song to me. I mean, I'm fine with that, or Netflix has a bad movie recommendation. But then there are certain areas, like health, self-driving cars, and job matching, where you can significantly change someone's trajectory; we're impacting people's lives in real life. And AI is only an indicator, right? It has its own level of accuracy. How do you think about using AI when it comes to hiring, when you know that a percentage of the results are not going to be accurate, but it's going to have an impact on people?

Frida: Sure. Well, I think there are a couple of ways to think about it. You always have to think about what the alternative is, right? If the alternative were a perfect system that was always accurate, then, of course, we would look at AI and be like, why would we ever use that? But the fact of the matter is, that's hardly the case: right now the failure rate for first-year hires is 30 to 50%. If this were an algorithm, it would be failing up to 50% of the time. So it's almost chance, essentially. And so I think that if you can produce an algorithm that's at 80% accuracy or above, you are improving on the status quo, even if it's not 90% or 100%, and we do have thresholds around how accurate our algorithms need to be in terms of recall and precision. So I do think that's one way to look at it: it is improving the status quo. And of course, you're always trying to get better.
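
As a concrete reading of that point, here is a small sketch of gating a hiring-recommendation model on minimum precision and recall and comparing it to a status-quo failure rate. The counts and thresholds are illustrative assumptions, not Pymetrics' real gating numbers.

```python
# Illustrative precision/recall gate for a hiring-recommendation model.
# All numbers here are made up; they are not Pymetrics' real thresholds.

def precision_recall(tp, fp, fn):
    """Precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical validation results: recommended candidates who did / didn't
# turn out to be strong hires, and strong hires the model missed.
tp, fp, fn = 80, 20, 15
precision, recall = precision_recall(tp, fp, fn)

MIN_PRECISION, MIN_RECALL = 0.8, 0.8   # assumed quality bar
baseline_failure_rate = 0.4            # ~30-50% first-year hire failure, per the episode

print(f"precision={precision:.2f}, recall={recall:.2f}")
print("deployable:", precision >= MIN_PRECISION and recall >= MIN_RECALL)
print("improves on status quo:", (1 - precision) < baseline_failure_rate)
```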

The second thing is that it's not an either/or proposition. And I think that's actually very relevant to the medicine discussion you just raised, because, as you know, algorithms can be more accurate than doctors in certain tumor-detection cases, but then doctors are better at others. And if you combine those two, that's actually where the strength lies. And I think that's really how we view our AI working: in combination, or side by side. It's not like you pass Pymetrics and you get hired. There are a bunch of other steps that happen along the way where there's human intervention that adds to that.

And then the third thing I would say, and there was a piece in HBR about this last week, is that if you think about the potential for bias reduction, it's actually huge. What I mean by that is that there are tons of studies showing that if you have an ethnic-sounding name or a female name, your chance of getting called back for an interview is lower than if you're Caucasian or male. And really, that's where Pymetrics intervenes: at the recommendation-for-interview stage. It's not saying hire/no hire; it's saying, hey, you should interview this person.

If we think of it that way, we know that humans are doing a very bad job. And so if we can produce an algorithm that essentially removes that bias, and we can pre-certify that that's happening, which is what we do, then that, plus an increased accuracy rate, is, I personally think, a win-win for everybody. It's more accurate and it's more fair, which is the whole idea.

And so yes, it's not perfect, because no system is perfect, but it's more accurate than the current system and it's more fair, so I sleep well at night thinking that. And then the other thing, like I said, is that because it's not a generic algorithm, you can get rejected from one role and match to another. So yes, you're right: if I recommend you for an engineering role versus a sales role, I have “changed your trajectory,” but that doesn't necessarily have to be a negative impact. It could be a very positive impact. So those are the ways that I think about it.

How do you think about it?

Vivek: Very similar. That's the easiest answer. I think about it the way you said, which is that right now it is a data point. And it's a very good data point, much better than the current set of tools that people are using, which is essentially resumes, which have tons of bias all over them. But I do believe that over time, things will start to converge.

This is going to get better and better over time. I'm thinking that the recruiting process would probably reduce down to, hey, we know your skills, we also know your aptitude, the EQ, the core value fit. But I want to spend an hour with you to sort of validate all of those things to make sure that you and I can get along well and work well.

Frida: Well, and the other thing I was going to say is that I really think this whole notion of a dystopian world created by algorithms is actually very misleading, in the sense that our current situation for economic distribution is quite poor. We haven't really talked about socio-economic diversity. But right now, if you grew up in a top-20% household, you're way more likely to go to a prestigious university and way more likely to then get a good job, and we view that as being merit-based. And, you know, in part it is merit-based, but there's lots of research which suggests that kids with the same SAT scores and grades, but whose parents are not in the top 20% socio-economic bracket, actually apply to elite schools at a much lower rate.

So people are getting shut out of the economic equation, or economic engine, all the time based not on merit but really just on what income bracket their parents are in. And I personally think we can use algorithms to help with that, and we've seen that in companies we work with all the time. If you think about campus recruiting, how it's normally done, you go to 10 universities, and most of those universities are prestigious ones, which I understand; that's why people go to HBS. But you're essentially ignoring 90% of the schools out there, and is that really fair? So that's why I actually think it's totally the opposite. I think algorithms can be used to dramatically increase fairness across socio-economic levels as well. And we've seen that already in lots of the clients we work with. So I feel very positive about that aspect of things as well.

Vivek: Yeah, absolutely. The internet is a great equalizer for a lot of different things, from access to information to the technology we all use every day, whether it's a phone, a MacBook, or all of the other tools we use. But one thing that hasn't yet happened, and I know HackerRank and Pymetrics and a lot of other entrepreneurs are actually working on this, is building an equalizer that matches a person to the right job based on their skills rather than their pedigree. It shouldn't really matter what school you went to or which company you worked at before; what matters is whether you have the skills and whether you're a fit for this particular job role and opportunity.