
Screening | Step 2: Design Impactful Challenges

Written By Ritika Trikha | February 12, 2016


Welcome to the 2nd part of a 4-part series on screening senior engineers.
Step 1. Before You Do Anything
Step 2. Designing Impactful Challenges (This piece)
2a. The Challenge Checklist
Step 3. Setting Expectations, Warming Your Candidates
Step 4. Calibrating After the Screening


In Step 1, we made the argument that you need to think about which you value more: intelligence or knowledge. Now we’ll get to the core of the actual design of these questions.

Broadly speaking, we’ve found that there are three facets of technical skills you should test for:

  • How well does he or she know the fundamentals?
  • How deep is his or her technical knowledge?
  • How deeply can he or she think about problems?

The first two can be tested with code challenges before you even interact with the candidate. The third facet is best measured in person. This is the longest step of this guide because we offer detailed examples of each category of challenge, as well as a crucial checklist of the biggest mistakes companies make when designing challenges.

I. Designing Algorithm Code Challenges

There’s one thing you need to account for right off the bat: Can they code? It sounds obvious. But for years, notable engineers have repeatedly (in 2007, 2012, and as recently as 2015) pointed out that a major chunk of applicants can’t write simple programs. You can weed these folks out by asking them to solve a basic code challenge. Your first challenge should be light and breezy.
Before asking algorithm questions, Soham Mehta, founder of the technical interview preparation site Interview Kickstart, includes quick multiple-choice challenges that test very basic CS fundamentals. For example:

  1. Given N distinct numbers, how many subsets can you form? [Answer: 2^N]
  2. Given a string of N distinct characters, how many permutations of that string exist? [Answer: N!, or simply N x (N-1) x (N-2) … x 1]
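If you want to sanity-check these answers (or turn them into a quick automated warm-up), a brute-force count in Python for a small N confirms both facts. This is just an illustrative sketch, not one of Mehta’s questions:

from itertools import combinations, permutations

def count_subsets(items):
    # Count every subset (including the empty set) by brute force.
    return sum(1 for r in range(len(items) + 1) for _ in combinations(items, r))

def count_orderings(s):
    # Count the orderings of a string whose characters are all distinct.
    return sum(1 for _ in permutations(s))

print(count_subsets([1, 2, 3, 4]))   # 16, i.e. 2^4
print(count_orderings("abcd"))       # 24, i.e. 4!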

a. Multiple Choice
Multiple-choice code challenges are lightweight and easier on candidates. Offer several questions, each with plenty of answer choices; that minimizes the chance of someone getting correct answers by guessing randomly.


Here’s an algorithm-based multiple choice example:

[Screenshot: algorithm-based multiple choice example]
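To give a flavor of the format, a question in this style might show a short snippet and ask about its complexity. The example below is purely illustrative (it is not the question pictured in the original screenshot):

# What is the worst-case time complexity of the function below?
#   A) O(1)    B) O(log n)    C) O(n)    D) O(n log n)

def first_greater_index(sorted_nums, target):
    # Return the index of the first element strictly greater than target.
    lo, hi = 0, len(sorted_nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] <= target:
            lo = mid + 1
        else:
            hi = mid
    return lo

# Correct answer: B) O(log n), since the search range halves on every iteration.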

b. Code Completion
Code completion is another lightweight style to consider for a first impression with candidates. In a code completion challenge, candidates fill in the blanks of a given piece of code. Some of the greatest programming minds spend a great deal of time reading source code, so if you provide most of the code and ask candidates to fill in the missing lines, you can gauge critical thinking and the ability to read other people’s code while they solve the challenge. Plus, it’s often more fun. We recently initiated a contest series exclusively for code completion for our community of developers, and it’s been one of the most engaging contests we’ve ever held. Check out the contest for examples of code completion challenges, which can help guide you in creating your own.
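For illustration, a code completion prompt might be built from a snippet like the one below, with the marked line removed for the candidate to fill in. This is a hypothetical example, not one taken from the contest:

def is_palindrome(text):
    # Ignore case and non-alphanumeric characters.
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]   # <-- this line would be blanked out for the candidate

print(is_palindrome("A man, a plan, a canal: Panama"))   # True
print(is_palindrome("hello"))                            # False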

II. Designing Knowledge-Based Code Challenges

Unlike fundamental algorithm challenges, knowledge-based code challenges are much more straightforward because they depend on knowledge of a particular domain or technology. For instance, challenges based on client-server architecture, sockets, or multithreading are good signals of seniority in distributed systems. We asked Dr. Memelli to offer a great example of a knowledge-based code challenge:

[Screenshot: Dr. Memelli’s knowledge-based code challenge example]
Here’s a knowledge-based multiple choice example:
[Screenshot: knowledge-based multiple choice example]
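As a further, purely illustrative sketch of this category (separate from the examples above), a multithreading question might show a snippet like the one below and ask whether the final count is guaranteed:

import threading

counter = 0

def increment(times):
    global counter
    for _ in range(times):
        counter += 1   # read-modify-write: three separate steps, not atomic

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Question: is counter guaranteed to equal 400000? Why or why not?
# (It is not: the += steps from different threads can interleave, losing updates.)
print(counter)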

III. Designing Challenges that Gauge Depth of Thinking

While intelligence-based challenges are the best predictors of success, there’s one major differentiator between a novice and an expert. And it’s not years of experience. Rather, it’s how deeply they can think about problems.

Josh Tyler, who runs engineering at Course Hero, defines experts as engineers who have proven ownership or thought leadership of large systems, like services, apps or frameworks. With that experience of ownership and tooling, research suggests experts intuitively look at problems in deeper context than novice programmers do.
There are quite a few studies that support this:

[Screenshot: excerpt from the first study, contrasting novice and expert decision-making]

Novice: “Novices operate from an explicit rules and knowledge-based perspective. They are deliberate and analytical, and therefore slower to take action; they decide or choose.”
Expert: “Experts operate from a mature, holistic, well-tried understanding, intuitively and without conscious deliberation. This is a function of experience. They do not see problems as one thing and solutions as another; they act.”

  • A second fascinating study similarly uses fMRI technology to measure blood flow in the brain, comparing a novice artist and an expert artist. Again, the findings were in line with what we’ve seen so far:

Novices = process information in the area of the brain that deals with surface-level features.
Experts = process information in the area that deals with deeper meaning.
In a phrase: “The artist ‘thinks’ portraits more than he ‘sees’ them.”

So, how do you really test for depth of thinking? How do you know if engineers can think about software rather than just see its code? Unlike the first two types of challenges (algorithm and knowledge-based), which can be clear-cut automated challenges, questions that measure depth of thought are usually best asked in person. This way, you can start off simple and make the problem statement more complex as you work through it together. There are two types of questions that can help you measure seniority by depth of thought:
a.) Open-ended questions
After you have a vetted stream of candidates who have passed the technical aptitude questions, asking open-ended questions about big-picture strategy is a good way to gauge depth of thought. Tyler finds success with questions like:

Tell me about something you did that had a big positive impact as a result of taking time to think and strategize. How about a negative impact?

“I find that mistakes are made when technical people run straight into the how we build something instead of challenging what the right thing to build is and asking the customer facing questions on what success looks like,” he says.   

Other examples of open-ended questions that test seniority and leadership skills:

  • How do you build search for Gmail?
  • Describe to me some bad code you've read or inherited lately.
  • When do you know your code is ready for production?

b.) Infuse multiple parts in your challenges
Tyler says the most effective way to design challenges for senior folks is to create lots of room for depth in the problem statement.
“Most of our interview questions are designed in a way that has many parts. A junior candidate usually only gets through the first part, whereas senior folks get through the second or third parts,” Tyler says.
For instance, here’s a good optimization question:

You are given many words and you have to find the frequency of each word. Simple maps, arrays, or lists won’t work when a huge number of words are given; instead, you should go for a Trie, an efficient data structure here (a sketch of that approach follows the breakdown below).

Greg Badros, who founded Prepared Mind Innovations, would break up this problem into the following parts:

  • Tell the candidate that you’ll start simple and make it more complex as you work through the problem
  • Ask for word frequencies
  • Make sure they get the simple map solution without coding it
  • Tell them how much RAM they’ve got and how big the dataset is
  • Ask them to estimate how big the process will get in the language they’d write this in
  • If they get that far, ask them to propose an alternative data structure that would be more memory efficient, because they’re out of RAM
  • When they’re out of design ideas, have them code as far as they can
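To make the intended destination concrete, here’s a minimal sketch of a Trie-based word-frequency counter. It assumes lowercase words fed in as a plain Python list and is meant only to illustrate the data structure the problem points to, not to serve as a reference solution:

class TrieNode:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.count = 0       # how many words end at this node

def add_word(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.count += 1

def frequencies(node, prefix=""):
    # Walk the trie depth-first, yielding (word, count) for every stored word.
    if node.count:
        yield prefix, node.count
    for ch, child in node.children.items():
        yield from frequencies(child, prefix + ch)

root = TrieNode()
for w in ["the", "cat", "the", "trie", "cat", "the"]:
    add_word(root, w)
print(dict(frequencies(root)))   # {'the': 3, 'trie': 1, 'cat': 2}

Because common prefixes share nodes, the trie trades per-word hashing overhead for shared structure, which is the memory argument the problem is driving at.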


    The Code Challenge Checklist

    So far in Step 2, we’ve been focusing on the different categories of challenges, based on what you need. Now let’s get down to the practical side of designing them. We looked at the question banks of various companies to pinpoint the patterns behind successful code challenges. Based on these patterns, here’s a checklist of 5 common mistakes to avoid so that each challenge draws the best candidates:
    [1] Do you have the right answer?
    This sounds like a no-brainer. But you’d be surprised as to how many companies just get their own code challenge wrong. CareerCup CEO Gayle McDowell frequently consults with tech companies of all sizes on their hiring process.
    “When I’ve reviewed companies’ question banks, about 10% of answers are wrong,” McDowell says.
    And it’s not a matter of carelessness or minor bugs in typing up the solutions. The challenge designer genuinely believed their algorithm was right until they were proven wrong. Take the time to ensure that 100% of your answers are correct. Otherwise, you’ll turn away a lot of strong candidates who were actually answering the question right.
    [2] Are your algorithm challenges challenging enough?
    When it comes to algorithm code challenges, McDowell estimates that if even 5% of your candidates instantly know the solution to your question, it’s likely too easy. Easy challenges cluster the mediocre and the great, which of course makes identifying great developers very difficult. Your challenge should be hard enough to separate the average from the top.
    In an algorithm code challenge, you should have at least one challenging question that only about 20% of your candidates solve perfectly. A good algorithm question has 2 (or more) different solutions, one more optimized than the other. This leads to 3 tiers of performance (a sketch of such a two-solution question follows the tiers below):
    Tier 1: The best candidates will pass all test cases because they implemented a correct and optimal solution.
    Tier 2: The okay candidates will pass most of the test cases with an algorithm that’s correct but suboptimal. Their code gives the right output, but times out on the large test cases.
    Tier 3: The candidates you don’t want to hire won’t develop a correct algorithm at all.
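For instance, a maximum-subarray question naturally separates these tiers: a correct but quadratic solution lands in Tier 2, while a linear-time solution lands in Tier 1. This is an illustrative example, not one drawn from McDowell’s question banks:

def max_subarray_quadratic(nums):
    # Correct but O(n^2): a Tier 2 submission that times out on large test cases.
    best = nums[0]
    for i in range(len(nums)):
        total = 0
        for j in range(i, len(nums)):
            total += nums[j]
            best = max(best, total)
    return best

def max_subarray_linear(nums):
    # Correct and O(n) (Kadane's algorithm): a Tier 1 submission.
    best = current = nums[0]
    for x in nums[1:]:
        current = max(x, current + x)
        best = max(best, current)
    return best

data = [3, -2, 5, -1, -6, 4, 2]
print(max_subarray_quadratic(data), max_subarray_linear(data))   # 6 6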
    If you use a platform like HackerRank, for instance, you can evaluate candidates based on the time their code takes to complete each test case, which reflects both asymptotic complexity and constant factors. But the platform doesn’t impose a time condition on all problems; it’s up to you as the problem designer to decide whether efficiency is important.
    [3] Are your problem statements detailed and tailored to your company?
    Problem statements can be incredibly daunting, but for senior engineers, this should be a telling challenge. Engineers who can identify the right sub-problems, solve smaller problems and then merge them together are more likely to be successful as leaders in your org.
    These larger principles of being able to break things down, spot patterns and attack problems systematically are crucial for any senior engineer. The more tailored and specific the problem statements are to your industry, the higher the chances of high-quality candidates completing code challenges as part of their  application.
    We ran an experiment comparing two similar companies’ challenges: one with a generic problem statement and another with a more tailored challenge. We saw a nearly 10% higher completion rate with the latter, and anecdotally we see this correlation time and time again.
    Candidates are more drawn to and interested in challenges themed to your mission versus generic, cold challenges. VMware, for instance, created a host of very detailed, complex problem statements that delved deep into virtualization. You can see the Logical Hub Controller problem is richly tailored to a typical virtualized datacenter problem they grapple with daily. By replacing resumes with tailored code challenges for two years, VMware:

    • Saved VMware engineers 100+ hours per quarter by only calling candidates who passed the code challenges
    • Increased the success rate of candidates who came onsite (compared to manual resume screening)

    [4] Did you rule out luck or bias?
    If a candidate gets stuck on some aspect of a problem, were they just unlucky, or is it a real gap in ability? It’s important to distinguish between the two, and the best way to do that is by getting multiple data points. That is, ask a question that involves multiple hurdles. This way, if a candidate gets stuck on one aspect, you can help them through it and they still have other logical leaps to overcome (and thus additional data points for your evaluation).
    For example, consider this question:
    Given an array of people, with their birth and death years, find the year with the highest population.
    This problem has a progression of solutions, each of which presents a new hurdle (a sketch contrasting the brute-force and optimized approaches follows the list below).

    • The 1st hurdle is coming up with any correct solution. Most candidates should be able to come up with a brute force algorithm (e.g., for each year that a person is alive, walk through all the people to count the number of people alive in that year), but a few won’t. If they don’t, give them some help and see if this was just a brief lapse in reasoning or a consistent issue.
    • The 2nd hurdle might be noticing that you don’t need to check the same year repeatedly. You can use a hash table to cache this data.
    • A 3rd hurdle might be noticing that you only need to check the years between the first birth year and the last death year. So a candidate might first get that information, and then proceed with checking that range of years.
    • A 4th hurdle might be noticing that, actually, you only need to check the first birth year through the last birth year. You don’t need to check years in which people only died; they certainly won’t have the highest population.
    • And so on, until we arrive at an optimal solution. Great candidates might even leap past several hurdles at once, and that’s a great sign. A problem like this has so many hurdles that you can more effectively see whether there’s a pattern in the candidate’s performance. For good measure, test questions out on your peers first: ask at least 2-3 engineers to solve the problem before unleashing it on your candidates. This helps you see what sort of hurdles are in the question and what to expect from candidates.
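As a rough illustration of the first and later hurdles, here is a minimal sketch in Python. The brute-force version clears only the first hurdle, while the second uses a delta array over the birth-year range, one reasonable endpoint of the optimization path; neither is presented as the canonical answer:

def peak_year_brute(people):
    # 1st hurdle: any correct solution. For each candidate year, count everyone alive.
    years = range(min(b for b, _ in people), max(b for b, _ in people) + 1)
    def alive(year):
        return sum(1 for birth, death in people if birth <= year <= death)
    return max(years, key=alive)

def peak_year_deltas(people):
    # Later hurdles: only birth years matter, so record +1 at each birth and -1
    # just after each death, then take a running sum across the birth-year range.
    first = min(b for b, _ in people)
    last = max(b for b, _ in people)
    deltas = [0] * (last - first + 2)
    for birth, death in people:
        deltas[birth - first] += 1
        if death < last:
            deltas[death - first + 1] -= 1
    best_year, best_pop, running = first, 0, 0
    for offset in range(last - first + 1):
        running += deltas[offset]
        if running > best_pop:
            best_year, best_pop = first + offset, running
    return best_year

people = [(1950, 2010), (1960, 1990), (1985, 2020), (1988, 2017)]
print(peak_year_brute(people), peak_year_deltas(people))   # 1988 1988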

    [5] Are you sure there isn’t any CS jargon?
     There’s a common misconception that you need to have a computer science (CS) degree in order to be a good programmer. In reality, only 40% of working software engineers in one survey said they had a CS degree. When we wrote about this in TechCrunch, Wavefront CEO Sam Pullara told us he agreed.
    [Screenshot: Sam Pullara’s comment]
    Senior candidates, in particular, are the most likely to be unfamiliar with CS textbook terms. For instance, avoid dropping terms like “state machine” or “dependency injection.” Likewise, to avoid ruling out senior engineers who haven’t taken CS fundamentals classes (or don’t remember them), avoid questions that require more obscure algorithm knowledge.

This brings us to not only the most important step but also the most overlooked component of hiring senior engineers.
Read on to Step 3. Setting Expectations, Warming Your Candidates


Please subscribe to our blog to get a quick note when we occasionally write thoughtful, long form posts.

