Why your tech interviews are still broken, and what to do instead


Research shows that many of the most common technical interview practices produce far more noise than signal. They test memory under pressure instead of problem-solving in context. They favor candidates with insider knowledge of the interview format rather than those with the skills to excel on the job. And too often, they leave talented engineers alienated by unfair or biased processes.

This article tears down the myth of the flawless tech interview. Together with Vic Fernandes, Technical Assessment Lead at Proxify, and Fellipe Capolupo, Technical Interviewer at Proxify, we’ll explore the weaknesses in traditional methods, how to balance signal-to-noise ratios, and the formats that actually predict on-the-job success.

Along the way, we’ll look at how Proxify approaches interviewing differently—and share actionable advice for building hiring processes that are fair, effective, and rooted in reality.

The weaknesses of common interview practices

“In my experience, one of the biggest weaknesses of traditional interview practices is that they often fail to accurately reflect how someone actually works on a day-to-day basis. Whiteboarding, for example, often puts candidates in a high-pressure, artificial environment where they’re solving problems without the usual tools or resources they’d normally have. It can favor people who are comfortable performing on the spot rather than those who are thoughtful and methodical,” says Vic Fernandes.

Fellipe Capolupo adds that with whiteboarding, the upside is that it shows how a candidate thinks, breaks down a problem, and communicates their reasoning under pressure. He points out that the weakness, though, is that it doesn’t really reflect how engineers work day to day. You don’t normally write code on a whiteboard without access to an editor, a compiler, or documentation. So while it can highlight problem-solving ability, it doesn’t tell you much about someone’s actual coding style, how they structure solutions, or whether their code is maintainable in practice.

Vic suggests that long take-home assignments, on the other hand, can feel more realistic, but they come with their own issues. For example, he notes that they can be very time-consuming for candidates, especially those already working full-time, and that sometimes the effort required isn’t proportional to the stage of the process.

“And today, another challenge with take-home tasks is making sure the work actually reflects the candidate’s own skills. With AI tools being so accessible, it’s hard to be 100% confident the solution wasn’t heavily outsourced to something like ChatGPT or GitHub Copilot,” he underlines. Fellipe agrees.

“In theory, take-home assignments are much better at showing how someone writes code, organizes a project, and applies general knowledge. But the challenge there is fairness and reliability. There’s no guarantee the work you’re reviewing is 100% the candidate’s — they might get outside help, or lean heavily on tools. It also raises questions of time investment: for someone who’s already working full-time, a multi-hour assignment can be a significant burden.” For Vic, the weakness isn’t just about whiteboards or assignments themselves, but about balance and fairness.

“The best interviews, in my view, combine a mix of approaches — maybe a small practical task, some collaborative problem-solving, and a good conversation about trade-offs and past experiences. That way, you’re not only testing technical ability but also making sure you see how the person really thinks and works.”


Assessments that mirror real-world work

In Vic’s experience, the assessments that best predict real on-the-job performance are the ones that look and feel like the work candidates would actually be doing day to day.

“That’s why we try to focus on real-world coding tasks, often during live coding sessions with screen sharing. This setup doesn’t just show us if someone can write code, but also how they approach a problem, how they debug when something doesn’t work, and how clearly they explain their thinking as they go.”

He adds that the candidates who go on to be the strongest performers usually bring a good balance of hard and soft skills. Technically, they can deliver clean, maintainable code, but just as importantly, they can collaborate, communicate clearly, and handle feedback well. Another factor the team pays close attention to is motivation: those who genuinely want to commit full-time to one client, rather than juggling multiple side projects, tend to be more focused and effective in the long run.

Balancing signal-to-noise ratio

“I think the key to balancing signal and noise is recognizing that not every stage of the process should measure the same thing, or at the same depth. In the earlier stages, we usually give candidates more time and flexibility with coding tasks. That way, we get a good baseline of their technical ability without putting them under unnecessary pressure,” Vic points out.

He notes that as candidates move further into the process, it makes sense to raise the difficulty, perhaps with tighter time constraints or more complex problem-solving, because by that point the goal is to see how they operate under pressure, how they think on their feet, and how they deal with ambiguity.

Throughout all stages, though, his team doesn’t just look at whether the code works or passes test cases. They also pay close attention to factors such as code quality, maintainability, debugging approach, and the level of handholding required. Sometimes, how someone gets to the solution is just as telling as whether they get there at all.

“To me, that’s the balance: in the beginning, optimize for giving candidates space to show their skills without unnecessary noise. Later, introduce controlled stressors to observe how they adapt. And in every stage, measure not just the end product but the way they solve problems. That’s where the strongest signal about real-world performance comes from.”

Ensuring fairness and eliminating bias

Vic says that his team makes fairness a priority in the design of their assessments. One way they do that is by keeping the structure of the tests consistent across different skills.

“Whether we’re assessing backend, frontend, or something else, we make sure the candidate has a clear idea of what to expect. We also benchmark the level of difficulty, not only against our own pool of candidates at Proxify but also against broader market standards, so we’re confident the tasks are challenging but realistic.”

They also constantly monitor approval rates to make sure they’re not unintentionally disadvantaging certain groups. If they notice discrepancies, they dig in to understand why. And, he says, probably the most important part: they actively gather and review candidate feedback after each assessment.

“Their perspective helps us spot things we might overlook, and it ensures that we continually improve the process to provide the best candidate experience possible.” In short, fairness stems from having a consistent structure, careful benchmarking, ongoing data monitoring, and genuinely listening to the individuals going through the process.

Formats that consistently predict strong on-the-job performance

Fellipe believes there is no silver bullet when it comes to assessment formats. “No single exercise will perfectly predict on-the-job performance. The truth is, the more realistic and thorough an assessment is, the more insight you get — but that usually comes at a cost, either in terms of the candidate’s time or the company’s resources,” he notes.

“What I’ve seen work best is designing assessments that focus on real-life scenarios the person would actually face in the role. If it’s a backend engineer, that might be building a small API; if it’s frontend, maybe implementing a feature with some UI constraints. The key is to keep the exercise current with the technologies and challenges of the role, but also scoped so it doesn’t become an unreasonable burden.”

In other words, the formats that tend to predict strong performance are the ones that balance realism with practicality — giving candidates the chance to show how they think and work in situations that closely mirror what they’d actually do on the job.
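To make that concrete, here is a hypothetical sketch of the kind of scoped backend exercise described above: a tiny in-memory “tasks” service written as plain functions. The scenario, names, and rules are illustrative assumptions, not an actual Proxify task. Even an exercise this small surfaces the things interviewers say they look for: input validation, error handling, naming, and structure.

```python
# Illustrative take-home-sized exercise (hypothetical, not a real Proxify task):
# a minimal in-memory "tasks" API modeled as pure functions, so a reviewer can
# focus on structure and error handling rather than framework setup.

from dataclasses import dataclass, field


@dataclass
class TaskStore:
    """In-memory storage standing in for a database."""
    tasks: dict = field(default_factory=dict)
    next_id: int = 1


def create_task(store: TaskStore, title: str) -> dict:
    """Create a task; reject blank titles with an HTTP-style error."""
    if not title or not title.strip():
        return {"status": 400, "error": "title is required"}
    task = {"id": store.next_id, "title": title.strip(), "done": False}
    store.tasks[task["id"]] = task
    store.next_id += 1
    return {"status": 201, "task": task}


def complete_task(store: TaskStore, task_id: int) -> dict:
    """Mark a task as done; unknown IDs return a 404-style response."""
    task = store.tasks.get(task_id)
    if task is None:
        return {"status": 404, "error": f"no task with id {task_id}"}
    task["done"] = True
    return {"status": 200, "task": task}
```

Because the logic is framework-free, a reviewer can read and run it in minutes, and a candidate can complete it without burning an evening on project setup, which is exactly the realism-versus-burden balance described above.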

Debunking widespread myths about technical interviews

“If I could debunk one widespread myth about technical interviews, it would be the idea that the whole process is now completely AI-driven,” Vic says firmly. “That’s simply not true, at least not at Proxify. We actually have a dedicated Assessment team of around 35 people, including Sourcing, Recruitment, and Technical Assessment specialists, and every single candidate matters to us.”

He adds that it’s very common to see internal discussions across teams when a candidate doesn’t pass an interview. People will ask for more context, review the decision together, and sometimes even challenge the outcome to see if the candidate can be given another chance. In some cases, these conversations lead to them ‘saving’ a candidate; in others, they help improve internal processes, and often they allow the team to provide much more personal and constructive feedback to the candidate.

While tools and automation can enhance efficiency, the heart of the process remains very human. Real people carefully evaluate, discuss, and make sure candidates get the fairest shot possible.

Fellipe adds that the myth he would love to debunk is the idea that test scores or solving every problem perfectly are what really matter in a technical interview. “In practice, I’ve seen candidates who 'aced' the technical portion but struggled once on the job, and others who didn’t get perfect scores but turned out to be excellent teammates and strong contributors.”

Hard skills are obviously important, he says, but they’re not the whole picture. Communication, collaboration, and the ability to clearly express ideas are just as critical, sometimes even more so.

“A candidate who can reason through a problem, explain their thinking, and work well with others will often be far more effective than someone who’s technically brilliant but can’t communicate. For me, that balance between coding ability and soft skills is what actually predicts success,” he notes.

Why many tech interviews are still broken despite research against traditional methods

“I think a big reason tech interviews still feel broken is inertia,” says Fellipe.

“Traditional methods, such as whiteboarding or puzzle-style questions, have been around for years, and companies continue to use them partly because they’re familiar, easy to standardize, and scalable. Even when research shows they don’t always predict job performance, changing the process takes effort, alignment across teams, and sometimes more resources than companies are willing to spend.”

He adds that there is also the issue of risk. Hiring is high-stakes, and many organizations lean on methods that “feel” rigorous, even if they introduce bias or don’t reflect real-world work. He believes that it’s safer to stick with a flawed but well-known process than to try something new that might be harder to measure.

“So despite all the research, a lot of companies struggle to balance fairness, efficiency, and practicality. The good news is we’re seeing more movement toward project-based assessments, pair programming, and structured interviews — approaches that give a better signal — but it takes time for the industry as a whole to shift.”

How it’s done at Proxify

“Before I started managing the technical interviewer team and overseeing all aspects of candidate assessments, I was actually a technical interviewer at Proxify myself,” recounts Vic. “At that time, I had conducted more interviews than anyone else, and the approach I still recommend to my team today comes directly from the guidance I got when I was onboarded by Viktor Jarnheimer, our CEO.”

“His advice was simple but powerful: make sure you are personal, make sure you are friendly, and make sure you are professional. That balance really matters. Candidates should feel comfortable enough to show their real abilities, but at the same time, they should feel that we take the process seriously and that the standards are high.”

“And the last point he stressed, which I fully agree with, is that you should only move forward with candidates you genuinely believe can excel in the job. It’s not just about passing a test; it’s about picturing them thriving in a real client environment. I think that philosophy is what has allowed us to keep the interviews both human and rigorous, and it’s the approach I continue to encourage across the team.”

A quick-fire round of actionable advice

From your vantage point, what’s the cost to companies of relying on poor interview signals?

Fellipe Capolupo: The cost of relying on poor interview signals is actually quite high. In tech especially, people are often the biggest investment a company makes, and when you hire the wrong person or miss out on the right one, the costs add up quickly. It’s not just the expense of running another hiring process (though that alone is costly). It’s also the time lost while a role stays unfilled.

And once someone is hired, there’s the ramp-up period. It can take months before a new engineer is fully productive, so if that person isn’t the right fit, the company ends up losing both the onboarding investment and the productivity they were hoping to gain. On the other hand, if a strong candidate is filtered out due to bad signals, that’s also a significant missed opportunity.

So the real cost is twofold: wasted money and time in the short term, and slower growth or execution in the long term. That’s why getting reliable signals in interviews is so critical — it directly impacts both team performance and the company’s bottom line.

What interview practices have you seen consistently repel top talent — and how should leaders rethink them?

Fellipe Capolupo: One of the practices that consistently pushes top talent away is a drawn-out process with too many stages or overly long, tedious tests. Strong candidates often have multiple opportunities on the table, and when the interview process feels more like a marathon than a fair evaluation, they’re likely to disengage or walk away. It signals that the company doesn’t value their time, which can leave a negative impression.

What tends to work much better is focusing on assessments that are meaningful and relevant. Instead of abstract puzzles or endless rounds, give candidates problems that mirror real work — even better if it’s a simplified version of a challenge the company is currently facing. Not only does that keep the process leaner, but it also gives candidates a chance to picture themselves contributing to something concrete.

So, leaders should rethink interviews less as hurdles to clear and more as realistic previews of the job. That way, they get stronger signals about technical fit while also giving top talent a positive experience that makes them want to join.

If you had to advise a CTO to redesign their hiring process tomorrow, what would be the first thing you’d tell them to cut, and what would you replace it with?

Fellipe Capolupo: I’d cut the long, abstract assessments that don’t reflect real work — things like endless algorithm puzzles or marathon take-homes. I’d replace them with shorter, up-to-date exercises that mirror the real problems the team actually solves. That way, the process is more respectful of the candidate’s time and gives the company a clearer, more realistic signal of how someone would perform on the job.

Do you believe there is such a thing as the “perfect” technical interview, or is the goal something else entirely?

Fellipe Capolupo: I don’t think there’s such a thing as a perfect technical interview. It’s always a balancing act between the company’s resources, the candidate’s time and engagement, and the depth of the technical assessment.

You can’t maximize all three at once. If the process is very thorough, it risks being too long or costly; if it’s too light, you may not get enough signal.

What really matters is treating the process as something that’s continuously improved. The more observability you have (for example, seeing how candidates perform on each part of the assessment and whether that correlates with success on the job), the better you can fine-tune it. The goal isn’t perfection; it’s finding that balance and always making adjustments to keep the process fair, efficient, and predictive of real performance.

Conclusion

Tech interviews may never be “perfect,” but they don’t have to be broken. By replacing puzzles and pressure tests with assessments that mirror real work, balancing signal with fairness, and keeping the process human, companies can finally start hiring for what truly matters: engineers who thrive on the job, not just in the interview room.


Verified authors

We work exclusively with top-tier professionals.
Our writers and reviewers are carefully vetted industry experts from the Proxify network who ensure every piece of content is precise, relevant, and rooted in deep expertise.

Stefanija Tenekedjieva Haans

Content Lead

Journalist turned content writer. Always loved to write, and found the perfect job in content. A self-proclaimed film connoisseur, cook and nerd in disguise.

Vic Fernandes

Technical Assessment Lead at Proxify

25 years of experience

Expert in Technical Assessment

Electrical Engineer, 25+ years in tech, and a Master's in Neuroscience

Fellipe Capolupo

Technical Interviewer

11 years of experience

Expert in Technical Interviews

Fellipe is a solution-focused software engineer with over ten years of experience in back-end software development. He focuses mainly on the .NET framework and is proficient in object-oriented programming, especially C#. Fellipe has been working with .NET Core for the last two years.

Find your next developer within days, not months

In a short 25-minute call, we would like to:

  • Understand your development needs
  • Explain our process to match you with qualified, vetted developers from our network
  • Present you with the right candidates, on average within two days of our call

Not sure where to start? Let’s have a chat