
From gut feel to good evidence: fixing the interview

Colleen Gallagher

There's a step in hiring where most of the value gets created or destroyed, and almost nobody is measuring it. It's the interview.

Companies have built sophisticated systems around everything else. Sourcing pipelines, employer brand, scheduling tools, scorecards. The interview itself, the conversation that determines who actually gets hired, has been left almost entirely to gut feel.

I recently joined a group of talent acquisition leaders at TALK to dig into where the interview process breaks and what the research says about fixing it. Here’s what we discussed.

The interview is where the money is lost

In finance, there's a metric we lean on all the time: customer acquisition cost (CAC) payback, which weighs the investment in winning a new customer against the value that customer brings back to the business. I see a parallel in how companies should think about hiring: the investment in finding the right person against the value that person brings to the organization. There's real money spent up front, and the impact to the business doesn't start coming back until the new hire is ramped, integrated, and contributing. For most knowledge roles, that's six to twelve months. For senior roles, often longer.

Which is why this stat from the Work Institute's 2025 Retention Report is so eye-opening: 40% of all employee attrition comes from people hired in the last 12 months. Early attrition is the most expensive kind.

Replacing a senior hire costs up to 5x their annual salary, and 60–70% of that cost is indirect. It doesn't show up on a line item. It shows up in missed quarters, slower execution, and managers spending their time managing a problem instead of running their function.

Interviews are where hiring decisions are made

Talent acquisition has built its operating model around funnels. Conversion rates, time to fill, source of hire. These are pipeline metrics, and they matter. But the interview isn’t really a pipeline stage. It’s where you’re trying to answer a much harder question: will this person actually perform in this role? That takes a completely different kind of rigor than running an efficient funnel.

Structured interviews, where every candidate gets role-relevant questions scored against the same job criteria, are 34% better at predicting job performance than unstructured ones. That finding goes back to an analysis in the late 1990s, and it was revalidated in the last few years using cleaner methods. The updated work also found something more interesting: structured interviews are the single strongest predictor of job performance we have, ahead of cognitive ability tests.

Our own research at Textio backs this up: analyzing interview feedback, we found that candidates who receive offers are 12x more likely to be praised for their personality than for their skills.

Most interviews fail before they start

When interviewers come back with inconsistent signal, it’s tempting to think the interviewers are the problem. They’re not, at least not intentionally. Every interviewer I’ve worked with is trying to do a good job. When the process doesn’t specify what to assess and who’s responsible for assessing it, even good interviewers produce bad data. So much of what goes wrong in the interview actually went wrong before the interview ever happened.

Look at what happens in most intake meetings. When do you need this person? What’s the comp range? Where should we source? It’s administrative. The hardest and most important question almost always gets skipped: what specific evidence would convince us this person will succeed in this role?

Take a word like “strategic” for a Senior Product Manager. On its own, it could mean three completely different things to three different interviewers, and they’d each go off and assess for whichever one made sense to them. If you do the work up front, “strategic” turns into a few specific competencies with benchmarks (prioritizing across stakeholders, articulating a product vision, sequencing the highest-leverage problems). Now every interviewer knows what they’re responsible for assessing, and what they’re not. When teams do this work up front, interviewers agree with each other much more often, and the resulting hires actually perform.

The execution gap in the live interview

Even when teams have the right competencies defined, execution often breaks down in the room. The clearest place to see it is in the questions themselves. “How do you prioritize?” gets you a rehearsed framework. “Tell me about a time you had multiple stakeholders pushing for different priorities with real business consequences” surfaces specific evidence you can actually probe. A scorecard is only as good as the questions feeding it.

This is also where the gut-feel override starts. When interviewers don’t ask the questions that surface real evidence, they have nothing to evaluate against. So they walk out with an overall impression of how the conversation felt, and then they backfill the scorecard to match it.

Where feedback and evaluation go wrong

Even after the interview, two more failure modes can quietly compromise the decision.

  • Feedback contamination: when interviewers see each other’s feedback before submitting their own, anchoring bias takes over. Independent scoring produces a 34% reliability gain.
  • Vague feedback: “strong communicator,” “good energy,” “culture fit.” None of that helps anyone make a defensible decision, and it doesn’t hold up if that decision is ever questioned.

What I see in teams that get this right

Across every conversation I have with TA leaders who are actually doing this well, I see the same five practices.  

1. Intake defines the required competencies, skills, and behaviors. The role has clear measurement criteria before sourcing begins.

2. Interview plans assign specific competencies to each interviewer. Everyone knows what they own.

3. Questions are role-relevant and consistent. The same questions are asked for each competency being assessed.

4. Feedback is evidence-based and independent. Competency-level ratings with required evidence are submitted before any group discussion or debrief.

5. Debriefs focus on data, not vibes. Discussion centers on disagreement and missing signals, not quick consensus.

Technology can help here, but only if it’s built into the interview itself, not bolted on around it.

Diagnose your interview process today

Before your next interview loop, try this: pull up the last five scorecards your team submitted. Count how many have competency-level evidence, meaning actual evidence cited against specific criteria, not just an overall rating with gut-feel comments. That data will tell you exactly where you are.  

The interview is the highest-stakes, most under-measured step in hiring. The research on what works has been clear for over 25 years. What’s missing is execution, and execution comes down to how the process is designed. Give interviewers a real structure to work inside, and they can actually do the job you’re asking them to do. Recruiters have been trying to give them that for years. The piece that’s been missing is tools that live inside the interview itself, not all around it. That’s finally something we can do.

This is why we built Lavalier

We built Lavalier to make structure part of how you already work. The interview still feels like a conversation. The interviewer is still the one running it. What changes is that the right questions, the right competencies, and a clean way to capture evidence are all just there in the flow. The interviewer still makes every call. Lavalier just makes sure they have what they need to make a good one.

Practically, Lavalier shows up at every step of the process: interview questions tied to the competencies you defined at intake, real-time guidance during the live conversation, and candidate summaries grounded in evidence so debriefs are faster and decisions are defensible. The result is better interviews and hiring decisions teams can actually stand behind.

If you want to see what it looks like, try it for free at lavalier.ai. Or just reach out to me directly at cg@textio.com. I’d love to hear what you’re seeing in your own process.

 

Colleen Gallagher is the CEO of Textio and the co-creator of Lavalier. She has spent 20+ years building and scaling businesses, and is focused on helping recruiting teams make better hiring decisions through structured, evidence-based interviewing.
