
Interviewer bias shows up most visibly in the language used to describe candidates—and in which candidates receive offers. When evaluation isn't anchored to specific competencies, interviewers fill the gap with impression-based judgment, and that judgment tends to favor candidates who feel familiar, warm, or easy to talk to over candidates who best demonstrate the required skills.
Research from Textio analyzed more than 10,000 documented interview assessments across nearly 4,000 candidates and found that candidates who received offers were 6.5x more likely to be described as "nice" and 3x more likely to be described as "enthusiastic" than candidates who were rejected. Offer recipients were also 4x more likely to be described as having "good energy."
These descriptors measure likeability, not job-relevant capability. When they appear consistently in the feedback of candidates who advance, it's a signal that bias—not evidence—is driving decisions.
Interviewers aren't typically aware they're substituting impression for evidence. In the moment, a strong interview and a good rapport with a candidate feel much the same. Without a mechanism that keeps evaluation tied to specific competencies throughout the conversation, the two are easy to conflate.
Bias also compounds across a panel. When each interviewer is working from their own unstructured read, the debrief tends to reinforce shared impressions rather than surface disagreement. A candidate who charmed the room is harder to push back on than the evidence warrants.
Bias doesn't only happen during the interview itself. It can enter at several points in the process, from how the role is defined before the first conversation to how feedback is written after the last one.
Bias is reduced when evaluation is structured around evidence rather than impression at every stage: competencies defined before interviews start, questions mapped to those competencies, real-time note-taking during conversations, and feedback formats that require interviewers to cite specific candidate responses rather than overall reads.
Lavalier is designed to keep evaluation anchored to evidence at every stage. Role Setup defines competencies before interviews start. Live Guidance keeps interviewers on track during the conversation with real-time prompts. Candidate Compare structures the debrief around competency-by-competency comparison rather than open discussion—so bias has less room to drive the outcome. Try it free on your next role.