Interviewers focus on personality instead of skills primarily because interviews lack real-time structure that keeps evaluation tied to specific competencies. Without prompts, checklists, or in-the-moment guidance, interviewers default to overall impression—and overall impression is heavily shaped by how likable a candidate feels.
What the data shows
Research from Textio analyzing more than 10,000 documented interview assessments across nearly 4,000 candidates found:
- Candidates who received offers were 12x more likely to be described as having a "great personality"
- Offer recipients were 6.5x more likely to be called "nice" and 5x more likely to be described as "friendly"
- More than 1 in 3 interviewers had commented on a candidate's personality before an offer was extended
This pattern holds even when hiring teams have structured processes in place—question lists, scorecards, interview guides. Having the right tools available doesn't automatically produce skill-based evaluation.
Why structure alone doesn't fix it
Most interviewers aren't being careless. The problem is situational:
- It's hard to know in the moment whether a candidate's answer actually demonstrated the competency being evaluated
- Vague or incomplete answers are easy to let slide without a prompt to follow up
- Conversations drift, time runs short, and interviewers reach the end having covered less than they intended
When that happens, post-interview feedback fills in the gaps with overall impression rather than specific evidence. Across a panel, those impressions compound—and debriefs drift toward "I just really clicked with her" rather than a comparison of what candidates actually demonstrated.
What keeps evaluation on skills
Four conditions make skill-based assessment more likely:
- Upfront alignment on competencies—Interviewers need to agree on what they're evaluating before anyone talks to a candidate. When criteria are vague or undefined, interviewers arrive with different ideas of what a strong candidate looks like.
- Questions mapped to those competencies—Generic interview questions produce generic answers. Questions designed to surface evidence of a specific skill give interviewers something concrete to evaluate.
- Real-time guidance during the interview—In-the-moment prompts help interviewers recognize incomplete answers and follow up before moving on. This is the gap a structured guide alone can't close.
- Evidence captured as it happens—Feedback reconstructed from memory hours after an interview reflects overall impression more than what was actually said. Notes taken in real time, mapped to competencies, produce more accurate and defensible assessments.
The result when these conditions aren't met
When interviews aren't structured around competency-based evidence capture, personality language fills the vacuum: not because interviewers intend to be biased, but because impression is what's available when evidence isn't.
Lavalier is built to close this gap. Role Setup aligns interviewers on competencies before anyone talks to a candidate. Live Guidance surfaces real-time prompts during the interview to keep evaluation on track. And Candidate Compare maps what candidates said to the competencies defined at the start—so debrief discussions are grounded in evidence, not impression. Try it free on your next role.