Is it really so hard to write good interview feedback? It is, actually.
Running a good interview is challenging. Taking good notes is challenging. Writing up accurate, fair, useful assessments is challenging—particularly when it’s one of a million things on your plate.
And yet the whole debrief and decision process depends on an interviewer's ability to recap what they uncovered in interviews.
Most teams struggle with this. The problem usually isn't effort or intent. It's that the conditions required to produce good feedback—consistent structured interviews, confident interviewers, enough support to make the process feel manageable—have historically been difficult to maintain at scale. The right AI recruiting tools are changing that.
The standard: each interviewer evaluates their assigned competencies consistently across every candidate they see, and documents what they found in a way that's actually comparable. The reality: not that.
Not because interviewers aren't trying—but because the conditions working against good documentation are real. Evidence captured hours after a conversation is filtered through memory and overall impression. Notes taken during the interview help, but when your attention is split—listening, following up, keeping the conversation on track—whatever gets jotted down tends to reflect what stood out most, which isn't always what's most relevant. The specific things a candidate said start to blur. What remains is a feeling, and that's what gets written.
The result is feedback that varies in depth, format, and focus. This is genuinely hard to do well without a lot of support.
Recent research from Textio looked at feedback across thousands of interviews to understand how interviewers write assessments. The findings reveal patterns that most hiring teams would recognize—even if they've never seen the data before.
Interviewers work harder to justify “No” than to explain “Yes.” On average, interviewers write 39% more feedback when a candidate is not getting an offer than when one is. That gap isn't explained by interviewer style or role type; it holds across the board. What it suggests is that interviewers are often using feedback to rationalize a decision they made on instinct, rather than documenting the evidence behind an objective one.
When the answer is No, they write more words to try to sound objective. When the answer is Yes, they write less, because the decision feels easier to defend.

Gender shapes how much feedback gets written—in contradictory ways. Interviewers write 17% more feedback about women candidates than men. At the same time, women are more likely than men to have no documented feedback whatsoever. Those two findings point to the same underlying problem: when evaluation criteria are not well-defined, impressions and instinct take over and feedback goes off script.
Interviewers seemingly over-document rejections to compensate for the absence of a clear rubric, listing reasons a candidate isn't a Yes rather than measuring them against a defined bar. Women candidates seem to get more of that scrutiny when they're being rejected, and they're also more likely to fall through the cracks when feedback never gets written at all.

The common thread: when the hiring bar isn't defined upfront, feedback reflects the interviewer's impression rather than the candidate's qualifications. The words multiply, but the signal doesn't improve.
Getting consistent, comparable feedback out of every interview isn't about asking interviewers to try harder. It's about making sure they always have the information and support they need to run high-quality interviews and effectively report back on their findings. A few process tweaks and the right tools can go a long way.
Before the interview, everyone on the panel needs to be clear on what they're evaluating: the specific competencies assigned to them for this role and this conversation. When interviewers have that clarity, they know what to focus the conversation on and what to make note of.
During the interview, the interviewer needs real-time support to make sure competencies are covered and appropriate follow-ups get asked. Modern AI tools for recruiting offer this: they listen, track answers, cross off questions, and surface follow-ups, so every interviewer can run an effective conversation.
Evidence must be captured in real time as well. Memory degrades fast. An interviewer with back-to-back conversations may not sit down to write feedback until hours later, by which point the specifics have blurred into a general impression.
After the interview, each interviewer's feedback needs to be aligned to their assigned competencies and connected to actual evidence. This is what makes it possible to compare candidates accurately and objectively—and what makes a debrief a discussion about skills rather than personality. A good AI recruiting tool helps here as well, incorporating meeting notes, the transcript, and the interviewer’s hiring recommendation to write robust feedback automatically.
For a long time, tackling the feedback problem meant a lot of difficult work: interviewer training, scorecard tweaks, chasing people down for their assessments. Most teams didn't have the bandwidth to do it consistently, so they didn't—and the feedback stayed weak.
Now, we have tools like Lavalier.
Live Guidance gives interviewers AI-powered checklists and prompts during the conversation, keyed to the competencies defined during role setup. Questions don't get forgotten. Answers get captured in the moment. When the interview ends, there's already a structured record of what happened—not a blank scorecard waiting to be filled from memory.

Candidate Compare takes that structured output and maps it across candidates against the competencies. Instead of reconciling five different assessments in five different formats, hiring teams can see who demonstrated what, where the gaps are, and who actually sets the bar for the role. It's one of the most practical AI tools for recruiting teams that want to make hiring decisions faster, without sacrificing the rigor that good decisions require.

With tools like these, feedback stops being a reflection of how interviewers felt and becomes a true record of what candidates demonstrated.
Weak interview feedback is one of those problems teams can keep putting off until a hire goes wrong. Then it's obvious in hindsight: the evidence was never really there, and the process had a gap. The instinct was documented, the feeling was recorded, but the actual performance data wasn't.
An interview intelligence system solves this. Every interview gets real-time support: asking the right questions, assessing competencies accurately, and writing up rich feedback that gives the hiring team the data they need to see who sets the hiring bar and should get the offer.
The Lavalier interview intelligence system is available to try free for a limited number of roles. See how it could change your team’s feedback and hiring decisions. Try it today →