Why your team writes weak interview feedback (and how to fix it)

Lavalier

Is it really so hard to write good interview feedback? It is, actually.

Running a good interview is challenging. Taking good notes is challenging. Writing up accurate, fair, useful assessments is challenging—particularly when it’s one of a million things on your plate.

And yet the whole debrief and decision process depends on an interviewer's ability to recap what they uncovered in interviews.

Most teams struggle with this. The problem usually isn't effort or intent. It's that the conditions required to produce good feedback—consistent structured interviews, confident interviewers, enough support to make the process feel manageable—have historically been difficult to maintain at scale. The right AI recruiting tools are changing that.

Why interview feedback falls apart

The standard: each interviewer evaluates their assigned competencies consistently across every candidate they see, and documents what they found in a way that's actually comparable. The reality: not that.

Not because interviewers aren't trying—but because the conditions working against good documentation are real. Evidence captured hours after a conversation is filtered through memory and overall impression. Notes taken during the interview help, but when your attention is split—listening, following up, keeping the conversation on track—whatever gets jotted down tends to reflect what stood out most, which isn't always what's most relevant. The specific things a candidate said start to blur. What remains is a feeling, and that's what gets written.

The result is feedback that varies in depth, format, and focus. This is genuinely hard to do well without a lot of support.

What's actually happening in your feedback right now

Recent research from Textio looked at feedback across thousands of interviews to understand how interviewers write assessments. The findings reveal patterns that most hiring teams would recognize—even if they've never seen the data before.

Interviewers work harder to justify “No” than to explain “Yes.” On average, interviewers write 39% more feedback when a candidate is not getting an offer than when one is. That gap isn't explained by interviewer style or role type—it holds across the board. What it suggests is that interviewers are often using feedback to rationalize a decision they made on instinct, as opposed to documenting evidence that led to an objective decision.

When the answer is No, they write more words to try to sound more objective. When the answer is Yes, they write less—because the decision feels easier to defend.

Bar chart showing average word count of written candidate feedback in the final round. Candidates who received offers: 89 words. Candidates who did not receive offers: 124 words. Difference: 39% more feedback written for rejected candidates than for those who received offers.

Gender shapes how much feedback gets written—in contradictory ways. Interviewers write 17% more feedback about women candidates than men. At the same time, women are more likely than men to have no documented feedback whatsoever. Those two findings point to the same underlying problem: when evaluation criteria are not well-defined, impressions and instinct take over and feedback goes off script.

Interviewers seemingly over-document rejections to compensate for the absence of a clear rubric, listing reasons a candidate isn't a Yes rather than measuring them against a defined bar. Women candidates seem to be subject to more of that scrutiny when they're being rejected—and also more likely to fall through entirely when feedback doesn't get written at all.

Bar chart showing average word count of written candidate feedback in the final round, broken down by candidate gender. Feedback written about men: 43 words. Feedback written about women: 51 words. Difference: interviewers write 17% more feedback about women candidates than men across all rounds of interviewing.

The common thread: when the hiring bar isn't defined upfront, feedback reflects the interviewer's impression rather than the candidate's qualifications. The words multiply, but the signal doesn't improve.

What useful feedback actually requires

Getting consistent, comparable feedback out of every interview isn't about asking interviewers to try harder. It's about making sure they always have the information and support they need to run high-quality interviews and report back effectively on their findings. A few process tweaks and the right tools can go a long way.

Before the interview, everyone on the panel needs to be clear on what they're evaluating: the specific competencies assigned to them for this role and this conversation. When interviewers have that clarity, they know what to focus the conversation on and what to make note of.

During the interview, the interviewer needs real-time support to make sure competencies are covered and appropriate follow-ups get asked. Modern AI tools for recruiting offer this: they listen, track answers, cross off questions, populate follow-ups. This way, every interviewer runs an effective conversation.

Evidence must be captured in real time as well. Memory degrades fast, and it degrades even faster across back-to-back conversations. Feedback written hours later ends up describing an overall impression rather than what the candidate actually demonstrated.

After the interview, each interviewer's feedback needs to be aligned to their assigned competencies and connected to actual evidence. This is what makes it possible to compare candidates accurately and objectively—and what makes a debrief a discussion about skills rather than personality. A good AI recruiting tool helps here as well, incorporating meeting notes, the transcript, and the interviewer’s hiring recommendation to write robust feedback automatically.

This used to be hard. Now it isn't.

For a long time, tackling the feedback problem meant a lot of difficult work: interviewer training, scorecard tweaks, chasing people down for their assessments. Most teams didn't have the bandwidth to do it consistently, so they didn't—and the feedback stayed weak.

Now, we have tools like Lavalier.

Live Guidance gives interviewers AI-powered checklists and prompts during the conversation, keyed to the competencies defined during role setup. Questions don't get forgotten. Answers get captured in the moment. When the interview ends, there's already a structured record of what happened—not a blank scorecard waiting to be filled from memory.

Screenshot of the Lavalier Live Guidance interface during an active interview with candidate Rachel Fukaya for a Product Manager role, with 44 minutes remaining. The left panel shows a prepared question list with two questions marked as completed (struck through) and two remaining, plus a suggested follow-up question generated by Lavalier: "Can you share a specific example of a decision you made with incomplete information, and what principles guided that decision?" The right panel shows the interviewer's real-time notes, including AI-generated prompt tags (thumbs up, thumbs down, and bookmark) linked to specific moments in the conversation, alongside the interviewer's own typed observations about the candidate's responses.

Candidate Compare takes that structured output and maps it across candidates against the competencies. Instead of reconciling five different assessments with five different degrees of data, hiring teams can see who demonstrated what, where the gaps are, and who actually sets the bar for the role. It's one of the most practical AI tools for recruiting teams that want to make hiring decisions faster—without sacrificing the rigor that good decisions require.

Screenshot of the Lavalier Candidate Compare interface for a Product Manager role, showing a side-by-side comparison of three candidates: Max Winderbaum, Olivia Gunton, and Rachel Fukaya. The right panel displays a chat-based interface where the interviewer has asked "Which candidate appears most senior in judgment and decision-making?" Lavalier responds with a structured comparison of all three candidates, with key phrases from interview transcripts highlighted as evidence. A follow-up question—"What about experience with early-stage growth?"—is visible in the input field, showing the plain-language query interface that allows hiring teams to ask questions across all candidates simultaneously.

With tools like these, feedback stops being a reflection of how interviewers felt and becomes a true record of what candidates demonstrated.

High-quality feedback enables high-quality hiring

Weak interview feedback is one of those problems teams can keep putting off until a hire goes wrong. Then it's obvious in hindsight: the evidence was never really there, and the process had a gap all along. The instinct was documented, the feeling was recorded, but the actual performance data wasn't.

An interview intelligence system solves this: every interviewer gets real-time support in asking the right questions, assessing competencies accurately, and writing up rich feedback, giving the hiring team all the evidence it needs to see who sets the hiring bar and who should get the offer.

The Lavalier interview intelligence system is available to try free for a limited number of roles. See how it could change your team’s feedback and hiring decisions. Try it today →