How do you get interviewers to leave more detailed feedback?

Lavalier
April 8, 2026

Getting interviewers to write detailed, useful feedback requires solving three separate problems: making sure they know what they're supposed to evaluate, giving them a way to capture evidence during the conversation without losing the thread of it, and reducing the effort required to turn that evidence into a written assessment after the fact. Most teams focus on the first and ignore the other two.

Set expectations before the interview, not after

Interviewers write better feedback when they know exactly what they're supposed to be evaluating. The more specific the brief—which competencies to cover, which questions to use—the more specific the feedback tends to be. Vague briefs produce vague feedback.

It also helps to show interviewers what a useful assessment looks like. An example of strong feedback for a similar role—specific, evidence-based, tied to competencies—sets a clear standard.

Make same-day submission the norm

The gap between "I'll do it later" and "I'll do it immediately" is where detail gets lost. Recall degrades quickly after a conversation, and feedback written two days later tends to reflect an overall impression rather than specific evidence. Teams that consistently get detailed feedback tend to have a clear norm: submit before end of day.

That norm works best when it's built into the process itself—expected of everyone, with a submission window that's communicated at the start of every role, not chased after the fact.

Make it easier to write

AI note-taking tools have made it possible to capture what's said during a conversation in real time—but general transcripts aren't the same as interview evidence. The more useful application is AI that captures responses against specific competencies as the interview happens, and then drafts feedback from that evidence automatically. When interviewers finish a conversation and feedback is already drafted for their review, the bar to submitting something complete and useful drops considerably.

Research from Textio, analyzing over 10,000 interview assessments, found that interviewers write 39% more feedback for candidates they're rejecting than for candidates they're recommending for hire. That imbalance points to a specific pattern: when evaluation criteria aren't clearly defined, interviewers end up over-explaining rejections—building a case for an instinctive decision rather than documenting evidence against a clear bar. Define the bar upfront and the feedback tends to be more useful in both directions.

How Lavalier helps

The core challenge is that taking good notes and leading a good conversation are hard to do simultaneously. Interviewers who are focused on the candidate tend to have thinner notes; interviewers focused on documentation tend to have thinner conversations.

Lavalier addresses this directly—Live Guidance captures AI notes against specific competencies in real time so interviewers don't have to choose between listening and documenting. After the conversation, feedback is generated from what was actually captured rather than reconstructed from memory. Candidate Compare then synthesizes that feedback into candidate briefs and lets hiring teams ask direct questions about how each candidate performed against the role's criteria.

It's free to get started with the Lavalier interview intelligence system. Try it on your next role →
