Getting consistent, useful feedback from every interviewer requires addressing four things: defining what good feedback looks like, making it easy to give good feedback, setting clear submission expectations, and handling non-compliance when it happens. Enforcement without the first three is just nagging.
Interviewers can't consistently hit a bar they haven't seen. Beyond telling interviewers which competencies to cover, it helps to show them what a strong assessment actually looks like: specific candidate responses tied to the competency being evaluated, a clear conclusion, and enough detail that someone who wasn't in the room can follow the reasoning.
Defining the bar upfront tends to shape what interviewers document. Research from Textio found that across more than 10,000 interview assessments, interviewers write 39% more feedback for candidates they're rejecting than for those they're recommending for hire—a pattern consistent with interviewers justifying instinctive no-hire decisions rather than recording what they actually observed. When the standard is clear before the interview starts, there's less room for that dynamic to take hold.
The most common reason feedback is thin isn't that interviewers don't care—it's that leading a focused conversation and capturing useful evidence at the same time is genuinely hard. The most practical fixes reduce that dual load: align interviewers on competencies and questions before the loop starts, capture evidence in real time during the conversation, and generate a first draft from what was captured so submission becomes a quick review rather than a writing task.
Without that kind of support, you're asking interviewers to do two demanding things simultaneously and hoping the write-up doesn't suffer for it.
A same-day submission window is worth codifying as a team norm. Feedback written the day of the interview is meaningfully better than feedback written two days later, and treating it as a standing expectation removes the negotiation. When feedback is drafted automatically from what was captured during the interview, same-day submission stops being a burden—it's a two-minute review rather than a time-consuming writing task.
When a specific interviewer repeatedly submits late or thin feedback, the conversation is worth having directly—framed around the impact on the hiring decision rather than the process requirement. Many interviewers don't realize how much the debrief depends on their specific assessment, or that what they're submitting isn't giving the team enough to work with. Showing them what useful feedback looks like next to what they've been submitting, and what that difference means for the quality of the decision, tends to land better than a standard process reminder.
If the pattern continues, it may be worth limiting that person's involvement in interview loops until the norm is established. That's an uncomfortable conversation, but an incomplete picture going into a debrief affects everyone's ability to make a good decision.
Lavalier builds the conditions for good feedback into the process itself: Role Setup and Plan Builder align interviewers on competencies and questions before interviews start, Live Guidance keeps conversations on track and captures evidence in real time, and feedback is drafted automatically after each conversation—so interviewers can review and submit with no delay.
Stop chasing interviewers for feedback. See how Lavalier makes complete, evidence-based feedback the default →