When interview feedback is too short or too vague to act on, you have a few options, depending on how much time remains before a decision: go back to the interviewer, use the recording or transcript if one exists, or proceed with acknowledged gaps. The underlying causes vary (conversations that went off script, notes that weren't useful for writing assessments, feedback written too long after the fact, competing priorities), and the fix in each case is different.
The first move is to go back to the interviewer with a specific prompt rather than a general one. "Can you add more detail?" rarely produces better feedback. "What did the candidate say when you asked about X?" surfaces actual evidence. Give them something concrete to respond to.
If the interview was recorded or transcribed, go to the source. You or the interviewer can review the relevant sections of the transcript directly and use what's there to anchor the debrief, even if the written assessment is thin.
If neither option works, be explicit about what's missing going into the debrief rather than treating thin feedback as complete. Knowing where the evidence gaps are—and naming them—is more useful than making a decision as if the picture is complete when it isn't.
Research from Textio analyzing over 10,000 interview assessments across nearly 4,000 candidates found that interviewers write 39% more feedback when a candidate is being rejected than when they're getting an offer. That pattern points to something important: when evaluation criteria aren't clear upfront, interviewers default to justifying gut reactions rather than documenting evidence. More words, but not more signal.

The underlying issue is structural. Interviewers are being asked to simultaneously lead a conversation, take useful notes, and later translate those notes into a complete assessment—three things that work against each other when there's no system holding them together.
The conditions for useful feedback have to be set up before the interview, but interviewers also need support during and after the conversation. They need the right questions going in, a way to capture what candidates actually say without losing the thread of the conversation, and enough structure in their notes that writing feedback doesn't require reconstructing the interview from scratch.
Lavalier's Live Guidance gives interviewers competency-based questions, suggests follow-ups, takes AI notes in real time, and captures evidence against specific competencies as the conversation happens—so the interviewer can focus on the candidate.

After interviews wrap, Candidate Compare synthesizes that evidence into comparison briefs and lets hiring teams ask direct questions about how each candidate performed against the role's criteria.

Getting started with the Lavalier interview intelligence system is free. See how it could change your team's feedback and hiring decisions. Try it today →