
Two interviewers, same candidate, opposite recommendations.

Unstructured early-careers interviews land at 47% inter-rater reliability — roughly a slightly weighted coin flip. Add AI-prepped candidates who all sound like senior managers, and even the structured interview that worked in 2019 barely separates candidates. Video Interviews brings the consistency back, with explainable AI scoring per question.

Why the interview broke

Your interviewers think they’re hiring on merit. The data says rapport.

Two real things have happened to the structured early-careers interview at once. AI prep has flattened the case-question signal — everybody arrives sounding articulate and well-rehearsed. And the interviewer side has drifted: rubrics get applied unevenly, junior interviewers anchor on first impressions, panel feedback meetings turn into rapport-aggregation exercises rather than evidence-comparison ones.

The result is an interview process that feels rigorous and is, on the data, mostly noise. The fix is not more interviewers; it’s structured questioning, explainable AI scoring per question, and per-interviewer drift monitoring.

Inter-rater reliability (unstructured) · 47%
Two interviewers, same candidate, same questions, scored independently. UK early-careers benchmark.

Inter-rater reliability (TTP) · 84%
Same setup with structured competency rubric + explainable AI scoring. Cohort-level benchmark.

AI-flag accuracy · 92%
Detection of AI-prepped responses across the platform. Validated against annotated test corpus.

Time saved per interviewer · 11 hr/wk
Average reclaimed by senior interviewers across deployed cohorts.
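To make the 47%-vs-84% figures concrete: here is a minimal sketch of what an inter-rater reliability number measures, assuming "reliability" means simple percent agreement between two interviewers' independent pass/fail calls (the platform's exact statistic isn't specified here, and the data is toy data).

```python
# Hypothetical sketch: percent agreement between two raters.
# Assumes "inter-rater reliability" = share of candidates on whom
# both interviewers independently made the same pass/fail call.

def percent_agreement(rater_a, rater_b):
    """Fraction of candidates where both raters agree."""
    assert len(rater_a) == len(rater_b)
    matches = sum(x == y for x, y in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Two interviewers, ten candidates, scored independently (toy data).
a = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
b = ["fail", "fail", "pass", "fail", "pass", "pass", "fail", "fail", "fail", "pass"]
print(f"{percent_agreement(a, b):.0%} agreement")  # → 60% agreement
```

At coin-flip levels of agreement, the "winning" interviewer is effectively random — which is the failure mode the structured rubric is there to remove.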
If you don't fix this

You hire charm, not capability — and discover the difference at month nine.

When inter-rater reliability is in the 40s, the interview process is functionally drawing names from a slightly-weighted hat. The strongest predictor of a hire decision becomes whether the candidate happened to interview with someone who liked them — not anything that correlates with first-year performance. The cost lands at month nine, when the line manager is having a candid conversation with HR about the candidate the interview process pushed through.

The interview is the most-defended and least-validated part of most early-careers funnels.

The Video Interview Platform replaces the unstructured-by-default interview with a structured, scored, calibrated one. Same questions for every candidate per role. Explainable AI scoring per question, per competency rubric. Inter-rater drift flagged in real time. Adverse-impact monitoring per interviewer. The senior practitioners on the panel still make the hire decision — the platform gives them better evidence to make it on.

How we deliver this

Six things that fix the interview.

We didn’t replace the human interview, because the human interview has unique signal that an algorithm can’t replicate. We replaced the unstructured-by-default version with a structured, scored, calibrated one.
01 · Structure

Competency-rubric questions

Question banks per competency, validated against role-family success profiles. Same questions for every candidate in a role; calibrated panel rubrics across geographies and languages.

02 · AI scoring

Explainable AI scoring

Per-question AI scoring with full explainability — the rubric criteria the score was based on, the language patterns it picked up, the inter-rater anchor. No black-box ML; every score decomposes back to evidence.
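As an illustration of "every score decomposes back to evidence": a per-question score can be expressed as a weighted sum of rubric criteria, each carrying its own evidence snippet. The criteria names, weights, and scores below are invented for the example, not the platform's actual rubric.

```python
# Hypothetical sketch of an explainable per-question score: the overall
# number is a weighted sum of rubric criteria, so it always decomposes
# back to the evidence behind it. All criteria/weights are illustrative.

def explain_score(criteria):
    """criteria: list of (name, weight, score out of 5, evidence)."""
    total = sum(w * s for _, w, s, _ in criteria)
    breakdown = [f"{name}: {s}/5 (weight {w}) — {ev}"
                 for name, w, s, ev in criteria]
    return total, breakdown

score, why = explain_score([
    ("structure",   0.4, 4, "answer followed situation-action-result"),
    ("specificity", 0.3, 3, "one concrete example, limited detail"),
    ("reflection",  0.3, 5, "named what they'd do differently"),
])
print(f"{score:.1f}/5")   # → 4.0/5
for line in why:
    print(line)
```

The design point is that the aggregate score carries no information the breakdown doesn't — a reviewer can always walk the number back to the rubric lines that produced it.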

03 · Async

Calendar-free video

Async-by-default. Candidates record at a time that works for them; interviewers review when they have time. Removes the worst single bottleneck in graduate hiring — senior-interviewer calendar availability.

04 · Drift

Inter-rater drift monitoring

Real-time alert when an interviewer is scoring outside their cohort’s pattern. Live recalibration suggestions. Particularly useful for panels that include junior interviewers or new joiners.
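A minimal sketch of what "scoring outside their cohort's pattern" can mean in practice: compare an interviewer's mean score to the cohort mean, in cohort standard-deviation units. The z-score statistic and the threshold are illustrative assumptions, not the platform's actual method.

```python
# Hypothetical sketch of inter-rater drift detection: flag an interviewer
# whose mean score sits far from the cohort mean, measured in cohort
# standard deviations. Statistic and threshold are illustrative only.
from statistics import mean, stdev

def drift_flag(interviewer_scores, cohort_scores, threshold=2.0):
    mu, sigma = mean(cohort_scores), stdev(cohort_scores)
    if sigma == 0:
        return False
    z = (mean(interviewer_scores) - mu) / sigma
    return abs(z) > threshold

cohort = [3.1, 3.4, 2.9, 3.2, 3.0, 3.3, 3.1, 2.8]
lenient = [4.6, 4.8, 4.5]          # systematically high scorer
print(drift_flag(lenient, cohort))  # → True
```

A real deployment would also need minimum sample sizes and per-competency breakdowns before alerting, but the shape of the check is the same.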

05 · AI-flag

AI-prep detection

92% accuracy on detecting AI-prepped responses, cohort-validated. Doesn’t auto-reject — flags the response for senior-panel review and shows what the AI substitution patterns look like.

06 · Fairness

Adverse-impact per interviewer

Four-fifths-rule monitoring at the interviewer level, not just the cohort. Identifies interviewers whose scoring patterns produce systematic adverse impact — usually fixable with calibration coaching, sometimes a harder conversation.
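The four-fifths rule itself is simple arithmetic: a group shows adverse impact when its selection rate falls below 80% of the highest group's rate. A minimal sketch, with invented group names and pass rates:

```python
# Hypothetical sketch of four-fifths-rule monitoring for one interviewer.
# A group is flagged when its selection rate is below 80% of the highest
# group's rate. Group labels and rates are toy data.

def adverse_impact(selection_rates, threshold=0.8):
    """Return groups whose rate falls under threshold x the top rate."""
    top = max(selection_rates.values())
    return [g for g, r in selection_rates.items() if r < threshold * top]

# One interviewer's pass rates by candidate group (illustrative).
rates = {"group_a": 0.50, "group_b": 0.45, "group_c": 0.30}
print(adverse_impact(rates))  # → ['group_c']  (0.30 < 0.8 * 0.50)
```

Running this per interviewer rather than per cohort is what surfaces the individual scoring patterns that a cohort-level average would wash out.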

What this connects to

Sits between shortlist and offer.

Video Interviews is the second filter, after the Assessment Platform shortlist. The two work as a pair: behavioural evidence from IWX feeds the Assessment Platform; the Assessment Platform produces a ranked shortlist; the Video Interview Platform then applies structured, human-decisioned interviews to that shortlist, with explainable AI scoring throughout.

Most clients deploy Video Interviews as part of an Assessment Platform engagement; some deploy it stand-alone as a structured-interview replacement for legacy panel processes.

Platform
TalentScreen
The Video Interview Platform — question banks, scoring, panel calibration.
Upstream
Assessment Platform
Behavioural-evidence shortlists feed into the structured interview.
Cross-feature
AI Scoring
Explainable AI scoring per question with full audit trail.
Compliance
Adverse impact
Per-interviewer four-fifths-rule monitoring with calibration alerts.

Video interview deployments at

PwC FTSE 100 bank Microsoft Schneider Electric Channel 4 DHL Costa TfL
Talk to the assessment team

Got an interview process that nobody trusts?

Most clients start with a 45-minute calibration session: we run a small panel through one of our existing competency rubrics on anonymised candidate footage, score independently, and compare. Senior practitioners are usually uncomfortably surprised by the inter-rater spread. That's the start of the conversation.

Get in touch · Read the AI-on-both-sides research