The situation
The agency runs around twenty consultants across a handful of specialist desks, placing mid-senior roles in a competitive UK market. Every live brief pulled in between 80 and 200 CVs, sometimes more when a role was posted on the bigger job boards. Consultants were spending five to seven hours per brief just to get to a first shortlist, reading each CV, copying notes into a tracker, and trying to remember who they had already seen for a similar role last quarter.
The growth director had been staring at the same pattern in the pipeline for months. Consultants were closing four to five briefs a month on average, and the ceiling was not effort; it was screening. When she worked the numbers back, roughly a third of billable-capable hours were being spent on CV triage rather than on client calls, candidate conversations, or business development. Hiring more consultants was the obvious lever, but onboarding in this market was slow and expensive, and she did not want to solve a workflow problem with headcount.
What we did
We started by sitting with two consultants for a full day each and watching them actually screen. That surfaced what the tracker never showed, which was that most of the time was spent re-reading the same CV three or four times because the criteria for a role lived in a consultant's head rather than on paper. The fix had to start there.
We built a lightweight intake step that turned a brief into a structured scorecard before any CVs were touched. Must-haves, nice-to-haves, deal-breakers, the salary band, and notes on soft signals the consultant had picked up from the client call. Only once that was agreed did we run CVs through a screening assistant built on top of Claude, which scored each candidate against the scorecard, flagged gaps, and pulled out the two or three lines of evidence that supported the score.
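To make the shape of that intake step concrete, here is a minimal sketch in Python. The field names, the `Scorecard` class, and the `build_scoring_prompt` helper are illustrative assumptions, not the agency's actual schema; the real system sends the resulting prompt to Claude, while this sketch only assembles the text.

```python
from dataclasses import dataclass, field

@dataclass
class Scorecard:
    """Structured brief agreed with the client before any CV is screened.
    Field names are illustrative, not the agency's actual schema."""
    role: str
    must_haves: list[str]
    nice_to_haves: list[str]
    deal_breakers: list[str]
    salary_band: str
    soft_signals: list[str] = field(default_factory=list)

def build_scoring_prompt(card: Scorecard, cv_text: str) -> str:
    """Combine the agreed scorecard with one (redacted) CV into a
    single scoring prompt, asking for evidence behind each score."""
    lines = [
        f"Role: {card.role}",
        "Score this candidate against the criteria below.",
        "Must-haves: " + "; ".join(card.must_haves),
        "Nice-to-haves: " + "; ".join(card.nice_to_haves),
        "Deal-breakers: " + "; ".join(card.deal_breakers),
        f"Salary band: {card.salary_band}",
        "Soft signals from the client call: " + "; ".join(card.soft_signals),
        "For each criterion, quote the two or three lines of CV "
        "evidence that support your score, and flag any gaps.",
        "",
        "CV:",
        cv_text,
    ]
    return "\n".join(lines)
```

The point of the structure is that the criteria are written down and agreed before the first CV is read, so the same brief produces the same prompt for every candidate.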
Bias and compliance were the first things the growth director raised, and rightly so. We were careful here. The assistant does not see names, addresses, photos, dates of birth, or university names during scoring, and we documented the prompt and the fields it reads so the team could show a client or a regulator exactly how a shortlist was produced. We ran a two-week parallel test where consultants screened the old way and the new way side by side, and we checked whether protected-characteristic proxies were creeping into the rankings. A couple did, and we adjusted the scorecard language before going live. The human still makes the final call. The assistant produces a ranked longlist with reasoning, and a consultant reviews, reorders, and cuts before anything reaches the client.
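The redaction step can be sketched as a field-level filter. This assumes CVs have already been parsed into a dict of labelled sections; the field names and the `redact_for_scoring` helper are hypothetical illustrations of the approach, not the real parser's schema.

```python
# Fields the scoring assistant must never see, mirroring the list in
# the text: names, addresses, photos, dates of birth, university names.
# Field names here assume an upstream CV parser and are illustrative.
REDACTED_FIELDS = {"name", "address", "photo", "date_of_birth", "university"}

def redact_for_scoring(parsed_cv: dict) -> dict:
    """Return a copy of the parsed CV with identifying fields removed,
    so only experience, skills, and achievements reach the model."""
    return {k: v for k, v in parsed_cv.items() if k not in REDACTED_FIELDS}
```

Because the filter is a documented allow-by-default deny-list rather than prompt wording, the team can show a client or regulator exactly which fields were excluded from scoring.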
The result
Screening time per brief dropped from an average of six hours to around forty-five minutes, a reduction of roughly 87 percent. Time-to-shortlist, measured from brief sign-off to first CVs landing with the client, fell from just over four days to under a day and a half. Across the desk, consultants are now closing closer to seven briefs a month on average, up from five, without anyone working longer hours.
The more interesting question was what happened to the time the team got back. The honest answer is that it did not all go into more billable work, and the growth director was fine with that. About half of it went into more candidate calls and more client check-ins, which is where the extra placements came from. The rest went into things that had been getting squeezed for years: proper debriefs after interviews, tidier CRM records, and, in one case, a consultant finally having the space to build out a new sub-sector desk the agency had been talking about since the previous summer. Screening was never the work the agency was paid for. It was just the work that had been eating the work that mattered.
Frequently asked questions
How does the tool handle bias and equal opportunities concerns?
Bias and compliance were the first things the growth director raised, and rightly so. The screening assistant does not see names, addresses, photos, dates of birth or university names during scoring. We documented the prompt and the fields it reads so the team can show a client or a regulator exactly how a shortlist was produced. During a two-week parallel test we checked whether protected-characteristic proxies were creeping into the rankings. A couple did, and we adjusted the scorecard language before going live. The human still makes the final call.
Why build a structured scorecard before screening any CVs?
Sitting with two consultants for a full day each surfaced what the tracker never showed: most of the time was spent re-reading the same CV three or four times because the criteria for a role lived in the consultant's head rather than on paper. The scorecard fixes that. Must-haves, nice-to-haves, deal-breakers, salary band and notes on soft signals from the client call all get agreed before any CV is touched. It also gives the consultant something concrete to share back with the hiring manager when expectations have drifted.
What does Claude actually do in the screening flow?
The screening assistant is built on top of Claude. It scores each candidate against the agreed scorecard, flags gaps, and pulls out the two or three lines of evidence that supported the score. The output is a ranked longlist with reasoning attached, not a final shortlist. A consultant reviews, reorders and cuts before anything reaches the client. We chose Claude for this work because the reasoning quality on long, messy CVs is the part that mattered most, and the audit trail makes the bias review possible.
Did the consultants end up working longer hours to close 40% more briefs?
No, and the growth director was clear that this mattered. Consultants moved from closing four to five briefs a month on average to closer to seven, without anyone working longer hours. About half the recovered time went into more candidate calls and client check-ins, which is where the extra placements came from. The rest went into proper interview debriefs, tidier CRM records, and one consultant finally building out a new sub-sector desk the agency had been talking about since the previous summer.
