Library · Long-form · 03Hyper · Vol. I · Spring 2026

The coaching scorecard.

Rapport, discovery, diagnosis, value, objections, closing. What each dimension means in practice, what a rep sees week to week, and what a manager can actually do with the data.

Hyper scores every meeting your reps run on six dimensions: rapport, discovery, problem diagnosis, value framing, objection handling, closing. Each rep gets a weekly scorecard, a keep/stop/start summary, and a quarterly development plan across the same six dimensions. Their managers see the team view. The point of the scorecard is not to grade reps. The point is to give every rep one specific thing to do differently next week, and to give every manager the evidence to know whether they did it.

I've scored thousands of calls over twenty-five years. Most of them sounded fine. The ones that moved a deal forward and the ones that didn't are distinguishable on six dimensions, and only six. The framework here is the tightened version of the one we ran inside the business that exited with $4.8bn of pipeline, distilled to what actually predicted whether the second meeting got booked.

Why six dimensions, not three, not twelve

A scorecard with three dimensions is too coarse to coach on. A rep can score well on a vague metric called "discovery" and still be missing every signal a buyer is sending. Twelve is too noisy. Reps don't remember twelve things, and managers don't coach on twelve things; they coach on the two or three that have moved recently. Six is the largest set we've seen reps actually internalise and the smallest set that captures good discovery distinct from good value framing distinct from good closing.

Six also maps cleanly to the meeting itself. Roughly the first ten minutes is rapport. The next twenty is discovery and diagnosis. The middle is value framing. The end is objections and the close. A coach watching a recording can mark each segment as it happens. A rep watching their own recording can do the same.

The six, one by one

Rapport

Did the rep open the meeting in a way that earned the room. Not small talk for its own sake, which most senior buyers quietly resent, but a deliberate one or two minutes of context that established who's in the meeting, why they're there, and what success looks like in the next thirty minutes for both sides. Rapport scores high when the buyer leans into the conversation early. Low when the meeting feels like an interview the rep is conducting.

What good looks like: the rep names a specific thing about the buyer's business in the first sixty seconds, references the brief that triggered the meeting, and asks the buyer to frame the next thirty minutes themselves.

Discovery

Did the rep ask the questions that surfaced the actual situation, not a sanitised version of it. Strong discovery looks like layered questioning: each follow-up tightens what the previous answer left vague. Weak discovery looks like a checklist read aloud, and it produces the same generic notes every time, regardless of who's on the other end of the call.

What good looks like: by the end of discovery the rep can repeat back the buyer's situation in a sentence the buyer would write themselves. The CRM record afterwards has facts, not categories.

Problem diagnosis

Discovery surfaces the situation. Diagnosis surfaces the problem. The two get conflated all the time, and they're not the same skill. Diagnosis is naming the underlying issue the situation produces. The buyer says they want to triple outbound volume. Diagnosis is the rep noticing that the volume target is downstream of a different problem: their existing outbound isn't converting. Tripling volume on the same conversion rate makes the maths worse, not better: three times the spend at the same weak rate.

What good looks like: the rep names a problem the buyer hadn't yet named. The buyer either agrees on the spot or pushes back substantively. Either response is useful. Silence is a warning sign.

Value framing

Once the problem is named, value framing is the rep mapping it to what your offer actually does. Not feature listing, which is a separate mistake. Framing is the rep saying: given the problem you just named, here's the specific bit of what we do that addresses it, and here's roughly what that's been worth to companies shaped like yours. A weak framing is a five-minute monologue about your platform. A strong framing is two sentences that force the buyer to decide whether they want to keep talking.

What good looks like: the rep references one specific comparable customer or one specific number, ties it to the problem named in diagnosis, and stops talking inside ninety seconds.

Objection handling

Every meeting that matters carries at least one objection. The dimension scores how the rep handles it. The bad version is rebuttal: the rep argues the buyer out of the objection. The good version is exploration: the rep asks the buyer to expand, finds the underlying concern, addresses that. Most stated objections aren't the real one. The rep's job is to surface the real one without the buyer feeling cross-examined.

What good looks like: the buyer says more after the rep's response than they did before. The rep doesn't reflexively produce a counter-stat. The objection ends with a shared next step, not the rep declaring victory.

Closing

Did the rep ask for the specific next thing, by name, with a calendar entry attached. Closing is unglamorous. It's the part of the meeting that gets dropped most often, because the rest of the meeting felt good and the rep let the buyer escape with a vague we'll-be-in-touch. Strong closing names the next meeting, names the participants, names the date, and books it before the call ends.

What good looks like: there's a calendar invite in the buyer's inbox before they hang up. The invite has a one-line agenda the rep wrote. The CRM has a stage move the rep didn't need to be reminded to make.

What a rep sees on a Monday morning

Every Monday morning a rep gets a weekly scorecard for the calls they ran the previous week. It opens with the headline: how many meetings, the average score across the six dimensions, the dimension that moved most. Then three blocks: keep, stop, start. Keep is the behaviour the coaching layer noticed working. Stop is the behaviour pulling a score down. Start is the one experiment to run this week.

Below that sits the per-call view. Each meeting shows the score on each dimension, the one minute of recording where the score was set, and a written coaching note. The rep can play the clip, read the note, and jump straight into the CRM record where their action items are filed. That's the loop that produces actual improvement: short, specific, anchored in evidence the rep can hear themselves saying.
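
If it helps to picture what that Monday artefact actually contains, here is a minimal sketch of the data shape, written as TypeScript purely for illustration. The field names and the 1 to 5 scale are illustrative assumptions, not the platform's actual schema.

```typescript
// Illustrative sketch only. Field names and the 1 to 5 scale are assumptions,
// not Hyper's actual schema.

type Dimension =
  | "rapport"
  | "discovery"
  | "problemDiagnosis"
  | "valueFraming"
  | "objectionHandling"
  | "closing";

interface CallScore {
  meetingId: string;
  scores: Record<Dimension, number>;   // e.g. 1 to 5 per dimension
  evidenceClipStartSecond: number;     // the minute of recording where the score was set
  coachingNote: string;                // the written reason behind the score
  crmRecordUrl: string;                // where the action items are filed
}

interface WeeklyScorecard {
  repId: string;
  weekStarting: string;                // ISO date, the Monday it lands
  meetingsScored: number;
  averageByDimension: Record<Dimension, number>;
  dimensionMovedMost: Dimension;
  keep: string;                        // the behaviour that's working
  stop: string;                        // the behaviour pulling a score down
  start: string;                       // the one experiment for this week
  calls: CallScore[];
}
```

Seen this way, the loop is concrete: every number on the card traces back to a specific call, a specific minute, and a written reason.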

Across a quarter, the rep sees a development plan: which two dimensions are trending up, which one is flat, what the coaching focus is for the next four weeks. That plan goes to the manager too, so the one-to-one has something concrete to discuss other than pipeline.

What a manager sees

The team view is the rep view, aggregated. Each rep's score across six dimensions. The dimension that needs the most coaching across the team. The reps trending up and the reps flat. Conversion at each stage, broken down against scorecard performance, so a manager can see whether the reps who score highest on diagnosis are also the ones whose opportunities convert fastest. They usually are.
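
As a rough sketch of how that aggregation works, assuming the scorecard shape above, the team view is little more than averaging and ranking. The function below is illustrative, not how the platform computes it.

```typescript
// Illustrative only: deriving the team view from the weekly scorecards sketched earlier.
// Assumes the Dimension and WeeklyScorecard types from that sketch.

const DIMENSIONS: Dimension[] = [
  "rapport", "discovery", "problemDiagnosis",
  "valueFraming", "objectionHandling", "closing",
];

interface TeamView {
  averageByDimension: Record<Dimension, number>;
  dimensionNeedingMostCoaching: Dimension;  // the weakest team-wide average
}

function buildTeamView(scorecards: WeeklyScorecard[]): TeamView {
  const averageByDimension = {} as Record<Dimension, number>;
  for (const dim of DIMENSIONS) {
    const total = scorecards.reduce(
      (sum, card) => sum + card.averageByDimension[dim],
      0,
    );
    averageByDimension[dim] = total / scorecards.length;
  }

  // The dimension that needs the most coaching is simply the lowest team average.
  const dimensionNeedingMostCoaching = DIMENSIONS.reduce(
    (worst, dim) =>
      averageByDimension[dim] < averageByDimension[worst] ? dim : worst,
    DIMENSIONS[0],
  );

  return { averageByDimension, dimensionNeedingMostCoaching };
}
```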

The manager view also surfaces calls worth listening to: a top call from each rep that week, and the lowest-scoring call from a rep whose scores are dropping. Those two kinds of clip are what most coaching conversations end up being about. The manager doesn't have to listen to twenty hours of recordings to find them.

Three things the scorecard isn't

It isn't a performance management tool. The data is for coaching. I've seen sales orgs try to use the same numbers to fire reps, and the reps stop trusting the system inside weeks. That breaks the coaching loop. The scorecard is structured to be defensible in performance conversations but its primary purpose is development.

It isn't a substitute for a manager. The platform writes coaching notes from the recording, but the conversation about what to do differently next week happens between a rep and a human who knows them. The scorecard makes that conversation better. It doesn't replace it.

It isn't a black box. Every score on every dimension is attached to a specific minute of audio with a written reason. A rep who disagrees with a score can read the reason, listen to the clip, and discuss it with their manager. That's how you keep trust in the system. The platform shows its working.

How the scorecard ties back to the rest of the work

Coaching is one of the four functions Hyper runs. Research feeds outreach. Outreach books meetings. Meetings feed coaching. Coaching tightens the brief that feeds the next week of outreach. The scorecard is what makes that last step a measured loop rather than a hand-wave. When discovery scores are dropping across a cohort, the brief gets sharper. When objection handling is weak on a particular hook, the hook gets retired. The scorecard is the feedback the platform uses on itself, not just on your reps.

For more on how the four functions fit together, the mechanism chapter walks through it. For more on what the platform writes back into your CRM after each call, including the action items and the scorecard data, read how we plug into your CRM.

The questions reps actually ask

Whose calls get scored?

Every meeting your reps run that comes through the engagement. External demos, discovery calls, second meetings. We don't score internal stand-ups or pure account management calls unless you ask us to. We can score calls run on Zoom, Google Meet or Microsoft Teams, or recorded over a dialler.

What about reps who are uncomfortable being recorded?

Recording disclosure goes on the calendar invite for every meeting, in line with UK and US recording-consent rules. Reps who don't want their calls used for the team view can set their personal development plan to private. The data still exists; the team view doesn't show their calls.

Does it work for non-English calls?

For English, French, German, Spanish, Italian, Portuguese and Dutch the scoring runs natively. For other languages we score via translation, which is good enough for trend data but weaker on rapport, where the cultural register matters. We're honest about that on the rare engagements where it comes up.

Next step

See it run against one of your own accounts.

Try the demo →