Every competitive pickleball player knows their rating. They track it like a stock price. It goes up after a good tournament, down after a bad one. They introduce themselves with it at open play. "I'm a 3.8."

But ask that 3.8 player what specifically makes them a 3.8 and not a 4.2, and you'll get a shrug. Maybe a vague answer about consistency or needing to be more aggressive. Nothing actionable. Nothing you could build a practice plan around.

That's because ratings were never designed to tell you what to fix. They were designed to tell you who to play.

What ratings actually measure

Modern pickleball ratings (the algorithmic ones that have become the standard) are elegant systems built on a simple idea: if you beat someone rated higher than you, your number goes up. Lose to someone rated lower, and it goes down. Over time, the system converges on a number that represents your competitive output.
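The update rule described above can be sketched as a generic Elo-style calculation. This is an illustration of the idea, not any rating provider's actual algorithm (those are proprietary and more sophisticated), and the point values are arbitrary:

```python
def expected_score(rating_a: float, rating_b: float, scale: float = 400.0) -> float:
    """Probability that player A beats player B under a logistic model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))

def update_rating(rating: float, opponent: float, won: bool, k: float = 32.0) -> float:
    """Move the rating toward the observed result: beating a higher-rated
    player gains more points than beating a lower-rated one, and losing
    to a lower-rated player costs more than losing to a higher-rated one."""
    actual = 1.0 if won else 0.0
    return rating + k * (actual - expected_score(rating, opponent))

# An upset win moves the number more than an expected win does:
print(update_rating(1500, 1600, won=True))  # bigger jump
print(update_rating(1500, 1400, won=True))  # smaller jump
```

Notice what the function takes as input: two ratings and a win/loss flag. Nothing about how the points were won, which is exactly why the output can't tell you what to fix.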

This is genuinely useful. It makes matchmaking better. It adds structure to tournaments. It gives rec players a shorthand for finding competitive games. These are real contributions to the sport.

But here's what a rating cannot do: it cannot tell you why.

A rating is an outcome metric. It's the final score. It says nothing about the process that produced it. Two players can be rated 3.8 and have completely different skill profiles. One might have excellent hands at the kitchen but leaks points on the return. The other might have a devastating serve but falls apart in extended dink rallies.

Same number. Completely different players. Completely different improvement paths.

The gap between outcome and process

This distinction, between knowing your outcome and understanding your process, is the gap that matters for improvement.

In other sports, this is well understood. A baseball player doesn't just know their batting average. They know their launch angle, exit velocity, chase rate, whiff rate on breaking balls, performance by count. The number on the scoreboard is the result. The underlying metrics are the levers.

Pickleball doesn't have this layer. Not for recreational players. Not really for anyone outside of a handful of pros who have access to high-level coaching.

So when a 3.8 player decides to "get to 4.0," they're essentially trying to improve an outcome without understanding the inputs. It's like trying to lose weight by stepping on the scale more often. The scale isn't the problem. The absence of information about what you're eating is the problem.

What actually matters at the skill level

If you were going to build a real skill profile for a pickleball player -the kind that would actually tell them what to work on -what would you measure?

Here's what we think matters, based on what actually separates players at adjacent rating levels:

Serve and return consistency. Not just whether it goes in, but depth, placement, and how often the outcome sets up an advantageous next shot. Most intermediate players don't realize how many points they lose before the rally even starts, simply because a short return gave their opponents an easy third shot.

Third-shot decision-making. Not just "can you hit a drop" but do you choose the right shot for the situation? A lot of 3.5 players have a decent drop and a decent drive. They just pick the wrong one 40% of the time.

Transition zone play. The area between the baseline and the kitchen is where intermediate games are won and lost. How effectively you move through this zone, and how many unforced errors you make while doing it, is one of the strongest predictors of rating level we've found.

Reset quality. When you're under pressure, can you neutralize the point? This is a skill that barely shows up on highlight reels but is disproportionately important. The ability to absorb an attack and redirect it softly into the kitchen is what keeps points alive.

Kitchen positioning and patience. Where do you stand? How long can you sustain a dink rally without going for a low-percentage speed-up? Positioning errors at the kitchen are some of the most common and most invisible mistakes in the game. You don't realize you're six inches too far back until someone with better eyes points it out.

Unforced error patterns. Not just how many, but when, where, and on what shot type. Everyone makes errors. The question is whether yours are random or systematic. Systematic errors are fixable. Random errors usually indicate fatigue or mental lapses, which is a different kind of problem.
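The six dimensions above suggest a natural data shape: a profile with one score per dimension instead of a single aggregate. Here is a minimal sketch of what that might look like; the field names and the idea of reusing the 2.0-8.0 rating scale per dimension are my assumptions, not an actual schema:

```python
from dataclasses import dataclass

@dataclass
class SkillProfile:
    """Hypothetical per-dimension skill profile, each field on the
    familiar 2.0-8.0 rating scale (an assumed convention)."""
    serve_return: float
    third_shot_decisions: float
    transition_zone: float
    resets: float
    kitchen_play: float
    error_discipline: float

    def weakest_dimension(self) -> str:
        """The dimension most likely capping overall performance."""
        dims = vars(self)
        return min(dims, key=dims.get)

player = SkillProfile(
    serve_return=3.9, third_shot_decisions=3.7, transition_zone=3.4,
    resets=3.5, kitchen_play=4.0, error_discipline=3.8,
)
print(player.weakest_dimension())  # transition_zone
```

Two players with the same average of these fields (the same "3.8") can have completely different profiles, which is the whole argument in data-structure form.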

Why nobody tracks this

If this is what matters, why doesn't anyone measure it?

Because it's hard. A rating system just needs to know who won. Skill-level measurement requires observing behavior within the game. It requires knowing not just the score but the sequence of events that produced it. It requires distinguishing between a drop that was a strategic choice and a drop that was a panic response.

For a human coach, this is doable in a one-on-one session. They watch you play, they see the patterns, they tell you what they see. But it doesn't scale. It's expensive. And it doesn't persist: the coach's observations live in their head or in a few notes, not in a system that tracks changes over time.

The result is that the most important information about your game, the specific, behavioral, skill-level data that would actually tell you what to work on, has historically been inaccessible to most players.

What a real skill profile looks like

Imagine instead of a single number, you had a multi-dimensional view of your game. Not just "3.8" but a breakdown that showed exactly where your 3.8 is strong and where it's weak. A profile that told you: your dinking is 4.0-level, your resets are 3.5-level, and your transition zone play is what's capping your overall performance.

Now imagine that profile updated over time. You worked on your transition game for three weeks. The system measured the change. Your transition zone errors dropped 20%. Your overall win rate ticked up. And now the profile recalculates and says: nice work, here's the next thing.

That's what a real feedback loop looks like. Not a rating. Not a drill. A diagnostic system that knows where you are, tells you what matters, and adjusts as you improve.
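The loop described above, measure, surface the weakest link, remeasure, can be sketched in a few lines. The dimension names and numbers here are invented for illustration:

```python
def next_focus(profile: dict) -> str:
    """Surface the lowest-scoring dimension as the next thing to work on."""
    return min(profile, key=profile.get)

def apply_measurement(profile: dict, dim: str, new_level: float) -> dict:
    """Record a newly measured level and return the updated profile."""
    updated = dict(profile)
    updated[dim] = new_level
    return updated

profile = {"dinking": 4.0, "resets": 3.5, "transition_zone": 3.3}
focus = next_focus(profile)                      # "transition_zone"
profile = apply_measurement(profile, focus, 3.6) # three weeks of work later
print(next_focus(profile))                       # resets
```

The recalculation step is what distinguishes this from a static assessment: once the measured weakness improves, the system's advice changes with it.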

This is what trainedB.ai is building. Not a better rating. A better mirror.

Next week, I'm going to share what surprised us when we actually started measuring these things -including why players' own sense of their game is almost always wrong in the same predictable ways.