There’s something mildly absurd about modern running tech. Every brand talks about AI now. Smart coaching, predictive training plans, readiness scores, recovery scores, stress scores, you name it. My watch apparently knows my future. It just doesn’t know what happened five minutes ago.
Take heart rate. I run with a chest strap or the wrist sensor, doesn’t matter. Every now and then the data goes completely off the rails. Suddenly my pulse jumps to 190 while I’m jogging easy, stays there for three minutes, then drops back like nothing happened. No hill, no sprint, no drama. Just noise. Same with GPS. Clean route along the river, then one glitch and the track cuts straight through buildings like I teleported. The device shrugs and saves it as truth.
I can live with imperfect sensors. Sweat, movement, bad satellite reception — physics is messy. What I don’t get is why all that so-called intelligence doesn’t clean up the mess afterwards. Because statistically speaking, this is the easy part. Outliers are not some exotic phenomenon. They’re textbook stuff. Every intro course in data analysis covers them. If one data point is physiologically implausible or completely detached from the surrounding values, you flag it, smooth it, or drop it. Signal processing has been doing this for decades. Median filters, Kalman filters, simple plausibility checks. Nothing fancy. This isn’t sci-fi AI. It’s basic hygiene.
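
To make that concrete, here is a minimal sketch of what such hygiene could look like, in Python. The thresholds and window size are illustrative assumptions on my part, not values any real device documents: a rate-of-change gate catches the physiologically implausible jumps, and a small rolling median mops up the residual jitter.

```python
# A minimal sketch of that basic hygiene, assuming 1 Hz heart-rate
# samples in bpm. All thresholds are illustrative guesses.
from statistics import median

MAX_JUMP_BPM = 15    # assumed: plausible max change between consecutive seconds
MEDIAN_WINDOW = 5    # samples, i.e. five seconds at 1 Hz

def clean_heart_rate(samples):
    """Gate implausible jumps, then smooth the rest with a rolling median."""
    gated = []
    last_good = samples[0]
    for hr in samples:
        if abs(hr - last_good) > MAX_JUMP_BPM:
            # Physiologically implausible jump: hold the last trusted value.
            gated.append(last_good)
        else:
            gated.append(hr)
            last_good = hr
    half = MEDIAN_WINDOW // 2
    return [median(gated[max(0, i - half):i + half + 1]) for i in range(len(gated))]

# The trace from above: easy jog around 140 bpm, a glitch to 190
# that sticks around, then back down as if nothing happened.
raw = [138, 140, 141, 190, 191, 190, 189, 142, 141, 140]
print(clean_heart_rate(raw))  # the spike is held near 141 instead of 190
```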
Garbage In, Glossy Insights Out

Instead, most platforms skip the present and jump straight into prophecy. They’ll happily calculate my predicted marathon time for next October but won’t question a heart rate spike that would put me in the ER. They build elaborate models on top of noisy data and then act surprised when the recommendations feel off. Garbage in, glossy insights out.
And this is where the whole AI narrative starts to feel backwards. Real intelligence would start with doubt. It would ask: does this even make sense? Could a human heart rate realistically jump 40 beats per minute within a single second on an easy run? Could someone really sprint across a lake at 30 km/h? If not, maybe fix the data first before building training plans and readiness scores on top of it. Clean signals beat clever predictions.
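
That doubt is cheap to implement. A sketch for the GPS case, again under assumed numbers (the 30 km/h ceiling is taken from the example above, roughly the fastest a human can sprint): compute the speed each new fix would imply and refuse any point that would require teleportation.

```python
# Sketch of the "does this even make sense?" test for GPS fixes.
# The speed ceiling is an illustrative assumption, not a real spec.
import math

MAX_SPEED_MPS = 30 / 3.6  # 30 km/h, as in the lake example above

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6_371_000
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def drop_teleports(fixes):
    """Keep only fixes reachable at plausible running speed.

    fixes: list of (timestamp_s, lat, lon), assumed non-empty and
    sorted by time.
    """
    kept = [fixes[0]]
    for t, lat, lon in fixes[1:]:
        t0, lat0, lon0 = kept[-1]
        dt = t - t0
        speed = haversine_m((lat0, lon0), (lat, lon)) / dt if dt > 0 else float("inf")
        if speed <= MAX_SPEED_MPS:
            kept.append((t, lat, lon))
        # else: this fix would mean teleporting; skip it and wait for
        # the next one that connects plausibly to the last good point.
    return kept
```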
There are other examples everywhere. VO₂max estimates that swing wildly after one bad workout. “Overtraining” warnings triggered by a single night of poor sleep. Stress metrics reacting more to a loose watch strap than to actual life stress. It’s not that modeling performance is impossible — sports science has solid methods for that — but all of them assume reasonably reliable input. Without that, the numbers just look precise while being fundamentally shaky.
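
Even the headline metrics could be steadied with one line of smoothing. A sketch with a made-up smoothing factor: instead of letting a single noisy workout rewrite the VO₂max estimate, blend each new estimate into a running value.

```python
# Sketch: damp single-workout swings with an exponential moving
# average. ALPHA is an illustrative assumption; a real product would
# tune it, or use something more principled.
ALPHA = 0.15  # weight given to the newest workout's estimate

def update_vo2max(current, new_estimate):
    """One noisy workout nudges the number instead of rewriting it."""
    return (1 - ALPHA) * current + ALPHA * new_estimate

vo2 = 52.0
for workout_estimate in [51.8, 52.1, 44.0, 52.0]:  # one bad GPS day: 44.0
    vo2 = update_vo2max(vo2, workout_estimate)
print(round(vo2, 1))  # stays near 51 instead of crashing toward 44
```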
Maybe the next real upgrade for running tech isn’t another AI coach whispering split times. Maybe it’s something less sexy: better filtering, more skepticism, systems that quietly fix obvious nonsense before presenting it as insight. Less crystal ball, more common sense. I’d take that any day.
