Toolbox: Racing Data Review
A season review is a vital learning tool for athletes who want to improve their performance consistently. Why? When you can identify the strengths and weaknesses of your season's racing and of the training that supported it, you can use that knowledge to build a better plan for next year.
An annual season review process builds consistency in your approach to identifying your performance limiters and reveals areas of improvement needed in your annual training plan. This is yet another area where training with data excels, as you have a wealth of quantitative data to look back at and use for improvement.
When I conduct a season review, I typically focus on both power and heart rate, along with specific training metrics, and I start by separating the review into two different areas: racing and training. When reviewing races, it is important to look at performance, peak powers, fatigue resistance/endurance, and any areas that were specifically addressed in the current annual plan. In a review of training, I look at volume, intensity, specificity, progression, and overload.
In this two-part article series, I will demonstrate select areas of review; there are too many possibilities to cover in any one article, but I will try to provide insight into some of the less common ideas. In part one I'll focus on race performance review, and in part two I'll show how to use what we learned there to better review our training plan.
Race/Event Performance Review
Racing is hard! In races we tend to be highly motivated and push ourselves to the limits, so analyzing our race data gives us an insightful picture of our strengths and limiters when attempting maximal performance. I structure my race/event performance review around three areas: peak power, fatigue resistance, and specific targets.
Peak Power and Power Clusters
There are two ways to improve race performance: you can increase power over time or improve efficiency. In a season review, I focus on power over time, or Mean Max Power (MMP) as measured by specific time ranges. This does not mean that efficiency isn’t important, but in the season review I’m specifically looking at the relationship between training and power; efficiency is a skill built on that relationship.
The MMP review focuses on five time ranges (5 seconds, 1 minute, 5 minutes, 20 minutes, 60 minutes), as each tends to represent a specific physiological performance area. I review these numbers in two ways: peak vs. previous year's peak and percentage cluster. Whereas most of us are familiar with the idea of comparing peak performance, the idea of clustering might be new to some. I define clustering as a percentage representation of the "tightness" of near-maximal efforts when compared to the absolute max. For well-trained athletes who achieve peak form, this cluster is typically very tight, roughly 96% or above (higher is better).
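To make the cluster idea concrete, here is a minimal Python sketch. This is my own illustration of the concept, not WKO4's internal calculation: it scores the "tightness" of a duration's best efforts as the average of the top five MMPs expressed as a percentage of the absolute max.

```python
def cluster_percentage(efforts):
    """Cluster score for one time range.

    efforts: the best MMP values (watts) for a single duration,
    e.g. an athlete's best 5-minute race powers for the season.
    Returns the average of the top 5 as a percentage of the max.
    """
    best = max(efforts)
    top5 = sorted(efforts, reverse=True)[:5]
    avg_top5 = sum(top5) / len(top5)
    return 100.0 * avg_top5 / best

# Example: five best 5-minute race powers across a season
efforts = [352, 349, 344, 341, 338]
print(round(cluster_percentage(efforts), 1))  # prints 98.0
```

By this measure, the example athlete's 5-minute efforts cluster at about 98%, comfortably above the roughly 96% threshold I look for in an athlete who has truly peaked.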
Take a look at this WKO4 MMP Peaks report, filtered to represent race data only. In this image we see select time ranges (I’ve chosen 3 time ranges only for visual clarity) being compared with the previous year for both max and average of top 5 MMPs. This athlete made substantial improvements in max MMP and average top 5 MMP, but reviewing the MMP cluster (how close the average top 5 are to the max), we see a broader range. This tells me that though this athlete did produce more power, he was not able to reproduce it in a tight range. This suggests that the athlete did not truly achieve peak form, which indicates we need to review his training load and content in relationship to peaking.
Key Learning: The athlete increased in peak power achieved but not in optimal peak/form.
Fatigue Resistance
Peak power gets a lot of attention these days because training is often focused on "more power," and fatigue resistance training easily gets lost in the mix. It's important to analyze the role of fatigue in performance because it gives us significant insight into athletes' race performance as they fatigue. I'm still a little old school in this area, and I use work (kJ) as the basis for measuring race fatigue (Training Stress Score [TSS] is also a great way to do this). What I'm looking for is how an athlete performs after fatigue. There are plenty of riders who can produce a great 20-minute power when fresh, but can they do it after 1,000 or 2,000 kJ of work? One of the key analytics I use to review an athlete's fatigue resistance is tracking his or her Power Duration Curve and select MMPs after a certain amount of work (the specific kilojoule target varies based on rider weight and racing level).
Take a look at this chart:
This chart displays a typical Cat 4 racer's Power Duration Curve and peak 5-minute and 20-minute MMP after 1,000 kJ of work. The red line is the athlete's maximal power duration curve, and the green dashed line is the curve after 1,000 kJ. Notice the areas of decline in the chart. As the curve shapes show, this athlete loses a substantial amount of sprint power when fatigued (the left-hand side of the chart), and his power also drops off significantly after 40 minutes. The top-right annotation shows the specific drop-off for 5 and 20 minutes: 5-minute power drops off only 11 watts, but 20-minute power drops off by 30 watts. This tells us a lot about the athlete's general lack of fatigue resistance, as it represents a greater than 10% drop-off in 20-minute power after 1,000 kJ of work, which at race pace for an average-sized male can be accumulated in as little as 75-90 minutes.
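The underlying calculation can be sketched in a few lines of Python. This is a simplified illustration of the concept, not WKO4's implementation; it assumes 1 Hz power samples, where 1 watt held for 1 second equals 1 joule of work.

```python
def best_power_after_work(power_w, window_s, work_kj):
    """Best `window_s`-second average power (watts) beginning only
    after `work_kj` kilojoules of work have been completed.
    power_w: per-second power samples for one ride."""
    # cumulative work in kJ at each second (1 W for 1 s = 1 J)
    cum_kj, total = [], 0.0
    for p in power_w:
        total += p / 1000.0
        cum_kj.append(total)
    # first second at which the work threshold is reached
    start = next((i for i, kj in enumerate(cum_kj) if kj >= work_kj), None)
    if start is None or start + window_s > len(power_w):
        return None  # not enough data after the threshold
    # rolling-window search for the best average power after `start`
    window_sum = sum(power_w[start:start + window_s])
    best = window_sum
    for i in range(start + 1, len(power_w) - window_s + 1):
        window_sum += power_w[i + window_s - 1] - power_w[i - 1]
        best = max(best, window_sum)
    return best / window_s
```

Comparing, say, the 20-minute result of a function like this against the athlete's fresh 20-minute MMP gives the kind of drop-off the chart's annotation reports.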
Key Learning: The athlete’s power increased, but his poor fatigue resistance/endurance negatively impacted his performance.
Specific Targets
Based on either the demands of the event(s) or the ability of the rider, each year I focus on specific performance criteria to achieve goals. For example, I coached an athlete who rode the Tour of the Catskills, an event well known for its climbing. He had done the race the year before we started working together, so we had some baseline data. As part of his season review, one of the areas we looked at was climbing. At first glance it seemed most of his MMPs occurred while climbing, but a deeper look revealed more. Take a look at this custom analytics chart I built to review the specific demands of this event:
I knew this event features some steep-grade climbing, so I separated mean max power by average grade over select time periods (8 minutes is shown in this example). Although the initial data review suggested this athlete was climbing well, once we broke the climbs down by grade, we learned that he struggled when things got steeper, losing about 7% of his power on climbs steeper than an 8% grade. This was a crucial diagnosis that demonstrated the need for a specific focus in preparing to return to the event, in this case a combination of technique and strength work.
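The grade-split analysis can be approximated with a short Python sketch as well. This is my own simplified version of the idea, not the chart's internals; it assumes 1 Hz samples of both power (watts) and road grade (%), and returns the best average power over any window whose average grade meets a minimum steepness.

```python
def best_power_on_grade(power_w, grade_pct, window_s, min_grade):
    """Best `window_s`-second average power among windows whose
    average road grade is at least `min_grade` percent."""
    n = len(power_w)
    best = None
    for i in range(n - window_s + 1):
        # only consider windows that are steep enough on average
        window_grade = sum(grade_pct[i:i + window_s]) / window_s
        if window_grade >= min_grade:
            avg_power = sum(power_w[i:i + window_s]) / window_s
            if best is None or avg_power > best:
                best = avg_power
    return best  # None if no window is steep enough
```

Running this for the same duration at, say, a 4% floor and an 8% floor and comparing the two results exposes exactly the kind of steep-grade power loss described above. (The brute-force window scan is kept for clarity; a rolling sum would be faster on long rides.)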
Key Learning: This athlete has a significant issue with steep climbing power, which needs deeper review.
Reviewing race data gives us a clear vision of how athletes succeed and how they fail. The deeper and more specifically we look, the more we learn. In today's environment of training with data, both coaches and self-coached athletes need to utilize the available data to support deeper diagnosis, which leads to better planning, which in turn creates better performance. Next week's part 2 article will discuss how to review the previous year of training and how to use the key learnings from the race review to determine the cause and effect of an athlete's results (or lack thereof).
Tim Cusick is the TrainingPeaks WKO4 Product Development Leader, specializing in data analytics and performance metrics for endurance athletes. In addition to his role with TrainingPeaks, Tim is a USAC coach with over 10 years' experience working with both road and mountain bike professionals around the world. You can reach Tim for comments at [email protected]. To learn more about TrainingPeaks and WKO4, visit TrainingPeaks.com.