How I used free Google products to create an Acute:Chronic Workload monitoring system… and why I would never do it again.

Intro:

Let’s set the stage: it’s 2022. I had just wrapped up my Master’s in Applied Sport Science Analytics. Fresh off the academic high, I stepped into a blended strength & conditioning / sport science role with a high school hockey club. I was excited, hungry, and maybe too eager to implement what I thought was a “cutting-edge” athlete monitoring system on a shoestring budget.

The objective was simple: track training load and athlete readiness to reduce soft tissue injuries and improve performance consistency. I focused on the Acute:Chronic Workload Ratio (ACWR), a metric comparing short- to long-term workload to highlight elevated injury risk or undertraining.
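
In its most common form, that ratio is just (average load over the last 7 days) ÷ (average load over the last 28 days): a value near 1.0 means the recent week looks like what the athlete is accustomed to, while values well above or below it are the ones worth a closer look.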

But in the high school setting, you don’t get Catapult. You don’t get embedded force plates. You don’t even get reliable access to heart rate data at the scale needed to make it useful. So I leaned into what I could control: subjective wellness monitoring.

Spoiler alert: that decision would teach me more about implementation failure than any graduate-level textbook ever could.

Building the Google Form:

I designed a clean, short, daily Google Form titled Athlete Self-Readiness Questionnaire (ASRQ). It included:

  • Name

  • Fatigue (1–5 scale; 5 = Very Fresh, 1 = Always Tired)

  • Sleep Quality (1–5 scale; 5 = Very Restful, 1 = Insomnia)

  • Muscle Soreness (1–5 scale; 5 = Feeling Great, 1 = Very Sore)

  • Stress (1–5 scale; 5 = Very Relaxed, 1 = Highly Stressed)

  • Mood (1–5 scale; 5 = Very Positive, 1 = Very Down/Irritable)

The instructions were clear: complete the form before noon each day. It was mobile-friendly and took less than 30 seconds. I wanted to encourage quick, honest, low-friction input. Each scale was defined so athletes wouldn’t guess blindly.

This fed directly into a linked Google Sheet, where I began to layer in automated calculations and visualizations.
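
To make the later numbers concrete, here is a minimal sketch of what those raw responses looked like once they hit the Sheet, and how each day’s composite score (out of 25) was built. I’m sketching it in Python/pandas rather than the actual Sheet formulas, and the column names are my reconstruction, not the real headers.

```python
import pandas as pd

# Hypothetical reconstruction of the linked response sheet (column names assumed,
# not the actual headers). One row per athlete per submitted form.
responses = pd.DataFrame({
    "date":     pd.to_datetime(["2022-10-03", "2022-10-03", "2022-10-04"]),
    "athlete":  ["Player A", "Player B", "Player A"],
    "fatigue":  [4, 3, 2],   # 1-5, higher = fresher
    "sleep":    [5, 3, 2],   # 1-5, higher = more restful
    "soreness": [4, 4, 3],   # 1-5, higher = less sore
    "stress":   [4, 2, 3],   # 1-5, higher = more relaxed
    "mood":     [5, 3, 3],   # 1-5, higher = more positive
})

# Daily composite readiness score, out of 25 (sum of the five 1-5 items).
wellness_items = ["fatigue", "sleep", "soreness", "stress", "mood"]
responses["daily_score"] = responses[wellness_items].sum(axis=1)
```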

Designing the Dashboard:

In the connected spreadsheet, I programmed formulas to aggregate each athlete’s daily score (out of 25) and then calculate the following (sketched in code after the list):

  • 7-day, 14-day, and 28-day Rolling Averages

  • ACWR (3:4, 7:7, 14:14 formats)

  • Daily Team Averages

  • Individual Adherence Rates
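
As a rough sketch of those calculations, continuing the pandas example from the previous section (the 7-day-over-28-day pairing shown here is the textbook ACWR window; swap in whichever acute and chronic windows you actually track, and treat the names as assumptions rather than my real Sheet layout):

```python
# Per-athlete daily scores indexed by date, continuing `responses` from the earlier sketch.
daily = responses.sort_values("date").set_index("date").groupby("athlete")["daily_score"]

# Rolling averages over calendar windows; min_periods=1 so early days still return a value.
rolling_7  = daily.rolling("7D",  min_periods=1).mean()
rolling_14 = daily.rolling("14D", min_periods=1).mean()
rolling_28 = daily.rolling("28D", min_periods=1).mean()

# ACWR: acute (short-term) average divided by chronic (long-term) average.
acwr_7_28 = rolling_7 / rolling_28

# Daily team average across everyone who actually submitted that day.
team_daily_avg = responses.groupby("date")["daily_score"].mean()

# Individual adherence: days submitted divided by days expected in the monitoring window.
expected_days = pd.date_range("2022-10-01", "2022-10-28", freq="D")
adherence = responses.groupby("athlete")["date"].nunique() / len(expected_days)
```

The Sheet version was the same arithmetic, just expressed as spreadsheet formulas over date ranges.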

Each athlete had a profile row that displayed:

  • Their average scores across different time frames

  • Their ACWR values

  • Their team-relative deltas

  • Conditional formatting (green/yellow/red) based on ACWR thresholds

For example (the flag logic is sketched right after this list):

  • A score of 1.04 in the 3:4 ACWR meant the athlete was maintaining balance.

  • A score of 1.50+ was flagged in red as a spike in short-term load.

  • A drop below 0.80 was flagged yellow as a possible undertraining or missed-time issue.
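
The color logic behind that conditional formatting amounted to a simple threshold check. As a sketch, with the thresholds listed above (the function name is mine, not anything in the Sheet):

```python
def acwr_flag(acwr: float) -> str:
    """Map an ACWR value to the dashboard's traffic-light color."""
    if acwr >= 1.50:
        return "red"     # spike in short-term load
    if acwr < 0.80:
        return "yellow"  # possible undertraining or missed time
    return "green"       # acute and chronic load roughly in balance

acwr_flag(1.04)  # -> "green": maintaining balance
```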

The math worked. The logic was sound. I had visual clarity and trend lines. But as I’d soon realize, fancy formulas can’t save a broken process.

The Graph Home: Real-Time Monitoring

I created a secondary sheet, my “control tower” for lack of a better term, to track each athlete’s daily scores, running averages, and day-to-day relative changes.

From this page I could:

  • Monitor which athletes were drifting below their norm

  • Spot who was recovering well vs. struggling

  • Identify high volatility or flatlining engagement

  • Track real-time team vs. individual changes

The math behind it (sketched in code after this list):

  • Relative Delta = Athlete Daily Score – 7-Day Rolling Average

  • Positive ∆ = trending in a good direction

  • Negative ∆ = worth a conversation, sometimes intervention
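
In code, the delta is a one-liner on top of the 7-day rolling average from the dashboard sketch above (again, names are assumed):

```python
# Relative delta: each submitted day's score minus that athlete's own 7-day rolling average.
scores = responses.set_index(["athlete", "date"])["daily_score"]
relative_delta = scores - rolling_7   # indexes align on (athlete, date)

# Negative deltas are the ones worth a conversation.
needs_checkin = relative_delta[relative_delta < 0]
```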

Even team-level ACWR scores were displayed across time blocks, allowing me to make coaching decisions at scale.

At its peak, the system could’ve passed as a polished piece of sport science tech, and it was built entirely with free tools.

But that’s also where the illusion began.

Why I Wouldn’t Do It Again:

This is where the cracks showed. Not in the spreadsheet but in how the system was lived, used, and understood.

1. Adherence Was Inconsistent and Fragile

Across all athletes, average adherence hovered between 57% and 71%, depending on the time frame. That meant that every week, I was working with roughly 30–43% incomplete data.

That doesn’t mean “mostly good.” It means mostly noise.

Some players were consistent for a few days, then dropped off. Others never submitted a form unless I asked in person. The form became just another checkbox. And when they realized there were no direct consequences for skipping it, it disappeared from their habits.

2. Subjective Data Isn’t Bad But It’s Easy to Misuse

Some athletes always chose “5” across the board, even after tough practices or rough weekends. Some logged a “5” for fatigue after a poor night’s sleep and visibly low energy. Others reported “1”s just because they were annoyed or being sarcastic.

Subjective data has value, but only when the athlete understands:

  • What they’re rating

  • Why their honesty matters

  • How it helps them

I skipped this part. I made it about me needing data instead of them owning readiness. That was a big mistake.

3. I Became the Data Janitor

Instead of saving me time, the system became a second job:

  • Cleaning up duplicates and errors

  • Texting reminders every morning

  • Explaining ACWR to skeptical teens

  • Debugging cell references

The more sophisticated it became, the more manual intervention it needed. In theory, it was automatic. In practice, it was always broken somewhere.

4. The Numbers Looked Clean But Weren’t Trustworthy

Seeing an ACWR of 1.03 felt validating, like I had captured a useful, scientific truth. But when I traced where that number came from, I realized it was often based on 3–4 days of inconsistent or inaccurate data.

What looked like balance or overload might’ve just been noise, guesses, or missed entries. The illusion of precision became dangerous.

What I’d Do Instead (And Recommend to You):

This system wasn’t a total failure, but it was a wake-up call.

If I could do it again:

  • Start with education and purpose.

    • Teach your team why monitoring matters, how it protects performance, and what the data tells us. Make them co-owners, not input monkeys.

  • Use objective data where possible.

    • Even simple metrics like vertical jump height, sprint time, or HRV provide more reliable feedback than mood scoring from a distracted teen.

  • Make subjective tools meaningful.

    • Pair wellness forms with short team check-ins or color-coded dashboards they can see. Visibility builds credibility.

  • Gamify or reward compliance.

    • Celebrate 100% weeks. Offer incentives. Build culture around consistency, not punishment.

  • Automate less. Integrate more.

    • The best tech doesn’t replace conversation; it enables it. Use data as a launchpad for connection, not a replacement for it.

Final Thoughts:

This project didn’t fail because of tech. It failed because I prioritized systems over people.

I learned that the success of athlete monitoring isn’t about how beautiful your dashboard is; it’s about why athletes believe it matters, and what they do because of it.

In the end, the system looked clean, the charts looked smart, but the foundation was flawed. Because good data starts with good habits. And good habits come from culture, not code.

If you're considering building something similar, do it. But do it knowing that a spreadsheet doesn’t change behavior. A coach does.
