How I spotted a 4-day data lag as a new joiner, self-initiated the fix, and turned a slow dashboard into the team's single source of truth.
Buying managers at Tesco are the connective tissue between the business and its suppliers. Every month, around 100 of them use the Commercial Performance Dashboard to identify which products are underperforming — and decide how to respond, whether that means negotiating internally or going back to suppliers.
But every time new monthly data arrived, the dashboard took 3–4 days to refresh. Multiple data sources fed into cascading calculations, all manually validated before anyone could trust the numbers. The data wasn't wrong, just late, and slow to load once it finally appeared.
If we could get this to people in hours instead of days, they wouldn't need to go anywhere else to check performance.
That wasn't a data problem. It was a pipeline problem. And I noticed it during my first weeks on the job.
I was in the middle of knowledge transfer when I spotted the lag. I didn't raise it immediately. I waited until KT was complete and I understood the full system, then brought it to the team and took ownership of fixing it.
No team. Just Python, SQL, and Tableau. My scope covered the full pipeline: diagnosing the bottlenecks, rebuilding the backend architecture, and encoding the validation logic that had been eating up days of manual review.
Mapped every data source, transformation, and calculation layer to find where days were being lost
Built automated validation to catch bad calculations, missing data, and cross-source mismatches
Refactored the pipeline in Python and SQL — eliminating sequential bottlenecks without redesigning the frontend
Refresh time dropped from 4 days to under 1. No manual steps required on new data arrival
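The core of the refactor was removing false ordering: extracts from independent source systems had been running one after another when nothing forced them to. A minimal sketch of the idea, with hypothetical source names standing in for the real SQL pulls:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical extract steps; in the real pipeline each was a pull
# from a separate source system.
def extract_sales():
    return [{"sku": "A1", "units": 120}]

def extract_costs():
    return [{"sku": "A1", "cost": 0.8}]

def extract_forecast():
    return [{"sku": "A1", "forecast": 110}]

def run_extracts():
    # The extracts share no dependencies, so they can run concurrently
    # instead of sequentially; downstream calculation layers wait only
    # on the sources they actually consume.
    with ThreadPoolExecutor() as pool:
        sales_f = pool.submit(extract_sales)
        costs_f = pool.submit(extract_costs)
        forecast_f = pool.submit(extract_forecast)
        return sales_f.result(), costs_f.result(), forecast_f.result()

sales, costs, forecast = run_extracts()
```

The same restructuring applies inside the database: independent staging queries can run in parallel sessions, with only the final join forced to wait.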
The fix wasn't just engineering. Every choice had a tradeoff between speed, reliability, and scope.
The manual review existed for good reason. The calculations were complex enough that errors could slip through. The answer was to encode that review into the pipeline itself, not bypass it.
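Encoding the review means turning what a human checked by eye into explicit rules the pipeline runs on every refresh, failing fast instead of waiting days. A sketch under assumed, illustrative rules (the field names and thresholds here are invented, not Tesco's):

```python
def validate(rows):
    """Run the checks a reviewer used to do manually.

    Returns (issues, skus_seen): issues is a list of problem
    descriptions, and an empty list means the batch passes.
    """
    issues = []
    skus_seen = set()
    for row in rows:
        sku = row.get("sku")
        # Missing data: every row needs a SKU and a sales figure.
        if not sku or row.get("units") is None:
            issues.append(f"missing field in row: {row}")
            continue
        # Bad calculation: margin must equal revenue minus cost.
        if abs(row["revenue"] - row["cost"] - row["margin"]) > 0.01:
            issues.append(f"margin mismatch for {sku}")
        skus_seen.add(sku)
    return issues, skus_seen

def cross_check(skus_a, skus_b):
    # Cross-source mismatch: both systems should report the same SKUs,
    # so any symmetric difference is flagged.
    return sorted(skus_a ^ skus_b)

rows = [
    {"sku": "A1", "units": 120, "revenue": 100.0, "cost": 80.0, "margin": 20.0},
    {"sku": "B2", "units": 40, "revenue": 50.0, "cost": 30.0, "margin": 25.0},
]
issues, skus = validate(rows)
```

The point is not the specific rules but that they are versioned, repeatable, and run before anyone sees the numbers.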
Buying managers already knew the dashboard. Changing the interface risked confusion and retraining. All changes stayed in the data layer — same experience, radically faster underlying system.
Easy to patch one slow step. The real work was auditing every source and transformation to eliminate cumulative drag across the entire refresh cycle.
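Auditing the whole refresh means measuring where the time actually goes, so cumulative drag shows up as data rather than anecdote. A small sketch using a timing decorator; the stage names are hypothetical:

```python
import time
from functools import wraps

stage_timings = {}

def timed(stage_name):
    """Record each pipeline stage's wall-clock duration."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            stage_timings[stage_name] = time.perf_counter() - start
            return result
        return wrapper
    return decorator

@timed("load_sales")
def load_sales():
    time.sleep(0.01)  # stand-in for a slow SQL pull
    return []

load_sales()
# stage_timings now maps each stage to its duration, making the
# slowest transformations obvious across a full refresh cycle.
```

With every stage instrumented this way, "the dashboard is slow" decomposes into a ranked list of concrete bottlenecks to fix.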
Raising this during onboarding would have been premature. Understanding the full system first meant the solution was grounded — not a new joiner's guess about something they'd seen for two weeks.
The numbers improved. But the more meaningful shift was qualitative. Across the buying department and beyond, the dashboard became the canonical place to check commercial performance. People stopped doubting the numbers. They stopped cross-referencing other sources.
It became the single source of truth: the one place to check performance without second-guessing the figures or the logic behind them.