Capture the last eight to twelve sprints of completed items, excluding rushed hotfixes if they distort the picture. Note sprint length, planned holidays, capacity changes, and any shifts in the mix of work types. Keep backlog items similarly sized where practical, or tag them by category. The goal is just enough signal to forecast responsibly, while resisting the urge to collect every metric imaginable, which slows usage and quietly undermines trust.
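As an illustration only, here is one minimal way to structure that history. It is a Python sketch rather than a spreadsheet layout, and the field names (`completed_items`, `working_days`, `capacity_note`, `work_mix`) are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class SprintRecord:
    """One completed sprint's throughput plus the context that explains it (illustrative fields)."""
    sprint: str                 # label, e.g. "S-14"
    completed_items: int        # throughput: backlog items finished in the sprint
    working_days: int           # sprint length minus planned holidays
    capacity_note: str = ""     # free text: "one engineer on leave", "new hire onboarding", ...
    work_mix: dict = field(default_factory=dict)  # optional tags, e.g. {"feature": 6, "bug": 2}

# Eight to twelve recent sprints is usually enough signal; more fields rarely help.
history = [
    SprintRecord("S-01", completed_items=7, working_days=10),
    SprintRecord("S-02", completed_items=5, working_days=9, capacity_note="public holiday"),
    SprintRecord("S-03", completed_items=8, working_days=10, work_mix={"feature": 6, "bug": 2}),
]
```

Keeping the capacity note as free text keeps overhead low; the point is to remember why a sprint was unusual, not to model it precisely.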
Let the sheet compute throughput percentiles and aggregate them across the chosen horizon. Offer quick toggles for P50, P80, and P90 so decision makers can pick confidence levels consciously. If you run a Monte Carlo simulation by resampling historical throughput, display the iteration count and random seed for transparency. Always expose the formulas nearby so people can validate the logic, ask better questions, and improve the model without feeling excluded or mystified.
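A minimal sketch of that resampling idea follows, written in Python rather than sheet formulas so the logic is easy to read; the function name `forecast_sprints` and its parameters are hypothetical, and a real spreadsheet would express the same steps with random sampling and percentile formulas.

```python
import random
import statistics

def forecast_sprints(throughput_history, backlog_size, iterations=10_000, seed=42):
    """Monte Carlo sketch: resample past sprint throughput until the backlog is exhausted."""
    samples = [t for t in throughput_history if t > 0]  # zero-throughput sprints would never finish
    rng = random.Random(seed)
    outcomes = []
    for _ in range(iterations):
        remaining, sprints = backlog_size, 0
        while remaining > 0:
            remaining -= rng.choice(samples)  # one simulated sprint draws a past throughput at random
            sprints += 1
        outcomes.append(sprints)
    cuts = statistics.quantiles(outcomes, n=100)  # 99 percentile cut points
    return {
        "P50": cuts[49],
        "P80": cuts[79],
        "P90": cuts[89],
        "iterations": iterations,  # surfaced for transparency, as suggested above
        "seed": seed,
    }

# Example: 40 remaining backlog items against eight recorded sprints of throughput.
print(forecast_sprints([7, 5, 8, 6, 9, 4, 7, 8], backlog_size=40))
```

Fixing the seed makes two runs comparable; changing it, or the iteration count, and seeing the percentiles barely move is itself a useful credibility check.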
Communicate outcomes using plain language, legible colors, and tasteful emphasis. Pair a short narrative with a chart, highlighting assumptions, caveats, and next steps. Show ranges across sprints and note what would shift them, like headcount, dependencies, or scope volatility. Add a one‑page glossary to reduce confusion. Each detail restores trust, because everyone can understand how the numbers emerged, not just what they say.
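For the narrative piece, a small helper that renders the percentile outputs as plain language can sit alongside the chart. This is a hedged example: `summarize_forecast`, its argument names, and the wording are all illustrative, not a required format.

```python
def summarize_forecast(result, backlog_size, assumptions):
    """Render percentile outputs as a short plain-language summary (illustrative wording)."""
    lines = [
        f"Forecast for {backlog_size} remaining items "
        f"({result['iterations']} runs, seed {result['seed']}):",
        f"  * 50% confidence: done within {result['P50']:.0f} sprints",
        f"  * 80% confidence: done within {result['P80']:.0f} sprints",
        f"  * 90% confidence: done within {result['P90']:.0f} sprints",
        "Assumptions: " + "; ".join(assumptions),
    ]
    return "\n".join(lines)

# Example with hypothetical numbers, stating the caveats right next to the range.
example = {"P50": 6, "P80": 7, "P90": 8, "iterations": 10_000, "seed": 42}
print(summarize_forecast(
    example,
    backlog_size=40,
    assumptions=["current team size holds", "no new external dependencies"],
))
```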