Google OKR Tool: Build It in Sheets (2026)

Marc Sean · April 18, 2026 · 8 min read

Why Dedicated OKR Tools Keep Disappointing Finance Teams

The cost comparison is real but overblown. At $9-11/user/month for tools like Lattice or Betterworks, a 15-person FP&A team spends roughly $1,800/year — not a material line item. The actual problem is integration reliability. Most dedicated OKR tools pull metric data from source systems via connectors that break silently: the Salesforce sync stops working after an API version update, nobody notices until the quarterly review, and someone has to manually reconcile three months of progress data. You already have those source-system numbers in your financial model. Keeping OKR actuals in the same environment eliminates the sync problem entirely.

As of April 2026, Google Sheets' 10 million cell limit is not a constraint for OKR tracking. The constraint is human attention: a sheet with 40+ key results across 8 objectives becomes a status page nobody reads, regardless of what software it's in.

The 3-Tab Structure

The workbook has three sheets: OKRs (live status, the one you present), Inputs (raw metric feeds from your financial model or manual entry), and History (quarterly snapshots). Keeping inputs separate from display logic means you can update actuals by pasting a column, not hunting through formulas.

The OKRs tab drives everything from Inputs! references. A VP-level view stays at 3-5 objectives with 2-4 key results each. Department-level tabs can live in the same workbook, rolling up to company level via SUMIFS across sheets.
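
As one way to wire that up (assuming the Inputs tab keeps KR IDs in column A and current actuals in column C; adjust to your layout), the Actual cell on the OKRs tab can be a single lookup:

=IFERROR(INDEX(Inputs!$C:$C, MATCH($A2, Inputs!$A:$A, 0)), "")

Pasting a fresh column of actuals into Inputs then refreshes every status row without touching a formula on the OKRs tab.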

The Progress Formula That Doesn't Lie

The standard percentage-of-target formula breaks for decrease-direction KRs — churn reduction, CAC reduction, headcount efficiency. If you track monthly churn against a target of 1.8% and you hit 1.6%, =(B2/C2) gives you 88.9% — which looks like you missed when you exceeded the goal.

The correct formula handles both directions:

=IF(D2="decrease",
   MAX(0, MIN(200%, (E2 - B2) / (E2 - C2))),
   MAX(0, MIN(200%, (B2 - E2) / (C2 - E2)))
)

Where B2 = Actual, C2 = Target, D2 = Direction ("increase" or "decrease"), E2 = Baseline. The MAX(0, MIN(200%,...)) wrapper caps the range so a KR that moved backward from baseline doesn't show negative progress and a blowout quarter doesn't show 340% in your board deck.
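
Plugging the churn example into the decrease direction, the quantity being computed is distance traveled from the baseline over distance required:

(2.4% - 1.6%) / (2.4% - 1.8%) = 0.8% / 0.6% = 133.3%

Beating a decrease-direction target now reads as more than 100%, which is what the board deck should say.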

On the Inputs tab, the baseline column typically pulls from your Assumptions tab or prior-period actuals:

=IFERROR(VLOOKUP(B2, 'Prior Period'!$A:$C, 3, FALSE), 'Assumptions'!$C$8)

The History Tab: Quarterly Snapshots and Sandbag Detection

This is where most implementations fall short. "Paste as values at quarter-end" is fine advice, but if you don't structure what you paste, you can't answer the questions that actually come up in planning cycles.

Here's what the History tab should look like after two quarters of snapshots. Each row is one key result for one quarter — no merged cells, no formatting, just a flat table you can query:

| Quarter | KR_ID | Objective | Key Result | Owner | Direction | Initial_Target | Final_Target | Baseline | Actual | Progress_Pct |
|---|---|---|---|---|---|---|---|---|---|---|
| Q1-2026 | KR-01 | Revenue growth | ARR ≥ $4.2M | Revenue | increase | $4,200,000 | $4,200,000 | $3,600,000 | $4,381,000 | 130.2% |
| Q1-2026 | KR-02 | Improve retention | Gross churn ≤ 1.8%/mo | CS | decrease | 1.8% | 1.8% | 2.4% | 1.6% | 113.3% |
| Q1-2026 | KR-03 | Reduce CAC | Blended CAC ≤ $8,400 | Marketing | decrease | $8,400 | $8,400 | $11,200 | $7,950 | 116.3% |
| Q1-2026 | KR-04 | Headcount efficiency | Revenue/FTE ≥ $185K | Ops | increase | $185,000 | $185,000 | $162,000 | $196,400 | 149.4% |
| Q2-2026 | KR-01 | Revenue growth | ARR ≥ $4.8M | Revenue | increase | $4,800,000 | $4,800,000 | $4,381,000 | $4,713,000 | 79.0% |
| Q2-2026 | KR-02 | Improve retention | Gross churn ≤ 1.6%/mo | CS | decrease | 1.6% | 1.6% | 1.6% | 1.7% | -16.7% |
| Q2-2026 | KR-03 | Reduce CAC | Blended CAC ≤ $7,200 | Marketing | decrease | $7,200 | $7,200 | $7,950 | $6,890 | 140.0% |
| Q2-2026 | KR-04 | Headcount efficiency | Revenue/FTE ≥ $210K | Ops | increase | $210,000 | $210,000 | $196,400 | $201,500 | 36.9% |

Two things to notice in that data. First, the Initial_Target and Final_Target columns are separate. Teams that revise targets mid-quarter — and most do, at least informally — need both to reconstruct what actually happened. If they're always identical, fine. But the gap between them is where sandbagging gets papered over.
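
One way to keep that gap visible (assuming the column layout above, with Initial_Target in G and Final_Target in H) is a revision flag that renders any mid-quarter change as a signed percentage:

=IF(H2<>G2, TEXT((H2-G2)/G2, "+0.0%;-0.0%"), "")

A column of blanks means targets held; anything else is a revision worth a line in the retro.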

Second, the Baseline for Q2 KR-02 is 1.6% — which is the Q1 Actual, not the Q1 Target. That's correct. If you use prior-quarter target as this quarter's baseline, you're measuring improvement from where you said you'd be, not from where you actually were. Those compound differently over four quarters and it matters when you're presenting trend to a board.
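
To make that automatic, the Q2 baseline can pull the prior quarter's Actual by KR_ID. A sketch, assuming one History row per KR per quarter with Quarter in A, KR_ID in B, and Actual in J, and that $B2 on the Inputs tab holds the KR_ID:

=SUMIFS(History!$J:$J,
        History!$A:$A, "Q1-2026",
        History!$B:$B, $B2)

SUMIFS works as a numeric lookup here, and it returns 0 instead of an error if the prior-quarter row is missing.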

Answering "Did We Sandbag Q1 Targets?"

The question is asking whether initial targets were set so conservatively that the team knew they'd exceed them. You can surface that systematically with QUERY:

=QUERY(History!A:K,
  "SELECT D, E, AVG(K)
   WHERE A = 'Q1-2026'
   GROUP BY D, E
   ORDER BY AVG(K) DESC",
  1)

That returns average Progress_Pct grouped by key result and owner for Q1. Anything consistently above 130% warrants a conversation, not because hitting 130% is bad, but because if the team knew the target was soft, the number misrepresented risk during the quarter.
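
To pull only the KRs above that line (assuming Progress_Pct is stored as a plain number, so 130% is 1.3), filter in the same QUERY:

=QUERY(History!A:K,
  "SELECT B, D, E, K
   WHERE A = 'Q1-2026' AND K > 1.3
   ORDER BY K DESC",
  1)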

The more revealing analysis is owner-level consistency across quarters. Pull the AVERAGEIFS for each owner across all quarters:

=AVERAGEIFS(
  History!K:K,
  History!E:E, "CS",
  History!K:K, ">0"
)

Sheets formulas don't support inline comments, so the ranges read: K is Progress_Pct, E is Owner, and the ">0" criterion excludes KRs that reversed.

If the CS team averages 118% over four quarters while Ops averages 72%, you have an asymmetry that's worth unpacking before next year's target-setting. Sometimes it's genuine outperformance. Sometimes it's conservative baselines. You can't tell without this data, and you can't build this data without the flat table structure.
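
The per-owner comparison doesn't need one AVERAGEIFS per team; a single QUERY ranks every owner across all quarters at once (again assuming Progress_Pct is stored as a number):

=QUERY(History!A:K,
  "SELECT E, AVG(K)
   WHERE K > 0
   GROUP BY E
   ORDER BY AVG(K) DESC",
  1)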

For the Q1-Q2 comparison above: Marketing hit 116% in Q1 then 140% in Q2, both on decrease-direction KRs. That consistency, combined with Q2 targets that started lower than Q1 actuals, is the pattern a CFO is looking for when they ask whether the team is sandbagging.

Wiring OKR Targets to Your Financial Model

The non-obvious structural choice is where OKR targets live relative to your model's Assumptions tab. If they're separate, someone has to manually sync them. If they're the same cells, changing a target for planning purposes overwrites what the OKR tracker was tracking.

The clean solution: let your financial model's Assumptions tab be the single source, and have OKR targets pull from it via named ranges.

=ARR_Q2_Target

Named ranges in Google Sheets are workbook-scoped, so the reference takes no sheet prefix.

Then when your model updates — say, you revise Q3 ARR guidance from $5.1M to $4.8M — the OKR target updates automatically, and the change is visible in both contexts. Department OKRs roll up to company level the same way you'd consolidate any multi-entity model:

=SUMIFS(
  'Dept OKRs'!$F:$F,
  'Dept OKRs'!$C:$C, "Revenue",
  'Dept OKRs'!$A:$A, "Q2-2026"
)

Here F holds the department actuals being summed, C the objective category, and A the quarter.

This is the part that makes a Google Sheets OKR tracker worth maintaining: it's not a separate system, it's a view into your model.

When the Sheet Actually Breaks Down

Past 30-35 key results, the update cycle becomes the bottleneck. With 40 KRs, a weekly sync means 40 cells to update, some of which require pulling from 3 different source tabs. People start skipping weeks, progress data goes stale, and the tracker stops being trusted. That's the real switching trigger — not cost, not features.
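
One cheap guardrail before you switch: add a Last_Updated date column that owners stamp at each sync (column L here is an assumption) and flag anything stale, so dead rows are visible instead of quietly trusted:

=IF(TODAY() - L2 > 7, "STALE", "")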

If you're at that scale and considering a dedicated tool, the integration reliability question is worth stress-testing before you commit. Ask specifically: what happens when your CRM API version changes? What does the data look like in the tool when a sync fails silently for two weeks? A tool that requires manual reconciliation after failures has the same problem as a spreadsheet, with less transparency.

ModelMonkey runs inside Google Sheets and can pull actuals from your connected data sources directly into your OKR Inputs tab — which keeps the flat-table structure intact and queryable without rebuilding anything. Try ModelMonkey free for 14 days — it works in both Google Sheets and Excel.

Frequently Asked Questions