Here's what the guide gets right, where it stops, and how to close the gap in Google Sheets without rebuilding it from scratch every quarter.
What the re:Work Guide Actually Says
The framework is clean: objectives are qualitative and directional, key results are measurable and time-bound. Google uses a 0.0–1.0 grading scale. An aspirational OKR scoring 0.7 is a success — by design, not a consolation prize. A 1.0 on an aspirational OKR means the bar was too low.
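What the guide doesn't spell out is how a metric-based key result turns into a 0.0–1.0 score in the first place. A common convention — an assumption here, not something re:Work prescribes — is linear progress from a baseline to a target, clamped to the 0–1 range. Assuming baseline, actual, and target sit in B2, C2, and D2:

```
=MAX(0, MIN(1, (C2 - B2) / (D2 - B2)))
```

With a baseline of $3.0M ARR, a target of $5.0M, and an actual of $4.4M, that returns (4.4 − 3.0) / (5.0 − 3.0) = 0.7 — exactly the aspirational success threshold.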
Two OKR types that matter in practice:
- Committed OKRs: must-dos, target 1.0. Hiring plan, product launch, regulatory deadline.
- Aspirational OKRs: stretch goals, target 0.7. Revenue growth rate, NPS improvement, new market penetration.
The guide recommends 3–5 objectives per organizational level, each with 3–5 key results, on a quarterly cadence — with annual OKRs set at the company level first. It also calls for full internal transparency: everyone should be able to see everyone else's OKRs.
One line worth flagging directly: Google's documentation states, "OKRs are not a performance management tool." That's aspirational guidance. In practice, every CFO you show this to will immediately ask how OKR scores tie to the annual bonus pool. The guide doesn't help you navigate that conversation.
OKRs were introduced at Google by John Doerr in 1999, adapted from Andy Grove's Intel framework. The fact that this 25-year-old system is still the dominant goal-setting methodology says something about how few improvements have been made.
Where the Guide Stops Short
The re:Work guide is written for operators, not finance. It's silent on:
- How financial KPIs map to key results (is "$4.2M ARR by Q4" a key result or an input assumption to something else?)
- How to weight scores when you have committed and aspirational OKRs mixed in the same department
- Roll-up math across departments with asymmetric OKR counts
- Period-over-period variance tracking against prior quarters
For an FP&A analyst building a tracker that needs to hold up in a board pack or quarterly management review, the guide gets you to the conceptual finish line and then disappears.
The Grading Math the Guide Glosses Over
Averaging 0.0–1.0 scores across a CFO with 4 OKRs and a VP of Sales with 7 — some committed, some aspirational — produces numbers that look precise and mean nothing.
A committed OKR at 0.5 is a serious problem. An aspirational OKR at 0.5 is below the 0.7 target, but within the normal range for a stretch goal. Blending them into a single department score buries the signal.
The fix: split your scoring into two summary metrics. Committed OKR fulfillment rate (did you hit 0.95+ or not?) and aspirational OKR average score. Show both. Don't blend them into one number for the board.
=COUNTIFS('Q2_OKRs'!C:C,"Committed",'Q2_OKRs'!D:D,">=0.95") / COUNTIF('Q2_OKRs'!C:C,"Committed")
That gives you committed fulfillment as a clean percentage. Anything below 80% is a conversation, not a footnote.
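A percentage tells the board how many committed OKRs missed; it doesn't say which. A FILTER alongside the rate surfaces the misses themselves — this assumes the objective text sits in column A of the OKR tab:

```
=IFERROR(FILTER('Q2_OKRs'!A2:A, 'Q2_OKRs'!C2:C="Committed", 'Q2_OKRs'!D2:D<0.95), "None")
```

FILTER throws an error when nothing matches, so the IFERROR wrapper turns a clean quarter into "None" instead of #N/A.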
For aspirational average by department:
=AVERAGEIFS('Q2_OKRs'!D:D,'Q2_OKRs'!B:B,Assumptions!$B$4,'Q2_OKRs'!C:C,"Aspirational")
Where Assumptions!$B$4 holds the department name. Because that reference is absolute, the formula reports one department; to cover every row of your Dept_Rollup tab without rewriting it 8 times, point the criterion at the row's own department cell instead.
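Concretely — assuming Dept_Rollup lists department names down column A — the fill-down version swaps the absolute Assumptions reference for a relative one:

```
=AVERAGEIFS('Q2_OKRs'!D:D, 'Q2_OKRs'!B:B, $A2, 'Q2_OKRs'!C:C, "Aspirational")
```

Fill it down the rollup tab and each row picks up its own department automatically.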
A Finance-Ready OKR Tracker Structure
The re:Work guide doesn't prescribe a template. Here's a 4-tab structure that produces something you can actually put in front of a CFO:
Assumptions — Current quarter, department list, OKR type definitions, scoring thresholds (committed floor: 0.95, aspirational target: 0.70). Single source of truth for everything else.
Q[N]_OKRs — One row per key result: Objective, Department, Type (Committed/Aspirational), Score (0.0–1.0), Weight (1–3 scale), Key Result, Owner. Keep Department in column B, Type in C, Score in D, and Weight in E — the formulas in this post assume that layout. Weight matters — a key result tied to $12M in revenue shouldn't carry the same score contribution as a key result about updating a process doc.
Dept_Rollup — Weighted scores by department, split by type. Period-over-period delta vs. prior quarter:
=IFERROR(
AVERAGEIFS('Q2_OKRs'!D:D,'Q2_OKRs'!B:B,Assumptions!$B$4,'Q2_OKRs'!C:C,"Aspirational") -
AVERAGEIFS('Q1_OKRs'!D:D,'Q1_OKRs'!B:B,Assumptions!$B$4,'Q1_OKRs'!C:C,"Aspirational"),
"–"
)
Summary — Executive view: company-wide committed fulfillment rate, company-wide aspirational average, departments that missed committed OKRs (flagged), trend line. This tab feeds the board deck.
For the summary flag:
=IF(AND('Q2_OKRs'!C2="Committed",'Q2_OKRs'!D2<0.85),"MISS ⚠","OK")
Set the threshold at 0.85 rather than 0.95 if you want an early warning before a committed OKR officially fails. Your call depending on how your leadership reads these.
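For the Summary tab's trend line, SPARKLINE keeps the chart in-cell — here assuming each quarter's company-wide aspirational average is stored left-to-right in B2:E2 of the Summary tab:

```
=SPARKLINE(B2:E2, {"charttype","line"})
```

One cell per metric row, and no floating chart objects to reposition every quarter.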
Where the Guide's Weighting Logic Breaks Down
The re:Work guide treats all key results as roughly equal within an objective. In a real finance-adjacent OKR, they're not. A Sales OKR with key results for $18.5M new ARR, 92% logo retention, and "improve CRM hygiene" shouldn't weight those three equally in the department score.
Adding a weight column and using weighted average instead of straight average is the single biggest improvement you can make to any OKR tracker. The formulas above use it; the re:Work guide doesn't mention it.
=SUMPRODUCT(
('Q2_OKRs'!B2:B=Assumptions!$B$4)*('Q2_OKRs'!C2:C="Aspirational")*'Q2_OKRs'!D2:D*'Q2_OKRs'!E2:E
) /
SUMPRODUCT(
('Q2_OKRs'!B2:B=Assumptions!$B$4)*('Q2_OKRs'!C2:C="Aspirational")*'Q2_OKRs'!E2:E
)
That's a weighted average of aspirational scores for a single department — array formula, no helper columns, pulls cleanly from the OKR tab. Note the ranges start at row 2: COUNTIFS and AVERAGEIFS shrug off a text header row, but SUMPRODUCT errors out when it tries to multiply one. The re:Work guide would have you eyeball it.
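If the tracker only needs to live in Google Sheets, AVERAGE.WEIGHTED wrapped around FILTER reads more plainly than the SUMPRODUCT pattern — same inputs, same result. (It's a Sheets-only function; Excel users should stick with SUMPRODUCT, which works in both.)

```
=AVERAGE.WEIGHTED(
  FILTER('Q2_OKRs'!D2:D, 'Q2_OKRs'!B2:B=Assumptions!$B$4, 'Q2_OKRs'!C2:C="Aspirational"),
  FILTER('Q2_OKRs'!E2:E, 'Q2_OKRs'!B2:B=Assumptions!$B$4, 'Q2_OKRs'!C2:C="Aspirational")
)
```

The first FILTER pulls the matching scores, the second pulls the matching weights; AVERAGE.WEIGHTED does the division for you.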
For the full formula framework connecting this kind of tracker to scenario routing and progress calculations, the OKR Sheets guide covers the structural piece in more depth.
The Mechanical Tax
A tracker built this way works. The problem is that each new quarter means copying 3 tabs, updating named ranges, fixing department references that broke when someone renamed a team mid-year, and rebuilding the Dept_Rollup formulas if org structure changed.
That's 2–3 hours of formula archaeology at the start of every quarter — exactly the kind of work that shouldn't be on your plate when you're also closing the books and prepping the board pack. ModelMonkey handles the structural rebuild: it can draft the cross-tab AVERAGEIFS chains, flag reference mismatches when a department name changes across tabs, and set up the weighted scoring logic from scratch.
The judgment calls — how to weight a committed headcount OKR against a stretch revenue target when they're in the same CFO scorecard — still yours to make.
Try ModelMonkey free for 14 days — it works in both Google Sheets and Excel.