The setup: why this role, why now
For two decades, people analytics has produced dashboards about individuals. Attrition risk, engagement scores, performance ratings. Josh Bersin's November 2024 assessment of the field is blunt: fewer than 10% of companies can correlate or directly link HR and people data to business metrics. The data exists. The wiring to outcomes does not.
Three signals from 2025 explain why companies are now building functions around teams as the primary unit of analysis. The first is Deloitte's 2025 Global Human Capital Trends report, drawing on roughly 10,000 leaders across 93 countries. Deloitte calls it the shift from productivity to human performance. Inside the data, 61% of managers and 72% of workers say they do not trust their organization's performance management process. Only 26% of organizations say managers are effective at enabling performance. And 36% of managers report they were not fully ready for people management when they took the job. Organizations that grow employees' capacity to think deeply are 1.8x more likely to report better financial results.
The second signal is Gartner's October 2025 release of CHRO priorities for 2026. From a July 2025 survey of 222 CHROs, four priorities emerged: harness AI, shape work in the human-machine era, mobilize leaders for growth, and address culture atrophy. Only 47% of CHROs said culture drives employee performance today. That is a gap large enough to require its own function.
The third signal is tenure pressure. Russell Reynolds' Q1 2025 CHRO Turnover Index found that 19% of CHROs had served less than two years. Average outgoing tenure was 5.2 years, up from 4.4 in 2023. Russell Reynolds describes the first 12 to 18 months as the critical risk period. The Director of Team Intelligence typically reports up to that role and shares the same risk profile. This is a job where the first cycle has to ship.
Phase 1: Weeks 1-2. Behavioral baselines
The temptation in week one is to announce a strategy. Resist it. Russell Reynolds' interviews with new CHROs are consistent on this point: almost all of them went on an in-person listening tour across the regions before announcing anything. The advice they pass on to successors: in the first month, resist the urge to announce initiatives or restructure teams.
Michael Watkins' framework in The First 90 Days (HBR Press, 2013) gives the scaffolding. Promote yourself out of the role you came from. Accelerate learning. Match strategy to situation. Build coalitions. Secure early wins. The Economist calls the book the onboarding bible for a reason: it works.
Two parallel tracks run in weeks 1 and 2.
Track A: Listening tour
Schedule a CEO 1:1 in the first three days. Schedule 1:1s with every direct report of the CHRO in week one. Schedule the CFO and the senior revenue leader in week one (they are the outcome-owners who will judge the function). Add the CIO or CTO (the integrator who controls what data flows where), legal (the regulator), and two or three business-unit GMs (the customer set) by end of week two. Russell Reynolds is direct about why this matters: technology leaders fail in their first year because of poor stakeholder management, even when their technical skills are strong.
Use the listening tour for two outputs. First, a stakeholder map ranked by influence and pain. Second, a list of the three to five teams every senior leader independently mentions as either struggling or strategically important. That overlap is the shortlist for Phase 2.
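The overlap step above is mechanical enough to sketch. This is a minimal illustration, not part of any cited methodology; the leader names and team mentions are hypothetical placeholders.

```python
from collections import Counter

# Teams each senior leader independently mentioned during the listening tour.
# All names below are hypothetical placeholders.
mentions_by_leader = {
    "CFO": ["pricing-pod", "cs-escalations", "emea-launch"],
    "CRO": ["pricing-pod", "enterprise-sales", "cs-escalations"],
    "CIO": ["data-platform", "pricing-pod"],
    "GM-EMEA": ["emea-launch", "cs-escalations"],
}

# A team qualifies for the Phase 2 shortlist when two or more leaders
# raise it independently.
counts = Counter(team for teams in mentions_by_leader.values() for team in teams)
shortlist = [team for team, n in counts.most_common() if n >= 2]
# shortlist -> ["pricing-pod", "cs-escalations", "emea-launch"]
```

The point of automating even this small step is auditability: when a GM asks why their team was not in the first cycle, the answer is a count, not a judgment call.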
Track B: Data inventory
Catalog what already exists. Microsoft Viva Insights, on initial deployment, surfaces 13 months of historical collaboration data, with weekly refreshes extending coverage to 27 months. A single-module deployment runs 4 to 6 weeks; a full Viva Suite enterprise rollout runs 3 to 6 months. Microsoft documents a 45-day pilot timeline. Data is either private to the user or aggregated and de-identified for leadership.
If the company runs Worklytics, it likely already has organizational network analysis (ONA) data on cross-functional collaboration, manager coaching relationships, and bottlenecks. Standard Chartered's Head of People Insights summarized the use case: they used Worklytics to measure cross-functional collaboration, understand manager coaching relationships, and detect bottlenecks. If the company runs Visier, it has a workforce analytics layer that can show retention deltas (one financial-institution customer with 5,000-plus employees reported a 14% lift in new-hire retention) and revenue-per-employee benchmarks. Visier-empowered organizations report $775,364 revenue per employee against a $650,797 baseline elsewhere.
The Insight222 Nine Dimensions for Excellence in People Analytics (Ferrar and Green) groups capabilities into three categories: building foundations, managing resources, and delivering value. Their framing is important: sequential maturity models are no longer valid. The function does not have to be perfect on dimension one before working on dimension nine. Pick the dimensions that match the strategy.
Then read the most-cited HBR piece on this work: Leonardi and Contractor's "Better People Analytics" (November-December 2018). Their argument is that most firms only examine employee attributes, while people's interactions are more telling. Structural signatures in communication networks predict who has good ideas and influence. The raw material is already in the company: the digital exhaust from internal communications.
Phase 1 deliverables
- Stakeholder map with named relationships and a ranked list of 8 to 12 senior leaders.
- Data inventory: collaboration analytics platform, HRIS, performance system, learning system, engagement survey, ONA tooling.
- Privacy and consent baseline. What is collected, what is stored, what is shared, and on whose authority.
- Three to five candidate teams for Phase 2.
Phase 1 mistakes to avoid
The first is the most common: working out what is easy to measure and measuring everything easy to measure regardless of its relevance to the business (a phrasing widely quoted from ADP's analytics group). The second is what myHRfuture calls the worst kind of stakeholder problem: ambivalence can be worse than resistance because everyone may outwardly cheer you on, but indirectly they don't provide the support you need. The fix for both is the same: tie every metric you propose to a specific business question that a specific named leader has agreed is important.
Phase 2: Weeks 3-4. High-leverage teams
By week three, the listening tour has produced a candidate list. The work in weeks 3 and 4 is to filter that list to two or three teams that will carry the first cycle. Three frameworks do most of the filtering.
The Hackman filter
Richard Hackman's five conditions for team effectiveness are the most validated diagnostic in the field. Real team. Compelling direction. Enabling structure. Supportive context. Expert coaching. In a 2004 study with O'Connor covering 64 US Intelligence Community analytic teams, the five conditions accounted for 74% of the variance in team performance. If a candidate team fails on three or more conditions, it does not belong in the first cycle. The intervention is too heavy and the win is too far away.
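The three-or-more-failures cutoff can be written down as a screen. A minimal sketch, assuming a simple yes/no rating per condition; the team names and ratings are hypothetical illustrations, not a validated instrument.

```python
# Hackman's five conditions as a pass/fail screen for first-cycle candidates.
HACKMAN_CONDITIONS = [
    "real_team", "compelling_direction", "enabling_structure",
    "supportive_context", "expert_coaching",
]

def first_cycle_eligible(ratings: dict) -> bool:
    """A team failing three or more conditions is excluded from the first cycle."""
    failures = sum(1 for c in HACKMAN_CONDITIONS if not ratings.get(c, False))
    return failures < 3

# Hypothetical ratings from the diagnostic conversations.
candidates = {
    "cs-escalations": {"real_team": True, "compelling_direction": True,
                       "enabling_structure": False, "supportive_context": True,
                       "expert_coaching": False},   # 2 failures -> eligible
    "emea-launch":    {"real_team": True, "compelling_direction": False,
                       "enabling_structure": False, "supportive_context": False,
                       "expert_coaching": True},    # 3 failures -> excluded
}
eligible = [team for team, r in candidates.items() if first_cycle_eligible(r)]
# eligible -> ["cs-escalations"]
```

In practice the ratings come from the Phase 1 interviews, not a form; the screen just enforces the cutoff consistently across candidates.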
The Edmondson cross-boundary filter
Amy Edmondson and colleagues studied 299 cross-sector teams in their 2020 Academy of Management Discoveries paper on joint problem-solving orientation in fluid cross-boundary teams. The finding is that teams operating across functional, departmental, or organizational boundaries benefit most from explicit teaming practice. If a team is high-stakes, fluid, and cross-boundary, it is a candidate. If it is a stable group inside one function with predictable work, the value of a Team Intelligence intervention is lower.
The McKinsey leverage filter
McKinsey's December 2024 piece "All About Teams" reports that team-focused transformations can produce 30% efficiency gains. Their pattern is to start with a small cohort. At one global oil-and-gas client, an initial cohort of 30 change agents expanded to over 150 in two years. The math the playbook requires: which two or three teams, if they improved by 20% to 30%, would move a metric the CFO or CRO already cares about? Those are the high-leverage teams.
McKinsey's "Go, Teams" research adds the diagnostic vocabulary. Team health drivers explain 69% to 76% of the variance between low- and high-performing teams. The four core areas are configuration, alignment, execution, and renewal. Within those, the four highest-impact drivers are trust, communication, innovative thinking, and decision making. Well-performing teams are good at only 11 of 17 health behaviors on average. The implication: teams do not need to be excellent at everything, only excellent at the right things for their work.
The case against starting with sales
The Microsoft commercial-sales precedent from 2017 is the standard reference. Microsoft used Workplace Analytics to uncover the behaviors of sellers who outperformed their peers and replicate them. Upsells more than doubled. Roughly 5 hours per week per employee were freed at a Fortune 500 customer. The Gong and Chorus pattern (10 to 15 reps for 60 to 90 days before full rollout, then 20% to 30% improvement in deal closure and 25% reduction in sales cycle) follows the same logic.
Sales teams produce clean outcome data. They are the obvious first stop. The argument for going somewhere else first is that the Team Intelligence function differentiates by finding the non-sales high-leverage team: the marketing pod that accelerates product launches without fanfare, the customer-success squad whose escalation rate dropped after a single change to handoffs, the product team whose decision velocity doubled when one meeting was killed. Those wins are harder for incumbent functions to claim and easier for a new function to own.
Phase 2 deliverables
- Two or three teams selected for the first cycle, with the named outcome metric for each.
- Co-signed scope from the team's manager and the next layer up.
- Privacy notice circulated to participants. Voluntary participation confirmed in writing.
- Baseline measurement plan with at least one behavioral baseline (collaboration, meeting load, network density) and one outcome baseline (cycle time, escalation rate, win rate, defect rate).
Phase 3: Weeks 5-8. First cycle
Phase 3 is where the function becomes real. The cycle has to ship.
Borrow from aviation
Crew Resource Management began at a NASA workshop on June 26 to 28, 1979. The term was coined by NASA psychologist John Lauber. The trigger was United Airlines Flight 173 on December 28, 1978. United launched the first comprehensive CRM program in 1981. The validation moment came on July 19, 1989, when United Flight 232 lost all hydraulic systems near Sioux City, Iowa. Captain Al Haynes credited CRM with the partial recovery. After Sioux City, CRM became the global standard. The lesson for Team Intelligence is that an industry can convert ad-hoc team practice into a regulated, measured, repeatable discipline. The structure that worked was a recurring practice cycle embedded into how crews flew every day.
Borrow from Pixar
Pixar's Braintrust is the inverse precedent: a recurring forum where trusted colleagues review films in progress every few months. The Braintrust has no formal authority. The director is not required to take any specific note. Ed Catmull credits it as the cornerstone of what made Pixar movies memorable. The relevant design choice: Braintrust feedback is structured, recurring, and decoupled from authority. People can hear hard things because no one is grading them in the room.
Use intelligent failure as the unit of learning
Edmondson's 2023 book Right Kind of Wrong defines intelligent failure cleanly. It happens in new territory where you cannot look up the answer in advance. It is in pursuit of a goal. It is driven by a hypothesis. And it is no bigger than it needs to be for learning. The first Team Intelligence cycle should be designed to produce intelligent failures, then to debrief them. That is the loop that compounds.
Use Lencioni as the diagnostic vocabulary
Patrick Lencioni's five dysfunctions stack predictably: absence of trust leads to fear of conflict, which leads to lack of commitment, which leads to avoidance of accountability, which leads to inattention to results. The model works as a shared language that helps a team name what is happening without blaming individuals. In the first cycle, the diagnostic conversation should produce one sentence: "the dysfunction we are working on this quarter is X."
Borrow the DORA implementation pattern
The book Accelerate codifies how engineering teams adopted the four DORA metrics. Establish baselines first, then set realistic improvement targets. Decrease lead time for changes by X% within six months. Halve change failure rate over the next year. The pattern applies cleanly to Team Intelligence. Pick one team-level metric per pilot team, baseline it, set a target, run the cycle, measure, repeat.
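The baseline-target-measure loop above is simple enough to make concrete. A sketch under stated assumptions: the metric name, observations, and the 20% target are hypothetical, and the median is one reasonable baseline choice, not a DORA prescription.

```python
from statistics import median

def baseline(observations):
    """Median of recent observations; robust to one-off spikes."""
    return median(observations)

def target(base, improvement_pct):
    """E.g. 'compress launch cycle time by 20% within two quarters'."""
    return base * (1 - improvement_pct / 100)

def on_track(current, base, goal):
    """Progress toward the goal as a fraction of the planned improvement."""
    if base == goal:
        return 1.0
    return (base - current) / (base - goal)

# Hypothetical pilot-team metric: weeks per launch, last five launches.
launch_cycle_weeks = [11, 12, 10, 11, 13]
base = baseline(launch_cycle_weeks)   # 11 weeks
goal = target(base, 20)               # 8.8 weeks
progress = on_track(9, base, goal)    # ~0.91 after reaching 9 weeks
```

One metric, one baseline, one target per pilot team keeps the quarterly recalibration honest: the team either moved the number or it has a hypothesis for why not.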
Run on a continuous operating cadence
McKinsey's "From Me to We" piece on team-based performance management is direct: many organizations celebrate teamwork, but few have performance management systems that formally recognize teams as the muscle that delivers business results. McKinsey identifies three enabling practices: team goals (cascaded via QBRs), team appraisals, and team development. The cadence that works is continuous and embedded in how the team already operates. Standard advice the article echoes: establish a regular cadence for 1:1 meetings between managers and direct reports, ideally weekly, monthly at minimum. Team Intelligence cycles should follow the same logic. Weekly behavioral check-ins. Monthly outcome reviews. Quarterly recalibration.
Phase 3 deliverables
- One Team Intelligence cycle completed on each pilot team. Baseline, intervention, debrief, recalibration.
- One named team metric improved or held with a clear hypothesis for why.
- A written one-page case for each team that an executive can read in three minutes.
- A repeatable cadence the manager of each pilot team has agreed to continue.
Phase 4: Weeks 9-12. Exec report and scale
Phase 4 is the read-out. The Watkins frame applies: secure early wins and target the break-even point. By week 12, the function has to be visibly net-positive in the eyes of the executive team that funds it.
Lead with the story, then the numbers
HBR Analytic Services and Visier reported that storytelling is the best way to share people analytics insights and drive decision making with executives. The exec report from the first 90 days should open with a one-paragraph narrative for each pilot team. What was the team trying to do, what did the cycle change, what did the outcome show, what does the team plan next quarter. Charts support the narrative; they do not lead.
Show the ROI math
The numbers that travel are the ones tied to revenue, cost, retention, or risk. Visier customers report $37.5M in revenue uplift from quality-hire improvements, up to $15M saved by retaining new hires and top performers, and up to $100M saved through optimized workforce planning. Forrester's Total Economic Impact study on Microsoft 365 Copilot projected 112% to 457% ROI over three years for a 25,000-employee composite organization. Use these as external benchmarks for the Team Intelligence ROI model. Do not present them as your own results.
The 90-day exec report's own ROI math should be specific and small. If the marketing pod's launch cycle compressed from 11 weeks to 9 weeks, what is the value of two extra weeks of in-market revenue per launch, multiplied by launches per year. If the customer-success squad's escalation rate dropped 18%, what is the value of one fewer enterprise escalation per quarter. A finance review will trust a small verifiable number and discount a large unverifiable one.
Tie the function to the gaps the executive team already sees
Gartner's CHRO 2026 priorities surfaced the gap directly: only 47% of CHROs say culture drives employee performance today. The Team Intelligence function answers exactly that gap. The exec report should name it that way. Russell Reynolds' tenure data is the operational urgency: 19% of CHROs in role less than two years, with the first 12 to 18 months as the critical risk period. The Director of Team Intelligence and the CHRO succeed or fail on the same clock.
Phase 4 deliverables
- An executive read-out (10 to 15 slides) covering pilot results, ROI math, the proposed next cycle, and the scaling roadmap.
- A budget request for the next quarter tied to specific named teams.
- A privacy and ethics memo addressed to legal and the CHRO.
- A written charter for the function: what it does, what it does not do, who it serves.
Privacy and compliance: the constraint that shapes everything
An Insight222 survey of 57 companies found that 81% of people analytics leaders said data ethics or privacy concerns sometimes or often jeopardized their workforce analytics projects. Thirty percent of respondents could not affirm tangible value from people analytics work over the past 12 months. Ethics and value travel together. Functions that get this wrong stall.
Two specific regulatory anchors apply in 2026. The Colorado AI Act (SB 24-205), signed May 17, 2024, takes effect June 30, 2026. Employers using AI for consequential employment decisions are deployers of high-risk AI systems and carry disclosure and recordkeeping obligations. A March 2026 update from the Colorado AI Policy Work Group proposes an automated decision-making technology (ADMT) replacement framework. The EU AI Act sets the international model. The practical implication: any Team Intelligence tooling that influences hiring, promotion, or termination needs an explicit deployer record, a documented use case, and an internal policy on human review.
The principles to operate by are simpler than the regulations.
- Voluntary participation. Always. Documented.
- No surveillance language internally or externally. Patty McCord's Netflix rule applies: you only say things about fellow employees you say to their face.
- Aggregate data for leaders. Individual-level data only with consent and a clear purpose.
- No team data fed into performance reviews without an explicit governance process.
Common failure modes
Five patterns recur in functions that do not survive the first 18 months.
Over-scoping. The temptation to roll out across the whole company in quarter two. The fix is the Phase 2 filter. Two or three teams. Real outcomes. Visible wins.
Measuring what is easy. The ADP critique. The fix is to anchor every metric to a specific business question owned by a specific named leader.
Inside-out instead of outside-in. Building the perfect internal taxonomy before talking to a single business unit. The fix is the Phase 1 listening tour. The function exists to serve the business.
Data perfectionism. Refusing to ship a cycle until the warehouse is clean. The fix is the DORA pattern. Baseline what you can baseline. Improve from there.
Consulting cadence instead of operating cadence. Quarterly engagements with a slide deck at the end. The fix is the McKinsey "From Me to We" prescription: continuous, embedded, weekly check-ins, monthly outcome reviews, quarterly recalibration. The function is a muscle the team trains every week.
The QuestWorks approach
QuestWorks is the Team Intelligence Engine. The game itself runs on QuestWorks' own cinematic, voice-controlled web platform; Slack handles install, invites, onboarding, leaderboards, and admin commands.
Teams of 2 to 5 run a 25-minute quest each week. Voice-controlled. Fully on-platform. Players surface one of nine public HeroTypes. The QuestDash leaderboard is visible to everyone and uses strengths-based callouts only. HeroGPT, the Slack-hosted private AI coach, never shares its conversations upstream. Leaders receive a separate Weekly Team Health Report with aggregate trends and individual strengths-based highlights. Participation is voluntary and is not tied to performance reviews.
For a Director of Team Intelligence running the 90-day playbook, QuestWorks slots into Phase 3 as the recurring behavioral practice instrument. It produces the weekly cadence the function needs without requiring a new survey program. The tagline is the operating principle: Team Intelligence, Powered by Play. Pricing is $20 per user per month with a 14-day free trial. Install in under a minute.
For more on the role itself, the operating system it sits inside, and the category it belongs to, see Director of Team Intelligence by 2030, The Team Management Operating System, First 90 Days With a New Team, and Team Intelligence: The New Category.