The Modern Data Stack Precedent
In 2018, "the modern data stack" was a term in five blog posts and a Twitter thread. By 2022, it was a board slide. By October 2025, Fivetran and dbt Labs announced an all-stock merger projecting roughly $600M in combined ARR, validating the thesis at exit (HPCwire). The arc took four years from naming to consolidation.
What named it was a layered description. Fivetran handled ingestion. Snowflake handled storage. dbt Labs (with Tristan Handy as the chief evangelist of the architecture) handled transformation. Looker handled the BI layer. Each tool was buyable on its own, but the layered description created a contract: vendors could plug into the standard, buyers could swap pieces, investors could underwrite category leaders inside each layer (dbt Labs).
Team data in 2026 is where analytics data was in 2018. There are point tools at every step. None of them know about each other. A pulse survey vendor doesn't see what the ONA platform sees. The leadership coach doesn't see what the L&D platform records. The decision-of-record (if there is one) sits in someone's Notion page nobody opens.
That is what makes this the moment to name the stack. The seven layers describe an architecture that already exists in fragments and is starting to consolidate.
Layer 1: Behavioral Telemetry
What it does: Passive capture of how teams actually work. Meetings attended, channels active, decisions logged, work shipped. The exhaust data the rest of the stack runs on.
Vendors today: Microsoft Viva Insights at roughly $3 per user per month, organized around three insight categories (cross-group collaboration, insular collaboration, brokers of information flow) (Microsoft). Worklytics for privacy-first ONA with 25+ pre-built connectors and 400+ metrics, working from log and exhaust data only (Worklytics). Visier as the people-data warehouse, used by more than 55,000 users across 1,000+ companies (Visier). Polinode for ONA visualization at scale, up to 50,000 nodes and 250,000 edges (Polinode). On the engineering side, the DORA metrics from Accelerate (Forsgren, Humble, Kim, 2018) serve the same purpose: deploy frequency, lead time, MTTR, change failure rate. DORA was acquired by Google in 2018 (IT Revolution).
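For concreteness, all four DORA metrics can be derived from deployment logs alone. A minimal sketch in Python, using a hypothetical record format (the field names and sample values are illustrative assumptions, not any vendor's schema):

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: commit time, deploy time,
# whether the deploy caused a failure, and when service was restored.
deploys = [
    {"commit": datetime(2026, 1, 5, 9),   "deploy": datetime(2026, 1, 5, 15),
     "failed": False, "restored": None},
    {"commit": datetime(2026, 1, 7, 11),  "deploy": datetime(2026, 1, 8, 10),
     "failed": True,  "restored": datetime(2026, 1, 8, 12)},
    {"commit": datetime(2026, 1, 12, 14), "deploy": datetime(2026, 1, 12, 18),
     "failed": False, "restored": None},
]
window_days = 30

# Deploy frequency: deployments per day over the window.
deploy_frequency = len(deploys) / window_days

# Lead time for changes: mean commit-to-deploy duration.
lead_time = sum((d["deploy"] - d["commit"] for d in deploys), timedelta()) / len(deploys)

# Change failure rate: share of deployments that caused a failure.
failures = [d for d in deploys if d["failed"]]
change_failure_rate = len(failures) / len(deploys)

# MTTR: mean time from a failed deploy to service restoration.
mttr = sum((d["restored"] - d["deploy"] for d in failures), timedelta()) / len(failures)

print(deploy_frequency, lead_time, change_failure_rate, mttr)
```

The point of the sketch is that none of this requires surveys or surveillance: every input is an event the delivery pipeline already emits.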
The gap: Every behavioral telemetry tool measures symptoms. Meeting load, response time, network density. None of them generate behavior under pressure. They watch what already happened and can't run a team through a scenario to see what the team would do next.
The privacy ceiling: Microsoft's Productivity Score launched in late 2020 with usernames attached. Privacy researcher Wolfie Christl called it "a full-fledged workplace surveillance tool" and Microsoft removed individual usernames within about five weeks (The Register). The pattern repeated in October 2025 when Microsoft introduced new Copilot Usage Benchmarks and Teams location tracking via Wi-Fi, with peer comparison groups required to contain at least 20 companies (Winbuzzer). An Insight222 survey of 57 companies found 81% of people analytics leaders said projects were sometimes or often jeopardized by data ethics and privacy concerns (AIHR).
Layer 1 is the most invested-in and the most legally fragile. Without a layer that produces causal signal (Layer 3), Layer 1 is stuck in a debate about whether watching is allowed.
Layer 2: Archetype Mapping
What it does: Identifies behavioral archetypes, strengths, and team-role tendencies so leaders can interpret behavior in context.
Vendors today: CliftonStrengths (Gallup), launched in 1999, has now been completed more than 30 million times (Gallup). MBTI is used by more than 88% of Fortune 500 companies in 115 countries per its publisher (JSTOR Daily). DiSC (Wiley) reports 30M+ assessments globally (Wiley). Belbin Team Roles (Meredith Belbin, 1981) defines nine roles and is translated into 16 languages (Belbin). Big Five / OCEAN remains the empirically validated alternative, with conscientiousness predicting longevity and extraversion predicting sales success (Wikipedia). Roughly 80% of Fortune 500 companies use personality tests in some form (Leaders.com).
The gap: Archetype tools are static and self-reported. They don't observe behavior, they don't refresh as people grow, and they don't condition on the rest of the team. MBTI specifically draws fire here: some researchers estimate roughly 75% of test-takers receive a different type when retaking after a short period, while MBTI's own publisher-funded studies report 0.81 to 0.86 test-retest reliability over 6 to 15 weeks (Human Performance Ireland).
An archetype taken once at onboarding is a snapshot. The Team Intelligence Stack needs archetype state that updates from observed behavior in Layer 1 and observed performance in Layer 3.
Layer 3: Dynamics Simulation
What it does: Where teams practice. Scenarios, roleplay, simulated decisions, and structured debriefs that produce behavior that can be measured.
The historical precedent is real and well-documented. In aviation, the term Crew Resource Management (CRM) was coined at a 1979 NASA workshop convened after the United Airlines Flight 173 crash of December 28, 1978. That workshop drew 70 people from 32 organizations across 9 countries, and United Airlines launched the first comprehensive CRM program in 1981. CRM is now the global standard in commercial aviation (FAA). Edmondson, Bohmer, and Pisano's 2001 field study of 16 hospitals adopting minimally invasive cardiac surgery found that success was driven by team learning, psychological safety, and team stability, not skill or resources (ASQ). Frazier's 2017 meta-analysis of 136 independent samples covering more than 22,000 individuals and roughly 5,000 groups confirmed psychological safety drives voice, creativity, learning behaviors, and task performance (Wiley).
Outside research: F1 pit crews practice more than 1,000 pit stops per season to complete 36 tasks in 2 seconds. Navy SEAL squads run 100+ mock raids before operations. Top esports rosters run 5 to 6 scrims per day across 8 to 10 hour days. Every elite team operates on simulation as the primary training surface.
The investor record: CodeSignal has raised roughly $90M across 6 rounds for individual coding skill assessment. Strivr has raised about $86M total for individual VR enterprise training across Walmart, Verizon, and Fidelity (VentureBeat). Mursion has raised about $55M for AI plus actor avatar roleplay (individual). Yoodli closed a $40M Series B in December 2025 on top of a $13.7M Series A in May 2025, with valuation tripling to $300M, for AI speech coaching (individual) (TechCrunch). Attensi has raised in the $32M to $39M range for gamified solo training simulation.
That is more than $250M raised across five companies for individual simulators. Zero went to team simulators. This is the gap. Every dollar has gone to making one person better in a sandbox. None of it built the team-level reasoning, dependency, and decision-making the aviation industry, surgical teams, F1 crews, SEAL squads, and esports rosters have all settled on as the only training surface that works.
Layer 4: Coaching Layer
What it does: One-to-one coaching adapted to the data the rest of the stack produces. Layer 4 is where insight turns into a conversation.
Vendors today: BetterUp raised $600M total through Series E in October 2021 at a $4.7B valuation, with $214.6M in 2024 revenue (BetterUp). LifeLabs Learning has run more than 500,000 learners across 2,400+ companies through live cohort workshops (LifeLabs). Hone runs live cohort-based leadership training and launched Hone AI, a voice-first coach, in May 2025 (Hone). Bravely has raised $24M total for anonymous on-demand coaching with customers including Autodesk, Pinterest, and Twilio (Reworked). AI conversational coaching is rising fast through Yoodli, Pi (Inflection), and others.
Market frame: The International Coaching Federation reported $5.34B in global coaching revenue (2022 data, published 2023), $6.25B in 2024, and projected roughly $7.30B in 2025, with active coaches climbing from 71,000 in 2019 to roughly 145,500 in 2024 (ICF).
The gap: Coaching is expensive, scarce, and disconnected from telemetry. A BetterUp coach doesn't see the manager's Slack patterns. A LifeLabs facilitator doesn't see the team's simulation results. The coaching layer needs to consume Layer 1 telemetry and Layer 3 simulation outputs as inputs. Without that pipe, every coaching engagement starts from a self-reported intake form.
Layer 5: Cross-Team Analytics
What it does: Aggregate views across teams. How do groups compare? Where are the silos? Where does collaboration actually flow?
Vendors today: Microsoft Viva Insights cross-team views, Visier org-level rollups, Lattice (whose own documentation reports 41% of customers run pulses weekly, 35% biweekly, and 24% monthly) (Lattice), Culture Amp, Polinode, Worklytics. The continuous-survey caveat: roughly 90% of companies running continuous surveys cannot keep response rates above 50% when surveying weekly or monthly.
The intellectual foundation: Conway's Law (1967) holds that organizational communication structure mirrors system architecture. MIT and Harvard research has produced "strong evidence to support the mirroring hypothesis" (Martin Fowler). Deloitte has published an ONA case where a client had 14 functional silos in one business unit; ONA revealed natural collaboration formed around 4 customer segments and the org restructured to 6 customer-aligned groups (Deloitte). Rob Cross's well-known sales-collaboration case found 5% of people accounted for 25% of revenue-producing collaborations, and after ONA-driven intervention saw a 30% increase in proposals up to $1M and roughly 10% annualized revenue growth (Rob Cross).
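The broker pattern Cross and Deloitte describe is computable directly from a collaboration graph. A small pure-Python sketch on toy data (the names and ties are invented for illustration): the people sitting on the most shortest paths between everyone else — the highest betweenness — are the brokers.

```python
from itertools import permutations
from collections import deque

# Toy collaboration graph: ties from messages and meetings (names hypothetical).
edges = [("ana", "ben"), ("ana", "cai"), ("ben", "cai"),   # cluster 1
         ("dee", "eli"), ("dee", "fay"), ("eli", "fay"),   # cluster 2
         ("cai", "dee")]                                    # single bridge tie
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def shortest_paths(src, dst):
    """Enumerate all shortest paths from src to dst via breadth-first search."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # BFS yields paths in nondecreasing length; longer ones can't be shortest
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in adj[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

# Brokerage score: how often a person sits between two others on a shortest path.
score = {n: 0 for n in adj}
for s, t in permutations(adj, 2):
    for path in shortest_paths(s, t):
        for mid in path[1:-1]:
            score[mid] += 1

brokers = sorted(score, key=score.get, reverse=True)[:2]
print(brokers)  # the two people bridging the clusters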
The gap: Cross-team analytics either run on survey data (lagging, fatigued) or passive ONA (privacy-fragile). Neither produces the causal signal of why a team performs. Knowing 5% of the org drives 25% of revenue collaboration is useful. Knowing why those 5 people behave that way under specific conditions, and how to develop the next 5, requires Layer 3.
Layer 6: Leadership Feedback Loop
What it does: Closes the loop. What gets reported up to leaders, in what form, at what cadence, with what action attached.
The shift: Adobe killed annual reviews in March 2012 and replaced them with "Check-in," saving roughly 80,000 hours of manager time per year and seeing a 30% reduction in voluntary turnover. Donna Morris was the catalyst (Stanford GSB). Deloitte and GE followed Adobe in dropping annual reviews. Deloitte's research finds the optimal frequency of performance check-ins is weekly (Lighthouse). McKinsey's "From me to we" reported a financial institution that replaced individual contact-center objectives with team objectives saw more than 10% productivity gains within months (McKinsey).
Edmondson and Kerrissey reframed the work in HBR in 2025: "What People Get Wrong About Psychological Safety" (May 2025) and "In Tough Times, Psychological Safety Is a Requirement, Not a Luxury" (November 2025) (HBR May 2025 and HBR November 2025).
Failure modes inside Layer 6: Dashboards exceeding 12 KPIs show 40% lower engagement. Real-time dashboards cost roughly 4x more to maintain but benefit only about 8% of use cases. Vanity metrics measure exposure not impact (Improvado).
The gap: Leaders today get one of two extremes. Annual surveys, where action lags 11 months behind the problem. Or pulse-survey overload, where response rates collapse and the dashboard tells you nothing usable. Layer 6 has to close to behavior, which is why it can only work if Layer 3 is feeding it real evidence of how teams act.
Layer 7: Decision Audit
What it does: Records what decisions get made, by whom, with what reasoning, and what happened next.
Patterns that work: Amazon's six-pagers became the default after Bezos emailed the S-Team on June 9, 2004 at 6:02 PM, subject "No powerpoint presentations from now on at steam," replacing PowerPoint with a 30-minute silent reading then discussion. Bezos later wrote that "writing a good four-page memo is harder than 'writing' a 20-page PowerPoint" (Inc.). Bridgewater built the Dot Collector for real-time peer feedback during meetings and the Issue Log to record every mistake (Quartz). Bain's RAPID framework defines who Recommends, Agrees, Performs, Inputs, and Decides (Bain).
The cost: McKinsey's "Decision Making in the Age of Urgency" (2019) surveyed 1,259 respondents and found only 20% say their organization excels at decision-making. More than 50% spend over 30% of work time on decisions. The cost of bad decisions at a Fortune 500 was estimated at 530,000 days of manager time per year, roughly $250M in annual wages (McKinsey).
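Those two McKinsey numbers imply a specific loaded cost per manager-day. A quick sanity check (the implied daily wage is derived here from the two figures, not quoted by McKinsey):

```python
manager_days_per_year = 530_000
annual_wage_cost = 250_000_000  # dollars

# Implied loaded cost of one manager-day spent on decisions.
implied_daily_wage = annual_wage_cost / manager_days_per_year
print(round(implied_daily_wage))  # roughly $472 per manager-day
```

At about 250 working days a year, $472 per day works out to a loaded salary near $118K, which is plausible for the mid-level managers the estimate covers.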
Market frame: The decision intelligence market sits in the $13B to $19B range in 2024 and is forecast to reach $36B to $58B by 2030 to 2032 across major analyst sources (MarketsandMarkets and Grand View Research).
Tools that touch the layer today: Linear (issues as source of truth), Notion, Confluence (used by 75,000+ customers including 80% of Fortune 500), Coda, Loom for async video decisions, and DORA metrics where deployment frequency functions as decision-to-shipped latency.
The gap: Most decisions never get audited. The few organizations that do audit them (Amazon, Bridgewater) had to engineer the practice over years. There is no off-the-shelf decision system of record. The companies running the playbook built it themselves through custom tooling and culture.
The Integration Question
Naming seven layers is the easy part. The harder question is which vendor sits in the middleware position. In the modern data stack, dbt won that seat by owning the transformation layer and forcing every other tool to integrate against its conventions. Looker's LookML and Snowflake's marketplace played similar gravitational roles.
For Team Intelligence, no vendor has the seat yet. Layer 5 (cross-team analytics) is the natural candidate because it sits between raw signal and leadership reporting. But Lattice, Culture Amp, and Visier all want it, and none of them currently consume Layer 3 simulation outputs because Layer 3 has barely been built.
In 2026, the integration question is open. By 2028, it will not be.
The Counter-Argument
One critique writes itself: "this is just an excuse to buy seven tools." The same critique faced the modern data stack in 2018, and the answer turned out the same way. Naming the architecture made companies buy with intent. Most leadership teams in 2026 already pay for two or three of these layers. They're paying Lattice for pulses, BetterUp for coaching, and Microsoft for Viva, without realizing those vendors share a customer because no one has drawn the diagram.
Drawing the diagram exposes the seams between what they already own.
Where QuestWorks Fits
QuestWorks is the Team Intelligence Engine. It lives at Layer 3 (Dynamics Simulation) and produces signal that feeds Layer 1 (Behavioral Telemetry), with hooks into Layer 4 (Coaching) and Layer 6 (Leadership Feedback Loop).
Quests run 25 minutes for 2 to 5 people on QuestWorks' own cinematic, voice-controlled platform. Sessions surface how teams behave under pressure: who delegates, who defers, who escalates, who recovers. HeroTypes (nine archetypes, public to teammates and managers) update as people play. The QuestDash leaderboard shows strengths-based callouts visible to everyone. The Weekly Team Health Report is a separate, leader-only feature.
Slack is the integration layer for install, invites, onboarding, leaderboards, and admin commands. HeroGPT is the Slack-hosted exception: private AI coaching that never shares upstream. The game itself never runs inside Slack.
Pricing is $20 per user per month with a 14-day free trial. Participation is voluntary and never tied to performance reviews.
For the longer category argument, see why Team Intelligence is a new category, what Team Intelligence actually means, how to measure team dynamics, and the employee experience stack in 2026.
Team Intelligence, Powered by Play.