Big Picture 12 min read

Team Intelligence vs. People Analytics

People analytics covers the individual layer. Team Intelligence covers the system layer. The two are complementary, and the data model gap explains why.

By Asa Goldstein, QuestWorks

TL;DR

People analytics measures individuals: their attributes, performance, sentiment, and connections. Team Intelligence measures the system: how teams behave, decide, coordinate, and recover under pressure. HRIS vendors like Workday and SAP SuccessFactors run the employee-of-record layer. Engagement platforms like Lattice, Culture Amp, and 15Five run the sentiment layer. People analytics vendors like Visier, ChartHop, and Worklytics run the rollup-and-network layer. None of those vendors natively measure team behavior in repeatable conditions, because their data models are built around the individual employee record. The two categories are complementary. Team Intelligence sits on top of the people layer and adds the missing behavioral data captured during simulation.

The phrase "people analytics vs team analytics" surfaces a real category split that the HR technology stack has spent fifteen years pretending was solved by averaging individual scores. People analytics measures individuals. Team Intelligence measures the system. The data models of the dominant vendors make that difference structural, and the budget map for 2026 is starting to reflect it.

The HR technology market hit $43.66B in 2025 and is projected to reach $47.32B in 2026 at a 9.2% CAGR (Fortune Business Insights, 2025). The people analytics slice sits between $5B and $13B today, with mid-2030s projections ranging from $13B to $41B. The layer it covers is the individual employee record.

The Layers Are Different

Two stacks operate inside an HR technology budget today. The first is the people layer: a record of employees with attributes, performance, engagement, sentiment, and network position. The second is the team layer: behavioral data on how a group of those employees coordinates, decides, escalates, and recovers when the work gets hard.

The people layer is mature. Workday alone serves more than 11,000 customer organizations, including roughly 60 to 65 percent of the Fortune 500, with over 70 million users on the platform and FY2025 revenue of $8.446B (Workday FY2025 Results). SAP SuccessFactors covers more than 100 million users in 200+ countries (SAP SuccessFactors). Engagement platforms add scale of their own: Culture Amp alone serves 25M+ employees across 6,500+ companies. The data is collected at the employee row.

The team layer is structurally absent from those schemas. Team data exists only as a derived attribute (manager_id, department_id, cost_center_id) without its own behavioral events. That distinction is the entire argument. A platform built to measure teams as systems needs an event model that captures "the team made a decision under pressure" or "the team handled a disagreement." HRIS schemas have no such event. People analytics vendors do not generate one. Engagement platforms approximate it through self-report.
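The schema gap can be sketched in miniature. Everything below is illustrative: the field names, the dataclasses, and the derive_team helper are hypothetical stand-ins, not any vendor's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# What an HRIS-style schema captures: one row per employee, with team
# membership expressed only as foreign keys (illustrative field names).
@dataclass
class EmployeeRow:
    employee_id: int
    manager_id: int          # the "team" exists only as a derived attribute
    department_id: int
    cost_center_id: int
    engagement_score: float  # individual self-report

# What a team-layer data model would need, and incumbent schemas lack:
# an event whose subject is the team itself, not any one member.
@dataclass
class TeamBehaviorEvent:
    team_id: int
    occurred_at: datetime
    event_type: str          # e.g. "decision_under_pressure", "disagreement_resolved"
    outcome: str

# In the employee-row model, a "team" is only ever a query result:
def derive_team(rows: list[EmployeeRow], manager_id: int) -> list[int]:
    return [r.employee_id for r in rows if r.manager_id == manager_id]
```

The point of the sketch is the asymmetry: EmployeeRow exists in every incumbent schema, TeamBehaviorEvent exists in none of them, and the only way to get a team out of the first is to run a query.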

The HRIS Layer

Workday was founded in March 2005 by Aneel Bhusri and Dave Duffield as the cloud-native successor to PeopleSoft. SAP SuccessFactors was acquired by SAP for roughly $3.4B in December 2011. Oracle HCM Cloud serves customers including the US Navy, Toyota, and UnitedHealth Group. BambooHR runs the small and mid-market HRIS layer with 34,000+ customers covering 3M+ employees across 190 countries.

What HRIS does well is straightforward. It maintains the employee record of truth: roles, compensation, reporting line, time off, benefits, performance reviews. The data model is the employee row, with relationships expressed as foreign keys.

What HRIS cannot see is team behavior. The schema has no event for "a cross-functional group made a resource allocation call." It has no field for "how this team handled escalation last quarter." The notion of a team as a measured object simply does not exist in the data model, because HRIS was designed in an era when team behavior was assumed to be the manager's job to assess in a one-on-one.

The Engagement Layer

Lattice was founded in 2015 by Jack Altman and Eric Koslow. It has raised approximately $332M total, including a $175M Series F in January 2022 at a $3B valuation, and serves more than 3,500 customers (Lattice, 2022). Culture Amp was founded in 2009 by Didier Elzinga and serves 6,500+ companies with 25M employees, with a $100M Series F in July 2021 at a $1.5B valuation (TechCrunch, 2021). 15Five was founded in May 2011 and runs check-ins and manager development for roughly 3,500 customers.

Engagement platforms collect sentiment. The data captured is self-report: how an employee feels about their manager, their work, their growth, their team. Aggregated to the team or department, those scores get read as if they were team-level signals. The signal is still individual sentiment, averaged.

Two problems sit beneath the surface. Self-report data has documented limits as a predictor of behavior: decades of social-psychology research on attitudes versus action show that what people say on a questionnaire correlates only weakly with what they actually do. Survey fatigue compounds the problem. The third pulse survey of the quarter does not produce three independent reads on team health; it produces declining response rates and increasingly self-curated answers.

The People Analytics Layer

Visier was founded in 2010 by John Schwarz and has raised approximately $220M total, including a Series E of $125M led by Goldman Sachs at a $1B+ valuation in June 2021 (Visier, 2021). ChartHop was founded in 2019 and has raised approximately $74.1M total, including $35M Series B in 2021 led by a16z and $20M Series C in January 2023. Worklytics runs privacy-first organizational network analysis with 25+ pre-built integrations across Slack, Teams, Google Workspace, Office 365, GitHub, Jira, Zoom, and more, surfacing 400+ metrics under GDPR and CCPA compliance. Microsoft Viva Insights runs at $6 per user per month for the management insights tier and $12 per user per month for the full Viva Suite (Microsoft Viva pricing).

The people analytics layer adds two capabilities the HRIS layer does not natively provide: rollups across the organization (department-by-department turnover, manager span of control, time-to-hire) and network analysis (who collaborates with whom, who is centrally connected, who is becoming siloed). Both capabilities matter. Neither one measures team behavior under pressure.

The rollup problem is mechanical. A team-level engagement score is the average of individual sentiment scores. Whether that average actually represents the team as a system depends on whether team members agree, and whether within-team variance is small enough to justify treating the mean as a team property. Most HR platforms do not run those statistical tests. The dashboard simply shows a number.
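The mechanics fit in a few lines, with made-up scores: two teams can share an identical mean while one genuinely agrees and the other is polarized.

```python
from statistics import mean, pvariance

# Two hypothetical five-person teams with identical average engagement.
team_a = [3.0, 3.0, 3.0, 3.0, 3.0]   # genuine agreement
team_b = [5.0, 5.0, 3.0, 1.0, 1.0]   # polarized team, same mean

assert mean(team_a) == mean(team_b) == 3.0

# The dashboard shows 3.0 for both. Within-team variance tells them apart:
print(pvariance(team_a))  # 0.0 -> the mean plausibly describes the team
print(pvariance(team_b))  # 3.2 -> the mean describes nobody on the team
```

The dashboard number is the same in both cases; only the agreement check reveals that one of the two means is fiction.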

Network analysis is the closest thing the people analytics stack has to a team-layer signal. After more than fifty years of academic ONA research, only about 8 percent of companies use ONA in practice (Deloitte). Network density tells you who talks to whom. It does not tell you what happened when they did, whether the conversation produced a decision, whether the disagreement got resolved. The behavior is in the room, and the room is empty in the data.
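What network density actually computes can be shown over a made-up edge list; the names and edges here are invented for illustration.

```python
# An illustrative undirected communication graph: who messaged whom.
edges = {("ana", "ben"), ("ben", "chloe"), ("ana", "chloe"), ("chloe", "dev")}
people = {p for e in edges for p in e}

n = len(people)
max_edges = n * (n - 1) / 2     # all possible undirected pairs
density = len(edges) / max_edges

print(round(density, 2))  # 0.67 -> a well-connected group; what was decided, unknown
```

Density answers "how connected is this group" and nothing else: the same 0.67 describes a team that resolved three hard disagreements and a team that argued in circles.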

The Aggregation Argument

Researchers in team effectiveness have long argued that aggregating individual scores into a team metric measures something else entirely. It averages the players. The line of work most often cited is J. Richard Hackman's Leading Teams (2002) and Ruth Wageman, Debra Nunes, James Burruss, and Hackman's Senior Leadership Teams (2008), which studied more than 120 senior leadership teams and found that fewer than a quarter were rated outstanding. Their Six Conditions framework attributes 60 to 80 percent of team-effectiveness variance to whether the team has the right structural conditions in place (6teamconditions.com).

The methodological literature is more pointed. Treating a team-level construct as valid requires empirical evidence of within-team agreement (rwg, ICC(1), ICC(2)) before averaging individual scores into a team property (Justifying Team-Level Constructs, 2015). Most HR platforms do not run those tests. The team-level number ships as if aggregation alone produced team-level meaning.
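The single-item rwg statistic from that literature is simple to state: observed within-team variance is compared against the variance a uniformly random response pattern would produce. A minimal sketch, assuming a 5-point scale and the sample-variance convention; negative values are conventionally truncated to zero before interpretation.

```python
from statistics import variance  # sample variance, n-1 denominator

def rwg(scores: list[float], scale_points: int = 5) -> float:
    """Single-item r_wg (James, Demaree & Wolf): agreement relative to
    the variance expected under a uniform random-response null."""
    expected_null = (scale_points ** 2 - 1) / 12  # 2.0 on a 5-point scale
    return 1 - variance(scores) / expected_null

print(round(rwg([4, 4, 4, 5, 4]), 2))  # 0.9 -> high agreement, averaging defensible
print(round(rwg([2, 4, 2, 4, 3]), 2))  # 0.5 -> weak agreement, the mean is suspect
```

This is the test most platforms skip: it costs a few lines, and without it the team-level number on the dashboard carries no evidence that a team-level property exists.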

The DORA metrics work in software delivery offers a useful precedent for what team-layer measurement looks like when it is designed correctly. DORA explicitly measures team and system performance and warns that applying the metrics to individuals creates perverse incentives. Across more than 35,000 organizations surveyed, elite teams deploy multiple times per day; low performers deploy once a month to once every six months (DORA). DORA shows what a team-level data layer looks like when the unit of analysis is the team from the start.

Why the AI Moment Changes the Economics

Josh Bersin observed in November 2024 that after billions of dollars spent on HCM platforms, fewer than 10 percent of companies can correlate or directly link HR and people data to business metrics (Bersin, 2024). The platforms work; the layer they cover does not connect to outcomes the way executives expected.

Deloitte's 2025 Global Human Capital Trends, drawing on 13,000+ professionals across 90 countries, framed the shift as moving from productivity to human performance, with the team as the locus where performance actually emerges (Deloitte 2025 GHCT). Gartner's October 2025 CHRO research found that only 47 percent of 222 CHROs surveyed in July 2025 said culture currently drives performance in their organization (Gartner, 2025). The CHRO seat is acknowledging that the lever it has been pulling does not reach the layer where the work actually happens.

Cautionary Tales

Microsoft Productivity Score, launched in late 2020, became the canonical example of what happens when a vendor confuses individual telemetry for team performance. The product tracked, by name, how often workers sent email, used chat, and posted in Yammer. Privacy researcher Wolfie Christl described it as a full-fledged workplace surveillance tool. Within roughly five weeks, Microsoft removed individual-level data from the product (The Register, 2020). The lesson holds: shipping individual telemetry as a team or organizational signal collapses on contact with the workforce that has to live inside it.

Pixar's Braintrust runs the opposite pattern. Ed Catmull's practice was to sit back and examine the room's dynamics. The Braintrust has explicit team-level conditions: it has no authority to mandate solutions, it carries no consequences for the directors who present, and it operates as a peer review structure outside the chain of command (Fast Company). Those structural conditions are what produce truth in the room. Engagement surveys cannot capture them. ONA cannot capture them. The condition is the team, and the team is the unit.

The Integration Story

The right framing for Team Intelligence and people analytics is complementary. Workday Data Cloud, launched in September 2025, opened zero-copy data access via Databricks, Salesforce, and Snowflake, with Workday Marketplace and a global developer network designed to let third parties build on top of the people layer (Workday, 2025). The Lattice and Workday partnership, announced in 2025, explicitly connects operational data with real-time insights on employee performance, engagement, and growth (Lattice, 2025).

The integration map for Team Intelligence runs the same way. A Team Intelligence Engine pulls roster data from Workday or SAP SuccessFactors, sentiment context from Lattice or Culture Amp, and collaboration metadata from Worklytics or Microsoft Viva. It then adds the layer that none of those vendors generate natively: behavioral data on how the team actually behaves in repeatable conditions during simulation. The combined picture is what the budget line is starting to reflect, with people analytics covering the individual record and Team Intelligence covering the system.

Counter-Arguments

This is just a feature of Visier or Lattice or Workday. The objection treats Team Intelligence as a UI layer that any incumbent could ship. The data model says otherwise. HRIS schemas capture employee_id with attributes; team data is a query result, with no underlying measured object. There is no native event for a team decision under pressure in any incumbent schema, because the schema was built around the individual record. Bolting a team-behavior layer onto an employee-row model requires a re-architecture, beyond what a feature flag can deliver.

ONA already does this. ONA shows network density, centrality, and brokerage. It does not show team behavior. After 50+ years of research, ~8 percent of companies use ONA in practice. The signal describes structure without describing behavior.

Engagement scores are team metrics. They are aggregated individual sentiment self-reports. The Hackman and Wageman line of research argues that aggregation requires empirical justification, including agreement statistics like rwg and ICC, before treating an averaged score as a team-level construct. Most platforms do not run those tests.

We already spent millions on Workday or Visier or Lattice. The Bersin finding still applies: fewer than 10 percent of companies can link HR and people data to business outcomes. The measurement layer is what is missing. Adding the team layer on top of the existing stack does more for that gap than swapping vendors at the people layer that is already covered.

What a Team Intelligence Engine Adds

The Team Intelligence Engine is the platform that produces team-layer behavioral data in repeatable conditions. QuestWorks runs on its own cinematic, voice-controlled platform. Slack is the integration layer for install, invites, onboarding, leaderboards, admin commands, and HeroGPT coaching. The simulation itself happens in a dedicated environment built to capture what teams do when the work gets hard.

Sessions run 25 minutes, with two to five sessions per group, voluntary and opt-in. HeroTypes, the nine public archetypes that emerge from gameplay, give the team a shared language for working styles. QuestDash, the leaderboard with strengths-based callouts, is visible to everyone. The Weekly Team Health Report goes to leaders separately. HeroGPT coaching conversations stay private and are never shared upstream. Pricing is $20 per user per month with a 14-day free trial.

People analytics covers the individual layer. Team Intelligence covers the system layer. The two stacks integrate and run side by side. Team Intelligence, Powered by Play.

Related reading: Team Intelligence: The New Category for How Teams Actually Work, How to Measure Team Performance, The Team Management Operating System, and Tools That Track Psychological Safety Metrics.

Frequently Asked Questions

What is the difference between people analytics and team analytics?

People analytics measures individuals: their attributes, performance, sentiment, and network position, rolled up across the organization for HR decisions. Team analytics, in the form of Team Intelligence, measures the system: how a team behaves, decides, coordinates, and recovers under pressure, with the team as the unit of analysis from the start. The data models behind the two layers are structurally different.

How does Team Intelligence fit with Workday, Lattice, and Visier?

Workday is the system of record for the employee row. Lattice covers engagement and performance self-report. Visier rolls up individual records into organizational metrics. None of them have a native event model for team behavior. Team-level numbers in those platforms are averaged individual scores. Team Intelligence sits on top of that layer and adds behavioral data captured during simulation, so the integration story is complementary.

Is Team Intelligence the same as organizational network analysis (ONA)?

No. ONA, offered by vendors like Worklytics and embedded in Microsoft Viva Insights, surfaces who talks to whom and how dense the communication network is. After more than fifty years of academic research, ONA adoption sits around 8 percent according to Deloitte. Team Intelligence captures what happened in the conversation: whether the team made a decision, surfaced disagreement, recovered from a setback, or coordinated under time pressure.

Why isn't an averaged engagement score a valid team metric?

Aggregating individual sentiment self-reports into a team-level number requires empirical evidence of within-team agreement, including statistics like rwg and ICC, before the average can be treated as a team property. Most HR platforms do not run those tests. Team-effectiveness research from Hackman, Wageman, and colleagues has long argued that the team is a system with conditions that explain 60 to 80 percent of effectiveness variance, and the system is not the same thing as the average of its members.

What does QuestWorks actually do?

QuestWorks is a Team Intelligence Engine that pulls roster data from HRIS like Workday or SAP SuccessFactors, sentiment context from engagement platforms like Lattice or Culture Amp, and collaboration metadata from people analytics platforms like Worklytics or Microsoft Viva. It adds the layer none of those vendors produce natively: behavioral data captured during 25-minute team simulations on its own cinematic, voice-controlled platform. Pricing is $20 per user per month with a 14-day free trial.

Ready to Level Up Your Team?

14-day free trial. Install in under a minute.

Try it free
Team Intelligence, Powered by Play.
Try QuestWorks Free