Most feedback advice targets managers. How to give constructive feedback in a one-on-one. How to deliver a performance review. How to have the hard conversation with a struggling report. That advice matters, and it addresses the easier half of the problem. Manager feedback has a structural mandate: it is literally part of the job description, and the power differential means the report expects it.
Peer feedback has no such mandate. Giving feedback to someone at your own level means risking a relationship you depend on, with no positional authority to absorb the awkwardness if it goes wrong. For engineering teams specifically, where the work is deeply collaborative and the people doing it often self-select for conflict avoidance, peer feedback is the hardest feedback to get right and the most transformative when it works. For manager feedback frameworks, see how to give feedback to engineers that actually lands. This piece covers the peer-to-peer side.
Why Engineers Avoid Peer Feedback
The avoidance is rational, not cowardly. Research published in Frontiers in Psychology found that when psychological safety is low, people default to self-protection strategies: staying silent, offering only safe answers, and avoiding visibility. Peers are especially vulnerable to this dynamic. A manager who gives tough feedback has institutional backing. A peer who gives tough feedback is just a coworker with an opinion, and if that opinion lands poorly, the working relationship takes damage that no HR process will repair.
Engineering teams have additional structural barriers. The work is often asynchronous and text-based, which strips away the tone and body language that soften feedback in person. Peer code review research found that 90% of participants agree reviewers tend to avoid reviewing large patches, and 88% agree that when reviewers do engage with large patches, the reviews tend to be superficial. If engineers struggle to give thorough technical feedback in code review, where the norms are well established, behavioral feedback between peers is an even higher bar.
The result is a feedback vacuum. Managers give feedback during one-on-ones and review cycles. Peers say nothing, or only say positive things, or route their feedback through the manager as a proxy ("Can you tell Alex that..."). The person who most needs to hear how their behavior affects their teammates hears it from the person with the least direct observational data (the manager) and nothing from the people with the most (their peers).
Why Radical Candor Fails Without Psychological Safety
Kim Scott's Radical Candor framework (care personally, challenge directly) was designed for managers giving feedback to reports. The power dynamic is built in: the manager has both the authority and the organizational expectation to challenge. When organizations try to extend radical candor to peer relationships, the "care personally" half tends to evaporate. What remains is blunt feedback without the relational foundation to absorb it.
Amy Edmondson's research on psychological safety, reinforced by Google's Project Aristotle study of 180 teams, shows that psychological safety is the single most critical factor in team effectiveness. When safety is low, candor produces stress, fear, and lower job satisfaction. The problem is sequence: radical candor assumes the relationship can handle directness. Psychological safety research says the relationship has to be built first, and then directness becomes productive. See psychological safety is a perishable skill for why this foundation requires ongoing maintenance, not a one-time investment.
For engineering teams, this means that a Slack message saying "this approach is wrong" reads very differently depending on the team's psychological safety level. On a team with high safety, it reads as useful directness. On a team with low safety, it reads as a public attack. The words are identical. The team dynamics determine the outcome.
Code Review: The Hidden Peer Feedback Channel
Most engineering teams already have a structured peer feedback practice. They just do not think of it that way. Code review is peer feedback. It happens multiple times a week, it has established norms, and it involves one peer evaluating another peer's work and providing written commentary. The infrastructure already exists.
The research supports treating it as such. Studies on peer code review show that up to 75% of code review comments address software maintainability rather than functionality, meaning most review feedback is about how the code communicates to future readers, not whether it runs correctly. That is already behavioral feedback in disguise: it is feedback about how someone's work affects the experience of the people who will interact with it. A 2024 study in Empirical Software Engineering found that psychological safety directly advances teams' ability to pursue software quality through code review. And SmartBear research found that 73% of engineers feel more connected with peers when actively participating in code review feedback exchanges.
The opportunity is to make the behavioral dimension explicit. Instead of limiting code review comments to "this function should be extracted" or "consider using a map here," encourage reviewers to also note: "The way you documented the API contract here made it very easy to understand the expected inputs. That is the kind of documentation that saves the next person hours." That is peer feedback. It happens inside a process the team already trusts. It requires no new meeting and no new tool.
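One lightweight way to make that nudge structural is to bake an SBI prompt into the pull request template itself, so the reminder arrives inside the process the team already trusts. A minimal sketch, assuming a GitHub-style repo (the file path is GitHub's convention; the wording, and the `parse_config` function named in the example comment, are hypothetical):

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md -->

## Summary
<!-- What does this change do, and why? -->

## Reviewer nudge (optional)
<!-- Beyond correctness, consider leaving one SBI-style comment: -->
<!--   Situation: which file or diff hunk you are reacting to      -->
<!--   Behavior:  what the author did, stated observably           -->
<!--   Impact:    the effect on readers, maintainers, or reviewers -->
<!-- Example: "The docstring on parse_config (situation) spells    -->
<!-- out the expected input shape (behavior), which saved me a     -->
<!-- trip into the call sites (impact)."                           -->
```

Because the prompt lives in the template rather than in a workshop slide, it shows up on every review with zero new process overhead.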
The SBI Framework Adapted for Peers
The SBI model (Situation, Behavior, Impact), developed by the Center for Creative Leadership, provides a structure that makes peer feedback concrete and behavioral rather than personal. It works between peers because it focuses on observable behavior, not character judgments.
Situation: Anchor the feedback to a specific time and place. "In yesterday's design review" is concrete. "You always do this" is not.
Behavior: Describe what you observed, not what you interpreted. "You asked three clarifying questions before proposing a solution" is observable. "You were being difficult" is interpretation.
Impact: Describe the effect the behavior had on you, the team, or the work. "Those questions surfaced a requirement we had missed, and the solution you proposed addressed it" is impact. "It was annoying" is not useful feedback.
An engineering-specific example of constructive SBI: "In the incident retro on Tuesday (situation), you walked through the timeline before anyone started assigning blame (behavior), and that kept the conversation focused on the system rather than the people, which meant we actually identified the root cause instead of spending the hour defending ourselves (impact)."
Another example, constructive with a growth edge: "During sprint planning yesterday (situation), you estimated three stories at 1 point each when the team was leaning toward 3 points (behavior). Defending those estimates forced a discussion that surfaced real complexity we would have missed, and we ended up right-sizing the sprint. The tradeoff was that planning ran 20 minutes over, and two people had hard stops (impact)."
SBI works for peers because it removes the judgment layer. You are not telling a peer they are wrong. You are describing what happened and what the consequences were. The peer can draw their own conclusions.
How to Introduce Peer Feedback Without Drama
The sequence matters more than the framework. Teams that try to launch peer feedback with a "radical candor" workshop and then expect people to start giving constructive feedback to each other are setting themselves up for silence or conflict. The research on effective peer feedback consistently emphasizes that training and structured support minimize the negative effects of interpersonal risk, and that people who are confident in their ability to provide helpful feedback produce significantly higher-quality feedback, a finding from research on students that extends to professionals.
Phase 1: Positive SBI only (2-3 sprint cycles). Ask each team member to give one piece of positive SBI feedback per sprint, either in a retro, in code review, or in a Slack channel. This builds the muscle memory for structured feedback and establishes that peer feedback is a normal part of how the team works. It also builds the psychological safety that makes constructive feedback possible later.
Phase 2: Add constructive feedback in retros (2-3 sprint cycles). Once the team has practiced positive SBI and the behavior feels normal, introduce constructive SBI during retrospectives. The retro provides a structured container: there is a facilitator, the conversation is time-boxed, and the norms are already established. This is much safer than asking someone to give constructive peer feedback cold in a Slack DM.
Phase 3: Peer feedback becomes ambient. Once the team has enough reps, SBI feedback starts showing up in code reviews, in Slack threads, and in hallway conversations (virtual or otherwise) without needing a structured container. This is the goal: feedback that is frequent, specific, behavioral, and normal. Getting here takes three to six months of deliberate practice. It does not happen by announcement. See one-on-one meeting questions that surface real signal for how managers can support this transition.
Build the Feedback Muscle in Low-Stakes Practice
The behaviors that make peer feedback work (observing specific behavior, describing impact without judgment, receiving feedback without defensiveness) are all skills built through practice. They require repetition in situations where the emotional stakes are low enough to learn. The first time someone gives constructive peer feedback cannot also be the first time the relationship is tested, because that is how feedback cultures die before they start.
QuestWorks is the flight simulator for team dynamics. It runs engineering teams through scenario-based quests on its own cinematic, voice-controlled platform. Each 25-minute quest puts 2-5 teammates in situations that naturally generate peer-to-peer behavioral data: who stepped up during a decision point, how teammates navigated disagreement, where communication patterns helped or hindered the group. QuestDash surfaces these behavioral patterns so the team has concrete observations to discuss, not vague impressions. HeroGPT provides private AI coaching that helps each player navigate working-style differences with specific teammates, giving them language for feedback they might not know how to phrase. HeroTypes make personality and working styles visible so feedback has context: knowing that a teammate processes information differently changes how you frame your observation. Participation is voluntary and not tied to performance reviews.
QuestWorks works with Slack for install, onboarding, and admin. The game itself runs on QuestWorks' own platform. It starts at $20 per user per month with a 14-day free trial.
Peer feedback culture does not come from a policy, a framework, or a workshop. It comes from a team that has practiced giving and receiving specific, behavioral feedback in enough low-stakes situations that doing it in a high-stakes one feels normal. The structure (SBI), the channel (code review, retros, quests), and the psychological safety (built through shared experience, not mandated by management) all reinforce each other. Start with positive feedback. Build the muscle. Layer in constructive feedback once the trust exists to absorb it. The teams that do this develop a feedback culture that makes everyone better. The teams that skip to "radical candor" develop a silence culture that looks the same from the outside and is corrosive on the inside.