Run Fair Performance Reviews in Hybrid Work

Performance reviews in hybrid environments present unique challenges that require updated approaches to maintain fairness and accuracy. This article outlines four proven strategies that help managers assess remote and in-office employees equitably. Industry experts share practical methods for reducing bias and improving the review process across distributed teams.

Anchor Judgments to Concrete Proof

The most effective way we've found to reduce proximity bias is to remove location as a performance signal. Managers often reward the people they see most because familiarity feels like contribution. We addressed this by focusing reviews on clear proof of work: quality of decisions, ownership, and follow-through. If a point cannot be backed by an example, we do not include it in the review.

This change improved behavior faster than training alone. It made every review depend on evidence instead of comfort or habit. We also require input from different work settings, not just office interactions. When leaders present balanced examples, it becomes easier to spot bias and keep reviews fair.

Pair Raters and Submit Scores Later

I fired someone over Zoom who I'd never met in person, and that's when I realized our whole review system was broken. This was 2021, post-acquisition of my fulfillment company, and I had warehouse teams I saw daily plus remote ops people I only knew through Slack. The disconnect was insane.

Here's what I changed that actually moved the needle: I made every manager do their reviews in pairs, with one person who worked near the employee and one who didn't. Sounds simple, but it killed the proximity effect immediately. My warehouse supervisor would review a remote customer service rep alongside our head of operations who was also remote. They had to reconcile their scores before submitting. The first quarter we did this, rating variance dropped by about 40 percent between remote and in-office employees.
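The paired-review mechanic above can be made concrete with a small sketch. This is an illustration, not the author's actual tooling: the function names, the one-point reconciliation threshold, and the use of an average-score gap as a proximity proxy are all assumptions.

```python
import statistics

# Hypothetical paired-review sketch (names and threshold are illustrative).
# Each employee is scored by two raters: one co-located with the employee
# and one who is not. Pairs that disagree too much must talk first.

def reconcile(colocated_score: float, distant_score: float,
              max_gap: float = 1.0):
    """Return the agreed score, or None if the pair must reconcile first."""
    if abs(colocated_score - distant_score) > max_gap:
        return None  # raters discuss and resubmit before scores count
    return (colocated_score + distant_score) / 2

def rating_gap(remote_scores, office_scores):
    """Average-score gap between in-office and remote employees,
    a rough proxy for the proximity effect the article describes."""
    return statistics.mean(office_scores) - statistics.mean(remote_scores)
```

Tracking `rating_gap` each quarter is one way a team could verify, with its own data, whether pairing actually narrows the remote-versus-office spread.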

The magic wasn't the pairing itself though. It was forcing managers to defend their ratings to someone with a completely different vantage point. Suddenly "Sarah's always on top of things" had to become "Sarah resolved 47 tickets last week with a 4.8 customer rating." You can't bullshit numbers to another manager who's actually looking at the same dashboard.

I also moved our review cycle from annual to quarterly, but here's the twist: only the quarterly conversations happened live. The actual rating submission happened async, 48 hours later. This gave everyone time to reference actual data instead of just recalling who they'd grabbed coffee with that week. The managers hated it at first because it was more work. But within six months, we had zero HR complaints about favoritism compared to five the prior year.

The real test was promotion decisions. Before this system, 73 percent of promotions went to people who worked in our main facility. After eighteen months of paired reviews, that dropped to 52 percent even though the office headcount stayed the same. We were finally promoting based on output, not visibility.

Most companies try to fix proximity bias with more training or guidelines. That's worthless. You need to structurally make it impossible for one person's gut feeling to be the whole story.

Set Individual Goals with Balanced Metrics

We take an approach to performance reviews that is tailored to the individual, and part of that is setting goals with each person. Setting goals individually helps account for differences such as where someone works from, what their unique responsibilities are, and how new they are to the role. We also keep some standardized metrics to ensure fairness, but accounting for the differences between employees is, in my opinion, just as important.

Use Weekly Impact Logs Plus Redacted Calibration

The single change that most reduced proximity bias on our team at Dynaris was moving from manager-narrated reviews to evidence-first reviews, where every claim about performance has to be tied to a specific, dated artifact in writing before the manager adds their interpretation. Proximity bias compounds when managers are recalling impressions; it deflates when they're forced to cite receipts.

Here's the rhythm change. We replaced quarterly review prep with a continuous "impact log." Each team member, regardless of where they sit, is responsible for posting one short bullet per week into a shared doc: what they shipped, what they unblocked, what they learned. Two minutes. The manager comments asynchronously, sometimes with a question, sometimes with a kudos. By the time review season comes, there are 12 weeks of dated, employee-authored evidence per person, and the manager's job at review time is to synthesize and rank, not remember.

The specific anti-proximity rule we layered on: when calibrating across the team, the manager's first draft of every review has to quote at least three impact-log entries from the employee's own words. If the manager can't find three, the employee under-documented and the manager is required to ask for them in writing before the review proceeds. This forces a pause that disproportionately benefits remote and quieter team members, who tend to under-share verbally but log written impact at the same rate as their in-office peers.
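The three-quote rule lends itself to a simple automated check. A minimal sketch, assuming the impact log is a list of the employee's own written entries and the draft review is plain text; the function names and the exact-substring matching are my assumptions, not Dynaris's implementation.

```python
def quoted_entries(draft: str, impact_log: list[str]) -> list[str]:
    """Impact-log entries quoted verbatim in the manager's draft review."""
    return [entry for entry in impact_log if entry in draft]

def review_may_proceed(draft: str, impact_log: list[str],
                       minimum: int = 3) -> bool:
    # If fewer than `minimum` entries are quoted, the manager must
    # request more written evidence before the review proceeds.
    return len(quoted_entries(draft, impact_log)) >= minimum
```

A pre-calibration script running this check over every draft would turn the rule from a norm into a gate, which is the structural enforcement the author argues for.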

The second change that helped: a 30-minute calibration meeting where managers present each report's review in randomized order, with names and locations redacted from the summary slide. We see the work first, the person second. It sounds elaborate but it takes about an hour total for a team of 10, and it has surfaced two cases where I caught myself ranking by visibility rather than impact.

The core insight: proximity bias is a memory problem dressed up as a judgment problem. Fix the memory inputs (continuous, employee-authored evidence) and the judgment becomes dramatically more fair without anyone changing their attitudes.

Copyright © 2026 Featured. All rights reserved.
Run Fair Performance Reviews in Hybrid Work - CHRO Daily