"Is my reply rate good?" is the question we hear most from new customers. It's a surprisingly hard one to answer honestly, because nearly every vendor you'll read has an incentive to quote a number that makes their product look magical. Forty percent. Sixty percent. Higher.

Those numbers aren't lies, exactly. They describe the top decile of carefully curated campaigns. But they're not what you should expect in month one with a cold list, and benchmarking against them only makes a healthy campaign feel like a failure.

Here is an honest look at LinkedIn reply rate benchmarks as of Q2 2026 — what "good" looks like, how it varies by industry and ICP, and how to tell whether your numbers point to a targeting problem, a message problem, or a profile problem.

1. Start With What "Reply Rate" Actually Means

Before we talk numbers, let's align on definitions. Most reporting lumps everything together, and the result is a metric that measures nothing useful.

When somebody quotes a "reply rate" without specifying the denominator, assume they mean raw reply rate. It looks bigger. Ask which number they mean before you compare yourself to it.

2. The Honest Baselines by Channel

Across the campaigns we see run through Infonet, the following ranges represent the middle 60% of performance — not the cherry-picked top, not the failing bottom.

If your numbers land inside these ranges, the campaign is working. Optimizing from "within range" to "top of range" is a different problem than rescuing a campaign that's outside the range entirely.

3. How Industry Shifts the Benchmark

Industry matters more than most people acknowledge. Reply rates in high-signal, high-spam verticals (crypto, agency services to SMBs, generic "AI" tools) are structurally lower because recipients are being pitched constantly. Rates in quieter verticals are higher because every message is a novelty.

Rough adjustments we see consistently:

4. Seniority Changes the Dynamic

The executive myth — "C-suite never replies" — is half true. Executives reply less often in absolute terms, but the replies that do come are more qualified. A 9% reply rate to VPs with a 60% positive-reply ratio beats a 28% reply rate to managers with an 18% positive ratio, every time.
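The arithmetic behind that comparison is worth making explicit. A quick sketch using the rates quoted above (positive replies per hundred prospects contacted is just the reply rate times the positive-reply ratio):

```python
def positive_replies_per_100(reply_rate: float, positive_ratio: float) -> float:
    """Qualified (positive) replies you get per 100 prospects contacted."""
    return 100 * reply_rate * positive_ratio

# VPs: 9% reply rate, 60% of replies positive
vp = positive_replies_per_100(0.09, 0.60)
# Managers: 28% reply rate, 18% of replies positive
mgr = positive_replies_per_100(0.28, 0.18)

print(f"VPs:      {vp:.2f} positive replies per 100 contacted")   # 5.40
print(f"Managers: {mgr:.2f} positive replies per 100 contacted")  # 5.04
```

Even on raw positive replies per prospect, the VP campaign edges out the manager campaign — before weighing the seniority of who's actually replying.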

If you're pitching C-suite with the same message you pitched managers, you are silently selecting for an 8% rate when 14% is within reach.

5. Message Type Also Matters

Not all first messages carry the same conversion load. A "question" message and a "pitch" message produce very different numbers, even when both are personalized.

6. How to Measure Yours Correctly

Most outreach teams measure reply rate wrong in one of three ways: they include auto-responders, they mix send dates with reply dates, or they lump new and follow-up replies together. The cleanest measurement:

  1. Pick a closed cohort — every prospect messaged between two specific dates.
  2. Wait at least 21 days after the last message in the sequence to let late replies land.
  3. Classify every reply into positive, neutral, not interested, out-of-office, wrong person.
  4. Report human reply rate and positive reply rate separately. Don't collapse them.
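A minimal sketch of those four steps in Python. The `Reply` structure and field names are illustrative, not any particular tool's export format:

```python
from datetime import date, timedelta
from typing import NamedTuple


class Reply(NamedTuple):
    prospect_id: str
    category: str  # "positive" | "neutral" | "not_interested" | "out_of_office" | "wrong_person"


# Human replies exclude auto-responders (out-of-office) and misdirected mail.
HUMAN_CATEGORIES = {"positive", "neutral", "not_interested"}


def cohort_metrics(prospects_messaged: int, replies: list[Reply],
                   last_send: date, today: date) -> dict[str, float]:
    # Step 2: refuse to report until 21 days after the last message in the sequence.
    if today < last_send + timedelta(days=21):
        raise ValueError("Cohort still open; wait 21 days after the last send.")
    # Count at most one reply per prospect, so follow-up replies
    # from the same person don't inflate the rate.
    first_reply: dict[str, str] = {}
    for r in replies:
        first_reply.setdefault(r.prospect_id, r.category)
    human = sum(1 for c in first_reply.values() if c in HUMAN_CATEGORIES)
    positive = sum(1 for c in first_reply.values() if c == "positive")
    # Step 4: report the two rates separately, never collapsed.
    return {
        "human_reply_rate": human / prospects_messaged,
        "positive_reply_rate": positive / prospects_messaged,
    }
```

For a cohort of 20 prospects with one positive reply, one "not interested," and one out-of-office, this reports a 10% human reply rate and a 5% positive reply rate — the out-of-office auto-responder is counted in neither.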

Campaigns measured this way look different from campaigns measured loosely. You'll typically see 20–30% fewer "replies," but the number will be actionable. You can now tell whether a change moved the metric or just moved noise.

7. Reply Rate Is Sometimes the Wrong North Star

A 35% reply rate that produces zero meetings is a worse outcome than a 14% reply rate that produces a quarter of your pipeline. The higher rate usually means you're attracting a lot of "thanks but no thanks" politeness, or you're filtering for people who reply to everyone — neither predicts buying.

Healthier metrics once you're out of the cold-start phase:

8. Where Most Campaigns Leak

If your numbers are meaningfully below the ranges above, the leak is almost always in one of four places. Diagnose in this order, top to bottom:

  1. Targeting: If acceptance rate is below 20%, you're contacting people who don't recognize themselves as a fit for what you sell. No message fixes this.
  2. Profile: If acceptance is fine but first-message reply is under 15%, prospects are accepting out of politeness and then visiting your profile and bouncing. Fix the headline and About section before changing the message.
  3. Message craft: If acceptance and visits are fine but replies are low, the first message is either too long, too pitchy, or indistinguishable from every other vendor's opener.
  4. Follow-up cadence: If message 1 performs okay but total sequence reply rate is flat, your follow-ups are probably "just checking in" and adding no value. Every follow-up needs a new angle.
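That top-to-bottom order can be expressed as a small triage function. The 20% and 15% cutoffs are the ones from the list above; the profile-visit signal and the "flat sequence" cutoff are illustrative assumptions, since the list describes those two leaks qualitatively:

```python
def diagnose_leak(acceptance_rate: float, first_msg_reply_rate: float,
                  sequence_reply_rate: float, profile_visits_ok: bool) -> str:
    """Return the first leak found, checked in the top-to-bottom diagnostic order."""
    # 1. Targeting: prospects don't recognize themselves as a fit. No message fixes this.
    if acceptance_rate < 0.20:
        return "targeting"
    # 2. Profile: they accept, visit your profile, and bounce before replying.
    if first_msg_reply_rate < 0.15 and not profile_visits_ok:
        return "profile"
    # 3. Message craft: acceptance and visits are fine, but replies are still low
    #    (reusing the 15% cutoff here is an assumption).
    if first_msg_reply_rate < 0.15:
        return "message craft"
    # 4. Follow-up cadence: message 1 performs okay, but the rest of the
    #    sequence adds almost nothing (the 5-point "flat" cutoff is illustrative).
    if sequence_reply_rate - first_msg_reply_rate < 0.05:
        return "follow-up cadence"
    return "within range"
```

Run against a few cohorts, it points the next week of effort at exactly one bucket — which is the point of diagnosing in a fixed order rather than testing everything at once.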

Almost nobody's reply-rate problem is actually a reply-rate problem. It's usually a targeting problem or a profile problem dressed up as a messaging problem.

9. Where AI Personalization Changes the Math

AI-generated personalization — when it's grounded in the prospect's actual recent activity and not just a merge field with a name — lifts first-message reply rate by roughly 8–14 percentage points in the campaigns we've instrumented. It moves a 20% baseline campaign to 30%, not to 60%.

Platforms like Infonet pull the prospect's recent posts, the company's recent news, and the context of how you found them to produce a first line that could only have been written to that specific person. That single line is doing most of the lift — not the rest of the message.

Be honest with yourself about where AI actually helps: openers, first-line hooks, and follow-up angles. It does not fix targeting. It does not fix a profile that looks like a resume. It does not fix asking for a 30-minute call in message one.

10. A Sane Target Curve for a New Campaign

If you're starting a new sequence from scratch, don't expect top-of-range numbers in week one. A realistic curve:

Campaigns that start above baseline in week one are usually working from a warm list or a referral graph. Cold from scratch takes a month to tune. If you treat that tuning period as failure, you'll change everything twice and learn nothing. Give each hypothesis a real cohort and three weeks.

Where to Go From Here

Benchmarks are useful only as a diagnostic — they tell you whether you have a problem worth solving. The harder work is figuring out which problem you actually have. Start by splitting your reply rate into the four diagnostic buckets above. One of them will be clearly worse than the others. That's where the next week of effort goes.

And resist the urge to A/B test everything at once. Change one variable per cohort. Let the data breathe. A month from now you'll have a campaign that hits the top of the range for your industry and ICP, and you'll know why.