The Breach Isn’t the Problem. Your Silence Is.

Here’s an uncomfortable truth that most companies refuse to internalize: getting hacked is not what destroys customer trust. Hiding the security incident does. Treating your users as adversaries rather than allies during the response does.

Security incidents are inevitable. If your company processes data, runs software, or employs human beings, you will eventually have one. The companies that emerge from incidents stronger are not the ones with the cleanest security records; those don’t really exist at scale. They’re the ones who communicate as if they actually respect the people whose data they’re responsible for.

So let’s talk about how to do that.

Speed beats polish

The first instinct after an incident is to wait. Wait until you have the full picture. Wait until legal signs off on every comma. Wait until you can frame things in the most flattering possible light. Wait until Monday, because announcing on Friday “looks bad.”

This is wrong. Every hour you spend polishing a statement is an hour your customers spend not knowing they need to rotate credentials, freeze their credit, or check their accounts. It’s also an hour where someone else, such as a journalist, a security researcher, or an impacted customer on a forum, might break the story for you. And being scooped on your own breach is roughly the worst possible outcome for trust.

The fix is to disclose early and update often. Your first communication does not need to have every answer. “Here’s what we know, here’s what we don’t, here’s what we’re doing to find out, and here’s when you’ll hear from us next” is a complete and credible message. Cloudflare has built a reputation for this. Their post-mortems often go up within hours and get updated as the investigation progresses. 

Customers don’t expect omniscience. They expect honesty about uncertainty.

What every disclosure needs to cover

Once you’ve committed to communicating, here’s the actual content. Five things, in roughly this order:

1. Explain what happened. 

In plain language. Not “a security event affecting a subset of systems”; that phrase is corporate code for “we’re hoping you don’t read past it.” Tell people what the attacker did, how they got in (to the extent you know), and what systems were involved. If the investigation is ongoing, say so explicitly and commit to a follow-up.

2. Explain who is and isn’t impacted. 

Be specific. Which customers? Which data types? Names? Emails? Hashed passwords? Plaintext passwords? Payment information? Government IDs? The difference between “your email address was exposed” and “your Social Security number was exposed” is enormous, and conflating them, or being vague enough that customers have to assume the worst, is its own form of dishonesty.

3. Explain how customers can tell if they’re affected. 

Don’t make people guess. If you can email affected users directly, do it. If you have a lookup tool, build it; a minimal sketch of what one can look like follows this list. If the impact is universal, say so. Telling people “we’ll reach out if you’re affected” and then leaving them to wonder for weeks is cruel, and it’s the move that turns a manageable incident into a class-action lawsuit.

4. Tell customers what to do. 

Concrete steps. Reset your password. Enable two-factor authentication. Watch for phishing emails referencing the breach. Place a fraud alert with credit bureaus. If you’re offering identity monitoring, link to it directly; don’t make people dig through a help center to claim something you owe them.

5. Explain what you’re doing about it. 

This is where the CIA triad (confidentiality, integrity, availability) comes in, but talk about it like a human would. What did you patch? What architectural changes are you making? What process broke down, and how are you fixing the process, not just the symptom? Vague reassurance (“we take security very seriously”) without a concrete demonstration of how you’re taking security seriously is disingenuous.
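To make the lookup-tool idea from point 3 concrete, here is the minimal shape such a tool can take, sketched in Python. Everything specific in it is an assumption for the sake of the example, not a prescription: the salted SHA-256 hashing of normalized emails, the placeholder data, and the wording of the responses.

```python
# Illustrative sketch only: an "am I affected?" lookup for point 3.
# Assumptions not drawn from the post: affected accounts are exported as
# salted SHA-256 hashes of normalized email addresses, so the lookup
# service never holds the raw breach list, and the data here is placeholder.

import hashlib

LOOKUP_SALT = b"incident-lookup-placeholder-salt"  # hypothetical value
AFFECTED_HASHES = {
    # populated from the incident export, e.g. hash_email("person@example.com")
}

def hash_email(email: str) -> str:
    """Normalize and hash an email so the affected list isn't stored in plaintext."""
    normalized = email.strip().lower().encode("utf-8")
    return hashlib.sha256(LOOKUP_SALT + normalized).hexdigest()

def check_affected(email: str) -> str:
    """Return the customer-facing answer for a single lookup."""
    if hash_email(email) in AFFECTED_HASHES:
        return ("Your account is affected. Reset your password, enable "
                "two-factor authentication, and watch for phishing emails "
                "that reference this incident.")
    return ("We have no evidence your account was affected. We'll update "
            "this tool if the investigation changes that.")

if __name__ == "__main__":
    print(check_affected("person@example.com"))
```

The detail worth stealing isn’t the hashing; it’s that the answer a customer gets back already contains the action items from point 4, so nobody has to go hunting for what to do next.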

The things you should add that most companies skip

Here’s where most disclosure templates stop, and where most disclosures actually go wrong.

Own it. Without weasel words. 

“Mistakes were made.” 

“We were the victim of a sophisticated nation-state actor.” 

“An unauthorized third party gained access.” 

Notice how none of these sentences names your company as the one who got something wrong? That’s deliberate, and customers can smell it. The companies that come out of incidents looking strong are the ones that say “we did this” and “we’re sorry” without hiding behind passive voice or blaming the attacker for being clever. Yes, the attacker did something wrong. They’re criminals. That doesn’t get you off the hook for the door you left open.

Keep communicating after the initial disclosure. Day one is the easy part. The companies that build trust are the ones who publish a real post-mortem two weeks later, update the FAQ as new questions emerge, and write a final report when the investigation closes. Going silent once the initial bandage has been ripped off tells customers you cared more about surviving the news cycle than about them.

Match the message to the audience without changing the facts. Regulators need legal and technical specificity. Customers need clarity and action items. Employees need to know what to say if family or friends ask. The press needs accuracy. These groups need different framings, but the underlying facts must be identical across all channels. The moment your customer-facing statement contradicts your SEC filing, you have a much bigger problem than the original breach.
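One practical way to keep the facts identical across channels is to treat them as data: a single canonical incident record that every statement (customer, regulator, press) is generated from or checked against. The sketch below is hypothetical; the field names, dates, and numbers are invented for illustration.

```python
# Hypothetical sketch: one canonical facts record, multiple framings.
# Field names, dates, and numbers are invented for illustration.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class IncidentFacts:
    what_happened: str
    data_exposed: Tuple[str, ...]
    accounts_affected: int
    detected_on: str
    disclosed_on: str

FACTS = IncidentFacts(
    what_happened="an attacker used a leaked support credential to query the customer database",
    data_exposed=("email addresses", "hashed passwords"),
    accounts_affected=140_000,
    detected_on="2024-03-02",
    disclosed_on="2024-03-03",
)

def customer_notice(f: IncidentFacts) -> str:
    # Customer framing: plain language plus action items, same numbers.
    return (f"On {f.detected_on} we discovered that {f.what_happened}. "
            f"{f.accounts_affected:,} accounts had {' and '.join(f.data_exposed)} exposed. "
            "Please reset your password and enable two-factor authentication.")

def regulator_summary(f: IncidentFacts) -> str:
    # Regulator framing: scope and dates up front, same underlying facts.
    return (f"Incident detected {f.detected_on}, disclosed {f.disclosed_on}. "
            f"Scope: {f.accounts_affected} accounts. "
            f"Data categories: {', '.join(f.data_exposed)}. "
            f"Cause: {f.what_happened}.")

if __name__ == "__main__":
    print(customer_notice(FACTS))
    print(regulator_summary(FACTS))
```

If the customer email and the regulatory summary ever disagree, someone edited one of them by hand. Deriving both from the same record makes that divergence much harder to produce by accident.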

The hall of fame and the hall of shame

You can learn a lot by looking at how this has actually played out.

Plex’s 2025 security incident is a strong recent model: prompt disclosure with a clear explanation of what data was accessed, immediate customer action, and concrete commitments to fix the root cause. They lost very little customer trust despite the incident affecting millions of users.

Cloudflare’s 2017 “Cloudbleed” disclosure is a frequently cited model: detailed technical post-mortem, clear timeline, no euphemism, full ownership. They lost very little customer trust despite the severity of the bug.

Equifax in 2017 was the canonical disaster: six weeks between discovering the breach and disclosing it (months after the initial intrusion), executives selling stock before the announcement, a confusing lookup tool that initially seemed to require waiving legal rights to use. The breach itself was bad. The handling turned it into a textbook case taught in business schools.

Uber in 2016 went a step further and actually paid the attackers to keep quiet, then disclosed only after a new CEO came in and forced the issue. The CSO was eventually convicted of obstruction of justice and sentenced to three years’ probation. (Currently under appeal.) Cover-ups aren’t just unethical; they can be criminal. And they destroy customer trust.

The pattern is consistent: the technical severity of the incident matters far less than the communication around it.

What this all comes down to

Your customers gave you their data because they trusted you to handle it responsibly. An incident is, by definition, a moment where that trust has been tested. You don’t rebuild it by spinning, stalling, or hiding. Companies that treat their customers like adults during a crisis don’t just survive the breach. They become the ones customers choose to trust again.

The breach will be a footnote. 

How you handled it will be the headline. 

Choose accordingly.
