August 2025 - A Look at IBM's 2025 Cost of a Data Breach Report
  • 11 Aug 2025


IBM recently released its 2025 Cost of a Data Breach report, and with it comes good news and bad news.

Good news rarely comes from these annual reports, but in the case of the 2025 IBM study, there has been some positive progress regarding the cost of these incidents.

However, one major issue is highlighted throughout the report, and it involves an ever-growing issue cybersecurity professionals must face: artificial intelligence.

This article will break down the findings within the IBM report.

The Good News: Costs of Data Breaches Went Down, and AI Helped

First, the positive. IBM found the average global cost of a data breach dropped 9% year over year, from $4.88 million in the 2024 report to $4.44 million in 2025.

According to IBM, this is the first time that data breach costs have gone down in five years.

Ironically enough, artificial intelligence played a role in this decrease: IBM pointed to AI-powered defenses that allowed for faster breach containment. Organizations identified and contained breaches in 241 days on average, which, according to the report, is the shortest time recorded in nine years.

Google Threat Intelligence Vice President Sandra Joyce said during this year’s RSA Conference that, given how threat actors have used AI so far, security professionals still have time to shore up their defenses, and she offered examples of how her team is using artificial intelligence.

“We have yet to see a single incident where the threat actor has done something new and novel above the capabilities of a normal human incident at this stage,” Joyce said. “Right now, I think we are in this pre-phase where we have a lot of time to do some innovation on the defender side.”

“I would say right now that advantage is really truly rooted on the defender’s side,” Joyce added. “We are using it to reverse malware faster. We are using it to enhance our fuzzing harnesses by tremendous amounts. We’re using it to scan for vulnerabilities.”

Anthropic CISO Jason Clinton said his company has been keeping an eye on how threat actors use AI, much of which involves traditional cyberattacks.

“We’ve seen influence operations. We’ve seen augmentation of business email compromise-type social engineering attacks. We’ve seen influence operations-as-a-service, and the classic thing we’ve seen where there’s always like a very tech-savvy kingpin who is really up-to-date on technology and is offering their wares in exchange for cryptocurrency on the black market, that kind of thing,” said Clinton. “We’ve also seen, and I think this may be the most important thing that we need to keep in mind, that what is happening in AI right now is that we are democratizing access to the very top-tier computer science talent in the world. And that means that your tier three actors who have typically been your script kiddies, now have access to coding agents that can write code at the level of someone who’s maybe a very competent programmer in a Silicon Valley context.”

“So, we’ve seen ransomware being developed on our platform and shut that down, and those are the kinds of things where, when I think about the future and I think about the next few years on the attacker’s side, the attackers will have access to more of the typical technology that many of us on the more advanced defender side have had access to for a long time.”

AI Giveth and AI Taketh Away

While AI has been incredibly helpful for cybersecurity professionals, it has also been a source of headaches.

IBM found 97% of organizations that were breached in security incidents where AI was involved did not have proper AI access controls.

IBM cited another report to emphasize this point. The Ponemon Institute found that, among the 600 organizations it surveyed, 63% said they had no AI governance policies in place to manage AI or to prevent workers from using shadow AI.

Shadow AI, the use of unapproved internet-based AI tools, could also threaten the positive cost trend highlighted in the report: it added an average of $670,000 to the global average data breach cost. Thus, even organizations that have tackled AI issues in other areas could still face problems if employees use unauthorized AI tools in a corporate setting.
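One practical way organizations surface shadow AI is by comparing outbound traffic against an allowlist of sanctioned tools. The sketch below illustrates the idea; the domain names, the allowlist, and the `flag_shadow_ai` helper are all hypothetical assumptions for illustration, not a vetted catalog of AI services.

```python
# Hypothetical sketch: flag proxy-log domains that look like AI services
# but are not on the organization's approved list. All domains below are
# made-up placeholders, not real endpoints.

APPROVED_AI_DOMAINS = {"ai.internal.example.com"}  # assumed sanctioned tooling

KNOWN_AI_DOMAINS = {  # assumed inventory of AI-service endpoints
    "ai.internal.example.com",
    "chat.example-llm.com",
    "api.example-genai.io",
}

def flag_shadow_ai(proxy_log: list[str]) -> list[str]:
    """Return domains seen in traffic that match known AI services
    but are absent from the approved allowlist."""
    return sorted(
        {d for d in proxy_log
         if d in KNOWN_AI_DOMAINS and d not in APPROVED_AI_DOMAINS}
    )

log = ["ai.internal.example.com", "chat.example-llm.com", "docs.example.com"]
print(flag_shadow_ai(log))
```

In practice this check would run against egress proxy or DNS logs, and the "known AI domains" inventory would come from a maintained threat-intelligence or CASB feed rather than a hard-coded set.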

It’s imperative for security professionals to use the time they have before threat actors can weaponize artificial intelligence and produce the innovative attacks Joyce said have not materialized just yet.

“It’s just a matter of time before a threat actor will use these capabilities for offensive purposes once they can get that type of innovation,” Joyce said. “We haven’t seen it yet, but I think that we are ahead, and what that means is, in whatever period of time we are going to stay ahead where we are the ones controlling the technology, we are the ones controlling the roadmaps, we are the ones implementing and refining the guardrails, that this period of innovation gives us the leg up.”

“This is where we really need to make our investments so that we keep ahead of the attackers,” Joyce continued. “I’m actually pretty optimistic about it being a great defender tool, but we always have to keep in mind that those defender tools can be used by threat actors for offensive purposes and are likely to do so at some point in the future.”

IBM Offers Recommendations To Reduce Risk

In response to these findings, IBM offered four measures cybersecurity professionals can take to reduce security risks:

  • Implement strong identity management and privileged access management for human and non-human identities, especially in cloud environments. Ensure advanced multifactor authentication methods are used rather than SMS-based codes
  • Perform thorough assessments of cloud configurations and permissions
  • Place an emphasis on robust AI governance, which should cover the development and deployment of AI systems and oversee the processes, policies and controls that address the risks AI poses
  • Continue to educate your workforce on emerging AI threats. Create and deploy plans, playbooks and response strategies for different risk scenarios
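IBM's second recommendation, assessing cloud configurations and permissions, often starts with scanning policy documents for overly broad grants. The sketch below shows that idea against an IAM-style JSON policy; the field names mirror the common cloud policy shape, and the `find_broad_grants` helper and sample policy are illustrative assumptions, not any provider's official tooling.

```python
# Hypothetical sketch: scan an IAM-style policy document for wildcard
# grants (all actions or all resources), a common finding in cloud
# permission assessments. Policy shape and helper are assumptions.

def find_broad_grants(policy: dict) -> list[str]:
    """Return warnings for Allow statements granting '*' actions/resources."""
    warnings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue  # Deny statements don't widen access
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Policies allow either a single string or a list; normalize to lists.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions:
            warnings.append(f"Statement {i}: allows all actions")
        if "*" in resources:
            warnings.append(f"Statement {i}: applies to all resources")
    return warnings

policy = {"Statement": [
    {"Effect": "Allow", "Action": "*", "Resource": "arn:aws:s3:::reports/*"},
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "*"},
]}
for warning in find_broad_grants(policy):
    print(warning)
```

A real assessment would pull policies via the provider's APIs and check far more than wildcards (privilege escalation paths, unused permissions, public access), but flagging `"*"` grants is a reasonable first pass.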
