Check Point Unveils AI Security Report Highlighting Cybercrime Threats
Check Point Software Technologies launched its first AI Security Report at RSA Conference 2025, the company announced in a press release. The report provides an in-depth analysis of how cybercriminals are leveraging artificial intelligence (AI) to enhance their operations, and offers strategic insights to help organizations counter these threats.
The report identifies four primary AI-driven cyber threats. These include AI-enhanced impersonation and social engineering, where attackers use AI to create realistic phishing emails and deepfake videos. Another threat is LLM data poisoning, where malicious actors manipulate AI training data to skew outputs, as demonstrated by Russia's disinformation network, Pravda.
Additionally, the report highlights AI-driven malware creation and data mining techniques, as well as the weaponization of AI models through custom-built Dark LLMs such as FraudGPT. To combat these threats, Check Point emphasizes the need for AI-aware cybersecurity frameworks, including AI-assisted detection and enhanced identity verification methods.
The report underscores the importance of integrating AI into cybersecurity defenses to match the pace of evolving threats. It is available for download, providing a roadmap for securing AI environments effectively.