OX Security Report Finds AI-Generated Code Breaches Engineering Best Practices

October 23, 2025
OX Security has released research showing that AI-generated code violates key software engineering principles and undermines security, based on an analysis of over 300 open-source repositories.

New research from OX Security reveals that AI-generated code often breaches core software engineering principles and weakens overall security, the company announced in a press release. OX Security analyzed more than 300 open-source repositories and identified ten recurring anti-patterns that contradict established best practices.

The report describes the "Army of Juniors" effect, where AI coding tools produce code similar to that of competent but inexperienced developers—functional yet lacking architectural and security awareness. While the study found that AI-generated code does not contain more vulnerabilities per line than human-written code, it highlighted widespread structural and design flaws.

Among the ten critical anti-patterns identified are excessive commenting, over-specification, avoidance of refactoring, and the recurrence of previously fixed bugs. Some patterns, such as unnecessary comments and rigid adherence to coding conventions, appeared in up to 100% of the analyzed AI-generated code.
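To make the "excessive commenting" pattern concrete, the sketch below is a constructed illustration, not code drawn from the analyzed repositories; the function name `calculate_total` is assumed for the example. It contrasts the kind of line-by-line narration the report describes with a more conventional alternative.

```python
# Hypothetical illustration of the "excessive commenting" anti-pattern:
# every trivial statement is narrated, adding noise without conveying intent.
def calculate_total(prices):
    # Initialize the total to zero
    total = 0
    # Loop over every price in the list
    for price in prices:
        # Add the current price to the total
        total += price
    # Return the total
    return total

# A more idiomatic version uses the standard library and a docstring
# that states intent rather than restating each line.
def calculate_total_idiomatic(prices):
    """Return the sum of all prices."""
    return sum(prices)
```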

OX Security recommends that organizations adapt their workflows by embedding security instructions directly into AI coding processes and shifting human focus toward architecture and oversight. The report also calls for abandoning traditional code review as the primary security mechanism, arguing that it cannot scale to match the speed of AI-generated development.

