STOCKHOLM / PHILADELPHIA, April 18, 2026 — As AI models grow powerful enough to autonomously discover thousands of software vulnerabilities, the same technology is being embedded into enterprise applications at an unprecedented scale, often without any security testing designed for AI-specific risks. Outpost24, a leading global provider of cybersecurity solutions, today launched AI Pentesting, an expert-led adversarial testing service that helps mid-to-large enterprises find and fix security weaknesses in their AI-powered systems before attackers or autonomous AI models do.
The new service extends Outpost24’s established penetration testing practice, drawing on more than five years of CREST-certified expertise and following OffSec’s AI-300 Advanced AI Red Teaming methodology. The launch comes as regulatory pressure intensifies, with the EU AI Act moving into its 2026 implementation phase and emerging models such as the NIST AI Risk Management Framework raising expectations for AI security due diligence.
The OWASP Top 10 for LLM Applications has formalized a category of attack vectors, including prompt injection, data leakage, unsafe output generation, and the exploitation of agent workflows, that existing application security tools were never designed to detect. Dynamic scanning, static code analysis, and API testing assess deterministic application logic; they cannot evaluate how a Large Language Model reasons, how it responds to adversarial inputs, or how it interacts with external tools and sensitive data it may access. The result is a widening blind spot that carries both operational and compliance consequences for any organization deploying AI at scale.
AI Pentesting applies the same human-validation methodology behind Outpost24’s established penetration testing practice, adapted for the behavioral complexity of AI systems. Testing spans the full AI attack surface, including the model and prompt layer, RAG pipelines, agent workflows, and the supporting APIs and interfaces that connect them. Engagements begin with system discovery and mapping, then move through adversarial testing and access-context validation before concluding with analysis and audit-ready reporting. Unlike automated scanning tools, which identify known patterns but cannot simulate creative adversarial thinking, Outpost24’s specialists follow OSAI+ guidelines to evaluate application behavior under real-world adversarial conditions, informed by the work of the Outpost24 Threat Intelligence Team. Clients receive actionable findings ranked by severity with remediation guidance specific to AI and LLM architectures, delivered through Outpost24’s single platform alongside the company’s broader portfolio of security testing capabilities.

“As organizations embed AI systems and LLMs into customer journeys and internal workflows, they create a new attack surface that traditional application testing was not built to measure. Prompt injection, sensitive data exposure, and unsafe agent behavior are only part of the picture. Closing that gap requires adversarial testing that treats AI behavior as part of the security boundary,” said Omri Kletter, Chief Product Officer at Outpost24.

“We are seeing a pattern that should concern every security leader: AI systems deployed with implicit trust in their inputs, minimal access controls between models and internal infrastructure, and zero adversarial testing before production. Twenty years ago, we learned these lessons the hard way with web applications. The difference now is that the barrier to exploitation is dramatically lower, because an LLM can be manipulated through natural language rather than crafted code,” said Martin Jartelius, AI Product Director at Outpost24.

Leave a Reply