
Promptfoo has agreed to be acquired by OpenAI. The open-source project will continue as Ian Webster and Michael D'Angelo begin a new chapter.
Today we are announcing that Promptfoo has agreed to be acquired by OpenAI.
Promptfoo will remain open source, and we will continue to serve users and customers.
We founded Promptfoo in 2024 to make it easy for developers to systematically test their AI applications. We quickly realized that adversarial tests for security, safety, and other behavioral risks were the biggest blockers to shipping AI, especially at large enterprises.
What we built grew faster than we ever could have imagined. More than 350k developers have used it, 130k are active each month, and teams at more than 25% of the Fortune 500 rely on it.
We are joining OpenAI so that the security, evaluation, and compliance platform we've built - and the frontline experience behind it - can have the greatest impact on how teams build and deploy AI. At OpenAI, we'll improve and integrate Promptfoo's core tech within the model and infrastructure layers, so teams can catch vulnerabilities early and ship secure AI from the start.
OpenAI gives our work more resources and access to research at the model and inference layers that supercharge our goal of helping everyone ship secure, reliable AI. This is the fastest and most impactful path forward for the work we started at Promptfoo.
The team will continue working with customers and users to ensure continuity of service and support.
We will continue to maintain the open-source suite as a best-in-class red teaming, static scanning, and evals tool for any AI model or application. Promptfoo will continue to support a diverse range of providers and models, reflecting the way real teams build and deploy AI systems.
We have so much gratitude toward our investors: Ganesh at Insight Partners, Zane at a16z, their teams, and all the other angel investors who supported us. You helped us scale Promptfoo faster than we ever thought possible.
We are grateful for our team: we grew quickly to 23 people across engineering, GTM, and operations - the most talented and hard-working bunch we've ever met.
Finally, thank you to everyone who has contributed code, filed issues, used the product, or trusted Promptfoo in production.
You helped build something important. We're excited to continue this work.
Ian Webster and Michael D'Angelo
Co-founders, Promptfoo
The closing of the acquisition is subject to customary closing conditions.
Promptfoo built what we believe is a category-defining platform for AI evaluation and security. As enterprises deploy more complex AI systems, rigorous testing, red teaming, and evaluation become foundational. Ian, Michael, and the team built something essential.

Ganesh Bell
Managing Director, Insight Partners
Hey HN - Michael here, co-founder of Promptfoo.
Happy to answer questions.
The one I'd ask if I were reading this: what happens to Promptfoo open source? We're going to keep maintaining it. The repo will stay public under the same license, we will continue to support multiple providers, and we'll keep reviewing PRs and cutting releases.
We started Promptfoo because there was no good way to test AI systems before shipping them. That turned into evals, then red teaming, then a broader security platform. We're joining OpenAI because this work has more impact closer to the model and infrastructure layers.
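For anyone who hasn't tried it: an eval is driven by a declarative YAML config. Here's a minimal illustrative sketch (the prompt, variables, and assertion choices are made up for this example; check the docs for the full set of providers and assertion types):

```yaml
# promptfooconfig.yaml - minimal eval sketch
prompts:
  - "Summarize in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini   # any supported provider id can go here

tests:
  - vars:
      text: "Promptfoo is an open-source tool for testing LLM apps."
    assert:
      - type: contains
        value: "Promptfoo"
```

Then `npx promptfoo@latest eval` runs the matrix of prompts x providers x tests and reports pass/fail per assertion. Red teaming builds on the same config structure with adversarial test generation.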
Ask me anything.
You guys absolutely rock. You've built an industry standard with Promptfoo. Keeping it open source was the right choice.
Congrats!
What convinced you this was the right moment and the right company to join?
You went from founding to acquisition in roughly a year and change.
Did something about the AI security landscape cause this offer to make sense? Or was it the impact you could have inside OpenAI? Or something else?
Thanks for your work and I really really hope the open source project stays maintained. I’ve been using it from the very beginning and it’s been great!
congrats on the deal.
Congrats to the team. 23 people building something used by 25% of Fortune 500 is remarkable execution.
I'm curious about one thing though: the blog post mentions integration at the 'model and inference layer,' not just bolting it onto Frontier as a feature. That's a deeper integration than most acquisitions.
In practice, does that mean the security testing becomes invisible for OpenAI-hosted models? Because if so, that's great for OpenAI customers but creates an interesting gap for teams building on Claude, Gemini, or local models. They lose their independent testing tool.
The other thing I keep thinking about: promptfoo solves pre-deployment testing really well. But once agents are running in production making tool calls, the attack surface changes. A prompt that passes red-teaming can still be exploited through indirect injection via tool outputs.
Has anyone been working on runtime monitoring for agents? Not just testing before you ship, but watching what agents actually do during execution?