We’re launching the Claude Partner Network, a program for partner organizations helping enterprises adopt Claude. We’re committing an initial $100 million to support our partners with training courses, dedicated technical support, and joint market development. Partners who join from today will get immediate access to a new technical certification and be eligible for investment.
Anthropic is focused on ensuring that our AI model, Claude, serves the needs of businesses. To do this, we’ve partnered with a number of other companies. Notably, Claude is the only frontier AI model available on all three leading cloud providers: AWS, Google Cloud, and Microsoft Azure.
We also work with large management consultancies, professional services firms, specialist AI firms, and similar agencies. These organizations help our enterprise customers identify where Claude can provide the most value to their work, and then help them get started with our AI tools. Our partners act as trusted guides in what can feel like uncharted territory: navigating the deployment requirements, compliance, and change management necessary inside large organizations.
Now, we’re doubling down on our commitment to our partners, aiming to make it even easier for these organizations to support enterprises in adopting Claude.
"Anthropic is the most committed AI company in the world to the partner ecosystem—and we're putting $100 million behind that this year to prove it. The certification, the co-investment, the dedicated team—this infrastructure is built so that any firm, at any scale, can build a Claude practice. Our partners are instrumental in getting enterprises from proof of concept to production with Claude, and we're making sure they have everything they need to do it."—Steve Corfield, Head of Global Business Development and Partnerships, Anthropic.
The Claude Partner Network provides training, technical support, and joint market development for our partners helping enterprises adopt Claude. We’re committing an initial $100 million to this network for 2026, and expect to invest even more over time.
A significant proportion of that $100 million will go directly to our partners: support for training and sales enablement, market development funds (including work to make customer deployments successful), and co-marketing for joint campaigns and events. We’re also scaling our partner-facing team fivefold, so that we can provide dedicated Applied AI engineers to partners working on live customer deals, technical architects to scope more complex implementations, and localized go-to-market support in international markets.
Those who join the network will have access to our Partner Portal, where we’ll share our Anthropic Academy training materials, the sales playbooks used by our own go-to-market team, and other co-marketing documentation. Qualified partners will also be added to our Services Partner Directory, where enterprise buyers can find firms with Claude implementation experience.
Alongside the network, we’re introducing the first Claude technical certification: Claude Certified Architect, Foundations, available today for partners. This is a technical exam for solution architects building production applications with Claude. Later this year, we’ll introduce additional certifications for sellers, architects, and developers. Partners who join the network now will get priority access to new certifications as they roll out.
Finally, we’re launching a Code Modernization starter kit, which gives our partners a straightforward starting point for migrating legacy codebases and remediating enterprises’ technical debt. This is one of the highest-demand enterprise workloads, and one where Claude’s agentic coding capabilities most directly translate into client outcomes.
Any organization that is bringing Claude to market is eligible to join the Claude Partner Network. Membership is free of charge, and applications open today. You can find out more here.
Below, our partners share more about their work with Claude:
We're training 30,000 Accenture professionals on Claude because that's what it takes to meet the demand we're seeing. The Claude Partner Network gives us the structure to do that faster — the certification, the co-selling support, the shared investment. It matches how we actually build practices and deploy teams.
Enterprise AI needs to be powerful. The Claude Partner Network helps formalize and scale the work underway: the training, industry-focused solutions, and practical guidance for deploying AI.
We've opened Claude access across our global workforce—supporting an organization of roughly 350,000 associates—and we're embedding it into how we help clients modernize and transform. The Claude Partner Network gives us the co-investment and technical support to move faster, so our clients can advance pilot initiatives toward production without the usual delays.
We are enabling clients to scale AI with confidence—built on robust governance, security, and trust by design. Our dedicated Anthropic Center of Excellence accelerates readiness and capability-building, aligned with Infosys’ AI-first value approach. With teams applying Claude Code in real-world delivery, we are helping clients unlock AI value across industries.
I think it’s pretty clear what the purpose of this stuff is: get people so invested into the Claude ecosystem with certs and “modernization kits”, so that when the subsidies end and subscription costs shoot up they feel they’re in too deep now to switch to something cheaper.
> so that when the subsidies end and subscription costs shoot up
Subscription costs are capped to API rates (and, realistically, way lower than that: why would you even subscribe if you could just pay per use instead?), and those rates already carry a big margin for Anthropic. What still costs them a fuckton of money, comparatively, is training, but that is only going to get more efficient with more purpose-built hardware on the way.
Basically, I don’t see much of a reason to hike subscription prices dramatically. I don’t think they’ll stay at $100/$200, but anyone who’s paying that already knows how much value they’re getting out of it and probably wouldn’t mind paying more.
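To make that ceiling argument concrete, here is a toy sketch with invented numbers (not Anthropic's actual pricing): a rational user keeps the subscription only while it beats paying API rates for the same usage, so API pricing caps what the subscription can charge.

```python
# Toy model of the "API rates as a price ceiling" argument.
# All numbers are made up for illustration.
def better_deal(sub_price: float, tokens_used: int, api_rate_per_mtok: float) -> str:
    # Cost of the same usage billed at per-token API rates.
    api_equivalent = tokens_used / 1_000_000 * api_rate_per_mtok
    # If the sub costs more than the API-equivalent bill, users defect.
    return "subscribe" if sub_price < api_equivalent else "pay per use"

print(better_deal(sub_price=200, tokens_used=50_000_000, api_rate_per_mtok=15))
# -> "subscribe": the same usage at API rates would cost $750

print(better_deal(sub_price=1000, tokens_used=30_000_000, api_rate_per_mtok=15))
# -> "pay per use": a $1k sub loses to a $450 API-equivalent bill
```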
I'm not sure what you mean. If you max out your subscription, perhaps? If you pay $100 and don't use it, you don't get refunded $100 because it's 'capped to API rates', which would've been $0.
He means that Anthropic cannot increase the price of the sub, because users can just switch to regular API pricing, which consequently puts a ceiling on the cost of the sub.
Nobody would use a $1k sub if the API pricing would only cost $500 for comparable service.
For the record, I'm only explaining what he put forward.
I don't agree with the opinion, mainly for two reasons:
1. The API cost can be increased in conjunction, hence the ceiling is just as variable.
2. The harness is even more important than the model, IME, and Claude Code is getting better every month. Even though the alternatives are getting better too, they're currently significantly worse IME - I'd say at least 3-6 months behind (compounded by the model, ofc).
And as a third point, unrelated to the original argument: there is no way Anthropic is actually treating the sub as a loss leader. It is not cheap. It's only cheap compared to their API pricing, which they can set however they want. Compare their pricing to open-weights models like Kimi k2.5, etc. I sincerely doubt Anthropic's models cost more to run than theirs, and they're profitable at 30% of the price Anthropic charges.
Right now, a huge amount of investment pays for training. That investment expects returns: to both turn a profit and continue funding training, rates must be much, much higher.
> I think it’s pretty clear what the purpose of this stuff is: get people so invested into the Claude ecosystem with certs and “modernization kits”, so that when the subsidies end and subscription costs shoot up they feel they’re in too deep now to switch to something cheaper.
It worked for cloud services :-)
Did it? AWS seems to be getting cheaper over time, not more expensive.
> Did it? AWS seems to be getting cheaper over time, not more expensive.
It was cheaper before they started issuing certifications; then it got expensive.
Do you have a source for that? Certainly things like compute and other services that I'm aware of are objectively cheaper, so I'm curious what has gone up.
Or what if local models get good enough to threaten the server-based product?
That is the biggest threat, and likely where things will end up eventually. The question is when that “eventually” arrives, and what the server-based providers can pivot to in that time.
This will probably happen, unless the industry conspires to roll back the availability of general computation so that common people can only own computers with enough power to be glorified thin clients. The way this might look: good hardware is never officially banned, just priced too high for anybody to afford and produced in small quantities to keep it that way, while all production shifts to massively expensive, powerful hardware for corporate buyers.
Seems unlikely. We're already seeing specialized hardware optimized for LLM performance (Taalas, Groq, Cerebras), and simple economies of scale make these sorts of products a better value when rented from a server vs. purchased/managed/upgraded by the typical user.
Frontier models will continue to be either exclusively available from servers or significantly more affordable from servers vs local alternatives for the foreseeable future.
They're good enough already.
The moat is only
a) post-training magic for the elusive UX "vibes"
b) stickiness of the Claude UIs.
The first will eventually be solved (give it a couple of years) by a LoRA marketplace.
The second is not relevant because existing UIs are already very sticky, and Claude won't be able to overcome decades of inertia anyway.
That and the price of hardware
Enshittification x rent-seeking is the future of authoritarian capitalism.
I recommend everyone explore local models.
Soon, we'll start seeing Claude certs getting listed on LinkedIn alongside Coursera courses.
People with titles like
Giga Chad, MBA, CSS, CKAD, XXX, PQRS
are gonna love this.
In no time, HRs will start slapping “10 years of certified Claude Code experience required” on job listings.
_Open to Claude_ ;)
It’s crazy how you could easily lie about having 10 years of experience, because your results are not that much different from someone who has only used Claude Code for a week.
I think the older AI users are even held back because they might be doing things that are not necessary any more: explaining basic things like “please don’t bring in random dependencies, prefer the ones that are already there”, or the classic “think really strongly and make a plan”, or using a prestigious register of language in an attempt to make it think harder.
Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do, where it originated, and looks into causes. Oftentimes I come back to the chat and see a working explanation together with a fix.
Before, I had to actually explain why I wanted it to change some implementation in some direction, otherwise it would refuse: “no, I won’t do that because abc”. Nowadays I can just give the raw instruction, “please move this into its own function”, and it follows.
So yeah, a lot of these skills become outdated very quickly. The technology is changing so fast that one constantly needs to revisit whether what one had to do a couple of months earlier is still required, and whether the limits of the technology are still precisely there or further out.
> I think the older AI users are even held back because they might be doing things that are not necessary any more
Being the same age as Linus Torvalds, I'd say it can be the opposite.
We are so used to "leaky abstractions", that we have just accepted this as another imperfect new tech stack.
Unlike less experienced developers, we know that you have to learn a bit about the underlying layers to use the high level abstraction layer effectively.
What is going on under the hood? What was the sequence of events which caused my inputs to give these outputs / error messages?
Once you learn enough about how the underlying layers work, you'll get far fewer errors because you'll subconsciously avoid them. Meanwhile, people with an "I only work at the high level" mindset keep trying to feed the high-level layer different inputs more or less at random.
For LLMs, it's certainly a challenge.
The basic low level LLM architecture is very simple. You can write a naive LLM core inference engine in a few hundred lines of code.
But that is like writing a logic gate simulator and feeding it a huge CPU gate list + many GBs of kernel+rootfs disk images. It doesn't tell you how the thing actually behaves.
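For a sense of scale, here is a minimal sketch of what such a naive decode loop looks like. The transformer forward pass is stubbed out with random logits; nothing here reflects any particular engine, and a real naive implementation would spend its few hundred lines inside that stub.

```python
import numpy as np

# Hypothetical stand-in for a real transformer forward pass: it would map
# the full token sequence to next-token logits; here it's just random noise.
def forward(token_ids: list[int], vocab_size: int = 32000) -> np.ndarray:
    rng = np.random.default_rng(seed=sum(token_ids))
    return rng.standard_normal(vocab_size)

# Greedy autoregressive decoding: one full forward pass per generated token.
def generate(prompt_ids: list[int], max_new_tokens: int, eos_id: int = 2) -> list[int]:
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = forward(ids)
        next_id = int(np.argmax(logits))  # real engines usually sample instead
        ids.append(next_id)
        if next_id == eos_id:  # stop at end-of-sequence
            break
    return ids

print(generate([1, 5, 42], max_new_tokens=8))
```

The real work (attention, KV caching, tokenization, batching) lives inside `forward`; that is the part the gate-list analogy is gesturing at.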
So you move up the layers. Often you can't get hard data on how they really work. Instead you rely on empirical and anecdotal data.
But you still form a mental image of what the rough layers are, and what you can expect in their behavior given different inputs.
For LLMs, a critical piece is the context window. It has to be understood and managed to get good results. Make sure it's fed with the right amount of the right data, and you get much better results.
> Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do
That's exactly the right thing to do given the right circumstances.
But if you're doing a big refactoring across a huge code base, you won't get the same good results. You'll need to understand the context window and how your tools/framework feeds it with data for your subagents.
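To illustrate what "managing the context window" can mean in practice, a toy sketch: the 4-characters-per-token estimate and the window size are invented, and real tools use actual tokenizers, but the budgeting idea is the same.

```python
# Rough heuristic: ~4 characters per token. Real tools use a tokenizer.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

# Keep the system prompt plus as many of the most recent turns as fit;
# the oldest history silently falls out of the window.
def pack_context(system: str, history: list[str], window: int = 8000) -> list[str]:
    budget = window - estimate_tokens(system)
    kept: list[str] = []
    for turn in reversed(history):  # walk newest to oldest
        cost = estimate_tokens(turn)
        if cost > budget:
            break
        kept.append(turn)
        budget -= cost
    return [system] + list(reversed(kept))
```

This budgeting logic is also why a big cross-codebase refactor degrades: the relevant files plus the conversation no longer fit, and whatever your harness evicts is whatever the model forgets.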
I think GP meant 'longer time users of AI', not 'older aged users of AI'.
Their point being that it's not really an advantage to have learned the tricks and workarounds from a year or two ago, when the tech is so much better now that those tricks are either unnecessary or have been replaced by different ones.
Yeah, I meant it in the context of the comment I was replying to (to be precise, in the context of the comment that one was replying to), i.e. "10 years of certified Claude Code experience required".
The technology is moving so fast that the tricks you learned a year ago might not be relevant any more.
I still see people doing the "you are a world class distributed systems engineer" thing. Never fails to make me chuckle.
I hope it’s at least a little tricky, since Claude was released only 3 years ago. That said, I would not be surprised to see companies asking for 10 years experience, despite that inconvenient truth.
I’ve seen it play out multiple times. It highlights precisely why a candidate should never withhold an application over a stated preference for years of experience with anything: they simply haven’t put much thought into those numbers.
If you work on 10 projects in parallel for a year using Claude code… you have the equivalent of 10 years of experience in 1 year.
No, you would have ten projects finished. You would have less than a year of actual programming experience.
That's not how it works...
You've never seen project managers basically propose the equivalent of getting a baby delivered in 1 month instead of 9 months by adding more people to the project?
But yeah, if the recruiters start asking for "10 years experience with Claude Code", then I guess a tongue-in-cheek answer would be "sure, I did 10 projects in parallel in one year".
If you can add more people to finish a project faster, I can add more projects to get experience faster.
You’re very confused, I think.
Adding more people to a project doesn’t improve throughput past a certain point. Communication and coordination overhead (between humans) is the limiting factor. This has been well known in the industry for decades.
Additionally, I’d much rather hire someone who worked on a handful of projects but actually _wrote_ a lot of the code, maintained the project for a couple of years after shipping it, and has stories about what worked and didn’t, and why. Especially a candidate who worked on a “legacy” project. That type of candidate will be much more knowledgeable and able to more effectively steer an AI agent in the best direction, taking various trade-offs into account. It’s all too easy to just ship something and move on in our industry.
Brownie points if they made key architecture decisions and if they worked on a large scale system.
Claude building something for you isn’t “learning” in my opinion. That’s like saying I can study for a math exam by watching a movie about someone solving math problems. Experience doesn’t work like that. You can definitely learn with AI but it’s a slow process, much like learning the old fashioned way.
Maybe “experience” means different things to us…
I actually prefer removing people
It actually is. A year of experience is not equal at different companies.
You could spend years writing very little code and have “years of experience” in a language, and you can also output intense volumes of work and still be within a year.
Of those two people, the one who spent less real time but produced more work can have experience equivalent to the person who spent years.
The key is to figure out how much work a person using Claude Code would have been expected to produce in 10 years, then find a way to do that much in a single year. Boom, you just solved the years of experience problem.
The obvious solution is for Anthropic et al. to certify the skills of each user:
> “Good at explaining requirements, needs handholding to understand complex algorithms, picky with the wording of comments, slightly higher than average number of tokens per feature.”
I’m not saying this would be good at all, but the data (/insights) and the opportunity are clearly there.
You’re right, and I think this is the future.
For any proctored standardized testing a person takes, AI should be able to quickly summarize that person’s abilities. This way, instead of people writing their own BS resumes, a trusted test provider can evaluate an individual deeply, solving the problem of having to waste time on coding interviews, etc. It will speed up hiring.
At work we’ve had like 10 hours of “AI training”. Like training us to use AI. I obviously learned nothing
watching all agile coaches turn into claude experts in 3 2 1 …
You joke, but that does seem to be happening from what I've seen - Agile Coaches are rebranding to become "AI coaches" or "AI Enablers".