Anthropic: Developing a Claude Code competitor using Claude Code is banned
You can use Claude Code to write code to make a competitor for Claude Code. What you cannot do is reverse engineer the way the Claude Code harness uses the API to build your own version that taps into stuff like the Max plan. Which makes sense.
From the thread:
A good rule of thumb is: are you launching a Claude Code OAuth screen and capturing the token? That is against the terms of service.
Based on the terms, Section 3, subsection 2 prohibits using Claude/Anthropic's Services:
"To develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models or resell the Services."
Clarification:
This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.
What IS prohibited:
- Using Claude to develop a competing AI assistant or chatbot service
- Training models that would directly compete with Claude's capabilities
- Building a product that would be a substitute for Anthropic's services
What is NOT prohibited:
- General ML/AI development for your own applications (computer vision, recommendation systems, fraud detection, etc.)
- Using Claude as a coding assistant for ML projects
- Training domain-specific models for your business needs
- Research and educational ML work
- Any ML development that doesn't create a competing AI service
In short: I can absolutely help you develop and train ML models for legitimate use cases. The restriction only applies if you're trying to build something that would compete directly with Claude/Anthropic's core business.
So you can't use Claude to build your own chatbot that does anything remotely like Claude, which would be basically any LLM chatbot. I know we want to turn everything into a rental economy (aka the financialization of everything), but this is just super silly.
I hope we're 2-3 years away, at most, from fully open source and open weights models that can run on hardware you can buy with $2000 and that can complete most things Opus 4.5 can do today, even if slower or that needs a bit more handholding.
Including how much RAM?
In theory, China could catch up on memory manufacturing and break the OMEC oligopoly, but they could also pursue high profits and stock prices, if they accept global shrinkage of PC and mobile device supply chains (e.g. Xiaomi pivoted to EVs), https://news.ycombinator.com/item?id=46415338#46419776 | https://news.ycombinator.com/item?id=46482777#46483079
AI-enabled wearables (watch, glass, headphones, pen, pendant) seek to replace mobile phones for a subset of consumer computing. Under normal circumstances, that would be unlikely. But high memory prices may make wearables and ambient computing more "competitive" with personal computing.
One way to outlast siege tactics would be for "old" personal computers to become more valuable than "new" non-personal gadgets, so that ambient computers never achieve mass consumer scale, or the price deflation that powered PC and mobile revolutions.
This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.
I am not a lawyer. Regardless of the list of examples below (I have been told examples in contracts and ToS are a mixed bag for enforceability), this text says that if Anthropic decides to make a product like yours, you have to stop using Claude for that product.
That is a pretty powerful argument against depending heavily on or solely on Claude.
And I'm pretty sure ban evasion can become an issue in a court of law, even if the original ToS may not hold up.
This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.
is stricter than the examples. The examples are what I think may not be enforceable.
So for example:
What is NOT prohibited:
- General ML/AI development for your own applications (computer vision, recommendation systems, fraud detection, etc.)
- Using Claude as a coding assistant for ML projects
If you use Claude for "General ML/AI development for your own applications..." and Anthropic puts out a specific product for exactly that, you probably cannot keep using Claude for it and have to use the new specific product instead. Well, as long as the example is not enforceable.
The first quote looks enforceable, and if I want to be on the safe side I have to assume it takes precedence over the examples.
But the prohibition also goes way further: it's not limited to training competing LLMs, it also covers programming any of the plumbing etc. around them...
You can use Claude Code to write code to make a competitor for Claude Code.
No, the ToS literally says you cannot.
If you use Claude models through Cursor, Anthropic still applies its own policies on usage. Just recently they cut off xAI employees' access to Claude models on Cursor [0]. X has threatened to ban Anthropic from X.
[0] https://x.com/kyliebytes/status/2009686466746822731?s=46
And Anthropic really goes out of its way to ban China. Other model providers do a decent job of restricting access in general but look away when someone tries to circumvent those restrictions. Claude, though, uses extra mechanisms to make it hard. And the CEO is on record about China issues: https://www.cnbc.com/2025/05/01/nvidia-and-anthropic-clash-o...
Good luck catching me, I'm behind 7 proxies.
It's a clear anti-competitive clause by a dominant market leader; such clauses tend to be void, AFAIK.
You can't violate a void clause in a ToS or a contract,
so you can't get a ban/termination for violating a void clause.
They can still decide to not do business with you, after providing all the service you already paid for.
And that is _if_ you can decide not to do business with individuals without providing a reason; in certain situations companies can't do so (mainly related to unfair competition, market power abuse, etc.). And this is the point where my knowledge gets too thin to really say if/when/how this could or could not apply here.
Surely you cannot be sued for writing a clone, in any country.
In countries where such a clause is valid/not void, you very well can be sued for using Claude Code to work on/develop e.g. OpenCode...
so you can't get a ban/termination for violating a void clause
they still can decide to not do business with you
What's the difference?
in countries where such a clause is valid/not-void, you very well can be sued for using Claude Code to work on/develop e.g. Open Code...
Has anyone in any country ever got successfully sued for violating ToS terms like these? Try reading any ToS and you'll find that average use violates at least some of its terms. Even the EU's own site contains things like "including but not limited to".
Has anyone in any country ever got successfully sued for violating ToS terms like these?
Yes; rarely private individuals, but definitely companies.
What's the difference?
E.g. YouTube has been forced to reinstate (and not shadow ban) channels in the EU multiple times, because judges ruled that the terminations had no legal basis and that YouTube does not have the right to arbitrarily refuse to do business with the people in question (for various reasons, including but not limited to it being clearly retaliation for suing for rights you have; the market-dominant position of YouTube also played a non-negligible role there).
Or in other words: by itself it might not mean much, but it does in combination with other laws, especially if Claude Code becomes the market-dominant AI editor.
Most people still haven’t heard of Anthropic/Claude.
(For the record, I use Claude code all day long. But it’s still pretty niche outside of programming.)
So the only thing you really need to figure out is a VPN, and ProtonVPN provides a free VPN service which includes access to EU servers as well.
I wonder if Claude Code or these AI services block VPN access, though.
If they do, ehh, just buy a cheap EU VPS (Hetzner, my beloved) and call it a day; plus you also get a free dev box which can run your code 24x7, among other benefits.
I worked at a cloud company a while ago, and if free-tier user requests came from another cloud provider's IPs, we'd have to double-check it wasn't fraud, since that happened more often than with residential ranges.
TL;DR: you cannot reverse engineer the OAuth API without encountering this message:
https://tcdent-pub.s3.us-west-2.amazonaws.com/cc_oauth_api_e...
Add in various 2nd/3rd place players (Codex and Copilot) with employees openly using their personal accounts to cash in on the situation and there's a lot of amplification going on here.
That said, it is absolutely being enforced against other big model labs, who are mostly banned from using Claude.
Opus might currently be the best model out there, and CC might be the best of the commercial tools, but once someone switches to OpenCode + multiple model providers depending on the task, Anthropic is going to have difficulty winning them back, considering pricing and their locked-down ecosystem.
I went from Max 20x and ChatGPT Pro to Claude Pro and ChatGPT Plus + OpenRouter providers, and I have now cancelled Claude Pro and GPT Plus, keeping only Gemini Pro (super cheap) and using OpenRouter models + a local AI workstation I built, running MiniMax M2.1 and GLM 4.7. I use Gemini as the planner and my local models as the churners. Works great; the local models might not be as good as Opus 4.5 or Sonnet 4.7, but they are consistent, which is something I had been missing with all commercial providers.
You went from a 5 minute signup (and 20-200 bucks per month) to probably weeks of research (or prior experience setting up workstations) and probably days of setup. Also probably a few thousand bucks in hardware.
I mean, that's great, but tech companies are a thing because convenience is a thing.
Even paying API pricing, it was significantly cheaper than the nearly $500 I was paying monthly before (I was down to about $100 a month combined between Claude Pro, ChatGPT Plus, and OpenRouter credits).
Only when I knew exactly the setup I wanted locally did I start looking at hardware. That part has been a PITA since I went with AMD for budget reasons, and it looks like I'll be writing my own inference engine soon, but I could have gone with Nvidia and had far fewer issues (for double the cost: dual Blackwells vs. quad Radeon W7900s for 192GB of VRAM).
If you spend twice what I did and go Nvidia, you should have nearly no issues running any models. But using OpenRouter is super easy; there are always free models (Grok famously was free for a while), and there are very cheap and decent models.
All of this doesn't matter if you aren't paying for your AI usage out of pocket. I am, so Anthropic's and OpenAI's value proposition vs. basically free Gemini + OpenRouter or local models is just not there for me.
but I could have gone with Nvidia and had much less issues (for double the cost, dual Blackwell's vs quad Radeon W7900s for 192GB of VRAM).
If you spend twice what I did and go Nvidia you should have nearly no issues running any models.
I googled what a Radeon W7900 costs and the result on Amazon was €2800 a piece. You say "quad", so that's €11,200 (and that's just the GPUs).
You also say "spend twice what I did", which would put the total hardware costs at ~€25000 total.
Excuse me, but this is peak HN detachment from the experience of most people. You propose spending the cost of a car on hardware.
The average person will just pay Anthropic €20 or €100 per month and call it a day, for now.
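To sanity-check the back-of-envelope above, here is the arithmetic spelled out, using only the commenter's quoted figures (the €2800 price is from their Amazon search, not a list price):

```python
# GPU-only cost of the quad-W7900 build, and the "spend twice what I did"
# Nvidia route, using the commenter's quoted prices (illustrative, not MSRPs).
gpu_price_eur = 2800              # one Radeon Pro W7900, per the Amazon result above
quad_gpu_cost = 4 * gpu_price_eur

print(quad_gpu_cost)              # 11200 -- just the GPUs
print(2 * quad_gpu_cost)          # 22400 -- "twice", i.e. ~25000 once the rest
                                  # of the system is included
```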
I'm planning on writing a ROCm inference engine anyway, or at least contributing to the ROCm vLLM or SGLang implementations for my cards, since I'm interested in the field. Funnily enough, I wouldn't consider myself bullish on AI; I just want to really learn the field so I can evaluate where it's heading.
I spent about 10k on the cards, though the upgrades were piece meal as I found them cheap. I still have to get custom water blocks for them since the original W7900s (which are cheap) are triple slot, so you can't fit 4 of them in any sort of workstation setup (I even looked at rack mount options).
Bought a used Threadripper Pro WRX80 motherboard ($600), the cheapest TR Pro CPU for the board (3945WX, $150), and three 128GB DDR4-3200 sticks at $230 each before the craze; I was planning on populating all 8 channels if prices went down a bit. Each stick is now $900, more than I paid for all three combined ($730 with S&H and taxes). So the system is staying as is until prices come down.
For AI-assisted programming, the best value prop by far is Gemini (free) as the orchestrator + OpenCode using either free models or Grok / MiniMax / GLM through their very cheap plans (for MiniMax or GLM), or OpenRouter, which is very cheap. You can also find some interesting providers like Cerebras, who offer silly-fast token generation, which enables interesting use cases.
Also, most of the lower-end models aren't that good. At this point you can take an experienced dev and start implementing an app using any mature stack (I'm talking even about stuff like Ada, so not just C/C++, JS, etc.) on any mainstream platform (big 3 desktop + big 2 mobile + web) and get quite far with Claude Code. By "far" I mean you'll do at least the first 80% really quickly, at which point the experienced dev needs to take over. I think you can even get 95% of the way there. And that's with a stack the dev is unfamiliar with at the start.
As a consumer, I do absolutely prefer the latter model, but I don't think that is the position I would want to be in if I were Anthropic.
Just wanted to make sure before I sign up for an OpenAI sub.
Edit: for those downvoting, I would earnestly like to hear your thoughts. I want OpenCode and similar solutions to win.
Anthropic's COGS is the rent on x amount of H100s. The cost of a marginal query for them is almost zero until the batch fills up and they need a new cluster. So API clusters are usually built for peak load, with low utilization (unfilled batches) at any given time. Given that AI's peak demand is extremely spiky, they end up with low utilization numbers on the API side.
Your subscription is supposed to use that free capacity. Hence the token costs are not that high, and hence you could buy it cheaply. But it needs careful management so that you don't overload the system. There is Claude Code telemetry which identifies the request as lower priority than API traffic (and probably decides on queueing + caching too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code does, it's overwhelming the system and degrading performance for others too. And if everyone just wants to use the subscription and you have no API takers, the price of the subscription is not sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.
It's reasonable for a company to unilaterally decide how it monetizes its extra capacity, and it's not unjustified to care. With a subscription you are not purchasing the promise of X tokens; for that you need the API.
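The "marginal query is nearly free until the batch fills" point can be sketched with a toy model. The rent and capacity numbers below are made-up illustrations, not Anthropic's actual figures:

```python
# Toy model of batch-serving economics: a cluster costs a fixed hourly rent
# and can serve up to BATCH_CAPACITY concurrent requests. Total cost is the
# rent of however many clusters you need, so the marginal request is free
# until it forces a new cluster. All numbers are hypothetical.
import math

CLUSTER_RENT_PER_HOUR = 250.0   # hypothetical H100 cluster rent, USD/hour
BATCH_CAPACITY = 64             # hypothetical concurrent requests per cluster

def hourly_cost(concurrent_requests):
    clusters = max(1, math.ceil(concurrent_requests / BATCH_CAPACITY))
    return clusters * CLUSTER_RENT_PER_HOUR

def marginal_cost(n):
    # Cost of serving the n-th concurrent request on top of n - 1 others.
    return hourly_cost(n) - hourly_cost(n - 1)

print(marginal_cost(10))   # 0.0: the batch has spare slots
print(marginal_cost(64))   # 0.0: still fits in one cluster
print(marginal_cost(65))   # 250.0: request 65 forces renting a second cluster
```

This is the shape of the argument above: subscriptions are cheap while they fill otherwise-idle batch slots, and expensive the moment aggregate load forces new capacity.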
Your subscription is supposed to use that free capacity. Hence the token costs are not that high, and hence you could buy it cheaply. But it needs careful management so that you don't overload the system. There is Claude Code telemetry which identifies the request as lower priority than API traffic (and probably decides on queueing + caching too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code does, it's overwhelming the system and degrading performance for others too. And if everyone just wants to use the subscription and you have no API takers, the price of the subscription is not sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.
I understand what you mean, but outright removing the ability for other agents to use the Claude Code subscription is still really harsh.
If telemetry really is the reason (note: I doubt it is; I think the marketing/lock-in aspect matters more, but for the sake of discussion let's assume telemetry is in fact the reason),
then they could've simply worked in coordination with OpenCode or other agent providers. In fact, this is what OpenAI is doing: they recently announced a partnership/collaboration with OpenCode and are actively embracing it. I am sure that OpenCode and other agents could generate telemetry, or at least support such a feature if need be.
Telemetry is a reason, and it's also the stated reason. Marketing is plausible and likely part of the reason too, but lock-in would have meant this happened much sooner. They would not even be offering an API if they really wanted to lock people in; that is not consistent with their other actions.
At the same time, the balance is delicate. If you get too many subscription users and not enough API users, then suddenly the setup is not profitable anymore, because there is less underused capacity available to direct subscription users to. This probably explains part of their stance too, and why they hadn't done it until now. OpenAI never allowed it, and now that they do, they will make more changes to the auth setup, which Claude did not. (This episode tells you how duct-taped the whole system was at Anthropic: third-party tools used the auth key to generate a Claude Code token, and just used that to hit the API servers.)
Maybe OpenAI has a 12x markup on API credits, or Anthropic is much better at running inference, but my best guess is that Anthropic is selling at a large loss.
And if inference is so profitable, why is OpenAI losing $100B a year?
This kind of thing is pretty standard. Nobody wants to be a vendor on someone else's platform. Anthropic would likely not complain too much about you using z.ai in Claude Code. They would prefer that. They would prefer you use gpt-5.2-high in Claude Code. They would prefer you use llama-4-maverick in Claude Code.
Because regardless of how profitable inference is, if you're not the closest to the user, you're going to lose sooner or later.
1. Pay for a stock photo library and train an image model with it that I then sell.
2. Use a spam detection service, train a model on its output, then sell that model as a competitor.
3. Hire a voice actor to read some copy, train a text to speech model on their voice, then sell that model.
This doesn't mean you can't tell Claude "hey, build me a Claude Code competitor". I don't even think they care about the CLI. It means I can't ask Claude to build things, then train a new LLM based on what Claude built. Claude can't be your training data.
There's an argument to be made that Anthropic didn't obtain their training material in an ethical way so why should you respect their intellectual property? The difference, in my opinion, is that Anthropic didn't agree to a terms of use on their training data. I don't think that makes it right, necessarily, but there's a big difference between "I bought a book, scanned it, learned its facts, then shredded the book" and "I agreed to your ToS then violated it by paying for output that I then used to clone the exact behavior of the service."
I don't even think they care about the CLI
No, they actually do. They provide the Claude Code subscription for $200/month, which is a loss leader; you can realistically get around $300-400 of value per month, or even more (close to $1000), if you were paying API prices.
So why do they bear the loss, I hear you ask?
Because it acts as a marketing expense for them. They get so much free advertising from Claude Code, and Claude Code is still closed source; they enforce a lot of restrictions, as others mention (sometimes even downstream). I have seen efforts to run other models on top of Claude Code, but that's not a first-class citizen and there is still some lock-in.
On the other hand, something like OpenCode is really perfect, has no lock-in, and is absolutely goated. Now those folks and others had created a way to use the Claude subscription itself: I think you were able to just go through the OAuth sign-in and that was about it.
It all worked great, except for Anthropic: the whole reason they run the subscription is the marketing and lock-in, which OpenCode and others removed, so they came and did this. OpenCode also prevents lock-in and makes it really easy to swap models, which many really like, removing the dependence on Claude as well.
I really hate this behaviour, and when you come to think about it, it is really, really monopolistic behaviour from Anthropic.
Theo's video might help in this context: https://www.youtube.com/watch?v=gh6aFBnwQj4 (Anthropic just burned so much trust...)
In fact, there's exactly nothing illegal about me replacing what Anthropic is doing with books by me personally reading the books and doing the job of the AI with my meat body (unless I'm quoting the text in a way that's not fair use).
But that's not even what's at issue here. Anthropic is essentially banning the equivalent of people buying all the Stephen King books and using them to start a service that specifically makes books designed to replicate Stephen King's writing. Claude being able to talk about Pet Sematary doesn't compete with the sale of Pet Sematary. An LLM trained on Stephen King books with the purpose of creating rip-off Stephen King books arguably does.
…you can use Claude Code in Zed, but you can't hijack the rate limits to do other AI stuff in Zed.
This was a response to my asking whether we can use the Claude Max subscription for the awesome inline assistant (Ctrl+Enter in the editor buffer) without having to pay for yet another metered API.
The answer is no, the above was a response to a follow up.
An aside - everyone is abuzz about “Chat to Code” which is a great interface when you are leaning toward never or only occasionally looking at the generated code. But for writing prose? It’s safe to say most people definitely want to be looking at what’s written, and in this case “chat” is not the best interaction. Something like the inline assistant where you are immersed in the writing is far better.
I mean, they could have put _exactly_ that into their terms of service.
Hijacking rate limits is also never really legal, AFAIK.
Very poor communication, despite somewhat reasonable intentions, could be the beginning of the end for Claude Code.
They lose money on the $200/month plan, maybe even quite a bit. So that plan only exists to subsidize their editor.
Could be about the typical "all must be under our control" power fantasies companies have.
But if there really is "no moat" and open models will be competitive in a matter of time, then having "the" coding editor might be majorly relevant for driving sales. Ironically, they seem to have already kind of lost that, if what some people say about Claude Code vs. OpenCode is true...
│ Total │ │ 3,884,609 │ 3,723,258 │ 215,832,272 │ 3,956,313,197 │ 4,179,753,336 │ $3150.99 │
It calculates tokens at public API pricing. But Anthropic models are also generally more expensive than others, so I guess it's sort of "self-made" value? Some of it?
│ Total │ │ 2,102,742 │ 622,848 │ 78,507,465 │ 1,670,798,000 │ 1,752,031,055 │ $1283.69 │
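For context, the totals rows above are ccusage-style output. Assuming the columns are input, output, cache-write, cache-read, and total tokens (the first row's numbers do sum that way), the arithmetic is just token counts times per-million-token rates. The rates below are illustrative assumptions, not Anthropic's actual price list:

```python
# Sketch of the arithmetic behind a ccusage-style totals row: token counts
# times a per-million-token rate per category. Rates here are illustrative
# assumptions, not Anthropic's prices (which also vary by model).

RATES_PER_MTOK = {          # hypothetical blended rates, USD per million tokens
    "input": 3.00,
    "output": 15.00,
    "cache_write": 3.75,
    "cache_read": 0.30,
}

def estimate_cost(input_t, output_t, cache_write_t, cache_read_t):
    counts = {
        "input": input_t,
        "output": output_t,
        "cache_write": cache_write_t,
        "cache_read": cache_read_t,
    }
    total_tokens = sum(counts.values())
    cost = sum(counts[k] / 1e6 * RATES_PER_MTOK[k] for k in counts)
    return total_tokens, round(cost, 2)

# Plugging in the first totals row quoted above:
tokens, cost = estimate_cost(3_884_609, 3_723_258, 215_832_272, 3_956_313_197)
print(tokens)  # 4179753336 -- matches the row's total-tokens column
print(cost)    # 2063.77 with these made-up rates; the quoted $3150.99
               # reflects the user's actual per-model rates
```

Note how the cache-read column dominates the token count but contributes comparatively little to the cost, which is most of why heavy Claude Code sessions show such large "value" totals.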
But being first doesn't mean you're necessarily the best. Not to mention, they weren't the first anyway (Aider was).
Codex and OpenCode are competition for coding, but if we talk about an open-ended agentic harness for doing useful work... well, a year ago things were bad enough that I'd have considered anyone even claiming one existed to be a grifter, and now we have one.
And while being first might not matter, I think having both post-training and the harness being developed under the same roof is going to be hard to beat.
Developers want to use these third-party clients and pay you $200 a month; why are you pissing us off?
Presumably because it costs them more than $200 per month to sell you it. It's a loss leader to get you into their ecosystem. If you won't use their ecosystem, they'd rather you just go over to OpenAI.
that has 80% of what CC does
OpenCode already does 120% of what CC does.
I don't pay for any AI subscription. I just end up building single-file applications, but they might not be that good sometimes, so I did this experiment: I ask Gemini in AI Studio, or ChatGPT, or Claude, get the files, then just paste them into OpenCode and ask it to build the file structure and paste everything in.
If your project involves setting up something like SvelteKit, or any boilerplate project with many, many files, I recommend this workflow to get the best of both worlds for essentially free.
To be really honest, I mostly just end up creating single-file main.go programs for my use cases from the website directly, and I really love them. Sure, code understandability takes a bit of a hit, but my projects usually stay around ~600 to 1000, at most 2000, lines, and I really love this workflow; for me personally, it's just one of the best.
When I try AI agents, they really end up creating 25 or 50 files and overarchitecting. I use AI mostly for prototyping, and that overarchitecture actually hurts.
Mostly I just create software for my own use cases though. Whenever I face any problem that I find remotely interesting that impacts me, I try to do this and this trend has worked remarkably well for me for 0 dollars spent.
I remember when I was part of procuring an analytics tool for a previous employer and they had a similar clause that would essentially have banned us from building any in-house analytics while we were bound by that contract.
We didn't sign.
Sounds like standard terms from lawyers – not very friendly to customers, very friendly to company – but is it particularly bad here?
Compilers don't come with terms that prevent you from building competing compilers. IDEs don't prevent you from writing competing IDEs. If coding agents are supposed to be how we do software engineering from now on, yeah, it's pretty bad.
Sounds like standard terms from lawyers
If they were "standard" terms, then how come no other AI provider imposes them?
Literally the first 4 SaaS companies that came to my mind to check (Atlassian/Jira, Linear, Pipedrive, Stackblitz/Bolt.new) have a similar clause in their TOS.
How would they even detect that you used CC on a competitor? There's surely no ethical reason not to do it, and it seems unenforceable.
OpenAI hoovered up everything they could to train their model with zero shits about IP law. But the moment other models learned from theirs they started throwing tantrums.
This tweet reads as nonsense to me
It's quoting:
This is why the supported way to use Claude in your own tools is via the API. We genuinely want people building on Claude, including other coding agents and harnesses, and we know developers have broad preferences for different tool ergonomics. If you're a maintainer of a third-party tool and want to chat about integration paths, my DMs are open.
And the linked tweet says that such integration is against their terms.
The highlighted term says that you can't use their services to develop a competing product/service. I don't read that as the same as integrating their API into a competing product/service. It does seem to suggest you can't develop a competitor to Claude Code using Claude Code, as the title says, which is a bit silly, but doesn't contradict the linked tweet.
I suspect they have this rule to stop people using Claude to train other models, or competitors testing outputs etc, but it is silly in the context of Claude Code.
It's always been: socialize the losses, capitalize the gains. And all the while, legislate rules to block upstart companies.
It's never, ever been "fair". He who has the most gold makes the rules.
It would be like if Carnegie Steel somehow could have prohibited people from buying their steel in order to build a steel mill.
And moreover in this case the entire industry as it exists today wouldn't exist without massive copyright infringement, so it's double-extra ironic because Anthropic came into existence in order to make money off of breaking other people's rules, and now they want to set up their own rules.
Is this them saying that their human developers don’t add much to their product beyond what the AI does for them?
If this is only meant to limit knowledge distillation for training new models, or people copying Claude Code specifically, or Max plan credentials being used as an API replacement, they could properly carve out exceptions rather than being so broad, which risks turning away new customers for fear of (future) conflict.
I've also seen a two-week period where latency on fine-tuned (= enterprise) Gemini 2.5 Flash model endpoints increased 10x+, again undocumented and unacknowledged, because they shifted all of their GPU capacity toward going "viral" on people generating slop artwork around the Nano Banana Pro release.
So plenty of silent shenanigans do happen, including by the Big 3 on API endpoints. At the same time I agree with you that all those rumors of "degradation in code quality" are very much unproven.
prompts where we'd ask for e.g. 15 sections, would only do 10 sections and then ask "Would you like me to continue?".
I can’t speak to any kind of identifiable pattern but man that behavior drives me up the wall when it happens.
When I run into a specific task that starts to trigger that behavior, even a clean session with explicit instructions directing it to complete ALL sub steps isn’t enough to push it to finish the entire request to the end.
Not a very hacker-friendly strategy, but Apple's market cap is pretty big. I think it comes down to whether Anthropic can make a product with enough of a lead over competitors to offset the restrictions.
The ToS is concerning, I have concerns with Anthropic in general, but this policy enforcement is not problematic to me.
(yes, I know, Anthropic's entire business is technically built on scraping. but ideally, the open web only)
Which seems fine? They could've just not offered the $200 plan and perhaps nobody would've complained. They tried it, noticed it being unsustainable, so they're trying to remodel it so it _is_ sustainable.
I think the upset is misplaced. :shrug:
I swear to god everyone is spoiling for a fight because they're bored. All these AI companies have this language to try to prevent people from "distilling" their model into other models. They probably wrote this before even making Claude Code.
Worst case scenario they cancel your account if they really want to, but almost certainly they'll just tweak the language once people point it out.
This really shouldn't be the direction Anthropic goes in. It is such a negative direction; they could've instead tried to cooperate with the large open-source agents and communicate with them, but they decided to do this, which in the developer community is being met with criticism, and rightfully so.
Anthropic blocks third-party use of Claude Code subscriptions