Stop selling “unlimited” when you mean “until we change our minds”
I firmly believe that AI will not replace developers, but a developer using AI will replace a developer who does not.
Ugh, anyone who says that and really believes it can no longer see common sense through the hype goggles.
It's just stupid and completely 100% wrong, like saying all musicians will use autotune in the future because it makes the music better.
It's the same as betting that there will be no new inventions, no new art, no works of genius unless the creator is taking vitamin C pills.
It's one of the most un-serious claims I can imagine making. It automatically marks the speaker as a clown divorced from basic facts about human ability.
Just as the developer who refused to adopt version control, IDEs, or Stack Overflow eventually became unemployable, those who reject tools that fundamentally expand their problem-solving capacity will find themselves unable to compete with those who can architect solutions across larger possibility spaces on smaller teams.
Will it be used for absolutely every problem? No - There are clearly places where humans are needed.
But rejecting the enormous impact this will have on the workforce is trading hype goggles for a bucket of sand.
cognitive augmentation that allows developers to navigate complexity at scales human cognition wasn't designed for
I don't think you should use LLMs for something you can't master without.
will find themselves unable to compete
I'd wait a bit more before concluding so affirmatively. The AI bubble would very much like us to believe this, but we don't yet know the long-term effects of using LLMs on code, both for the project and for the developer, and we don't even know how available the LLMs will be, or under what conditions, in a few months, as evidenced by this HN post. That's not a very solid basis to build on.
To your second point -- With as much capital as is going into data center buildout, the increasing availability of local coding LLMs that near the performance of today's closed models, and the continued innovation on both open/closed models, you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?
I think we simply don't have similar mental models for predicting the future.
Which one wins?
We don't really know yet, that's my point. There are contradictory studies on the topic. See for instance [1], which finds a productivity decrease when AI is used. Other studies show the opposite. We are also seeing the first wave of blog posts from developers abandoning the LLMs.
What's more, most people are not masters. This is critically important. If only masters see a productivity increase, others should not use it... and will still get employed, because the masters won't fill all the positions. In this hypothetical world, masters not using LLMs also have a place by construction.
With as much capital as is going into
Yes, we are in a bubble. And some are predicting it will burst.
the continued innovation
That's what I'm not seeing. We are seeing small but very costly improvements on a paradigm that I consider fundamentally flawed for the tasks we are using it for. LLMs still cannot reason, and that's IMHO a major limitation.
you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?
I didn't say I was going to, but since you are asking: oh yes, I'm not putting my eggs in a basket that could abruptly disappear or become very costly.
I simply don't see how this thing is going to be cost efficient. The major SaaS LLM providers can't seem to manage profitability, and maybe at some point the investors will get bored and stop sending billions of dollars towards them? I'll reconsider when and if LLMs become economically viable.
But that's not my strongest reason to avoid the LLMs anyway:
- I don't want to increase my reliance on SaaS (or very costly hardware)
- I have not yet caved in and participated in this environmental disaster, or in this work-pillaging phenomenon (well, for that last part, I guess I don't really have a choice; I see the dumb AI bots hammering my forgejo instance).
[1] https://www.sciencedirect.com/science/article/pii/S016649722...
AI presently has a far lower footprint on the globe than the meat industry -- The US Beef industry alone far outpaces the impact of AI.
As far as "work pillaging" - There is cognitive dissonance in supporting the freedom of information/cultural progress and simultaneously desiring to restrict a transformative use (as it has been deemed by multiple US judges) of that information.
We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
consuming media narratives about why AI is bad
That's quite uncharitable.
I don't need to use it to make these points. While I might show a lack of perspective, I don't need to do X to reasonably think X can be bad. You can replace X with all sorts of horrible things, I'll let the creativity of the readers fill in the gap.
AI presently has a far lower footprint on the globe than [X]
We see the same kind of arguments for planes, cars, anything with a big impact really. It still has a huge (and growing) environmental impact, and the question is: do the advantages outweigh the drawbacks?
For instance, if a video call tool allowed you to have a meeting without taking a plane, the video call tool had a positive impact. But then there's also the ripple effect: if without the tool, the meeting hadn't happened at all, the positive impact is less clear. And/or if the meeting was about burning huge amounts of fuel, the positive impact is even less clear, just like LLMs might just allow us to produce attention-seeking, energy-greedy shitty software at a faster speed (if they indeed work well in the long run).
And while I can see how things like ML can help (predicting weather, etc), I'm more skeptical about LLMs.
And I'm all for stopping the meat disaster as well.
We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
Yep :-)
I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience. Note that I count "social media" as media.
My proposition is that without hands-on experience, your information is limited to media narratives, and the "AI is net bad" narrative seems to be the source of your perspective.
Skepticism is warranted, and there are a million ways this technology could be built for terrible ends.
But I'm of the opinion that: A) the technology is not hype, and is getting better; B) it can, and will, be built (time horizon debatable); C) for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.
If anything, more people like you need to be engaging it to have grounded perspectives on what it could become.
I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience.
Okay, I think I got your intent better, thanks for clarifying.
You can add discussions with other people outside software media, or opinion pieces outside the media (I would not include personal blogs in "media", for instance, but would not be bothered if someone did), including people who tried it and people who didn't. Media outlets are also not uniform in their views.
But I hear you, grounded perspectives would be a positive.
That for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.
I hear you as well, makes perfect sense.
OTOH, it's difficult to engage with something that feels fundamentally wrong or like a dead end, and that's what LLMs feel like to me. It would also be frightening: the risk that, as a good person, you help shape a monster.
The only way out I can see is inventing the thing that will make LLMs irrelevant but doesn't have their fatal flaws. That's quite the undertaking though.
We'd not be competing on an equal footing: LLM providers have been doing things I would never have dared even consider: ingesting a considerable amount of source material while completely disregarding its licenses, hammering everyone's servers, spending a crazy amount of energy, sourcing a crazy amount of (very closed) hardware, burning an insane amount of money even on paid plans. It feels very brutal.
Can an LLM be built avoiding any of this stuff? Because otherwise, I'm simply not interested.
(of course, the discussion has shifted quite a bit! The initial question was if a dev not using the LLMs would remain relevant, but I believe this was addressed at large in other comments already)
The actions of a few companies do not invalidate the entire category. There are open models, trained on previously aggregated datasets (which, for what it's worth, nobody had a problem with being collected a decade ago!), doing research to make training and usage more efficient.
The technology is here. I think your assessment of its relevance is not informed by actual usage, your framing of its origins is black/white (rather than understanding the actual landscape of different model approaches), and your lack of interest in using it does nothing to change the absolutely massive shift that is happening in the nature of work. I'm a Product Manager, and the Senior Engineer I work with has been reviewing my PRs before they get merged - 60%+ were merged without much comment, and his bar is high. I did half of our last release, while also doing my day job. Safe to say, his opinion has changed based on that.
Were they massive changes? No. But these are absolutely impactful in the decision calculus that goes into what it takes to build and maintain software.
The premise of my argument is that what you see as "fatal flaws" are an illusion created by bias (which bleeds into the second-hand perspectives you cite just as readily as it does the media), rather than your direct and actual validation that those flaws exist.
My suggestion is to be an objective scientist -- use the best model released (regardless of origins) with minor research into 'best practices' to see what is possible, and then ask yourself if the 'origins' issue were addressed by a responsible open-source player in 6-12 months, whether it would change anything about your views on the likely impact of this technology and your willingness to adopt it.
I believe that it's coming, not because the hype machine tells me so (and it's WAY hyped), but because I've used it, seen its flaws and strengths, and can forecast how quickly it will change the work that I've been doing for over a decade, even if it stopped getting better (and it hasn't stopped yet).
On the technical part I do believe LLMs are fundamentally limited in their design and are going to plateau, but this we shall see. I can imagine they can already be useful in certain cases despite their limitations. I'm willing to accept that my lack of experience doesn't make my opinion so relevant here.
My suggestion is to be an objective scientist
Sure, but I also want to be a reasonable Earth citizen.
-- use the best model released (regardless of origins) with minor research into 'best practices' to see what is possible
Yeah… but no, I won't. I don't think it would have much practical impact. I don't feel like I need this anecdotal experience; I'd not use it either way. Reading studies will be far more relevant anyway.
and then ask yourself if the 'origins' issue were addressed by a responsible open-source player in 6-12 months, whether it would change anything about your views on the likely impact of this technology
I doubt it, but I'm open to changing my mind on this.
and your willingness to adopt it.
Yeah, if the thing is actually responsible (I very much doubt it is possible), then indeed, I won't limit myself. I'd try it and might use it for some stuff. Note: I'll still avoid any dependency on any cloud for programming - this is not debatable - and in 6-12 months, I won't have the hardware to run a model like this locally unless something incredible happens (including not having to depend on proprietary nvidia drivers).
What's more, an objective scientist doesn't use anecdotal datapoints like their own personal experience, they run well-designed studies. I will not conduct such studies. I'll read them.
I think that it also seems like we disagree on the foundations/premise of the technology.
Yeah, we have widely different perspectives on this stuff. It's an enriching discussion. I believe we've said just about all that could be said.
There's a clear difference between...
There's also a clear difference between users of this site that come here for all types of content, and users who have "AI" in their usernames.
I think that the latter type might just have a bit of a bias in this matter?
Beef has the benefit of seeing an end, though. Populations are stabilizing, and people are only ever going to eat so much. As methane has a 12-year life, in a stable environment the methane emissions today simply replace the emissions from 12 years ago. The carbon lifecycle of animals is neutral, so that is immaterial. It is also easy to fix if we really have to go to extremes: cull all the cattle and in 12 years it is all gone!
Whereas AI, even once stabilized, theoretically has no end to its emissions. Emissions that are essentially permanent, so even if you shut down all AI when you have to take extreme measures, the effects will remain "forever". There is always hope that we'll use technology to avoid that fate, but you know how that usually goes...
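To make that asymmetry concrete, here's a toy box model (a sketch with illustrative numbers, not real emissions inventories): a short-lived gas under constant emissions settles at a steady-state stock because decay balances input, while an effectively permanent gas just keeps accumulating.

    # Toy box model: one "unit" of methane emitted per year decays with a
    # ~12-year lifetime, so the stock plateaus; a permanent gas with the same
    # emission rate climbs without bound. All numbers are illustrative.
    METHANE_LIFETIME_YEARS = 12

    ch4_stock = 0.0
    permanent_stock = 0.0
    for year in range(100):
        ch4_stock += 1.0                             # this year's emissions
        ch4_stock *= 1 - 1 / METHANE_LIFETIME_YEARS  # ~1/12 decays each year
        permanent_stock += 1.0                       # never decays

    print(round(ch4_stock, 1))   # ~11.0: steady state near lifetime x rate
    print(permanent_stock)       # 100.0 and still climbing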
I don't think you should use LLMs for something you can't master without.
I'm not sure; I frequently use LLMs for well-scoped math-heavy functions (mostly for game development) where I don't necessarily understand what's going on inside the function, but I know what output I expect given some inputs, so it's easy for me to kind of black-box test it with unit tests and iterate on the "magic" inside with an LLM.
I guess if I really stopped and focused on math for a year or two I'd be able to code that myself too, but every time I've tried to get deeper into math, it's either way too complex for me to feel like it's time well spent, or it's just boring. So why bother?
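The loop looks roughly like this (a minimal sketch; the function and the physics are hypothetical stand-ins I made up for illustration; in practice the body is the LLM-generated part and the tests are mine):

    # Sketch of the black-box workflow: the function body is the LLM-written
    # "magic"; the tests only pin down the input/output contract I care about.
    # `ballistic_launch_angle` is a hypothetical example, not a real project.
    import math

    def ballistic_launch_angle(distance: float, speed: float,
                               gravity: float = 9.81) -> float:
        """Low launch angle (radians) to hit a target `distance` away."""
        return 0.5 * math.asin(gravity * distance / speed ** 2)  # LLM-written

    def test_known_value():
        # With g*d/v^2 == 0.5 the answer must be asin(0.5)/2 == pi/12.
        assert math.isclose(ballistic_launch_angle(5.0, 10.0, gravity=10.0),
                            math.pi / 12)

    def test_flatter_for_closer_targets():
        # Same speed, shorter distance -> flatter shot.
        assert ballistic_launch_angle(5.0, 10.0) < ballistic_launch_angle(9.0, 10.0)

    if __name__ == "__main__":
        test_known_value()
        test_flatter_for_closer_targets()
        print("ok")  # regenerate the magic with the LLM until this passes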
I didn't have such cases in mind, was replying to the "navigate complexity at scales human cognition wasn't designed for" aspect.
The use cases of these GPT tools are extremely limited. They demo well and are quite useful for highly documented workflows (e.g., they are very good at creating basic HTML/JS layouts and functionality).
However, even the most advanced GPT tools fall flat on their face when you start working with any sort of bleeding edge, or even just less-ubiquitous technology.
The Godot engine is an open-source project that has matured significantly since GPT tools hit the market.
The GPTs don't know what the new Godot features are, and there is a training gap that I'm not sure OpenAI and their competitors will ever be able to overcome.
Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.
Godot with AI was definitely a worse experience than usual for me. I did not use the Godot editor, even though the development flow for Godot seems to be based around it. Scenes were generated through a Python script, which was of course written by Claude Code. Personally, I reviewed no line of code during the process.
My findings afterwards are:
1) Code quality was not good. I have a year of experience working with Unity, and the code examples online tend to be of incredibly poor quality; my guess is that if AI is trained on the online corpus of game development forums, the output should be absolutely terrible. Game development especially is tainted with this poor-quality training data. It did indeed not follow modern practices, even after I had hooked up a context MCP which provides code examples.
2) It was able to refactor the codebase to modern practices when instructed to. I told it to figure out what modern practices were and to apply them, and it started making modifications like adding type hints and such. Commonly you would use predefined rules for this with an LLM tool; I did not use any for my experiment. That would be a one-time task, after which the AI will prefer your way of working. An example for Godot can be found here: https://github.com/sanjeed5/awesome-cursor-rules-mdc/blob/ma...
3) It was very difficult to debug for Claude Code. The platform seems to require working with a dedicated editor, and the flow for debugging is either through that editor or by launching the game and interacting with it. This flow is not suitable at the moment for out of the box Claude Code or similar tools which need to be able to independently verify that certain functions or features work as expected.
Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.
Not really - I work on developer experience and internal developer platforms. That is 80~90% Python, Go, Bash, and Terraform, and maybe 10~20% TypeScript with React depending on the project.
I just use "AI" instead of Google/SO when I need to find something out.
So far it mostly answers correctly, until the truthful answer comes close to "you can't do that". Then it goes off the rails and makes up shit. As a bonus, it seems to confuse related but less popular topics and mixes them up. Specific example, it mixes couchdb and couchbase when I ask about features.
The worst part is 'correctly' means 'it will work but it will be tutorial level crap'. Sometimes that's okay, sometimes it isn't.
So it's not that it doesn't work for my flow, it's that I can't trust it without verifying everything so what flow?
Edit: there's a codebase that i would love to try an "AI" on... if i wouldn't have to send my customer's code to $random_server with $random_security with $untrustable_promises_of_privacy. Considering how these "AI"s have been trained, I'm sure any promise that my code stays private is worth less than used toilet paper.
Gut feeling is the "AI" would be useless because it's a not-invented-here codebase with no discussion on StackOverflow.
Human cognition wasn't designed to make rockets or AIs, but we went to the moon and the LLMs are here. Thinking and working and building communities and philosophies and trust and math and computation and educational institutions and laws and even Sci Fi shows is how we do
we went to the moon
We also killed quite a few astronauts.
But the loss of their lives also proves a point: that achievement isn't a function of intelligence but of many more factors like people willing to risk and to give their lives to make something important happen in the world. Loss itself drives innovation and resolve. For evidence, look to Gene Kranz: https://wdhb.com/wp-content/uploads/2021/05/Kranz-Dictum.pdf
https://en.wikipedia.org/wiki/Rogers_Commission_Report#Flawe...
Loss itself drives innovation and resolve
True, but did NASA in 1986 really need to learn this lesson?
This isn't (just) rocket science, it's the fundamentals of risk liability, legality and process that should be well established in a (quasi-military) agency such as this.
They knew they were taking some gambles to try to catch up in the Space Race. The urgency that justified those gambles was the Cold War.
People have a social tendency to become complacent about catastrophic risks when there hasn't been a catastrophe recently. There's a natural pressure to "stay chill" when the people around you have decided to do so. Speaking out about risk is scary unless there's a culture of people encouraging other to speak out and taking the risks seriously because they all remember how bad things can be if they don't.
Someone actually has to stand up and say "if something is wrong I really actually want to and need to know." And the people hearing that message have to believe it, because usually it is said in a way that it is not believed.
I see people rapidly unlearning how to work by themselves and becoming dependent on GPT, making themselves quite useless in the process. They no longer understand what they're working with and need the help of the tool to work. They're also entirely helpless when whatever 'AI' tool they use can't fix their problem.
This makes them both more replaceable and less marketable than before.
It will have and already has a huge impact. But it's kinda like the offshoring hype from a decade ago. Everyone moved their dev departments to a cheaper country, only to later realize that maybe cheap does not always mean better or even good. And it comes with a short term gain and a long term loss.
This passage forces me to conclude that this comment is sarcasm. Neither IDEs nor the use of Stack Overflow is anywhere near a requirement for being a professional programmer. Surely you realize there are people out there who are happily employed while still using stock Vim or Emacs? Surely you realize there are people out there who solve problems simply by reading the docs and thinking deeply rather than asking SO?
The usage of LLM assistance will not become a requirement for employment, at least not for talented programmers. A company gating on the use of LLMs would be preposterously self-defeating.
And AI already excels at building those sorts of things faster and with cleaner code. I’ve never once seen a model generate code that’s as ugly and unreadable as a lot of the low quality code I’ve seen in my career (especially from Salesforce “devs” for example)
And even the ones that do the more creative problem solving can benefit from AI agents helping with research, documentation, data migration scripts, etc.
Yet the blanket statement is that I will fail and be replaced, and in fact that people like me don't exist!
So heck yeah I'll come clap back on that.
Code is like law. The goal isn't to have a lot of it. Come to me when by "more productive" you actually mean that the person using the LLM deleted more lines of code than anyone else around them while preserving the stability and power of the system
There is absolutely something real here, whether you choose to believe it or not. I'd recommend taking a good faith and open minded look at the last few months of developments. See where it can benefit you (and where it still falls way short).
So even if you may have arrived at your conclusion years ago, I assure you that things continue to improve by the week. You will be pleasantly surprised. This is not all or nothing, nor does it have to be.
the overwhelming majority of developers who are paid for their work daily do pretty mundane and boring software development
So are musicians. We think of them as doing creative stuff but a vast majority is mundane.
(though who knows, maybe at some time in the future there will be significant numbers of people programming as a hobby and wanting to be coached by a human...)
*: I'm aware of cases like the recent ffmpeg assembly usage that gave a big performance boost. When talking about industrial trend lines, I'm OK with admitting 0.001% exceptions.
(Apologies if it comes across as snarky or pat, but I honestly think the comparison is reasonable.)
But... what else? These things are rare. It’s not like there’s a new thing that comes along every few years and we all have to jump on or be left behind, and LLMs are the latest. There’s definitely a new thing that comes along every few years and people say we have to jump on or be left behind, but it almost never bears out. Many of those ended up being useful, but not essential.
I see no indication that LLMs or associated tooling are going to be like compilers and version control where you pretty much can’t find anyone making a living in the field without them. I can see them being like IDEs or debuggers or linters where they can be handy but plenty of people do fine without them.
The basic nature of my job is to maintain the tallest tower of complexity I can without it falling over, so I need to take complexity and find ways to confine it to places where I have some way of knowing that it can't hurt me. LLMs just don't do that. A leaky abstraction is just a layer of indirection, while a true abstraction (like a properly implemented high-level language) is among the most valuable things in CS. Programming is theory-building!
Do the people in this corner use compilers? Would they agree that programmers who don't use them* have been replaced by those that do?
Are you aware compilers are deterministic most of the time?
If a compiler had a 10% chance of erasing your code instead of generating an executable you'd see more people still using assembly.
Where would you put the peak? Fortran was invented in the 50’s. The total population of programmers was tiny back then…
Nobody knows how this will play out yet. Reality does not care about your feelings, unfortunately.
But on the other hand, there is the other end who think AGI is coming in a few months and LLMs are omniscient knowledge machines.
There is a sweet spot in the middle.
But the big thing is using AI to learn new things, explain some tricky math in a paper I am reading, help brain storm, etc. The value of AI is in improving ourselves.
explain some tricky math in a paper I am reading
To me this seems to be the single most valuable use case of newer "AI tools"
generating a Bash shell script quickly
I do this very often, and this seems to me the second most valuable use case of newer "AI tools"
The value of AI is in improving ourselves
I agree completely.
help brain storm
This strikes me as very concerning. In my experience, AI brainstorming ideas are exceptionally dull and uninspired. People who have shared ideas from AI brainstorming sessions with me have OVERWHELMINGLY come across as AI brained dullards who are unable to think for themselves.
What I'm trying to say is that Chat GPT and similar tools are much better suited for interacting with closed systems with strict logical constraints, than they are for idea generation or writing in a natural language.
Really, it is like students using AI: some are lazy and expect it to do all the work, some just use it as a tool as appropriate. Hopefully I am not misunderstanding you and others here, but I think you are mainly complaining about lazy use of AI.
but you're right that "I firmly believe that AI will not replace developers, but a developer using AI will replace a developer who does not." could have multiple other readings too.
Let's be fair - I made it intentionally a little provocative :)
What I might not have mentioned is that I've spent the last 5 years and 20,000 or so hours building an IDE from scratch. Not a fork of VSCode, mind you, but the real deal: a new "kernel" and integration layer with abilities that VSCode and its forks can't even dream of. It's a proper race and I'm about to drop the hammer on you.
Even if things are going the direction you say, though, Kilo is still just a fork of VSCode. Lipstick on a pig, perhaps. I would bet that I know the strengths and weaknesses of your architecture quite a lot better than anyone on the Kilo team because the price of admission for you is not questioning any of VSCode's decisions, while I consider all of them worthy of questioning and have done so at great length in the process of building something from scratch that your team bypassed.
I believe that at some point, AI will get good enough that most companies will eventually stop hiring someone that doesn’t utilize AI. Because most companies are just making crud (pun intended). It’ll be like specialized programming languages. Some will exist, and they may get paid a lot more, but most people won’t fall into that category. As much as we like to puff ourselves up, our profession isn’t really that hard. There are a relative handful of people doing some really cool, novel things. Some larger number doing some cool things that aren’t really novel, just done very nicely. And the majority of programmers are the rest of us. We are not special.
What I don’t know is the timing. I don’t expect it to be within 5 years (though I think it will _start_ in that time), but I do expect it within my career.
But I literally can not cancel. Trying the app says "you signed up on a different platform, go there" but it doesn't tell me which platform that might be.
Trying to cancel on mobile web gives several upgrade options but no cancel options.
So, do I need to call my credit card? This is the worst dark pattern on subscription I have seen of any service I have ever paid for!
Anthropic had a fairly positive image in my head until they cut off my access and are not giving me a way to cancel my plan.
Edit: after mucking with the Stripe credit card payment options I found a cancel plan button underneath the list of all invoices. So there is an option, I just had a harder time finding it than I have had with other services. Successfully cancelled!
Gemini Advanced offered 2.5 Pro with nearly unlimited rate limits, then nerfed it to 100/day.
OpenAI silently nerfed the maximum context window of reasoning models in their Pro plan.
Accompanying the nerf is usually a psy op, like nerfing to 50/day then increasing it to 100/day so the anchoring effect reduces the grievance.
It's a smart ploy because as much as we like to say there's no moat, the user does face provider switching costs (time and effort), which serves as a mini-moat for status quo provider.
So providers have an incentive to rope people in with a loss leader, and then rug pull once they gained market share. Maybe 40% of the top 5% of Claude users are now too accustomed to their Claude-based workflows, and inertia will keep them as customers, but now they're using the more expensive API instead. Anthropic won.
Modern bait and switch, although done intelligently so no laws are broken.
To the degree there is a moat, I do not think it will be effective at keeping people in. I had already been somewhat disillusioned with the AI hype, but now I am also disillusioned with the company who I thought was the best actor in the space. I am happy that there is unlikely to be a dominant single winner like there was for web search or for operating systems. That is, unless there's a significant technological jump, rather than the same gradual improvement that all the AI companies are making.
I had already been somewhat disillusioned with the AI hype, but now I am also disillusioned with the company
Likewise: a faulty, unproven, hallucinating, error-prone service, however good, was a good value at approx 25 USD/month in an "absolutely all you can eat", wholesale regime ...
... now? Reputational risk aside, they force their users to appraise their offering in terms of actual value offered, in the market.-
On the rare occasion that it does, I try to circle back and mitigate the root cause so that I can resume a loyalty-free life thereafter.
As if google would say that yes, emails are $5/mo, but there's actually a limit on number of emails daily, and also number of characters in the email. It just feels so illegal to nerf a product that much.
Same with AI companies changing routing and making models dumber from time to time.
I'm not sure what harm you think you're suffering from, and what a proper remedy might be, if you think it's illegal. I don't know if I would go that far, as there are all kinds of words most terms of service use to somehow make it so that you have already acknowledged and agreed to whatever they decide to do. So a lawyer will probably be helpful there as well.
You pay for Gemini by the token and you get the full firehose. It costs money, but less than Opus and it smokes that.
It just works. Gemini 2.5 Pro is the king of AI coding and literally everything else has to catch up.
Trust me, I can't wait until there's a model that can run locally that's as good...but for now there isn't.
Always just look at the token cost and get used to the token economics. Go into it paying. You'll get better results. I think people thinking they were somehow cheating and getting away with something similar (or better) for $20/mo are in for a big surprise.
I don't know if I would say they should have known better of course. I think Anthropic and Cursor and Windsurf were hiding it a bit. Now it's all coming out into the open and I guess you know the saying, if it's too good to be true...
the user does face provider switching costs (time and effort), which serves as a mini-moat for status quo provider.
When a provider gets memory working well, I expect them to use this to be a huge moat - ie. they won't let you migrate the memories, because rather than being human readable words they'll be unintelligible vectors.
I imagine they'll do the same via API so that the network has a memory of all previous requests for the same user.
Hell, “just open a new chat and start over” is an important tool in the toolbox when using these models. I can’t imagine a more frustrating experience than opening a new chat to try something from scratch only for it to reply based on the previous prompt that I messed up.
Maybe they added a card fee in at the end, but if they didn’t make that abundantly clear, they’ve broken a law in most countries which use the Euro.
€170 + 21.5% (Irish VAT rate) is €206.55. So not sure what you expected.
Parent clearly stated they only saw "€170+VAT" and not €206.55, so of course they expected to see €206.55 before the purchase went through. Not sure what anyone else would expect?
As long as you don't cancel, you do owe them money. But if they make cancelling intentionally hard, one would likely have a good case in court to still not pay, if one would want to go to court over this.
Update: below the fold at the bottom of the Billing page is the cancel section and cancel button.
Update 2: just clicked cancel and was offered a promo of 20% off for three months...
Update 3: FYI, I logged in to my Claude account via computer (not iOS or Android).
At the rate the Chinese are going it won't be long before I can shake the dust off my sandals of this bullshit for good.
One thing I miss, for the other users, i.e. the casual users who never use anywhere near their quota, is rollover. If you haven't used your quota this month, the unused part rolls over to the next month.
Even better: provide a counter displaying both remaining usage available and the quota reset time.
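The bookkeeping is simple enough; a sketch (the field names and the one-period rollover cap are made up, not any provider's actual scheme):

    # Sketch of quota rollover. The rollover cap is an assumed policy choice.
    from dataclasses import dataclass

    @dataclass
    class Quota:
        monthly_allowance: int   # units granted each period
        balance: int             # units currently available
        rollover_cap: int        # at most this much unused carries over

        def renew(self) -> None:
            carried = min(self.balance, self.rollover_cap)
            self.balance = self.monthly_allowance + carried

        def spend(self, units: int) -> bool:
            if units > self.balance:
                return False     # out of quota: show remaining + reset time
            self.balance -= units
            return True

    q = Quota(monthly_allowance=1000, balance=1000, rollover_cap=1000)
    q.spend(200)      # a casual month: 800 left over
    q.renew()         # next month: 1000 + min(800, 1000)
    print(q.balance)  # 1800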
But companies probably earn so much money from the vast majority of users that having good and clear limits would only empower them to actually benefit as much from the product as they can.
The AI models have a bunch of different consumption models aimed at different types of use. I work at a huge company, and we’re experimenting with different ways of using LLMs for users based on different compliance and business needs. The people using all you can eat products like NotebookLM, Gemini, ChatGPT use them much more on average and do more varied tasks. There is a significant gap between low/normal/high users.
People using an interface to a metered API, which offers a defined LLM experience consume fewer resources and perform more narrowly scoped tasks.
The cost is similar and satisfaction is about the same.
There is no such thing as "unlimited" or "lifetime" unless it's self-hosted.
In the same way your next-door supermarket has effectively "infinite soup cans" for the needs of most people.
I thought I had low usage, with my 1.5 years' worth saved up. The only reason I pay for that plan is that with anything lower, my provider does not offer rollover.
E.g., here in Slovenia, if you want unlimited calls and texting, you get 150GB in your "package" for 9.99eur, but you somehow can't save that data for the next month.
https://www.hot.si/ponudba/paketi.html (not affiliated)
Nothing in our world is truly unlimited. Digital services and assets have different costs than their physical counterparts, but that just means different limits, not a lack of them. Electrical supply, compute capacity, and storage are all physical things with real world limits to how much they can do.
These realities eventually manifest when someone tries to build an "unlimited" service on top of limited components, similar to how you can't build a service with 99.999% reliability when it has a critical piece that can only get to 99.9%.
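The reliability point is just multiplication of availabilities along the critical path; a quick check:

    # Availability of a serial system is at most the product of its parts,
    # so one 99.9% critical dependency caps the whole service below 99.999%.
    critical_piece = 0.999
    everything_else = 0.99999
    print(critical_piece * everything_else)  # 0.99899..., short of five nines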
In some cases, people discover creative ways to resell the service. Anthropic mentioned they suspect this was happening.
The weirdest part about this whole internet uproar, though, is that Anthropic never offered unlimited usage. It was always advertised as higher limits.
Yet all the comment threads about it are convinced it was unlimited and now it’s not. It’s weird how the internet will wrap a narrative around a story like this.
When you order the second plate, it comes without the sauce and tastes flatter. You're full at this point and you can't order a third.
Very creative and fun if you ask me. I was prepared for this though, because the people I went with had told me exactly how it was going to go.
0.1% of the userbase will absolutely treat it as unlimited. Whale users/hoarders.
This is somewhat a different issue that’s largely accepted by courts and society bar that one neighbour who is incensed they can’t run a rack off their home internet that was marketed unlimited.
This has been a known problem since the dawn of hosting...hell, it goes back to times before computers.
yep
Adverse selection has been discussed for life insurance since the 1860s,[3] and the phrase has been used since the 1870s.[4]
Or the American Airlines lifetime pass: https://www.aerotime.aero/articles/american-airlines-unlimit...
When some users burn massive amounts of compute just to climb leaderboards or farm karma, it’s not hard to imagine why providers might respond with tighter limits—not because it's ideal, but because that kind of behavior makes platforms harder to sustain and less accessible for everyone else. Because on the other hand a lot of genuine customers are canceling because they get API overload message after paying $200.
I still think caps are frustrating and often too blunt, but posts like that make it easier to see where the pressure might be coming from.
[1] https://www.reddit.com/r/ClaudeAI/comments/1lqrbnc/you_deser...
Surely they thought about 'bad users' when they released this product. They can't be that naive.
Now that they have captured developer mindshare, users are bad.
anthropic bait and switch
what was the bait and switch? where in the launch announcement (https://www.anthropic.com/news/max-plan) did they suggest it provided unlimited inference?
why is anthropic tweeting about 'naughty users that ruined it for everyone' ?
Switch: "We limited your usage weekly and monthly. You don't know how those limits were set, we do but that's not information you need to know. However instead of choosing to hoard your usage out of fear of hitting the dreaded limit again, you've kept it again and again, using the product exactly the way it was intended to and now look what you've done."
they launched Claude Max (and Pro) as being limited. it was limited before, and it's limited now, with a new limit to discourage 24/7 maxing of it.
in what way was there a bait and switch?
Stop selling "unlimited", when you mean "until we change our minds"
The limits don't go into effect until August 28th, one month from yesterday. Is there an option to buy the Max plan yearly up front? I honestly don't know; I'm on the monthly plan. If there isn't a yearly purchase option, no one is buying unlimited and then getting bait-and-switched without enough time to cancel their sub if they don't like the new limits.
A Different Approach: More AI for Less Money
I think it's really funny that the "different approach" is a limited time offer for credits that expire.
I don't like that the Claude Max limits are opaque, but if I really need pay-per-use, I can always switch to the API. And I'd bet I still get >$200 in API-equivalents from Claude Code once the limits are in place. If not? I'll happily switch somewhere else.
And on the "happily switch somewhere else", I find the "build user dependency" point pretty funny. Yes, I have a few hooks and subagents defined for Claude Code, but I have zero hard dependency on anything Anthropic produces. If another model/tool comes out tomorrow that's better than Claude Code for what I do, I'm jumping ship without a second thought.
The field is moving so fast that whatever was best 6 months ago is completely outdated.
And what is top tier today, might be trash in a few months.
Services are not the same thing as physical goods.
You are also paying for a service with no clear SLA or measurable performance indicators. You have no way to determine if what you got at launch is as powerful as what you're getting now. It's all about feels
When companies sell unlimited plans, they’re making a bet that the average usage across all of those plans will be low enough to turn a profit.
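The bet is easy to state with made-up numbers (purely illustrative, not Anthropic's actual economics): a flat plan is profitable while heavy users are rare, and flips to a loss as they multiply.

    # Illustrative only: assumed cost-to-serve figures, not real pricing data.
    price = 100.0                          # flat monthly plan
    light_cost, whale_cost = 20.0, 500.0   # assumed cost to serve each type

    for whale_share in (0.05, 0.20):
        avg_cost = (1 - whale_share) * light_cost + whale_share * whale_cost
        print(f"{whale_share:.0%} whales: avg cost ${avg_cost:.0f}, "
              f"margin ${price - avg_cost:+.0f}/user")
    # 5% whales:  avg cost $44,  margin +$56/user
    # 20% whales: avg cost $116, margin -$16/user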
These people “abusing” the plan are well within their right to use the API as much as they want. It just didn’t fall into the parameters Anthropic had expected.
LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
When companies sell unlimited plans,
Anthropic never sold an unlimited plan
It’s amazing that so many people think there was an unlimited plan. There was not an unlimited plan.
These people “abusing” the plan are well within their right to use the API as much as they want. It just didn’t fall into the parameters Anthropic had expected.
Correct! And they did. And now Anthropic is changing those limits in a month.
LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
This exists. You use the API. It has always been an option. Again, I’m confused about why there’s so much anger about something that already exists.
The subscriptions are nice for people who want a consistent fee and they get the advantage of a better deal for occasional heavy usage.
Anthropic never sold an unlimited plan
I'm told the $200/month plan was practically unlimited; I heard you could leave ~10 instances of Claude Code running 24/7. I will never pay for any of these subscriptions, however, so I haven't verified that.
And now Anthropic is changing those limits in a month.
Which indicates the seller was being scammed. Now they're changing the limits so it swings back to being a scam for the user.
I’m confused about why there’s so much anger about something that already exists
Yes but much LLM tooling requires a subscription. I'm not talking only about Anthropic/Claude Code. I can't use chatgpt.com using my own API key. Even though behind the scenes, if I had a subscription, it would be calling out to the exact same API.
LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
Because Claude Code is absolutely impossible to use without a subscription? I’m fine with being limited, but I’m not with having to pay more than $200/month
Anybody that feels they’re not getting enough out of their subscription is welcome to use API instead.
Because Claude Code is absolutely impossible to use without a subscription?
Claude Code accepts an API key. You do not need a subscription
https://docs.anthropic.com/en/docs/claude-code/settings#envi...
I would not personally, as I can't spend thousands per month on an agentic tool. I hope they figure out limits that work. $100 / $200 is still a great deal. And the predictability means my company will pay for it.
Unlimited plans encourage wasting resources[0]. By actually paying for what you use, you can be a bit more economical and still get a lot of mileage out of it.
$100/$200 is still a great deal (as you said), but it does make sense for actually-$2000 users to get charged differently.
0: In my hometown, (some) people have unlimited central heating (in winter) for a fixed fee. On warmer days, people are known to open windows instead of turning off the heating. It's free, who cares...
Of course unlimited implicitly means “unlimited without abuse.”
So, not unlimited? Like, if the abuse is separate from amount of use (like reselling; it can be against ToS to resell it even in tiny amounts) then sure, but if you're claiming "excessive" use is "abuse", then it is by any reasonable definition not unlimited.
So, not unlimited?
Correct, not “unlimited” as in the dictionary definition of unlimited. Unlimited as in the plain meaning of unlimited as it is commonly used this subject matter area. i.e., Use it reasonably or hit the bricks, pal.
If there is a clear limit to that (and it seems there is now), then stop saying "unlimited" and start selling "X queries per day". You can even let users pay for additional queries if needed.
(yes i know queries is not a proper term to use here, but the principle stands)
Thoughts? If you want me to, I can elaborate on what I really mean, but I hope it was understandable enough.
There is a case to be made that they sold a multiple and are changing x or rate limiting x differently, but the tone seems different from that.
It always had limits and those limits were not specified as concrete numbers.
It’s amazing how much of the internet outrage is based on the idea that it was unlimited and now it’s not. The main HN thread yesterday was full of comments complaining about losing unlimited access.
It’s so weird to watch people get angry about thinking they’re losing something they never had. Even Anthropic said less than 5% of accounts would even notice the new limits, yet I’ve seen countless comments raging that “everyone must suffer” due to the actions of a few abusing the system.
Some facts for sanity:
1- The poster of this blog article is Kilocode who makes a (worse) competitor to Claude Code. They are definitely capitalizing on this drama as much as they can. I’ve been getting hit by Reddit ads all day from Kilocode, all blasting Anthropic, with the false claim that their plan was "unlimited".
2- No one has any idea yet what the new limits will be, or how much usage it actually takes to be in the top 5% to be affected. The limits go into effect in a few days. We'll see then if all the drama was warranted.
They appear to have removed reference to this 50-session cap in their usage documents. (https://gist.github.com/eonist/5ac2fd483cf91a6e6e5ef33cfbd1e...)
So even for these mystery people Anthropic references, who did run it "in the background, 24/7", they still would've had to stay within usage limits.
Did the Max plan ever promise unlimited anything?
no, even their announcement blog[0] said:
With up to 20x higher usage limits
in the third paragraph.
- note: "unlimited" does not mean free.
quote source: "Apple Just Found a Way to Sell You Nothing" https://www.youtube.com/watch?v=ytkk5NFZGjs
Don't blame the company; it acts within boundaries allowed by its paying customers, and Apple customers are known to be... much less critical of the company and its products, to put it politely, especially given its premium prices.
- note: "unlimited" does not mean free.
Repairs have always come with deductibles.
This is standard in virtually every insurance program. There are a lot of studies showing that even the tiniest amount of cost sharing completely changes how people use a service.
When something is unlimited and free, it enticed people to abuse it in absurd ways. With hardware, you would get people intentionally damaging their gear to get new versions for free because they know it costs them nothing.
https://www.forbes.com/sites/barrycollins/2024/11/28/mac-own...
gained during its unlimited phase
It was never unlimited.
They never advertised unlimited usage. The Max plan clearly said it had higher limits.
This fabrication of a backstory is so weird. Why do so many people believe this?
The real damage: You're not frustrating 5% of users—you're breaking trust with the exact people who drive growth and adoption. When developers get "rate limit exceeded" while debugging at 2 AM, they're not thinking about your infrastructure costs—they're shopping for alternatives.
Notice a pattern here?
And how does this compare to the case with "unlimited"? Overall, will the total used be higher or lower?
Claude Max, to my knowledge, was never marketed as "unlimited". Claude Max gives you WAY more tokens than $100/$200 would buy. When you get rate limited, you have the option to just use the API. Overall, you will have gotten more value than just using the API alone.
And you always had, and continue to have, the option of just using the API directly. Go nuts.
The author sounds like a petulant child. It's embarrassing, honestly.
The real damage: You're not frustrating 5% of users—you're breaking trust with the exact people who drive growth and adoption.
It's not rocket science. It’s our way of attracting users. Not bait and switch, but credits to try.
you can see here in this Reddit thread from April, when Claude Max was launched, that it was explicitly explained as being limited: https://www.reddit.com/r/ClaudeAI/comments/1jvbpek/breaking_...
Max is described as "5-20x more than Pro", clearly indicating both are limited.
here's their launch blog post: https://www.anthropic.com/news/max-plan
The new Max plan delivers exactly that. With up to 20x higher usage limits, you can maintain momentum on your most demanding projects with little disruption.
obviously everyone wants everything for free, or cheap, and no one wants prices to change in a way that might not benefit them, but the endless whinging from people about how unfair it is that anthropic is limiting access to their products sold as coming with limited access is really extremely tedious even by HN standards.
and as pointed out dozens of times in these threads, if your actual worry is running out of usage in a week or month, Anthropic has you covered - you can just pay per token by giving Claude Code an API key. doing that 24/7 will cost ~100x what Max does though, I wonder if that's a useful bit of info about the situation or not?
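That "~100x" passes a back-of-envelope sanity check (every number below is an assumption for illustration, not actual Anthropic pricing or measured throughput):

    # All figures assumed: plan price, blended $/token, agent throughput.
    PLAN_PER_MONTH = 200.0     # Max plan
    USD_PER_MTOK = 75.0        # assumed blended API price per 1M tokens
    TOKENS_PER_MINUTE = 6_000  # assumed sustained agent-loop throughput

    minutes = 30 * 24 * 60     # one month, 24/7
    api_bill = minutes * TOKENS_PER_MINUTE * USD_PER_MTOK / 1e6
    print(f"${api_bill:,.0f} vs ${PLAN_PER_MONTH:,.0f} "
          f"= {api_bill / PLAN_PER_MONTH:.0f}x")  # $19,440 vs $200 = 97x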
I hold them no ill will for rapidly changing pricing models, raising pricing, doing whatever they need to do in what must be a crazy time of finding insane PMF in such a short time
BUT the communication is basically inexcusable IMO. I don't know what I'm paying for, I don't know how much I get, their pricing and product pages have completely different information, they completely hide the fact that Opus use is restricted to the Max plan, they don't tell you how much Opus use you get, their help pages and pricing pages look they were written by an intern and pushed directly to prod. I find out about changes on Twitter/HN before I hear about them from Anthropic.
I love the Claude Code product, but Anthropic the company is definitely nudging me to go back to OpenAI.
This is also why competition is great though - if one company had a monopoly the pricing and UX would be 20x worse.
Claude 4 is good enough that people will pay whatever they ask as long as it's significantly less than the cost of doing it by hand. The loss leaders will need to fade away to manage the demand, now that there is significant value.
Differentiation through honesty: In a market full of fluff, directness stands out. Customers might respect a brand more for telling the truth plainly, even if the truth isn’t ideal.
The risk: It could scare off some customers who don’t read the fine print anyway. But that may not be a loss—it might actually filter in the right kind of customer, the one who wants to know what they’re really getting.
Gemini did go from a huge free tier to 100 free uses a day, but I expected that.
EDIT: let me clarify: I just retired after over 50 very happy years working as a software developer and researcher. My number one priority was always self-improvement: learning new things and new skills that incidentally I could sometimes use to make money for whoever was paying me. AI is awesome for learning and general intellectual pursuits, and pairs nicely with reading quality books, listening to lectures on YouTube, etc.
Can you really ever compete when you are renting someone else's GPUs?
Can you really ever compete when you are going up against custom silicon built and deployed at scale to run inference at scale (i.e. TPUs built to run Gemini and deployed by the tens-of-thousands in data centers around the globe)?
Meta and Google have deep pockets and massive existing world-class infrastructure (at least for Google, Meta probably runs their php Facebook thing on a few VPS dotted around in some random colos /s ) . They've literally written the book on this.
It remains to be seen how much more money OpenAI can burn, but we've started to see how much Anthropic can burn if nothing else.
why do all the users need to suffer???
Everyone was better off without the deception. Now we are in the early days of AI. Providers should be honest but won’t until forced to.
Because just think about it. Unlimited is untenable. Another example, in the early days of broadband in Australia a friend’s parents were visited by a Telstra manager because he “downloaded more than his entire suburb”. A manager!
Really you can’t blame the providers; some users will ruin it for everyone. I am not saying that is anyone specific. But none of this should surprise us. We’ve been here before. Just look back at how other markets developed & you will see patterns that tell you what’s next.
Just below me as I type there's a comment saying they're refusing to cancel a subscription (may not be below me any more when I finish typing).
Somewhere lower there's a comment saying they do not show the full price when you subscribe, but add taxes on top of it and leave you to notice the surprise on your credit card statement.
Is there an ethical "AI" service anywhere?
For Claude Code and similar services, we’re still in the very early stages of the market. We’re using AI almost for free right now. It’s clear this isn’t sustainable. The problem is that they couldn’t even sustain it at this earliest stage.
I promise I’m not being snarky here - I don’t understand how people are burning through their $200/mo plan usage so quickly. Are they spamming prompts? Not using planning mode? I’ve seen a few folks running multiple instances at once… is that more common than I think?
Some say they just have to define a huge limit and that's it.
Limits are sometimes hard to define:
- they must be huge enough that a (human) user finally perceives them as unlimited, or else he will compare with competitors
- but not so huge that the 0.1% of users will try to reach them
A fairer approach could be one which categorizes the type of use:
- human: a human has physical limits (e.g., words typed on a keyboard per unit of time).
- bot: from one Arduino to heavyweight hardcore clustering, virtually no limits.
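A rough sketch of that categorization as code (the rates are invented; the point is that the cap is derived from human physical limits, so only automation ever hits it):

    # Token bucket sized to a "humanly plausible" pace. Rates are assumptions.
    import time

    class HumanPacedLimiter:
        def __init__(self, requests_per_hour: int = 120, burst: int = 20):
            self.rate = requests_per_hour / 3600.0  # refill per second
            self.capacity = burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            elapsed = now - self.last
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False  # a sustained pace no human plausibly works at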
But hey, this is just a sales pitch from one company I wouldn't trust by taking a dump on another company I wouldn't trust.
Anthropic never sold Max plans as unlimited. There are two tiers, explicitly labeled "5x" and "20x", both referring to the increased usage over what you get with Pro. Did all the people complaining that Anthropic reneged on their "promise" of unlimited usage not read anything about what they were signing up to pay $100 or $200/month for? Or are they not even customers?
On a more serious note, I'm sure most people can't fathom or even think about the resources they are consuming when using AI tools. These things don't use energy; they consume it like a black hole sucks in light.
In some cases, your queries can consume your home's daily energy needs in an hour or so.
The transparency problem compounds this. The sustainable path forward likely involves either much more transparent/clear usage-based pricing or significantly higher flat rates that actually cover heavy usage.
wait until their investors get fed up with pouring money down the drain and demand they make a profit from the median user
that model training and capex to build the giant DCs and fill them with absurdly priced nvidia chips isn't free
as an end user: you will be the one paying for it
But it is so hard to explain to product people that there is a limit to how much certain services can scale and still be profitably supported.
Let people cook and give them some time to find out how to do this. Voice discontent, but don't be an asshole.
And to be clear, the users abusing the "unlimited" rates they were offering to do absolutely nothing productive (see vibe-coding subreddits) are no better.
Unlimited works better for startups because they have zero idea of the load challenges that will come in the future. And they don't have much idea how well their product will be received in the market.
Anthropic got the experience and decided they needed to maximize on reasonableness over customer trust. And they are a startup so we all get this.
OTOH there is no such thing as unlimited. Atoms in the universe are finite. Your use is finite. Your time is finite. Your abuse is limited and finite. You are a sucker for believing in the unlimited myth just like think others are suckers for believing in divine intervention or conspiracy theorists are suckers to believe in unlimited power.