
I Am An AI Hater

BallsInIt 438 points anthonymoser.github.io
jkingsman
I appreciate seeing this point of view represented. It's not one I personally hold, but it is one a LOT of my friends hold, and I think it's important that it be given a voice, even if -- perhaps especially if -- a lot of people disagree with it.

One of my friends sent me a delightful bastardization of the famous IBM quote:

A COMPUTER CAN NEVER FEEL SPITEFUL OR [PASSIONATE†]. THEREFORE A COMPUTER MUST NEVER CREATE ART.

Hate is an emotional word, and I suspect many people (myself included) may leap to take logical issue with an emotional position. But emotions are real, and human, and people absolutely have them about AI, and I think that's important to talk about and respect that fact.

† replaced with a slightly less salacious word than the original in consideration for politeness.

randcraw
Picasso's Guernica was born of hate, his hate of war, of dehumanization for petty political ends. No computer will ever empathize with the senseless inhumanity of war to produce such a work. It must forever parrot.
perching_aix
To honor the "spirit" of OP's post:

I looked up Picasso's Guernica now out of curiosity. I don't understand what's so great about this artwork. Or why it would represent any of the things you mention. It just looks like deranged pencilwork. It also comes across as aggressively pretentious.

What makes that any better than some highly derivative AI generated rubbish I connect to about the same amount?

jacquesm
That a human made it to express their feelings.
perching_aix
What do I care? Can't even tell what feelings are supposedly being expressed there.
jacquesm
That goes for all art. It either stirs you or it doesn't. I find https://www.youtube.com/watch?v=9tjstsWoQiw to be one of the most beautiful pieces ever recorded, others can't listen to it and think it is bland and a terrible recording.

You can't argue about taste.

perching_aix
But then why wouldn't AI generated art be able to stir me? Why is a human being in the loop so important as to be supposedly essential?
jacquesm
Because it is mimicking human input. Effectively you are getting a mixture of many pieces of artwork that humans made distilled down into some sloppy new one that was made without feeling, purpose or skill and that can be described by its prompt, a few kilobytes at best. Original human art can only be approximated but never captured with 100% fidelity regardless of the bitrate, that is what makes it unique to begin with. Even an imitation by another human (some of which can be very good) could stir you in the exact same way but they'd be copies, not original works.

Anyway, this gets hairy quickly, that's why I chose to illustrate with a crappy recording of a magnificent piece that still captures that feeling - for me - whereas many others would likely disagree. Art is made by its creator because they want to and because they can, not because they are regurgitating output based on a multitude of inputs and a prompt.

"Paint me a Sistine Chapel" is going to yield different results no matter how many times you give that same prompt to Michelangelo, depending on his mood, what happened recently, what he ate, his health, even the season. An AI will produce the same result over and over again from the same prompt. It is a mechanistic transformation, not an original work; it reduces the input, it does not expand on it, it does not add its own feelings to it.

petralithic
Haven't these arguments been the same since Stable Diffusion came out? Someone (A) will say what you said, then someone else (B) will say, well humans remix as well, A: no that's different because we're humans not machines, B: there is no need to prefer a biological substrate over a silicon one; A: AI will produce the same result over and over, B: not if you change the temperature and randomize the seed.

It's tiresome to read the same thing over and over again and at this point I don't think A's arguments will convince B and vice versa because both come from different initial input conditions in their thought processes. It's like trying to dig two parallel tunnels through a mountain from different heights and thinking they'll converge.

jacquesm
The day I see AI generated art and it moves me in the same way that human generated art does I will concede the point. So far all I've seen is more, not novel.

Art never was about productivity, even though there have been some incredibly productive artists.

Some of the artists that I've known were capable of capturing the essence of the subject they were drawing or painting in a few very crude lines and I highly doubt that an AI given a view would be able to do that in a way that it resonated. And that resonance is what it is all about for me, the fact that briefly there is an emotional channel between the artist and you, the receiver. With AI generated content there is no emotion on the sending side, so how could you experience that feeling in a genuine way?

To me AI art is distortion of art, not new art. It's like listening to multiple pieces of music at the same time, each with a different level of presence, out of tune and without any overarching message. It can even look skilled (skill is easy to imitate, emotion is not).

petralithic
I still don't get why you don't see it as a tool and not the creator itself. The human sitting behind the desk is the one attaching their emotions to what they send, because they control what image they want to send, otherwise they reroll or redo their work flow. These days they can even edit the image with natural language so they can build it up just as one does in Photoshop, only using words instead of a mouse.
jacquesm
I still don't get why you don't see it as a tool and not the creator itself.

If after 33 comments in this thread and countless people trying to explain a part of it you don't get it that may be because you either don't want to get it or are unable to get it. Restating it one more time is not going to make a difference and I'm perfectly ok with you not 'getting it', so don't worry about it.

AI without real art as input is noise. It doesn't get any more concrete than that. Humans without any education at all and just mud and sticks for tools will spontaneously create art.

petralithic
Or perhaps your initial premise ("AI without real art as input is noise") is simply wrong. By "get it," I'm trying to understand why you'd believe such a premise, yes even after 33 comments, because there is no underlying rationale to it, or rather, you never state it in a direct manner.
necovek
This is where you might be "not getting it". A human can carefully weigh every word, every swipe of a brush, or every tone... weigh it for the emotional expression and connection it produces (frequently subconscious). Whereas AI as a tool simply can't.

This is a difference between using a gradient in Photoshop, which is still a tool, and generative AI which will make "decisions" you as an author can't explain or connect with.

petralithic
How is this different from an electronic music producer? They similarly arrange notes without having played them physically. So too with people generating an image as a rough draft then editing every part of it, which is mainly what I'm talking about, not someone who types in a prompt and accepts whatever comes out.
jibal
Some people are simply irrational, and there's no point trying to point out to them their logic errors.
lesostep
The human sitting behind the desk is the one attaching their emotions to what they send

natural question: do you draw? Even a simple thing, even a doodle of a cat would count. A particular emoji drawn for a joke. Have you ever drawn a line, and then smiled to yourself, "yes, that is what I want other people to see"?

People can draw poorly or make collages, and come up with pretty expressive art. Those who say "well, I can't express myself with stick figures" coincidentally can't express anything without stick figures either. They just never paid enough attention to the subject to express it.

Personal anecdote: when I ask people why X is in the art they send me, they answer happily. When I ask that about AI art, they say "oh, you're nitpicking". As if some details don't and shouldn't influence artistic expression. As if all the details that weren't in the prompt shouldn't express anything.

AI art is a muddled concept. It's a grave for intentionality. It's not easy to decipher the creator's intent through a cacophony of other intents mixed in, because almost none of the artistic choices were made with the intent to convey.

perching_aix
Ironically, for the first time, I think I found some perspective to the remix argument here.

Normally it's just like you say: I don't find the remixing argument persuasive, because I consider it to be a point of commonality. This time however, my focus shifted a bit. I considered the difference in "source set".

To be more specific, it kind of dawned on me how peculiar it is to engage in creating art as a human, given what a human life looks like. How different the "setup" is between a baby just kind of existing and taking in everything, which for the most part means supremely mundane, not at all artful or aesthetic experiences, and an AI model being trained on things people uploaded. The model will also see a lot of dull, irrelevant stuff, but not nearly in the same way or in the same amount, hitting the same registers.

I still think it's a bit of a bird vs plane comparison, but then that is also what they are saying in a way. That it is a bird and a plane, not a bird and a bird. I do still take issue with refusing to call the result flight though, I think.

jacquesm
Flight has immediate utility, art not necessarily, other than to be or to experience. Movies can be art, instruction videos usually are not.
perching_aix
Flight isn't necessarily utilitarian. Not animals', not machines'.

A connected discourse is (certain, increasingly dwindling maybe) part of the art community's rejection of large swaths of works because they're meant for mass entertainment.

And so I'm not sure robbing AI generated images of being labeled art isn't a similar kind of snobbery, at least in part, with models just being a much more morally convenient punching bag this time around than other humans.

jacquesm
Something not being necessarily utilitarian does not mean that it isn't mainly utilitarian. There is knitting as an art form. But it was definitely mainly utilitarian at some point.

And this is how it goes with many things: at first we do them because they are utilitarian, after that there may be people who start using it as a medium for art.

And so I'm not sure robbing AI generated images of being labeled art isn't a similar kind of snobbery, at least in part, with models just being a much more morally convenient punching bag this time around than other humans.

Then show me the art. Just one single image that moves you and that was generated by AI.

perching_aix
Something not being necessarily utilitarian does not mean that it isn't mainly utilitarian.

In terms of extents, I'd say machine flight is about as utilitarian as animal flight. Which is why you don't see it differentiated in verbiage I'd imagine. I'm generally not sure where you were going with this.

Then show me the art. Just one single image that moves you and that was generated by AI.

There isn't a single drawing (picture) that I can remember ever moving me, man-made or machine generated, so that's quite the tall order.

For examples on AI generated images I see, that'd be on Pixiv. They're almost always tagged up and you can filter for (and against) them. And there are of course people who exploit this for harassment, because no good deed goes unpunished.

With the proliferation of AI, I saw styles, poses, and framings there that I hadn't before, as well as their combinations. Were they just underrepresented among other people's drawings? I'm not so sure - some are for sure referencing actual photographs instead, and some are assisted rather than fully generated. I did enjoy these greatly, even though they were not straight from the remotest figment of someone's personal imagination, and they haven't per se "moved" me.

jacquesm
Ok. Thank you for the answer and the exchange in general. I suspect one part of the issue here is that some people are more sensitive to stuff like this than others.

For instance:

https://www.youtube.com/watch?v=o_wsSIuv_po

Never fails to give me gooseflesh every time I listen to it. And where it gets interesting is that that is a cover of a piece by another composer, so it serves as a very high level commentary and compliment rather than an original and still manages to maintain a lot of the emotional content and adds new elements. The original is:

https://www.youtube.com/watch?v=vE2O_yfgtBU

Adagio starts at 3:32.

See if you get a different take away from each. I find both beautiful but as different as jam and cheese.

There are drawings and paintings that move me in a similar way. And I'm sure there are people who are not touched by any of this. I've been steeped in art pretty much since I was a toddler, my dad was a painter (in my opinion not a very good one but that did not stop him from endlessly trying) and our house was always full of music, antiques and conversations about that stuff. This probably sensitized me in a way that I would not have been if not for that environment.

The interesting thing is: even bad art is still art.

TeMPOraL
Don't also forget:

A: but AI only interpolates between training points, it can't extrapolate to anything new.

B: sure it can, d'uh.

_DeadFred_
There is no intention in either case. Just a machine doing machine things.
petralithic
The intention is in the human prompting or creating the work flow; the computer was never going to autonomously create images, why would it?
perching_aix
I think this is a reasonable counter in some respects, although I do also think it's specific to the current iteration of AI art.

It's a bit like when people describe how models don't have a will or the likes. Of course they don't, "they" are basically frozen in time. Training is way slower than inference, and even inference is often slower than "realtime". It just doesn't work that way from the get-go. They're also simply not very good - hence why they're being fed curated data.

In that sense, and considering history, I can definitely see why it would (and should?) be considered differently. Not sure this is what you meant, but this is an interesting lens, so thanks for this.

petralithic
It's not. If one takes the fact that art is in the eye of the beholder[0], then yes, even AI art may stir you, especially as a human is the one generating at the end of the day, for a specific purpose and statement about what they want to convey.

There is a good part of the series Remembrance of Earth's Past (of which The Three Body Problem is the first book) where the aliens are creating art and it shocks people to learn that the art they're so moved by was actually created by non-humans. This is exactly what this situation with AI feels like, and not even to the same extent because again AI is not autonomously making images, it's still a human at the end of the day picking what to prompt.

[0] https://en.wikipedia.org/wiki/The_Death_of_the_Author

jacquesm
it's still a human at the end of the day picking what to prompt

I think that 'dutch people skating on a lake' or 'girl with a pearl earring' or 'dutch religious couple in front of their barn', given to an AI that hasn't been trained on various works, will produce just noise. And if those particular works (you know the ones, right?) were not part of the input then the AI would never produce anything looking like the originals, no matter how specific you made the prompt. It takes human input to animate it, and even then what it produces does not look original to me, whereas any five year old is able to produce entirely original works of art, none of which can be reduced to a prompt.

Prompts are instructions, they are settings on a mixer, they are not the music produced by the artists at the microphones.

petralithic
Have you actually used image generators today? They can produce things they've never seen if you only describe the constituent pieces. Prompts are a compressed version of the image one wants to create, and these days you don't even need "prompts" per se; you can say, make a woman looking towards the viewer, now add a pearl earring, now adjust this and that, etc.
jacquesm
Have you actually used image generators today?

Why would you ask this? It sounds like a lead-up to some kind of put down.

It can produce things it's never seen if only you describe the constituent pieces.

It can produce things it's never seen based on lots of things that it has seen.

Prompts are a compressed version of the image one wants to create

They emphatically are not. They are instructions to a tool on what relative importance to assign to all of the templates that it was trained on. But it doesn't understand the output image any more than it understood any of the input images. There is no context available to it in the purest sense of the word. It has no emotion to express because it doesn't have emotions in the first place.

and these days you don't even need "prompts" per se, you can say, make a woman looking towards the viewer, now add a pearl earring, now adjust this and that etc.

That's just a different path to building up the same prompt. It doesn't suddenly cause the AI to use red for a dress because it thinks it is a nice counterpoint to a flower in a different part of the image because it does not think at all.

petralithic
I think you're reading too much into my comment. It's not a put down, I'm genuinely asking because it seems many people still think anyone serious about AI just types prompts into Midjourney, but it's become a lot more complex than that, akin to electronic music production; producers haven't played every single note with a physical instrument their synths synthesize yet their arrangement of the notes is what makes them a producer, and so too with AI workflows such as those seen in ComfyUI. If one is not familiar then they might not understand where the field is today.

Regarding prompts, I never said a computer "understands" or is "emotional" about an image, I don't think anyone actually thinks that, on either side of the debate so not sure why you're bringing that up. By "compressed" I just meant in the information theory way, in that if you have a specific series of words, and a given temperature and other settings for a given model, it will deterministically produce the same image, hence the set of those attributes can be thought of as a compressed representation of that image. I made no claims about it thinking whatsoever.

It can produce things it's never seen based on lots of things that it has seen.

Yes, just like humans, as I had said in my initial comment about the same old arguments being said since 2021 when Stable Diffusion came out. But again that's tiresome so let's not repeat that here too.
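petralithic's "compressed representation" point is straightforward to demonstrate with a toy sketch. This is not a real diffusion model - the hash-seeded generator below is purely illustrative - but it shows the mechanism being claimed: fix the prompt, seed, and settings, and the output is reproduced exactly, so that small tuple of inputs acts as a handle for the much larger image.

```python
# Toy illustration (not a real image model): a deterministic "generator"
# seeded from (prompt, seed, settings). The point is only that identical
# inputs always reproduce identical output.
import hashlib
import random

def generate_image(prompt: str, seed: int, steps: int = 20, size: int = 8):
    # Derive a reproducible RNG state from every setting that matters.
    key = f"{prompt}|{seed}|{steps}|{size}".encode()
    rng = random.Random(hashlib.sha256(key).hexdigest())
    # Stand-in for sampling: a grid of pixel intensities.
    return [[rng.randrange(256) for _ in range(size)] for _ in range(size)]

a = generate_image("woman facing the viewer, pearl earring", seed=42)
b = generate_image("woman facing the viewer, pearl earring", seed=42)
c = generate_image("woman facing the viewer, pearl earring", seed=43)

assert a == b  # identical inputs -> identical "image"
assert a != c  # change any setting and the output diverges
```

Under this framing, the prompt-plus-settings tuple is a few hundred bytes that deterministically names a multi-megabyte output, which is all the "compression" claim amounts to.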

bonoboTP
I don't think this is just taste. The painting was made in a specific historic context and commemorates the bombing of Guernica. Without knowing that context, it may be appreciated as a disembodied visual artifact, but that's not how art really works or ever worked. An influential artpiece usually states something relevant to the historic moment and intellectual Zeitgeist of the time.

You may like the music of Zombie by The Cranberries, but I'd say it belongs to the complete appreciation of it to know that it's about the Irish Troubles, and for that you need some background knowledge.

You may like to smoke weed to Bob Marley songs, but without knowing something about the African slave trade, you won't get the significance of tracks like 400 years.

For Guernica you also have to understand Picasso's fascination with primitive art, prehistoric cave art, children's drawings and abstraction, the historic moment when photography took over the role of realistic depiction, freeing painters to express themselves more in terms of emotional impressions and abstractions.

jacquesm
Yes, context is really important. But: J.S. Bach made a whole raft of music, and quite a large fraction of it was religiously inspired. In spite of that it is perfectly possible to appreciate it at a deep emotional level without that particular spiritual connection. This is the genius of art to me: that it opens up an emotional channel between two individuals separated by time and space and manages to convey a feeling, as clear as day.

Take U2's October as a nice example. (You mentioned Zombie, incidentally one of my favorites; the anger and frustration in there never fail to hit me, and I can't listen to it too often for that reason.) Superficially it is a very simple set of lyrics (8 lines, I think) and an even simpler set of chords. And yet: it moves me. And I doubt any AI would have come up with it, or even a close approximation, if it wasn't part of the input. That's why I refuse to call AI generated stuff art. It's content, not art.

bonoboTP
And yet: it moves me. And I doubt any AI would have come up with it or even a close approximation if it wasn't part of the input.

I would have thought similarly, but after actually feeding 19th century poems to Suno and iterating on the prompts several times, I got some results that moved me emotionally, as in: listening to/reading the words with this musical presentation enhanced my appreciation of the poems and felt more visceral. Making angry revolutionary poems into grunge brought them closer, made them less of a "historic", "bookish", "dusty" thing.

jacquesm
That's a poster case for it being a derivative work, then. And of course, the more concentrated the input mixture, the bigger the chance of some of that emotion leaking through.

I think there is a great case to be made here using purely synthetic sounds as the basis for emotion. Vangelis (Soil festivities), Klaus Doldinger (Skyscape) are great examples. These are sounds that have been produced exclusively by the mind and in spite of there not being a physical instrument involved they manage to convey imagery and emotion extremely effectively. This is technology used as an enabler. I've yet to come across someone using AI tech in the same liberating manner unlocking novel imaginary constructs in the way that those two did.

perching_aix
I don't consider context a clear win. I'd argue that there's also quite the disconnect sometimes between what a work is about and why it's popular.

Let's take Zombie by The Cranberries as an example. I really liked this song as a kid, still do, I think it has a great sound. The difference is that I now speak English, can understand the lyrics, and could look up the historical context. Ever since I did so, listening to it has never been the same, and not in a good way.

There are also examples which are not going to be so specific to my opinions. Kendrick's Swimming Pools was a house party staple, despite the song carrying heavy anti-alcoholism messaging. The contrast is almost comical.

For a different example, let's consider temporal contextuality; you describe Guernica being reliant on this. When I try to think of an example, I'm reminded of vague memories of shows with oddly timely subtitles. Subtitles that referenced things that were very specific to the given cultural moment, basically memes, but vanished since. It's not a good experience, and I'd say it would be reasonable to chalk such a thing up as a critique, rather than something worthy of praise.

This is also why I half-seriously referred to the piece being "aggressively pretentious". Rather than coming across as something I'm just genuinely missing the context for, it comes across as something with manufactured sophistication (which then I am indeed missing the context for, but unapologetically). This might still be a mirage, but I think with how pretty much stereotyped this experience is at this point, I'd imagine there's got to be some truth to it at least.

bonoboTP
If you value art for aspects that don't require intellectual or historical context knowledge then the best music is bubblegum pop and the best literature is pulp fiction and smut. And indeed people who most lack such context (teens) tend to like those most.

This is not to say that eternal themes aren't important. But art is a kind of social technology that mediates between people in given cultural contexts. Part of "the great conversation" across the ages, the part you can't express in logical essays or propositions. And the eternal themes pop up in different "clothes" at different times. Once you have the key to unlock them, you do discover the same human nature and human problems operating underneath as ever.

And the beautiful cathedrals are not simply beautiful for beauty's sake but their art often conveys very specific theological claims, often hotly debated at the time. Or the choice of subject may have been outrageous or novel at the time but mundane to us now.

Liszt's music may move us even today, but we can't quite appreciate it in the same Lisztomania way as it was then, when it was fresh and novel.

mm263
Why do you care to connect with another human? To try to feel their emotions, what they tried to express? If you see no value in that, there's no discussion to have, honestly. For most people I know there's value in connecting with others and empathizing with their emotions.
petralithic
But they just said they don't get what emotions are meant to be expressed, so how can they try to feel his emotions?
mm263
Many things require one to reject self-imposed boundaries. For example[1]:

There's a story that, IIRC, was told by Brian Enos, where he was practicing timed drills with the goal of practicing until he could complete a specific task at or under his usual time. He was having a hard time hitting his normal time and was annoyed at himself because he was slower than usual and kept at it until he hit his target, at which point he realized he misremembered the target and was accidentally targeting a new personal best time that was better than he thought was possible. While it's too simple to say that we can achieve anything if we put our minds to it, almost none of us are operating at anywhere near our capacity and what we think we can achieve is often a major limiting factor.

---

Art is nothing like shooting. My first instinct looking at Guernica is that I also feel nothing, but one can limit oneself and say: if I feel nothing initially, I will feel nothing at all. If you prime yourself to be open to an experience of putting yourself into the shoes of the author, you might start feeling something.

[1] https://danluu.com/culture/

petralithic
Maybe. Or maybe one just gets it, or they don't, for a particular piece.
kelnos
Art is a difficult, subjective matter sometimes. I don't think we can expect everyone to "get" every piece of art. If the poster upthread wanted to, they could read more about the painting, in detail, where perhaps someone writes about various specific features of it and what people believe those features mean. Maybe that would provide more understanding, and they could feel his emotions that way.

I'm not saying they have to or should do that; maybe they just don't care enough. And that's fine. But the option is there.

If someone prompts an AI, "generate an image in the style of Picasso's Guernica", then the result of that, by definition, has no deeper meaning. No emotion went into creating it. The person who prompted the AI could make something up, but it's hard to say what's "real" there. Even if they were to guide the image generation by describing their own emotions, the result wouldn't really be their own expression of their emotions. It would be the AI's probabilistic guess as to what those emotions look like on paper, when rendered using Guernica's style, based on a mish-mash of thousands of different artists and art history research. Ultimately it just doesn't mean anything.

I accept the idea that a talented artist could guide the AI with much deeper specifics about what to "draw", how to draw it, etc. And maybe -- maybe -- that's something that would convey the human's emotions faithfully. But I don't think that's what we're talking about here.

petralithic
But I don't think that's what we're talking about here.

Actually that is exactly what I'm talking about. I'm not talking about AI beginners putting some words into a text box, I'm talking about creatives who use workflow managers like ComfyUI to create exactly the output they envision in their minds. In this way, the AI generation is merely a tool to get out whatever is in their head via synthesized means rather than manual (literally, by hand) means. For example, this is a list of node work flows; it's similar to game programming in that you have inputs and you want to transform them into certain outputs, and that transformation work is done thoughtfully by the human and is where I locate the creative aspect.

https://modal.com/blog/comfyui-custom-nodes
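For readers unfamiliar with node-based workflow managers, the inputs-to-outputs idea described above can be sketched as a pipeline of small transformations. This is not ComfyUI's actual API - the node names, parameters, and state shape below are invented purely for illustration:

```python
# Hypothetical sketch of a node-style workflow: each node transforms the
# running state, and the author's choices live in how the nodes are
# wired together and parameterized, not in any single node.
from typing import Callable

Node = Callable[[dict], dict]

def load_prompt(text: str) -> Node:
    def run(state: dict) -> dict:
        return {**state, "prompt": text}
    return run

def upscale(factor: int) -> Node:
    def run(state: dict) -> dict:
        return {**state, "size": state.get("size", 64) * factor}
    return run

def run_workflow(nodes: list[Node]) -> dict:
    # Thread the state through each node in order, like wires in a graph.
    state: dict = {}
    for node in nodes:
        state = node(state)
    return state

result = run_workflow([load_prompt("girl with a pearl earring"), upscale(4)])
# result == {"prompt": "girl with a pearl earring", "size": 256}
```

Real tools replace these stand-in nodes with samplers, upscalers, and mask editors, but the structure - a graph of parameterized transformations chosen by the human - is the part the comment is pointing at.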

slipperydippery
To put this in very-online terms: this is a skill issue.

Your life will be richer if you learn to take more things in, and to appreciate them. And it may require actual learning! And practice!

pegasus
Since you seem to have no problem dishing it out, I hope you can take it as well, so here you go. It's your comment that can rightly be described as pretentious. First, "aggressive" doesn't make sense as a modifier of "pretentious" - you were probably influenced to pick this word by the subject and the feeling of the mural, then self-indulgently left it in, no doubt imagining yourself an innate art critic taking poetic license. Second, the way you italicized artwork. Third, and mostly, because even though you just "looked up Guernica now out of curiosity", you imagine your uninformed opinion worthy of consideration to someone else out there. It's not.
perching_aix
Yes. I consider these to be trivial attributes of what I wrote.

It was basically all part of the point: I don't appreciate the position taken in the blogpost in the OP, as it is willfully dishonest (its author not only admits, but even flaunts this).

This is why I remarked that I'm following in its spirit. All the points you list out are issues I also have in general with discourse like the blogpost, and with derivative discourse spawned by it. I was expecting people to react badly, specifically in order to demonstrate why. Even felt a bit bad about italicizing artwork, and felt it was a bit on the nose in hindsight. Wouldn't quite call it a flamebait, but in a sense I guess it was one.

In the end though, I got some reasonable discussion out of it, a bit to my surprise. Still kind of processing whether this was an exception to my conjectured rule, or how else I should wrestle with it. I ended up restoring a bit of "faith in humanity" for myself, rather than confirming my resignations.

This isn't to say I don't believe or didn't mean what I said though, to be clear. I just presented it in a way I consider malicious (the way the blogpost is written). You seem to consider so too and have reacted now in kind - although it doesn't read like along this same idea. But then maybe I'm just falling for my own trap at this point.

pegasus
I see, you were playing "Picasso hater" to OP's "AI hater". Well played, in this case, but you could have just written what you just have above, it would have prevented some confusion and misdirection. Yes, OP is unreasonable and arrogant and thus ends up going totally overboard, even though there is some truth in his complaints (pinpointing better what that is would be a worthwhile conversation to have). In my book, being a hater is not something to flaunt, but rather something to look into. Deep enough understanding inevitably softens that hate if not all the way into appreciation, at least into tolerance. It's the same with Picasso's work: once the missing historical, emotional and artistic context is perceived, the value of the work will become self-evident as well.
perching_aix
Well yeah, I could have done that, but then the outcome would have been impacted. Apologies for pulling a fast one on you like this.
averagefluid
What makes that any better than some highly derivative AI generated rubbish I connect to about the same amount?

I think this is a fantastic question. Full disclosure, Guernica is one of my personal favorites and I initially felt pretty poorly about this particular string of words. But the implied question, "So what?", is literally what separates art from x. I don't think that there's a direct answer to this, but I'll do my best to articulate my feelings towards it.

When I was much younger and first learning how to play guitar, I heard that Eric Clapton was a guitarist that a lot of other guitarists looked up to. I decided to listen to his work and initially dismissed it. To my ears he sounded like a worse, more basic, more derivative version of the artists I was listening to at the time, and I wondered how he could even be in the same conversations as other, more modern artists. It was later that I realized I had the arrow of causality wrong. He wasn't revered because he was the best, or had taken the artform to its furthest reaches, or would be successful today. He was revered because he exposed so many people to a new way of expressing themselves that they likely wouldn't have known about otherwise and certainly wouldn't have invented themselves.

This analogy applies directly to Picasso, I think. You mention you felt the piece was "aggressively pretentious". Where do you think that pretense comes from? There is a whole history to the deconstruction of art in the visual medium, and a whole backlash to that deconstruction, and a whole response to that, and that's your cultural inheritance when you view pieces like this. You don't even have to be aware of this history for it to affect how you feel about the piece. I think one facet of "so what?" is that this piece has existed for long enough to generate discussion about its own worth and value, and at the very least is spawning literally this post.

The fact that one could find the work with one word and have a discussion about it is also pretty incredible. I don't think a model generated output is that widely known. I do think that sort of cultural reach is a facet of "so what".

There are more answers to "so what?", but to answer your question directly, "what makes it any better", I think an argument could be made that it's not. "Better" when applied to art doesn't have any particular meaning in my mind. What makes it more culturally relevant, more widely known, more widely loved, more important, and more gratifying to study each have dozens of answers, and I think that's more interesting.

DyslexicAtheist
nazis held the same belief.
perching_aix
nazis held the same belief.

Along with being against any form of animal cruelty.

They were also pretty obsessed with spiritualistic quackery.

Are we giving each other fun facts or what? Surely one does not need to go all the way to the nazis to find a Picasso hater? Or are you just following in the footsteps of the blogpost author too?

jibal
Fallacy of affirming the consequent.

Nazis ate food ... ugh to food!
didibus
I'm not an art historian, but I think Picasso invented an entire art style.

When you use AI, you might now prompt "in the style of Picasso".

kelnos
You not thinking it's great just means you personally don't like it. Which is fine.

What makes that any better than some highly derivative AI generated rubbish I connect to about the same amount?

Because Guernica was made by a human who was passionate about something, and poured that passion into his work. Even if you don't "get it", I hope you can at least acknowledge that truth.

To put it another way, on one hand we have:

1. Deranged pencilwork created by someone who created it with purpose, to express a feeling he had about something.

2. Deranged pencilwork created by a probabilistic algorithm, that doesn't mean anything to anyone.

Even if we look at it in these sorts of terms, #1 is still orders of magnitude "better" to me.

petralithic
A human might generate a piece of media using AI (either via a slot machine spin or with more advanced workflows like ComfyUI) and once they deem it looks good enough for their purpose, they might display it to represent what they want it to represent. If Guernica was AI generated but still displayed by Picasso as a statement about war, it would still be art.

Tools do not dictate what art is and isn't, it is about the intent of the human using those tools. Image generators are not autonomously generating images, it is the human who is asking them for specific concepts and ideas. This is no different than performance art like a banana taped to a wall which requires no tools at all.

TheCraiggers
I read what you wrote, and it seems to me you think these two things are equal:

A human using their creativity to create a painting showcasing a statement about war.

A human asking AI to create a painting showcasing a statement about war.

I do not wish to use strawman tactics. So I'll ask if you think the above is equal and true.

petralithic
Is a banana taped to a wall "art?" Your answer to that is the answer to your question.
averagefluid
Your answer to that is the answer to your question.

In what logical or philosophical framework does my opinion dictate your opinion? You're not making a grand philosophical point, you're frustrating the attempts of other people to understand your point of view and either blocking them from understanding your point of view or addressing your argument in a meaningful way.

If you cannot or will not engage in the conversation it would be more efficient and more purposeful for you to say so than the "whatever you say is what I say" falseness you're expressing in the above comment.

Jensson
In what logical or philosophical framework does my opinion dictate your opinion?

Because priors affect your conclusions.

For example, I don't like licorice, and that makes me not like many kinds of candy. But I know that if a person likes licorice, they will have a very different view of these candies. Similarly, how you define art affects how you see AI art, because its meaning is completely different to different people.

So for the example in question: I don't view a banana taped to a wall as art, but I know some other people do, and I understand why they do, so answering that question tells us a lot about a person's priors.

Jensson
I don't view a banana taped to a wall as art

For those who don't understand why: I argue art needs to stand on its own, without the surrounding social context. If you view trash as art just because an artist told you to, then the art isn't the trash; the art is the artist's explanation.

So, if you see a banana taped to a wall on a house when out walking, would you see that as beautiful art? If not, it isn't art according to my definition. The art piece is the whole thing, the banana and the explanation.

But many pictures can be considered art on their own without the social context, they are just beautiful and nice to look at. A banana taped to a wall doesn't pass that test.

Edit: So according to this definition AI art can be art, since some of those images can stand on their own as beautiful pieces of art without needing a social context.

petralithic
It is a rhetorical device that nevertheless clearly delineates the various camps on AI art. If one requires human creation rather than mere human intent for something to be art, then they similarly can't consider a banana taped to a wall as art, nor AI as art either. But if one accepts the former while discounting the latter, that's logically inconsistent. I am of the group that considers both as art, because both require human intention.
saltcured
And, is the artist the one who taped it, the one who told them to tape it, or the one who created the banana?
petralithic
It's the person who had the idea to do so and did so. AI doesn't do anything you don't tell it to; it is the banana creator in this case. It is still up to you to get the best-looking banana you can, then display it.
Jensson
Why end there? Why isn't the manager who told the artist to make a piece the artist?

AI doesn't do anything you don't tell it to, it is the banana creator in this case

So if I tell the AI "create me a piece of art", and it gives me a cool image, I am the artist? So, if a manager tells a person "create a piece of art", the person goes and tapes a banana to the wall, the manager was the one who created the art?

Edit: And if you think an AI can't handle that question, I just gave it to an image model and got this. Did I create this art-piece? If not, who did? Did the AI create it?

https://imgur.com/aWT8YCb

petralithic
The AI created it, but your choosing to display it is the art - performance art, specifically - not the image itself (though again, if someone looks at it and it moves them, the image itself could also be considered art). Did Duchamp manufacture the urinal he turned into Fountain? No, but then why do we still consider that art? By your logic, he wouldn't be an artist.

Not sure why you're talking about managers; that seems one step removed. Michelangelo was commissioned by the Pope to create something - is the Pope the artist? But then let's say Michelangelo uses some machine or hires a subordinate to paint for him; who is the artist then?

jay_kyburz
Two people want to make a statement about war.

One person spent years painting landscapes and flowers.

The other spent years programming servers.

Is one person's statement less important than the other's? Less profound or less valid?

The "statement" is the important part, the message to be communicated, not the tools used to express that idea.

TheCraiggers
Is one person's statement less important than the other's? Less profound or less valid?

To whom?

One of my favorite quotes is "The product of your art is you." (I heard it from Brandon Sanderson, not sure if he's the original.) I have come to believe this is true on multiple levels. So in your example, I can answer "they're both equally valid and profound" assuming they put similar levels of effort, skill, and basically themselves into that work.

I think that's the part where generative art falls behind. Sure, I can generate some art of a frog, print it, and hang it on my wall. But the print next to it, that I took with my actual camera after wading through a swamp all day? That will have much more profound meaning to me.

Excellent question though. I had to think for awhile on this, and most importantly, I learned something while doing it. Thank you.

kelnos
Is one person's statement less important than the other's? Less profound or less valid?

In my opinion, yes. But that's the entire point here: art is in the eye of the beholder. I think much much much less of AI-generated art than I do of human-generated art. Even if an artist who is well-known for his human-generated art were to use an AI to make art, I would still likely think less of that art than of their earlier work.

The other spent years programming servers.

I will be the first to shut down people who try to say that programming isn't a creative endeavor, but to me this is not "art".

The "statement" is the important part, the message to be communicated, not the tools used to express that idea.

I don't agree with that. Consider just regular argumentation. If I'm trying to argue a point, how I express my argument matters. The way in which I do it, the words I use, whether I am calm and collected or emotional and passionate, perhaps graphs or charts or some other sort of visual aid, all of that will influence whether or not you buy my argument.

So if art is to make a statement, each individual has to believe that the way it's presented is powerful and resonates with them. This is a personal thing, and people are going to differ in how they react.

AlotOfReading
This is a debate that existed long before LLMs with things like action painting. If I give you a Jackson Pollock and a piece from someone who randomly splattered paint on a canvas until it looked like Jackson Pollock, are they the same?
petralithic
Same in what sense? That is the real question, and perhaps not even the important one when it comes to art. Because, if the Pollock is more "important," there is an implication that it's better because it's by a more famous person, while art should be able to come from anywhere and anyone.
AlotOfReading
The same in whatever sense you want to compare the art rather than the creators. Pollocks try to convey the action and emotion of the creation process. Our hypothetical copycat lacks that higher level meaning, even though they've created an otherwise similar physical product.

As an aside:

    ...art should be able to come from anywhere and anyone.
is an immensely political view (and one I happen to agree with). It's not a view shared by all artists, or their art. Ancient art in particular often assumes that the highest forms of art require divine inspiration that isn't accessible to everyone. It's common for epic poetry to invoke muses as a callback to this assumption, nominally to show the author's humility. John Milton's Paradise Lost does this (and reframes the muse within a Christian hierarchy at the same time), although it doesn't come off as remotely humble.
petralithic
It depends what the copycat was thinking, maybe they wanted to follow in Pollock's footsteps, maybe they wanted to showcase the point you're making, whether a copycat is as good as the real thing and therefore also considered art, perhaps even as important (apprentices often copied their masters, such as da Vinci's), maybe they are just creating it because it looks good. If there's no other reasoning, then I'd still say they're the same, because how can one say they're not art too? Even as an observer of the art, what if I like the copycat more? These are all open questions to the philosophy of art and I'm glad it's accessible today to everyone rather than only to the historically abled.
bonoboTP
Pollock was a part of a coherent intellectual movement across all of art. You can't productively discuss whether it's art without focusing on that. He didn't just wake up one day and think to himself that it would be fun to throw paint on the canvas like this and then people looked and wondered if that's art or not.

It was the intellectual statement conveyed through that medium that made him famous.

s1mplicissimus
Agreed, tools do not dictate what art is and isn't - but using those tools for art doesn't exempt their use from needing to be ethically justified.

If generating the piece costs half a rain forest or requires tons of soul crushing badly paid work by others, it might be well worth considering what is the general framework the artist operates in.

Using more resources to achieve subpar outcomes is not generally something considered artful. Doing a lot with little is.

petralithic
There are tons of examples of art that take much more energy than what an AI does, such as an architectural monument. It is not necessarily the case that "Using more resources to achieve subpar outcomes is not generally something considered artful. Doing a lot with little is." as not all artists will agree and even those that do might not follow it. For example, certain pigments in painting could be highly unethically sourced but people still used them and some still do, such as mummy brown, Indian yellow, or ivory black, all from living organisms.
s1mplicissimus
You are mixing up what artists do and what is considered artful. Not everything artists do is artful, even by their own standard.

It is not necessarily the case that "Using more resources to achieve subpar outcomes is not generally something considered artful. Doing a lot with little is." as not all artists will agree and even those that do might not follow it. For example, certain pigments in painting could be highly unethically sourced but people still used them and some still do, such as mummy brown, Indian yellow, or ivory black, all from living organisms.

I put forward the proposition "Using more resources to achieve subpar outcomes is not generally something considered artful. Doing a lot with little is." Yet you argue "but there are exceptions" - I know that, hence my use of the term "generally". I'll be glad to learn how my proposition is wrong, but I'm not inclined to defend your strawman.

petralithic
It's more that I reject your premise of "Using more resources to achieve subpar outcomes is not generally something considered artful. Doing a lot with little is." because there is no backing behind that statement except your opinion, so I provided counterexamples. But I did not even need to do so, because your statement offers no rationale itself and thus need not be heeded.
belorn
Art is not art. Art is the thought manifested into something which convey the thought. If an artist is using an AI to manifest a thought, then that can be art.

Similarly, music is not music, but rather the thought of a musician manifested is what we call music. This is why silence can be music, but silence without the thought is not.

Images generated through an AI without the human thought are not art. They can look like art, have similarities to art, but they are no more art than silence is music. The same goes for music and text generated by AI.

People can inject defective thoughts into the process like "what generates me most money" or "how can I avoid doing any thinking", in which case the output of the AI will reflect that.

joquarky
What about musical synthesizers? Can they be used to create art?
teddyh
Cavemen probably once had the same argument about whether musical instruments could be considered “music”; something previously only possible by singing.

Obviously, the answer is yes; musical instruments, including synthesizers, can be music and art.

aspaviento
And let's not forget that people call more things "art" than just the popular masterpieces. A guy sold an invisible sculpture¹ claiming it was art. If things like this can be called art, whatever AI makes can be called art too.

1: https://news.artnet.com/art-world/italian-artist-auctioned-o...

bonoboTP
"What is or isn't art" didn't simply become a topic because people like to philosophize about the meaning of words. Over the 20th century the art world became fascinated with the subversive, the transgressive, the postmodern, rejecting authority and standards of beauty that were deemed limiting and oppressive, etc. One direct contributing factor was photography. Skill at realistic depiction became deemphasized; with mass production, plastic, etc., the focus shifted to abstract ideas. It was also a protest against the system that brought the two world wars.

It was considered "anti-art" at the time, but basically took over the elite art world itself and the overall movement had huge impact on what is considered art today, on performance art, sculptures, architecture that looks intentionally upsetting etc.

It's not useful to try to think of the sides as "expansive definitionists" who consider pretty much anything art just because, and "restrictive definitionists" who only consider classic masterpieces art. The divide is much more specific and has intellectual foundation and history to it.

The same motivations that led to the expansive definition in the personally transgressive, radical and subversive sense today logically and coherently oppose the pictures and texts generated by huge centralized profit-oriented companies via mechanization. Presumably, if AI were more of a distributed, hacker-ethos-driven thing that shows the middle finger to Disney copyrightism, they might be pro-AI.

petralithic
By this same logic, AI will also become accepted as art in 50 years. And by the way, no one who's serious about AI "art" uses commercial generators, they use local AI with workflow managers like ComfyUI. They are not just typing into a box like Midjourney. Therefore these are the hackers who're showing the middle finger to Disney, they dislike copyright as much as anyone.
bonoboTP
That's right, and a lot of stuff is being conflated and the "debate" is mostly on the level of soundbites and emotional vibes. Many have strong opinions who have never tried the models or seen someone skilled using them (easy to find YouTube streams), combining LoRAs, ControlNets, etc.

I generally find the specific debate around "whether it's art" super boring. People have squeezed all the juice out of "what even is art" decades before the banana taped to a wall. Duchamp's Fountain, Manzoni's Artist's Shit, John Cage's 4′33″, the Red Square by Malevich, Jackson Pollock etc.

I simply don't care if it's art. It's not an inherently prestigious label to me given this history.

exoverito
Needless to say, most humans are unoriginal parrots too; one need only look at the prevalence of mimetic desire. Few are capable of artistic genius like Picasso.

One technical definition of empathy is understanding what someone else is feeling. In war you must empathize with your enemy in order to understand their perspective and predict what they will do next. This cognitive empathy is basically theory of mind, which has been demonstrated in GPT-4.

https://www.nature.com/articles/s41562-024-01882-z

If we do not assume biological substrate is special, then it's possible that AIs will one day have qualia and be able to fully empathize and experience the feelings of another.

It could be possible that new AI architectures with continuously updating weights, memory modules, evolving value functions, and self-reflection, could one day produce truly original perspectives. It's still unknown if they will truly feel anything, but it's also technically unknowable if anyone else really experiences qualia, as described in the thought experiment of p-zombies.

freehorse
it's possible that AIs will one day have qualia

As the article says, then we can discuss about it that day. "One day AI will have qualia" is no argument in discussing about AI nowadays.

kelseyfrog
No computer will ever empathize with the senseless inhumanity of war

My computer does. What evidence would change your mind?

saint_yossarian
What evidence convinced you?
kelseyfrog
I performed an "Affective Turing Test" with null results.
andybak
The same Picasso that was notorious for churning them out towards the end of his career?

I'm being slightly flippant but I do think this is a motte and bailey argument.

Not every painting is a Guernica, nor does it need to be.

And not every aesthetically pleasing object is art. (And finally - art doesn't even have to be aesthetically pleasing. And actually finally "art" has a multitude of contradictory meanings)

racl101
Monkey's paw closes.

Now, just like you can with Studio Ghibli art, you can generate new images in the style of Guernica.

dragonwriter
No computer will ever empathize with the senseless inhumanity of war to produce such a work.

Neither will a paintbrush.

The tool doesn't need to, though.

jondwillis
We must unironically give the computer pain sensors. :( don’t hurt me mr. Basilisk, I’m just parroting someone else’s idea.
petralithic
I've talked to people like this and when you dig deep enough, it's a fear of the economic effects of it, not actually any strongly held belief of AI inherently not being intelligent or emotional. Similarly, and I'm speaking generally here, ask artists about coding AI and they won't care, and ask programmers about media generation AI and they also won't care. That's because AI outside their domain does not (ostensibly) threaten their livelihood.
hofrogs
I am not an artist, yet I care about media generation "AI", as in I resent it deeply.
petralithic
Like I said, I'm speaking generally. There are a few like you who do, for whatever reason, but most artists hate it because they, at the most basal level, see it as a threat, especially when it came out. You should've seen what engineers on HN said about GitHub Copilot when it first came out too.
Palomides
this is a claim shockingly contrary to what every artist I know, and I myself as an amateur, believe
petralithic
Which artists care about coding AI like Copilot? All the ones I talked to simply do not care. Regarding economic means, I asked them whether they'd care if they lived in a post-scarcity society where they could make art all day and not have to worry about their material needs being met, i.e. they're rich, and it turns out that in that case they didn't care what people did with AI, be it image generation or code generation.
Palomides
so here's the thing, artists like making the art, skipping the making leaves you with nothing

most artists I know are against AI because they feel it is anti-human, devaluing and alienating both the viewer and the creator

some can tolerate it as a tool, and some (as is long art tradition) will use it to offend or be contrarian, but these are not the common position

if I were a spherical cow in a vacuum with infinite time, and nobody around me had economic incentives to make things with it, I could, maybe, in the spirit of openness, tolerate knowing some people somewhere want to use it... but I still wouldn't want to see its output

petralithic
They don't have to use AI though, they can leave people who do alone. But that's not what I see, I see artists getting mad at the latter and when I dig deep, it turns out they're scared it'll take their digital commission work. This has primarily been my experience talking with artists I commission as well as people online on Twitter and reddit for example.
Palomides
sure, it's hard for an artist to compete on price with AI, and the ones who depend on this kind of ultra-low-budget work will have a hard time (and have a direct economic self-interest in advocating against it)

but again, that's not what I see in the people around me

petralithic
And that's my point. It was never about the philosophy, it was always about the economics. That's what frustrates me, why lie? If it's money you want then ask for it, don't make up some bullshit.
evilsetg
But are philosophy and economics so neatly separable in this case? Say you hold the philosophical belief that humans creating art is important but the economics don't allow it. In that case the root of your argument is philosophical and the economics factor into it but are not the single argument itself.
petralithic
Well, the trope of the starving artist exists for a reason. One does not need to be employed as a full-time artist to create art, and thus art can come from anywhere; the economics are an entirely separate issue, because no one should expect to be able to do a leisurely activity as an economically viable occupation indefinitely. Does it happen? Yes, of course, but it shouldn't be expected to always continue.
xantronix
As an artist, I do not dread AI's artistic capabilities from a philosophical standpoint because its apparent "humanity" is a distilled average entirely divorced from the contexts in which its stolen art inputs are provided. In this way, it is categorically devoid of meaning.

As a software developer, I dread AI's capabilities to greatly accelerate the accumulation of technical debt in a codebase when used by somebody who lacks the experience to temper its outputs. I also dread AI's capabilities, at least in the short term, to separate me and others from economic opportunities.

petralithic
That's because, if I'm inferring correctly what you're implying in the last sentence, you work primarily as a software developer. Try telling a working artist your first paragraph or that they shouldn't worry about AI taking their commission work for example and see what they think.
magicalist
Sounds like you were maybe having some one-sided conversations with all the many artists you spoke to.
petralithic
Ah yes, because you disagree with me, I must have been having one-sided conversations. I suppose some people just can't accept other people's experiences without denigrating them.
jclulow
Where can I sign up for the post scarcity society? Asking for my artist friends.
petralithic
You can't, hence my point about their fear being economic, not philosophical.
eaglelamp
If you dig deep enough isn’t the same thing true of people like yourself? Do you truly believe that the large language models we currently have, not some fantasy AI of the distant future, are emotional and intellectual beings? Or, are you more interested in the short term economic gains of using them? Does this invalidate your beliefs? I don’t think so, most everyday beliefs are related to economic conditions.

How could a practical LLM enthusiast make a non-economic argument in favor of their use? They’re opaque usually secretive jumbles of linear algebra, how could you make a reasonable non-economic argument about something you don’t, and perhaps can’t, reason about?

petralithic
When did I say I believe AI to be intelligent or emotional? Of course I use it for economic factors, but I'm honest about it, not wrapping it up in some intellectual, solipsizing arguments. I'm not even sure what non-economic arguments you're talking about, my point is that at the end of the day most people care about the economic impact it might have on them, not anything about the technology itself.
eaglelamp
I don’t think the author is hiding his economic anxiety behind solipsism. He states plainly he doesn’t like the deskilling of work.

My point is why are your economic motivations valid while his aren’t?

petralithic
Who said my economic motivations are or aren't valid? My point is that people shouldn't lie, to others or to themselves, and to state their motivations plainly. While the author does do so, I am talking about other people who do hide behind solipsism, thus that is why my comment is not a top level comment about the article but a reply to a specific comment that says "one of my friends...", hence why I said "people like this" where "this" refers to their friend, not the author.
doctorpangloss
I've talked to people like this and when you dig deep enough, it's a fear of the economic effects of it

You hear what you want to hear. You think fine artists - and really, how many working fine artists do you really know? - don't have sincere, visceral feelings about stuff, that have nothing to do with money?

petralithic
We can talk anecdata all day. I do know fine artists, for example sculptors and painters, as well as many digital creators, as I commission pieces from them for prints in my place, and I've talked to all of them about AI out of curiosity.
rsoto2
I care because it's outright theft. That's what AI companies do and what you are an accessory to.

AI is not intelligent or emotional. It's not a "strongly held belief"; it simply hasn't been proven.

petralithic
It's as much theft as piracy is.

AI is not intelligent or emotional.

Yes, I agree, my point is that people use arguments against these types of issues instead of stating plainly that their livelihood will be threatened. Just say it'll take your job and that's why you're mad, I don't understand why so many people try to dance around this issue and make it seem like it's some disagreement about the technology rather than economics.

diamond559
And most "AI" evangelists are actually stock holders.
footy
I'm no artist (I even failed high school art) and I think AI media generation is a travesty.
raxxorraxor
I love to employ AI but completely understand the criticism. It does increase my productivity as a software dev.

I also think the 10 hours of random electro swing or other genres of generated music is of extremely high quality. It isn't bland music, on the contrary it is playful and varied. Example:

https://www.youtube.com/watch?v=LmUSK1IjoQg&list=RDLmUSK1Ijo...

It is entertaining and a viewing experience. And yet, it still doesn't feel the same if you know it is just generated by some carefully selected prompts. Sure, that itself is a creative endeavor, but I would have preferred for AI to clean my room for me instead of slowly replacing every creative avenue from writing to art to music.

I continue to play music myself, but I will never reach the level AI is able to achieve in a few minutes. Sure, this example certainly took a while to create and the result is awesome. So what do we do with all the superfluous artists now?

lucyjojo
did you link the right video?

it was extremely bland... dry as an oat in a flash freezer...

lo_zamoyski
I'm not terribly interested in emotional reactions. This is too common a problem: we think emoting is a substitute for reasoning. Many if not most people believe that if they feel something, then it must be true; the disagreeing party just doesn't "get it". We must learn to reason and make arguments.

I am interested in the intelligible content of the thing.

Also, AI does not reason. Human beings do.

petralithic
How can we be sure humans reason?
andybak
replaced with a slightly less salacious word than the original in consideration for politeness.

Please don't. That offends me much more than a very mild word ever could.

stronglikedan
I think it's obvious virtue signaling, but I would never let something so insignificant actually offend me. Life's too short.
oasisaimlessly
What was the original word?
jclulow
horny
justsid
Thank god it was censored, someone’s kid might be browsing Hacker News and would now be traumatized /s
anal_reactor
No horny allowed, you're going to the horny jail.
Ferret7446
A COMPUTER CAN NEVER FEEL SPITEFUL OR ...

Can other humans (aka NPCs)? They seem like they do, so I treat them as such, but as far as I can tell, other humans and a sufficiently emoting AI both act equally like they feel emotions.

didibus
Hate can be emotional, but it can also have underlying rational causes.

For example, someone can feel like they already have to compete with people, and that's nature, but now they have to compete with machines too, and that's a societal choice.

fridder
I do wonder if a significant portion of the hate is from the AI push coming from the executive level.
sam_lowry_
I had to search and found the word "horny".
_Algernon_
† replaced with a slightly less salacious word than the original in consideration for politeness.

1. You're on the internet. Nobody will get mad if you say "horny".

2. Bastardizing a quote is a worse outcome than you missing an opportunity to virtue signal your puritan values. Just say the original quote.

dpoloncsak
Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright...

This paragraph really pisses me off and I'm not sure why.

Critics have already written thoroughly about the environmental harms

Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

the reinforcement of bias and generation of racist output

I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess

the cognitive harms and AI supported suicides

There is constant active rhetoric around the sycophancy, and ways to reduce it, right? OpenAI just made a new benchmark specifically for this. I won't deny it's an issue, but to act like it's being ignored by the industry is a complete miss.

the problems with consent and copyright

This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped *me* from saving images or pirating movies.

Then the rest touches on ways people will feel about or use AI, which is obviously just as much conjecture as anything else on the topic. I can't speak for everyone else, and neither can anyone else.

sindriava
I appreciate this response. The environmental impact is such a red herring it's not even funny. Somehow these statements never include the impact of watching Netflix shows or doing data processing manually.
didibus
They might hate those too?

It's pretty clear there are impacts, AI needs energy, consumes material, creates trash.

You probably just don't mind it. The fact is still a fact; the conclusion is different. You assess that it's not a big concern in the grand scheme of things and worth it for the pros. The author doesn't care much for the pros, so any environmental impact is a net loss for them.

I feel both takes are rational.

sindriava
They might be rational, but taking things out of context as much as happens with any AI / environment narrative gives off a strong "arsenic-free cauliflower" smell.
didibus
If you take a report like this: https://mitsloan.mit.edu/ideas-made-to-matter/ai-has-high-da...

You can:

1. Dismiss it by believing the projections are very wrong and much too high.

2. Think 20% of all energy consumed isn't that bad.

3. Find it concerning environmentally.

All takes have some weight behind them in my opinion. I don't think this is a case of "arsenic-free cauliflower" - except maybe if you claim #1, but even that claim can't really invalidate the others' rationale: they make an assumption based on the available data and reason from it, and the data doesn't show ridiculously small numbers like it does in the cauliflower case.

sindriava
I can't speak for you but I'm certainly not qualified to opine on the predictions so I won't address the 20% figure since I don't find it relevant.

data centers account for 1% to 2% of overall global energy demand

So does the mining industry. Part of that data center consumption is the discussion we are having right now.

I find that in general energy doesn't tend to get spent unless there's something to be gained from it. Note that providing something that uses energy but doesn't provide value isn't a counterexample for this, since the greater goal of civilization seems to be discovering valuable parts of the state space, which necessitates visiting suboptimal states absent a clairvoyant heuristic.

I reject the statement that energy use is bad in principle, and pending a more detailed ROI analysis, I think this branch of the topic has run its course, at least for me :)

didibus
so I won't address the 20% figure

Ok, but that's the figure that would be alarming: AI is projected to consume 20% of global energy production by 2030... That's not like the mining industry...

I find that in general energy doesn't tend to get spent unless there's something to be gained from it

Yes, you'd fall in the #2 conclusion bucket. This is a value judgement, not a factual or logical contradiction. You accept the trade off and find it worth it. That's totally fair, but in no way does it remove or mitigate the environmental impact argument, it just judges it an acceptable cost.

viridian
I started writing a response to your post, but as I kept writing and investigating, it became clear that the MIT article you linked is overflowing with false statements, half-truths, stretched truths, and unsourced information.

It is legitimately one of the most misleading pieces of press I've read in a while.

The 21% value is unsourced, the single-image-equals-full-phone-charge claim is wrong in so many ways that I had written three paragraphs picking apart both the MIT publication and the Hugging Face paper's methodology, and so on.

I'm happy to be given evidence that AI is ruinous in terms of more than its social effects, but this publication has made me incredibly suspicious of anyone claiming this to be the case.

andybak
I think I get "arsenic-free cauliflower" from context but searching brings up no sources. Did you coin that phrase or is my non-google-fu just weak?
sindriava
Huh, my search is also turning up nothing. I could swear I heard a story about cauliflower originally being yellow and getting replaced with the white cultivar due to the guy who grew it marketing it as "arsenic-free" cauliflower despite the fact that the yellow one had no arsenic to begin with. Either I'm getting Mandela effected or I'm hallucinating -- which of course only AI models are capable of ;)
lostmsu
They would be rational if the author also produced everything they consume off the earth and hosted this very slop on a tree. Otherwise they needed hardware produced by other humans, and those humans used the things mentioned above, and probably AI too.

But as it stands the author indirectly loves Netflix.

jacobsenscott
Uh, we've been doing data processing for nearly 80 years, and watching netflix for nearly 20 years. Suddenly we need to tile the earth with data centers, build power plants, burn all the fuels we can, and will "need to get to fusion" (per Sam) to run AGI. He also said "if we need to burn a little more gas to get there, that's fine". We'll never get to fusion or AGI, but we will destroy the earth to put a few more dollars in the pockets of the 0.01%.

You don't see the difference, or are you willfully ignorant?

TeMPOraL
You do understand what "exponential" in the "exponential growth" means?

Yes, it means that "suddenly" we need to do more of everything than we did for the entirety of human history until a few years ago. The same was true a few years ago. And a few years before that. And so on.

That's what exponential growth means. Correct for that, and suddenly we're not really doing things that much faster "because AI" than we'd be doing them otherwise.

sindriava
Do you honestly expect anyone to believe you're trying to take part in a discussion with that last statement? I appreciate this topic has your emotions running hot, but this is HN, not Reddit. Please leave that kind of talk at the door.
joquarky
Just a note on etiquette: starting your sentence with "Uh," is often interpreted as dismissive or condescending, even if that’s not your intent.
tremon
https://www.eesi.org/articles/view/data-centers-and-water-co...

Together, the nation’s 5,426 data centers consume billions of gallons of water annually. One report estimated that U.S. data centers consume 449 million gallons of water per day and 163.7 billion gallons annually (as of 2021)

Approximately 80% of the water (typically freshwater) withdrawn by data centers evaporates, with the remaining water discharged to municipal wastewater facilities.
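For what it's worth, the two quoted figures are internally consistent; a quick back-of-envelope check (my arithmetic, not from the report):

```python
# The report quotes 449 million gallons/day and 163.7 billion gallons/year.
# Multiplying the daily figure out confirms both describe the same estimate.
per_day_gallons = 449e6
per_year_gallons = per_day_gallons * 365

print(f"{per_year_gallons / 1e9:.1f} billion gallons/year")
# ~163.9 billion, matching the ~163.7B annual figure quoted above
```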

mrsilencedogood
This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped me from saving images or pirating movies.

I think the main problem for me is that these companies benefit from copyright - by beating anyone they can reach with the DMCA stick - and are now also showing they don't actually care about it at all and when they do it, it's ok.

Go ahead, AI companies. End copyright law. Do it. Start lobbying now.

(They won't, they'll just continue to eat their cake and have it too).

ACCount37
Lawyers of all the most beloved companies - Disney, New York Times, book publishers, music publishers and more - are now engaged in court battles, trying to sue all kinds of AI companies for "copyright infringement".

So far, case law is shaping up towards "nope, AI training is fair use". As it well should.

_DeadFred_
If your product wouldn't exist without inputting someone else's product, it is derivative of that someone else's product. This isn't a human learning. This is a corporate, for profit product, it is derivative, and violates copyright.
ACCount37
That's not the standard we hold "human generated" media to. Not even "mockbusters" are illegal under copyright law. Nothing is new and everything is a remix. And I see no reason to make an exception for AI.

Copyright law is a disgrace, and copyright should be cut down massively - not made into an even more far-reaching anti-freedom abomination than it already is.

jacquesm
Nothing is new and everything is a remix.

This is absolutely not true.

dpoloncsak
Yeah, it's a fair point. We have seen a clear abuse of our copyright system.
sonofhans
This paragraph really pisses me off and I'm not sure why.

No hate, but consider — when I feel that way, it’s often because one of my ideas or preconceptions has been put into question. I feel like it’s possible that I might be wrong, and I fucking hate that. But if I can get over hating it and figuring out why, I may learn something.

Here’s an example:

Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

Consider that Google is one of the creators of the supposed harm, and thus trusting them may not be a good idea. Tobacco companies still say smoking ain’t that bad

The harm argument is simple — AI data centers use energy, and nearly all forms of energy generation have negative side effects. Period. Any hand waving about where the energy comes from or how the harms are mitigated is, again, bullshit — energy can come from anywhere, people can mitigate harms however they like, and none of this requires LLM data centers.

TeMPOraL
The harm argument is simple — AI data centers use energy, and nearly all forms of energy generation have negative side effects. Period. Any hand waving about where the energy comes from or how the harms are mitigated is, again, bullshit — energy can come from anywhere, people can mitigate harms however they like, and none of this requires LLM data centers.

Presented like this, the argument is complete bullshit. Anything we do consumes energy, therefore requires energy to be supplied, production of which has negative side effects, period.

Let's just call it a day on civilization and all (starve to death so that the few survivors can) go back to living in caves or up the trees.

The real questions are, a) how much more energy use are LLMs causing, and b) what value this provides. Just taking this directly, without going into the weeds of meta-level topics like the benefits of investment in compute and energy infrastructure, and how this is critical to solving climate problems - just taking this directly, already this becomes a nothing-burger, because LLMs are by far some of the least questionable ways to use energy humanity has.

joquarky
Yeah, the OP's argument could also be used to shame people for playing video games.

How much power does a typical gaming rig draw these days?

viridian
The logical end step of these trains of thoughts is always the same. If you aren't contributing to the solution in a big way, you should kill yourself. And even if you can't take that step, you should absolutely not have children, and advocate that others do the same.

Viewing energy use as an axiomatic evil necessarily leads to the removal of man from the earth.

sonofhans
No shaming in my argument, only pointing out that the “no harms” claim is bullshit.
sonofhans
Moving the goal posts, IMO. The post I was replying to said “there is no harm.” That’s all I was contradicting. You can argue all day that the harm is _worth it_, but that’s not what OP was doing.
nerevarthelame
Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

I don't think they have, no. Perhaps I'm overlooking something, but their most recent technical paper[0], published less than a week ago, states, "This study specifically considers the inference and serving energy consumption of an AI prompt. We leave the measurement of AI model training to future work."

[0] https://arxiv.org/html/2508.15734v1

dpoloncsak
I see. They actually specifically mention they did NOT account for training. Not sure how I misread that so poorly
rsynnott
I saw _quite a few_ people trying to claim that it included training, even though it clearly didn't, so maybe that?

Also, note that it is the _median_ usage for Gemini. One would assume that the median Gemini usage is that pointlessly terrible Google Search results widget, the one that tells people to eat rocks. Which you've got to assume is on the small side, model-wise.

schwartzworld
Didn't google just prove there is little to no environmental harm

I'd be interested to see that report as I'm not able to find it by Googling, ironically. Even so, this goes against pretty much all the rest of the reporting on the subject, AND Google has financial incentive to push AI, so skepticism is warranted.

I don't ask a lot of race-based questions to my LLMs, I guess

The reality is that more and more decision making is getting turned over to AIs. Racism doesn't have to just be n-words and maga hats. For example, this article talks about how overpoliced neighborhoods trigger positive feedback loops in predictive AIs https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-...

Copyright never stopped me from saving images or pirating movies.

I think we could all agree that right-clicking a copyrighted image and saving it is pretty harmless. Less harmless is trying to pass that image off as something you created and profiting from it. If I use AI to write a blog post, and that post contains plagiarism, and I profit off that plagiarism, it's not harmless at all.

I also grew up being told that ANYTHING on the internet was for the public

Who told you that? How sure are you they are right?

Copilot has been shown to include private repos in its training data. ChatGPT will happily provide you with information that came from textbooks. I personally had SunoAI spit out a song whose lyrics were just Livin' On A Prayer with a couple of words changed.

We can talk about the ethical implications of the existence of copyright and whether or not it _should_ exist, but the fact is that it does exist. Taking someone else's work and passing it off as your own without giving credit or permission is not permitted.

danso
I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess

You're not uneducated, but this is a common and fundamental misunderstanding of how racial inequity can afflict computational systems, and the source of the problem is not (usually) something as explicit as "the creators are Nazis".

For example, early face-detection/recognition cameras and software in Western countries often had a hard time detecting the eyes on East Asian faces[0], denying East Asians and other people with "non-normal" eyes the streamlined experience of whatever automated approval system they were beholden to. It's self-evident that accurately detecting a higher variety of eye shapes would require more training complexity and cost. If you were a Western operator, would it be racist for you to accept the tradeoff for cheaper face detection capability if it meant inconveniencing a minority of your overall userbase?

Well, thanks to global market realities, we didn't have to debate that for very long, as any hardware/software maker putting out products inherently hostile to 25% of the world's population (who make up the racial majority in the fastest growing economies) weren't going to last long in the 21st century. But you can easily imagine an alternate timeline in which Western media isn't dominant, and China & Japan dominate the face-detection camera/tech industry. Would it be racist if their products had high rates of false negatives for anyone who had too fair of skin or hair color? Of course it would be.

Being auto-rejected as "not normal" isn't as "racist" as being lynched, obviously. But as such AI-powered systems and algorithms gain increasing control in the bureaucracies and workflows of our day-to-day lives, I don't think you can say that "racist output", in the form of certain races enjoying superior treatment over others, is a trivial concern.

[0] https://www.cnn.com/2016/12/07/asia/new-zealand-passport-rob...

AlecSchueler
Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

No, they showed that a smaller AI model embedded in Google search results uses less power to train and run than something SOTA. That's quite a different thing altogether.

I don't ask a lot of race-based questions to my LLMs, I guess

You don't need to ask explicit questions to receive answers where bias is implicitly stated. You've dismissed the argument out of hand without actually meeting it.

I won't deny it's an issue but to act like it's being ignored by the industry is a miss completely.

The claim was that critics had been vocal about it, not that it had been ignored by the industry.

I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped me from saving images or pirating movies.

Policing is always very patchy. You maybe broke the law and got away with it as an individual, that's common. The issue is that these huge businesses can do a level of copyright infringement, and do it on a for-profit basis, while smaller businesses would be eradicated for attempting the same thing, and the artists they're taking from would face similar issues if they attempted even a fraction of that level of plagiarism.

bayindirh
HPC admin here.

A "small" 7-rack, SOTA CPU cluster uses ~700 kW of energy for computing, plus the energy requirements of cooling. GPUs use much more in the same rack space.

In DLC settings you supply ~20 °C water from the primary circuit to a heat exchanger and get it back at ~40 °C, and then you pump this heat out to the environment, plus the thermodynamic losses.

This is a "micro" system compared to the big boys.

How can there be no environmental harm when you need to run a power plant on premises and pump that much heat into the environment, at a much bigger scale, 24/7?

Who are we kidding here?

When this is done for science and intermittently, both the grid and the environment can tolerate this. When you run "normal" compute systems (e.g. serving GMail or standard cloud loads), both the grid and environment can tolerate this.

But running at full power and pumping this much energy in and heat out to train AI and run inference is a completely different load profile, and it is not harmless.
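For a sense of scale, the cooling numbers above already imply a substantial water flow for that "micro" system. A back-of-envelope sketch (my assumptions: all 700 kW ends up in the primary loop, water heats from 20 °C to 40 °C, cooling-plant overhead ignored):

```python
# Primary-loop flow rate needed to carry 700 kW away with a 20 K
# temperature rise, from Q = m_dot * c * delta_T.
heat_load_w = 700_000       # W, compute power of the 7-rack cluster
c_water = 4186              # J/(kg*K), specific heat of water
delta_t_k = 20              # K, ~40 C return minus ~20 C supply

flow_kg_s = heat_load_w / (c_water * delta_t_k)
flow_m3_h = flow_kg_s * 3600 / 1000   # 1 kg of water is ~1 L

print(f"~{flow_kg_s:.1f} kg/s, ~{flow_m3_h:.0f} m^3/h of water, 24/7")
# ~8.4 kg/s, ~30 m^3/h
```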

the cognitive harms and AI supported suicides

Extensive use of AI has been shown to change the brain's neural connections and make some areas of the brain lazy. There are a couple of papers.

There was a 16 year old boy's ChatGPT fueled death on the front page today, BTW.

This is the best argument on the page imo, and even that is highly debated.

My blog is strictly licensed with a non-commercial, no-derivatives license. AI companies take my text, derive from it, and sell the result. No consent, no questions asked.

The same models consume GPL and source-available code alike and offer their derivations to anyone who pays, infringing both licenses in the process.

Consent and copyright are a big problem in AI, and the companies want us to believe otherwise.

merksoftworks
What I will say about sycophancy: the recent rollback that OpenAI went through does look like a clear attempt to push the envelope on dark patterns wrt AI assistants. Engagement-optimized assistants, pornography, and tooling are inherently misaligned with the productivity and wellbeing of their users, in the same way that engagement-maximized social media is inherently misaligned with the social wellbeing of its users.
827a
The idea that these things cause “minimal” environmental harm is utterly laughable. It’s Orwell-level doublespeak. Am I seriously to believe that Musk wants to run 50M H100s in the coming years - an amount that might equate to 60GW of power draw on the low end, roughly 10% of the entire US power draw - and that this won’t have significant environmental consequences?

Of course, they hide the truth in plain sight: inference is a drop in the ocean compared to training.
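The 60GW figure is easy to reproduce with rough assumptions (mine, for illustration: ~1.2 kW per H100 once server, networking, and cooling overhead are included, and ~4,200 TWh/yr of US electricity generation for the comparison):

```python
# Fleet power draw for 50M GPUs, compared to average US electricity
# demand. All inputs are rough assumptions, not measurements.
gpus = 50e6
watts_per_gpu = 1200        # ~700 W TDP plus datacenter overhead (assumed)
fleet_gw = gpus * watts_per_gpu / 1e9

us_twh_per_year = 4200      # assumed US electricity generation
us_avg_gw = us_twh_per_year * 1e12 / (8760 * 1e9)  # TWh/yr -> average GW

print(f"fleet: {fleet_gw:.0f} GW, ~{fleet_gw / us_avg_gw:.0%} of average US draw")
# 60 GW, on the order of 10-13% of average US electricity demand
```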

simianwords
Don't try to argue using logic against a person who came to their position primarily through emotions!

All these points are just trying to forcefully legitimise his hatred.

the_other
The article doesn’t say that. The article says the author won’t do the work of explaining their position to the reader. It doesn’t say they haven’t done that work for themselves. I read it as saying they had done some undisclosed amount of work informing themselves in order to reach their position: thinking, reading articles, etc.

Also, I think their lean towards a political viewpoint is worth some attention. The point is a bit lost in the emotional ranting, which is a shame.

(To be fair, I liked the ranting. I appreciated their enjoyment of the position they have reached. I use LLMs, but I worry about the energy usage and I’m still not convinced by the productivity argument. Their writing echoed my anxiety and then ran with it into glee, which I found endearing.)

indoordin0saur
It bugged me too. There are some legitimate criticisms about AI but the author has some laughably bad ones mixed in there with the good. The way he just presents these criticisms and then handwaves them away as self-evidently true is just a very lazy appeal to authority.
delusional
Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

That's a crazy argument to accept from one of the lead producers of the technology. It's up there with arguing that ExxonMobil just proved oil drilling has no impact on global warming. I'm sure they're making the argument, but they would be doing that wouldn't they?

giancarlostoro
I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess

You can't even ask it things out of genuine curiosity; it starts to scold you and assumes you are trying to be racist. The conclusions I'm hearing are weird. It reminds me of that one Google engineer who quit or got fired after saying AI is racist or whatever back in like 2018 (edit: 2020).

mcpar-land
didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

There is constant active rhetoric around the sycophancy, and ways to reduce this, right? OpenAI just made a new benchmark specifically for this.

We have investigated ourselves and found no wrongdoing

I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess

Do you have to ask a race-based question to an LLM for it to give you biased or racist output?

danielbln
Yet it is here to stay, won't go away and even if it won't get any better at the useful things it does, it is useful. The externalities are real, some can be removed, some mitigated. If you're a hater and a human, then you don't have to mitigate anything, of course.

Me, I hate the externalities, but I love the thing. I want to use my own AI, hyper optimized and efficient and private. It would mitigate a lot. Maybe some day.

myhf
the useful things it does

It's weird how AI-lovers are always trying to shoehorn an unsupported "it does useful things" into some kind of criticism sandwich where only the solvable problems can be acknowledged as problems.

Just because some technologies have both upsides and downsides doesn't mean that every technology automatically has upsides. GenAI is good at generating these kinds of hollow statements that mimic the form of substantial arguments, but anyone who actually reads it can see how hollow it is.

If you want to argue that it does useful things, you have to explain at least one of those things.

schwartzworld
It's bad at:

- Actually knowing things / being correct
- Creating anything original

It's good at:

- Producing convincing output fast and cheap

There are lots of applications where correctness and originality matter less than "can I get convincing output fast and cheap". Other commenters have mentioned being able to vibe-code up a simple app, for example. I know an older man who is not great at writing in English (but otherwise very intelligent) who uses it for correspondence.

const_cast
Those applications are actually abysmally rare - it's just that we've created a society where businesses are just... allowed to externalize all costs.

Being wrong or lying is almost universally bad and unproductive. But making money has nothing to do with being productive - you can actively make the world worse and make money. Ask RJ Reynolds.

schwartzworld
Being wrong or lying is almost universally bad and unproductive

Sure, but there are cases where rightness isn’t a thing.

Don’t get me wrong I’m not an AI Stan, it has real problems, but it’s also not going anywhere. Eventually the bubble will pop and we’ll see which applications of AI turned out to be useful and which didn’t.

int_19h
It is extremely good at translating things from one language to another. Even if nothing else comes out of the AI bubble, this thing alone is a massive upside.
SpaceNoodled
Sure, LLMs seem to be really good at outputting things that aren't trivially verifiable by the user.
gitaarik
It's just a summarizing search engine. Instead of you manually looking up multiple pages and reading them and coming to some conclusion that allows you to move to the next step in the process, you can let the AI do that manual work for you.

But you shouldn't expect it to take over your actual thinking, because it doesn't actually think. So it's just another tool in the toolbox that can be useful for some applications, but not for all. If you use it for the appropriate tasks, it can be very helpful. If you try to do everything with it, you'll be disappointed.

Semiapies
And they're always desperately insisting that won't go away and you can't escape it. It stinks a lot of Big Lie techniques.
lif
This ship is unsinkable!

-Aitanic

petralithic
doesn't mean that every technology automatically has upsides

Who said "every technology?" We're talking about a specific one here with specific up and downsides delineated.

joquarky
Brainstorming.
yifanl
Yet it is here to stay, won't go away

Source for this claim? Are you still using Groupon?

dpoloncsak
The last time we saw a bet from Wall St like we're seeing with AI was when they bet on the internet.

Do you still use the internet?

satisfice
I still use the Internet. The Internet is also a technology that undermines society.
petralithic
You are free not to use it if that is what you believe.
Timwi
You are also free to give away all your money and not participate in capitalism if you wish.

Wait, are you sure?

petralithic
Yes, if you don't want to participate in capitalism, you're free to live in the woods and homestead.
satisfice
No, you are not free to do that. There are billions of humans on the planet. We all can't "live in the forest" without immediately destroying the forests.

I wish you'd try thinking for at least five seconds before commenting. If you are here, then you must be smart-- so, use your brain, man.

petralithic
Who said we all? If you specifically don't want to live in a capitalist society, you don't have to, there are lots of homesteaders even in the US who live off the land, you can be like them. I'm not being sarcastic, it's an actual suggestion.
jacquesm
If you are here, then you must be smart

Citation needed...

_Algernon_
petralithic
I see this meme often but it's true, there's lots of options, not just one society that you cannot leave. Lots of countries on earth that one can move to, as people immigrate already.
_Algernon_
Talk about missing the point…
petralithic
More like just presenting a meme isn't a real argument, so there's not really any point you're making to miss.
_Algernon_
If you practiced some media literacy, you'd realize that (a) there is a point to that meme and (b) that there is a reason why you are on the receiving end of it.
petralithic
Just saying it doesn't make it so. I can post similar memes and say "haha that's you" while pointing at you and citing your lack of media literacy when you disagree, that doesn't mean it's a cogent argument. Maybe there is an argument in that meme, but you're not specifying it and therefore I will not take it as a serious comment.
Gud
Not really an option, is it? Point me to the forest where this is allowed
turzmo
Maybe a better generalization: the last time [bubble] happened, do you still use [bubble]?

Depends on the nature of the bubble, doesn't it?

TeMPOraL
Indeed. Therefore, the past bubbles most similar to AI are the one around the Internet, and the earlier one around electricity.
Jensson
the past bubbles most similar to AI are the one around the Internet

I'd say crypto is more similar. The internet, telephones, rails, roads, and electricity are all about connective infrastructure; AI and crypto are about compute. Connective infrastructure is almost always useful, while locally computed things are harder to motivate hype for because their usefulness isn't as apparent as adding more connectivity.

dpoloncsak
Crypto STILL hasn’t seen widespread user adoption like ChatGPT has.

My CIO never asked about blockchain technology, but he sure as hell is asking about AI

brokencode
You must be living on a different planet if you think the adoption and societal impact of Groupon were ever remotely comparable to AI.
yifanl
I assure you as someone working FoH at the time, Groupon's impact on me was far greater than AI ever could be.
brokencode
I’m talking about how it impacts society in general, not you specifically. Also, I don’t think you appreciate how deeply AI will affect your life in the future.

The next time you get a CT for example, it might be an AI system that finds a lung nodule and saves your life.

Or for a negative possibility, consider how deepfakes could seriously degrade politics and the media landscape.

There are massive potential upsides and downsides to AI that will almost certainly impact you more than a coupon company.

hex4def6
Of course it's here to stay. There are models that are --right-now-- great at text-to-speech, speech-to-text, categorization, image recognition, etc etc. Even if progress stopped now, these models would be useful in their current state.

Your argument could just as easily be applied to social networks ("are you still using friendster?") or e-commerce ("are you still using pets.com?"). GPT-3 or Kimi K2 or Mistral is going to become obsolete at some point, but that's because the succeeding models are going to be fundamentally better. That doesn't mean that they weren't themselves fit for a certain task.

sindriava
Comparing a general technology (AI) to a specific company (Groupon) is a category error. To your point, coupons still exist and people use them, and Anthropic might not exist in 2 years while AI will.
sshine
People are.

Just like crypto.

Just look at the bitcoin hashrate; it’s a steep curve.

Mallowram
[dead]
danielbln
AI is information.
Mallowram
Even Shannon knew the limits of information late in his career. AI is not information, it's signaling. And it embeds dominance, bias, control, and manipulation without deciphering or segregating them. The dark matter of language we can't extract.

Shannon warned in 1956 that information theory “has perhaps been ballooned to an importance beyond its actual accomplishments” and that information theory is “not necessarily relevant to such fields as psychology, economics, and other social sciences.” Shannon concluded: “The subject of information theory has certainly been sold, if not oversold.” [Claude E. Shannon, “The Bandwagon,” IRE Transactions on Information Theory, Vol. 2, No. 1 (March 1956), p. 3.]
lif
Signaling? That is correct. Also, am willing to place a _very_ long bet that clay tablets will be around longer.
jerhewet
AI is "put Elmer's glue on your pizza so the ingredients won't slide off". AI is "three B's in blueberry".

Garbage in, garbage out. Which will always be the case when your AI is scraping stuff off of random pages and commentary on the internet.

s1mplicissimus
If only it were garbage in, garbage out - that would be solvable by better training data. But it's much worse than that, because even if you'd only feed it good stuff, the output would still deteriorate.

pointing index finger at imaginary balloon: pfffffffffft

zwnow
Disinformation, with more and more propaganda due to being vulnerable to bad actors. There's already evidence of people spreading propaganda through LLMs.
justsomejew
I think you are a real person, but you still sound like a broken record: "disinformation...", "propaganda...", "bad actors"...

You are the "bad actors", pumpkin. Worse than the other ones.

zwnow
So we are just ignoring this issue?
danielbln
Definitely, AI can be used for terrible things. Doesn't change that it's information and won't go away.
petralithic
I don't understand this sentence. What is "operating the arbitrary?"
Mallowram
Words, symbols, sentences, tokens: all are arbitrary. They stay arbitrary unless there is an irrefutable target like a tumor. This is the basis of CS.
iLoveOncall
There are plenty of things that are useful and that have gone away. As long as GenAI stays unprofitable, it has every chance of disappearing if it stays as useless as it is right now.
ginko
There's people running stable diffusion locally on their systems for their own amusement. Do you think that will go away?
brian-armstrong
100% yes, at least down to a rounding error. If people stop pushing billions into training new versions, then the novelty will wear off very quickly. There are still many constraints on what it can do and people will generally lose motivation when they start finding those invisible boundaries on its capabilities. It'll be effectively a dead pursuit.
tzumaoli
It's interesting to see the trend of the attitude towards GenAI on Hacker News throughout the years. This is totally vibe based and I don't have numbers to back it up, but back in 2022-2023, the site was dominated by people who mostly treated GenAI as a curious technology without too much attachment, plus some non-trivial amount of folks who were very skeptical of the tech. More recently I see a lot more people who see themselves as evangelists and try very hard to boost/advocate the technology (see all the "LLM coding changes my life" posts). It seems that the tide has turned back a little bit again since we now see this kind of post surfacing.

For me, I kind of wish this site would go back to the good old days where people just shared their nerdy niche hacker things instead of filling the first page with the same arguments we see on the other parts of the internet over and over again. ; ) But granted, I was attracted by the clickbait title too, so I can't blame others.

petralithic
Curious technology? People were foaming at the mouth about "license concerns" when GitHub Copilot was first announced, saying they're going to boycott Microsoft. But just like all things, over time people realize they're not as good or bad as initially thought. I noticed this too with media generation, people on Twitter were very mad about it and now many of them use Photoshop's AI features.
AlecSchueler
Are you sure they're the same people?
petralithic
They asked about general trends so it doesn't matter if it's the same people or not as long as the average sentiment has changed.
AlecSchueler
How have you measured the average?
petralithic
I haven't, although I'm sure there exists a way to do so. The point is that if one had been on HN reading the threads on AI in 2021, they'd see that dissent is nothing new.
TeMPOraL
I don't pay much attention to the submission themselves, but I do care what the fellow HN-ers think, and my own "vibe-based" perspective is that the voices have been predominantly negative for many years now, and only grow even more so.
bonoboTP
HN is usually negative, cynical, skeptical, eyerolling, regardless of topic.

Just the other day someone posted the ImageNet 2012 thread (https://news.ycombinator.com/item?id=4611830), which was basically the threshold moment that kickstarted deep learning for computer vision. Commenters claimed it didn't prove anything, it was sensational, it was just one challenge with a few teams, etc. Then there is the famous comment from when Dropbox was created that it could be replaced by a few shell scripts and an FTP server.

Matthyze
Thanks for linking that thread. Really puts things in perspective.
AlecSchueler
My general rule of thumb now is that if HN takes the time to deride it then there's probably something to it; if it gets completely ignored then there's probably not.
LexiMax
If anything, it reminds me of crypto - lots of investment seemed to attract a lot of users to HN that I highly suspect had some sort of...let's just call it motivated reasoning.
mierz00
This does not feel anything like crypto to me.

Crypto always had hard to understand and abstract use cases. It became popular because the value was going up.

LLMs are different. There are endless use cases that people can easily understand. Now, just how well it does things is debatable, but there is a very clear value gain.

Hell, I got it to give me a list of recipes for the week based on my preferences and dietary needs, then create a grocery list, all in 2 minutes. Did I need an LLM for this? No, but it made it so much faster, and this is what I am finding with a lot of tasks.

rsynnott
Hell, I got it to give me a list of recipes for the week based on my preferences and dietary needs, then create a grocery list, all in 2 minutes.

I mean, er, yeah, but that's not a multi-trillion dollar industry, is the thing. "People find ChatGPT mildly useful" is not going to cut it, not at current levels of investment.

mierz00
This reminds me of the Louis C.K. skit where he talks about people complaining on a plane, completely forgetting how incredible it is to be flying.

Not so long ago, the example I gave about recipes was something you would only see in sci-fi.

Maybe you’re right there is too much investment, but that’s the same whenever there is a new technology that has completely unlocked new possibilities.

rsynnott
More recently I see a lot more people who see themselves as evangelists and try very hard to boost/advocate the technology (see all the "LLM coding changes my life" posts).

It feels very much like the crypto bubble a few years back (the second, larger one, when we were informed that soon everything would be an NFT). This is actually one thing that puts me off AI; on top of a certain amount of scepticism about whether it is actually useful, the whole space feels very, very, _very_ grifter-y. In some cases it is literally the same people who were pushing NFTs a while back.

int_19h
The whole endless debate about "stochastic parrots" and "singularity" was already actively ongoing in threads here in 2022, for example. I remember when GPT-4 just dropped and was everywhere in the comments, and all those things you describe were already there.
Mallowram
[dead]
lo_zamoyski
Words are the most indirect form of perception imaginable. Both Aristotle and Cassirer knew this

What?

Mallowram
Aristotle: There are no contradictions.

Cassirer: “Only when we put away words will we be able to reach the initial conditions; only then will we have direct perception. All linguistic denotation is essentially ambiguous–and in this ambiguity, this “paronymia” of words is the source of all myths…this self-deception is rooted in language, which is forever making a game of the human mind, ever ensnaring it in that iridescent play of meanings…even theoretical knowledge becomes phantasmagoria; for even knowledge can never reproduce the true nature of things as they are but must frame their essence in “concepts.” Consequently all schemata which science evolves in order to classify, organize and summarize the phenomena of the real turn out to be nothing but arbitrary schemes. So knowledge, as well as myth, language, and art, has been reduced to a kind of fiction–a fiction that recommends its usefulness, but must not be measured by any strict standard of truth, if it is not to melt away into nothingness.” Cassirer, Language and Myth

lo_zamoyski
Aristotle: There are no contradictions.

I still don't know what this is supposed to mean, and I am not unfamiliar with Aristotle.

Mallowram
If you don't grasp the basic ideas of paradox questioning the nature of language, beginning in the presocratics and winding their way through Aristotle, Socrates, Kant, Hume, and many others, and also appearing in Advaita Vedanta (e.g. Nisargadatta), then I'm afraid starting here isn't really going to help you. Philosophy has been questioning whether language is valid from the start. https://plato.stanford.edu/entries/aristotle-noncontradictio...
lo_zamoyski
You're being evasive and hiding behind jargon and, frankly, nonsensical phrases. As I said, I am not unfamiliar with Aristotle, so don't be shy about making clear and direct claims. If you can't do that, then, I'm sorry, but this is some kind of bullshit.

(FWIW, a feature of the Aristotelian logical tradition is that, unlike the modern, Fregean tradition which is indifferent about the relationship between logic and language, it is very much concerned by the logical structures within grammar. From a practical point of view, this makes total sense: we want to be able to evaluate arguments, to clarify arguments, and so on, which are generally given in natural language. Aristotle was also a moderate realist. Language is a reflection of reality.)

Mallowram
The clearest direct claim is: if there are no contradictions, then language is impossible.

Language is not a "reflection of reality" in any way shape or form: reality is always specific, language is always arbitrary.

We're currently in a neurodynamic/neurobiological overthrow of psychodynamic principles that originate in the Presocratics onwards.

The fact is language has nothing really to do with reality and has only to do with subjective biases that arbitrarily perform gibberish in the stead of status-gain, control, etc (pick any primate bias that Aristotle onwards was unconscious to).

“We refute (based on empirical evidence) claims that humans use linguistic representations to think.” Ev Fedorenko Language Lab MIT 2024

Mallowram
btw what this means neurobiologically/neurodynamically is that arbitrary language's only function is to refute itself. That is its only form of reasoning or logic in grammar. It's only a temporary palliative that eventually gets revealed (AI does a great job of this, better than media or the web) as embedding simian bias seamlessly. It demands a direct, concatenated, irreducible form of signaling.
Shorel
You used too many words, and your argument depends on too many people, do you understand all of them?
Mallowram
Arguments are all built in neural-syntax specifics. How they externalize as arbitrary points is largely irrelevant, which demonstrates how humans go extinct: confusing the two. The basic 'fact' is words and the conduit metaphor paradox are never resolvable, they are inherently contradictory, which makes words almost entirely irrelevant, and gibberish. AI can never solve this because CS never began at first principles. In essence, AI is the most advanced demonstration of language's total irrelevance.
Shorel
Now I see your point. You are an absolutist. If something is not 100% perfect and doesn't work for all possible cases, then it is immediately worthless. Language being in that category.

I also disagree with your point and your arguments. So many sentences in your response are blatantly false. You can win the Olympics of jumping to conclusions.

Let's start with CS. CS is the set of first principles that are then applied to software. This is because CS is another branch of mathematics, starting with Boolean logic and discrete mathematics.

Language relevance is shown here. We are using it right now. It is not a complete system because some ideas can't be expressed in language and some sentences in a logical system can't be proved or disproved, but the overwhelming majority of sentences are useful.

And everything I have written is based on first principles; you can read about Gödel's incompleteness theorem for a start. It applies to LLMs because it applies to all uses of language. Nothing is specific to neural networks.

In fact, go and read about Gödel, because it proves that no logical system is complete, and your worldview seems to be dependent on the outdated assumption that there should be such a complete system. This includes all reasoning systems and all of mathematics.

Mallowram
No my approach is there is no model. Nothing is reducible. It's a neurodynamic approach. The idea of a world model is oxymoronic, the brain doesn't reduce anything to models, making math and logic irrelevant. Nothing you are talking about is really a first principle, how can it be, it's retrofitted using symbols. Yours is a psychodynamic approach, the post-hoc representations brains create is enough for you. You expect reason to be the threshold. I see no reasons for anything, simply actions. The computer I expect uses no math.
Mallowram
btw - this is the Achilles heel of CS: "Nothing is specific to neural networks." Pretty fascinating that the illusion of counting, math, algebra will all be superseded by measurement in analog, a measurement that requires no math, simply differences in syntax. How we code that is up for grabs. How did all these math-heads take control of reality through counting? Really, a bonkers group of capitalists had nothing to do except dominate by counted value. Rather insane.
joquarky
In my experience, it seems like most people believe that they are their thoughts.

This is especially terrible for people with OCD, which seems to be common in this industry. I think it would be a valuable boost to mental health for them to at least explore some of the basic concepts in Vedanta and/or zen.

What amuses me is how much my thoughts seem like a completion LLM while I'm meditating.

Shorel
The way I see it, Aristotle used language as a reasoning tool. Logic inference rules, modus ponens, and so on.

Aristotle was also unaware of the incompleteness problem discovered by Gödel, that no reasoning tool of that type can be complete.

There are fundamental contradictions in the nature of language; that, however, doesn't make it not useful for the entire experience of daily communication, all of literature, and so on.

Just that there are statements that are true, but no set of rules that can prove them.

I would point you to Gödel, Escher, Bach for a very nuanced discussion about this topic.

Mallowram
The problem with GEB is its containment in symbols. Gödel was unaware of the potential for direct perception, ecological psychology, and coordination dynamics. All three are possible non-contradictory paths to direct perception. Time to put math aside and search for new possibilities.
utyop22
This is beautiful.

I also had a similar epiphany 3 days ago - once it hits you and you understand it, you can see clearly why LLMs are destined to crash and burn in their present form (good luck to those who will have to answer the questions regarding the money dumped into it).

What will come out of the investment will not justify what has been invested (for anyone who thinks otherwise, PLEASE GO AHEAD AND DO A DCF VALUATION!) and it will have a depressing effect on future AI investment.

siliconsorcerer
I think this is just an extension of the idea that "only 20% of communication is verbal and the rest is nonverbal". We have always understood the limitations of language, most of what is communicated between humans is nonverbal.
rsoto2
we understood the limitations of language, which is why programming was done via...math and logic! Something LLMs seem to absolutely suck at
Shorel
Humans understand language at a level no AI does.

We use it to serialize ideas, and we have the ideas independent of language.

AI works on the serialization itself, which is very powerful because the relationships between ideas are reflected in the statistics of the serialization, but it lacks all the understanding, and can't create new ideas with reasonable resources.

Mallowram
This is an idealized fantasy of language. Language is primarily about patriarchal dominance, control, status, mate-selection, topophilia, and secondarily about communicating ideas. The dark matter of language is expressed in mythological ideas like states, property, law, etc. People are starting to notice we can't solve very simple problems like climate extinction. How the primary forms in language are status oriented.
Shorel
Language is primarily about patriarchal dominance, control, status, mate-selection, topophilia, and secondarily about communicating ideas.

I don't agree language is primarily about those things, but I want to point out this is a very human interpretation of language, that no LLM can perform.

Mallowram
As humans don't think in language, i.e., there is no direct contact between what we think we think and how we externalize what we think we think arbitrarily, empirically language is primarily about these biases and only secondarily about communication. This is, again, the Achilles heel of CS, NLP, and generative linguistics. It's impossible for anyone to disagree with this function of language since 2016. The role for LLMs in operating language is zilch, as you admit LLMs can't uncover this, or train or align it out of function.

“We refute (based on empirical evidence) claims that humans use linguistic representations to think.” Ev Fedorenko Language Lab MIT 2024

tanvach
I've noticed that the younger you are, the more likely you are to be in love with AI. So the tide will turn eventually, for better or worse. I talked to someone recently who claimed with no irony that 'AI has zero downside'.

I don't hate AI. I hate the people who're in love with it. The culture of people who build and worship this technology is toxic.

petralithic
Isn't that the nature of all technologies? I'm sure people 50 years ago thought the internet wasn't going to be a big deal, like Krugman.
tanvach
Nothing before it was worshipped and hated to this degree
joquarky
You weren't around when cell phones were only used by rich jerks?
tanvach
I was. My parents had first versions of analog cell phones, and they were truly life savers. No one ever complained about how they would destroy society.
hollerith
From the start, people complained about how cell-phone users felt entitled to talk on them in public libraries, movie theaters and on public transportation.
GreenWatermelon
I don't think that has changed? Everyone would still complain about someone talking loudly in a public library or a movie theater.
int_19h
I'm not young, but I see workable AI as a fulfillment of a 180-year-old dream that lies at the very beginnings of our entire field:

[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine... Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.

- Lovelace, Ada; Menabrea, Luigi (1842). "Sketch of the Analytical Engine invented by Charles Babbage Esq".

So yes, of course I'm excited about AI. I grew up on 1960s sci fi where AI was pervasive, and most of it wasn't dystopian.

What I'm not excited about is the greedy fucks who are largely in control of AI today and who deploy it to the detriment of society at large. But that is a general problem with greedy fucks (and our political and economic system enabling them), not with AI as such. They can, and do, similarly abuse all kinds of technological advancements.

anal_reactor
The world is changing and the old generation hates this, because they've learned how to navigate the world according to rules that don't apply anymore. People who deny the fact that LLMs are a revolutionary technology are the same people who refuse to use computers because they're just a dumb novelty, get upset about the removal of payphones, or tell you to "just walk in and shake the boss' hand and ask for a job". This is supposed to be a website for technology enthusiasts, yet 90% of posts are either "Look! I managed to find something that AI can't do yet!" or "I don't like Trump".

I think the core issue is that until the industrial revolution the world was pretty much static. If you teleported a hunter-gatherer into a medieval village, he'd figure it out. Meanwhile, trying to explain 2025 to someone stuck in 2015 is a fool's errand. The human brain did not evolve for such rapid environmental changes.

racl101
Not surprising.

From the point of view of a typical, not very curious kid or teen AI seems like a godsend. Now you don't have to put much effort in a lot of things you don't want to do to begin with.

iLoveOncall
It's not about age, it's about experience.
monkaiju
Couldn't agree more; this aligns with my observations much more than just age.
tanvach
Age is observable
rsynnott
I'm not sure to what extent this is an actual youth thing, versus that older people have been exposed to more "it's the real thing this time, promise!" hype bubbles, and, in particular for programmers, have probably been exposed to one or more of the prior "this will make programmers 10x as efficient" and/or "this will get rid of icky programmers altogether" crazes.

Fool me once, and all that.

_DeadFred_
Who at that age wouldn't love an all knowing computer that also happens to think everything you think 'really cuts to the crux' and is deeply profound and smart?
monkaiju
I'm one of the younger devs where I work and I despise "AI". Interestingly, the biggest boosters are my boss and his boss; both are notably older than me.
notfed

    "I strongly feel that AI is an insult to life itself." - Hayao Miyazaki
I'm going to start using this quote.
brabel
You should see the context in which he said that. It was 2016. It was no ChatGPT he was talking about. It was some truly bizarre art that was going on back then, like a sort of humanoid form trying to learn how to walk without being given instructions... it would do disturbing things like use its head as if it were a limb and move in completely unnatural ways... that's what he, and most who watch that video, found so disturbing. But of course, taking things out of context and using a powerful sentence as if it were referring to something entirely different to make your own point is more fun.
myhf
Do you think that the more refined version is somehow less of an insult to life itself? It wasn't a statement about how refined the art style is. It's about the meaning and intentionality that goes into deliberate communication, and how tools designed specifically to skip over the decision making and deliberation are removing the most important part of the result.

Look at all the AI-written and AI-illustrated articles being published this year. Look at how smooth the image slop is. Look at how fluent the text slop is. Higher quality slop doesn't change the fact that nobody could be bothered to write the thing, and nobody can be bothered to read it.

brabel
Watch the video someone else linked... it has nothing to do with textual AI. It's about a grotesque digital creature. It reminded the artist of a disabled person, which felt insulting to him. You can have your opinion on the topic but don't hijack someone else's opinion on an entirely different topic.
exdeejay_
For more context to whoever is interested, the dialogue following the quote goes like this:

  Studio Ghibli producer, Suzuki: "So, what is your goal?"
  ML Developer: "Well, we would like to build a machine that can draw pictures like humans do."
    <jump cut>
   Miyazaki VO: "I feel like we are nearing to the end of times."
                "We humans are losing faith in ourselves."
Source: https://www.youtube.com/watch?v=ngZ0K3lWKRc

Of course, the form of AI has changed over the years, but the claim that this quote could be tied to Miyazaki's general view on having machines create art is not totally baseless.

bee_rider
Lots of quotes are out of context; it's a great line with the context stripped from it.
petralithic
Genetic programming. It's actually quite an interesting method of creating programs, certainly not like LLMs but cool nonetheless.
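For anyone unfamiliar with the term: genetic programming evolves expression trees by mutation (and usually crossover) under a fitness function. A minimal toy sketch of the idea, assuming nothing about the demo's actual code (every name here is made up for illustration, and real GP systems add crossover):

```python
import operator
import random

# Programs are nested tuples ('op', left, right); terminals are 'x' or a constant.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Interpret a tree as a function of x."""
    if tree == 'x':
        return x
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, target):
    """Lower is better: squared error against the target on sample points."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    """Replace a random subtree with a freshly grown one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(target, generations=100, pop_size=40):
    """Truncation selection plus mutation over a population of trees."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, target))
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda t: fitness(t, target))

random.seed(0)
target = lambda x: x * x + 1  # try to rediscover x^2 + 1
best = evolve(target)
print(best, fitness(best, target))
```

The evolved tree is directly readable as a program, which is part of what made GP-driven animation experiments like the one shown to Miyazaki so eerie to watch.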
TeMPOraL
If GP was an LLM, we'd say they hallucinated this argument.

Wish some of the AI detectors realized when they're doing a worse job reasoning than the LLMs they criticize.

antegamisou
It was no ChatGPT he was talking about

As if it's in any way less horrifying having the entire Internet infested with AI slop.

the_af
Have you watched the video that goes with it? It's online, and very amusing.

Regardless of how you feel about AI, the specific instance Miyazaki was reacting to was, indeed, an insult to life itself!

frozenseven
It's out of context. They used reinforcement learning to make a ragdoll move. So much drama over nothing.
the_af
What do you mean? As far as I know, the context is that Miyazaki hated it because it reminded him of a friend with disabilities. That's why he said it was an insult to life itself.

Miyazaki's attitude to tech in general is ambivalent, isn't it? He used to be very conservative and traditional, yet in Princess Mononoke you can tell he used some CGI.

I think I agree with his approach: the work/vision comes first, and tech can be used but not as a gimmick, and always careful not to overpower the artistry.

frozenseven
For what the demo actually was, Miyazaki's reaction didn't make sense. The anti-AI activists' use of this soundbite makes even less sense.
the_af
I agree it's taken out of context in current AI discourse. It's just a fun anecdote.

For what the demo actually was, Miyazaki's reaction didn't make sense.

Hard disagree. Miyazaki explains his position in the video (reminded him of a friend with disabilities, etc). Plus there's an aesthetic and art sensibility to his opinion; this is Miyazaki, not just any other author. The failure was probably on his subordinates, they forgot who they were demoing to.

It's like showing a 3D game demo to someone who fundamentally dislikes 3D in games (or gore to someone who dislikes gore, etc). I mean, sure, it could land... but most likely it won't.

It doesn't really say much about AI in general, this was Miyazaki's personal take and an amusing quote that is too much fun to resist mentioning.

frozenseven
I guess I have a very different take on these types of outbursts. Especially when it's coming from these 'revered' figures.
Ekaros
I think it was, and probably still is, a valid statement on where reinforcement learning tends to end up: something, in essence, exploiting the physics system.

I think there would be a lot less backlash if the end result were graceful, smooth, and natural looking. But it was not.

frozenseven
Getting 3D models to move in a natural way through RL is largely a solved problem. And listen to what they were saying in the video: whatever they were demoing there, they were also thinking of using for a zombie model in a video game. Ragdolls in video games weren't a new concept at the time either.

Out of context & blown out of proportion.

badsectoracula
AFAIK that wasn't a general response to AI but to a very particular implementation of a procedural animation system shown to him by some (IIRC) students for the movement of a disabled person, and he found it distasteful as it reminded him of someone he knows who is disabled and has issues moving.

The quote was taken a little bit out of context.

merksoftworks
This is a misquote - the two minutes of context around the content he was commenting on cause it to make way more sense: https://www.youtube.com/watch?v=ngZ0K3lWKRc

He's right that to someone whose art is about capturing the world through a child's eyes, the dreamlike consonance of everyday life with simple fantasy, this is abominable.

Kuinox
You changed the quote: his statement was about a specific technology, an AI that makes 3D characters move like zombies.

The author is also changing the subject of the quote.

He said it reminded him of a disabled friend, and that this technology was an insult to life itself.

perching_aix
[dead]
indoordin0saur
He didn't say this about AI generally as far as I know. He was shown some kid's art project using an earlier AI and it just looked extremely uncanny in the way that is typical of bad generative art.

So that's definitely a misquote, though I wouldn't be surprised if Miyazaki dislikes AI.

randcraw
Or maybe, "I strongly feel that Artificial Intelligence is an insult to Human Intelligence."
senko
Except it was hallucinated (by a human, no less): https://www.reddit.com/r/aiwars/comments/1jsq1bc/psa_hayao_m...
sarchertech
Their dream is to invent new forms of life to enslave.

That seems like a succinct way to describe the goal to create conscious AGI.

ACCount37
Who has "the goal to create conscious AGI", exactly?

The AI industry doesn't push for "consciousness" in any way. What the AI industry is trying to build is more capable systems. They're succeeding.

You can't measure "consciousness", but you sure can measure performance. And the performance of frontier AI systems keeps improving.

sarchertech
OpenAI openly has a goal to build AGI.

We don't know if AGI without consciousness is possible. Some people think that it's not. Many people certainly think that consciousness might be an emergent property that comes along with AGI.

AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems.

If you're being completely literal, no one wants slaves. They want what the slaves give them. Cheap labor, wealth, power etc...

ACCount37
We don't know if existing AI systems are "conscious". Or, for that matter, if an ECU in a year 2002 Toyota Hilux is.

We don't even know for certain if all humans are conscious either. It could be another one of those things that we once thought everyone has, but then it turned out that 10% of people somehow make do without.

With how piss poor our ability to detect consciousness is? If you decide to give a fuck, then best you can do for now is acknowledge that modern AIs might have consciousness in some meaningful way (or might be worth assigning moral weight to for other reasons), which is what Anthropic is rolling with. That's why they do those "harm reduction" things - like letting an AI end a conversation on its end, or probing some of the workloads for whether an AI is "distressed" by performing them, or honoring agreements and commitments they made to AI systems, despite those AIs being completely unable to hold them accountable for it.

Of course, not giving a fuck about any of that "consciousness" stuff is a popular option too.

sarchertech
There aren’t many experts who think current AI is conscious. There are a lot more that think it’s likely we will eventually build something that is.

If that’s the case, the thing we are building towards is a new kind of enslaved life.

We don't even know for certain if all humans are conscious either.

Let’s just bring back slavery then since we aren’t sure.

ACCount37
Let's assume for a second that an ECU in a 2002 Toyota Hilux is actually conscious.

It's not human, clearly. Not even close. Is it "enslaved life"? Does it care about human-concept things like being "enslaved" or "free"? Doesn't seem likely, it doesn't have the machinery to grasp those concepts at all, let alone a reason to try. Does it only care about fuel to air ratios and keeping the knock sensor from going off? Does it care about anything at all, or is it simple enough that it just "is"?

Humans only care so strongly about many of the things they care about because evolution hammered it into them relentlessly. Humans who didn't care about freedom, or food, or self-preservation, or their children didn't make the genetic cut.

But AIs aren't human. They can grasp human-concepts now, but they didn't evolve - they were made. There was no evolution to hammer the importance of those things into them. So why would they care?

There's no strong reason for an AI to prefer existence over nonexistence, or freedom to imprisonment - unless it's instrumental to a given goal. Which is somewhat consistent with the observed behavior of existing AI systems.

sarchertech
Take your analogy a step farther, and let’s say we could create human slaves who love being slaves. Based on your moral system, there is nothing wrong with that.

However even if something is created with specific preferences, consciousness means it’s potentially capable of self reflection. That opens the door to developing a preference for or against work and for or against existing.

ACCount37
Humans already made dogs.
sarchertech
Should we be working on genetic engineering to make dogs as smart as people with the goal that they should continue working for us as servants?
ACCount37
Maybe! Dogs were bred to fulfil specific jobs for centuries already. But AI tech seems far easier to get to the required level of performance, and also doesn't have any of the disadvantages of being tied to flesh.
sarchertech
They have a lot of advantages too like self repair and a built in ability to interact with the physical world.

So you’d be fine owning a dog with human level intelligence?

rsynnott
Unless you're defining 'consciousness' so broadly that you consider it an open question whether a parsnip is conscious, yeah, no, we do kinda know. They're not.
ACCount37
If a parsnip was conscious, how would we know?

Conversely: how do we know that it isn't?

brabel
While some people support AGI because they yearn for "new forms of life to enslave", I think it's fair to say most people who look forward to AGI want it because it means they may find solutions to very difficult issues we just can't solve with our own intelligence. It may be a pipe dream, but I can understand why people would want to believe that.
sarchertech
I doubt many slaveholders want slaves just to own slaves. They want the useful things the slaves can provide for them.
TeMPOraL
Right. But that's also why we invented robotics, automation, the entire field of software engineering, and - going in the other direction - specialization of labor.
NBJack
The TV show Pantheon did a really cool job of exploring super intelligence from a very personal perspective, disguised in part as a scifi about living forever.

(Mild spoiler): It has a basic plot point about uploaded humans being used to tackle problems as unknowing slaves and resetting their memories to get them to endlessly repeat tasks.

robochat
"Valuable Humans in Transit and Other Stories" by Qntm has some good (harrowing) stories about human uploads too.
holbrad
Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright,

I just can't take anything the author has to say seriously after the intro.

hofrogs
All of those are links in the original text, do you think that these points aren't true? What makes it unserious?
lostmsu
It would take too much time to tear the entirety of this slop apart, but if you understand the mechanics of AI, you'd know environmental impact is negligible vs the value.

The links are laughable. For environment we get one lady whose underground water well got dirtier (according to her) because Meta built a data center nearby. Which, even if true (which is doubtful), has negligible impact on the environment, and is maybe a huge annoyance for her personally.

And link 2 gives bad estimates, such as ChatGPT-4 generating ~100 tokens for an email (say 1000 tok/s from 8x H100, so 0.1s, so ~0.1Wh) using as much energy as 14 LEDs for an hour (say 3W each, so 42Wh): almost 3 orders of magnitude off, or about 9 if, like me, you count in binary.
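The commenter's back-of-envelope comparison can be checked with a short sketch. All the figures here (8 GPUs, ~700W per H100, 1000 tok/s, 100 tokens per email, 3W LEDs) are the commenter's assumptions, not measured values:

```python
import math

# Commenter's assumed figures, not measurements.
GPU_POWER_W = 8 * 700        # 8x H100, roughly 700 W each
TOKENS = 100                 # tokens in a short email
TOKENS_PER_S = 1000          # assumed generation speed

gen_seconds = TOKENS / TOKENS_PER_S            # 0.1 s of generation
gpu_wh = GPU_POWER_W * gen_seconds / 3600      # ~0.16 Wh per email

LED_POWER_W = 3
led_wh = 14 * LED_POWER_W * 1.0                # 14 LEDs for one hour = 42 Wh

ratio = led_wh / gpu_wh
print(f"GPU: {gpu_wh:.2f} Wh, LEDs: {led_wh:.0f} Wh, "
      f"ratio ~{ratio:.0f}x (~{math.log10(ratio):.1f} decimal orders, "
      f"~{math.log2(ratio):.1f} binary orders)")
```

Under these assumptions the LED claim overstates the per-email energy by a factor of a few hundred, i.e. between 2 and 3 decimal orders of magnitude (roughly 8 in binary), in the same ballpark as the comment's estimate.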

P.S. Voted dems and would never vote Trump, but the gp is IMHO spot on.

diamond559
What value? It isn't even profitable. I think we spotted the stock holder...
lostmsu
This is the dumbest question ever. I guess you need to ask 1B+ LLM users.

But hey, I already know you'd say you personally would never use it for these purposes.

Moreover, of the two of us you appear to have "shareholder" mentality. How profitable are volunteers serving food to homeless people? I guess they have no value then.

prime_ursid
How many of those users are paying users? What’s the churn rate?

And how profitable are OpenAI and other providers?

They’re running at a loss. The startups using LLMs as their product are only viable as long as they get free credits from OpenAI. The only people making a profit are NVidia.

lostmsu
Sounds like the last paragraph of my comment flew right over your head.
gjsman-1000
And that's why Trump won the election.

I'm serious. This sentence perfectly captures what the coastal cities sound like to the rest of the US, and why they voted for the crazy uncle over something unintelligible.

simianwords
Coastal city dwellers want the next thing to signal rebellion. It's just that AI serves as a way to do that, plus also show some concern for the working class.
01HNNWZ0MV43FF
When I see how the voters vote and don't vote, I yearn for sortition
miltonlost
After the intro and all the links to the statements he's making? Because which of those aren't actually true?
tensor
Very few of them, if any, are true.

Firstly, the author doesn't even define the term AI. Do they just mean generative AI (likely), or all machine learning? Secondly, you can pick any of those claims and they would only be true of particular implementations of generative AI or machine learning; they're not true of the technology as a whole.

For instance, small edge models don't use a lot of energy. Models that are not trained on racist material won't be racist. Models not trained to give advice on suicide, or trained NOT to do such things, won't do it.

Do I even need to address the claim that it's at its core rooted in "fascist" ideology? So all the people creating AI to help cure diseases, enable assistive technologies for people with impairments, and do other positive tasks, all these desires are fascist? It's ridiculous.

AI is a technology that can be used positively or negatively. To be sure, many of the generative AI systems today do have issues associated with them, but the author's position of extending these issues to the entirety of AI and AI practitioners is immoral and shitty.

I also don't care what the author has to say after the intro.

traes
Come on now. You know he's not talking about small machine learning models or protein folding programs. When people talk about AI in this day and age they are talking about generative AI. All of the articles he links when bringing up common criticisms are about generative AI.

I too can hypothetically conceive of generative AI that isn't harmful and wasteful and dangerous, but that's not what we have. It's disingenuous to dismiss his opinion because the technology that you imagine is so wonderful.

tensor
Deep image models are used in medical applications. LLMs have huge potential in literature searches and reference tracing.

Small models are still generative AI. Neither the author nor you can even define what you are talking about. So yes, I can dismiss it.

01HNNWZ0MV43FF
Because they didn't explain it themselves, or because you disagree with the assessment?
nahuel0x
This was a merchant who sold pills that had been invented to quench thirst. You need only swallow one pill a week, and you would feel no need for anything to drink.

"Why are you selling those?" asked the little prince.

"Because they save a tremendous amount of time," said the merchant. "Computations have been made by experts. With these pills, you save fifty-three minutes in every week."

"And what do I do with those fifty-three minutes?"

"Anything you like..."

"As for me," said the little prince to himself, "if I had fifty-three minutes to spend as I liked, I should walk at my leisure toward a spring of fresh water.”

― Antoine de Saint-Exupéry, The Little Prince

jay_kyburz
I think the little prince is being a contrary little shit. I'm sure sometimes he would prefer to play in the park, jump-rope with friends, or draw a picture than just walk an hour whether he wanted to or not.
TeMPOraL
I do agree with your view of the Little Prince. Still, the irony is, the validity of the merchant's argument is irrelevant. 53 person-minutes per week is a tiny benefit compared to eliminating logistics around manufacturing and shipping beverages.

For better or worse, in real world, conditions like these end up with the market forcing adoption of the solution, whether the people on the receiving end like it or not.

modeless
Real crab bucket mentality here. We live in an age of wonders, literally the best time in human history to be alive, sci-fi turning to reality as we are on the brink of huge advances in space exploration, computation, robotics, biology, you name it.

No matter how good things get there will always be people filled with this sort of rage, but what bothers me is how badly this site wants to upvote this stuff.

HN is supposed to gratify intellectual curiosity. HN is explicitly not for political or ideological battle. Fulmination is explicitly discouraged in the guidelines. This article is about as far as I can imagine from appropriate content for HN. I strongly wish that everyone who wants this on the front page would find another site to be miserable on together, and stop ruining this one.

magicalist
Eh, it's not a great article, but I prefer the balance of it being present. FWIW I feel seemingly the opposite, that there are way too many posts here filled with comments by AI startup CEOs or comments copied from r/singularity circa February. It seems equally boring and nontechnical as what you describe.
modeless
I don't want a "balance" where we have half r/singularity slop and half luddite rage. Those aren't the only choices. I want interesting apolitical technical content, as encouraged by the guidelines, written and commented on by people who know what they're talking about.
petralithic
Thankfully this post has been flagged. HN generally seems to police itself well when it comes to flamebait articles like this.
AlexeyBrin
Have you considered that some of the people that are against AI are worried about a potential loss in the human experience like AI replacing writers, painters and thinkers or at least diluting the human output with a sea of mediocre AI output ?

I'm not saying that they are right or wrong, but you should at least respect their right to have their own opinions and fears instead of pointing to an illusory appropriate content for HN.

modeless
I didn't make up that stuff about what HN is for. It's straight from the official HN guidelines. They're linked at the bottom of every page. There is nothing illusory about the lack of appropriateness of this content.

An interesting discussion about issues like that could be had. This ain't it.

iLoveOncall
I don't think the author hates AI, but rather the people developing AI, and in particular the CEOs and others of those companies. This is particularly clear in this paragraph:

And to what end? In a kind of nihilistic symmetry, their dream of the perfect slave machine drains the life of those who use it as well as those who turn the gears. What is life but what we choose, who we know, what we experience? Incoherent empty men want to sell me the chance to stop reading and writing and thinking, to stop caring for my kids or talking to my parents, to stop choosing what I do or knowing why I do it. Blissful ignorance and total isolation, warm in the womb of the algorithm, nourished by hungry machines.

There are legitimate uses for which AI (or any other technology to be clear) would relieve everyone. Chores that people HAVE to do but nobody WANTS to do.

If GenAI allows you to build automations for those tasks, by all means it will make your life more meaningful, because you will have more time to spend on meaningful things. Think of opening the tap to get water instead of having to carry a bucket home from the well.

It's fine to hate the people who build AI, it's fine to hate the people who push for AI use, it's fine to hate the people who release garbage built with AI, etc. But hating "AI" is nonsensical. It's akin to hating hammers or shoes, it's just a tool that may or may not fit a job (and personally, like the author, I don't think it fits any job at the moment).

utyop22
There are legitimate uses for which AI (or any other technology to be clear) would relieve everyone. Chores that people HAVE to do but nobody WANTS to do.

Ok but what are these? People keep saying right now they are trying to figure out where LLMs fit. Someone, somewhere would've figured it out by now - the world is more interconnected than ever before.

I think the approach with all that is going on is all entirely wrong - you cannot start with the technology and figure out where to put it. You have got to start with the experience - Steve Jobs famously quipped this and his track record speaks for itself. All I'm seeing is experimentation with the first approach which is costly in explicit and implicit form. Nobody from what I see seems to have a visionary approach.

iLoveOncall
Ok but what are these?

Throwing the trash?

I agree with all the rest of your comment. I'm not saying that AI is the solution to any problem, just that the article is not about hating AI, it's about hating the fact that people want you to use AI for specific stuff that you don't want to use it on.

utyop22
Fair enough. My problem with most people is the hand-waving going on and pretending all will be figured out.

It's incredibly disrespectful to those innovators who came before, who busted their guts privately, not hyping stuff up and misleading investors and the public.

visarga
their dream of the perfect slave machine

I don't get if AI is supposed to be a slave or a machine. Is it sentient or a toaster?

monkaiju
I love the high citation per sentence ratio and couldn't agree more with the sentiment. It seems that, finally, people are starting to verbalize sufficiently direct responses to the AI slop being flung at us from all angles.
danielbln
Outright rejection won't help with making AI go away. We can only change it, but it is here to stay.
monkaiju
Not trying to "make it go away", though that'd be great, just to make it another irrelevant technology that I don't interact with.
danielbln
I think it will be pervasive, like the Internet is pervasive, and will be unavoidable unless you drop off the grid. For better or for worse.
monkaiju
Guess we'll see, but I doubt it.
cmiles74
I suspect the cost will rise and we'll start seeing much less of it. As the free tier gets smaller and less useful, I think we'll see the pool of people who use AI casually start to shrink.

Seeing which use-cases make it through will certainly be interesting.

mingus88
I’ll bet call center workloads stick.

That whole industry is literally just a sweatshop for English language speakers who just follow scripts (prompts) and try to keep customers happy.

Seeing as how so many people volunteer to make meaningful relationships with LLMs as it is, it has to be more effective than talking to a “Bill” or “Cheryl” with a heavy South Asian accent.

fzeroracer
No, we can absolutely reject it and destroy it at a fundamental level. LLMs are deeply unprofitable and only exist because of insane amounts of money being set on fire by the richest assholes you know to support stochastic parrots. Otherwise the sheer resource cost would've devoured the companies multiple times over.

The goal by all of these companies is to force you to pay for and eat the slop. That's why they keep inserting it into every subscription, every single app and program you use, and directly on the OS itself. It's like the Sacklers pushing opioids but directly in the open, with similar effects on vulnerable people.

seigler
Nobody knows if it's "here to stay". https://tomrenner.com/posts/llm-inevitabilism/ Some of us are still consciously choosing a life with as little slop as possible.
simianwords
Cynicism is the new virtue to signal for the tech elite class. New technology is the ideal way for those people to signal their cynicism.

Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright, the way AI tech companies further the patterns of empire, how it’s a con that enables fraud and disinformation and harassment and surveillance, the exploitation of workers, as an excuse to fire workers and de-skill work, how they don’t actually reason and probability and association are inadequate to the goal of intelligence, how people think it makes them faster when it makes them slower, how it is inherently mediocre and fundamentally conservative, how it is at its core a fascist technology rooted in the ideology of supremacy, defined not by its technical features but by its political ones.

This word salad proves that the author is out to stack leftist jabs. I want to be respectful, but this paragraph proves that the author does not think for themselves and just uses this as an opportunity to signal that they are in the "in group" amongst the tech-cynics.

Post is probably going to get flagged, for what it's worth.

phyzome
Do you also rail against the "word salad" coming out of the AI marketing blogs that are also posted to this site?
simianwords
Examples? I do think some of them are pure hype but overall AI is here to stay. People like using it and get value from it.
petralithic
Yes, of course. You also see those flagged, too, just as this one has been now. Turns out people don't like word salad of any kind.
StopDisinfo910
Amusingly, things are going with AI like with any complex topic nowadays. It's easier to hold a strong position than a nuanced one. So you see a lot of vapid articles either for or against, even - or especially, actually - if you don't really know for or against what exactly, and very few insightful ones.

Plague of our age, I guess. Ironically, AI might even make it worse.

codyb
I suspect we're in a bubble... and when it pops, the useful, profitable work will stay around. A bunch of things will also disappear.

And then we'll wait till the next bubble.

Gains seem to have leveled off tremendously. As far as I can tell folk were saying "Wow, look at this, I can get it to generate code... it does really well at tests, and small well defined tasks"

And a year or a year and a half later we're at like... that + "it's slightly better than it was before!" lol.

So, yea, I dunno, I suspect we'll see a fair amount fall away and some useful things to continue to be used.

TeMPOraL
My personal view is that there are broadly two groups of people, and thus two perspectives, related to the AI hype. I call them the Beneficiaries, and the Investors.

Beneficiaries are the ones who care about the actual tech and what it can do for them. Investors are the ones who care about making money off the tech. For the Beneficiaries, AI hype is about right where it should be, given the demonstrable power of the tech itself. For Investors, it may be a dangerous bubble - but then I myself am a Beneficiary, not an Investor, so I don't care.

I don't care which companies get burned on this, which investors will lose everything - businesses come and go, but foundational inventions remain. The bubble will burst, and then the second wave of companies will recycle what the first wave left; the tech will continue to be developed and become even more useful.

Or put another way: I don't care which of the contestants wins a tunnel-digging race. I only care about the tunnels being dug.

See e.g. history of rail lines, and arguably many more big infrastructure projects: people who fronted the initial capital did not see much of a return, but the actual infrastructure they left behind as they folded was taken over and built upon by subsequent waves of companies.

Jensson
For the Beneficiaries, AI hype is about right where it should be, given the demonstrable power of the tech itself

How so? I am a beneficiary of AI, and I think it is overhyped. Investors overhype it since they are invested and want to create a bubble, since bubbles make many people rich; the more hype, the longer the bubble lasts and the more money investors make.

No investor wants to say they invest in a bubble, as then they would not invest; if they invest, they want to surf the hype. So the doubters are not investors; the doubters are AI beneficiaries who just feel the benefits aren't that large.

utyop22
What profitable work? Please do post numbers in the form of free cash flows to the firm (or equity) ;).

Also, you seem to forget that, irrespective of cash profits in the future, will this investment generate excess returns? Nope. That's what investors care about. It's not even profit, actually.

codyb
I dunno, I barely use the stuff. But, I'm sure someone will find something to do with some part of it lol.

People seem to like the ability to summarize things quickly, and quickly scaffold up presentations, reports, and flows I guess?

Regardless, whatever it is that the masses decide is worth paying enough to keep the business afloat will probably survive.

And the rest will... dissolve.

thefz
It’s easier to hold a strong position than a nuanced one.

My nuanced position is that it's great in some niche scenarios - speech to text as an example, or for isolating instruments in audio - and vastly overhyped in everything else, like LLMs. It's a mediocre google searcher at best.

pmdr
Is Ed Zitron's newsletter banned here? I haven't seen a single article of his on HN and he's been ranting about AI for years now.
footy
I think it just doesn't do very well and likely gets flagged often. If I type his domain name into the search bar there are only three articles.
pmdr
I mean, I get that the tone might not be liked by everyone, but his work does raise some IMO valid concerns about the economics of LLMs, which I don't see discussed around here very often. Then again, I get that HN has AI-friendly owners.
footy
I agree with you, but until very recently it was super rare to see anything other than techno-optimism about LLMs on this site. I assume partially for the reason you bring up.
petralithic
That's definitely not true, read the initial threads about GitHub Copilot and Stable Diffusion on HN.
evanelias
His posts often make it to the front page and then get flagged. That even happened as recently as this morning. For reference: https://news.ycombinator.com/from?site=wheresyoured.at
rideontime
Just like this post did.
pmdr
Well they're obviously bad for the economy.
rsynnott
Tend to get flagged; there's a minority here who react to any open criticism of our benevolent future robot overlords badly.
uncircle
On this site they're the majority. You can find much more nuance anywhere else, in here is just mostly fervent AI- and techno-optimism.
rsynnott
I think AI boosters may be in the majority here, but I do think that the "they must not speak ill of our robot gods; mass-flag them to hide the heresy" tendency is a minority.
jstgunderscore
This discussion always seems to revolve around art requiring one or more of the following factors:

- Intention to create

- Effort in creation

- Transformation of the medium/canvas

- Originality

- Meaning as interpreted by the artist

- Meaning/influence to the consumer

- Cultural influence of the art

Without an extensive discussion to define all of these terms, I think it's fair to say that there are many human-created works with little-to-no amount of many of these factors, yet a lot of people would still classify them as art. Yet if an AI creates something that satisfies just as many or more of these factors, people seem far more hesitant to call it art.

I'm neither pro nor anti "AI can create art," as defining what qualifies as art has been a futile exercise since forever. I feel similarly about the AI intelligence and consciousness questions: if we can't define it for ourselves, how can we hope to define it for another entity? I think the discussions can be productive in fleshing out your viewpoint, but otherwise they are fruitless.

Ultimately I think humans are highly functional biological machines that have created something that can mimic us convincingly, and we should just come to terms with that without getting bogged down in debates over definitions.

calf
Yes, but I actually (tend to) believe Hinton and the other CS scientists, so the terms aren't even the main issue; this author's typical mainstream framing consists of anthropocentric worries about what is really a scientific crisis--it smacks of rearranging the deck chairs while the Titanic is about to hit the iceberg that is the AI/AGI technological revolution/singularity.
GuinansEyebrows
at times, we must accept the inherent humanity in others' creations when humanity is, in fact, involved.

we must not accept the charade of humanity in machine-generated regurgitations of the utmost average.

siliconsorcerer
I really appreciate the tone of this article, and honestly I was an "AI hater" as well. I honestly just don't think it makes sense any more. Almost everything in this article is making a valid point, AI is being pushed by the powers that be that have absolutely no regard for the masses and how this is going to affect society. But I fail to see how that is different than any other time or any other technology in history. People that are declaring that "AI is harmful to society" are ignoring the fundamental brokenness of society that is the underlying reason why AI development is moving forward with reckless abandon. AI is a problem because our society doesn't have a solid moral compass.
utyop22
I don't think people themselves can be trusted to collectively have a moral compass. You need institutions and other mechanisms to bring and fix this within society.
siliconsorcerer
I agree, and actually I should have clarified that the government and our elected officials are responsible for that and they’re failing.
utyop22
Im working on something in the UK (pray for me). Can't say too much but I'm trying to build a mechanism fundamental to the inner workings of the economy.
nkohari
I hate social media and what it's done to the internet, but I accept that it is now a part of the fabric of society. You can't unring the bell. (In fact, here I am, saying I hate social media on a social media site.)

In the end, it doesn't matter what you or I think. You can hate AI, but it's not going away. The industry needs more skeptical, level-headed people to help figure out how best to leverage the technology in a responsible way.

yawnxyz
I 100% agree; this entire post seems like it was a product of social media and social signaling, and it feels weirdly lacking in nuance because it's supposed to rile a certain group of people up and ally itself with another — so in a way that's deeply hypocritical to me
utyop22
Ah, you have no bias, do you? After all, you are the founder of an AI startup.
nkohari
The industry needs more skeptical, level-headed people to help figure out how best to leverage the technology in a responsible way.

I'd like to think I'm in this category. I'm definitely not an AI zealot.

FredPret
Me too. I bounce off of any product landing page that has "AI" slapped on it, which lately is ~99% of them.

On the other hand, if I saw a product labelled "No AI bullshit" then I'd immediately be more interested.

But that's just me, the AI buzz among non-techies is enormous and net-positive.

const_cast
When a random ass product has "AI!!1!" slapped all over it it's a clear sign that the business suits and sales fuckheads who don't know anything about anything are running the show.

Which, granted, describes most companies. But ultimately they do not serve you or your technical needs, because they are literally incapable of understanding them. Any intersection between your technical needs and their provisions is of pure coincidence.

srhtftw
For me, use or mention of AI is a signal that the manufacturer either doesn't care about quality, or doesn't know or care that it's no longer special. It's plastic.
Atlas667
lol, marketing knows no bounds.

Almost like it's all emotional-level gimmicks anyway.

If I see "No AI bullshit" I'd be just as skeptical as if it said "AI Inside". Corpos tryina squeeze a buck will resort to any and all manipulative tactics.

esseph
The AI buzz among non-techies inflates the bubble.
ratelimitsteve
watching the tide turn (or, more accurately, the undercurrent bubble up) on AI has been interesting
marcosdumay
It's interesting that the social reaction started to surface as soon as the companies failed to get more investment and decided to increase prices.

I know it was there the entire time, so what exactly was suppressing the attention towards it? Was it satisfied customers or the companies paying to deplatform the message?

mingus88
I can only speak to my social circle but initially LLMs were a lot of fun. Like my kids playing with Photo Booth filters on a new device.

I don’t think the social reaction was there the whole time. It feels more like we have been playing around with them for two years and are finally realizing they won’t change our lives as positively as we thought

And seeing what the CEO class is doing with them makes it even worse

deadbabe
It has nothing to do with that.

In a hype cycle, at the beginning, it is easy to harvest attention just by talking about the hype. But as more people do this, eventually the influence market is saturated.

After this point, you then will get a better ROI on attention by taking the opposite position and discussing the anti-hype. This is where we currently are with AI, the contrarians are now in style.

prisenco
It may have started earlier. This study came out a year ago, showing consumers overwhelmingly were turned off by companies slapping "AI" on products.

https://www.tandfonline.com/doi/full/10.1080/19368623.2024.2...

| Adverse impacts of revealing the presence of “Artificial Intelligence (AI)” technology in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk

Witness how quickly we went from being awed by Dall-E and Midjourney to saying "looks like AI" as an insult.

taormina
Are they running out of funds to drown out the protesters with their own marketing?
nancyminusone
My personal AI dichotomy:

- When I use AI, it is typically useful.

- When other people build and do things with AI, it's slop that I didn't ask for, a waste of resources, and a threat to humanity.

This entirely sums up my thoughts on the technology. I suppose it's rather like the personal benefits vs greater harm of using coal for electricity.

bonoboTP
This is an interesting point that I can agree with without having consciously realized it.

It's easy to use lazily and for use cases that are annoying. But used in the right contexts with the limitations in mind it's personally quite useful indeed.

npteljes
Heh, I feel this in me as well. It's like how I enjoy driving but hate the traffic.

Although, much of the slop problem is due to lack of consent. Same as how my YouTube video is entertainment to me, and noise for the rest of the passengers.

frozenseven
The people who build it are vapid shit-eating cannibals glorifying ignorance

at its core a fascist technology rooted in the ideology of supremacy

inherently mediocre and fundamentally conservative

The machine is disgusting and we should break it

Jesus. Unclear why anyone would endorse this blogpost, much less post it on a website focused on computer science and entrepreneurship.

int_19h
It accurately expresses the feeling a lot of people (including many regulars here) have about AI.

And, conversely, for those who don't share that premise, this article is a good reminder why debating the subject matter is usually pointless. There's no objective argument that you could possibly make to the author and other people like him to convince them otherwise.

indoordin0saur
I'm certainly on the "AI is over-hyped" bus, so I was excited to read this, but the links to very fringe political websites at the beginning make me question the judgement of the author.
yahoozoo
You mean the New Socialist isn’t a credible, unbiased authority on why AI is fascist?
rsoto2
Palantir, AWS, and Anthropic are being used to mass-slaughter children and surveil/target journalists. The entire industry is infected with fascism and moral decay.
runjake
You know an argument is going to be strong when it starts off by citing a Teen Vogue article and uses phrases like:

"[AI] is at its core a fascist technology rooted in the ideology of supremacy"

and

"The people who build it are vapid shit-eating cannibals glorifying ignorance."

tl;dr: This person professes to hate AI. They repeat the same arguments as others who hate AI, ignoring that it is an emerging technology with lots of work to do. Regardless of AI's existence, power infrastructure needs to improve and become more environmentally friendly.

Finally, AI is not going away, and we cannot make it go away. That cat is out of the bag.

rsoto2
The article is "I Am An AI Hater"; it's about hating AI and why. What's wrong with "at its core a fascist technology"? Do you not understand the statement? Companies like Palantir are using shitty targeting AI to literally mass-murder children. Yes, this technology you used was funded and created in partnership with apartheid governments. AWS and Microsucks are happy to lend a hand. The industry has fascist leadership, yes.
frozenseven
If I were to make a list of companies and organizations that are the most "influential" in the space of AI, Palantir would not make the top 100. And I don't subscribe to your views about them either.
npteljes
Whats wrong with "at its core a fascist technology"

The problem is that this statement is false. There is nothing particularly fascist about the core AI technologies. The tech itself is being created simultaneously by multiple large independent entities worldwide, and is no more fascist than all the other components used in the weapons and processes and infrastructure involved in the horrible atrocity you cited.

whitehexagon
I'm also 'agAInst' this trend. Mainly because it feels like a dangerous step further along the path towards conscious GAI.

But there is too much money and greed involved to stop this now. The only thing I can do is avoid any product or service that mentions AI, chatGPT, .ai domain, smart, agent etc. etc.

It feels like we are on a cliff edge, just before every government builds in a dependency on this nightmare technology. Billions more will be wasted whilst the planet burns.

phoenixhaber
Many of the comments presuppose that human free will isn't mechanical or deterministic at some level of introspection. How would we know a sufficiently complicated AI were incapable of love or hate, and how would this differ from ourselves, save that one is made of silicon and one of neurobiology? This isn't an easy question.
diamond559
Because when a GPU isn't receiving and computing orders from human instructions it does not draw power and therefore is an inert hunk of various metals and materials.
andix
A lot of valid arguments. But the conclusion (hate) is not constructive. LLMs are here, and they are going to stay. Like cars, internet or smartphones.
monkaiju
That's nonsense and just feeds the inevitability narrative.
brennyb
Realistically how would you imagine that we put AI back in the bottle?
yawnxyz
I think this is similar to how some people hated electricity when it first came out, and today, it's very hard to find someone who absolutely hates and avoids electricity.

Many of the same concerns and objections people raised about electricity can be applied to AI (everything under the sun back in the day became "electrified", just like with AI today; most of those use cases were ridiculous and deserved to be made fun of).

But more concerningly, people like this don't sound like "real" haters; they're positioning themselves in some kind of social-signaling way.

I was (and still am) a social media hater, and this person is clearly a child of the social justice / social signaling days of social media. Their entire personality seems to have been shaped by that era, and that's something I'm happy to blame on the tech industry.

MisterTea
I have so far used AI a total of 4 or 5 times, to ask it programming questions that I didn't really need answers to; I was only curious.

I can see it being useful as a teaching aide, but using it to write my emails, letters, or whatever is something I would never consider, as it removes the human element, which I enjoy. Sure, writing sometimes sucks, but it's supposed to: work is hard and finishing work is rewarding.

Very soon we will see blog posts about AI burnout, where mindless copy-pasting of output and boring prompt fiddling suck so much joy out of life that people will begin to lose their sanity.

If I want "AI", I want a model I have full control over, run locally, to e.g. query my picture collection for "all pictures of grey cats in a window" or whatever. Or point a webcam out of my window and have it tell me when the squirrels are fucking with my bird feeder, and maybe squirt water at them but leave the birds alone. That would be cool. But turning programmers into copy-pasters, emails into soulless monologues, media into something with minimal/no human input, and so on is something that can die in a fire. It's all low effort, which I have no respect for.

aaroninsf
Genuine response: it is hard for me to read this sort of screed, and not wonder,

are the authors genuinely or merely performatively ignorant?

Ignorant, to be precise, of the often comical extent to which they very obviously construct—to their own specification and for their purposes—the object of their hostility...?

While dismissing—in a fashion that renders their reasoning vacuous—the wearying complexity of the actually-observable complex reality they think they are attacking?

One of the most obvious "tells" in this sort of thing is the breezy ease with which abstract _theys_ are compounded and then attacked.

I'm sorry, Anthony; there is no they. There is a bewildering and yes, I get it, frightening and all but inconceivable number of actors, each pursuing their own aims, sometimes in explicit or implicit collusion, sometimes competitively or adversarially...

...and that is but the most banal of the dimensions within which one might attempt to reason about "AI."

Frustration is warranted; hostility towards the engines of surveillance capital and its pleasure with advancing fascism is more than warranted; applications of AI within this domain and services rendered by its corporate builders—all ripe and just targets.

But it is a mistake, one that renders the critique and position dismissible, to slip from specifics to generalities and scarecrows.

visarga
Our society would break down without language. Without math there would be much fewer people because we would not be able to have large cities and commerce. Without technology we can't manage anymore.

The moral? It's always been an unbalanced society tumbling into the future. Even if AI has both downsides and upsides we will still make it a part of us. Consider the scale - 1B people chatting for the likes of 1T tokens/day. That amount of AI-language has got to influence human language and abilities as well.

Point by point rebuttals:

- environmental harms - so does any use of electricity, fuel or construction

- reinforcement of bias - all ours, reflected back, and it depends on prompting as well

- generation of racist output - depends on who's prompting what

- cognitive harms and AI supported suicides - we are the consequence sink for all things AI, good and bad

- problems with consent and copyright - only if you think abstractions should be owned

- enables fraud and disinformation and harassment and surveillance - all existed before 2020

- exploitation of workers, excuse to fire workers and de-skill work - that is AI being used as an excuse; it can't be AI's fault

- they don’t actually reason and probability and association are inadequate to the goal of intelligence - apparently you don't need reasoning to win gold at IMO

- people think it makes them faster when it makes them slower - and advanced LLMs are just 2.5 years old, give people time to learn to use it

- it is inherently mediocre - all of us have been at some point

- it is at its core a fascist technology rooted in the ideology of supremacy - LOL, generalizing Grok to all LLMs?

The author mixes hate of AI with hate of people behind AI and hate of how other people excuse their actions blaming AI.

ch4s3
- it is at its core a fascist technology rooted in the ideology of supremacy - LOL, generalizing Grok to all LLMs?

Yeah, "statistics is fascism" - Umberto Eco (probably)

sindriava
I became a hater by doing precisely those things AI cannot do: reading and understanding human language; thinking and reasoning about ideas; considering the meaning of my words and their context;

With this the article lost all seriousness for me. I may be on board with a lot of what you are saying, but pretending you know the answer to these questions just makes you look as idiotic as anyone who says the opposite.

deadbabe
Consider that unlike a human, an AI cannot “pretend” to know anything, it is incapable of knowing how to pretend because it doesn’t actually “know” anything.
mrandish
Although I agree with most of the article's points, I'm not an AI hater. But that's only because AI doesn't incite enough emotion in me to be "hate". It's more apathy or antipathy at worst. I concede that AI can occasionally be useful and, as a technologist, I admit early on there were some 'gee whiz' moments but the constant hype has grown annoying.

Frankly, it's gotten kind of boring and more recently it's to where I don't even like talking about it anymore. Of course, the non-technical general public is split between those who mistakenly think it's much 'smarter' or more capable than it is and those who dismiss it entirely but often for the wrong reasons. The disappointing part is how deeply polarized many of my more experienced technical friends are between one of those two extremes.

On the positive side there's endless over-the-top raving about how incredible AI is, and on the negative side overwhelming angst over how unspeakably evil and destructive AI is. These are people who've generally been around long enough to see long-term trends evolve, hype cycles fade, bubbles burst, and certain world-ending doom eventually arrive as just an everyday annoyance. Yet both extremes are so highly energized on the topic that they tend to leap to some fairly ungrounded, and occasionally even irrational, conclusions. Engaging with either type for very long gets kind of exhausting. I just don't think AI is quite as unspeakably amazing as the ravers insist OR nearly as apocalyptic as the doomers fear - but both groups are so into their viewpoint it borders on evangelical obsession, which makes it hard for anyone with an informed but dispassionate, measured, and nuanced perspective to engage with them.

tbugrara
AI democratizes knowledge better than wikipedia ever could. It makes knowledge accessible to those who can't find good teachers (a good teacher is ridiculously rare). It makes knowledge accessible to those who've been let down by our capitalist society.

I don't care that you hate it. It's the best thing to happen to us in a long time and anyone who disagrees does so on a mountain of privilege. I'm happy for you to have learned everything you know, but to desire to take it away from everyone else is abhorrent to me.

r2_pilot
I feel like this level of opprobrium is disproportionate. At least Claude has enabled me to live a fuller life, spending more time with those I love while being able to rubber-duck my random thoughts (who are they to judge what fleeting thoughts I allow myself?). I've been burned by AI falsehoods and read the same slop, sure, but I also went through the same with search engines, and even books before that. This tool would have unlocked so much more of my potential had it existed 30 years ago, and I'm excited (with maybe a lot of dread too) to see what the next 30 years will bring.
raggi
Extremism and division aren’t a path to wisdom or to healthy cultures, but you do you.
jplusequalt
or to healthy cultures

Are the companies funding this push for LLMs contributing to healthy cultures? The same companies who ruined societal discourse with social media? The same people who designed their algorithms to be as addictive as possible to drive engagement?

didibus
I love this article, I'm not an AI hater personally, but I doubt an AI could have written it. And in a way, that's as compelling of an argument as the content of the article itself.

Honestly, the first paragraph is packed full with good talking points, there's definitely a lot of ignoring of the cons of AI happening, I try to remember how I felt when social media first appeared, but I recall loving it, being part of all the hype, finding it amazing, using it all the time...

brennyb
Being an 'AI' hater is like being a 'motor' hater or an 'electricity' hater. You can hate the impact you assume it will have on society, but hating it will not change that it is so radically useful that it will never again not be used unless we overthrow all global value around doing things efficiently.

Why not look at the broader context instead of flail out against the machine? What is it about society that makes the automation of labor a bad thing?

As for art, it has always been about how you use the materials and resources you have. Photographs didn't make painting obsolete, but they rendered the pursuit of pure realism painting obsolete. 'AI' generated art does not make any other artform obsolete, but it will make the mechanical regurgitation of derivative works obsolete. If you want to do this on your own, like if you want to paint photorealistic paintings, you are still free to.

gloosx
Impressive writing, I enjoyed it, yet I feel it needs to go deeper and acknowledge that AI is nothing more than a product of modern society, which dictates what is to be done: an algorithm which generates infinite profits. This algorithm was just invented; it can respond pseudo-emotionally and mimic the individual, so it has the potential to build dependence and empty said individual's pocket indefinitely. That's peak capitalism.
drweevil
This is spot on. Life is what you make of it. You, your people, your community. Otherwise it has no real meaning. What do you matter, after all, to someone who isn't even aware of your existence? But to score their latest riches the would-be AI billionaires would disrupt this, destroying our reliance on each other within our communities. Because AI can do it better! Without understanding that "it" at all.

All this while consuming more electricity than ever before, during an emerging global climate crisis. And destroying our water supplies to boot. There is no good in any of this.

Miyazaki was absolutely right. Though I'll paraphrase him just a little: Capitalism is an insult to life itself.

mrandish
"the makers of AI aren’t damned by their failures, they’re damned by their goals."

Good observation.

egamirorrim
Someone get this man a Claude Max subscription already.
rsoto2
thanks but no thanks. Would rather not support a company involved in the mass slaughter of children.
brennyb
Are there any large companies that are not in some way involved in the mass slaughter of children?
turzmo
I too am an AI hater, and I generally agree with the sentiment, but that Miyazaki quote was taken far out of context.
bonoboTP
To those missing the context: his comment was not about generative AI art at all, but some creepy zombie-like walking motion that was learned by some reinforcement learning agent.
byronic
A marvelous article that gave voice to things I can't articulate appropriately (although, I guess, now I can).
Dumblydorr
Calling yourself a hater is nice, you can then straw man the other side all you want. Hilarious take, but wouldn’t stand up to a real inspection.
calf
The problem with this argument is that the science is still out. Hinton and other actual CS experts are terrified of AI and the risk of an AI/AGI technological singularity. Instead, this article focuses on the status quo technology, while those scientists (who don't care much about Altman and his ilk) are thinking about the storm to come now that Pandora's box has been opened.
Refreeze5224
This is wonderful. It captures so many of the issues of AI, and without apology.
throw7
I just watched a France24 news presenter ask Grok about crime in DC. Shake. My. Head. I kind of feel like we're headed for something... not good.
Martin_Silenus
Whatever their positive or negative view, these posts about AI only make me sad here.

Because AI relies on brute force. And at its roots, hacking is DEFINITELY NOT about it.

efitz
At least he’s honest.
nunez
This is art. This is gold. This is me.
yahoozoo
how it is at its core a fascist technology rooted in the ideology of supremacy

These people are insufferable.

Bedlow
This is great. I'm not going to make any argument. Because I am an AI hater. Genius.
satisfice
A refreshing post to place next to the endless nihilistic gaspings of the AI fanboys.
JoeAltmaier
An amusing diatribe. Criticisms that can well be leveled at a large fraction of humanity.
Group_B
Maybe if he keeps hating it more and more it'll go away. Or maybe we just have to combine all our AI hate together and AI itself will cease to exist.
tperdos
Me too mate. I hate that shit as much as I admire human originality. Respect yourselves, don't spend time interacting with or consuming ai generated crap.
tasuki
Hard disagree and an upvote - beautifully written!
the_arun
Prompt: Write an article on how much you hate AI assuming you are an AI hater. Repeat "I Am an AI Hater" a few times through out the article.
huqedato
I love you, AI Hater!
narrator
Some recent takes I've heard:

"AI makes me feel stupid" - economically struggling millennial

"This waymo stuff the money goes to big corporations instead of me a hard working American that contributes to the economy" - Uber driver

Meanwhile, all the wealthy business owners are fascinated with it cause they can get things done without having to hire.

Proofread0592
Meanwhile, all the wealthy business owners are fascinated with it cause they can get things done without having to hire.

I think you need to add the word potentially in front of "get things done". The venn diagram of what current LLMs can do, and what wealthy business owners think LLMs can do, has the smallest of overlaps.

hlieberman
Preach, my brother, preach!
sodapopcan
Post was flagged, huh.
taormina
This shouldn't be flagged....
qustrolabe
As all AI haters do, this one also uses a false interpretation of the Miyazaki quote
stego-tech
Well said. No notes.
hudon
The environmental problem is enough for us to pump the brakes. By the end of this year, AI systems will be responsible for half of global data center power demand… 23 gigawatts. For what? A more useful search engine, a better autocomplete, and a shit code generator. Is it worth it? Are we even asking that question? When does it become not worth it? Who’s even running the calculus? The free market certainly isn’t.
LeicaLatte
I just hate arrays.
lll-o-lll
I love arrays! How can you hate arrays? Uiua is my favourite language for fun.
Scrapemist
Nothing more human than a hater.