Being too ambitious is a clever form of self-sabotage
the "taste-skill discrepancy." Your taste (your ability to recognize quality) develops faster than your skill (your ability to produce it). This creates what Ira Glass famously called "the gap," but I think of it as the thing that separates creators from consumers.
This resonated quite strongly with me. It puts into words something that I've been feeling when working with AI. If you're new to something and using AI for it, it automatically boosts the floor of your taste, but not your skill. And you end up never slowing down to make mistakes and learn, because you can just do it without friction.
Detractors say it's the process and learning that builds depth.
Proponents say it doesn't matter because the tool exists and will always exist.
It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.
Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it.
Proponents say the tool is here to stay and you have to learn how to use it well before you can have a sensible opinion about it.
The same disconnect was in place for every major piece of technology, from mechanical weaving, to mechanical computing, to motorized carriages, to synthesized music. You can go back and read the articles written about these technologies and they're nearly identical to what the AI detractors have been saying.
One side always says you're giving away important skills and the new technology produces inferior work. They try to frame it in moral terms. But at heart the objections are about the fear of one's skills becoming economically obsolete.
Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it.
But here is the problem - to use the tool effectively, you must learn it. Not learning how to use AI effectively and then complaining that the results are bad is building a straw man and then burning it.
But what I am giving away when using an LLM is not skills; it's the ability to learn those skills. If the LLM, rather than me, solves all the easy and intermediate problems, I cannot learn how to solve hard problems. The process of digging through documentation for an answer gives me a better understanding of how a technology works.
Those kinds of trade-offs existed before - programming languages robbed people of the necessity to learn assembly - high-level languages of the necessity to learn low-level languages - low-code solutions of the necessity to learn how to code. Some of these (like low-level and high-level programming languages) are robust enough that the trade-off makes sense - some (like low code) are not.
I think it's too early to call whether AI agents go one way or the other. Putting eggs in both baskets means learning how to use AI tools while still maintaining the ability to work without them.
They try to frame it in moral terms. But at heart the objections are about the fear of one's skills becoming economically obsolete.
In short, what it comes down to is that you do not know this to be true: "Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it." If you do know it to be true, please provide citations. Sociology is a bitch, because we like to make stereotypes, but it turns out you really don't know anything about the individual you are talking to. You don't know their experiences, their learning, their age.
Further, humans tend to reason from very small sample sizes based on their own experiences. Even if you met a new detractor every day for the rest of the year, your sample would still be anecdotal, not representative.
You can say "in my experience" or "in my conversations," but as a general truism you need to provide some data. Further, even in your conversations, do you really know how much the other person knows? For example, you assumed (or at least heavily implied) that I just learned the names of logical fallacies. I'm actually quite old; it's been a long while since I learned them. Regardless, it does not matter so long as the fallacies are correctly applied. Which I think they were, and I'll defend that in depth against your shallow dismissal.
Quoting from earlier:
Detractors from AI often refuse to learn how to use it... you have to learn how to use it well before you can have a sensible opinion about it.
Clearly, if you don't like AI, you just have not learned enough about it. This argument assumes that detractors are not coming from a place of experience. This is a no-true-Scotsman: they wouldn't be detractors if they had more experience, you just need to do it better! The assumption about the experience level of detractors gives the fallacy away. Clearly detractors just have not learned enough.
From a definition of no-true-Scotsman[1]: "The no true Scotsman fallacy is the attempt to defend a generalization by denying the validity of any counterexamples given." In this case, the counterexamples provided by detractors are discounted because they (supposedly) simply have not learned how to use AI. A detractor could say "this technology does not work," and of course they are 'wrong' because they don't know how to use it well enough. Thus the generalization is that AI is useful and the detractors are wrong due to a lack of knowledge (implying that if they knew more, they would not be detractors).
-----
I'll define a straw man here as misrepresenting a counterargument in a weaker form, and then showing that weaker form to be false in order to discredit the entire argument.
There are multiple straw men:
The same disconnect was in place for every major piece of technology, from mechanical weaving, to mechanical computing, to motorized carriages, to synthesized music. You can go back and read the articles written about these technologies and they're nearly identical to what the AI detractors have been saying... They try to frame it in moral terms.
Perhaps the disconnect is actually different. I'd say it is. Because there is no fear of job loss from AI (from this detractor, at least), these examples are not relevant. That makes them a straw man.
But at heart the objections are about the fear of one's skills becoming economically obsolete.
So:
(1) The argument of detractors is morality based
(2) The argument of detractors is rooted in the fear of "becoming economically obsolete".
I'd say the strongest argument of detractors is that the technology simply doesn't work well. Period. If that is the case, then there is NO fear of "becoming economically obsolete."

Let's look at the original statement:
Detractors say it's the process and learning that builds depth.
Which means detractors are saying that AI tools are bad because they prohibit learning. Yet now we have words put in their mouths: that detractors actually fear becoming 'economically obsolete,' and that this is similar to other examples where the fear did not prove out. That is exactly a weaker form of the counterargument, discredited through the examples of synthesized music, etc.
So the reframing goes: it's not that AI hinders learning, it's that the detractors are afraid AI will take their jobs, and they are wrong because there are similar examples where that was not the case. That's a straw man.
[1] https://www.scribbr.com/fallacies/no-true-scotsman-fallacy/
But at heart the objections are about the fear of one's skills becoming economically obsolete.
I won't deny that there is some of this in my AI hesitancy
But honestly the bigger barrier for me is that I fear signing my name on subpar work that I would otherwise be embarrassed to claim as my own
If I don't type it into the editor myself, I'm not putting my name on it. It is not my code and I'm not claiming either credit nor responsibility for it
I think I'm using it more than it sounds like you are, but I make very clear notations to myself and others about what's a big generated test suite that I froze in amber after it cleared a huge replay event, and what I've personally gone over with a fine-tooth comb. I type about the same amount of prose and code every day as ever, but now a lot of the code I type goes into the prompt as "like this, not like that" comments.
The percentage of hand-authored lines varies wildly, from probably 20% of unit tests to still close to 100% on io_uring submission queue polling or whatever.
If it one-shots a build file, eh, I put opus as the meta.authors and move on.
And in a lot of areas it's clearly just copyright laundering, the way the Valley always says that breaking the law is progress if it's done with a computer (AI means computer now in policy circles).
But on code? Coding is sort of a special case in the sense that our tradition of sharing/copying/pasting/gisting-to-our-buddies-fuck-the-boss is so strong that it's kind of a different thing. Coding is also a special case in that LLMs are at all useful there over and above, like, non-spammed Google; it's completely absurd that they generalize outside of that hyper-specific niche. And it's completely absurd that `gpt-4-1106-preview` was better than pre-AI/pre-SEO Google: the LLM is both arsonist and fireman, like Ethan Hunt in that Mission Impossible flick with Alec Baldwin.
So if you're asking if I think the frontier vendors have the moral high ground on anything? No, they're very very bad people and I don't associate with people who even work there.
But if you're asking if I care about my code going into a model?
If I don't type it into the editor myself, I'm not putting my name on it. It is not my code and I'm not claiming either credit nor responsibility for it
This of course isn't just a moral concern, it's a legal one. I want ownership of my code, I don't want to find out later the AI just copied another project and now I've violated a license by not giving attribution.
Very few open-source projects are in the public domain and even the most permissive license requires attribution.
Proponents say the tool is here to stay and you have to learn how to use it well before you can have a sensible opinion about it.
That's like telling a chef they'll improve their cooking skills by adding a can of soup to everything.
It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.
There are actually some ground-truth facts about AI that many people are not knowledgeable about.
Many people believe we understand in totality how LLMs work. The absolute truth is that, overall, we do NOT understand how LLMs work AT all.
The mistaken belief that we understand LLMs is the driver behind most of the arguments. People think we understand LLMs and that we understand their output to be just stochastic parroting, when the truth is we do NOT understand why or how an LLM produced a specific response to a specific prompt.
Whether the process of an LLM producing a response resembles anything close to sentience or consciousness, we actually do not know, because we aren't even sure about the definitions of those words, nor do we understand how an LLM works.
This erroneous belief is so pervasive amongst people that I'm positive I'll get extremely confident responses declaring me wrong.
These debates are not the result of people talking past each other. It's because a large segment of people on HN are literally misinformed about LLMs.
For the general populace, including many tech people who are not ML researchers, understanding how convolutional neural nets work is already tricky enough. For non-tech people, I'd hazard a guess that LLMs/generative AI are complexity-indistinguishable from "The YouTube/TikTok Algorithm".
And this lack of understanding, and in many cases lack of conscious acknowledgement of the lack of understanding has made many "debates" sound almost like theocratic arguments. Very little interest in grounding positions against facts, yet strongly held opinions.
Some are convinced we're going to get AGI in a couple of years; others think it's just a glorified text generator that cannot produce new content. And worse, there's seemingly little that changes their minds on it.
And there are self-contradictory positions held too. Just as an example: I've heard people say AI-produced stuff doesn't qualify as art (philosophically and in terms of output quality) but at the same time express deep concern about how tech companies will replace artists...
Just as an example: I've heard people say AI-produced stuff doesn't qualify as art (philosophically and in terms of output quality) but at the same time express deep concern about how tech companies will replace artists...
I don't think this is self-contradictory at all.
One may have beliefs about the meaning of human produced art and how it cannot -- and shouldn't -- be replaced by AI, and at the same time believe that companies will cut costs and replace artists with AI, regardless of any philosophical debates. As an example, studio execs and producers are already leveraging AI as a tool to put movie industry professionals (writers, and possibly actors in the future) "in their place"; it's a power move for them, for example against strikes.
I don't think people will suddenly accept worse standards for art, and anyone producing high quality work will have a significant advantage.
And now if your argument is that the average consumer can't tell the difference, then for mass production does the difference actually matter?
Let's be cynical for a moment. A lot of Hollywood (and adjacent) movies are effectively slop. I mean, take almost all blockbusters, almost 99% action/scifi/superhero movies... they are slop. I'm not saying you cannot like them, but there's no denying they are slop. If you take offense at this proposition, just pretend it's not about any particular movie you adore, it's about the rest -- I'm not here to argue the merits of individual movies.
(As an aside, the same can be said about a lot of fantasy literature, Young Adult fiction, etc. It's by the numbers slop, maybe done with good intentions but slop nonetheless).
Superhero movie scripts could right now be written by AI, maybe with some curation by a human reviewer/script doctor.
But... as long as we accept these movies still exist, do we want to cut most humans out of the loop? These movies employ tons of people (I mean, just look at the credits), people with maybe high aspirations to which this is a job, an opportunity to hone their craft, earn their paychecks, and maybe eventually do something better. And these movies take a lot of hard, passionate work to make.
You bet your ass studios are going to either get rid of all these people or use AI to push their paychecks lower, or replace them if they protest unhealthy working conditions or whatever. Studio execs are on record admitting to this.
And does it matter? After all, the umpteenth Star Wars or Spiderman movie is just more slop.
Well, it matters to me, and I hope it's clear my argument is not exactly "AI cannot make another Avengers movie".
I also hope to have shown that this position is not self-contradictory at all.
> we do NOT understand how LLMs work AT all.
> We do NOT understand why or how an LLM produced a specific response to a specific prompt.
You mean the system is not deterministic? How the system works should be quite clear. I think the uncertainty is more about the premise that billions of tokens and their weights relative to each other are enough to reach intelligence. These debates are older than LLMs. In 'old' AI we were looking at (limited) autonomous agents that had the capability to participate in an environment and exchange knowledge about the world with each other. The next step for LLMs would be to update their own weights. That would be too costly in terms of money and time yet. What we do know is that for something to be seen as intelligent it cannot live in a jar. I consider the current crop as shared 8-bit computers, while each of us needs one with terabytes of RAM.

For context, Geoffrey Hinton is basically the Father of AI. He's responsible for the current resurgence of machine learning and for utilizing GPUs for ML.
The video puts it plainly. You can get pedantic and try to build scaffolding around your old opinion in an attempt to fit it into a different paradigm, but that's just self-justification and an attempt to avoid realizing or admitting that you held a strong belief that was utterly incorrect. The overall point is:
We have never understood how LLMs work.
That's really all that needs to be said here.

It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.
It's important to realize this is actually a general truth of humans arguing. Sometimes people do disagree about the facts on the ground and what is actually true versus what is bullshit, but a lot of the time what really happens is people completely agree on the facts and even most of the implications of the facts but completely disagree on how to frame them. Doesn't even have to be Internet arguments. A lot of hot-button political topics have always been like this, too.
It's easy to dismiss people's arguments as being irrelevant, but I think there's room to say that if you were to interrogate their worldview in detail you might find that they have coherent reasoning behind why it is relevant from their perspective, even if you disagree.
Though it hasn't really improved my ability to argue, or even to not argue (perhaps more important), I've definitely noticed this in myself when introspecting, and it definitely makes me think more about why I feel driven to argue, what good it is, and how to do it better.
The fact isn't that we don't know how to use AI. We've done so, and the results can sometimes be very good (mostly because we know what's good and what's not). What pushes us away from it is its unreliability. Our job is to automate workflows (the business's and some of our own) so that people can focus on the important matters and have the relevant information to make decisions.
The defect of LLMs is that you have to monitor their whole output. It's like driving a car where the steering wheel is loosely connected to the front wheels and the position for straight ahead varies all the time. Or, in the case of agents, it's like sleeping on a plane and finding yourself in Russia instead of Chile. If you care about quality, the cognitive load is a lot. If you only care about moving forward (even if the path is a circle or the direction is wrong), then I guess it's OK.
So we go for standard solutions, where fixed problems stay fixed and the number of issues is a downward slope (in a well-managed codebase), not an oscillating wave centered around some positive value.
What’s not mentioned is the utter frustration when you can see your own output is not up to your own expectations, but you can’t execute on any plan to resolve that discrepancy.
“I know what developers want, so I can build it for them” is a death knell proportionate to your own standards…
The most profitable business I built was something I hacked together in two weeks during college holiday break, when I barely knew how to code. There was no source control (I was googling “what is GitHub” at the time), it was my first time writing Python, I stored passwords in plaintext… but within a year it was generating $20k a month in revenue. It did eventually collapse under its own weight from technical debt, bugs and support cost… and I wasn’t equipped to solve those problems.
But meanwhile, as the years went on and I actually learned about quality, I lost the ability to ship because I gained the ability to recognize when it wasn’t ready… it’s not quite “perfectionism,” but it’s borne of the same pathology, of letting perfect be the enemy of good.
letting perfect be the enemy of good.
My attempt to improve the cliche:
Let skill be the enemy of taste
2 issues here. Neither can be developed (perfected?) in isolation, but they certainly ramp up at different rates. They should probably feed back into each other somehow, whether adversarially or not.

A more grown-up way to do it is to consume your mates' stuff?
(Trying to go from where TFA left off)
Of course the problem of taste growing much faster than skill remains, but I don't think the answer is to "consume" (yuck) less. I actually don't know if there's an answer.
You can be a passive consumer and never improve your taste or skill. However, when you consume with the intent of asking how, and then attempting to answer that question (for skill) and why (for taste), you get a much different experience.
Read code, looking for patterns. Look at design looking for patterns.
Then play: try to implement what you saw, implement the opposite and see how it feels, see what happens to the code.
This is a lot of work, but helps you improve.
a developer-focused startup
I'm sorry to tell you it doesn't just apply to developer-focused startups!

The taste-skill gap emerges when you intellectually recognize what a quality creation would be, but are physically unable to produce that creation, and judge the creations you are physically capable of producing as low quality.
The oft-cited example is drawing a circle. Everyone knows what a perfectly round circle looks like, but drawing one takes practice.
It doesn't take practice to type code. If you know what code you're supposed to write, you write it. The problem is all in the taste step, to know what code to write in the first place.
There's no meaningful taste-skill gap in programming because programming doesn't involve tacit skills. If you know what you're supposed to do, it is trivial to type that into a keyboard.
Strongly disagree here. The taste-skill gap still applies even when there's no mechanical skill involved. A lot of amateur music production is entirely "in the box" and the taste-skill gap very much exists, even though it's trivial to e.g. click a button to change a compressor's settings.
In programming, or more broadly application development, this manifests as crappy user interfaces or crappy APIs. Some developers may not notice or care, sure, but for many the feeling is, "this doesn't seem right, but I'm not exactly sure what's wrong or how to fix it." And that feeling is the taste-skill gap.
If you don't know what sound you want to hear at all, that's undeveloped taste.
If you know what code you want to type, but don't know how to use a keyboard, that would be a taste-skill gap.
If you don't know what code you want to type at all, that's undeveloped taste.
He can't really play an instrument, but he knows exactly what works and what doesn't and can articulate it.
Sort of like saying Bill Belichick has a skill gap because he's not a top NFL player. AFAIK he never played pro ball at all (and his college wasn't a top D1 program). But he's undeniably one of the most successful coaches in the business.
I pay a lot of attention to football as a hobby (and a gambling outlet), so these next two seasons at UNC for ol' Bill will be really telling.
I'm very torn at the moment on whether he was an incredible coach or just rode the wave of Brady's talent.
Honestly, it’s hard to imagine they’d have been anywhere near that successful if the answer wasn't just "both."
You see plenty of examples of great coaches stuck with lousy rosters (Parcells with the Cowboys), and also great players on poorly run teams (Patricia-era Lions). Usually when a team only has one or the other, they continually flame out early in the playoffs.
these next two seasons at UNC for ol' Bill will be really telling.
I wouldn't read too much into that. He's 73, the game's evolved a lot, and coaching college is a whole different thing from the NFL. It's incredibly rare for someone to excel at both — guys like Pete Carroll being the exception that proves the rule.
Everyone has always said Belichick is basically an encyclopedia of football knowledge.
What GP is saying is not that Rick Rubin has no skill anywhere, but that he recognized he has 100/100 taste and instead of trying to become a hip hop artist, instead became a producer for other artists.
In the same way, you’ve described how Bill Belichick recognized his taste in what makes a player good is not enough to make him also a good player, so he positioned himself to take advantage of his 100/100 taste rather than whatever skill value he may have.
Putting out Run-DMC – Raising Hell, Slayer – Reign in Blood, and Beastie Boys – Licensed to Ill in the same year is completely insane, but things would probably be much different if he were 20 years older or 20 years younger.
He was in the perfect place as hip hop and metal were taking off.
Most of the time when you chase taste you are working on splitting hairs. Or it will look like that to an outside observer.
I imagine this is a large part of why tooling and language wars are still compelling throughout decades of computing. No amount of lecturing on the joy of e.g. Rails vs. Node will really convince anyone to use an “outdated”, slow, dynamically typed language like Ruby in 2025 — even in places where it’d be a major win.
LLMs are good at things with a lot of quantity in the training set. You can signal-boost stuff, but it's not perfect (and it's non-obvious that you want rare/special/advanced stuff to be the sweet spot as a vendor; that's a small part of your TAM by construction).
This has all kinds of interesting tells. For example, Claude is better at Bazel than Gemini is, which is kind of extreme given Google has infinite perfect Bazel and Anthropic has open-source (really bad) Bazel, so you know Gemini hasn't gotten the google3 pipeline decontamination thing dialed in.
All else equal you expect a homogenizing effect where over time everything is like NextJS, Golang, and Docker.
There are outlier events, like how Claude got trained on nixpkgs in a serious way recently, but idk, maybe they want to get into defense or something.
Skill is very rarely the problem for computers, if you're considering it as distinct from taste (sometimes both together are just called skill).
Here is a copy paste of the quote:
“Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know it’s normal and the most important thing you can do is do a lot of work. Put yourself on a deadline so that every week you will finish one story. It is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions. And I took longer to figure out how to do this than anyone I’ve ever met. It’s gonna take awhile. It’s normal to take awhile. You’ve just gotta fight your way through.” ― Ira Glass
Maybe that actually is what you were saying? But I'm confused because you used the opposite words.
While it lets you create something you previously couldn't, the qualities of the medium are replaced with those of language.
I.e., to produce visual images you don't need an understanding of contrast, composition, transparency, chroma and all that; you just need to be able to articulate what you want.
I think that's where the lack of taste appears, you have a text-based interaction with a non-language medium.
Like how a movie that tries to stay as close as possible to the book will rarely be a noteworthy movie, versus something built from the ground up in that medium.
But they're saying your taste, in the context of self-judgment at attempting to learn to draw, might also be raised to a professional aesthetic, because you can already produce images of that level by typing words.
I guess I will add that a difference here is we are talking about taste somewhat differently. To me, genai has been a demonstration that taste and skill are not two points on the same dimension.
You were destined for great things. You were exceptional as a child; you learnt to associate your great potential with all the good in yourself; you built your identity around it. You were ahead of your peers in elementary school; whatever you applied yourself to, you excelled at.
So you value that potential as the ultimate good, and any decision which reduces it in favour of actually doing something - you fear and avoid with all your soul. Any decision whatsoever murders part of that infinite potential to deliver something subpar (at best - it's not even guaranteed you achieve anything).
Over time this fear takes over and stunts your progress. You could be great, you KNOW you have this talent, but somehow you very rarely tap into it. You fall behind people you consider "mediocre" and "beneath you". Because they seem to be able to do simple things like it's the simplest thing in the world, while you somehow can't "motivate" yourself to do the "simple boring things".
When circumstances are just right you are still capable of great work, but more and more the circumstances are wrong, and you procrastinate and fail. You don't understand why; you focus on the environment and the things you fail to achieve. You search for the right productivity hack or the exact right domain that will motivate you. But any domain has boring, repetitive parts. Any decision is a chance to do something merely OK in exchange for infinite potential. It never seems like it's worth it, so you don't do it.
You start doubting yourself. Maybe you're just an ordinary lazy person? Being ordinary is the thing you fear the most. It's a complete negation of your identity. You can be an exceptional genius with problems; you'll take that any time if the alternative is "just a normal guy".
So, what is the lesson here?
Gotta let go of pride and risk it for the biscuit (ship something)?
Everything you know is material for your brain to make excuses and rationalizations. So no lessons work.
What works is retraining the part of the brain that distorts the reality and directs all your thoughts towards these patterns.
It's a lot like debugging. There's a callback in your brain that is harmful. It triggers every time you have to sacrifice some future potential for uncertain reality. It is subconscious. Put a breakpoint in that callback. Try to notice every time it triggers. At first just notice it, notice what it urges you to do.
When you have it nailed down, try to change it. At that point you'll recognize the urge and where it comes from. Then it's a matter of making the decision and committing to something, no matter what. It doesn't only have to be big things; it can be small things unrelated to work. It's the same "code". If you do it every time, you'll retrain it eventually.
At least that's the theory, I'm not there yet.
Put a breakpoint in that callback. Try to notice every time it triggers. At first just notice it, notice what it urges you to do.
Damn I love this advice phrased like this.
Reward them for listening, integrating, being nice towards others, relaxed, comfortable, flourishing, in their lane
Best: good job studying for that exam.
Meh: good job passing that exam.
Worst: you so smart, everything comes easy to you.
Got this from Steve Peters: https://en.m.wikipedia.org/wiki/Steve_Peters_(psychiatrist)
I was a "gifted kid", now I'm a lonely adult living by herself constantly cycling between complacency, failure, panic, and productivity. Diagnosed ADHD, choose to stay unmedicated, sometimes the best employee in my office, usually one of the laziest and most disappointing employees in my office. Constantly daydreaming about how better circumstances would change things for the better even while knowing deep down I'd cause the exact same set of problems for myself all over again even if I got my Dream Job.
Spent my whole life being told I was exceptional, and, to be fair, I lived up to it as a kid. These days I'm so terrified of regressing to being "normal" that I sabotage myself at every turn.
Thank you for leaving this comment. I may bring up the concept with my therapist and see what she thinks of it.
Compare Lincoln’s life with that of John Quincy Adams. Great expectations inspired, pursued, and haunted Adams, depriving him, at critical moments, of common sense. Overestimations by others—which he then magnified—placed objectives beyond his reach: only self-demotion brought late-life satisfaction. No expectations lured Lincoln apart from those he set for himself: he started small, rose slowly, and only when ready reached for the top. His ambitions grew as his opportunities expanded, but he kept both within his circumstances. He sought to be underestimated.
The point -- being too ambitious can slow you down if you're not strategic.
e.g. By definition the 99.9th percentile person cannot live a 99.999th percentile life, if they did they would in fact be that amazing.
e.g. By definition the 99.9th percentile person cannot live a 99.999th percentile life, if they did they would in fact be that amazing.
This seems far too deterministic and I think is contrary to what you're replying to.
It sounds more like: a 99.999th-percentile person[0] who constantly reaches too far too early, before being prepared, will not have a 99.999th-percentile life. A 99th-percentile person who, on the other hand, does not constantly fail due to over-reach can easily end up accomplishing more. (And there are many other things that might hold them back too; they might get hit by a car while crossing the street.)
[0] in whatever measurement of "capability" you have in mind
There’s no practical way to determine that looking forwards in time.
In particular IQ is not associated with better life outcomes after you have "enough", and that "enough" isn't Mensa level.
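The "by definition" framing only holds if luck plays no role in outcomes. Here's a toy sketch (all assumptions mine, purely for illustration: outcome = capability + independent luck, both standard normal):

```python
import random

# Toy model, not a claim about real life: outcome = capability + luck.
CAP_999 = 3.09                 # ~99.9th percentile of a standard normal
OUT_99999 = 4.265 * 2 ** 0.5   # ~99.999th percentile of outcome ~ N(0, sqrt(2))

trials = 1_000_000
hits = sum(CAP_999 + random.gauss(0, 1) > OUT_99999 for _ in range(trials))
print(f"P(99.9th-percentile-capability person lands a 99.999th-percentile life)"
      f" = {hits / trials:.3%}")  # small but clearly nonzero (~0.2%)
```

Small, but not zero: as long as luck matters at all, percentile of capability does not pin down percentile of life.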
I suspect this might have to do with praise patterns in childhood.
Instead of trying to imagine a thing that someone else might or might not need.
I've been slowly chipping away at a heroku alternative called Canine[1] for the better part of a year now on the side, and for once, I don't feel tons of pressure or self loathing for not working on it quickly enough.
I use it every day now, and whenever I come across something that I wish was a little better (at the moment, understanding how much memory is used by the cluster is a pet peeve), I ruminate on it for a few days before hopping in and making some changes. No more, no less. It helps me get away from "what is the perfect solution", to "can i fix this thing that annoys me right now"
"Did I solve the problem I had"
I really think that's the wrong question, but I don't know how to formulate it any better... it should be somewhere between playful curiosity ("how did it advance me a step in my own interests?"), pragmatic foresight ("how did it open up new possibilities?"), and bland reflection ("why was it the necessary thing to do at that moment?").
"can i fix this thing that annoys me right now"
Whatever your questions might be, I sure hope they won't only aim for a boolean answer.
Congratulations: you have successfully turned your cool idea into a chore. It’s just a lot of trivial typing and package management and it might not even be all that impressive when it is done.
Your idea is not at all a path well-trodden, but it is a path down which you’ve sent a high-resolution camera FPV drone so many times that you doubt you will see anything new in person.
What might happen then is that you try to keep it interesting by making it more impressive and raising the bar, by continuing to think and plan even harder. Why not write it in Rust? Why not make it infinitely extensible? More diagrams, hundreds more open tabs…
It can absolutely lead to cool ideas with strategic and well-defined execution plans. Unfortunately, it is also difficult to break this loop and actually implement without an external force or another mind giving you some reframing.
Congratulations: you have successfully turned your cool idea into a chore.
The article gave me a vague, off-topic sense of unease but your comment crystallised the feeling for me.
I really wish less emphasis were placed on this kind of blue-sky, "strategic" thinking, and more on the "chores". Legwork, maintenance, step-by-step execution of a plan, issue tracking, perspective shifting, etc. are all, in my opinion, critically important and much more deserving of praise and respect than so-called "strategic" thinking.
Which, IME, most people can't do anyway! After they've talked their big talk you suggest that there's a practical, on-ground problem and they look at you accusingly, like you're sabotaging their picture. And I'm like, no, my friend; reality is sabotaging your picture, it's just the two of us here and you're not losing any face by me pointing that out, and also if you were an actual strategic thinker you'd have taken my on-ground problem into account already...
It's possible to make no mistakes and still lose. It's when people get offended about something they are wrong about that a tolerance for Pyrrhic victories is created.
I think it is important to be able to strategise, especially if you can delegate parts of the work. If you cannot delegate, there needs to be a balance with capacity for grunt work. One way to address it perhaps is learning to get in the zone and enjoy ongoing work as a process. Unfortunately, sometimes it is hard to snap out of big picture view and get to it.
It's just a lot of trivial typing and package management and it might not even be all that impressive when it is done. [...] What might happen then is that you try to keep it interesting by making it more impressive
This feeling is something that immediately sets off an alarm in my head.
IRL every time I tried to impress someone, I said or did stupid things. These experiences are now part of cringe memories about myself.
In software, the paradox is often that making something simple is difficult, yet the result looks easily reproducible and unimpressive to most people. It is kind of like the engineers' version of people saying their 4yo kid could do the same drawings as Picasso.
Just go through the last 90% and finish the thing. Like Antoine de Saint-Exupéry said, perfection is reached not when there's nothing else to add, but when there's nothing more to remove.
Then put the V1.0 tag on it and move it to maintenance mode. Then move to the next project, which very well might be about covering a different set of needs in the same area.
I'm seeing a therapist later this month because in a talk with my GP she saw strong enough hints of ADHD to send me there, and the kind of situations and some feelings talked about in the article came up a lot in the conversation.
I size up my oil paints against the old masters, not the old ladies in the atelier. I paint miniatures way better than average but hang around with Golden Demon winners so I always find myself wanting. Can play beautiful Renaissance pieces on my uke, but infuriatingly not at a professional performance level. Can win many sim races, but not against the top 0.1%, yet I size myself against their telemetry and laptimes. I dabble in Chess but being forever stuck around lowly 1300 ELO makes me feel dumb. My dead side projects cemetery has subdirectories approaching 3 figures. I go out and cycle with my brother but I huff and puff while he tops the Strava segments and wins the regional amateur championship again.
So too many days I just sit and do nothing, or just look for something else to enjoy for a few months until I become an unhappy promising beginner at yet another thing, adding to the overall problem.
My own route out of this trap was to explore theories of mind and, more profoundly, practices of no-mind. Doing nothing is much harder to achieve than doing something and can create a space for insight that the analytical mind cannot access. From this place, which is free of comparison and judgement, incredibly beautiful things can emerge.
If you would like to get to the root of it, I would suggest Taoist teachings and reading a few things by Krishnamurti. Understanding the fundamental limitations of the mind can tell you something about who you are, through negation. For me, this has brought a deep sense of peace as well as an ability to use my mind in a more satisfying way.
Just my two cents :)
They're both arguably unreasonable standards, but one is for the end product (i.e. a novel/album/software project) as opposed to reaching some apparent level of general skill at your hobby. The latter is full of traps because, for subjective hobbies like the arts, how does one even evaluate that?
The quantity group learned something that cannot be taught: that excellence emerges from intimacy with imperfection, that mastery is built through befriending failure, that the path to creating one perfect thing runs directly through creating many imperfect things.
This reminded me of Roger Federer, who has won 82% of all matches but only 54% of all points.
I really enjoyed this article and also believe that in many cases doing is superior to planning.
Just a word of caution: the author doesn’t account for cost. All examples given are relatively low-cost and high-frequency: drawing pictures, taking photos, writing blog posts.
The cost-benefit ratio of simply doing changes when costs increase.
Quitting your high-paid job to finally start the startup you’ve been dreaming of is high-cost and rather low-frequency.
I don’t want to discourage anyone from doing these things, but it’s obvious to me that the cost/frequency aspect shouldn’t be neglected.
This reminded me of Roger Federer, who has won 82% of all matches but only 54% of all points.
This is in large part just a function of the way the rules of tennis work. E.g. consider gambling: there are casino games where the house has only a 1% edge, but if you play long enough, the casino gets 100% of your money.
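To make the compounding concrete, here's a minimal Monte Carlo sketch of simplified tennis scoring (assumptions mine: no serve/return asymmetry, seven-point tiebreak at 6-6, best of three sets):

```python
import random

P_POINT = 0.54  # Federer-like per-point win rate

def win_game(p):
    # First to 4 points, must lead by 2 (handles deuce implicitly).
    a = b = 0
    while True:
        if random.random() < p: a += 1
        else: b += 1
        if a >= 4 and a - b >= 2: return True
        if b >= 4 and b - a >= 2: return False

def win_tiebreak(p):
    # First to 7 points, must lead by 2.
    a = b = 0
    while True:
        if random.random() < p: a += 1
        else: b += 1
        if a >= 7 and a - b >= 2: return True
        if b >= 7 and b - a >= 2: return False

def win_set(p):
    # First to 6 games, must lead by 2; tiebreak at 6-6.
    a = b = 0
    while True:
        if win_game(p): a += 1
        else: b += 1
        if a >= 6 and a - b >= 2: return True
        if b >= 6 and b - a >= 2: return False
        if a == 6 and b == 6: return win_tiebreak(p)

def win_match(p, best_of=3):
    need = best_of // 2 + 1
    a = b = 0
    while a < need and b < need:
        if win_set(p): a += 1
        else: b += 1
    return a == need

trials = 20_000
wins = sum(win_match(P_POINT) for _ in range(trials))
print(f"{P_POINT:.0%} of points -> roughly {wins / trials:.0%} of matches")
```

The scoring hierarchy amplifies a small per-point edge at every level (game, set, match); the casino's 1% edge is the same mechanism run in the house's favor over enough rounds.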
This book focuses on extremely high-cost "megaprojects" and emphasizes the critical importance of thorough "planning" before execution. This stands in stark contrast to the low-risk creative activities discussed in the article, which makes the point about cost even more compelling.
However, rather than being a complete counter-argument, I see a significant overlap. The book advocates *for low-risk, low-cost experimentation and creative exploration during the planning phase* through methods like miniature prototyping and CAD simulations. In this sense, both the article and the book highlight the value of iterative approaches, whether it's through frequent, small-scale actions or through meticulous, low-cost trials before committing to high-cost endeavors.
Are there dreamers who overthink and never get anything done? Absolutely!
Are there also people who do what other people regularly say is impossible? Also an absolute yes.
Ambition has nothing to do with it. There are doers and there are talkers.
There are doers and there are talkers.
There are those who use their ambition to define a goal and then work tirelessly to achieve it. Think of the mountaineer who plans and trains for decades to eventually ascend Mt Everest.
Then there are those who share their ambition by talking about it, seeking recognition, etc., for "being ambitious." Staying with the mountaineer theme: those who refuse to climb a lesser mountain as not important enough to expend their precious talents upon. It is these folks who, if they somehow make enough money in some form, end up chartering a helicopter and sherpas to climb Mt Everest.
In the strict sense, ambition[0] is an inordinate love of honor.
Perseverance[1], OTOH, is the ability to endure suffering in pursuit of a good. Both effeminacy (refusal or inability to endure suffering to attain a good) and pertinacity (obstinate pursuit of something one should not) are opposed to perseverance.
It seems that ambition is therefore opposed to perseverance, since it can either be effeminate (the ineffectual daydreamer that makes big plans that he never realizes) or pertinacious (the person who bites off more than he can chew).
Prudence[3] involves the application of right reason to action, which itself presupposes right desire. An inordinate love of honor is therefore opposed to prudence, because it involves an inordinate desire. Furthermore, prudence presupposes humility[2], which involves knowing the actual limits of your strengths and qualities (it is not the denial of the strengths and qualities you actually have; that is opposed to humility and a common misconception!). Humility allows us to moderate our desires. In that sense, ambition as an inordinate desire for honors beyond one's reach lacks humility.
[0] https://www.newadvent.org/cathen/01381d.htm
[1] https://www.newadvent.org/summa/3138.htm#article2
Your taste develops faster than your skill
"the quality group could tell you why a photograph was excellent"
They are critics now. People with a huge taste-skill gap are basically critics — first towards themselves and gradually towards others. I don't want to generalize by saying "critics are just failed creators", but I've certainly found it true for myself. Trying to undo this change in me and this article kind of said all the words I wanted to hear. :)
It's both dense and beautifully written. Feels like every paragraph has something profound to say. This kind of "optimizing-for-screenshot-shares" writing usually gets overdone, but since this actually had substance, it was amazing to read.
(See how I turned into a critic?)
It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.
Don't get me wrong, I agree fully with the article. I put it into practice plenty well in many areas of my life. I've made great progress with my diet, self-care, and physical fitness routines by keeping my goals SMART.
And yet, a few years ago, I got this idea in my head for a piece of software I wanted to create that is, if not too ambitious, then clearly asking all of me and then some. The opening paragraph of the article really resonated with me -- "The artwork that will finally make the invisible visible."
And so, I've chipped away at the idea here and there, but I find myself continually put off by "the gap" - even though I know it's to be expected and is totally human.
Part of me wishes I had never dared to dream so big and wishes I could let the idea go entirely. Another part of me is mad and ashamed for thinking like that about a personal dream.
Anyway, don't know where I'm going with all this. Just felt like remarking on the article since it struck close to home.
P.S. if you haven't seen the Ira Glass video, I'd take a look. It's pretty inspirational. Here's Part 3 which is what the article was referencing. https://www.youtube.com/watch?v=X2wLP0izeJE
WTF is "too ambitious"? Is it when people *don't* want to make the only necessary "sacrifice," aka exchange/trade-off? It's usually time that would otherwise be spent on something else, which includes family, friends, other hobbies; but the latter can be taken off the list, because implicit in ambition is the higher priority of the thing or state aspired to and worked on.
The ability to recognize quality grows quicker because of the number of people who have successfully made the exchange and either improved their skill or found and implemented acceptable workarounds.
Most post-modern creation is fractal remixing. It's just effort put in over time. The most untalented people can create superb stuff if they just keep grinding out adequate levels of skill and workarounds.
The beauty, IMO, is in accepting the process of others and supporting, motivating, and inspiring them with anything one can provide. That will help them grow both skill and taste, which in turn augments your world and raises your ambition.
Look at it this way: if you poison your neighbors, you lower the quality of your environment, which lowers the quality of your personal IO, input and output. You even lower the standards by which your IO is evaluated, both by others and by yourself. You basically keep yourself low, and thus your own creation. That applies to content, products, code and any writing.
People are stuck in the old hierarchical ways of thinking. That's not even annoying. Please hone your sense for quality. You don't owe that to the old world and its guard, but it would prove their effort was not for nothing.
Creation is not birth; it is murder. The murder of the impossible in service of the possible.
What a stupid quote. You know why it's stupid? Because murder is creation: the creation of death while destroying life.
Just use the word the way it's meant to be used. Don't come up with quotes that sound clever and trick the mind into thinking a statement is profound when really it's just more word trickery.
Just use the word the way it's meant to be used.
Ha ha, you are funny:-)
This is the whole point of a (natural) language – the meaning of words is inevitably floating.
Do not nail down a meaning of a word, it’s impossible. Instead, try to imagine there is no word;-)
It's not the only way of looking at it, but it is one way, and it's not wrong.
Taste comes quicker and can be more generalized. It's also pretty easy to express. Skill has many hidden components, takes experience to hone and is typically very specific.
The algorithmic machinery of attention has, of course, engineered simple comparison. But it has also seemingly erased the process that makes mastery possible. A time-lapse of someone creating a masterpiece gets millions of views. A real-time video of someone struggling through their hundredth mediocre attempt disappears into algorithmic obscurity.
Honestly, I have found that the most important reason something gets a million views is because it got 999,999 views (so the algorithm likes it more). Lots of popular content doesn't demonstrate that mastery at all; it demonstrates a dumbed-down presentation of relatively little actual content, while the really good stuff is something you only stumble upon by random chance, buried in hundredth-mediocre-attempts.
I see this in wannabe founders listening to podcasts on loop, wannabe TikTokkers watching hours of videos as “research,”
... Which feeds right into that. It becomes too easy to mistake fluff for content and convince yourself of the value of that research. I think it's something specific to watching video content, too.
One of my own possibly-self-sabotaging ambitions is video rendering software that I would then use to produce my own content. But then, on top of the actual software, I would have to figure out how to actually write the shorter-but-still-compelling scripts that I imagine to be possible. And I would spend the whole time expecting my work to be ignored and despairing over that anyway.
That said, "Do-learn" sort of begs the question, and it's only a half-step. How do you know when you're polishing a turd? Who's to say this cycle is virtuous or vicious?
The second part is that after you drop your self-centered delusion of seeking perfection, you actually start to find and solve other people's problems.
It might not be pretty or fun, but that's what has value.
If you're interested in building companies, the key factor is not the technology or even the team, but the market -- the opportunity to help.
Then it's not really your ambition: it's a need that needs filling, and the question is whether you can find the people and means to do it, and you'll find both the people and the means are inspired not by your ambition, but by your vision for how to fill the need, in a kind of self-selected alignment and mutual support.
Unconstrained curiosity is a vice, not virtue.
Unconstrained curiosity is a superpower. Some of the greatest people in history have had immense curiosity. Think Newton, Darwin, Feynman. In fact pretty much any great scientist is great because of their wide curiosity. It's often the crossover between things that seem unrelated where the breakthroughs lie.
It's a joy to have "the pleasure of finding things out" and I pity anyone who lacks it.
Must say, it was a bit long. At the beginning, and after looking up the author, I confess to thinking "Oh no another pretty face influencer". But it built up very well. My respect level increased a lot when I saw Olin College of Engineering on her bio. Had checked it out for my daughter and came away very impressed by their approach. Most all American engineering colleges are so full of theory and so little doing, when it should be the other way around. Kudos.
There are two claims in this post: Initial goals get adjusted as we discover operating constraints, and it is easier to work with fewer variables to pay attention to.
I didn't like these sentences in this post:
- "I see this in wannabe <people trying>..."
- "Here's what happens to those brave enough to actually begin ..."
Here the author was brave enough to put themselves on a pedestal, like a true wannabe-profound.
You can definitely skip a lot of the tedious bits where the author essentially copy-pastes other books for analysis, but this is a very common pattern: people tend to hold themselves back because taking the unambitious, rather pedestrian next step forward requires one to face these preconceived notions about oneself, e.g. "I should've done this long ago," etc.
The market usually doesn't want advanced technology, but rather the comfortable nostalgic dysfunctional totems they always purchased in the past. =)
"The Man In The White Suit" ( 1951)
“There is a moment, just before creation begins, when the work exists in its most perfect form in your imagination.”
I think TS Eliot said this exact thing, but more poetically, in “The Hollow Men” (1925):
“Between the conception and the creation, falls the Shadow”
Which remains one of my all time fave pieces of writing. So much said in so few words.
We are still the only species cursed with visions of what could be. But perhaps that's humanity's most beautiful accident. To be haunted by possibilities we cannot yet reach, to be driven by dreams that exceed our current grasp. The curse and the gift are the same thing: we see further than we can walk, dream bigger than we can build, imagine more than we can create.

And so we make imperfect things in service of perfect visions. We write rough drafts toward masterpieces we may never achieve. We build prototypes of futures we can barely envision. We close the gap between imagination and reality one flawed attempt at a time.
Still not sure if it will help me overcome this, but the "quitting point" concept and the drawing example made it a good read for me.
Not 100% the same, but I've also heard there is a correlation between procrastination and perfectionism, and narcissism (not only grandiosity, but also vulnerability and low self-esteem):
https://pmc.ncbi.nlm.nih.gov/articles/PMC11353843/#sec3-ijer...
Relevant proverbs are plenty... "There is no failure except in no longer trying" etc
At semester's end, all the best photos came from the quantity group.
My parents once owned a photography studio. My stepfather often said something like, "A great photographer doesn't only take great photos; he takes many photos of various quality, and never shows anyone the bad ones."
1. Just because the single-photo group only submitted one photo doesn't mean they took fewer; they may have taken just as many as the quantity group.
2. How were the "best" photos determined (by the prof? by class vote?)

If the quality group took as many photos, then the issue is really about the subjective selection of the "best" photo. The quantity group had 100x as many photos to choose from, so it could be more about how well each person in the quality group was able to select the best photo from their collection, compared to however the "best" photos were selected out of all photos.
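The selection effect alone is worth quantifying. A toy model (the Gaussian "quality score" and every number here are assumptions, not data): if each shot's quality is an independent draw, the best of 100 attempts reliably beats the best of a handful, even at identical skill:

```python
import random
import statistics

def best_of(n):
    """Best quality score among n shots (toy model: quality ~ N(5, 1))."""
    return max(random.gauss(5.0, 1.0) for _ in range(n))

trials = 10_000
quantity = [best_of(100) for _ in range(trials)]  # picks their best of 100
quality = [best_of(5) for _ in range(trials)]     # a few careful attempts

print(f"quantity group, best of 100: mean {statistics.mean(quantity):.2f}")
print(f"quality group, best of 5:    mean {statistics.mean(quality):.2f}")
```

So "all the best photos came from the quantity group" is partly a max-of-N inevitability, on top of whatever extra learning the practice produced.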
The quantity group would be graded on volume: one hundred photos for an A, ninety photos for a B, eighty photos for a C, and so on. The quality group only needed to present one perfect photo.
At semester's end, all the best photos came from the quantity group.
I think the more interesting experiment would be to give both groups the same assignment in terms of volume, but tell the quality group they had to submit N photos and designate one as their choice, to be graded on its quality. I think this would be interesting because my hypothesis is that people differ in what they consider "good", and the quality group would end up indicating the "wrong" photo as their choice nearly 100% of the time.
I also think being a beginner at other things reminds me that failure is what learning feels like, which gives me some perspective when my “real” job feels difficult although I’m supposedly so good at it.
When I look back at big things I’ve done, they’re all the result of just “doing the thing” for a long time and making thousands of course corrections. Never the result of executing the perfect crystalline plan.
I have spent a year on a project that is not really much closer to completion than when I started. But I have been shaving yaks like a motherfucker. Research, design iterations, acquiring tools, making jigs, creating space. (I have also wasted a lot of time due to coping with ADHD and depression)
I could have done it sooner if I had compromised more. But I wasn't yet experienced enough to know what compromises to make and still end up with an acceptable solution. Many things have come up that I didn't expect in my initial dream. If I'd known then what I know now, I would have dialed things down.
Ignorance amplified my ambition, and my ambition exceeded my grasp. But if you never give up, it's not sabotage: it's perseverance. And I refuse to quit. My grasp is getting stronger. I'm moving forward faster, getting better. So my ambition (in this case) is a stupid form of self-improvement. It turns out I'm not building a camper. I'm building Me.