"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.
When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."
This is very insightful, thanks. I had a similar thought regarding data science in particular. Writing those pandas expressions by hand during exploration means you get to know the data intimately. Getting AI to write them for you limits you to a superficial knowledge of said data (at least in my case).
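A minimal sketch of the kind of exploration I mean (the dataset and column names are hypothetical):

```python
import pandas as pd

# Hypothetical dataset; the point is the hands-on back-and-forth, not the result.
df = pd.read_csv("sales.csv", parse_dates=["order_date"])

# Each throwaway expression teaches you something about the data:
print(df["order_date"].dt.year.value_counts().sort_index())  # how far back does it go?
print(df.groupby("region")["revenue"].describe())             # which regions look skewed?
print(df[df["revenue"] <= 0].head())                          # refunds, or data-entry errors?
```

When an agent writes those lines for you, the answers arrive, but the surprises along the way (the weird years, the negative revenues) never really register.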
And when programming with agentic tools, you need to actively push for the idea not to regress to the most obvious/average version. The amount of effort you need to expend on pushing an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.
There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
With the basic and enormous difference that the feedback loop is 100 or even 1000x faster. Which changes the type of game completely, although other issues will probably arise as we try this new path.
I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see; the other is frustrating, leaves you with a lot of scratches, and ultimately ends with both of you "agreeing" on a marginal compromise.
That's not an upside unique to LLM-written vs human-written code, though. When writing it yourself, you also need to make it crystal clear. You just do that in the language of implementation.
Yet another example of "comments that are only sort of true because high temperature sampling isn't allowed".
If you use LLMs at very high temperature with samplers which correctly keep your writing coherent (e.g. min_p, or better ones like top-h, P-less decoding, etc.), then "regression to the mean" literally DOES NOT HAPPEN!!!!
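To make that concrete, here is a toy numpy sketch of min_p filtering at high temperature; this is just the idea, not any particular inference engine's API:

```python
import numpy as np

def sample_min_p(logits, temperature=1.5, min_p=0.1, rng=np.random.default_rng()):
    # High temperature flattens the distribution for variety...
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # ...and min_p discards every token whose probability is below
    # min_p * (probability of the single most likely token), so the
    # incoherent tail can never be sampled, however hot you run.
    keep = probs >= min_p * probs.max()
    filtered = np.where(keep, probs, 0.0)
    filtered /= filtered.sum()
    return rng.choice(len(probs), p=filtered)

# Toy vocabulary: three plausible tokens, three nonsense ones.
logits = np.array([5.0, 3.5, 3.0, -2.0, -2.5, -3.0])
print(sample_min_p(logits))  # only indices 0-2 ever come out
```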
Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.
LLMs don't "reason" the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature is more likely to increase the chance of unexecutable pseudocode than to produce a valid but more esoteric implementation of a problem.
You can't without hacking it! That's my point! The only places where you can easily do so are via the API directly, or "coomer" frontends like SillyTavern, Oobabooga, etc.
Same problem with image generation (lack of support for different SDE solvers, the image version of LLM sampling) but they have different "coomer" tools, i.e. ComfyUI or Automatic1111
This is an amazing quote - thank you. This is also my argument for why I can't use LLMs for writing (proofreading is OK) - what I write is not produced as a side-effect of thinking through a problem, writing is how I think through a problem.
Coding is not at all like working a lump of clay unless you’re still writing assembly.
You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.
The human element of discovery is still there if a robot stacks the bricks based on a different set of syntax (Natural Language), nothing about that precludes authenticity or the human element of creation.
You’re both right. It just depends on the problems you’re solving and the languages you use.
I find languages like JavaScript promote the idea of "Lego programming" because you're encouraged to use a module for everything.
But when you start exploring ideas that haven't been thoroughly explored already, and particularly in systems languages which are less zealous about DRY (don't repeat yourself) methodologies, then you can feel a lot more like a sculptor.
Likewise if you’re building frameworks rather than reusing them.
So it really depends on the problems you’re solving.
For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.
I think the analogy to high level programming languages misunderstands the value of abstraction and notation. You can’t reason about the behavior of an English prompt because English is underspecified. The value of code is that it has a fairly strong semantic correlation to machine operations, and reasoning about high level code is equivalent to reasoning about machine code. That’s why even with all this advancement we continue to check in code to our repositories and leave the sloppy English in our chat history.
Exactly, and that's why I find AI coding solves this well: I find it tedious to put the bricks together for the umpteenth time when I can just have an AI do it (I will of course verify the code when it's done; not advocating for vibe coding here).
This actually leaves me with a lot more time to think, about what I want the UI to look like, how I'll market my software, and so on.
Yes, maybe it's my preference to jump directly to coding, instead of using Canva to draw the GUI and so on.
I would not know what to draw, because the involvement is not deep enough... or something.
I dunno, when you've made about 10,000 clay pots it's kinda nice to skip to the end result; you're probably not going to learn a ton with clay pot #10,001. You can probably come up with some pretty interesting ideas for what you want the end result to look like from the outset.
I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.
Depends on the problem. If the complexity of what you are solving is in the business logic, or is generally low, you are absolutely right. Manually coding signup flow #875 is not my idea of fun either. But if the complexity is in the implementation, it's different. Doing complex cryptography, performance optimization, or near-hardware stuff is just a different class of problems.
> If the complexity of what you are solving is in the business logic, or is generally low, you are absolutely right.
The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.
So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business logic programming.
In my experience AI is pretty good at performance optimizations as long as you know what to ask for.
Can't speak to firmware code or complex cryptography, but my hunch is that if it's in its training dataset and you know enough to guide it, it's generally pretty useful.
Ironic. The frequency and predictability of this type of response — “This criticism of new technology is invalid because someone was wrong once in the past about unrelated technology” — means there might as well be an LLM posting these replies to every applicable article. It’s boring and no one learns anything.
It would be a lot more interesting to point out the differences and similarities yourself. But then if you wanted an interesting discussion you wouldn’t be posting trite flamebait in the first place, would you?
Interesting comparison. I remember watching a video on that. Landscape painting, portraiture, etc. were arts that took an enormous nosedive. We, as humans, have missed out on a lot of art because of the invention of the camera. On the other hand, the benefits of the camera need no elaboration. Currently AI has a lot of foot guns though, which I don't believe the camera had. I hope AI gets to that point too.
> Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.
This is a non sequitur. Cameras have not replaced paintings, assuming this is the inference. Instead, they serve only to be an additional medium for the same concerns quoted:
> The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning.
Just as this applies to refining a software solution captured in code, and just as a painter discards unsatisfactory paintings and tries again, so too does it apply when people say, "that picture didn't come out the way I like, let's take another one."
What stole the joy you must have felt, fleetingly, as a child that beheld the world with fresh eyes, full of wonder?
Did you imagine yourself then, as you are now, hunched over a glowing rectangle? Demanding imperiously that the world share your contempt for the sublime.
Share your jaundiced view of those that pour the whole of themselves into the act of creation, so that everyone might once again be graced with wonder anew.
I hope you can find a work of art that breaks you free of your resentment.
I took the liberty of pasting it to chatgpt and asked it to write another paragraph in the same style:
Perhaps it is easier to sneer than to feel, to dull the edges of awe before it dares to wound you with longing. Cynicism is a tidy shelter: no drafts of hope, no risk of being moved. But it is also a small room, airless, where nothing grows. Somewhere beyond that glowing rectangle, the world is still doing its reckless, generous thing—colors insisting on being seen, sounds reaching out without permission, hands shaping meaning out of nothing. You could meet it again, if you chose, not as a judge but as a witness, and remember that wonder is not naïveté. It is courage, practiced quietly.
Plot twist. The comment you love is the cynical one, responding to someone who clearly embraces the new by rising above caution and concern. Your GPT addition has missed the context, but at least you've provided a nice little paradox.
Art history. It's how we ended up with Impressionism, for instance.
People felt (wrongly) that traditional representational forms like portraiture were threatened by photography. Happily, instead of killing any existing genres, we got some interesting new ones.
I think just as hard, I type less. I specify precisely and I review.
If anything, all we've changed is working at a higher level. The product is the same.
But these people just keep mixing things up like "wow I got a ferrari now, watch it fly off the road!"
Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!
We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.
Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders or ... or just programmers, hey. You know why? I write in C, I get machine code, I didn't write the machine code! It was all an abstraction!
Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.
Just chill, it's programming. The tools just got even better.
You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger fewer typos today than ever before.
I think I understand what the author is trying to say.
We miss thinking "hard" about the small details. Maybe "hard" isn't the right adjective, but we all know the process of coding isn't just typing stuff while the mind wanders. We keep thinking about the code we're typing and the interactions between the new code and the existing stuff, and keep thinking about potential bugs and issues. (This may or may not be "hard".)
And this kind of thinking is totally different from what Linus Torvalds has to think about when reviewing a huge patch from a fellow maintainer. Linus' work is probably "harder", but it's a different kind of thinking.
You're totally right it's just tools improving. When compilers improved most people were happy, but some people who loved hand crafting asm kept doing it as a hobby. But in 99+% cases hand crafting asm is a detriment to the project even if it's fun, so if you love writing asm yourself you're either out of work, or you grudgingly accept that you might have to write Java to get paid. I think there's a place for lamenting this kind of situation.
I agree. I think some of us would rather deal with small, incremental problems than address the big, high-level roadmap. High-level things are much more uncertain than isolated things that can be unit-tested. This can create feelings of inconvenience and unease.
Spot on. It’s the lumberjack mourning the axe while holding a chainsaw. The work is still hard; it’s just different.
The friction comes from developers who prioritize the 'craft' of syntax over delivering value. It results in massive motivated reasoning. We see people suddenly becoming activists about energy usage or copyright solely to justify not using a tool they dislike. They will hunt for a single AI syntax error while ignoring the history of bugs caused by human fatigue. It's not about the tech; it's about the loss of the old way of working.
And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.
I disagree. It's like the lumberjack working from home watching an enormous robotic forestry machine cut trees on a set of tv-screens. If he enjoyed producing lumber, then what he sees on those screens will fill him with joy. He's producing lots of lumber. He's much more efficient than with both axe and chainsaw.
But if he enjoyed being in the forest, and _doesn't really care about lumber at all_ (Because it turns out, he never used or liked lumber, he merely produced it for his employer) then these screens won't give him any joy at all.
That's how I feel. I don't care about code, but I also don't really care about products. I mostly care about the craft. It's like solving sudokus. I don't collect solved sudokus. Once solved I don't care about them. Having a robot solve sudokus for me would be completely pointless.
> I sense a pattern that many developers care more about doing what they want instead of providing value to others.
And you'd be 100% right. I do this work because my employer provides me with enough sudokus. And I provide value back which is more than I'm compensated with. That is: I'm compensated with two things: intellectual challenge, and money. That's the relationship I have with my employer. If I could produce 10x more but I don't get the intellectual challenge? The employer isn't giving me what I want - and I'd stop doing the work.
I think "You do what the employer wants, produce what needs to be produced, and in return you get money" is a simplification that misses the literal forest for all the forestry.
It is simple. Continuing your metaphor, I have a choice of getting exactly where I want on a camel in 3 days, or getting to a random location somewhere on the other side of the desert by helicopter in a few hours.
And being a reasonable person I, just like the author, choose the helicopter. That's it, that's the whole problem.
I think the more apt analogy isn't a faster car, a la Ferrari; it's more akin to someone who likes to drive now having to sit and monitor the self-driving car steer and navigate. Comparing to the Ferrari is off, since a Ferrari still demands a similar level of agency from the driver as a <insert slower vehicle>.
FSD is very very good most of the time. It's so good (well, v14 is, anyway), it makes it easy to get lulled into thinking that it works all the time. So you check your watch here, check your phone there, and attend to other things, and it's all good until the car decides to turn into a curb (which almost happened to me the other day) or swerve hard into a tree (which happened to someone else).
Funny enough, much like AI, Tesla is shoving FSD down people's throats by gating Autopilot 2, a lane keeping solution that worked extremely well and is much friendlier to people who want limited autonomy here and there, behind the $99/mo FSD sub (and removing the option to pay for the package out of pocket).
I get your point that wetware stills matter, but I think it's a bit much to contend that more than a handful of people (or everyone) is on the level of Linus Torvalds now that we have LLMs.
I should have been clearer. It was a pun, a take, a joke. I was referring to his day-to-day activity now, where he merges code and hardly writes any code for the Linux kernel himself.
I didn't imply most of us can do half the things he's done. That's not right.
> his day-to-day activity now, where he merges code
But even then...don't you think his insight into and ability to verify a PR far exceeds that of most devs (LLM or not)? Most of us cannot (reasonably) aspire to be like him.
Like I said, if you didn't know what you were doing before, you won't know what you're doing today either.
Agentic coding in general only amplifies your ability (or disability).
You can totally learn how to build an OS and invest 5 years of your life doing so. The first version of Linux I'm sure was pretty shoddy. Same for a SCM.
I've been doing this for 30 years. At some point, your limit becomes how much time you're willing to invest in something.
> We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger fewer typos today than ever before.
Except Linus understands the code that is being reviewed / merged in since he already built the kernel and git by hand. You only see him vibe-coding toys but not vibe-coding in the kernel.
Going forward, we are going to see gradual skill atrophy, with developers over-relying on AI: once something like Claude goes down, they can't do any work at all.
The more accurate framing is that AI is going to rapidly turn lots of so-called 'senior engineers' who are over-reliant on it, and unable to detect bad AI code, into the equivalent of juniors and interns.
I got excited about agents because I told myself it would be "just faster typing". I told myself that my value was never as a typist and that this is just the latest tool like all the tools I had eagerly added to my kit before.
But the reality is different. It's not just typing for me. It's coming up with crap. Filling in the blanks. Guessing.
The huge problem with all these tools is they don't know what they know and what they don't. So when they don't know they just guess. It's absolutely infuriating.
It's not like a Ferrari. A Ferrari does exactly what I tell it to, up to the first-order effects of how open the throttle is, what direction the wheels face, how much pressure is on the brakes etc. The second-order effects are on me, though. I have to understand what effect these pressures will have on my ultimate position on the road. A normie car doesn't give you as much control but it's less likely to come off the road.
Agents are like a teleport. You describe where you want to be and it just takes you directly there. You say "warm and sunny" and you might get to the Bahamas, but you might also get to the Sahara. So you correct: "oh no, I meant somewhere nice" and maybe you get to the Bahamas. But because you didn't travel there yourself you failed to realise what you actually got. Yeah, it's warm, sunny and nice, but now you're on an island in the middle of nowhere and have to import basically everything. So I prompt again and rewrite the entire codebase, right?
Linus Torvalds works with experts that he trusts. This is like a manic 5 year old that doesn't care but is eager to work. Saying we all get to be Torvalds is like saying we all get to experience true love because we have access to porn.
But that's the entire flippin' problem. People are being forced to use these tools professionally at a staggering rate. It's like the industry is in its "training your replacement" era.
Like I said that's temporary. It's janky and wonky but it's a stepping stone.
Just look at image generation. Actually, factually, look at it. We went from horror-coloured vomit with eyes all over, to six-fingered humans, to pretty darn good now.
It's not. We were able to get rid of six-fingered hands by getting very specific and fine-tuning models with lots of hand and finger training data.
But that approach doesn't work with code, or with reasoning in general, because you would need to exponentially fine-tune everything in the universe. The illusion that the AI "understands" what it is doing is lost.
Nothing new. Whenever a new layer of abstraction is added, people say it's worse and will never be as good as the old way. Though it's a totally biased opinion; we just have issues with giving up things we like, as human beings.
99% of people writing in assembly don't have to drop down into manual cobbling of machine code. People who write in C rarely drop into assembly. Java developers typically treat the JVM as "the computer." In the OSI network stack, developers writing at level 7 (application layer) almost never drop to level 5 (session layer), and virtually no one even bothers to understand the magic at layers 1 & 2. These all represent successful, effective abstractions for developers.
In contrast, unless you believe 99% of "software development" is about to be replaced with "vibe coding", it's off the mark to describe LLMs as a new layer of abstraction.
As someone who's been coding for several decades now (i.e. I'm old), I find the current generation of AI tools very ... freeing.
As an industry, we've been preaching the benefits of running lots of small experiments to see what works vs what doesn't, try out different approaches to implementing features, and so on. Pre-AI, lots of these ideas never got implemented because they'd take too much time for no definitive benefit.
You might spend hours thinking up cool/interesting ideas, but not have the time available to try them out.
Now, I can quickly kick off a coding agent to try out any hare-brained ideas I might come up with. The cost of doing so is very low (in terms of time and $$$), so I get to try out far more and weirder approaches than before when the costs were higher. If those ideas don't play out, fine, but I have a good enough success rate with left-field ideas to make it far more justifiable than before.
Also, it makes playing around with one-person projects a lot more practical. Like most people with a partner & kids, my down time is pretty precious, and tends to come in small chunks that are largely unplannable. For example, last night I spent 10 minutes waiting in a drive-through queue - that gave me about 8 minutes to kick off the next chunk of my one-person project development via my phone, review the results, then kick off the next chunk of development. Absolutely useful to me personally, whereas last year I would've simply sat there annoyed waiting to be served.
I know some people have an "outsourcing Lego" type mentality when it comes to AI coding - it's like buying a cool Lego kit, then watching someone else assemble it for you, removing 99% of the enjoyment in the process. I get that, but I prefer to think of it in terms of being able to achieve orders of magnitude more in the time I have available, at close to zero extra cost.
Totally agree. I can spend an afternoon trying out an approach to a problem or product (usually while taking meetings and writing emails as well). If it doesn't work, then that's a useful result from my time. If it does work, I can then double-down on review, tests, quality, security, etc and make sure it's all tickety-boo.
I see the current generation of AI very much as a thing in between. Opus 4.5 can think and code quite well, but it cannot do these "jumps of insight" yet. It also struggles with straightforward, but technically intricate things, where you have to max out your understanding of the problem.
Just a few days ago, I let it do something that I thought was straightforward, but it kept inserting bugs, and after a few hours of interaction it said itself it was running in circles. It took me a day to figure out what the problem was: an invariant I had given it was actually too strong, and needed to be weakened for a special case. If I had done all of it myself, I would have been faster, and discovered this quicker.
For a different task in the same project I used it to achieve a working version of something in a few days that would have taken me at least a week or two to achieve myself. The result is not efficient enough for the long term, but for now it is good enough to proceed with other things. On the other hand, with just one (painful) week more, I would have coded a proper solution myself.
What I am looking forward to is being able to converse with the AI in terms of a hard logic. That will get rid of the straightforward but technically intricate stuff that it cannot do yet properly, and it will also allow the AI to surface much quicker where a "jump of insight" is needed.
I am not sure what all of this means for us needing to think hard. Certainly thinking hard will be necessary for quite a while. I guess it comes down to when the AIs will be able to do these "jumps of insights" themselves.
That’s funny cause I feel the opposite: LLMs can automate, in a sloppy fashion, building the first trivial draft. But what remains is still thinking hard about the non trivial parts.
I don't agree with OP. I still think hard. I can guarantee anyone that in complex contexts, where you have to work on large distributed systems and very large codebases, with multiple people involved, OP’s argument falls apart. Quite simply, compared to before, we now delegate the "build" part to AI. But the engineering and architectural side always remains a product of my own mental process. We’ve moved to a higher level of abstraction; that doesn’t mean you have to think or reason less about things. On the contrary, sometimes we end up reasoning about more important problems.
I’d been feeling this until quite literally yesterday, where I sort of just forced myself to not touch an AI and grappled with the problem for hours. Got myself all mixed up with trig and angles until I got a headache and decided to back off a lot of the complexity. I doubt I got everything right, I’m sure I could’ve had a solution with near identical outputs using an AI in a fraction of the time.
But I feel better for not taking the efficient way. Having to be the one to make a decision at every step of the way, choosing the constraints and where I cut my losses on accuracy, I think has taught me more about the subject than even reading literature would’ve directly stated.
I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I've ever done.
Yeah, but thinking with an LLM is different. The article says:
> By “thinking hard,” I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.
The "thinking hard" I do with an LLM is more like management thinking. Its chaotic and full of conversations and context switches. Its tiring, sure. But I'm not spending multiple days contemplating a single idea.
The "thinking hard" I do over multiple days with a single problem is more like that of a scientist / mathematician. I find myself still thinking about my problem while I'm lying in bed that night. I'm contemplating it in the shower. I have little breakthroughs and setbacks, until I eventually crack it or give up.
YMMV, but I've found that I actually do way more of that type of "thinking hard" thanks to LLMs. With the menial parts largely off my plate, my attention has been freed up to focus on a higher density of hard problems, which I find a lot more enjoyable.
I very much think it's possible to use LLMs as a tool in this way. However, a lot of folks are not. I see people, both personally and professionally, give it a problem and expect it to both design and implement a solution, then hold the result up as a gold standard.
I find the best uses, for myself at least, are smaller parts of my workflow where I'm not going to learn anything from doing it:
- build one to throw away: give me a quick prototype to get stakeholder feedback
- straightforward helper functions: I have the design and parameters planned, just need an implementation that I can review
- tab-completion code-gen
- If I want leads for looking into something (libraries, tools) and Googling isn't cutting it
I just changed employers recently in part due to this: dealing with someone who now appears to spend his time coercing LLMs into giving the answers he wants, and becoming deaf to any contradictions. LLMs are very effective at amplifying the Reality Distortion Field for those that live in them. LLMs are replacing blog posts for this purpose.
I echo this sentiment. Even though I'm having Claude Code write 100% of the code for a personal project as an experiment, the need for thinking hard is very present.
In fact, since I don't need to do low-thinking tasks like writing boilerplate or repetitive tests, I find my thinking ratio is actually higher than when I write code normally.
I'm with you: thinking about architecture is generally still a big part of my mental effort. But for me most architectural problems are solved in short periods of thought and a lot of iteration. Maybe it's a skill issue, but neither now nor in the pre-LLM era have I been able to pre-solve all the architecture with pure thinking.
That said, architectural problems have also become less difficult, simply because research and prototyping have become faster and cheaper.
I think it depends on the scope and level of solution I accept as “good”. I agree that often the thinking for the “next step” is too easy architecturally. But I still enjoy thinking about the global optimum or a “perfect system”, even if it’s not immediately feasible, and can spend large amounts of time on this.
And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.
I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.
I think local code architecture was an easy domain where “optimality” was actually tractable, with the joy that comes with that, and LLMs are harmful to it; but I don’t think there’s nothing to replace it with.
Ya, they are programming languages after all. Language is really powerful when you really know how to use it. Some of us are more comfortable with the natural variety, some of us are more comfy with code ¯\_(ツ)_/¯
Agreed. My recent side projects involve lots of thinking over days and weeks.
With AI we can set high bars and do complex, original stuff. Obviously boilerplate and common patterns are slop, slapped out without much thinking. That's why you branch into new creative territory. The challenge then becomes visualising the mental map of modular pieces all working nicely together at the right time to achieve your original intent.
Reading this comment and other similar comments there's definitely a difference between people.
Personally I agree and resonate a lot with the blog post, and I've always found designs of my programs to come sort of naturally. Usually the hard problems are the technical problems and then the design is figured out based on what's needed to control the program. I never had to think that hard about design.
Aptitude testing centers like Johnson O'Connor have tests for that. There are (relatively) huge differences between different people's thinking and problem solving styles. For some, creating an efficient process feels natural, while others need stability and redundancy. Programmers are by and large the latter.
It's certainly a different style of thinking hard. I used to really stress myself over coding - i.e. I would get frustrated that solving an issue would cause me to introduce some sort of hack or otherwise snowball into a huge refactor. Now I spend most of my time thinking about what cool new features I am going to build and not really stressing myself out too much.
I feel this too. I suspect its a byproduct of all the context switching I find myself doing when I'm using an LLM to help write software. Within a 10 minute window, I'll read code, debug a problem, prompt, discuss the design, test something, do some design work myself and so on.
When I'm just programming, I spend a lot more time working through a single idea, or a single function. Its much less tiring.
It wasn't until I read your comment that I was able to pinpoint why the mental exhaustion feels familiar. It's the same kind (though not degree) of exhaustion as formal methods / proofs.
Except without the reward of an intellectual high afterwards.
I use Claude Code a lot, and it always lets me know the moment I stopped thinking hard, because it will build something completely asinine. Garbage in, garbage out, as they say...
It's how you use the tool... reminds me of that episode of the Simpsons when Homer gets a gun licence... he goes from not using it at all, to using it a little, to using it without thinking about what he's doing and for ludicrous things...
Thinking is tiring and life is complicated; the tool makes it easy to slip into bad habits, and bad habits are hard to break even when you recognise it's a bad habit.
Many people are too busy/lazy/self-unaware to evaluate their behaviour to recognise a bad habit.
There's no such thing as right or wrong, so the following isn't intended as any form of judgement or admonition, merely an observation that you are starting to sound like an LLM.
My observation: I've always had that "sound." I don't know or care much about what that implies. I will admit I'm now deliberately avoiding em dashes, whereas I was once an enthusiastic user of them.
I haven't reduced my thinking! Today I asked AI to debug an issue. It came up with a solution that was clearly correct, but it didn't explain why the code was in that state. I kept steering the AI (which just wanted to fix) toward figuring out the why, and it dug through git and GitHub issues at some point, in a very cool way.
And finally it pulled out something that made sense. It was defensive programming introduced to fix an issue somewhere else, which was in turn also fixed, so it had become useless.
At that point an idea popped into my mind and I decided to look for similar patterns in the codebase, related to the change, and found 3. One was a non-bug, two were latent bugs.
Shipped a fix plus 2 fixes for bugs yet to be discovered.
I think my message is doing a disservice to explaining what actually happened because a lot of it happens in my head.
1. I received the ticket, as soon as I read it I had a hunch it was related to some querying ignoring a field that should be filtered by every query (thinking)
2. I give this hunch to the AI, which goes searching the codebase in the areas where I suggested the problem could be, and that's when it finds the issue and provides a fix
3. I think the problem could be spread given there is a method that removes the query filter, it could have been used in multiple places, so I ask AI to find other usages of it (thinking, this is my definition of "steering" in this context)
4. AI reports 3 more occurrences and suggests that 2 have the same bug, but one is ok
5. I go in, review the code and understand it and I agree, it doesn't have the bug (thinking)
6. AI provides the fix for all the right spots, but I say "wait, something is fishy here, there is a commit that explicitly says it was added to remove the filter, why is that?" (thinking), so I ask AI to figure out why the commit says that
7. AI proceeds to run a bunch of git-history related commands, finds some commit and then does some correlation to find another commit. This other commit introduced the change at the same time to defend from a bug in a different place
8. I understand what's going on now, I'm happy with the fix, the history suggests I am not breaking stuff. I ask AI to write a commit with detailed information about the bug and the fix based on the conversation
There is a lot of thinking involved. What's reduced is search tooling. I can be way more fuzzy, rather than `rg 'whatever'` I now say "find this and similar patterns"
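For anyone who hasn't seen this bug class, here's a sketch of the pattern; every name is invented, it is not the actual codebase:

```python
# Hypothetical illustration of the filter-removal bug class described above.
RECORDS = [
    {"id": 1, "tenant_id": "a", "value": 10},
    {"id": 2, "tenant_id": "b", "value": 20},
]

def scoped(tenant_id):
    """The invariant: every query filters by the caller's tenant."""
    return [r for r in RECORDS if r["tenant_id"] == tenant_id]

def unscoped():
    """The filter-removing helper: added once as defensive programming for a
    bug that has since been fixed elsewhere, then copied to other call sites."""
    return list(RECORDS)

def report_total(tenant_id):
    # Latent bug: uses the unscoped helper and never re-applies the tenant
    # filter, so tenant "a" silently counts tenant "b"'s records too.
    return sum(r["value"] for r in unscoped())

def report_total_correct(tenant_id):
    return sum(r["value"] for r in scoped(tenant_id))

assert report_total_correct("a") == 10
assert report_total("a") == 30  # should have been 10: the latent bug
```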
I've had the completely opposite experience as somebody that also likes to think more than to build: LLMs take much of the legwork of actually implementing a design, fixing trivial errors etc. away from me and let me validate theories much more quickly than I could do by myself.
More importantly, thinking and building are two very different modes of operating and it can be hard to switch at a moment's notice. I've definitely noticed myself getting stuck in "non-thinking building/fixing mode" at times, only realizing that I've been making steady progress in the wrong direction an hour or two in.
This happens way less with LLMs, as they provide natural time to think while they churn away at doing.
Even when thinking, they can help: They're infinitely patient rubber ducks, and they often press all the right buttons of "somebody being wrong on the Internet" too, which can help engineers that thrive in these kinds of verbal pro/contra discussions.
One thing this discussion made me realize is that "thinking hard" might not be a single mode of thinking.
In grad school, I had what I'd call the classic version. I stayed up all night mentally working on a topology question about turning a 2-torus inside out. I already knew you can't flip a torus inside out in ordinary R^3 without self-intersection. So I kept moving and stretching the torus and the surrounding space in my head, trying to understand where the obstruction actually lived.
Sometime around sunrise, it clicked that if you allow the move to go through infinity (so effectively S^3), the inside/outside distinction I was relying on just collapses, and the obstruction I was visualizing dissolves. Birds were chirping, I hadn't slept, and nothing useful came out of it, but my internal model of space felt permanently upgraded. That's clearly "thinking hard" in that sense.
But there's another mode I've experienced that feels related but different. With a tough Code Golf problem, I might carry it around for a week. I'm not actively grinding on it the whole time, but the problem stays loaded in the background. Then suddenly, in the shower or on a walk, a compression trick or a different representation just clicks.
That doesn't feel "hard" moment to moment. It's more like keeping a problem resident in memory long enough for the right structure to surface.
One is concentrated and exhausting, the other is diffuse and slow-burning. They're different phenomenologically, but both feel like forms of deep engagement that are easy to crowd out.
I had similar thoughts recently. I wouldn't consider myself "the thinker", but I simply missed learning by failure. You almost don't fail anymore using AI. If something fails, it feels like it's not your fault but that the AI messed up. Sometimes I even get angry at the AI for failing, not at myself. I don't have a solution either, but I came up with a guideline on when and how to use AI that has helped me to still enjoy learning. I'm not trying to advertise my blog and you don't need to read it; the important part is the diagram at the end of "Learning & Failure": https://sattlerjoshua.com/writing/2026-02-01-thoughts-on-ai-.... In summary, when something is important and long-term, I invest heavily in understanding and use an approach that maximizes understanding over speed. Not sure if you can translate it 100% to your situation, but maybe it helps to have some kind of guideline for when to spend more time thinking instead of going straight to an AI for the solution.
To be honest, I do not quite understand the author's point. If he believes that agentic coding or AI has a negative impact on being a thinker, or prevents him from thinking critically, he can simply stop using them.
Why blame these tools if you can stop using them, and they won't have any effect on you?
In my case, my problem was often overthinking before starting to build anything. Vibe coding rescued me from that cycle. Just a few days ago, I used openclaw to build and launch a complete product via a Telegram chat. Now, I can act immediately rather than just recording an idea and potentially getting to it "someday later"
To me, that's evolutionary. I am truly grateful for the advancement of AI technology and this new era. Ultimately, it is a tool you can choose to use or not, rather than something that prevents you from thinking more.
For me personally, the problem is my teammates. The ability or will to critically think, or investigate existing tools in the codebase seems to disappear. Too often now I have to send back a PR where something is fixed using novel implementations instead of the single function call using existing infrastructure.
I miss entering flow state when coding. When vibe coding, you are constantly interrupted and only think very shallowly. I never see anyone enter flow state while vibe coding.
I've found that it's often useful to spend the time thinking about the way I would architect the code (down to a fair level of minutia) before letting the agent have a go.
That way my 'thinker' is satiated and also challenged - Did the solution that my thinker came up with solve the problem better than the plan that the agent wrote?
Then either I acknowledge that the agent's solution was better, giving my thinker something to chew on for the next time; or my solution is better which gives the thinker a dopamine hit and gives me better code.
I will never not be upset at my fellow engineers for selling out the ONE thing that made us valuable and respected in the marketplace and trying to destroy software engineering as a career because "Claude Code go brrrrrr" basically.
It's like we had the means for production and more or less collectively decided "You know what? Actually, the bourgeoisie can have it, sure."
The personification of the quote “your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should”
I feel the existential problem for a world that follows the religion of science and technology to its extreme is that most people in STEM have no foundation in the humanities, so ethical and philosophical concerns never pass through their minds.
We have signed a pact with the devil to help us through boring tasks, and no one thought to ask what we would give in exchange.
My solution has been to lean into harder problems - even as side projects, if they aren't available at work.
I too am an ex-physicist used to spending days thinking about things, but programming is a gold mine as it is adjacent to computer science. You can design a programming language (or improve an existing one), try to build a better database (or improve an existing one), or many other things that are quite hard.
The LLM is a good rubber duck for exploring the boundaries of human knowledge (or at least knowledge common enough to be in its training set). It can't really "research" on its own, and whenever you suggest something novel and plausible it gets sycophantic, but it can help you prototype ideas and implementation strategies quite fast, and it can help you explore how existing software works and tackles similar problems (or help you start working on an existing project).
I miss the thrill of running through the semi-parched grasslands and the heady mix of terror, triumph, and trepidation as we close in on our meal for the week.
I can relate to this. Coding satisfies my urge to build and ship and have an impact on the world. But it doesn't make me think hard. Two things which I've recently gravitated to outside of coding which make me think: blogging and playing chess.
Maybe I subconsciously picked these up because my Thinker side was starved for attention. Nice post.
I see where you are coming from, but I think what has really gone for a toss is the utility of thinking hard.
Thinking hard has never been easier.
I think AI for an autodidact is a boon. Now I suddenly have a teacher who is always accessible and will teach me whatever I want, for as long as I want, exactly the way I want, and I don't have to worry about my social anxiety kicking in.
Learn advanced cryptography? AI. Figure out formal verification? AI. And so on.
I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.
I've thought harder about problems this last year than I have in a long time.
The sampling rate we use to take in information is fixed. And we always find a way to work with the sampled information, no matter whether the input information density is high or low.
We can play a peaceful game or an intense one.
Now, when we think, we can always find the right level of abstraction to think at. Decades ago a programmer thought in machine code; now we think with high-level concepts, maybe heading towards philosophy.
A good outcome always requires hard thinking. We can, and we WILL, think hard at the appropriate level.
People here seem to be conflating thinking hard and thinking a lot.
Most examples of “thinking hard” mentioned in the comments sound like thinking about a lot of stuff superficially, rather than about one particular problem deeply, which is what OP is referring to.
If you actually have a problem worth thinking deeply about, AI usually can’t help with it. For example, AI can’t help you make performant stencil buffers on a Nokia N-Gage for fun. It just doesn’t have that in it. Plenty of such problems abound, especially in domains involving some extreme or other (like high-throughput traffic). Just the other day someone posted a vibe-coded Wikipedia project that took ages to load (despite being “just” 66MB) and insisted it was the best that was possible, whereas Google can load the entire planet (perceptually) in a fraction of a second.
Great article. The moment I finished reading it, I thought of my time solving a UI menu problem with a lot of items in it, and the algorithm I came up with to handle different screen sizes. It took a solid 2 hours of walking and thinking. I still remember how excited I was when I had the feeling of cracking the problem. Deep thinking is something everyone has within them; how fast you can think varies, but with the right environment and time we all have it in us. That was a long time ago, though. Now I always offload some thinking to AI: it comes up with options and you just have to steer it, and it's getting better with time. Just ask it, you know. But I feel like those were the good old days, thinking deeply by yourself. Now I have a partner in AI to think along with me. Great article.
I feel like AI has given me the opportunity to think MORE, not less. I’m doing so much less mindless work, spending most of my efforts critically analyzing the code and making larger scale architectural decisions.
The author says “ Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the “good enough” mark.”
The key is to keep pushing until it gets to the 100% mark. That last 30% takes multiples longer than the first 70%, but that is where the satisfaction lies for me.
I don't think LLMs really took away much thinking; for me they replaced searching Stack Exchange to find incantations. Now I can get them instantly, customized to my situation. I miss thinking hard too, but I don't blame that on AI; it's more that as a dev you are paid to think the absolute minimal amount needed to solve an issue or implement a feature. I don't regret leaving academia, but being paid to think is something I will always miss.
I generally feel the same. But in addition, I also enjoy the pure act of coding. At least for me that’s another big part why I feel left behind with all this Agent stuff.
I agree, that's another factor. Definitely the mechanical act of coding, especially if you are good at it, gives the type of joy that I can imagine an artisan or craftsman having when doing his work.
Good highlight of the struggle between Builder and Thinker, I enjoyed the writing.
So why not work on PQC? Surely you've thought about other avenues here as well.
If you're looking for a domain where the 70% AI solution is a total failure, that's the field. You can't rely on vibe coding because the underlying math, like Learning With Errors (LWE) or supersingular isogeny graphs, is conceptually dense and hasn't been commoditized into AI training data yet. It requires that same 'several-day-soak' thinking you loved in physics, specifically because we're trying to build systems that remain secure even against an adversary with a quantum computer. It’s one of the few areas left where the Thinker isn't just a luxury, but a hard requirement for the Builder to even begin.
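As a taste of why that math resists pattern-matching, here is a toy, utterly insecure sketch of the LWE structure (a Regev-style single-bit encryption with made-up tiny parameters); the part you have to actually understand, rather than autocomplete, is why the small noise term hides the secret while still letting decryption work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters, far too small to be secure; illustration only.
n, m, q = 32, 64, 3329            # secret dimension, number of samples, modulus

# Key generation: secret s, noisy public key (A, b = A*s + e mod q).
s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-2, 3, size=m)   # small error: the whole hardness of LWE lives here
b = (A @ s + e) % q

def encrypt(bit):
    """Encrypt one bit: random 0/1 combination of samples, plus bit * q/2."""
    r = rng.integers(0, 2, size=m)
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    """The accumulated noise is small; the q/2 offset is large, so threshold."""
    d = (v - u @ s) % q
    return int(q // 4 < d < 3 * q // 4)  # close to q/2 -> the bit was 1

assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1, 1, 0))
```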
Many people here might be in a similar situation to me, but I took an online masters program that allowed for continuing education following completion of the degree. This has become one of my hobbies; I can take classes at my own expense, not worry about my grades, and just enjoy learning. I can push myself as much as I want and since the classes are hard, just completing 1 assignment is enough to force me to "think". Just sharing my experience for people who might be looking for ways to challenge themselves intellectually.
In my experience you will need to think even harder with AI if you want a decent result, although the problems you'll be thinking about will be more along the lines of "what the hell did it just write?"
The current major problem with the software industry isn't quantity, it's quality; and AI just increases the former while decreasing the latter. Instead of e.g. finding ways to reduce boilerplate, people are just using AI to generate more of it.
for "Thinker" brain food. (it still has the issue of not being a pragmatic use of time, but there are plenty interesting enough questions which it at least helps)
I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.
This may be an unfair generalization, and apologies to those who don't feel this way, but I believe STEM types like the OP are used to problem solving that's linear in the sense that the problem only exists in its field as something to be solved, and once they figure it out, they're done. The OP even described his mentality as that of a "Thinker" where he received a problem during his schooling, mulled over it for a long time, and eventually came to the answer. That's it, next problem to crack. Their whole lives revolve around this process and most have never considered anything outside it.
Even now, despite my own healthy skepticism of and distaste for AI, I am forced to respect that AI can do some things very fast. People like the OP, used to chiseling away at a problem for days, weeks, months, etc., now have that throughput time slashed. They're used to the notion of thinking long and hard about a very specific problem and finally having some output; now, code modules that are "good enough" can be cooked up in a few minutes, and if the module works the problem is solved and they need to find the next problem.
I think this is more common than most people want to admit, going back to grumblings of "gluing libraries together" being unsatisfying. The only suggestion I have for the OP is to expand what you think about. There are other comments in this thread supporting it but I think a sea change that AI is starting to bring for software folks is that we get to put more time towards enhancing module design, user experience, resolving tech debt, and so on. People being the ones writing code is still very important.
I think there's more to talk about where I do share the OP's yearning and fears (i.e., people who weren't voracious readers or English/literary majors being oneshot by the devil that is AI summaries, AI-assisted reading, etc.) but that's another story for another time.
> I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.
These people are miserable to work with if you need things done quickly and can tolerate even slight imperfection.
That operating regime is, incidentally, 95% of the work we actually get paid to do.
Personally: technical problems I usually think for a couple days at most before I need to start implementing to make progress. But I have background things like future plans, politics, philosophy, and stories, so I always have something to think about. Close-up technical thinking is great, but sometimes step back and look at the bigger picture?
I don't think AI has affected my thinking much, but that's because I probably don't know how to use it well. Whenever AI writes a lot of code, I end up having to understand if not change most of it; either because I don't trust the AI, I have to change the specification (and either it's a small change or I don't trust the AI to rewrite), the code has a leaky abstraction, the specification was wrong, the code has a bug, the code looks like it has a bug (but the problem ends up somewhere else), I'm looking for a bug, etc. Although more and more often the AI saves time and thinking vs. if I wrote the implementation myself, it doesn't prevent me from having to think about the code at all and treating it like a black box, due to the above.
I believe it is a type of burnout. AI might have accelerated both the work and that feeling.
I found that doing more physical projects helped me. Large woodworking, home improvement, projects. Built-in bookshelves, a huge butcher block bar top (with 24+ hours of mindlessly sanding), rolling workbenches, and lots of cabinets. Learning and trying to master a new skill, using new design software, filling the garage with tools...
I am thinking harder than ever due to vibe coding. How will markets shift? What will be in demand? How will the consumer side adapt? How do we position? Predicting the future is a hard problem... The thinker in me is working relentlessly since December. At least for me the thinker loves an existential crisis like no other.
I don’t think you can get the same satisfaction out of these tools if what you want to do is not novel.
If you are exploring the space of possibilities for which there are no clear solutions, then you have to think hard. Take on wildly more ambitious projects. Try to do something you don’t think you can do. And work with them to get there.
Thinking harder than I have in a long time with AI assisted coding.
As I'm providing context I get to think about what an ideal approach would look like and often dive into a research session to analyze pros and cons of various solutions.
I don't use agents much because it's important to see how a component I just designed fits into the larger codebase. That experience provides insights on what improvements I need to make and what to build next.
The time I've spent thinking about the composability, cohesiveness, and ergonomics of the code itself has really paid off. The codebase is a joy to work in, easy to maintain and extend.
The LLMs have helped me focus my cognitive bandwidth on the quality and architecture instead of the tedious and time consuming parts.
If it's this easy to convince you to stop being creative, to stop putting in effort to think critically, then you don't deserve the fulfilment that creativity and critical thinking can give you. These vibe coding self pity articles are so bizarre.
What OP wants to say is that they miss the process of thinking hard for days and weeks until, one night in bed before sleep, the brilliant idea pops up. I, too, lost my "thinking hard" process again today at work, to my pragmatism, or more precisely to my job.
The author's point is: if you use AI to solve the problem, then after the chat gives you the solution you say “oh yes, ok, I understand it, I can do it” (and no, you can’t do it).
A lot of productive thinking happens when asleep, in the shower, in flow walking or cycling or rowing.
It's hard to rationalise this as billable time, but they pay for outcome even if they act like they pay for 9-5 and so if I'm thinking why I like a particular abstraction, or see analogies to another problem, or begin to construct dialogues with mysel(ves|f) about this, and it happens I'm scrubbing my back (or worse) I kind of "go with the flow" so to speak.
Definitely thinking about the problem can be a lot better than actually having to produce it.
> At the end of the day, I am a Builder. I like building things. The faster I build, the better.
This I can’t relate to. For me it’s “the better I build, the better”. Building poor code fast isn’t good: it’s just creating debt to deal with in the future, or admitting I’ll toss out the quickly built thing since it won’t have longevity. When quality comes into play (not just “passed the tests”, but is something maintainable, extensible, etc), it’s hard to not employ the Thinker side along with the Builder. They aren’t necessarily mutually exclusive.
Then again, I work on things that are expected to last quite a while and aren’t disposable MVPs or side projects. I suppose if you don’t have that longevity mindset it’s easy to slip into Build-not-Think mode.
Cognitive skills are just like any other - use them and they will grow, do not and they will decline. Oddly enough, the more one increases their software engineering cognition, the less the distance between "The Builder" and "The Thinker" becomes.
I think it's just another abstraction layer, and moves the thinking process from "how do I solve this problem in code?" to "how do I solve this problem in orchestration?".
I recently used the analogy of when compilers were invented. Old-school coders wrote machine code, and handled the intricacies of memory and storage and everything themselves. Then compilers took over, we all moved up an abstraction layer, and started using high-level languages to code in. There was a generation of programmers who hated compilers because they wrote bad, inelegant, inefficient programs. And for years they were right.
The hard problems now are "how can I get a set of non-deterministic, fault-prone, LLM agents to build this feature or product with as few errors as possible, with as little oversight as possible?". There's a few generic solutions, a few good approaches coming out, but plenty of scope for some hard thought in there. And a generic approach may not work for your specific project.
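To make "a few generic solutions" concrete, here's a minimal sketch of one of them, in Rust purely for illustration: bound the number of attempts and gate every agent result behind deterministic checks before accepting it. run_agent and run_checks are hypothetical stand-ins for your agent harness and your test/lint gate, not a real API.

    // Hypothetical sketch: wrap a fault-prone agent in deterministic checks.
    struct Patch {
        description: String,
        diff: String,
    }

    // Stand-in: imagine this calls an LLM agent with the task plus prior failure feedback.
    fn run_agent(task: &str, feedback: &[String]) -> Patch {
        Patch {
            description: format!("{task} ({} prior failures)", feedback.len()),
            diff: String::new(),
        }
    }

    // Stand-in: imagine this applies the patch and runs tests/lints deterministically.
    fn run_checks(_patch: &Patch) -> Result<(), String> {
        Ok(())
    }

    fn build_with_oversight(task: &str, max_attempts: usize) -> Option<Patch> {
        let mut feedback = Vec::new();
        for _ in 0..max_attempts {
            let patch = run_agent(task, &feedback);
            match run_checks(&patch) {
                Ok(()) => return Some(patch),         // accept only verified output
                Err(reason) => feedback.push(reason), // feed the failure into the next attempt
            }
        }
        None // give up and escalate to a human instead of looping forever
    }

    fn main() {
        match build_with_oversight("add pagination to the search endpoint", 3) {
            Some(patch) => println!("accepted: {} ({} bytes of diff)", patch.description, patch.diff.len()),
            None => println!("escalating to a human"),
        }
    }

The point isn't this specific loop; it's that the hard thought goes into choosing the checks, the retry budget, and the escalation path.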
I always search the web, ask others, or read books in order to find a solution. When I do not find an answer from someone else, that's where I have to think hard.
I think this problem existed before AI. At least in my current job, there is constant, unrelenting demand for fast results. “Multi-day deep thinking” sounds like an outrageous luxury, at least in my current job.
Even 30 years ago when I started in the industry, most jobs required very little deep thinking. All of mine has been done on personal projects. That's just the reality of the typical software engineering job.
I really don't believe AI allows you to think less hard. If it did, it would be amazing, but the current AI hasn't got to that capability. It forces you to think about different things at best.
You forget about fake sleep, loaded with fake dopamine hits before bed AND broken sleep schedules; and eating fake ultraprocessed food instead of whole foods.
Eventually I always get to a problem I can't solve by just throwing an LLM at it and have to go in and properly debug things. At that point knowing the code base helps a hell of a lot, and I would've been better off writing the entire thing by hand.
For me, Claude, Suno, Gemini and AI tools are pure bliss for creation, because they eliminate the boring grunt work. Who cares how to implement OAuth login flow, or anything that has been done 1000 times?
What a bizarre claim. If you can solve anything by thinking, why don't you become a scientist? Think of a theory that unites quantum physics and general relativity.
I feel that AI doesn't necessarily replace my thinking, but actually helps to explore deeper - on my behalf - alternative considerations in the approach to solving a problem, which in turn better informs my thinking.
I definitely relate to this. Except that while I was in the 1% in university who thought hard, I don't think my success rate was that high. My confidence in the time was quite high, though, and I still remember the notable successes.
And also, I haven't started using AI for writing code yet. I'm shuffling toward that, with much trepidation. I ask it lots of coding questions. I make it teach me stuff. Which brings me to the point of my post:
The other day, I was looking at some Rust code and trying to work out the ownership rules. In theory, I more or less understand them. In practice, not so much. So I had Claude start quizzing me. Claude was a pretty brutal teacher -- he'd ask 4 or 5 questions, most of them solvable from what I knew already, and then 1 or 2 that introduced a new concept that I hadn't seen. I would get that one wrong and ask for another quiz. Same thing: 4 or 5 questions, using what I knew plus the thing just introduced, plus 1 or 2 with a new wrinkle.
I don't think I got 100% on any of the quizzes. Maybe the last one; I should dig up that chat and see. But I learned a ton, and had to think really hard.
Somehow, I doubt this technique will be popular. But my experience with it was very good. I recommend it. (It does make me a little nervous that whenever I work with Claude on things that I'm more familiar with, he's always a little off base on some part of it. Since this was stuff I didn't know, he could have been feeding me slop. But I don't think so; the explanations made sense and the compiler agreed, so it'd be tough to get anything completely wrong. And I was thinking through all of it; usually the bullshit slips in stealthily in the parts that don't seem to matter, but I had to work through everything.)
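For flavor, here's a hypothetical question in the spirit of those quizzes (made up for illustration, not pulled from that chat): if you uncomment the commented-out line, does this compile, and why not?

    fn main() {
        let s = String::from("hello");
        let t = s; // ownership of the String moves from `s` to `t`
        // println!("{}", s); // error[E0382]: borrow of moved value: `s`
        println!("{}", t); // fine: `t` now owns the String

        let n = 5;
        let m = n; // i32 implements Copy, so `n` is still usable
        println!("{} {}", n, m);
    }

Working through why the String moves but the integer copies is exactly the kind of small, checkable exercise those quizzes were built from.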
I feel like I'm doing much nicer thinking now: more systems thinking. Not only that, I'm iterating on system design a lot more, because it is a lot easier to change with AI.
Well, thinking hard is still there if you work on hard abstract problems. I keep thinking very hard, even though 4 CCs pump code while I do this. Besides, being a Garry Kasparov, playing on several tables, takes thinking.
Give the AI less responsibility but more work. Immediate inference is a great example: if the AI can finish my lines, my `if` bodies, my struct instantiations, type signatures, etc., it can reduce my second-by-second work significantly while taking little of my cognitive agency.
These are also tasks the AI can succeed at rather trivially.
Better completions are not as sexy, but amid all the pretending that agents are great engineers, it's an amazing feature that often gets glossed over.
Another example is automatic test generation or early correctness warnings.
If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of tests can be configured conservatively relative to the AI of the day.
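To make that concrete, here's a sketch of the kind of short, conservative test I mean; the parse_port helper is made up for illustration, and the tests are deliberately the obvious ones.

    // Hypothetical example: a small existing helper plus the sort of
    // push-button tests an assistant might propose for it.
    fn parse_port(s: &str) -> Option<u16> {
        s.trim().parse().ok()
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn parses_valid_port() {
            assert_eq!(parse_port(" 8080 "), Some(8080));
        }

        #[test]
        fn rejects_out_of_range_port() {
            assert_eq!(parse_port("70000"), None); // u16 max is 65535
        }
    }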
Warnings can just be flags in the editor spotting obvious mistakes. Off-by-one errors, for example, which might go unnoticed for a while, would be an achievable and valuable notice.
Also, automatic debugging and feeding the raw debugger log into an AI to parse seems promising, but I've done little of it.
...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.
These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.
The problem with the "70% solution" is that it creates a massive amount of hidden technical debt. You aren't thinking hard because you aren't forced to understand the edge cases or the real origin of the problem. It used to be the case that you needed to plan 10 steps ahead because refactoring was expensive; now people just focus on the next problem ahead, but the compounding AI slop will blow up eventually.
yes but you solved problems already solved by someone else.
how about something that hasn't been solved, or yet even noticed? that gives the greatest satisfaction
With AI, I now think much harder. Timelines are shorter, big decisions are closer together, and more system interactions have to be "grokked" in my head to guide the model properly.
I'm more spent than before where I would spend 2 hours wrestling with tailwind classes, or testing API endpoints manually by typing json shapes myself.
“We now buy our bread… it comes sliced… and sure you can just go and make your sandwich and it won’t be a rustic, sourdough that you spent months cultivating. Your tomatoes will be store bought not grown heirlooms. In the end… you have lost the art of baking bread. And your sandwich making skills are lost to time… will humanity ever bake again with these mass factories of bread? What have we lost! Woe is me. Woe is me.”
That is a very good analogy - sliced shop bread is tasteless and not that good for you compared to sourdough. Likewise awful store bought tomatoes taste like nothing compared to heirloom tomatoes and arguably have different nutritional content.
Shop bread and tomatoes, though, can be manufactured without any thought about who makes them, and they can be reliably manufactured without someone guiding an LLM, which is perhaps where the analogy falls down; we always want them to be the same, but every piece of software is different.
There's an irony here -- the same tools that make it easy to skim and summarize can also be used to force deeper thinking. The problem isn't the tools, it's the defaults.
I've found that the best way to actually think hard about something is to write about it, or to test yourself on it. Not re-read it. Not highlight it. Generate questions from the material and try to answer them from memory.
The research on active recall vs passive review is pretty clear: retrieval practice produces dramatically better long-term retention than re-reading. Karpicke & Blunt (2011) showed that practice testing outperformed even elaborative concept mapping.
So the question isn't whether AI summarizers are good or bad -- it's whether you use them as a crutch to avoid thinking, or as a tool to compress the boring parts so you can spend more time on the genuinely hard thinking.
> I have tried to get that feeling of mental growth outside of coding
A few years before this wave of AI hit, I got promoted into a tech lead/architect role. All of my mental growth since then has been learning to navigate office politics and getting the 10k ft view way more often.
I was already telling myself "I miss thinking hard" years before this promotion. When I build stuff now, I do it with a much clearer purpose. I have sincerely tried the new tools, but I'm back to just using google search if anything at all.
All I did was prove to myself the bottleneck was never writing code, but deciding why I'm doing anything at all. If you want to think so hard you stay awake at night, try existential dread. It's an important developmental milestone you'd have been forced to confront anyway even 1000 years ago.
My point is, you might want to reconsider how much you blame AI.
You don't have to miss it, buy a differential equation book and do one per day. Play chess on hard mode. I mean there's so many ways to make yourself think hard daily, this makes no sense.
It's like saying I miss running. Get out and run then.
Great, so does that mean that it is time to vibe code our own alternatives of everything such as the Linux kernel because the AI is sure 'smarter' than all of us?
Seen a lot of DIY vibe coded solutions on this site and they are just waiting for a security disaster. Moltbook being a notable example.
"Before you read this post, ask yourself a question: When was the last time you truly thought hard?
...
a) All the time. b) Never. c) Somewhere in between."
Dude, I know you touched on this but seriously. Just don't use AI then. It's not hard, it's your choice to use it or not. It's not even making you faster, so the pragmatism argument doesn't really work well! This is a totally self-inflicted problem that you can undo any time you want.
That sucks, but honestly I’d get out of there as fast as possible. Life is too short to live under unfulfilling work conditions for any extended amount of time.
I have a Claude code set up in a folder with instructions on how to access iMessage. Ask it questions like “What did my wife say I should do next Friday?”
Reads the SQLite db and shit. So burn your tokens on that.
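For anyone curious what "reads the SQLite db" can look like if you wire it up yourself, here's a rough sketch using the rusqlite crate. The chat.db path and the message table reflect how the iMessage store is commonly described, but treat the schema as an assumption and check it on your own machine (macOS will also want Full Disk Access granted).

    // Hypothetical sketch: print the ten most recent message texts from chat.db.
    use rusqlite::Connection;

    fn main() -> rusqlite::Result<()> {
        let home = std::env::var("HOME").expect("HOME not set");
        // Assumed location of the iMessage store on macOS.
        let db = Connection::open(format!("{home}/Library/Messages/chat.db"))?;
        let mut stmt = db.prepare(
            "SELECT text FROM message WHERE text IS NOT NULL ORDER BY date DESC LIMIT 10",
        )?;
        let rows = stmt.query_map([], |row| row.get::<_, String>(0))?;
        for text in rows {
            println!("{}", text?);
        }
        Ok(())
    }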
It's not hard to burn tokens on random bullshit (see moltbook). If you really can deliver results at full speed without AI, it shouldn't be hard to keep cover.
Instant upvote for a Philipp Mainländer quote at the end. He's the OG "God is Dead" guy, and Nietzsche was reacting (very poorly) to Mainländer and other pessimists like Schopenhauer when he followed up with his own, shittier version of "god is dead".
Please read up on his life. Mainlander is the most extreme/radical Philosophical Pessimist of them all. He wrote a whole book about how you should rationally kill yourself and then he killed himself shortly after.
> I tried getting back in touch with physics, reading old textbooks. But that wasn’t successful either. It is hard to justify spending time and mental effort solving physics problems that aren’t relevant or state-of-the-art
I tried this with physics and philosophy. I think I want a mix of hard but meaningful. For academic fields like that, it's impossible for a regular person to do as a hobby. Might as well just do puzzles or something.
Every time I try to use LLMs for coding, I completely lose touch with what it's doing, it does everything wrong and it can't seem to correct itself no matter how many times I explain. It's so frustrating just trying to get it to do the right thing.
I've resigned to mostly using it for "tip-of-my-tongue" style queries, i.e. "where do I look in the docs". Especially for Apple platforms where almost nothing is documented except for random WWDC video tutorials that lack associated text articles.
I don't trust LLMs at all. Everything they make, I end up rewriting from scratch anyway, because it's always garbage. Even when they give me ideas, they can't apply them properly. They have no standards, no principle. It's all just slop.
I hate this. I hate it because LLMs give so many others the impression of greatness, of speed, and of huge productivity gains. I must look like some grumpy hermit, stuck in their ways. But I just can't get over how LLMs all give me the major ick. Everything that comes out of them feels awful.
My standards must be unreasonably high. Extremely, unsustainably high. That must also be the reason I hardly finish any projects I've ever started, and why I can never seem to hit any deadlines at work. LLMs just can't reach my exacting, uncompromising standards. I'm surely expecting far too much of them. Far too much.
I guess I'll just keep doing it all myself. Anything else really just doesn't sit right.
I refer to it as "Think for me SaaS", and it should be avoided like the plague. Literally, it will give your brain a disease we haven't even named yet.
It's as if I woke up in a world where half of the restaurants worldwide started changing their name to McDonalds and gaslighting all their customers into thinking McDonalds is better than their "from scratch" menu.
Just don't use these agentic tools; they legitimately are weapons whose target is your brain. You can ship just as fast with autocomplete and decent workflows, and you know it.
It's weird, I don't understand why any self-respecting dev would support these companies. They are openly hostile about their plans for the software industry (and many other verticals).
I see it as a weapon being used by a sect of the ruling class to diminish the value of labor. While I'm not confident they'll be successful, I'm very disappointed in my peers who are cheering them on in that mission. My peers are obviously being tricked by promises of being able to join that class, but that's not what's going to happen.
You're going to lose that thinking muscle, and therefore the value of your labor is going to be directly correlated to the quantity and quality of tokens you can afford (or be given, loaned!?)
I'm with you. It scares me how quickly some of my peers' critical thinking and architectural understanding have noticeably atrophied over the last year and a half.
Pre-processed food consumer complains about not cooking anymore. /s
... OK I guess. I mean, sorry, but if that's a revelation to you, that by using a skill less you hone it less, you were clearly NOT thinking hard BEFORE you started using AI. It sure didn't help, but the problem didn't start then.
This March 2025 post from Aral Balkan stuck with me:
https://mastodon.ar.al/@aral/114160190826192080
"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.
When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."
This is very insightful, thanks. I had a similar thought regarding data science in particular. Writing those pandas expressions by hand during exploration means you get to know the data intimately. Getting AI to write them for you limits you to a superficial knowledge of said data (at least in my case).
And when programming with agentic tools, you need to actively push for the idea to not regress to the most obvious/average version. The amount of effort you need to expend on pushing the idea that deviates from the 'norm' (because it's novel), is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.
There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back, is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
You just described the burden of outsourcing programming.
With the basic and enormous difference that the feedback loop is 100 or even 1000x faster. Which changes the type of game completely, although other issues will probably arise as we try this new path.
We need a new word for on-premise offshoring.
On-shoring ;
> On-shoring
I thought "on-shoring" is already commonly used for the process that undos off-shoring.
100%! There is significant analogy between the two!
There is a reason management types are drawn to it like flies to shit.
I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see, the other is frustrating, leaves you with a lot of scratches and ultimately both of you "agreeing" on a marginal compromise.
> need to make it crystal clear
That's not an upside in that it's unique to LLM vs human written code. When writing it yourself, you also need to make it crystal clear. You do that in the language of implementation.
Yet another example of "comments that are only sort of true because high temperature sampling isn't allowed".
If you use LLMs at very high temperature with samplers which correctly keep your writing coherent (i.e. Min_p, or better like top-h, P-less decoding, etc), than "regression to the mean" literally DOES NOT HAPPEN!!!!
Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.
LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature will more likely increase the likelihood of unexecutable pseudocode than it would create a valid but more esoteric implementation of a problem.
How do you configure LLM temperature in coding agents, e.g. opencode?
https://opencode.ai/docs/agents/#temperature
set it in your opencode.json
You can't without hacking it! That's my point! The only places you can easily are via the API directly, or "coomer" frontends like SillyTavern, Oobabooga, etc.
Same problem with image generation (lack of support for different SDE solvers, the image version of LLM sampling) but they have different "coomer" tools, i.e. ComfyUI or Automatic1111
Once again, porn is where the innovation is…
This is an amazing quote - thank you. This is also my argument for why I can't use LLMs for writing (proofreading is OK) - what I write is not produced as a side-effect of thinking through a problem, writing is how I think through a problem.
Coding is not at all like working a lump of clay unless you’re still writing assembly.
You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.
The human element of discovery is still there if a robot stacks the bricks based on a different set of syntax (Natural Language), nothing about that precludes authenticity or the human element of creation.
You’re both right. It just depends on the problems you’re solving and the languages you use.
I find languages like JavaScript promote the idea of “Lego programming” because you’re encouraged to use a module for everything.
But when you start exploring ideas that haven’t been thoroughly explored already, and particularly in systems languages which are less zealous about DRY (don’t repeat yourself) methodologies, then you can feel a lot more like a sculptor.
Likewise if you’re building frameworks rather than reusing them.
So it really depends on the problems you’re solving.
For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.
I think the analogy to high level programming languages misunderstands the value of abstraction and notation. You can’t reason about the behavior of an English prompt because English is underspecified. The value of code is that it has a fairly strong semantic correlation to machine operations, and reasoning about high level code is equivalent to reasoning about machine code. That’s why even with all this advancement we continue to check in code to our repositories and leave the sloppy English in our chat history.
Exactly, and that's why I find AI coding solves this well: I find it tedious to put the bricks together for the umpteenth time when I can just have an AI do it (I will of course verify the code when it's done; not advocating for vibe coding here).
This actually leaves me with a lot more time to think, about what I want the UI to look like, how I'll market my software, and so on.
yes, maybe this is why I prefer to jump directly to coding, instead of using Canva to draw the GUI and stuff. I would not know what to draw because the involvement is not so deep... or something
I dunno, when you've made about 10,000 clay pots its kinda nice to skip to the end result, you're probably not going to learn a ton with clay pot #10,001. You can probably come up with some pretty interesting ideas for what you want the end result to look like from the onset.
I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.
Depends on the problem. If the complexity of what you are solving is in the business logic or, generally low, you are absolutely right. Manually coding a signup flow #875 is not my idea of fun either. But if the complexity is in the implementation, it’s different. Doing complex cryptography, doing performance optimization or near-hardware stuff is just a different class of problems.
> If the complexity of what you are solving is in the business logic or, generally low, you are absolutely right.
The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.
So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business logic programming.
In my experience AI is pretty good at performance optimizations as long as you know what to ask for.
Can't speak to firmware code or complex cryptography, but my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.
> In my experience AI is pretty good at performance optimizations as long as you know what to ask for.
This rather tells that the kind of performance optimizations that you ask for are very "standard".
> my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.
Presumably humanity still has room to grow and not everything is already in the training set.
import claypot
trillion dollar industry boys
Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.
Ironic. The frequency and predictability of this type of response — “This criticism of new technology is invalid because someone was wrong once in the past about unrelated technology” — means there might as well be an LLM posting these replies to every applicable article. It’s boring and no one learns anything.
It would be a lot more interesting to point out the differences and similarities yourself. But then if you wanted an interesting discussion you wouldn’t be posting trite flamebait in the first place, would you?
Interesting comparison. I remember watching a video on that. Landscape paintings, portraits, etc., are an art form that has taken an enormous nosedive. We, as humans, have missed out on a lot of art because of the invention of the camera. On the other hand, the benefits of the camera need no elaboration. Currently AI has a lot of foot guns though, which I don't believe the camera had. I hope AI gets to that point too.
> Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.
This is a non sequitur. Cameras have not replaced paintings, assuming this is the inference. Instead, they serve only to be an additional medium for the same concerns quoted:
Just as this is applicable to refining a software solution captured in code, just as a painter discards unsatisfactory paintings and tries again, so too is it when people say, "that picture didn't come out the way I like, let's take another one."
> Cameras have not replaced paintings, assuming this is the inference.
You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.
Guess what, they got over it. You will too.
What stole the joy you must have felt, fleetingly, as a child that beheld the world with fresh eyes, full of wonder?
Did you imagine yourself then, as you are now, hunched over a glowing rectangle? Demanding imperiously that the world share your contempt for the sublime. Share your jaundiced view of those who pour the whole of themselves into the act of creation, so that everyone might once again be graced with wonder anew.
I hope you can find a work of art that breaks you free of your resentment.
Love your comment.
I took the liberty of pasting it to chatgpt and asked it to write another paragraph in the same style:
Perhaps it is easier to sneer than to feel, to dull the edges of awe before it dares to wound you with longing. Cynicism is a tidy shelter: no drafts of hope, no risk of being moved. But it is also a small room, airless, where nothing grows. Somewhere beyond that glowing rectangle, the world is still doing its reckless, generous thing—colors insisting on being seen, sounds reaching out without permission, hands shaping meaning out of nothing. You could meet it again, if you chose, not as a judge but as a witness, and remember that wonder is not naïveté. It is courage, practiced quietly.
Plot twist. The comment you love is the cynical one, responding to someone who clearly embraces the new by rising above caution and concern. Your GPT addition has missed the context, but at least you've provided a nice little paradox.
Thank you for brightening my morning with a brief moment of romantic idealism in a black ocean of cynicism
>> Cameras have not replaced paintings, assuming this is the inference.
> You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.
> Guess what, they got over it.
You conveniently omitted my next sentence, which contradicts your position and reads thusly:
> You will too.
This statement is assumptive and gratuitous.
Username checks out, at least.
> Username checks out, at least.
Thoughtful retorts such as this are deserving of the same esteem one affords the "rubber v glue"[0] idiom.
As such, I must oblige.
0 - https://idioms.thefreedictionary.com/I%27m+rubber%2c+you%27r...
Logic needs to be shown the door on occasion. Sometimes via the help of an ole Irish bar toss.
> Guess what, they got over it. You will too.
Prediction is difficult, especially of the future.
Yeah, and cameras changed art forever.
people still make clay pots and paint landscapes
Source?
Art history. It's how we ended up with Impressionism, for instance.
People felt (wrongly) that traditional representational forms like portraiture were threatened by photography. Happily, instead of killing any existing genres, we got some interesting new ones.
I don't get it.
I think just as hard, I type less. I specify precisely and I review.
If anything, all we've changed is working at a higher level. The product is the same.
But these people just keep mixing things up like "wow I got a ferrari now, watch it fly off the road!"
Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!
We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.
Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or... just programmers, hey. You know why? I write in C, I get machine code, I didn't write the machine code! It was all an abstraction!
Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.
Just chill, it's programming. The tools just got even better.
You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger less typos today than ever before.
I think I understand what the author is trying to say.
We miss thinking "hard" about the small details. Maybe "hard" isn't the right adjective, but we all know the process of coding isn't just typing stuff while the mind wanders. We keep thinking about the code we're typing and the interactions between the new code and the existing stuff, and keep thinking about potential bugs and issues. (This may or may not be "hard".)
And this kind of thinking is totally different from what Linus Torvalds has to think about when reviewing a huge patch from a fellow maintainer. Linus' work is probably "harder", but it's a different kind of thinking.
You're totally right it's just tools improving. When compilers improved most people were happy, but some people who loved hand crafting asm kept doing it as a hobby. But in 99+% cases hand crafting asm is a detriment to the project even if it's fun, so if you love writing asm yourself you're either out of work, or you grudgingly accept that you might have to write Java to get paid. I think there's a place for lamenting this kind of situation.
I agree. I think some of us would rather deal with small, incremental problems than address the big, high-level roadmap. High-level things are much more uncertain than isolated things that can be unit-tested. This can create feelings of inconvenience and unease.
Spot on. It’s the lumberjack mourning the axe while holding a chainsaw. The work is still hard. It’s just different. The friction comes from developers who prioritize the 'craft' of syntax over delivering value. It results in massive motivated reasoning. We see people suddenly becoming activists about energy usage or copyright solely to justify not using a tool they dislike. They will hunt for a single AI syntax error while ignoring the history of bugs caused by human fatigue. It's not about the tech. It's about the loss of the old way of working.
And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.
I disagree. It's like the lumberjack working from home watching an enormous robotic forestry machine cut trees on a set of tv-screens. If he enjoyed producing lumber, then what he sees on those screens will fill him with joy. He's producing lots of lumber. He's much more efficient than with both axe and chainsaw.
But if he enjoyed being in the forest, and _doesn't really care about lumber at all_ (Because it turns out, he never used or liked lumber, he merely produced it for his employer) then these screens won't give him any joy at all.
That's how I feel. I don't care about code, but I also don't really care about products. I mostly care about the craft. It's like solving sudokus. I don't collect solved sudokus. Once solved I don't care about them. Having a robot solve sudokus for me would be completely pointless.
> I sense a pattern that many developers care more about doing what they want instead of providing value to others.
And you'd be 100% right. I do this work because my employer provides me with enough sudokus. And I provide value back which is more than I'm compensated with. That is: I'm compensated with two things: intellectual challenge, and money. That's the relationship I have with my employer. If I could produce 10x more but I don't get the intellectual challenge? The employer isn't giving me what I want - and I'd stop doing the work.
I think "You do what the employer wants, produce what needs to be produced, and in return you get money" is a simplification that misses the literal forest for all the forestry.
You _think_ you're thinking as hard. Reading code != writing it. Just like watching someone do a thing isn't the same as actually doing it.
Correct… reading code is a much more difficult and, ultimately, more productive task.
I suspect those using the tools in the best way are thinking harder than ever for this reason.
It is simple. Continuing your metaphor, I have a choice of getting exactly where I want on a camel in 3 days, or getting to a random location somewhere on the other side of the desert on a helicopter in few hours.
And being a reasonable person I, just like the author, choose the helicopter. That's it, that's the whole problem.
Why is that the reasonable choice if it doesn't get you to your destination?
I too did a lot of AI coding but when I saw the spaghetti it made, I went back to regular coding, with ask mode not agent mode as a search engine.
You did something smart and efficient, using the least amount of energy and time needed. +1 for consciousness being a mistake
I think the more apt analog isn't a faster car, a la Ferrari, it's more akin to someone who likes to drive and now has to sit and monitor the self-driving car steer and navigate. Comparing to the Ferrari is incorrect since it still takes a similar level of agency from the driver versus a <insert slower vehicle>
This is exactly the right analogy here.
FSD is very very good most of the time. It's so good (well, v14 is, anyway), it makes it easy to get lulled into thinking that it works all the time. So you check your watch here, check your phone there, and attend to other things, and it's all good until the car decides to turn into a curb (which almost happened to me the other day) or swerve hard into a tree (which happened to someone else).
Funny enough, much like AI, Tesla is shoving FSD down people's throats by gating Autopilot 2, a lane keeping solution that worked extremely well and is much friendlier to people who want limited autonomy here and there, behind the $99/mo FSD sub (and removing the option to pay for the package out of pocket).
> We're all Linus Torvalds now.
So...where's your OS and SCM?
I get your point that wetware stills matter, but I think it's a bit much to contend that more than a handful of people (or everyone) is on the level of Linus Torvalds now that we have LLMs.
I should have been clearer. It was a pun, a take, a joke. I was referring to his day-to-day activity now, where he merges code and hardly writes any code for the Linux kernel himself.
I didn't imply most of us can do half the things he's done. That's not right.
> his day-to-day activity now, where he merges code
But even then...don't you think his insight into and ability to verify a PR far exceeds that of most devs (LLM or not)? Most of us cannot (reasonably) aspire to be like him.
Like I said, if you didn't know what you were doing before, you won't know what you're doing today.
Agentic coding in general only amplifies your ability (or disability).
You can totally learn how to build an OS and invest 5 years of your life doing so. The first version of Linux I'm sure was pretty shoddy. Same for a SCM.
I've been doing this for 30 years. At some point, your limit becomes how much time you're willing to invest in something.
> We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger less typos today than ever before.
Except Linus understands the code that is being reviewed / merged in since he already built the kernel and git by hand. You only see him vibe-coding toys but not vibe-coding in the kernel.
Today, we are going to see a gradual skill atrophy with developers over-relying on AI and once something like Claude goes down, they can't do any work at all.
The most accurate representation is that AI is going to rapidly turn lots of so-called 'senior engineers', over-reliant and unable to detect bad AI code, into the equivalent of juniors and interns.
If you can't rebuke code today, you can't rebuke code tomorrow.
> You just fat-finger less typos today than ever before.
My typos are largely admissible.
I get it.
I got excited about agents because I told myself it would be "just faster typing". I told myself that my value was never as a typist and that this is just the latest tool like all the tools I had eagerly added to my kit before.
But the reality is different. It's not just typing for me. It's coming up with crap. Filling in the blanks. Guessing.
The huge problem with all these tools is they don't know what they know and what they don't. So when they don't know they just guess. It's absolutely infuriating.
It's not like a Ferrari. A Ferrari does exactly what I tell it to, up to the first-order effects of how open the throttle is, what direction the wheels face, how much pressure is on the brakes etc. The second-order effects are on me, though. I have to understand what effect these pressures will have on my ultimate position on the road. A normie car doesn't give you as much control but it's less likely to come off the road.
Agents are like a teleport. You describe where you want to be and it just takes you directly there. You say "warm and sunny" and you might get to the Bahamas, but you might also get to the Sahara. So you correct: "oh no, I meant somewhere nice" and maybe you get to the Bahamas. But because you didn't travel there yourself you failed to realise what you actually got. Yeah, it's warm, sunny and nice, but now you're on an island in the middle of nowhere and have to import basically everything. So I prompt again and rewrite the entire codebase, right?
Linus Torvalds works with experts that he trusts. This is like a manic 5 year old that doesn't care but is eager to work. Saying we all get to be Torvalds is like saying we all get to experience true love because we have access to porn.
except the thing does not work as expected and it just makes you worse not better
That's your opinion, and you can simply not use those tools.
People are paying for it because it helps them. Who are you to whine about it?
But that's the entire flippin' problem. People are being forced to use these tools professionally at a staggering rate. It's like the industry is in its "training your replacement" era.
Like I said that's temporary. It's janky and wonky but it's a stepping stone.
Just look at image generation. Actually factually look at it. We went from horror colours vomit with eyes all over, to 6 fingers humans, to pretty darn good now.
It's only time.
Why is image generation the same as code generation?
it's not. We were able to get rid of 6 fingered hands by getting very specific, and fine tuning models with lots of hand and finger training data.
But that approach doesn't work with code, or with reasoning in general, because you would need to exponentially fine tune everything in the universe. The illusion that the AI "understands" what it is doing is lost.
It isn't.
Code generation progression in LLMs still carries a higher objective risk of failure, depending on the experience of the person using it, because:
1. You still can't trust that the code works (even if it has tests); thus it needs thorough human supervision and still requires ongoing maintenance.
2. Hence (1), it can cost you more money than the tokens you spent building it in the first place when it goes horribly wrong in production.
Image generation progression comes with close to no operational impact, and has far less human supervision and can be safely done with none.
Comments like these are why I don't browse HN nearly ever anymore
Nothing new. Whenever a new layer of abstraction is added, people say it's worse and will never be as good as the old way. Though it's a totally biased opinion; we just have issues with giving up things we like as human beings.
> Whenever a new layer of abstraction is added
LLMs aren't a "layer of abstraction."
99% of people writing in assembly don't have to drop down into manual cobbling of machine code. People who write in C rarely drop into assembly. Java developers typically treat the JVM as "the computer." In the OSI network stack, developers writing at level 7 (application layer) almost never drop to level 5 (session layer), and virtually no one even bothers to understand the magic at layers 1 & 2. These all represent successful, effective abstractions for developers.
In contrast, unless you believe 99% of "software development" is about to be replaced with "vibe coding", it's off the mark to describe LLMs as a new layer of abstraction.
As someone who's been coding for several decades now (i.e. I'm old), I find the current generation of AI tools very ... freeing.
As an industry, we've been preaching the benefits of running lots of small experiments to see what works vs what doesn't, try out different approaches to implementing features, and so on. Pre-AI, lots of these ideas never got implemented because they'd take too much time for no definitive benefit.
You might spend hours thinking up cool/interesting ideas, but not have the time available to try them out.
Now, I can quickly kick off a coding agent to try out any hare-brained ideas I might come up with. The cost of doing so is very low (in terms of time and $$$), so I get to try out far more and weirder approaches than before when the costs were higher. If those ideas don't play out, fine, but I have a good enough success rate with left-field ideas to make it far more justifiable than before.
Also, it makes playing around with one-person projects a lot more practical. Like most people with partner & kids, my down time is pretty precious, and tends to come in small chunks that are largely unplannable. For example, last night I spent 10 minutes waiting in a drive-through queue - that gave me about 8 minutes to kick off the next chunk of my one-person project development via my phone, review the results, then kick off the next chunk of development. Absolutely useful to me personally, whereas last year I would've simply sat there annoyed waiting to be serviced.
I know some people have an "outsourcing Lego" type mentality when it comes to AI coding - it's like buying a cool Lego kit, then watching someone else assemble it for you, removing 99% of the enjoyment in the process. I get that, but I prefer to think of it in terms of being able to achieve orders of magnitude more in the time I have available, at close to zero extra cost.
Totally agree. I can spend an afternoon trying out an approach to a problem or product (usually while taking meetings and writing emails as well). If it doesn't work, then that's a useful result from my time. If it does work, I can then double-down on review, tests, quality, security, etc and make sure it's all tickety-boo.
I see the current generation of AI very much as a thing in between. Opus 4.5 can think and code quite well, but it cannot do these "jumps of insight" yet. It also struggles with straightforward, but technically intricate things, where you have to max out your understanding of the problem.
Just a few days ago, I let it do something that I thought was straightforward, but it kept inserting bugs, and after a few hours of interaction it said itself it was running in circles. It took me a day to figure out what the problem was: an invariant I had given it was actually too strong, and needed to be weakened for a special case. If I had done all of it myself, I would have been faster, and discovered this quicker.
For a different task in the same project I used it to achieve a working version of something in a few days that would have taken me at least a week or two to achieve myself. The result is not efficient enough for the long term, but for now it is good enough to proceed with other things. On the other hand, with just one (painful) week more, I would have coded a proper solution myself.
What I am looking forward to is being able to converse with the AI in terms of a hard logic. That will get rid of the straightforward but technically intricate stuff that it cannot do yet properly, and it will also allow the AI to surface much quicker where a "jump of insight" is needed.
I am not sure what all of this means for us needing to think hard. Certainly thinking hard will be necessary for quite a while. I guess it comes down to when the AIs will be able to do these "jumps of insights" themselves.
That’s funny cause I feel the opposite: LLMs can automate, in a sloppy fashion, building the first trivial draft. But what remains is still thinking hard about the non trivial parts.
I don't agree with OP. I still think hard. I can guarantee anyone that in complex contexts, where you have to work on large distributed systems and very large codebases, with multiple people involved, OP’s argument falls apart. Quite simply, compared to before, we now delegate the "build" part to AI. But the engineering and architectural side always remains a product of my own mental process. We’ve moved to a higher level of abstraction; that doesn’t mean you have to think or reason less about things. On the contrary, sometimes we end up reasoning about more important problems.
I’d been feeling this until quite literally yesterday, where I sort of just forced myself to not touch an AI and grappled with the problem for hours. Got myself all mixed up with trig and angles until I got a headache and decided to back off a lot of the complexity. I doubt I got everything right, I’m sure I could’ve had a solution with near identical outputs using an AI in a fraction of the time.
But I feel better for not taking the efficient way. Having to be the one to make a decision at every step of the way, choosing the constraints and where I cut my losses on accuracy, I think has taught me more about the subject than even reading literature would’ve directly stated.
I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I've ever done.
Yeah, but thinking with an LLM is different. The article says:
> By “thinking hard,” I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.
The "thinking hard" I do with an LLM is more like management thinking. Its chaotic and full of conversations and context switches. Its tiring, sure. But I'm not spending multiple days contemplating a single idea.
The "thinking hard" I do over multiple days with a single problem is more like that of a scientist / mathematician. I find myself still thinking about my problem while I'm lying in bed that night. I'm contemplating it in the shower. I have little breakthroughs and setbacks, until I eventually crack it or give up.
It's different.
YMMV, but I've found that I actually do way more of that type of "thinking hard" thanks to LLMs. With the menial parts largely off my plate, my attention has been freed up to focus on a higher density of hard problems, which I find a lot more enjoyable.
There are a lot of hard problems to solve in orchestration. We've barely scratched the surface on this.
I very much think its possible to use LLMs as a tool in this way. However a lot of folks are not. I see people, both personally and professionally, give it a problem and expect it to both design and implement a solution, then hold it as a gold standard.
I find the best uses, at least for myself, are smaller parts of my workflow where I'm not going to learn anything from doing it:
- build one to throw away: give me a quick prototype to get stakeholder feedback
- straightforward helper functions: I have the design and parameters planned, just need an implementation that I can review
- tab-completion code-gen
- if I want leads for looking into something (libraries, tools) and Googling isn't cutting it
> then hold it as a gold standard
I just changed employers recently in part due to this: dealing with someone that appears to now spend his time coercing LLM's to give the answers he wants, and becoming deaf to any contradictions. LLMs are very effective at amplifying the Reality Distortion Field for those that live in them. LLMs are replacing blog posts for this purpose.
I echo this sentiment. Even though I'm having Claude Code write 100% of the code for a personal project as an experiment, the need for thinking hard is very present.
In fact, since I don't need to do low-thinking tasks like writing boilerplate or repetitive tests, I find my thinking ratio is actually higher than when I write code normally.
I'm with you, thinking about architecture is generally still a big part of my mental effort. But for me most architectural problems are solved in short periods of thought and a lot of iteration. Maybe it's a skill issue, but neither now nor in the pre-LLM era have I been able to pre-solve all the architecture with pure thinking.
That said, architectural problems have also been less difficult, for the simple fact that research and prototyping have become faster and cheaper.
I think it depends on the scope and level of solution I accept as “good”. I agree that often the thinking for the “next step” is too easy architecturally. But I still enjoy thinking about the global optimum or a “perfect system”, even it’s not immediately feasible, and can spend large amounts of time on this.
And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.
I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.
I think local code architecture was a very easy domain for “optimality” that is actually tractable and the joy that comes with it, and LLMs are harmful to that, but I don’t think there’s nothing to replace it with.
And thinking of how to convey all of that to Claude without having to write whole books :)
tfw you start expressing your thoughts as code because its shorter instead
Ya, they are programming languages after all. Language is really powerful when you really know how to use it. Some of us are more comfortable with the natural variety, some of us are more comfy with code ¯\_(ツ)_/¯
Agreed. My recent side projects involve lots of thinking over days and weeks.
With AI we can set high bars and do complex original stuff. Obviously boilerplate and common patterns are slop, slapped out without much thinking. That's why you branch into new creative territory. The challenge then becomes visualising the mental map of modular pieces all working nicely together at the right time to achieve your original intent.
That's not thinking hard; you are making decisions
Reading this comment and other similar comments there's definitely a difference between people. Personally I agree and resonate a lot with the blog post, and I've always found designs of my programs to come sort of naturally. Usually the hard problems are the technical problems and then the design is figured out based on what's needed to control the program. I never had to think that hard about design.
Aptitude testing centers like Johnson O'Connor have tests for that. There are (relatively) huge differences between different people's thinking and problem solving styles. For some, creating an efficient process feels natural, while others need stability and redundancy. Programmers are by and large the latter.
[1]: https://www.jocrf.org/how-clients-use-the-analytical-reasoni...
It's certainly a different style of thinking hard. I used to really stress myself over coding - i.e. I would get frustrated that solving an issue would cause me to introduce some sort of hack or otherwise snowball into a huge refactor. Now I spend most of my time thinking about what cool new features I am going to build and not really stressing myself out too much.
I'd go as far as to say I think harder now – or at least quicker. I'm not wasting cycles on chores; I can focus on the bigger picture.
I've never felt more mental exhaustion than after a LLM coding session. I assume that is a result of it requiring me to think harder too.
I feel this too. I suspect its a byproduct of all the context switching I find myself doing when I'm using an LLM to help write software. Within a 10 minute window, I'll read code, debug a problem, prompt, discuss the design, test something, do some design work myself and so on.
When I'm just programming, I spend a lot more time working through a single idea, or a single function. Its much less tiring.
It wasn't until I read your comment that I was able to pinpoint why the mental exhaustion feels familiar. It's the same kind (though not degree) of exhaustion as formal methods / proofs.
Except without the reward of an intellectual high afterwards.
I think OP's post is an attempt to move us past this stage of the discussion, which is frankly old hat.
The point they are making is that using AI tools makes it a lot harder for them to keep up the discipline to think hard.
This may or may not be true for everyone.
It is a different kind of thinking, though.
I use Claude Code a lot, and it always lets me know the moment I stopped thinking hard, because it will build something completely asinine. Garbage in, garbage out, as they say...
it's how you use the tool... reminds me of that episode of The Simpsons when Homer gets a gun licence... he goes from not using it at all, to using it a little, to using it without thinking about what he's doing and for ludicrous things...
thinking is tiring and life is complicated, the tool makes it easy to slip into bad habits, and bad habits are hard to break even when you recognise it's a bad habit.
Many people are too busy/lazy/self-unaware to evaluate their behaviour to recognise a bad habit.
there's no such thing as right or wrong, so the following isn't intended as any form of judgement or admonition, merely an observation that you are starting to sound like an llm
> you are starting to sound like an llm
My observation: I've always had that "sound." I don't know or care much about what that implies. I will admit I'm now deliberately avoiding em dashes, whereas I was once an enthusiastic user of them.
Yes, if anything I think harder because I know it's on the frontier of whatever I'm building (so I'm more motivated and there's much more ROI)
I haven't reduced my thinking! Today I asked AI to debug an issue. It came up with a solution that was clearly correct, but it didn't explain why the code was in that state. I kept steering the AI (which just wanted to fix) toward figuring out the why, and at some point it dug through git and GitHub issues in a very cool way. Finally it pulled out something that made sense: it was defensive programming introduced to fix an issue somewhere else, which was in turn also fixed, so it was now useless.
At that point an idea popped into my mind and I decided to look for similar patterns in the codebase related to the change, and found three: one was a non-bug, two were latent bugs.
Shipped a fix, plus two fixes for bugs yet to be discovered.
>I haven't reduced my thinking!
You just detailed an example of where you did in fact reduce your thinking.
Managers who tell people what to get done do not think about the problem.
I think my message is doing a disservice to explaining what actually happened because a lot of it happens in my head.
There is a lot of thinking involved. What's reduced is search tooling. I can be way more fuzzy: rather than `rg 'whatever'`, I now say "find this and similar patterns".
Did you use your AI to create that list for you?
I've had the completely opposite experience as somebody who also likes to think more than to build: LLMs take much of the legwork of actually implementing a design, fixing trivial errors etc. away from me and let me validate theories much more quickly than I could do by myself.
More importantly, thinking and building are two very different modes of operating and it can be hard to switch at a moment's notice. I've definitely noticed myself getting stuck in "non-thinking building/fixing mode" at times, only realizing that I've been making steady progress in the wrong direction an hour or two in.
This happens way less with LLMs, as they provide natural time to think while they churn away at doing.
Even when thinking, they can help: They're infinitely patient rubber ducks, and they often press all the right buttons of "somebody being wrong on the Internet" too, which can help engineers that thrive in these kinds of verbal pro/contra discussions.
One thing this discussion made me realize is that "thinking hard" might not be a single mode of thinking.
In grad school, I had what I'd call the classic version. I stayed up all night mentally working on a topology question about turning a 2-torus inside out. I already knew you can't flip a torus inside out in ordinary R^3 without self-intersection. So I kept moving and stretching the torus and the surrounding space in my head, trying to understand where the obstruction actually lived.
Sometime around sunrise, it clicked that if you allow the move to go through infinity (so effectively S^3), the inside/outside distinction I was relying on just collapses, and the obstruction I was visualizing dissolves. Birds were chirping, I hadn't slept, and nothing useful came out of it, but my internal model of space felt permanently upgraded. That's clearly "thinking hard" in that sense.
But there's another mode I've experienced that feels related but different. With a tough Code Golf problem, I might carry it around for a week. I'm not actively grinding on it the whole time, but the problem stays loaded in the background. Then suddenly, in the shower or on a walk, a compression trick or a different representation just clicks.
That doesn't feel "hard" moment to moment. It's more like keeping a problem resident in memory long enough for the right structure to surface.
One is concentrated and exhausting, the other is diffuse and slow-burning. They're different phenomenologically, but both feel like forms of deep engagement that are easy to crowd out.
I had similar thoughts recently. I wouldn't consider myself "the thinker", but I simply missed learning by failure. You almost don't fail anymore using AI. If something fails, it feels like it's not your fault but the AI messed up. Sometimes I even get angry at the AI for failing, not at myself. I don't have a solution either, but I came up with a guideline on when and how to use AI that has helped me to still enjoy learning. I'm not trying to advertise my blog and you don't need to read it, the important part is the diagram at the end of "Learning & Failure": https://sattlerjoshua.com/writing/2026-02-01-thoughts-on-ai-.... In summary, when something is important and long-term, I heavily invest into understanding and use an approach that maximizes understanding over speed. Not sure if you can translate it 100% to your situation, but maybe it helps to have some kind of guideline for when to spend more time thinking instead of directly using an AI to get to the solution.
To be honest, I do not quite understand the author's point. If he believes that agentic coding or AI has a negative impact on being a thinker, or prevents him from thinking critically, he can simply stop using it.
Why blame these tools when you can stop using them, and then they won't have any effect on you?
In my case, my problem was often overthinking before starting to build anything. Vibe coding rescued me from that cycle. Just a few days ago, I used openclaw to build and launch a complete product via a Telegram chat. Now, I can act immediately rather than just recording an idea and potentially getting to it "someday later"
To me, that's evolutional. I am truly grateful for the advancement of AI technology and this new era. Ultimately, it is a tool you can choose to use or not, rather than something that prevents you from thinking more.
For me personally, the problem is my teammates. The ability or will to think critically, or to investigate existing tools in the codebase, seems to have disappeared. Too often now I have to send back a PR where something is fixed with a novel implementation instead of a single call into existing infrastructure.
I miss entering flow state when coding. When vibe coding, you are constantly interrupted and only think very shallowly. I never see anyone enter flow state when vibe coding.
I've found that it's often useful to spend the time thinking about the way I would architect the code (down to a fair level of minutia) before letting the agent have a go.
That way my 'thinker' is satiated and also challenged - Did the solution that my thinker came up with solve the problem better than the plan that the agent wrote?
Then either I acknowledge that the agent's solution was better, giving my thinker something to chew on for the next time; or my solution is better which gives the thinker a dopamine hit and gives me better code.
I will never not be upset at my fellow engineers for selling out the ONE thing that made us valuable and respected in the marketplace and trying to destroy software engineering as a career because "Claude Code go brrrrrr" basically.
It's like we had the means of production and more or less collectively decided "You know what? Actually, the bourgeoisie can have it, sure."
The personification of the quote “your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should”
I feel the existential problem for a world that follows the religion of science and technology to its extreme, is that most people in STEM have no foundation in humanities, so ethical and philosophical concerns never pass through their mind.
We have signed a pact with the devil to help us through boring tasks, and no one thought to ask what we would give in exchange.
Money. It's always money. It was always money.
Couldn't agree more. AI as it's designed today is very heavy on the "f u; got mine" vibe.
My solution has been to lean into harder problems - even as side projects, if they aren't available at work.
I too am an ex-physicist used to spending days thinking about things, but programming is a gold mine as it is adjacent to computer science. You can design a programming language (or improve an existing one), try to build a better database (or improve an existing one), or many other things that are quite hard.
The LLM is a good rubber duck for exploring the boundaries of human knowledge (or at least knowledge common enough to be in its training set). It can't really "research" on its own, and whenever you suggest something novel and plausible it gets sycophantic, but it can help you prototype ideas and implementation strategies quite fast, and it can help you explore how existing software works and tackles similar problems (or help you start working on an existing project).
Just sit down and think hard. If it doesn’t work, think harder.
I miss the thrill of running through the semi-parched grasslands and the heady mix of terror, triumph, and trepidation as we close in on our meal for the week.
I think that feeling is fairly common across the entire population. Play more tag, it’ll help.
There are people who still hunt, fish and run. Some even climb without ropes. It would seem the feeling is missed.
I can relate to this. Coding satisfies my urge to build and ship and have an impact on the world. But it doesn't make me think hard. Two things which I've recently gravitated to outside of coding which make me think: blogging and playing chess.
Maybe I subconsciously picked these up because my Thinker side was starved for attention. Nice post.
I see where you are coming from, but I think what has really gone for a toss is the utility of thinking hard.
Thinking hard has never been easier.
I think AI for an autodidact is a boon. Now I suddenly have a teacher who is always accessible and will teach me whatever I want, for as long as I want, exactly the way I want, and I don't have to worry about my social anxiety kicking in.
Learn advanced cryptography? AI. Figure out formal verification? AI. And so on.
I think harder because of AI.
I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.
I've thought harder about problems this last year than I have in a long time.
The sampling rate we use to take input information is fixed. And we always find a way to work with the sampled information, no matter if the input information density is high or low.
We can play a peaceful game and an intense one.
Now, when we think, we can always find the right level of abstraction to think at. Decades ago a programmer thought in machine code; now we think in high-level concepts, maybe reaching towards philosophy.
A good outcome always requires hard thinking. We can and we WILL think hard, at an appropriate level.
People here seem to be conflating thinking hard and thinking a lot.
Most examples of "thinking hard" mentioned in the comments sound like thinking about a lot of stuff superficially instead of about one particular problem deeply, which is what OP is referring to.
If you actually have a problem worth thinking deeply about, AI usually can't help with it. For example, AI can't help you make performant stencil buffers on a Nokia N-Gage for fun. It just doesn't have that in it. Plenty of such problems abound, especially in domains involving some extreme or the other (like high-throughput traffic). Just the other day someone posted a vibe-coded Wikipedia project that took ages to load (despite being "just" 66MB) and insisted it was the best it was possible to do, whereas Google can load the entire planet (perceptually) in a fraction of a second.
Great article. The moment I finished reading it, I thought of my time solving a UI menu problem with a lot of items in it, and the algorithm I came up with to handle different screen sizes. It took a solid 2 hrs of walking and thinking. I still remember how excited I was when I had the feeling of cracking the problem. Deep thinking is something everyone has within them; it just varies how fast you can think, and given the right environment and time, we've all got it in us. But that was a long time ago. Now I always offload some thinking to AI: it comes up with options and you just have to steer it, and over time it keeps getting better. Just ask it, you know. But I do feel like those were the good old days, thinking deeply by yourself. Now I have a partner in AI to think along with me. Great article.
I have a very similar background and a very similar feeling when i think of programming nowadays.
Personally, I am going deeper in Quantum Computing, hoping that this field will require thinkers for a long time.
I feel like AI has given me the opportunity to think MORE, not less. I’m doing so much less mindless work, spending most of my efforts critically analyzing the code and making larger scale architectural decisions.
The author says: “Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the “good enough” mark.”
The key is to keep pushing until it gets to the 100% mark. That last 30% takes multiples longer than the first 70%, but that is where the satisfaction lies for me.
I don't think LLMs really took away much thinking; for me they replaced searching Stack Exchange to find incantations. Now I can get them instantly, customized to my situation. I miss thinking hard too, but I don't blame that on AI. It's more that as a dev you are paid to think the absolute minimum amount needed to solve an issue or implement a feature. I don't regret leaving academia, but being paid to think I will always miss.
I generally feel the same. But in addition, I also enjoy the pure act of coding. At least for me, that's another big part of why I feel left behind with all this agent stuff.
I agree, that's another factor. The mechanical act of coding, especially if you are good at it, definitely gives the type of joy that I can imagine an artisan or craftsman feeling when doing his work.
Good highlight of the struggle between Builder and Thinker, I enjoyed the writing. So why not work on PQC? Surely you've thought about other avenues here as well.
If you're looking for a domain where the 70% AI solution is a total failure, that's the field. You can't rely on vibe coding because the underlying math, like Learning With Errors (LWE) or supersingular isogeny graphs, is conceptually dense and hasn't been commoditized into AI training data yet. It requires that same 'several-day-soak' thinking you loved in physics, specifically because we're trying to build systems that remain secure even against an adversary with a quantum computer. It’s one of the few areas left where the Thinker isn't just a luxury, but a hard requirement for the Builder to even begin.
Many people here might be in a similar situation to me, but I took an online masters program that allowed for continuing education following completion of the degree. This has become one of my hobbies; I can take classes at my own expense, not worry about my grades, and just enjoy learning. I can push myself as much as I want and since the classes are hard, just completing 1 assignment is enough to force me to "think". Just sharing my experience for people who might be looking for ways to challenge themselves intellectually.
In my experience you will need to think even harder with AI if you want a decent result, although the problems you'll be thinking about will be more along the lines of "what the hell did it just write?"
The current major problem with the software industry isn't quantity, it's quality; and AI just increases the former while decreasing the latter. Instead of e.g. finding ways to reduce boilerplate, people are just using AI to generate more of it.
have a look at https://projecteuler.net/
for "Thinker" brain food. (it still has the issue of not being a pragmatic use of time, but there are plenty interesting enough questions which it at least helps)
You were walking to your destination which was three miles away
You now have a bicycle which gets you there in a third of the time
You need to find destinations that are 3x as far away as before
I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.
This may be an unfair generalization, and apologies to those who don't feel this way, but I believe STEM types like the OP are used to problem solving that's linear in the sense that the problem only exists in its field as something to be solved, and once they figure it out, they're done. The OP even described his mentality as that of a "Thinker" where he received a problem during his schooling, mulled over it for a long time, and eventually came to the answer. That's it, next problem to crack. Their whole lives revolve around this process and most have never considered anything outside it.
Even now, despite my own healthy skepticism of and distaste for AI, I am forced to respect that AI can do some things very fast. People like the OP, used to chiseling away at a problem for days, weeks, months, etc., now have that throughput time slashed. They're used to the notion of thinking long and hard about a very specific problem and finally having some output; now, code modules that are "good enough" can be cooked up in a few minutes, and if the module works the problem is solved and they need to find the next problem.
I think this is more common than most people want to admit, going back to grumblings of "gluing libraries together" being unsatisfying. The only suggestion I have for the OP is to expand what you think about. There are other comments in this thread supporting it but I think a sea change that AI is starting to bring for software folks is that we get to put more time towards enhancing module design, user experience, resolving tech debt, and so on. People being the ones writing code is still very important.
I think there's more to talk about where I do share the OP's yearning and fears (i.e., people who weren't voracious readers or English/literary majors being oneshot by the devil that is AI summaries, AI-assisted reading, etc.) but that's another story for another time.
> I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.
These people are miserable to work with if you need things done quickly and can tolerate even slight imperfection.
That operating regime is, incidentally, 95% of the work we actually get paid to do.
Personally: technical problems I usually think for a couple days at most before I need to start implementing to make progress. But I have background things like future plans, politics, philosophy, and stories, so I always have something to think about. Close-up technical thinking is great, but sometimes step back and look at the bigger picture?
I don't think AI has affected my thinking much, but that's because I probably don't know how to use it well. Whenever AI writes a lot of code, I end up having to understand if not change most of it; either because I don't trust the AI, I have to change the specification (and either it's a small change or I don't trust the AI to rewrite), the code has a leaky abstraction, the specification was wrong, the code has a bug, the code looks like it has a bug (but the problem ends up somewhere else), I'm looking for a bug, etc. Although more and more often the AI saves time and thinking vs. if I wrote the implementation myself, it doesn't prevent me from having to think about the code at all and treating it like a black box, due to the above.
I believe it is a type of burnout. AI might have accelerated both the work and that feeling.
I found that doing more physical projects helped me. Large woodworking, home improvement, projects. Built-in bookshelves, a huge butcher block bar top (with 24+ hours of mindlessly sanding), rolling workbenches, and lots of cabinets. Learning and trying to master a new skill, using new design software, filling the garage with tools...
I wish the author would give some examples of what he wants to think hard about.
I am thinking harder than ever due to vibe coding. How will markets shift? What will be in demand? How will the consumer side adapt? How do we position? Predicting the future is a hard problem... The thinker in me is working relentlessly since December. At least for me the thinker loves an existential crisis like no other.
I think for days at a time still.
I don’t think you can get the same satisfaction out of these tools if what you want to do is not novel.
If you are exploring the space of possibilities for which there are no clear solutions, then you have to think hard. Take on wildly more ambitious projects. Try to do something you don’t think you can do. And work with them to get there.
Thinking harder than I have in a long time with AI assisted coding.
As I'm providing context I get to think about what an ideal approach would look like and often dive into a research session to analyze pros and cons of various solutions.
I don't use agents much because it's important to see how a component I just designed fits into the larger codebase. That experience provides insights on what improvements I need to make and what to build next.
The time I've spent thinking about the composability, cohesiveness, and ergonomics of the code itself have really paid off. The codebase is a joy to work in, easy to maintain and extend.
The LLMs have helped me focus my cognitive bandwidth on the quality and architecture instead of the tedious and time consuming parts.
If it's this easy to convince you to stop being creative, to stop putting in effort to think critically, then you don't deserve the fulfilment that creativity and critical thinking can give you. These vibe coding self pity articles are so bizarre.
What OP wants to say is that they miss the process of thinking hard for days and weeks until one day this brilliant idea pops up in bed before sleep. I lost my "thinking hard" process again today at work too, to my pragmatism, or more precisely to my job.
The author's point is: if you use AI to solve the problem, then after the chat gives you the solution you say “oh yes, ok, I understand it, I can do it” (and no, you can't do it).
I’d love to be able to see statistics that show LLM use and reception according to certain socioeconomic factors.
Anything in particular you expect to see?
A lot of productive thinking happens when asleep, in the shower, in flow walking or cycling or rowing.
It's hard to rationalise this as billable time, but they pay for outcomes even if they act like they pay for 9-to-5, and so if I'm thinking about why I like a particular abstraction, or seeing analogies to another problem, or beginning to construct dialogues with mysel(ves|f) about this, and it happens that I'm scrubbing my back (or worse), I kind of "go with the flow", so to speak.
Definitely thinking about the problem can be a lot better than actually having to produce it.
> At the end of the day, I am a Builder. I like building things. The faster I build, the better.
This I can’t relate to. For me it’s “the better I build, the better”. Building poor code fast isn’t good: it’s just creating debt to deal with in the future, or admitting I’ll toss out the quickly built thing since it won’t have longevity. When quality comes into play (not just “passed the tests”, but is something maintainable, extensible, etc), it’s hard to not employ the Thinker side along with the Builder. They aren’t necessarily mutually exclusive.
Then again, I work on things that are expected to last quite a while and aren’t disposable MVPs or side projects. I suppose if you don’t have that longevity mindset it’s easy to slip into Build-not-Think mode.
Cognitive skills are just like any other - use them and they will grow, do not and they will decline. Oddly enough, the more one increases their software engineering cognition, the less the distance between "The Builder" and "The Thinker" becomes.
I think it's just another abstraction layer, and moves the thinking process from "how do I solve this problem in code?" to "how do I solve this problem in orchestration?".
I recently used the analogy of when compilers were invented. Old-school coders wrote machine code, and handled the intricacies of memory and storage and everything themselves. Then compilers took over, we all moved up an abstraction layer and started using high-level languages to code in. There was a generation of programmers who hated compilers because they wrote bad, inelegant, inefficient, programs. And for years they were right.
The hard problems now are "how can I get a set of non-deterministic, fault-prone, LLM agents to build this feature or product with as few errors as possible, with as little oversight as possible?". There's a few generic solutions, a few good approaches coming out, but plenty of scope for some hard thought in there. And a generic approach may not work for your specific project.
I always search the web, ask others, or read books in order to find a solution. When I do not find an answer from someone else, that's where I have to think hard.
That's weird as I do the opposite: think by myself, then look for help if I don't know.
I think this problem existed before AI. At least in my current job, there is constant, unrelenting demand for fast results. “Multi-day deep thinking” sounds like an outrageous luxury, at least in my current job.
Which is a reason for software becoming worse across the board. Just look at Windows. The "go go go" culture is ruinous to products.
Even 30 years ago when I started in the industry, most jobs required very little deep thinking. All of mine has been done on personal projects. Thats just the reality of the typical software engineering job.
this is why productivity is a word that should really just be reserved for work contexts, and personal time is better used for feeding "The Thinker"
I really don't believe AI allows you to think less hard. If it did, it would be amazing, but the current AI hasn't got to that capability. It forces you to think about different things at best.
When people missed working hard, they turned to fake physical work (gyms). So people now need some fake thinking work.
Except for eating and sleeping, all other human activities are fake now.
We've always been doing fake thinking work since the beginning, see: puzzles
you forget about fake sleeping being loaded with fake dopamine hits before sleep AND broken sleep schedules; and eating fake ultraprocessed food instead of wholefoods.
Make sure you start every day with the type of confidence that would allow you to refer to yourself as an intellectual one-percenter
Eventually I always get to a problem I can't solve by just throwing an LLM at it and have to go in and properly debug things. At that point knowing the code base helps a hell of a lot, and I would've been better off writing the entire thing by hand.
"Sometimes you have to keep thinking past the point where it starts to hurt." - Fermi
If you feel this way, you arent using AI right.
For me, Claude, Suno, Gemini and AI tools are pure bliss for creation, because they eliminate the boring grunt work. Who cares how to implement OAuth login flow, or anything that has been done 1000 times?
I do not miss doing grunt work!
What a bizarre claim. If you can solve anything by thinking, why don't you become a scientist? Think of a theory that unites quantum physics and general relativity.
I feel that AI doesn't necessarily replace my thinking, but actually helps to explore deeper - on my behalf - alternative considerations in the approach to solving a problem, which in turn better informs my thinking.
I definitely relate to this. Except that while I was in the 1% in university who thought hard, I don't think my success rate was that high. My confidence at the time was quite high, though, and I still remember the notable successes.
And also, I haven't started using AI for writing code yet. I'm shuffling toward that, with much trepidation. I ask it lots of coding questions. I make it teach me stuff. Which brings me to the point of my post:
The other day, I was looking at some Rust code and trying to work out the ownership rules. In theory, I more or less understand them. In practice, not so much. So I had Claude start quizzing me. Claude was a pretty brutal teacher -- he'd ask 4 or 5 questions, most of them solvable from what I knew already, and then 1 or 2 that introduced a new concept that I hadn't seen. I would get that one wrong and ask for another quiz. Same thing: 4 or 5 questions, using what I knew plus the thing just introduced, plus 1 or 2 with a new wrinkle.
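To give a flavor of it, here's a made-up question in the same spirit (my own reconstruction for this comment, not something copied from the chat):

```rust
// "After `let t = s;`, can you still print `s`? Why or why not?"
fn main() {
    let s = String::from("hello");
    let t = s; // the String's ownership moves from `s` to `t`
    // println!("{}", s); // uncommenting this fails to compile:
    //                    // error[E0382]: borrow of moved value: `s`
    println!("{}", t); // fine: `t` owns the String now

    // The follow-up wrinkle: why does the same pattern work for an integer?
    let a = 5;
    let b = a; // i32 is `Copy`, so `a` is copied rather than moved
    println!("{} {}", a, b); // both still usable
}
```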
I don't think I got 100% on any of the quizzes. Maybe the last one; I should dig up that chat and see. But I learned a ton, and had to think really hard.
Somehow, I doubt this technique will be popular. But my experience with it was very good. I recommend it. (It does make me a little nervous that whenever I work with Claude on things that I'm more familiar with, he's always a little off base on some part of it. Since this was stuff I didn't know, he could have been feeding me slop. But I don't think so; the explanations made sense and the compiler agreed, so it'd be tough to get anything completely wrong. And I was thinking through all of it; usually the bullshit slips in stealthily in the parts that don't seem to matter, but I had to work through everything.)
I feel like I'm doing much nicer thinking now: more systems thinking, and not only that, I'm iterating on system design a lot more because it's a lot easier to change things with AI
Just work on more ambitious projects?
Well, thinking hard is still there if you work on hard abstract problems. I keep thinking very hard, even though 4 CCs pump out code while I do this. Besides, being a Garry Kasparov, playing on several tables, takes thinking.
Why not think hard about what to build instead of how to build it?
Give the AI less responsibility but more work. Immediate inference is a great example: if the AI can finish my lines, my `if` bodies, my struct instantiations, type signatures, etc., it can reduce my second-by-second work significantly while taking little of my cognitive agency.
These are also tasks the AI can succeed at rather trivially.
Better completions are not as sexy, but amid all the pretending that agents are great engineers, they're an amazing feature that often gets glossed over.
Another example is automatic test generation or early correctness warnings. If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of tests can be configured conservatively, relative to the AI of the day. Warnings can just be flags in the editor spotting obvious mistakes. Off-by-one errors, for example, which might go unnoticed for a while, would be an achievable and valuable notice.
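To make the off-by-one case concrete, here's the kind of slip I have in mind (a made-up snippet, not from any particular tool's output):

```rust
// Made-up illustration of an off-by-one that's easy to miss in review:
// `0..=xs.len()` includes xs.len() itself, so the last iteration indexes
// one element past the end and panics at runtime.
#[allow(dead_code)] // deliberately never called below: it would panic
fn sum_buggy(xs: &[i32]) -> i32 {
    let mut total = 0;
    for i in 0..=xs.len() {   // BUG: should be `0..xs.len()`
        total += xs[i];       // panics with "index out of bounds" when i == xs.len()
    }
    total
}

// The idiomatic version sidesteps manual indexing entirely.
fn sum_ok(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let xs = [1, 2, 3];
    println!("{}", sum_ok(&xs)); // prints 6
}
```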
Also, automatic debugging and feeding the raw debugger log into an AI to parse seems promising, but I've done little of it.
...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.
These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.
The problem with the "70% solution" is that it creates a massive amount of hidden technical debt. You aren't thinking hard because you aren't forced to understand the edge cases or the real origin of the problem. It used to be the case that you would need to plan 10 steps ahead because refactoring was expensive; now people just focus on the next problem ahead, but the compounding AI slop will blow up eventually.
would you agree that there's more time to think about what problems are worth solving?
The ziphead era of coding is over. I'll miss it too.
Would like to follow your blog, is there an rss feed?
yes but you solved problems already solved by someone else. how about something that hasn't been solved, or yet even noticed? that gives the greatest satisfaction
With AI, I now think much harder. Timelines are shorter, big decisions are closer together, and more system interactions have to be "grokked" in my head to guide the model properly.
I'm more spent than before where I would spend 2 hours wrestling with tailwind classes, or testing API endpoints manually by typing json shapes myself.
“We now buy our bread… it comes sliced… and sure you can just go and make your sandwich and it won’t be a rustic, sourdough that you spent months cultivating. Your tomatoes will be store bought not grown heirlooms. In the end… you have lost the art of baking bread. And your sandwich making skills are lost to time… will humanity ever bake again with these mass factories of bread? What have we lost! Woe is me. Woe is me.”
That is a very good analogy - sliced shop bread is tasteless and not that good for you compared to sourdough. Likewise awful store bought tomatoes taste like nothing compared to heirloom tomatoes and arguably have different nutritional content.
Shop bread and tomatoes, though, can be manufactured without any thought about who makes them, and reliably, without someone guiding an LLM, which is perhaps where the analogy falls down; we always want them to be the same, but software is different in every form.
There's an irony here -- the same tools that make it easy to skim and summarize can also be used to force deeper thinking. The problem isn't the tools, it's the defaults.
I've found that the best way to actually think hard about something is to write about it, or to test yourself on it. Not re-read it. Not highlight it. Generate questions from the material and try to answer them from memory.
The research on active recall vs passive review is pretty clear: retrieval practice produces dramatically better long-term retention than re-reading. Karpicke & Blunt (2011) showed that practice testing outperformed even elaborative concept mapping.
So the question isn't whether AI summarizers are good or bad -- it's whether you use them as a crutch to avoid thinking, or as a tool to compress the boring parts so you can spend more time on the genuinely hard thinking.
> I have tried to get that feeling of mental growth outside of coding
A few years before this wave of AI hit, I got promoted into a tech lead/architect role. All of my mental growth since then has been learning to navigate office politics and getting the 10k ft view way more often.
I was already telling myself "I miss thinking hard" years before this promotion. When I build stuff now, I do it with a much clearer purpose. I have sincerely tried the new tools, but I'm back to just using google search if anything at all.
All I did was prove to myself the bottleneck was never writing code, but deciding why I'm doing anything at all. If you want to think so hard you stay awake at night, try existential dread. It's an important developmental milestone you'd have been forced to confront anyway even 1000 years ago.
My point is, you might want to reconsider how much you blame AI.
At the day job there was a problem with performance loading data in an app.
7 months later, after waffling on it on and off, with and without AI, I finally cracked it.
The author is not wrong though: I don't hit this as often since AI. I do miss the feeling though.
Rich Hickey and the Clojure folks coined the term Hammock Driven Development. It was tongue in cheek but IMO it is an ideal to strive towards.
I think AI didn't do this. Open source, libraries, cloud, frameworks and agile conspired to do this.
Why solve a problem when you can import a library / scale up / use managed Kubernetes / etc.
The menu is great and the number of problems needing deep thought seems rare.
There might be deep thought problems on the requirements side of things but less often on the technical side.
You don't have to miss it, buy a differential equation book and do one per day. Play chess on hard mode. I mean there's so many ways to make yourself think hard daily, this makes no sense.
It's like saying I miss running. Get out and run then.
Great, so does that mean that it is time to vibe code our own alternatives of everything such as the Linux kernel because the AI is sure 'smarter' than all of us?
Seen a lot of DIY vibe coded solutions on this site and they are just waiting for a security disaster. Moltbook being a notable example.
That was just the beginning.
"Before you read this post, ask yourself a question: When was the last time you truly thought hard? ... a) All the time. b) Never. c) Somewhere in between."
What?
Dude, I know you touched on this but seriously. Just don't use AI then. It's not hard, it's your choice to use it or not. It's not even making you faster, so the pragmatism argument doesn't really work well! This is a totally self inflicted problem that you can undo any time you want.
Spoken like someone who doesn't have their company measuring their AI usage and regularly laying people off.
Need to be in the top 5% of AI users while staying in your budget of $50/month!
That sucks, but honestly I’d get out of there as fast as possible. Life is too short to live under unfulfilling work conditions for any extended amount of time.
If you can't figure out how to game this, you're both not thinking hard and not using AI effectively.
I have a Claude code set up in a folder with instructions on how to access iMessage. Ask it questions like “What did my wife say I should do next Friday?”
Reads the SQLite db and shit. So burn your tokens on that.
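If you're curious what "reads the SQLite db" amounts to, it's roughly the sketch below; the path, table, and column names are my assumptions about macOS's chat.db layout rather than anything I've verified here, so treat it as illustrative only:

```rust
// Rough sketch: dump recent iMessage rows that an agent could then interpret.
// Assumes the usual chat.db location and schema (message, handle tables).
// Requires the `rusqlite` crate, e.g. rusqlite = { version = "0.31", features = ["bundled"] }.
use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    let home = std::env::var("HOME").expect("HOME not set");
    let conn = Connection::open(format!("{home}/Library/Messages/chat.db"))?;

    // Grab the most recent messages; the "what did my wife say about Friday"
    // interpretation happens on top of raw rows like these.
    let mut stmt = conn.prepare(
        "SELECT handle.id, message.text
         FROM message JOIN handle ON message.handle_id = handle.ROWID
         WHERE message.text IS NOT NULL
         ORDER BY message.date DESC
         LIMIT 50",
    )?;
    let rows = stmt.query_map([], |row| {
        Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?))
    })?;

    for row in rows {
        let (sender, text) = row?;
        println!("{sender}: {text}");
    }
    Ok(())
}
```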
It's not hard to burn tokens on random bullshit (see moltbook). If you really can deliver results at full speed without AI, it shouldn't be hard to keep cover.
To me, thinking hard involves the following steps:
1. Take a pen and paper.
2. Write down what we know.
3. Write down where we want to go.
4. Write down our methods of moving forward.
5. Make changes to 2, using 4, and see if we are getting closer to 3. And course correct based on that.
I still do it a lot. LLMs act as an assist, not as a wholesale replacement.
Why not find a subfield that is more difficult and requires some specialization then?
I think hard all the time, AI can only solve problems for me that don't require thinking hard. Give it anything more complex and it's useless.
I use AI for the easy stuff.
Instant upvote for a Philipp Mainlander quote at the end. He's the OG "God is Dead" guy, and Nietzsche was reacting (very poorly) to Mainlander and other pessimists like Schopenhauer when he followed up with his own, shittier version of "god is dead"
Please read up on his life. Mainlander is the most extreme/radical Philosophical Pessimist of them all. He wrote a whole book about how you should rationally kill yourself and then he killed himself shortly after.
https://en.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder
https://dokumen.pub/the-philosophy-of-redemption-die-philoso...
Max Stirner and Mainlander would have been friends and are kindred spirits philosophically.
https://en.wikipedia.org/wiki/Bibliography_of_philosophical_...
> Yes, I blame AI for this.
Just don't use it. That's always an option. Perhaps your builder doesn't actually benefit from an unlimited runway detached from the cost of effort.
> I tried getting back in touch with physics, reading old textbooks. But that wasn’t successful either. It is hard to justify spending time and mental effort solving physics problems that aren’t relevant or state-of-the-art
I tried this with physics and philosophy. I think I want to do a mix of hard but meaningful. For academic fields like that, it's impossible for a regular person to do as a hobby. Might as well just do puzzles or something.
Every time I try to use LLMs for coding, I completely lose touch with what it's doing, it does everything wrong and it can't seem to correct itself no matter how many times I explain. It's so frustrating just trying to get it to do the right thing.
I've resigned to mostly using it for "tip-of-my-tongue" style queries, i.e. "where do I look in the docs". Especially for Apple platforms where almost nothing is documented except for random WWDC video tutorials that lack associated text articles.
I don't trust LLMs at all. Everything they make, I end up rewriting from scratch anyway, because it's always garbage. Even when they give me ideas, they can't apply them properly. They have no standards, no principle. It's all just slop.
I hate this. I hate it because LLMs give so many others the impression of greatness, of speed, and of huge productivity gains. I must look like some grumpy hermit, stuck in their ways. But I just can't get over how LLMs all give me the major ick. Everything that comes out of them feels awful.
My standards must be unreasonably high. Extremely, unsustainably high. That must also be the reason I hardly finish any projects I've ever started, and why I can never seem to hit any deadlines at work. LLMs just can't reach my exacting, uncompromising standards. I'm surely expecting far too much of them. Far too much.
I guess I'll just keep doing it all myself. Anything else really just doesn't sit right.
I refer to it as "Think for me SaaS", and it should be avoided like the plague. Literally, it will give your brain a disease we haven't even named yet.
It's as if I woke up in a world where half of the restaurants worldwide started changing their name to McDonalds and gaslighting all their customers into thinking McDonalds is better than their "from scratch" menu.
Just don't use these agentic tools; they legitimately are weapons whose target is your brain. You can ship just as fast with autocomplete and decent workflows, and you know it.
It's weird, I don't understand why any self-respecting dev would support these companies. They are openly hostile about their plans for the software industry (and many other verticals).
I see it as a weapon being used by a sect of the ruling class to diminish the value of labor. While I'm not confident they'll be successful, I'm very disappointed in my peers who are cheering them on in that mission. My peers are obviously being tricked by promises of being able to join that class, but that's not what's going to happen.
You're going to lose that thinking muscle, and therefore the value of your labor is going to be directly correlated to the quantity and quality of tokens you can afford (or be given, loaned!?)
Be wary!!!
I'm with you. It scares me how quickly some of my peers' critical thinking and architectural understanding have noticeably atrophied over the last year and a half.
Guy complains about his own vibe coding... stop doing it then!! Do you really think it's practical? Your job must be really easy if it is.
Pre-processed food consumer complains about not cooking anymore. /s
... OK I guess. I mean, sorry, but if that's a revelation to you, that by using a skill less you hone it less, you were clearly NOT thinking hard BEFORE you started using AI. It sure didn't help, but the problem didn't start then.