> What is it good for?
> For me, personally? It helps me overcome my task paralysis. As mentioned earlier: I have a plan. A strategy. An idea. I just need someone (or something), who has fun in churning through the implementation. I have the ideas. But boy is coding exhausting.
I find the same. AI helps me overcome any paralysis. I just think "hey it's cheap to write the prompt" and go on.
I do have an actual diagnosis, and I had the same experience over the past year: early coding harnesses at the beginning of the year, then Claude Code since its release. But after 1+ year going in that direction, I really don't want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, and I miss engaging deeply with the actual lower-level technical challenges. I do not want to manage fleets of agents. I do not want to rediscover for the hundredth time that an agent has been taking shortcuts around acceptance tests I rely on, and that I didn't catch it. Or, once again, have to get the agent to understand why and what I want it to do after its context got bloated and it started drifting completely. While I got artifacts I can use (libraries, tools, docs), including some things I'm pretty confident are state of the art, I no longer feel satisfied knowing that I used a model to generate them, even if I was the one designing every part. I feel like I'm lying any time I come to a colleague to share a cool new tool I've made. And I do not feel that relying on AI actually helped me deal with my executive function issues.
YMMV, but I'm personally feeling burnt out with AI coding agents and ready to go back to the old ways for my next personal project.
I could have written this article myself.
The addiction part, the ADHD part and the pending test part.
The fear of becoming addicted to AI is real, and I don't think I'll be capable of stopping it, considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
My Pro plan went to Max (5x), then to Max (20x), pretty quickly, and I was still burning through that weekly limit, without large agentic workflows that burn tokens. Just me and 4-5 terminals. Sometimes I was happy to hit the limit, because I was forced back to normal life.
I've gone back to Pro to stop what was happening.
Now I'm self-aware enough to notice the trend and put up safeguards, but that's only because I've always had to adapt my environment to control my behaviour, since I know direct behaviour control is abnormally challenging for me. I fear for those who won't see it coming until they're in deep.
> [...] considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
It's wild that it never dawned on me why some people around me were so quick with "Let AI do that!". I'm not saying that each and every one of them has ADHD, but I think I underestimated a) the flow of dopamine a successful prompt can set free and b) the craving for it among folks I deemed more stable than myself.
I find that the new "drug" is constantly hunting down new, cheaper models: z.ai/GLM, Mistral, DeepSeek... If you need to get your fix, find the cheaper path.
So the end game for the current generation of AI companies won't be productivity improvements but gambling, just like everything else nowadays. That's why they want to get us all into these massive casinos they call data centers and don't want us to own the slot machines.
So what if you have ideas? Other people have them too. It's not ideas that build businesses, but knowing the right people or the ability to sell products.
The gambling trope is so tired. AI development doesn't involve luck to any appreciable degree, certainly not more than hiring people to do a job can be considered "gambling" (you never know what you're going to get!).
It's just paying to get stuff done, which is how it's always been, since the dawn of man.
For most people, who are not doing this as their day-to-day job, it's just a prompt of their idea roughly sketched out, and then a miracle happens: the LLM fills in the blanks. Every time it's different, but it works, sometimes even better than initially expected. That's where the addiction and the gambling come in. Gambling is a lot of things, not only flashing lights and sounds. Some people claim prediction markets aren't gambling either, though that doesn't change the fact.
How is this different from hiring a designer, telling them "make me a website" and then waiting to see if they resolve the uncertainty into something you like or not?
I tell LLMs what to do in pretty high detail, and they do it. With LLMs I have much less variance than with coworkers.
It is different because humans take time to produce a result, while AI does it almost instantly. If you tell a programmer to do X, you have a week for your adrenaline to cool off. If you tell an AI, it will do it in minutes.
I don't think the difference between a designer and a slot machine is that one gives you results more slowly, "therefore it's not gambling".
If you're making the argument that LLMs are gambling simply because they're faster than humans, I'd like to see some evidence.
> certainly not more than hiring people to do a job can be considered "gambling"
Actually, it's quite possible that being a business manager/owner is addictive (having power over people); we just don't recognize it as such.
All gambling addiction is addiction, not all addiction is gambling.
Then you miss the point. AI use is being compared to gambling because it is addictive, partly through the same mechanism: the results (and rewards) are somewhat random, yet it makes you feel as if you're completely in control of the outcome.
Yeah, that hasn't been my experience. The outcome, for me, is extremely consistent. I ~never have to "reroll" by wiping work and doing it again.
Strange. I tell Claude Code to do things differently all the time.
I'd recommend a different workflow, with extensive upfront planning. This works extremely well for me:
https://www.stavros.io/posts/how-i-write-software-with-llms/
It's gotten to the point that I just push the output straight to production and know it'll be OK, except for very large changes where I'm unlikely to have specified everything at the required level of detail. Even then, things won't so much be wrong as they'll just not be how I want them.
The gambling part comes from the (hopefully emergent and not purposefully designed) intermittent reinforcement due to the usage limits. You don't get that with regular hires.
Really? All the hires I've seen had an 8-hour/5-day limit, or you had to pay through the nose for extended usage outside that window.
Where do you get your 24/7 hires from?
You usually don't get immediate responses from hires, which means delayed gratification and avoiding much of the potential dopaminergic effect you get when engaging with LLMs.
You can overextend the hiring analogy all you want, but it is simply not the same.
AI has replaced video games for me. And there are plenty of cheaper models that "do it" for me; I don't have to spend $$$$ just for entertainment. I will step up to the frontier models for serious work, but if I'm just playing, I'm going for the free stuff on OpenRouter.
Also, AI art is fine. It looks better than me using Paint. That said, there are plenty of FOSS and public-domain art pieces you can leverage if all you really need is placeholders, and that is much cheaper.
This resonates. The "idea to result" loop getting shorter with AI is genuinely addictive; I've noticed it in my own workflow too. But there's a flip side nobody talks about: once you get used to that speed, going back to manual implementation feels 10x worse than it did before. The paralysis doesn't go away, it just gets masked. The real question is whether AI is solving the problem or just compressing the dopamine cycle around it.
Don't know about ADHD and whatnot, but I do feel this "task paralysis" pretty often. One thing I've found works really well for me is working on multiple projects at once: go one to two weeks on one, then switch to another. I'm not lacking motivation anymore, and it feels great.
It is really weird reading this, but I guess it's normal? It seems many feel this way, including me. AI just compounds this behavior even more. Darn.