This stuff is like the monster in The Blob. The more energy you direct at it, the bigger it gets. So your post and my comment are just feeding it.
Yeah, we need comment-level exclusions:

    User-Agent: AI-Bot
    Disallow: /ai-bot/
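(For what it's worth, the site-level version of this is real: robots.txt is how sites ask crawlers to stay away, and OpenAI documents GPTBot as its crawler's user agent. A sketch, if you wanted to opt a whole site out:

    # Ask OpenAI's crawler to skip the entire site.
    User-agent: GPTBot
    Disallow: /

Note that robots.txt only works per path, and only for crawlers that choose to honor it; comment-level exclusions remain a fantasy.)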
The equivalent of tossing a cookie at its sticky, gelatinous hide with the words “Don’t Eat Me” written in icing.
The only prediction that I think is robust is: those who use AI as a tool today will replace those who don't tomorrow.
Same situation with the internet: we saw a bubble, but ultimately the businesses that reorganized themselves around it monopolized various industries where incumbents were slow to react.
Some jobs will be replaced outright, but most workers will use AI tools, and we might see reduced wages and fewer open positions for a very long time, coupled with an economic downturn.
> The only prediction that I think is robust is: those who use AI as a tool today will replace those who don't tomorrow.
That's not a robust prediction. Many people who don't use AI today avoid it because they've tried it and found it subtracts value. Those people will not be replaced tomorrow; they will simply reevaluate the tool and start using it if it starts to add value.
If jobs were based on self-perceived value addition, there would never be a layoff, ever.
Your executive team is going to "remove" non-AI folks regardless of their claims about efficiency.
Just like they forced you to return to the office while ignoring the exact same efficiency claims. They had real estate to protect. Now they have AI to protect.
Then the answer isn't to adopt AI; it's to unionise.
As always, the answer is that we need to found our own companies and stop letting the vampires suck our blood.
I upvoted both posts
"Now they have AI to protect," you're ultimately talking about corporate leadership being susceptible to the sunk cost fallacy here. But AI investment is a particularly easy sunk cost to get out of compared to real estate. Real estate has lease obligations and moving costs; it will cost you a lot more in the short term to get rid of your real estate. AI you could just stop using and immediately see cost reduction.
There are trillions invested in it at this point.
A 50-year lease write-off would be peanuts compared to what's been invested with the expectation of payouts.
Identifying the 1% of AI use cases that are useful, and refusing to have your attention stolen by the 99% that are mind-melting garbage, will be the key AI skill for the AI future.
So, same as with the internet.
This is both the sickest burn and truest statement I’ve read today.
Here, have a (digital) shortbread cookie: o
AI tools will get easier and easier to use. There will be no skill level required to use them effectively.
Has this been true for any technology, ever? That is, has the curve from skill to output quality ever been completely flat?
I would be suspicious of this claim.
We can already see it. Take a look at AI image generation a few years ago: people were crafting complex prompts, tweaking values, adding overlay models, and stringing together several AI tools to get a decent output. Now you can get a better result by typing a simple phrase into one of the many major AI web interfaces. Tools like Adobe's have simplified all these features to the point where they can be learned in under five minutes.
And this is only the start. Once AI gets good, it will be so easy to use that I doubt any human will be unable to use it. It's the nature of natural-language queries, and of companies working to build a model that can handle "anything" thrown at it.
Skill to effective output quality.
I'm sure there are people who are more skilled at using a cell phone than I am. It doesn't matter.
Similarly, we all have had co-workers or friends who aren't very good at using search engines. They still use them and largely still have jobs.
Now that I think of it, most regularly-used technology is like this. Cars, dishwashers, keyboards, electric shavers. There is a baseline level of skill required for operation, but the marginal benefits for being more skilled than the baseline drop off pretty quickly.
Does everything always need precedence?
I think you mean precedents, but in any case the precedent is that new tech is often heralded with “this time is different! Ignore the precedents!” and yet so far that has been wrong every time.
One day the sun won’t rise in the morning but it’s not something I’m going to plan on happening in my lifetime.
Yep, sorry, not a native English speaker.
It’s been wrong every time, except for the times it wasn’t. Nobody remembers those though. Something something confirmation bias.
Thank you. AI is different from tools we've seen before. There won't always be a precedent to refer to.
> Has this been true for any technology, ever?
Yes?
Try the abacus, slide rules, or mechanical calculating machines vs. electronic calculators.
Or ancient vs. modern computers and software. They didn't even have "end users" as we understand them now; every computer user was a specialist.
Programming.
Writing. Quill vs. ballpoint pen, but also alphabets vs. the writing systems that came before.
Photography, more than one big jump in usability. Film cameras, projectors/screens.
Transportation: from navigation to piloting aircraft or driving cars. Originally you had to be a part-time mechanic.
Many advanced tools (i.e. anything more complex than a hammer) in manufacturing or at home.
I would argue that for all of these there's still a skill element involved.
If I give an accountant an electronic calculator and a problem to solve, they'll be more efficient than me.
If I give someone who has spent thousands of hours on a computer a task on it, they'll be able to do more than my parents.
If I give someone who writes a lot a ballpoint pen, their writing will be faster and more legible than that of someone like me who barely writes on paper.
Okay, what we're saying is slightly different: you mean reaching a certain bar. I kind of agree with that.
Though the marginal improvement from knowing how the tools work and how to use them more effectively is still pretty high, such that people who spend time with the tools will be _more_ effective.
> there's still a skill element involved.
Uhm... yes???
Obviously even a baby has "skills".
The point is the comparison between the levels of tech. Your accountant is constant; the tools they use are variable.
Interpreting the OP's point as "absolutely zero skill" goes against the HN guideline to interpret comments reasonably. You guys are trying to find the stupidest angle possible for the sake of an "argument". I hate this antagonistic debate culture so much.
And all of these still require skills today. Yes, electronic calculators too.
See https://news.ycombinator.com/item?id=45983440
Are you really claiming that calculators require no skill?
See https://news.ycombinator.com/item?id=45983440
The internet was the same. It didn't stop legacy businesses from getting their lunch eaten by internet-native companies.
Some businesses survive without using the internet, so this isn't the strongest argument. And even more use it minimally, e.g. they just have a Yelp page or something.
Just like the internet... completely safe to use, no malicious downloads, no scams to spot, totally safe and easy to use with no skill... oh wait...
That's not what they're saying.
The easier "AI" gets to use (as it is being "promised" it will), the quicker a skilled engineered is going to be able to adapt to it whenever they give up and start using it. They'll likely be ahead of any of those previous adopters who just couldn't resist the allure of simply accepting whatever is spit out without thoroughly reviewing it first.
Have you considered the inverse?
Those who use AI as a tool today will be replaced by those who don't tomorrow.
Those who know how to fix the messes made by AI today will replace those who don't tomorrow.
This is what I'm seeing currently at work. YMMV.
Everyone uses or will use AI; there is no learning curve, so this is not an advantage.
Yes there is. For coding, for example, you need to learn how to use the tools efficiently, otherwise you'll get garbage... and end up either discarding everything and claiming AI is crap, or pushing it to prod and having to deal with the garbage code there.
> Those who use AI as a tool today will replace those who don't tomorrow.
Unless they let their skills atrophy by offloading them to AI. The things they can do will be commodified and low value.
I suspect there will be demand for those who instead chose to hone their skills.
There always has been, thus far. When I was attending community college for an A+ class in high school, my lab partner was a woman in her early 40s who pulled down a staggering amount of money doing COBOL programming. I learned firsthand that for every advancement in technology, there will always be folks who (rightly or wrongly) find no value proposition in upgrading needlessly.
AI as it presently stands is very much one of those things where, in the immediate term, sure, there’s money to be made jumping on the bandwagon. Even I keep tinkering with it in some capacity from an IT POV, and it has some legitimate use cases that surprise even me sometimes.
However, I aim to build a career like the COBOL programmer did: staying technically sharp as the world abstracts away, because someone, somewhere, will eventually need help digging out of a hole that upgrades or automation got them into.
And at that point, you can bill for the first-class airfare, the five-star hotel, and four figures a day to save their ass.
I think if you have that problem, then you're not using AI as a tool; AI is using you.
Using AI as a tool doesn't mean having it do everything; it means you have the skill and knowledge to know where and how you can use it.
> it means you have the skill and knowledge to ...
Sure, but in the real world the overwhelming majority of people loudly proclaiming the benefits of AI don't actually have the skill or knowledge (or discipline) to do so / to judge its correctness. It's peak Dunning-Kruger.
Using AI as a tool is similar to using a search engine and specific sites in the past. People are using it naturally (for what it works for).
> The only prediction that I think is robust is: those who use AI as a tool today will replace those who don't tomorrow.
And I make the inverse prediction.
I work for a FAANG and I see it: from juniors to senior engineers, the ones who use AI generate absolute slop that is unreadable, unmaintainable, and definitely going to break. They are making themselves not just redundant, but an actual danger to the company.
> The worst thing about this parasitic trend is that most of the time it’s basically a dude who wants to appear visionary, and so he makes a prediction of the future.
This is basically an entire genre of low-effort Hacker News posts.
The ones making grandiose predictions or the ones making broad, mildly cynical dismissals?
:-)
Or the Twitter account of any VC.
I pretty much agree, but using "As an AI engineer myself", or a variation of that, in your blog post should get you ridiculed. Who exactly are you trying to impress or differentiate yourself from?
I think he's trying to differentiate himself from all the people who are not AI engineers
Problem is, he isn't even remotely an AI engineer™ himself.
The entire article is a complete joke and is ragebait.
Flagged.
> Now, I should clarify: I am not against talking about the impact of AI. It is a truly transformative technology after all.
This is how I feel. You see so many articles prognosticating and living in the world of hypotheticals; meanwhile, AI is transforming industries today, and articles tracking those changes feel rare. Are they all on some obscure blog?
The AI writes the AI prediction content. It can’t give you any new information.
So long as you are enabling folks to pump out AI garbage, I'll be pumping out my garbage predictions, thank you much.
This has already been discussed so many times. No good discussion will come out of this - and it'll just be people moping.
There's much better content on Show HN, some of which won't hit the homepage because this has more votes. It's a problem that HN has to fix: people upvote because they agree, and that vote carries the same weight as another that required far more effort (trying a product, looking at the code, etc.).
"Well i'm something of an AI engineer myself" - This author
The best action in a reply-all storm is to send a response to everyone pleading for them to stop replying all.
> It appears we have become the LLMs
Always have been.
Anyway, complaining about them doesn't add any value either. And complaining about complaining... well you get the idea.
The biggest growth industry in AI is people doing podcasts and writing blog posts about the implications of AI or predictions of AI. It seems like >90% of articles from major media sources mention AI somewhere.
I’ve felt the same. Also, the AGI outcome for software engineers is one of:
A) In 5 years, no real improvement; the AI bubble pops; most of us are laid off.
B) In 5 years, near-AGI replaces most software engineers; most of us are laid off.
Woohoo, a lose-lose scenario! No matter how you imagine this AI bubble playing out, the music's going to stop eventually.
The glee I see in many people over this possibility is quite chilling.
All bubbles eventually pop. But it doesn’t mean we end up worse off than before.
None of the predictions have any substance; they're always vague. Where are the ideas about which algorithms come next after Transformers? Where is the real thought about planning for HBM and what we will do with the increased throughput? The forecasts, as the author aptly mentioned, are for the headlines.
Algorithms: state-space models, diffusion models, KANs, hierarchical attention. There is no shortage of ideas. Determining what works well is a process that is going on right now.
The question about planning for HBM is too vague to really address, but people are separately working on providing more bandwidth, using more bandwidth, and figuring out how to not need so much bandwidth.
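To make the first of those concrete, here is a minimal sketch of the linear recurrence at the heart of state-space models, in Python. It is illustrative only: real architectures like S4 or Mamba use structured, input-dependent parameters and hardware-aware parallel scans, and every name and shape below is made up for the example.

    # Minimal linear state-space model (SSM) recurrence:
    #   h[t] = A @ h[t-1] + B @ x[t]
    #   y[t] = C @ h[t]
    import numpy as np

    def ssm_scan(x, A, B, C):
        """Run the recurrence over a sequence x of shape (T, d_in)."""
        h = np.zeros(A.shape[0])       # fixed-size hidden state
        ys = []
        for xt in x:                   # O(T) sequential scan
            h = A @ h + B @ xt
            ys.append(C @ h)
        return np.stack(ys)            # shape (T, d_out)

    # Toy usage: 8-step sequence, 4-dim state, scalar in/out.
    rng = np.random.default_rng(0)
    A = 0.9 * np.eye(4)                # stable transition matrix
    B = rng.normal(size=(4, 1))
    C = rng.normal(size=(1, 4))
    x = rng.normal(size=(8, 1))
    print(ssm_scan(x, A, B, C).shape)  # -> (8, 1)

The appeal over attention shows even in this toy: the hidden state h is fixed-size, so memory per generated step stays constant instead of growing with context length.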
Isn't that the same guy who was pile-driving everyone who was talking about AI?
> On the weekend, I hack around with ML & AI to build cool stuff.
Stopped reading this rage-bait when I saw this. The company he works at is going all in on AI and producing the very same prediction content that he is opposing. [0]
> But even myself, as an AI engineer, I am just soooo sick of that type of content. It’s the same generic stuff. It appears we have become the LLMs, regurgitating what’s already out there as if it was new ideas.
The author is not an AI engineer™ (whatever that means these days). Just yet another "dev".
[0] https://www.medbridge.com/educate/webinars/ai-in-healthcare-...
I suspect that part of this unusually long discourse over the same, admittedly tired, issues stems from deeper societal concerns than mere technology posturing alone. That’s why it keeps retreading the same ground, over and over again, trying to build armies for a given “side”.
If we break down every single AI post over the past two years, we get the same conclusions every single time:
* Transformer and Diffusion models (current “AI”) won’t replace jobs wholesale, BUT-
* Current AI will drastically reshape certain segments of employment, like software development or copywriting, BUT-
* Likely only to the point that lower-tier talent is forced out or forced to adapt, or that bad roles are outright eliminated (slop/SEO farms)
As for the industry itself:
* There’s no long-term market for subscription services beyond vendor lock-in and users with skill atrophy
* The build-out of inference and compute is absolutely an unsustainable bubble barring a profound revolution in machine learning that enables AI to replace employment wholesale AND do so using existing compute architectures
* The geopolitical and societal shifts toward sovereignty/right-to-repair mean the best path forward is likely local inference, which doesn’t support the subscription-based models of the major AI players
* Likely-insurmountable challenges in hallucinations, safeguards, and reliable outputs over time will restrict adoption to niche processes instead of general tasks
And finally, from a sociological perspective:
* The large AI players/proponents are predominantly technocrat billionaires and wealthy elites seeking to fundamentally reshape societal governance in their favor and hoard more resources for themselves, a deeply diseased viewpoint that even pro-AI folks are starting to retch at the prospect of serving
* Much of the resistance to AI at present comes from regular people angry at the prospect of being replaced (or worse) by technology in a society where they must work to survive, people who are keenly aware of Capital’s real motive, in terms of power distribution, for eliminating the need for labor
* Humans who have dedicated their lives to skilled and/or creative pursuits in particular are vocally resistant to the mandate by technocrats of “AI everywhere”, and continue to lead the discourse not in how to fight against AI (a losing battle now that Pandora’s Box is open), but in building a healthier and more equitable society where said advancements benefit humans first/equally, and Capital last
* The “creator” part of society in particular is enraged at having their output stolen/seized by Capital for profit without compensation and destroying their digital homes and physical livelihoods in the process, and that is a wound that cannot be addressed short of direct, substantial monetary compensation in perpetuity - essentially holding Capital accountable for piracy much like Capital holds consumers accountable (or tries to). This is a topic of ongoing debate that will likely reshape IP laws at a fundamental level for the century to come.
There. You can skip the glut of blogs, now, at least until any one of the above points substantially changes.