AI - in this case, LLMs and their ecosystem - is an incredibly impactful technology. I would put it up there with:
- the printing press
- radio
- tv
- personal computers
- internet
in terms of important contributors to human civilization. We live in the information age, and all of these are significant advances in information.
The printing press allowed small organizations to create written information. It de-centralized the power of the written text and encouraged the rapid growth of literacy.
Radio allowed humans to communicate quickly across long distances.
TV allowed humans to communicate visually across long distances - what we see is central to how we process information.
PCs allowed information to be digitized - made it denser, more efficient, easier to store, and able to support much larger datasets.
The internet is a way to transfer large amounts of this complex digital information even more quickly across large distances.
AI is the ability to process this giant lake of digital information we've made for ourselves. We can no longer handle all the information that we create; we need automated ways to do it. LLMs, which translate information to text, are a way for humans to parse giant datasets in our native tongue. It's massive.
Hyped because I chose to pursue an MFA and dabble in CS on the side instead of getting a CS degree and dabbling in art on the side. Knowing how to properly communicate the human subjective experience will be a future source of meaning for people who have honed the skill, and the imperfect but passable AI programming infrastructure will help support the bones of the operation.
HN tends to amplify tools that change leverage.
AI feels like a multiplier, so the hype follows.
The real question is which parts of it remain useful once the novelty fades.
Hype combined with a community of excited engineers is a great innovator. People are hyped about these new tools, and their popularity is pushing them to create new projects, discover new uses, solve challenges that were previously unattainable, and in general push the frontier forward. Hype is almost a prerequisite to this.
By the way, good job pointing out some low-hanging fruit for your example cases.
You are selection biasing towards the most extreme cases of AI absurdity.
> GPT-5.3-Codex and Opus 2.6 were released. Reviewers note they're struggling to find tasks the previous versions couldn't handle. The improvements are incremental at best.
I have not seen any claims of this other than Opus 4.6 being weirdly token-hungry.
The human brain seeks novelty and excitement. It’s why new projects are always exciting, why new companies are exciting to join, etc. This obviously extends to most trends: cloud, crypto, AI. Obviously there’s some utility (debatably with crypto), but overall it’s more that new stuff is just more fun and interesting.
Some are placing their bets and hoping to win big, some are just having fun exploring the new technology, and most are probably like me: they have found those few areas where the technology works for them and ignore everything else. It is a fairly amazing technology. As a humanities sort who has no clue when it comes to programming but has some things I want to make, I find LLMs invaluable - not in the sense that they can write my programs for me (I am not capable enough in programming to get them to do what I want), but in the sense that they can present things in a way a humanities sort can understand, so I can write those programs myself.
A recent prompt of mine:
>Write a dialogue where H.P. Lovecraft and David Foster Wallace are tasked with developing a DSL for audio synthesis and composition, they are trying to sell each other on their preferred language, Lovecraft wants rust, Wallace Zig. Their discussion should be deeper than just the talking points and include code examples. Thomas Pynchon is also there, he is a junior dev and enamored with C, he occasionally interjects into the discussion and generally fails to follow the discussion but somehow manages to make salient points; Lovecraft never address Pynchon or C but alludes to the horror, Wallace tries to include and encourage Pynchon but mostly ignores him.
I pretty much knew how ChatGPT would use each of them: Lovecraft would view everything other than rust as some lurking unnameable horror, Pynchon would seem random and nonsensical, and Wallace would vainly attempt to explore all options without weighing in. That is exactly what happened. Being able to get this sort of information within contexts I understand is amazing; 15 or 20 minutes later I had more direction than I ever had before, and it answered a dozen or so language-agnostic questions I had regarding implementing such a DSL, which is really what I was after. And it gave me some good laughs.
A year ago I thought these projects would never move beyond being ideas. Whenever I tried to get help from people, they just told me what I was doing wrong and tried to send me down the path of their ideals, how they think things should be done - which was probably more a failure on my part than theirs, my being a humanities sort and an incompetent programmer. LLMs have infinite patience and time. The people I sought help from in the past were not completely wrong, but they did try to lead me down the wrong path for whatever reason: partially because they don't have infinite time and patience, but also because they thought they were right.
Henry James has been walking me through the PureData source code this past week.
Not everyone is hyping AI/LLMs; that is just the bias of tech communities and the like. Most of us see them about the same as we see a blender or a toaster. I can't remember the last time AI/LLMs came up in general conversation for me.
From a psychological standpoint, it has to do with introjection.
Introjection occurs when a person unconsciously adopts the ideas, attitudes, or behaviors of another person or group, often an authority figure -
like someone with high status in the world of Big Tech. The CEOs we see selling their bullshit every day.
The influencers, boosters, and shills are perfectly placed to create this type of environment.
The more you promote a product, the more likely people are to introject that product even if it does not work.
Imagine how a child learns the rules of life from parents.
These are introjected by the child without question or any consideration of whether the rules are right or wrong; the child just takes it all in until it becomes part of them.
Most engineers I know are now picking off backlog items and tech-debt //TODOs dating back several years.
Things that I had labelled "too hard, pain in the ass" I'm now finishing in half an hour or so with proper tests and everything.
It's an exciting time to be a product engineer IMO.
Maybe people are not hyped, just benefiting from it.