I find most developers fall into one of two camps:
1. You treat your code as a means to an end to make a product for a user.
2. You treat the code itself as your craft, with the product being a vector for your craft.
The people who typically have the most negative things to say about AI fall into camp #2: AI is automating a large part of what they considered their art, while enabling people in camp #1 to iterate on their products faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good your code is.
The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
With that said, I do have respect for people in the latter camp. But they're generally best fit for projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries other devs depend on, etc.).
I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
> No one has ever made a purchasing decision based on how good your code is.
absolutely false.
> The general public does not care about anything other than the capabilities and limitations of your product.
also false.
People may not know that the reason they like your product is because the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports and fix problems quickly).
The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
> obviously created by people who care deeply about the quality of the product they produce
This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
> There is no substitute for high quality work.
You're right because there really isn't a consistent definition of what "high quality" software work looks like.
> This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
Those first three are "enterprise" or B2B applications, where the person buying the software is almost never one of the people actually using the software. This disconnect means that the person making the buying decision cannot meaningfully judge the quality of any given piece of software they are evaluating beyond a surface level (where slick demos can paper over huge quality issues) since they do not know how it is actually used or what problems the actual users regularly encounter.
Users care about quality, even if the people buying the software do not. You can't just say "well the market doesn't care about quality" when the market incentives are broken for a particular type of software. When the market incentives are aligned between users and purchasers (such as when they are the same person) quality tends to become very important for the market viability of software (see Windows in the consumer OS market, which is perceptibly losing share to MacOS and Linux following a sustained decline in quality over the last several years).
I couldn't book travel at a previous company because my address included a `.`, which passed their validation. Awful, awful software. I wouldn't expect slop code to improve it.
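A toy sketch of that kind of mismatch (all names and patterns here are invented, not the actual site's code): a permissive front-end check accepts input that a stricter downstream rule rejects, so a perfectly valid address still breaks the booking.

```python
import re

# Hypothetical illustration: the form check tolerates a '.' in the
# address, but a stricter legacy booking rule silently forbids it,
# so input that "passed validation" still fails later in the flow.

def form_validate(address: str) -> bool:
    # Permissive: letters, digits, spaces, and common punctuation.
    return re.fullmatch(r"[A-Za-z0-9 .,'-]+", address) is not None

def booking_backend_accepts(address: str) -> bool:
    # Stricter downstream rule that excludes periods.
    return re.fullmatch(r"[A-Za-z0-9 ,'-]+", address) is not None

addr = "123 N. Main St."
print(form_validate(addr))            # True: passes the form check
print(booking_backend_accepts(addr))  # False: the booking step rejects it
```

The bug isn't either rule in isolation; it's that the two layers disagree about what a valid address is.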
> You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
Exactly, thank you for putting it like that.
So far it’s been my observation that only people who think like the OP put the situation in those terms. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, they’re just different, neither is better”, it lends legitimacy to their position.
For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.
> People may not know that the reason they like your product is because the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports and fix problems quickly).
I would classify all of those as "capabilities and limitations of your product"
I read OP's "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc.), and in that sense I agree no customer who's just using the product actually cares about that.
Another definition of "good code" is probably "code that meets the requirements without unexpected behavior", and in that sense of course end users care about good code. But you could give me two black boxes that act the same externally, one written as a single line with single-character variables, etc., and another written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
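As a toy illustration of the "two black boxes" point (hypothetical functions, not from any real product): both versions below compute the same result, so a user who only observes behavior cannot tell the terse one from the readable one.

```python
# Two externally identical "boxes": a terse one-liner with a
# single-character name, and a readable, documented equivalent.

def f(x):return sum(i*i for i in x)/len(x)  # the "single line" box

def mean_of_squares(values):
    """Return the average of the squares of `values` (the readable box)."""
    squares = [v * v for v in values]
    return sum(squares) / len(squares)

print(f([1, 2, 3]) == mean_of_squares([1, 2, 3]))  # True: same behavior
```

The difference only shows up the day someone has to change one of them.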
>but you could give me two black boxes that act the same externally, one written as a single line with single-character variables, etc., and another written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
The reality of software products is that they are in nearly all cases developed and maintained over time, though--and whenever that's the case, the black box metaphor fails. It's an idealization that only works for a single moment in time, and yet software development typically extends through the entire period during which a product has users.
> I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.)
The above is also why the properties you've mentioned shouldn't be considered merely aesthetic: the software's likelihood of having tractable bugs, of keeping performance concerns manageable, or of adapting quickly to the demands of its users and the changing ecosystem it's embedded in is affected by matters of abstraction selection, code organization, and documentation.
But those aesthetics stem from that need for fewer bugs, performance, maintainability. Identifying/defining code smell comes from experience of what does and doesn’t work.
> I wouldn't care so long as I wasn't expected to maintain it.
But, if you’re the one putting out that software, of course you will have to maintain it! When your users come back with a bug or a “this flow is too slow,” you will have to wade into the innards (at least until AI can do that without mistakes).
But the thing is that someone has to maintain it. And while beautiful code is not the same as correct code, the first is impactful in getting the second and keeping it.
And most users are not consuming your code. They’re consuming some compiled, transpiled, or minified version of it. But they do have expectations and it’s easier to amend the product if the source code is maintainable.
> The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
Exactly. A lot of devs optimize for whether the feature is going to take a day or an hour, but don't contemplate that it's going to be out in the wild for 10 years either way. Maybe do it well once.
> but not contemplating that it's going to be out in the wild for 10 years either way
I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.
>I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.
And in almost all of those cases, they'd be wrong.
In my experience the code will still be there, but by year 5 nobody is left who worked on it from inception, and by year 10 nobody knows anybody who did, and during that time it reaches a stage where nobody will ever feel any sense of ownership or care about the code in its entirety again.
I come into work and work on a 20 year old codebase every day, working on slowly modernizing it while preserving the good parts. In my experience, and I've been experimenting with both a lot, LLM-based tools are far worse at this than they are at starting new greenfield projects.
When it comes to professional development, I've almost never worked on a codebase less than 10 years old, and it was always [either silently or overtly] understood that the software we are writing is a project that's going to effectively live forever. Or at least until the company is no longer recognizable from what it is today. It just seems wild and unbelievable to me, to go to work at a company and know that your code is going to be compiled, sent off to customers, and then nobody is ever going to touch it again. Where the product is so throwaway that you're going to work on it for about a year and then start another greenfield codebase. Yet there are companies that operate that way!
How can you possibly know which type of repo you're in ahead of time? My experience is that "temporary" code frequently becomes permanent and I've also been on the other side of those decisions 40 years later.
Unless you’re producing demos for sales presentation (internally or externally), it’s always worth it to produce something good. Bad code will quickly slow you down and it will be a never ending parade of bug tickets.
If a product looks pretty and seems to work great at first experience, but is really an unmaintainable mess under the hood, has an unvetted dependency graph, has a poorly thought through architecture that no one understands, perhaps is unsustainable due to a flawed business model, etc., to me it simply suffers from bad design[0], which will be felt sooner or later. If I know this—which is, admittedly, sometimes hard to know (especially in case of software products compared to physical artifacts)—I would, given alternatives, make the choice to not be a customer.
In other words, I would, when possible, absolutely make a purchasing decision based on how good the code is (or based on how good I estimate the code to be), among other things.
[0] The concept of design is often misunderstood. First, obviously, when it’s classified as “how the thing looks”; then, perhaps less obviously, when it’s classified as “how the thing works”. A classification I am arriving at is, roughly, “how the thing works over time”.
Demos might be nice and flashy, but eventually you have to have a generally working product. Too many issues with too many annoyances and eventually the users of even enterprise software will be heard. Especially so if there is some actual loss of money or data that is not corrected very fast.
In the end, software is a means to an end. And if you do not get to that end because the software is crap, it will be replaced, hopefully by someone else.
The history of technology is filled with examples where, between two competing analogous products, the inferior one always wins. It does not matter if it is only slightly inferior or extraordinarily inferior; it wins out either way. It's often difficult to come up with counter-examples. Why is this? Economic pressure. "Inferior" costs less. Sometimes the savings are passed on to the customer, and they choose the inferior product. Other times the greedy corporate types keep all of it (and win simply because they outmarket the competitor). It does not matter.
If there are people who, on principle, demand the superior product then those people simply aren't numerous enough to matter in the long run. I might be one of those people myself, I think.
And yet somehow the shittiest buggiest software ends up being the most popular.
Look through the list of top apps in mobile app stores, most used desktop apps, websites, SaaS, and all other popular/profitable software in general and tell me where you see users rewarding quality over features and speed of execution.
You have it backwards. Excellent software becomes popular, and then becomes enshittified later once it already has users. Often there is a monopoly/network effect that allows them to degrade the quality of their software once they already have users, because the value in their offering becomes tied to how many people are using it, so even a technically superior newcomer won't be able to displace it (eg. Youtube is dogshit now but all of the content creators are there, and all of the viewers are there, so content creators won't create content for a better platform with no viewers and viewers won't visit a better platform with no content).
If your goal is to break into the market with software that is dogshit from day 1, you're just going to be one of millions of people failing at their get-rich-quick scheme.
If code is craft and minimalism is hip then why ruby, and python, and go and... when it's electrical state in machines?
That's the minimalism that's been lost.
That's why I find the group 2 arguments disingenuous. Emotional appeal to conservatism, which conveniently also props up their career.
Why all those parsers and package systems when what's really needed is dials min-max geometric functions from grand theft auto geometry to tax returns?
Optimization can be (and will be) engineered into the machine through power regulation.
There's way too many appeals to nostalgia emanating from the high tech crowd. Laundering economic anxiety through appeals to conservatism.
Give me an etch a sketch to shape the geometry of. Not another syntax art parser.
Sloppy technical design ends up manifesting in bugs, experiential jank, and instability.
There are some types of software (e.g. websites especially) where a bit of jank is generally acceptable. Sessions are relatively short, and your users can reload the webpage if things stop working. The technical rigor of these codebases tends to be poor, but it's generally fine.
Then there's software which is very sensitive to issues (e.g. a multi-player game server, a driver, or anything that's highly concurrent). The technical rigor here needs to be very high, because a single mistake can be devastating. This type of software attracts people who want to take pride in their code, because the quality really does matter.
I think these people are feeling threatened by LLMs. Not so much because an LLM is going to outperform them, but because an LLM will (currently) make poor technical design decisions that will eventually add up to the ruin of high-rigor software.
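To make the "single mistake can be devastating" point concrete, here's an invented toy sketch (not from the thread): in highly concurrent code, correctness can hinge on one line, and removing it fails only sometimes, which is exactly what makes these bugs devastating.

```python
import threading

# Four threads increment a shared counter. The lock makes the result
# deterministic; drop that one line and the unsynchronized read-modify-write
# can interleave, silently losing updates on some runs but not others.

counter = 0
lock = threading.Lock()

def worker(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        with lock:  # remove this line and the final count may come up short
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

An LLM (or a rushed human) that drops the lock will often pass a quick manual test, because the race only bites under real contention.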
If this level of quality/rigor does matter for something like a game, do you think the market will enforce this? If low rigor leads to a poor product, won't it sell less than a good product in this market? Shouldn't the market just naturally weed out the AI slop over time, assuming it's true that "quality really does matter"?
Or were you thinking about "matter" in some other sense than business/product success?
A lot of software is forced upon people against their will, and purchased by people who will never use it.
This obscures things in favour of the “quality/performance doesn’t matter argument”.
I am, for example, forced to use a variety of microslop and zoom products. They are unequivocally garbage. Given the option, I would not use them. However, my employer has saddled us with them for reasons, and we must now deal with it.
Yes, I think the market will enforce this. A bit. Eventually. But the time horizon is long, and crummy software with a strong business moat can out-compete great software.
Look at Windows. It's objectively not been a good product for a long time. Its usage is almost entirely down to its moat.
Even if you're confident you can stop your own company from shipping terrible products, I worry the trend is broad enough and hard enough to audit that the market will enforce it by pulling back on all purchases of such software. If gamers learn that new multiplayer games are just always laggy these days, or CTOs learn that new databases are always less reliable, it's not so easy to convince them that your product is different than the rest.
Yes, there's every reason to believe the market will weed out the AI slop. The problem is, just like with stocks, the market can stay irrational longer than you can stay solvent. While we all wait for executives to learn that code rigor matters, we still have bills to pay. After a year when they start trying to hire people to clean up their mess, we'll be the ones having to shovel a whole new level of shit; and the choice will be between that and starving.
As someone who also falls into camp one, and absolutely loves that we have thinking computers now, I can also recognize that we're angling towards a world of hurt over the next few years while a bunch of people in power have to learn hard lessons we'll all suffer for.
I keep seeing this idea repeated, but I don't accept the dichotomy between those who care about 'crafting code' and those who care about 'building products' as though they are opposite points on a spectrum.
To me, the entire point of crafting good code is building a product with care in the detail. They're inseparable.
I don't think I've ever in my life met someone who cared a lot about code and technology who didn't also care immensely about detail, and design, and craft in what they were building. The two are different expressions of the same quality in a person, from what I've seen.
No one comes out of the womb caring about code quality. People learn to care about the craft precisely because internal quality -- cohesion, modularity, robustness -- leads to external quality (correctness, speed, evolvability).
People who care about code quality are not artists who want to paint on the company's dime. They are people who care about shipping a product deeply enough to make sure that doing so is a pleasant experience both for themselves and their colleagues, and who have the maturity to do a little more thinking today so that next week they can make better decisions without thinking, and so that they don't get called at 4 AM the night after launch for emergency debugging of an issue that really should have been impossible if it was properly designed.
> No one has ever made a purchasing decision based on how good your code is.
Usually they don't get to see the internals of the product, but they can make inferences based on its externals. You've heard plenty of products called a "vibe-coded piece of crap" this year, even if they're not open source.
But also, this is just not true. Code quality is a factor in lots of purchasing decisions.
When buying open source products, having your own team check out the repo is incredibly common. If there are glaring signs in the first 5 minutes that it was hacked together, your chances of getting the sale have gone way down. In the largest deals, inspecting the source code can even be part of due diligence.
It was for an investment decision rather than for a purchase, but I've been personally hired to do some "emergency API design" so a company can show that it both has the thing being designed, and that their design is good.
This is like when people decided that everyone was either "introvert" or "extrovert" and then everyone started making decisions about how to live their life based on this extremely reductive dichotomy.
There are products that are made better when the code itself is better. I would argue that the vast majority of products are expected to be reliable, so it would make sense that reliable code makes for better product. That's not being a code craftsman, it's being a good product designer and depending on your industry, sometimes even being a good businessman. Or, again, depending on your industry, not being callous about destroying people's lives in the various ways that bad code can.
I’m an introvert. I make sure that all my “welcome to the company” presentations are in green. I am also an extrovert in that I add more green than required.
I respect your opinion and especially your honesty.
And at the same time I hope that you will some day be forced to maintain a project written by someone else with that mindset. Cruel, yes. But unfortunately schadenfreude is a real thing - I must be honest too.
I have gotten too old for "ship now, ask questions later" projects.
I'm in camp 1 too. I've maintained projects developed with that mindset. It's fine! Your job is to make the thing work, not take on its quality as part of your personal identity.
If it's harder to work with, it's harder to work with, it's not the end of the world. At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.
I think camp 2 would rather see one beautiful thing than ten useful things.
I think I fall in camp 1.5 (neither camp 1 nor camp 2): I can see value in prototyping (with AI), and I sometimes make quick scripts when I need them, but long term I would like to grow with an idea and build something genuinely nice from those prototypes, even manually writing the code. I've personally found that AI codebases are a hassle to manage and have many bugs, especially in the important parts (@iamcalledrob's message here sums it up brilliantly as well).
> I think camp 2 would rather see one beautiful thing than ten useful things.
Both beautiful and useful are subjective (imo). Steve Jobs adding calligraphy to computer fonts could've been considered a thing of beauty derived from his personal relation to calligraphy, but it also turned out to be a really useful thing.
It's my personal opinion that some of the most valuable innovations are both useful and beautiful (elegant).
Of course, there are rough hacks sometimes, but those are beautiful in their own way as well. Once again, both beauty and usefulness are subjective.
(If you measure usefulness by profit earned, through a purely capitalist lens, what happens is that you might do layoffs and degrade customer service to hit that measure, which ultimately reduces the usefulness. Profit is a very lousy measure of usefulness in my opinion. We all need profit, but doing everything solely for profit also feels a bit greedy to me.)
Neither is the original assertion. There are thousands of examples of exceptionally well crafted code bases that are used by many. I would posit the Linux kernel as an example, which is arguably the most used piece of software in the world.
Craft, in coding or anything else, exists for a reason. It can bleed over into vain frivolity, but craft helps keep the quality of things high.
Craft often inspires a quasi-religious adherence to fight the ever-present temptation to just cut this one corner here real quick, because is anything really going to go wrong? The problems that come from ignoring craft are often very far-removed from the decisions that cause them, and because of this craft instills a sense of always doing the right thing all the time.
This can definitely go too far, but I think it's a complete misunderstanding to think that craft exists for reasons other than ensuring you produce high-quality products for users. Adherents to craft will often end up caring about the code as end-goal, but that's because this ends up producing better products, in aggregate.
I mostly agree with this. Part of the confusion with the discourse around AI is the fact that "software engineering" can refer to tons of different things. A Next.js app is pretty different from a Kubernetes operator, which is pretty different from a compiler, etc.
I've worked on a project that went over the complexity cliff before LLM coding even existed. It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken, but their use-cases are supported by a Gordian Knot of tech debt that practically cannot be improved without breaking something. It's not about a single bug that an LLM (or human) might introduce. It's about a complete breakdown in velocity and/or reliability, but the product is very mature and still makes money; so abandoning it and starting over is not considered realistic. Eager uptake of tech debt helped fuel the product's rise to popularity, but ultimately turned it into a dead end. It's a tough balancing act. I think a lot of LLM-generated platforms will fall eventually into this trap, but it will take many years.
Trying to describe craftsmanship always brings me back to the Steve Jobs quote:
“When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”
Steve Jobs didn't really know anything about cabinetry, because using plywood / MDF in places where it won't be seen but which would benefit from dimensional stability is absolutely common, and there's no reason it shouldn't be.
Yes, great, so we sell shit. No one buys a ticket because of how safe the 737 Max is, or buys a post office franchise based on how good the software Fujitsu sold the Post Office is, but fuck, let's take some pride in ourselves and try to ship quality work.
Professionally, I've always been in camp #2. The quality of your code at least partially represents you in the eyes of your peers. I imagine this is rapidly changing, but the fact will always remain that readable code that you can reason about is objectively better.
For personal projects, I've been in both camps:
For scripts and one-offs, always #1. Same for prototypes where I'm usually focused on understanding the domain and the shape of the product. I happily trade code quality for time when it's simple, throwaway, or not important.
But for developing a product to release, you want to be able to jump back in even if it's years later.
That said, I'm struggling with this with my newest product. Wavering between the two camps. Enforcing quality takes time that can be spent on more features...
It is possible to exist in both camps. The quality of the process affects the quality of the product, and the quality of your thought affects the quality of the process. It's a cycle of continual learning, and from that perspective, thought, process and product are indivisible.
Treating code as a means to an end doesn't guarantee success for your product any more than treating code as a craft does.
> The people who typically have the most negative things to say about AI fall into camp #2: AI is automating a large part of what they considered their art, while enabling people in camp #1 to iterate on their products faster.
I'm weird: I'm part of camp 2, but I think AI can be used to craft some really interesting things. While I appreciate camp 1, camp 2 is what produces better codebases that are easier to maintain. Myself and others have realized that the best practices for humans are also the best practices for getting AI models to fit your code.
That's true, but I think there is a gray area in between. As things scale up in one way or another, having high quality is important for both #1 and #2. It's hard to extend software that was designed poorly.
The question where experience comes in is knowing when quality is and isn't worth the time. I can create all sorts of cool software I couldn't before, because now I can quickly pump out "good enough" Android apps or React front ends! (Not trying to denigrate front end devs, it's just a skill I don't have.)
I think type 1 vs type 2 dev requirements also depend on the lifecycle / scale of your project, not just on whether it's a library, framework, or mission critical software.
If you aren't even sure whether your idea is gonna work, whether you have PMF, or whether the company will be around next year.. then yeah.. speed over quality all day long.
On the other hand, I've never done the startup thing myself, and tend to work on software projects with 10-20 year lifecycles, where code velocity maximalism leads to outages, excess compute costs, and reputational issues. There, good code matters again.
Re: "No one has ever made a purchasing decision based on how good your code is."
Sonos very much could go out of business for agreeing with this line. I can tell you lots of people stopped buying their products because of how bad their code quality became with the big app change debacle. Lost over a decade of built up good will.
Apple is going through this lately with the last couple major OS releases across platforms and whatever is going on with their AI. This despite having incredible hardware.
It’s much more complex. Part of your value as an engineer is having a good feel for balancing the trade offs on your code.
Code is usually a liability. A means to an end. But is your code going to run for a minute, a month, a year, or longer? How often will it change? How likely are you going to have to add unforeseen features? Etc. Etc. Etc.
How about the type of developer who comes up with statistics and made-up "camps", as if enjoying the craft itself, by necessity of the false premise, makes you unable to enjoy the fact that what you made is useful enough that people choose your product precisely because you are obsessed with doing good work?
John Carmack talked about this on a podcast a few years ago, and he's the closest popular programmer I can think of who was simply obsessed with milking every tiny ounce of GPU performance, yet none of his effort would have mattered if Doom and Quake weren't fun games.
I think this is a false dichotomy. Maybe there is some theoretical developer who cares about their craft only due to platonic idealism, but most developers who care about their craft want their code to be correct, fast, maintainable, usable, etc. in ways that do indeed benefit its users. At worst, misalignment in priorities can come into play, but it's much more subtle than developers either caring or not caring about craft.
I think this is a false dichotomy. If you're passionate about your craft, you will make a higher quality product. The real division is between those who measure the success of a project in:
- revenue/man-hour, features shipped/man-hour, etc.
- ms response time, GB/s throughput, number of bugs actually shipped to customers, etc.
People in the second camp use AI, but it's a lot more limited and targeted. And yes, you can always cut corners and ship software faster, but it's not going to be higher quality by any objective metric.
With AI you actually don't need to choose anymore. Well laid out abstractions actually make AI generate code faster and more accurately. Spending the time in camp 2 to design well and then using AI like camp 1 gives you the best of both worlds.
> No one has ever made a purchasing decision based on how good your code is.
I routinely close tabs when I sense that low-quality code is wasting time and resources, including e-commerce sites. Amazon randomly cancelled my account so I will never shop from them. I try to only buy computers and electronics with confirmed good drivers. Etc.
This is a very useful insight. It nicely identifies part of the reason for the stark bifurcation of opinion on AI. Unfortunately, many of the comments below it are emotional and dismissive, pointing out its explanatory limitations, rather than considering its useful, probative value.
I find most home inspectors fall into one of two camps:
1. You treat the house as a means to an end to make a living space for a person.
2. You treat the building construction itself as your craft, with the house being a vector for your craft.
The people who typically have the most negative things to say about buildings fall into camp #2 where cheap unskilled labor is streamlining a large part of what they considered their art while enabling people in group #1 to iterate on their developments faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good the pipes inside the walls are.
The general public does not care about anything other than the square footage and color of your house. Sure, if you mess up and one of the houses collapses then that'll manifest as an outcome that impacts the home owner negatively.
With that said, I do have respect for people in the latter camp. But they're generally best fit for homes where that level of craftsmanship is actually useful (think: mansions, bridges, roads, things I use, etc).
I just feel like it's hard to talk about this stuff if we're not clear on which types of construction we're talking about.
The general public does not know how to identify the pipes in the walls, and doesn't care about them. They do care when the pipes burst and cause tens of thousands of dollars of damage. That's why they hire someone with a keen eye to act on their behalf.
"I have an opinion and everyone on the planet agrees with me, if you disagree, you don't matter" is not a useful insight, and is, in fact, far more emotional and dismissive than any of the replies to it.
That's a horribly broken misrepresentation of what was said in the original post. If that's what you took away from it, you're not reading carefully or critically.
> it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
It mystifies me when people don't intuit this.
For any suitably sized project, there are parts where elegance and friction removal are far more important than others. By an order or two of magnitude.
I have shipped beautifully honed, highly crafted code. Right alongside jank that was debugged into the "well, it seems to work now" and "don't touch anything behind this in-project API" category.
There are very good reasons and situations for both approaches, even in one project.
As developers we have a unique advantage over everybody else dealing with the way AIgen is revolutionizing careers:
Everybody else dealing with AIgen suffers the AI spitting out the end product. It's as if we asked the AI to generate the compiled binary instead of the source.
Artists can't get AIgen to make human-reviewed changes to a .psd file or an .svg, it poops out a fully formed .png. It usurps the entire process instead of collaborating with the artist. Same for musicians.
But since our work is done in text and there's a massive publicly accessible corpus of that text, it can collaborate with us on the design in a way that others don't get.
In software, the "power of plain text" has given us a unique advantage over other kinds of creative work. Which is good, because AIgen tends to be clumsy and needs guidance. Why give up that advantage?
I think "make a product" is the important point of disagreement here. AI can generate code that users are willing to pay for, but for how long? The debate is around the long-term impact of these short-term gains. Code _is_ a means to an end, but well-engineered code is a more reliable means than what AI currently generates. These are ends of a spectrum and we're all on it somewhere.
You ever notice how everyone who drives slower than you is a moron and everyone who drives faster than you is a maniac? Your two camps have a similar bias.
> The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
I am in both camps. Always have been.
Code janitors about to be in high demand. We’ve always been pretty popular with leadership and it’s gonna get even more important.
Treat code design and architecture as the thing that lets your slop cannons (90% of engineers, even pre-AI) move fast without breaking things
> Treat code design and architecture as the thing that lets your slop cannons (90% of engineers, even pre-AI) move fast without breaking things
I'm currently of the opinion that humans should be laser focused on the data model. If you've got the right data model, the code is simpler. If you've got the relevant logical objects and events in the database with the right expressivity, you have a lot of optionality for pivoting as the architecture evolves.
It's about that solid foundation - and of course lots of tests on the other side.
It's called "systems analysis". Programmers are generally pretty terrible at it because it requires holistic, big-picture thinking. But it used to take up the bulk of the design activity for a new enterprise system.
It's easy to write off critics of this slop development as just "caring about the wrong thing", but that is couched in incorrect assumptions. This is the unfortunately common mistake of confusing taking responsibility with some sort of "caring about the code" in an artistic sense. I can certainly appreciate the artistry of well-written code, but I care about having a solid and maintainable code-base because I am accountable for what the code I write does.
Perhaps this is an antiquated concept which has fallen out of favor in Silicon Valley, but code doesn't just run in an imaginary world where there are no consequences and everything is fun all the time. You are responsible for the product you sell. If you sell a photo app that has a security bug, you are responsible for your customers' nude photos being leaked. If you vibe-code a forum and store passwords in plaintext, you are responsible for the inevitable breach and the harm it causes.
The "general public" might not care, but that is only because the market is governed by imperfect information. Ultimately the public are the ones that get hurt by defective products.
> No one has ever made a purchasing decision based on how good your code is.
There are two reasons for this. One is that the people who make purchasing decisions are often not the people who suffer from your bad code. If the user is not the customer, then your software can be shitty to the point of being a constant headache, because the user is powerless to replace it.
The other reason is that there's no such thing as "free market" anymore. We've been sold the idea that "if someone does it better, then they'll win", but that's a fragile idea that needs constant protection from bad actors. The last time that protection was enacted was when the DOJ went against Microsoft.
> Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
Any semblance of accountability for that has been diluted so much that it's not worth mentioning. A bug someone wrote into some cloud service can end up causing huge real-world damage in people's lives, but those people are so far removed from the suits that made the important decisions that they're powerless to change anything and won't ever see that damage redressed in any way.
So yeah, I'm in camp #2 and I'm bitter about AI, because it's just accelerating and exacerbating the enshittification.
Someone on HN wrote recently that everyone who's foaming at the mouth about how AI helps us ship faster is forgetting that velocity is a vector: it's not just about how fast you're going, but also in what direction.
I'd go further and say that I'm not even convinced we're moving that much faster. We're just cranking out the code faster, but if we actually had to review that code properly and make all the necessary fixes, I'm pretty sure we would end up with a net loss of velocity.
> No one has ever made a purchasing decision based on how good your code is.
If you have buggy software, people don’t use it if there are alternatives. They don’t care about the code but hard to maintain, buggy code will eventually translate to users trying other products.
You realize you're essentially building a false dichotomy? I work in video games, where code really is a means to an end, but I still see that authorship is important even if that code is uuuuugly, because it is the expression of the game itself. From that perspective I'm worried about neither craft nor product, but about my ability to express myself through code as the game behaves. Although if you really must have only two categories, I'd be in camp one.
As such AI is a net negative as it would be in writing a novel or making any other kind of art.
Now you done it! Yeah, one of the difficult things is being able to see both sides. At the end of the day, I happen to write code because that's how I can best accomplish the things I need to do with the minimum of effort. While I do take pride in elegance and quality of code, it is always a means to an end. When I start gold plating, I try to remind myself of the adage I learned in a marketing class: No one ever needed a drill, they needed the ability to make holes.
It is strange, but not really upsetting to me, that I am not particularly anal about the code Claude is generating for me anymore but that could also be a function of how low stakes the projects are or the fact nothing has exploded yet.
I generally fall into the first camp, too, but the code that AI produces is problematic because it's code that will stop working in an unrecoverable way after some number of changes. That's what happened in the Anthropic C compiler experiment (they ended up with a codebase that wasn't working and couldn't be fixed), and that's what happens once every 3-5 changes I see Codex making in my own codebase. I think, if I had let that code in, the project would have been destroyed in another 10 or so changes, in the sense that it would be impossible to fix a bug without creating another. We're not talking style or elegance here. We're talking ticking time bombs.
I think that the real two camps here are those who haven't carefully - and I mean really carefully - reviewed the code the agents write and haven't put their process under some real stress test vs those who have. Obviously, people who don't look for the time bombs naturally think everything is fine. That's how time bombs work.
I can make this more concrete. The program wants to depend on some invariant, say that a particular list is always sorted, and the code maintains it by always inserting elements in the right place in the list. Other code that needs to search for an element depends on that invariant. Then it turns out that under some conditions - due to concurrency, say - an element is inserted in the wrong place and the list isn't sorted, so one of the places that tries to find an element in the list fails to find it. At that point, it's a coin toss whether the agent will fix the insertion or the search. If it fixes the search, the bug is still there for all the other consumers of the list, but the testing didn't catch that.

Then what happens is that, with further changes, depending on their scope, you find that some new code depends on the intended invariant and some doesn't. After several such splits and several failed invariants, the program ends up in a place where nothing can be done to fix a bug. If the project is "done" before that happens - you're in luck; if not, you're in deep, deep trouble. But right up until that point, unless you very carefully review the code (because the agents are really good at making code seem reasonable under cursory scrutiny), you think everything is fine. Unless you go looking for cracks, every building seems stable until some catastrophic failure, and AI-generated code is full of cracks that are just waiting for the right weight distribution to break open and collapse.
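The scenario above can be sketched in a few lines (hypothetical names, Python for illustration; the point is how the "wrong" fix masks the symptom while leaving the broken invariant for every other consumer):

```python
import bisect

class SortedQueue:
    """Invariant: self.items is always sorted ascending."""

    def __init__(self):
        self.items = []

    def insert(self, x):
        bisect.insort(self.items, x)  # maintains the invariant

    def insert_buggy(self, x):
        # The code path (e.g. a race) that violates the invariant.
        self.items.append(x)

    def contains(self, x):
        # Depends on the invariant: binary search misbehaves on unsorted data.
        i = bisect.bisect_left(self.items, x)
        return i < len(self.items) and self.items[i] == x

    def contains_patched(self, x):
        # The "fix" an agent might make: patch the search, not the insert.
        # The invariant is still broken for every other consumer of self.items.
        return x in self.items

q = SortedQueue()
q.insert(5)
q.insert_buggy(3)             # list is now [5, 3]: invariant broken
print(q.contains(3))          # False - binary search misses the element
print(q.contains_patched(3))  # True - the symptom is masked, not fixed
```

Every other piece of code that binary-searches `q.items` still silently fails, which is exactly the split the comment describes.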
So it sounds to me that the people you think are in the first camp not only just care how the building is built as long as it doesn't collapse, but also believe that if it hasn't collapsed yet it must be stable. The first part is, indeed, a matter of perspective, but the second part is just wrong (not just in principle but also when you actually see the AI's full-of-cracks code).
It can be especially bad if the architecture is layered, with each layer having its own invariants. In a music player, say, you may have the concept of a queue in the domain layer, but in the UI layer you may have additional constraints that do not relate to it. Then the agent decides to "fix" the bug in the UI layer because the report describes a UI symptom, while it's in fact a queue bug.
Shit like this is why you really have to read the plans instead of blindly accepting them. The bots are naturally lazy and will take short cuts whenever they think you won't notice.
I was just using an app that competes with Airbnb. That the app's code is extraordinarily unreliable was a significant factor in my interactions with others on the app, especially once I gradually realized I couldn't be sure messages were delivered or data was up to date.
That influenced some unfortunate interactions with people and meant that no one could be held to their agreements since you never knew if they received the agreements.
So, well, code quality kind of matters. But I suppose you're still right in a sense - currently people buy and use complete crap.
This is absolutely false. The purpose of craft is to make a good product.
I don’t care what kind of steel you used to design my car, but I care a great deal that it was designed well, is safe, and doesn’t break down all the time.
1. The people who don't understand (nor care) about the risks and complexity of what they're delivering; and
2. The people that do.
Widespread AI usage is going to be a security nightmare of prompt injection and leaking credentials and PII.
> No one has ever made a purchasing decision based on how good your code is.
This just isn't true. There's a whole process in purchasing software, buying a company or signing a large contract called "due diligence". Due diligence means to varying degree checking how secure the product is, the company's processes, any security risks, responsiveness to bugfixes, CVEs and so on.
AI is going to absolutely fail any kind of due diligence.
There's a little thing called the halting problem, which in this context basically means there's no way to guarantee that the AI will be restricted from doing anything you don't want it to do. An amusing example was an Air Canada chatbot that hallucinated a refund policy that a court said it had to honor [1].
How confident are we going to be that AIs won't leak customer information, steal money from customers and so on? I'm not confident at all.
3. You use the act of writing code to think about a given problem, and by so doing not only produce a better code, but also gain a deeper understanding of the problem itself - in combination a better product all around.
> No one has ever made a purchasing decision based on how good your code is. The general public does not care about anything other than the capabilities and limitations of your product.
The capabilities and limitations of your product are defined in part by how good the code is. If you write a buggy mess (whether you write it yourself or vibe code it), people aren't going to tolerate that unless your software has no competitors doing better. People very much do care about the results that good code provides, even if they don't care about the code as an end in itself.
While creating good software is as much of an art as it is a science, this is not why the craft is important. It is because people who pay attention to detail and put care into their work undoubtedly create better products. This is true in all industries, not just in IT.
The question is how much does the market value this, and how much it should value it.
For one-off scripts and software built for personal use, it doesn't matter. Go nuts. Move fast and break things.
But the quality requirement scales proportionally with how many people use and rely on the software. And not just users, but developers. Subjective properties like maintainability become very important if more than one developer needs to work on the codebase. This is true even for LLMs, which can often make a larger mess if the existing code is not in good shape.
To be clear, I don't think LLMs inevitably produce poor quality software. They can certainly be steered in a good direction. But that also requires an expert at the wheel to provide good guidance, which IME often takes as much, if not more, work than doing it by hand.
So all this talk about these new tools replacing the craft of programming is overblown. What they're doing, and will continue to do unless some fundamental breakthrough is reached, is make the creation of poor quality software very accessible. This is not the fault of the tools, but of the humans who use them. And this should concern everyone.
> The general public does not care about anything other than the capabilities and limitations of your product.
It's absolutely asinine to say the general public doesn't care about the quality and experience of using software. People care enough that Microsoft's Windows director sent out a very tail-between-legs apology letter due to the backlash.
It's as it always has been, balancing quality and features is... well, a balance and matters.
The public doesn't care about the code itself, they absolutely care about the quality and experience of using the software.
But you can have an extremely well designed product that functions flawlessly from the perspective of the user, but under the hood it's all spaghetti code.
My point was that consuming software as a user of the product can be quite different from the experience of writing that software.
Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'd just be careful to separate code elegance from product experience, since they are different. Related? Yeah, sure. But they're not the same thing.
That's true, but excel '98 would still cover probably 80% of users use cases.
A lot, and I mean a lot, of software work is trying to justify its existence by constantly playing and toying with a product that worked for everyone in version 1.0, whether it be to justify a job or to justify charging customers $$ per month to "keep current".
> Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'm sure that's the case in basically everything, it sorta doesn't matter (until it does) if it's cordoned off into a corner that doesn't change and nominally works from the outside perspective.
But those cases are usually isolated, if they aren't it usually quickly becomes noticeable to the user in one way or another, and I think that's where these new tools give the illusion of faster velocity.
If it's truly all spaghetti underneath, the ability to make changes nosedives.
I have yet to meet anyone whose problem with AI is that the code is not aesthetically pleasing, but that would actually be an indicator to me that people are using these things responsibly.
My own two cents: there's an inherent tension with assistants and agents as productivity tools. The more you "let them rip", the higher the potential productivity benefits. And the less you will understand the outputs, or even if they built the "correct thing", which in many cases is something you can only crystalize an understanding about by doing the thing.
So I'm happy for all the people who don't care about code quality in terms of its aesthetic properties and are really enjoying the AI era; that's great. But if your workload is not shifting from write-heavy to read-heavy, you will inevitably be responsible for a major outage or quality issue. And more so, anyone like this should ask why anyone should feel the need to employ you for your services in the future, since your job amounts to "telling the LLM what to do and accepting its output uncritically".
>But if your workload is not shifting from write-heavy to read-heavy, you inevitably will be responsible for a major outage or quality issue.
I think that's actually a good way to look at it. I use AI to help produce code in my day to day, but I'm still taking quite a while to produce features and a lot of it is because of that. I'm spending most of my time reading code, adjusting specs, and general design work even if I'm not writing code myself.
There's no free lunch here, the workflow is just different.
Facebook.com is a monstrosity though, and their mobile apps as well are slow and often broken. And the younger generations are using other networks, Facebook is in trouble.
> You treat your code as a means to an end to make a product for a user.
It isn’t that, though; the “end” here is making money, not building products for users. Typically, people who are making products for users care about the craft.
If the means-to-end people could type words into a box and get money out the other side, they would prefer to deal with that than products or users.
Thats why ai slop is so prevalent — the people putting it out there don’t care about the quality of their output or how it’s used by people, as long as it juices their favorite metrics - views, likes, subscribes, ad revenue whatever. Products and users are not in scope.
I don't think all means-to-end people are just in it for money, I'll use the anecdote of myself. My team is working on a CAD for drug discovery and the goal isn't to just siphon money from people, the goal is legitimately to improve computational modeling of drug interactions with targets.
With that in mind, I care about the quality of the code insofar as it lets me achieve that goal. If I vibe coded a bunch of incoherent garbage into the platform, it would help me ship faster but it would undermine my goal of building this tool since it wouldn't produce reliable or useful models.
I do think there's a huge problem with a subset of means-to-end people just cranking out slop, but it's not fair to categorize everyone in that camp this way, y'know?
Meanwhile, the complexity of the average piece of software is drastically increasing. ... The stats suggest that devs are shipping more code with coding agents. The consequences may already be visible: analysis of vendor status pages [3] shows outages have steadily increased since 2022, suggesting software is becoming more brittle.
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
The authors updated their title so I've updated it here too. Previous title was "Good code will still win" - but it was leading to too much superficial discussion based entirely on the phrase "good code" in the title. It's amazing how titles do that!
(Confession: "good code will still win" was my suggestion- IIRC they originally had "Is AI slop the future?". You win some you lose some.)
Why build each new airplane with the care and precision of a Rolls-Royce? In the early 1970s, Kelly Johnson and I [Ben Rich] had dinner in Los Angeles with the great Soviet aerodynamicist Alexander Tupolev, designer of their backfire Bear bomber. 'You Americans build airplanes like a Rolex watch,' he told us. 'Knock it off the night table and it stops ticking. We build airplanes like a cheap alarm clock. But knock it off the table and still it wakes you up.'...The Soviets, he explained, built brute-force machines that could withstand awful weather and primitive landing fields. Everything was ruthlessly sacrificed to cut costs, including pilot safety.
We don't need to be ruthless to save costs, but why build the luxury model when the Chevy would do just as well? Build it right the first time, but don't build it to last forever. - Ben Rich in Skunk Works
That's an interesting story, but not a great analogy for software.
If a technology to build airplanes quickly and cheaply existed and was made available to everyone, even to people with no aeronautical engineering experience, flying would be a much scarier ordeal than it already is.
There are good reasons for the strict safety and maintenance standards of the aviation industry. We've seen what can happen if they're not followed.
The fact that the software industry doesn't have similar guardrails is not something to celebrate. Unleashing technology that allows anyone to create software without understanding or even caring about good development practices and conventions is fundamentally a bad idea.
> Markets will not reward slop in coding, in the long-term.
Forgive my cynical take, but we're currently experiencing a market that doesn't appear to be rewarding anything specific in the long-term, as huge sums of money are traded on a minute-to-minute, day-to-day, and week-to-week basis. There's an explosion of uncertainty in today's markets and complete collapse of long-range planning echoing at many levels in society--particularly at the highest levels of governments. So I kind of don't want to hear about markets are going to reward.
But what exactly is "good code" (presumably the opposite of slop)?
I'd say that good code is terse, robust, suits its function, yet admits just the right amount of testability, performance, and customizability for the future. Good code anticipates change well. That means that if it has one job, it does that one job well, and not twenty others. If the job is going to stay the same, the code is going to stay the same. Good systems are made from well-factored pieces of code that have proper jobs and do their proper jobs without taking on jobs they shouldn't.
I for one think that AI code is going to reflect its training. If it's trained on just a random selection of everything out there, it's probably going to be pretty mediocre and full of bugs.
The economic angle is not as clear cut as the authors seem to think.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebases.
Even with enough money, you may not be able to attract/keep talented engineers who are willing to put up with such a work environment (the codebase itself, and probably the culture that led to its state) and who want to ship well built/designed software but are slowed down by the mess.
My issue with this is that a simple design can set you up for failure if you don’t foresee and account for future requirements.
Every abstraction adds some complexity. So maybe the PoC skips all abstractions. Then we need to add a variant to something. Well, a single if/else is simpler than an abstract base class with two concrete implementations. Adding the 3rd as another if clause is simpler than refactoring all of them to an ABC structure. And so on.
“Simple” is relative. Investing in a little complexity now can save your ass later. Weighing this decision takes skill and experience
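The escalation described above can be sketched with a hypothetical report-export example (Python for illustration; the names are invented): one variant starts as a single branch, the other pays the abstraction cost up front so a third format slots in without touching existing code.

```python
from abc import ABC, abstractmethod
import json

# Variant 1: the simple form. One if/else is cheaper than an abstraction
# while there are only two cases.
def export_report_v1(data, fmt):
    if fmt == "csv":
        return ",".join(data)
    else:  # "tsv"
        return "\t".join(data)

# Variant 2: once a third (and fourth...) format arrives, the refactor to
# an abstract base class pays off: each format is isolated and testable,
# and adding one means adding a class, not editing a growing if-chain.
class Exporter(ABC):
    @abstractmethod
    def export(self, data): ...

class CsvExporter(Exporter):
    def export(self, data):
        return ",".join(data)

class TsvExporter(Exporter):
    def export(self, data):
        return "\t".join(data)

class JsonExporter(Exporter):
    def export(self, data):
        return json.dumps(list(data))

EXPORTERS = {"csv": CsvExporter(), "tsv": TsvExporter(), "json": JsonExporter()}

def export_report_v2(data, fmt):
    return EXPORTERS[fmt].export(data)

print(export_report_v1(["a", "b"], "csv"))   # a,b
print(export_report_v2(["a", "b"], "json"))  # ["a", "b"]
```

Neither variant is "right" in the abstract; which one is the good investment depends on how many variants you actually expect, which is the skill-and-experience judgment the comment describes.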
Agreed on the economics side. Clean code saves you time and money whether a human or AI wrote it. That part doesn't change.
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
I wish it were true, but it sounds like copium. I bet garment makers or artisan woodworkers said the same when big cheap retail chains came. I bet they said "people value quality" and so on, but in the end, outside of a group of people with principles, everyone else floods their home with H&M and crap from Temu.
So yeah, good code might win among a small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.
For a lot of companies their entire income entirely depends on their uptime.
It might be fine if your HR software isn't approving holiday requests. But if your checkout breaks and there's no human who can pick apart the mess, you lose your entire income for a week, and that might be the end of the business.
Engineers don't build the cheapest bridge that just barely won't fail. They build the cheapest bridge that satisfies thousands of pages of regulatory requirements maintained and enforced by dozens of different government entities. Those regulations range from safety, to aesthetic, to environmental, to economic, to arcane.
Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse. And no care for the impact on any stakeholder other than the one paying them.
> Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse.
I don't know any real (i.e. non-software) engineers, but I would love to ask them whether what you said is true. For years now, I've been convinced that we should've stuck with calling ourselves "software developers", rather than trying to crib the respectability of engineering without understanding what makes that discipline respectable.
Our toxic little industry would benefit a lot from looking at other fields, like medicine, and taking steps to become more responsible for the outcomes of our work.
Civil engineers are licensed and carry insurance. When software developers have similar requirements, then I'll call them engineers. In some fields like avionics, the certification regime is a good proxy for licensing -- I think we could extend the "engineer" title to those developers too.
Such a world still has room for unlicensed developers too -- I'd certainly be among them.
> What if we built things that are meant to last? Would the world be better for it?
You'd have a better bridge, at the expense of other things, like hospitals or roads. If people choose good-enough bridges, that shows there is something else they value more.
It might very well be that building and maintaining a bridge for 100 years costs three or four times as much as building and maintaining one that lasts 50 years. If demolition doesn't cost nearly as much as the bridge itself, then in the long run replacing the bridge every 50 years is cheaper.
On the whole it is an entirely reasonable optimisation problem: what is the best lifespan of a single bridge over the desired total lifespan?
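The tradeoff described here can be sketched as a toy cost model (the function and numbers are invented for illustration, not real civil engineering economics):

```python
import math

def cost_over_horizon(build_cost, lifespan_years, horizon_years, demolition_cost=0.0):
    # Total cost of covering a planning horizon with bridges of a given
    # lifespan: each expired bridge is demolished and a new one built.
    rebuilds = math.ceil(horizon_years / lifespan_years)
    return rebuilds * build_cost + (rebuilds - 1) * demolition_cost
```

With the numbers above, two 50-year bridges at cost 1.0 each (total 2.0) beat one 100-year bridge at cost 3.0, unless demolition costs tip the balance the other way.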
Certainly, "enough" is doing a lot of work and things get blurry, but I think "good enough" is meant to capture some of that. Over building is also a problem. It isn't strictly true that building longer lived things is cheaper over time either, it obviously depends on the specific things getting compared. And if you go 100 years rather than 25 years, you'll have fewer chances to adjust and optimize for changes to the context, new technology, changing goals, or more efficient (including cost saving) methods.
Obviously, there's a way to do both poorly too. We can make expensive things that don't last. I think a large chunk of gripes about things that don't last are really about (1) not getting the upside of the tradeoff, cheaper (in both senses) more flexible solutions, and (2) just getting bad quality period.
> There is nothing special about roman concrete compared to modern concrete. Modern concrete is much better
Roman concrete is special because it is much more self-healing than modern concrete, and thus more durable.
However, that comes at the cost of being much less strong; it also sets much more slowly and requires rare ingredients. Roman concrete also doesn't play nice with steel reinforcement.
I think you are incorrect. Compared to modern concrete, Roman concrete was more poorly cured at the time of pouring. So when it began to weather and crack, the uncured concrete would mix with water and cure. Thus it was somewhat self-healing.
Modern concrete is more uniform in mix, and thus it doesn't leave uncured portions.
We have modern architecture crumbling already, less than 100 years after it was built. I know engineering is about tradeoffs, but we should also acknowledge that, as a society, we have become very used to treating direct economic cost as the main, and sometimes only, metric.
That is definitely not the free market at play. It's the legislative body at play.
Engineers (real ones, not software) face consequences when their work falls apart prematurely. Doubly so when it kills someone. They lose their job, their license, and they can never work in the field again.
That's why it's rare for buildings to collapse. But software collapsing is just another Monday. At best the software firm will get fined when they kill someone, but the ICs will never be held responsible.
This only works when the barrier to suing is low enough, when the law is applied impartially and without corruption, and when sanctions are meaningful enough, potentially company-ending, to discourage them.
The moment you remove one of these factors, the free market becomes dangerous for the people living in it.
Safety factors account for uncertainty: uncertainty in the quality of materials, of workmanship, of unaccounted-for sources of error. Uncertainty in whether the maximum load in the spec will actually be respected.
Without a safety factor, that uncertainty means that, some of the time, some of your bridges will fall down.
Um, ackshually, real civil/structural engineers—at least, those in the global north—design bridges, roads, and buildings with huge tolerances (multiple times the expected loads) because unexpected shit happens and you don't want to suffer catastrophic failure when conditions are just outside of your typical use case and have a Tacoma Narrows Bridge type situation on your hands.
We might be arguing semantics, but safety margins aren't considered 'overbuilding'; they're part of the bare minimum requirements for a bridge to stand. They aren't there "just in case"; they are there because it is known for a fact that bridges in the real world will experience degradation and overloading.
If you build a bridge that is rated to carry 100k lbs of weight, and you build it to hold exactly 100k lbs, you didn't build it to barely meet spec -- you underbuilt it -- because overloading is a known condition that does happen to bridges.
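In code terms, the point about rated load versus built capacity looks like this (the 1.5 factor is purely illustrative; real safety factors vary by design code and load type):

```python
def required_capacity(rated_load_lbs, safety_factor=1.5):
    # A bridge "rated" for some load must be built to carry more than that:
    # the safety factor absorbs overloading, material variance, and wear,
    # which are known conditions rather than "just in case" padding.
    return rated_load_lbs * safety_factor
```

A bridge rated for 100,000 lbs would then be built to carry 150,000 lbs; building it to hold exactly 100,000 lbs is, per the comment above, underbuilding.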
People are not emotionally ready to accept that certain layers of abstraction don’t need as much care and effort if they can be automated.
We are at the point where a single class can be dirty but the API of the classes should be clean. There’s no point reviewing the internals of a class anymore. I’m more or less sure that they would work as intended.
The next step is the microservice itself: the API of the microservice should be clean, but the internals may be whatever. We are 10% of the way there.
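The "clean API, dirty internals" idea can be illustrated with a toy class (everything here is invented for illustration): the documented surface is the contract, and callers can review through that contract alone.

```python
class SlugCache:
    """Clean, documented surface; deliberately scruffy internals."""

    def __init__(self):
        self._d = {}

    def slugify(self, title: str) -> str:
        """Return a lowercase, dash-separated, URL-safe slug for `title`."""
        # Messy internals: as long as the contract above holds, callers
        # (and increasingly reviewers) never need to read this part.
        if title in self._d:
            return self._d[title]
        s = "".join(c if c.isalnum() else " " for c in title.lower())
        s = "-".join(s.split())
        self._d[title] = s
        return s
```

The same argument, one level up, is the microservice whose HTTP API is the contract and whose internals are whatever the agent generated.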
No, unfortunately. In a past life, in response to an uptime crisis, I drove a multi-quarter company-wide initiative to optimize performance and efficiency, and we still did not manage to change the company culture regarding performance.
If it does not move any metrics that execs care about, it doesn't matter.
The industry adage has been "engineer time is much more expensive than machine time," which has been used to excuse way too much bloated and non-performant code shipped to production. However, I think AI can actually change things for the better. Firstly, IME it tends to generate algorithmically efficient code by default, and generally only fails to do so if it lacks the necessary context (e.g. not knowing that an input is sorted.)
More importantly though, now engineer time is machine time. There is now very little excuse to avoid extensive refactoring to do things "the right way."
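The sorted-input example is worth making concrete: the same membership test is O(n) without that context and O(log n) with it, which is exactly the kind of information an assistant has to be told (function names here are invented):

```python
from bisect import bisect_left

def contains_unsorted(xs, target):
    # O(n) linear scan: the only safe option when nothing is known about xs.
    return target in xs

def contains_sorted(xs, target):
    # O(log n) binary search: correct only under the assumption xs is sorted.
    i = bisect_left(xs, target)
    return i < len(xs) and xs[i] == target
```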
Performance can be a direct target in a feedback loop and optimised away. That's the easy part. Taking an idea and poof-ing a working implementation is the hard part.
The current iteration of models doesn't write clean code by itself, but future ones will. The problem, in my view, is extremely similar to agentic/vibe coding: instead of optimizing for results, you can optimize for clean code. The demand is there; clean code will lead to fewer bugs, faster running code, and fewer tokens used (thus less cost) when understanding the code from a fresh session. It makes sense that the first generation of vibe coding focused on results first and not clean code. Am I missing something?
Getting AI tools to produce better code is not that hard; if you know how to do things right yourself. Basically, all you need to do is ask it to. Ask it to follow SOLID principles. Ask it to stick to guard rails. Ask it to eliminate code duplication, write tests, harden code bases, etc. It will do it. Most people just don't know what to ask for. Or even that they should be asking for it. Or how to ask for it. A lot of people getting messy results need to look in the mirror.
I'm using AI for coding just like everybody else, more or less exclusively for the past few months. It's sometimes frustrating to get things done the right way, but mostly I get the job done. I've been coding since the nineties, so I know how to do things right and what doing it wrong looks like. If I catch my AI coding tools doing it wrong, I tell them to fix it and then adjust skills and guard rails to prevent them going off the rails.
AI tools actually seem to self correct when used in a nice code base. If there are tests, they'll just write more tests without needing to be prompted. If there is documentation, that gets updated along with the code. When you start with a vibe coded mess it can escalate quickly unless you make it clean up the mess. Sometimes the tests it adds are a bit meh and you have to tell it off by "add some tests for the non happy path cases, make sure to cover all possible exceptions, etc.". You can actually ask for a critical code review and then tell it "fix all of that". Sometimes it's as simple as that.
"AI will write good code because it is economically advantageous to do so. Per our definition of good code, good code is easy to understand and modify from the reduced complexity."
---------
This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't necessarily mean efforts to make this so will succeed. LLMs might never become proficient at producing "good" code, for the same reasons that LLMs perform poorly when trained on their own output. A heuristic prediction of what "good" code for a given solution looks like is likely always going to be less "good" than code produced by skilled and deliberate human design.
Just as there is a place for fast and dirty human code, there will be a place for slop code. Likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.
And I have thus far made a large portion of my living off of fixing bad code "later".
… but lately, the rate at which some dev with an LLM can just churn out new bad code has just shot through the roof. I can still be struggling to pick apart the last piece of slop, trying to figure out "okay, if someone with a brain had written this, what would the inputs & outputs be?" and "what is it that production actually needs and relies on, and what causes problems, and how can we get the code from point A to point B without more outages"; but in the meantime, someone has spit out 8 more modules of the same "quality".
So sure, the basic tenets haven't changed, but these days I feel like I'm drowning in outages & bugs.
The irony is that "good" code and good documentation have top priority now in most orgs. For decades the best developers have been screaming about good code and documentation but leadership couldn't give a fuck. But now that their favorite nepobaby is here, now it's the most important thing all of a sudden.
I was told by an exec...once a company or technology implements something and gets mindshare, the community (including companies) moves on.
Competition is essentially dead for that segment given there is always outward growth.
With that being said, AI enables smaller players to implement their visions with enough completeness to be viable. And with a hands off approach to code, the underlying technology mindshare does not matter as much.
If that were true first movers would always win. Hotmail came before Gmail. Yahoo came before Google. Myspace came before Facebook. Et cetera. Of course it is best to avoid competition by creating a new (sub)category but category kings can change.
And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out, and we are now in the phase of the biggest centralized reinforcement loop that has ever existed: everyone using AI for writing and giving it feedback.
We are, thanks to LLMs, now able to codify human skills, and while it's not clear how fast this is going, I no longer believe that my skills are unique.
A small hobby application cost me $11 over the weekend and took me 3 hours to 'build', while I would probably have needed 2-3 days for it.
And we are still limited by resources and normal human progress. Claude teams are still experimental, and things like gastown or orchestrator architectures are not that established and consume quite a lot of tokens.
We have not even had time yet to build optimized models. Claude Code still understands A LOT of languages (human languages and programming languages).
I don't think anyone really cares about code quality. I do, but I'm a software engineer. Everyone around me doesn't. Business doesn't. Even fellow co-workers don't care about, or don't understand, good code.
Even glaring things like the GTA Online startup bug went unfound for ages: there was quadratic algorithmic complexity in loading a config file, which made startup take forever, until someone outside Rockstar found it and Rockstar fixed it.
We also have plenty of code where it doesn't matter as long as it works. Offline apps, scripts, research scripts, etc.
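For the curious, the GTA Online loading bug was of this general shape: a parser that rescans and copies the remaining input for every token, turning an O(n) job into O(n²). (The widely reported culprit was repeated sscanf/strlen calls in C; this Python sketch is only an illustration, not Rockstar's code.)

```python
def parse_quadratic(s):
    # Each iteration searches for the delimiter and then copies the whole
    # remainder of the string: O(n) work per token, O(n^2) overall.
    items = []
    while s:
        i = s.find(",")
        if i == -1:
            items.append(s)
            break
        items.append(s[:i])
        s = s[i + 1:]  # copies everything that's left, every single time
    return items

def parse_linear(s):
    # One pass over the input: O(n), same result.
    return s.split(",") if s else []
```

Both produce identical output on small inputs; the difference only shows up, as in the GTA case, once the config file grows to megabytes.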
Maybe the models get better on the code side but I thought slop referred to any AI generated text or imagery? It’s hard to see how most of the internet’s written words won’t be slop, especially when there’s no binding compiler contract like in code.
My prediction is that we'll start to see a whole new layer of abstraction to help us write high quality code with LLMs - meaning new programming languages, new toolchains, stricter typechecking, in-built feedback loops etc.
The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.
A certain big PaaS I won't name here has had lots of clusterfucks in the last 3 months. The CEO is extremely bought into AI and "code not mattering anymore". He's also constantly talking about the meteoric growth because Claude and other AI providers are using railway as default suggestions.
The toll has come to collect and now a lot of real production users are looking at alternatives.
The reality is the market is rewarding slop and "velocity now". There will come a time where it will reward quality again.
> economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.
AI is a rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else, because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
Everyone's talking about AI, but let's posit that today's coding models are as good as a SDE on the performance/experience distribution, maybe in the lower quartile, but can we also posit that this will improve and over time the coding models equal and then better the median software engineer? It's not like SDE's are not also churning out poor quality code "it worked for me", "what tests?" "O(what?)", etc, we've all worked with them.
The difference is that over the years, while tooling and process have dramatically improved, SDEs have not improved much; junior engineers still make the same mistakes. The assumption (not yet proven, but the whole bubble is based on it) is that models will continue to improve, eventually leaving behind human SDEs (and other domain people: lawyers, doctors, etc.). If this happens, these arguments I keep seeing on HN about AI slop will all be moot.
Assuming AI continues to improve, the cost and speed of software development will dramatically drop. I saw a comment yesterday that predicted that AI will just plateau and everyone will go back to vim and Makefiles (paraphrasing).
Maybe, I don't know, but all these people saying AI is slop, Ra Ra Humans is just wishful thinking. Let's admit it, we don't know how it will play out. There's people like Dario and Sam who naturally are cheerleading for AI, then there's the HN collective who hate every new release of MacOS and every AI model, just on principle! I understand the fear, anyone who's ever read Flora Thompson's Lark Rise to Candleford will see the parallels, things are changing, AI is the plough, the railway, the transistor...
I'm tired of the debate. My experience is that AI (Gemini for me) is awesome. We all have gaps in our knowledge/skills (but not Gemini). AI helps hardcore backend engineers throw together a Gradio demo in minutes to make their point, helps junior devs review their code before making a PR, helps Product put together presentations. I could go on and on; those that don't see value in AI are doing it wrong.
As Taylor Swift said "It's me, hi, I'm the problem, it's me" - take that to heart and learn to leverage the tools, stop whining please, it's embarrassing to the whole software industry.
None of this is true. Models will soon scale to several million tokens of context. That, combined with the combined experience of millions of feedback cycles, will make software a solved problem for machines, even as humans remain dumb. Yes, even complex software. Complex software is actually better because it is, generally, faster with more features. It’s smarter. Like a jet fighter, the more complex it is, the more capable it is.
You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
> obviously created by people the care deeply about the quality of the product they produce
This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
> There is no substitute for high quality work.
You're right because there really isn't a consistent definition of what "high quality" software work looks like.
> This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
Those first three are "enterprise" or B2B applications, where the person buying the software is almost never one of the people actually using the software. This disconnect means that the person making the buying decision cannot meaningfully judge the quality of any given piece of software they are evaluating beyond a surface level (where slick demos can paper over huge quality issues) since they do not know how it is actually used or what problems the actual users regularly encounter.
Which might be true, but is totally irrelevant to the OP's comment.
Users care about quality, even if the people buying the software do not. You can't just say "well, the market doesn't care about quality" when the market incentives are broken for a particular type of software. When the market incentives are aligned between users and purchasers (such as when they are the same person), quality tends to become very important for the market viability of software (see Windows in the consumer OS market, which is perceptibly losing share to MacOS and Linux following a sustained decline in quality over the last several years).
SAP, Salesforce, Booking.com… all awful products. We use them because monopolies.
I couldn't book travel at a previous company because my address included a `.`, which passed their validation. Awful, awful software. I wouldn't expect slop code to improve it.
> You don't have to pick on camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
Exactly, thank you for putting it like that.
So far it’s been my observation that it’s only the people who think like the OP who put the situation in the terms they did. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, it’s just different, none of them is better”, it lends legitimacy to their position.
For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.
> People may not know that the reason they like your product is because the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people the care deeply about the quality of the product they produce (you know, the kind that acutally read bug reports, and fix problems quickly).
I would classify all of those as "capabilities and limitations of your product"
I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.), and in that sense I agree no customer who's just using the product actually cares about that.
Another definition of "good code" is probably "code that meets the requirements without unexpected behavior", and in that sense of course end users care about good code. But you could give me two black boxes that act the same externally, one written as a single line with single-character variables and the other written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
>but you could give me two black boxes that act the same externally, one written as a single line, single-character variables, etc., and another written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
The reality of software products is that they are, in nearly all cases, developed and maintained over time, though--and whenever that's the case, the black box metaphor fails. It's an idealization that only works for single moments in time, and yet software development typically extends through the entire period during which a product has users.
> I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.)
The above is also why these properties you've mentioned shouldn't be considered aesthetic only: the software's likelihood of having tractable bugs, manageable performance concerns, or to adapt quickly to the demands of its users and the changing ecosystem it's embedded in are all affected by matters of abstraction selection, code organization, and documentation.
But those aesthetics stem from the need for fewer bugs, better performance, and maintainability. Identifying and defining code smell comes from experience of what does and doesn't work.
> I wouldn't care so long as I wasn't expected to maintain it.
But, if you’re the one putting out that software, of course you will have to maintain it! When your users come back with a bug or a “this flow is too slow,” you will have to wade into the innards (at least until AI can do that without mistakes).
Good abstractions translate directly into how quickly the devs can fix bugs and add new features.
But the thing is that someone has to maintain it. And while beautiful code is not the same as correct code, the first is impactful in getting the second and keeping it.
And most users are not consuming your code. They’re consuming some compiled, transpiled, or minified version of it. But they do have expectations and it’s easier to amend the product if the source code is maintainable.
> The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
Exactly. A lot of devs are optimizing for whether the feature is going to take a day or an hour, but not contemplating that it's going to be out in the wild for 10 years either way. Maybe do it well once.
> but not contemplating that it's going to be out in the wild for 10 years either way
I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.
>I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.
And in almost all of those cases, they'd be wrong.
In my experience the code will, but by year 5 nobody is left who worked on it from inception, and by year 10 nobody knows anybody who did, and during that time it reaches a stage where nobody will ever feel any sense of ownership or care about the code in its entirety again.
I come into work and work on a 20 year old codebase every day, working on slowly modernizing it while preserving the good parts. In my experience, and I've been experimenting with both a lot, LLM-based tools are far worse at this than they are at starting new greenfield projects.
This conversation shows how diverse the field is!
When it comes to professional development, I've almost never worked on a codebase less than 10 years old, and it was always [either silently or overtly] understood that the software we are writing is a project that's going to effectively live forever. Or at least until the company is no longer recognizable from what it is today. It just seems wild and unbelievable to me, to go to work at a company and know that your code is going to be compiled, sent off to customers, and then nobody is ever going to touch it again. Where the product is so throwaway that you're going to work on it for about a year and then start another greenfield codebase. Yet there are companies that operate that way!
It's important to know which type of repo/project you are in and hire/code accordingly.
I've seen mismatch in each direction..
How can you possibly know which type of repo you're in ahead of time? My experience is that "temporary" code frequently becomes permanent and I've also been on the other side of those decisions 40 years later.
Unless you’re producing demos for sales presentation (internally or externally), it’s always worth it to produce something good. Bad code will quickly slow you down and it will be a never ending parade of bug tickets.
indeed, being on-call cleanses many developers of slopulist habits
If a product looks pretty and seems to work great at first experience, but is really an unmaintainable mess under the hood, has an unvetted dependency graph, has a poorly thought through architecture that no one understands, perhaps is unsustainable due to a flawed business model, etc., to me it simply suffers from bad design[0], which will be felt sooner or later. If I know this—which is, admittedly, sometimes hard to know (especially in case of software products compared to physical artifacts)—I would, given alternatives, make the choice to not be a customer.
In other words, I would, when possible, absolutely make a purchasing decision based on how good the code is (or based on how good I estimate the code to be), among other things.
[0] The concept of design is often misunderstood. First, obviously, when it’s classified as “how the thing looks”; then, perhaps less obviously, when it’s classified as “how the thing works”. A classification I am arriving at is, roughly, “how the thing works over time”.
Demos might be nice and flashy, but eventually you have to have a generally working product. Too many issues with too many annoyances, and eventually the users of even enterprise software will be heard. Especially so if there is some actual loss of money or data that is not corrected very quickly.
In the end, software is a means to an end. And if you do not get to that end because the software is crap, it will be replaced, hopefully by someone else.
The history of technology is filled with examples where, between two competing analogous products, the inferior one wins. It does not matter if it is only slightly inferior or extraordinarily inferior; either way, it wins out. It's often difficult to come up with counter-examples. Why is this? Economic pressure. "Inferior" costs less. Sometimes the savings are passed on to the customer, and they choose the inferior product. Other times the greedy corporate types keep all of it (and win simply because they outmarket the competitor). It does not matter.
If there are people who, on principle, demand the superior product then those people simply aren't numerous enough to matter in the long run. I might be one of those people myself, I think.
And yet somehow the shittiest buggiest software ends up being the most popular.
Look through the list of top apps in mobile app stores, most used desktop apps, websites, SaaS, and all other popular/profitable software in general and tell me where you see users rewarding quality over features and speed of execution.
You have it backwards. Excellent software becomes popular, and then becomes enshittified later once it already has users. Often there is a monopoly/network effect that allows them to degrade the quality of their software once they already have users, because the value in their offering becomes tied to how many people are using it, so even a technically superior newcomer won't be able to displace it (eg. Youtube is dogshit now but all of the content creators are there, and all of the viewers are there, so content creators won't create content for a better platform with no viewers and viewers won't visit a better platform with no content).
If your goal is to break into the market with software that is dogshit from day 1, you're just going to be one of millions of people failing at their get-rich-quick scheme.
I don’t think this search will really reveal speed of execution and feature set rewarded over quality either.
> The longer your product exists the more important the quality of the code will be
Having worked on many, many old and important code bases, I can tell you the code quality is absolutely trash.
>> No one has ever made a purchasing decision based on how good your code is.
> absolutely false.
Actually, you are both correct.
Nobody makes a purchasing decision based on code quality.
But they may later regret a purchasing decision based on code quality.
If code is craft and minimalism is hip, then why Ruby, and Python, and Go, and... when it's all electrical state in machines?
That's the minimalism that's been lost.
That's why I find the group 2 arguments disingenuous. Emotional appeal to conservatism, which conveniently also props up their career.
Why all those parsers and package systems when what's really needed is dials min-max geometric functions from grand theft auto geometry to tax returns?
Optimization can be (and will be) engineered into the machine through power regulation.
There's way too many appeals to nostalgia emanating from the high tech crowd. Laundering economic anxiety through appeals to conservatism.
Give me an etch a sketch to shape the geometry of. Not another syntax art parser.
Sloppy technical design ends up manifesting in bugs, experiential jank, and instability.
There are some types of software (websites especially) where a bit of jank is generally acceptable. Sessions are relatively short, and your users can reload the webpage if things stop working. The technical rigor of these codebases tends to be poor, but it's generally fine.
Then there's software which is very sensitive to issues (e.g. a multi-player game server, a driver, or anything that's highly concurrent). The technical rigor here needs to be very high, because a single mistake can be devastating. This type of software attracts people who want to take pride in their code, because the quality really does matter.
I think these people are feeling threatened by LLMs. Not so much because an LLM is going to outperform them, but because an LLM will (currently) make poor technical design decisions that will eventually add up to the ruin of high-rigor software.
> the quality really does matter.
If this level of quality/rigor does matter for something like a game, do you think the market will enforce this? If low rigor leads to a poor product, won't it sell less than a good product in this market? Shouldn't the market just naturally weed out the AI slop over time, assuming it's true that "quality really does matter"?
Or were you thinking about "matter" in some other sense than business/product success?
A lot of software is forced upon people against their will, and purchased by people who will never use it.
This obscures things in favour of the “quality/performance doesn’t matter” argument.
I am, for example, forced to use a variety of microslop and zoom products. They are unequivocally garbage. Given the option, I would not use them. However, my employer has saddled us with them for reasons, and we must now deal with it.
Yes, I think the market will enforce this. A bit. Eventually. But the time horizon is long, and crummy software with a strong business moat can out-compete great software.
Look at Windows. It's objectively not been a good product for a long time. Its usage is almost entirely down to its moat.
Yes, both the article and GP are making that exact point about it mattering from a customer's perspective.
Even if you're confident you can stop your own company from shipping terrible products, I worry the trend is broad enough and hard enough to audit that the market will enforce it by pulling back on all purchases of such software. If gamers learn that new multiplayer games are just always laggy these days, or CTOs learn that new databases are always less reliable, it's not so easy to convince them that your product is different than the rest.
Yes, there's every reason to believe the market will weed out the AI slop. The problem is, just like with stocks, the market can stay irrational longer than you can stay solvent. While we all wait for executives to learn that code rigor matters, we still have bills to pay. After a year when they start trying to hire people to clean up their mess, we'll be the ones having to shovel a whole new level of shit; and the choice will be between that and starving.
As someone who also falls into camp one, and absolutely loves that we have thinking computers now, I can also recognize that we're angling towards a world of hurt over the next few years while a bunch of people in power have to learn hard lessons we'll all suffer for.
I keep seeing this idea repeated, but I don't accept the dichotomy between those who care about 'crafting code' and those who care about 'building products' as though they are opposite points on a spectrum.
To me, the entire point of crafting good code is building a product with care in the detail. They're inseparable.
I don't think I've ever in my life met someone who cared a lot about code and technology who didn't also care immensely about detail, and design, and craft in what they were building. The two are different expressions of the same quality in a person, from what I've seen.
No-one comes out of the womb caring about code quality. People learn to care about the craft precisely because internal quality -- cohesion, modularity, robustness -- leads to external quality (correctness, speed, evolvability).
People who care about code quality are not artists who want to paint on the company's dime. They are people who care about shipping a product deeply enough to make sure that doing so is a pleasant experience both for themselves and their colleagues, and also have the maturity to do a little bit more thinking today, so that next week they can make better decisions without thinking, so that they don't get called at 4 AM the night after launch for some emergency debugging of an issue that that really should have been impossible if it was properly designed.
> No one has ever made a purchasing decision based on how good your code is.
Usually they don't get to see the internals of the product, but they can make inferences based on its externals. You've heard plenty of products called a "vibe-coded piece of crap" this year, even if they're not open source.
But also, this is just not true. Code quality is a factor in lots of purchasing decisions.
When buying open source products, having your own team check out the repo is incredibly common. If there are glaring signs in the first 5 minutes that it was hacked together, your chances of getting the sale have gone way down. In the largest deals, inspecting the source code can be part of due diligence.
It was for an investment decision rather than for a purchase, but I've been personally hired to do some "emergency API design" so a company can show that it both has the thing being designed, and that their design is good.
This is like when people decided that everyone was either "introvert" or "extrovert" and then everyone started making decisions about how to live their life based on this extremely reductive dichotomy.
There are products that are made better when the code itself is better. I would argue that the vast majority of products are expected to be reliable, so it would make sense that reliable code makes for better product. That's not being a code craftsman, it's being a good product designer and depending on your industry, sometimes even being a good businessman. Or, again, depending on your industry, not being callous about destroying people's lives in the various ways that bad code can.
I’m an introvert. I make sure that all my “welcome to the company” presentations are in green. I am also an extrovert in that I add more green than required.
I respect your opinion and especially your honesty.
And at the same time I hope that you will some day be forced to maintain a project written by someone else with that mindset. Cruel, yes. But unfortunately schadenfreude is a real thing - I must be honest too.
I have gotten too old for ship-now, ask-questions-later projects.
I'm in camp 1 too. I've maintained projects developed with that mindset. It's fine! Your job is to make the thing work, not take on its quality as part of your personal identity.
If it's harder to work with, it's harder to work with, it's not the end of the world. At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.
I think camp 2 would rather see one beautiful thing than ten useful things.
I think camp 1 would rather see ten useless things than one useful thing.
I think I fall in camp 1.5 (neither camp 1 nor camp 2): I can see value in prototyping (with AI) and sometimes make quick scripts when I need them, but long term I would like to grow with an idea and build something genuinely nice from those prototypes, even manually writing the code. Personally, I've found AI codebases are a hassle to manage and have many bugs, especially in the important parts (@iamcalledrob's message here sums it up brilliantly as well).
> I think camp 2 would rather see one beautiful thing than ten useful things.
Both beautiful and useful are subjective (imo). Steve Jobs adding calligraphy-inspired typography to computer fonts could've been considered a thing of beauty, derived from his personal relationship to calligraphy, but it is also a really useful thing.
It's my personal opinion that some of the most valuable innovations are both useful and beautiful (elegant).
Of course, there are rough hacks sometimes, but those are beautiful in their own way as well. Once again, both beauty and usefulness are subjective.
(If you measure usefulness by profit earned, through a purely capitalistic lens, what happens is that you might do layoffs and degrade customer service to hit that measure, which ultimately reduces the usefulness. Profit is a very lousy measure of usefulness, in my opinion. We all need profit, but doing everything solely for profit also feels a bit greedy to me.)
> At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.
Ah yes, if you aren't shitting code out the door as fast as possible, you're probably not shipping anything at all.
That isn't a fair reading.
Neither is the original assertion. There are thousands of examples of exceptionally well crafted code bases that are used by many. I would posit the Linux kernel as an example, which is arguably the most used piece of software in the world.
Craft, in coding or anything else, exists for a reason. It can bleed over into vain frivolity, but craft helps keep the quality of things high.
Craft often inspires a quasi-religious adherence to fight the ever-present temptation to just cut this one corner here real quick, because is anything really going to go wrong? The problems that come from ignoring craft are often very far-removed from the decisions that cause them, and because of this craft instills a sense of always doing the right thing all the time.
This can definitely go too far, but I think it's a complete misunderstanding to think that craft exists for reasons other than ensuring you produce high-quality products for users. Adherents to craft will often end up caring about the code as end-goal, but that's because this ends up producing better products, in aggregate.
I mostly agree with this. Part of the confusion with the discourse around AI is the fact that "software engineering" can refer to tons of different things. A Next.js app is pretty different from a Kubernetes operator, which is pretty different from a compiler, etc.
I've worked on a project that went over the complexity cliff before LLM coding even existed. It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken, but their use-cases are supported by a Gordian Knot of tech debt that practically cannot be improved without breaking something. It's not about a single bug that an LLM (or human) might introduce. It's about a complete breakdown in velocity and/or reliability, but the product is very mature and still makes money; so abandoning it and starting over is not considered realistic. Eager uptake of tech debt helped fuel the product's rise to popularity, but ultimately turned it into a dead end. It's a tough balancing act. I think a lot of LLM-generated platforms will fall eventually into this trap, but it will take many years.
Trying to describe craftsmanship always brings me back to the Steve Jobs quote:
“When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”
Steve Jobs didn't really know anything about cabinetry, because using plywood / MDF in places where it won't be seen but which would benefit from dimensional stability is absolutely common, and there's no reason it shouldn't be.
Yes, great, so we sell shit. No one buys a ticket because of how safe the 737 Max plane is, or buys a post office franchise based on how good the software Fujitsu sold the post office is, but fuck, let's take some pride in ourselves and try to ship quality work.
Professionally, I've always been in camp #2. The quality of your code at least partially represents you in the eyes of your peers. I imagine this is rapidly changing, but the fact will always remain that readable code that you can reason about is objectively better.
For personal projects, I've been in both camps:
For scripts and one-offs, always #1. Same for prototypes where I'm usually focused on understanding the domain and the shape of the product. I happily trade code quality for time when it's simple, throwaway, or not important.
But for developing a product to release, you want to be able to jump back in even if it's years later.
That said, I'm struggling with this with my newest product. Wavering between the two camps. Enforcing quality takes time that can be spent on more features...
It is possible to exist in both camps. The quality of the process affects the quality of the product, and the quality of your thought affects the quality of the process. It's a cycle of continual learning, and from that perspective, thought, process and product are indivisible.
Treating code as a means to an end doesn't guarantee success for your product any more than treating code as a craft.
> The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
I'm weird: I'm part of camp 2, but I think AI can be used to really craft some interesting things. While I appreciate camp 1, camp 2 is what produces better codebases that are easier to maintain. Myself and others have realized that the best practices for humans are also the best practices for getting AI models to fit your code.
> With that said, I do have respect for people in the latter camp.
Well, you certainly should. Those people made AI based coding a possibility in the first place.
That's true, but I think there is a gray area in between. As things scale up in one way or another, having high quality is important for both #1 and #2. It's hard to extend software that was designed poorly.
The question where experience comes in is when quality is and isn't worth the time. I can create all sorts of cool software I couldn't before, because now I can quickly pump out "good enough" Android apps or React front ends! (Not trying to denigrate front-end devs, it's just a skill I don't have.)
I think developers fall into two camps:
1. you care about shipping working, tested code that solves a specific business/user problem
2. you care about closing tickets that were assigned to you
I think type-1 vs type-2 dev requirements are also dependent on the lifecycle / scale of your project, not just on whether it's a library / framework / mission-critical software.
If you aren't sure whether your idea is even gonna work, whether you have PMF, or whether the company will be around next year... then yeah, speed over quality all day long.
On the other hand, I've never done the startup thing myself, and tend to work on software projects with 10-20 year lifecycles, where code-velocity maximalism leads to outages, excess compute cost, and reputational issues. There, good code matters again.
Re: "No one has ever made a purchasing decision based on how good your code is." Sonos very much could go out of business for agreeing with this line. I can tell you lots of people stopped buying their products because of how bad their code quality became with the big app change debacle. Lost over a decade of built up good will.
Apple is going through this lately with the last couple major OS releases across platforms and whatever is going on with their AI. This despite having incredible hardware.
It’s much more complex. Part of your value as an engineer is having a good feel for balancing the trade offs on your code.
Code is usually a liability. A means to an end. But is your code going to run for a minute, a month, a year, or longer? How often will it change? How likely are you going to have to add unforeseen features? Etc. Etc. Etc.
How about the type of developer who comes up with statistics and made-up "camps", as if enjoying the craft itself, by necessity of the false premise, makes you unable to enjoy the fact that what you made is useful enough that people choose your product precisely because you are obsessed with doing good work?
John Carmack talked about this in a podcast a few years ago, and he's the closest popular programmer I can think of who was simply obsessed with milking every tiny ounce of GPU performance, yet none of his effort would have mattered if Doom and Quake weren't fun games.
I think this is a false dichotomy. Maybe there is some theoretical developer who cares about their craft only due to platonic idealism, but most developers who care about their craft want their code to be correct, fast, maintainable, usable, etc. in ways that do indeed benefit its users. At worst, misalignment in priorities can come into play, but it's much more subtle than developers either caring or not caring about craft.
I think this is a false dichotomy. If you're passionate about your craft, you will make a higher quality product. The real division is between those who measure the success of a project in:
- revenue/man-hour, features shipped/man-hour, etc.
- ms response time, GB/s throughput, number of bugs actually shipped to customers, etc.
People in the second camp use AI, but it's a lot more limited and targeted. And yes, you can always cut corners and ship software faster, but it's not going to be higher quality by any objective metric.
With AI you actually don't need to choose anymore. Well laid out abstractions actually make AI generate code faster and more accurately. Spending the time in camp 2 to design well and then using AI like camp 1 gives you the best of both worlds.
> No one has ever made a purchasing decision based on how good your code is
Because the ones that sell crappy code don’t sell to people that can tell the difference.
You think I’d pay for Jira or Confluence if it wasn’t foisted upon me by a manager that has got it in with the Atlassian sales rep?
I don’t even need to see Atlassian’s source code to know it’s sh*t.
> No one has ever made a purchasing decision based on how good your code is.
I routinely close tabs when I sense that low-quality code is wasting time and resources, including e-commerce sites. Amazon randomly cancelled my account so I will never shop from them. I try to only buy computers and electronics with confirmed good drivers. Etc.
This is a very useful insight. It nicely identifies part of the reason for the stark bifurcation of opinion on AI. Unfortunately, many of the comments below it are emotional and dismissive, pointing out its explanatory limitations, rather than considering its useful, probative value.
I find most home inspectors fall into one of two camps:
1. You treat the house as a means to an end to make a living space for a person.
2. You treat the building construction itself as your craft, with the house being a vector for your craft.
The people who typically have the most negative things to say about buildings fall into camp #2 where cheap unskilled labor is streamlining a large part of what they considered their art while enabling people in group #1 to iterate on their developments faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good the pipes inside the walls are.
The general public does not care about anything other than the square footage and color of your house. Sure, if you mess up and one of the houses collapses then that'll manifest as an outcome that impacts the home owner negatively.
With that said, I do have respect for people in the latter camp. But they're generally best fit for homes where that level of craftsmanship is actually useful (think: mansions, bridges, roads, things I use, etc).
I just feel like it's hard to talk about this stuff if we're not clear on which types of construction we're talking about.
The general public does not know how to identify or care about the pipes in the walls. They do care when the pipes burst and cause tens of thousands of dollars of damage. That's why they hire someone with a keen eye to act on their behalf.
"I have an opinion and everyone on the planet agrees with me, if you disagree, you don't matter" is not a useful insight, and is, in fact, far more emotional and dismissive than any of the replies to it.
That's a horribly broken misrepresentation of what was said in the original post. If that's what you took away from it, you're not reading carefully or critically.
> it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
It mystifies me when people don't intuit this.
For any suitably sized project, there are parts where elegance and friction removal are far more important than others. By an order or two of magnitude.
I have shipped beautifully-honed, highly crafted code. Right alongside jank that was debugged in the "Well, it seems to work now" and "Don't touch anything behind this in-project API" category.
There are very good reasons and situations for both approaches, even in one project.
As developers we have a unique advantage over everybody else dealing with the way AIgen is revolutionizing careers:
Everybody else dealing with AIgen is suffering the AI spitting out the end product. It's as if we asked the AI to generate the compiled binary instead of the source.
Artists can't get AIgen to make human-reviewed changes to a .psd file or an .svg, it poops out a fully formed .png. It usurps the entire process instead of collaborating with the artist. Same for musicians.
But since our work is done in text and there's a massive publicly accessible corpus of that text, it can collaborate with us on the design in a way that others don't get.
In software, the "power of plain text" has given us a unique advantage over other kinds of creative work. Which is good, because AIgen tends to be clumsy and needs guidance. Why give up that advantage?
I think "make a product" is the important point of disagreement here. AI can generate code that users are willing to pay for, but for how long? The debate is around the long-term impact of these short-term gains. Code _is_ a means to an end, but well-engineered code is a more reliable means than what AI currently generates. These are ends of a spectrum and we're all on it somewhere.
You ever notice how everyone who drives slower than you is a moron and everyone who drives faster than you is a maniac? Your two camps have a similar bias.
> The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
I am in both camps. Always have been.
Code janitors are about to be in high demand. We've always been pretty popular with leadership, and it's gonna get even more important.
Treat code design and architecture as the thing that lets your slop cannons (90% of engineers, even pre-AI) move fast without breaking things
My output is org velocity.
> Treat code design and architecture as the thing that lets your slop cannons (90% of engineers, even pre-AI) move fast without breaking things
I'm currently of the opinion that humans should be laser focused on the data model. If you've got the right data model, the code is simpler. If you've got the relevant logical objects and events in the database with the right expressivity, you have a lot of optionality for pivoting as the architecture evolves.
It's about that solid foundation - and of course lots of tests on the other side.
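The "right data model makes the code simpler" point can be shown with a toy sketch (my own example, with hypothetical names, not from this thread): modeling a document's lifecycle as independent booleans permits impossible states, while a single enum makes them unrepresentable and the downstream code trivial.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Fragile data model: independent flags allow impossible combinations,
# e.g. is_draft=True and is_published=True at the same time, so every
# consumer has to defend against states that "shouldn't happen".
@dataclass
class DocFlags:
    is_draft: bool = True
    is_published: bool = False
    is_archived: bool = False

# Better data model: one field, invalid states are unrepresentable.
class DocState(Enum):
    DRAFT = auto()
    PUBLISHED = auto()
    ARCHIVED = auto()

def can_edit(state: DocState) -> bool:
    # No flag-combination checks needed; the model rules them out.
    return state is DocState.DRAFT

assert can_edit(DocState.DRAFT)
assert not can_edit(DocState.PUBLISHED)
```

With the enum model, a pivot (say, adding a REVIEW state) is one new variant plus a handful of call-site decisions, rather than a new boolean whose interactions with the existing flags must be audited everywhere.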
> I'm currently of the opinion that humans should be laser focused on the data model
yes. good programmers talk about data structures, bad programmers talk about code
How do you even converge on the right data model without refining code? Elegant code and elegant data model are the exact same thing!
It's called "systems analysis". Programmers are generally pretty terrible at it because it requires holistic, big-picture thinking. But it used to take up the bulk of the design activity for a new enterprise system.
I agree and I like how you describe it. The phrase from Django, "perfectionists with deadlines", also resonates with me.
> My output is org velocity.
Amen. Slow and steady, and the feature flywheel just keeps getting faster.
>slop cannons
I am stealing that phrase haha
It's easy to write off critics of this slop development as just "caring about the wrong thing", but that is couched in incorrect assumptions. This is the unfortunately common mistake of confusing taking responsibility with some sort of "caring about the code" in an artistic sense. I can certainly appreciate the artistry of well-written code, but I care about having a solid and maintainable code-base because I am accountable for what the code I write does.
Perhaps this is an antiquated concept which has fallen out of favor in Silicon Valley, but code doesn't just run in an imaginary world where there are no consequences and everything is fun all the time. You are responsible for the product you sell. If you sell a photo app that has a security bug, you are responsible for your customers' nude photos being leaked. If you vibe-code a forum and store passwords in plaintext, you are responsible for the inevitable breach and harm.
The "general public" might not care, but that is only because the market is governed by imperfect information. Ultimately the public are the ones that get hurt by defective products.
> No one has ever made a purchasing decision based on how good your code is.
There are two reasons for this. One is that the people who make purchasing decisions are often not the people who suffer from your bad code. If the user is not the customer, then your software can be shitty to the point of being a constant headache, because the user is powerless to replace it.
The other reason is that there's no such thing as "free market" anymore. We've been sold the idea that "if someone does it better, then they'll win", but that's a fragile idea that needs constant protection from bad actors. The last time that protection was enacted was when the DOJ went against Microsoft.
> Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
Any semblance of accountability for that has been diluted so much that it's not worth mentioning. A bug someone wrote into some cloud service can end up causing huge real-world damage in people's lives, but those people are so far removed from the suits that made the important decisions that they're powerless to change anything and won't ever see that damage redressed in any way.
So yeah, I'm in camp #2 and I'm bitter about AI, because it's just accelerating and exacerbating the enshittification.
Someone here on HN wrote recently that everyone who's foaming at the mouth about how AI helps us ship faster is forgetting that velocity is a vector -- it's not just about how fast you're going, but also in what direction.
I'd go further and say that I'm not even convinced we're moving that much faster. We're just cranking out the code faster, but if we actually had to review that code properly and make all the necessary fixes, I'm pretty sure we would end up with a net loss of velocity.
I view every single line of code as a liability, the best solution is if you can avoid writing any code. Does that put me into group 1 or group 2?
false dichotomy - you need to care about both.
> No one has ever made a purchasing decision based on how good your code is.
If you have buggy software, people don’t use it if there are alternatives. They don’t care about the code but hard to maintain, buggy code will eventually translate to users trying other products.
> No one has ever made a purchasing decision based on how good your code is.
That's however what makes for stable systems, deep knowledgable engineers, and structurally building the basis for the future.
If all you care about is getting money for your product slop, it's not different than late night marketed crap, or fast fashion...
3. Coding is fun, prompting not so much
You realize you’re essentially building a false dichotomy? I work in video games, where code really is a means to an end, but I still see that authorship is important, even if that code is uuuuugly, because it is the expression of the game itself. From that perspective I’m worried about neither craft nor product, but about my ability to express myself through code as the game behaves. Although if you really must have only two categories, I’d be in camp one.
As such, AI is a net negative, as it would be in writing a novel or making any other kind of art.
Now you've done it! Yeah, one of the difficult things is being able to see both sides. At the end of the day, I happen to write code because that's how I can best accomplish the things I need to do with a minimum of effort. While I do take pride in elegance and quality of code, it is always a means to an end. When I start gold plating, I try to remind myself of the adage I learned in a marketing class: no one ever needed a drill, they needed the ability to make holes.
It is strange, but not really upsetting to me, that I am not particularly anal about the code Claude is generating for me anymore but that could also be a function of how low stakes the projects are or the fact nothing has exploded yet.
I generally fall into the first camp, too, but the code that AI produces is problematic because it's code that will stop working in an unrecoverable way after some number of changes. That's what happened in the Anthropic C compiler experiment (they ended up with a codebase that wasn't working and couldn't be fixed), and that's what happens in roughly one of every 3-5 changes I see Codex make in my own codebase. I think, if I had let that code in, the project would have been destroyed in another 10 or so changes, in the sense that it would be impossible to fix a bug without creating another. We're not talking style or elegance here. We're talking ticking time bombs.
I think that the real two camps here are those who haven't carefully - and I mean really carefully - reviewed the code the agents write and haven't put their process under some real stress test vs those who have. Obviously, people who don't look for the time bombs naturally think everything is fine. That's how time bombs work.
I can make this more concrete. The program wants to depend on some invariant, say that a particular list is always sorted, and the code maintains it by always inserting elements in the right place in the list. Other code that needs to search for an element depends on that invariant. Then it turns out that under some conditions - due to concurrency, say - an element is inserted in the wrong place and the list isn't sorted, so one of the places that tries to find an element in the list fails to find it. At that point, it's a coin toss of whether the agent will fix the insertion or the search. If it fixes the search, the bug is still there for all the other consumers of the list, but the testing didn't catch that. Then what happens is that, with further changes, depending on their scope, you find that some new code depends on the intended invariant and some doesn't. After several such splits and several failed invariants, the program ends up in a place that nothing can be done to fix a bug. If the project is "done" before that happens - you're in luck; if not, you're in deep, deep trouble. But right up until that point, unless you very carefully review the code (because the agents are really good at making code seem reasonable under cursory scrutiny), you think everything is fine. Unless you go looking for cracks, every building seems stable until some catastrophic failure, and AI-generated code is full of cracks that are just waiting for the right weight distribution to break open and collapse.
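A minimal sketch of that failure mode (hypothetical names, Python for illustration): a list whose consumers assume it's sorted, one change that quietly breaks the invariant, and a binary search that then misses elements that really are in the list.

```python
import bisect

class SortedList:
    """Invariant: _items is always sorted ascending."""
    def __init__(self):
        self._items = []

    def insert(self, x):
        # Correct: preserves the invariant.
        bisect.insort(self._items, x)

    def insert_buggy(self, x):
        # The "bad change": appends instead of inserting in order,
        # silently breaking the invariant for every consumer.
        self._items.append(x)

    def contains(self, x):
        # Depends on the invariant: binary search only works on sorted data.
        i = bisect.bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x

lst = SortedList()
for v in (5, 1, 9):
    lst.insert(v)
lst.insert_buggy(3)        # list is now [1, 5, 9, 3] -- unsorted

print(lst.contains(3))     # False: binary search misses it
print(3 in lst._items)     # True: the element really is there
```

"Fixing the search" here (say, swapping in a linear scan) hides the symptom while every other consumer that assumes sortedness stays broken, which is exactly the coin toss described above.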
So it sounds to me that the people you think are in the first camp not only just care how the building is built as long as it doesn't collapse, but also believe that if it hasn't collapsed yet it must be stable. The first part is, indeed, a matter of perspective, but the second part is just wrong (not just in principle but also when you actually see the AI's full-of-cracks code).
It can be especially bad if the architecture is layered, with each layer having its own invariant. In a music player, say, you may have the concept of a queue in the domain layer, while the UI layer has additional constraints that don't relate to it. Then the agent decides to fix a bug in the UI layer because the description reads like a UI bug, while it's in fact a queue bug.
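A toy sketch of that layering (all names made up for illustration): the symptom shows up in the UI, but the invariant that actually broke lives in the domain layer.

```python
class Queue:
    """Domain invariant: no duplicate tracks in the queue."""
    def __init__(self):
        self.tracks = []

    def add(self, track):
        # Buggy: forgets to enforce the no-duplicates invariant.
        self.tracks.append(track)

def render_queue(queue):
    # UI layer: just displays whatever the domain hands it.
    return [f"{i + 1}. {t}" for i, t in enumerate(queue.tracks)]

q = Queue()
q.add("song_a")
q.add("song_a")            # duplicate sneaks in

print(render_queue(q))     # ['1. song_a', '2. song_a']
```

The report reads "the same song shows twice," so an agent may "fix" `render_queue` with a dedup, masking the domain bug for every other consumer of `Queue`.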
Shit like this is why you really have to read the plans instead of blindly accepting them. The bots are naturally lazy and will take short cuts whenever they think you won't notice.
You need both to be a great software engineer. The "means to an end" people will happily slop out PRs and let the "craft" people worry about it.
I was just using an app that competes with Airbnb. The fact that the app was extraordinarily unreliable became a significant factor in my interactions with others on it; in particular, I gradually realized I couldn't be sure messages were delivered or that data was up to date.
That influenced some unfortunate interactions with people and meant that no one could be held to their agreements since you never knew if they received the agreements.
So, well, code quality kind of matters. But I suppose you're still right in a sense - currently people buy and use complete crap.
This is absolutely false. The purpose of craft is to make a good product.
I don’t care what kind of steel you used to design my car, but I care a great deal that it was designed well, is safe, and doesn’t break down all the time.
Craft isn’t a fussy thing.
I'm going to re-characterize your categorization:
1. The people who don't understand (nor care) about the risks and complexity of what they're delivering; and
2. The people that do.
Widespread AI usage is going to be a security nightmare of prompt injection and leaking credentials and PII.
> No one has ever made a purchasing decision based on how good your code is.
This just isn't true. There's a whole process in purchasing software, buying a company or signing a large contract called "due diligence". Due diligence means to varying degree checking how secure the product is, the company's processes, any security risks, responsiveness to bugfixes, CVEs and so on.
AI is going to absolutely fail any kind of due diligence.
There's a little thing called the halting problem, which in this context basically means there's no way to guarantee that the AI will be restricted from doing anything you don't want it to do. An amusing example was an Air Canada chatbot that hallucinated a refund policy that a court said it had to honor [1].
How confident are we going to be that AIs won't leak customer information, steal money from customers and so on? I'm not confident at all.
[1]: https://arstechnica.com/tech-policy/2024/02/air-canada-must-...
It's perfectly possible to write very clean code with AI, it just takes a lot more time and prompting.
Easier to just write it yourself.
3. You use the act of writing code to think about a given problem, and by so doing not only produce better code but also gain a deeper understanding of the problem itself; in combination, a better product all around.
> No one has ever made a purchasing decision based on how good your code is. The general public does not care about anything other than the capabilities and limitations of your product.
The capabilities and limitations of your product are defined in part by how good the code is. If you write a buggy mess (whether you write it yourself or vibe code it), people aren't going to tolerate that unless your software has no competitors doing better. People very much do care about the results that good code provides, even if they don't care about the code as an end in itself.
While creating good software is as much of an art as it is a science, this is not why the craft is important. It is because people who pay attention to detail and put care into their work undoubtedly create better products. This is true in all industries, not just in IT.
The question is how much does the market value this, and how much it should value it.
For one-off scripts and software built for personal use, it doesn't matter. Go nuts. Move fast and break things.
But the quality requirement scales proportionally with how many people use and rely on the software. And not just users, but developers. Subjective properties like maintainability become very important if more than one developer needs to work on the codebase. This is true even for LLMs, which can often make a larger mess if the existing code is not in good shape.
To be clear, I don't think LLMs inevitably produce poor quality software. They can certainly be steered in a good direction. But that also requires an expert at the wheel to provide good guidance, which IME often takes as much, if not more, work than doing it by hand.
So all this talk about these new tools replacing the craft of programming is overblown. What they're doing, and will continue to do unless some fundamental breakthrough is reached, is make the creation of poor quality software very accessible. This is not the fault of the tools, but of the humans who use them. And this should concern everyone.
I agree on the software dev camps.
> The general public does not care about anything other than the capabilities and limitations of your product.
It's absolutely asinine to say the general public doesn't care about the quality and experience of using software. People care enough that Microsoft's Windows director sent out a very tail-between-legs apology letter due to the backlash.
It's as it always has been, balancing quality and features is... well, a balance and matters.
The public doesn't care about the code itself, they absolutely care about the quality and experience of using the software.
But you can have an extremely well designed product that functions flawlessly from the perspective of the user, but under the hood it's all spaghetti code.
My point was that consuming software as a user of the product can be quite different from the experience of writing that software.
Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'd just be careful to separate code elegance from product experience, since they are different. Related? Yeah, sure. But they're not the same thing.
There are other players in the game: the business and the market.
Good code makes it easier for the business to move fast and stay ahead of the competition while reducing expenses for doing so.
That's true, but Excel '98 would still cover probably 80% of users' use cases.
A lot, and I mean a lot, of software work is trying to justify its existence by constantly playing and toying with a product that worked for everyone in version 1.0, whether it be to justify a job or to justify charging customers $$ per month to "keep current".
That's fair!
> Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'm sure that's the case in basically everything, it sorta doesn't matter (until it does) if it's cordoned off into a corner that doesn't change and nominally works from the outside perspective.
But those cases are usually isolated, if they aren't it usually quickly becomes noticeable to the user in one way or another, and I think that's where these new tools give the illusion of faster velocity.
If it's truly all spaghetti underneath, the ability to make changes nosedives.
I have yet to meet anyone whose problem with AI is that the code is not aesthetically pleasing, but that would actually be an indicator to me that people are using these things responsibly.
My own two cents: there's an inherent tension with assistants and agents as productivity tools. The more you "let them rip", the higher the potential productivity benefits. And the less you will understand the outputs, or even if they built the "correct thing", which in many cases is something you can only crystalize an understanding about by doing the thing.
So I'm happy for all the people who don't care about code quality in terms of its aesthetic properties and are really enjoying the AI era; that's great. But if your workload is not shifting from write-heavy to read-heavy, you inevitably will be responsible for a major outage or quality issue. And more so, anyone like this should ask why anyone should feel the need to employ you for your services in the future, since your job amounts to "telling the LLM what to do and accepting its output uncritically".
>But if your workload is not shifting from write-heavy to read-heavy, you inevitably will be responsible for a major outage or quality issue.
I think that's actually a good way to look at it. I use AI to help produce code in my day to day, but I'm still taking quite a while to produce features and a lot of it is because of that. I'm spending most of my time reading code, adjusting specs, and general design work even if I'm not writing code myself.
There's no free lunch here, the workflow is just different.
Facebook.com is a monstrosity though, and their mobile apps as well are slow and often broken. And the younger generations are using other networks, Facebook is in trouble.
> You treat your code as a means to an end to make a product for a user.
It isn’t that, though; the “end” here is making money, not building products for users. Typically, people who are making products for users care about the craft.
If the means-to-end people could type words into a box and get money out the other side, they would prefer to deal with that than products or users.
That's why AI slop is so prevalent: the people putting it out there don't care about the quality of their output or how it's used by people, as long as it juices their favorite metrics (views, likes, subscribes, ad revenue, whatever). Products and users are not in scope.
Yeah, I'm not trying to defend slop.
I don't think all means-to-end people are just in it for money, I'll use the anecdote of myself. My team is working on a CAD for drug discovery and the goal isn't to just siphon money from people, the goal is legitimately to improve computational modeling of drug interactions with targets.
With that in mind, I care about the quality of the code insofar as it lets me achieve that goal. If I vibe coded a bunch of incoherent garbage into the platform, it would help me ship faster but it would undermine my goal of building this tool since it wouldn't produce reliable or useful models.
I do think there's a huge problem with a subset of means-to-end people just cranking out slop, but it's not fair to categorize everyone in that camp this way ya'know?
This is just cope to avoid feeling any shame for shipping slop to users.
Meanwhile, the complexity of the average piece of software is drastically increasing. ... The stats suggest that devs are shipping more code with coding agents. The consequences may already be visible: analysis of vendor status pages [3] shows outages have steadily increased since 2022, suggesting software is becoming more brittle.
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
The authors updated their title so I've updated it here too. Previous title was "Good code will still win" - but it was leading to too much superficial discussion based entirely on the phrase "good code" in the title. It's amazing how titles do that!
(Confession: "good code will still win" was my suggestion- IIRC they originally had "Is AI slop the future?". You win some you lose some.)
And then everyone disagrees what counts as luxury in software.
That's an interesting story, but not a great analogy for software.
If a technology to build airplanes quickly and cheaply existed and was made available to everyone, even to people with no aeronautical engineering experience, flying would be a much scarier ordeal than it already is.
There are good reasons for the strict safety and maintenance standards of the aviation industry. We've seen what can happen if they're not followed.
The fact that the software industry doesn't have similar guardrails is not something to celebrate. Unleashing technology that allows anyone to create software without understanding or even caring about good development practices and conventions is fundamentally a bad idea.
> Markets will not reward slop in coding, in the long-term.
Forgive my cynical take, but we're currently experiencing a market that doesn't appear to be rewarding anything specific in the long term, as huge sums of money are traded on a minute-to-minute, day-to-day, and week-to-week basis. There's an explosion of uncertainty in today's markets and a complete collapse of long-range planning echoing at many levels of society, particularly at the highest levels of government. So I kind of don't want to hear about what markets are going to reward.
But what exactly is "good code" (presumably the opposite of slop)?
I'd say that good code is terse, robust, suits its function, yet admits just the right amount of testability, performance, and customizability for the future. Good code anticipates change well. That means that if it has one job, it does that one job well, and not twenty others. If the job is going to stay the same, the code is going to stay the same. Good systems are made from well-factored pieces of code that have proper jobs and do their proper jobs without taking on jobs they shouldn't.
I for one think that AI code is going to reflect its training. If it's trained on just a random selection of everything out there, it's probably going to be pretty mediocre and full of bugs.
Markets have always rewarded popularity in the short term. In the long term, though, they have always rewarded quality.
The economic angle is not as clear cut as the authors seem to think.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebase.
Even with enough money, you may not be able to attract/keep talented engineers who are willing to put up with such a work environment (the codebase itself, and probably the culture that led to its state) and who want to ship well built/designed software but are slowed down by the mess.
“John Ousterhout [..] argues that good code is:
- Simple and easy to understand
- Easy to modify”
In my career at fast-moving startups (scaling seed to series C), I’ve come to the same conclusion:
> Simple is robust
I’m sure my former teams were sick of me saying it, but I’ve found myself repeating this mantra to the LLMs.
Agentic tools will happily build anything you want, the key is knowing what you want!
My issue with this is that a simple design can set you up for failure if you don’t foresee and account for future requirements.
Every abstraction adds some complexity. So maybe the PoC skips all abstractions. Then we need to add a variant to something. Well, a single if/else is simpler than an abstract base class with two concrete implementations. Adding the 3rd as another if clause is simpler than refactoring all of them to an ABC structure. And so on.
“Simple” is relative. Investing in a little complexity now can save your ass later. Weighing this decision takes skill and experience.
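For illustration (names invented), here are the two shapes that comment contrasts: the simple branch that wins with two variants, and the abstract-base-class structure that starts paying off around the third.

```python
from abc import ABC, abstractmethod

# Variant 1: a single if/else -- the simplest thing that works for two cases.
def export(data, fmt):
    if fmt == "csv":
        return ",".join(map(str, data))
    else:  # json-ish
        return "[" + ", ".join(map(str, data)) + "]"

# Variant 2: the ABC structure -- more upfront complexity, but adding a
# third exporter is a new subclass instead of yet another elif.
class Exporter(ABC):
    @abstractmethod
    def export(self, data): ...

class CsvExporter(Exporter):
    def export(self, data):
        return ",".join(map(str, data))

class JsonExporter(Exporter):
    def export(self, data):
        return "[" + ", ".join(map(str, data)) + "]"

print(export([1, 2], "csv"))         # 1,2
print(JsonExporter().export([1, 2])) # [1, 2]
```

Neither shape is "right" in the abstract; the skill is in guessing whether a third variant is coming before you pay the refactoring cost.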
Yes. Which is why "I generated X lines of code" "I used a billion tokens this month" sound stupid to me.
Like I used 100 gallons of petrol this month and 10 kilos of rabbit feed!
Agreed on the economics side. Clean code saves you time and money whether a human or AI wrote it. That part doesn't change.
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
Yup - In the end, it’s still just a tool that adheres to the steering (or lack thereof) of the user.
When has this ever been true?
Did the best processor win? No, x86 is trash.
Did the best computer language win? No (not that you can pick a best).
The same is true pretty much everywhere else outside computers, with rare exceptions.
I wish it were true, but it sounds like copium. I bet garment makers or artisan woodworkers said the same when cheap big-box retail came. I bet they said "people value quality" and so on, but in the end, outside of a group of people with principles, everyone else floods their home with H&M and crap from Temu.
So yeah, good code might win among a small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.
For a lot of companies their entire income entirely depends on their uptime.
Might be fine if your HR software isn't approving holiday requests, but if your checkout breaks and there's no human who can pick apart the mess, you lose your entire income for a week, and that might be the end of the business.
If "good code" == "useful code", then yes.
People forget that good engineering isn't "the strongest bridge", but the cheapest bridge that just barely won't fail under conditions.
Engineers don't build the cheapest bridge that just barely won't fail. They build the cheapest bridge that satisfies thousands of pages of regulatory requirements maintained and enforced by dozens of different government entities. Those regulations range from safety, to aesthetic, to environmental, to economic, to arcane.
Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse. And no care for the impact on any stakeholder other than the one paying them.
> Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse.
I don't know any real (i.e. non-software) engineers, but I would love to ask them whether what you said is true. For years now, I've been convinced that we should've stuck with calling ourselves "software developers", rather than trying to crib the respectability of engineering without understanding what makes that discipline respectable.
Our toxic little industry would benefit a lot from looking at other fields, like medicine, and taking steps to become more responsible for the outcomes of our work.
Civil engineers are licensed and carry insurance. When software developers have similar requirements, then I'll call them engineers. In some fields like avionics, the certification regime is a good proxy for licensing -- I think we could extend the "engineer" title to those developers too.
Such a world still has room for unlicensed developers too -- I'd certainly be among them.
What would happen if we made bridges to last as long as possible, to withstand natural disasters and require minimal maintenance?
What if we built things that are meant to last? Would the world be better for it?
> What if we built things that are meant to last? Would the world be better for it?
You'd have a better bridge, at the expense of other things, like hospitals or roads. If people choose good-enough bridges, that shows there is something else they value more.
Once the good-enough bridge deteriorates and we have to spend more money maintaining or replacing it
Don't we end up just spending the same? Just now we're left with a crappy bridge.
It might very well be that building and maintaining a bridge for 100 years costs three or four times as much as building and maintaining one that lasts 50 years. If demolition costs are small relative to the cost of the bridge, then in the long run replacing the bridge every 50 years is cheaper.
On the whole it is an entirely reasonable optimisation problem: what is the best lifespan of a single bridge over the desired total lifespan?
Certainly, "enough" is doing a lot of work and things get blurry, but I think "good enough" is meant to capture some of that. Over building is also a problem. It isn't strictly true that building longer lived things is cheaper over time either, it obviously depends on the specific things getting compared. And if you go 100 years rather than 25 years, you'll have fewer chances to adjust and optimize for changes to the context, new technology, changing goals, or more efficient (including cost saving) methods.
Obviously, there's a way to do both poorly too. We can make expensive things that don't last. I think a large chunk of gripes about things that don't last are really about (1) not getting the upside of the tradeoff, cheaper (in both senses) more flexible solutions, and (2) just getting bad quality period.
Depends how much the infrastructure and needs around it changes.
But we also got roads and hospitals.
Look up Roman concrete. There are 2000 year old bridges and aqueducts still in use.
We only recently figured out how to reproduce Roman concrete.
We’d have more but a lot were blown up during WWII.
There is nothing special about Roman concrete compared to modern concrete. Modern concrete is much better.
The difference is that they didn't have rebar. And so they built gravity stable structures. Heavy and costly as fuck.
A modern steel and concrete structure is much lighter and much cheaper to produce.
It does mean a modern structure doesn't last as long, but also the Roman stuff we see is what survived the test of time, not what crumbled.
> There is nothing special about Roman concrete compared to modern concrete. Modern concrete is much better.
Roman concrete is special because it is much more self-healing than modern concrete, and thus more durable.
However, that comes at the cost of being much less strong; it also sets much slower and requires rare ingredients. Roman concrete also doesn’t play nice with steel reinforcement.
https://en.wikipedia.org/wiki/Roman_concrete
I think you are incorrect. Compared to modern concrete, roman concrete was more poorly cured at the time of pouring. So when it began to weather and crack, un-cured concrete would mix with water and cure. Thus it was somewhat self healing.
Modern concrete is more uniform in mix, and thus it doesn't leave uncured portions.
We have modern architecture crumbling already, less than 100 years after it was built. I know engineering is about tradeoffs, but we should also acknowledge that, as a society, we are so used to putting direct economic cost as the main, and sometimes only, metric.
Devil's advocate here. Maybe we'd all forget how to build bridges in the next thousand years, after bridging all the bridg-able spans.
What if instead of one bridge we build three, so more people can cross the river?
And if your one bridge survived as long as, or longer than three bridges?
Then you still have traffic issues and no one is happy.
> the cheapest bridge that just barely won't fail
That can't be right? What about safety factors
Safety factors exist because without them, bridges fall down
The free market ensures that bridges stay up, because the bridge-makers don't want to get sued by people who have died in bridge collapses.
That is definitely not the free market at play. It's legislative body at play.
Engineers (real ones, not software) face consequences when their work falls apart prematurely. Doubly so when it kills someone. They lose their job, their license, and they can never work in the field again.
That's why it's rare for buildings to collapse. But software collapsing is just another Monday. At best the software firm will get fined when they kill someone, but the ICs will never be held responsible.
This only works when the barrier to entry for suing is low enough and the law is applied impartially, without corruption, with sanctions meaningful enough, potentially company-ending, to discourage offenders.
At the moment you remove one of these factors, free market becomes dangerous for the people living in it.
That isn't how safety factors work... The person you're responding to is correct. I encourage you to look it up!
Safety factors account for uncertainty: uncertainty in the quality of materials, of workmanship, of unaccounted-for sources of error. Uncertainty in whether the maximum load in the spec will actually be respected.
Without a safety factor, that uncertainty means that, some of the time, some of your bridges will fall down.
I'd describe that as passable engineering.
Good engineering is building the strongest bridge within budget and time.
Um, ackshually, real civil/structural engineers—at least, those in the global north—design bridges, roads, and buildings with huge tolerances (multiple times the expected loads) because unexpected shit happens and you don't want to suffer catastrophic failure when conditions are just outside of your typical use case and have a Tacoma Narrows Bridge type situation on your hands.
We might be arguing semantics, but safety margins aren't considered 'overbuilding' but part of the bare minimum requirements for a bridge to stand. They aren't there "just in case" they are there because it is known for a fact that bridges in the real world will experience degradation and overloading.
If you build a bridge that is rated to carry 100k lbs of weight, and you build it to hold 100k lbs, you didn't build it to barely meet spec -- you under built it -- because overloading is a known condition that does happen to bridges.
People are not emotionally ready to accept that certain layers of abstraction don’t need as much care and effort if they can be automated.
We are at the point where a single class can be dirty, but the API of the class should be clean. There's no point reviewing the internals of a class anymore; I'm more or less sure they will work as intended.
The next step is the microservice itself: the API of the microservice should be clean, but the internals may be whatever. We are about 10% of the way there.
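A toy illustration of that claim (names are made up): callers only see the API, so the internals can be ugly as long as the contract holds.

```python
class SlugCache:
    """Clean API contract: get(key) -> URL slug, computed once and cached."""
    def __init__(self):
        self._d = {}

    def get(self, key):
        # Dirty internals: cryptic names, no helpers -- but the behavior
        # matches the documented contract, which is all callers can observe.
        k = key.strip().lower()
        if k in self._d:
            return self._d[k]
        v = k.replace(" ", "-")
        self._d[k] = v
        return v

c = SlugCache()
print(c.get("Hello World"))   # hello-world
print(c.get("hello world"))   # hello-world (cache hit)
```

Whether "don't review the internals" actually holds is exactly what the time-bomb comments upthread dispute: the contract is only as trustworthy as whoever verified it.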
"The only reason people disagree with me is because they are emotionally deficient."
Does performance not matter?
What if your AI uses an O(n) algorithm in a function when an O(log n) implementation exists? The output would still be "correct"
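To make the gap concrete (an illustrative sketch, not from any particular AI output): both of these are "correct" membership checks over the same sorted data, but one does roughly a million comparisons where the other does about twenty.

```python
import bisect

def contains_linear(sorted_items, x):      # O(n)
    for item in sorted_items:
        if item == x:
            return True
    return False

def contains_binary(sorted_items, x):      # O(log n)
    i = bisect.bisect_left(sorted_items, x)
    return i < len(sorted_items) and sorted_items[i] == x

data = list(range(1_000_000))
# Same answer either way; wildly different cost at scale.
assert contains_linear(data, 999_999) == contains_binary(data, 999_999)
```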
> Does performance not matter?
No, unfortunately. In a past life, in response to an uptime crisis, I drove a multi-quarter company-wide initiative to optimize performance and efficiency, and we still did not manage to change the company culture regarding performance.
If it does not move any metrics that execs care about, it doesn't matter.
The industry adage has been "engineer time is much more expensive than machine time," which has been used to excuse way too much bloated and non-performant code shipped to production. However, I think AI can actually change things for the better. Firstly, IME it tends to generate algorithmically efficient code by default, and generally only fails to do so if it lacks the necessary context (e.g. not knowing that an input is sorted).
More importantly though, now engineer time is machine time. There is now very little excuse to avoid extensive refactoring to do things "the right way."
> Does performance not matter?
Performance can be a direct target in a feedback loop and optimised away. That's the easy part. Taking an idea and poof-ing a working implementation is the hard part.
Also, most performance optimisations exist at the microservice architecture level, or the DB and IO level.
As it stands today the average engineer is much more likely to ship an unoptimized algorithm than an AI.
In most cases no. Bottleneck is usual IO.
The current iteration of models doesn't write clean code by itself, but future ones will. The problem in my view is extremely similar to agentic/vibe coding: instead of optimizing for results, you can optimize for clean code. The demand is there; clean code will lead to fewer bugs, faster-running code, and fewer tokens used (thus less cost) when understanding the code from a fresh session. It makes sense that the first generation of vibe coding focused on results first and not clean code. Am I missing something?
Getting AI tools to produce better code is not that hard; if you know how to do things right yourself. Basically, all you need to do is ask it to. Ask it to follow SOLID principles. Ask it to stick to guard rails. Ask it to eliminate code duplication, write tests, harden code bases, etc. It will do it. Most people just don't know what to ask for. Or even that they should be asking for it. Or how to ask for it. A lot of people getting messy results need to look in the mirror.
I'm using AI for coding just like everybody else, more or less exclusively for the past few months. It's sometimes frustrating to get things done the right way, but mostly I get the job done. I've been coding since the nineties, so I know how to do things right and what doing it wrong looks like. If I catch my AI coding tools doing it wrong, I tell them to fix it and then adjust skills and guard rails to prevent them going off the rails.
AI tools actually seem to self correct when used in a nice code base. If there are tests, they'll just write more tests without needing to be prompted. If there is documentation, that gets updated along with the code. When you start with a vibe coded mess it can escalate quickly unless you make it clean up the mess. Sometimes the tests it adds are a bit meh and you have to tell it off by "add some tests for the non happy path cases, make sure to cover all possible exceptions, etc.". You can actually ask for a critical code review and then tell it "fix all of that". Sometimes it's as simple as that.
I think for this to work you need some kind of complexity budget. AIs are good at optimizing, but you need to give them the right goals.
"AI will write good code because it is economically advantageous to do so. Per our definition of good code, good code is easy to understand and modify from the reduced complexity."
---------
This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't necessarily mean efforts to make this so will succeed. LLMs might never become proficient at producing "good" code for the same reasons that LLMs perform poorly when trained on their own output. A heuristic prediction of what "good" code for a given solution looks like is likely always going to be less "good" than code produced by skilled and deliberate human design.
Just as there is a place for fast and dirty human code, there will be a place for slop code. Likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.
The economic force is that the LLMs themselves are worse at maintaining slop than good code.
Everything fundamental that makes good code easier for humans to maintain also makes it easier for LLMs to maintain. Full stop.
Good code wasn't winning even before the ai slop era!
The pattern was always: ship fast, fix/document later, but when "later" comes "don't touch what is working".
To date nothing changed yet, I bet it won't change even in the future.
& I have thus far made a large portion of my living off of fixing bad code "later".
… but lately, the rate at which some dev with an LLM can just churn out new bad code has just shot through the roof. I can still be struggling to pick apart the last piece of slop, trying to figure out "okay, if someone with a brain had written this, what would the inputs & outputs be?" and "what is it that production actually needs and relies on, and what causes problems, and how can we get the code from point A to point B without more outages"; but in the meantime, someone has spit out 8 more modules of the same "quality".
So sure, the basic tenets haven't changed, but these days I feel like I'm drowning in outages & bugs.
The irony is that "good" code and good documentation have top priority now in most orgs. For decades the best developers have been screaming about good code and documentation but leadership couldn't give a fuck. But now that their favorite nepobaby is here, now it's the most important thing all of a sudden.
I was told by an exec...once a company or technology implements something and gets mindshare, the community (including companies) moves on.
Competition is essentially dead for that segment given there is always outward growth.
With that being said, AI enables smaller players to implement their visions with enough completeness to be viable. And with a hands off approach to code, the underlying technology mindshare does not matter as much.
If that were true first movers would always win. Hotmail came before Gmail. Yahoo came before Google. Myspace came before Facebook. Et cetera. Of course it is best to avoid competition by creating a new (sub)category but category kings can change.
I disagree, Electron showed the world that good code can be magnetic
... I'll see myself out
It's a joke? (I'm not in the electron/js world and I don't get it)
The background pattern really makes it hard to read, just fyi. I'd make the content have a white bg if you absolutely must use the pattern.
I wish I could write beautiful good code, every part of me wants it, but I'm forced to deliver as fast as I can.
Electron won even in the pre-LLM era, I sure wonder why.
... for now.
And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out, and we are now in the phase of the biggest centralized reinforcement loop that has ever existed: everyone using AI for writing and giving it feedback.
We are, thanks to LLMs, now able to codify human skills, and while it's not clear how fast this is going, I no longer believe that my skills are unique.
A small hobby application cost me 11 dollars over the weekend and took me 3 hours to 'build', while I would probably have needed 2-3 days for it.
And we are still limited by resources and normal human progress. Claude teams are still experimental, for example. Things like gastown or orchestrator architectures/structures are not that established and consume quite a lot of tokens.
We have not even had time yet to build optimized models. Claude Code still understands A LOT of languages (human languages and programming languages).
I don't think anyone really cares about code quality. I do, but I'm a software engineer. Everyone around me doesn't. Business doesn't. Even fellow co-workers don't, or don't understand good code.
Even stupid things like the GTA 5 Online (or was it RDR2?) startup bug went unnoticed for ages: there was some algorithmic complexity in loading a config file that made startup take forever, until someone outside Rockstar found it and Rockstar fixed it.
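The bug that comment recalls was an accidentally quadratic loader: the parser effectively re-scanned or re-copied the whole remaining config buffer for every entry it pulled out. A hedged Python sketch of the pattern (function and data names are illustrative, not Rockstar's actual code):

```python
# Accidental O(n^2): slicing rebuilds the remaining buffer on every
# token, so total work grows with (items parsed) x (buffer length).

def parse_items_quadratic(data: str) -> list[str]:
    items = []
    while data:
        # partition copies the whole remaining tail on every iteration
        token, _, data = data.partition(",")
        items.append(token)
    return items

def parse_items_linear(data: str) -> list[str]:
    """O(n): a single split walks the buffer exactly once."""
    return data.split(",")

# Same answer, wildly different cost as the config grows.
config = ",".join(f"item{i}" for i in range(1000))
assert parse_items_quadratic(config) == parse_items_linear(config)
```

The fix in that famous case was similarly small: stop re-scanning from the start of the buffer on every token. It's exactly the kind of thing that "works for me" testing on tiny inputs never catches.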
We also have plenty of code where it doesn't matter as long as it works: offline apps, scripts, research scripts, etc.
Maybe the models get better on the code side but I thought slop referred to any AI generated text or imagery? It’s hard to see how most of the internet’s written words won’t be slop, especially when there’s no binding compiler contract like in code.
The wrinkle here is what exactly "win" means.
My prediction is that we'll start to see a whole new layer of abstraction to help us write high quality code with LLMs - meaning new programming languages, new toolchains, stricter typechecking, in-built feedback loops etc.
The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.
The slop debt will always come to collect.
A certain big PaaS I won't name here has had lots of clusterfucks in the last 3 months. The CEO is extremely bought into AI and "code not mattering anymore". He's also constantly talking about the meteoric growth because Claude and other AI providers are using railway as default suggestions.
The toll has come to collect and now a lot of real production users are looking at alternatives.
The reality is the market is rewarding slop and "velocity now". There will come a time where it will reward quality again.
> economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.
AI is a rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else, because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
good code does not earn money =)
The existence and ubiquity of bash scripts make me doubt this.
Everyone's talking about AI, but let's posit that today's coding models are as good as an SDE on the performance/experience distribution, maybe in the lower quartile. Can we also posit that this will improve, and that over time the coding models will equal and then better the median software engineer? It's not like SDEs aren't also churning out poor-quality code: "it worked for me", "what tests?", "O(what?)", etc. We've all worked with them.
The difference is that over the years, while tooling and process have dramatically improved, SDEs have not improved much; junior engineers still make the same mistakes. The assumption (not yet proven, but the whole bubble is based on it) is that models will continue to improve, eventually leaving behind human SDEs (or other domain people: lawyers, doctors, etc.). If this happens, these arguments I keep seeing on HN about AI slop will all be moot.
Assuming AI continues to improve, the cost and speed of software development will dramatically drop. I saw a comment yesterday that predicted that AI will just plateau and everyone will go back to vim and Makefiles (paraphrasing).
Maybe, I don't know, but all these people saying AI is slop, Ra Ra Humans, is just wishful thinking. Let's admit it: we don't know how it will play out. There are people like Dario and Sam who naturally are cheerleading for AI, then there's the HN collective who hate every new release of macOS and every AI model, just on principle! I understand the fear; anyone who's ever read Flora Thompson's Lark Rise to Candleford will see the parallels. Things are changing; AI is the plough, the railway, the transistor...
I'm tired of the debate. My experience is that AI (Gemini for me) is awesome. We all have gaps in our knowledge/skills (but not Gemini). AI helps hardcore backend engineers throw together a Gradio demo in minutes to make their point, helps junior devs review their code before making a PR, helps Product put together presentations. I could go on and on; those that don't see value in AI are doing it wrong.
As Taylor Swift said "It's me, hi, I'm the problem, it's me" - take that to heart and learn to leverage the tools, stop whining please, it's embarrassing to the whole software industry.
this submission is basically an ad
None of this is true. Models will soon scale to several million tokens of context. That, combined with the combined experience of millions of feedback cycles, will make software a solved problem for machines, even as humans remain dumb. Yes, even complex software. Complex software is actually better because it is, generally, faster with more features. It’s smarter. Like a jet fighter, the more complex it is, the more capable it is.