Unfortunately your attack plan was discovered when an AI system connected to a speaker outside flagged a private conversation you were having as a domestic terrorist risk, and a swarm of Amazon Security (tm) drones has been dispatched to paint the concrete with your brain before you even get to the data center.
Industrial economic systems (including capitalism, which is better at it, but also Soviet communism, for another) will always reinvest some of their surplus back into themselves. That reinvestment takes the form of either scale or efficiency, the latter of which is usually the replacement of labor with capital. A system may not do this very well or as fast as it could, but that transition is always permanent, and therefore cumulative.
So, what happens when we do that? Well, for a while, nothing. When labor is the bottleneck, there are always more outlets for it. But eventually there comes an inflection point, where there has been so much labor replacement and the bar has been raised so high that the surplus is in labor itself. At the lower tiers, its value approaches 0. Spoiler alert: this point has already been passed. Probably everyone here knows multiple surplus individuals, who have no place in the economy, and the bar for their entry or reentry into it is so high now that they can only produce negative value in current market conditions.
So, we have an ever-increasing surplus of unrealized labor. Our overlords may feel bad about that for a while and decide to bear the burden of a mass multitude of dependents. We had better hope they do, because this works fine until it doesn't. The zeitgeist only needs to shift once for it all to be over. This won't happen tomorrow, but they only need to look at the balance sheet from a certain angle once for the massive cost center to be seen as yet another inefficiency to be optimized away. On long enough time scales, the probability of any possible event approaches 1.
Seriously, Guardian, this has to be the least interesting question possible: "if AI makes human labor obsolete". I mean, FFS, talk about a lack of understanding.
Another luddite article complaining about farming automation putting farmers out of work, but a modern equivalent.
This article wrongly assumes AGI is not only possible but also imminent, because if it took into account only the transformation we're seeing from AI at the moment, it wouldn't be a story, as that is not job-ending.
AGI, however, is mathematically impossible. The only people telling you otherwise are the CEOs of labs who need to publicly fundraise on this premise, while privately admitting bearish sentiments on AGI.
AGI bulls assume all of the following to be true: there are no constraints to grid infrastructure (clearly false), there are no manufacturing constraints for AI hardware (clearly false) and exponential accuracy, speed and efficiency improvements will continue (clearly false, it's slowing down already).
Hell, just look at local opposition to data center deployment. You can't even get DCs built in rural towns that would benefit dramatically from the 1,000+ temporary and permanent jobs. Incredibly bearish on AGI.
I have no idea what you mean by "AGI, however, is mathematically impossible."
Further, your point about political pushback is short-sighted. As AI becomes more lucrative there will be more impetus to "pay" locations to host data centers, and as that becomes too expensive, space is clearly the next answer.
The development of AGI assumes zero constraints, when constraints exist at every layer of the stack. That's why it's mathematically impossible.
In a system driven by capital, manufacturing can ramp to an extent, but it generally can't ramp exponentially due to the dependencies it has.
When you ramp one layer of the stack, other layers of the stack are pressurized. We're seeing a small preview of that now with memory pricing. But these break points for AGI are everywhere: power capacity, power infrastructure, DC labor, cooling systems, memory, motherboards, GPUs. All of these things have dependencies that cannot be scaled exponentially, or quickly. As you pressure each of these dependencies, prices rise exponentially.
Let's take memory, for instance; it is merely one block in the Jenga tower, but it's a good example. Memory is already at close to 100% capacity. Spinning up new capacity is highly constrained, and money can't really make it faster. Lead times are 4+ years on new plants, which cost billions.
The same is true for other components, and in some cases the situation is worse.
"Won't happen for 4+ years" and "mathematically impossible" are quite different. Given that humans apparently exhibit the "GI" part of "AGI", I find "mathematically impossible" difficult to believe. "Extremely unlikely with current LLM architecture", sure, but that's a very different statement from "mathematically impossible".
If you are making a prediction on the viability of AGI assuming that an entirely new technology will make the efficiency problem of LLMs moot then you're essentially engaging in mysticism, aren't you?
It is correct to say it is mathematically impossible: all the people making AGI claims rely upon advances that are not even theoretical, that have not even been discovered yet, and whose mere possibility is questioned by many scientists.
LLMs have hard and soft limits all over the place preventing AGI. You aren't gonna train and loop yourself to AGI because the compute does not exist, and will not exist.
My 4+ year point was for a single memory fab. Increasing capacity by merely 5% (generous assumption) takes 4 years and $10bn. It's starting to sound like the path to AGI in the current paradigm will cost infinite dollars and take infinite years of build-out.
Even with a transformational efficiency breakthrough, you still have hard limits all over the place. Where are you going to store all the data? Memory constraints again.
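Taking the figures in this comment at face value (5% extra capacity per fab, 4 years, and $10bn each; these are the commenter's assumptions, not industry data), the compounding is easy to check:

```python
import math

# Back-of-envelope on the build-out claim above. The figures come from
# the comment, not from industry data: each new fab adds ~5% capacity,
# takes ~4 years to build, and costs ~$10bn.
# How long to merely DOUBLE capacity?
growth_per_fab = 1.05
years_per_fab = 4
cost_per_fab_bn = 10

fabs_to_double = math.log(2) / math.log(growth_per_fab)  # ~14.2 fabs
years_serial   = fabs_to_double * years_per_fab          # ~57 years if built one after another
cost_bn        = fabs_to_double * cost_per_fab_bn        # ~$142bn either way

print(f"~{fabs_to_double:.1f} fabs, ~${cost_bn:.0f}bn; ~{years_serial:.0f} years if serial")
# Parallel construction compresses the calendar time, but not the cost.
```

Even granting fully parallel construction, the dollar figure scales with the number of fabs, which is the commenter's point about compounding constraints.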
“If AI makes human labor obsolete, who decides who gets to eat?”
And within six comments we’re back to the sacred mantra: it can’t even solve a trick logic puzzle from 1983, therefore capitalism remains intact.
Allow me to contribute in the proud tradition of the Extremely Calm Skeptic.
First, the entire premise is unserious. Labor cannot become obsolete because the model does not “understand.” I know this because someone on Twitter asked it a riddle about a barber and it got confused. An entity that fumbles a barber paradox is clearly incapable of displacing accountants, paralegals, translators, mid-level engineers, support staff, or analysts.
Second, demos are misleading. Yes, it can draft contracts, generate production code, summarize regulatory filings, build internal tools, design marketing campaigns, and tutor students. But those are not real jobs. Real jobs are the parts that feel difficult and validating when I do them. The fact that those parts are shrinking is a coincidence.
Third, intelligence is not the bottleneck. The bottleneck is vibes. And regulation. And GPU supply. And “human judgment.” There will always be a final layer of ineffable judgment that only carbon-based life can provide. If pressed for examples, I will gesture broadly.
Fourth, labor markets adapt. We replaced elevator operators and invented social media managers. Therefore if large chunks of cognitive labor become cheap, the economy will effortlessly invent millions of new roles titled “Senior Human in the Loop.” The transition will be smooth. There will be no political consequences. History has a flawless track record here.
As for the eating question, that only becomes serious if labor is no longer the main mechanism for income distribution. And that won’t happen, because the models hallucinate sometimes. When something occasionally makes an error, it cannot possibly be economically transformative. By that standard, humans have been non-disruptive for millennia.
If I’m being honest, the resistance has less to do with token prediction and more to do with self-preservation. I invested years building scarce skills. Scarcity is flattering. If intelligence becomes abundant, that flattery evaporates. So I do what any rational actor would do: redefine scarcity.
When it automates my junior tasks, that’s augmentation.
When it handles mid-level tasks, that’s assistance.
When it approaches senior tasks, that’s hype.
If it ever clears that bar, I’ll discover a higher one.
This is not fear. This is prudent analysis performed while quietly pasting my entire codebase into three different models before standup.
So who decides who gets to eat?
If productive capacity detaches from human effort, ownership becomes the obvious lever. That’s not speculative. That’s how capital has always worked. But acknowledging that would mean treating the premise seriously.
Much easier to point at a cherry-picked failure and conclude that intelligence on tap changes nothing.
Anyway, back to my workflow where the fake autocomplete drafts the spec, writes the code, generates the tests, and explains the tradeoffs while I reassure myself that the important part was my supervision.
People see UBI and think, "Oh everyone will get basically what I have and I'm happy so they'll be happy too."
Humanity and economics don't work like that.
With capitalism we have the power to control our economic outcomes (to a large extent). Work more; earn more. This isn't perfect or always fair but life is never 100% fair.
With UBI, who do you think decides how much income you get?
And how do you think those people get that power?
And what happens when some resource is scarce and can't be given to everyone? Be it oil, medicine, medical services, food, clean water, a birth permit, etc.
Life isn't fair now - but it took a lot of blood and effort to get to this point and life can get A LOT more unfair if we're not careful.
> With capitalism we have the power to control our economic outcomes (to a large extent). Work more; earn more.
Go tell that to the slave laborers in the global South that made the clothes you are currently wearing. These people work way harder than me and make way less. Truth is, capital earns more than labor under Capitalism. It is absolutely not a fair system for the vast majority of humanity, and should be improved upon.
Right now the AI bros are shouting from the rooftops that we're going to have no income and we'll have plenty of time to grow our own food. So I guess we have to weigh the pros and cons of that against UBI.
And, further, if people aren't getting enough food, all bets are off.
I'm not a huge UBI fan, but if we keep concentrating wealth, it's an inevitability. So we better start trying to make life more fair.
It's true... maybe we can live like we did 2,000 years ago.
"Nobody will prevent you..." Can I plant on your property and keep all the produce? My property isn't big enough to support my family of two. Where shall I plant? The fiefdom?
I just don't get this take. It sounds awful for virtually everybody. We live at a scale that is nothing like it was when I was born. Asking everyone to raise their own beef and vegetables is as much a non-starter as telling them they can just build their own tractors to till.
Edit: I'd like to acknowledge Poe's Law might be in effect here. :)
There should be an alternative version of Poe's Law where the view being parodied is so common, so ludicrous, and so associated with bad-faith actors that it wraps back around: the person deserves all the ridicule, and the assumption that they are a bad actor.
Like, we get to assume it's a Motte and Bailey or Schrödinger’s Joke.
People also starved to death if there was a late frost or potato blight.
When I was a kid we had a half-acre potato patch and a quarter-acre vegetable garden. It was a fuck ton of work. It was only possible to keep up because my brother and I did most of the weeding, watering and assorted upkeep during our summer "holidays".
I wouldn't wish a subsistence farming lifestyle on anyone.
There is not enough arable land in the US for every person in the US to be able to live on subsistence farming. Subsistence farming is a lot more work, and much less efficient, than large scale industrialized agriculture.
That option has a few problems -
1. It's not very cheap. The price of land is really high, and if it is fertile land it is that much more expensive. Not to mention that raising animals and plants isn't free.
2. Specializing is much more efficient than trying to do everything yourself, requiring at least a basic economy.
3. Do you like healthcare? Trying to move to the countryside while affording healthcare (at least in the US) and actually having access to it are considerable hurdles.
4. Providing your own power isn't cheap and getting it from others certainly requires something.
And this is to just name a few - definitely not an exhaustive list.
We, as a society, have moved on from subsistence living in small groups, and given the ratio of the world's population to fertile land, it isn't a serious option for large numbers of people. I love the dream of it, and I do think we could make a big shift towards food independence, even through urban gardening, etc., but I don't really see it going there unless some pretty large societal/environmental/economic shifts happen.
What really needs to happen, in my opinion, is a shift to the idea that we live in a time of abundance. We have the means to supply all of our needs to everyone, it is 100% a political choice to let people suffer in order for a few to thrive at levels that have never been seen. We should have food security, access to healthcare, and housing as a human right and we should do it in an efficient way (and we can!). Unfortunately we are in a time where power is once again being concentrated to the hands of the very few. There will always be work that needs to be done, but who determines what work is actually completed and who benefits from it can and has changed many times throughout our history.
> Nobody will prevent you to grow your own food and raise your own cattle like it was the case for 99.9% of humanity's time.
It certainly was not the case in most agricultural societies that people could get their own plot of land that they could cultivate without interference from the rest of the society.
To expect that people able to replace other humans completely for working purposes, especially the kind of people who end up at the head of the kind of company able to do that, would peacefully let the rest of us be, is delusional.
As someone who owns a small farm — and actually enjoys land, growing things — I'm just saying this is not a "solution" for the vast majority of people.
That may work when the global population is 200 million to maybe 800 million, and it didn't even work that well back then.
Feeding eight billion plus people requires scalable technologies.
So, which nine of every ten people should die? You volunteering?
Who owns the land on which to do this farming? Oh, perhaps start with one of the oligarchs, Bill Gates, who owns 275,000 acres, ostensibly to increase agricultural productivity. But still, it's his land, not yours or mine, so how will we buy the food from him without a means of earning money?
My family gardens, but with a LOT of work, we produce only enough to supplement diet for part of the year. Fully sustaining would require multiples of the land we have...
Also, look into the actual lifestyle of subsistence farming, even for those who DO own the land. It is generally miserable hard labor and still entirely unreliable. One crop failure and your family gets to starve for a year...
> The problem is that the owners of these disruptive technologies must be convinced to do something that does not come naturally to them: share. Taxes in the US amount to less than 26% of GDP, 8 percentage points less than the OECD average. Capital taxation amounts to just over 2% of GDP. These numbers will have to go much higher, since people will no longer have wages to live on and will rely more heavily on government largesse.
The tone of this article is really frustrating, the author is seemingly living in a self-imposed box in which capital has an inalienable right to rule the world. "owners must be convinced to share" - No sir, they're not kings, nor were they elected into any position, and we don't have to "convince" them of anything.
We need to have a thorough discussion about what a future without human labor should look like, and whether we really want to live in a dystopia when the only thing preventing us from living in a utopia is the ego of a few rich assholes.
One way or another they will lose their kingdoms because they don't actually have an inalienable right to control the world's resources. They only had these ownership rights because they were [thought to be] good for society as a whole. In a robotic AI future that's no longer the case and those rights will no longer exist.
The only question is whether this transition will be peaceful or extremely violent.
The thing is, in our current legal system, property rights are fairly fundamental; they own certain things, and that ownership gives them a legal right to control them. And the money they derive from that has become more and more influential in our politics, to the point where they can influence a minority share of voters who have outsized voting rights, while also suppressing the votes of other voters, to achieve minority rule.
Without a vast reshaping of our sense of property rights, taxation, and redistribution, it's hard to see how this would change. And it's becoming increasingly hard to see how that vast reshaping could happen via peaceful, civil means.
> the author is seemingly living in a self-imposed box in which capital has an inalienable right to rule the world. "owners must be convinced to share" - No sir, they're not kings, nor were they elected into any position, and we don't have to "convince" them of anything.
That box was something that humans imposed on themselves on the scale of a civilization. At this point, I agree with the author's view because I can't see how it can ever change. Every little additional bit of the scales tipping in their favor means exponentially more effort will be needed to undo the imbalance.
By the time society wants to talk about transitioning to a different model (if they ever want to talk about it - remember, humans are shockingly vulnerable to informational warfare and many opinions can be changed with the tweak of an algorithm), the amount of power will be more imbalanced than it likely has ever been in history. If this future comes to pass, they'll be 10x as powerful by that point. And they will have effectively endless amounts of money and power to buy themselves the best armies, automated defenses, production facilities, employees, bunkers, drones, whatever, to ensure their safety. In this worst-case scenario where demand for human labor is a shadow of what it is today, how is this in any way winnable? They could take whomever they need and clock out, automatically overseeing the rest to ensure they won't have anyone threatening them ever again.
Unfortunately, if we extrapolate from history, those questions will be answered with blood.
In the UK, home of the Luddites, we've managed to get through a lot of different economic setups by means of elections without much blood involved.
What a poor take, if "AI makes human labor obsolete".
Given that comparative advantage provides an offramp from this for a lot of what we currently understand as "economics", if the author is positing that we will be beyond this, then your response is missing the forest for the trees.
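For anyone rusty on the term, a toy Ricardo-style calculation (with made-up numbers, not anything from the article) shows why comparative advantage gives that offramp: even a party that is absolutely better at everything still gains from trade, as long as opportunity costs differ.

```python
# Toy Ricardo-style calculation: every number here is invented.
# The AI is absolutely better at BOTH goods, yet trade still pays,
# because the two parties' opportunity costs differ.

# Output per hour of work:
ai    = {"widgets": 10, "reports": 20}
human = {"widgets": 1,  "reports": 4}

# Opportunity cost of one widget, in reports forgone:
#   AI: 20/10 = 2 reports.  Human: 4/1 = 4 reports.
# So the AI is comparatively better at widgets, the human at reports.

HOURS = 10  # hours available to each party

# Autarky: each party splits its time evenly between the two goods.
autarky_widgets = HOURS/2 * ai["widgets"] + HOURS/2 * human["widgets"]  # 55.0
autarky_reports = HOURS/2 * ai["reports"] + HOURS/2 * human["reports"]  # 120.0

# Specialization: the human does nothing but reports; the AI tops up
# the remaining reports and then makes all the widgets.
human_reports   = HOURS * human["reports"]                            # 40
ai_report_hours = (autarky_reports - human_reports) / ai["reports"]   # 4.0 h
ai_widget_hours = autarky_widgets / ai["widgets"]                     # 5.5 h
ai_hours_used   = ai_report_hours + ai_widget_hours                   # 9.5 h

# Same combined bundle (55 widgets, 120 reports), but the AI now has
# half an hour left over -- that slack is the gain from trade.
print(f"AI hours used with trade: {ai_hours_used} of {HOURS}")
```

Whether this logic survives in a world where labor itself is fully automatable is exactly the open question being debated here.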
There is no indication that the surplus extracted by automated labour will be distributed to the advantage of the population. If we look at how things are going at the moment, the trend is toward further concentration of power and capital. And I don't see any reason why the billionaire class should give this up. You could, of course, make an argument for why things will be different this time.
I will repeat myself:
comparative advantage
[edit] I will further repeat myself:
If comparative advantage will not hold, then that's really something: no one understands what happens in that future, and proposing some random solution at this point is unbelievably premature.
The same people who decided yesterday.
That only makes sense if capitalism is irrelevant.
We'll probably end up switching to something more like a socialist system - each according to their needs.
When China decided to allow capitalism, it kept its socialist system running but allowed capitalism in parallel: at first in a small way, but it picked up.
With AI you could probably do something a bit like that the other way around: a small amount of UBI, say, that increases as AI takes on more work.
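A minimal sketch of what that ramp might look like, with every number invented purely for illustration:

```python
# Illustrative-only sketch of a UBI that phases in with automation.
# Every number is invented; the point is just the shape of the policy:
# the dividend grows in proportion to the share of output machines produce.

def ubi_per_person(automation_share: float,
                   output_per_capita: float = 80_000.0,
                   capture_rate: float = 0.5) -> float:
    """Annual UBI = automated share of output x fraction of that output
    taxed into the dividend x output per person (all hypothetical)."""
    if not 0.0 <= automation_share <= 1.0:
        raise ValueError("automation_share must be in [0, 1]")
    return automation_share * capture_rate * output_per_capita

for share in (0.05, 0.25, 0.75):
    print(f"automation share {share:4.0%} -> UBI ${ubi_per_person(share):,.0f}/yr")
# Starts as a small supplement (5% -> $2,000/yr) and ramps up as AI
# takes on more of the work (75% -> $30,000/yr).
```

The hard part, of course, is not the arithmetic but measuring "automation share" and agreeing on the capture rate politically.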
First they came for our crops to power the data centers. Then the people.
My hope is that a breakthrough makes these planet-eaters complete folly, as a $100 optical processor trounces them.
And we'll have all that defaulted power generation capacity to run our human heating and cooling needs.
That is, unless we spend trillions to fill the black holes created upon stellar collapse. [0]
Grift can make that happen.
[0] The death star. Destroy a planet just to get quicker results. A total Faustian bargain. Now I can't afford memory chips. What'll it be next?
> My hope is that a breakthrough makes these planet-eaters complete folly, as a $100 optical processor trounces them.
Wouldn't one then just want to make even more compute-intensive ones by combining a million of these $100 optical processors?
First they came for our crops, to power our cars. Replacing corn ethanol with nearly anything else is a net benefit, corn ethanol is terrible for the environment & costs taxpayers billions in subsidies.
I'm a big fan of an economic philosophy known as distributism, popularized by G.K. Chesterton and Hilaire Belloc more than a century ago.
It basically says that the economy works best when ownership of productive property is as diffuse as possible -- when a high percentage of the population has some ownership, and it's not concentrated in the hands of a relatively few wealthy shareholders (in the case of capitalism) or the government (in the case of communism/socialism).
Under distributist principles, I would say that we should pursue economic policies that allow the ownership of productive AI to be widespread, whether that's in the form of cooperatives, employee ownership, or other means of giving the average person access to that ownership. (I acknowledge that it's currently possible to invest in publicly traded AI companies, but would prefer to see other ownership opportunities as well.)
This type of economic philosophy is impossible.
There will always be people who are more motivated and capable of consolidating power. That cannot be stopped.
Capitalism and democracy (both with guardrails) are meant to harness and contain that energy such that it doesn't instantly destroy a society.
Religion goes in there somewhere too.
None of these systems of organization are perfect, and they don't seem ideal on the surface. When you see them in practice there are many flaws.
But they are feasible.
Your 'distributism' system doesn't pass the feasibility test.
Are you not familiar with the many ways in which distributism is already practiced and has been practiced during the past two centuries?
Every federal credit union in the U.S. and every workers' cooperative or consumer cooperative -- including biggies like Mondragon in Spain and the Co-operative Group in the U.K. -- is organized in the way that distributism advocates.
There is nothing stopping us from electing lawmakers who recognize that society becomes healthier when the ownership of productive property is not concentrated, and who advocate for economic policies that promote more diffuse ownership.
> This type of economic philosophy is impossible.
Well, with that attitude, it will definitely remain impossible.
This already happens in Europe and China, today.
Capitalism enables the boundless concentration of wealth and power in individuals. I reject the premise that seeking to become a billionaire is a natural behavior, intrinsic to the human experience.
I won't pretend to know what should replace capitalism, but I am sure we can do better.
Unfortunately, the current answer is: capital owners = owners of the means of production.
Does capitalism work without human labor? What is the economic model for an automated society?
Feudalism: The nobility held lands and means of production from the Crown in exchange for service, and vassals were in turn tenants of the nobles, while the peasants (villeins or serfs) were obliged to live on their lord's land, work in his offices and factories, and give him homage and labour, in exchange for protection.
If the people running these things are doomers, there's no need for capitalism to work beyond what it takes them to build their compounds and bunkers right now.
And since a large number of them seem to be building compounds and bunkers...
Capitalism can only sort of work when there is balance between the classes. Inevitably, one wins over the other, which leads to fascism or communism, and later a big reset. If the proletariat (i.e. those who depend on a salary to survive) aren't able to sell their work anymore, the owner class won't need them anymore. I personally see three outcomes:
* Apocalyptic but unlikely: the bourgeoisie gets rid of the proletariat and goes on to enjoy boundless luxury.
* Awful but likely: the bourgeoisie throws just enough to the proletariat that they won't rebel, and goes on to enjoy boundless luxury.
* Utopian: the machines' output is democratically decided and evenly distributed among the entire human population.
Keep in mind that the scenario in which machines are able to replace all human labor is still very remote, and won't happen suddenly. I'm sure many things will occur between now and then that will completely invalidate my simplistic predictions.
The poor will realize that they can eat the rich
How do you eat a datacenter?
I think that monkey-wrenching at a critical level would do it.
seasoned with molotov cocktails
Unfortunately, your attack plan was discovered when an AI system connected to a speaker outside flagged a private conversation you were having as a domestic terrorist risk, and a swarm of Amazon Security (tm) drones has been dispatched to paint the concrete with your brain before you even get to the data center.
We need open source infra and models ASAP. This is quite possibly the last ship.
You won't eat, because you won't be.
Industrial economic systems (including capitalism, which is better at it, but also Soviet communism, for another) will always reinvest some of their surplus back into themselves. That reinvestment takes the form of either scale or efficiency, the latter of which is usually the replacement of labor with capital. They may not do this very well or as fast as they could, but that transition is always permanent, and therefore cumulative.
So, what happens when we do that? Well, for a while, nothing. When labor is the bottleneck, there are always more outlets for it. But eventually there comes an inflection point, where there is so much labor replacement and the bar has been raised so high that the surplus is in labor itself. At lower tiers, its value approaches 0. Spoiler alert: this point has already been passed. Probably everyone here knows multiple surplus individuals who have no place in the economy; the bar for their entry or reentry is now so high that they can only produce negative value in current market conditions.
So, we have an ever-increasing surplus of unrealized labor. Our overlords may feel bad about that for a while and decide to bear the burden of a mass multitude of dependents. We better hope they do, because this works fine until it doesn't. The zeitgeist only needs to shift once for it to all be over. This won't happen tomorrow, but they only need to look at the balance sheet from a certain angle once for the massive cost center to be seen as yet another inefficiency to be optimized away. On long enough time scales, the probability of any possible event approaches 1.
Seriously Guardian, this has to be the least interesting question possible "if AI makes human labor obsolete", I mean FFS talk about a lack of understanding.
Another luddite article complaining about farming automation putting farmers out of work, but a modern equivalent.
This article wrongly assumes AGI is not only possible but also imminent. If it took into account only the transformation we're seeing from AI at the moment, there would be no story, because that transformation is not job-ending.
AGI, however, is mathematically impossible. The only people telling you otherwise are the CEOs of labs who need to publicly fundraise on this premise, while privately admitting bearish sentiments on AGI.
AGI bulls assume all of the following to be true: there are no constraints to grid infrastructure (clearly false), there are no manufacturing constraints for AI hardware (clearly false) and exponential accuracy, speed and efficiency improvements will continue (clearly false, it's slowing down already).
Hell, just look at local opposition to data center deployment. You can't even get DCs built in rural towns that would benefit dramatically from the 1,000+ temporary and permanent jobs. Incredibly bearish on AGI.
The article is mostly inquisitive, it asks a question. What you are attributing to the article author is mostly in your head.
I upvoted you right off reading the first line.
But then you drifted.
I have no idea what you mean by "AGI, however, is mathematically impossible."
Further, your point about political pushback is short-sighted. As AI becomes more lucrative there will be more impetus to "pay" locations to have data centers, and as that becomes too expensive, space is clearly the next answer.
The development of AGI assumes zero constraints, when constraints exist at every layer of the stack. That's why it's mathematically impossible.
In a system driven by capital, manufacturing can ramp to an extent, but it generally can't ramp exponentially because of its dependencies.
When you ramp one layer of the stack, other layers of the stack are pressurized. We're seeing a small preview of that now with memory pricing. But these break points for AGI are everywhere: power capacity, power infrastructure, DC labor, cooling systems, memory, motherboards, GPUs. All of these have dependencies that cannot be scaled exponentially, or quickly. As you pressure each of these dependencies, prices rise exponentially.
Let's take memory for instance, it is merely one block in the jenga tower but it's a good example. Memory is already at close to 100% capacity. Spinning up new capacity is highly constrained, and money can't really make it faster. Lead times are 4+ years on new plants, which cost billions.
The same is true for other components, and in some cases the situation is worse.
"Won't happen for 4+ years" and "mathematically impossible" are quite different. Given that humans apparently exhibit the "GI" part of "AGI", I find "mathematically impossible" difficult to believe. "Extremely unlikely with current LLM architecture", sure, but that's a very different statement from "mathematically impossible".
If you are making a prediction on the viability of AGI assuming that an entirely new technology will make the efficiency problem of LLMs moot then you're essentially engaging in mysticism, aren't you?
It is correct to say it is mathematically impossible, as all the people making AGI claims rely upon advances that are not even theoretical, they have not even been discovered yet, and the mere possibility of them is questioned by many scientists.
LLMs have hard and soft limits all over the place preventing AGI. You aren't gonna train and loop yourself to AGI because the compute does not exist, and will not exist.
My 4+ year point was for a single memory fab. Increasing capacity by merely 5% (generous assumption) takes 4 years and $10bn. It's starting to sound like the path to AGI in the current paradigm will cost infinite dollars and take infinite years of build-out.
Even with a transformational efficiency breakthrough, you still have hard limits all over the place. Where are you going to store all the data? Memory constraints again.
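The scaling claim above can be sketched with some back-of-the-envelope arithmetic, using only the figures given in this thread (one new fab adds ~5% capacity, takes ~4 years, costs ~$10bn); these are the commenter's assumptions, not industry data:

```python
import math

# Assumptions taken from the comment above (illustrative only):
CAPACITY_GAIN_PER_FAB = 1.05   # each new fab adds ~5% to total capacity
COST_PER_FAB_BN = 10           # ~$10bn per fab
YEARS_PER_FAB = 4              # ~4-year lead time per fab

def fabs_to_scale(factor: float) -> int:
    """Number of new fabs needed to multiply capacity by `factor`,
    assuming each fab compounds capacity by 5%."""
    return math.ceil(math.log(factor) / math.log(CAPACITY_GAIN_PER_FAB))

for target in (2, 10):
    n = fabs_to_scale(target)
    print(f"{target}x capacity: {n} fabs, ~${n * COST_PER_FAB_BN}bn, "
          f"~{YEARS_PER_FAB} years if all fabs are built in parallel")
```

Under those assumptions, merely doubling capacity takes 15 fabs (~$150bn), and a 10x build-out takes 48 fabs (~$480bn); if the fabs can't all be built in parallel, the timeline stretches far beyond the 4-year lead time.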
The thread always opens the same way.
“If AI makes human labor obsolete, who decides who gets to eat?”
And within six comments we’re back to the sacred mantra: it can’t even solve a trick logic puzzle from 1983, therefore capitalism remains intact.
Allow me to contribute in the proud tradition of the Extremely Calm Skeptic.
First, the entire premise is unserious. Labor cannot become obsolete because the model does not “understand.” I know this because someone on Twitter asked it a riddle about a barber and it got confused. An entity that fumbles a barber paradox is clearly incapable of displacing accountants, paralegals, translators, mid-level engineers, support staff, or analysts.
Second, demos are misleading. Yes, it can draft contracts, generate production code, summarize regulatory filings, build internal tools, design marketing campaigns, and tutor students. But those are not real jobs. Real jobs are the parts that feel difficult and validating when I do them. The fact that those parts are shrinking is a coincidence.
Third, intelligence is not the bottleneck. The bottleneck is vibes. And regulation. And GPU supply. And “human judgment.” There will always be a final layer of ineffable judgment that only carbon-based life can provide. If pressed for examples, I will gesture broadly.
Fourth, labor markets adapt. We replaced elevator operators and invented social media managers. Therefore if large chunks of cognitive labor become cheap, the economy will effortlessly invent millions of new roles titled “Senior Human in the Loop.” The transition will be smooth. There will be no political consequences. History has a flawless track record here.
As for the eating question, that only becomes serious if labor is no longer the main mechanism for income distribution. And that won’t happen, because the models hallucinate sometimes. When something occasionally makes an error, it cannot possibly be economically transformative. By that standard, humans have been non-disruptive for millennia.
If I’m being honest, the resistance has less to do with token prediction and more to do with self-preservation. I invested years building scarce skills. Scarcity is flattering. If intelligence becomes abundant, that flattery evaporates. So I do what any rational actor would do: redefine scarcity.
When it automates my junior tasks, that’s augmentation. When it handles mid-level tasks, that’s assistance. When it approaches senior tasks, that’s hype. If it ever clears that bar, I’ll discover a higher one.
This is not fear. This is prudent analysis performed while quietly pasting my entire codebase into three different models before standup.
So who decides who gets to eat?
If productive capacity detaches from human effort, ownership becomes the obvious lever. That’s not speculative. That’s how capital has always worked. But acknowledging that would mean treating the premise seriously.
Much easier to point at a cherry-picked failure and conclude that intelligence on tap changes nothing.
Anyway, back to my workflow where the fake autocomplete drafts the spec, writes the code, generates the tests, and explains the tradeoffs while I reassure myself that the important part was my supervision.
Watching people debate whether AI will displace labor is like watching someone in 1850 sincerely ask whether the steam engine might affect employment.
This is why UBI should scare everyone.
People see UBI and think, "Oh everyone will get basically what I have and I'm happy so they'll be happy too."
Humanity and economics don't work like that.
With capitalism we have the power to control our economic outcomes (to a large extent). Work more; earn more. This isn't perfect or always fair but life is never 100% fair.
With UBI, who do you think decides how much income you get?
And how do you think those people get that power?
And what happens when some resource is scarce and can't be given to everyone? Be it oil, medicine, medical services, food, clean water, a birth permit, etc.
Life isn't fair now - but it took a lot of blood and effort to get to this point and life can get A LOT more unfair if we're not careful.
> Work more; earn more.
That's only true from a very narrow perspective. Sure, if you have one job and you pick up another, you make a little more.
But, from a broader perspective, it's pure fiction. The hardest working people are often the poorest among us.
Your connections matter far more than your work ethic ever will.
As opposed to what, the same situation and no UBI?
UBI doesn't prevent you from accumulating additional capital if you want better economic outcomes. It's a floor, not a ceiling.
> This is why UBI should scare everyone.
No
> Work more; earn more.
Dang these billionaires must be working a lot.
> With capitalism we have the power to control our economic outcomes (to a large extent). Work more; earn more.
Go tell that to the slave laborers in the global South that made the clothes you are currently wearing. These people work way harder than me and make way less. Truth is, capital earns more than labor under Capitalism. It is absolutely not a fair system for the vast majority of humanity, and should be improved upon.
But otherwise, I agree with your take on UBI.
Right now the AI bros are shouting from the rooftops that we're going to have no income and we'll have plenty of time to grow our own food. So I guess we have to weigh the pros and cons of that against UBI.
And, further, if people aren't getting enough food, all bets are off.
I'm not a huge UBI fan, but if we keep concentrating wealth, it's an inevitability. So we better start trying to make life more fair.
Nobody will prevent you from growing your own food and raising your own cattle, as was the case for 99.9% of humanity's existence.
One of the biggest problems we have these days is that most people don't want to live in the countryside and consider that
It's true... maybe we can live like we did 2,000 years ago.
"Nobody will prevent you..." Can I plant on your property and keep all the produce? My property isn't big enough to support my family of two. Where shall I plant? The fiefdom?
I just don't get this take. It sounds awful for virtually everybody. We live at a scale that is not like it was when I was born. Asking everyone to raise their own beef and vegetables is as much a non-starter as telling them they can just build their own tractors to till.
Edit: I'd like to acknowledge Poe's Law might be in effect here. :)
There's plenty of cheap land if you dare to go a bit further out than the big cities, at least in my country.
Most people just want to live where everybody already lives
> I'd like to acknowledge Poe's Law might be in effect here. :)
It's depressing that we can't be sure anymore because far too many people say things like this and actually mean it.
There should be an alternative version of Poe's Law where the view being parodied is so common, so ludicrous, and so associated with bad-faith actors that it wraps back around: the person deserves all the ridicule, plus an assumption that they are a bad actor.
Like, we get to assume it's a Motte and Bailey or Schrödinger’s Joke.
People also starved to death if there was a late frost or potato blight.
When I was a kid we had a half-acre potato patch and a quarter-acre vegetable garden. It was a fuck ton of work. It was only possible to keep up because my brother and I did most of the weeding, watering and assorted upkeep during our summer "holidays".
I wouldn't wish a subsistence farming lifestyle on anyone.
There is not enough arable land in the US for every person in the US to be able to live on subsistence farming. Subsistence farming is a lot more work, and much less efficient, than large scale industrialized agriculture.
That option has a few problems:
* It's not very cheap. The price of land is really high, and fertile land is even more expensive. Not to mention that raising animals and plants isn't free.
* Specializing is much more efficient than trying to do everything yourself, which requires at least a basic economy.
* Do you like healthcare? Affording healthcare (at least in the US) and actually having access to it from the countryside are considerable hurdles.
* Providing your own power isn't cheap, and getting it from others certainly requires something.
And this is to just name a few - definitely not an exhaustive list.
We, as a society, have moved on from subsistence living in small groups, and with the ratio of the number of people in the world to fertile land, it isn't a serious option for large numbers of the population. I love the dream of it, and I do think we could make a big shift towards food independence, even through urban gardening, etc., but I don't really see it going there unless some pretty large societal/environmental/economic shifts happen.
What really needs to happen, in my opinion, is a shift to the idea that we live in a time of abundance. We have the means to supply all of our needs to everyone, it is 100% a political choice to let people suffer in order for a few to thrive at levels that have never been seen. We should have food security, access to healthcare, and housing as a human right and we should do it in an efficient way (and we can!). Unfortunately we are in a time where power is once again being concentrated to the hands of the very few. There will always be work that needs to be done, but who determines what work is actually completed and who benefits from it can and has changed many times throughout our history.
> Nobody will prevent you to grow your own food and raise your own cattle like it was the case for 99.9% of humanity's time.
It certainly was not the case in most agricultural societies that people could get their own plot of land that they could cultivate without interference from the rest of the society.
To expect that the people able to replace other humans completely for working purposes, especially the kind of people who end up at the head of the kind of company able to do that, would peacefully let the rest of us be is delusional.
What's to stop 200 year old High Lord Musk from bulldozing your farm to put up a new power plant?
Castle Doctrine?
Castle doctrine doesn't work when you've got a veritable army at the door, which a bought government would happily send.
Farming requires inputs unless you started an agroforestry farm 10 years ago and are now just maintaining it. It's also insanely time consuming.
The entire point is that you won't have a job to do so time is not the biggest issue then
As someone who owns a small farm — and actually enjoys land, growing things — I'm just saying this is not a "solution" for the vast majority of people.
That doesn't scale to billions of people.
Yes
8+ billion people cannot grow their own food, and if you factor in climate change, many food-producing areas will be unable to grow anything edible.
So I would like to know davidguetta's thought process. Maybe he was being sarcastic.
That may work when the global population is 200 million to maybe 800 million, and it didn't even work that well back then.
Feeding eight billion plus people requires scalable technologies.
So, which nine of every ten people should die? You volunteering?
Who owns the land on which to do this farming? Oh, perhaps start with one of the oligarchs, Bill Gates, who owns 275,000 acres, ostensibly to increase agricultural productivity. But still, it's his land, not yours or mine, so how will we buy the food from him without a means of earning money?
My family gardens, but with a LOT of work, we produce only enough to supplement diet for part of the year. Fully sustaining would require multiples of the land we have...
Also, look into the actual lifestyle of subsistence farming, even for those who DO own the land. It is generally miserable hard labor and still entirely unreliable. One crop failure and your family gets to starve for a year...