I recently activated my account on there and went to the forum for my country. It was already taken over by moderators. Then I looked at the mod and saw he had taken all the real estate available on Reddit related to said country. So in a way, he was probably the first account on there and became god-king for eternity of the subreddits related to the country. I had no idea who he was, what he stood for, what his plans were for his newfound digital real estate, etc.
I feel like the moderated subforum is a fundamentally broken system for dealing with content. I much prefer the Federated / X / Instagram approach, where I can deal with users directly and have the tools to curate my own content, instead of relying on some ideologically captured no-name account that chooses what I can or cannot see on a whim.
Your country wouldn't be Norway by any chance? I remember that on Reddit there was one powermod who was dead-set on owning every Norwegian-language forum, and every name that could potentially be a base for people trying to escape him.
You need both. LLMs can, I think, do the bulk of removing posts that break community guidelines, but you need moderators to define and adjust the guidelines. Most would also like to have a human to escalate a dispute to.
Google is famous for having almost solely automated support, and it absolutely sucks at doing almost anything. AI-only moderation would go the same way.
> but you need moderators to define and adjust the guidelines
The comments above you are suggesting that global guidelines are unnecessary. Instead, they suggest you don't need moderation at all now that LLMs give us the technology to filter out the stuff individual users don't want to see, based on their own personal policies. I am sure you can come up with reasons to dispute that, but "you need moderators to do the thing you say is no longer necessary" doesn't add to the discussion.
The absolutely broken moderator system of Reddit made me leave it forever after being a regular user for more than a decade. The “god-king” thing simply doesn’t work.
Same here. The power-tripping of mods ruins reddit. Most don't care about the community as much as they care about exercising their absolute power over users.
And even if it does, the mods don't have real control to moderate communities either, so you get the worst of both worlds. I don't go to most queer reddit communities anymore because a lot of them have bots that downvote trans-positive posts, even if the community is specifically meant to be inclusive. There's nothing to couple active participation to voting weight or anything of that kind and voting is not considered "brigading" by reddit if the coordination happens off-site (at least not in a way that'd lead to any enforcement action).
It makes a great propaganda machine though, given humans have a tendency to measure their own opinions against social cues.
I still haven't been able to figure out how to make an account without it being immediately shadowbanned or normalbanned. Tried again the other day, it was something in between where logged-out users could see it was banned but I couldn't.
You need to ditch and replace all your devices and acquire a new phone number. I'm serious. Virtually all large websites these days employ a lot of fingerprinting and persistence technologies.
And yes, ditch them. Even well over a decade ago, Wikipedia of all places already employed IP address matching to link sockpuppet accounts. You must be extremely careful of never using any device that was associated with your old accounts on the same network as the devices associated with your new account. And that includes devices only seen by association.
It happens to all new accounts. It's known that new accounts are shadowbanned almost everywhere until they are 30 days old and have farmed some karma on a very small set of subreddits that don't shadowban new accounts. It's shocking they ever get any new users, really; as far as a non-technical new user knows, nobody ever reads their comments, for some reason.
My boss uses Reddit some. I'm banned. At the shop, we use the same IP address (and we do not use ipv6 there).
I tried to log in with a ~10-year-old account that I'd never commented with. A perfect Beetlejuicing moment had arrived and I just wanted to play the game with a short, snarky comment.
It logged in fine, and then: Insta-ban, just like that. (Maybe I should have used a new browser on a new network that I've never used before, but whatever -- nothing of value was lost here.)
Meanwhile, the boss man's access continued unimpeded; this suggests that it is a rather targeted contagion.
And it seems to follow the systems, not the networks.
(If anyone wants to get banned, just let me know. I seem to have a well-poisoned system to play with.)
It's either some personal unquenched thirst for power, or he thought the new Digg would be as popular as it was ~20 years ago and that he'd be able to control submitted content and get paid for "promoting" it.
I've seen something similar over the last ~17 years: the same bunch of terminally online accounts uploading content from our local media outlets to country-related subs and local Digg-like sites - both active ones and ones now defunct for 10 years. Some of those users even appeared on mastodon and bsky.
Social link aggregators were created for people to share their favorite links and places from the Internet so others could see them, have fun, expand their knowledge, and so on. For me it was the cherry on top of the Web 2.0 period, when everything was fresh, beta, and innocent. That lasted for a while, until other people and entities figured out that such sites could be used to promote their content and insert ads. The next stage, which remains to this day, is opinion control by "curating" the content and/or the reactions in discussions - still done by humans, but with an ever more prevalent presence of convincing bots.
Reddit itself lost its impartial and independent status a while ago. Big subs related to media franchises or big corporations are heavily controlled, to the point that it's impossible to submit content that's critical. It's all a happy world seen through rose-tinted glasses, or, as some say, toxic positivity.
There are still niche places where moderation is limited, but as I said last time, from my own experience: such subs were targeted by bad actors who, by submitting forbidden content, tried to force lockdowns so they could later take the subs under their control.
HN isn't free of some of these issues either. While discussions still remain at a good level (though degradation to Reddit levels is already happening), there's no control over content: there are accounts that do nothing but upload links every few minutes or hours.
I'm not sure if it's possible to have link aggregators or multi-thematic forums that could be free of such... issues. A similar problem with establishing "real estate" happened on Lemmy when part of the userbase decided to abandon Reddit due to controversial changes.
A well moderated forum (like HN) is great. I don't have time for the signal-to-noise ratio of X.
IMHO Reddit would be better if it had AI moderators that strictly follow a sub's policies. Users could read the policies upfront before deciding whether to join. New subs could start with a neutral default policy, and users could then propose changes to the policy and democratically vote on them.
If the policies are public, there's a lot more transparency. eg my city of millions of people has a subreddit. The head mod bans people for criticizing a certain dog breed. This "policy" is pretty opaque, but if the AI enforced subreddit rules say "thou shalt not mention the dog's breed when commenting on articles about someone being mauled to death", more people would be familiar with the rule (and perhaps there would be more organized discussion).
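In such a setup, enforcement becomes a pure function of (post, public policy), which is what makes it auditable. Here's a minimal sketch of that idea; a real system would call an LLM classifier against the policy text, so the keyword matcher, the rule names, and the example phrases below are all hypothetical stand-ins.

```python
# Sketch: policy-driven moderation where the rules are public, versioned data
# and a removal can always be traced to a specific named rule.
from dataclasses import dataclass, field

@dataclass
class Policy:
    version: int
    rules: dict = field(default_factory=dict)  # rule name -> banned phrases

def moderate(post: str, policy: Policy) -> list[str]:
    """Return the names of every rule the post violates (empty = allowed).
    An LLM would replace this keyword match in practice."""
    text = post.lower()
    return [name for name, phrases in policy.rules.items()
            if any(p in text for p in phrases)]

# Hypothetical policy, echoing the dog-breed example above.
policy = Policy(version=3, rules={
    "no-breed-mentions": ["pit bull"],
    "no-personal-attacks": ["you idiot"],
})

print(moderate("That pit bull story again...", policy))  # ['no-breed-mentions']
print(moderate("Interesting article, thanks!", policy))  # []
```

Because the policy is plain data, a proposed rule change can be diffed and voted on like any other document revision.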
I was on a subreddit for a while that voted on rules and had a rotating dictator to facilitate them. It worked decently well, although it never got to the point where the sub was brigaded. This was also pre-LLM so moderation was still a big time sink and the sub eventually fizzled out
I've always thought that on Reddit (or Digg, or Lemmy, or others), common words, brands, names... should be broad "topics" or categories that nobody can claim on a first-come, first-served basis. You should be able to add a sub/community under a topic, but just like everyone else, and then users interested in said topic could add and exclude different subs to taste.
sadly, a nice idea that is painfully naive with how computers are used in reality.
One need only remember how easy it was to take over IRC channels with a few hundred bots to see the endgame of this rationale… it cannot be patched out, it’s inherent to the internet.
Whatever would make a vote valid can (and will) be gamed.
It could work depending on how it is set up. Maybe only accounts that are n years old get a single vote, and don't let any random 2-day-old account vote at all.
As long as sub forums can be created easily, users may pick their sub forum and thus indirectly moderator.
In this setup having users elect the moderator leads to cases where small groups create their special interest group and then some trolls challenge the moderator.
There may be some oversight on the large subforums, but not on all.
Necessary for this is that subforums can't have unique names. If a bad mod can squat all the words like "computers", "programming", "coding", newcomers aren't going to know the best subforum is called "RealProgNoBadMod"
You see this in city-focused subreddits. But the reality is the name is power. New users type in their city and join the original one. The hostile mods suppress mention of the new one. It never manages to get critical mass.
Crucially, SO's election system needs to be bootstrapped: users aren't eligible to vote until they have a history of participation. The level of participation is fairly trivial, but it provides enough signal to allow a reasonable detection (and elimination) of bot / sock puppet networks without resorting to crude measures like blacklists or "bot tests".
For new sites, this meant that the bulk of moderation was done by employees, followed by employee-appointed temporary moderators. This dramatically reduced abuse, but also reduced the explosion of new sub-communities that sites like Reddit thrived on.
It was pretty decent in the mid and late 00s. The community started turning toxic in the very early 10s and by about 2015 was quite poisonous. The saddest part is that the problem was known and spoken about frequently, but the response to that from staff and/or high-level mods was to just double down and dig in.
The internet is way behind on democracy. In general, everyone likes democracy until they're in charge; then they realise they're the best person to be in charge, and the idiots who vote don't have a clue and should probably be banned, if not beheaded, for speaking out of turn.
You'd have to weight votes by some kind of participation metric to solve the problem of very little authentication of the voters
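One way to sketch that weighting: give no vote to accounts below a minimum age, then let weight grow with participation but with diminishing returns and a hard cap, so farming activity can't buy unbounded influence. The thresholds and the log curve below are illustrative assumptions, not a known-good formula.

```python
import math

def vote_weight(account_age_days: int, comments_last_year: int) -> float:
    """Illustrative participation-weighted voting:
    - accounts under 30 days old get no vote at all;
    - weight grows logarithmically with activity (diminishing returns);
    - capped at 3.0 so no single account dominates."""
    if account_age_days < 30:
        return 0.0
    return min(3.0, 1.0 + math.log10(1 + comments_last_year))

print(vote_weight(10, 1000))   # 0.0  (too new, no matter how active)
print(vote_weight(365, 0))     # 1.0  (old but silent: baseline vote)
print(vote_weight(365, 9))     # 2.0  (modest participation doubles weight)
```

The cap is the important design choice: without it, a bot that spams comments converts activity directly into voting power.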
A democratic election requires that the elected be your employee, where you work with him on a regular basis to direct him in his job. That works (ish) in government where people doing the hiring have heavily invested life interests in it succeeding.
Does a subforum offer the same? Once the mod is elected, are you going to sit down with him each day to make sure he is doing the job to your wishes and expectations? I say (ish) in government because it often doesn't even work there, even where people have heavily invested life interests, with a lot (maybe even the vast majority!) of people never getting involved in democracy. A subforum? Who cares?
If there were to be elections, it is unlikely they could produce anything other than authoritarian rule, with the chosen one becoming the ultimate power.
>> I recently activated my account on there and went to the forum for my country. It was already taken over by moderators. Then I looked at the mod and he took all real estate that is already available on Reddit that is related to said country.
Are you sure? My understanding is that accounts were only allowed to create two communities.
Kinda seems like we’re rapidly headed for the complete collapse of the internet as we know it.
Every site that is driven by user posting seems to be headed towards being overrun by AI bots chatting with each other, either for sake of promoting something or farming karma.
And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle. Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
The bot problem cannot be solved. Even if you strongly authenticate, people are letting bots act on their behalf (moltbook is a great example of this), and what's to stop people from doing that in the future? Build your identity and reputation autonomously, with the benefits that come with that.
This happens now on Onlyfans too. Content creators hire agencies which in the best case outsource chatting to "customers" to armies of cheap labour in Asia, and the worst case use bots.
The dead internet theory [1] is probably not just a theory anymore. HN recently made a policy to not allow AI posting and posters, but do you honestly think that's going to work? I would place a bet that a top HN poster within the next year is outed as using AI for posting on their behalf.
Anubis is one such answer [0]. Cryptocurrency and micro transactions are another.
In the last few decades, spam was a problem because the marginal transaction costs of information exchange were orders of magnitude lower than they had been. Note that physical mail spam was, and still is, an issue. Focusing on perceptual or fuzzy computation as the limiting factor, through captchas and other 'human tests', allowed most spam to be effectively mitigated.
Now that intelligence is becoming orders of magnitude cheaper, perceptual computation challenges no longer work, but we can still do computation challenges in the form of proof of work or proxies thereof. Spam will never wholly go away but we can at least cause more friction by charging bot networks to execute in the form of energy or money.
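The classic form of this computation challenge is hashcash-style proof of work: the client must find a nonce whose hash clears a difficulty target, which is expensive to find but cheap for the server to verify. A minimal sketch (the challenge string and difficulty are arbitrary examples):

```python
# Hashcash-style proof of work: one post costs an honest user milliseconds,
# but a bot farm pays that cost millions of times over.
import hashlib
from itertools import count

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce whose SHA-256 digest falls below the target.
    Expected work: ~2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """One hash to check: verification is asymmetric with solving."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve(b"post:12345", 12)        # ~4096 hashes: trivial for one post
assert verify(b"post:12345", nonce, 12)  # server checks with a single hash
```

The server tunes `difficulty_bits` to set the price of admission; tools like Anubis apply the same asymmetry at the HTTP layer.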
Suppose bots get so good that they become indistinguishable from humans. If that's true, then it arguably doesn't matter if your community is all bots. But it does matter, because authenticity matters to humans. They will seek authenticity where they can successfully sense it, which will be in person.
Human simulacra will one day cause a repeat of this issue. Then we'll have a whole Blade Runner 2049 question about what exactly authenticity is.
"Creator", on the other hand, is beautiful. It means you don't have to pick a lane. Anything can be creative. Documentary filmmaking, stop motion, dance, costume work, historical reenactment, indie animation, economics essays, game dev...
The problem is we don't have a nice word that holistically captures the output of creators. They're not all making films or illustrations. So what do you call it? "Art" is awkward.
"Content" works, but it sounds like slop. We need a better alternative word that elevates creative output.
> people are letting bots act on their behalf (moltbook is a great example of this) and what's to stop people doing that in the future.
Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
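The core property being described is a per-site pseudonym that is stable (so bans stick) yet unlinkable across sites. Real verifiable-credential schemes achieve this with blind signatures or zero-knowledge proofs; as a rough intuition pump only, a keyed hash held by the identity provider gives the same stable/unlinkable shape. The provider key and names below are hypothetical.

```python
# Intuition sketch: an identity provider that has verified a real person
# derives a site-specific ID. The site never learns who the person is,
# but the same person always maps to the same ID on that site, so a ban
# is permanent. (HMAC is a stand-in for the actual cryptography.)
import hmac
import hashlib

PROVIDER_KEY = b"secret held only by the identity provider"  # hypothetical

def pseudonym(real_identity: str, site: str) -> str:
    msg = f"{real_identity}|{site}".encode()
    return hmac.new(PROVIDER_KEY, msg, hashlib.sha256).hexdigest()[:16]

alice_on_forum = pseudonym("alice", "forum.example")
assert pseudonym("alice", "forum.example") == alice_on_forum   # stable: bans stick
assert pseudonym("alice", "other.example") != alice_on_forum   # sites can't link her
```

The trust trade-off is that the provider can link everything; the zero-knowledge versions of this exist precisely to remove that single point of surveillance.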
You kinda skipped the bit I wrote alongside this about strong authentication. There are numerous ways to do this. For example, in Finland you have to physically identify yourself to open a bank account and you can then use that to authenticate. It's used for all public sector services and a few others with strict accreditation.
The issue is that it solves nothing if you can't distinguish between text that is written by AI and isn't, regardless of strong authentication.
There is the other side of this too: Real people - fake posts.
So, you have other folks on here already saying that the code their bots write is better than their own, right?
How long until someone who is karma focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even less stakes. I imagine that non native speakers will take their posts and go to an AI to help clean them up, at the very least. At the worst, I can imagine a person having a bot interact fully under their name.
So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.
I've talked about this on here before, but we think the solution is an auth layer built on top of credit score through an intermediary like creditkarma. The score itself doesn't really matter but it does solve big problems.
Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews / comments from users with credit scores over 650; they have less incentive to be astroturfing.
But yes, I think your conclusion is correct. This is the only way.
How is that creditkarma score accumulated? By other "users"? Does the intermediary guarantee that this account is a valid person, now and always, and that the account hasn't been sold or stolen? I mean, we will always need some middlemen, I guess?
IMO this is inevitable. HN is freaking out about the end of the anonymous internet, but it's already over and we're just figuring it out. Eventually the bots will find their 90s-cyberpunk-cosplay IRC channel too.
I'd rather have a system where there's a small investment cost to making an account, but you could always make another.
Imagine a system where there's a vending machine outside City Hall; you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."
Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
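One way to get that destructive-check property: the issuer stores only a hash of each token, and the sole "is this valid?" operation is redemption itself, which consumes the token. This is an illustrative sketch of the idea, not a real anonymous-credential scheme (those would also blind the issuer to which token went where).

```python
# One-time tokens whose status can only be tested destructively:
# asking "is it valid?" and spending it are the same operation.
import hashlib
import secrets

class TokenIssuer:
    def __init__(self):
        self._unspent = set()  # hashes of unredeemed tokens; raw tokens never stored

    def issue(self) -> str:
        token = secrets.token_hex(16)
        self._unspent.add(hashlib.sha256(token.encode()).hexdigest())
        return token

    def redeem(self, token: str) -> bool:
        """The only status check; succeeds at most once per token."""
        h = hashlib.sha256(token.encode()).hexdigest()
        if h in self._unspent:
            self._unspent.remove(h)
            return True
        return False

issuer = TokenIssuer()
t = issuer.issue()
assert issuer.redeem(t) is True    # first use: valid
assert issuer.redeem(t) is False   # any later probe just sees "spent"
```

A reseller can't quietly verify a batch of tokens before selling them, because each check burns the token being checked.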
Something Awful made you pay $10 for an account. Directly to the forum. If you got banned you could pay another $10 to try again. Somehow this didn't lead to that bad incentives even though you'd think it would.
The ban reason and the moderator's name were public on Something Awful, which allowed the community to respond (actively or passively), and more senior moderators/admins to take public action against rogue moderators. The transparent audit trail somewhat countered the incentive to ban, but a lot of people also treated getting banned as a game.
Lemmy isn't simply Lemmy since it's federated. A screenshot like this is somewhat meaningless without specifying on which instance this happened. There are instances with very lax or even no moderation at all.
For the majority of large, well-federated instances, I don't think it's meaningless, because deletions also propagate to other instances.
If a mod on one server doesn't like something I say, and they delete my comment, all the other (well-behaved) federated instances will also delete my comment.
Of course this also creates problems in the other direction, like servers that ignore deletion requests.
Combine that with the large number of blocked instances across the board, and I feel like you get into this "which direction would you like to piss into the wind" situation, where you have no idea how many people/instances will actually see your message, if any at all.
>transactional emails from various services that you’ve signed up for
These are some of the main culprits of unwanted email... and a toll system would make them all the more valuable for the even worse actors to take advantage of.
Probably not; the problem is that spammers/scammers are looking for whales, and if you are talking about draining the retirement accounts of an American who's been saving all their life, that's quite a big payout, in the six or seven figures.
In the case of the 419 scams I used to ask, "who would expect $20M to fall out of the sky?" The obvious answer is "someone who already had $20M fall out of the sky".
The bot problem can easily be solved. It’s just that no one likes the cure. Think about this for a minute: what would happen if you had a country where all its citizens could act anonymously with no consequences, no reputation, no repercussions, and no trace? Would you want to go there? Live there? No, because it would be a lawless wasteland dominated by the worst of the worst.
Yet people act like the internet is somehow different. The internet is a massive society. Social networks are very much like virtual countries, or even continents. We’ve all enjoyed the benefits of living in this society of zero consequence, but it’s now been overrun by the very worst people, just like the imaginary country above.
You claim we can’t solve this problem, but we already have solved it here in the physical world with identities, laws, and consequences. The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity. Unfortunately, there won’t be a choice for much longer. The internet will certainly be dead without a system that ties IP addresses and online identities to real people.
No, it’s not the internet we all wanted, but humanity has ruined the one we have.
I can imagine an "anonymity" or "reputation" filter attached to every interaction on the internet. Enabled by default, but you can disable safe mode and watch the bots having fun.
Also, for me the problem is not anonymity itself, but the lack of reputation. If I have a signal that an entity can be trusted, I don't care much about its real identity.
I think this is a great way to frame the conversation and a possible solution: reputation. Things like accumulated karma or credits and IRL connections (big data will love this) all begin to feel dystopian, whereas reputation, I believe, is something everybody can get behind. It can absolutely remain anonymous while still benefiting from IRL meetups for big reputation bumps (just use your handle). We all hang out in lots of places online; let that rep build and be used everywhere. Pretty sure they were trying to do something like this in the fediverse, but I haven't touched base on it in a long time...
So you are missing something here. Up until recently, IRL was anonymous by nature: capturing all that data about what people were doing was expensive and difficult to process. Cameras weren't everywhere either.
If you lie to me in the real world, I know what you look like and won’t trust you again. You cannot change your face. If you punch me in the real world, I can punch you back. If you stab me in the real world, you’re likely going to jail once the police catch up to you. You don’t do any of those things because the lack of anonymity imparts consequence. There is no anonymity in the real world unless you run around in a full face mask, in which case no one will trust you anyways.
>The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity.
Anonymity is not the problem though. We've gone with anonymity for a long while and it has worked fine. Would a removal of anonymity suddenly fix all this? No, absolutely not. Astroturfing and PR campaigns happened before AI comments were a concern, same as bad actors.
The problem here is the "recent" development of trusting whatever you read online. Of insisting that content should be personal, trustable and real, when none of this can ever be ensured. The separate, but related problem of engagement-based economy makes it way worse.
And remember: social media sites don't actually want to get rid of bots, for the most part. That's not in their interest, as long as bots increase engagement, does anyone trust them to actively hurt their bottom line in order to promote honest, productive discourse? Please.
With AI running rampant, it seems security through obscurity is basically the best thing we have. Everyone knows reddit, facebook, xitter, etc so any clown can and does have bots running loose. HN is "obscure" in that most normies don't know about this place, and so it's relatively safe from the floods of spam. But I think it's just a matter of time until non-tech people start looking for those few bastions of human comments online, come across this place, and a great flood begins and it'll never be undone. After that, I guess it'll be a rise of invite-only forums like we had in the early 2000s all over again.
HN may not be “mainstream” but it is certainly _very_ vulnerable to bot spam given the topics discussed and the make-up of the audience.
You can already see it happening now - at least the bots that write like vanilla Claude/ChatGPT. Presumably there is a much larger hidden cohort of bots that are instructed to talk more naturally and thus are better adept at flying under the radar…
I’ve never seen people on the likes of blackhatworld selling hacker news accounts or services. The glass half full take on this is that hn is surprisingly robust in its ability to deal with vote manipulation.
If you are rate limited, a moderator has manually applied a rate limit to your account. Accounts are not rate limited by default. You can appeal the decision by emailing hn@ycombinator.com.
I think there's a short-term rate limit applied to everyone, e.g. you get a message if you try to post three replies in the same minute. I've seen it once, and I don't think I'm active enough to have earned a manual flag.
The karma points you get on HN are worthless, which I think is a bonus. They don't buy you anything. On Reddit, for instance, many parts of the site are walled off until you have "farmed" enough karma to participate.
You get the right to downvote, and if I promote my totally-not-a-scam product on HN, people will check my user account and see: oh wow, over 9000 karma? Gotta be trustworthy - when in truth it's just been karma farming.
I don't know, never found much value in karma. I recreate an account at least once a year for no particular reason and it roughly takes me a week to get enough karma to do what is important (flagging posts).
150M page views a month is peanuts, very far from the "social" networks' numbers. I don't have those numbers, but I know how many page views we had in 2011 while running a German browser-game community.
The internet seems to have grown massively within the past couple years (unfortunately, almost certainly because of bots). I bet the number today is orders of magnitude higher.
I was thinking the same thing, that this wouldn't necessarily be a bad thing. I'm curious how far it will go.. if we'll get invite-only mesh networks with self-contained mini-internets and the like.
I've asked ChatGPT a question about something I read in a thread here and it responded with a comment from that thread, even though the thread was less than an hour old. HN is well known in the tech community and there are certain subjects, especially anything involving Israel or India, that nearly instantly result in a flood of comments from bad actors. HN isn't Reddit but it's also a shadow of what it once was, which is driving away more of the productive participation in favor of agenda-based posting.
Note that these topics often involve comments which you can predict very easily. Internet users are like that, agenda or no. Wasn’t it in the heyday of forums that you could recognize the most prolific/annoying members by their style and vocabulary? A model should have no problem pulling such things off.
The future is human curated content. Provide the same experience people get today but without the noise. Give them just the good stuff and don't let just anyone make a post. A book has an author, a movie has a director, maybe websites can have webmasters again who filter through the garbage for you.
You've nailed it. Social media is no longer and will never again be a substitute for real human interactions. It sort of worked when it was mostly real humans, but that era is ending and not coming back. Algorithms are now controlling what you see, and bots and agents are increasingly creating and posting most of the content.
Everything clicks nice, so to speak. A nice UI you have there.
I would suggest you explain what it's about in one sentence, just like you do in your HN profile. The About page doesn't say much. You could add some explanation there, or even just one sentence at the top of the homepage (or other pages).
This means that only sites which verify identity will have any value in the future. And by verified, that means against government ID and verified as real.
No amount of sign up fee works as an alternative.
Note that a site can verify identity, prevent sock puppets, ban bad actors and prevent re-registration, all while keeping that ID private.
You still get a handle and publicly facing nick if you want it.
The company which handles this correctly will have a big B after it. Digg actually has a chance at this.
It has no users, so the outrage won't exist in the same capacity. Existing platforms will be pummeled in the market if they try to convert to this type of site, as their DAU will likely drop a thousandfold, just due to the eliminated bots.
But Digg could relaunch this way. And as exhibited, this is now the only way.
The age of the anonymous internet is over, it's done. People not realizing this are living in the past.
Note, I don't like this, but acknowledging reality is vital. Issues with leaked databases, hacked users, and stolen PII are all technical and legislative problems, and not relevant to whether or not this happens.
Because it will happen, and is happening.
It should be noted that falsifying ID is a crime. Fake IDs coupled with computer fraud laws will eventually result in hefty jail time. This is sensible if people want a world where e-commerce and discourse are online... and the general public does.
And has exhibited a complete lack of care about privacy regardless.
I think people who want to stay anonymous just will not participate anymore. Like, I've enjoyed using this site, Reddit, etc., but I couldn't care less about dropping them if I need ID verification to access them. Someone will probably create a new communication method to replace this.
>No amount of sign up fee works as an alternative.
Simply put, money is worth too much; at some point someone will want access to this human audience and will offer too much to be resisted.
>It should be noted that falsifying ID is a crime
Lol, no one gives a shit on the internet. People will use stolen IDs to get accounts. If the network is lucrative enough, governments will provide fake IDs to spread propaganda.
Every website needs to add the "friend or foe" system[0] so that I can mark bots to avoid their content and mark good posters so I can filter just to theirs.
no, I truly do not want to read IHeartHitler88's opinion on Jews, or donttreadonme09's bright opinions about how the economy would be better if we listened to Ayn Rand. I'll be very happy when they're out of my sight. If I want to have a miserable day, sure, I'll turn the filter off.
Fact of the matter is, most posts on the internet are already dogshit. Now they're also populated by AI, but the point stands. Most of what you will say online is at best useless.
I know, it hurts. Most of what I say in this website doesn't matter. Even if it did, it's about the same thing as screaming into the void. And it applies to you too.
The vast majority of what we post is vapid, useless bullshit.
> And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
You just published good content knowing AI will slurp it up and not give you any traffic in return. I'm now replying to you with more content with the same expectations about AI and traffic. Why care about AI or traffic or recognition? Isn't the content the thing that matters?
It's like answering technical questions in an anonymous/pseudonymous chat or forum, which I'm sure you've done, too. We do it to help others. If an AI can take my answer and spread it around without paying me or mentioning one of my random usernames I change every month or so, I would be happy. And if the AI gives me credit like "coffeecup543 originally posted that on IRC channel X 5 years ago", I couldn't care less. It would be noise to the reader. Even if the AI uses my real name, so what?
The people who cared about traffic and money from their posts rarely made good content, anyway. Listicles and affiliate marketing BS and SEO optimizations and making a video that could be 1 minute into 10 minutes, or text that could've been 5 articles into a long book - all existed from before AI. With AI I actually get less of this crap - either skip it or condense it.
It's two different problems. People who run review sites and blogs and such care about traffic, and not getting attribution will kill their desire to participate. People who post here and on Reddit etc. care about talking with other human beings, and feeling ignored in a sea of botspam will kill *their* desire to participate.
> feeling ignored in a sea of botspam will kill their desire to participate.
The bots are not really that bad, they're (still) pretty easy to spot and not engage with. I'm more perplexed about the negativity filled comments sections, and I'm pretty sure most posters are real grass-fed certified humans.
I don't get why negative posts get upvoted so much, get so popular on the front page, and why people still debate outdated arguments in them. People come in and fight their own demons, make straw-man arguments, and generally promote negative stuff like there's no tomorrow. I think you can get so much more signal from positive examples, from "hey, I did a thing" type posts, and so on. Even overhyped stuff like the claw-mania can still be useful. Yet the "I did a thing" posts get so overwhelmed by negativity, nitpicking, and "haha, not perfect means DOA" type messages. That makes me want to participate less...
Oh that's just human nature: there's a reason why trashy tabloids continue to exist despite how public sentiment seems to universally agree that they're awful spreaders of rumour and insecurity. More people are Skankhunt42 than we'd like to admit.
In the most simple sense - Yes, it is the content that matters.
In the more practical sense - cognitive and emotional resources are limited and our brains are not content agnostic.
We have different behaviors, expectations and capacities for talking to machines and talking to humans.
For example, if I am engaging with a human I can expect to potentially change their minds.
For a machine? Why bother even responding. It’s of no utility to me to respond.
Furthermore, all human communication comes with a human emotional context. There are vast amounts of information implied through tone, through what we choose not to say. Sometimes people say things in one emotional state that is not what they would say on another occasion.
To move the conversation forward, addressing the emotional payload behind the words matters more than the words themselves.
There are myriad reasons why humans are practically poorer for these tools.
Asking people for money in order to read stuff, and promoting the ones people are actually ready to part with real money to read, is an interesting first step. (See Substack, Patreon, etc.)
I know this is going to sound horrible, but: how about charging money to contribute, period? Maybe have a free tier of a couple of comments, etc. But if you want to build a troll factory, sure... show us the cash?
I do believe that charging for it is one way to create some friction, but it's not enough.
Twitter is full of blue checks that are just bots and automated reply guys.
I'm now treating all these bots as a stressor on our defense systems; we will end up having to learn how to build a real Web of Trust, and really up our game on the PKI side. We also need a good zero-knowledge proof of humanity that people can tie to their Keyoxide profile, so that we can just filter out any message that is not provably associated with a human.
This could be positive. So far things were gamed and manipulated to some extent, with some fake content, but it was never too obvious, and a bit of a cat and mouse game with filters and whatnot. Now, it's so easy to fake content that robust systems will have to evolve, or most social media sites will become worthless, and advertisers will catch up eventually when they are paying for bot-only sites.
The downside of course is that these robust systems are hard to imagine without complete loss of anonymity of the users.
Web of trust weakens anonymity, but doesn’t eliminate it.
- You know who your online invitees are, but not your invitees-of-invitees-of-…
- You can create an account, get it invited, then create an alt account and invite it. Now the alt account is still linked to you, but others don’t know whether it’s your friend or yourself. (Importantly, you can’t evade bans with alts; if your invited users keep getting banned, you’ll be prevented from inviting more if not banned yourself)
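The invite-tree idea in the bullets above can be sketched in a few lines. This is a hypothetical illustration, not any real platform's implementation; the names and the ban threshold are made up:

```python
# Sketch of an invite-tree trust model: every account records who invited
# it, and an inviter whose invitees keep getting banned loses the ability
# to invite new accounts. The threshold of 3 is illustrative.

class InviteTree:
    BAN_THRESHOLD = 3  # banned invitees before the inviter loses invite rights

    def __init__(self):
        self.inviter = {}          # account -> who invited it
        self.banned = set()
        self.banned_invitees = {}  # inviter -> count of their banned invitees

    def invite(self, inviter, new_account):
        if inviter in self.banned:
            raise PermissionError("banned accounts cannot invite")
        if self.banned_invitees.get(inviter, 0) >= self.BAN_THRESHOLD:
            raise PermissionError("too many banned invitees")
        self.inviter[new_account] = inviter

    def ban(self, account):
        self.banned.add(account)
        parent = self.inviter.get(account)
        if parent is not None:
            self.banned_invitees[parent] = self.banned_invitees.get(parent, 0) + 1
```

Note that others only ever see the immediate inviter edge, so the "invitees-of-invitees" anonymity property above is preserved while ban evasion still costs the evader their whole invite subtree.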
> Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle.
The creative loop moves inside the agentic chat room, where we do learning, work, art, research, leisure, planning, and other activities. OpenAI is already close to 1B users and puts multiple trillion tokens per day into our heads, while we put our own tokens into their logs: an experience flywheel, or extended-cognition wheel, of planetary size. LLMs can reflect on and detect which of their responses compound better in downstream activities, and derive RLHF/RLVR signal from all our interactions. One good thing is that a chat room is less about posing than a forum, though LLMs have taken to sycophancy, so they are not immune, just easier to deal with than forums. And you can more easily find another LLM than a replacement specialty forum.
Every website that was driven by traffic is also dying. I have put nearly a decade of work into mine, and AI overviews and ChatGPT have reduced traffic by over 60%. At some point I will need to give up and find a job, and that corner of the internet will get no new original information, just rehashed slop.
As someone who came of age before “the internet as you know it”, I am looking forward to all of the cancerous Web 2.0 OG slop and narcissism factories succumbing to their own fates. Let me tell you, the internet as we know it sucks, and the internet it ate 25 years ago was a marked improvement. We should be so lucky. Now go write a personal blog in plain text, and rejoice.
You mean a complete collapse of social media, not the whole internet. The internet is a telecom ecosystem and has a lot more to it than just forums and link aggregators.
I honestly believe it might not even be such a bad thing. People were arguably better without social networks and media, and it's perhaps better to let the cancerous thing just die and keep the internet just as a utility powering boring things like banking and academia.
What would you say are the major applications of the internet? It's used for business and academia in ways that aren't going away, yes. M2M communication will stay. Social media is the largest user-facing segment and it might not. I don't have a sense of how big these sectors are relative to each other. If the largest sectors of the internet disappear, the internet shrinks a lot.
Unless you're allowed to say slurs without being banned, your forum will be overrun with bots. The sanitation of the internet is the perfect breeding ground for brand-safe AI promotion bots.
Curious how you came to that conclusion. Anecdotally, places where you can slur to your heart's content like /r/conservative seem far more inundated with bots than other areas of Reddit. I feel like that's really saying something too, because Reddit has a really bad bot problem overall.
At some point websites will just have to start charging an entry fee just to make it so if you really are yet another bot, at least you are paying for your stay. If you're not rate limiting your websites in 2026 on a per user level, you really need to, and figure out how to do it meaningfully. Raise limits for known human power users, especially if they pay to use your website.
I wonder if the "short-term" "fix" is that people will start to migrate off the web and into mobile apps. None of this stops agents from using phone emulators, so it's kind of pointless, but I imagine crawling the web is easier for AI.
This is a comically short lifespan. Didn't they launch less than six months ago? Just torching it and shutting it down is wild, and the announcement references downsizing the team right from the jump... I got the impression this was a fairly small team from the beginning. Not to mention it was backed by stupendously wealthy cofounders who made fortunes off the Web 2.0 run of the original Digg and Reddit, yet they can't seem to stomach a bumpy two-quarter initial launch?
There was a lot in the new digg that I was concerned or at least not optimistic about but come on - are we even going to try anymore?
> None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on.
I used to love HN. Lots of interesting stuff, great articles, novel projects. Now it feels like the frontpage is always around 70% LLM-related stuff. And not breakthrough research or projects, just "new Claude version X" and shit like that. Eternal September I guess?
It's not, hence the "don't post AI slop as your comment" posting a few days back that had 1000+ comments.
Currently an unsolved problem - just stealthier on some platforms than others. Trigger the right topic on HN and the bots come out in-force together with humans sloppily copy/pasting LLM content.
I am kind of peeved. I started a community there and diligently posted links to topical news, and it kind of became a reference to me. Like many others, I've put in some amount of effort.
Now it's gone, again. Without a heads-up or a way to get a backup out of it, it seems. Can't say I am a fan of that.
Kevin Rose didn't start Hodinkee, he started Watchville years after Hodinkee was already well established. Watchville merged with Hodinkee, at which point he became the CEO for 2 years.
From what I can tell Watchville was abandoned a few years ago.
That's exactly what they did to the old Digg back in 2010 -- massive redesign that effectively deleted all old posts, comments, and favorites without warning or opportunity to back up. I feel pretty vindicated choosing not to trust them again, though it's wild they didn't even make an effort to do better here when they claim to want to keep going.
I do have a lemmy account, but have not really returned to it in a while. Maybe I haven't found the right communities yet, but it had nothing about it that felt engaging. People upvoted, but nobody talked. No interaction. Digg felt more alive from day one. I replied to a post in a niche community with ~100 members and only afterwards realized it was @justin.
My experience with lemmy has not been nice. A majority of people there are just downright awful, and the mods are often power-hungry and overzealous in their actions. Many times entire servers are defederated from many others due to how a large percentage of their users behave.
You're right, and that is one of the lessons to be reminded of here.
My main point wasn't that, though. It's simply a bad and low-effort way to handle the situation, and like one of the other replies points out, there are better options. They could have just as well disabled posting and maybe even viewing of submissions and communities for the time being. Just shutting it all down immediately without notice leaves a bad taste in my mouth, and I will not be among the people returning for their next relaunch. I am sure others feel the same way, and I don't think it is a wise decision to needlessly put off your early adopters if you're hoping for them to come back "next time".
Argh. Also quite irritated. I had 50/50 transitioned over to it despite the lower traffic because it was a calm oasis. The thing about bots is believable, though, because you could already see it happening. Dead Internet has been real for a while, and I'd love to see Kevin and Alex do a followup on this.
Yeah. Sadly the default communities were flooded with blog spam, and that's just the part I noticed. A couple days ago a bunch of smaller communities also got a noticeable bump in members. That didn't change anything in my own community, but others apparently weren't so lucky.
I can see why the team got overwhelmed. I wouldn't want to have to deal with that.
The "new" Digg was just Reddit with the exact same type of comments you can find there and I left it (Digg and Reddit) because of that. There are very few sites where real discourse is still possible without it being filled with memes, running jokes, "witty" one-liners and the constant need to "one-up" and call-out each other. What does Digg even want to be? Nobody needs a second nu-Reddit. It speaks volumes that this post also seems to be AI-generated.
> sites where real discourse is still possible without it being filled with memes, running jokes, “witty” one-liners [etc]
There are subreddits within Reddit such as https://www.reddit.com/r/neutralnews/ that have strict rules around sourcing, etc. However, I think that’s not what most users want, and may not be quite what you’re looking for either, apologies.
There are three horsemen of Internet forums; one of them is topics with a low barrier to entry.
At that point anyone can speak up, and their opinion takes up as much screen real estate and reading time (often less reading time) as a truly informed take.
By putting effort barriers in place, it forces a fitness test that most users (and bots) fail.
Another subreddit which has strong rules is r/badeconomics. I didn’t know about neutralnews, so thank you for giving me another example to add to the list.
I think communities like Reddit and Digg grow to a certain point and don’t grow anymore because everybody else absolutely hates what those communities have become. See the fight years ago where Digg thought it had to outgrow MrBabyMan. Problem is platforms don’t usually win those fights.
Sure, today’s redditors love sharing stupid image memes. For each of them there are 20 people who wouldn’t touch Reddit with a 10-foot pole.
The whole problem is trying to be a catchall where people with zero knowledge or skills can hang out. Twitter/X and Reddit especially suffer from it.
Topical forums tend to have a much higher SNR. My favorite forum of all time, johnbridge, had none of those issues. Sadly it died this year all the same, but many others still exist. When you have a forum dedicated to something that requires a minimum barrier to entry, the more useless folks get shunned away pretty early and easily.
- Users don't have to pay to post links/stories
- Users have to pay to comment on links/stories
- Users have to pay to "upvote" comments. Downvotes don't exist
- Each link "lives" a certain amount of time before it is locked.
- After lock time, users who posted the link get "paid" a % of the collected $ comments/upvotes. Comments that are upvoted also earn $ proportionally to the upvotes.
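A hypothetical sketch of the payout rules above. The fees and the poster's cut are made-up numbers; the proposal doesn't specify any:

```python
# Illustrative settlement for a pay-to-participate link. All constants
# are assumptions for the sake of the example.
COMMENT_FEE = 0.50   # charged per comment posted
UPVOTE_FEE = 0.10    # charged per upvote cast (no downvotes exist)
POSTER_CUT = 0.30    # share of the pot paid to whoever posted the link

def settle(link):
    """link = {"poster": user, "comments": {commenter: upvotes_received}}"""
    n_comments = len(link["comments"])
    n_upvotes = sum(link["comments"].values())
    pot = n_comments * COMMENT_FEE + n_upvotes * UPVOTE_FEE
    payouts = {link["poster"]: pot * POSTER_CUT}
    commenter_pot = pot - payouts[link["poster"]]
    if n_upvotes:
        # comments earn proportionally to the upvotes they received
        for user, votes in link["comments"].items():
            payouts[user] = payouts.get(user, 0.0) + commenter_pot * votes / n_upvotes
    return payouts
```

Run at lock time, this pays the link poster a fixed cut and splits the rest among commenters in proportion to their upvotes, so the whole pot collected from fees is redistributed.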
Hashcash was conceived to stop automated email spam. Participating in a discussion must cost something; that's the only way bots and spam will get even partially stopped. Or, if they start optimizing to get "the most votes", then so be it: their content will increase in quality.
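For reference, the core of hashcash is just a partial hash-collision proof of work. This is a minimal sketch, not the real hashcash stamp format: the poster burns CPU finding a nonce, and the forum verifies it with a single hash.

```python
# Hashcash-style proof of work: find a nonce whose SHA-256 digest of
# message+nonce starts with `difficulty` zero hex digits. Minting is
# expensive; verification is one hash.
import hashlib

def mint(message: str, difficulty: int = 4) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(message: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Each extra hex digit of difficulty multiplies the expected minting cost by 16, which is negligible for one comment but ruinous for a bot posting thousands.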
Paying users for their posts is what killed YouTube, Twitter, Facebook, Instagram... You will only get shitty ragebait comments. Not to mention that you have to link some bank account with your full name, etc.
This sounds like a platform that has no appeal to the average person, and an incredible appeal to people wishing to launder money or use money to run an influence campaign. Deliberately determining popularity proportionally to the amount of money spent is little different than advertising, but this would be under the false premise of "someone thought this was important/valuable enough to pay money to suggest I see it".
If this were to exist today, I know I would be incredibly critical of it.
Every election I see internet-connected gym machines have the leaderboards spammed with right wing messages because some people don’t have to work and just spin all day.
It seems like that would lead to a proliferation of ragebait, deliberately controversial posts, and overly simplistic articles to attract the greatest amount of comments. I frequently see deeply technical high-value posts on HN with very few comments but each thread about politics ends up getting hundreds of comments.
The patterns were there if you knew to look for them.
The original Digg excepted, Kevin Rose's attention span is extremely limited. He will give something ~3-4 months of attention before (apparently) getting bored and wanting to move on to something else.
Up until that point, he will be an unrelenting hype man of whatever his attention is lasered on at that moment.
Then the hype posts start to drift. They show up once every few days, then once a week, then stop entirely. Any criticism or skepticism is considered a buzz kill in the cloud of good vibes only.
A few months later, a dramatic explainer post arrives (underestimating the cold start problem? Really??), outlining why the idea didn't work and why the next one will be better, for sure, for real.
This (AI generated) note from the current CEO paints an optimistic picture, but the most likely outcome will be that Digg simply doesn't launch. It's sustained on the nostalgic vapors of the old guard, not renewed by a replenished sense of purpose, or connection.
I'd say I'd love to be proven wrong, but I personally question the utility of a Web 2.0 social network phoenixing itself. We have endured a decade+ of originality being buffed out of web products, most now resembling variations of Bootstrap and shadcn in service of dev convenience and getting rich quicker.
Surely in the age of vibe coding, we can afford to take creative risks again, and think of something new.
I think anyone with tens of millions of dollars would find it hard to compete in the business world - they should stay in their garden with their rare plants
This kind of makes the Digg team look like a joke. Rebuilding was always going to be hard, but I think this kills any chance of building it up a third time since no one can take it seriously.
That didn't last long. I'm not sure I want to invest my time again if/when they relaunch.
I kind of expected this. The way some of these people work, if the site isn't an instant unicorn, it's trash. But if the goal is a good community, that is something that takes time to build and should grow slow. The incentives are all backward.
Digg's death in 2010 was essentially the original case study for how to destroy platform trust overnight. The v4 redesign wasn't just bad UX — it was a signal that the company had fundamentally changed its relationship with users. When Kevin Rose tried to "fix" the front page by giving power users less influence, he accidentally revealed that the whole value of Digg was those power users.
What's interesting is that every subsequent attempt to revive Digg has been a bet that brand nostalgia outlasts institutional memory of why it failed. It doesn't.
I was excited about a Reddit alternative. I signed up months before the public beta. When I tried the public beta the new Digg website turned out to be a terribly bloated and slow NextJS app. Used it once and never again.
It's a shame, the intention is still there, if they decide to come back I'll give it another shot.
Btw, why are we publishing simple static pages at ~2.84 MB compressed?
For a short time I was a part of a small site that banned politics.
It was fine, people talked about work, personal stuff, travel, until one person posted about their disappointment that their state was limiting various services or rights to gay people. For them this meant their rights were in question and they were understandably upset.
Immediately some folks cried politics and that they shouldn’t post about that sort of thing.
To the user posting it it was about their life…
I don’t think “no politics” rules really make much sense. For someone it’s more than politics, and IMO because a topic is touched by politicians or government shouldn’t make it disallowed.
I've never seen discussion of politics on forums do anything but turn into hate-filled, dogmatic posts which aren't productive at all. Every political thread here turns into the same takes, and HN imagines itself intellectually better than others. It's not interesting or productive. If talking about politics fixed things, why are politics worse today than they've ever been? There are no costs and no solutions to ranting about politics online.
The vast majority of people do not want to get on a forum to escape their life only to see even more, or worse, content about their daily lives.
You're right, there needs to be some outlet, but when people propose this it's because they are sick and tired of politics, and the injection of politics into everything is not helping those politics; it just makes things worse.
Tons of people aren't political creatures and want nothing to do with politicians. This notion that more politics will fix things isn't borne out by Reddit, X, the US Congress, Brexit, etc. It's too easy to divide and manipulate people.
> Wouldn't that be almost impossible?. Politics affects our lives every day.
No it wouldn't be. And if your definition of "politics" includes "literally every time a thing happens" then your definition is so broad as to be useless.
When people say that they want politics banned, they are talking about the extremely controversial arguments that are almost completely unrelated to whatever the community is about. I.e., if you run a group about cheese making, and someone comes in and starts arguing about whether an ICE shooting on the other side of the country was justified or not, that is... off topic. And everyone with a brain can understand that.
It really isn't that hard to figure out which topics are related to cheese making and which have almost nothing to do with it, even if someone could make a bad-faith argument that they're related (e.g., "Well, what if someone knows a cheese maker who is here illegally? That's why ICE enforcement on the other side of the country is relevant!" You could say that, but we would all know that you are being bad faith, or have some sort of issue with determining what words mean to regular people).
Partial credit in this example could go to political issues that are very obviously and directly related to cheese making. A new tax on cheese that goes into effect in your local town, and very directly is related to the group topic. Stuff like that might be OK.
And your response to this example would go something like: "Oh, so are you saying that politics should be allowed?! How do you tell the difference between a cheese tax and an ICE shooting on the other side of the country? Hypocrite!"
And the answer to that is that we can use our brain. We all know that a cheese tax is more related to the local cheese making group than national politics. And we don't have to argue with clearly bad faith arguments that pretend otherwise.
To summarize, when people say that they want to ban politics, what they actually mean is that they want to ban completely off topic controversial issues that others are trying to shoe horn into a group that isn't related to that issue.
And people are saying that it is OK to compartmentalize things. Every group in the world doesn't have to talk about your pet issue. The cheese making group can just be mostly about cheese making and they don't have to argue every day about national immigration policies.
There's a forum (HardForum) where they've taken a kind of opposite approach: people pay to access private forums where they can talk about politics and random things while the public-facing boards remain tech focused.
Basically incentivizing those who feel strongly about things to just pay up to talk about them in an exclusive area, which also keeps the site ad-free. Been apparently working for 25 years.
Unfortunately unless you also ban it in comments, people with an axe to grind will find a way to bring it up in the most inappropriate places. Casual swipes at Elon and Trump and Biden or AOC (depending on your corner of the internet) will happen on stories about the nutritional value of school lunches or fundraising for some animal shelter. It even happens on HN constantly.
You really gotta wonder how much value the "Digg" brand actually has, because the number of people that remember/care about the site from its original glory days is ever dwindling.
I liked digg v2 (I guess), when it relaunched as a sort of curator of interesting articles (and videos). For years it was my go-to place when bored and wanted something interesting to read.
I guess that in an ocean of upvote-based platforms, an island of hand-picked content was a welcome change -- at least for me.
The move (back) to a reddit-like site never made sense to me. Hopefully what comes next has real value to the users.
Apparently the reason their articles were interesting was that... they copied all of their content from DamnInteresting. Once they were called out they stopped, and the quality went downhill.
One of the things I always disliked about the original Digg was their threading. The slashdot like feed where the oldest comments were at the top and there was only one level of replies tended to encourage the "first" comments and harmed the quality of the discussion. I was glad to see it use a reddit-like comment thread for the new site, but it also meant there wasn't much reason to use it over reddit.
I'm a bit surprised with Alexis' involvement they didn't anticipate the bot problem. Alexis left reddit several years ago but I'm sure he's still in touch with the folks who run the place. It would've been worth it to talk to them about the threats they currently face and how they deal with them.
Absolutely. They kinda brag about it now. But I think it was just the founders making multiple accounts. It sounds like the new Digg was worried about bots scaring people away from the site with thinly disguised ads.
I was a big user of Pocket between 2015-19, which also curated interesting long-form articles. The problem with that model was that paywalls were coming up everywhere, so the free articles that remained came from low-quality sites (Forbes, The Inc, FastCompany) full of long-form hustle culture or self-improvement stuff loaded with affiliate links. Maybe Digg v2 had the same issue.
The problem seems to be identity, a real one, and it looks like it will only get worse. What about creating a zero-knowledge digital identity service (maybe centralized, maybe decentralized, idk), where you prove you're human via your government ID, passport, driver's license, whatever, and the service can then attest that you're a real person? So if I'm Digg, I would ask for some form of OAuth, the system would simply verify that you are in fact a human, and you would go on to create your account, forever verified. This way the identity service only does identity; it does not keep a record of where you are attesting, no logs, nothing, just your identity, basically saying yes/no, with no sharing of IDs or any other data.
So people would go through one hurdle in life, to get this id, and reuse it for every service.
Is this a worthwhile idea? I know many have tried, so help me poke holes in it.
1/ KYC is pricey, and users might not want to pay for it
2/ Spammers can hire real people to farm accounts
I think this idea might work if we
- create reputation graph, where valuable contributors vote for others and spread reputation
- users can fine-tune their reputation graph, so instead of "one for all", user can have his personal customized graph (pick 30 authorities and we will rebuild graph from there)
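The personalized reputation graph in the bullets above could be sketched as a simple trust-propagation pass: you pick your own seed authorities, and reputation flows along "vouches for" edges with decay. The decay factor and round count are illustrative assumptions:

```python
# Sketch of a per-user reputation graph. Each user picks their own seeds
# ("pick 30 authorities and we will rebuild graph from there"); trust
# decays by half per hop and never loops back up.

def personal_reputation(vouches, seeds, decay=0.5, rounds=3):
    """vouches: {user: [users they vouch for]}; seeds: accounts *you* trust."""
    rep = {s: 1.0 for s in seeds}
    for _ in range(rounds):
        new = dict(rep)
        for user, score in rep.items():
            for trusted in vouches.get(user, []):
                # take the strongest trust path found so far
                new[trusted] = max(new.get(trusted, 0.0), score * decay)
        rep = new
    return rep
```

Because every user runs this from their own seeds, there is no global score for spammers to game; an account a bot ring vouches for heavily still has zero reputation in your graph unless a chain from your seeds reaches it.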
I think apps that want assurance of your identity should pay for your KYC. They want valuable people after all, and this should go into their CAC. Users still pay nothing, and the identity service does not care about their info; after verification, it drops any details, like uploaded documents, and keeps only a certificate.
The cost for this service is likely keeping up with ID systems for multiple countries, infra and support.
Potentially, if this is made into a protocol, it can be decentralized, kind of like the SSL certificate system, so each country manages its own rules.
I am less concerned here. If you plug AI into your identity, I guess your identity gets revoked. I see the problem, though: once a service notices you're an AI, there is no way to block you, because we don't really know who you are, only that you're human.
So we need a mechanism that makes this identity verifiable, maybe you get a unique identifier from the identity service, so you can block the account. There is no mechanism to report you to say, the identity service (this is a bot), so you manage your own block list.
The risk here is fingerprinting, your id can be cross referenced across apps. Maybe here is where you implement a zk proof that you're who you say you are.
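One standard way to get blockable-but-unlinkable identifiers, roughly how OpenID Connect "pairwise" subject identifiers work, is for the identity service to derive a different stable ID per site from a secret it holds. The function and key names below are illustrative:

```python
# Pairwise pseudonymous identifier: stable per (person, site) so the site
# can ban the account, but different across sites, so two sites cannot
# cross-reference their user databases. `service_secret` is held only by
# the identity service.
import hashlib
import hmac

def pairwise_id(service_secret: bytes, person_id: str, site: str) -> str:
    msg = f"{person_id}|{site}".encode()
    return hmac.new(service_secret, msg, hashlib.sha256).hexdigest()
```

A site that bans `pairwise_id(secret, you, "digg.com")` blocks every future attestation you present there, yet the identifier it holds is useless for matching you against another site's records.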
I don’t love the original idea because uploading identification is risky. You could just plug AI into a verified account but at least the vector is a single account instead of unbounded.
No, the problem is people want everything for free. The solution is very simple. Charge $5 to open an account. Only allow a person to moderate one forum/community/subreddit/etc. Delete accounts that break rules ruthlessly. This would work, but no one on the internet wants to pay for a quality forum, so we deal with the same crap over and over and over and pretend like there is some other solution.
More evidence that "millions of people in the same room" isn't a sustainable model for online communities. I've been feeling for years that some kind of "chain of trust" and/or "X degrees of separation" reputation model is basically inevitable for broad-scale online social communities.
I wonder if the old forum model would work. Instead of these mega-forum platforms, there would just be small communities with a niche focus at their own URLs.
I suppose bots could find forums that use the most popular software and still make accounts and spam, but it would be much more obvious and less fruitful for someone to spam deck builders in Vancouver (something I saw often on Digg) on a forum that is focused on aquarium owners in the Midwest.
Spammers don't care if it's fruitful - they just run software that finds every forum and spams it. That's why you can block so many by asking "is an elephant big or small?" on the sign-up form.
Old forums still exist and work just fine without any CEOs pontificating about "community".
I'm on plenty of niche interest boards built on PHPbb, Xenforo and Discourse. Chronologically ordered discussions, RSS support, no algorithmic "For You" bullshit.
Is Kevin Rose known for knowing how to address bot problems? I think it's a little absurd to address a bot problem by bringing back the original founder. I believe he was great at community building and functionality, but bot prevention is a different beast. The post mentioned that they also worked with third parties, which I'd expect to have more bot-prevention experience than Kevin.
To be fair, I don't know Kevin Rose personally, so maybe he knows more than the industry, but I highly doubt it.
Reddit has the same problem. They are fighting it more or less successfully. I would look more in that direction.
Is Reddit fighting the bot problem? They introduced a feature to hide post history which makes it hard to know whether you’re interacting with a spammy bot account. If anything they’re embracing it.
Actions speak louder than words. They’ve added features that help spammers hide their behaviour, they are rejecting API keys when people apply for access to deal with the bot problem, they ignore subreddits with spam-friendly moderators, and they ignore reports on vote manipulation. There’s a tonne of low-hanging fruit for tackling the bot problem on Reddit that they aren’t doing anything about, and often it seems like people outside of Reddit do a better job without access to the raw data than people inside Reddit do with the raw data.
I know they claim to care about the bot problem, but they appear at absolute best incredibly complacent about it, if not complicit. All those OnlyFans spammers, AI spam bots, etc. are engagement. They are ruining the platform for people, but engagement figures don’t distinguish between fake engagement and real people. The outcome of their current behaviour is for engagement to steadily rise while the value to real people steadily falls. It’s like they want to be the poster child for Dead Internet Theory.
I don't think this is helpful to bots, tbh. For over a decade, every time I come across a clear bot account, its comment history seems very human. I assume they're either buying real accounts for one-off astroturfing hit-and-runs, deleting older astroturfing comments after the submission stops getting traffic to hide their footprints; or, more likely, there's a giant ring of bots that submit innocent things, comment preplanned innocent things in a giant bot circle, and then make pointed comments on r/politics or whatever after establishing an innocent baseline. This is the obvious approach I'd take if it were me.
I'd also be really surprised if there wasn't coordination with Reddit employees/execs themselves for big advertisers.
The Reddit CEO mentioned that the community thrives when humans talk to humans - and not with AI slop. He also said they are working on efforts to identify automated accounts.
Reddit can't even manage to regularly identify and ban bots that copy previously popular posts/comments verbatim, and that's a much easier problem than modern LLM-based bots.
The bot problem is serious right now. For my own network, I've switched to only allowing accounts that have paid at least once to post. It's a hard barrier (minimum spend is $2 for my site), but it almost completely solves the bot problem.
We really need some way to "verify as human" in the next coming years.
> We really need some way to "verify as human" in the next coming years.
I don't believe there is any practical way to do it.
Sure, there are ways to verify a human linked to a specific account exists in a one-off fashion, but for individual interactions you'll never know that it isn't an LLM reading and posting if they put even a small amount of effort to make it seem humanish.
I'm somewhat relieved. I didn't invest much effort into my community, but I had an amazing, top-level name and over 1000 members.
Moderation was really hard. We didn't have AI posters, but there were persistent posters who were extremely annoying (mostly in their post volume and long-windedness) while still following the rules. I was really trying a hands-off approach with moderation, and it seemed to be working for the most part. It's all moot now though.
I stopped using Digg a long long time ago. It just felt too slow to get the news I care about.
I was an avid Slashdot user way back in the day, but the site was basically the same throughout the day, and I wanted faster updates. Digg did this perfectly for a time, but eventually I migrated entirely to Reddit (even before whatever that drama was that caused a big exodus from Digg).
I think Reddit right now is the sweet spot: up to date information, longer-term articles to read, and easy to catch up on things I missed. I was recently pressured to sign up for X (or Twitter or whatever), and I had to turn off all of the notifications since I was constantly spammed with "BREAKING: X RESPONDS TO Y ABOUT Z!!!!"
Right now having Reddit for scrolling and Hackernews for articles+discussion feels like it works for me.
Reddit is flooded with AI slop. r/all currently has AI-generated text posts and articles on the first page. Upvoted because they're the typical orange man bad stuff, but LLM slop nonetheless. Assuming the engagement is organic, it's depressing how much of the site has no eye for this stuff.
There are decent small communities I'm a part of but the trash feels like it is encroaching.
And the notifications you describe are exactly reddit's notifications? "your comment received 10/20/50/100 upvotes!" "x responds to y about z" "News is trending"
>When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on.
Make it $1 to register and you'll cut the audience by 90%, and 95+ percent of those remaining will be real people. Just a guess, but based on some professional experience.
Community /books helped me track down a book I've been dying to reread for almost ten years now. Reddit failed the task, so did all other places I turned to. Cheers for that, and rip.
Their strategy did not make any sense: only a few pre-approved broad-and-shallow forums about everything, instead of trying to attract niche communities from Reddit or even FB Groups.
Because there's no real discussion in such broad communities. Only jokes, generic replies, and silly fights. They're equivalent to comment sections on news sites.
They introduced user-created communities a few months ago. They had problems with squatting and splintering, which might have played a role in their announcement.
Is that the whole story? Why isn't reddit overrun by bots then (or are they?), and why wouldn't basic proof-of-work techniques fence against bots? Since they started out just in January, isn't it plausible to assume they didn't meet their target user figures and investors jumped ship?
Damn. I still have faith that what a lot of us that migrated to new Digg envision is possible. Post pandemic Internet has choppier waters than before, but I'm going to try and keep a positive outlook and I look forward to their followup emails.
I can appreciate how "building social is hard" in 2026, but is trying to be social on the internet still a worthy goal? The world has such problems with isolation and distrust that I'm not sure "online" is the solution. If Digg can do something different and help heal the world, more power to them, but I'm not holding my breath. That's not a slight to Digg, but more a comment on the slipping mental health of the world.
> This is not a reflection of their talent, their effort, or their belief in what we were building. It's a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.
Ironic, they use AI in their shutdown post that blames AI.
>> This is not a reflection of their talent, their effort, or their belief in what we were building. It's a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.
> Ironic, they use AI in their shutdown post that blames AI.
This… seems like regular prose to me. What makes you say so confidently it was written by AI?
I think you're spot on. It feels like parts were edited with AI and parts were left alone.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
The statement this is making is presumably the crux of the problem (Digg cannot survive without trust!) but it's worded so poorly that it's hard to imagine someone sat down and figured these three sentences were the best way to make the point.
"We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall."
It's a mixed metaphor which doesn't make any sense. There are really very few ways in which this can be considered good writing - I guess the grammar is ok even if it is nonsense.
So let's break it down: "underestimated the gravitational pull" - ok, this is nice, I like where it's going, talking about these big competitors sucking in users, but then we have the metaphor extended to breaking point:
Network effects are a moat, but not just a moat, they're a wall (which is really not anything like a moat). So which of these 3 things are they, and why are we mixing the metaphors of gravity (pulling in customers), moats (competitive moat) and walls (walled gardens).
It's just all a bit nonsensical, the kind of fuzzy prose that seems superficially impressive without actually saying anything meaningful, at which LLMs excel. Go try generating an article from just the heads in this article, and see how similarly it reads.
If you want your gradation to work, the items need to be similar and progressively stronger. That's why it doesn't work. A wall is not "stronger" than a moat. "Not a fence, a rampart" would work.
Compare to the canonical example from Cyrano de Bergerac: "'Tis a rock! ... a peak! ... a cape! -- A cape, forsooth! 'Tis a peninsula!"
That’s the entire point - network effects are commonly discussed as being a moat (people can’t cross without difficulty) but are actually a wall - people can’t cross and can’t view the other side. Seems simple and straightforward to me.
In a castle built for defence, yes: similar in function, but not in form, and often used together, not one or the other.
In business metaphors, no: they are used for different things. Also, when you create a metaphor you should stick with it; that's what makes this jarring and weird.
"Network effects aren't just a moat, they're a wall." is a VERY ChatGPT way to write. It's not proof, but the parent is right that this smells a bit of AI writing.
Not to the same extent at all. If you use ChatGPT for a while, you'll see it writes like that very frequently. Humans do write like that sometimes, but not with anywhere near the frequency that ChatGPT does. That's weak evidence for it being ChatGPT.
Suppose ChatGPT uses a semicolon more often than an individual person. On a pageful of comments from many random people, someone using a semicolon doesn't mean they're a bot, even if 100% of their comments on that page include one.
> It behooves you to not write like that if you don’t want people dehumanizing you.
I have to strongly disagree with you on this. It behooves us (as a species) not to degrade our own manner of speaking and writing simply because of a (possibly temporary) technical anomaly.
In my view, it would be really, really sad to lose expressive punctuation or ways of constructing sentences simply because they're overused by AI.
I, for one, won't be a part of that, and I hope you won't, either.
I think a human would have split the "it's not this, it's that" type of sentence into two separate sentences that could be more descriptive. This is a blog post, not a tweet, so there's no length constraint.
If they wanted to keep it to a single sentence, they could have used a word like "rather" to act as a separator between moat and wall.
The rule of three is a basic writing structure taught to 12-year-olds. I know people have given up on even the basics (capitalisation) in recent years, but let's not banish structured writing to "AI".
Much like the vouch system mitchellh is working on for open source contributors, the wider web needs a trust layer that can vouch for a poster's status as human or AI, along with a "quality" score that can travel from site to site.
facebook and twitter became broken for me, but not because of bots, rather because of the "smart feed" ("the algorithm"), which is hiding all posts of my friends and promotes incendiary garbage.
In other words, I am seeing enshittification full-scale, but not the bots.
Interesting that there was no notice given to the people who paid $5 for pre-launch access and who helped build the communities before it went public. Not a good way to get anyone to invest their time in it the next time they launch. "Bots" is a shitty excuse too. Their whole thing was that they were going to build it and utilise "AI" to prevent that and make moderation more automated. In reality they launched zero of those features and then opened it up to the world completely unprepared.
This is why identity verification is going to become mandatory for anyone who wants to participate in these kinds of sites. If you want to blame someone for it, blame humanity. I reluctantly will say that I welcome it if it would bring the dead internet back to life.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
Hmm...
> We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall.
What does this even mean? How many metaphors can it mix up in one paragraph? Can't they write a blog post the old fashioned way, with feeling? Imagine reading a corporate blog post about being laid off which the founder couldn't even be bothered to write.
Amazing how close to corporate newspeak ChatGPT can get (the prompt was just the headings of this blog post); it has the same sort of blank, say-nothing feeling as this blog post:
https://chatgpt.com/s/t_69b4890e54ac819193f221351ea900a7
Everyone here seems focused on bots, as does the author of the post. The bigger problem (as also stated in the latter half of the post) is straightforward: their product wasn't very good. Who is asking for Digg to return, save for a very (very) tiny community of nostalgic diehards? Digg is irrelevant. That doesn't mean the internet is dead. It just means Digg is.
100% that entire page was written by an LLM. So fucking obvious and I’m so tired of reading the same awful writing style with all these corporate spiel rants. If you don’t care enough to write something yourself, just don’t even bother.
Really annoying, I was starting to use it for a few niche communities instead of Reddit.
If they relaunch, I hope they develop something integrated with the fediverse. I believe the time to build walled gardens is over; plugging into the fediverse might give them a running start to build something together with the wider fediverse community, maybe something easier to use for non-techies and well moderated.
Digg may have a bot problem but Reddit isn't far behind. So many subreddits are full of slop that they've become useless and/ or completely unreliable.
What's an actual viable solution to this kind of thing?
CAPTCHAs aren't it. Maybe micro-fees to actually post things would discourage bot posting? I really don't know.
Seems like it's just dead internet all over the place these days.
> We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall. The loyalty users have to the communities they've already built elsewhere is profound. Getting people to move is a hard enough problem. Getting them to move and bring their people with them is something else entirely.
So, as predicted, it wasn't really worth the eyeballs or the inevitable forced media coverage 6 months ago.
And I will continue to die on the hill that Reddit only survived/became "successful" because of the legendary Digg slip-up and exodus. Alexis Ohanian still doesn't seem to have any clue that it was right-place-right-time, and Kevin Rose seems to have not learned much either. Can we stop giving either any more credibility? As with any social site, it's the user base/community that helps pull through the darkness. And no one was really asking for this.
I wasn't a digg user, but this was done to combat 'voting rings' (bots), and the reddit migration was memed partially because it was/is far more open to manipulation. So at least their principles have been somewhat consistent.
I think the [dupe] is a false alarm in the sense that they just put up a banner saying it is shut down and I think they were starting it up again back then.
I recently activated my account on there and went to the forum for my country. It was already taken over by moderators. Then I looked at the mod and he took all real estate that is already available on Reddit that is related to said country. So in a way, he was probably the first account on there and became god-king for eternity for the subreddits related to the country. I had no idea who he was, what he stood for, what his plans were for his newfound digital real estate etc.
I feel like the moderated subforum is a fundamentally broken system for dealing with content. I much prefer the Federated / X / Instagram approach where I can deal with users and have the tools needed to curate my own content, instead of relying on some ideologically captured no-name account that chooses what I can or cannot see based on whims.
Your country wouldn't be Norway by any chance? I remember that on Reddit there was one powermod who was dead-set on owning every Norwegian-language forum, and every name that could potentially be a base for people trying to escape him.
wow, is there more on this?
Also, honestly, with AI/LLMs now, do we even need human moderators anywhere anymore
You need both. LLMs can, I think, do the bulk of removing posts that break community guidelines, but you need moderators to define and adjust the guidelines. Most would also like to have a human to escalate a dispute to.
Google is famous for having almost solely automated support, and it absolutely sucks at doing almost anything. AI-only moderation would go the same way.
> but you need moderators to define and adjust the guidelines
The comments above you are suggesting that global guidelines are unnecessary. Instead, they suggest you don't need moderation at all when LLMs now give us the technology to filter out the stuff individual users don't want to see based on their own personal policies. I am sure you can come up with reasons to dispute that, but "you need moderators to do the thing you say is no longer necessary" doesn't add to the discussion.
The absolutely broken moderator system of Reddit made me leave it forever after being a regular user for more than a decade. The “god-king” thing simply doesn’t work.
Same here. The power-tripping of mods ruins reddit. Most don't care about the community as much as they care about exercising their absolute power over users.
And even if it does, the mods don't have real control to moderate communities either, so you get the worst of both worlds. I don't go to most queer reddit communities anymore because a lot of them have bots that downvote trans-positive posts, even if the community is specifically meant to be inclusive. There's nothing to couple active participation to voting weight or anything of that kind and voting is not considered "brigading" by reddit if the coordination happens off-site (at least not in a way that'd lead to any enforcement action).
It makes a great propaganda machine though, given that humans have a tendency to measure their own opinions against social cues.
I still haven't been able to figure out how to make an account without it being immediately shadowbanned or normalbanned. Tried again the other day, it was something in between where logged-out users could see it was banned but I couldn't.
You need to ditch and replace all your devices and acquire a new phone number. I'm serious. Virtually all large websites these days employ a lot of fingerprinting and persistence technologies.
And yes, ditch them. Even well over a decade ago, Wikipedia of all places already employed IP address matching to link sockpuppet accounts. You must be extremely careful of never using any device that was associated with your old accounts on the same network as the devices associated with your new account. And that includes devices only seen by association.
> and acquire a new phone number
> Wikipedia of all places already employed IP address matching to link sockpuppet accounts
That’s… well, that’s just not how tcp/ip works. Your phone number has nothing to do with your device IP…
It does when your phone number is used for 2fa in a session running on tcp/ip
It happens to all new accounts. It's known that new accounts are shadowbanned almost everywhere until they are 30 days old and have farmed some karma on a very small set of subreddits that don't shadowban new accounts. It's shocking they ever get any new users, really; as far as a non-technical new user knows, nobody ever reads their comments for some reason.
How contagious is it? Can I get other people banned from Reddit by logging into my instantly banned account on their wifi network?
Not that contagious, I'm afraid.
My boss uses Reddit some. I'm banned. At the shop, we use the same IP address (and we do not use ipv6 there).
I tried to log in with a ~10-year-old account that I'd never commented with. A perfect Beetlejuicing moment had arrived and I just wanted to play the game with a short, snarky comment.
It logged in fine, and then: Insta-ban, just like that. (Maybe I should have used a new browser on a new network that I've never used before, but whatever -- nothing of value was lost here.)
Meanwhile, the boss man's access continued unimpeded; this suggests that it is a rather targeted contagion.
And it seems to follow the systems, not the networks.
(If anyone wants banned, just let me know. I seem to have a well-poisoned system to play with.)
Just don't use apps. Then the only association is a discardable cookie and IP.
There’s also browser fingerprinting
This is wildly inaccurate.
It's either some personal unquenched thirst for power, or he thought that the new Digg would be as popular as it was ~20 years ago, and that he'd be able to control the content submitted and get paid for "promoting" it.
I've seen something similar over the last ~17 years: a bunch of the same terminally online accounts uploading content from our local media outlets to country-related subs and local Digg-like sites - both active ones and ones long defunct for 10 years now. Some of those users have even appeared on Mastodon and Bsky.
The social link aggregators were created for people to share their favorite links and places from the Internet so others could see them, have fun, expand their knowledge and so on. For me it was the cherry on top of the web 2.0 period, when everything was fresh, beta and innocent. That lasted for a while, up until other people and entities figured out that such sites can be used to promote their content and insert ads. The next stage was, and remains to this day, opinion control by "curating" the content and/or reactions in discussions - still done by humans, but with a more prevalent presence of convincing bots.
Reddit itself lost its impartial and independent status a while ago. Big subs related to media franchises or big corporations are heavily controlled, to the point that it's impossible to submit critical content. It's all a happy world seen through rose-tinted glasses, or as some say, toxic positivity. There are still niche places where moderation is limited, but as I said last time, from my own experience: such subs were targeted by bad actors who, by submitting forbidden content, tried to trigger lockouts so that later they could take them under their control.
hn isn't free of some of these issues either. While discussions still remain at a good level (though degradation to Reddit levels is already happening), there's no control over content: there are accounts that do nothing but upload links every few minutes or hours.
I'm not sure if it's possible to have link aggregators or multi-thematic forums free of such... issues. A similar problem with establishing "real estate" happened on Lemmy when part of the userbase decided to abandon Reddit due to controversial changes.
A well moderated forum (like HN) is great. I don't have time for the signal-to-noise ratio of X.
IMHO Reddit would be better if it had AI moderators that strictly follow a sub's policies. Users could read the policies up front before deciding whether to join. New subs could start with some neutral default policy, and users could then propose changes to the policy and democratically vote on them.
> users could then propose changes to the policy and democratically vote on those changes.
Which, in fact, would open up the same rat race with determining which accounts are real and so forth.
Not disagreeing with you, just circling around this same problem. Feels like the world still isn't ready yet.
If the policies are public, there's a lot more transparency. eg my city of millions of people has a subreddit. The head mod bans people for criticizing a certain dog breed. This "policy" is pretty opaque, but if the AI enforced subreddit rules say "thou shalt not mention the dog's breed when commenting on articles about someone being mauled to death", more people would be familiar with the rule (and perhaps there would be more organized discussion).
I was on a subreddit for a while that voted on rules and had a rotating dictator to facilitate them. It worked decently well, although it never got to the point where the sub was brigaded. This was also pre-LLM, so moderation was still a big time sink, and the sub eventually fizzled out.
That’s because certain dog breeds aren’t more likely to maul and saying otherwise is ignorant fear mongering.
Try criticizing Apple or China or other sacred cows and see how quickly your post gets flagged.
I've always thought than on Reddit (or Digg, or Lemmy or others) common words, brands, names... should be broad "topics" or categories that nobody can claim (first come, first served). You should be able to add a sub/community under a topic, but just like everyone else, and then users interested in said topic could add and exclude different subs to taste.
Has any popular site tried an approach where you dynamically select your mods as more of a content filter than global moderation?
Most places can hide posts and block users at the user level, so why not select which mods can do that for you?
Same for Italian forums. I don't believe bots and spam are fully to blame.
It was just a copy of reddit. How useful?
Yes. Subforums should elect mods democratically.
sadly, a nice idea that is painfully naive with how computers are used in reality.
One need only remember how easy it was to take over IRC channels with a few hundred bots to see the endgame of this rationale… it cannot be patched out, it’s inherent to the internet.
Whatever would make a vote valid can (and will) be gamed.
> it’s inherent to the internet.
Who said the election needs to take place on the internet?
A paper ballot-style election, while not perfect either, works well enough in practice.
It could work depending on how it is set up. Maybe only accounts that are at least n years old get a single vote, and don't let any random two-day-old account vote at all.
So now accounts are worth even more money or reason to steal.
As long as sub forums can be created easily, users may pick their sub forum and thus indirectly moderator.
In this setup having users elect the moderator leads to cases where small groups create their special interest group and then some trolls challenge the moderator.
There may be some oversight on the large sub forums, but not all.
Necessary for this is that subforums can't have unique names. If a bad mod can squat all the words like "computers", "programming", "coding", newcomers aren't going to know the best subforum is called "RealProgNoBadMod"
Yes, the "important" ones need some special attention. "Democracy" where anybody can create an arbitrary number of accounts is questionable, however.
The vast majority of sub forums however are more targeted and smaller to begin with.
Squatting is bad no matter how niche the topic
Squatting also invites corruption and selling rights to control what is posted to a sub.
You see this in city-focused subreddits. But the reality is the name is power. New users type in their city and join the original one. The hostile mods suppress mention of the new one. It never manages to get critical mass.
Stack Overflow does this and it works far better than arbitrary tyrant style moderation.
Crucially, SO's election system needs to be bootstrapped: users aren't eligible to vote until they have a history of participation. The level of participation is fairly trivial, but it provides enough signal to allow a reasonable detection (and elimination) of bot / sock puppet networks without resorting to crude measures like blacklists or "bot tests".
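A sketch of what such a participation gate might look like (the thresholds and field names are illustrative assumptions, not SO's actual rules): eligibility is cheap for any real participant to meet, but expensive to meet at scale for a sock-puppet farm.

```python
from dataclasses import dataclass

@dataclass
class Account:
    days_old: int      # account age in days
    posts: int         # questions/answers/comments contributed
    flags_upheld: int  # abuse flags upheld against this account

def eligible_to_vote(a: Account, min_days: int = 30, min_posts: int = 10) -> bool:
    """A trivial participation threshold: old enough, active enough,
    and no upheld abuse flags. Each requirement is easy for one human
    and costly to fake across hundreds of puppet accounts."""
    return (a.days_old >= min_days
            and a.posts >= min_posts
            and a.flags_upheld == 0)
```

The exact numbers matter less than the principle: the signal comes from history that must be accumulated before the election, so a bot network can't spin up voters on demand.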
For new sites, this meant that the bulk of moderation was done by employees, followed by employee-appointed temporary moderators. This dramatically reduced abuse, but also reduced the explosion of new sub-communities that sites like Reddit thrived on.
Stack Overflow is dead now.
I don’t think it was ever very good.
It was pretty decent in the mid and late 00s. The community started turning toxic in the very early 10s and by about 2015 was quite poisonous. The saddest part is that the problem was known and spoken about frequently, but the response to that from staff and/or high-level mods was to just double down and dig in.
I'm too old, but it seemed like it would be decent for a beginner in the mid-to-late 00s. But it never handled advanced, difficult topics very well.
Probably, but now it's actually dead by all the metrics. People ask LLMs instead because they won't close their questions.
Why? Genuinely curious.
I am a big proponent of (direct) democracy in general.
Internet is way behind on democracy. In general everyone likes democracy until they're in charge, then they realise they're the best person to be in charge and the idiots who vote don't have a clue, and should probably be banned if not beheaded for speaking out of turn.
You'd have to weight votes by some kind of participation metric to solve the problem of very little authentication of the voters
A democratic election requires that the elected be your employee, where you work with him on a regular basis to direct him in his job. That works (ish) in government where people doing the hiring have heavily invested life interests in it succeeding.
Does a subforum offer the same? Once the mod is elected, are you going to sit down with him each day to make sure he is doing the job to your wishes and expectations? I say (ish) in government because it often doesn't even work there, even where people have heavily invested life interests, with a lot (maybe even the vast majority!) of people never getting involved in democracy. A subforum? Who cares?
If there were to be elections, it is unlikely they could be anything other than authoritarian, with the chosen one becoming the ultimate power.
This is why moderation choice should be based on metrics, not first come, first served.
>> I recently activated my account on there and went to the forum for my country. It was already taken over by moderators. Then I looked at the mod and he took all real estate that is already available on Reddit that is related to said country.
Are you sure? My understanding is that accounts were only allowed to create two communities.
On Reddit? It's horribly opaque, but there seems to be a special class of people to whom the normal rules don't apply.
That limit wouldn't stop you creating more communities with more accounts anyway.
Kinda seems like we’re rapidly headed for the complete collapse of the internet as we know it.
Every site that is driven by user posting seems to be headed towards being overrun by AI bots chatting with each other, either for the sake of promoting something or for farming karma.
And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle. Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
The bot problem cannot be solved. Even if you strongly authenticate, people are letting bots act on their behalf (moltbook is a great example of this), and what's to stop people doing that in the future? The bot builds your identity and reputation autonomously, with all the benefits that come with that.
This happens now on OnlyFans too. Content creators hire agencies which, in the best case, outsource chatting with "customers" to armies of cheap labour in Asia, and in the worst case use bots.
The dead internet theory [1] is probably not just a theory anymore. HN recently made a policy to not allow AI posting and posters, but do you honestly think that's going to work? I would place a bet that a top HN poster within the next year is outed as using AI for posting on their behalf.
[1] https://en.wikipedia.org/wiki/Dead_Internet_theory
The bot problem can be solved.
Anubis is one such answer [0]. Cryptocurrency and micro transactions are another.
In the last few decades, spam was a problem because the marginal transaction costs of information exchange were orders of magnitude lower than they had been. Note that physical mail spam was, and still is, an issue. Focusing on perceptual or fuzzy computation as the limiting factor, through captchas and other 'human tests', allowed for most spam to be effectively mitigated.
Now that intelligence is becoming orders of magnitude cheaper, perceptual computation challenges no longer work, but we can still do computation challenges in the form of proof of work or proxies thereof. Spam will never wholly go away but we can at least cause more friction by charging bot networks to execute in the form of energy or money.
[0] https://github.com/TecharoHQ/anubis
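The "charge bots in energy" idea is easy to sketch. Here's a minimal hashcash-style proof of work in Python; this is an illustration of the general technique, not Anubis's actual protocol, and the difficulty value is made up:

```python
import hashlib
import itertools

DIFFICULTY = 12  # leading zero bits required; real deployments tune this


def solve(challenge: str) -> int:
    """Burn CPU until sha256(challenge:nonce) has DIFFICULTY leading zero bits."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce


def verify(challenge: str, nonce: int) -> bool:
    """Checking costs one hash; solving costs ~2**DIFFICULTY hashes on average."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))


nonce = solve("comment-form-7f3a")   # hypothetical per-request challenge
assert verify("comment-form-7f3a", nonce)
```

The asymmetry is the point: one human posting a few comments never notices the cost, while a bot network posting millions of them pays for every single one.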
Indeed - the future is RL meet-ups and small, intimate online communities.
Perhaps not the worst thing in the world?
This is the optimistic take I’ve held.
Bots get so good that they become indistinguishable from humans. If that’s true then it doesn’t actually matter if your community is all bots. But it does matter because authenticity matters to humans. They will seek authenticity where they can successfully sense it, which will be in-person.
Human simulacrums will one day cause a repeat of this issue. Then we’ll have a whole Blade Runner 2049 issue about what exactly is authenticity?
Yeah, you're completely right. Maybe this will be the impetus a lot of people need to detach from online.
Counterpoint: https://reddit.com/r/MyBoyfriendIsAI/
People will prefer the bots that give them head pats and tell them they're so smart and that they love them
I don't necessarily think that's what stops people from socializing more offline or being socially productive online. The bigger obstacles seem to be things we already have:
In Asia (especially Japan) it's host(ess) clubs.
Globally, for friendship, it's influencers exploiting loneliness.
Those are the things I think have to go for people to embrace offline socialization or use their online time better.
> Perhaps not the worst thing in the world?
Definitely not. “Terminally online” is as deleterious as it sounds.
"content creators" https://fgiesen.wordpress.com/2025/07/06/content-creator/
It's the same freelance advertisers who optimistically refer to themselves as "influencers".
The word "content" is gross.
"Creator", on the other hand, is beautiful. It means you don't have to pick a lane. Anything can be creative. Documentary filmmaking, stop motion, dance, costume work, historical reenactment, indie animation, economics essays, game dev...
The problem is we don't have a nice word that holistically captures the output of creators. They're not all making films or illustrations. So what do you call it? "Art" is awkward.
"Content" works, but it sounds like slop. We need a better alternative word that elevates creative output.
Since it's OnlyFans, I'd think something like "porn stars" or "online girlfriends"
If it were YouTube, "YouTuber" is a start, but you could also be a "YouTube science communicator" or something
Creator is a fine word to use in place of YouTuber. And vice versa.
But what do you call their output?
What do you call an illustrator's output? A photographer? What about when all of that shows up on your feed collectively?
Content is a gross word.
Creations?
> people are letting bots act on their behalf (moltbook is a great example of this) and what's to stop people doing that in the future.
Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
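The pairwise-pseudonym idea can be sketched in a few lines. This is an illustration of the concept only, not any real verifiable-credentials standard; the issuer, its secret, and the derivation are all made up for the example:

```python
import hmac
import hashlib

# Hypothetical issuer that has verified each user's real-world identity once.
# In practice this key would live in an HSM, not a source file.
ISSUER_SECRET = b"issuer-signing-key"


def pseudonym(real_identity: str, service: str) -> str:
    """Same person + same service -> same ID (so bans stick);
    different services -> unlinkable IDs (no cross-site tracking)."""
    msg = f"{real_identity}|{service}".encode()
    return hmac.new(ISSUER_SECRET, msg, hashlib.sha256).hexdigest()[:16]


a = pseudonym("alice", "forum.example")
assert a == pseudonym("alice", "forum.example")  # stable: a ban persists
assert a != pseudonym("alice", "shop.example")   # unlinkable across sites
assert a != pseudonym("bob", "forum.example")    # distinct per person
```

The service only ever sees the 16-hex-digit handle, so a ban on that handle is a ban on the person, but the service still never learns who the person is.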
You kinda skipped the bit I wrote alongside this about strong authentication. There are numerous ways to do this. For example, in Finland you have to physically identify yourself to open a bank account and you can then use that to authenticate. It's used for all public sector services and a few others with strict accreditation.
The issue is that it solves nothing if you can't distinguish between text that is written by AI and isn't, regardless of strong authentication.
There is the other side of this too: Real people - fake posts.
So, you have other folks on here already saying that the code their bots write is better than their own, right?
How long until someone who is karma focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even less stakes. I imagine that non native speakers will take their posts and go to an AI to help clean them up, at the very least. At the worst, I can imagine a person having a bot interact fully under their name.
So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.
This is exactly right. The problem is the friction that this kind of system adds.
Even so, I implemented this and I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
The ability to make a new account is an important defense against abusive bans. You don't want it to be possible for Google to unperson you.
I've talked about this on here before, but we think the solution is an auth layer built on top of credit score through an intermediary like creditkarma. The score itself doesn't really matter but it does solve big problems.
Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews / comments from credit scores over 650, they have less incentive to be astroturfing.
But yes, I think your conclusion is correct. This is the only way.
How is that creditkarma score accumulated? By other "users"? Does the intermediary guarantee that this account is a valid person, now and always, and that the account hasn't been sold or stolen? I mean, we will always need some middlemen, I guess?
IMO this is inevitable. HN is freaking out about the end of the anonymous internet, but it's already over and we're just figuring it out. Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.
> Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.
How do you figure? If these bots are driven by commercial interests that seems an unlikely outcome.
I'd rather have a system where there's a small investment cost to making an account, but you could always make another.
Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."
Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
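That "destructive testing" property is the clever bit, and it fits in a tiny sketch. Everything here is illustrative; a real issuer would use blind signatures rather than a server-side set, so it couldn't link tokens to buyers:

```python
import secrets


class TokenMachine:
    """Hypothetical 'vending machine': issues bearer tokens,
    and redemption is the only check -- testing a token consumes it."""

    def __init__(self):
        self._unspent = set()

    def issue(self) -> str:
        token = secrets.token_hex(16)
        self._unspent.add(token)
        return token

    def redeem(self, token: str) -> bool:
        """Destructive by design: a valid token is spent the moment
        anyone checks it, so a stolen batch can't be probed safely."""
        if token in self._unspent:
            self._unspent.remove(token)
            return True
        return False


machine = TokenMachine()
t = machine.issue()
assert machine.redeem(t) is True    # first check spends it
assert machine.redeem(t) is False   # replay or second probe fails
```

Because checking equals spending, anyone who steals or buys tokens in bulk destroys their value by verifying them, which curbs resale markets.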
What does it matter? If there is incentive enough people will just pay and let their bot act on their behalf.
Something Awful made you pay $10 for an account. Directly to the forum. If you got banned you could pay another $10 to try again. Somehow this didn't lead to that bad incentives even though you'd think it would.
Ban reason and the moderator name were public on Something Awful, which allowed the community to respond (actively or passively), and for more senior moderators/admins to take public action against rogue moderators. The transparent audit trail countered the incentive to ban somewhat, but a lot of people also treated getting banned as a game.
Did they ban for this rule often?
"Am I making a post which is either funny, informative, or interesting on any level?
I hate how Reddit mods ban any post they don't like as being 'low effort / shit / spam' when it is completely vague.
Lemmy is even worse on the moderation front, even with public logs: https://a.imagem.app/G3R9xb.png
Lemmy isn't simply Lemmy since it's federated. A screenshot like this is somewhat meaningless without specifying on which instance this happened. There are instances with very lax or even no moderation at all.
For the majority of large, well-federated instances, I don't think it's meaningless, because deletions also propagate to other instances.
If a mod on one server doesn't like something I say, and they delete my comment, all the other (well-behaved) federated instances will also delete my comment.
Of course this also creates problems in the other direction, like servers that ignore deletion requests.
Combine that with the large number of instances blocked across the board, and I feel like you get into this "which direction would you like to piss into the wind" situation where you have no idea how many people/instances will actually see your message, if any at all.
I’d love something like this implemented for email.
Sending an unsolicited email to a random person X requires you to pay a small toll (something like 50p).
Subsequent emails can then be sent for free - however person X can “revoke” your access any time necessitating a further toll payment.
You would of course be able to pre-authorise friends/family/transactional emails from various services that you’ve signed up for.
This would nuke spam economics and be minimally disruptive for other use cases of email IMO…
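The scheme above is simple enough to sketch. This is a toy model of the proposal as described, not a real mail protocol; the toll amount and names are the commenter's/illustrative:

```python
class TollGate:
    """First contact from an unknown sender requires a toll; the
    recipient can revoke access, forcing a fresh payment next time."""

    TOLL = 0.50  # e.g. 50p, per the proposal above

    def __init__(self):
        self.authorised = set()  # senders who may mail for free

    def deliver(self, sender: str, paid: float = 0.0) -> bool:
        if sender in self.authorised:
            return True
        if paid >= self.TOLL:
            self.authorised.add(sender)  # subsequent emails are free
            return True
        return False  # unknown sender, no toll: bounced

    def revoke(self, sender: str) -> None:
        self.authorised.discard(sender)  # next email needs a new toll


inbox = TollGate()
assert inbox.deliver("spammer@example") is False          # unpaid cold email bounces
assert inbox.deliver("stranger@example", paid=0.50) is True
assert inbox.deliver("stranger@example") is True          # now free
inbox.revoke("stranger@example")
assert inbox.deliver("stranger@example") is False         # must pay again
```

The economics do the work: 50p is nothing for one genuine introduction, but it's ruinous at the million-message volumes spam needs to be profitable.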
>transactional emails from various services that you’ve signed up for
These are one of the main culprits of unwanted emails... and a toll system would make them all the more valuable for the even worse actors to take advantage of.
When Digg restarted, you had to pay $5 to create an account.
Do you think there is a price point that locks out spammers without locking out poor people?
Probably not. The problem is that spammers/scammers are looking for whales, and if you are talking about draining the retirement account of an American who's been saving all their life, that's quite a big payout, in the six or seven figures.
In the case of the 419 scams I used to ask, "who would expect $20M to fall out of the sky?" The obvious answer is "someone who already had $20M fall out of the sky."
The bot problem can easily be solved. It’s just that no one likes the cure. Think about this for a minute: what would happen if you had a country where all its citizens could act anonymously with no consequences, no reputation, no repercussions, and no trace? Would you want to go there? Live there? No, because it would be a lawless wasteland dominated by the worst of the worst.
Yet people act like the internet is somehow different. The internet is a massive society. Social networks are very much like virtual countries, or even continents. We’ve all enjoyed the benefits of living in this society of zero consequence, but it’s now been overrun by the very worst people, just like the imaginary country above.
You claim we can’t solve this problem, but we already have solved it here in the physical world with identities, laws, and consequences. The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity. Unfortunately, there won’t be a choice for much longer. The internet will certainly be dead without a system that ties IP addresses and online identities to real people.
No, it’s not the internet we all wanted, but humanity has ruined the one we have.
I can imagine an "anonymity" or "reputation" filter attached to every interaction on the internet. Enabled by default, but you could disable safe mode and watch the bots having fun.
Also, for me the problem is not anonymity itself, but the lack of reputation. If I have a signal that an entity can be trusted, I don't care much about its real identity.
I think this is a great way to frame the conversation and a possible solution: reputation. Things like accumulated karma or credits and IRL connections (big data will love this) all begin to feel dystopian, whereas reputation, I believe, is something everybody can get behind. It can absolutely remain anonymous, while still benefiting from IRL meetups for big reputation bumps (just use your handle). We all hang out in lots of places online; let that rep build and be usable everywhere. Pretty sure they were trying to do something like this in the fediverse, but I haven't touched base on it in a long time...
I suppose reshaping the fundamental social contract with the internet and the computers we use to access them would solve the problem.
So you are missing something here. Up until recently, IRL was anonymous by default, because capturing all that data about what people are doing was expensive and difficult to process. Cameras weren't everywhere either.
If you lie to me in the real world, I know what you look like and won’t trust you again. You cannot change your face. If you punch me in the real world, I can punch you back. If you stab me in the real world, you’re likely going to jail once the police catch up to you. You don’t do any of those things because the lack of anonymity imparts consequence. There is no anonymity in the real world unless you run around in a full face mask, in which case no one will trust you anyways.
>The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity.
Anonymity is not the problem though. We've gone with anonymity for a long while and it has worked fine. Would a removal of anonymity suddenly fix all this? No, absolutely not. Astroturfing and PR campaigns happened before AI comments were a concern, same as bad actors.
The problem here is the "recent" development of trusting whatever you read online. Of insisting that content should be personal, trustable and real, when none of this can ever be ensured. The separate, but related problem of engagement-based economy makes it way worse.
And remember: social media sites don't actually want to get rid of bots, for the most part. That's not in their interest, as long as bots increase engagement, does anyone trust them to actively hurt their bottom line in order to promote honest, productive discourse? Please.
With AI running rampant, it seems security through obscurity is basically the best thing we have. Everyone knows reddit, facebook, xitter, etc so any clown can and does have bots running loose. HN is "obscure" in that most normies don't know about this place, and so it's relatively safe from the floods of spam. But I think it's just a matter of time until non-tech people start looking for those few bastions of human comments online, come across this place, and a great flood begins and it'll never be undone. After that, I guess it'll be a rise of invite-only forums like we had in the early 2000s all over again.
HN may not be “mainstream” but it is certainly _very_ vulnerable to bot spam given the topics discussed and the make-up of the audience.
You can already see it happening now - at least the bots that write like vanilla Claude/ChatGPT. Presumably there is a much larger hidden cohort of bots that are instructed to talk more naturally and thus are better adept at flying under the radar…
I would say that HN has a lot of features that would be seen as draconian in how much they limit your interaction by other platforms.
You can barely comment before you are rate limited.
You can’t upvote until you’ve been around a pretty long time.
New accounts are given a green badge of dishonor that makes users scrutinize their comments more.
I’m not saying these are bad things but they’re probably too restrictive for a social media network that’s just meant to be a good fun time.
I’ve never seen people on the likes of blackhatworld selling hacker news accounts or services. The glass half full take on this is that hn is surprisingly robust in its ability to deal with vote manipulation.
If you are rate limited, a moderator has manually applied a rate limit to your account. Accounts are not rate limited by default. You can appeal the decision by emailing hn@ycombinator.com.
I think there's a short-term rate limit applied to everyone, e.g. you get a message if you try to post three replies in the same minute. I've seen it once, and I don't think I'm active enough to have earned a manual flag.
The karma points you get on HN are worthless, which I think is a bonus. They don't buy you anything. On Reddit, for instance, many parts of the site are walled off until you have "farmed" enough karma to participate.
Not exactly true.
You get the right to downvote, and if I promote my totally-not-a-scam product on HN, people will check my user account and see: oh wow, over 9000 karma? Gotta be trustworthy. When in truth it's just been karma farming.
HN does limit some of it, but it's not a panacea.
I don't know, never found much value in karma. I recreate an account at least once a year for no particular reason and it roughly takes me a week to get enough karma to do what is important (flagging posts).
My account is literally 4 years old and I'm not even halfway there.
How do you do it?
And I'm trying to limit myself from saying unwanted things like criticizing ** or saying something nice about **. (Self censoring to avoid downvotes).
Maybe I should be more active.
Dang told me in 2019 that HN gets 150M page views a month, so it's not that obscure actually:
https://news.ycombinator.com/item?id=21201120
150M page views a month is peanuts and very far away from the "social" networks' numbers. I don't have those numbers, but I know how many page views we had in 2011 while running a German browser game community.
The internet seems to have grown massively within the past couple years (unfortunately, almost certainly because of bots). I bet the number today is orders of magnitude higher.
I would bet money that HN's traffic is not orders of magnitude higher than 2020. HN is not as popular as HNers think it is.
We don't disagree. The extra traffic is almost if not entirely bots (especially scrapers)
> After that, I guess it'll be a rise of invite-only forums like we had in the early 2000s all over again.
Which would be totally fine with me TBH.
Rather amusingly, invite-only torrent sites might be the only semi-public authentically human hangouts left on the internet!
I was thinking the same thing, that this wouldn't necessarily be a bad thing. I'm curious how far it will go.. if we'll get invite-only mesh networks with self-contained mini-internets and the like.
Eternal AI september.
Eternal LLMber
I've asked ChatGPT a question about something I read in a thread here and it responded with a comment from that thread, even though the thread was less than an hour old. HN is well known in the tech community and there are certain subjects, especially anything involving Israel or India, that nearly instantly result in a flood of comments from bad actors. HN isn't Reddit but it's also a shadow of what it once was, which is driving away more of the productive participation in favor of agenda-based posting.
Search engines seem to index HN in near real time. They must have custom scraping code to follow the incrementing post IDs.
Claw plugins for HN APIs arrived pretty early.
Note that these topics often involve comments which you can predict very easily. Internet users are like that, agenda or no. Wasn’t it in the heyday of forums that you could recognize the most prolific/annoying members by their style and vocabulary? A model should have no problem pulling such things off.
It's pretty regular that for a major post, you can find the same few highly upvoted comments on all the platforms carrying the story.
The future is human curated content. Provide the same experience people get today but without the noise. Give them just the good stuff and don't let just anyone make a post. A book has an author, a movie has a director, maybe websites can have webmasters again who filter through the garbage for you.
The future is meeting in person and watching performers actually perform live.
You've nailed it. Social media is no longer and will never again be a substitute for real human interactions. It sort of worked when it was mostly real humans, but that era is ending and not coming back. Algorithms are now controlling what you see, and bots and agents are increasingly creating and posting most of the content.
Currently the biggest places with live performances are swamped and tickets get scalped for huge upcharges.
How did Yahoo work out compared to Google?
It’s what I’m trying to accomplish with my website(link is in my profile). Just trying to crank up the signal to noise ratio.
Nice. I like how clicking a tag also makes the word 'tag' light up.
Thanks for the kind words!
I got encouraged by another HN poster a few days ago, let me know if you have any suggestions.
I’m always open to criticism.
Everything clicks nice, so to speak. A nice UI you have there.
I would suggest you explain what it's about in one sentence, just like you explain in your HN profile. The About-page says not so much. You can add some explanation there, or even just one sentence at the top of the homepage (or other pages).
I got:
> Failed sending verification e-mail to XXX@XXmail.XXX, please contact administrator on stonky@stonkys.com
Thanks for the info, I’ll fix it tonight
Still waiting for that Contact link...
On it
AI is sucking up that content and denying traffic to its creators. This model is becoming obsolete.
A curator with a great taste and judgement is king.
Yes, precisely.
This means that only sites which verify identity will have any value in the future. And by verified, that means against government ID and verified as real.
No amount of sign up fee works as an alternative.
Note that a site can verify identity, prevent sock puppets, ban bad actors and prevent re-registration, all while keeping that ID private.
You still get a handle and publicly facing nick if you want it.
The company which handles this correctly will have a big B after it. Digg actually has a chance at this.
It has no users, so the outrage won't exist in the same capacity. Existing platforms will be pummeled in the market if they try to convert to this type of site, as their DAU will likely drop a thousandfold, just due to the eliminated bots.
But Digg could relaunch this way. And as exhibited, this is now the only way.
The age of the anonymous internet is over, it's done. People not realizing this are living in the past.
Note, I don't like this, but acknowledging reality is vital. Issues with leaked databases, users, hacking of Pii are all technical and legislative issues, and not relevant to whether or not this happens.
Because it will happen, and is happening.
It should be noted that falsifying ID is a crime. Fake ID coupled with computer fraud laws will eventually result in hefty jail time. This is sensible, if people want a world where ecommerce, and discourse is online... and the general public does.
And has exhibited a complete lack of care about privacy regardless.
I think people who want to stay anonymous just will not participate anymore. Like I’ve enjoyed using this site, Reddit etc but couldn’t care less about dropping them if I need to have an id verification to access. Someone will probably create a new communication method to replace this.
>No amount of sign up fee works as an alternative.
Simply put, money is worth too much; at some point someone will want access to this human audience and offer too much to be resisted.
>It should be noted that falsifying ID is a crime
Lol, no one gives a shit on the internet. People will use stolen IDs to get accounts. If the network is lucrative enough, governments will provide fake IDs to spread propaganda.
> Lol, no one gives a shit on the internet.
This is now, not the future. Times change.
human curated -> human moderated. I, for one, don't care if it's AI- or human-written. I care if it's interesting/useful.
results are important, not the tools or process. (on this matter)
Results over time are important. Or at least they should be.
Every website needs to add the "friend or foe" system[0] so that I can mark bots to avoid their content and mark good posters so I can filter just to theirs.
[0]: https://hackersmacker.org/
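The mechanics of such a filter are trivial, which is part of the appeal. A minimal per-user sketch (the field names and post shape are made up for illustration):

```python
from dataclasses import dataclass, field


@dataclass
class FriendOrFoe:
    """Slashdot-style friend/foe lists, applied client-side:
    hide foes (e.g. marked bots), optionally show only friends."""

    friends: set = field(default_factory=set)
    foes: set = field(default_factory=set)

    def visible(self, posts, friends_only=False):
        if friends_only:
            return [p for p in posts if p["author"] in self.friends]
        return [p for p in posts if p["author"] not in self.foes]


me = FriendOrFoe(friends={"goodposter"}, foes={"karmabot"})
feed = [
    {"author": "karmabot", "text": "spam"},
    {"author": "goodposter", "text": "insight"},
    {"author": "random", "text": "meh"},
]
assert [p["author"] for p in me.visible(feed)] == ["goodposter", "random"]
assert [p["author"] for p in me.visible(feed, friends_only=True)] == ["goodposter"]
```

The key design point is that the lists live with the reader, not the site: no moderator decides for everyone, and two users of the same forum can see entirely different feeds.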
This should be separate from marking bots, because what this will really do is embed people into hearing only what they want, making discussion worse.
no, I truly do not want to read IHeartHitler88's opinion on jews, or donttreadonme09's bright opinions about how the economy would be better if we listened to Ayn Rand. I'll be very happy when they're out of my sight. If I want to have a miserable day, sure, I'll turn it off.
Fact of the matter is, most posts on the internet are already dogshit. Now they're also populated by AI, but the point stands. Most of what you will say online is at best useless.
>Most of what you will say online is at best useless.
If that is true, you are saying far too much.
I know, it hurts. Most of what I say in this website doesn't matter. Even if it did, it's about the same thing as screaming into the void. And it applies to you too.
The vast majority of what we post is vapid, useless bullshit.
On /. I would only mark obnoxious people as friends so I could see the friend-of-a-friend indicator and be cautious of anyone aligned with them.
> And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
You just published good content knowing AI will slurp it up and not give you any traffic in return. I'm now replying to you with more content with the same expectations about AI and traffic. Why care about AI or traffic or recognition? Isn't the content the thing that matters?
It's like answering technical questions in an anonymous/pseudonymous chat or forum, which I'm sure you've done, too. We do it to help others. If an AI can take my answer and spread it around without paying me or mentioning one of my random usernames I change every month or so, I would be happy. And if the AI gives me credit like "coffeecup543 originally posted that on IRC channel X 5 years ago", I couldn't care less. It would be noise to the reader. Even if the AI uses my real name, so what?
The people who cared about traffic and money from their posts rarely made good content, anyway. Listicles and affiliate marketing BS and SEO optimizations and making a video that could be 1 minute into 10 minutes, or text that could've been 5 articles into a long book - all existed from before AI. With AI I actually get less of this crap - either skip it or condense it.
It's two different problems. People who run review sites and blogs and such care about traffic, and not getting attribution will kill their desire to participate. People who post here and on Reddit etc. care about talking with other human beings, and feeling ignored in a sea of botspam will kill *their* desire to participate.
> feeling ignored in a sea of botspam will kill their desire to participate.
The bots are not really that bad, they're (still) pretty easy to spot and not engage with. I'm more perplexed about the negativity filled comments sections, and I'm pretty sure most posters are real grass-fed certified humans.
I don't get why negative posts get so upvoted, get so popular on the front page, and why people still debate with outdated arguments in them. People come in and fight old demons, make straw-man arguments, and in general promote negative stuff like there's no tomorrow. I think you can get so much more signal from positive examples, from "hey I did a thing" type posts, and so on. Even overhyped stuff like the claw-mania can still be useful. Yet the "I did a thing" posts get so overwhelmed by negativity, nitpicking, and "haha, not perfect means DOA" type messages. That makes me want to participate less...
Oh that's just human nature: there's a reason why trashy tabloids continue to exist despite how public sentiment seems to universally agree that they're awful spreaders of rumour and insecurity. More people are Skankhunt42 than we'd like to admit.
That's a little bit apples to oranges, because I'm not monetizing this content, or paying to host it, or trying to make a personal brand, etc.
Yes and no.
In the most simple sense - Yes, it is the content that matters.
In the more practical sense - cognitive and emotional resources are limited and our brains are not content agnostic.
We have different behaviors, expectations and capacities for talking to machines and talking to humans.
For example, if I am engaging with a human I can expect to potentially change their minds.
For a machine? Why bother even responding. It’s of no utility to me to respond.
Furthermore, all human communication comes with a human emotional context. There are vast amounts of information implied through tone, through what we choose not to say. Sometimes people say things in one emotional state that is not what they would say on another occasion.
To move the conversation forward, addressing the emotional payload behind the words used, matters more than the words used themselves.
There are a myriad reasons why humans are practically poorer for these tools.
Asking people for money in order to read stuff, and promoting the ones people are actually ready to part with real money for, is an interesting first step. (See: Substack, Patreon, etc...)
I know this is going to sound horrible, but: how about asking for money to contribute, period? Maybe have a free tier of a couple of comments, etc... But if you want to build a troll factory, sure... show us the cash?
I do believe that charging for it is one way to create some friction, but it's not enough.
Twitter is full of blue checks that are just bots and automated reply guys.
I now treat all these bots as a stressor on our defense systems; we will end up having to learn how to build a real Web of Trust, and really up our game on the PKI side. We also need a good zero-knowledge proof of humanity that people can tie to their Keyoxide profile, so that we can just filter out any message that is not provably associated with a human.
Collapse of the Internet or collapse of the visual world wide web? tbh, I am a little curious to see what comes after clicking a button on a web page.
This could be positive. So far things were gamed and manipulated to some extent, with some fake content, but it was never too obvious, and a bit of a cat and mouse game with filters and whatnot. Now, it's so easy to fake content that robust systems will have to evolve, or most social media sites will become worthless, and advertisers will catch up eventually when they are paying for bot-only sites. The downside of course is that these robust systems are hard to imagine without complete loss of anonymity of the users.
Web of trust weakens anonymity, but doesn’t eliminate it.
- You know who your online invitees are, but not your invitees-of-invitees-of-…
- You can create an account, get it invited, then create an alt account and invite it. Now the alt account is still linked to you, but others don’t know whether it’s your friend or yourself. (Importantly, you can’t evade bans with alts; if your invited users keep getting banned, you’ll be prevented from inviting more if not banned yourself)
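The ban-propagation rule in that last point can be sketched in a few lines. This is only an illustration of the idea, not any real platform's implementation; the class name and the banned-invitee threshold are made up.

```python
# Sketch of invite-tree moderation: every account records who invited it,
# and bans count against the inviter.

class InviteTree:
    def __init__(self, max_banned_invitees=3):
        self.inviter = {}           # account -> who invited it
        self.banned = set()
        self.invite_frozen = set()  # accounts no longer allowed to invite
        self.max_banned = max_banned_invitees

    def invite(self, inviter, new_account):
        if inviter in self.banned or inviter in self.invite_frozen:
            raise PermissionError(f"{inviter} may not invite")
        self.inviter[new_account] = inviter

    def ban(self, account):
        self.banned.add(account)
        parent = self.inviter.get(account)
        if parent is None:
            return
        # Freeze the inviter once too many of their invitees are banned.
        banned_children = sum(
            1 for child, p in self.inviter.items()
            if p == parent and child in self.banned
        )
        if banned_children >= self.max_banned:
            self.invite_frozen.add(parent)

tree = InviteTree(max_banned_invitees=2)
tree.invite("alice", "bot1")
tree.invite("alice", "bot2")
tree.ban("bot1")
tree.ban("bot2")  # second strike: alice can no longer invite
```

The key property is that alt-account laundering stays expensive: you can hide behind an alt, but its bans still land on your inviting account.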
> Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle.
The creative loop moves inside the agentic chat room, where we do learning, work, art, research, leisure, planning, and other activities. Already OpenAI is close to 1B users and puts multiple trillions of tokens per day into our heads, while we put our own tokens into their logs. An experience flywheel, or extended-cognition wheel, of planetary size. LLMs can reflect and detect which of their responses compound better in downstream activities, and derive RLHF/RLVR signals from all our interactions. One good thing is that a chat room is less about posing than a forum, but LLMs have taken to sycophancy, so they are not immune, just easier to deal with than forums. And you can more easily find another LLM than a replacement specialty forum.
Perhaps they migrate into Discord and Instagram once they acquire better visual and voice capabilities.
Yeah, we need human verification more than age verification.
Every website that was driven by traffic is also dying. I have put nearly a decade of work into mine, and AI overviews and ChatGPT have reduced traffic by over 60%. At some point I will need to give up and find a job, and that corner of the internet will get no new original information, just rehashed slop.
As someone who came of age before "the internet as you know it", I am looking forward to all of the cancerous Web 2.0 OG slop and narcissism factories succumbing to their own fates. Let me tell you, the internet as we know it sucks, and the internet it ate 25 years ago is a marked improvement. We should be so lucky. Now go write a personal blog in plain text, and rejoice.
> Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
They will try and OpenAI will sell favorable placement to manufacturers.
You mean a complete collapse of social media, not the whole internet. The internet is a telecom ecosystem and has a lot more to it than just forums and link aggregators.
I honestly believe it might not even be such a bad thing. People were arguably better without social networks and media, and it's perhaps better to let the cancerous thing just die and keep the internet just as a utility powering boring things like banking and academia.
What would you say are the major applications of the internet? It's used for business and academia in ways that aren't going away, yes. M2M communication will stay. Social media is the largest user-facing segment and it might not. I don't have a sense of how big these sectors are relative to each other. If the largest sectors of the internet disappear, the internet shrinks a lot.
That and most of the news being behind a paywall, which they can scrape anyway.
The Internet Archive is my safe haven these days; I can go back and remember the old internet.
Ha yeah, I quite like the 2003 vintage.
Unless you're allowed to say slurs without being banned, your forum will be overrun with bots. The sanitation of the internet is the perfect breeding ground for brand-safe AI promotion bots.
4chan has bots too.
Curious how you came to that conclusion. Anecdotally, places where you can slur to your heart's content like /r/conservative seem far more inundated with bots than other areas of Reddit. I feel like that's really saying something too, because Reddit has a really bad bot problem overall.
At some point websites will just have to start charging an entry fee, so that if you really are yet another bot, at least you are paying for your stay. If you're not rate limiting your website in 2026 on a per-user level, you really need to, and figure out how to do it meaningfully. Raise limits for known human power users, especially if they pay to use your website.
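For per-user limits with raised caps for trusted users, a token bucket keyed by user and tier is a common starting point. A minimal sketch; the tier names and rates are invented for illustration, not recommendations.

```python
import time

# Per-user token-bucket rate limiter with tiered capacities.
class RateLimiter:
    TIER_RATES = {"anonymous": 10, "verified": 60, "paying": 600}  # requests/minute

    def __init__(self):
        self.state = {}  # user -> (tokens, last_refill_timestamp)

    def allow(self, user, tier="anonymous", now=None):
        now = time.monotonic() if now is None else now
        cap = self.TIER_RATES[tier]
        rate = cap / 60.0  # tokens refilled per second
        tokens, last = self.state.get(user, (cap, now))
        tokens = min(cap, tokens + (now - last) * rate)  # refill since last call
        if tokens < 1:
            self.state[user] = (tokens, now)
            return False
        self.state[user] = (tokens - 1, now)
        return True

rl = RateLimiter()
burst = [rl.allow("user1", "anonymous", now=0.0) for _ in range(11)]
# the 11th request in the same instant exceeds the anonymous bucket
```

Known human power users would simply get a higher-capacity tier, which matches the "raise limits for humans, especially paying ones" idea above.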
I wonder if the "short-term" "fix" is that people will start to migrate off the web and into mobile apps. Though none of this stops agents from using phone emulators, so it's kind of pointless, but I imagine crawling the web is easier for AI.
This is a comically short lifespan. Didn't they launch less than six months ago? To just torch it and shut it down is wild, and referencing downsizing the team right from the jump... I got the impression this was a fairly small team from the beginning. Not to mention it was backed by stupendously wealthy cofounders who made fortunes off the Web 2.0 run of the original Digg and Reddit, yet can't seem to stomach a bumpy two-quarter initial launch?
There was a lot in the new digg that I was concerned or at least not optimistic about but come on - are we even going to try anymore?
> Didn't they launch less than like 6 months ago?
Two months, according to The Verge.
https://www.theverge.com/tech/894803/digg-beta-shutdown-layo...
This is particularly embarrassing since from what I recall they were all in on AI with the new website, so to shut it down so fast because of it…
Fail Fast I guess!
> None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on.
What is HN doing differently then?
A lot of banning, for one, along with getting rid of most politics and off-topic stuff.
The issue is we are seeing a ton of AI stuff getting posted, so it's a losing battle.
I used to love HN. Lots of interesting stuff, great articles, novel projects. Now it feels like the frontpage is always around 70% LLM-related stuff. And not breakthrough research or projects, just "new Claude version X" and shit like that. Eternal September I guess?
It's not, hence the "don't post AI slop as your comment" posting a few days back that had 1000+ comments.
Currently an unsolved problem - just stealthier on some platforms than others. Trigger the right topic on HN and the bots come out in-force together with humans sloppily copy/pasting LLM content.
I am kind of peeved. I started a community there and diligently posted links to topical news, and it kind of became a reference to me. Like many others, I've put in some amount of effort.
Now it's gone, again. Without a heads-up or a way to get a backup out of it, it seems. Can't say I am a fan of that.
Cutting staff in no way mandates an unannounced and abrupt "hard reset".
They could at least put it in read-only mode for a short time and allow downloading of extant community content prior to a scheduled "reset day".
This smacks of flailing leadership and zero respect for their target user demographic.
They say trust is their product; well, I guess they're sold out.
In Digg's non-defense, Kevin Rose has been a serial-rug-puller for his entire career. See also Pownce, Milk, and Moonbirds.
The only sustained business I'm aware of is Hodinkee.
Kevin Rose didn't start Hodinkee, he started Watchville years after Hodinkee was already well established. Watchville merged with Hodinkee, at which point he became the CEO for 2 years.
From what I can tell Watchville was abandoned a few years ago.
> Digg's founder who started the company back in 2004
Their plan is to make the internet what is was 22 years ago.
I wonder how much it's possible to recreate some of the old magic.
I'm sure it's impossible, but what if it's not?
That's exactly what they did to the old Digg back in 2010 -- a massive redesign that effectively deleted all old posts, comments, and favorites without warning or opportunity to back up. I feel pretty vindicated choosing not to trust them again, though it's wild they didn't even make an effort to do better here when they claim to want to keep going.
If you're looking for a new platform, Lemmy is probably your best bet; at least if a server goes down, everything is still saved on federated servers.
I do have a lemmy account, but have not really returned to it in a while. Maybe I haven't found the right communities yet, but it had nothing about it that felt engaging. People upvoted, but nobody talked. No interaction. Digg felt more alive from day one. I replied to a post in a niche community with ~100 members and only afterwards realized it was @justin.
My experience with lemmy has not been nice. A majority of people there are just downright awful, and the mods are often power-hungry and overzealous in their actions. Many times entire servers are defederated from many others due to how a large percentage of their users behave.
Example: https://0x0.st/8RmU.png
Despite its flaws, X seems to have a better balance between what's allowed and what's not than other non-niche social networks.
Fuck X. Various people can shove that 'better balance' completely up their jaXie.
Lemmy has the same energy as ICE: a bunch of rejects from other mod communities showing up to render their version of justice upon federated folks.
Yeah, the primary instance (lemmy.ml) isn't the best.
I use mander.xyz, it's science focused, but they also have a policy of only de-federating instances that host CSAM.
Isn't the biggest instance Lemmy.world? I thought .ml was the oddball fringe dominated by tankies.
Where is that policy located? I could not find it.
Their /instances page also only shows a single blocked instance, whereas something like programming.dev shows lots of questionable instances blocked.
Honestly, I couldn't find it either, but the owner talks about it in his post about blocking threads https://mander.xyz/post/1062661
> A majority of people there are just downright awful, and the mods are often power-hungry and overzealous in their actions.
If you're telling me it's _worse_ than reddit in this regard, I can only imagine how terrible it is.
Lemmy is server software, it's like saying you don't use phpBB because it has bad mods
You chose to put your effort into building something that someone else owns.
Next time try doing it in a way that you control it.
You're right, and that is one of the lessons to be reminded of here.
My main point wasn't that, though. It's simply a bad and low-effort way to handle the situation, and like one of the other replies points out, there are better options. They could have just as well disabled posting and maybe even viewing of submissions and communities for the time being. Just shutting it all down immediately without notice leaves a bad taste in my mouth, and I will not be among the people returning for their next relaunch. I am sure others feel the same way, and I don't think it is a wise decision to needlessly put off your early adopters if you're hoping for them to come back "next time".
Will we never learn to stop. Building. On. Platforms.
Argh. Also quite irritated. I had 50/50 transitioned over to it despite the lower traffic because it was a calm oasis. The thing about bots is believable, though, because you could already see it happening. Dead Internet has been real for a while, and I'd love to see Kevin and Alex do a followup on this.
Yeah. Sadly the default communities were flooded with blog spam, and that's just the part I noticed. A couple days ago a bunch of smaller communities also got a noticeable bump in members. That didn't change anything in my own community, but others apparently weren't so lucky.
I can see why the team got overwhelmed. I wouldn't want to have to deal with that.
Related - others?
Digg.com Is Back - https://news.ycombinator.com/item?id=46671181 - Jan 2026 (10 comments)
Digg.com relaunch public beta is live - https://news.ycombinator.com/item?id=46623390 - Jan 2026 (18 comments)
Digg.com (Relaunch) - https://news.ycombinator.com/item?id=46524806 - Jan 2026 (3 comments)
Digg.com is back - https://news.ycombinator.com/item?id=44963430 - Aug 2025 (204 comments)
Digg is trying to come back from the dead with a reboot - https://news.ycombinator.com/item?id=43812384 - April 2025 (0 comments)
Kevin Rose (original digg founder) and Alexis Ohanian (a.k.a. kn0thing, original reddit founder) did an AMA recently about restarting digg
(context so people don't have to click links)
If someone wants to read the AMA, it is here: https://www.reddit.com/r/IAmA/comments/1qhf27j/alexis_ohania...
> Digg.com Is Back - Jan 2026
Damn, that didn't take long at all...
Well, it's like two major LLM models ago?
The "new" Digg was just Reddit with the exact same type of comments you can find there and I left it (Digg and Reddit) because of that. There are very few sites where real discourse is still possible without it being filled with memes, running jokes, "witty" one-liners and the constant need to "one-up" and call-out each other. What does Digg even want to be? Nobody needs a second nu-Reddit. It speaks volumes that this post also seems to be AI-generated.
> sites where real discourse is still possible without it being filled with memes, running jokes, “witty” one-liners [etc]
There are subreddits within Reddit such as https://www.reddit.com/r/neutralnews/ that have strict rules around sourcing, etc. However, I think that’s not what most users want, and may not be quite what you’re looking for either, apologies.
Eh - it IS what most users want.
In the same way people want to be fit.
There are three horsemen of Internet forums; one of them is topics with a low barrier to entry.
At that point anyone can speak up, and their opinion takes up as much screen real estate and reading time (often less reading time) as a truly informed take.
By putting effort barriers in place, it forces a fitness test that most users (and bots) fail.
Another subreddit which has strong rules is r/badeconomics. I didn’t know about neutralnews, so thank you for giving me another example to add to the list.
What the current users want.
I think communities like Reddit and Digg grow to a certain point and don’t grow anymore because everybody else absolutely hates what those communities have become. See the fight years ago where Digg thought it had to outgrow MrBabyMan. Problem is platforms don’t usually win those fights.
Sure, today’s redditors love sharing stupid image memes. For each of them there are 20 people who wouldn’t touch Reddit with a 10-foot pole.
Size isn't a predictive metric for community. It's great for ad revenue.
The point being made is that communities maintain high signal to noise ratios by adding effort filters.
The whole problem is trying to be a catchall where people with zero knowledge or skills can hang out. Twitter/X and Reddit especially suffer from it.
Topical forums tend to have a much higher SNR. My favorite forum of all time, johnbridge, had none of those issues. Sadly it died this year all the same, but many others still exist. When you have a forum dedicated to something that requires a minimum barrier to entry, the more useless folks get shunned away pretty early and easily.
I want a "reddit" like discussion board where:
- Users don't have to pay to post links/stories
- Users have to pay to comment on links/stories
- Users have to pay to "upvote" comments. Downvotes don't exist
- Each link "lives" a certain amount of time before it is locked.
- After lock time, users who posted the link get "paid" a % of the $ collected from comments/upvotes. Comments that are upvoted also earn $ proportionally to their upvotes.
Hashcash was conceived to fight automated email spam. Participating in a discussion must cost something; that's the only way bots and spam will be even partially stopped. Or, if they start to optimize for getting "the most votes", then so be it; their content will increase in quality.
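Assuming the fee model sketched in the parent comment, the settlement at lock time could look something like this. All the fees and the poster's share are hypothetical numbers, just to make the mechanics concrete.

```python
# Toy settlement for the pay-to-participate thread model: commenters and
# upvoters pay in, and the pot is split between the link poster and the
# upvoted commenters when the thread locks.

COMMENT_FEE = 0.10   # paid per comment (hypothetical)
UPVOTE_FEE = 0.05    # paid per upvote (hypothetical)
POSTER_SHARE = 0.30  # fraction of the pot paid to the link poster

def settle(comments, upvotes_by_comment):
    """comments: list of (comment_id, author); upvotes_by_comment: id -> count."""
    pot = COMMENT_FEE * len(comments) + UPVOTE_FEE * sum(upvotes_by_comment.values())
    poster_payout = POSTER_SHARE * pot
    remainder = pot - poster_payout
    total_votes = sum(upvotes_by_comment.values()) or 1  # avoid division by zero
    commenter_payouts = {
        cid: remainder * upvotes_by_comment.get(cid, 0) / total_votes
        for cid, _author in comments
    }
    return poster_payout, commenter_payouts

poster, payouts = settle([("c1", "ann"), ("c2", "bob")], {"c1": 3, "c2": 1})
```

One design consequence worth noting: because payouts scale with upvotes, the incentive to buy votes for your own comments exists by construction, which is part of what the sibling replies object to.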
Paying users for their posts is what killed YouTube, Twitter, Facebook, Instagram... You will only get shitty ragebait comments. Not to mention that you have to link some bank account with your full name, etc.
This sounds like a platform that has no appeal to the average person, and an incredible appeal to people wishing to launder money or use money to run an influence campaign. Deliberately determining popularity proportionally to the amount of money spent is little different than advertising, but this would be under the false premise of "someone thought this was important/valuable enough to pay money to suggest I see it".
If this were to exist today, I know I would be incredibly critical of it.
Makes me think of how prediction markets have a Republican bias because some rich people just gotta bet on their tribe every time
https://aaltodoc.aalto.fi/server/api/core/bitstreams/4176474...
Every election I see internet-connected gym machines have the leaderboards spammed with right wing messages because some people don’t have to work and just spin all day.
I’m missing something. What’s the incentive for people to pay to upvote or comment?
It seems like that would lead to a proliferation of ragebait, deliberately controversial posts, and overly simplistic articles to attract the greatest amount of comments. I frequently see deeply technical high-value posts on HN with very few comments but each thread about politics ends up getting hundreds of comments.
+1 let's make this
You could build this on ATProto.
What's stopping you from building it yourself?
The patterns were there if you knew to look for them.
The original Digg excepted, Kevin Rose's attention span is extremely limited. He will give something ~3-4 months of attention before (apparently) getting bored and wanting to move on to something else.
Up until that point, he will be an unrelenting hype man of whatever his attention is lasered on at that moment.
Then the hype posts start to drift. They show up once every few days, then once a week, then stop entirely. Any criticism or skepticism is considered a buzz kill in the cloud of good vibes only.
A few months later, a dramatic explainer post arrives (underestimating the cold start problem? Really??), outlining why the idea didn't work and why the next one will be better, for sure, for real.
This (AI generated) note from the current CEO paints an optimistic picture, but the most likely outcome will be that Digg simply doesn't launch. It's sustained on the nostalgic vapors of the old guard, not renewed by a replenished sense of purpose, or connection.
I'd say I'd love to be proven wrong, but I personally question the utility of a Web 2.0 social network phoenixing itself. We have endured a decade+ of originality being buffed out of web products, most now resembling variations of Bootstrap and shadcn in service of dev convenience and getting rich quicker.
Surely in the age of vibe coding, we can afford to take creative risks again, and think of something new.
Milk
Moonbirds
Digg
Too comfortable with money in the bank to give full attention to a new venture.
I'm done falling for the Kevin Rose hype train. Long time fan but this is just pathetic.
I think anyone with tens of millions of dollars would find it hard to compete in the business world - they should stay in their garden with their rare plants
This kind of makes the Digg team look like a joke. Rebuilding was always going to be hard, but I think this kills any chance of building it up a third time since no one can take it seriously.
Digg v3 was in 2006, this is at least v5 :)
> This isn't just a Digg problem. It's an internet problem.
Am I completely off base or did they use AI to write the post complaining about AI?
> Network effects aren't just a moat, they're a wall.
Digg isn't just here again. It's gone again.
The LLM style is like nails down a blackboard, are people blind to it or do they just not even read the stuff they're posting?
It is not an LLM style and you may want to reconsider your choice of allowing LLMs to live free in your head.
It's an LLM pattern but it was learned from training on these kinds of posts. People actually wrote that way, a lot, on the internet.
I'd love to see an analysis of the prevalence of "it's not x, it's y" before and after 2022
I don't know if they used AI but there were two "lastly" sentences right next to each other so at the very least it wasn't well edited.
The whole post is blatant LLM output.
That didn't last long. I'm not sure I want to invest my time again if/when they relaunch.
I kind of expected this. The way some of these people work, if the site isn't an instant unicorn, it's trash. But if the goal is a good community, that is something that takes time to build and should grow slowly. The incentives are all backward.
Digg's death in 2010 was essentially the original case study for how to destroy platform trust overnight. The v4 redesign wasn't just bad UX — it was a signal that the company had fundamentally changed its relationship with users. When Kevin Rose tried to "fix" the front page by giving power users less influence, he accidentally revealed that the whole value of Digg was those power users.
What's interesting is that every subsequent attempt to revive Digg has been a bet that brand nostalgia outlasts institutional memory of why it failed. It doesn't.
I was excited about a Reddit alternative. I signed up months before the public beta. When I tried it, the new Digg website turned out to be a terribly bloated and slow Next.js app. Used it once and never again.
It's a shame; the intention is still there, and if they decide to come back I'll give it another shot. Btw, why are we publishing simple static pages at ~2.84 MB compressed?
Those 100 npm packages won't load themselves.
I would pay cash for access to a social site that bans all US politics, the astroturfing associated with it is simply unbearable.
For a short time I was a part of a small site that banned politics.
It was fine, people talked about work, personal stuff, travel, until one person posted about their disappointment that their state was limiting various services or rights to gay people. For them this meant their rights were in question and they were understandably upset.
Immediately some folks cried politics and that they shouldn’t post about that sort of thing.
To the user posting it it was about their life…
I don’t think “no politics” rules really make much sense. For someone it’s more than politics, and IMO because a topic is touched by politicians or government shouldn’t make it disallowed.
I'm always amused when people say things like this. Any criteria that determine what constitutes "political" talk is inherently political.
Wouldn't that be almost impossible? Politics affects our lives every day. Your comment suggests that you believe it doesn't affect yours.
https://en.wikipedia.org/wiki/Empathy
Banning discussion of politics is an endorsement of the politics that are already happening. People think it's apolitical but it's not.
I've never seen discussion of politics on forums do anything but turn into hate-filled, dogmatic posts which aren't productive at all. Every political thread here turns into the same takes, and HN imagines itself as intellectually better than others. It's not interesting or productive. If talking about politics fixed things, why are politics worse today than they've ever been? There are no costs and no solutions to ranting about politics online.
The vast majority of people do not want to get on a forum, where they escape their life, only to see ever more and worse content about their daily lives.
You're right, there needs to be some outlet, but when people propose this it's because they are sick and tired of politics, and the injection of them into everything is not helping those politics; it just makes it worse.
Tons of people aren't political creatures and want nothing to do with politicians. This notion that more politics will fix things isn't borne out by Reddit, X, the US Congress, Brexit, etc. It's too easy to divide and manipulate people.
> Wouldn't that be almost impossible?. Politics affects our lives every day.
No it wouldn't be. And if your definition of "politics" includes "literally every time a thing happens" then your definition is so broad as to be useless.
When people say that they want politics banned, they are talking about the extremely controversial arguments that are almost completely unrelated to whatever the community is about. I.e., if you run a group about cheese making, and someone comes in and starts arguing about whether an ICE shooting on the other side of the country was justified or not, that is... off topic. And everyone with a brain can understand that.
It really isn't that hard to figure out which topics are related to cheese making and which other topics have almost nothing to do with it, even if someone could make a bad-faith argument that it is related (e.g., your response would probably go something like "Well, what if someone knows a cheese maker who is here illegally? That's why ICE enforcement on the other side of the country is relevant!". You could say that, but we would all know that you are being bad faith, or have some sort of issue with determining what words mean to regular people).
Partial credit in this example could go to political issues that are very obviously and directly related to cheese making. A new tax on cheese that goes into effect in your local town, and very directly is related to the group topic. Stuff like that might be OK.
And your response to this example would go something like: "Oh, so are you saying that politics should be allowed?! How do you tell the difference between a cheese tax and an ICE shooting on the other side of the country? Hypocrite!"
And the answer to that is that we can use our brain. We all know that a cheese tax is more related to the local cheese making group than national politics. And we don't have to argue with clearly bad faith arguments that pretend otherwise.
To summarize, when people say that they want to ban politics, what they actually mean is that they want to ban completely off-topic controversial issues that others are trying to shoehorn into a group that isn't about that issue.
And people are saying that it is OK to compartmentalize things. Every group in the world doesn't have to talk about your pet issue. The cheese making group can just be mostly about cheese making and they don't have to argue every day about national immigration policies.
There's a forum (HardForum) where they've taken a kind of opposite approach: people pay to access private forums where they can talk about politics and random things while the public-facing boards remain tech focused.
Basically incentivizing those who feel strongly about things to just pay up to talk about them in an exclusive area, which also keeps the site ad-free. Been apparently working for 25 years.
Unfortunately unless you also ban it in comments, people with an axe to grind will find a way to bring it up in the most inappropriate places. Casual swipes at Elon and Trump and Biden or AOC (depending on your corner of the internet) will happen on stories about the nutritional value of school lunches or fundraising for some animal shelter. It even happens on HN constantly.
Why do you think people will stop at politics?
Erm, Brexit, anyone?
You thinking that astroturfing only happens for US politics is dangerously naive.
You really gotta wonder how much value the "Digg" brand actually has, because the number of people that remember/care about the site from its original glory days is ever dwindling.
They’re only in their 40s and 50s. Not quite dead yet. :)
But is there still any credit to mine there?
I liked digg v2 (I guess), when it relaunched as a sort of curator of interesting articles (and videos). For years it was my go-to place when bored and wanted something interesting to read.
I guess that in an ocean of upvote-based platforms, an island of hand-picked content was a welcome change -- at least for me.
The move (back) to a reddit-like site never made sense to me. Hopefully what comes next has real value to the users.
I've been sharing this HN comment with anyone who mentions how good the articles in digg V2 were:
https://news.ycombinator.com/item?id=39046023
Apparently the reason why their articles were interesting was because... they copied all of their content from DamnInteresting. Once they were called out they stopped, and the quality went downhill.
One of the things I always disliked about the original Digg was its threading. The Slashdot-like feed, where the oldest comments were at the top and there was only one level of replies, tended to encourage "first" comments and harmed the quality of the discussion. I was glad to see the new site use a reddit-like comment thread, but it also meant there wasn't much reason to use it over Reddit.
I'm a bit surprised with Alexis' involvement they didn't anticipate the bot problem. Alexis left reddit several years ago but I'm sure he's still in touch with the folks who run the place. It would've been worth it to talk to them about the threats they currently face and how they deal with them.
Wasn't part of the Reddit story that they bootstrapped it with fake content?
Absolutely. They kinda brag about it now. But I think it was just the founders making multiple accounts. It sounds like the new Digg was worried about bots scaring people away from the site with thinly disguised ads.
I was a big user of Pocket between 2015 and 2019, which also curated interesting long-form articles. The problem with that model was that paywalls were coming up everywhere, so the free articles that remained came from low-quality sites (Forbes, Inc., Fast Company) full of long-form hustle-culture or self-improvement stuff loaded with affiliate links. Maybe Digg v2 had the same issue.
Why didn’t it make sense to you?
It was 4chan lite...
The problem seems to be identity, a real problem, and it looks like it will only get worse. Would it work to create a zero-knowledge digital identity service (maybe centralized, maybe decentralized, I don't know), where you prove you're human via your government ID, passport, driver's license, whatever, and the service can then attest that you're a real person? So if I'm Digg, I would ask for some form of OAuth, the system would simply verify that you are in fact a human, and you would go on to create your account, forever verified. This way the identity service only does identity; it does not keep a record of where you are attesting, no logs, nothing, just your identity, basically saying yes/no, with no sharing of IDs or any other data.
So people would go through one hurdle in life, to get this id, and reuse it for every service.
Is this a worthwhile idea? I know many have tried, so help me poke holes in it.
1/ KYC is pricey, and users might not want to pay for it
2/ Spammers can hire real people to farm accounts
I think this idea might work if we
- create reputation graph, where valuable contributors vote for others and spread reputation
- users can fine-tune their reputation graph, so instead of "one for all", user can have his personal customized graph (pick 30 authorities and we will rebuild graph from there)
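The "pick your own authorities" reputation graph described above could be sketched as a small trust-propagation loop, seeded at whichever accounts a given user trusts. The damping factor, iteration count, and graph shape here are arbitrary assumptions, not a worked-out design.

```python
# Personalized reputation: trust starts at the user's chosen authorities
# and decays along "vouches for" edges, so each user gets their own scores.

def personal_reputation(vouches, seeds, damping=0.5, rounds=10):
    """vouches: {user: [users they vouch for]}; seeds: this user's authorities."""
    rep = {u: (1.0 if u in seeds else 0.0) for u in vouches}
    for _ in range(rounds):
        nxt = {u: (1.0 if u in seeds else 0.0) for u in vouches}
        for u, outs in vouches.items():
            if not outs:
                continue
            share = damping * rep[u] / len(outs)  # split decayed trust among vouchees
            for v in outs:
                nxt[v] = nxt.get(v, 0.0) + share
        rep = nxt
    return rep

vouches = {"alice": ["bob"], "bob": []}
rep = personal_reputation(vouches, seeds={"alice"})
```

Because the seeds differ per user, two users see different scores for the same stranger, which is exactly the "no one-for-all graph" property the parent comment asks for.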
I think apps that want assurance of your identity should pay for your KYC. They want valuable users, after all, and this should go into their CAC. Users still pay nothing, and the identity service does not care about their info; after verification it drops any details, like uploaded documents, and keeps only a certificate.
The cost of running this service is likely keeping up with ID systems for multiple countries, plus infrastructure and support.
Potentially, if this is made into a protocol, it can be decentralized, kind of like the SSL/CA system, so each country manages its own rules.
But they can just plug an AI into a verified account.
I am less concerned here. If you plug an AI into your identity, I guess your identity is revoked. I see the problem, though: once a service notices you're an AI, there is no way to block you, because we don't really know who you are, only that you're human.
So we need a mechanism that makes this identity verifiable; maybe you get a unique identifier from the identity service, so the account can be blocked. There is no mechanism to report you to, say, the identity service ("this is a bot"), so you manage your own block list.
The risk here is fingerprinting, your id can be cross referenced across apps. Maybe here is where you implement a zk proof that you're who you say you are.
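Short of full zero-knowledge proofs, one known trick for blocking cross-app fingerprinting while still allowing per-app blocking is pairwise pseudonymous identifiers (the approach OpenID Connect uses for pairwise subject IDs): the identity service derives a per-app ID with a keyed hash, so each app sees a stable pseudonym that other apps can't correlate. A minimal sketch, with all names and keys illustrative:

```python
import hashlib
import hmac

def pairwise_id(service_secret: bytes, user_id: str, app_id: str) -> str:
    """Stable per-(user, app) pseudonym.

    The same human always gets the same ID within one app (so that app can
    block them), but IDs issued to different apps can't be cross-referenced
    without the identity service's secret key.
    """
    msg = f"{app_id}|{user_id}".encode()
    return hmac.new(service_secret, msg, hashlib.sha256).hexdigest()
```

The identity service keeps the secret; the apps only ever see the derived pseudonyms, so a data broker merging two apps' user lists learns nothing.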
I don’t love the original idea because uploading identification is risky. You could just plug AI into a verified account but at least the vector is a single account instead of unbounded.
But then if the AI is detected that person can be permanently banned. No more AI. No new accounts.
So if someone compromises your identity they can unperson you? How will the AI be detected? By another AI?
"So if someone compromises your identity they can unperson you?"
You've identified a problem that unrelated systems also have. Like banks and identity theft. This solution isn't responsible for causing that problem.
"How will the AI be detected? By another AI?"
However a platform likes to. Let the best platform win.
No, the problem is people want everything for free. The solution is very simple. Charge $5 to open an account. Only allow a person to moderate one forum/community/subreddit/etc. Delete accounts that break rules ruthlessly. This would work, but no one on the internet wants to pay for a quality forum, so we deal with the same crap over and over and over and pretend like there is some other solution.
They want ad supported so they can block all the ads and let the suckers pay. Then complain relentlessly when the content caters to suckers.
More evidence that "millions of people in the same room" isn't a sustainable model for online communities. I've been feeling for years that some kind of "chain of trust" and/or "X degrees of separation" reputation model is basically inevitable for broad-scale online social communities.
I wonder if the old forum model would work. Instead of these mega-forum-platforms, there would just be small communities with a niche focus at their own URLs.
I suppose bots could find forums that use the most popular software and still make accounts and spam, but it would be much more obvious and less fruitful for someone to spam deck builders in Vancouver (something I saw often on Digg) on a forum focused on aquarium owners in the Midwest.
Spammers don't care if it's fruitful - they just run software that finds every forum and spams it. That's why you can block so many by asking "is an elephant big or small?" on the sign-up form.
Old forums still exist and work just fine without any CEOs pontificating about "community".
I'm on plenty of niche interest boards built on PHPbb, Xenforo and Discourse. Chronologically ordered discussions, RSS support, no algorithmic "For You" bullshit.
Build it and they will come.
Is Kevin Rose known for knowing how to address bot problems? I think it's a little absurd to address a bot problem by bringing back the original founder. I believe he was great at community building and functionality, but bot prevention is a different beast. The post mentions that they also worked with third parties, which I believe should have more bot-prevention experience than Kevin.
To be fair, I don't know Kevin Rose personally, so maybe he knows more than the industry, but I highly doubt it.
Reddit has the same problem. They are fighting it more or less successfully. I would look more in that direction.
Is Reddit fighting the bot problem? They introduced a feature to hide post history which makes it hard to know whether you’re interacting with a spammy bot account. If anything they’re embracing it.
Actions speak louder than words. They’ve added features that help spammers hide their behaviour, they are rejecting API keys when people apply for access to deal with the bot problem, they ignore subreddits with spam-friendly moderators, and they ignore reports on vote manipulation. There’s a tonne of low-hanging fruit for tackling the bot problem on Reddit that they aren’t doing anything about, and often it seems like people outside of Reddit do a better job without access to the raw data than people inside Reddit do with the raw data.
I know they claim to care about the bot problem, but they appear at absolute best incredibly complacent about it, if not complicit. All those OnlyFans spammers, AI spam bots, etc. are engagement. They are ruining the platform for people, but engagement figures don’t distinguish between fake engagement and real people. The outcome of their current behaviour is for engagement to steadily rise while the value to real people steadily falls. It’s like they want to be the poster child for Dead Internet Theory.
I wonder what they say if you apply for API access and say you want to run a spam bot.
I don't think this is helpful to bots, tbh. For over a decade, every time I come across a clear bot account, its comment history seems very human. I assume they're either buying real accounts for one-off astroturfing hit-and-runs, combined with deleting older astroturfing comments after the submission stops getting traffic to hide their footprints. Or, more likely, there's a giant ring of bots that submit innocent things and comment preplanned innocent things in a giant bot circle, then make pointed comments on r/politics or whatever after establishing an innocent baseline. This is the obvious approach I'd take if it were me.
I'd also be really surprised if there wasn't coordination with Reddit employees/execs themselves for big advertisers.
The Reddit CEO mentioned that the community thrives when humans talk to humans - and not with AI slop. He also said they are working on efforts to identify automated accounts.
https://www.businessinsider.com/reddit-ceo-platform-most-hum...
Reddit can't even manage to regularly identify and ban bots that copy previously popular posts/comments verbatim, and that's a much easier problem than modern LLM-based bots.
The used car salesman mentioned that the car was in perfect working condition.
Actions speak more than words, especially true for CEOs.
The bot problem is serious right now. For my own network, I've switched to only allowing accounts that have paid at least once to post. It's a hard barrier (minimum spend is $2 for my site), but it almost completely solves the bot problem.
We really need some way to "verify as human" in the coming years.
If your site creates more than $3 of value then I’ll happily setup 1000 bots a day and pay you the $2 per account every single day
> We really need some way to "verify as human" in the next coming years.
I don't believe there is any practical way to do it.
Sure, there are ways to verify a human linked to a specific account exists in a one-off fashion, but for individual interactions you'll never know that it isn't an LLM reading and posting if they put even a small amount of effort to make it seem humanish.
I am amazed that nobody has managed to come up with some revolutionary patented tech yet that can keep all the bots out, or at least 99% of them.
Who wants to join me in writing an AGPL "antisocial network", which would be basically a convenient interface over rss-bridge, gnus, and deltachat?
I'm somewhat relieved. I didn't invest much effort into my community, but I had an amazing, top-level name and over 1000 members.
Moderation was really hard. We didn't have AI posters, but there were persistent posters who were extremely annoying (mostly in their post volume and long-windedness) while still following the rules. I was really trying a hands-off approach with moderation, and it seemed to be working for the most part. It's all moot now though.
They literally just went public in Jan. Building it back up was going to take years
I don’t understand what kind of shenanigans transpired. But it seems there’s more to it than “bots”.
If it truly is bots, maybe a private invite only social network is the way to go.
I stopped using Digg a long long time ago. It just felt too slow to get the news I care about.
I was an avid Slashdot user way back in the day, but the site was basically the same throughout the day, and I wanted faster updates. Digg did this perfectly for a time, but eventually I migrated entirely to Reddit (even before whatever that drama was that caused a big exodus from Digg).
I think Reddit right now is the sweet spot: up to date information, longer-term articles to read, and easy to catch up on things I missed. I was recently pressured to sign up for X (or Twitter or whatever), and I had to turn off all of the notifications since I was constantly spammed with "BREAKING: X RESPONDS TO Y ABOUT Z!!!!"
Right now having Reddit for scrolling and Hackernews for articles+discussion feels like it works for me.
Reddit is flooded with AI slop. r/all currently has AI-generated text posts and articles on the first page. Upvoted because they're the typical orange man bad stuff, but LLM slop nonetheless. Assuming the engagement is organic, it's depressing how much of the site has no eye for this stuff.
There are decent small communities I'm a part of but the trash feels like it is encroaching.
And the notifications you describe are exactly reddit's notifications? "your comment received 10/20/50/100 upvotes!" "x responds to y about z" "News is trending"
If you use the old site you don't get notifications.
How many more times is it going to be rebuilt before they grasp the obvious bit - it's dead Dave.
>When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on.
This 1000x times
Make it $1 to register and you’ll cut the audience by 90%, and 95+ percent of those remaining will be real people. Just a guess, but based on some professional experience.
Remember when Reddit sold to Conde Nast for peanuts because Digg was going to win? :D
From the article, verbatim:
> We're not giving up. Digg isn't going away.
Post title is misleading.
Ah, just let it go already. Why keep ruining people's memories...
Cheapest four letter domain on Earth at this point, given the negative value of the business and brand.
I downloaded the mobile app, and now it's just a login screen that doesn't work
Soon hacker news will be gone. Way too many bots and chinese/russian trolls.
Community /books helped me track down a book I've been dying to reread for almost ten years now. Reddit failed the task, so did all other places I turned to. Cheers for that, and rip.
Their strategy did not make any sense: only a few pre-approved broad-and-shallow forums about everything, instead of trying to attract niche communities from Reddit or even FB Groups.
Why doesn't that make sense?
Because there's no real discussion in such broad communities. Only jokes, generic replies, and silly fights. They're equivalent to comment sections on news sites.
And it doesn't make sense to you that someone might make money from a platform without discussion?
They introduced user-created communities a few months ago. They had problems with squatting and splintering, which might have played a role in their announcement.
Is that the whole story? Why isn't Reddit overrun by bots then (or is it?), and why wouldn't basic proof-of-work techniques fence out bots? Since they started out just in January, isn't it plausible to assume they didn't meet their target user figures and investors jumped ship?
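For what it's worth, "basic proof-of-work" here usually means a hashcash-style signup puzzle: the server issues a challenge, the client must find a nonce whose hash has N leading zero bits, and verification costs the server a single hash. A minimal sketch (challenge string and difficulty are illustrative):

```python
import hashlib
import itertools

def solve(challenge: str, bits: int) -> int:
    """Brute-force a nonce so sha256(challenge:nonce) has `bits` leading zero bits.

    Expected work is ~2**bits hashes, paid by the client at signup.
    """
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce

def verify(challenge: str, nonce: int, bits: int) -> bool:
    """Checking a solution costs one hash, regardless of puzzle difficulty."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0
```

The asymmetry (milliseconds for one human signup, real money at bot-farm scale) is the whole appeal, though it only raises the price of spam rather than stopping a funded operator.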
Go to any career subreddit and it's almost entirely LLM-generated rage bait. Never mind political subreddits; those have been gamed for years.
On reddit you can downvote spam, so I don't think it survives on the populated reddits.
Reddit very much is.
Damn. I still have faith that what a lot of us that migrated to new Digg envision is possible. Post pandemic Internet has choppier waters than before, but I'm going to try and keep a positive outlook and I look forward to their followup emails.
Thanks for the fun this past year Digg.
I can appreciate how "building social is hard" in 2026, but is trying to be social on the internet still a worthy goal? The world has such problems with isolation and distrust that I'm not sure "online" is the solution. If Digg can do something different and help heal the world, more power to them, but I'm not holding my breath. That's not a slight to Digg, but more a comment on the slipping mental health of the world.
> This is not a reflection of their talent, their effort, or their belief in what we were building. It's a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.
Ironic, they use AI in their shutdown post that blames AI.
>> This is not a reflection of their talent, their effort, or their belief in what we were building. It's a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.
> Ironic, they use AI in their shutdown post that blames AI.
This… seems like regular prose to me. What makes you say so confidently it was written by AI?
There are more tells. Rule of three, short cliche sentences.
> We know how frustrating this is, and we hope you'll give us another look once we have something to show, we’ll save your usernames!
I think it's partly human. But ex:
> Network effects aren't just a moat, they're a wall.
isn't a natural sentence.
So no evidence at all, just your need to point out possible LLM use wherever you imagine it. You could be an LLM agent.
I think you're spot on. It feels like parts were edited with AI and parts were left alone.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
The statement this is making is presumably the crux of the problem (Digg cannot survive without trust!) but it's worded so poorly that it's hard to imagine someone sat down and figured these three sentences were the best way to make the point.
I think anything with the “it’s not X it’s Y” is suspect these days. I cringe when I catch myself doing it.
That sounds like a you problem unconnected with reality.
How is that not a natural sentence? I think people are reading into stuff. That's just good writing.
Could it be generated? Sure. But there aren't the obvious tells you act like there are.
Here's the context:
"We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall."
It's a mixed metaphor which doesn't make any sense. There are really very few ways in which this can be considered good writing - I guess the grammar is ok even if it is nonsense.
So let's break it down - underestimated the gravitational effects - ok, this is nice, like where it's going talking about these big competitors sucking in users, but then we have the metaphor extended to breaking point:
Network effects are a moat, but not just a moat, they're a wall (which is really not anything like a moat). So which of these 3 things are they, and why are we mixing the metaphors of gravity (pulling in customers), moats (competitive moat) and walls (walled gardens).
It's just all a bit nonsensical, the kind of fuzzy prose that seems superficially impressive without actually saying anything meaningful, which is exactly where LLMs excel. Go try generating an article from just the headings in this post, and see how similarly it reads.
If you want your gradation to work, the items need to be similar and progressively stronger. That's why it doesn't work. A wall is not "stronger" than a moat. "Not a fence, a rampart" would work.
Compare to the canonical example from Cyrano de Bergerac: "'Tis a rock! ... a peak! ... a cape! -- A cape, forsooth! 'Tis a peninsula!"
Yes I think that’s another reason this sentence doesn’t work well.
That’s the entire point - network effects are commonly discussed as being a moat (people can’t cross without difficulty) but are actually a wall - people can’t cross and can’t view the other side. Seems simple and straightforward to me.
Isn't a moat and a wall pretty similar in function? They both keep people in or out of an area.
Also, weren't "moats" commonly paired with a wall in real life? As in a moat around a castle wall?
In a castle for defence, yes similar in function but not form and often used together not one or the other.
In business metaphors no they are used for different things and also when you create a metaphor you should stick with it, that’s what makes this jarring and weird.
"Network effects aren't just a moat, they're a wall." is a VERY ChatGPT way to write. It's not proof, but the parent is right that this smells a bit of AI writing.
It's also a VERY HUMAN way to write.
I don't care so much about Digg, but the endless "haha, I caught you!" comments annoy me more than the rare actual AI-written content they label.
Not to the same extent at all. If you use ChatGPT for a while, you'll see it writes like that very frequently. Humans do write like that sometimes, but not with anywhere the frequency that ChatGPT does it. That's weak evidence for it being ChatGPT.
So based on your one example, you immediately went ChatGPT! because…?
Suppose ChatGPT uses a semicolon more often than an individual person. On a pageful of comments from many random people, someone using a semicolon doesn't mean they're a bot even if 100% of their comments on that page includes one.
It behooves you to not write like that if you don’t want people dehumanizing you.
If stupid people choose to dehumanize based on stupid rules, that is not my problem.
Screw them. I was writing like that before AI came along, and I won’t change just because it offends their delicate sensibilities.
> It behooves you to not write like that if you don’t want people dehumanizing you.
I have to strongly disagree with you on this. It behooves us (as a species) not to degrade our own manner of speaking and writing simply because of a (possibly temporary) technical anomaly.
In my view, it would be really, really sad to lose expressive punctuation or ways of constructing sentences simply because they're overused by AI.
I, for one, won't be a part of that, and I hope you won't, either.
Your prose is poor so it is no wonder. Half the words you use are superfluous, some are nonsensical, and you beg the question.
Please consider reading the Hacker News community guidelines before you post again: https://news.ycombinator.com/newsguidelines.html
Would now be a good time to point out that I said that "It's not proof" and "weak evidence"? Because that is what I said.
Your next sentence then immediately took it as proof and evidence, so no.
I think a human would have split the "it's not this, it's that" type of sentence into two separate sentences that could be more descriptive. This is a blog post, not a tweet, so there's no length constraint.
If they wanted to keep it to a single sentence, they could have used a word like "rather" to act as a separator between moat and wall.
The rule of three is a basic writing structure taught to 12 year olds. I know people have given up on even the basics (capitalisation) in recent years but let's not just banish structured writing to "AI".
"This is not...this is" is a tell
I think we'll have to disagree on that. Humans write that way, too, and they've written that way for far longer than AI.
(Where do you think AI picked up its writing habits from?)
There isn't any "this is" in that sentence.
Much like the vouch system mitchellh is working on for open source contributors, the wider web needs a trust layer that can vouch for a poster's status as human or AI, along with a "quality" score that can travel from site to site.
This leads to paid certifications from limited experts leading to political payoffs controlling the certifiers
> We're not giving up. Digg isn't going away.
I think the HN title needs adjusting
"Digg is Just Resting"
Digg has gone to live on a farm in the countryside where it can run around and play with aol, myspace and all the other websites.
No you can't visit.
https://flipso.com might be an alternative.
Subreply.com is working just fine, no AI agents. Spam accounts get deleted.
The legibility of that site is incredibly bad.
They fired a significant number of people on a Friday.
"Morbius" of social news aggregators
I am very curious where people who complain about the bots really get to see them.
The only website which became totally useless for me after the general availability of LLMs is OkCupid. It's indeed dead. The rest are fine.
What am I doing differently compared to everyone else?
I'm regularly using: telegram, whatsapp, wechat, hackernews, lobsters, reddit, opennet.ru, vk.com, pornhub, youtube, odysee, libera.chat, arxiv, gmail, github, gitlab, sourcehut, codeberg, thepiratebay, rutracker, Anna's archive, xda-developers.
facebook and twitter became broken for me, but not because of bots, rather because of the "smart feed" ("the algorithm"), which is hiding all posts of my friends and promotes incendiary garbage.
In other words, I am seeing enshittification full-scale, but not the bots.
The only one I'd expect to see them is vk but maybe they can't make money doing it in Russia.
YouTube comment sections are botted.
Ah, I never read YouTube comments. Nothing ever useful there.
Wait, digg was back?
Interesting that there was no notice given to the people who paid $5 for pre-launch access and who helped build the communities before it went public. Not a good way to get anyone to invest their time in it the next time they launch. "Bots" is a shitty excuse too. Their whole thing was that they were going to build it and utilise "AI" to prevent that and make moderation more automated. In reality they launched zero of those features and then opened it up to the world completely unprepared.
This is why identity verification is going to become mandatory for anyone who wants to participate in these kinds of sites. If you want to blame someone for it, blame humanity. I reluctantly will say that I welcome it if it would bring the dead internet back to life.
Didn't Kevin Rose re-acquire Digg in the last year or so?
He and Alexis Ohanian (co-founder of Reddit) went in on it together.
Yes, he did. Now he's gonna be the full-time CEO according to this.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
Hmm...
> We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall.
What does this even mean? How many metaphors can it mix up in one paragraph? Can't they write a blog post the old fashioned way, with feeling? Imagine reading a corporate blog post about being laid off which the founder couldn't even be bothered to write.
Amazing how close to corporate newspeak chatgpt can get (prompt was the headings of this blog post), it has the same sort of blank say-nothing feeling of this blog post: https://chatgpt.com/s/t_69b4890e54ac819193f221351ea900a7
Everyone here seems focused on bots, as does the author of the post. The bigger problem (as also stated in the latter half of the post) is straightforward: their product wasn't very good. Who is asking for Digg to return, save for a very (very) tiny community of nostalgic diehards? Digg is irrelevant. That doesn't mean the internet is dead. It just means Digg is.
lol
100% that entire page was written by an LLM. So fucking obvious and I’m so tired of reading the same awful writing style with all these corporate spiel rants. If you don’t care enough to write something yourself, just don’t even bother.
god damn it
i really enjoyed the new digg
That was fast.
Did they have a working business plan?
Step 1: Copy Reddit
Step 2: ?
Step 3: Profit!
Step 0: acquire the company through a leveraged buyout by the CEO's own private equity firm [0]
Step 1: speed-run into the ground while loading it up with the debt of the purchase price and paying yourself management fees.
Step 2: close up shop, write down the loss and reduce tax liability for next year?
[0] https://techcrunch.com/2026/01/14/digg-launches-its-new-redd...
Surprised anyone took the revive attempt seriously frankly.
"It may be that the purpose of your existence is merely to serve as a warning to others"
Really annoying, I was starting to use it for a few niche communities instead of Reddit.
If they relaunch, I hope they develop something integrated with the fediverse. I believe the time to build walled gardens is over; plugging into the fediverse might give them a running start to build something together with the wider fediverse community, maybe something easier to use for non-techies and well moderated.
We will see I guess…
Digg may have a bot problem, but Reddit isn't far behind. So many subreddits are full of slop that they've become useless and/or completely unreliable.
What's an actual viable solution to this kind of thing?
CATPCHAs aren't it. Maybe micro-fees to actually post things would discourage bot posting? I really don't know.
Seems like it's just dead internet all over the place these days.
Without irony: accept the death of the internet, and touch grass
Registration by snail mail coming soon to most of the Internet?
That will never happen, so why bring it up?
The title doesn’t capture the mood of the page. Maybe:
Dead internet theory confirmed, Digg the latest victim
Whatever happened to MrBabyMan?
Did you know it was back? They are blaming bots.
Sure.. Digg.com is back (118 points, 6 months ago, 209 comments) https://news.ycombinator.com/item?id=44963430
"This isn't just a Digg problem. It's an internet problem."
Yes.
> We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall. The loyalty users have to the communities they've already built elsewhere is profound. Getting people to move is a hard enough problem. Getting them to move and bring their people with them is something else entirely.
This. So much This.
So, as predicted, it wasn't really worth the eyeballs or the inevitable forced media coverage from 6 months ago.
And I will continue to die on the hill that Reddit only survived/became "successful" because of the legendary Digg slip-up and exodus. Alexis Ohanian still doesn't seem to have any clue that it was right-place-right-time, and Kevin Rose seems not to have learned much either. Can we stop giving either any more credibility? As with any social site, it's the user base/community that helps pull it through darkness. And no one was really asking for this.
Let sleeping dogs lie.
> legendary Digg slip up
I wasn't a digg user, but this was done to combat 'voting rings' (bots), and the reddit migration was memed partially because it was/is far more open to manipulation. So at least their principles have been somewhat consistent.
I think the [dupe] is a false alarm in the sense that they just put up a banner saying it is shut down and I think they were starting it up again back then.
Ok, unduped now. Thanks!