I've been using Go more or less in every full-time job I've had since pre-1.0. It's simple for people on the team to pick up the basics, it generally chugs along (I'm rarely worried about updating to latest version of Go), it has most useful things built in, it compiles fast. Concurrency is tricky but if you spend some time with it, it's nice to express data flow in Go. The type system is most of the time very convenient, if sometimes a bit verbose. Just all-around a trusty tool in the belt.
But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I see more leniency in fixing quirks in the last few years, like at some point I didn't think we'd ever see generics, or custom iterators, etc.
The points about RAM and portability seem mostly like personal grievances though. If it was better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.
But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Python is a mess with the GIL and async libraries that are hard to reason with. C, C++, Java, etc. need external libraries to implement threading, which can't be reasoned with in the context of the language itself.
So, go is a perfect fit for the http server (or service) usecase and in my experience there is no parallel.
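For a flavor of what that looks like, here's a toy sketch, nothing more than a producer goroutine and a channel:

package main

import "fmt"

// A tiny CSP-style pipeline: one goroutine produces, main consumes.
// The data flow is a channel rather than shared state plus locks.
func main() {
    nums := make(chan int)

    go func() {
        defer close(nums)
        for i := 1; i <= 3; i++ {
            nums <- i * i
        }
    }()

    for n := range nums {
        fmt.Println(n) // 1, 4, 9
    }
}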
> Java etc. need external libraries to implement threading, which can't be reasoned with in the context of the language itself.
What do you mean by this for Java? The library is the runtime that ships with Java, and while they're OS threads under the hood, the abstraction isn't all that leaky, and it doesn't feel like they're actually outside the JVM.
> So, go is a perfect fit for the http server (or service) usecase and in my experience there is no parallel.
Elixir handling 2 million websocket connections on a single machine back in 2015 would like to have a word.[1] This is largely thanks to the Erlang runtime it sits atop.
Having written some tricky Go (I implemented Raft for a class) and a lot of Elixir (professional development), it is my experience that Go's concurrency model works for a few cases but largely sucks in others, and it is way easier to write footguns in Go than it ought to be.
I worked in both Elixir and Go. I still think Elixir is best for concurrency.
I recently realized that there is no easy way to "bubble up a goroutine error", and I wrote some code to make sure that was possible, and that's when I realized, as usual, that I was rewriting part of the OTP library.
The whole supervisor mechanism is so valuable for concurrency.
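For what it's worth, the closest stock answer in Go is golang.org/x/sync/errgroup, which only gets you the "bubble up the first error" part; there's nothing like a supervision tree. A rough sketch:

package main

import (
    "errors"
    "fmt"

    "golang.org/x/sync/errgroup"
)

func main() {
    var g errgroup.Group

    g.Go(func() error { return nil })
    g.Go(func() error { return errors.New("worker 2 failed") })

    // Wait blocks until every goroutine returns and surfaces the first error.
    // No restarts, no isolation: nothing resembling an OTP supervisor.
    if err := g.Wait(); err != nil {
        fmt.Println("bubbled up:", err)
    }
}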
> using the CSP-like (goroutine/channel) formalism which is easy to reason with
I thought it was a seldom mentioned fact in Go that CSP systems are impossible to reason about outside of toy projects so everyone uses mutexes and such for systemic coordination.
I'm not sure I've even seen channels in a production application used for anything more than stopping a goroutine, collecting workgroup results, or something equally localized.
For Erlang and Elixir, concurrent programming is pretty much their thing so grab any book or tutorial on them and you'll be introduced to how they handle it.
And of those seven, how many are mainstream? A single one...
So it's really Go vs. Java, or you can take a performance hit and use Erlang (valid choice for some tasks but not all), or take a chance on a novel paradigm/unsupported language.
Erlang (or Elixir) are absolutely Go replacements for the types of software where CSP is likely important.
Source: spent the last few weeks at work replacing a Go program with an Elixir one instead.
I'd use Go again (without question) but it is not a panacea. It should be the default choice for CLI utilities and many servers, but the notion that it is the only usable language with something approximating CSP is idiotic.
Not to dispute too strongly (since I haven't used this functionality myself), but Node.js does have support for true multithreading since v12: https://nodejs.org/dist/latest/docs/api/worker_threads.html. I'm not sure what you mean by "M:1 threaded" but I'm legitimately curious to understand more here, if you're willing to give more details.
There are also runtimes like Hermes (used primarily by React Native) that support separating operations between the graphics thread and other threads.
All that being said, I won't dispute OP's point about "handling concurrency [...] within the language"- multithreading and concurrency are baked into the Golang language in a more fundamental way than Javascript. But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
Yeah, those are workers, which require manual administration of shared or passed memory:
> Within a worker thread, worker.getEnvironmentData() returns a clone of data passed to the spawning thread's worker.setEnvironmentData(). Every new Worker receives its own copy of the environment data automatically.
M:1 threaded means that the user space threads are mapped onto a single kernel thread. Go is M:N threaded: goroutines can be arbitrarily scheduled across various underlying OS threads. Its primitives (goroutines and channels) make both concurrency and parallelism notably simpler than most languages.
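A minimal sketch of what that buys you in practice, using nothing beyond the standard library:

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    fmt.Println("OS threads for Go code:", runtime.GOMAXPROCS(0))

    var wg sync.WaitGroup
    results := make(chan int, 8)

    // Eight goroutines (the M) get scheduled across GOMAXPROCS OS threads (the N).
    for i := 0; i < 8; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            results <- n * n
        }(i)
    }
    wg.Wait()
    close(results)

    for r := range results {
        fmt.Println(r)
    }
}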
> But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
I’d personally disagree in this context. Almost every language has pthread-style cro-magnon concurrency primitives. The context for this thread is precisely how go differs from regular threading interfaces. Quoting gp:
> The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Yes other languages have threading, but in go both concurrency and parallelism are easier than most.
Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences.
I'd say that it's entirely the other way around: they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from the first principles, and solving the problem correctly (or using a solution that was Not Invented Here).
Go's filesystem API is the perfect example. You need to open files? Great, we'll create
func Open(name string) (*File, error)
function, you can open files now, done. What if the file name is not valid UTF-8, though? Who cares, hasn't happened to me in the first 5 years I used Go.
> Who cares, hasn't happened to me in the first 5 years I used Go.
This is the mindset that makes me want to throttle the golang authors.
Golang makes it easy to do the dumb, wrong, incorrect thing that looks like it works 99.7% of the time. How can that be wrong? It works in almost all cases!
The problem is that your code is littered with these situations everywhere. You don’t think to test for them, it’s worked on all the data you fed it so far, and then you run into situations like the GP’s where you lose data because golang didn’t bother to think carefully about some API impedance mismatch, can’t even express it anyway, and just drops things on the floor when it happens.
So now your user has irrecoverably lost data, there's a bug in your bug tracker, and you and everyone else who uses go has to solve for yet another stupid footgun that should have been obvious from the start and can never be fixed upstream.
And you, and every other golang programmer, gets a steady and never-ending stream of these types of issues, randomly selected for, for the lifetime of your program. Which one will bite you tomorrow? No idea! But the more and more people who use it, the more data you feed it, the more clients with off-the-beaten-track use-cases, the more and more it happens.
Oops, non-UTF-8 filename. Oops, can’t detect the difference between an empty string in some JSON or a nil one. Oops, handed out a pointer and something got mutated out from under me. Oops, forgot to defer. Oops, maps aren’t thread-safe. Oops, maps don’t have a sane zero value. And on and on and fucking on and it never goddamn ends.
And it could have, if only Rob Pike and co. didn’t just ship literally the first thing they wrote with zero forethought.
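To pick just one off that list, the map zero value in a nutshell:

package main

func main() {
    var m map[string]int // the zero value of a map is nil

    _ = m["read"] // reads on a nil map are fine: they return the zero value

    m["write"] = 1 // writes panic at runtime: "assignment to entry in nil map"
}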
> Golang makes it easy to do the dumb, wrong, incorrect thing that looks like it works 99.7% of the time. How can that be wrong? It works in almost all cases!
my favorite example of this was the go authors refusing to add monotonic time into the standard library because they confidently misunderstood its necessity
(presumably because clocks at google don't ever step)
then after some huge outages (due to leap seconds) they finally added it
now the libraries are a complete mess because the original clock/time abstractions weren't built with the concept of multiple clocks
and every go program written is littered with terrible bugs due to use of the wrong clock
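(for reference, a rough sketch of the post-1.9 behavior, and of how easily the monotonic reading silently disappears:)

package main

import (
    "fmt"
    "time"
)

func main() {
    start := time.Now() // since Go 1.9 this carries a monotonic reading alongside wall time

    time.Sleep(10 * time.Millisecond)

    // Since/Sub use the monotonic reading when both times have one,
    // so this stays correct even if the wall clock is stepped meanwhile.
    fmt.Println(time.Since(start))

    // Round(0) strips the monotonic reading; from here on you're back
    // to measuring with the wall clock, i.e. the wrong clock.
    wallOnly := start.Round(0)
    fmt.Println(time.Since(wallOnly))
}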
I can count on one hand the number of times I've been bitten by such things in over 10 years of professional Go; I've been bitten more often than that just in the last three weeks by half-assed Java.
Is golang better than Java? Sure, fine, maybe. I'm not a Java expert so I don't have a dog in the race.
Should and could golang have been so much better than it is? Would golang have been better if Pike and co. had considered use-cases outside of Google, or looked outward for inspiration even just a little? Unambiguously yes, and none of the changes would have needed it to sacrifice its priorities of language simplicity, compilation speed, etc.
It is absolutely okay to feel that go is a better language than some of its predecessors while at the same time being utterly frustrated at the very low-hanging, comparatively obvious, missed opportunities for it to have been drastically better.
There is a lot to say about Java, but the libraries (both standard lib and popular third-party ones) are goddamn battle-hardened, so I have a hard time believing your claim.
While the general question about string encoding is fine, unfortunately in a general-purpose and cross-platform language, a file interface that enforces Unicode correctness is actively broken, in that there are files out in the world it will be unable to interact with. If your language is enforcing that, and it doesn't have a fallback to a bag of bytes, it is broken, you just haven't encountered it. Go is correct on this specific API. I'm not celebrating that fact here, nor do I expect the Go designers are either, but it's still correct.
This is one of those things that kind of bugs me about, say, OsStr / OsString in Rust. In theory, it’s a very nice, principled approach to strings (must be UTF-8) and filenames (arbitrary bytes, almost, on Linux & Mac). In practice, the ergonomics around OsStr are horrible. They are missing most of the API that normal strings have… it seems like manipulating them is an afterthought, and it was assumed that people would treat them as opaque (which is wrong).
Go’s more chaotic approach to allow strings to have non-Unicode contents is IMO more ergonomic. You validate that strings are UTF-8 at the place where you care that they are UTF-8. (So I’m agreeing.)
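A minimal sketch of that "validate where you care" approach:

package main

import (
    "fmt"
    "unicode/utf8"
)

func main() {
    name := string([]byte{'f', 'o', 'o', 0xff}) // a Go string happily holding invalid UTF-8

    // Validate only at the boundary where UTF-8 actually matters
    // (JSON encoding, templates, protocol output, ...).
    if !utf8.ValidString(name) {
        fmt.Println("not valid UTF-8; escape or reject it here")
    }
}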
The big problem isn't invalid UTF-8 but invalid UTF-16 (on Windows et al). AIUI Go had nasty bugs around this (https://github.com/golang/go/issues/59971) until it recently adopted WTF-8, an encoding that was actually invented for Rust's OsStr.
WTF-8 has some inconvenient properties. Concatenating two strings requires special handling. Rust's opaque types can patch over this but I bet Go's WTF-8 handling exposes some unintuitive behavior.
There is a desire to add a normal string API to OsStr but the details aren't settled. For example: should it be possible to split an OsStr on an OsStr needle? This can be implemented but it'd require switching to OMG-WTF-8 (https://rust-lang.github.io/rfcs/2295-os-str-pattern.html), an encoding with even more special cases. (I've thrown my own hat into this ring with OsStr::slice_encoded_bytes().)
The current state is pretty sad yeah. If you're OK with losing portability you can use the OsStrExt extension traits.
Yeah, I avoided talking about Windows which isn’t UTF-16 but “int16 string” the same way Unix filenames are int8 strings.
IMO the differences with Windows are such that I’m much more unhappy with WTF-8. There’s a lot that sucks about C++ but at least I can do something like
#if _WIN32
using pathchar = wchar_t;
constexpr pathchar sep = L'\\';
#else
using pathchar = char;
constexpr pathchar sep = '/';
#endif
using pathstring = std::basic_string<pathchar>;
Mind you this sucks for a lot of reasons, one big reason being that you’re directly exposed to the differences between path representations on different operating systems. Despite all the ways that this (above) sucks, I still generally prefer it over the approaches of Go or Rust.
> You validate that strings are UTF-8 at the place where you care that they are UTF-8.
The problem with this, as with any lack of static typing, is that you now have to validate at _every_ place that cares, or to carefully track whether a value has already been validated, instead of validating once and letting the compiler check that it happened.
In practice, the validation generally happens when you convert to JSON or use an HTML template or something like that, so it’s not so many places.
Validation is nice but Rust’s principled approach leaves me high and dry sometimes. Maybe Rust will finish figuring out the OsString interface and at that point we can say Rust has “won” the conversation, but it’s not there yet, and it’s been years.
Except when it doesn’t and then you have to deal with fucking Cthulhu because everyone thought they could just make incorrect assumptions that aren’t actually enforced anywhere because “oh that never happens”.
That isn’t engineering. It’s programming by coincidence.
> Maybe Rust will finish figuring out the OsString interface
The entire reason OsString is painful to use is because those problems exist and are real. Golang drops them on the floor and forces you pick up the mess on the random day when an unlucky end user loses data. Rust forces you to confront them, as unfortunate as they are. It's painful once, and then the problem is solved for the indefinite future.
Rust also provides OsStrExt if you don’t care about portability, which greatly removes many of these issues.
I don’t know how that’s not ideal: mistakes are hard, but you can opt into better ergonomics if you don’t need the portability. If you end up needing portability later, the compiler will tell you that you can’t use the shortcuts you opted into.
Much more egregious is the fact that the API allows returning both an error and a valid file handle. That may be documented to not happen. But look at the Read method instead. It will return both errors and a length you need to handle at the same time.
The Read() method is certainly an exception rather than a rule. The common convention is to return a nil value upon encountering an error unless there's real value in returning both, e.g. for a partial read that failed in the end but produced some non-empty result nevertheless. It's a rare occasion, yes, but if you absolutely have to handle this case you can. Otherwise you typically ignore the result if err != nil. It's a mess, true, but the real world is also quite messy unfortunately, and Go acknowledges that.
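For reference, the loop the io.Reader contract implies looks roughly like this (the process callback is just a placeholder):

package main

import (
    "io"
    "strings"
)

// consume follows the documented io.Reader contract: a single Read call
// may return n > 0 together with a non-nil error (including io.EOF),
// so the bytes have to be handled before the error is.
func consume(r io.Reader, process func([]byte)) error {
    buf := make([]byte, 4096)
    for {
        n, err := r.Read(buf)
        if n > 0 {
            process(buf[:n]) // handle the data first
        }
        if err == io.EOF {
            return nil
        }
        if err != nil {
            return err
        }
    }
}

func main() {
    _ = consume(strings.NewReader("hello"), func(b []byte) { _ = b })
}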
Most of the time if there's a result, there's no error. If there's an error, there's no result. But don't forget to check every time! And make sure you don't make a mistake when you're checking and accidentally use the value anyway, because even though it's technically meaningless it's still nominally a meaningful value since zero values are supposed to be meaningful.
Oh and make sure to double-check the docs, because the language can't let you know about the cases where both returns are meaningful.
The real world is messy. And golang doesn't give you advance warning on where the messes are, makes no effort to prevent you from stumbling into them, and stands next to you constantly criticizing you while you clean them up by yourself. "You aren't using that variable any more, clean that up too." "There's no new variables now, so use `err =` instead of `err :=`."
It breaks. Which is weird because you can create a string which isn't valid UTF-8 (eg "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98") and print it out with no trouble; you just can't pass it to e.g. `os.Create` or `os.Open`.
(Bash and a variety of other utils will also complain about it not being valid UTF-8; neovim won't save a file under that name; etc.)
Well, Windows is an odd beast when 8-bit file names are used. If done naively, you can't express all valid Windows filenames even with broken UTF-8, and filenames that aren't valid Unicode cannot be encoded to UTF-8 without loss or some weird convention.
You can do something like WTF-8 (not a misspelling, alas) to make it bidirectional. Rust does this under the hood but doesn’t expose the internal representation.
What do you mean by "when 8-bit filenames are used"? Do you mean the -A APIs, like CreateFileA()? Those do not take UTF-8, mind you -- unless you are using a relatively recent version of Windows that allows you to run your process with a UTF-8 codepage.
In general, Windows filenames are Unicode and you can always express those filenames by using the -W APIs (like CreateFileW()).
I think it depends on the underlying filesystem. Unicode (UTF-16) is first-class on NTFS.
But Windows still supports FAT, I guess, where multiple 8-bit encodings are possible: the so-called "OEM" code pages (437, 850 etc.) or "ANSI" code pages (1250, 1251 etc.). I haven't checked how recent Windows versions cope with FAT file names that cannot be represented as Unicode.
This also epitomizes the issue. What's the point of having a `string` type at all, if it doesn't allow you to make any extra assumptions about the contents beyond `[]byte`? The answer is that they planned to make conversion to `string` error out when it's invalid UTF-8, and then assume that `string`s are valid UTF-8, but then it caused problems elsewhere, so they dropped it for immediate practical convenience.
Rust apparently got relatively close to not having &str as a primitive type and instead only providing a library alias to &[u8] when Rust 1.0 shipped.
Score another for Rust's Safety Culture. It would be convenient to just have &str as an alias for &[u8] but if that mistake had been allowed all the safety checking that Rust now does centrally has to be owned by every single user forever. Instead of a few dozen checks overseen by experts there'd be myriad sprinkled across every project and always ready to bite you.
So it's true that technically the primitive type is str, and indeed it's even possible to make a &mut str though it's quite rare that you'd want to mutably borrow the string slice.
However, no, &str is not "an alias for &&String", and I can't quite imagine how you'd think that. String doesn't exist in Rust's core, it's from alloc and thus wouldn't be available if you don't have an allocator.
str is not really a "primitive type", it only exists abstractly as an argument to type constructors - treating the & operator as a "type constructor" for that purpose, but including Box<>, Rc<>, Arc<> etc. So you can have Box<str> or Arc<str> in addition to &str or perhaps &mut str, but not really 'str' in isolation.
IMO utf8 isn't a highly specific format, it's universal for text. Every ascii string you'd write in C or C++ or whatever is already utf8.
So that means that for 99% of scenarios, the difference between char[] and a proper utf8 string is none. They have the same data representation and memory layout.
The problem comes in when people start using string like they use string in PHP. They just use it to store random bytes or other binary data.
This makes no sense with the string type. String is text, but now we don't have text. That's a problem.
We should use byte[] or something for this instead of string. That's an abuse of string. I don't think allowing strings to not be text is too constraining - that's what a string is!
Yes, Windows text is broken in its own special way.
We can try to shove it into objects that work on other text but this won't work in edge cases.
Like if I take text on Linux and try to write a Windows file with that text, it's broken. And vice versa.
Go lets you do the broken thing. In Rust, or even using libraries in most languages, you can't. You have to specifically convert between them.
That's what I mean when I say "storing random binary data as text". Sure, Windows' almost-UTF-16 abomination is kind of text, but not really. It's its own thing. That requires a different type of string OR converting it to a normal string.
Even on Linux, you can't have '/' in a filename, or ':' on macOS. And this is without getting into issues related to the null byte in strings. Having a separate Path object that represents a filename or path + filename makes sense, because on every platform there are idiosyncratic requirements surrounding paths.
It may be legacy cruft downstream of poorly thought out design decisions at the system/OS level, but we're stuck with it. And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
There is room for languages that explicitly make the tradeoff of being easy to use (e.g. a unified string type) at the cost of not handling many real world edge cases correctly. But these should not be used for serious things like backup systems where edge cases result in lost data. Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
I've always thought the point of the string type was for indexing. One index of a string is always one character, but characters are sometimes composed of multiple bytes.
Yup. But to be clear, in Unicode a string will index code points, not characters. E.g. a single emoji can be made of multiple code points, as well as certain characters in certain languages. The Unicode name for a character like this is a "grapheme", and grapheme splitting is so complicated it generally belongs in a dedicated Unicode library, not a general-purpose string object.
You can't do that in a performant way and going that route can lead to problems, because characters (= graphemes in the language of Unicode) generally don't always behave as developers assume.
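A small sketch of that distinction in Go, where for-range walks code points (runes), not graphemes:

package main

import "fmt"

func main() {
    s := "e\u0301" // "é" written as 'e' plus a combining acute accent

    fmt.Println(len(s)) // 3: bytes, not characters

    // Two code points come out, even though a reader sees one character.
    // Grouping them into one grapheme needs a dedicated Unicode library.
    for i, r := range s {
        fmt.Printf("byte offset %d: %U\n", i, r)
    }
}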
string is just an immutable []byte. It's actually one of my favorite things about Go that strings can contain invalid utf-8, so you don't end up with the Rust mess of String vs OSString vs PathBuf vs Vec<u8>. It's all just string
Rust &str and String are specifically intended for UTF-8 valid text. If you're working with arbitrary byte sequences, that's what &[u8] and Vec<u8> are for in Rust. It's not a "mess", it's just different from what Golang does.
It's never been clear to me where such a type is actually useful. In what cases do you really need to restrict it to valid UTF-8?
You should always be able to iterate the code points of a string, whether or not it's valid Unicode. The iterator can either silently replace any errors with replacement characters, or denote the errors by returning eg, `Result<char, Utf8Error>`, depending on the use case.
All languages that have tried restricting Unicode afaik have ended up adding workarounds for the fact that real world "text" sometimes has encoding errors and it's often better to just preserve the errors instead of corrupting the data through replacement characters, or just refusing to accept some inputs and crashing the program.
In Rust there's bstr/ByteStr (currently being added to std), awkward having to decide which string type to use.
In Python there's PEP-383/"surrogateescape", which works because Python strings are not guaranteed valid (they're potentially ill-formed UTF-32 sequences, with a range restriction). Awkward figuring out when to actually use it.
In Raku there's UTF8-C8, which is probably the weirdest workaround of all (left as an exercise for the reader to try to understand .. oh, and it also interferes with valid Unicode that's not normalized, because that's another stupid restriction).
Meanwhile the Unicode standard itself specifies Unicode strings as being sequences of code units [0][1], so Go is one of the few modern languages that actually implements Unicode (8-bit) strings. Note that at least two out of the three inventors of Go also basically invented UTF-8.
> Unicode strings need not contain well-formed code unit sequences under all conditions. This is equivalent to saying that a particular Unicode string need not be in a Unicode encoding form.
The way Rust handles this is perfectly fine. String type promises its contents are valid UTF-8. When you create it from array of bytes, you have three options: 1) ::from_utf8, which will force you to handle invalid UTF-8 error, 2) ::from_utf8_lossy, which will replace invalid code points with replacement character code point, and 3) from_utf8_unchecked, which will not do the validity check and is explicitly marked as unsafe.
But there's no option to just construct the string with the invalid bytes. 3) is not for this purpose; it is for when you already know that it is valid.
If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.
> Constructing a non-UTF-8 string slice is not immediate undefined behavior, but any function called on a string slice may assume that it is valid UTF-8, which means that a non-UTF-8 string slice can lead to undefined behavior down the road.
> It's never been clear to me where such a type is actually useful. In what cases do you really need to restrict it to valid UTF-8?
Because 99.999% of the time you want it to be valid and would like an error if it isn't? If you want to work with invalid UTF-8, that should be a deliberate choice.
Do you want grep to crash when your text file turned out to have a partially written character in it? 99.999% seems very high, and you haven't given an actual use case for the restriction.
> they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from the first principles, and solving the problem correctly (or using a solution that was Not Invented Here).
I've said this before, but much of Go's design looks like it's imitating the C++ style at Google. The comments where I see people saying they like something about Go it's often an idiom that showed up first in the C++ macros or tooling.
I used to check this before I left Google, and I'm sure it's becoming less true over time. But to me it looks like the idea of Go was basically "what if we created a Python-like compiled language that was easier to onboard than C++ but which still had our C++ ergonomics?"
> Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences.
It feels often like the two principles they stuck/stick to are "what makes writing the compiler easier" and "what makes compilation fast". And those are good goals, but they're only barely developer-oriented.
Not sure it was only that. I remember a lot of "we're not Java" in the discussions around it. I always had the feeling they were rejecting certain ideas like exceptions and generics more out of principle than out of any practical analysis.
Like, yes, those ideas have frequently been driven too far and have led to their own pain points. But people also seem to frequently rediscover that removing them entirety will lead to pain, too.
What makes compilation fast is a good goal at places with large code bases and long build times. Maybe it makes less sense in smaller startups with a few 100k LOC.
I am reminded when I read "barely developer oriented" that this comes from Google, who run compute and compilers at Ludicrous Scale. It doesn't seem strange that they might optimize (at least in part) for compiler speed and simplicity.
I recently started writing Go for a new job, after 20 years of not touching a compiled language for something serious (I've done DevKitArm dev. as a hobby).
I know it's mostly a matter of taste, but darn, it feels horrible. And there are no default parameter values, and the error handling smells bad, and no real stack trace in production. And the "object orientation" syntax, adding some ugly reference to each function. And the pointers...
It took me back to my C/C++ days. Like programming with 25 year old technology from back when I was in university in 1999.
And then people are amazed that it achieves compile times that compiled languages were already achieving on PCs running at 10 MHz within the constraints of 640 KB (TB, TP, Modula-2, Clipper, QB).
It is weird to lump C++ and Rust together. I have used Rust code bases that compile in 2-3 minutes what a C++ compiler would take literally hours to compile.
I feel people who complain about rustc compile times must be new to using compiled languages…
That's a reasonable trade-off to make for some people, no? There's plenty of work to be done where you can cope with the occasional runtime error and less then bleeding edge performance, especially if that then means wins in other areas (compile speeds, tooling). Having a variety of languages available feels like a pretty good thing to me.
Well, I personally would be happier with a stronger type system (e.g. java can compile just as fast, and it has a less anemic type system), but sure.
And sure, it is welcome from a dev POV on one hand, though from an ecosystem perspective, more languages are not necessarily good as it multiplies the effort required.
> Java ? Licensing sagas requiring the use of divergent forks. Plus Go is easier to work with, perhaps especially for server-side deployments
Yeah, these are only "sagas" because there is basically one single, completely free implementation anyone uses on the server side, and it's OpenJDK, which was made 100% open-source and the reference implementation by Oracle. Basically all of Corretto, AdoptOpenJDK, etc. are just builds of the exact same repository.
People bringing this whole license topic up can't be taken seriously, it's like saying that Linux is proprietary because you can pay for support at Red Hat..
> People bringing this whole license topic up can't be taken seriously
So you mean all those universities and other places that have been forced to spend $$$ on licenses under the new regime also can't be taken seriously ? Are you saying none of them took advice and had nobody on staff to tell them OpenJDK exists ?
Regarding your Linux comment, some of us are old enough to remember the SCO saga.
Sadly Oracle have deeper pockets to pay more lawyers than SCO ever did ....
I have made a bunch of claims that are objectively true. From there, basic logical inference says that you can completely freely use Java. Anything else is irrelevant.
I don't know what/which university you talk about, but I'm sure they were also "forced to pay $$$" for their water bills and whatnot. If they decided to go with paid support, then.. you have to pay for it. In exchange you can a) point your finger at a third-party if something goes wrong (which governments love doing/often legally necessary) b) get actual live support on Christmas Eve if needed.
TL;DR: It's impossible to know if anyone on campus has downloaded Oracle Java....
Quote from this article:[1]
*He told The Register that Oracle is "putting specific Java sales teams in country, and then identifying those companies that appear to be downloading and... then going in and requesting to [do] audits. That recipe appears to be playing out truly globally at this point."*
That's also true of torrented PhotoShop, Microsoft Office, etc..
Also, as another topic, Oracle is doing audits specifically because their software doesn't phone home to check licenses and stuff like that - which is a crucial requirement for their intended target demographics: big government organizations, safety-critical systems, etc. A whole country's healthcare system, or a nuclear power plant, can't just stop because someone forgot to pay the bill.
So instead Oracle just visits companies that have a license with them, and checks what is being used to determine if it's in accord with the existing contract. And yeah, from this respect I also heard of a couple of stories where a company was not using the software per the letter of the contract, e.g. accidentally enabling this or that, and at the audit the Oracle salesman said that they will ignore the mistake if they subscribe to this larger package, which most managers will gladly accept as they can avoid the blame. That is a questionable business practice, but it still doesn't have anything to do with OpenJDK..
The article tries very hard to draw a connection between the licensing costs for the universities and Oracle auditing random java downloads, but nobody actually says that this is what happened.
The waiver of historic fees goes back to the last licensing change where Oracle changed how licensing fees would be calculated. So it seems reasonable that Oracle went after them because they were paying customers that failed to pay the inflated fees.
> So you mean all those universities and other places that have been forced to spend $$$ on licenses under the new regime also can't be taken seriously ? Are you saying none of them took advice and had nobody on staff to tell them OpenJDK exists ?
This info is actually quite surprising to me, never heard of it since everywhere I know switched to OpenJDK-based alternatives from the get-go. There was no reason to keep on the Oracle one after the licencing shenanigans they tried to play.
Why did these places keep the Oracle JDK and end up paying for it? OpenJDK was a drop-in replacement, nothing of value is lost by switching...
Yeah I know, but people have trouble understanding the absolutely trivial licensing around OpenJDK; let's not bring up alternative implementations (which actually make the whole platform an even better target from a longevity perspective! There aren't many languages that have a standard with multiple, completely independent impls).
You forgot D. In a world where D exists, it's hard to understand why Go needed to be created. Every critique in this post is not an issue in D. If the effort Google put into Go had gone on making D better, I think D today would be the best language you could use. But as it is, D has had very little investment (by that I mean actual developer time spent on making it better, cleaning it up, writing tools) and it shows.
Go has a big, high quality standard library with most of what one might need. Means you have to bring in and manage (and trust) far fewer third party dependencies, and you can work faster because you’re not spending a bunch of time figuring out what the crate of the week is for basic functionality.
Rust intentionally chooses to have a small standard library to avoid the "dead batteries" problem. But the Rust community also maintains lists of "blessed" crates to try and cope with the issue of having to trust third-party software components of unknown quality.
The downside of a small stdlib is the proliferation of options, and you suddenly discover(ed?, it's been a minute) that your async package written for Tokio won't work on async-std and so forth.
This has often been the case in Go too - until `log/slog` existed, lots of people chose a structured logger and made it part of their API, forcing it on everyone else.
I think having http in the standard library is a perfect example of the dead batteries problem: should the stdlib http also support QUIC and/or websockets? If you choose to include it, you've made stdlib include support for very specific use cases. If you choose not to include it, should the quic crate then extend or subsume the stdlib http implementation? If you choose subsume, you've created a dead battery. If you choose extend, you've created a maintenance nightmare by introducing a dependency between stdlib and an external crate.
Sorry, but for most programming tasks I prefer having actual data containers with features over an HTTP library: Set, Tree, etc. types. Those are fundamental CS building blocks yet are absent from the Go standard library. (Well, they were added pretty recently, still nowhere near as featureful as std::collections in Rust.)
Also, as mentioned by another comment, an HTTP or crypto library can become obsolete _fast_. What about HTTP3? What about post-quantum crypto? What about security fixes? The stdlib is tied to the language version, thus to a language release. Having such code independent allows it to evolve much faster, be leaner, and be more composable. So yes, the library is well maintained, but it's tied to the Go version.
Also, it enables breaking API changes if absolutely needed. I can name two precedents:
- in rust, time APIs in chrono had to be changed a few times, and the Rust maintainers were thankful it was not part of the stdlib, as it allowed massive changes
- otoh, in Go, it was found out that net.IP has an absolutely atrocious design (it's essentially just a []byte). Tailscale wrote a replacement that's now in a subpackage in net, but the old net.IP is set in stone; a sketch of the difference is below. (https://tailscale.com/blog/netaddr-new-ip-type-for-go)
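A rough sketch of that difference, using only the standard library's net and net/netip:

package main

import (
    "fmt"
    "net"
    "net/netip"
)

func main() {
    // net.IP is a []byte underneath: mutable, not comparable with ==,
    // and not usable as a map key.
    legacy := net.ParseIP("192.168.1.1")
    fmt.Println(len(legacy)) // 16, even for IPv4 (stored in 4-in-6 form)

    // net/netip.Addr, which grew out of the Tailscale work, is a small
    // comparable value type.
    addr, err := netip.ParseAddr("192.168.1.1")
    if err != nil {
        panic(err)
    }
    fmt.Println(addr.Is4(), addr == netip.MustParseAddr("192.168.1.1")) // true true
}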
This just makes it even more frustrating to me. Everything good about go is more about the tooling and ecosystem but the language itself is not very good. I wish this effort had been put into a better language.
uv + the new way of adding the required packages in the comments is pretty good.
you can go `uv run script.py` and it'll automatically fetch the libraries and run the script in a virtual environment.
Still no match for Go though, shipping a single cross-compiled binary is a joy. And with a bit of trickery you can even bundle in your whole static website in it :) Works great when you're building business logic with a simple UI on top.
I've been out of the Python game for a while but I'm not surprised there is yet another tool on the market to handle this.
You really come to appreciate when these batteries are included with the language itself. That Go binary will _always_ run but that Python project won't build in a few years.
Or the import path was someone's blog domain that included a <meta> reference to the actual github repo (along with the tag, IIRC) where the source code really lives. Insanity
Well, that's the problem I was highlighting - golang somehow decided to have the worst of both worlds: arbitrary domains in import paths and then putting the actual ref of the source code ... elsewhere
Yes, My favourite is the `time` package. It's just so elegant how it's just a number under there, the nominal type system truly shines. And using it is a treat.
What do you mean I can do `+= 8*time.Hour` :D
Unfortunately it doesn't have error handling, so when you do += 8 hours and it fails, it won't return a Go error, it won't throw a Go exception, it just silently does the wrong thing (clamps the duration) and hopes you don't notice...
It's simplistic and that's nice for small tools or scripts, but at scale it becomes really brittle since none of the edge cases are handled
I thankfully found out when writing unit tests instead of in production. In Go time.Time has a much higher range than time.Duration, so it's very easy to have an overflow when you take a time difference. But there's also no error returned in general when manipulating time.Duration, you have to remember to check carefully around each operation to know if it risks going out of range.
Internally time.Duration is a single 64bit count, while time.Time is two more complicated 64bit fields plus a location
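A minimal sketch of that overflow, relying only on the documented saturating behavior of time.Time.Sub:

package main

import (
    "fmt"
    "time"
)

func main() {
    t1 := time.Date(1000, 1, 1, 0, 0, 0, 0, time.UTC)
    t2 := time.Date(3000, 1, 1, 0, 0, 0, 0, time.UTC)

    // ~2000 years doesn't fit in a Duration (the max is roughly 292 years),
    // so Sub silently saturates instead of returning an error.
    d := t2.Sub(t1)
    fmt.Println(d) // 2562047h47m16.854775807s
}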
As long as you don’t need to do `hours := 8` and `+= hours * time.Hour`. Incredibly the only way to get that multiplication to work is to cast `hours` to a `time.Duration`.
In Go, `int * Duration = error`, but `Duration * Duration = Duration`!
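A small sketch of that quirk: untyped constants slip through, typed ints don't.

package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println(8 * time.Hour) // fine: 8 is an untyped constant

    hours := 8 // now hours is an int
    // d := hours * time.Hour // compile error: mismatched types int and time.Duration
    d := time.Duration(hours) * time.Hour // the conversion makes it Duration * Duration
    fmt.Println(d)                        // 8h0m0s
}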
My feeling is that in terms of developer ergonomics, it nailed the “very opinionated, very standard, one way of doing things” part. It is a joy to work on a large microservices architecture and not have a different style on each repo, or avoiding formatting discussions because it is included.
The issue is that it was a bit outdated in the choice of _which_ things to choose as the one Go way. People expect a map/filter method rather than a loop with off-by-one risks, a type system with the smartness of TypeScript (if less featured and more heavily enforced), less annoying error handling, and so on.
I get that it’s tough to implement some of those features without opening the way to a lot of “creativity” in the bad sense. But I feel like go is sometimes a hard sell for this reason, for young devs whose mother language is JavaScript and not C.
> The issue is that it was a bit outdated in the choice of _which_ things to choose as the one Go way
I agree with this. I feel like Go was a very smart choice to create a new language to be easy and practical and have great tooling, and not to be experimental or super ambitious in any particular direction, only trusting established programming patterns. It's just weird that they missed some things that had been pretty well hashed out by 2009.
Map/filter/etc. are a perfect example. I remember around 2000 the average programmer thought map and filter were pointlessly weird and exotic. Why not use a for loop like a normal human? Ten years later the average programmer was like, for loops are hard to read and are perfect hiding places for bugs, I can't believe we used to use them even for simple things like map, filter, and foreach.
By 2010, even Java had decided that it needed to add its "stream API" and lambda functions, because no matter how awful they looked when bolted onto Java, it was still an improvement in clarity and simplicity.
Somehow Go missed this step forward the industry had taken and decided to double down on "for." Go's different flavors of for are a significant improvement over the C/C++/Java for loop, but I think it would have been more in line with the conservative, pragmatic philosophy of Go to adopt the proven solution that the industry was converging on.
Do they? After too many functional battles I started practicing what I'm jokingly calling "Debugging-Driven Development", and just like TDD keeps the design decisions in mind to allow for testability from the get-go, this makes me write code that will be trivially easy to debug (especially printf-guided debugging and step-by-step execution debugging).
Like, adding a printf in the middle of a for loop, without even needing to understand the logic of the loop. Just make a new line and write a printf. I grew tired of all those tight chains of code that iterate beautifully but later when in a hurry at 3am on a Sunday are hell to decompose and debug.
I'm not a hard defender of functional programming in general, mind you.
It's just that a ridiculous amount of steps in real world problems can be summarised as 'reshape this data', 'give me a subset of this set', or 'aggregate this data by this field'.
Loops are, IMO, very bad at expressing those common concepts briefly and clearly. They take a lot of screen space, usually require accessory variables, and it isn't immediately clear from just seeing a for block what you're about to do - "I'm about to iterate" isn't useful information to me as a reader; are you transforming data, selecting it, aggregating it?
The consequence is that you usually end up with tons of lines like
userIds = getIdsfromUsers(users);
where the function is just burying a loop. Compare to:
userIds = users.pluck('id')
and you spare yourself the utility function buried somewhere else.
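In Go terms the helper is a few lines with generics; a sketch (Map and the User type here are made up for illustration):

package main

import "fmt"

// Map applies f to every element and collects the results:
// the loop is buried once, here, instead of at every call site.
func Map[T, U any](in []T, f func(T) U) []U {
    out := make([]U, 0, len(in))
    for _, v := range in {
        out = append(out, f(v))
    }
    return out
}

type User struct {
    ID   int
    Name string
}

func main() {
    users := []User{{1, "ada"}, {2, "linus"}}
    ids := Map(users, func(u User) int { return u.ID })
    fmt.Println(ids) // [1 2]
}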
Rust has `.inspect()` for iterators, which achieves your printf debugging needs. Granted, it's a bit harder for an actual debugger, but support's quite good for now.
Just use a real debugger. You can step into closures and stuff.
I assume, anyway. Maybe the Go debugger is kind of shitty, I don't know. But in PHP with xdebug you just use all the fancy array_* methods and then step through your closures or callables with the debugger.
I'll agree that explicit loops are easier to debug, but that comes at the cost of being harder to write _and_ read (need to keep state in my head) _and_ being more bug-prone (because mutability).
I think it's a bad trade-off, most languages out there are moving away from it
There's actually one more interesting plus for the for loops that's not quite obvious in the beginning: the for-loops allow you to perform a single memory pass instead of multiple. If you're processing a large enough list it does make a significant difference because memory accesses are relatively expensive (the difference is not insignificant; the loop can be made e.g. 10x more performant by optimising memory accesses alone).
So for a large loop the code like
for i, value := range source {
    result[i] = value * 2 + 1
}
Would be 2x faster than a loop like
for i, value := range source {
    intermediate[i] = value * 2
}
for i, value := range intermediate {
    result[i] = value + 1
}
Depending on your iterator implementation (or lack thereof), the functional version boils down to your first example.
For example, Rust iterators are lazily evaluated with early-exits (when filtering data), thus it's your first form but as optimized as possible. OTOH python's map/filter/etc may very well return a full list each time, like with your intermediate.
I would say that any sane language allowing functional-style data manipulation will have them as fast as manual for-loops. (that's why Rust bugs you with .iter()/.collect())
This is a very valid point. Loops also let you play with the iteration itself for performance, deciding to skip n steps if a condition is met for example.
I always encounter these upsides once every few years when preparing leetcode interviews, where this kind of optimization is needed for achieving acceptable results.
In daily life, however, most of these chunks of data to transform fall in one of these categories:
- small size, where readability and maintainability matters much more than performance
- living in a db, and being filtered/reshaped by the query rather than code
- being chunked for atomic processing in a queue or similar (usual when importing a big chunk of data).
- the operation itself is a standard algorithm that you just consume from a standard library that handles the loop internally.
Much like trees and recursion, most of us don't flex that muscle often. Your mileage might vary depending on domain, of course.
This tends to be true for most languages, even the ones with easier concurrency support. Using it correctly is the tricky part.
I have no real problem with the portability. The area I see Go shining in is stuff like AWS Lambda where you want fast execution and aren't distributing the code to user systems.
I get you can specifically write code that does not malloc, but I'm curious at scale if there are heap management / fragmentation and compaction issues that are equivalent to GC pause issues.
I don't have a lot of experience with the malloc languages at scale, but I do know that heap fragmentation and GC fragmentation are very similar problems.
There are techniques in GC languages to avoid GC, like arena allocation and stuff like that, but they're generally considered non-idiomatic.
> I find myself wishing for Optional[T] quite often.
Well, so long as you don't care about compatibility with the broad ecosystem, you can write a perfectly fine Optional yourself:
type Optional[Value any] struct {
    value  Value
    exists bool
}
// New empty.
func New[Value any]() Optional[Value] { return Optional[Value]{} }
// New of value.
func Of[Value any](value Value) Optional[Value] {
    return Optional[Value]{value: value, exists: true}
}
// New of pointer; nil maps to empty.
func OfPointer[Value any](value *Value) Optional[Value] {
    if value == nil {
        return Optional[Value]{}
    }
    return Of(*value)
}
// Only general way to get the value.
func (o Optional[Value]) Get() (Value, bool) { return o.value, o.exists }
// Get value or panic.
func (o Optional[Value]) MustGet() Value {
    v, ok := o.Get()
    if !ok {
        panic("optional: empty")
    }
    return v
}
// Get value or default.
func (o Optional[Value]) GetOrElse(defaultValue Value) Value {
    if v, ok := o.Get(); ok {
        return v
    }
    return defaultValue
}
// JSON support (empty marshals to null; bodies elided).
func (o Optional[Value]) MarshalJSON() ([]byte, error) { /* ... */ }
func (o *Optional[Value]) UnmarshalJSON(data []byte) error { /* ... */ }
// DB support for database/sql (bodies elided).
func (o *Optional[Value]) Scan(value any) error { /* ... */ }
func (o Optional[Value]) Value() (driver.Value, error) { /* ... */ }
But you probably do care about compatibility with everyone else, so... yeah it really sucks that the Go way of dealing with optionality is slinging pointers around.
For JSON, you can't encode Optional[T] as nothing at all. It has to encode to something, which usually means null. But when you decode, the absence of the field means UnmarshalJSON doesn't get called at all. This typically results in the default value, which of course you would then re-encode as null. So if you round-trip your JSON, you get a materially different output than input (this matters for some other languages/libraries). Maybe the new encoding/json/v2 library fixes this, I haven't looked yet.
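A tiny standalone sketch of the decode half of that asymmetry (the loud type is only there to record whether it was called):

package main

import (
    "encoding/json"
    "fmt"
)

// loud records whether UnmarshalJSON ran at all.
type loud struct{ called bool }

func (l *loud) UnmarshalJSON(data []byte) error { l.called = true; return nil }

func main() {
    var a, b struct {
        Field loud `json:"field"`
    }
    _ = json.Unmarshal([]byte(`{"field":null}`), &a)
    _ = json.Unmarshal([]byte(`{}`), &b)

    // An explicit null reaches the type; an absent field never does.
    fmt.Println(a.Field.called, b.Field.called) // true false
}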
Also, I would usually want Optional[T]{value:nil,exists:true} to be impossible regardless of T. But Go's type system is too limited to express this restriction, or even to express a way for a function to enforce this restriction, without resorting to reflection, and reflection has a type erasure problem making it hard to get right even then! So you'd have to write a bunch of different constructors: one for all primitive types and strings; one each for pointers, maps, and slices; three for channels (chan T, <-chan T, chan<- T); and finally one for interfaces, which has to use reflection.
You can write `Optional`, sure, but you can't un-write `nil`, which is what I really want. I use `Optional<T>` in Java as much as I can, and it hasn't saved me from NullPointerException.
I find Result[] and Optional[] somewhat overrated, but nil does bother me. However, nil isn't going to go away (what else is going to be the default value for pointers and interfaces, and not break existing code?). I think something like a non-nilable type annotation/declaration would be all Go needs.
Yeah maybe they're overrated, but they seem like the agreed-upon set of types to avoid null and to standardize error handling (with some support for nice sugars like Rust's ? operator).
I quite often see devs introducing them in other languages like TypeScript, but it just doesn't work as well when it's introduced in userland (usually you just end up with a small island of the codebase following this standard).
Typescript has another way of dealing with null/undefined: it's in the type definition, and you can't use a value that's potentially null/undefined. Using Optional<T> in Typescript is, IMO, weird. Typescript also has exceptions...
I think they only work if the language is built around it. In Rust, it works, because you just can't deref an Optional type without matching it, and the matching mechanism is much more general than that. But in other languages, it just becomes a wart.
As I said, some kind of type annotation would be most go-like, e.g.
func f(ptr PtrToData?) int { ... }
You would only be allowed to touch *ptr inside an if ptr != nil { ... }. There's a linter from Uber (nilaway) that works like that, except for the type annotation. That proposal would break existing code, so perhaps something like an explicit marker for non-nil pointers is needed instead (but that's not very ergonomic, alas).
Yeah default values are one of Go's original sins, and it's far too late to roll those back. I don't think there are even many benefits—`int i;` is not meaningfully better than `int i = 0;`. If it's struct initialization they were worried about, well, just write a constructor.
Go has chosen explicit over implicit everywhere except initialization—the one place where I really needed "explicit."
Golang is great for problem classes where you really, really can't do away with tracing GC. That's a rare case perhaps, but it exists nonetheless. Most GC languages don't have the kind of high-performance concurrent GC that you get out of the box with Golang, and the minimum RAM requirements are quite low as well. (You can of course provide more RAM to try and increase overall throughput, and you probably should - but you don't have to. That makes it a great fit for running on small cloud VM's, where RAM itself can be at a premium.)
Java's GCs are a generation ahead, though, in both throughput-oriented and latency-sensitive workloads [1]. Though Go's GC did/does get a few improvements and it is much better than it was a few years ago.
[1] ZGC has basically decoupled the heap size from the pause time, at that point you get longer pauses from the OS scheduler than from GC.
> But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
I got insta-rejected in an interview when I said this in response to the interview panel's 'thoughts about golang' question.
Like they said, 'interview is over' and showed me the (virtual) door. I was stunned lol. This was during peak golang mania. Not sure what happened to rancherlabs.
They probably thought you weren't going to be a good fit for writing idiomatic Go. One of the things many people praise Go for is its standard style across codebases, if you don't like it, you're liable to try and write code that uses different patterns, which is painful for everyone involved.
I've worked almost exclusively on a large Golang project for over 5 years now and this definitely resonates with me. One component of that project is required to use as little memory as possible, and so much of my life has been spent hitting rough edges with Go on that front. We've hit so many issues where the garbage collector just doesn't clean things up quickly enough, or we get issues with heap fragmentation (because Go, in its infinite wisdom, decided not to have a compacting garbage collector) that we've had to try and avoid allocations entirely.
Oh, and when we do have those issues, it's extremely difficult to debug. You can take heap profiles, but those only tell you about the live objects in the heap. They don't tell you about all of the garbage and all of the fragmentation. So diagnosing the issue becomes a matter of reading the tea leaves. For example, the heap profile says function X only allocated 1KB of memory, but it's called in a hot loop, so there's probably 20MB of garbage that this thing has generated that's invisible on the profile.
We pre-allocate a bunch of static buffers and re-use them. But that leads to a ton of ownership issues, like the append footgun mentioned in the article. We've even had to re-implement portions of the standard library because they allocate. And I get that we have a non-standard use case, and most programmers don't need to be this anal about memory usage. But we do, and it would be really nice to not feel like we're fighting the language.
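A minimal sketch of that append footgun with a reused buffer:

package main

import "fmt"

func main() {
    buf := make([]byte, 0, 8) // one shared buffer with spare capacity

    a := append(buf, 'a', 'b') // writes into buf's backing array
    b := append(buf, 'x', 'y') // silently reuses the same backing array

    fmt.Println(string(a), string(b)) // "xy xy": a's contents were clobbered
}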
I've found that when you need this it's easier to move stuff offheap, although obviously that's not entirely trivial in a GC language, and it certainly creates a lot of rough edges. If you find yourself writing what's essentially, e.g. C++ or Rust in Go, then you probably should just rewrite that part in the respective language when you can :)
I know this comment isn't terribly helpful, so I'm sorry, but it also sounds like Go is entirely the wrong language for this use case and you and your team were forced to use it for some corporate reason, like, the company only uses a subset of widely used programming languages in production.
I've heard the term "beaten path" used for these languages, or languages that an organization chooses to use and forbids the use of others.
Perhaps the new "Green Tea" GC will help? It's described as "a parallel marking algorithm that, if not memory-centric, is at least memory-aware, in that it endeavors to process objects close to one another together."
I saw that! I’m definitely interested in trying it out to see if it helps for our use case. Of course, at this point we’ve reduced allocations so much the GC doesn’t have a ton of work to do, unless we slip up somewhere (which has happened). I’ll probably have to intentionally add some allocations in a hot path as a stress test.
What I would absolutely love is a compacting garbage collector, but my understanding is Go can’t add that without breaking backwards compatibility, and so likely will never do that.
Go has its fair share of flaws but I still think it hits a sweet spot that no other server side language provides.
It’s faster than Node or Python, with a better type system than either. It’s got a much easier learning curve than Rust. It has a good stdlib and tooling. Simple syntax with usually only one way to do things. Error handling has its problems but I still prefer it over Node, where a catch clause might receive just about anything as an “error”.
Am I missing a language that does this too or more? I’m not a Go fanatic at all, mostly written Node for backends in my career, but I’ve been exploring Go lately.
> It’s faster than Node or Python, with a better type system than either. It’s got a much easier learning curve than Rust. It has a good stdlib and tooling. Simple syntax with usually only one way to do things. Error handling has its problems but I still prefer it over Node, where a catch clause might receive just about anything as an “error”.
I feel like I could write this same paragraph about Java or C#.
Just because you can learn about something doesn't mean you need to. C# now offers top-level programs that are indistinguishable from python scripts at a quick glance. No namespaces, classes or main methods are required. Just the code you want to execute and one simple file.
I mostly agree with you except the simple syntax with one way of doing things. If my memory serves me, Java supports at least 2 different paradigms for concurrency, for example, maybe more. I don’t know about C#. Correct me if wrong.
But that's only because they're older and were around before modern concurrent programming was invented.
In C#, for example, there are multiple ways, but you should generally be using the modern approach of async/Task, which is trivial to learn and used exclusively in examples for years.
Maybe this is a bit pedantic, but it bothers me when people refer to "Node" as a programming language. It's not a language, it's a JavaScript runtime. Which to that you might say "well when people say Node they just mean JavaScript". But that's also probably not accurate, because a good chunk of modern Node-executed projects are written in TypeScript, not JavaScript. So saying "Node" doesn't actually say which programming language you mean. (Also, there are so many non-Node ways to execute JavaScript/TypeScript nowadays)
Anyway, assuming you're talking about TypeScript, I'm surprised to hear that you prefer Go's type system to TypeScript's. There are definitely cases where you can get carried away with TypeScript types, but due to that expressiveness I find it much more productive than Go's type system (and I'd make the same argument for Rust vs. Go).
My intent was just to emphasize that I’m comparing Go against writing JavaScript for the Node runtime and not in the browser, that is all, but you are correct.
Regarding Typescript, I actually am a big fan of it, and I almost never write vanilla JS anymore. I feel my team uses it well and work out the kinks with code review. My primary complaint, though, is that I cannot trust any other team to do the same, and TS supports escape hatches to bypass or lie about typing.
I work on a project with a codebase shared by several other teams. Just this week I have been frustrated numerous times by explicit type assertions of variables to something they are not (`foo as Bar`). In those cases it’s worse than vanilla JS because it misleads.
Yeah, but no one is using v8 directly, even though technically you could if you wanted. Node.js is as much JavaScript as LuaJIT is Lua, or GCC compiles C.
Yeah the big problem is that most languages have their fair share of rough edges. Go is performant and portable* with a good runtime and a good ecosystem. But it also has nil pointers, zero values, no destructors, and no macros. (And before anyone says macros are bad, codegen is worse, and Go has to use a lot of codegen to get around the lack of macros).
There are languages with fewer warts, but they're usually more complicated (e.g. Rust), because most of Go's problems are caused by its creators' fixation with simplicity at all costs.
It definitely hits a sweet spot. There is basically no faster, widely used programming language that is predominantly used in production for web services than Go. You can argue Rust, but I just don't see it in job listings. And virtually no one is writing web services in C or C++ directly.
I still don't understand why defer works on function scope, and not lexical scope, and nobody has been able to explain to me the reason for it.
In fact this was so surprising to me that I only found out about it when I wrote code that processed files in a loop, and it started crashing once the list of files got too big, because defer didn't close the handles until the function returned.
When I asked some other Go programmers, they told me to wrap the loop body in an anonymous func and invoke that.
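For anyone who hasn't hit this, a minimal sketch of that workaround (the per-file processing is hypothetical):

```
package files

import "os"

func processAll(paths []string) error {
	for _, p := range paths {
		// Wrap the loop body in a function so the deferred Close runs at the
		// end of each iteration instead of piling up until processAll returns.
		err := func() error {
			f, err := os.Open(p)
			if err != nil {
				return err
			}
			defer f.Close()
			return handle(f)
		}()
		if err != nil {
			return err
		}
	}
	return nil
}

// handle stands in for whatever per-file work the real code does.
func handle(f *os.File) error { return nil }
```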
Other than that (and some other niggles), I find Go a pleasant, compact language, with an efficient syntax, that kind of doesn't really encourage people trying to be cute. I started my Go journey rewriting a fairly substantial C# project, and was surprised to learn that despite it having like 10% of the features of C#, the code ended up being smaller. It also encourages performant defaults, like not forcing GC allocation at every turn, very good and built-in support for codegen for stuff like serialization, and no insistence to 'eat the world' like C# does with stuff like ORMs that showcase how you can write C# instead of SQL for an RDBMS, or do gRPC by annotating C# objects. In Go, you do SQL by writing SQL, and you do gRPC by writing protobuf specs.
So sometimes you want it lexical scope, and sometimes function scope; For example, maybe you open a bunch of files in a loop and need them all open for the rest of the function.
Right now it's function scope; if you need it lexical scope, you can wrap it in a function.
Suppose it were lexical scope and you needed it function scope. Then what do you do?
You can start a new scope with `{}` in Go. If I have a bunch of temp vars I'll declare the final result outside the braces and then do the work inside. But these days I'll just write a function. It's clearer and easier to test.
Really? I find the opposite is true. If I need lexical scope then I’d just write, for example
f.Close() // without defer
The reason I might want function scope defer is because there might be a lot of different exit points from that function.
With lexical scope, there are only three ways to safely jump the scope:
1. reaching the end of the procedure, in which case you don't need a defer
2. A ‘return’, in which case you’re also exiting the function scope
3. a ‘break’ or ‘continue’, which admittedly could see the benefit of a lexical scope defer but they’re also generally trivial to break into their own functions; and arguably should be if your code is getting complex enough that you’ve got enough branches to want a defer.
If Go had other control flows like try/catch, and so on and so forth, then there would be a stronger case for lexical defer. But it’s not really a problem for anyone aside those who are also looking for other features that Go also doesn’t support.
You do what the compiler has to do under the hood: at the top of the function create a list of open files, and have a defer statement that loops over the list closing all of the files. It's really not a complicated construct.
OK, what happens now if you have an error opening one of those files, return an error from inside the for loop, and forget to close the files you'd already opened?
You put the files in the collection as you open them, and you register the defer before opening any of them. It works fine. Defer should be lexically scoped.
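A minimal sketch of that pattern, with the deferred cleanup registered before anything is opened (names hypothetical):

```
package files

import "os"

func openAll(paths []string) error {
	var opened []*os.File
	// Registered before any Open call, so every file that made it into the
	// slice is closed on every return path, including early error returns.
	defer func() {
		for _, f := range opened {
			f.Close()
		}
	}()

	for _, p := range paths {
		f, err := os.Open(p)
		if err != nil {
			return err // files opened so far are still closed by the defer above
		}
		opened = append(opened, f)
	}

	// ... all files stay open for the rest of the function ...
	return nil
}
```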
Yes it does, function-scope defer needs a dynamic data structure to keep track of pending defers, so it's not zero cost.
It can be also a source of bugs where you hang onto something for longer than intended - considering there's no indication of something that might block in Go, you can acquire a mutex, defer the release, and be surprised when some function call ends up blocking, and your whole program hangs for a second.
I think it's only a real issue when you're coming from a language that has different rules. Block-scoping (and thus not being able to e.g. conditionally remove a temp file at the end of a function) would be equally surprising for someone coming from Go.
But I do definitely agree that the dynamic nature of defer and it not being block-scoped is probably not the best
Having to wrap a loop body in a function that's immediately invoked seems like it would make the code harder to read. Especially for a language that prides itself on being "simple" and "straightforward".
I’ve worked with languages that have both, and find myself wishing I could have function-level defer inside conditionals when I use the block-level languages.
I worked briefly on extending a Go static site generator someone wrote for a client. The code was very clear and easy to read, but difficult to extend due to the many rough edges of the language. Simple changes required altering a lot of code in ways that were not immediately obvious. The ability to encapsulate and abstract is hindered in the name of “simplicity.” Abstraction is the primary way we achieve simple and easy-to-extend code. John Ousterhout defined a complex program as one that is difficult to extend, rather than necessarily one that is large or difficult to understand at scale. The average Go program seems to violate this principle a lot. Programs appear “simple” but extension proves difficult and fraught.
Go is a case of the emperor having no clothes. Telling people that they just don’t get it or that it’s a different way of doing things just doesn’t convince me. The only thing it has going for it is a simple dev experience.
I find the way people talk about Go super weird. If people have criticisms people almost always respond that the language is just "fine" and people kind of shame you for wanting it. People say Go is simpler but having to write a for loop to get the list of keys of a map is not simpler.
I agree with your point, but you'll have to update your example of something go can't do
> having to write a for loop to get the list of keys of a map
We now have the stdlib "maps" and "slices" packages, so you can do:
keys := slices.Collect(maps.Keys(someMap))
With the wonder of generics, it's finally possible to implement that.
Now if only Go was consistent about methods vs functions, maybe then we could have "keys := someMap.Keys()" instead of it being a weird mix like `http.Request.Headers.Set("key", "value")` but `map["key"] = "value"`
Fair I stopped using Go pre-generics so I am pretty out of date. I just remember having this conversation about generics and at the time there was a large anti-generics group. Is it a lot better with generics? I was worried that a lot of the library code was already written pre-generics.
The generics are a weak mimicry of what generics could be, almost as if to say "there we did it" without actually making the language that much more expressive.
For example, you're not allowed to write the following:
type Option[T any] struct { t *T }
func (o *Option[T]) Map[U any](f func(T) U) *Option[U] { ... }
That fails because methods can't have type parameters, only structs and functions. It hurts the ergonomics of generics quite a bit.
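The usual workaround, sketched below with hypothetical names, is to make it a top-level generic function instead of a method:

```
package option

// Option is a minimal stand-in type for the example.
type Option[T any] struct{ t *T }

// Map must be a free function: a Go method cannot introduce a new
// type parameter like U, so this cannot hang off Option[T] directly.
func Map[T, U any](o *Option[T], f func(T) U) *Option[U] {
	if o == nil || o.t == nil {
		return &Option[U]{}
	}
	u := f(*o.t)
	return &Option[U]{t: &u}
}
```

It works, but it breaks method chaining, which is a big part of why Option/Result-style APIs feel clunky in Go.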
And, as you rightly point out, the stdlib is largely pre-generics, so now there's a bunch of duplicate functions, like "sort.Strings" and "slices.Sort", "atomic.Pointer" and "atomic.Value", quite possibly a sync/v2 soon https://github.com/golang/go/issues/71076, etc.
The old non-generic versions also aren't deprecated typically, so they're just there to trap people that don't know "no never use atomic.Value, always use atomic.Pointer".
Ooh! Or remember when a bunch of people acted like they had ascended to heaven for looking down on syntax-highlighting because Rob said something about it being a distraction? Or the swarms blasting me for insisting GOPATH was a nightmare that could only be born of Google's hubris (literally at the same time that `godep` was a thing and Kubernetes was spending significant efforts just fucking dealing with GOPATH.).
Happy to not be in that community, happy to not have to write (or read) Go these days.
And frankly, most of the time I see people gushing about Go, it's for features that trivially exist in most languages that aren't C, or are entirely subjective like "it's easy" (while ignoring, you know, reality).
I used go for years, and while it's able to get small things up and running quickly, bigger projects soon become death-by-a-thousand-cuts.
Debugging is a nightmare because it refuses to even compile if you have unused X (which you always will have when you're debugging and testing "What happens if I comment out this bit?").
The bureaucracy is annoying. The magic filenames are annoying. The magic field names are annoying. The secret hidden panics in the standard library are annoying. The secret behind-your-back heap copies are annoying (and SLOW). All the magic in go eventually becomes annoying, because usually it's a naively repurposed thing (where they depend on something that was designed for a different purpose under different assumptions, but naively decided to depend on its side effects for their own ever-so-slightly-incompatible machinery - like special file names, and capitalization even though not all characters have such a thing .. was it REALLY such a chore to type "pub" for things you wanted exposed?).
Now that AI has gotten good, I'm rather enjoying Rust because I can just quickly ask the AI why my types don't match or a gnarly mutable borrow is happening - rather than spending hours poring over documentation and SO questions.
I personally don't like Go, and it has many shortcomings, but there is a reason it is popular regardless:
Go is a reasonably performant language that makes it pretty straightforward to write reliable, highly concurrent services that don't rely on heavy multithreading - all thanks to the goroutine model.
There really was no other reasonably popular, static, compiled language around when Go came out.
And there still barely is - the only real competitor that sits in a similar space is Java with the new virtual threads.
Languages with async/await promise something similar, but in practice are burdened with a lot of complexity (avoiding blocking in async tasks, function colouring, ...)
I'm not counting Erlang here, because it is a very different type of language...
So I'd say Go is popular despite the myriad of shortcomings, thanks to goroutines and the Google project street cred.
Slowly but surely, the jvm has been closing the gap with Go, through efforts like virtual threads, zgc, lilliput, Leyden, and Valhalla.
The change from Java 8 to 25 is night and day. And the future looks bright. Java is slowly bringing in more language features that make it quite ergonomic to work with.
I'm still traumatised by Java from my earlier career. So many weird patterns, FactoryFactories and Spring Framework and ORMs that work 90% of the time and the 10% is pure pain.
I have no desire to go back to Java no matter how much the language has evolved.
For me C# has filled the void of Java in enterprise/gaming environments.
C# is a highly underrated language that has evolved very quickly over the last decade into a nice mix of OOP and functional.
It's fast enough, easy enough (being very similar now to TypeScript), versatile enough, well-documented (so LLMs do a great job), broad and well-maintained first party libraries, and the team has over time really focused on improving terseness of the language (pattern matching and switch expressions are really one thing I miss a lot when switching between C# and TS).
EF Core is also easily one of the best ORMs: super mature, stable, well-documented, performant, easy to use, and expressive. Having been in the Node ecosystem for the past year, there's really no comparison for building fast with less papercuts (Prisma, Drizzle, etc. all abound with papercuts).
It's too bad that it seems that many folks I've chatted with have a bad taste from .NET Framework (legacy, Windows only) and may have previously worked in C# when it was Windows only and never gave it another look.
While C# is great, the problem with programming languages is that you're not only picking a language, but also a kind of company that uses it, and a kind of person who writes it.
Which means if you write C#, you'll encounter a ton of devs who come from an enterprise, banking or govt background, who think doing a 4-layer enterprise architecture with DTOs and 5-line classes is the only way you can write a CRUD app, and worst of all you'll see a ton of people who learned C# in college a decade ago and refuse to learn anything else.
EF is great, but most people use it because they don't have to learn SQL and databases.
Blazor is great, but most people use it because they don't want to learn Frontend dev, and JS frameworks.
I think you have a point with the types of resources, but in my experience, its also not hard to separate the wheat from the chaff with pretty simple heuristics (though that is likely very different now with AI and cheating!).
"Modern C#" (if we can differentiate that) has a lot of nice amenities for modeling like immutable `record` types and named tuples. I think where EF really shines is that it allows you to model the domain with persistence easily and then use DTOs purely as projections (which is how I use DTOs) into views (e.g. REST API endpoints).
I can't say for the broader ecosystem, but at least in my own use cases, EFC is primarily used for write scenarios and some basic read scenarios. But in almost all of my projects, I end up using CQRS with Dapper on the read side for more complex queries. So I don't think that it's people avoiding SQL; rather it's teams focused on productivity first.
WRT Blazor, I would not recommend it in place of JS except for internal tooling (tried it at one startup and switched to Vue + Vite). But to be fair, modern FE development in JS is an absolute cluster of complexity.
As someone who developed in it at the time I found the reason it died was because they made new, slightly incompatible, versions every new Windows release.
I was so glad it died. It was a weird proprietary replacement for Flash, which itself was weird and proprietary, except the new one was owned by a huge company that publicly stated they wanted to crush Linux and friends.
A big chunk of their strategy at the time was around how to completely own the web. I celebrated every time their attempts failed.
I love C#, but have actually found LLMs to be quite bad at producing idiomatic code because the language is changing so fast and often they don't even know about the latest language(/blazor) features. I constantly have to undo my initial prompt and rewrite it to tell them that we don't use Startup.cs any more, only Program.cs, and Program.cs is a flat file and not a class.
Plus it seems hopeful to think you'll only be working with the "new Java" paradigm when most enterprise software is stuck on older versions. Just like Python: in theory you can make a great new greenfield project, but 80% of the work in the industry is on older or legacy components.
I guess it's reasonable to be hopeful as a Java developer nowadays.
Modern Java communities are slowly adopting the common FP practice "making illegal states unrepresentable" and call it "data oriented programming". Which is nice for those of us who actively use ADT. I no longer need to repeatedly explain "what is Option<?>?" or "why ADT?" whenever I use them; I could just point them to those new resources.
Hopefully, this shift will steer the Java community toward a saner direction than the current cargo cult which believed mutable C-structs (under the guise of an "anemic domain model") + Garbage Collector was OOP.
That may be true, but navigating 30 years of accumulated cruft, fragmented ecosystems and tooling, and ever-evolving syntax and conventions, is enough to drive anyone away. Personally, I never want to deal with classpath hell again, though this may have improved since I last touched Java ~15 years ago.
Go, with all its faults, tries very hard to shun complexity, which I've found over the years to be the most important quality a language can have. I don't want a language with many features. I want a language with the bare essentials that are robust and well designed, a certain degree of flexibility, and for it to get out of my way. Go does this better than any language I've ever used.
I can reasonably likely run a 30-year-old compiled .jar file on the latest Java version. Java is the epitome of backwards- and forward-compatible changes, and the language was grown very carefully so the syntax is not too different; someone who hibernated since Java 7 will probably have no problem reading Java 25 code.
> Go, with all its faults, tries very hard to shun complexity
The whole field is about managing complexity. You don't shun complexity, you give tools to people to be able to manage it.
And Go goes the low end of the spectrum, of not giving enough features to manage that complexity -- it's simplistic, not simple.
I think the optimum is actually at Java - it is a very easy language with not much going on (compared to, say, Scala), but just enough expressivity that you can have efficient and comfortable-to-use libraries for all kinds of stuff (e.g. a completely type-safe SQL DSL)
you try keep the easy things easy + simple, and try to make the hard things easier and simpler, if possible. Simple aint easy
I don't hate java (anymore), it has plenty of utility (like, say... jira). But when I'm writing golang I pretty much never think "oh, I wish I was writing java right now." No thanks.
Well, spring is a whole framework that gives you a lot of stuff, but sure, complexity has to live somewhere - fundamentally so.
Without it, you either write that complexity yourself or fail to even recognize why it is necessary in the first place, e.g. failing to realize the existence of SQL injections, Cross-Site Scripting, etc. Backends have some common requirements and it is pretty rare that your problem wouldn't need these primitives, so as a beginner I would advise learning the framework as well, the same way you would learn how to fly a plane before attempting it.
For other stuff, there is no requirement to use Spring - vanilla java has a bunch of tools and feel free to hack whatever you want!
> The whole field is about managing complexity. You don't shun complexity, you give tools to people to be able to manage it.
Complexity exists in all layers of computing, from the silicon up. While we can't avoid complexity of real world problems, we can certainly minimize the complexity required for their solutions. There are an infinite amount of problems caused primarily by the self-induced complexity of our software stacks and the hardware it runs on. Choosing a high-level language that deliberately tries to avoid these problems is about the only say I have in this matter, since I don't have the skill nor patience to redo decades of difficult work smarter people than me have done.
Just because a language embraces simplicity doesn't mean that it doesn't provide the tools to solve real world problems. Go authors have done a great job of choosing the right set of trade-offs, unlike most other language authors. Most of the time. I still think generics were a mistake.
Being able to create a self contained Kotlin app (JVM) that starts up quickly and uses the same amount of memory as the equivalent golang app would be amazing.
Graal native Image does that (though the compile time is quite long, but you can just run it on the JVM for development with hot reload and whatnot, and only do a native compile at release)
Still an issue. The main problem is for native compilation you have to declare your reflection targets upfront. That can be a headache if your framework doesn't support it.
You can get a large portion of what graal native offers by using AppCDS and compressed object headers.
Well Google isn't really making a ton of new (successful) services these days, so the potential to introduce a new language is quite small unfortunately :). Plus, Go lacks one quite important thing which is ability to do an equivalent of HotSwap in the live service, which is really useful for debugging large complex applications without shutting them down.
Google is 100% writing a whole load of new services, and Go is 13 years old (even older within Google), so it surely has had ample opportunities to take.
As for hot swap, I haven't heard it being used for production, that's mostly for faster development cycles - though I could be wrong. Generally it is safer to bring up the new version, direct requests over, and shut down the old version. It's problematic to just hot swap classes, e.g. if you were to add a new field to one of your classes, how would old instances that lack it behave?
There are real pain points with async/await, but I find the criticism there often overblown. Most of the issues go away if you go pure async, mixing older sync code with async is much more difficult though.
My experience is mostly with C#, but async/await works very well there in my experience. You do need to know some basics there to avoid problem, but that's the case for essentially every kind of concurrency. They all have footguns.
My vote is for Elixir as well, but it's not a competitor for multiple important reasons. There are some languages in that niche, although too small and immature, like Crystal, Nim. Still waiting for something better.
yeah, if the requirement is "makes it pretty straightforward to write reliable, highly concurrent services that don't rely on heavy multithreading", Elixir is a perfect match.
And even without types (which are coming and are looking good), Elixir's pattern matching is a thousand times better than the horror of Go error handling
The only silver bullet we know of is building on existing libraries. These are also non-accidentally the top 3 most popular languages according to any ranking worthy of consideration.
First, we allow main methods to omit the infamous boilerplate of public static void main(String[] args), which simplifies the Hello, World! program to:
class HelloWorld {
    void main() {
        System.out.println("Hello, World!");
    }
}
Second, we introduce a compact form of source file that lets developers get straight to the code, without a superfluous class declaration:
Third, we add a new class in the java.lang package that provides basic line-oriented I/O methods for beginners, thereby replacing the mysterious System.out.println with a simpler form:
I always find 'java is verbose' to be a novice argument from go coders, when there is so much boilerplate on the go side of things that's nicely handled on the java side.
Every fallible function call is 3-5 lines in Go. For any problem which needs to handle errors, the Go code is generally >2x the Java LOC. Go is a language that especially suffers from the "code padding" problem.
It's rich to complain about verbosity coming from Go.
Nonetheless, Java has eased the psvm requirements, you don't even have to explicitly declare a class and a void main method is enough. [1] Not that it would matter for any non-script code.
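For readers outside the Go world, a sketch of the call-and-check pattern the verbosity comparison above refers to (names hypothetical):

```
package config

import (
	"fmt"
	"os"
)

func load(path string) (*os.File, error) {
	// Nearly every fallible call expands to this three-line check,
	// which is where the "code padding" complaint comes from.
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("opening config: %w", err)
	}
	return f, nil
}
```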
An expert Ruby programmer can do wonders and be insanely productive, but I think there is a size from which it doesn't scale as nicely (both from a performance and a larger team perspective).
PHP's frameworks are fantastic and they hide a lot from an otherwise minefield of a language (though steadily improved over the years).
Both are decent choices if this is what you/your developers know.
Absolutely no on Java. Even if the core language has seen improvements over the years, choosing Java almost certainly means that your team will be tied to using proprietary / enterprise tools (IntelliJ) because every time you work at a Java/C# shop, local environments are tied to IDE configurations. Not to mention Spring -- now every code review will render "Large diffs are not rendered by default." in Github because a simple module in Java must be a new class at least >500 LOC long.
Local environments are not tied to IDEs at all, but you are doing yourself a disservice if you don't use a decent IDE irrespective of language - they are a huge productivity boost.
And are you stuck in the XML times or what? Spring Boot is insanely productive - just as a matter of fact, Go is significantly more verbose than Java, with all the unnecessary if errs.
Local environments are not literally tied to IDEs, but they effectively are in any non-trivially sized project. And the reason is because most Java shops really do believe "you are doing yourself a disservice if you don't use a decent IDE irrespective of language." I get along fine with a text editor + CLI tools in Deno, Lua, and Zig. Only when I enter Java world do the wisest of the wise say "yeah there is a CLI, but I don't really know it. I recommend you download IntelliJ and run these configs instead."
Yes Spring Boot is productive. So is Ruby on Rails or Laravel.
Any production-grade project will use either Maven or Gradle for builds. There are CI/CD pipelines, lints, etc, how would all these work if you could only build through an IDE?
Sure, there are some awfully dated companies that still send changed files over email to each other with no version control, I'm sure some of those are stuck with an IDE config, but to be honest where I have seen this most commonly were some Visual Studio projects, not Java. Even though you could find any of these for any other language, you just need to scale your user base up. A language that hasn't even hit 1.0 will have a higher percentage of technically capable users, that's hardly a surprise.
>Only when I enter Java world do the wisest of the wise say "yeah there is a CLI, but I don't really know it. I recommend you download IntelliJ and run these configs instead."
Then they obviously don't know their tooling well, and I would hesitate to call a jr 'the wisest of the wise'
Count Rust. From what I can see, it's becoming very popular in the microservices landscape. Not hard to imagine why. Multithreading is a breeze. Memory use is low. Latency is great.
For the most part I've loved Go since just before 1.0 through today. Nits can surely be picked, but "it's still not good" is a strange take.
I think there is little to no chance it can hold on to its central vision as the creators "age out" of the project, which will make the language worse (and render the tradeoffs pointless).
I think allowing it to become pigeon holed as "a language for writing servers" has cost and will continue to cost important mindshare that instead jumps to Rust or remains in Python or etc.
Maybe it's just fun, like harping on about how bad Visual Basic was, which was true but irrelevant, as the people who needed to do the things it did well got on with doing so.
Fascinating. Coming from C++ I can't imagine not having RAII. That seems so wordy and painful. And that nil comparison is...gross.
I don't get how you can assign an interface to be a pointer to a structure. How does that work? That seems like a compile error. I don't know much about Go interfaces.
There were points in this article that made me feel like Rob Schneider in Demolition Man saying "He doesn't know about the three sea shells!" but there were a couple points made that were valid.
the nil issue. An interface, when assigned a struct pointer, is no longer nil even if that pointer is nil - probably a mistake. Valid point.
append in a func. Definitely one of the biggest issues is that slices are by ref. They did this to save memory and speed but the append issue becomes a monster unless abstracted. Valid point.
err in scope for the whole func. You defined it, of course it is. Better to reuse a generic var than constantly instantiate another. The lack of try catch forces you to think. Not a valid point.
defer. What is the difference between a scope block and a function block? I'll wait.
I like Go and Rust, but sometimes I feel like they lack tools that other languages have just because they WANT to be different, without any real benefit.
Whenever I read Go code, I see a lot more error handling code than usual because the language doesn't have exceptions...
And sometimes Go/Rust code is more complex because it also lacks some OOP tools, and there are no tools to replace them.
So, Go/Rust has a lot more boilerplate code than I would expect from modern languages.
For example, in Delphi, an interface can be implemented by a property:
type
  TMyClass = class(TInterfacedObject, IMyInterface)
  private
    FMyInterfaceImpl: TMyInterfaceImplementation; // A field containing the actual implementation
  public
    constructor Create;
    destructor Destroy; override;
    property MyInterface: IMyInterface read FMyInterfaceImpl implements IMyInterface;
  end;
This isn't possible in Go/Rust. And the Go documentation I read strongly recommended using Composition, without good tools for that.
This "new way is the best way, period ignore good things of the past" is common.
When MySQL didn't have transactions, the documentation said "perform operations atomically" without saying exactly how.
MongoDB didn't have transactions until version 4.0. They said it wasn't important.
When Go didn't have generics, there were a bunch of "patterns" to replace generics... which in practice did not replace them.
The lack of inheritance in Go/Rust leaves me with the same impression. The new patterns do not replace the inheritance or other tools.
"We don't have this tool in the language because people used it wrong in the old languages." Don't worry, people will use the new tools wrong too!
Go allows deferring an implementation of an interface to a member of a type. It is somewhat unintuitive, and I think the field has to be an unnamed one.
Similarly, if a field implements a trait in Rust, you can expose it via `AsRef` and `AsMut`, just return a reference to it.
These are not ideal tools, and I find the Go solution rather unintuitive, but they solve the problems that I would've solved with inheritance in other languages. I rarely use them.
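A minimal sketch of the Go mechanism being described, delegating an interface implementation to an embedded field (all names hypothetical):

```
package main

import "fmt"

type Greeter interface {
	Greet() string
}

// EnglishGreeter carries the actual implementation.
type EnglishGreeter struct{}

func (EnglishGreeter) Greet() string { return "hello" }

// Service satisfies Greeter by embedding an implementation;
// the embedded field's methods are promoted onto Service itself.
type Service struct {
	EnglishGreeter
}

func main() {
	var g Greeter = Service{}
	fmt.Println(g.Greet()) // "hello"
}
```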
Technically, the "billion dollar mistake" (the null reference dates back to 1965) would now be a "10 billion dollar mistake" in 2025. Or, if the cost is measured in terms of housing, it would be a "21 billion dollar mistake".
I agree with just about everything in the post. I've been bit a time or two by the "two flavors of null." That said, my most pleasant and most productive code bases I've worked in have all been Go.
Some learnings. Don't pass sections of your slices to things that mutate them. Anonymous functions need recovers. Know how all goroutines return.
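A minimal sketch of the first of those learnings, the slice-aliasing footgun, where an append that fits in spare capacity silently shares (and mutates) the caller's backing array:

```
package main

import "fmt"

func main() {
	base := make([]int, 3, 4) // len 3, cap 4: room for one more element
	a := base[:3]
	b := append(a, 99) // fits within cap, so b shares a's backing array
	b[0] = -1          // ...which means this also changes a[0]

	fmt.Println(a) // [-1 0 0]
	fmt.Println(b) // [-1 0 0 99]
}
```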
If you don't like Go, then just let go. I hope nobody forces you to use it.
Some critique is definitely valid, but some of it just sounds like they didn't take the time to grasp the language. It's trade-offs all the way. For example, there is a lot I like about Rust, but it's still not my favorite language.
In my opinion, the section on data ownership contained the most egregious and unforgivable example of go's flaws. The behavior of append in that example is the kind of bug-causing or esoteric behavior that should never make it into any programming language. As a regular writer of go code, I understand why this particular quirk of the language exists, but I hope I never truly "grasp" it to the extent that I forgive it.
I'm surprised people in these comments aren't focusing more on the append example.
Disagree. Most critiques of Go I've read have been weak. This one was decent. And I say that as a big enjoyer of Go.
That said I really wish there was a revamp where they did things right in terms of nil, scoping rules, etc. However, they've committed to never breaking existing programs (honorable, understandable), so the design space is extremely limited. I prefer dealing with local awkwardness and even excessive verbosity over systemic issues any day.
Few things are truly forced upon me in life but walking away from everything that I don't like would be foolish. There is compromise everywhere and I don't think entering into a tradeoff means I'm not entitled to have opinions about the things I'm trading off.
I don't think the article sounds like someone didn't take the time to grasp the language. It sounds like it's talking about the kind of thing that really only grates on you after you've seriously used the language for a while.
Sure but life choices are one thing, but this critique is still valuable. I learned a thing or two, and also think go can improve (I understand it's because I don't grok the language but I still prefer map to append in a loop)
In 2015 I wrote an article, "How to complain about Go", to mock this type of article that completely misses the big picture and the real-world impact of an "imperfect" language. Glad it's still relevant :)
This has always been my takeaway with Go. An imperfect language for imperfect developers, chosen for organizations (not people) to ensure a baseline usefulness of their engineers from junior to senior. Do I like it? No. Would I ever choose it willingly? No. But when the options at the time were Javascript or untyped Python, it may have seemed like a more attractive option. Python was also dealing with a nasty 2-to-3 upgrade at the time that looks foolish in comparison to Golang's automatic formatting and upgrade mechanisms.
They are forcing people to write Typescript code like it’s Golang where I am right now (amongst other extremely stupid decisions - only unit test service boundaries, do not pull out logic into pure functions, do not write UI tests, etc.). I really must remember to ask organisations to show me their code before joining them.
(I realise this isn’t who is hiring, but email in bio)
I really try not to throw anymore with typescript, I do error checking like in Go. When used with a Go backend, it makes context switching really easy...
Cross compiling go is easy. Static binaries work everywhere. The cryptographic library is the foundation of various CAs like letsencrypt and is excellent.
The green threads are very interesting since you can create 1000s of them at a low cost and that makes different designs possible.
I think this complaining about defer is a bit trivial. The actual major problem for me is the way imports work. The fact that it knows about github and the way that it's difficult to replace a dependency there with some other one including a local one. The forced layout of files, cmd directories etc etc.
I can live with it all but modules are the things which I have wasted the most time and struggled the most.
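For reference, the mechanism Go does offer for swapping a dependency, including for a local checkout, is the replace directive in go.mod (module paths hypothetical); it works, but it has to be managed by hand:

```
module example.com/myapp

go 1.22

require github.com/some/dep v1.2.3

// Point the dependency at a local checkout instead of the published module.
replace github.com/some/dep => ../dep
```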
In practice, none of these thing mentioned in the article have been an issue for me, at all. (Upvoted anyway)
What has been an issue for me, though, is working with private repositories outside GitHub (and I have to clarify that, because working with private repositories on GitHub is different, because Go has hardcoded settings specifically to make GitHub work).
I had hopes for the GOAUTH environment variable, but either (1) I'm more dumb and blind than I thought I already was, or (2) there's still no way to force Go to fetch a module using SSH without trying an HTTPS request first. And no, `GOPRIVATE="mymodule"` and `GOPROXY="direct"` don't do the trick, not even combined with Git's `insteadOf`.
Definitely not just you. At my previous job we had a need to fetch private Go modules from Gitlab and, later, a self-hosted instance of Forgejo. CTO and I spent a full day or so doing trial and error to get a clean solution. If I recall correctly, we ultimately resorted to each developer adding `GOPRIVATE={module_namespace}` to their environment and adding the following to their `.netrc`:
```
machine {server} # e.g. gitlab.com
login {username}
password {read_only_api_key} # Must be actual key and not an ENV var
```
Worked consistently, but not a solution we were thrilled with.
The absolutely pointless and ridiculous complaints about enums are just plain stupid by this point.
Ok we get it, you want something fancier. Well, you didn't get it. Deal with it. Go has other problems (as pointed out by the OP). I really don't understand how people could care so much about this enum thing. Yes, Rust enums are great, but they are just completely different. Why would I ever compare them and waste energy on that? Different designers, different decisions.
People want sum types because sum types solve a large set of design problems, while being a concept old enough to appear back in SML in the 1980s. One of the best-phrased complaints I've seen against Go's design is the claim that the Go language team ignored 30+ years of programming language design, because the language really does seem to introduce design issues and footguns that were solved decades before work on it even started.
Ouch!! Pascal's lack of popularity certainly isn't due to the fact that it supports such nice enumerated types (or sets for that matter). I think he was just pointing out that such nice things have existed (and been known to exist) for a long time and that it's odd that a new language couldn't have borrowed the feature.
I like Go, but my main annoyance is deciding when to use a pointer or not use a pointer as variable/receiver/argument. And if its an interface variable, it has a pointer to the concrete instance in the interface 'struct'. Some things are canonically passed as pointers like contexts.
It just feels sloppy and I'm worried I'm going to make a mistake.
This confused me too. It is tricky because sometimes it's more performant to copy the data rather than use a pointer, and there's not a clear boundary as to when that is the case. The advice I was given was "profile your code and make your decision data-driven". That didn't make me happy.
Now I always use pointers consistently for the readability.
Yup, that's it. If you're going to modify a field in the receiver, or want to pass a field by reference, you're going to need a pointer. Otherwise, a value will do, unless ... that weird interface thing makes you. I guess that's the problem?
I use value (struct) receivers about 80% of the time. A common misunderstanding is that this hurts performance: most structs are small, so copying them is cheap, and the compiler can often inline the call so the difference rarely matters in practice. Go also automatically takes the address (or dereferences) for you when calling pointer-receiver methods on addressable values, and vice versa. If I see a pointer receiver, I read it as "this can be mutated (or is very large)"; in fact, if I see a pointer, I think "here we go... will it be mutated?". I've written 400,000 LOC in Go and rarely hit this issue.
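A minimal sketch of the rule of thumb above: a pointer receiver when the method mutates the receiver, a value receiver otherwise (names hypothetical):

```
package main

import "fmt"

type Counter struct{ n int }

// Increment needs a pointer receiver because it mutates the receiver.
func (c *Counter) Increment() { c.n++ }

// Value only reads, so a value receiver (a cheap copy) is fine.
func (c Counter) Value() int { return c.n }

func main() {
	c := Counter{}
	c.Increment()          // Go takes &c automatically since c is addressable
	fmt.Println(c.Value()) // 1
}
```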
Recently I was in a meeting where we were considering adopting Go more widely for our backend services, but a couple of the architect-level guys brought up the two-types-of-nil issue and ultimately shot it down. I feel like they were being a little dramatic about it, but it is startling to me that it's 2025 and the team still has not fixed it. If the only thing you value in language design is never breaking existing code, even if by any definition that existing code is already broken, eventually the only thing using your language will be existing code.
This has already been explained many times, but it's so much fun I'll do it again. :-)
So: The way Go presents it is confusing, but this behavior makes sense, is correct, will never be changed, and is undoubtedly depended on by correct programs.
The confusing thing for people used to C++ or C# or Java or Python or most other languages is that in Go nil is a perfectly valid pointer receiver for a method to have. The method resolution lookup happens statically at compile time, and as long as the method doesn't try to deref the pointer, all good.
It still works if you assign to an interface.
package main

import "fmt"

type Dog struct{}
type Cat struct{}

type Animal interface {
	MakeNoise()
}

func (*Dog) MakeNoise() { fmt.Println("bark") }
func (*Cat) MakeNoise() { fmt.Println("meow") }

func main() {
	var d *Dog = nil
	var c *Cat = nil
	var i Animal = d
	var j Animal = c
	d.MakeNoise()
	c.MakeNoise()
	i.MakeNoise()
	j.MakeNoise()
}
This will print
bark
meow
bark
meow
But the interface method lookup can't happen at compile time. So the interface value is actually a pair -- the pointer to the type, and the instance value. The type is not nil, hence the interface value is something like (*Cat, nil) and (*Dog, nil) in each case, which is not the interface zero value, which is (nil, nil).
But it's super confusing because Go coerces a nil struct pointer to a non-nil (type, nil) interface value. There's probably some naming or syntax way to make this clearer.
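To make the footgun itself explicit, here's a variant of main from the example above (same Dog and Animal definitions), since the comparison is the part that actually bites people:

```
func main() {
	var d *Dog = nil
	var i Animal = d

	fmt.Println(d == nil) // true
	fmt.Println(i == nil) // false: i holds the pair (*Dog, nil), not (nil, nil)
}
```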
I deeply, seriously, believe that you should have written the words "It's super confusing", meditated on that for a minute, and then left it at that. It is super confusing. That's it. Nothing else matters. I understand why it is the way it is. I'm not stupid. As you said: it's super confusing, which is relevant when you're picking languages other people at your company (interns, juniors) have to write in.
> “The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
(Side note, Go did fix the scoping of captured variables in for/range loops, which was a backwards-incompatible change, but they justified it by empirically showing it fixed more bugs than it caused (very reasonable). C# made the same change with the same justification earlier, which was inspiration for Go.)
Architect-level is complaining about language quirks? That's low on my priorities for languages. I'd worry more about maturity, tooling support, library support, ease of learning, and availability of developers.
I think our end-state decision, IIRC, was to just expand our usage of TypeScript; which also has Golang beat on all those verticals you list. More mature, way better tooling, way more libraries, easier to hire for, etc.
Though, thinking back, someone should have brought up TypeScript's at least three different ways to represent nil (undefined, null, NaN, a few others). It's at least a little better in TS, because unlike in Go the type-checker doesn't actively lie to you about how many different states of undefined you might be dealing with.
I both agree with these points, and also think it absolutely doesn't matter. Go is the best language if you need to ship quickly and have solid performance. Also Go + AI works amazingly well. So in some ways you can actually move faster compared to languages like Node and Python these days.
I wrote a book on Go, so I'm biased. But when I started using Go more than a decade ago, it really felt like a breath of fresh air. It made coding _fun_ again, less boilerplate-heavy than Java, simple enough to pick up, and performance was generally good.
There's no single 'best language', and it depends on what your use-cases are. But I'd say that for many typical backend tasks, Go is a choice you won't really regret, even if you have some gripes with the language.
I don't agree with most of the article but I believe I know where it comes from.
Golang's biggest shortcoming is that the fact that it touches bare metal isn't visible clearly enough. It provides many high-level features, which creates this ambience of "we got you", but it fails to properly educate its users that they are going to get dirt on their hands.
Take a slice for example: the name suggests "part of", but in reality it's closer to a "box full of pointers" - what happens when you modify pointer+1? Or "two types of nil": there is a difference between having two values (simplification), one being the struct type and the other the address of that struct, and having just a NULL - the same as the difference between knowing a house doesn't exist, and being confident the house exists while saying it sits in the middle of a volcano beneath the ocean.
The Foo99 critique is another example. If you wanted not 99 loops but 10 billion loops, each with a mere 10 bytes, you'd need 100 GiB of memory just to exit it. If you reused the address block, you'd only use... 10 bytes.
I also recommend trying to implement lexical scope defer in C and putting them in threads. That's a big bottle of fun.
I think it ultimately boils down to what kind of engineer one wants to be. I don't like hand-holding and would rather be left on my own with a rain of unit tests following my code, so Go, Zig, C (from the low-level languages) just work for me. Some prefer Rust or high-level abstractions. That's also fine.
But IMO poking at Go that it doesn't hide abstractions is like making fun of football of being child's play because not only it doesn't have horses but also has players using legs instead of mallets.
> I believe I know where it comes from […] poking at Go that it doesn't hide abstractions
Author here.
No, this is not where it comes from. I've been coding C for more than 30 years, Go for maybe 12-15, and currently prefer Rust. I enjoy C++ (yes, really) and getting all those handle-less knives to fit together.
No, my critique of Go is that it did not take the lessons learned from decades of theory, what worked and didn't work.
I don't fault Go for its leaky abstractions in slices, for example. I do fault it for creating bad abstraction APIs in the first place, handing out footguns when they are avoidable. I know to avoid the footgun of appending to slices while other slices of the same array may still be accessible elsewhere. But I think it's indefensible to have created that footgun in the year Go was created.
Live long enough, and anybody will make a silly mistake. "Just don't make a mistake" is not an option. That's why programming language APIs and syntax matters.
As for bare metal: Go manages neither to get the benefits possible from being high level, nor to be suitable for bare metal.
It's a missed opportunity. Because yes, in 2007 it's not like I could have pointed to something that was strictly better for some target use cases.
I don't share the experience of it not being suitable for bare metal. But I do have experience with high-level languages doing similar things through "innovative" thinking. I've seen int overflows in Rust. I've seen libraries implemented in Elixir that waited for a UDP packet to be rebroadcast before sending another.
No Turing complete language will ever prevent people from being idiots.
It's not only programming language API and syntax. It's conceptual complexity, which Go keeps very low. It's remodeling difficulty, which Rust has very high. It's the implicit behavior that you get from a high stack of JS/TS libraries stitched together. It's accessibility of tooling, size of the ecosystem and availability of APIs. And Golang checks many of those boxes.
All the examples you've shown in your article were "huh? isn't this obvious?" to me. With your experience in C, I have no idea why you wouldn't want to reuse the same allocation multiple times, instead of keeping all of them separately while reserving allocation space for possibly less than you need.
Even if you assumed all of this should live on the stack, you'd still crash or bleed memory through implicit allocations that escape the stack.
Add 200 goroutines and how does that (pun intended) stack up?
Is fixing those perceived footguns really a missed opportunity? Go is getting stronger every year, and while it's hated by some (and I get it, some people like the Rust approach better, and that's _fine_), it's used more and more as a mature and stable language.
Many applications don't even worry about GC. And if you're developing some critical application, pair it with Zig and enjoy cross-compilation sweetness with as bare metal as possible with all the pipes that are needed.
Go is the best language for me because
I develop fast with it,
don't have that many bugs,
it builds fast and
I'm usually just fine having a garbage collector
The dependency management is great too
• errors handled by truthy if or try syntax
• all 0s and nils are falsey
• #if PORTABLE put(";}") #end
• modifying! methods like "hi".reverse!()
• GC can be paused/disabled
• many more ease of use QoL enhancements
No, this has been the case as long as Go has been around; then you look and it's some C or C++ developer with specific needs. That's okay, it's not for everyone.
I think with C or C++ devs, those who live in glass houses shouldn’t throw stones.
I would criticize Go from the point of view of more modern languages that have powerful type systems like the ML family, Erlang/Elixir or even the up and coming Gleam. These languages succeed in providing powerful primitives and models for creating good, encapsulating abstractions. ML languages can help one entirely avoid certain errors and understand exactly where a change to code affects other parts of the code — while languages like Erlang provided interesting patterns for handling runtime errors without extensive boilerplate like Go.
It’s a language that hobbles developers under the aegis of “simplicity.” Certainly, there are languages like Python which give too much freedom — and those that are too complex like Rust IMO, but Go is at best a step sideways from such languages. If people have fun or get mileage out of it, that’s fine, but we cannot pretend that it’s really this great tool.
> "... They are likely the two most difficult parts of any design for parametric polymorphism. In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."
And still there are more modern idioms and language features that ML had in the 70s but are missing from Go. But, these have the fatal flaw of Not being Invented Here.
My biggest nitpick against Go was and still is the package management. Rust did it so nicely, and NuGet (C#/.NET) got it so right that Microsoft added it as a built-in thing for Visual Studio; it was originally a plugin and not from Microsoft whatsoever, now they fully own it, which is fine, and it just works.
Cargo is amazing, and you can do amazing things with it, I wish Go would invest in this area more.
Also funny you mention Python, a LOT of Go devs are former Python devs, especially in the early days.
Not really, no one other than the original authors thought of that; the authors had an issue with C++ compile times and were sponsored by their manager to work on this Go side project of theirs.
Google's networking services keep being written in Java/Kotlin, C++, and nowadays Rust.
> Has Go become the new PHP? Every now and then I see an article complaining about Go's shortcomings.
These sorts of articles have been commonplace since even before Go released 1.0 in 2012. In fact, most (if not all) of these complaints could have been written identically back then. The only thing missing from this post that could make me believe it truly was written in 2012 would be a complaint about Go not having generics, which were added a few years ago.
People on HN have been complaining about Go since Go was a weird side-project tucked away at Google that even Google itself didn't care about and didn't bother to dedicate any resources to. Meanwhile, people still keep using it and finding it useful.
The last 20% is also deliberately never done. It's the way they like to run their language. I find it frustrating, but it seems to work for some people.
Go is a pretty good example of how mediocre technology that would never have taken off on its own merits benefits from the rose tinted spectacles that get applied when FAANG starts a project.
I don’t buy this at all. I picked up Go because it has fast compilation speed, produces static binaries, can build useful things without a ton of dependencies, is relatively easy to maintain, and has good tooling baked in. I think this is why it gained adoption vs Dart or whatever other corporate-backed languages I’m forgetting.
Go _excels_ at API glue. Get JSON as a string, unmarshal it into a struct, apply business logic, send JSON to a different API.
Everything for that is built into the standard library and is by default performant up to levels where you really don't need to worry about it before your API glue SaaS is making actual money.
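To make that concrete, here's a minimal sketch of the pattern using only the standard library (the Order struct and field names are made up for illustration):

package main

import (
    "encoding/json"
    "fmt"
    "log"
)

// Order is a hypothetical shape for the incoming JSON.
type Order struct {
    ID    string  `json:"id"`
    Total float64 `json:"total"`
}

func main() {
    incoming := `{"id":"A-1","total":41.5}`

    // JSON string in, struct out.
    var o Order
    if err := json.Unmarshal([]byte(incoming), &o); err != nil {
        log.Fatal(err)
    }

    // Some business logic.
    o.Total += 10

    // Struct back to JSON for the next API.
    out, err := json.Marshal(o)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(out)) // {"id":"A-1","total":51.5}
}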
I tried out one project because of these attributes and then scrapped it fairly quickly in favor of rust. Not enough type safety, too much verbosity. Too much fucking "if err != nil".
The language sits in an awkward space between rust and python where one of them would almost always be a better choice.
I’m almost with you. If there was a language with a fast compiler, excellent tooling, a robust standard library, static binaries, and an F#-like type system, I’d never use anything else.
Rust simply doesn’t cut it for me. I’m hoping Roc might become this, but I’m not holding my breath.
I find Rust's stdlib to be lacking vs Go, and so the average Rust project has a lot of dependencies. To me, Rust feels like the systems-programming equivalent to Node + NPM. Also, the compilation speed was really painful last time I used it. I'm used to the speed of Zig, Hare, Go, Bun. Rust makes me want to jab myself in the eye with a spork.
The other jarring example of this kind of deferring logical thinking to big corps was people defending Apple's soldering of memory and SSD, especially so on this site, until some Chinese lad proved that all the imagined reasons why Apple had to do such and such were BS post-hoc rationalisation.
The same goes for Go, but if you spend enough time, every little while you see the disillusionment of some hardcore fans, even from Go's core team, and they start asking questions, but always starting with things like "I know this is Go and holy reasons exist and I am committing a sin by questioning it, but why X or Y". It is comedy.
Carbon exists only for interoperating with and transitioning off of C++. Creating a new code base in Carbon doesn't really make sense, and the project's readme literally tells you not to do that.
> Existing modern languages already provide an excellent developer experience: Go, Swift, Kotlin, Rust, and many more. Developers that can use one of these existing languages should.
A popular language is always going to attract some hate. Also, these kinds of discussions can be useful for helping the language evolve.
But everyone knows in their heart of hearts that a few small language warts definitely don't outweigh Go's simplicity and convenience. Do I wish it had algebraic data types, sure, sure. Is that a deal-breaker, nah. It's the perfect example of something that's popular for a reason.
It is easily one of the most productive languages. No fuss, no muss, just getting stuff done.
Go nearly gave me carpal tunnel with the vast quantities of almost-the-same-but-not-quite repetitive code patterns it brings along with it. I'd never use it again.
I've written a fair chunk of Go in $dayjob and I have to say it's just... boring. I know that sounds like a weird thing to complain about, but I just can't get enthused about anything I write in Go. It's just... meh. Not sure why that is; I guess it doesn't really click for me like other languages have in the past.
No, it's absolutely meant to be boring by design. That's also a downside, obviously, but it's easily compensated for by working on something that's already challenging. The language staying out of your way is quite useful in such cases.
And it's perfect for most business software, because most businesses are not focused on building good software.
Go has a good-enough standard library, and Go can support a "pile-of-if-statements" architecture. This is all you need.
Most enterprise environments are not handled with enough care to move beyond "pile-of-if-statements". Sure, maybe when the code was new it had a decent architecture, but soon the original developers left and then the next wave came in and they had different ideas and dreamed of a "rewrite", which they sneakily started but never finished, then they left, and the 3rd wave of developers came in and by that point the code was a mess and so now they just throw if-statements onto the pile until the Jira tickets are closed, and the company chugs along with its shitty software, and if the company ever leaks the personal data of 100 million people, they aren't financially liable.
This post is just attention-grabbing rage bait. The listed issues are superficial unless the person is a bit far into the spectrum. There is no good data point that would weigh the issues against real-world problems, i.e. how much they cost. Even the point about RAM is weak without the data.
I get bitten by the "nil interface" problem if I'm not paying a lot of attention, since golang makes a distinction between the "enclosing type" and the "receiver type":
package main

import "fmt"

type Foo struct {
    Name string
}

func (f *Foo) Kaboom() {
    fmt.Printf("hello from Kaboom, f=%s\n", f.Name)
}

func NewKaboom() interface{ Kaboom() } {
    var p *Foo = nil
    return p
}

func main() {
    obj := NewKaboom()
    fmt.Printf("obj == nil? %v\n", obj == nil)
    // The next line will panic (because method receives nil *Foo)
    obj.Kaboom()
}
go run fred.go
obj == nil? false
panic: runtime error: invalid memory address or nil pointer dereference
I think a lot of people got on the Go train because of Google and not necessarily because it was good. There was a big adoption in Chinese tech scene for example. I personally think Rust/Go/Zig and other modern languages suffer a bit from trying too hard not to be C/C++/Java.
Go was a breath of fresh air and pretty usable right from the start. It felt like a neat little language with - finally - a modern standard library. Fifteen years ago, that was a welcome change. I think it's no surprise that Go and Node.js both got started and took off around the same time. People were looking for something modern, lightweight, and simple, and both projects delivered that.
> If you stuff random binary data into a string, Go just steams along, as described in this post.
> Over the decades I have lost data to tools skipping non-UTF-8 filenames. I should not be blamed for having files that were named before UTF-8 existed.
What I intended to say with this is that ignoring the problem of invalid UTF-8 (which could be valid ISO-8859-1) with no error handling, or the other way around, has lost me data in the past.
Compare this to Rust, where a path name is a different type than a mere string. And if you need to treat it like a string and you don't care if it's "a bit wrong" (because it's only being shown to the user), then you can call `.to_string_lossy()`. But it's harder to accidentally fail to handle that case when an exact name match does matter.
When exactness matters, `.to_str()` returns `Option<&str>`, so the caller is forced to deal with the situation that the file name may not be UTF-8.
Being sloppy with file name encodings is how data is lost. Go is sloppy with strings of all kinds, file names included.
Thanks for your reply. I understand that encoding the character set in the type system is more explicit and can help find bugs.
But forcing all strings to be UTF-8 does not magically help with the issue you described. In practice I've often seen the opposite: Now you have to write two code paths, one for UTF-8 and one for everything else. And the second one is ignored in practice because it is annoying to write. For example, I built the web server project in your other submission (very cool!) and gave it a tar file that has a non-UTF-8 name. There is no special handling happening, I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Forcing UTF-8 does not "fix" compatibility in strange edge cases, it just breaks them all. The best approach is to treat data as opaque bytes unless there is a good reason not to. Which is what Go does, so I think it is unfair to blame Go for this particular reason instead of the backup applications.
> It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
You can debate whether it is sloppy but I think an error is much better than silently corrupting data.
> The best approach is to treat data as opaque bytes unless there is a good reason not to
This doesn't seem like a good approach when dealing with strings which are not just blobs of bytes. They have an encoding and generally you want ways to, for instance, convert a string to upper/lowercase.
Can't say I know the best way here. But Rust does this better than anything I've seen.
I don't think you need two code paths. Maybe your program can live its entire life never converting away from the original form. Say you read from disk, pick out just the filename, and give to an archive library.
There's no need to ever convert that to a "string". Yes, it could have been a byte array, but taking out the file name (or maybe final dir plus file name) are string operations, just not necessarily on UTF-8 strings.
And like I said, for all use cases where it just needs to be shown to users, the "lossy" version is fine.
> I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Haha, touche. But yes, it's less sloppy. Would you prefer that the files were silently skipped? You've created your archive, you started the webserver, but you just can't get it to deliver the page you want.
In order for tarweb to support non-UTF-8 in filenames, the programmer has to actually think about what that means. I don't think it means doing a lossy conversion, because that's not what the file name was, and it's not merely for human display. And it should probably not be the bytes either, because tools will likely want to send UTF-8 encoded.
Or they don't. In either case unless that's designed, implemented, and tested, non-UTF-8 in filenames should probably be seen as malformed input. For something that uses a tarfile for the duration of the process's life, that probably means rejecting it, and asking the user to roll back to a previous working version or something.
> Forcing UTF-8 does not "fix" compatibility in strange edge cases
Yup. Still better than silently corrupting.
Compare this to how for Rust kernel work they apparently had to implement a new Vec equivalent, because dealing with allocation failures is a different thing in user and kernel space[1], and Vec push can't fail.
Similarly, Go string operations cannot fail. And memory allocation failures have reasons to be handled that string operations don't.
[1] a big separate topic. Nobody (almost) runs with overcommit off.
But there is no silent corruption when you pass the data as opaque bytes, you just get some placeholder symbols when displayed. This is how I see the file in my terminal and I can rm it just fine.
And yes, question marks in the terminal are way better than applications not working at all.
The case of non-UTF-8 being skipped is usually a characteristic of applications written in languages that don't use bytes for their default string type, not the other way around. This has bitten me multiple times with Python2/3 libraries.
Another annoying thing Go proponents say is that it is simple. It is not. And even if it was, the code you write with a simple language is not automatically simple. Take the k8s control plane for example; some of the most convoluted and bulky code that exists, and it’s all in Go.
I wrote a small explainer on the typed-vs-untyped nil issue. It is one of the things that can actually bite you in production. Easy to miss it in code review.
If you run the code, you will see that calling read() on ControlMessage causes a panic even though there is a nil check. However, it doesn't happen for Message. See the read() implementation for Message: we need to have a nil check inside pointer-receiver struct methods. This is the simplest solution. We have a linter for this. The ecosystem also helps, e.g. protobuf-generated code also has nil checks inside pointer receivers.
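Since the code itself isn't pasted here, this is a minimal sketch of the behaviour described; ControlMessage, Message, and read() are the names from above, while the reader interface and payload field are stand-ins of my own:

package main

import "fmt"

type reader interface{ read() string }

type Message struct{ payload string }

// Guard inside the pointer receiver, as described above: a nil receiver is tolerated.
func (m *Message) read() string {
    if m == nil {
        return ""
    }
    return m.payload
}

type ControlMessage struct{ payload string }

// No guard: a nil receiver gets dereferenced and panics.
func (c *ControlMessage) read() string { return c.payload }

func main() {
    var m *Message
    var r reader = m // the interface now holds (*Message, nil): a typed nil
    if r != nil {    // passes, because the interface value itself is not nil
        fmt.Println(r.read()) // fine: read() checks for a nil receiver
    }

    var c *ControlMessage
    r = c
    if r != nil { // passes for the same reason
        fmt.Println(r.read()) // panics: nil *ControlMessage dereferenced
    }
}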
After spending some time in lower level languages Go IMO makes much more sense. Your example:
First one - you have an address to a struct, you pass it, all good.
Second case: you set the address of the struct to "nil". What is nil? It's an address like any other. Maybe it's 0x000000 or something else. At this point, from the memory perspective it exists, but the OS will prevent you from touching anything that the NULL pointer allows you to touch.
Because you don't touch ANYTHING nothing fails. It's like a deadly poison in a box you don't open.
Third example is the same as the second one. You have an IMessage, but it points to NULL (instead of NULL pointing to deadly poison).
And in fourth, you finally open the box.
Is it magic knowledge? I don't think so, but I'm also not surprised about how you can modify data through slice passing.
IMO the biggest Go shortcoming is selling itself as a high-level language while it touches more bare metal than people are used to touching.
> Wait, what? Why is err reused for foo2()? Is there’s something subtle I’m not seeing? Even if we change that to :=, we’re left to wonder why err is in scope for (potentially) the rest of the function. Why? Is it read later?
The first time it's assigned nil; the second time it's overwritten in case there's an error in the second function. I don't see the author's issue. It's very explicit.
Author here: I'm not talking about the value. I'm talking about the lifetime of the variable.
After checking for nil, there's no reason `err` should still be in scope. That's why it's recommended to write `if err := foo(); err != nil`, because after that, one cannot even accidentally refer to `err`.
I'm giving examples where Go syntactically does not allow you to limit the lifetime of the variable. The variable, not its value.
You are describing what happens. I have no problem with what happens, but with the language.
I gave an example in the post, but to spell it out: Because a typo variable is not caught, e.g. as an unused variable.
The example from the blog post would fail, because `return err` referred to an `err` that was no longer in scope. It would syntactically prevent accidentally writing `foo99()` instead of `err := foo99()`.
I'll have to read the rest later, but this was an unforced error on the author's part. There is nothing unclear about that block of code. If err isn't nil, it was set and we're no longer in the function. If it is nil, why waste another interface handle?
Anyone want to try to explain what he's on about with the first example?
bar, err := foo()
if err != nil {
    return err
}
if err := foo2(); err != nil {
    return err
}
The above (which declares a new value of err scoped to the second if statement) should compile right? What is it that he's complaining about?
EDIT: OK, I think I understand; there's no easy way to have `bar` be function-scoped and `err` be if-scoped.
I mean, I'm with him on the interfaces. But the "append" thing just seems like ranting to me. In his example, `a` is a local variable; why would assigning a local variable be expected to change the value in the caller? Would you expect the following to work?
func fn(a *MyStruct) {
    a = &MyStruct{ /* ... */ }
}
If not, why would you expect `a = append(a, ...)` to work?
Oh, I see. I mean, yeah, the relationships between slices and arrays is somewhat subtle; but it buys you some power as well. I came to golang after decades of C, so I didn't have much trouble with the concept.
I'm afraid I can only consider that a taste thing.
EDIT: One thing I don't consider a taste thing is the lack of the equivalent of a "const *". The problem with the slice thing is that you can sort of sometimes change things but not really. It would be nice if you could be forced to pass either a pointer to a slice (such that you can actually allocate a new backing array and point to it), or a non-modifiable slice (such that you know the function isn't going to change the slice behind your back).
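For what it's worth, the "pointer to a slice" half of that wish already works today; the non-modifiable half has no language support. A tiny sketch of the former (my own example):

package main

import "fmt"

// appendVia takes *[]int so that any reallocation of the backing array
// is always visible to the caller, instead of silently diverging.
func appendVia(s *[]int, v int) {
    *s = append(*s, v)
}

func main() {
    nums := make([]int, 0, 1)
    appendVia(&nums, 1)
    appendVia(&nums, 2) // forces a reallocation; the caller still sees it
    fmt.Println(nums)   // [1 2]
}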
That might be it, but I wondered about that one, as well as the append complaint. It seems like the author disagrees with the scoping rules, but they aren't really any different from a lot of other languages.
If someone really doesn't like the reuse of err, there's no reason why they couldn't create separate variables, e.g. err_foo and err_foo2. There's no requirement to reuse err.
Well no, the second "if" statement is a red herring. Both of the following work:
bar, err := foo()
if err != nil {
    return err
}
if err = foo2(); err != nil {
    return err
}
and
bar, err := foo()
if err != nil {
    return err
}
if err := foo2(); err != nil {
    return err
}
He even says as much:
> Even if we change that to :=, we’re left to wonder why err is in scope for (potentially) the rest of the function. Why? Is it read later?
My initial reaction was: "The first `err` is function-scope because the programmer made it function-scope; he clearly knows you can make them local to the if, so what's he on about?"
It was only when I tried to rewrite the code to make the first `err` if-scope that I realized the problem I guess he has: OK, how do you make both `err` variable if-scope while making `bar` function-scope? You'd have to do something like this:
var bar MyType
if lbar, err := foo(); err != nil {
    return err
} else {
    bar = lbar
}
Which is a lot of cruft to add just to restrict the scope of `err`.
None of these objections seem at all serious to me, and then the piece wraps up with "Why do I care about memory use? RAM is cheap." Excuse me? Memory bloat affects performance and user experience with every operation. Careful attention to software engineering should avoid or minimize these problems and emphasize the value of being tidy with memory use.
As a long-time Go programmer I didn't understand the comment about two types of nil because I have never experienced that issue, so I dug into it.
It turns out to be nothing but a misunderstanding of what the fmt.Println() statement is actually doing. If we use a more advanced print statement then everything becomes extremely clear:
package main

import (
    "fmt"

    "github.com/k0kubun/pp/v3"
)

type I interface{}
type S struct{}

func main() {
    var i I
    var s *S
    pp.Println(s, i)                        // (*main.S)(nil) nil
    fmt.Println(s == nil, i == nil, s == i) // true true false
    i = s
    pp.Println(s, i)                        // (*main.S)(nil) (*main.S)(nil)
    fmt.Println(s == nil, i == nil, s == i) // true false true
}
The author of this post has noted a convenience feature, namely that fmt.Println() tells you the state of the thing in the interface and not the state of the interface, mistaken it for a fundamental design issue, and written a screed about a language issue that literally doesn't exist.
Being charitable, I guess the author could actually be complaining that putting a nil pointer inside a nil interface is confusing. It is indeed confusing, but it doesn't mean there are "two types" of nil. Nil just means empty.
The author is showing the result of s == nil and i == nil, which are checks that you would have to do almost everywhere (the so-called "billion-dollar mistake").
It's not about Printf. It's about how these two different kinds of nil values sometimes compare equal to nil, sometimes compare equal to each other, and sometimes not.
Yes, there is a real internal difference between the two that you can print. But that is exactly the point the author is making.
It's a contrived example which I have never really experienced in my own code (and at this point, I've written a lot of it) or any of my team's code.
Go had some poor design features, many of which have now been fixed, some of which can't be fixed. It's fine to warn people about those. But inventing intentionally confusing examples and then complaining about them is pretty close to strawmanning.
> It's a contrived example which I have never really experienced in my own code (and at this point, I've written a lot of it) or any of my team's code.
It's confusing enough that it has an FAQ entry and that people tried to get it changed for Go 2. Evidently people are running into this. (I for sure did.)
I believe you that you've never hit it, it's definitely not an everyday problem. But they didn't make it up, it does bite people from time to time.
It's sort of a known sharp edge that people occasionally cut themselves on. No language is perfect, but when people run into them they rightfully complain about it
That's really my problem with these kind of critiques.
EVERY language has certain pitfalls like this. Back when I wrote PHP for 20+ years I had a Google doc full of every stupid PHP pitfall I came across.
And they were almost always a combination of something silly in the language and horrible design by the developer, or trying to take a shortcut and losing the plot.
Author here. No, I didn't misunderstand it. Interface variables have two types of nil. Untyped, which does compare to nil, and typed, which does not.
What are you trying to clarify by printing the types? I know what the types are, and that's why I could provide the succinct weird example. I know what the result of the comparisons are, and why.
And the "why" is "because there are two types of nil, because it's a bad language choice".
I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
> Author here. No, I didn't misunderstand it. Interface variables have two types of nil. Untyped, which does compare to nil, and typed, which does not.
There aren't two types of nil. Would you call an empty bucket and an empty cup "two types of empty"?
There is one nil, which means different things in different contexts. You're muddying the waters and making something which is actually quite straightforward (an interface can contain other things, including things that are themselves empty) seem complicated.
> I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
Sure, I've seen pointer-to-pointer dereferences fail for the same reason in C. It's not particularly different.
> Though Python is almost entirely refcounted, so one can pretty much rely on the __del__ finalizer being called.
yeah no. you need an acyclic structure to maybe guarantee this, in CPython. other Python implementations are more normal in that you shouldn't rely on finalizers at all.
I love Python, but the sheer number of caveats and warnings for __del__ makes me question if this person has ever read the docs [0]. My favorite WTF:
> It is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. This is called object resurrection.
Show me a programming language that does not have annoying flaws and I'll show you a programming language that does not yet exist, and probably won't ever exist.
I really like Go. It scratches every itch that I have. Is it the language for your problems? I don't know, but very possibly that answer is "no".
Go is easy to learn, very simple (this is a strong feature, for me) and if you want something more, you can code that up pretty quickly.
The blog article author lost me completely when they said this:
> Why do I care about memory use? RAM is cheap.
That is something that only the inexperienced say. At scale, nothing is cheap; there is no cheap resource if you are writing software for scale or for customers. Often, single bytes count. RAM usage counts. CPU cycles count. Allocations count. People want to pretend that they don't matter because it makes their job easier, but if you want to write performant software, you better have that those cpu cache lines in mind, and if you have those in mind, you have memory usage of your types in mind.
What does this mean? Do they just use recover and keep bad data?
> The standard library does that. fmt.Print when calling .String(), and the standard library HTTP server does that, for exceptions in the HTTP handlers.
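For the HTTP half of that, here's a small self-contained sketch (my own, not the author's) showing the net/http server recovering a handler panic while the process carries on:

package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"
)

func main() {
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        panic("boom") // recovered inside net/http: logged, connection closed
    }))
    defer srv.Close()

    _, err := http.Get(srv.URL)
    fmt.Println("client saw:", err) // a connection error, not a crash
    fmt.Println("server process is still alive")
}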
Apart from this, most of it doesn't seem like that big of a deal, except for `append`, which truly has bad syntax. If you're doing an in-place append, don't return the value.
As someone who has written Go for >10 years and has written some bigger codebases in it, these are my takes on this article's claims:
:Error variable Scope
-> Yes, it can be confusing at the beginning, but if you have some experience it doesn't really matter. Would it be cool to scope it down? Sure, but it feels like something is being blown up into an "issue" here, where I would see other things as a lot more important for the Go team to revisit. Regarding the error handling in Go, some hate it, some love it: I personally like it (yes, I really do), so I think it's more a preference than a "bad" thing.
:Two types of nil
-> Funny, I never encountered this in >10 years of Go with A LOT of work in pointer juggling, so I wonder in which reality this hits you where it can't be avoided. Though it is confusing, I admit.
:It’s not portable
-> I have no opinion here since I work on Unix systems only and my compiled binaries are target-specific. Shrug. I don't see any issue here either.
:append with no defined ownership
-> I mean... seriously? Your test case, while the results may be unexpected, is a super weird one. Why would you append in the middle like that? If you think about what these functions do under the hood, your attempt actually feels like you WANT to produce strange behaviour, and things like that can be done in any language.
:defer is dumb
-> Here I 100% agree - from my POV it leads to massive resource wasting, and in certain situations it can also create strange errors, but I'm not motivated to explain this - I'll just say that defer, while it seems useful, is from my POV a bad thing and should not be used.
:The standard library swallows exceptions, so all hope is lost
-> "So all hope is lost" i mean you already left the realm of objectiveness long before tbut this really tops it. I wrote some quite big go applications and i never had a situation where i could not handle an exception simply by adjusting my code in a way that i prevent it from even happening. Again - i feel like someone is just in search of things to complain that could simply be avoided. (also in case someone comes up with a super specific probably once in a million case, well alrways keep in mind that language design doesnt orient on the least occuring thing).
:Sometimes things aren’t UTF-8
-> I won't bother to read another whole article; if it's important, include an example. I have dealt with different encodings (web crawler) and I could handle all of them.
:Memory use
-> What you describe is one of the design decisions I'm not absolutely happy with: the memory handling. But then, one of my Go projects is an in-memory graph storage/database, which in one of my cases ran for ~2 years without restart and had about 18GB of dataset stored in it. It has a lot of mutex handling (regarding your earlier complaint about exceptions: never had one), and it ran as the backend of an internet-facing service, by the way, so it wasn't just fed internal data.
--------------------
Finally I want to say: often things come down to personal preference. I could spend days raging about JavaScript, Java, C++ or some other languages, but what for? Pick the language that fits your use case and your liking; don't pick one that doesn't and complain about it.
Also, just to show I'm not just a big "golang is the best" fanboy, because it isn't - there are things to criticize, like the previously mentioned memory handling.
While I still think you just created memory leaks in your app, Go had this idea of "arenas" which would enable the code to partly manage memory itself and therefore build much more memory-efficient applications. This has stalled lately and I REALLY hope the Go team will pick it up again and make it a stable thing to use. I would probably update all of my bigger codebases to use it.
Also - and that's something that's been annoying me A LOT because it made me spend a lot of hours - the Go plugin system. I wrote an architecture to orchestrate processing, and for certain reasons I wanted to implement the orchestrated "things" as plugins. But the plugin system as it is right now can only be described as the torments of hell. I messed with it for about 3 years until I recently dropped the plugin functionality and added the stuff directly. Plugins are a very powerful thing and a good plugin system could be a great thing, but in its current state I would recommend no one touch it.
These are just two points, and I could list some more, but the point I want to get to is: there are real things you can criticize instead of things that you create yourself, or language design decisions that you just don't like. I'm not sure if such articles are the rage of someone who is just bored, or ragebait to make people read them. Either way it's not helping anyone.
Other commenters have. I have. Not everyone will. Doesn't make it good.
:append with no defined ownership
I've seen it. Of course one can just "not do that", but wouldn't it be nice if it were syntactically prevented?
:It’s not portable ("just Unix")
I also only work on Unix systems. But if you only work on amd64 Linux, then portability is not a concern. Supporting BSD and Linux is where I encounter this mess.
:All hope is lost
All hope is lost specifically on the idea of not needing to write exception-safe code. If panics did always crash the program, then that'd be fine. But no coding standard can save you from the standard library, so yes, all hope of being able to pretend that panic exits the program is lost.
You don't need to read my blog posts. Looking forward to reading your, much better, critique.
I say switching to Go is like a different kind of Zen. It takes time, to settle in and get in the flow of Go... Unlike the others, the LSP is fast, the developer, not so much. Once you've lost all will to live you become quite proficient at it. /s
I've been writing small Go utilities for myself since the Go minor version number was <10
I can still check out the code to any of them, open it and it'll look the same as modern code. I can also compile all of them with the latest compiler (1.25?) and it'll just work.
No need to investigate 5 years of package manager changes and new frameworks.
I was like "Have I ever actually heard that?" and the answer turns out to be "No", so now I have (it's a Metallica track about suicidal ideation; whether it's a good idea to listen to it while writing Go I could not say, and YMMV).
defer is no worse than Java's try-with-resources. Neither is true RAII, because in both cases you, the caller, need to remember to write the wordy form ("try (...) {" or "defer ...") instead of the plain form ("..."), which will still compile but silently do the wrong thing.
Sure, true RAII would be an improvement over both, but the author's point is that Java is an improvement over Go, because the resource release is lexically scoped, not function-scoped. Imagine if Java's `try (...) { }` didn't release the resource when the try block ends, but rather when the wrapping method returns. That's how Go's defer works.
defer is not block scoped in Go, it's function scoped. So if you want to defer a mutex unlock it will only be executed at the end of the function even if placed in a block. This means you can't do this (sketch):
func (f *Foo) foo() {
    // critical section
    {
        f.mutex.Lock()
        defer f.mutex.Unlock()
        // something with the shared resource
    }
    // The lock is still held here, but you probably didn't want that
}
You can call Unlock directly, but then if there's a panic it won't be unlocked like it would be in the above. That can be an issue if something higher in the call stack prevents the panic from crashing the entire program; it would leave your system in a bad state.
This is the key problem with defer. It operates a lot like a finally block, but only on function exit which means it's not actually suited to the task.
And as the sibling pointed out, you could use an anonymous function that's immediately called, but that's just awkward, even if it has become idiomatic.
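For reference, a sketch of that idiom, with a hypothetical Foo and mutex mirroring the example above:

package main

import (
    "fmt"
    "sync"
)

type Foo struct {
    mutex sync.Mutex
}

func (f *Foo) foo() {
    // Wrap the critical section in an immediately-invoked closure so the
    // deferred Unlock runs when the closure returns (even on a panic),
    // not at the end of foo.
    func() {
        f.mutex.Lock()
        defer f.mutex.Unlock()
        // something with the shared resource
    }()
    // The lock has already been released here.
    fmt.Println("lock released before foo returns")
}

func main() {
    (&Foo{}).foo()
}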
Is there anything that soothes devs more than developing a superiority complex of their particular tooling? And then the unquenchable thirst to bash "downwards"? I find it so utterly pathetic.
Man... the author comes across as a person who is butthurt that Go took his girlfriend out on a date. He comes across as the typical Rust fanboy that whines about Go non-stop...
Look up his previous posts. "I finally got around to learn Rust. It's amazing." Guessed it! Oh, how easy they always are to spot. They are always so angry when it comes down to Go. Jealousy? Who knows...
If there is one constant in a lot of Go rant posts, it's typical Rust fanboys that just cannot understand that few people care about Rust.
*If you do not like Go, nobody forces you to use it.*
This type of Go bashing from Rust users, has been going on for the last 10+ years. Where we had Rust users evangelizing Rust as the one and only solution to every problem and telling everybody that their code needed to be rewritten in Rust.
Most of the points mentioned are literally the quirks of a language. Any language has quirks. Do we need to start ranting about Rust? No, we do not care about Rust's quirks because we all have better things to do.
Oh, let's not forget the typical GC ranting, because of course Rust users need to rant about the GC. I mean, somebody needs to gently point us to our only savior, Rust. Meanwhile, most of us do not give two cents about the GC. It gets the job done and rarely becomes an issue for 99.9% of us.
Go is a simple language that provides a lot of benefits to most developers that use it. We do not need a jackhammer when a basic hammer will do.
Can it have improvements? Sure. Every language can have improvements, but I am more than happy with what it has.
See, we can write a post without needing to put down a language or rant about that language's quirks. Just focus on programming in the language that you so clearly love.
I don't really care if you want that. Everyone should know that that's just the way slices work. Nothing more nothing less.
I really don't give a damn about that; I just know how slices behave, because I learned the language. That's what you should do when you are programming with it (professionally).
I am fine with the subsequent example, too. If you read up about slices, then that's how they are defined and how they work. I am not judging, I am just using the language as it is presented to me.
Then you seem to be fine with inconsistent ownership and a behavioral dependence on the underlying data rather than the structure.
You really don't see why people would point out a definition that changes underneath you as a bad definition? They're not arguing that the documentation is wrong.
The definition is perfectly consistent. append is in-place if there's enough capacity (and the programmer can check this directly with cap() if they want), and otherwise it allocates a new backing array.
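A self-contained illustration of exactly that behaviour (my own example, not the author's):

package main

import "fmt"

func main() {
    // Backing array has room for 4 elements; only 2 are in use.
    a := make([]int, 2, 4)

    b := append(a, 10) // fits within cap(a): writes into a's backing array
    b[0] = 99          // visible through a as well

    c := append(b, 20, 30, 40) // exceeds cap: a new backing array is allocated
    c[0] = 7                   // no longer visible through a or b

    fmt.Println(a, b, c) // [99 0] [99 0 10] [7 0 10 20 30 40]
}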
The author obviously knows that too, otherwise they wouldn't have written about it. All of these issues are just how the language works, and that's the problem.
This was an interesting read and very educational in my case, but each time I read an article criticizing a programming language it's written by someone who hasn't done anything better.
It's a shame because it is just as effective as pissing in the wind.
I’ve never been a rock star, but I think Creed sucks.
I really don’t like your logic. I’m not a Michelin chef, but I’m qualified to say that a restaurant ruined my dessert. While I probably couldn’t make a crème brûlée any better than theirs, I can still tell that they screwed it up compared to their competitor next door.
For example, I love Python, but it’s going to be inherently slow in places because `sum(list)` has to check the type of every single item to see what __add__ function to call. Doesn’t matter if they’re all integers; there’s no way to prove to the interpreter that a string couldn’t have sneaked in there, so the interpreter has to check each and every time.
See? I’ve never written a language, let alone one as popular as Python, but I’m still qualified to point out its shortcomings compared to other languages.
If you're saying someone can't credibly criticize a language without having designed a language themselves, I'll ask that you present your body of work of programming language criticisms so I know if you have "produced something better" in the programming language criticism space.
Of course, by your reasoning this also means you yourself have designed a language.
I'll leave out repeating your colorful language if you haven't done any of these things.
> If you're saying someone can't credibly criticize a language without having designed a language themselves
Actually I think that's a reasonable argument. I've not designed a language myself (other than toy experiments) so I'm hesitant to denigrate other people's design choices because even with my limited experience I'm aware that there are always compromises.
Similarly, I'm not impressed by literary critics whose own writing is unimpressive.
Who would be qualified to judge those critics' writing as good or bad? Critics already qualified as good writers? Who vetted them, then? It'd have to be a stream of certified good authors all the way back.
No, I stick by my position. I may not be able to do any better, but I can tell when something’s not good.
(I have no opinion on Go. I’ve barely used it. This is only on the general principle of being able to judge something you couldn’t do yourself. I mean, the Olympics have gymnastic judges who are not gold medalists.)
Congratulations, you have found a few pain points in a language. Now as a scientific exercise apply the same reasoning to a few others. Will the number of issues you find multiplied by their importance be greater or lower than the score for Go? There you go, that's the entire problem - Go is bad, but there is no viable alternative in general.
I've been using Go more or less in every full-time job I've had since pre-1.0. It's simple for people on the team to pick up the basics, it generally chugs along (I'm rarely worried about updating to latest version of Go), it has most useful things built in, it compiles fast. Concurrency is tricky but if you spend some time with it, it's nice to express data flow in Go. The type system is most of the time very convenient, if sometimes a bit verbose. Just all-around a trusty tool in the belt.
But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I see more leniency in fixing quirks in the last few years, like at some point I didn't think we'd ever see generics, or custom iterators, etc.
The points about RAM and portability seem mostly like personal grievances though. If it was better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.
But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
> Concurrency is tricky
The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Python is a mess with the gil and async libraries that are hard to reason with. C,C++,Java etc need external libraries to implement threading which cant be reasoned with in the context of the language itself.
So, go is a perfect fit for the http server (or service) usecase and in my experience there is no parallel.
> Java etc need external libraries to implement threading which cant be reasoned with in the context of the language itself.
What do you mean by this for Java? The library is the runtime that ships with Java, and while they're OS threads under the hood, the abstraction isn't all that leaky, and it doesn't feel like they're actually outside the JVM.
Working with them can be a bit clunky, though.
Also, Java is one of the only languages with actually decent concurrent data structures right out of the box.
I think parent means they're (mostly) not supported via keywords. But you can use Kotlin and get that.
> Java etc need external libraries to implement threading
Java does not need external libraries to implement threading, it's baked into the language and its standard libraries.
> So, go is a perfect fit for the http server (or service) usecase and in my experience there is no parallel.
Elixir handling 2 million websocket connections on a single machine back in 2015 would like to have a word.[1] This is largely thanks to the Erlang runtime it sits atop.
Having written some tricky Go (I implemented Raft for a class) and a lot of Elixir (professional development), it is my experience that Go's concurrency model works for a few cases but largely sucks in others and is way easier to write footguns in Go than it ought to be.
[1]: https://phoenixframework.org/blog/the-road-to-2-million-webs...
I worked in both Elixir and Go. I still think Elixir is best for concurrency.
I recently realized that there is no easy way to "bubble up a goroutine error", and I wrote some code to make sure that was possible, and that's when I realize, as usual, that I'm rewriting part of the OTP library.
The whole supervisor mechanism is so valuable for concurrency.
> using the CSP-like (goroutine/channel) formalism which is easy to reason with
I thought it was a seldom mentioned fact in Go that CSP systems are impossible to reason about outside of toy projects so everyone uses mutexes and such for systemic coordination.
I'm not sure I've even seen channels in a production application used for anything more than stopping a goroutine, collecting workgroup results, or something equally localized.
With all due respect, there are many languages in popular use that can do this, in many cases better than golang.
I believe it’s the only system you know. But it’s far from the only one.
> there are many languages in popular use that can do this, in many cases better than golang
I'd love to see a list of these, with any references you can provide.
Erlang, Elixir, Ada, plenty of others. Erlang and Ada predate Go by several decades, too.
You wanted sources, here's the chapter on tasks and synchronization in the Ada LRM: http://www.ada-auth.org/standards/22rm/html/RM-9.html
For Erlang and Elixir, concurrent programming is pretty much their thing so grab any book or tutorial on them and you'll be introduced to how they handle it.
Please elaborate or give some examples to back your claim?
There aren't that many. C/C++ and Rust all map to OS threads and don't have CSP-type concurrency built in.
In Go's category, there's Java, Haskell, OCaml, Julia, Nim, Crystal, Pony...
Dynamic languages are more likely to have green threads but aren't Go replacements.
> There's not that many.
You list three that don't, and then you go on to list seven languages that do.
Yes, not many languages support concurrency like Go does...
And of those seven, how many are mainstream? A single one...
So it's really Go vs. Java, or you can take a performance hit and use Erlang (valid choice for some tasks but not all), or take a chance on a novel paradigm/unsupported language.
Erlang (or Elixir) are absolutely Go replacements for the types of software where CSP is likely important.
Source: spent the last few weeks at work replacing a Go program with an Elixir one instead.
I'd use Go again (without question) but it is not a panacea. It should be the default choice for CLI utilities and many servers, but the notion that it is the only usable language with something approximating CSP is idiotic.
Go is such a good fit for multi-core, especially given that it is not even memory-safe under data races...
This is a diabolical take
Erlang.
Swift? JavaScript?
JavaScript? How, web workers? JavaScript is M:1 threaded. You can't use multiple cores without what basically amounts to user-space IPC.
Not to dispute too strongly (since I haven't used this functionality myself), but Node.js does have support for true multithreading since v12: https://nodejs.org/dist/latest/docs/api/worker_threads.html. I'm not sure what you mean by "M:1 threaded" but I'm legitimately curious to understand more here, if you're willing to give more details.
There are also runtimes like e.g. Hermes (used primarily by React Native), there's support for separating operations between the graphics thread and other threads.
All that being said, I won't dispute OP's point about "handling concurrency [...] within the language"- multithreading and concurrency are baked into the Golang language in a more fundamental way than Javascript. But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
Yeah, those are workers, which require manual administration of shared/passed memory:
> Within a worker thread, worker.getEnvironmentData() returns a clone of data passed to the spawning thread's worker.setEnvironmentData(). Every new Worker receives its own copy of the environment data automatically.
M:1 threaded means that the user space threads are mapped onto a single kernel thread. Go is M:N threaded: goroutines can be arbitrarily scheduled across various underlying OS threads. Its primitives (goroutines and channels) make both concurrency and parallelism notably simpler than most languages.
> But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
I’d personally disagree in this context. Almost every language has pthread-style cro-magnon concurrency primitives. The context for this thread is precisely how go differs from regular threading interfaces. Quoting gp:
> The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Yes other languages have threading, but in go both concurrency and parallelism are easier than most.
(But not erlang :) )
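For what it's worth, a minimal sketch (mine, not from any comment above) of the goroutine/channel style under discussion, with work fanned out across the runtime's M:N scheduler:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Println("CPUs available:", runtime.NumCPU())

    jobs := make(chan int)
    results := make(chan int)

    // A few worker goroutines; the runtime schedules them across OS threads.
    for w := 0; w < 4; w++ {
        go func() {
            for n := range jobs {
                results <- n * n
            }
        }()
    }

    // Feed the workers, then close the channel so their loops end.
    go func() {
        for i := 1; i <= 8; i++ {
            jobs <- i
        }
        close(jobs)
    }()

    sum := 0
    for i := 0; i < 8; i++ {
        sum += <-results
    }
    fmt.Println("sum of squares:", sum) // 204
}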
I had to look M:1 threading up too - it's this: https://en.wikipedia.org/wiki/Thread_(computing)#M:1_(user-l...
Basically OP was saying that JavaScript can run multiple tasks concurrently, but with no parallelism since all tasks map to 1 OS thread.
So...not concurrently.
No. See [Concurrency vs. Parallelism](https://stackoverflow.com/questions/1050222/what-is-the-diff...).
The tasks run concurrently, but not in parallel.
Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences.
I'd say that it's entirely the other way around: they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from the first principles, and solving the problem correctly (or using a solution that was Not Invented Here).
Go's filesystem API is the perfect example. You need to open files? Great, we'll create an os.Open(name string) function, you can open files now, done. What if the file name is not valid UTF-8, though? Who cares, hasn't happened to me in the first 5 years I used Go.
> Who cares, hasn't happened to me in the first 5 years I used Go.
This is the mindset that makes me want to throttle the golang authors.
Golang makes it easy to do the dumb, wrong, incorrect thing that looks like it works 99.7% of the time. How can that be wrong? It works in almost all cases!
The problem is that your code is littered with these situations everywhere. You don’t think to test for them, it’s worked on all the data you fed it so far, and then you run into situations like the GP’s where you lose data because golang didn’t bother to think carefully about some API impedance mismatch, can’t even express it anyway, and just drops things on the floor when it happens.
So now your user has irrecoverably lost data, there's a bug in your bug tracker, and you and everyone else who uses Go has to solve for yet another stupid footgun that should have been obvious from the start and can never be fixed upstream.
And you, and every other golang programmer, gets a steady and never-ending stream of these type of issues, randomly selected for, for the lifetime of your program. Which one will bite you tomorrow? No idea! But the more and more people who use it, the more data you feed it, the more clients with off-the-beaten-track use-cases, the more and more it happens.
Oops, non-UTF-8 filename. Oops, can’t detect the difference between an empty string in some JSON or a nil one. Oops, handed out a pointer and something got mutated out from under me. Oops, forgot to defer. Oops, maps aren’t thread-safe. Oops, maps don’t have a sane zero value. And on and on and fucking on and it never goddamn ends.
And it could have, if only Rob Pike and co. didn’t just ship literally the first thing they wrote with zero forethought.
> Golang makes it easy to do the dumb, wrong, incorrect thing that looks like it works 99.7% of the time. How can that be wrong? It works in almost all cases!
my favorite example of this was the go authors refusing to add monotonic time into the standard library because they confidently misunderstood its necessity
(presumably because clocks at google don't ever step)
then after some huge outages (due to leap seconds) they finally added it
now the libraries are a complete a mess because the original clock/time abstractions weren't built with the concept of multiple clocks
and every go program written is littered with terrible bugs due to use of the wrong clock
https://github.com/golang/go/issues/12914 (https://github.com/golang/go/issues/12914#issuecomment-15075... might qualify for the worst comment ever)
I need fewer hands to count the number of times I've been bitten by such things in over 10 years of professional Go than the times I've been bitten just in the last three weeks by half-assed Java.
Is golang better than Java? Sure, fine, maybe. I'm not a Java expert so I don't have a dog in the race.
Should and could golang have been so much better than it is? Would golang have been better if Pike and co. had considered use-cases outside of Google, or looked outward for inspiration even just a little? Unambiguously yes, and none of the changes would have needed it to sacrifice its priorities of language simplicity, compilation speed, etc.
It is absolutely okay to feel that Go is a better language than some of its predecessors while at the same time being utterly frustrated at the very low-hanging, comparatively obvious, missed opportunities for it to have been drastically better.
There is a lot to say about Java, but the libraries (both standard lib and popular third-party ones) are goddamn battle-hardened, so I have a hard time believing your claim.
You can believe what you like, of course, but "battle tested" does not mean "isn't easy to abuse".
While the general question about string encoding is fine, unfortunately in a general-purpose and cross-platform language, a file interface that enforces Unicode correctness is actively broken, in that there are files out in the world it will be unable to interact with. If your language is enforcing that, and it doesn't have a fallback to a bag of bytes, it is broken, you just haven't encountered it. Go is correct on this specific API. I'm not celebrating that fact here, nor do I expect the Go designers are either, but it's still correct.
This is one of those things that kind of bugs me about, say, OsStr / OsString in Rust. In theory, it’s a very nice, principled approach to strings (must be UTF-8) and filenames (arbitrary bytes, almost, on Linux & Mac). In practice, the ergonomics around OsStr are horrible. They are missing most of the API that normal strings have… it seems like manipulating them is an afterthought, and it was assumed that people would treat them as opaque (which is wrong).
Go’s more chaotic approach to allow strings to have non-Unicode contents is IMO more ergonomic. You validate that strings are UTF-8 at the place where you care that they are UTF-8. (So I’m agreeing.)
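In Go terms, that boundary check is a one-liner with unicode/utf8; a minimal sketch of what I mean (my own example):

package main

import (
    "fmt"
    "unicode/utf8"
)

// display validates only at the point where valid UTF-8 actually matters
// (showing a name to a user); elsewhere the raw bytes pass through untouched.
func display(name string) string {
    if utf8.ValidString(name) {
        return name
    }
    return fmt.Sprintf("%q (not valid UTF-8)", name)
}

func main() {
    fmt.Println(display("résumé.txt"))
    fmt.Println(display("old\xffname"))
}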
The big problem isn't invalid UTF-8 but invalid UTF-16 (on Windows et al). AIUI Go had nasty bugs around this (https://github.com/golang/go/issues/59971) until it recently adopted WTF-8, an encoding that was actually invented for Rust's OsStr.
WTF-8 has some inconvenient properties. Concatenating two strings requires special handling. Rust's opaque types can patch over this but I bet Go's WTF-8 handling exposes some unintuitive behavior.
There is a desire to add a normal string API to OsStr but the details aren't settled. For example: should it be possible to split an OsStr on an OsStr needle? This can be implemented but it'd require switching to OMG-WTF-8 (https://rust-lang.github.io/rfcs/2295-os-str-pattern.html), an encoding with even more special cases. (I've thrown my own hat into this ring with OsStr::slice_encoded_bytes().)
The current state is pretty sad yeah. If you're OK with losing portability you can use the OsStrExt extension traits.
Yeah, I avoided talking about Windows which isn’t UTF-16 but “int16 string” the same way Unix filenames are int8 strings.
IMO the differences with Windows are such that I’m much more unhappy with WTF-8. There’s a lot that sucks about C++ but at least I can do something like
Mind you this sucks for a lot of reasons, one big reason being that you're directly exposed to the differences between path representations on different operating systems. Despite all the ways that this (above) sucks, I still generally prefer it over the approaches of Go or Rust.
> You validate that strings are UTF-8 at the place where you care that they are UTF-8.
The problem with this, as with any lack of static typing, is that you now have to validate at _every_ place that cares, or to carefully track whether a value has already been validated, instead of validating once and letting the compiler check that it happened.
In practice, the validation generally happens when you convert to JSON or use an HTML template or something like that, so it’s not so many places.
Validation is nice but Rust’s principled approach leaves me high and dry sometimes. Maybe Rust will finish figuring out the OsString interface and at that point we can say Rust has “won” the conversation, but it’s not there yet, and it’s been years.
> validation generally happens when
Except when it doesn’t and then you have to deal with fucking Cthulhu because everyone thought they could just make incorrect assumptions that aren’t actually enforced anywhere because “oh that never happens”.
That isn’t engineering. It’s programming by coincidence.
> Maybe Rust will finish figuring out the OsString interface
The entire reason OsString is painful to use is because those problems exist and are real. Golang drops them on the floor and forces you pick up the mess on the random day when an unlucky end user loses data. Rust forces you to confront them, as unfortunate as they are. It's painful once, and then the problem is solved for the indefinite future.
Rust also provides OsStrExt if you don’t care about portability, which greatly removes many of these issues.
I don’t know how that’s not ideal: mistakes are hard, but you can opt into better ergonomics if you don’t need the portability. If you end up needing portability later, the compiler will tell you that you can’t use the shortcuts you opted into.
> What if the file name is not valid UTF-8, though
They could support passing filename as `string | []byte`. But wait, go does not even have union types.
But []byte, or a wrapper like Path, is enough, if strings are easily convertible into it. Rust does it that way via the AsRef<T> trait.
Much more egregious is the fact that the API allows returning both an error and a valid file handle. That may be documented to not happen. But look at the Read method instead. It will return both errors and a length you need to handle at the same time.
The Read() method is certainly an exception rather than the rule. The common convention is to return a nil value upon encountering an error unless there's real value in returning both, e.g. for a partial read that failed in the end but still produced some non-empty result. It's a rare occasion, yes, but if you absolutely have to handle this case you can. Otherwise you typically ignore the result if err != nil. It's a mess, true, but the real world is also quite messy unfortunately, and Go acknowledges that.
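A sketch of that convention for Read (my own example): use the n bytes that were read before inspecting err, because a partial read can return both at once.

package main

import (
    "fmt"
    "io"
    "strings"
)

// consume follows the io.Reader contract: process the n bytes first,
// then look at err, since both can be meaningful on the same call.
func consume(r io.Reader) error {
    buf := make([]byte, 8)
    total := 0
    for {
        n, err := r.Read(buf)
        total += n // count the partial result even if err != nil
        if err == io.EOF {
            fmt.Println("read", total, "bytes")
            return nil
        }
        if err != nil {
            return err
        }
    }
}

func main() {
    _ = consume(strings.NewReader("hello, reader"))
}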
Go doesn't acknowledge that. It punts.
Most of the time if there's a result, there's no error. If there's an error, there's no result. But don't forget to check every time! And make sure you don't make a mistake when you're checking and accidentally use the value anyway, because even though it's technically meaningless it's still nominally a meaningful value since zero values are supposed to be meaningful.
Oh and make sure to double-check the docs, because the language can't let you know about the cases where both returns are meaningful.
The real world is messy. And golang doesn't give you advance warning on where the messes are, makes no effort to prevent you from stumbling into them, and stands next to you constantly criticizing you while you clean them up by yourself. "You aren't using that variable any more, clean that up too." "There's no new variables now, so use `err =` instead of `err :=`."
> What if the file name is not valid UTF-8
Nothing? Neither Go nor the OS require file names to be UTF-8, I believe
> Nothing?
It breaks. Which is weird because you can create a string which isn't valid UTF-8 (eg "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98") and print it out with no trouble; you just can't pass it to e.g. `os.Create` or `os.Open`.
(Bash and a variety of other utils will also complain about it not being valid UTF-8; neovim won't save a file under that name; etc.)
That sounds like your kernel refusing to create that file, nothing to do with Go.
Well, Windows is an odd beast when 8-bit file names are used. If done naively, you can’t express all valid filenames even with broken UTF-8: filenames that aren’t valid Unicode cannot be encoded to UTF-8 without loss or some weird convention.
You can do something like WTF-8 (not a misspelling, alas) to make it bidirectional. Rust does this under the hood but doesn’t expose the internal representation.
What do you mean by "when 8-bit filenames are used"? Do you mean the -A APIs, like CreateFileA()? Those do not take UTF-8, mind you -- unless you are using a relatively recent version of Windows that allows you to run your process with a UTF-8 codepage.
In general, Windows filenames are Unicode and you can always express those filenames by using the -W APIs (like CreateFileW()).
I think it depends on the underlying filesystem. Unicode (UTF-16) is first-class on NTFS. But Windows still supports FAT, I guess, where multiple 8-bit encodings are possible: the so-called "OEM" code pages (437, 850 etc.) or "ANSI" code pages (1250, 1251 etc.). I haven't checked how recent Windows versions cope with FAT file names that cannot be represented as Unicode.
I believe the same is true on linux, which only cares about 0x2f bytes (i.e. /)
And 0x00, if I remember correctly.
And 0x00.
Note that Go strings can be invalid UTF-8; they dropped panicking on encountering an invalid UTF-8 string before 1.0, I think
This also epitomizes the issue. What's the point of having `string` type at all, if it doesn't allow you to make any extra assumptions about the contents beyond `[]byte`? The answer is that they planned to make conversion to `string` error out when it's invalid UTF-8, and then assume that `string`s are valid UTF-8, but then it caused problems elsewhere, so they dropped it for immediate practical convenience.
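For example (a small sketch using the unicode/utf8 package):

    b := []byte{0xff, 0xfe, 0xfd}
    s := string(b)                   // the conversion never fails
    fmt.Println(utf8.ValidString(s)) // false: the invalid bytes are carried along as-is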
Rust apparently got relatively close to not having &str as a primitive type and instead only providing a library alias to &[u8] when Rust 1.0 shipped.
Score another for Rust's Safety Culture. It would be convenient to just have &str as an alias for &[u8] but if that mistake had been allowed all the safety checking that Rust now does centrally has to be owned by every single user forever. Instead of a few dozen checks overseen by experts there'd be myriad sprinkled across every project and always ready to bite you.
. (early morning brain fart -- I need my coffee)
So it's true that technically the primitive type is str, and indeed it's even possible to make a &mut str though it's quite rare that you'd want to mutably borrow the string slice.
However no &str is not "an alias for &&String" and I can't quite imagine how you'd think that. String doesn't exist in Rust's core, it's from alloc and thus wouldn't be available if you don't have an allocator.
str is not really a "primitive type", it only exists abstractly as an argument to type constructors - treating the & operator as a "type constructor" for that purpose, but including Box<>, Rc<>, Arc<> etc. So you can have Box<str> or Arc<str> in addition to &str or perhaps &mut str, but not really 'str' in isolation.
Why not use utf8.ValidString in the places it is needed? Why burden one of the most basic data types with highly specific format checks?
It's far better to get some � when working with messy data instead of applications refusing to work and erroring out left and right.
IMO utf8 isn't a highly specific format, it's universal for text. Every ascii string you'd write in C or C++ or whatever is already utf8.
So that means that for 99% of scenarios, the difference between char[] and a proper utf8 string is none. They have the same data representation and memory layout.
The problem comes in when people start using string like they use string in PHP. They just use it to store random bytes or other binary data.
This makes no sense with the string type. String is text, but now we don't have text. That's a problem.
We should use []byte or something for this instead of string. That's an abuse of string. I don't think requiring strings to be text is too constraining - that's what a string is!
Not all text is UTF-8, and there are real world contexts (e.g. Windows) where this matters a lot.
Yes, Windows text is broken in its own special way.
We can try to shove it into objects that work on other text but this won't work in edge cases.
Like if I take text on Linux and try to write a Windows file with that text, it's broken. And vice versa.
Go lets you do the broken thing. In Rust, or even using libraries in most languages, you can't. You have to specifically convert between them.
That's what I mean when I say "storing random binary data as text". Sure, Windows' almost-UTF-16 abomination is kind of text, but not really. It's its own thing. That requires a different type of string OR converting it to a normal string.
Even on Linux, you can't have '/' in a filename, or ':' on macOS. And this is without getting into issues related to the null byte in strings. Having a separate Path object that represents a filename or path + filename makes sense, because on every platform there are idiosyncratic requirements surrounding paths.
It may be legacy cruft downstream of poorly thought out design decisions at the system/OS level, but we're stuck with it. And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
There is room for languages that explicitly make the tradeoff of being easy to use (e.g. a unified string type) at the cost of not handling many real world edge cases correctly. But these should not be used for serious things like backup systems where edge cases result in lost data. Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
I've always thought the point of the string type was for indexing. One index of a string is always one character, but characters are sometimes composed of multiple bytes.
Yup. But to be clear, in Unicode a string will index code points, not characters. E.g. a single emoji can be made of multiple code points, as well as certain characters in certain languages. The Unicode name for a character like this is a "grapheme", and grapheme splitting is so complicated it generally belongs in a dedicated Unicode library, not a general-purpose string object.
You can't do that in a performant way, and going that route can lead to problems, because characters (= graphemes in the language of Unicode) often don't behave as developers assume.
string is just an immutable []byte. It's actually one of my favorite things about Go that strings can contain invalid utf-8, so you don't end up with the Rust mess of String vs OsString vs PathBuf vs Vec<u8>. It's all just string
Rust &str and String are specifically intended for UTF-8 valid text. If you're working with arbitrary byte sequences, that's what &[u8] and Vec<u8> are for in Rust. It's not a "mess", it's just different from what Golang does.
If anything that will make Rust programs likely to be correct under any strange text input, while Go might just handle the happy path of ASCII inputs.
Stuff like this matters a great deal on the standard library level.
It's never been clear to me where such a type is actually useful. In what cases do you really need to restrict it to valid UTF-8?
You should always be able to iterate the code points of a string, whether or not it's valid Unicode. The iterator can either silently replace any errors with replacement characters, or denote the errors by returning eg, `Result<char, Utf8Error>`, depending on the use case.
All languages that have tried restricting Unicode afaik have ended up adding workarounds for the fact that real world "text" sometimes has encoding errors and it's often better to just preserve the errors instead of corrupting the data through replacement characters, or just refusing to accept some inputs and crashing the program.
In Rust there's bstr/ByteStr (currently being added to std), awkward having to decide which string type to use.
In Python there's PEP-383/"surrogateescape", which works because Python strings are not guaranteed valid (they're potentially ill-formed UTF-32 sequences, with a range restriction). Awkward figuring out when to actually use it.
In Raku there's UTF8-C8, which is probably the weirdest workaround of all (left as an exercise for the reader to try to understand .. oh, and it also interferes with valid Unicode that's not normalized, because that's another stupid restriction).
Meanwhile the Unicode standard itself specifies Unicode strings as being sequences of code units [0][1], so Go is one of the few modern languages that actually implements Unicode (8-bit) strings. Note that at least two out of the three inventors of Go also basically invented UTF-8.
[0] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...
> Unicode string: A code unit sequence containing code units of a particular Unicode encoding form.
[1] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...
> Unicode strings need not contain well-formed code unit sequences under all conditions. This is equivalent to saying that a particular Unicode string need not be in a Unicode encoding form.
The way Rust handles this is perfectly fine. String type promises its contents are valid UTF-8. When you create it from array of bytes, you have three options: 1) ::from_utf8, which will force you to handle invalid UTF-8 error, 2) ::from_utf8_lossy, which will replace invalid code points with replacement character code point, and 3) from_utf8_unchecked, which will not do the validity check and is explicitly marked as unsafe.
But there's no option to just construct the string with the invalid bytes. 3) is not for this purpose; it is for when you already know that it is valid.
If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.
https://doc.rust-lang.org/std/primitive.str.html#invariant
> Constructing a non-UTF-8 string slice is not immediate undefined behavior, but any function called on a string slice may assume that it is valid UTF-8, which means that a non-UTF-8 string slice can lead to undefined behavior down the road.
I don’t understand this complaint. (3) sounds like exactly what you are asking for. And yes, doing unsafe thing is unsafe.
> It's never been clear to me where such a type is actually useful. In what cases do you really need to restrict it to valid UTF-8?
Because 99.999% of the time you want it to be valid and would like an error if it isn't? If you want to work with invalid UTF-8, that should be a deliberate choice.
Do you want grep to crash when your text file turned out to have a partially written character in it? 99.999% seems very high, and you haven't given an actual use case for the restriction.
I think maybe you've forgotten about the rune type. Rune does make assumptions.
[]rune is for sequences of Unicode code points. rune is an alias for int32. string, I think, is an alias for []byte.
`string` is not an alias for []byte.
Consider:
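Say, a range loop over a six-byte string made of two three-byte runes:

    s := "€€" // two runes, three bytes each: len(s) == 6
    for i, r := range s {
        fmt.Println(i, r)
    }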
How many times does that loop over 6 bytes iterate? The answer is that it iterates twice, with i=0 and i=3.
There are also quite a few standard APIs that behave weirdly if a string is not valid utf-8, which wouldn't be the case if it was just a bag of bytes.
> they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from the first principles, and solving the problem correctly (or using a solution that was Not Invented Here).
I've said this before, but much of Go's design looks like it's imitating the C++ style at Google. The comments where I see people saying they like something about Go it's often an idiom that showed up first in the C++ macros or tooling.
I used to check this before I left Google, and I'm sure it's becoming less true over time. But to me it looks like the idea of Go was basically "what if we created a Python-like compiled language that was easier to onboard than C++ but which still had our C++ ergonomics?"
Didn’t Go come out of a language that was written for Plan9, thus pre-dating Rob Pike’s work at Google?
not that I recall but I may not be recalling correctly.
But certainly, anyone will bring their previous experience to the project, so there must be some Plan9 influence in there somewhere
> Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences.
It feels often like the two principles they stuck/stick to are "what makes writing the compiler easier" and "what makes compilation fast". And those are good goals, but they're only barely developer-oriented.
Not sure it was only that. I remember a lot of "we're not Java" in the discussions around it. I always had the feeling, they were rejecting certain ideas like exceptions and generics more out of principle, than any practical analysis.
Like, yes, those ideas have frequently been driven too far and have led to their own pain points. But people also seem to frequently rediscover that removing them entirely will lead to pain, too.
What makes compilation fast is a good goal at places with large code bases and build times. Maybe makes less sense in smaller startups with a few 100k LOC.
I am reminded when I read "barely developer oriented" that this comes from Google, who run compute and compilers at Ludicrous Scale. It doesn't seem strange that they might optimize (at least in part) for compiler speed and simplicity.
I recently started writing Go for a new job, after 20 years of not touching a compiled language for something serious (I've done DevKitArm dev. as a hobby).
I know it's mostly a matter of taste, but darn, it feels horrible. And there are no default parameter values, and the error handling smells bad, and no real stack trace in production. And the "object orientation" syntax, adding some ugly reference to each function. And the pointers...
It took me back to my C/C++ days. Like programming with 25 year old technology from back when I was in university in 1999.
And then people are amazed that it achieves compile times compiled languages were already doing on PCs running at 10 MHz within the constraints of 640 KB (TB, TP, Modula-2, Clipper, QB).
> [some] compiled languages were already doing on PCs running at 10 MHz within the constraints of 640 KB
Many compiled languages are very slow to compile however, especially for large projects, C++ and rust being the usual examples.
It is weird to lump C++ and Rust together. I have used Rust code bases that compile in 2-3 minutes what a C++ compiler would take literally hours to compile.
I feel people who complain about rustc compile times must be new to using compiled languages…
There is a way to make C++ beat Rust though.
Make use of binary libraries, export templates, incremental compilation and linking with multiple cores, and if using VC++ or clang vLatest, modules.
It still isn't Delphi fast, but becomes more manageable.
True, however there are more programming languages than only C++ and Rust.
Well, spewing out barely-optimized machine code and having an ultra-weak type system certainly helps with speed - a la Go!
That's a reasonable trade-off to make for some people, no? There's plenty of work to be done where you can cope with the occasional runtime error and less then bleeding edge performance, especially if that then means wins in other areas (compile speeds, tooling). Having a variety of languages available feels like a pretty good thing to me.
But go tooling is bad. Like, really really bad.
Sure it's good compared to like... C++. Is go actually competing with C++? From where I'm standing, no.
But compared to what you might actually use Go for... The tooling is bad. PHP has better tooling, dotnet has better tooling, Java has better tooling.
Well, I personally would be happier with a stronger type system (e.g. java can compile just as fast, and it has a less anemic type system), but sure.
And sure, it is welcome from a dev POV on one hand, though from an ecosystem perspective, more languages are not necessarily good as it multiplies the effort required.
It is kind of ironic that from Go's point of view, Java's type system is PhD level of language knowledge.
Especially given how the language was criticised back in 1996.
Unfortunately the lack of abstraction and simple type system in Go makes it far _slower_ for me to code than e.g. Rust.
> Just all-around a trusty tool in the belt
I agree.
The Go std-lib is fantastic.
Also no dependency-hell with Go, unlike with Python. Just ship an oven-ready binary.
And what's the alternative ?
Java ? Licensing sagas requiring the use of divergent forks. Plus Go is easier to work with, perhaps especially for server-side deployments.
Zig ? Rust ? Complex learning curve. And having to choose e.g. Rust crates re-introduces dependency hell and the potential for supply-chain attacks.
> Java ? Licensing sagas requiring the use of divergent forks. Plus Go is easier to work with, perhaps especially for server-side deployments
Yeah, these are sagas only, because there is basically one, single, completely free implementation anyone uses on the server-side and it's OpenJDK, which was made 100% open-source and the reference implementation by Oracle. Basically all of Corretto, AdoptOpenJDK, etc are just builds of the exact same repository.
People bringing this whole license topic up can't be taken seriously, it's like saying that Linux is proprietary because you can pay for support at Red Hat..
> People bringing this whole license topic up can't be taken seriously
So you mean all those universities and other places that have been forced to spend $$$ on licenses under the new regime also can't be taken seriously ? Are you saying none of them took advice and had nobody on staff to tell them OpenJDK exists ?
Regarding your Linux comment, some of us are old enough to remember the SCO saga.
Sadly Oracle have deeper pockets to pay more lawyers than SCO ever did ....
I have made a bunch of claims, that are objectively true. From there, basic logical inference says that you can completely freely use Java. Anything else is irrelevant.
I don't know what/which university you're talking about, but I'm sure they were also "forced to pay $$$" for their water bills and whatnot. If they decided to go with paid support, then you have to pay for it. In exchange you can a) point your finger at a third party if something goes wrong (which governments love doing, and is often legally necessary) and b) get actual live support on Christmas Eve if needed.
TL;DR: It's impossible to know if anyone on campus has downloaded Oracle Java....
Quote from this article:[1]
[1] https://www.theregister.com/2025/06/13/jisc_java_oracle/
That's also true of torrented PhotoShop, Microsoft Office, etc.
Also, as another topic, Oracle is doing audits specifically because their software doesn't phone home to check licenses and stuff like that - which is a crucial requirement for their intended target demographics, big government organizations, safety critical systems, etc. A whole country's healthcare system, or a nuclear power base can't just stop because someone forgot to pay the bill.
So instead Oracle just visits companies that have a license with them, and checks what is being used to determine if it's in accord with the existing contract. And yeah, in this respect I have also heard of a couple of stories where a company was not using the software per the letter of the contract, e.g. accidentally enabling this or that, and at the audit the Oracle salesman said they would ignore the mistake if the company subscribed to this larger package, which most managers will gladly accept as they can avoid the blame. That is a questionable business practice, but it still doesn't have anything to do with OpenJDK.
> Quote from this article:[1]
The article tries very hard to draw a connection between the licensing costs for the universities and Oracle auditing random java downloads, but nobody actually says that this is what happened.
The waiver of historic fees goes back to the last licensing change where Oracle changed how licensing fees would be calculated. So it seems reasonable that Oracle went after them because they were paying customers that failed to pay the inflated fees.
> So you mean all those universities and other places that have been forced to spend $$$ on licenses under the new regime also can't be taken seriously ? Are you saying none of them took advice and had nobody on staff to tell them OpenJDK exists ?
This info is actually quite surprising to me, never heard of it since everywhere I know switched to OpenJDK-based alternatives from the get-go. There was no reason to keep on the Oracle one after the licencing shenanigans they tried to play.
Why did these places keep the Oracle JDK and end up paying for it? OpenJDK was a drop-in replacement, nothing of value is lost by switching...
> TL;DR: It's impossible to know if anyone on campus has downloaded Oracle Java....
Oracle monitors downloads and sends in the auditors...
See link/quote in my earlier reply above.
The licensing thing is such FUD man. Oracle being a terrible company is in no way a decent argument that Java should not be used.
There are other JVMs that do not descend from OpenJDK, but in general your point stands.
Yeah I know, but people have trouble understanding the absolutely trivial licensing around OpenJDK, so let's not bring up alternative implementations (which actually make the whole platform an even better target from a longevity perspective! There aren't many languages that have a standard with multiple, completely independent impls).
You forgot D. In a world where D exists, it's hard to understand why Go needed to be created. Every critique in this post is not an issue in D. If the effort Google put into Go had gone on making D better, I think D today would be the best language you could use. But as it is, D has had very little investment (by that I mean actual developer time spent on making it better, cleaning it up, writing tools) and it shows.
I don't think the languages are comparable. Go tries to stay simple (whatever that means), while D is a kitchen-sink language.
> Rust crates re-introduces dependency hell and the potential for supply-chain attacks.
I’m only a casual user of both but how are rust crates meaningfully different from go’s dependency management?
Go has a big, high quality standard library with most of what one might need. Means you have to bring in and manage (and trust) far fewer third party dependencies, and you can work faster because you’re not spending a bunch of time figuring out what the crate of the week is for basic functionality.
Rust intentionally chooses to have a small standard library to avoid the "dead batteries" problem. But the Rust community also maintains lists of "blessed" crates to try and cope with the issue of having to trust third-party software components of unknown quality.
Different trade offs, both are fine.
The downside of a small stdlib is the proliferation of options, and you suddenly discover(ed?, it's been a minute) that your async package written for Tokio won't work on async-std and so forth.
This has often been the case in Go too - until `log/slog` existed, lots of people chose a structured logger and made it part of their API, forcing it on everyone else.
> Rust intentionally chooses to have a small standard library to avoid the "dead batteries" problem.
There is a difference between "small" and Rust's, which is, for all intents and purposes, non-existent.
I mean, in 2025, not having crypto in stdlib when every man and his dog is using crypto ? Or http when every man and his dog are calling REST APIs ?
As the other person who replied to you said, Go just allows you to hit the ground running and get on with it.
Having to navigate the world of crates, unofficially "blessed" or not is just a bit of a re-inventing the wheel scenario really....
P.S. The Go stdlib is also well maintained, so I don't really buy the specific "dead batteries" claim either.
The go stdlib is well maintained and featureful because Google is very invested in it being both of those things for the use cases
That works well for go and Google but I'm not sure how easily that'd be to replicate with rust or others
I think having http in the standard library is a perfect example of the dead batteries problem: should the stdlib http also support QUIC and/or websockets? If you choose to include it, you've made stdlib include support for very specific use cases. If you choose not to include it, should the quic crate then extend or subsume the stdlib http implementation? If you choose subsume, you've created a dead battery. If you choose extend, you've created a maintenance nightmare by introducing a dependency between stdlib and an external crate.
> I mean, in 2025, not having crypto in stdlib when every man and his dog is using crypto ? Or http when every man and his dog are calling REST APIs ?
I'm not and I'm glad the core team doesn't have to maintain an http server and can spend time on the low level features I chose Rust for.
Sorry but for most programming tasks I prefer having actual data containers with features than an HTTP library: Set, Tree, etc types. Those are fundamental CS building blocks yet are absent from the Go standard library. (well, they were added pretty recently, still nowhere near as featureful than std::collection in Rust).
Also, as mentioned by another comment, an HTTP or crypto library can become obsolete _fast_. What about HTTP3? What about post-quantum crypto? What about security fixes? The stdlib is tied to the language version, thus to a language release. Having such code independent allows it to evolve much faster, be leaner, and be more composable. So yes, the library is well maintained, but it's tied to the Go version.
Also, it enables breaking API changes if absolutely needed. I can name two precedents:
- in rust, time APIs in chrono had to be changed a few times, and the Rust maintainers were thankful it was not part of the stdlib, as it allowed massive changes
- otoh, in Go, it was found out that net.IP has an absolutely atrocious design (it's just an alias for []byte). Tailscale wrote a replacement that's now in a subpackage in net, but the old net.IP is set in stone. (https://tailscale.com/blog/netaddr-new-ip-type-for-go)
Do you think C and C++ should have http or crypto in their standard libraries?
I think it's because go's community sticks close to the standard library:
e.g. iirc. Rust has multiple ways of handling Strings while Go has (to a big extent) only one (thanks to the GC)
What does String/OsString have to do with garbage collection?
This just makes it even more frustrating to me. Everything good about go is more about the tooling and ecosystem but the language itself is not very good. I wish this effort had been put into a better language.
Go has transparent async io and a very nice M:N threading model that makes writing http servers using epoll very simple and efficient.
The ergonomics for this use case are better than in any language I ever used.
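A minimal sketch of what that looks like (the handler is just ordinary blocking code, no async/await split in the signatures):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // net/http serves each connection on its own goroutine, so a handler
        // can block (DB call, file read) without stalling other requests.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }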
Implementing HTTP servers isn’t exactly a common use case in software development, though.
uv + the new way of adding the required packages in the comments is pretty good.
you can go `uv run script.py` and it'll automatically fetch the libraries and run the script in a virtual environment.
Still no match for Go though, shipping a single cross-compiled binary is a joy. And with a bit of trickery you can even bundle in your whole static website in it :) Works great when you're building business logic with a simple UI on top.
I've been out of the Python game for a while but I'm not surprised there is yet another tool on the market to handle this.
You really come to appreciate when these batteries are included with the language itself. That Go binary will _always_ run but that Python project won't build in a few years.
Unless it makes use of cgo and has dynamic dependencies; "always" is a bit too much.
Or the import path was someone's blog domain that included a <meta> reference to the actual github repo (along with the tag, IIRC) where the source code really lives. Insanity
I never understood the mentality to have SCM urls as package imports directly on the source code.
Well, that's the problem I was highlighting - golang somehow decided to have the worst of both worlds: arbitrary domains in import paths and then putting the actual ref of the source code ... elsewhere
oh, ok :-/
I would presume only a go.mod entry would specify whether it really is v3.0.0 or v3.0.1
Also, for future generations, don't use that package https://github.com/go-yaml/yaml#this-project-is-unmaintained
uv is the new hotness now. Let us check back in 5 years...
> you can go `uv run script.py` and it'll automatically fetch the libraries and run the script in a virtual environment.
Yeah, but you still have to install `uv` as a pre-requisite.
And you still end up with a virtual environment full of dependency hell.
And then of course we all remember that whole messy era when Python 2 transitioned to Python 3, and then deferred it, and deferred it again....
You make a fair point, of course it is technically possible to make it (slightly) "cleaner". But I'll still take the Go binary thanks. ;-)
Installing uv is a requirement and incredibly easy.
No, there is no dependency hell in the venv.
Python 2 to 3: are you really still kicking that horse? It's dead...please move on.
> std-lib
Yes, My favourite is the `time` package. It's just so elegant how it's just a number under there, the nominal type system truly shines. And using it is a treat. What do you mean I can do `+= 8*time.Hour` :D
Unfortunately it doesn't have error handling, so when you do += 8 hours and it fails, it won't return a Go error, it won't throw a Go exception, it just silently does the wrong thing (clamp the duration) and hope you don't notice...
It's simplistic and that's nice for small tools or scripts, but at scale it becomes really brittle since none of the edge cases are handled
When would that fail - if the resulting time is before the minimum time or after the maximum time?
I thankfully found out when writing unit tests instead of in production. In Go time.Time has a much higher range than time.Duration, so it's very easy to have an overflow when you take a time difference. But there's also no error returned in general when manipulating time.Duration, you have to remember to check carefully around each operation to know if it risks going out of range.
Internally time.Duration is a single 64bit count, while time.Time is two more complicated 64bit fields plus a location
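For example, subtracting two times that are further apart than a Duration can represent just saturates (this is documented on time.Time.Sub), with no error anywhere:

    t1 := time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC)
    t2 := time.Date(9999, time.January, 1, 0, 0, 0, 0, time.UTC)
    d := t2.Sub(t1) // far more than a Duration can hold
    fmt.Println(d)  // silently clamped to the maximum Duration (about 292 years)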
How is it easy to have an overflow? time.Duration is capped to +- 290 years IIRC.
As long as you don’t need to do `hours := 8` and `+= hours * time.Hour`. Incredibly the only way to get that multiplication to work is to cast `hours` to a `time.Duration`.
In Go, `int * Duration = error`, but `Duration * Duration = Duration`!
That is consistent though. Constants take type based on context, so 8 * time.Hour has 8 as a time.Duration.
If you have an int variable hours := 8, you have to cast it before multiplying.
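For example:

    d := 8 * time.Hour                   // ok: the untyped constant 8 becomes a time.Duration
    hours := 8                           // hours is an int variable
    // d = hours * time.Hour             // compile error: mismatched types int and time.Duration
    d = time.Duration(hours) * time.Hour // the conversion makes it work
    fmt.Println(d)                       // 8h0m0s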
This is also true for simple int and float operations.
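For instance, with f a float64, something like

    f := 1.5
    y := 3 * f // the untyped constant 3 takes the type float64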
is valid, but x := 3 would need float64(x)*f to be valid. Same is true for addition etc.
The way Go parses time strings by default is insane though, even the maintainers regret it. It's a textbook example of being too clever.
By choosing default values instead of templatized values?
Other than having to periodically remember what 0-padded milliseconds are or whatever this isn't a huge deal.
I'm not OP, but I also got tripped up the first time I saw time.Parse("2006-01-02 03:04:05") and was like what the actual?!
https://pkg.go.dev/time#Layout
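The layout argument is the fixed reference time Mon Jan 2 15:04:05 MST 2006 written out in whatever format you want to parse; e.g. (the date value here is just an arbitrary example):

    layout := "2006-01-02 15:04:05"
    t, err := time.Parse(layout, "2025-06-13 08:30:00")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(t) // 2025-06-13 08:30:00 +0000 UTC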
My feeling is that in terms of developer ergonomics, it nailed the “very opinionated, very standard, one way of doing things” part. It is a joy to work on a large microservices architecture and not have a different style on each repo, or avoiding formatting discussions because it is included.
The issue is that it was a bit outdated in the choice of _which_ things to choose as the one Go way. People expect a map/filter method rather than a loop with off-by-one risks, a type system with the smartness of TypeScript (if less featured and more heavily enforced), error handling is annoying, and so on.
I get that it’s tough to implement some of those features without opening the way to a lot of “creativity” in the bad sense. But I feel like go is sometimes a hard sell for this reason, for young devs whose mother language is JavaScript and not C.
> The issue is that it was a bit outdated in the choice of _which_ things to choose as the one Go way
I agree with this. I feel like Go was a very smart choice to create a new language to be easy and practical and have great tooling, and not to be experimental or super ambitious in any particular direction, only trusting established programming patterns. It's just weird that they missed some things that had been pretty well hashed out by 2009.
Map/filter/etc. are a perfect example. I remember around 2000 the average programmer thought map and filter were pointlessly weird and exotic. Why not use a for loop like a normal human? Ten years later the average programmer was like, for loops are hard to read and are perfect hiding places for bugs, I can't believe we used to use them even for simple things like map, filter, and foreach.
By 2010, even Java had decided that it needed to add its "stream API" and lambda functions, because no matter how awful they looked when bolted onto Java, it was still an improvement in clarity and simplicity.
Somehow Go missed this step forward the industry had taken and decided to double down on "for." Go's different flavors of for are a significant improvement over the C/C++/Java for loop, but I think it would have been more in line with the conservative, pragmatic philosophy of Go to adopt the proven solution that the industry was converging on.
> People expect a map/filter method
Do they? After too many functional battles I started practicing what I'm jokingly calling "Debugging-Driven Development" and just like TDD keeps the design decisions in mind to allow for testability from the get-go, this makes me write code that will be trivially easy to debug (especially printf-guided debugging and step-by-step execution debugging)
Like, adding a printf in the middle of a for loop, without even needing to understand the logic of the loop. Just make a new line and write a printf. I grew tired of all those tight chains of code that iterate beautifully but later when in a hurry at 3am on a Sunday are hell to decompose and debug.
I'm not a hard defender of functional programming in general, mind you.
It's just that a ridiculous amount of steps in real world problems can be summarised as 'reshape this data', 'give me a subset of this set', or 'aggregate this data by this field'.
Loops are, IMO, very bad at expressing those common concepts briefly and clearly. They take a lot of screen space, usually accessory variables, and it isn't immediately clear from just seeing a for block what you're about to do - "I'm about to iterate" isn't useful information to me as a reader; are you transforming data, selecting it, aggregating it?
The consequence is that you usually end up with tons of lines like
userIds = getIdsfromUsers(users);
where the function is just burying a loop. Compare to:
userIds = users.pluck('id')
and you save the buried utility function somewhere else.
Rust has `.inspect()` for iterators, which achieves your printf debugging needs. Granted, it's a bit harder for an actual debugger, but support's quite good for now.
Just use a real debugger. You can step into closures and stuff.
I assume, anyway. Maybe the Go debugger is kind of shitty, I don't know. But in PHP with xdebug you just use all the fancy array_* methods and then step through your closures or callables with the debugger.
I'll agree that explicit loops are easier to debug, but that comes at the cost of being harder to write _and_ read (need to keep state in my head) _and_ being more bug-prone (because mutability).
I think it's a bad trade-off, most languages out there are moving away from it
There's actually one more interesting plus for the for loops that's not quite obvious in the beginning: for-loops allow you to perform a single memory pass instead of multiple. If you're processing a large enough list it does make a significant difference, because memory accesses are relatively expensive (the difference is not insignificant, the loop can be made e.g. 10x more performant by optimising memory accesses alone).
So for a large loop the code like
for i, value := range source { result[i] = value*2 + 1 }
Would be 2x faster than a loop like
for i, value := range source { intermediate[i] = value * 2 }
for i, value := range intermediate { result[i] = value + 1 }
Depending on your iterator implementation (or lack thereof), the functional version boils down to your first example.
For example, Rust iterators are lazily evaluated with early-exits (when filtering data), thus it's your first form but as optimized as possible. OTOH python's map/filter/etc may very well return a full list each time, like with your intermediate.
I would say that any sane language allowing functional-style data manipulation will have them as fast as manual for-loops. (that's why Rust bugs you with .iter()/.collect())
Python map/filter/zip/etc. return lazy iterators, so they're evaluated lazily.
Clojure transducers as well.
This is a very valid point. Loops also let you play with the iteration itself for performance, deciding to skip n steps if a condition is met for example.
I always encounter these upsides once every few years when preparing leetcode interviews, where this kind of optimization is needed for achieving acceptable results.
In daily life, however, most of these chunks of data to transform fall in one of these categories:
- small size, where readability and maintainability matters much more than performance
- living in a db, and being filtered/reshaped by the query rather than code
- being chunked for atomic processing in a queue or similar (usual when importing a big chunk of data).
- the operation itself is a standard algorithm that you just consume from a standard library that handless the loop internally.
Much like trees and recursion, most of us don’t flex that muscle often. Your mileage might vary depending of domain of course.
There's also the fact that Rust does a _lot_ of compiler optimizations on map/filter/reduce, and it's trivially parallelizable in many cases.
This depends on the language and IDE. Intellij Java debugger is excellent at stream debugging.
"Concurrency is tricky"
This tends to be true for most languages, even the ones with easier concurrency support. Using it correctly is the tricky part.
I have no real problem with the portability. The area I see Go shining in is stuff like AWS Lambda where you want fast execution and aren't distributing the code to user systems.
People tend to refer to the bit where Discord rewrote a bit of their stack in Rust because Go GC pauses were causing issues.
The code was on the hot path of their central routing server handling billions (with a B) of messages a second or something crazy like that.
You're not building Discord, the GC will most likely never be even a blip in your metrics. The GC is just fine.
I get you can specifically write code that does not malloc, but I'm curious at scale if there are heap management / fragmentation and compression issues that are equivalent to GC pause issues.
I don't have a lot of experience with the malloc languages at scale, but I do know that heap fragmentation and GC fragmentation are very similar problems.
There are techniques in GC languages to avoid GC like arena allocation and stuff like that, generally considered non-idiomatic.
> The type system is most of the time very convenient
In what universe?
In mine. It's Just Fine.
Is it the best or most robust or can you do fancy shit with it? No
But it works well enough to release reliable software along with the massive linter framework that's built on top of Go.
> I find myself wishing for Optional[T] quite often.
Well, so long as you don't care about compatibility with the broad ecosystem, you can write a perfectly fine Optional yourself:
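For instance, with generics, something roughly like this (the names are just one way to spell it):

    type Optional[T any] struct {
        value  T
        exists bool
    }

    func Some[T any](v T) Optional[T] { return Optional[T]{value: v, exists: true} }
    func None[T any]() Optional[T]    { return Optional[T]{} }

    // Get returns the wrapped value and whether it is actually present.
    func (o Optional[T]) Get() (T, bool) { return o.value, o.exists }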
But you probably do care about compatibility with everyone else, so... yeah it really sucks that the Go way of dealing with optionality is slinging pointers around.
There are some other issues, too.
For JSON, you can't encode Optional[T] as nothing at all. It has to encode to something, which usually means null. But when you decode, the absence of the field means UnmarshalJSON doesn't get called at all. This typically results in the default value, which of course you would then re-encode as null. So if you round-trip your JSON, you get a materially different output than input (this matters for some other languages/libraries). Maybe the new encoding/json/v2 library fixes this, I haven't looked yet.
Also, I would usually want Optional[T]{value:nil,exists:true} to be impossible regardless of T. But Go's type system is too limited to express this restriction, or even to express a way for a function to enforce this restriction, without resorting to reflection, and reflection has a type erasure problem making it hard to get right even then! So you'd have to write a bunch of different constructors: one for all primitive types and strings; one each for pointers, maps, and slices; three for channels (chan T, <-chan T, chan<- T); and finally one for interfaces, which has to use reflection.
You can write `Optional`, sure, but you can't un-write `nil`, which is what I really want. I use `Optional<T>` in Java as much as I can, and it hasn't saved me from NullPointerException.
I find Result[] and Optional[] somewhat overrated, but nil does bother me. However, nil isn't going to go away (what else is going to be the default value for pointers and interfaces, and not break existing code?). I think something like a non-nilable type annotation/declaration would be all Go needs.
Yeah maybe they're overrated, but they seem like the agreed-upon set of types to avoid null and to standardize error handling (with some support for nice sugars like Rust's ? operator).
I quite often see devs introducing them in other languages like TypeScript, but it just doesn't work as well when it's introduced in userland (usually you just end up with a small island of the codebase following this standard).
Typescript has another way of dealing with null/undefined: it's in the type definition, and you can't use a value that's potentially null/undefined. Using Optional<T> in Typescript is, IMO, weird. Typescript also has exceptions...
I think they only work if the language is built around it. In Rust, it works, because you just can't deref an Optional type without matching it, and the matching mechanism is much more general than that. But in other languages, it just becomes a wart.
As I said, some kind of type annotation would be most go-like, e.g.
You would only be allowed to touch *ptr inside an if ptr != nil { ... }. There's a linter from Uber (nilaway) that works like that, except for the type annotation. That proposal would break existing code, so perhaps an explicit marker for non-nil pointers is needed instead (but that's not very ergonomic, alas).
Yeah, default values are one of Go's original sins, and it's far too late to roll those back. I don't think there are even many benefits: `int i;` is not meaningfully better than `int i = 0;`. If it's struct initialization they were worried about, well, just write a constructor.
Go has chosen explicit over implicit everywhere except initialization—the one place where I really needed "explicit."
Golang is great for problem classes where you really, really can't do away with tracing GC. That's a rare case perhaps, but it exists nonetheless. Most GC languages don't have the kind of high-performance concurrent GC that you get out of the box with Golang, and the minimum RAM requirements are quite low as well. (You can of course provide more RAM to try and increase overall throughput, and you probably should - but you don't have to. That makes it a great fit for running on small cloud VM's, where RAM itself can be at a premium.)
Java's GCs are a generation ahead, though, in both throughput-oriented and latency-sensitive workloads [1]. Though Go's GC did/does get a few improvements and it is much better than it was a few years ago.
[1] ZGC has basically decoupled the heap size from the pause time, at that point you get longer pauses from the OS scheduler than from GC.
Do you have a source for this? My understanding is Go's GC is much better optimized for low latency.
> But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
I got insta-rejected in an interview when I said this in response to the interview panel's question about 'thoughts about golang'.
Like they said, 'interview is over' and showed me the (virtual) door. I was stunned lol. This was during peak golang mania. Not sure what happened to rancherlabs.
Oh my, you sure dodged a bullet.
Some workplaces explicitly test cultural closeness to their philosophy of work (language, architecture, etc).
It’s part trying to keep a common direction and part fear that dislike of their tech risks the hire not staying for long.
I don’t agree with this approach, don’t get me wrong, but I’ve seen it done and it might explain your experience.
No need to sugarcoat it. Some places are cults and it's best to avoid them. Good for GP.
They probably thought you weren't going to be a good fit for writing idiomatic Go. One of the things many people praise Go for is its standard style across codebases, if you don't like it, you're liable to try and write code that uses different patterns, which is painful for everyone involved.
I've worked almost exclusively on a large Golang project for over 5 years now and this definitely resonates with me. One component of that project is required to use as little memory as possible, and so much of my life has been spent hitting rough edges with Go on that front. We've hit so many issues where the garbage collector just doesn't clean things up quickly enough, or we get issues with heap fragmentation (because Go, in its infinite wisdom, decided not to have a compacting garbage collector) that we've had to try and avoid allocations entirely. Oh, and when we do have those issues, it's extremely difficult to debug. You can take heap profiles, but those only tell you about the live objects in the heap. They don't tell you about all of the garbage and all of the fragmentation. So diagnosing the issue becomes a matter of reading the tea leaves. For example, the heap profile says function X only allocated 1KB of memory, but it's called in a hot loop, so there's probably 20MB of garbage that this thing has generated that's invisible on the profile.
We pre-allocate a bunch of static buffers and re-use them. But that leads to a ton of ownership issues, like the append footgun mentioned in the article. We've even had to re-implement portions of the standard library because they allocate. And I get that we have a non-standard use case, and most programmers don't need to be this anal about memory usage. But we do, and it would be really nice to not feel like we're fighting the language.
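A sketch of the kind of reuse involved (items and process are placeholders):

    buf := make([]byte, 0, 64*1024) // one pre-allocated scratch buffer

    for _, item := range items {
        buf = buf[:0]              // reset length, keep capacity: no new allocation
        buf = append(buf, item...) // still no allocation, as long as it fits the capacity
        process(buf)               // must not retain buf: the next iteration overwrites it
    }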
I've found that when you need this it's easier to move stuff offheap, although obviously that's not entirely trivial in a GC language, and it certainly creates a lot of rough edges. If you find yourself writing what's essentially, e.g. C++ or Rust in Go, then you probably should just rewrite that part in the respective language when you can :)
I know this comment isn't terribly helpful, so I'm sorry, but it also sounds like Go is entirely the wrong language for this use case and you and your team were forced to use it for some corporate reason, like, the company only uses a subset of widely used programming languages in production.
I've heard the term "beaten path" used for these languages, or languages that an organization chooses to use and forbids the use of others.
Perhaps the new "Green Tea" GC will help? It's described as "a parallel marking algorithm that, if not memory-centric, is at least memory-aware, in that it endeavors to process objects close to one another together."
https://github.com/golang/go/issues/73581
I saw that! I’m definitely interested in trying it out to see if it helps for our use case. Of course, at this point we’ve reduced allocations so much the GC doesn’t have a ton of work to do, unless we slip up somewhere (which has happened). I’ll probably have to intentionally add some allocations in a hot path as a stress test.
What I would absolutely love is a compacting garbage collector, but my understanding is Go can’t add that without breaking backwards compatibility, and so likely will never do that.
I guess you'd be interested in the arena experiment, though it seems to be currently on pause
Go has its fair share of flaws but I still think it hits a sweet spot that no other server side language provides.
It’s faster than Node or Python, with a better type system than either. It’s got a much easier learning curve than Rust. It has a good stdlib and tooling. Simple syntax with usually only one way to do things. Error handling has its problems but I still prefer it over Node, where a catch clause might receive just about anything as an “error”.
Am I missing a language that does this too or more? I’m not a Go fanatic at all, mostly written Node for backends in my career, but I’ve been exploring Go lately.
>with a better type system than either
Given Python's substantial improvements recently, I would put it far ahead of the structural typing done in Go, personally.
> It’s faster than Node or Python, with a better type system than either. It’s got a much easier learning curve than Rust. It has a good stdlib and tooling. Simple syntax with usually only one way to do things. Error handling has its problems but I still prefer it over Node, where a catch clause might receive just about anything as an “error”.
I feel like I could write this same paragraph about Java or C#.
Java and C# are both languages with A LOT more features and things to learn. With Go, someone can pick up 80% of the language in a single day.
Just because you can learn about something doesn't mean you need to. C# now offers top-level programs that are indistinguishable from python scripts at a quick glance. No namespaces, classes or main methods are required. Just the code you want to execute and one simple file.
https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...
I mostly agree with you except the simple syntax with one way of doing things. If my memory serves me, Java supports at least 2 different paradigms for concurrency, for example, maybe more. I don’t know about C#. Correct me if wrong.
But that's only because they're older and were around before modern concurrent programming was invented.
In C#, for example, there are multiple ways, but you should generally be using the modern approach of async/Task, which is trivial to learn and used exclusively in examples for years.
Maybe this is a bit pedantic, but it bothers me when people refer to "Node" as a programming language. It's not a language, it's a JavaScript runtime. Which to that you might say "well when people say Node they just mean JavaScript". But that's also probably not accurate, because a good chunk of modern Node-executed projects are written in TypeScript, not JavaScript. So saying "Node" doesn't actually say which programming language you mean. (Also, there are so many non-Node ways to execute JavaScript/TypeScript nowadays)
Anyway, assuming you're talking about TypeScript, I'm surprised to hear that you prefer Go's type system to TypeScript's. There are definitely cases where you can get carried away with TypeScript types, but due to that expressiveness I find it much more productive than Go's type system (and I'd make the same argument for Rust vs. Go).
My intent was just to emphasize that I’m comparing Go against writing JavaScript for the Node runtime and not in the browser, that is all, but you are correct.
Regarding Typescript, I actually am a big fan of it, and I almost never write vanilla JS anymore. I feel my team uses it well and work out the kinks with code review. My primary complaint, though, is that I cannot trust any other team to do the same, and TS supports escape hatches to bypass or lie about typing.
I work on a project with a codebase shared by several other teams. Just this week I have been frustrated numerous times by explicit type assertions of variables to something they are not (`foo as Bar`). In those cases it’s worse than vanilla JS because it misleads.
Yeah, but no one is using v8 directly, even though technically you could if you wanted. Node.js is as much JavaScript as LuaJIT is Lua, or GCC compiles C.
it is pedantic, everyone knows what "node" means in this context
Yeah the big problem is that most languages have their fair share of rough edges. Go is performant and portable* with a good runtime and a good ecosystem. But it also has nil pointers, zero values, no destructors, and no macros. (And before anyone says macros are bad, codegen is worse, and Go has to use a lot of codegen to get around the lack of macros).
There are languages with fewer warts, but they're usually more complicated (e.g. Rust), because most of Go's problems are caused by its creators' fixation with simplicity at all costs.
It definitely hits a sweet spot. There is basically no faster, widely used programming language predominantly used for web services in production than Go. You can argue Rust, but I just don't see it in job listings. And virtually no one is writing web services in C or C++ directly.
Maybe Nim. But it's not really caught on and the ecosystem is therefore relatively immature.
I still don't understand why defer works on function scope, and not lexical scope, and nobody has been able to explain to me the reason for it.
In fact, this was so surprising to me that I only found out about it when I wrote code that processed files in a loop, and it started crashing once the list of files got too big, because defer didn't close the handles until the function returned.
When I asked some other Go programmers, they told me to wrap the loop body in an anonymous func and invoke that.
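i.e. something like this (process is a placeholder):

    for _, name := range files {
        func() {
            f, err := os.Open(name)
            if err != nil {
                return
            }
            defer f.Close() // now runs at the end of each iteration, not at function exit
            process(f)
        }()
    }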
Other than that (and some other niggles), I find Go a pleasant, compact language, with an efficient syntax, that kind of doesn't really encourage people trying to be cute. I started my Go journey rewriting a fairly substantial C# project, and was surprised to learn that despite it having like 10% of the features of C#, the code ended up being smaller. It also encourages performant defaults, like not forcing GC allocation at every turn, very good and built-in support for codegen for stuff like serialization, and no insistence on 'eating the world' like C# does with stuff like ORMs that showcase you can write C# instead of SQL for RDBMS and doing gRPC by annotating C# objects. In Go, you do SQL by writing SQL, and you do gRPC by writing protobuf specs.
So sometimes you want lexical scope, and sometimes function scope. For example, maybe you open a bunch of files in a loop and need them all open for the rest of the function.
Right now it's function scope; if you need it lexical scope, you can wrap it in a function.
Suppose it were lexical scope and you needed it function scope. Then what do you do?
Making it lexical scope would make both of these solvable, and would be clear for anyone reading it.
You can just introduce a new scope wherever you want with {} in sane languages, to control the required behavior as you wish.
You can start a new scope with `{}` in Go. If I have a bunch of temp vars I'll declare the final result outside the braces and then do the work inside. But these days I'll just write a function. It's clearer and easier to test.
Currently, you can write
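The original snippet isn't shown; presumably it's something along these lines, where a conditionally registered defer still runs at function exit (the temp-file detail is an assumption, echoing a sibling comment about conditionally removing a temp file):

```
package demo

import "os"

func run(makeScratch bool) error {
	if makeScratch {
		tmp, err := os.CreateTemp("", "scratch-*")
		if err != nil {
			return err
		}
		// With function-scoped defer, this cleanup runs when run() returns.
		// If defer were block-scoped, the file would be removed at the end
		// of this if block, and you'd have to hoist a variable out of it.
		defer os.Remove(tmp.Name())
		defer tmp.Close()
	}
	// ... do the rest of the work ...
	return nil
}
```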
When it's lexically scoped, you'd need to add some variable. Not that that happens a lot, but a lexically scoped defer isn't needed often either.
What's an example of where you'd need to do that?
I can't recall ever needing that (but that might just be because I'm used to lexical scoping for defer-type constructs / RAII).
> Suppose it were lexical scope and you needed it function scope. Then what do you do?
Defer a bulk thing at the function scope level, and append files to an array after opening them.
That seems like more work, and less readability, than sticking in the extra function.
Would be nice to have both options though. Why not a “defer” package?
I never wanted function-scope defer, not sure what would be the usecase, but if there was one, you could just do what the other comments suggested.
Really? I find the opposite is true. If I need lexical scope then I’d just write, for example
The reason I might want function scope defer is that there might be a lot of different exit points from that function. With lexical scope, there are only three ways to safely jump out of the scope:
1. reaching the end of the procedure, in which case you don't need a defer;
2. a 'return', in which case you're also exiting the function scope;
3. a 'break' or 'continue', which admittedly could see the benefit of a lexical scope defer, but those are also generally trivial to break into their own functions; and arguably should be, if your code is getting complex enough that you've got enough branches to want a defer.
If Go had other control flows like try/catch, and so on and so forth, then there would be a stronger case for lexical defer. But it’s not really a problem for anyone aside those who are also looking for other features that Go also doesn’t support.
You do what the compiler has to do under the hood: at the top of the function create a list of open files, and have a defer statement that loops over the list closing all of the files. It's really not a complicated construct.
defer { close all the files in the collection }
?
OK, what happens now if you have an error opening one of those files, return an error from inside the for loop, and forget to close the files you'd already opened?
You put the files in the collection as you open them, and you register the defer before opening any of them. It works fine. Defer should be lexically scoped.
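A minimal sketch of that pattern, with the defer registered before any file is opened:

```
package demo

import "os"

func openAll(paths []string) error {
	var files []*os.File
	// Registered up front, so an early error return still closes whatever
	// was opened so far; the closure sees later appends to files.
	defer func() {
		for _, f := range files {
			f.Close()
		}
	}()

	for _, path := range paths {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		files = append(files, f)
	}

	// ... use all the files for the rest of the function ...
	return nil
}
```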
1. it avoids a level of indentation until you wrap it in a function
2. mechanic is tied to call stack / stack unwinding
3. it feels natural when you're coming from C with `goto fail`
(yes it annoys me when I want to defer in a loop & now that loop body needs to be a function)
You can write SQL or use protobuf specs with C#. You just also have the other options.
There’s probably no deep reason, does it matter much?
Yes it does: function-scope defer needs a dynamic data structure to keep track of pending defers, so it's not zero cost.
It can also be a source of bugs where you hang onto something longer than intended: considering there's no indication of something that might block in Go, you can acquire a mutex, defer the release, and be surprised when some function call ends up blocking and your whole program hangs for a second.
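A hedged sketch of that kind of surprise; the type and the HTTP call are made up for illustration:

```
package demo

import (
	"io"
	"net/http"
	"sync"
)

type cache struct {
	mu   sync.Mutex
	last []byte
}

// The deferred Unlock means the lock is held until refresh returns,
// including during the slow network call, so every other caller that
// wants the lock blocks until the HTTP request finishes.
func (c *cache) refresh() error {
	c.mu.Lock()
	defer c.mu.Unlock()

	resp, err := http.Get("https://example.com/data") // placeholder URL
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	c.last = body
	return nil
}
```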
I think it's only a real issue when you're coming from a language that has different rules. Block-scoping (and thus not being able to e.g. conditionally remove a temp file at the end of a function) would be equally surprising for someone coming from Go.
But I do definitely agree that the dynamic nature of defer and it not being block-scoped is probably not the best
Having to wrap a loop body in a function that's immediately invoked seems like it would make the code harder to read. Especially for a language that prides itself on being "simple" and "straightforward".
I’ve worked with languages that have both, and find myself wishing I could have function-level defer inside conditionals when I use the block-level languages.
Lexical scope does not have a stack to put defer onto.
All the defer sites in a lexical scope are static, you can target those sites directly or add a fixed-size stack in the frame.
I worked briefly on extending a Go static site generator someone wrote for a client. The code was very clear and easy to read, but difficult to extend due to the many rough edges of the language. Simple changes required altering a lot of code in ways that were not immediately obvious. The ability to encapsulate and abstract is hindered in the name of "simplicity." Abstraction is the primary way we achieve simple and easy-to-extend code. John Ousterhout defined a complex program as one that is difficult to extend, rather than one that is necessarily large or difficult to understand at scale. The average Go program seems to violate this principle a lot. Programs appear "simple" but extension proves difficult and fraught.
Go is a case of the emperor having no clothes. Telling people that they just don’t get it or that it’s a different way of doing things just doesn’t convince me. The only thing it has going for it is a simple dev experience.
I find the way people talk about Go super weird. If people have criticisms people almost always respond that the language is just "fine" and people kind of shame you for wanting it. People say Go is simpler but having to write a for loop to get the list of keys of a map is not simpler.
I agree with your point, but you'll have to update your example of something go can't do
> having to write a for loop to get the list of keys of a map
We now have the stdlib "maps" package, you can do:
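The comment's snippet isn't shown; with the Go 1.23+ standard library it presumably looks roughly like this:

```
package main

import (
	"fmt"
	"maps"
	"slices"
)

func main() {
	someMap := map[string]int{"a": 1, "b": 2}
	// maps.Keys returns an iterator (Go 1.23+); slices.Collect gathers it
	// into a slice.
	keys := slices.Collect(maps.Keys(someMap))
	fmt.Println(keys)
}
```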
With the wonder of generics, it's finally possible to implement that. Now if only Go were consistent about methods vs functions, maybe then we could have `keys := someMap.Keys()` instead of it being a weird mix like `http.Request.Header.Set("key", "value")` but `map["key"] = "value"`.
Or 'close(chan x)' but 'file.Close()', etc etc.
Fair I stopped using Go pre-generics so I am pretty out of date. I just remember having this conversation about generics and at the time there was a large anti-generics group. Is it a lot better with generics? I was worried that a lot of the library code was already written pre-generics.
The generics are a weak mimicry of what generics could be, almost as if to say "there we did it" without actually making the language that much more expressive.
For example, you're not allowed to write the following:
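The example from the comment isn't shown; a sketch of the kind of thing that gets rejected (the type and method names are mine):

```
package demo

type Slice[T any] struct {
	items []T
}

// Does not compile: a method may use its receiver's type parameters (T),
// but it cannot declare new ones of its own (U), so a generic Map method
// on a generic container is impossible.
func (s Slice[T]) Map[U any](f func(T) U) Slice[U] {
	out := Slice[U]{items: make([]U, 0, len(s.items))}
	for _, v := range s.items {
		out.items = append(out.items, f(v))
	}
	return out
}
```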
That fails because methods can't have type parameters of their own; only types and functions can. It hurts the ergonomics of generics quite a bit. And, as you rightly point out, the stdlib is largely pre-generics, so now there's a bunch of duplicate functions, like "sort.Strings" and "slices.Sort", "atomic.Value" and "atomic.Pointer", quite possibly a sync/v2 soon https://github.com/golang/go/issues/71076, etc.
The old non-generic versions also aren't deprecated typically, so they're just there to trap people that don't know "no never use atomic.Value, always use atomic.Pointer".
Ooh! Or remember when a bunch of people acted like they had ascended to heaven for looking down on syntax-highlighting because Rob said something about it being a distraction? Or the swarms blasting me for insisting GOPATH was a nightmare that could only be born of Google's hubris (literally at the same time that `godep` was a thing and Kubernetes was spending significant efforts just fucking dealing with GOPATH.).
Happy to not be in that community, happy to not have to write (or read) Go these days.
And frankly, most of the time I see people gushing about Go, it's for features that trivially exist in most languages that aren't C, or are entirely subjective like "it's easy" (while ignoring, you know, reality).
I used go for years, and while it's able to get small things up and running quickly, bigger projects soon become death-by-a-thousand-cuts.
Debugging is a nightmare because it refuses to even compile if you have unused X (which you always will have when you're debugging and testing "What happens if I comment out this bit?").
The bureaucracy is annoying. The magic filenames are annoying. The magic field names are annoying. The secret hidden panics in the standard library are annoying. The secret behind-your-back heap copies are annoying (and SLOW). All the magic in Go eventually becomes annoying, because usually it's a naively repurposed thing (where they depend on something that was designed for a different purpose under different assumptions, but naively decided to depend on its side effects for their own ever-so-slightly-incompatible machinery, like special file names, and capitalization even though not all characters have such a thing... was it REALLY such a chore to type "pub" for things you wanted exposed?).
Now that AI has gotten good, I'm rather enjoying Rust because I can just quickly ask the AI why my types don't match or a gnarly mutable borrow is happening - rather than spending hours poring over documentation and SO questions.
I personally don't like Go, and it has many shortcomings, but there is a reason it is popular regardless:
Go is a reasonably performant language that makes it pretty straightforward to write reliable, highly concurrent services that don't rely on heavy multithreading - all thanks to the goroutine model.
There really was no other reasonably popular, static, compiled language around when Go came out.
And there still barely is - the only real competitor that sits in a similar space is Java with the new virtual threads.
Languages with async/await promise something similar, but in practice are burdened with a lot of complexity (avoiding blocking in async tasks, function colouring, ...)
I'm not counting Erlang here, because it is a very different type of language...
So I'd say Go is popular despite the myriad of shortcomings, thanks to goroutines and the Google project street cred.
Slowly but surely, the JVM has been closing the gap with Go. Efforts like virtual threads, ZGC, Lilliput, Leyden, and Valhalla keep narrowing it.
The change from Java 8 to 25 is night and day. And the future looks bright. Java is slowly bringing in more language features that make it quite ergonomic to work with.
I'm still traumatised by Java from my earlier career. So many weird patterns, FactoryFactories and Spring Framework and ORMs that work 90% of the time and the 10% is pure pain.
I have no desire to go back to Java no matter how much the language has evolved.
For me C# has filled the void of Java in enterprise/gaming environments.
C# is a highly underrated language that has evolved very quickly over the last decade into a nice mix of OOP and functional.
It's fast enough, easy enough (being very similar now to TypeScript), versatile enough, well-documented (so LLMs do a great job), with broad and well-maintained first party libraries, and the team has over time really focused on improving the terseness of the language (pattern matching and switch expressions are things I really miss when switching between C# and TS).
EF Core is also easily one of the best ORMs: super mature, stable, well-documented, performant, easy to use, and expressive. Having been in the Node ecosystem for the past year, there's really no comparison for building fast with less papercuts (Prisma, Drizzle, etc. all abound with papercuts).
It's too bad that it seems that many folks I've chatted with have a bad taste from .NET Framework (legacy, Windows only) and may have previously worked in C# when it was Windows only and never gave it another look.
While C# is great, the problem with programming languages is that you're not only picking a language, but also the kind of company that uses it and the kind of person who writes it.
Which means if you write C#, you'll encounter a ton of devs who come from an enterprise, banking, or government background, who think a 4-layer enterprise architecture with DTOs and 5-line classes is the only way you can write a CRUD app, and worst of all you'll see a ton of people who learned C# in college a decade ago and refuse to learn anything else.
EF is great, but most people use it because they don't have to learn SQL and databases.
Blazor is great, but most people use it because they don't want to learn Frontend dev, and JS frameworks.
I think you have a point with the types of resources, but in my experience, its also not hard to separate the wheat from the chaff with pretty simple heuristics (though that is likely very different now with AI and cheating!).
"Modern C#" (if we can differentiate that) has a lot of nice amenities for modeling like immutable `record` types and named tuples. I think where EF really shines is that it allows you to model the domain with persistence easily and then use DTOs purely as projections (which is how I use DTOs) into views (e.g. REST API endpoints).
I can't say for the broader ecosystem, but at least in my own use cases, EFC is primarily used for write scenarios and some basic read scenarios. But in almost all of my projects, I end up using CQRS with Dapper on the read side for more complex queries. So I don't think that it's people avoiding SQL; rather it's teams focused on productivity first.
WRT to Blazor, I would not recommend it in place of JS except for internal tooling (tried it at one startup and switched to Vue + Vite). But to be fair, modern FE development in JS is an absolute cluster of complexity.
I'm still sad that Silverlight[0] (and Moonlight) died because people hated MS so viscerally back then.
It was actually really good for the time and lightyears ahead of whatever Flash was doing.
But people rather used all kinds of hacks to get Flash working on Linux and OSX rather than use Moonlight.
[0] https://en.wikipedia.org/wiki/Microsoft_Silverlight
As someone who developed in it at the time, I found the reason it died was that they made new, slightly incompatible versions with every new Windows release.
After a while people got tired of doing updates.
I was so glad it died. It was a weird proprietary replacement for Flash, which itself was weird and proprietary, except the new one was owned by a huge company that publicly stated they wanted to crush Linux and friends.
A big chunk of their strategy at the time was around how to completely own the web. I celebrated every time their attempts failed.
I love C#, but have actually found LLMs to be quite bad a producing idiomatic code because the language is changing so fast and often they don't even know about the latest language(/blazor) features. I constantly have to undo my initial prompt and rewrite it to tell them that we don't use Startup.cs any more, only Program.cs, and Program.cs is a flat file and not a class.
I think that can be solved with an `instructions.md` and explicitly stating the language version/features to use.
To be fair those "weird patterns" weren't really Java itself but the crazy culture that grew up around it when it became "enterprise".
And actually coming over from C++!
It is incredible how many people think the GoF book uses Java, without ever having read anything about the book.
Plus it seems hopeful to think you'll only be working with the "new Java" paradigm when most enterprise software is stuck on older versions. Just like Python: in theory you can start a great new greenfield project, but 80% of the work in the industry is on older or legacy components.
I guess it's reasonable to be hopeful as a Java developer nowadays.
Modern Java communities are slowly adopting the common FP practice "making illegal states unrepresentable" and call it "data oriented programming". Which is nice for those of us who actively use ADT. I no longer need to repeatedly explain "what is Option<?>?" or "why ADT?" whenever I use them; I could just point them to those new resources.
Hopefully, this shift will steer the Java community in a saner direction than the current cargo cult, which believed mutable C-structs (under the guise of the "anemic domain model") plus a garbage collector was OOP.
Yeah, and you might just be given a mainframe with vacuum tubes..
Like, there are 10 million Java devs; there is a whole lot of completely brand new development going on in any language, let alone in such a huge one.
That isn’t Java, but spring.
That said, if on the JVM, just use Kotlin.
Or Clojure, Scala, Groovy.
and with the GraalVM, JavaScript/Node, Python, R, and Ruby.
among many others.
That’s great, but are you still using Maven and Gradle? I’d want to see a popular package manager that doesn’t suck before I’d consider going back.
(Similar to how Python is finally getting its act together with the uv tool.)
There are still a LOT of places running old versions of Java, like JDK 8.
Java is great if you stick to a recent version and update on a regular basis. But a lot of companies hate their own developers.
That may be true, but navigating 30 years of accumulated cruft, fragmented ecosystems and tooling, and ever-evolving syntax and conventions, is enough to drive anyone away. Personally, I never want to deal with classpath hell again, though this may have improved since I last touched Java ~15 years ago.
Go, with all its faults, tries very hard to shun complexity, which I've found over the years to be the most important quality a language can have. I don't want a language with many features. I want a language with the bare essentials that are robust and well designed, a certain degree of flexibility, and for it to get out of my way. Go does this better than any language I've ever used.
I can quite likely run a 30-year-old compiled .jar file on the latest Java version. Java is the epitome of backwards- and forwards-compatible changes, and the language was grown very carefully so the syntax is not too different; someone who hibernated since Java 7 will probably have no problem reading Java 25 code.
> Go, with all its faults, tries very hard to shun complexity
The whole field is about managing complexity. You don't shun complexity, you give tools to people to be able to manage it.
And Go goes the low end of the spectrum, of not giving enough features to manage that complexity -- it's simplistic, not simple.
I think the optimum is actually at Java: it is a very easy language with not much going on (compared to, say, Scala), but with just enough expressivity that you can have efficient and comfortable-to-use libraries for all kinds of stuff (e.g. a completely type-safe SQL DSL).
you shun unnecessary complexity.
If you don't think that exists in Java, spend some time in the Maven documentation or Spring documentation: https://docs.spring.io/spring-framework/reference/index.html https://maven.apache.org/guides/getting-started/ Then imagine yourself a beginner to programming trying to make sense of that documentation.
you try to keep the easy things easy and simple, and try to make the hard things easier and simpler, if possible. Simple ain't easy.
I don't hate Java (anymore); it has plenty of utility (like, say, Jira). But when I'm writing Go I pretty much never think "oh, I wish I was writing Java right now." No thanks.
Well, spring is a whole framework that gives you a lot of stuff, but sure, complexity has to live somewhere - fundamentally so.
Without it, you either write that complexity yourself or fail to even recognize why it is necessary in the first place, e.g. failing to realize the existence of SQL injection, cross-site scripting, etc. Backends have some common requirements, and it is pretty rare that your problem wouldn't need these primitives, so as a beginner I would advise learning the framework as well, the same way you would learn how to fly a plane before attempting it.
For other stuff, there is no requirement to use Spring - vanilla java has a bunch of tools and feel free to hack whatever you want!
> The whole field is about managing complexity. You don't shun complexity, you give tools to people to be able to manage it.
Complexity exists in all layers of computing, from the silicon up. While we can't avoid complexity of real world problems, we can certainly minimize the complexity required for their solutions. There are an infinite amount of problems caused primarily by the self-induced complexity of our software stacks and the hardware it runs on. Choosing a high-level language that deliberately tries to avoid these problems is about the only say I have in this matter, since I don't have the skill nor patience to redo decades of difficult work smarter people than me have done.
Just because a language embraces simplicity doesn't mean that it doesn't provide the tools to solve real world problems. Go authors have done a great job of choosing the right set of trade-offs, unlike most other language authors. Most of the time. I still think generics were a mistake.
Being able to create a self contained Kotlin app (JVM) that starts up quickly and uses the same amount of memory as the equivalent golang app would be amazing.
Graal native Image does that (though the compile time is quite long, but you can just run it on the JVM for development with hot reload and whatnot, and only do a native compile at release)
From what I have heard, Graal is still quite a headache if you are using libraries that are not compatible, but maybe this is out of date.
Still an issue. The main problem is for native compilation you have to declare your reflection targets upfront. That can be a headache if your framework doesn't support it.
You can get a large portion of what graal native offers by using AppCDS and compressed object headers.
Here's the latest JEP for all that.
https://openjdk.org/jeps/483
The comparative strictness and simplicity of Go also makes it a good option for LLM-assisted programming.
Every single piece of Go 1.x code scraped from the internet and baked into the models is still perfectly valid and compiles with the latest version.
> And there still barely is - the only real competitor that sits in a similar space is Java with the new virtual threads
Which Google uses far more commonly than Go, still to this day.
Well, Google isn't really making a ton of new (successful) services these days, so the potential to introduce a new language is quite small, unfortunately :). Plus, Go lacks one quite important thing, which is the ability to do an equivalent of HotSwap in a live service; that is really useful for debugging large complex applications without shutting them down.
Google is 100% writing a whole load of new services, and Go is 13 years old (even older within Google), so it surely has had ample opportunities to take.
As for hot swap, I haven't heard it being used for production, that's mostly for faster development cycles - though I could be wrong. Generally it is safer to bring up the new version, direct requests over, and shut down the old version. It's problematic to just hot swap classes, e.g. if you were to add a new field to one of your classes, how would old instances that lack it behave?
There are real pain points with async/await, but I find the criticism there often overblown. Most of the issues go away if you go pure async, mixing older sync code with async is much more difficult though.
My experience is mostly with C#, but async/await works very well there in my experience. You do need to know some basics to avoid problems, but that's the case for essentially every kind of concurrency. They all have footguns.
What modern language is a better fit for new projects in your opinion?
Elixir, with types
I love Elixir, but you cannot compile it into a single binary, it is massively concurrent but slow single-threaded, and deployment is still nontrivial.
And lists are slower than arrays, even if they provide functional guarantees (everything is a tradeoff…)
That said, pretty much everything else about it is amazing though IMHO and it has unique features you won’t find almost anywhere else
That doesn’t exist yet. Also Elixir is in no way a replacement for Go.
It can’t match it for performance. There’s no mutable array, almost everything is a linked list, and message passing is the only way to share data.
I primarily use Elixir in my day job, but I just had to write a high-performance tool for data migration, and I used Go for that.
My vote is for Elixir as well, but it's not a competitor for multiple important reasons. There are some languages in that niche, although too small and immature, like Crystal, Nim. Still waiting for something better.
P.S. Swift, anyone?
yeah, if the requirement is "makes it pretty straightforward to write reliable, highly concurrent services that don't rely on heavy multithreading", Elixir is a perfect match.
And even without types (which are coming and are looking good), Elixir's pattern matching is a thousand times better than the horror of Go error handling.
This one i can get behind.
Clojure
For web frontend: js
For ML/data: python
For backend/general purpose software: Java
The only silver bullet we know of is building on existing libraries. These are also non-accidentally the top 3 most popular languages according to any ranking worthy of consideration.
I'd swap java with go any day of the week. I never liked how much 'code-padding' is required with java `public static void main`
For Java 25 which is planned to be released in a couple of weeks:
----- https://openjdk.org/jeps/512 -----
First, we allow main methods to omit the infamous boilerplate of public static void main(String[] args), which simplifies the Hello, World! program to:
Second, we introduce a compact form of source file that lets developers get straight to the code, without a superfluous class declaration. Third, we add a new class in the java.lang package that provides basic line-oriented I/O methods for beginners, thereby replacing the mysterious System.out.println with a simpler form. So Java is getting closer to Go's syntax, whether some like it or not, apparently. :-)
Always find 'java is verbose' to be a novice argument from go coders when there is so much boilerplate on the go side of things that's nicely handled on the java side.
Every function call is 3-5 lines in Go. For any problem which needs to handle errors, the Go code is generally >2x the Java LOC. Go is a language that especially suffers from the "code padding" problem.
It's rich to complain about verbosity coming from Go.
Nonetheless, Java has eased the psvm requirements: you don't even have to explicitly declare a class, and a void main method is enough. [1] Not that it would matter for any non-script code.
[1] https://openjdk.org/jeps/495
Java, lol. Enterprise lang with too many abstractions and wrongly interpreted OOP. Absolutely not.
What about php/ruby for web?
An expert Ruby programmer can do wonders and be insanely productive, but I think there is a size from which it doesn't scale as nicely (both from a performance and a larger team perspective).
PHP's frameworks are fantastic and they hide a lot from an otherwise minefield of a language (though steadily improved over the years).
Both are decent choices if this is what you/your developers know.
But they wouldn't be my personal first choice.
Absolutely no on Java. Even if the core language has seen improvements over the years, choosing Java almost certainly means that your team will be tied to using proprietary / enterprise tools (IntelliJ) because every time you work at a Java/C# shop, local environments are tied to IDE configurations. Not to mention Spring -- now every code review will render "Large diffs are not rendered by default." in Github because a simple module in Java must be a new class at least >500 LOC long.
When did you last touch java, before 2000?
Local environments are not tied to IDEs at all, but you are doing yourself a disservice if you don't use a decent IDE irrespective of language - they are a huge productivity boost.
And are you stuck in the XML times or what? Spring Boot is insanely productive. Just as a matter of fact, Go is significantly more verbose than Java, with all the unnecessary if err checks.
> When did you last touch java, before 2000?
August 22, 2025.
Local environments are not literally tied to IDEs, but they effectively are in any non-trivially sized project. And the reason is because most Java shops really do believe "you are doing yourself a disservice if you don't use a decent IDE irrespective of language." I get along fine with a text editor + CLI tools in Deno, Lua, and Zig. Only when I enter Java world do the wisest of the wise say "yeah there is a CLI, but I don't really know it. I recommend you download IntelliJ and run these configs instead."
Yes Spring Boot is productive. So is Ruby on Rails or Laravel.
Any production-grade project will use either Maven or Gradle for builds. There are CI/CD pipelines, lints, etc, how would all these work if you could only build through an IDE?
Sure, there are some awfully dated companies that still send changed files over email to each other with no version control, I'm sure some of those are stuck with an IDE config, but to be honest where I have seen this most commonly were some Visual Studio projects, not Java. Even though you could find any of these for any other language, you just need to scale your user base up. A language that hasn't even hit 1.0 will have a higher percentage of technically capable users, that's hardly a surprise.
>Only when I enter Java world do the wisest of the wise say "yeah there is a CLI, but I don't really know it. I recommend you download IntelliJ and run these configs instead."
Then they obviously don't know their tooling well, and I would hesitate to call a jr 'the wisest of the wise'
I know, both proprietary and enterprise, right? https://github.com/JetBrains/intellij-community/blob/idea/20... (I would also link to the Apache 2 copy of PyCharm but it wouldn't matter to folks who just enjoy shitting on professional tools)
That's the community edition. Cute and snarky comment, though.
Count Rust. From what I can see, it's becoming very popular in the microservices landscape. Not hard to imagine why. Multithreading is a breeze. Memory use is low. Latency is great.
Rust async makes it quite easy to shoot yourself in the foot in multiple ways.
Most users writing basic async CRUD servers won't notice, but you very much do if you write complex, highly concurrent servers.
That can be a viable tradeoff, and is for many, but it's far from being as fool-proof as Go.
A language with Rust's features minus memory and lifetime management, plus Go's GC and stdlib, would possibly be the language I've been waiting for.
For the most part I've loved Go since just before 1.0 through today. Nits can surely be picked, but "it's still not good" is a strange take.
I think there is little to no chance it can hold on to its central vision as the creators "age out" of the project, which will make the language worse (and render the tradeoffs pointless).
I think allowing it to become pigeonholed as "a language for writing servers" has cost, and will continue to cost, important mindshare that instead jumps to Rust or remains in Python, etc.
Maybe it's just fun, like harping on about how bad Visual Basic was, which was true but irrelevant, as the people who needed to do the things it did well got on with doing so.
Fascinating. Coming from C++ I can't imagine not having RAII. That seems so wordy and painful. And that nil comparison is...gross.
I don't get how you can assign an interface to be a pointer to a structure. How does that work? That seems like a compile error. I don't know much about Go interfaces.
There were points in this article that made me feel like Rob Schneider in Demolition Man saying "He doesn't know about the three sea shells!" but there were a couple points made that were valid.
the nil issue. An interface, when assigned a struct pointer, is no longer nil even if that pointer is nil - probably a mistake. Valid point.
append in a func. Definitely one of the biggest issues is that slices are references to a shared backing array. They did this to save memory and speed, but the append issue becomes a monster unless abstracted. Valid point.
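A minimal sketch of the aliasing surprise being referred to:

```
package main

import "fmt"

func main() {
	a := []int{1, 2, 3, 4}
	b := a[:2]        // b shares a's backing array: len 2, cap 4
	b = append(b, 99) // fits in the spare capacity, so it overwrites a[2]
	fmt.Println(a)    // [1 2 99 4]
	fmt.Println(b)    // [1 2 99]
}
```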
err in scope for the whole func. You defined it, of course it is. Better to reuse a generic var than constantly instantiate another. The lack of try catch forces you to think. Not a valid point.
defer. What is the difference between a scope block and a function block? I'll wait.
Great article!
I like Go and Rust, but sometimes I feel like they lack tools that other languages have just because they WANT to be different, without any real benefit.
Whenever I read Go code, I see a lot more error handling code than usual because the language doesn't have exceptions...
And sometimes Go/Rust code is more complex because it also lacks some OOP tools, and there are no tools to replace them.
So, Go/Rust has a lot more boilerplate code than I would expect from modern languages.
For example, in Delphi, an interface can be implemented by a property:
This isn't possible in Go/Rust. And the Go documentation I read strongly recommended using composition, without good tools for that. This "the new way is the best way, period; ignore the good things of the past" attitude is common.
When MySQL didn't have transactions, the documentation said "perform operations atomically" without saying exactly how.
MongoDB didn't have transactions until version 4.0. They said it wasn't important.
When Go didn't have generics, there were a bunch of "patterns" to replace generics... which in practice did not replace.
The lack of inheritance in Go/Rust leaves me with the same impression. The new patterns do not replace the inheritance or other tools.
"We don't have this tool in the language because people used it wrong in the old languages." Don't worry, people will use the new tools wrong too!
Go allows deferring an implementation of an interface to a member of a type. It is somewhat unintuitive, and I think the field has to be an unnamed one.
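A minimal sketch of that embedding mechanism; the names are made up:

```
package main

import "fmt"

type Greeter interface {
	Greet() string
}

type englishGreeter struct{}

func (englishGreeter) Greet() string { return "hello" }

// Door embeds englishGreeter as an unnamed field, so Greet is promoted
// and Door satisfies Greeter without declaring the method itself.
type Door struct {
	englishGreeter
	label string
}

func main() {
	var g Greeter = Door{label: "front"}
	fmt.Println(g.Greet()) // hello
}
```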
Similarly, if a field implements a trait in Rust, you can expose it via `AsRef` and `AsMutRef`, just return a reference to it.
These are not ideal tools, and I find the Go solution rather unintuitive, but they solve the problems that I would've solved with inheritance in other languages. I rarely use them.
Technically, the term "billion dollar mistake", coined in 1965, would now be a "10 billion dollar mistake" in 2025. Or, if the cost is measured in terms of housing, it would be a "21 billion dollar mistake".
:^/
Every language has its flaws. I respect Go for staying relatively simple. And it has decent concurrency (for my needs).
These days, it seems like languages keep chasing paradigms and over-adapting to moving targets.
Look at what Rust and Swift have become. C# has stayed relatively sane somehow, but it's not what I'd call independent.
I agree with just about everything in the post. I've been bit a time or two by the "two flavors of null." That said, my most pleasant and most productive code bases I've worked in have all been Go.
Some learnings. Don't pass sections of your slices to things that mutate them. Anonymous functions need recovers. Know how all goroutines return.
If you don't like Go, then just let go. I hope nobody forces you to use it.
Some critique is definitely valid, but some of it just sounds like they didn't take the time to grasp the language. It's trade-offs all the way down. For example, there is a lot I like about Rust, but it's still not my favorite language.
In my opinion, the section on data ownership contained the most egregious and unforgivable example of go's flaws. The behavior of append in that example is the kind of bug-causing or esoteric behavior that should never make it into any programming language. As a regular writer of go code, I understand why this particular quirk of the language exists, but I hope I never truly "grasp" it to the extent that I forgive it.
I'm surprised people in these comments aren't focusing more on the append example.
Disagree. Most critiques of Go I've read have been weak. This one was decent. And I say that as a big enjoyer of Go.
That said, I really wish there were a revamp where they did things right in terms of nil, scoping rules, etc. However, they've committed to never breaking existing programs (honorable, understandable), so the design space is extremely limited. I prefer dealing with local awkwardness and even excessive verbosity over systemic issues any day.
Few things are truly forced upon me in life but walking away from everything that I don't like would be foolish. There is compromise everywhere and I don't think entering into a tradeoff means I'm not entitled to have opinions about the things I'm trading off.
I don't think the article sounds like someone didn't take the time to grasp the language. It sounds like it's talking about the kind of thing that really only grates on you after you've seriously used the language for a while.
Sure but life choices are one thing, but this critique is still valuable. I learned a thing or two, and also think go can improve (I understand it's because I don't grok the language but I still prefer map to append in a loop)
Go indeed has some problems. But IMHO, none described in this article is valid.
"Love it or leave it!"
Which begs the question: What is your favorite language?
In 2015 I wrote an article, "How to complain about Go", to mock this type of article that completely misses the big picture and the real-world impact of an "imperfect" language. Glad it's still relevant :)
This has always been my takeaway with Go. An imperfect language for imperfect developers, chosen for organizations (not people) to ensure a baseline usefulness of their engineers from junior to senior. Do I like it? No. Would I ever choose it willingly? No. But when the options at the time were Javascript or untyped Python, it may have seemed like a more attractive option. Python was also dealing with a nasty 2-to-3 upgrade at the time that looks foolish in comparison to Golang's automatic formatting and upgrade mechanisms.
They are forcing people to write Typescript code like it’s Golang where I am right now (amongst other extremely stupid decisions - only unit test service boundaries, do not pull out logic into pure functions, do not write UI tests, etc.). I really must remember to ask organisations to show me their code before joining them.
(I realise this isn’t who is hiring, but email in bio)
I do this and think it works really well...
myfunc(arg: string): Value | Err
I really try not to throw anymore with typescript, I do error checking like in Go. When used with a Go backend, it makes context switching really easy...
They still throw and just have millions of try catch blocks repeated everywhere around almost every function :-/
Have you seen Java people write Python? Same vibe :)
Reminded me of this classic talk https://www.youtube.com/watch?v=o9pEzgHorH0
Ah yes. I love working at places that hire experts just to tell them how they should do the work they're an expert at.
Cross-compiling Go is easy. Static binaries work everywhere. The cryptographic library is the foundation of various CAs like Let's Encrypt, and it is excellent.
The green threads are very interesting since you can create 1000s of them at a low cost and that makes different designs possible.
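A rough sketch of what that low cost makes routine (the work done per goroutine here is arbitrary):

```
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	results := make([]int, 10000)

	// Spawning tens of thousands of goroutines is cheap enough that this
	// sort of design is unremarkable in Go.
	for i := range results {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = i * i
		}(i)
	}
	wg.Wait()
	fmt.Println(results[9999])
}
```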
I think this complaining about defer is a bit trivial. The actual major problem for me is the way imports work: the fact that it knows about GitHub, and the way it's difficult to replace a dependency there with some other one, including a local one. The forced layout of files, cmd directories, etc.
I can live with it all but modules are the things which I have wasted the most time and struggled the most.
> The forced layout of files, cmd directories etc etc.
You don't need to have a cmd directory. I see it a lot in Go projects but I'm not sure why.
> The fact that it knows about github and the way that it's difficult to replace a dependency there with some other one including a local one.
Use `replace` in `go.mod`, or `go.work` if you're hacking on it locally?
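For what it's worth, a replace directive looks roughly like this; the module paths are placeholders:

```
module example.com/myservice

go 1.22

require github.com/some/dependency v1.2.3

// Point the dependency at a local checkout instead of the upstream repo.
replace github.com/some/dependency => ../dependency
```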
or go ahead and commit it, if you're galaxy brain and want to throw off would-be attackers trying to understand your codebase https://github.com/pulumi/pulumi/blob/v3.191.0/pkg/go.mod#L5 or https://github.com/opentofu/terraform-provider-aws/blob/main...
In practice, none of these thing mentioned in the article have been an issue for me, at all. (Upvoted anyway)
What has been an issue for me, though, is working with private repositories outside GitHub (and I have to clarify that, because working with private repositories on GitHub is different, because Go has hardcoded settings specifically to make GitHub work).
I had hopes for the GOAUTH environment variable, but either (1) I'm more dumb and blind than I thought I already was, or (2) there's still no way to force Go to fetch a module using SSH without trying an HTTPS request first. And no, `GOPRIVATE="mymodule"` and `GOPROXY="direct"` don't do the trick, not even combined with Git's `insteadOf`.
Definitely not just you. At my previous job we had a need to fetch private Go modules from Gitlab and, later, a self-hosted instance of Forgejo. CTO and I spent a full day or so doing trial and error to get a clean solution. If I recall correctly, we ultimately resorted to each developer adding `GOPRIVATE={module_namespace}` to their environment and adding the following to their `.netrc`:
```
machine {server} # e.g. gitlab.com
login {username}
password {read_only_api_key} # Must be actual key and not an ENV var
```
Worked consistently, but not a solution we were thrilled with.
Ok, it's not a good fit for you.
Don't use it I guess and ignore all the X is not good posts for language X you do decide to use?
As usual, let's revisit something that Pascal could do in 1976, and what the equivalent looks like in Go in 2025.
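The snippets aren't reproduced in this thread; the Go side of the comparison is presumably the usual iota pattern, sketched here with made-up names:

```
package demo

// The closest thing Go has to an enumerated type: a named integer type
// plus iota constants. Nothing stops a Color from holding 42, and a
// switch over Color is not checked for exhaustiveness.
type Color int

const (
	Red Color = iota
	Green
	Blue
)

func name(c Color) string {
	switch c {
	case Red:
		return "red"
	case Green:
		return "green"
	case Blue:
		return "blue"
	default:
		return "unknown" // the compiler won't tell you if a case is missing
	}
}
```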
If Pascal doesn't have required exhaustive pattern matching, it's no better than Go or C# in this regard.
The absolutely pointless and ridiculous complaints about enums are just plain stupid by this point.
Ok we get it, you want something fancier. Well, you didn't get it. Deal with it. Go has other problems (as pointed out by the OP). I really don't understand how people could care so much about this enum thing. Yes, Rust enums are great, but they are just completely different. Why would I ever compare them and waste energy on that? Different designers, different decisions.
People want sum types because sum types solve a large set of design problems, while being a concept old enough to appear back in SML in the 1980s. One of the best-phrased complaints I've seen against Go's design is the claim that the Go language team ignored 30+ years of programming language design, because the language really seems to introduce design issues and footguns that were solved decades before work on it even started.
Rust did not exist in 1976.
ML did, however (1973), and had..... sum types!
Sum types are not the same as the trivial example above. Sum types are actually useful, for one thing.
Where do you put the comments on the Pascal version?
Where you feel like it.
Where's Pascal today?
Ouch!! Pascal's lack of popularity certainly isn't due to the fact that it supports such nice enumerated types (or sets for that matter). I think he was just pointing out that such nice things have existed (and been known to exist) for a long time and that it's odd that a new language couldn't have borrowed the feature.
Being used by these folks, https://www.embarcadero.com/
If you prefer, I can provide the same example in C, C++, D, Java, C#, Scala, Kotlin, Swift, Rust, Nim, Zig, Odin.
Just below Go with Perl in between. All above Fortran, all below Visual Basic.
https://www.tiobe.com/tiobe-index/
It's alive and kicking, right? :) https://www.freepascal.org They even have a game engine that can compile to a WASM target: https://castle-engine.io/web
I like Go, but my main annoyance is deciding when to use a pointer or not use a pointer as variable/receiver/argument. And if it's an interface variable, it has a pointer to the concrete instance in the interface 'struct'. Some things are canonically passed as pointers, like contexts.
It just feels sloppy and I'm worried I'm going to make a mistake.
I mostly use it as a signal for mutability to some extent.
And also when I want a value with stable identity I'd use a pointer.
This confused me too. It is tricky because sometimes it's more performant to copy the data rather than use a pointer, and there's not a clear boundary as to when that is the case. The advice I was given was "profile your code and make your decision data-driven". That didn't make me happy.
Now I always use pointers consistently for the readability.
Using pointers as optional types is the absolute worst part of using go.
...do you want a copy or the original object?
Yup, that's it. If you're going to modify a field in the receiver, or want to pass a field by reference, you're going to need a pointer. Otherwise, a value will do, unless ... that weird interface thing makes you. I guess that's the problem?
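A minimal sketch of that value-vs-pointer receiver distinction; the type is made up:

```
package main

import "fmt"

type Counter struct {
	n int
}

// Value receiver: the method works on a copy, so the caller's Counter is
// left untouched.
func (c Counter) IncByValue() { c.n++ }

// Pointer receiver: the method mutates the caller's Counter.
func (c *Counter) IncByPointer() { c.n++ }

func main() {
	c := Counter{}
	c.IncByValue()
	fmt.Println(c.n) // 0
	c.IncByPointer() // shorthand for (&c).IncByPointer()
	fmt.Println(c.n) // 1
}
```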
Just use pointers everywhere? Who cares.
But just not a pointer to an interface.
Its annoying to need to think about whether I’m working with an interface type of concrete type.
And if use pointers everywhere, why not make it the default?
I just always use pointers for structs.
I use plain struct values about 80% of the time. A common misunderstanding: it does not reduce performance to use value rather than pointer receivers (the Go compiler generates the same code for both; no copy of the struct receiver happens). Most structs are small anyway, safe to copy. Go also automatically translates value receivers and pointer receivers back and forth. And if I see a pointer, I see something that can be mutated (or very large); in fact, if I see a pointer, I think "here we go... will it be mutated?". I've written 400,000 LOC in Go and rarely see this issue.
Recently I was in a meeting where we were considering adopting Go more widely for our backend services, but a couple of the architect-level guys brought up the two-types-of-nil issue and ultimately shot it down. I feel like they were being a little dramatic about it, but it is startling to me that it's 2025 and the team still has not fixed it. If the only thing you value in language design is never breaking existing code, even if by any definition that existing code is already broken, eventually the only thing using your language will be existing code.
This has already been explained many times, but it's so much fun I'll do it again. :-)
So: The way Go presents it is confusing, but this behavior makes sense, is correct, will never be changed, and is undoubtedly depended on by correct programs.
The confusing thing for people used to C++ or C# or Java or Python or most other languages is that in Go nil is a perfectly valid pointer receiver for a method to have. The method resolution lookup happens statically at compile time, and as long as the method doesn't try to deref the pointer, all is good.
It still works if you assign to an interface.
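The original snippet isn't shown; a minimal sketch along the lines of the explanation below, using just a Cat type (Dog is analogous):

```
package main

import "fmt"

type Animal interface {
	Name() string
}

type Cat struct{}

// A nil *Cat is a perfectly valid receiver as long as the method doesn't
// dereference it.
func (c *Cat) Name() string { return "cat" }

func main() {
	var c *Cat            // nil pointer
	fmt.Println(c.Name()) // "cat": a nil receiver is fine

	var a Animal = c
	fmt.Println(a.Name()) // still "cat"

	// The interface now carries (type=*Cat, value=nil), which is not the
	// interface zero value (nil, nil), so this comparison is false.
	fmt.Println(a == nil) // false
}
```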
The methods print what you'd expect. But the interface method lookup can't happen at compile time, so the interface value is actually a pair: the pointer to the type, and the instance value. The type is not nil, hence the interface value is something like (&Cat, nil) and (&Dog, nil) in each case, which is not the interface zero value, which is (nil, nil). But it's super confusing, because Go coerces a nil pointer into a non-nil (&type, nil) interface value. There's probably some naming or syntax way to make this clearer.
But the behavior is completely reasonable.
I deeply, seriously, believe that you should have written the words "It's super confusing", meditated on that for a minute, then left it at that. It is super confusing. That's it. Nothing else matters. I understand why it is the way it is. I'm not stupid. As you said: it's super confusing, which is relevant when you're picking languages other people at your company (interns, juniors) have to write in.
> “The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
(Side note: Go did fix the scoping of captured variables in for/range loops, which was a backwards-incompatible change, but they justified it by empirically showing it fixed more bugs than it caused (very reasonable). C# made the same change with the same justification earlier, which was inspiration for Go.)
Architect-level is complaining about language quirks? That's low on my priorities for languages. I'd worry more about maturity, tooling support, library support, ease of learning, and availability of developers.
I think our end-state decision, IIRC, was to just expand our usage of TypeScript; which also has Golang beat on all those verticals you list. More mature, way better tooling, way more libraries, easier to hire for, etc.
Though, thinking back, someone should have brought up TypeScript's at least three different ways to represent nil (undefined, null, NaN, a few others). It's at least a little better in TS, because unlike in Go the type checker doesn't actively lie to you about how many different states of undefined you might be dealing with.
I both agree with these points, and also think it absolutely doesn't matter. Go is the best language if you need to ship quickly and have solid performance. Also Go + AI works amazingly well. So in some ways you can actually move faster compared to languages like Node and Python these days.
I wrote a book on Go, so I'm biased. But when I started using Go more than a decade ago, it really felt like a breath of fresh air. It made coding _fun_ again: less boilerplate-heavy than Java, simple enough to pick up, and performance was generally good.
There's no single 'best language', and it depends on what your use-cases are. But I'd say that for many typical backend tasks, Go is a choice you won't really regret, even if you have some gripes with the language.
Go indeed has its problems. But the ones described in this article just prove the author is a Go newbie.
In a comment in this thread, the author states that they have 12 - 15 years of experience in Go [0].
[0] https://news.ycombinator.com/item?id=44985378
but still a Go newbie?
The article's points feel overly simplistic/shallow and lack the depth you'd expect from an experienced Go programmer.
I don't agree with most of the article but I believe I know where it comes from.
Golang's biggest shortcoming is that the fact that it touches bare metal isn't visible clearly enough. It provides many high-level features, which creates this ambience of "we've got you", but it fails to properly educate its users that they are going to get dirt on their hands.
Take a slice, for example: even the name means "part of", but in reality it's closer to a "box full of pointers". What happens when you modify pointer+1? Or "two types of nil": there is a difference between having two values (a simplification), one of the struct type and the other the address of that struct, versus having just a NULL. It's the same as knowing that a house doesn't exist versus being confident the house exists and saying it's in the middle of a volcano beneath the ocean.
The Foo99 critique is another example. If you wanted not 99 loops but 10 billion, each with a mere 10 bytes, you'd need 100 GiB of memory just to exit it. If you reused the address block, you'd only use... 10 bytes.
I also recommend trying to implement lexical scope defer in C and putting them in threads. That's a big bottle of fun.
I think that it ultimately boils down to what kind of engineer one wants to be. I don't like hand holding and rather be left on my own with a rain of unit tests following my code so Go, Zig, C (from low level Languages) just works for me. Some prefer Rust or high level abstractions. That's also fine.
But IMO poking at Go because it doesn't hide abstractions is like making fun of football for being child's play because not only does it not have horses, but the players also use their legs instead of mallets.
> I believe I know where it comes from […] poking at Go that it doesn't hide abstractions
Author here.
No, this is not where it comes from. I've been coding C for more than 30 years, Go for maybe 12-15, and currently prefer Rust. I enjoy C++ (yes, really) and getting all those handle-less knives to fit together.
No, my critique of Go is that it did not take in the lessons learned from decades of theory about what worked and what didn't.
I don't fault Go for its leaky abstractions in slices, for example. I do fault it for creating bad abstraction APIs in the first place, handing out footguns when they are avoidable. I know to avoid the footgun of appending to slices while other slices of the same array may still be accessible elsewhere. But I think it's indefensible to have created that footgun in the year Go was created.
Live long enough, and anybody will make a silly mistake. "Just don't make a mistake" is not an option. That's why programming language APIs and syntax matters.
As for bare metal: Go manages neither to get the benefits possible from being high level, nor to be suitable for bare metal.
It's a missed opportunity. Because yes, in 2007 it's not like I could have pointed to something that was strictly better for some target use cases.
I don't share the experience of it not being suitable for bare metal. But I do have experience with high-level languages doing similar things through "innovative" thinking. I've seen int overflows in Rust. I've seen libraries implemented in Elixir that waited for a UDP packet to be rebroadcast before sending another.
No Turing complete language will ever prevent people from being idiots.
It's not only programming language API and syntax. It's conceptual complexity, which Go keeps very low. It's remodeling difficulty, which Rust has very high. It's the implicit behavior you get from a high stack of JS/TS libraries stitched together. It's accessibility of tooling, size of the ecosystem, and availability of APIs. And Golang checks many of those boxes.
All the examples you've shown in your article were "huh? isn't this obvious?" to me. With your experience in C, I have no idea why you wouldn't want to reuse the same allocation multiple times, instead of keeping all of them separately while reserving allocation space for possibly less than you need.
Even if you assume all of this should be on the stack, you would still crash or bleed memory through implicit allocations that escape the stack.
Add 200 goroutines and how does that (pun intended) stack?
Is fixing those perceived footguns really a missed opportunity? Go is getting stronger every year, and while it's hated by some (and I get it, some people like the Rust approach better, and that's _fine_), it's used more and more as a mature and stable language.
Many applications don't even worry about the GC. And if you're developing some critical application, pair it with Zig and enjoy cross-compilation sweetness, as close to bare metal as possible, with all the pipes that are needed.
Go is the best language for me because I develop fast with it, don't have that many bugs, it builds fast, and I'm usually just fine having a garbage collector. The dependency management is great too.
Go is a super productive powerhouse for me.
Of all the languages one could accuse of being hermetically designed in an ivory tower, Go would be the second-least likely.
That's why there is the Goo language: Go with syntactic sugar and batteries included
https://github.com/pannous/goo/
• errors handled by truthy if or try syntax
• all 0s and nils are falsey
• #if PORTABLE put(";}") #end
• modifying! methods like "hi".reverse!()
• GC can be paused/disabled
• many more ease-of-use QoL enhancements
Has Go become the new PHP? Every now and then I see an article complaining about Go's shortcomings.
No, this has been the case as long as Go has been around; then you look and it's some C or C++ developer with specific needs. That's okay, it's not for everyone.
I think with C or C++ devs, those who live in glass houses shouldn’t throw stones.
I would criticize Go from the point of view of more modern languages that have powerful type systems like the ML family, Erlang/Elixir or even the up and coming Gleam. These languages succeed in providing powerful primitives and models for creating good, encapsulating abstractions. ML languages can help one entirely avoid certain errors and understand exactly where a change to code affects other parts of the code — while languages like Erlang provided interesting patterns for handling runtime errors without extensive boilerplate like Go.
It’s a language that hobbles developers under the aegis of “simplicity.” Certainly, there are languages like Python which give too much freedom — and those that are too complex like Rust IMO, but Go is at best a step sideways from such languages. If people have fun or get mileage out of it, that’s fine, but we cannot pretend that it’s really this great tool.
> I would criticize Go from the point of view of more modern languages that have powerful type systems like the ML
Go release date: 2012
ML: 1997
You forgot: CLU 1977.
". They are likely the two most difficult parts of any design for parametric polymorphism. In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."
https://go.googlesource.com/proposal/+/master/design/go2draf...
And still there are more modern idioms and language features that ML had in the 70s but are missing from Go. But, these have the fatal flaw of Not being Invented Here.
My biggest nitpick against Go was, and still is, the package management. Rust did it so nicely, and NuGet (C#/.NET) got it so right that Microsoft added it as a built-in thing for Visual Studio (it was originally a plugin and not from Microsoft whatsoever; now they fully own it, which is fine), and it just works.
Cargo is amazing, and you can do amazing things with it, I wish Go would invest in this area more.
Also, funny you mention Python: a LOT of Go devs are former Python devs, especially in the early days.
Which part of the package management/modules system do you find lacking?
Curious too because I find it mostly great.
Go was announced as a replacement for C & C++ so I think it's reasonable to compare it to that.
It was intended as a replacement for C & C++ for Google's use case of network services, btw.
Not really; no one other than the original authors thought of it that way. The authors had an issue with C++ compile times and were sponsored by their manager to work on this Go side project of theirs.
Google's networking services keep being written in Java/Kotlin, C++, and nowadays Rust.
It hasn't been promoted that way for over a decade at this point.
> Has Go become the new PHP? Every now and then I see an article complaining about Go's shortcomings.
These sorts of articles have been commonplace since even before Go released 1.0 in 2012. In fact, most (if not all) of these complaints could have been written identically back then. The only thing missing from this post that could make me believe it truly was written in 2012 would be a complaint about Go not having generics, which were added a few years ago.
People on HN have been complaining about Go since Go was a weird side-project tucked away at Google that even Google itself didn't care about and didn't bother to dedicate any resources to. Meanwhile, people still keep using it and finding it useful.
On the contrary, PHP at least improves with time and embraces modern practices in language design.
Go was always 80% there, but the last missing (hard) 20% was never done.
It is infuriating because it is close to being good, but it will never get there - now due to backwards compatibility.
Also Rob Pike quote about Go's origins is spot on.
The last 20% is also deliberately never done. It's the way they like to run their language. I find it frustrating, but it seems to work for some people.
Go is a pretty good example of how mediocre technology that would never have taken off on its own merits benefits from the rose tinted spectacles that get applied when FAANG starts a project.
I don’t buy this at all. I picked up Go because it has fast compilation speed, produces static binaries, can build useful things without a ton of dependencies, is relatively easy to maintain, and has good tooling baked in. I think this is why it gained adoption vs Dart or whatever other corporate-backed languages I’m forgetting.
80% of what programmers write is API glue.
Go _excels_ at API glue. Get JSON as string, marshal it to a struct, apply business logic, send JSON to a different API.
Everything for that is built in to the standard library and by default performant up to levels where you really don't need to worry about it before your API glue SaaS is making actual money.
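For illustration, a minimal sketch of that flow using only the standard library (the endpoint URL, the Order type, and the markup are made up):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    type Order struct {
        ID    string  `json:"id"`
        Total float64 `json:"total"`
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        // Get JSON, marshal it to a struct.
        var in Order
        if err := json.NewDecoder(r.Body).Decode(&in); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }

        // Apply "business logic".
        in.Total *= 1.2

        // Send JSON to a different (hypothetical) API.
        out, _ := json.Marshal(in)
        resp, err := http.Post("https://api.example.invalid/orders", "application/json", bytes.NewReader(out))
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        fmt.Fprintf(w, "forwarded, downstream said %s\n", resp.Status)
    }

    func main() {
        http.HandleFunc("/orders", handler)
        http.ListenAndServe(":8080", nil)
    }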
I tried out one project because of these attributes and then scrapped it fairly quickly in favor of Rust. Not enough type safety, too much verbosity. Too much fucking "if err != nil".
The language sits in an awkward space between Rust and Python where one of them would almost always be a better choice.
But, google rose colored specs...
I’m almost with you. If there was a language with a fast compiler, excellent tooling, a robust standard library, static binaries, and an F#-like type system, I’d never use anything else.
Rust simply doesn’t cut it for me. I’m hoping Roc might become this, but I’m not holding my breath.
OCaml? Possibly Haskell as well?
The compiler could be faster, I guess, but apart from that Rust has all of those things.
I find Rust's stdlib to be lacking vs Go, and so the average Rust project has a lot of dependencies. To me, Rust feels like the systems-programming equivalent to Node + NPM. Also, the compilation speed was really painful last time I used it. I'm used to the speed of Zig, Hare, Go, Bun. Rust makes me want to jab myself in the eye with a spork.
Exactly.
The other jarring example of this kind of deferring logical thinking to big corps was people defending Apple's soldering of memory and SSDs, especially on this site, until some Chinese lad proved that all the imagined reasons why Apple had to do such and such were b.s. post-hoc rationalisation.
The same goes for Go, but if you spend enough time, every little while you see the disillusionment of some hardcore fans, even from Go's core team, and they start asking questions, but always starting with things like "I know this is Go and holy reasons exist and I am committing a sin by questioning, but why X or Y". It is comedy.
It means many people are using it. That's it.
Oh no, Rust is too tough, Go is no good, am I going back to Java?
Maybe the new in-development Carbon language? It sounds promising, but it is nowhere near its 1.0 release.
Carbon exists only for interoperating with and transitioning off of C++. Creating a new code base in carbon doesn’t really make sense, and the project’s readme literally tells you not to do that.
> ... and the project’s readme literally tells you not to do that.
Could you quote which paragraph you're talking about?
AFAIK, interoperability with C++ code is just one of their explicit goals; they only place that as the last item in the "Language Goals" section.
> Existing modern languages already provide an excellent developer experience: Go, Swift, Kotlin, Rust, and many more. Developers that can use one of these existing languages should.
So many options in-between.
A popular language is always going to attract some hate. Also, these kinds of discussions can be useful for helping the language evolve.
But everyone knows in their heart of hearts that a few small language warts definitely don't outweigh Go's simplicity and convenience. Do I wish it had algebraic data types, sure, sure. Is that a deal-breaker, nah. It's the perfect example of something that's popular for a reason.
It is easily one of the most productive languages. No fuss, no muss, just getting stuff done.
Go nearly gave me carpal tunnel with the vast quantities of almost-the-same-but-not-quite repetitive code patterns it brings along with it. I'd never use it again.
You still type most of your code?
AI solved my issues with carpal tunnel.
And when I'm feeling fancy, I don't even type, just command AI by voice. "handle error case".
Sum types are the one big thing missing IMO; the language got a LOT of things right otherwise.
Yeah the language doesn't feel next gen
I can see why people pick it, but it's a major step up in convenience rather than a major step up in the evolution of the programming language itself.
I've written a fair chunk of Go in $dayjob and I have to say it's just... boring. I know that sounds like a weird thing to complain about, but I just can't get enthused about anything I write in Go. It's just... meh. Not sure why that is; guess it doesn't really click for me like other languages have in the past.
It's a good language for teams, for sure, though.
No, it's absolutely meant to be boring by design. It's also a downside, obviously, but it's easily compensated for by working on something that's already challenging. The language staying out of your way is quite useful in such cases.
Go being boring is exactly why I use it.
if this is the worst, not too bad.
Agree, most of us don't need niche C++/C language features; what Go has is sufficient for us.
It doesn't need to be good because it is not meant for good developers.
And it's perfect for most business software, because most businesses are not focused on building good software.
Go has a good-enough standard library, and Go can support a "pile-of-if-statements" architecture. This is all you need.
Most enterprise environments are not handled with enough care to move beyond "pile-of-if-statements". Sure, maybe when the code was new it had a decent architecture, but soon the original developers left and then the next wave came in and they had different ideas and dreamed of a "rewrite", which they sneakily started but never finished, then they left, and the 3rd wave of developers came in and by that point the code was a mess and so now they just throw if-statements onto the pile until the Jira tickets are closed, and the company chugs along with its shitty software, and if the company ever leaks the personal data of 100 million people, they aren't financially liable.
Go has extremely robust linters just for the corporate use-case. And gofmt.
Every piece of code looks the same and can be automatically, neutrally, analysed for issues.
This post is just attention-grabbing rage bait. The listed issues are superficial unless the person is quite far into the spectrum. There is no good data point weighing the issues against real-world problems, i.e. how much they cost. Even the point about RAM is weak without data.
Go has problems, sure. But I’ve yet to see a hit piece on Go that actually holds up to real scrutiny.
Usually, as here, objections to Go take the form of technically-correct-but-ultimately-pedantic arguments.
The positives of go are so overwhelmingly high magnitude that all those small things basically don’t matter enough to abandon the language.
Go is good enough to justify using it now while waiting for the slow-but-steady stream of improvements from version to version to make life better.
There are plenty of other languages. I don't get this love-hate type of speech, like Go itself owes you an apology.
> Two types of nil
What in the javascript is this.
I get bitten by the "nil interface" problem if I'm not paying a lot of attention, since Go makes a distinction between the interface value (the "enclosing type") and the concrete value it wraps (the "receiver type").
I think a lot of people got on the Go train because of Google and not necessarily because it was good. There was big adoption in the Chinese tech scene, for example. I personally think Rust/Go/Zig and other modern languages suffer a bit from trying too hard not to be C/C++/Java.
Go was a breath of fresh air and pretty usable right from the start. It felt like a neat little language with - finally - a modern standard library. Fifteen years ago, that was a welcome change. I think it's no surprise that Go and Node.js both got started and took off around the same time. People were looking for something modern, lightweight, and simple, and both projects delivered that.
> If you stuff random binary data into a string, Go just steams along, as described in this post.
> Over the decades I have lost data to tools skipping non-UTF-8 filenames. I should not be blamed for having files that were named before UTF-8 existed.
Umm.. why blame Go for that?
Author here.
What I intended to say with this is that ignoring the problem of invalid UTF-8 (which could be valid ISO-8859-1) with no error handling, or the other way around, has lost me data in the past.
Compare this to Rust, where a path name is of a different type than a mere string. And if you need to treat it like a string and you don't care if it's "a bit wrong" (because it's only being shown to the user), then you can call `.to_string_lossy()`. But it's harder to accidentally not handle that case when an exact name match does matter.
When exactness matters, `.to_str()` returns `Option<&str>`, so the caller is forced to deal with the situation that the file name may not be UTF-8.
Being sloppy with file name encodings is how data is lost. Go is sloppy with strings of all kinds, file names included.
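A tiny illustration of that sloppiness (the ISO-8859-1 file name here is made up):

    package main

    import (
        "fmt"
        "unicode/utf8"
    )

    func main() {
        name := "caf\xe9.txt" // "café.txt" in ISO-8859-1: not valid UTF-8

        // Go stores and passes this along without complaint.
        fmt.Println(len(name), utf8.ValidString(name)) // 8 false

        // Ranging over it silently substitutes U+FFFD for the bad byte,
        // so anything rebuilt from the runes no longer names the same file.
        for _, r := range name {
            fmt.Printf("%c", r)
        }
        fmt.Println()
    }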
Thanks for your reply. I understand that encoding the character set in the type system is more explicit and can help find bugs.
But forcing all strings to be UTF-8 does not magically help with the issue you described. In practice I've often seen the opposite: Now you have to write two code paths, one for UTF-8 and one for everything else. And the second one is ignored in practice because it is annoying to write. For example, I built the web server project in your other submission (very cool!) and gave it a tar file that has a non-UTF-8 name. There is no special handling happening, I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Forcing UTF-8 does not "fix" compatibility in strange edge cases, it just breaks them all. The best approach is to treat data as opaque bytes unless there is a good reason not to. Which is what Go does, so I think it is unfair to blame Go for this particular reason instead of the backup applications.
> It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
You can debate whether it is sloppy but I think an error is much better than silently corrupting data.
> The best approach is to treat data as opaque bytes unless there is a good reason not to
This doesn't seem like a good approach when dealing with strings which are not just blobs of bytes. They have an encoding and generally you want ways to, for instance, convert a string to upper/lowercase.
Can't say I know the best way here. But Rust does this better than anything I've seen.
I don't think you need two code paths. Maybe your program can live its entire life never converting away from the original form. Say you read from disk, pick out just the filename, and give it to an archive library.
There's no need to ever convert that to a "string". Yes, it could have been a byte array, but taking out the file name (or maybe final dir plus file name) are string operations, just not necessarily on UTF-8 strings.
And like I said, for all use cases where it just needs to be shown to users, the "lossy" version is fine.
> I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Haha, touche. But yes, it's less sloppy. Would you prefer that the files were silently skipped? You've created your archive, you started the webserver, but you just can't get it to deliver the page you want.
In order for tarweb to support non-UTF-8 in filenames, the programmer has to actually think about what that means. I don't think it means doing a lossy conversion, because that's not what the file name was, and it's not merely for human display. And it should probably not be the bytes either, because tools will likely want to send UTF-8 encoded.
Or they don't. In either case unless that's designed, implemented, and tested, non-UTF-8 in filenames should probably be seen as malformed input. For something that uses a tarfile for the duration of the process's life, that probably means rejecting it, and asking the user to roll back to a previous working version or something.
> Forcing UTF-8 does not "fix" compatibility in strange edge cases
Yup. Still better than silently corrupting.
Compare this to how for Rust kernel work they apparently had to implement a new Vec equivalent, because dealing with allocation failures is a different thing in user and kernel space[1], and Vec push can't fail.
Similarly, Go string operations cannot fail. And memory allocation has reasons to fail that string operations don't.
[1] a big separate topic. Nobody (almost) runs with overcommit off.
An error is better than silent corruption, sure.
But there is no silent corruption when you pass the data as opaque bytes, you just get some placeholder symbols when displayed. This is how I see the file in my terminal and I can rm it just fine.
And yes, question marks in the terminal are way better than applications not working at all.
The case of non-UTF-8 being skipped is usually a characteristic of applications written in languages that don't use bytes for their default string type, not the other way around. This has bitten me multiple times with Python2/3 libraries.
“What color is your nil?” — The two billion dollar mistake.
Talk about hyperbole.
I dislike Go but I haven’t found anything else I dislike less.
Another annoying thing Go proponents say is that it is simple. It is not. And even if it was, the code you write with a simple language is not automatically simple. Take the k8s control plane for example; some of the most convoluted and bulky code that exists, and it’s all in Go.
I wrote a small explainer on the typed-vs-untyped nil issue. It is one of the things that can actually bite you in production. Easy to miss it in code review.
Here's the accompanying playground: https://go.dev/play/p/Kt93xQGAiHK
If you run the code, you will see that calling read() on ControlMessage causes a panic even though there is a nil check. However, it doesn't happen for Message. See the read() implementation for Message: we need to have a nil check inside the pointer-receiver struct methods. This is the simplest solution. We have a linter for this. The ecosystem also helps, e.g. protobuf-generated code also has nil checks inside pointer receivers.
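For readers who don't follow the link, here is a rough reconstruction of what the playground presumably shows (the type names are from the comment; the internals are guessed):

    package main

    import "fmt"

    type reader interface{ read() string }

    // ControlMessage.read() dereferences its receiver without a nil check.
    type ControlMessage struct{ payload string }

    func (c *ControlMessage) read() string { return c.payload }

    // Message.read() guards against a nil receiver.
    type Message struct{ payload string }

    func (m *Message) read() string {
        if m == nil {
            return ""
        }
        return m.payload
    }

    func main() {
        var m *Message
        var r reader = m
        if r != nil {
            fmt.Println(r.read()) // prints an empty line: read() checks its receiver for nil
        }

        var c *ControlMessage
        r = c         // interface holding a nil *ControlMessage
        if r != nil { // passes: the interface itself is not nil
            fmt.Println(r.read()) // panics: nil pointer dereference inside read()
        }
    }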
After spending some time in lower level languages Go IMO makes much more sense. Your example:
First one - you have an address to a struct, you pass it, all good.
Second case: you set the address of the struct to "nil". What is nil? It's an address like any other. Maybe it's 0x000000 or something else. At this point, from a memory perspective, it exists, but the OS will prevent you from touching anything through that NULL pointer.
Because you don't touch ANYTHING nothing fails. It's like a deadly poison in a box you don't open.
The third example is the same as the second one. You have an IMessage, but it points to NULL (instead of NULL pointing to the deadly poison).
And in fourth, you finally open the box.
Is it magic knowledge? I don't think so, but I'm also not surprised that you can modify data by passing slices around.
IMO the biggest Go shortcoming is selling itself as a high-level language while it sits closer to the bare metal than people are used to.
great example, that is indeed tricky
s/good/perfect
> Wait, what? Why is err reused for foo2()? Is there’s something subtle I’m not seeing? Even if we change that to :=, we’re left to wonder why err is in scope for (potentially) the rest of the function. Why? Is it read later?
The first time it's assigned nil; the second time it's overwritten in case there's an error in the 2nd function. I don't see the author's issue? It's very explicit.
Author here: I'm not talking about the value. I'm talking about the lifetime of the variable.
After checking for nil, there's no reason `err` should still be in scope. That's why it's recommended to write `if err := foo(); err != nil`, because after that, one cannot even accidentally refer to `err`.
I'm giving examples where Go syntactically does not allow you to limit the lifetime of the variable. The variable, not its value.
You are describing what happens. I have no problem with what happens, but with the language.
Why does the lifetime even matter?
I gave an example in the post, but to spell it out: Because a typo variable is not caught, e.g. as an unused variable.
The example from the blog post would fail, because `return err` referred to an `err` that was no longer in scope. It would syntactically prevent accidentally writing `foo99()` instead of `err := foo99()`.
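A self-contained sketch of that point (foo() and foo99() are hypothetical stand-ins):

    package main

    import (
        "errors"
        "fmt"
    )

    func foo() error   { return nil }
    func foo99() error { return errors.New("boom") }

    func process() error {
        // Scoping err to each if means that forgetting "err :=" before a
        // call is a compile error (undefined: err), instead of silently
        // re-checking the previous err.
        if err := foo(); err != nil {
            return err
        }
        if err := foo99(); err != nil {
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(process())
    }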
I'll have to read the rest later, but this was an unforced error on the author's part. There is nothing unclear about that block of code. If err isn't nil, it was set, and we're no longer in the function. If it is, why waste an interface handle?
Anyone want to try to explain what he's on about with the first example?
The above (which declares a new value of err scoped to the second if statement) should compile, right? What is it that he's complaining about?

EDIT: OK, I think I understand; there's no easy way to have `bar` be function-scoped and `err` be if-scoped.
I mean, I'm with him on the interfaces. But the "append" thing just seems like ranting to me. In his example, `a` is a local variable; why would assigning a local variable be expected to change the value in the caller? Would you expect the following to work?
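Presumably something along these lines (setLocal and the values are made up):

    package main

    import "fmt"

    // Reassigning the parameter only changes this function's local copy
    // of the slice header.
    func setLocal(a []int) {
        a = []int{1, 2, 3}
    }

    func main() {
        x := []int{9}
        setLocal(x)
        fmt.Println(x) // still [9]
    }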
If not, why would you expect `a = append(a, ...)` to work?

> why would assigning a local variable be expected to change the value in the caller?
I think you may need to re-read. My point is that it DOES change the value in the caller. (well, sometimes) That's the problem.
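A minimal demonstration of that "sometimes" (the capacities below are chosen to force each case):

    package main

    import "fmt"

    // grow appends through its own copy of the slice header; whether the
    // caller sees anything depends entirely on spare capacity.
    func grow(a []int) {
        a = append(a, 99)
    }

    func main() {
        backing := make([]int, 4, 8) // length 4, capacity 8
        a := backing[:3]

        grow(a)
        fmt.Println(backing) // [0 0 0 99]: the caller's array was overwritten

        b := backing[:8] // no spare capacity left
        grow(b)
        fmt.Println(backing) // [0 0 0 99]: unchanged this time, append reallocated
    }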
Oh, I see. I mean, yeah, the relationships between slices and arrays is somewhat subtle; but it buys you some power as well. I came to golang after decades of C, so I didn't have much trouble with the concept.
I'm afraid I can only consider that a taste thing.
EDIT: One thing I don't consider a taste thing is the lack of the equivalent of a "const *". The problem with the slice thing is that you can sort of sometimes change things but not really. It would be nice if you could be forced to pass either a pointer to a slice (such that you can actually allocate a new backing array and point to it), or a non-modifiable slice (such that you know the function isn't going to change the slice behind your back).
That might be it, but I wondered about that one, as well as the append complaint. It seems like the author disagrees with the scoping rules, but they aren't really any different from a lot of other languages.
If someone really doesn't like the reuse of err, there's no reason why they couldn't create separate variables, e.g. err_foo and err_foo2. There's no real reason not to reuse err, either.
edit: the main rant about err was that it is left in scope, which I gather the author does not like.
You didn't copy the code correctly from the first example.
Well no, the second "if" statement is a red herring. Both of the following work:
and

He even says as much:

> Even if we change that to :=, we're left to wonder why err is in scope for (potentially) the rest of the function. Why? Is it read later?
My initial reaction was: "The first `err` is function-scope because the programmer made it function-scope; he clearly knows you can make them local to the if, so what's he on about?`
It was only when I tried to rewrite the code to make the first `err` if-scope that I realized the problem I guess he has: OK, how do you make both `err` variable if-scope while making `bar` function-scope? You'd have to do something like this:
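Roughly like this, assuming the article's foo() returns a value and an error, and foo2() returns just an error:

    package main

    import (
        "errors"
        "fmt"
    )

    func foo() (string, error) { return "bar", nil }
    func foo2() error          { return errors.New("nope") }

    func doIt() error {
        var bar string
        if b, err := foo(); err != nil {
            return err
        } else {
            bar = b
        }
        if err := foo2(); err != nil {
            return err
        }
        fmt.Println(bar)
        return nil
    }

    func main() {
        fmt.Println(doIt())
    }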
Which is a lot of cruft to add just to restrict the scope of `err`.

None of these objections seem at all serious to me, and then the piece wraps up with "Why do I care about memory use? RAM is cheap." Excuse me? Memory bloat affects performance and user experience with every operation. Careful attention to software engineering should avoid or minimize these problems and emphasize the value of being tidy with memory use.
lol, first I thought - "cmon, errors are bad, stop beating the dead horse", but then the fan started, good article, had a lot of fun reading it
Someone send this man a peer bonus
As a long-time Go programmer I didn't understand the comment about two types of nil because I have never experienced that issue, so I dug into it.
It turns out to be nothing but a misunderstanding of what the fmt.Println() statement is actually doing. If we use a more advanced print statement then everything becomes extremely clear:
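Presumably something like a %T print; the exact snippet isn't shown, so this is a guess at it:

    package main

    import "fmt"

    type T struct{}

    func main() {
        var p *T
        var i interface{} = p

        fmt.Println(i)                           // <nil>, which looks empty
        fmt.Printf("%T %v %v\n", i, i, i == nil) // *main.T <nil> false
    }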
The author of this post has noted a convenience feature, namely that fmt.Println() tells you the state of the thing in the interface and not the state of the interface, mistaken it for a fundamental design issue and written a screed about a language issue that literally doesn't exist.

Being charitable, I guess the author could actually be complaining that putting a nil pointer inside an interface is confusing. It is indeed confusing, but it doesn't mean there are "two types" of nil. Nil just means empty.
The author is showing the result of s==nil and i==nil, which are checks that you would have to do almost everywhere (the so called "billion dollar mistake")
It's not about Printf. It's about how these two different kinds of nil values sometimes compare equal to nil, sometimes compare equal to each other, and sometimes don't.
Yes there is a real internal difference between the two that you can print. But that is the point the author is making.
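In code form, the comparisons in question go roughly like this:

    package main

    import "fmt"

    func main() {
        var p *int            // typed nil pointer
        var i interface{}     // untyped nil interface
        var j interface{} = p // interface holding a typed nil

        fmt.Println(p == nil) // true
        fmt.Println(i == nil) // true
        fmt.Println(j == nil) // false: j has a type (*int), so it is not the nil interface
        fmt.Println(i == j)   // false: one side is nil, the other is not
    }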
It's a contrived example which I have never really experienced in my own code (and at this point, I've written a lot of it) or any of my team's code.
Go had some poor design features, many of which have now been fixed, some of which can't be fixed. It's fine to warn people about those. But inventing intentionally confusing examples and then complaining about them is pretty close to strawmanning.
> It's a contrived example which I have never really experienced in my own code (and at this point, I've written a lot of it) or any of my team's code.
It's confusing enough that it has an FAQ entry and that people tried to get it changed for Go 2. Evidently people are running into this. (I for sure did)
I believe you that you've never hit it, it's definitely not an everyday problem. But they didn't make it up, it does bite people from time to time.
It's sort of a known sharp edge that people occasionally cut themselves on. No language is perfect, but when people run into them they rightfully complain about it
That's really my problem with these kind of critiques.
EVERY language has certain pitfalls like this. Back when I wrote PHP for 20+ years I had a Google doc full of every stupid PHP pitfall I came across.
And they were almost always a combination of something silly in the language and horrible design by the developer, or trying to take a shortcut and losing the plot.
Author here. No, I didn't misunderstand it. Interface variables have two types of nil. Untyped, which does compare to nil, and typed, which does not.
What are you trying to clarify by printing the types? I know what the types are, and that's why I could provide the succinct weird example. I know what the result of the comparisons are, and why.
And the "why" is "because there are two types of nil, because it's a bad language choice".
I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
Edit, according to this comment this two-types-of-null bites other people in production: https://news.ycombinator.com/item?id=44983576
> Author here. No, I didn't misunderstand it. Interface variables have two types of nil. Untyped, which does compare to nil, and typed, which does not.
There aren't two types of nil. Would you call an empty bucket and an empty cup "two types of empty"?
There is one nil, which means different things in different contexts. You're muddying the waters and making something which is actually quite straightforward (an interface can contain other things, including things that are themselves empty) seem complicated.
> I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
Sure, I've seen pointer-to-pointer dereferences fail for the same reason in C. It's not particularly different.
> Though Python is almost entirely refcounted, so one can pretty much rely on the __del__ finalizer being called.
Yeah, no. You need an acyclic structure to maybe guarantee this, in CPython. Other Python implementations are more normal in that you shouldn't rely on finalizers at all.
I love Python, but the sheer number of caveats and warnings for __del__ makes me question if this person has ever read the docs [0]. My favorite WTF:
> It is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. This is called object resurrection.
[0]: https://docs.python.org/3/reference/datamodel.html#object.__...
How does this relate to the claim of the parent comment that cyclic structures are never freed in python (which is false, btw)?
When I replied, the only thing the comment said was “yeah no.” I was agreeing that __del__ is fraught with peril.
Reading: cyclic GC, yes, the section I linked explicitly discusses that problem, and how it’s solved.
this is not what I claim, BTW.
Author here.
Yes, yes. Hence the words "almost" and "pretty much". For exactly this reason.
Show me a programming language that does not have annoying flaws and I'll show you a programming language that does not yet exist, and probably won't ever exist.
I really like Go. It scratches every itch that I have. Is it the language for your problems? I don't know, but very possibly that answer is "no".
Go is easy to learn, very simple (this is a strong feature, for me) and if you want something more, you can code that up pretty quickly.
The blog article author lost me completely when they said this:
> Why do I care about memory use? RAM is cheap.
That is something that only the inexperienced say. At scale, nothing is cheap; there is no cheap resource if you are writing software for scale or for customers. Often, single bytes count. RAM usage counts. CPU cycles count. Allocations count. People want to pretend that they don't matter because it makes their job easier, but if you want to write performant software, you had better have those CPU cache lines in mind, and if you have those in mind, you have the memory usage of your types in mind.
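As a small example of why the memory layout of your types is worth keeping in mind (sizes assume a 64-bit platform):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Same fields, different order: padding changes the size, and with it
    // how many values fit in a cache line or in a few GB of RAM.
    type loose struct {
        a bool
        b int64
        c bool
    }

    type packed struct {
        b int64
        a bool
        c bool
    }

    func main() {
        fmt.Println(unsafe.Sizeof(loose{}))  // 24
        fmt.Println(unsafe.Sizeof(packed{})) // 16
    }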
What does this mean? Do they just use recover and keep bad data?
> The standard library does that. fmt.Print when calling .String(), and the standard library HTTP server does that, for exceptions in the HTTP handlers.
Apart from this, most of it doesn't seem like that big of a deal, except for `append`, which truly is bad syntax. If you're doing an in-place append, don't return the value.
The standard library recovers from the panic, and the program continues.
This means that if you do:
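(A sketch; get_something() and the mutex use are assumed from the comment, and the handler runs under net/http:)

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    var mu sync.Mutex

    func get_something() string { panic("boom") }

    func handler(w http.ResponseWriter, r *http.Request) {
        mu.Lock()
        v := get_something() // panics; net/http recovers and keeps serving
        mu.Unlock()          // never reached, mu stays locked
        fmt.Fprintln(w, v)
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil) // the next request blocks forever on mu.Lock()
    }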
And `get_something()` panics, then the program continues with a locked mutex. There are more dangerous things than a deadlocked program, of course.

It's non-optional to use defer, and thus to write exception-safe code, even if you never use exceptions.
> Previous posts Why Go is not my favourite language and Go programs are not portable have me critiquing Go for over a decade.
I chuckled
Same here. I don't know if this makes him Go's biggest fan, or if this is actually genuinely sad.
Never had any problems with Go as it makes me millions each year.
Never had a problem with Enron because I sold it when it was high.
As someone who has written Go for >10 years and has built some bigger codebases with it, these are my takes on this article's claims:
:Error variable scope -> Yes, it can be confusing at the beginning, but once you have some experience it doesn't really matter. Would it be cool to scope it down? Sure, but it feels like something blown up into an "issue" when I'd consider other things a lot more important for the Go team to revisit. Regarding error handling in Go, some hate it, some love it: I personally like it (yes, I really do), so I think it's more a preference than a "bad" thing.
:Two types of nil -> Funny, I never encountered this in >10 years of Go with A LOT of pointer juggling, so I wonder in which reality this hits you in a way that can't be avoided. Though it is confusing, I admit.
:It’s not portable -> I have no opinion here, since I work on Unix systems only and compile my binaries per target. Shrug, I don't see any issue here either.
:append with no defined ownership -> I mean... seriously? Your test case, while the results may be unexpected, is a super weird one. Why would you append to a slice taken from the middle of an array? If you think about what these functions do under the hood, your attempt actually feels like you WANT to produce strange behaviour, and things like that can be done in any language.
:defer is dumb -> Here I 100% agree - from my POV it leads to massive resource waste and in certain situations it can also create strange errors, but I'm not motivated to explain this - I'll just say that defer, while it seems useful, is from my POV a bad thing and should not be used.
:The standard library swallows exceptions, so all hope is lost -> "So all hope is lost"? I mean, you had already left the realm of objectivity long before, but this really tops it. I wrote some quite big Go applications and I never had a situation where I could not handle an exception simply by adjusting my code so that I prevent it from even happening. Again - I feel like someone is just in search of things to complain about that could simply be avoided. (Also, if someone comes up with a super-specific, probably one-in-a-million case: always keep in mind that language design doesn't orient itself around the rarest cases.)
:Sometimes things aren’t UTF-8 -> I won't bother to read another whole article; if it's important, include an example. I have dealt with different encodings (web crawler) and I could handle all of them.
:Memory use -> What you describe is one of the design decisions I'm not absolutely happy with: the memory handling. But then, one of my Go projects is an in-memory graph storage/database, which in one deployment ran for ~2 years without a restart and held a dataset of about 18GB. It has a lot of mutex handling (regarding your earlier complaint about exceptions: never had one), and btw it ran as the backend of an internet-facing service, so it wasn't just fed internal data.
--------------------
Finally I want to say: often things come down to personal preference. I could spend days raging about JavaScript, Java, C++ or some other languages, but what for? Pick the language that fits your use case and your liking; don't pick one that doesn't and then complain about it.
Also, just to show I'm not just a big "Go is the best" fanboy, because it isn't the best - there are things to criticize, like the previously mentioned memory handling.
While I still think you just created memory leaks in your app, Go had this idea of "arenas", which would let code manage memory partly by itself and therefore build much more memory-efficient applications. This has stalled lately and I REALLY hope the Go team will pick it up again and make it a stable thing to use. I would probably update all of my bigger codebases to use it.
Also - and this is something that annoys me A LOT because it cost me a lot of hours - the Go plugin system. I wrote an architecture to orchestrate processing, and for certain reasons I wanted to implement the orchestrated "things" as plugins. But the plugin system as it is right now can only be described as the torments of hell. I messed with it for about 3 years until I recently dropped the plugin functionality and added the stuff directly. Plugins are a very powerful thing and a good plugin system could be a great thing, but in its current state I would recommend that no one touch it.
These are just two points; I could list some more, but the point I want to get to is: there are real things you can criticize, instead of things that you create yourself or language design decisions that you just don't like. I'm not sure if such articles are the rage of someone who is just bored, or ragebait to make people read them. Either way it's not helping anyone.
Author here.
:Two types of nil
Other commenters have. I have. Not everyone will. Doesn't make it good.
:append with no defined ownership
I've seen it. Of course one can just "not do that", but wouldn't it be nice if it were syntactically prevented?
:It’s not portable ("just Unix")
I also only work on Unix systems. But if you only work on amd64 Linux, then portability is not a concern. Supporting BSD and Linux is where I encounter this mess.
:All hope is lost
All hope is lost specifically on the idea of not needing to write exception-safe code. If panics always crashed the program, then that'd be fine. But no coding standard can save you from the standard library, so yes, all hope of being able to pretend that panic exits the program is lost.
You don't need to read my blog posts. Looking forward to reading your, much better, critique.
I use Go daily for work, alongside Dart, Python.
I say switching to Go is like a different kind of Zen. It takes time to settle in and get into the flow of Go... Unlike the others, the LSP is fast; the developer, not so much. Once you've lost all will to live, you become quite proficient at it. /s
I've been writing small Go utilities for myself since the Go minor version number was <10
I can still check out the code to any of them, open it and it'll look the same as modern code. I can also compile all of them with the latest compiler (1.25?) and it'll just work.
No need to investigate 5 years of package manager changes and new frameworks.
I also sing "Fade to Black" when I have to write go :D
I was like "Have I ever actually heard that?" and the answer turns out to be "No" so now I have (it's a Metallica track about suicidal ideation, whether it's good idea to listen to it while writing Go I could not say and YMMV).
My developer experience was similar to rust but more frustrating because of the lax typing.
ISTG if I get downvoted for sharing my opinion I will give up on life.
defer is no worse than Java's try-with-resources. Neither is true RAII, because in both cases you, the caller, need to remember to write the wordy form ("try (...) {" or "defer ...") instead of the plain form ("..."), which will still compile but silently do the wrong thing.
Sure, true RAII would be an improvement over both, but the author's point is that Java is an improvement over Go, because the resource release is lexically scoped, not function-scoped. Imagine if Java's `try (...) { }` didn't release the resource when the try block ends, but rather when the wrapping method returns. That's how Go's defer works.
Can't you create a new block scope in Go? If not, I agree. If so, just do that if you want lexical scoping?
defer is not block scoped in Go, it's function scoped. So if you want to defer a mutex unlock it will only be executed at the end of the function even if placed in a block. This means you can't do this (sketch):
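(Filling in the sketch; mu, sharedState, and the helper function are placeholders:)

    package main

    import "sync"

    var (
        mu          sync.Mutex
        sharedState int
    )

    func expensiveWorkNotNeedingTheLock() {}

    func doWork() {
        {
            mu.Lock()
            defer mu.Unlock() // function-scoped: runs when doWork returns,
            sharedState++     // not when this block ends
        }
        // mu is still held here, so everything below runs under the lock
        // even though it doesn't need it.
        expensiveWorkNotNeedingTheLock()
    }

    func main() { doWork() }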
You can call Unlock directly, but then if there's a panic it won't be unlocked like it would be in the above. That can be an issue if something higher in the call stack prevents the panic from crashing the entire program; it would leave your system in a bad state.

This is the key problem with defer. It operates a lot like a finally block, but only on function exit, which means it's not actually suited to the task.
And as the sibling pointed out, you could use an anonymous function that's immediately called, but that's just awkward, even if it has become idiomatic.
You have to create an anonymous function.
Still better (compiler speed) than Rust.
Still not playing remotely in the same league. Only one of them is a "systems language", reusing Go's inappropriate marketing term.
I'm still appalled that there's no "do while" loop in go.
Python doesn’t have one either
Is there anything that soothes devs more than developing a superiority complex about their particular tooling? And then the unquenchable thirst to bash "downwards"? I find it so utterly pathetic.
Man... the author comes across as a person who is butthurt that Go took his girlfriend out on a date. He comes across as the typical Rust fanboy who whines about Go non-stop...
/Look up his previous posts. "I finally got around to learn Rust. It’s amazing." Guessed it! Oh, how easy they always are to spot. They are always so angry when it comes to Go. Jealousy? Who knows ...
If there is one constant in a lot of Go rant posts, it's typical Rust fanboys who just cannot understand that few people care about Rust.
*If you do not like Go, nobody forces you to use it.*
This type of Go bashing from Rust users, has been going on for the last 10+ years. Where we had Rust users evangelizing Rust as the one and only solution to every problem and telling everybody that their code needed to be rewritten in Rust.
Most of the points mentioned are literally the quirks of a language. Any language has quirks. Do we need to start ranting about Rust? No, we do not care about Rust's quirks because we all have better things to do.
Oh, let's not forget the typical GC ranting, because of course a Rust user needs to rant about the GC. I mean, somebody needs to softly refer to our only savior called Rust. When most of us do not give two cents about the GC. It gets the job done, and rarely becomes an issue for 99.9% of us.
Go is a simple language that provides a lot of benefits to most developers that use it. We do not need a jackhammer when a basic hammer will do.
Can it have improvements? Sure. Every language can have improvements, but I am more than happy with what it has.
See, we can write a post without needing to put down a language or rant about that language's quirks. Just focus on programming in the language that you so clearly love.
No one cares more about rust than Gophers.
Usually it's the other way around...
> Probably [hello NIGHTMARE !]. Who wants that? Nobody wants that.
I don't really care if you want that. Everyone should know that that's just the way slices work. Nothing more nothing less.
I really don't give a damn about that; I just know how slices behave, because I learned the language. That's what you should do when you are programming with it (professionally).
If you're fine with that then you should be upset by the subsequent example, because by your own definition "that's just not the way slices work".
I am fine with the subsequent example, too. If you read up about slices, then that's how they are defined and how they work. I am not judging, I am just using the language as it is presented to me.
For anyone interested, this article explains the fundamentals very well, imo: https://go.dev/blog/slices-intro
Then you seem to be fine with inconsistent ownership and a behavioral dependence on the underlying data rather than the structure.
You really don't see why people would point out a definition that changes underneath you as a bad definition? They're not arguing that the documentation is wrong.
The definition is perfectly consistent. append is in-place if there's enough capacity (and the programmer can check this directly with cap() if they want), and otherwise it allocates a new backing array.
Yes, it's consistent and complicated and non-intuitive.
"Consistent" is necessary but not sufficient for "good".
The author obviously knows that too, otherwise they wouldn't have written about it. All of these issues are just how the language works, and that's the problem.
Yup. If you code in Go then you should know that.
Just like every PHP coder should know that the ternary operator associativity is backwards compared to every other language.
If you code in a language, then you should know what's bad about that language. That doesn't make those aspects not bad.
Note that since PHP 8.0 the ternary operator is non-associative, and attempting to nest it without explicit parenthesis produces a hard error.
> because I learned the language
If that's your argument then there are no bad design decisions for any language.
This was an interesting read and very educational in my case, but each time I read an article criticizing a programming language it's written by someone who hasn't done anything better.
It's a shame because it is just as effective as pissing in the wind.
I’ve never been a rock star, but I think Creed sucks.
I really don’t like your logic. I’m not a Michelin chef, but I’m qualified to say that a restaurant ruined my dessert. While I probably couldn’t make a crème brûlée any better than theirs, I can still tell that they screwed it up compared to their competitor next door.
For example, I love Python, but it’s going to be inherently slow in places because `sum(list)` has to check the type of every single item to see what __add__ function to call. Doesn’t matter if they’re all integers; there’s no way to prove to the interpreter that a string couldn’t have sneaked in there, so the interpreter has to check each and every time.
See? I’ve never written a language, let alone one as popular as Python, but I’m still qualified to point out its shortcomings compared to other languages.
If you're saying someone can't credibly criticize a language without having designed a language themselves, I'll ask that you present your body of work of programming language criticisms so I know if you have "produced something better" in the programming language criticism space.
Of course, by your reasoning this also means you yourself have designed a language.
I'll leave out repeating your colorful language if you haven't done any of these things.
> If you're saying someone can't credibly criticize a language without having designed a language themselves
Actually I think that's a reasonable argument. I've not designed a language myself (other than toy experiments) so I'm hesitant to denigrate other people's design choices because even with my limited experience I'm aware that there are always compromises.
Similarly, I'm not impressed by literary critics whose own writing is unimpressive.
Who would be qualified to judge those critics' writing as good or bad? Critics already qualified as good writers? Who vetted them, then? It'd have to be a stream of certified good authors all the way back.
No, I stick by my position. I may not be able to do any better, but I can tell when something’s not good.
(I have no opinion on Go. I’ve barely used it. This is only on the general principle of being able to judge something you couldn’t do yourself. I mean, the Olympics have gymnastic judges who are not gold medalists.)
Congratulations, you have found a few pain points in a language. Now as a scientific exercise apply the same reasoning to a few others. Will the number of issues you find multiplied by their importance be greater or lower than the score for Go? There you go, that's the entire problem - Go is bad, but there is no viable alternative in general.