I've been using Go in more or less every full-time job I've had since pre-1.0. It's simple for people on the team to pick up the basics, it generally chugs along (I'm rarely worried about updating to the latest version of Go), it has most useful things built in, and it compiles fast. Concurrency is tricky, but if you spend some time with it, it's nice to express data flow in Go. The type system is convenient most of the time, if sometimes a bit verbose. Just an all-around trusty tool in the belt.
But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I've seen more leniency toward fixing quirks in the last few years; at some point I didn't think we'd ever see generics, or custom iterators, etc.
The points about RAM and portability seem mostly like personal grievances though. If it was better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.
But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
> Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences.
I'd say that it's entirely the other way around: they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from the first principles, and solving the problem correctly (or using a solution that was Not Invented Here).
Go's filesystem API is the perfect example. You need to open files? Great, we'll create
func Open(name string) (*File, error)
function, you can open files now, done. What if the file name is not valid UTF-8, though? Who cares, it hasn't happened to me in the first 5 years I used Go.
> Who cares, it hasn't happened to me in the first 5 years I used Go.
This is the mindset that makes me want to throttle the golang authors.
Golang makes it easy to do the dumb, wrong, incorrect thing that looks like it works 99.7% of the time. How can that be wrong? It works in almost all cases!
The problem is that your code is littered with these situations everywhere. You don’t think to test for them, it’s worked on all the data you fed it so far, and then you run into situations like the GP’s where you lose data because golang didn’t bother to think carefully about some API impedance mismatch, can’t even express it anyway, and just drops things on the floor when it happens.
So now your user has irrecoverably lost data, there's a bug in your bug tracker, and you and everyone else who uses Go has to solve for yet another stupid footgun that should have been obvious from the start and can never be fixed upstream.
And you, and every other golang programmer, gets a steady and never-ending stream of these types of issues, randomly selected for, for the lifetime of your program. Which one will bite you tomorrow? No idea! But the more people who use it, the more data you feed it, the more clients with off-the-beaten-track use-cases, the more and more it happens.
Oops, non-UTF-8 filename. Oops, can’t detect the difference between an empty string in some JSON or a nil one. Oops, handed out a pointer and something got mutated out from under me. Oops, forgot to defer. Oops, maps aren’t thread-safe. Oops, maps don’t have a sane zero value. And on and on and fucking on and it never goddamn ends.
And it could have, if only Rob Pike and co. didn’t just ship literally the first thing they wrote with zero forethought.
> Golang makes it easy to do the dumb, wrong, incorrect thing that looks like it works 99.7% of the time. How can that be wrong? It works in almost all cases!
my favorite example of this was the go authors refusing to add monotonic time into the standard library because they confidently misunderstood its necessity
(presumably because clocks at google don't ever step)
then after some huge outages (due to leap seconds) they finally added it
now the libraries are a complete mess because the original clock/time abstractions weren't built with the concept of multiple clocks
and every go program written is littered with terrible bugs due to use of the wrong clock
This issue is probably my favorite Goism. Real issue identified and the feedback is, “You shouldn’t run hardware that way. Run servers like Google does without time jumping.” Similar with the original stance to code versioning. Just run a monorepo!
It’s not about making zero mistakes, it’s about learning from previous languages which made mistakes and not repeating them. I decided against using go pretty early on because I recognized just how many mistakes they were repeating that would end up haunting maintainers.
I can count on one hand the number of times I've been bitten by such things in over 10 years of professional Go; half-assed Java has bitten me more than that in just the last three weeks.
There is a lot to say about Java, but the libraries (both standard lib and popular third-party ones) are goddamn battle-hardened, so I have a hard time believing your claim.
They might very well be, because time-handling in Java almost always sucked. In the beginning there was java.util.Date and it was very poorly designed. Sun tried to fix that with java.util.Calendar. That worked for a while but it was still cumbersome, Calendar.getInstance() anyone? After that someone sat down and wrote Joda-Time, which was really really cool and IMO the basis of JSR-310 and the new java.time API. So you're kind of right, but it only took them 15 years to make it right.
At the time of Date's "reign", was there any other language with a better library? And Calendar is not a replacement for Date, so it's a bit out of the picture.
Joda-Time is an excellent library, and indeed it was basically the basis for Java's time API, and for pretty much any modern language's time API. But given the history, Java basically always had the best time library available at the time.
Is golang better than Java? Sure, fine, maybe. I'm not a Java expert so I don't have a dog in the race.
Should and could golang have been so much better than it is? Would golang have been better if Pike and co. had considered use-cases outside of Google, or looked outward for inspiration even just a little? Unambiguously yes, and none of the changes would have needed it to sacrifice its priorities of language simplicity, compilation speed, etc.
It is absolutely okay to feel that go is a better language than some of its predecessors while at the same time being utterly frustrated at the very low-hanging, comparatively obvious, missed opportunities for it to have been drastically better.
While the general question about string encoding is fine, unfortunately in a general-purpose and cross-platform language, a file interface that enforces Unicode correctness is actively broken, in that there are files out in the world it will be unable to interact with. If your language is enforcing that, and it doesn't have a fallback to a bag of bytes, it is broken, you just haven't encountered it. Go is correct on this specific API. I'm not celebrating that fact here, nor do I expect the Go designers are either, but it's still correct.
This is one of those things that kind of bugs me about, say, OsStr / OsString in Rust. In theory, it’s a very nice, principled approach to strings (must be UTF-8) and filenames (arbitrary bytes, almost, on Linux & Mac). In practice, the ergonomics around OsStr are horrible. They are missing most of the API that normal strings have… it seems like manipulating them is an afterthought, and it was assumed that people would treat them as opaque (which is wrong).
Go’s more chaotic approach to allow strings to have non-Unicode contents is IMO more ergonomic. You validate that strings are UTF-8 at the place where you care that they are UTF-8. (So I’m agreeing.)
The big problem isn't invalid UTF-8 but invalid UTF-16 (on Windows et al). AIUI Go had nasty bugs around this (https://github.com/golang/go/issues/59971) until it recently adopted WTF-8, an encoding that was actually invented for Rust's OsStr.
WTF-8 has some inconvenient properties. Concatenating two strings requires special handling. Rust's opaque types can patch over this but I bet Go's WTF-8 handling exposes some unintuitive behavior.
There is a desire to add a normal string API to OsStr but the details aren't settled. For example: should it be possible to split an OsStr on an OsStr needle? This can be implemented but it'd require switching to OMG-WTF-8 (https://rust-lang.github.io/rfcs/2295-os-str-pattern.html), an encoding with even more special cases. (I've thrown my own hat into this ring with OsStr::slice_encoded_bytes().)
The current state is pretty sad yeah. If you're OK with losing portability you can use the OsStrExt extension traits.
Yeah, I avoided talking about Windows which isn’t UTF-16 but “int16 string” the same way Unix filenames are int8 strings.
IMO the differences with Windows are such that I’m much more unhappy with WTF-8. There’s a lot that sucks about C++ but at least I can do something like
#if _WIN32
using pathchar = wchar_t;
constexpr pathchar sep = L'\\';
#else
using pathchar = char;
constexpr pathchar sep = '/';
#endif
using pathstring = std::basic_string<pathchar>;
Mind you this sucks for a lot of reasons, one big reason being that you’re directly exposed to the differences between path representations on different operating systems. Despite all the ways that this (above) sucks, I still generally prefer it over the approaches of Go or Rust.
> You validate that strings are UTF-8 at the place where you care that they are UTF-8.
The problem with this, as with any lack of static typing, is that you now have to validate at _every_ place that cares, or to carefully track whether a value has already been validated, instead of validating once and letting the compiler check that it happened.
In practice, the validation generally happens when you convert to JSON or use an HTML template or something like that, so it’s not so many places.
Validation is nice but Rust’s principled approach leaves me high and dry sometimes. Maybe Rust will finish figuring out the OsString interface and at that point we can say Rust has “won” the conversation, but it’s not there yet, and it’s been years.
Except when it doesn’t and then you have to deal with fucking Cthulhu because everyone thought they could just make incorrect assumptions that aren’t actually enforced anywhere because “oh that never happens”.
That isn’t engineering. It’s programming by coincidence.
> Maybe Rust will finish figuring out the OsString interface
The entire reason OsString is painful to use is because those problems exist and are real. Golang drops them on the floor and forces you to pick up the mess on the random day when an unlucky end user loses data. Rust forces you to confront them, as unfortunate as they are. It's painful once, and then the problem is solved for the indefinite future.
Rust also provides OsStrExt if you don't care about portability, which removes many of these issues.
I don’t know how that’s not ideal: mistakes are hard, but you can opt into better ergonomics if you don’t need the portability. If you end up needing portability later, the compiler will tell you that you can’t use the shortcuts you opted into.
Can you give an example of how Go's approach causes people to lose data? This was alluded to in the blog post but they didn't explain anything.
It seems like there's some confusion in the GGGGGP post, since Go works correctly even if the filename is not valid UTF-8; maybe that's why they haven't noticed any issues.
Imagine that you're writing a function that'll walk the directory to copy some files somewhere else, and then delete the directory. Unfortunately, you hit this
Ah, mentioning Windows filenames would have been useful.
I guess the issue isn't so much about whether strings are well-formed, but about whether the conversion (eg, from UTF-16 to UTF-8 at the filesystem boundary) raises an error or silently modifies the data to use replacement characters.
I do think that is the main fundamental mistake in Go's Unicode handling; it tends to use replacement characters automatically instead of signalling errors. Using replacement characters is at least conformant to Unicode but imo unless you know the text is not going to be used as an identifier (like a filename), conversion should instead just fail.
The other option is using some mechanism to preserve the errors instead of failing quietly (replacement) or failing loudly (raise/throw/panic/return err), and I believe that's what they're now doing for filenames on Windows, using WTF-8. I agree with this new approach, though would still have preferred they not use replacement characters automatically in various places (another one is the "json" module, which quietly corrupts your non-UTF-8 and non-UTF-16 data using replacement characters).
Probably worth noting that the WTF-8 approach works because strings are not validated; WTF-8 involves converting invalid UTF-16 data into invalid UTF-8 data such that the conversion is reversible. It would not be possible to encode invalid UTF-16 data into valid UTF-8 data without changing the meaning of valid Unicode strings.
They have effectively done this (since the linked issue was raised), by just converting Windows filenames to WTF-8.
I think this is sensible, because the fact that Windows still uses UTF-16 (or more precisely "Unicode 16-bit strings") in some places shouldn't need to complicate the API on other platforms that didn't make the UCS-2/UTF-16 mistake.
It's possible that the WTF-8 strings might not concatenate the way they do in UTF-16 or properly enforced WTF-8 (which has special behaviour on concatenation), but they'll still round-trip to the intended 16-bit string, even after concatenation.
It's completely in-line with Rust's approach. Concentrate on the hard stuff that lifts every boat. Like the type system, language features, and keep the standard library very small, and maybe import/adopt very successful packages. (Like once_cell. But since removing things from std is considered a forever no-no, it seems path handling has to be solved by crates. Eg. https://github.com/chipsenkbeil/typed-path )
Much more egregious is the fact that the API allows returning both an error and a valid file handle. That may be documented to not happen. But look at the Read method instead. It will return both errors and a length you need to handle at the same time.
The Read() method is certainly an exception rather than the rule. The common convention is to return a nil value upon encountering an error, unless there's real value in returning both, e.g. for a partial read that failed in the end but still produced a non-empty result. It's a rare occasion, yes, but if you absolutely have to handle this case, you can. Otherwise you typically ignore the result if err != nil. It's a mess, true, but the real world is also quite messy, unfortunately, and Go acknowledges that.
Most of the time if there's a result, there's no error. If there's an error, there's no result. But don't forget to check every time! And make sure you don't make a mistake when you're checking and accidentally use the value anyway, because even though it's technically meaningless it's still nominally a meaningful value since zero values are supposed to be meaningful.
Oh and make sure to double-check the docs, because the language can't let you know about the cases where both returns are meaningful.
The real world is messy. And golang doesn't give you advance warning on where the messes are, makes no effort to prevent you from stumbling into them, and stands next to you constantly criticizing you while you clean them up by yourself. "You aren't using that variable any more, clean that up too." "There's no new variables now, so use `err =` instead of `err :=`."
It breaks. Which is weird because you can create a string which isn't valid UTF-8 (eg "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98") and print it out with no trouble; you just can't pass it to e.g. `os.Create` or `os.Open`.
(Bash and a variety of other utils will also complain about it not being valid UTF-8; neovim won't save a file under that name; etc.)
I'm confused. Is Go restricted to UTF-8-only filenames? It can read/write arbitrary byte sequences (which is what a string can hold), which should be sufficient for dealing with other encodings.
Then I am having a hard time understanding the issue in the post, it seems pretty vague, is there any idea what specific issue is happening, is it how they've used Go, or does Go have an inherent implementation issue, specifically these lines:
> If you stuff random binary data into a string, Go just steams along, as described in this post.

> Over the decades I have lost data to tools skipping non-UTF-8 filenames. I should not be blamed for having files that were named before UTF-8 existed.
It sounds like you found a bug in your filesystem, not in Golang's API, because you totally can pass that string to those functions and open the file successfully.
Well, Windows is an odd beast when 8-bit file names are used. If done naively, you can’t express all valid filenames with even broken UTF-8 and non-valid-Unicode filenames cannot be encoded to UTF-8 without loss or some weird convention.
You can do something like WTF-8 (not a misspelling, alas) to make it bidirectional. Rust does this under the hood but doesn’t expose the internal representation.
What do you mean by "when 8-bit filenames are used"? Do you mean the -A APIs, like CreateFileA()? Those do not take UTF-8, mind you -- unless you are using a relatively recent version of Windows that allows you to run your process with a UTF-8 codepage.
In general, Windows filenames are Unicode and you can always express those filenames by using the -W APIs (like CreateFileW()).
Windows filenames in the W APIs are 16-bit (which the A APIs essentially wrap with conversions to the active old-school codepage), and are normally well formed UTF-16. But they aren’t required to be - NTFS itself only cares about 0x0000 and 0x005C (backslash) I believe, and all layers of the stack accept invalid UTF-16 surrogates. Don’t get me started on the normal Win32 path processing (Unicode normalization, “COM” is still a special file, etc.), some of which can be bypassed with the “\\?\” prefix when in NTFS.
The upshot is that since the values aren’t always UTF-16, there’s no canonical way to convert them to single byte strings such that valid UTF-16 gets turned into valid UTF-8 but the rest can still be roundtripped. That’s what bastardized encodings like WTF-8 solve. The Rust Path API is the best take on this I’ve seen that doesn’t choke on bad Unicode.
I think it depends on the underlying filesystem. Unicode (UTF-16) is first-class on NTFS.
But Windows still supports FAT, I guess, where multiple 8-bit encodings are possible: the so-called "OEM" code pages (437, 850 etc.) or "ANSI" code pages (1250, 1251 etc.). I haven't checked how recent Windows versions cope with FAT file names that cannot be represented as Unicode.
Windows paths are not necessarily well-formed UTF-16 (UCS-2 by some people’s definition) down to the filesystem level. If they were always well formed, you could convert to a single byte representation by straightforward Unicode re-encoding. But since they aren’t - there are choices that need to be made about what to do with malformed UTF-16 if you want to round trip them to single byte strings such that they match UTF-8 encoding if they are well formed.
In Linux, they’re 8-bit almost-arbitrary strings like you noted, and usually UTF-8. So they always have a convenient 8-bit encoding (I.e. leave them alone). If you hated yourself and wanted to convert them to UTF-16, however, you’d have the same problem Windows does but in reverse.
This also epitomizes the issue. What's the point of having `string` type at all, if it doesn't allow you to make any extra assumptions about the contents beyond `[]byte`? The answer is that they planned to make conversion to `string` error out when it's invalid UTF-8, and then assume that `string`s are valid UTF-8, but then it caused problems elsewhere, so they dropped it for immediate practical convenience.
Rust apparently got relatively close to not having &str as a primitive type and instead only providing a library alias to &[u8] when Rust 1.0 shipped.
Score another for Rust's Safety Culture. It would be convenient to just have &str as an alias for &[u8] but if that mistake had been allowed all the safety checking that Rust now does centrally has to be owned by every single user forever. Instead of a few dozen checks overseen by experts there'd be myriad sprinkled across every project and always ready to bite you.
The problem with string length is that there are probably at least four concepts that could conceivably be called length, and few people are happy when none of them is len.
Off the top of my head, in order of likely difficulty to calculate: byte length, number of code points, number of graphemes/characters, and height/width to display.
Maybe it would be best for Str not to have len at all. It could have bytes, code_points, graphemes. And every use would be precise.
> The problem with string length is there's probably at least four concepts that could conceivably be called length.
The answer here isn't to throw up your hands, pick one, and other cases be damned. It's to expose them all and let the engineer choose. To not beat the dead horse of Rust, I'll point that Ruby gets this right too.
Similarly, each of those "views" lets you slice, index, etc. across those concepts naturally. Golang's string is the worst of them all. They're nominally UTF-8, but nothing actually enforces it. But really they're just buckets of bytes, unless you send them to APIs that silently require them to be UTF-8 and drop them on the floor or misbehave if they're not.
Height/width to display is font-dependent, so can't just be on a "string" but needs an object with additional context.
Problems arise when you try to take a slice of a string and end up picking an index (perhaps based on length) that would split a code point. String/str offers an abstraction over Unicode scalars (code points) via the chars iterator, but it all feels a bit messy to have the byte based abstraction more or less be the default.
FWIW the docs indicate that working with grapheme clusters will never end up in the standard library.
You can easily treat `&str` as bytes, just call `.as_bytes()`, and you get `&[u8]`, no questions asked. The reason why you don't want to treat &str as just bytes by default is that it's almost always a wrong thing to do. Moreover, it's the worst kind of a wrong thing, because it actually works correctly 99% of the time, so you might not even realize you have a bug until much too late.
If your API takes &str, and tries to do byte-based indexing, it should almost certainly be taking &[u8] instead.
> but it all feels a bit messy to have the byte based abstraction more or less be the default.
I mean, really, neither should be the default. You should have to pick chars or bytes on use, but I don't think that's palatable; most languages have chosen one or the other as the preferred form. Some have the joy of being forward-thinking in the '90s: built around UCS-2 and later extended to UTF-16, so you've got 16-bit 'characters' with some code points that take two of them. Of course, dealing with operating systems means dealing with whatever they have as well as what the language prefers (or, as discussed elsewhere in this thread, pretending it doesn't exist to make easy things easier and hard things harder).
So it's true that technically the primitive type is str, and indeed it's even possible to make a &mut str though it's quite rare that you'd want to mutably borrow the string slice.
However no &str is not "an alias for &&String" and I can't quite imagine how you'd think that. String doesn't exist in Rust's core, it's from alloc and thus wouldn't be available if you don't have an allocator.
str is not really a "primitive type", it only exists abstractly as an argument to type constructors - treating the & operator as a "type constructor" for that purpose, but including Box<>, Rc<>, Arc<> etc. So you can have Box<str> or Arc<str> in addition to &str or perhaps &mut str, but not really 'str' in isolation.
IMO UTF-8 isn't a highly specific format; it's universal for text. Every ASCII string you'd write in C or C++ or whatever is already UTF-8.
So that means that for 99% of scenarios, the difference between char[] and a proper utf8 string is none. They have the same data representation and memory layout.
The problem comes in when people start using string like they use string in PHP. They just use it to store random bytes or other binary data.
This makes no sense with the string type. String is text, but now we don't have text. That's a problem.
We should use []byte or something for this instead of string. That's an abuse of string. I don't think allowing strings to not be text is too constraining; that's what a string is!
The approach you are advocating is the approach that was abandoned, for good reasons, in the Unix filesystem in the 70s and in Perl in the 80s.
One of the great advances of Unix was that you don't need separate handling for binary data and text data; they are stored in the same kind of file and can be contained in the same kinds of strings (except, sadly, in C). Occasionally you need to do some kind of text-specific processing where you care, but the rest of the time you can keep all your code 8-bit clean so that it can handle any data safely.
Languages that have adopted the approach you advocate, such as Python, frequently have bugs like exception tracebacks they can't print (because stdout is set to ASCII) or filenames they can't open when they're passed in on the command line (because they aren't valid UTF-8).
Yes, Windows text is broken in its own special way.
We can try to shove it into objects that work on other text but this won't work in edge cases.
Like if I take text on Linux and try to write a Windows file with that text, it's broken. And vice versa.
Go lets you do the broken thing. In Rust, or even using libraries in most languages, you can't. You have to specifically convert between them.
That's what I mean when I say "storing random binary data as text". Sure, Windows' almost-UTF-16 abomination is kind of text, but not really. It's its own thing. That requires a different type of string OR converting it to a normal string.
Even on Linux, you can't have '/' in a filename, or ':' on macOS. And this is without getting into issues related to the null byte in strings. Having a separate Path object that represents a filename or path + filename makes sense, because on every platform there are idiosyncratic requirements surrounding paths.
It may be legacy cruft downstream of poorly-thought-out design decisions at the system/OS level, but we're stuck with it. And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
There is room for languages that explicitly make the tradeoff of being easy to use (e.g. a unified string type) at the cost of not handling many real world edge cases correctly. But these should not be used for serious things like backup systems where edge cases result in lost data. Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
> Even on Linux, you can't have '/' in a filename, or ':' on macOS
Yes this is why all competent libraries don't actually use string for path. They have their own path data type because it's actually a different data type.
Again, you can do the Go thing and just use the broken string, but that's dumb and you shouldn't. They should look at C++ std::filesystem, it's actually quite good in this regard.
> And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
I agree, even PHP does a better job at this than Go, which is really saying something.
> Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
> Yes this is why all competent libraries don't actually use string for path. They have their own path data type because it's actually a different data type.
What is different about it? I don't see any constraints here relevant to having a different type. Note that this thread has already confused the issue, because they said filename and you said path. A path can contain /, it just happens to mean something.
If you want a better abstraction to locations of files on disk, then you shouldn't use paths at all, since they break if the file gets moved.
A string can contain characters a path cannot, depending on the operating system. So only some strings are valid paths.
Typically the way you do this is you have the constructor for path do the validation or you use a static path::fromString() function.
Also, paths breaking when a file is moved is correct behavior sometimes. For example, something like openFile() or moveFile() requires paths. A path can also be a relative location.
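The validating-constructor idea could be sketched in Go like this. (Filename and NewFilename are made-up names, and the character set checked here is deliberately minimal; a real implementation would be per-OS.)

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Filename is a hypothetical validated type: construction fails on
// characters the OS forbids, so later code can trust any Filename value.
type Filename string

func NewFilename(s string) (Filename, error) {
	// On Linux, a single path component can't contain '/' or NUL.
	if s == "" || strings.ContainsAny(s, "/\x00") {
		return "", errors.New("invalid filename")
	}
	return Filename(s), nil
}

func main() {
	if _, err := NewFilename("a/b"); err != nil {
		fmt.Println("rejected:", err)
	}
	f, _ := NewFilename("notes.txt")
	fmt.Println("ok:", f)
}
```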
> A string can contain characters a path cannot, depending on the operating system. So only some strings are valid paths.
Can it? If you want to open a file with invalid UTF-8 in the name, then the path has to contain that.
And a path can contain the path separator - it's the filename that can't contain it.
> For example something like openFile() or moveFile() requires paths.
macOS has something called bookmark URLs that can contain things like inode numbers or addresses of network mounts. Apps use it to remember how to find recently opened files even if you've reorganized your disk or the mount has dropped off.
IIRC it does resolve to a path so it can use open() eventually, but you could imagine an alternative. Well, security issues aside.
I've always thought the point of the string type was for indexing. One index of a string is always one character, but characters are sometimes composed of multiple bytes.
Yup. But to be clear, in Unicode a string will index code points, not characters. E.g. a single emoji can be made of multiple code points, as well as certain characters in certain languages. The Unicode name for a character like this is a "grapheme", and grapheme splitting is so complicated it generally belongs in a dedicated Unicode library, not a general-purpose string object.
You can't do that in a performant way, and going that route can lead to problems, because characters (= graphemes in the language of Unicode) don't always behave as developers assume.
string is just an immutable []byte. It's actually one of my favorite things about Go that strings can contain invalid utf-8, so you don't end up with the Rust mess of String vs OSString vs PathBuf vs Vec<u8>. It's all just string
Rust &str and String are specifically intended for UTF-8 valid text. If you're working with arbitrary byte sequences, that's what &[u8] and Vec<u8> are for in Rust. It's not a "mess", it's just different from what Golang does.
It's never been clear to me where such a type is actually useful. In what cases do you really need to restrict it to valid UTF-8?
You should always be able to iterate the code points of a string, whether or not it's valid Unicode. The iterator can either silently replace any errors with replacement characters, or denote the errors by returning eg, `Result<char, Utf8Error>`, depending on the use case.
Afaik, all languages that have tried restricting strings to valid Unicode have ended up adding workarounds for the fact that real-world "text" sometimes has encoding errors; it's often better to preserve the errors than to corrupt the data through replacement characters, or to refuse some inputs and crash the program.
In Rust there's bstr/ByteStr (currently being added to std), awkward having to decide which string type to use.
In Python there's PEP-383/"surrogateescape", which works because Python strings are not guaranteed valid (they're potentially ill-formed UTF-32 sequences, with a range restriction). Awkward figuring out when to actually use it.
In Raku there's UTF8-C8, which is probably the weirdest workaround of all (left as an exercise for the reader to try to understand .. oh, and it also interferes with valid Unicode that's not normalized, because that's another stupid restriction).
Meanwhile the Unicode standard itself specifies Unicode strings as being sequences of code units [0][1], so Go is one of the few modern languages that actually implements Unicode (8-bit) strings. Note that at least two out of the three inventors of Go also basically invented UTF-8.
> Unicode strings need not contain well-formed code unit sequences under all conditions. This is equivalent to saying that a particular Unicode string need not be in a Unicode encoding form.
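Go's behavior here is easy to observe: ill-formed bytes are preserved in the string, and iteration flags them with U+FFFD without mutating the data. A minimal sketch:

```go
package main

import "fmt"

func main() {
	s := "abc\x80def" // contains one ill-formed byte; Go stores it as-is
	fmt.Println(len(s)) // 7: nothing is dropped or replaced in the data
	for i, r := range s {
		if r == '\uFFFD' {
			fmt.Printf("ill-formed sequence at byte %d\n", i) // byte 3
		}
	}
	fmt.Println(s[3] == 0x80) // true: the original byte is still there
}
```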
The way Rust handles this is perfectly fine. String type promises its contents are valid UTF-8. When you create it from array of bytes, you have three options: 1) ::from_utf8, which will force you to handle invalid UTF-8 error, 2) ::from_utf8_lossy, which will replace invalid code points with replacement character code point, and 3) from_utf8_unchecked, which will not do the validity check and is explicitly marked as unsafe.
But there's no option to just construct the string with the invalid bytes. 3) is not for this purpose; it is for when you already know that it is valid.
If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.
> Constructing a non-UTF-8 string slice is not immediate undefined behavior, but any function called on a string slice may assume that it is valid UTF-8, which means that a non-UTF-8 string slice can lead to undefined behavior down the road.
How could any library function work with completely random bytes? Like, how would it iterate over code points? It may want to assume utf8's standard rules and e.g. know that after this byte prefix, the next byte is also part of the same code point (excuse me if I'm using wrong terminology), but now you need complex error handling at every single line, which would be unnecessary if you just made your type represent only valid instances.
Again, this is the same simplistic-vs-just-right abstraction trade-off: this just smudges the complexity over a much larger surface area.
If you have a byte array that is not utf-8 encoded, then just... use a byte array.
There are a lot of operations that are valid and well-defined on binary strings, such as sorting them, hashing them, writing them to files, measuring their lengths, indexing a trie with them, splitting them on delimiter bytes or substrings, concatenating them, substring-searching them, posting them to ZMQ as messages, subscribing to them as ZMQ prefixes, using them as keys or values in LevelDB, and so on. For binary strings that don't contain null bytes, we can add passing them as command-line arguments and using them as filenames.
The entire point of UTF-8 (designed, by the way, by the group that designed Go) is to encode Unicode in such a way that these byte string operations perform the corresponding Unicode operations, precisely so that you don't have to care whether your string is Unicode or just plain ASCII, so you don't need any error handling, except for the rare case where you want to do something related to the text that the string semantically represents. The only operation that doesn't really map is measuring the length.
> There are a lot of operations that are valid and well-defined on binary strings, such as (...), and so on.
Every single thing you listed here is supported by &[u8] type. That's the point: if you want to operate on data without assuming it's valid UTF-8, you just use &[u8] (or allocating Vec<u8>), and the standard library offers what you'd typically want, except of the functions that assume that the string is valid UTF-8 (like e.g. iterating over code points). If you want that, you need to convert your &[u8] to &str, and the process of conversion forces you to check for conversion errors.
The problem is that there are so many functions that unnecessarily take `&str` rather than `&[u8]` because the expectation is that textual things should use `&str`.
So you naturally write another one of these functions that takes a `&str` so that it can pass to another function that only accepts `&str`.
Fundamentally no one actually requires validation (ie, walking over the string an extra time up front), we're just making it part of the contract because something else has made it part of the contract.
It's much worse than that—in many cases, such as passing a filename to a program on the Linux command line, correct behavior requires not validating, so erroring out when validation fails introduces bugs. I've explained this in more detail in https://news.ycombinator.com/item?id=44991638.
That's semantically okay, but giving &str such a short name creates a dangerous temptation to use it for things such as filenames, stdio, and command-line arguments, where that process of conversion introduces errors into code that would otherwise work reliably for any non-null-containing string, as it does in Go. If it were called something like ValidatedUnicodeTextSlice it would probably be fine.
It's actually extremely hard to introduce problems like that, precisely because Rust's standard library is very well designed. Can you give an example scenario where it would be a problem?
use std::env;
fn main() {
    let args: Vec<String> = env::args().collect();
    ...
}
When I run this code, a literal example from the official manual, with this filename I have here, it panics:
$ ./main $'\200'
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "\x80"', library/std/src/env.rs:805:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
($'\200' is bash's notation for a single byte with the value 128. We'll see it below in the strace output.)
So, literally any program anyone writes in Rust will crash if you attempt to pass it that filename, if it uses the manual's recommended way to accept command-line arguments. It might work fine for a long time, in all kinds of tests, and then blow up in production when a wild file appears with a filename that fails to be valid Unicode.
This C program I just wrote handles it fine:
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
char buf[4096];
void
err(char *s)
{
perror(s);
exit(-1);
}
int
main(int argc, char **argv)
{
int input, output;
if ((input = open(argv[1], O_RDONLY)) < 0) err(argv[1]);
if ((output = open(argv[2], O_WRONLY | O_CREAT, 0666)) < 0) err(argv[2]);
for (;;) {
ssize_t size = read(input, buf, sizeof buf);
if (size < 0) err("read");
if (size == 0) return 0;
ssize_t size2 = write(output, buf, (size_t)size);
if (size2 != size) err("write");
}
}
(I probably should have used O_TRUNC.)
Here you can see that it does successfully copy that file:
The Rust manual page linked above explains why they think introducing this bug by default into all your programs is a good idea, and how to avoid it:
> Note that std::env::args will panic if any argument contains invalid Unicode. If your program needs to accept arguments containing invalid Unicode, use std::env::args_os instead. That function returns an iterator that produces OsString values instead of String values. We’ve chosen to use std::env::args here for simplicity because OsString values differ per platform and are more complex to work with than String values.
I don't know what's "complex" about OsString, but for the time being I'll take the manual's word for it.
So, Rust's approach evidently makes it extremely hard not to introduce problems like that, even in the simplest programs.
Go's approach doesn't have that problem; this program works just as well as the C program, without the Rust footgun:
(O_CREATE makes me laugh. I guess Ken did get to spell "creat" with an "e" after all!)
This program generates a much less clean strace, so I am not going to include it.
You might wonder how such a filename could arise other than as a deliberate attack. The most common scenario is when the filenames are encoded in a non-Unicode encoding like Shift-JIS or Latin-1, followed by disk corruption, but the deliberate attack scenario is nothing to sneeze at either. You don't want attackers to be able to create filenames your tools can't see, or turn to stone if they examine, like Medusa.
Note that the log message on error also includes the ill-formed Unicode filename:
$ ./cp $'\201' baz
2025/08/22 21:53:49 open source: open ζ: no such file or directory
But it didn't say ζ. It actually emitted a byte with value 129, making the error message ill-formed UTF-8. This is obviously potentially dangerous, depending on where that logfile goes because it can include arbitrary terminal escape sequences. But note that Rust's UTF-8 validation won't protect you from that, or from things like this:
$ ./cp $'\n2025/08/22 21:59:59 oh no' baz
2025/08/22 21:59:09 open source: open
2025/08/22 21:59:59 oh no: no such file or directory
I'm not bagging on Rust. There are a lot of good things about Rust. But its string handling is not one of them.
There might be potential improvements, like using OsString by default for `env::args()` but I would pick Rust's string handling over Go’s or C's any day.
Isn't &[u8] what you should be using for command-line arguments and filenames and whatnot? In that case you'd want its name to be short, like &[u8], rather than long like &[bytes] or &[raw_uncut_byte] or something.
OsStr/OsString is what you would use in those circumstances. Path/PathBuf specifically for filenames or paths, which I think uses OsStr/OsString internally. I've never looked at OsStr's internals but I wouldn't be surprised if it is a wrapper around &[u8].
Note that &[u8] would allow things like null bytes, and maybe other edge cases.
You can't get null bytes from a command-line argument. And going by https://news.ycombinator.com/item?id=44991638 it's common to not use OsString when accepting command-line arguments, because std::env::args yields Strings, which means that probably most Rust programs that accept filenames on the command line have this bug.
Rust String can contain null bytes! Rust uses explicit string lengths. Agree that most OSes wouldn't be able to pass null bytes in arguments, though.
Right, but it can't contain invalid UTF-8, which is valid in both command-line parameters and in filenames on Linux, FreeBSD, and other normal Unixes. See my link above for a demonstration of how this causes bugs in Rust programs.
> If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.
Yes, and that's a good thing. It allows all code that gets &str/String to assume that the input is valid UTF-8. The alternative would be that every single time you write a function that takes a string as an argument, you have to analyze your code, consider what would happen if the argument was not valid UTF-8, and handle that appropriately. You'd also have to redo the whole analysis every time you modify the function. That's a horrible waste of time; it's much better to:
1) Convert things to String early, and assume validity later, and
2) Make functions that explicitly don't care about validity take &[u8] instead.
This is, of course, exactly what Rust does: I am not aware of a single thing that &str allows you to do that you cannot do with &[u8], except things that do require you to assume it's valid UTF-8.
> This is, of course, exactly what Rust does: I am not aware of a single thing that &str allows you to do that you cannot do with &[u8], except things that do require you to assume it's valid UTF-8.
Doesn't this demonstrate my point? If you can do everything with &[u8], what's the point in validating UTF-8? It's just a less universal string type, and your program wastes CPU cycles doing unnecessary validation.
> I don’t understand this complaint. (3) sounds like exactly what you are asking for. And yes, doing unsafe thing is unsafe
You're meant to use `unsafe` as a way of limiting the scope of reasoning about safety.
Once you construct a `&str` using `from_utf8_unchecked`, you can't safely pass it to any other function without looking at its code and reasoning about whether it's still safe.
> It's never been clear to me where such a type is actually useful. In what cases do you really need to restrict it to valid UTF-8?
Because 99.999% of the time you want it to be valid and would like an error if it isn't? If you want to work with invalid UTF-8, that should be a deliberate choice.
Do you want grep to crash when your text file turned out to have a partially written character in it? 99.999% seems very high, and you haven't given an actual use case for the restriction.
Crash? No. But I can safely handle the error where it happens, because the language actually helps me with this situation by returning a proper Result type. So I have to explicitly check which "variant" I have, instead of forgetting to call the validate function, as can happen in Go.
Rust doesn't crash when it gets an error unless you tell it to. You make a choice how to handle the error because you have to, or it won't compile. If you don't care about losing information when reading a file, you can use the lossy function that gracefully handles invalid bytes.
If the filename is not valid UTF-8, Golang can still open the file without a problem, as long as your filesystem doesn't attempt to be clever. Linux ext4fs and Go both consider filenames to be binary strings except that they cannot contain NULs.
> they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from the first principles, and solving the problem correctly (or using a solution that was Not Invented Here).
I've said this before, but much of Go's design looks like it's imitating the C++ style at Google. The comments where I see people saying they like something about Go it's often an idiom that showed up first in the C++ macros or tooling.
I used to check this before I left Google, and I'm sure it's becoming less true over time. But to me it looks like the idea of Go was basically "what if we created a Python-like compiled language that was easier to onboard than C++ but which still had our C++ ergonomics?"
Literally building the project out of the Plan 9 source code is very far from "bring[ing] their previous experience to the project, (...) some Plan9 influence in there somewhere"
> What if the file name is not valid UTF-8, though?
Then make it valid UTF-8. If you try to solve the long tail of issues in a commonly used function of the library, it's going to cause a lot of pain. This approach is better. If someone has a weird problem like file names with invalid characters, they can solve it themselves, even publish a package. Why complicate 100% of uses to solve 0.01% of issues?
I recently started writing Go for a new job, after 20 years of not touching a compiled language for something serious (I've done DevKitArm dev. as a hobby).
I know it's mostly a matter of taste, but darn, it feels horrible. And there are no default parameter values, and the error handling smells bad, and no real stack trace in production. And the "object orientation" syntax, adding some ugly receiver to each function. And the pointers...
It took me back to my C/C++ days. Like programming with 25 year old technology from back when I was in university in 1999.
And then people are amazed that it achieves compile times compiled languages were already delivering on PCs running at 10 MHz, within the constraints of 640 KB (TB, TP, Modula-2, Clipper, QB).
It is weird to lump C++ and Rust together. I have used Rust code bases that compile in 2-3 minutes what a C++ compiler would take literally hours to compile.
I feel people who complain about rustc compile times must be new to using compiled languages…
That's a reasonable trade-off to make for some people, no? There's plenty of work to be done where you can cope with the occasional runtime error and less then bleeding edge performance, especially if that then means wins in other areas (compile speeds, tooling). Having a variety of languages available feels like a pretty good thing to me.
Go was a response, in part, to C++, if I recall how it was described when it was introduced. That doesn't seem to be how it ended it out. Maybe it was that "systems programming language" means something different for different people.
Well, I personally would be happier with a stronger type system (e.g. java can compile just as fast, and it has a less anemic type system), but sure.
And sure, it is welcome from a dev POV on one hand, though from an ecosystem perspective, more languages are not necessarily good as it multiplies the effort required.
What do you mean by saying Java compiles just as fast? Do you mean to say that the Java->bytecode conversion is fast? Duh, it barely does anything. Go compilation generates machine code, you can't compare it to bytecode generation.
Are Java AOT compilation times just as fast as Go?
> Duh, it barely does anything. Go compilation generates machine code, you can't compare it to bytecode generation
Why not? Machine code is not all that special - C++ and Rust are slow due to optimizations, not because machine code is the target. Go "barely does anything" too; it just spits out machine code almost as is.
Java AOT via GraalVM's native image is quite slow, but it has a different way of working (doing all the Java class loading and initialization and "baking" that into the native image).
That's a bit unfair to the modern compilers - there's a lot more standards to adhere to, more (micro)architectures, frontends need to plug into IRs into optimisers into codegen, etc. Some of it is self-inflicted: do you need yet-another 0.01% optimisation? At the cost of maintainability, or even correctness? (Hello, UB.) But most of it is just computers evolving.
If you want a nice modern compiled language, try Kotlin. It's not ideal, but it's very ergonomic and has very reasonable compile times (to JVM, I did not play with native compilation). People also praise Nim for being nice towards the developer, but I don't have any first-hand experience with it.
I have only used Kotlin on the JVM. You're saying there's a way to avoid the JVM and build binaries with it? Gotta go look that up. The problem with Kotlin is not the language but finding jobs using it can be spotty. "Kotlin specialist" isn't really a thing at all. You can find more Golang and Python jobs than Kotlin.
But it's not--Go is a thoroughly modern language, minus a few things as noted in this discussion. It's very capable, and I've written quite a few APIs for corporate clients using it, and they are doing great.
> Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences.
It feels often like the two principles they stuck/stick to are "what makes writing the compiler easier" and "what makes compilation fast". And those are good goals, but they're only barely developer-oriented.
Not sure it was only that. I remember a lot of "we're not Java" in the discussions around it. I always had the feeling, they were rejecting certain ideas like exceptions and generics more out of principle, than any practical analysis.
Like, yes, those ideas have frequently been driven too far and have led to their own pain points. But people also seem to frequently rediscover that removing them entirety will lead to pain, too.
Ian Lance Taylor, a big proponent of generics, wrote a lot about the difficulties of adding generics to Golang. I bet the initial team just had to cut the scope and produce a working language, as simple as possible while still practically useful. Easy concurrency was the goal, so they basically took most of Modula-2 plus ideas from Oberon (and elsewhere), removed all the "fluff" (like arrays indexable by enumeration types, etc.), added GC, and that was plenty enough.
I feel especially with generics though, there is a sort of loop that many languages fall into. It goes something like this:
(1) "Generics are too complicated and academical and in the real world we only need them for a small number of well-known tasks anyway, so let's just leave them out!"
(2) The amount of code that does need generics but now has to work around the lack of them piles up, leading to an explosion of different libraries, design patterns, etc, that all try to partially recreate them in their own way.
(3) The language designers finally cave and introduce some kind of generics support in a later version of the language. However, at this point, they have to deal with all the "legacy" code that is not generics-aware and with runtime environments that aren't either. It also somehow has to play nice with all the ad-hoc solutions that are still present. So the new implementation has to deal with a myriad of special cases and tradeoffs that wouldn't be there in the first place if it had been included in the language from the beginning.
(4) All the tradeoffs give the feature a reputation of needless complexity and frustrating limitations and/or footguns, prompting the next language designer to wonder if they should include them at all. Go to (1) ...
I am reminded when I read "barely developer oriented" that this comes from Google, who run compute and compilers at Ludicrous Scale. It doesn't seem strange that they might optimize (at least in part) for compiler speed and simplicity.
What makes compilation fast is a good goal at places with large code bases and build times. Maybe makes less sense in smaller startups with a few 100k LOC.
The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Python is a mess with the GIL and async libraries that are hard to reason with. C, C++, Java, etc. need external libraries to implement threading, which can't be reasoned about in the context of the language itself.
So, go is a perfect fit for the http server (or service) usecase and in my experience there is no parallel.
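As a toy sketch of that formalism: a pipeline stage is just a goroutine that owns its output channel and closes it when done.

```go
package main

import "fmt"

// square reads numbers from in, writes their squares to out,
// and closes out once in is drained (CSP-style data flow).
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go square(in, out)
	go func() {
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()
	for sq := range out {
		fmt.Println(sq) // 1, 4, 9
	}
}
```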
> So, go is a perfect fit for the http server (or service) usecase and in my experience there is no parallel.
Elixir handling 2 million websocket connections on a single machine back in 2015 would like to have a word.[1] This is largely thanks to the Erlang runtime it sits atop.
Having written some tricky Go (I implemented Raft for a class) and a lot of Elixir (professional development), it is my experience that Go's concurrency model works for a few cases but largely sucks in others and is way easier to write footguns in Go than it ought to be.
I worked in both Elixir and Go. I still think Elixir is best for concurrency.
I recently realized that there is no easy way to "bubble up a goroutine error", and I wrote some code to make sure that was possible, and that's when I realized, as usual, that I was rewriting part of the OTP library.
The whole supervisor mechanism is so valuable for concurrency.
> Java etc need external libraries to implement threading which cant be reasoned with in the context of the language itself.
What do you mean by this for Java? The library is the runtime that ships with Java, and while they're OS threads under the hood, the abstraction isn't all that leaky, and it doesn't feel like they're actually outside the JVM.
For Erlang and Elixir, concurrent programming is pretty much their thing so grab any book or tutorial on them and you'll be introduced to how they handle it.
And of those seven, how many are mainstream? A single one...
So it's really Go vs. Java, or you can take a performance hit and use Erlang (valid choice for some tasks but not all), or take a chance on a novel paradigm/unsupported language.
If you want mainstream, Java and C# are mainstream and both are used much more than Go. Clojure isn't too niche, though not as popular as Go, and supports concurrency out of the box at least as well as Go. Ada is still used widely within its niches and has better concurrency than Go baked in since 1983. And then, yes, Erlang and Elixir to add to that list.
That's 6 languages, a non-exhaustive list of them, that are either properly mainstream and more popular than Go or at least well-known and easy to obtain and get started with. All of which have concurrency baked in and well-supported (unlike, say, C).
EDIT: And one more thing, all but Elixir are older than Go, though Clojure only slightly. So prior art was there to learn from.
Erlang (or Elixir) are absolutely Go replacements for the types of software where CSP is likely important.
Source: spent the last few weeks at work replacing a Go program with an Elixir one instead.
I'd use Go again (without question) but it is not a panacea. It should be the default choice for CLI utilities and many servers, but the notion that it is the only usable language with something approximating CSP is idiotic.
Unless we consider JDK as external library. Speaking of library, Java's concurrency containers are truly powerful yet can be safely used by so many engineers. I don't think Go's ecosystem is even close.
> using the CSP-like (goroutine/channel) formalism which is easy to reason with
I thought it was a seldom mentioned fact in Go that CSP systems are impossible to reason about outside of toy projects so everyone uses mutexes and such for systemic coordination.
I'm not sure I've even seen channels in a production application used for anything more than stopping a goroutine, collecting workgroup results, or something equally localized.
There's also atomic operations (sync/atomic) and higher-level abstractions built on atomics and/or mutexes (sempahores, sync.Once, sync.WaitGroup/errgroup.Group, etc.). I've used these and seen them used by others.
But yeah, the CSP model is mostly dead. I think the language authors' insistence that goroutines should not be addressable or even preemptible from user code makes this inevitable.
Practical Go concurrency owes more to its green threads and colorless functions than its channels.
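In practice, "Go concurrency" tends to look more like this sketch (a WaitGroup plus sync/atomic; atomic.Int64 needs Go 1.19+) than like channel choreography:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// parallelSum adds 1..n from n goroutines, coordinating with
// a WaitGroup and an atomic counter instead of channels.
func parallelSum(n int) int64 {
	var total atomic.Int64
	var wg sync.WaitGroup
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(v int64) {
			defer wg.Done()
			total.Add(v)
		}(int64(i))
	}
	wg.Wait()
	return total.Load()
}

func main() {
	fmt.Println(parallelSum(100)) // 5050
}
```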
It is rare to encounter this in practice, and it does get picked up by the race detector (which you have to consciously enable). But the language designers chose not to address it, so I think it's a valid criticism. [1]
Once you know about it, though, it's easy to avoid. I do think, especially given that the CSP features of Go are downplayed nowadays, this should be addressed more prominently in the docs, with the more realistic solutions presented (atomics, mutexes).
It could also potentially be addressed using 128-bit atomics, at least for strings and interfaces (whereas slices are too big, taking up 3 words). The idea of adding general 128-bit atomic support is on their radar [2] and there already exists a package for it [3], but I don't think strings or interfaces meet the alignment requirements.
Not to dispute too strongly (since I haven't used this functionality myself), but Node.js does have support for true multithreading since v12: https://nodejs.org/dist/latest/docs/api/worker_threads.html. I'm not sure what you mean by "M:1 threaded" but I'm legitimately curious to understand more here, if you're willing to give more details.
There are also runtimes like e.g. Hermes (used primarily by React Native), there's support for separating operations between the graphics thread and other threads.
All that being said, I won't dispute OP's point about "handling concurrency [...] within the language"- multithreading and concurrency are baked into the Golang language in a more fundamental way than Javascript. But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
Yeah, those are workers, which require manual administration of shared/passed memory:
> Within a worker thread, worker.getEnvironmentData() returns a clone of data passed to the spawning thread's worker.setEnvironmentData(). Every new Worker receives its own copy of the environment data automatically.
M:1 threaded means that the user space threads are mapped onto a single kernel thread. Go is M:N threaded: goroutines can be arbitrarily scheduled across various underlying OS threads. Its primitives (goroutines and channels) make both concurrency and parallelism notably simpler than most languages.
> But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
I’d personally disagree in this context. Almost every language has pthread-style cro-magnon concurrency primitives. The context for this thread is precisely how go differs from regular threading interfaces. Quoting gp:
> The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Yes other languages have threading, but in go both concurrency and parallelism are easier than most.
Even the node author came out and said the concurrency design there was wrong and switched to golang. Libuv is cool and all, but doesn't handle everything and you still have a bottleneck, poor isolation, and the single threaded event loop to deal with. Back pressure in node becomes a real thing and the whole thing becomes very painful and obvious at scale.
Granted, many people don't ever need to handle that kind of throughput. It depends on the app and the load put on to it. So many people don't realize. Which is fine! If it works it works. But if you do fall into the need of concurrency, yea, you probably don't want to be using node - even the newer versions. You certainly could do worse than golang. It's good we have some choices out there.
The other thing I always say is the choice in languages and technology is not for one person. It's for the software and team at hand. I often choose languages, frameworks, and tools specifically because of the team that's charged with building and maintaining. If you can make them successful because a language gives them type safety or memory safety that rust offers or a good tool chain, whatever it is that the team needs - that's really good. In fact, it could well be the difference between a successful business and an unsuccessful one. No one really cares how magical the software is if the company goes under and no one uses the software.
My feeling is that in terms of developer ergonomics, it nailed the “very opinionated, very standard, one way of doing things” part. It is a joy to work on a large microservices architecture and not have a different style on each repo, or avoiding formatting discussions because it is included.
The issue is that it was a bit outdated in the choice of _which_ things to choose as the one Go way. People expect a map/filter method rather than a loop with off-by-one risks, a type system with the smartness of TypeScript (if less featured and more heavily enforced), less annoying error handling, and so on.
I get that it’s tough to implement some of those features without opening the way to a lot of “creativity” in the bad sense. But I feel like go is sometimes a hard sell for this reason, for young devs whose mother language is JavaScript and not C.
> The issue is that it was a bit outdated in the choice of _which_ things to choose as the one Go way
I agree with this. I feel like Go was a very smart choice to create a new language to be easy and practical and have great tooling, and not to be experimental or super ambitious in any particular direction, only trusting established programming patterns. It's just weird that they missed some things that had been pretty well hashed out by 2009.
Map/filter/etc. are a perfect example. I remember around 2000 the average programmer thought map and filter were pointlessly weird and exotic. Why not use a for loop like a normal human? Ten years later the average programmer was like, for loops are hard to read and are perfect hiding places for bugs, I can't believe we used to use them even for simple things like map, filter, and foreach.
A few years later, even Java decided that it needed to add its stream API and lambda functions (Java 8, 2014), because no matter how awful they looked when bolted onto Java, it was still an improvement in clarity and simplicity.
Somehow Go missed this step forward the industry had taken and decided to double down on "for." Go's different flavors of for are a significant improvement over the C/C++/Java for loop, but I think it would have been more in line with the conservative, pragmatic philosophy of Go to adopt the proven solution that the industry was converging on.
Go generics provide all of this. Prior to generics, you could have filter, map, reduce, etc. in a library/pkg, but you needed to implement them yourself, once for each type.
After Go added generics in version 1.18, you can just import someone else's generic implementations of whatever of these functions you want and use them all throughout your code and never think about it. It's no longer a problem.
The language might permit it now, but it isn't designed for it. I think if the Go designers had intended for map, filter, et al. to replace most for loops, they would have designed a more concise syntax for anonymous functions: something along the lines of an `x => x * 2` shorthand rather than the full `func(x int) int { return x * 2 }`.
Do they? After too many functional battles I started practicing what I jokingly call "Debugging-Driven Development": just like TDD keeps design decisions in mind to allow for testability from the get-go, this makes me write code that will be trivially easy to debug (especially printf-guided debugging and step-by-step execution in a debugger).
Like, adding a printf in the middle of a for loop, without even needing to understand the logic of the loop. Just make a new line and write a printf. I grew tired of all those tight chains of code that iterate beautifully but later when in a hurry at 3am on a Sunday are hell to decompose and debug.
I'm not a hard defender of functional programming in general, mind you.
It's just that a ridiculous amount of steps in real world problems can be summarised as 'reshape this data', 'give me a subset of this set', or 'aggregate this data by this field'.
Loops are, IMO, very bad at expressing those common concepts briefly and clearly. They take a lot of screen space, usually need accessory variables, and it isn't immediately clear from just seeing a for block what you're about to do. "I'm about to iterate" isn't useful information to me as a reader: are you transforming data, selecting it, aggregating it?
The consequence is that you usually end up with tons of lines like
userIds = getIdsfromUsers(users);
where the function is just burying a loop. Compare to:
userIds = users.pluck('id')
and you save the buried utility function somewhere else.
Rust has `.inspect()` for iterators, which achieves your printf debugging needs. Granted, it's a bit harder for an actual debugger, but support's quite good for now.
I'll agree that explicit loops are easier to debug, but that comes at the cost of being harder to write _and_ read (need to keep state in my head) _and_ being more bug-prone (because mutability).
I think it's a bad trade-off, most languages out there are moving away from it
There's actually one more interesting plus for for loops that isn't obvious at first: they let you perform a single memory pass instead of multiple. If you're processing a large enough list, that makes a significant difference, because memory accesses are relatively expensive (a loop can be made e.g. 10x faster by optimising memory accesses alone).
So for a large loop the code like
for i, value := range source {
	result[i] = value*2 + 1
}
Would be 2x faster than a loop like
for i, value := range source {
	intermediate[i] = value * 2
}
for i, value := range intermediate {
	result[i] = value + 1
}
Depending on your iterator implementation (or lack thereof), the functional version boils down to your first example.
For example, Rust iterators are lazily evaluated with early exits (when filtering data), so it's your first form but as optimized as possible. OTOH, Python's map/filter/etc. could well return a full list each time, like with your intermediate. [EDIT] Python returns generators, so it's sane.
I would say that any sane language allowing functional-style data manipulation will have them as fast as manual for-loops. (that's why Rust bugs you with .iter()/.collect())
This is a very valid point. Loops also let you play with the iteration itself for performance, deciding to skip n steps if a condition is met for example.
I always encounter these upsides once every few years when preparing leetcode interviews, where this kind of optimization is needed for achieving acceptable results.
In daily life, however, most of these chunks of data to transform fall in one of these categories:
- small size, where readability and maintainability matters much more than performance
- living in a db, and being filtered/reshaped by the query rather than code
- being chunked for atomic processing in a queue or similar (usual when importing a big chunk of data).
- the operation itself is a standard algorithm that you just consume from a standard library that handles the loop internally.
Much like trees and recursion, most of us don't flex that muscle often. Your mileage may vary depending on the domain, of course.
Just use a real debugger. You can step into closures and stuff.
I assume, anyway. Maybe the Go debugger is kind of shitty, I don't know. But in PHP with xdebug you just use all the fancy array_* methods and then step through your closures or callables with the debugger.
> Java ? Licensing sagas requiring the use of divergent forks. Plus Go is easier to work with, perhaps especially for server-side deployments
Yeah, these are sagas only, because there is basically one, single, completely free implementation anyone uses on the server-side and it's OpenJDK, which was made 100% open-source and the reference implementation by Oracle. Basically all of Corretto, AdoptOpenJDK, etc are just builds of the exact same repository.
People bringing this whole license topic up can't be taken seriously, it's like saying that Linux is proprietary because you can pay for support at Red Hat..
> People bringing this whole license topic up can't be taken seriously
So you mean all those universities and other places that have been forced to spend $$$ on licenses under the new regime also can't be taken seriously? Are you saying none of them took advice and had nobody on staff to tell them OpenJDK exists?
Regarding your Linux comment, some of us are old enough to remember the SCO saga.
Sadly Oracle have deeper pockets to pay more lawyers than SCO ever did ....
> So you mean all those universities and other places that have been forced to spend $$$ on licenses under the new regime also can't be taken seriously? Are you saying none of them took advice and had nobody on staff to tell them OpenJDK exists?
This info is actually quite surprising to me; I'd never heard of it, since everywhere I know switched to OpenJDK-based alternatives from the get-go. There was no reason to stay on the Oracle one after the licensing shenanigans they tried to play.
Why did these places keep the Oracle JDK and end up paying for it? OpenJDK was a drop-in replacement; nothing of value is lost by switching...
I have made a bunch of claims, that are objectively true. From there, basic logical inference says that you can completely freely use Java. Anything else is irrelevant.
I don't know what/which university you talk about, but I'm sure they were also "forced to pay $$$" for their water bills and whatnot. If they decided to go with paid support, then.. you have to pay for it. In exchange you can a) point your finger at a third-party if something goes wrong (which governments love doing/often legally necessary) b) get actual live support on Christmas Eve if needed.
TL;DR: It's impossible to know if anyone on campus has downloaded Oracle Java....
Quote from this article:[1]
*He told The Register that Oracle is "putting specific Java sales teams in country, and then identifying those companies that appear to be downloading and... then going in and requesting to [do] audits. That recipe appears to be playing out truly globally at this point."*
That's also true of torrented PhotoShop, Microsoft Office, etc..
Also, as another topic, Oracle is doing audits specifically because their software doesn't phone home to check licenses and stuff like that - which is a crucial requirement for their intended target demographics, big government organizations, safety critical systems, etc. A whole country's healthcare system, or a nuclear power base can't just stop because someone forgot to pay the bill.
So instead Oracle just visits companies that have a license with them, and checks what is being used to determine if it's in accord with the existing contract. And yeah, from this respect I also heard of a couple of stories where a company was not using the software as the letter of the contract, e.g. accidentally enabling this or that, and at the audit the Oracle salesman said that they will ignore the mistake if they subscribe to this larger package, which most manager will gladly accept as they can avoid the blame, which is questionable business practice, but still doesn't have anything to do with OpenJDK..
The article tries very hard to draw a connection between the licensing costs for the universities and Oracle auditing random java downloads, but nobody actually says that this is what happened.
The waiver of historic fees goes back to the last licensing change where Oracle changed how licensing fees would be calculated. So it seems reasonable that Oracle went after them because they were paying customers that failed to pay the inflated fees.
Yeah I know, but people have trouble understanding the absolutely trivial licensing around OpenJDK, let's not bring up alternative implementations (which actually makes the whole platform an even better target from a longevity perspective! There isn't many languages that have a standard with multiple, completely independent impls).
You forgot D. In a world where D exists, it's hard to understand why Go needed to be created. Every critique in this post is not an issue in D. If the effort Google put into Go had gone on making D better, I think D today would be the best language you could use. But as it is, D has had very little investment (by that I mean actual developer time spent on making it better, cleaning it up, writing tools) and it shows.
Go has a big, high quality standard library with most of what one might need. Means you have to bring in and manage (and trust) far fewer third party dependencies, and you can work faster because you’re not spending a bunch of time figuring out what the crate of the week is for basic functionality.
Rust intentionally chooses to have a small standard library to avoid the "dead batteries" problem. But the Rust community also maintains lists of "blessed" crates to try and cope with the issue of having to trust third-party software components of unknown quality.
I think having http in the standard library is a perfect example of the dead batteries problem: should the stdlib http also support QUIC and/or websockets? If you choose to include it, you've made stdlib include support for very specific use cases. If you choose not to include it, should the quic crate then extend or subsume the stdlib http implementation? If you choose subsume, you've created a dead battery. If you choose extend, you've created a maintenance nightmare by introducing a dependency between stdlib and an external crate.
Sorry, but for most programming tasks I prefer having actual data containers with features over an HTTP library: Set, Tree, etc. Those are fundamental CS building blocks, yet they were absent from the Go standard library. (Well, some were added pretty recently, but still nowhere near as featureful as std::collections in Rust.)
Also, as mentioned by another comment, an HTTP or crypto library can become obsolete _fast_. What about HTTP/3? What about post-quantum crypto? What about security fixes? The stdlib is tied to the language version, thus to a language release. Keeping such code independent allows it to evolve much faster, be leaner, and be more composable. So yes, the library is well maintained, but it's tied to the Go version.
Also, it enables breaking API changes if absolutely needed. I can name two precedents:
- in rust, time APIs in chrono had to be changed a few times, and the Rust maintainers were thankful it was not part of the stdlib, as it allowed massive changes
- otoh, in Go, it was found out that net.IP has an absolutely atrocious design (it's just a []byte under the hood). Tailscale wrote a replacement that's now in a subpackage in net, but the old net.IP is set in stone. (https://tailscale.com/blog/netaddr-new-ip-type-for-go)
> Set, Tree, etc types. Those are fundamental CS building blocks
And if you're engaging in CS then Go is probably the last language you should be using. If however, what you're interested in doing is programming, the fundamental data structures there are arrays and hashmaps, which Go has built-in. Everything else is niche.
> Also, as mentioned by another comment, an HTTP or crypto library can become obsolete _fast_. What about HTTP3? What about post-quantum crypto? What about security fixes? The stdlib is tied to the language version, thus to a language release. Having such code independant allows is to evolve much faster, be leaner, and be more composable. So yes, the library is well maintained, but it's tied to the Go version.
The entire point is to have a well supported crypto library. Which Go does and it's always kept up to date. Including security fixes.
> - otoh, in Go, it was found out that net.IP has an absolutely atrocious design (it's just a []byte under the hood). Tailscale wrote a replacement that's now in a subpackage in net, but the old net.IP is set in stone. (https://tailscale.com/blog/netaddr-new-ip-type-for-go)
Yes, and? This seems to me to be the perfect way to handle things - at all times there is a blessed high-quality library to use. As warts of its design get found out over time, a new version is worked on and released once every ~10 years.
A total mess of barely-supported libraries that the userbase is split over is just that - a mess.
The downside of a small stdlib is the proliferation of options, and you suddenly discover(ed?, it's been a minute) that your async package written for Tokio won't work on async-std and so forth.
This has often been the case in Go too - until `log/slog` existed, lots of people chose a structured logger and made it part of their API, forcing it on everyone else.
No, none outside of stdlib anyway in the way you're probably thinking of.
There are specialized constructs which live in third-party crates, such as rope implementations and stack-to-heap growable Strings, but those would have to exist as external modules in Go as well.
uv + the new way of adding the required packages in the comments is pretty good.
you can go `uv run script.py` and it'll automatically fetch the libraries and run the script in a virtual environment.
Still no match for Go though, shipping a single cross-compiled binary is a joy. And with a bit of trickery you can even bundle in your whole static website in it :) Works great when you're building business logic with a simple UI on top.
I've been out of the Python game for a while but I'm not surprised there is yet another tool on the market to handle this.
You really come to appreciate when these batteries are included with the language itself. That Go binary will _always_ run but that Python project won't build in a few years.
Or the import path was someone's blog domain that included a <meta> reference to the actual github repo (along with the tag, IIRC) where the source code really lives. Insanity
Well, that's the problem I was highlighting - golang somehow decided to have the worst of both worlds: arbitrary domains in import paths and then putting the actual ref of the source code ... elsewhere
This just makes it even more frustrating to me. Everything good about go is more about the tooling and ecosystem but the language itself is not very good. I wish this effort had been put into a better language.
> I wish this effort had been put into a better language.
But it is being put. Read newsletters like "The Go Blog", "Go Weekly". It's been improving constantly. Language-changes require lots of time to be done right, but the language is evolving.
> Rust crates re-introduces [...] potential for supply-chain attacks.
I have absolutely no idea how go would solve this problem, and in fact I don't think it does at all.
> The Go std-lib is fantastic.
I have seen worse, but I would still not call it decent considering this is a fairly new language that could have done a lot more.
I am going to ignore the incredible amount of asinine and downright wrong stuff in many of the most popular libraries (even the basic ones maintained by google) since you are talking only about the stdlib.
Off the top of my head: inconsistent tag handling for structs (json defaults, omitzero vs omitempty), not even errors on tag typos; the reader/writer pattern that forces you to write custom connectors between the two; bzip2 has a reader but no writer; the context linked list for K/V. Just look at the consistency of the interfaces in the "encoding" pkg and cry; the package `hash` should actually be `checksum`. Why do `strconv.Atoi`/`strconv.Itoa` still exist? Time.Add() vs Time.Sub()...
It's chock full of inconsistencies. It forces me to look at the documentation every single time I haven't used something for more than a couple of days. No, the autocomplete with the two-line documentation does not include the potential pitfalls that are explained only at the top of the package.
And please don't get me started on the wrappers I had to write around stuff in the net library to make it a bit more consistent or just less plain wrong. net/url.Parse!!! I said don't get me started on this package! nil vs NoBody! ARGH!
None of this is stuff at the language level (of which there is plenty to say).
None of it is a dealbreaker per se, but it adds attrition and becomes death by a billion cuts.
I don't even trust any parser written in go anymore, I always try to come up with corner cases to check how it reacts, and I am often surprised by most of them.
Sure, there are worse languages and libraries. Still not something I would pick up in 2025 for a new project.
Yes, My favourite is the `time` package. It's just so elegant how it's just a number under there, the nominal type system truly shines. And using it is a treat.
What do you mean I can do `+= 8*time.Hour` :D
Unfortunately it doesn't have error handling, so when you do += 8 hours and it fails, it won't return a Go error, it won't throw a Go exception, it just silently does the wrong thing (clamp the duration) and hope you don't notice...
It's simplistic and that's nice for small tools or scripts, but at scale it becomes really brittle since none of the edge cases are handled
I thankfully found out when writing unit tests instead of in production. In Go time.Time has a much higher range than time.Duration, so it's very easy to have an overflow when you take a time difference. But there's also no error returned in general when manipulating time.Duration, you have to remember to check carefully around each operation to know if it risks going out of range.
Internally time.Duration is a single 64bit count, while time.Time is two more complicated 64bit fields plus a location
As long as you don’t need to do `hours := 8` and `+= hours * time.Hour`. Incredibly the only way to get that multiplication to work is to cast `hours` to a `time.Duration`.
In Go, `int * Duration = error`, but `Duration * Duration = Duration`!
I get that you can specifically write code that does not malloc, but I'm curious whether, at scale, there are heap management / fragmentation and compaction issues that are equivalent to GC pause issues.
I don't have a lot of experience with the malloc languages at scale, but I do know that heap fragmentation and GC fragmentation are very similar problems.
There are techniques in GC languages to avoid GC like arena allocation and stuff like that, generally considered non-idiomatic.
This tends to be true for most languages, even the ones with easier concurrency support. Using it correctly is the tricky part.
I have no real problem with the portability. The area I see Go shining in is stuff like AWS Lambda where you want fast execution and aren't distributing the code to user systems.
I find Result[] and Optional[] somewhat overrated, but nil does bother me. However, nil isn't going to go away (what else is going to be the default value for pointers and interfaces, and not break existing code?). I think something like a non-nilable type annotation/declaration would be all Go needs.
Yeah maybe they're overrated, but they seem like the agreed-upon set of types to avoid null and to standardize error handling (with some support for nice sugars like Rust's ? operator).
I quite often see devs introducing them in other languages like TypeScript, but it just doesn't work as well when it's introduced in userland (usually you just end up with a small island of the codebase following this standard).
Typescript has another way of dealing with null/undefined: it's in the type definition, and you can't use a value that's potentially null/undefined. Using Optional<T> in Typescript is, IMO, weird. Typescript also has exceptions...
I think they only work if the language is built around it. In Rust, it works, because you just can't deref an Optional type without matching it, and the matching mechanism is much more general than that. But in other languages, it just becomes a wart.
As I said, some kind of type annotation would be most go-like, e.g.
func f(ptr PtrToData?) int { ... }
You would only be allowed to touch *ptr inside an if ptr != nil { ... } block. There's a linter from Uber (nilaway) that works like that, minus the type annotation. That proposal would break existing code, so perhaps an explicit marker for non-nil pointers is needed instead (but that's not very ergonomic, alas).
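For comparison, the status quo, where the nil guard is only a convention the compiler can't enforce (hypothetical `Data` type):

```go
package main

import "fmt"

type Data struct{ N int }

// Nothing stops a caller from passing nil, and nothing stops f from
// forgetting the guard; the check is purely a runtime convention.
func f(ptr *Data) int {
	if ptr != nil {
		return ptr.N
	}
	return 0 // the caller can't tell "absent" from "value is 0"
}

func main() {
	fmt.Println(f(nil), f(&Data{N: 7}))
}
```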
Yeah default values are one of Go's original sins, and it's far too late to roll those back. I don't think there are even many benefits—`int i;` is not meaningfully better than `int i = 0;`. If it's struct initialization they were worried about, well, just write a constructor.
Go has chosen explicit over implicit everywhere except initialization—the one place where I really needed "explicit."
It makes types very predictable though: a var int is always a valid int, no matter what, where, or how. How would you design the type system and semantics around initialization and declarations without defaults? Just allow uninitialized values like in C? That's basically default values with extra steps and bonus security holes. An expansion of the type system to account for PossiblyUndefined<T>? That feels like a significant complication, but maybe someone made it work…
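To the predictability point: every Go declaration yields a usable value, never uninitialized memory. A sketch:

```go
package main

import "fmt"

type Config struct {
	Name  string
	Count int
	Tags  []string
}

func main() {
	// No constructor call, yet c is fully usable: every field holds
	// its type's zero value instead of garbage memory.
	var c Config
	fmt.Println(c.Name == "", c.Count, c.Tags == nil)
}
```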
Golang is great for problem classes where you really, really can't do away with tracing GC. That's a rare case perhaps, but it exists nonetheless. Most GC languages don't have the kind of high-performance concurrent GC that you get out of the box with Golang, and the minimum RAM requirements are quite low as well. (You can of course provide more RAM to try and increase overall throughput, and you probably should - but you don't have to. That makes it a great fit for running on small cloud VM's, where RAM itself can be at a premium.)
Java's GCs are a generation ahead, though, in both throughput-oriented and latency-sensitive workloads [1]. Though Go's GC did/does get a few improvements and it is much better than it was a few years ago.
[1] ZGC has basically decoupled the heap size from the pause time, at that point you get longer pauses from the OS scheduler than from GC.
> But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
I got insta-rejected in an interview when I said this in response to the panel's question about 'thoughts on golang'.
Like, they said 'the interview is over' and showed me the (virtual) door. I was stunned lol. This was during peak golang mania. Not sure what happened to rancherlabs.
They probably thought you weren't going to be a good fit for writing idiomatic Go. One of the things many people praise Go for is its standard style across codebases, if you don't like it, you're liable to try and write code that uses different patterns, which is painful for everyone involved.
> I find myself wishing for Optional[T] quite often.
Well, so long as you don't care about compatibility with the broad ecosystem, you can write a perfectly fine Optional yourself:
type Optional[Value any] struct {
value Value
exists bool
}
// New empty.
func New[Value any]() Optional[Value] {}
// New of value.
func Of[Value any](value Value) Optional[Value] {}
// New of pointer.
func OfPointer[Value any](value *Value) Optional[Value] {}
// Only general way to get the value.
func (o Optional[Value]) Get() (Value, bool) {}
// Get value or panic.
func (o Optional[Value]) MustGet() Value {}
// Get value or default.
func (o Optional[Value]) GetOrElse(defaultValue Value) Value {}
// JSON support.
func (o Optional[Value]) MarshalJSON() ([]byte, error) {}
func (o *Optional[Value]) UnmarshalJSON(data []byte) error {}
// DB support.
func (o *Optional[Value]) Scan(value any) error {}
func (o Optional[Value]) Value() (driver.Value, error) {}
But you probably do care about compatibility with everyone else, so... yeah it really sucks that the Go way of dealing with optionality is slinging pointers around.
You can write `Optional`, sure, but you can't un-write `nil`, which is what I really want. I use `Optional<T>` in Java as much as I can, and it hasn't saved me from NullPointerException.
You're not being very precise about your exact issues. `nil` isn't anywhere as much of an issue in Go as it is in Java because not everything is a reference to an object. A struct cannot be nil, etc. In Java you can literally just `return null` instead of an `Optional<T>`, not so in Go.
There aren't many possibilities for nil errors in Go once you eliminate the self-harm of abusing pointers to represent optionality.
For JSON, you can't encode Optional[T] as nothing at all. It has to encode to something, which usually means null. But when you decode, the absence of the field means UnmarshalJSON doesn't get called at all. This typically results in the default value, which of course you would then re-encode as null. So if you round-trip your JSON, you get a materially different output than input (this matters for some other languages/libraries). Maybe the new encoding/json/v2 library fixes this, I haven't looked yet.
Also, I would usually want Optional[T]{value:nil,exists:true} to be impossible regardless of T. But Go's type system is too limited to express this restriction, or even to express a way for a function to enforce this restriction, without resorting to reflection, and reflection has a type erasure problem making it hard to get right even then! So you'd have to write a bunch of different constructors: one for all primitive types and strings; one each for pointers, maps, and slices; three for channels (chan T, <-chan T, chan<- T); and finally one for interfaces, which has to use reflection.
Ideally, I would want Optional[T] to encode the same as T when a value is present, and to encode in a configurable way when the value is absent. Admittedly, the nothing to null problem exists with *T too, and even with *T and `json:",omitempty"`, you get the opposite problem (null turns to nothing). I didn't think about that at the time, so it's really more of an issue with encoding/json rather than Optional[T] per se. However, you can't implement MarshalJSON and output nothing as far as I know.
The remarkable thing to me about Go is that it was created relatively recently, and the collective mindshare of our industry knew better about these sorts of issues. It would be like inventing a modern record player today with fancy new records that can't be damaged and last forever. Great... but why the fuck are we doing that? We should not be writing low level code like this with all of the boilerplate, verbosity, footguns. Build high level languages that perform like low level languages.
I shouldn't fault the creators. They did what they did, and that is all and good. I am more shocked by the way it has exploded in adoption.
I still don't understand why defer works on function scope, and not lexical scope, and nobody has been able to explain to me the reason for it.
In fact, this was so surprising to me that I only found out about it when I wrote code that processed files in a loop, and it started crashing once the list of files got too big, because defer didn't close the handles until the function returned.
When I asked some other Go programmers, they told me to wrap the loop body in an anonymous func and invoke that.
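That workaround looks like this; a sketch with a hypothetical `processAll`:

```go
package main

import (
	"fmt"
	"os"
)

// processAll opens each file inside an anonymous function so the
// deferred Close runs once per iteration, not piled up until
// processAll itself returns.
func processAll(paths []string) error {
	for _, path := range paths {
		err := func() error {
			f, err := os.Open(path)
			if err != nil {
				return err
			}
			defer f.Close() // runs when the anonymous function returns
			// ... read from f ...
			return nil
		}()
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(processAll(nil))
}
```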
Other than that (and some other niggles), I find Go a pleasant, compact language, with an efficient syntax, that kind of doesn't really encourage people trying to be cute. I started my Go journey rewriting a fairly substantial C# project, and was surprised to learn that despite it having like 10% of the features of C#, the code ended up being smaller. It also encourages performant defaults, like not forcing GC allocation at every turn, very good and built-in support for codegen for stuff like serialization, and no insistence on 'eating the world' like C# does, with ORMs that showcase you can write C# instead of SQL for an RDBMS, or gRPC done by annotating C# objects. In Go, you do SQL by writing SQL, and you do gRPC by writing protobuf specs.
Sometimes you want lexical scope, and sometimes function scope. For example, maybe you open a bunch of files in a loop and need them all open for the rest of the function.
Right now it's function scope; if you need it lexical scope, you can wrap it in a function.
Suppose it were lexical scope and you needed it function scope. Then what do you do?
You can start a new scope with `{}` in Go. If I have a bunch of temp vars I'll declare the final result outside the braces and then do the work inside. But these days I'll just write a function. It's clearer and easier to test.
Someone already replied, but in general when you conditionally acquire a resource, but continue on failing. E.g., if you manage to acquire it, defer the Close() call, otherwise try to get another resource.
Another example I found in my code is a conditional lock. The code runs through a list of objects it might have to update (note: it is only called in one thread). As an optimization, it doesn't acquire a lock on the list until it finds an object that has to be changed. That allows other threads to use/lock that list in the meantime instead of waiting until the list scan has finished.
Really? I find the opposite is true. If I need lexical scope then I’d just write, for example
f.Close() // without defer
The reason I might want function scope defer is because there might be a lot of different exit points from that function.
With lexical scope, there’s only three ways to safely jump the scope:
1. Reaching the end of the procedure, in which case you don’t need a defer
2. A ‘return’, in which case you’re also exiting the function scope
3. A ‘break’ or ‘continue’, which admittedly could benefit from a lexical-scope defer, but those are also generally trivial to break into their own functions; and arguably should be, if your code is complex enough that you’ve got enough branches to want a defer.
If Go had other control flows like try/catch, and so on and so forth, then there would be a stronger case for lexical defer. But it’s not really a problem for anyone aside from those who are also looking for other features that Go doesn’t support either.
Minor nit, but the second example is ostensibly still just function scoping (not literally, but pragmatically), because you don’t contain any branching outside of the try block within your method.
But that’s a moot point because I appreciate it’s just an example. And, more importantly, Go doesn’t support the kind of control flows you’re describing anyway (as I said in my previous post).
A lot of the comments here about ‘defer’ make sense in other languages that have different idioms and features to Go. But they don’t apply directly to Go because you’d have to make other changes to the language first (eg implementing try blocks).
That's a theoretical problem that almost never surfaces in practice.
Using `defer`/`recover` is computationally expensive within hot paths. And since Go encourages errors to be surfaced via the `error` type, when writing idiomatic Go you don't actually need to raise exceptions all that often.
So panics are reserved for instances where your code reaches a point that it cannot reasonably continue.
This means you want to catch panics at boundary points in your code.
Given that global state is an anti-pattern in any language, you'd want to wrap your mutex, file, whatever operations in a `struct` or its own package and instantiate it there. So you can have a destructor on that which is still caught by panic and not overly reliant on `defer` to achieve it.
This actually leads to my biggest pet peeve in Go. It's not `x, err := someFunction()`, nor is it `defer`/`panic`; these are all just ugly syntax that doesn't actually slow you down. My biggest complaint is the lack of constructor and destructor methods for structs.
The `NewClass`-style way of initialising types is an ugly workaround, and it constantly requires checking whether libraries require manual initialisation before use. Not a massive time sink, but it's not something the IDE can easily hint to you, so you do get pulled from your flow to either Google that library or check what `New...` functions are defined in the IDE's syntax completion. Either way, it's a distraction.
The lack of a destructor, though, does really show up all the other weaknesses in Go. It then makes `defer` so much more important than it otherwise should be. It means the language then needs runtime hacks for you to add your own dereference hooks[0][1] (this is a problem I run into often with CGO where I do need to deallocate, for example, texture data from the GPU). And it means you need to check each struct to see if it includes a `Close()` method.
I've heard the argument against destructors is that they don't catch errors. But the counterargument to that is the `defer x.Close()` idiom, where errors are ignored anyway.
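For what it's worth, the idiom can be written to surface the Close error; a common sketch uses a named return (names here are illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// writeFile shows the error-aware variant of `defer f.Close()`:
// a named return lets the deferred func report a Close failure
// that the bare `defer f.Close()` idiom silently drops.
func writeFile(path string, data []byte) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer func() {
		if cerr := f.Close(); cerr != nil && err == nil {
			err = cerr
		}
	}()
	_, err = f.Write(data)
	return err
}

func main() {
	tmp, _ := os.CreateTemp("", "demo")
	tmp.Close()
	defer os.Remove(tmp.Name())
	fmt.Println(writeFile(tmp.Name(), []byte("hi")))
}
```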
I think that change, and tuples too so `err` doesn't always have to be its own variable, would transform Go significantly into something that feels just as ergonomic as it is simple to learn, and without harming the readability of Go code.
You do what the compiler has to do under the hood: at the top of the function create a list of open files, and have a defer statement that loops over the list closing all of the files. It's really not a complicated construct.
OK, what happens now if you have an error opening one of those files, return an error from inside the for loop, and forget to close the files you'd already opened?
You put the files in the collection as you open them, and you register the defer before opening any of them. It works fine. Defer should be lexically scoped.
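Spelled out, the construct the parent comments describe looks roughly like this (names are illustrative):

```go
package main

import (
	"fmt"
	"os"
)

func openAll(paths []string) error {
	var open []*os.File
	// Register the cleanup before opening anything, so an error
	// partway through still closes whatever was already opened.
	defer func() {
		for _, f := range open {
			f.Close()
		}
	}()
	for _, p := range paths {
		f, err := os.Open(p)
		if err != nil {
			return err // files opened so far are closed by the defer
		}
		open = append(open, f)
	}
	// ... use all the files together for the rest of the function ...
	return nil
}

func main() {
	fmt.Println(openAll([]string{"/no/such/file"}))
}
```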
I think you hit the nail on the head: I think it's the stupid decision on the Go designers' part to make panics recoverable. This necessitates stack unwinding, meaning defers still need to run if a panic happens down the stack.
Since they didn't want to have a 'proper' RAII unwinding mechanism, this is the crappy compromise they came up with.
I’ve worked with languages that have both, and find myself wishing I could have function-level defer inside conditionals when I use the block-level languages.
Yes it does: function-scope defer needs a dynamic data structure to keep track of pending defers, so it's not zero-cost.
It can be also a source of bugs where you hang onto something for longer than intended - considering there's no indication of something that might block in Go, you can acquire a mutex, defer the release, and be surprised when some function call ends up blocking, and your whole program hangs for a second.
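A common way to avoid that is to keep the critical section in its own small function, so the deferred release runs before the slow call in the caller; a sketch with hypothetical names:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type counter struct {
	mu sync.Mutex
	n  int
}

// snapshot keeps the critical section in its own function, so the
// deferred Unlock runs as soon as it returns.
func (c *counter) snapshot() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}

func (c *counter) report() {
	n := c.snapshot()            // lock held only inside snapshot
	time.Sleep(time.Millisecond) // slow call happens unlocked
	fmt.Println("count:", n)
}

func main() {
	c := &counter{n: 42}
	c.report()
}
```

Deferring the Unlock directly inside report would hold the mutex across the slow call, which is the hang the parent describes.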
I think it's only a real issue when you're coming from a language that has different rules. Block-scoping (and thus not being able to e.g. conditionally remove a temp file at the end of a function) would be equally surprising for someone coming from Go.
But I do definitely agree that the dynamic nature of defer and it not being block-scoped is probably not the best
Having to wrap a loop body in a function that's immediately invoked seems like it would make the code harder to read. Especially for a language that prides itself on being "simple" and "straightforward".
I've worked almost exclusively on a large Golang project for over 5 years now and this definitely resonates with me. One component of that project is required to use as little memory as possible, and so much of my life has been spent hitting rough edges with Go on that front. We've hit so many issues where the garbage collector just doesn't clean things up quickly enough, or we get issues with heap fragmentation (because Go, in its infinite wisdom, decided not to have a compacting garbage collector) that we've had to try and avoid allocations entirely. Oh, and when we do have those issues, it's extremely difficult to debug. You can take heap profiles, but those only tell you about the live objects in the heap. They don't tell you about all of the garbage and all of the fragmentation. So diagnosing the issue becomes a matter of reading the tea leaves. For example, the heap profile says function X only allocated 1KB of memory, but it's called in a hot loop, so there's probably 20MB of garbage that this thing has generated that's invisible on the profile.
We pre-allocate a bunch of static buffers and re-use them. But that leads to a ton of ownership issues, like the append footgun mentioned in the article. We've even had to re-implement portions of the standard library because they allocate. And I get that we have a non-standard use case, and most programmers don't need to be this anal about memory usage. But we do, and it would be really nice to not feel like we're fighting the language.
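The append footgun with a shared, pre-allocated buffer looks roughly like this:

```go
package main

import "fmt"

func main() {
	buf := make([]byte, 0, 8) // pre-allocated, reused buffer

	a := append(buf, 'a', 'b')
	b := append(buf, 'x', 'y') // same backing array: overwrites a's bytes

	// Both slices alias buf's backing array, so the second append
	// clobbered the first one's contents.
	fmt.Println(string(a), string(b)) // prints "xy xy"
}
```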
I've found that when you need this it's easier to move stuff offheap, although obviously that's not entirely trivial in a GC language, and it certainly creates a lot of rough edges. If you find yourself writing what's essentially, e.g. C++ or Rust in Go, then you probably should just rewrite that part in the respective language when you can :)
Perhaps the new "Green Tea" GC will help? It's described as "a parallel marking algorithm that, if not memory-centric, is at least memory-aware, in that it endeavors to process objects close to one another together."
I saw that! I’m definitely interested in trying it out to see if it helps for our use case. Of course, at this point we’ve reduced allocations so much the GC doesn’t have a ton of work to do, unless we slip up somewhere (which has happened). I’ll probably have to intentionally add some allocations in a hot path as a stress test.
What I would absolutely love is a compacting garbage collector, but my understanding is Go can’t add that without breaking backwards compatibility, and so likely will never do that.
> One component of that project is required to use as little memory as possible, and so much of my life has been spent hitting rough edges with Go on that front.
You made a poor choice of language for the problem. It'd be a good fit for C/C++/Rust/Zig.
I know this comment isn't terribly helpful, so I'm sorry, but it also sounds like Go is entirely the wrong language for this use case and you and your team were forced to use it for some corporate reason, like, the company only uses a subset of widely used programming languages in production.
I've heard the term "beaten path" used for these languages, or languages that an organization chooses to use and forbids the use of others.
No, Go isn’t actually that widely used at my company. The original developers chose Go because they thought it was a good fit for our use case. We were particularly looking for a compiled language that produces binaries with minimal dependencies, didn’t have manual memory management, and was relatively mature (I think Rust was barely 1.0 at the time). We knew we wanted to limit memory usage, but it was more of a “nice to have” than anything else. And Go worked pretty well. It was in production for a couple years before we started getting burnt by these issues. We are looking at porting this to Rust, but that’s a big lift. This is a 50K+ line code base that’s pretty battle tested.
> The original developers chose Go because they thought it was a good fit for our use case.
I don't completely get this. If your memory requirements are strict, this makes little to no sense to me. I was programming J2ME games 20 years ago for Nokia devices. We were trying to fit games into 50-128kb of RAM, and all of this in Java, of all languages. No sane Java developer would have looked at that code without fainting: no dynamic allocations, everything was static, byte and char were the most common data types used. Images in the games were shaved down, no headers, nothing. You really have to think it through if you have memory constraints on your target device.
The strictness of our memory requirements wasn’t made apparent until years later. For background, the application is a system daemon running on end-user servers. So every byte we allocate, every cycle of CPU we use, is a byte or cycle taken away from our customers. We don’t provide firm guarantees or formal SLAs on our memory usage, but we do try to minimize it whenever possible. Because what we don’t want is for someone to upgrade agent versions and suddenly our daemon is using significantly more memory and causing OOMs, or using more CPU and force customers to scale out their fleet. Our p99 cpu and memory usage right now are more or less the same as they were two years ago (RSS is under 40MB last I checked)
So it’s not like we’re running on a machine with only kilobytes of RAM, but we do want to minimize our usage.
I worked briefly on extending a Go static site generator someone wrote for a client. The code was very clear and easy to read, but difficult to extend due to the many rough edges with the language. Simple changes required altering a lot of code in ways that were not immediately obvious. The ability to encapsulate and abstract is hindered in the name of “simplicity.” Abstraction is the primary way we achieve simple and easy-to-extend code. John Ousterhout defined a complex program as one that is difficult to extend, rather than necessarily being large or difficult to understand at scale. The average Go program seems to violate this principle a lot. Programs appear “simple” but extension proves difficult and fraught.
Go is a case of the emperor having no clothes. Telling people that they just don’t get it or that it’s a different way of doing things just doesn’t convince me. The only thing it has going for it is a simple dev experience.
I find the way people talk about Go super weird. If people have criticisms, others almost always respond that the language is just "fine" and kind of shame you for wanting more. People say Go is simpler, but having to write a for loop to get the list of keys of a map is not simpler.
I agree with your point, but you'll have to update your example of something Go can't do
> having to write a for loop to get the list of keys of a map
We now have the stdlib "maps" package, you can do:
keys := slices.Collect(maps.Keys(someMap))
With the wonder of generics, it's finally possible to implement that.
Now if only Go was consistent about methods vs functions, maybe then we could have "keys := someMap.Keys()" instead of it being a weird mix like `http.Request.Headers.Set("key", "value")` but `map["key"] = "value"`
I haven't used Go since 2024, but I was going to say something similar--seems like I was pretty happy doing all my functional-style coding in Go. The problem for me was the client didn't want us to use it. We were given the choice between Java (ugh) and Python to build APIs. We chose Python, because I cross my arms and bite my lip and refuse to write any more Java in these days of containers as the portability. I never really liked Java, or maybe I never really liked the kinds of jobs you get using Java? <-- that
Fair I stopped using Go pre-generics so I am pretty out of date. I just remember having this conversation about generics and at the time there was a large anti-generics group. Is it a lot better with generics? I was worried that a lot of the library code was already written pre-generics.
The generics are a weak mimicry of what generics could be, almost as if to say "there we did it" without actually making the language that much more expressive.
For example, you're not allowed to write the following:
type Option[T any] struct { t *T }
func (o *Option[T]) Map[U any](f func(T) U) *Option[U] { ... }
That fails because methods can't have type parameters, only structs and functions. It hurts the ergonomics of generics quite a bit.
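The usual workaround is a top-level function, which compiles but reads worse than a method chain would (names here are just illustrative):

```go
package main

import "fmt"

type Option[T any] struct{ t *T }

func Some[T any](v T) Option[T] { return Option[T]{t: &v} }
func None[T any]() Option[T]    { return Option[T]{} }

// Map must be a free function: a method cannot introduce the new
// type parameter U, so o.Map(f) is not expressible in Go.
func Map[T, U any](o Option[T], f func(T) U) Option[U] {
	if o.t == nil {
		return None[U]()
	}
	return Some(f(*o.t))
}

func main() {
	o := Map(Some(21), func(n int) int { return n * 2 })
	fmt.Println(*o.t) // 42
}
```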
And, as you rightly point out, the stdlib is largely pre-generics, so now there are a bunch of duplicate functions, like "sort.Strings" and "slices.Sort", "atomic.Value" and "atomic.Pointer", quite possibly a sync/v2 soon https://github.com/golang/go/issues/71076, etc.
The old non-generic versions also aren't deprecated typically, so they're just there to trap people that don't know "no never use atomic.Value, always use atomic.Pointer".
> Now if only Go was consistent about methods vs functions
This also hurts discoverability. `slices`, `maps`, `iter`, `sort` are all top-level packages you simply need to know about to work efficiently with iteration. You cannot just `items.sort().map(foo)`, guided and discoverable by auto-completion.
Ooh! Or remember when a bunch of people acted like they had ascended to heaven for looking down on syntax highlighting because Rob said something about it being a distraction? Or the swarms blasting me for insisting GOPATH was a nightmare that could only be born of Google's hubris (literally at the same time that `godep` was a thing and Kubernetes was spending significant effort just fucking dealing with GOPATH)?
Happy to not be in that community, happy to not have to write (or read) Go these days.
And frankly, most of the time I see people gushing about Go, it's for features that trivially exist in most languages that aren't C, or are entirely subjective like "it's easy" (while ignoring, you know, reality).
So you used Go once, briefly, and yet you feel competent to pass this judgement so easily?
As someone who's been doing Go since 2015, working on dozens of large codebases counting probably a million lines total, across multiple teams, your criticisms do not ring true.
Go is no worse than C when it comes to extensibility, or C# or Java for that matter. Go programs are only extensible to the extent (ha) developers design their codebases right. Certainly, Go trades expressivity for explicitness more than some languages. You're encouraged to have fewer layers of abstraction and be more concrete and explicit. But in no way does that impede being able to extend code. The ability to write modular, extensible programs is a skill that must be learned, not something a programming language gives you for free.
It sounds like you worked on a poorly constructed codebase and assumed it was Go's fault.
It certainly isn’t impossible to write good code in Go. Perhaps the code base I was working on was bad — it didn’t seem obvious to me that it was. Go is not a bad language in the way that brainfuck is a bad language.
I think Java and C# offer clearly more straightforward ways to extend and modify existing code. Maybe the primary ways extension in Java and C# works are not quite the right ones for every situation.
The primary skill necessary to write modular code is first knowing what the modular interfaces is and second being able to implement it in a clean fashion. Go does offer a form of interfaces. But precisely because it encourages you to be highly explicit and avoid abstraction, it can make it difficult for you to implement the right abstraction and therefore complicate the modular interfaces.
Programming is hard. I don’t think adopting a kind of ascetic language like Go makes programming easier overall. Maybe it’s harder to be an architecture astronaut in Go, but only by eliminating entire classes of abstraction that are sometimes just necessary. Sometimes, inheritance is the right abstraction. Sometimes, you really need highly generic and polymorphic code (see some of the other comments for issues with Go’s implementation of generics).
Go has its fair share of flaws but I still think it hits a sweet spot that no other server side language provides.
It’s faster than Node or Python, with a better type system than either. It’s got a much easier learning curve than Rust. It has a good stdlib and tooling. Simple syntax with usually only one way to do things. Error handling has its problems but I still prefer it over Node, where a catch clause might receive just about anything as an “error”.
Am I missing a language that does this too or more? I’m not a Go fanatic at all, mostly written Node for backends in my career, but I’ve been exploring Go lately.
> It’s faster than Node or Python, with a better type system than either. It’s got a much easier learning curve than Rust. It has a good stdlib and tooling. Simple syntax with usually only one way to do things. Error handling has its problems but I still prefer it over Node, where a catch clause might receive just about anything as an “error”.
I feel like I could write this same paragraph about Java or C#.
Just because you can learn about something doesn't mean you need to. C# now offers top-level programs that are indistinguishable from python scripts at a quick glance. No namespaces, classes or main methods are required. Just the code you want to execute and one simple file.
The benefit of the language being small and "normal" idioms mostly followed and the std library being good-enough that people rarely import another dep for something it can handle is that it's by far the easiest language in which to read my dependencies that I've ever used.
If I hit a point where I need to do that in pretty much any other language, I'll cast about for some way to avoid doing it for a while (to include finding a different dependency to replace that one) because it's almost always gonna be a time-suck and may end up yielding nothing useful at all without totally unreasonable amounts of time spent on it, so I may burn time trying and then just have to abandon the effort.
In go, I unhesitatingly hop right in and find what I need fast, just about every time.
It's the polar opposite of something like JavaScript (or TypeScript—it doesn't avoid this problem), where you can have three libraries and all three read like a totally different language from the one you're writing, and also like totally different languages from one another. Ugh. This one was initially written during the "everything should be a HOF" trend and ties itself in knots to avoid ever treating the objects it's implicitly instantiating all over the place as objects... this one uses "class" liberally... this one extensively leans on the particular features of prototypal inheritance, OMG, kill me now... this one imports Lodash, sigh, here we go... et cetera.
I mean thats fine, but thats hardly applicable to the ease of throwing a new dev into a very large c# codebase and how quickly they can ramp up on the language.
Any large codebase has a large ramp up time by virtue of being large. And the large codebase can have devex automation to get past the initial ceremony setup of larger languages like Java. It feels like the wrong thing to optimize for. As a better alternative to small services that would've been made in python or node, yes for sure, then the quick setup and simplicity of go makes sense. Which is why the biggest audience of people who use go and used another language previously is python engineers and people who want to make small network services.
At the larger codebase go company I worked at, the general conclusion was: Go is a worse Java. The company should've just used Java in the end.
I mostly agree with you except the simple syntax with one way of doing things. If my memory serves me, Java supports at least 2 different paradigms for concurrency, for example, maybe more. I don’t know about C#. Correct me if wrong.
There is no one paradigm for concurrency; no method is strictly better than the others. Channels are not the only primitive used in Go either, so it's a bit of a moot point.
What's important is how good primitives you have access to. Java has platform and virtual threads now (the latter simplifying a lot of cases where reactive stuff was prevalent before) with proper concurrent data structures.
But that's only because they're older and were around before modern concurrent programming was invented.
In C#, for example, there are multiple ways, but you should generally be using the modern approach of async/Task, which is trivial to learn and used exclusively in examples for years.
Maybe this is a bit pedantic, but it bothers me when people refer to "Node" as a programming language. It's not a language, it's a JavaScript runtime. Which to that you might say "well when people say Node they just mean JavaScript". But that's also probably not accurate, because a good chunk of modern Node-executed projects are written in TypeScript, not JavaScript. So saying "Node" doesn't actually say which programming language you mean. (Also, there are so many non-Node ways to execute JavaScript/TypeScript nowadays)
Anyway, assuming you're talking about TypeScript, I'm surprised to hear that you prefer Go's type system to TypeScript's. There are definitely cases where you can get carried away with TypeScript types, but due to that expressiveness I find it much more productive than Go's type system (and I'd make the same argument for Rust vs. Go).
My intent was just to emphasize that I’m comparing Go against writing JavaScript for the Node runtime and not in the browser, that is all, but you are correct.
Regarding Typescript, I actually am a big fan of it, and I almost never write vanilla JS anymore. I feel my team uses it well and work out the kinks with code review. My primary complaint, though, is that I cannot trust any other team to do the same, and TS supports escape hatches to bypass or lie about typing.
I work on a project with a codebase shared by several other teams. Just this week I have been frustrated numerous times by explicit type assertions of variables to something they are not (`foo as Bar`). In those cases it’s worse than vanilla JS because it misleads.
Yeah, but no one is using v8 directly, even though technically you could if you wanted. Node.js is as much JavaScript as LuaJIT is Lua, or GCC compiles C.
Fair, but I think the JavaScript ecosystem is unique in that the language is very separate from the thing that executes/compiles it. When you write Go, 99.9% of the time you're writing for the Go compiler.
When you write JavaScript (or TypeScript that gets transpiled), it's not as easy to assume the target is Node (V8). It could be Bun (JavaScriptCore), Deno, a browser, etc.
Apparently not, because I first assumed that he was talking about TypeScript considering that JavaScript doesn't have much of type system to compare to.
Yeah the big problem is that most languages have their fair share of rough edges. Go is performant and portable* with a good runtime and a good ecosystem. But it also has nil pointers, zero values, no destructors, and no macros. (And before anyone says macros are bad, codegen is worse, and Go has to use a lot of codegen to get around the lack of macros).
There are languages with fewer warts, but they're usually more complicated (e.g. Rust), because most of Go's problems are caused by its creators' fixation with simplicity at all costs.
I thought it was obvious that codegen was better than macros—at least, textual macros. You can't tell me Ken Thompson omitted macros from the Golang design because he didn't have experience using languages with macro systems!
Even AST-based macro systems have tricky problems like nontermination and variable capture. It can be tough to debug why your compiler is stuck in an infinite macro expansion loop. Macro systems that solve these problems, like the R⁵RS syntax-rules system, have other drawbacks like very complex implementations and limited expressive power.
And often there's no easy way to look at the code after it's been through the macro processor, which makes bugs in the generated code introduced by buggy macros hard to track down.
By contrast, if your code generator hangs in an infinite loop, you can debug it the same way you normally debug your programs; it doesn't suffer from tricky bugs due to variable capture; and it's easy to look at its output.
I’ve only used Go for a little toy project but I’m surprised to hear the opinion that it has a better type system than Node, a runtime for which the defacto type system is typescript!
Yes, Python is massively ahead there. The largest wart is that types can be out of sync with actual implementation, with things blowing up at runtime -- but so can Go with `any` and reflection.
Python, for a number of years at this point, has had structural (!) pattern matching with unpacking, type-checking baked in, with exhaustiveness checking (depending on the type checker you use). And all that works at "type-check time".
It can also facilitate type-state programming through class methods.
Libraries like Pydantic are fantastic in their combination of ergonomics and type safety.
The prime missing piece is sum types, which need language-level support to work well.
As long as none of the code you wrote ten years ago is worth anything, and you don't expect to want to use the code you're writing today ten years from now. Python is useful for prototyping.
Python with a library like Pydantic isn't bad—I wouldn't rate base Python as being near Go's level, at all, though you can get it up to something non-painful with libraries.
Go (and lots of other languages...) wreck it on dependency management and deployment, though. :-/ As the saying goes, "it was easier to invent Docker than fix Python's tooling".
Yeah I think, given its gradual typing approach, that any discussion about the quality and utility of Python's type system assumes that one is using one of the best in class type checkers right now.
I didn't really use it much until the last few years. It was limited and annoyingly verbose. Now it's great: you don't even have to explicitly annotate covariant/contravariant types, and a lot of what used to be clumsy annotation with imports from typing is now just specified with normal Python.
And best of all, more and more libraries are viewing type support as a huge priority, so there's usually no more having to download type stubs and annotation packages and worry about keeping them in sync. There are some libraries that do annoying things like adding " | None" after all their return types to allow themselves to be sloppy, but at least they are sort of calling out to you that they could be sloppy instead of letting it surprise you.
It's now good and easy enough that it saves me time to use type annotations even for small scripts, as the time it saves from quickly catching typos or messing up a return type.
Like you said, Pydantic is often the magic that makes it really useful. It is just easy enough and adds enough value that it's worth not lugging around data in dicts or tuples.
My main gripe with Go's typing has always been that the structural typing of its interfaces is convenient, but really it's convenient in the same way that duck typing is. In the same way that a hunter with a duck call counts as a duck under duck typing, an F16 and a VCR are both things that have an "eject" feature.
The real cream is that there's barely any maintenance. The code I wrote 15 years ago still works
That’s the selling point for me. If I’m coming to a legacy codebase that no one still around wrote, I pray it's Go, because then it just keeps working through upgrades of the compiler and, generally, of the libraries used.
I have a deep hatred of Go for all the things it doesn't have, including a usable type system (if I cannot write SomeClass<T where T extends HorsePlay> or similiar, the type system is not usable for me).
For NodeJS development, you would typically write it in Typescript - which has a very good type system.
Personally I have also written serverside C# code, which is a very nice experience these days. C# is a big language these days though.
It definitely hits a sweet spot. There is basically no other faster, widely used language in production that is predominantly used for web services. You can argue Rust, but I just don't see it in job listings. And virtually no one is writing web services in C or C++ directly.
I personally don't like Go, and it has many shortcomings, but there is a reason it is popular regardless:
Go is a reasonably performant language that makes it pretty straightforward to write reliable, highly concurrent services that don't rely on heavy multithreading - all thanks to the goroutine model.
There really was no other reasonably popular, static, compiled language around in this space when Go came out.
And there still barely is - the only real competitor that sits in a similar space is Java with the new virtual threads.
Languages with async/await promise something similar, but in practice are burdened with a lot of complexity (avoiding blocking in async tasks, function colouring, ...)
I'm not counting Erlang here, because it is a very different type of language...
So I'd say Go is popular despite the myriad of shortcomings, thanks to goroutines and the Google project street cred.
Slowly but surely, with efforts like virtual threads, ZGC, Lilliput, Leyden, and Valhalla, the JVM has been closing the gap with Go.
The change from Java 8 to 25 is night and day. And the future looks bright. Java is slowly bringing in more language features that make it quite ergonomic to work with.
I'm still traumatised by Java from my earlier career. So many weird patterns, FactoryFactories and Spring Framework and ORMs that work 90% of the time and the 10% is pure pain.
I have no desire to go back to Java no matter how much the language has evolved.
For me C# has filled the void of Java in enterprise/gaming environments.
C# is a highly underrated language that has evolved very quickly over the last decade into a nice mix of OOP and functional.
It's fast enough, easy enough (being very similar now to TypeScript), versatile enough, well-documented (so LLMs do a great job), with broad and well-maintained first-party libraries, and the team has over time really focused on improving the terseness of the language (pattern matching and switch expressions are the things I miss most when switching between C# and TS).
EF Core is also easily one of the best ORMs: super mature, stable, well-documented, performant, easy to use, and expressive. Having been in the Node ecosystem for the past year, there's really no comparison for building fast with fewer papercuts (Prisma, Drizzle, etc. all abound with them).
It's too bad that it seems that many folks I've chatted with have a bad taste from .NET Framework (legacy, Windows only) and may have previously worked in C# when it was Windows only and never gave it another look.
C# is great, but the problem with programming languages is that you're not only picking a language, but also a kind of company that uses it and a kind of person who writes it.
Which means if you write C#, you'll encounter a ton of devs who come from an enterprise, banking, or government background, who think a 4-layer enterprise architecture with DTOs and 5-line classes is the only way to write a CRUD app; and worst of all, you'll see a ton of people who learned C# in college a decade ago and refuse to learn anything else.
EF is great, but most people use it because they don't have to learn SQL and databases.
Blazor is great, but most people use it because they don't want to learn Frontend dev, and JS frameworks.
I think you have a point about the types of resources, but in my experience, it's also not hard to separate the wheat from the chaff with pretty simple heuristics (though that is likely very different now with AI and cheating!).
"Modern C#" (if we can differentiate that) has a lot of nice amenities for modeling like immutable `record` types and named tuples. I think where EF really shines is that it allows you to model the domain with persistence easily and then use DTOs purely as projections (which is how I use DTOs) into views (e.g. REST API endpoints).
I can't say for the broader ecosystem, but at least in my own use cases, EFC is primarily used for write scenarios and some basic read scenarios. But in almost all of my projects, I end up using CQRS with Dapper on the read side for more complex queries. So I don't think that it's people avoiding SQL; rather it's teams focused on productivity first.
WRT to Blazor, I would not recommend it in place of JS except for internal tooling (tried it at one startup and switched to Vue + Vite). But to be fair, modern FE development in JS is an absolute cluster of complexity.
Imo, Unity C# is almost 'not real' C#: it uses a completely different programming model, with different object lifetimes and object models.
I know the execution bits are the same, but programming for Unity feels very different in every way than writing ASP.NET code (and more different than moving from ASP.NET to Spring Boot)
I love C#, but have actually found LLMs to be quite bad at producing idiomatic code, because the language is changing so fast that they often don't even know about the latest language (/Blazor) features. I constantly have to undo my initial prompt and rewrite it to tell them that we don't use Startup.cs any more, only Program.cs, and that Program.cs is a flat file and not a class.
I was so glad it died. It was a weird proprietary replacement for Flash, which itself was weird and proprietary, except the new one was owned by a huge company that publicly stated they wanted to crush Linux and friends.
A big chunk of their strategy at the time was around how to completely own the web. I celebrated every time their attempts failed.
As someone who developed in it at the time I found the reason it died was because they made new, slightly incompatible, versions every new Windows release.
I spent untold hours and years mastering inversion of control and late-binding and all those design patterns that were SOOOOO important for interviews only to never really utilize all that stuff because once the apps were built, they rarely, if ever, got reconfigured to do something besides the exact thing they were built for. We might as well have not bothered with any of it and just wrote modular, testable non-OOP code and called it a day. After just about 25 years, I look back at all the time I spent using Spring Framework, Struts, and older products and just kind of shake my head. It was all someone else's money making scheme.
I'm also reminded of the time that Tomcat stopped being an application you deploy to and became just an embedded library in the runtime! It was like the collective light went on that web containers were just a sham. That didn't prevent employers from forcing me to keep using WebSphere/WAS because "they paid for that and by god they're going to use it!" Meanwhile it was totally obsolete as Docker containers swept them all by the wayside.
I wonder what "WebSphere admins" are doing these days? That was once a lucrative role, being able to manage those Jython configs, lol.
Plus it seems hopeful to think you'll be only working with "New java" paradigm when most enterprise software is stuck on older versions. Just like python, in theory you can make great new green field project but 80% of the work in the industry is on older or legacy components.
I guess it's reasonable to be hopeful as a Java developer nowadays.
Modern Java communities are slowly adopting the common FP practice of "making illegal states unrepresentable" and calling it "data-oriented programming". Which is nice for those of us who actively use ADTs. I no longer need to repeatedly explain "what is Option<?>?" or "why ADTs?" whenever I use them; I can just point people to those new resources.
Hopefully, this shift will steer the Java community in a saner direction than the current cargo cult, which believed mutable C-structs (under the guise of the "anemic domain model") plus a garbage collector was OOP.
My criticism of the JVM is that it is no longer useful because we don't do portability using that mechanism anymore. We build applications that run in containers and can be compiled in the exact type of environment they are going to run inside of and we control all of that. The old days of Sun Microsystems and Java needing to run on Solaris, DEC, HP, maybe SGI, and later Linux, are LOOOOOOONG gone. And yet here we still are with portability inside our portability for ancient reasons.
If you believe that's the reason for the JVM (and that it's a "VM" in the traditional sense), you are greatly mistaken. It's like saying C is no longer needed because there is only Arm and x86.
The JVM is a runtime, just like what Go has. It allows for the best observability of any platform (you can literally connect to a prod instance and check e.g. the object allocations) and has stellar performance and stability.
That may be true, but navigating 30 years of accumulated cruft, fragmented ecosystems and tooling, and ever-evolving syntax and conventions, is enough to drive anyone away. Personally, I never want to deal with classpath hell again, though this may have improved since I last touched Java ~15 years ago.
Go, with all its faults, tries very hard to shun complexity, which I've found over the years to be the most important quality a language can have. I don't want a language with many features. I want a language with the bare essentials that are robust and well designed, a certain degree of flexibility, and for it to get out of my way. Go does this better than any language I've ever used.
I can most likely run a 30-year-old compiled .jar file on the latest Java version. Java is the epitome of backwards- and forwards-compatible change, and the language was grown very carefully so the syntax is not too different; someone who hibernated since Java 7 will probably have no problem reading Java 25 code.
> Go, with all its faults, tries very hard to shun complexity
The whole field is about managing complexity. You don't shun complexity, you give tools to people to be able to manage it.
And Go goes the low end of the spectrum, of not giving enough features to manage that complexity -- it's simplistic, not simple.
I think the optimum is actually at Java - it is a very easy language with not much going on (compared to, say, Scala), but with just enough expressivity that you can have efficient and comfortable-to-use libraries for all kinds of stuff (e.g. a completely type-safe SQL DSL).
You try to keep the easy things easy and simple, and to make the hard things easier and simpler, if possible. Simple ain't easy.
I don't hate Java (anymore); it has plenty of utility (like, say... Jira). But when I'm writing Go I pretty much never think "oh, I wish I was writing Java right now." No thanks.
Well, spring is a whole framework that gives you a lot of stuff, but sure, complexity has to live somewhere - fundamentally so.
Without it, you either write that complexity yourself or fail to even recognize why it is necessary in the first place, e.g. failing to realize the existence of SQL injections, cross-site scripting, etc. Backends have some common requirements, and it is pretty rare that your problem wouldn't need these primitives, so as a beginner, I would advise learning the framework as well, the same way you would learn how to fly a plane before attempting it.
For other stuff, there is no requirement to use Spring - vanilla java has a bunch of tools and feel free to hack whatever you want!
> The whole field is about managing complexity. You don't shun complexity, you give tools to people to be able to manage it.
Complexity exists in all layers of computing, from the silicon up. While we can't avoid complexity of real world problems, we can certainly minimize the complexity required for their solutions. There are an infinite amount of problems caused primarily by the self-induced complexity of our software stacks and the hardware it runs on. Choosing a high-level language that deliberately tries to avoid these problems is about the only say I have in this matter, since I don't have the skill nor patience to redo decades of difficult work smarter people than me have done.
Just because a language embraces simplicity doesn't mean that it doesn't provide the tools to solve real world problems. Go authors have done a great job of choosing the right set of trade-offs, unlike most other language authors. Most of the time. I still think generics were a mistake.
Being able to create a self contained Kotlin app (JVM) that starts up quickly and uses the same amount of memory as the equivalent golang app would be amazing.
GraalVM Native Image does that (though the compile time is quite long, but you can just run it on the JVM for development, with hot reload and whatnot, and only do a native compile at release).
Still an issue. The main problem is for native compilation you have to declare your reflection targets upfront. That can be a headache if your framework doesn't support it.
You can get a large portion of what graal native offers by using AppCDS and compressed object headers.
Only those reflection targets that are not "visible" from straightforward code. If you have code that accesses the "stringLiteral" field of a class, it will be auto-registered for you. But if you access it based on user input, you have to register it manually.
Also, quite a few libraries have metadata now denoting these extra reflection targets.
Nonetheless you are right in general, but it depends on your use case.
Well Google isn't really making a ton of new (successful) services these days, so the potential to introduce a new language is quite small unfortunately :). Plus, Go lacks one quite important thing which is ability to do an equivalent of HotSwap in the live service, which is really useful for debugging large complex applications without shutting them down.
Google is 100% writing a whole load of new services, and Go is 13 years old (even older within Google), so it surely has had ample opportunities to take.
As for hot swap, I haven't heard it being used for production, that's mostly for faster development cycles - though I could be wrong. Generally it is safer to bring up the new version, direct requests over, and shut down the old version. It's problematic to just hot swap classes, e.g. if you were to add a new field to one of your classes, how would old instances that lack it behave?
HotSwap is really useful to be able to make small adjustments, e.g. to add a logging statement somewhere to test your hypothesis. It's probably not safe to use to change the behaviour significantly, certainly not in production :)
My vote is for Elixir as well, but it's not a competitor for multiple important reasons. There are some languages in that niche, although too small and immature, like Crystal, Nim. Still waiting for something better.
yeah, if the requirement is "makes it pretty straightforward to write reliable, highly concurrent services that don't rely on heavy multithreading", Elixir is a perfect match.
And even without types (which are coming and are looking good), Elixir's pattern matching is a thousand times better than the horror of Go error handling.
I haven't followed Swift too closely, but ref counting is not a good fit for typical server applications. Sure, value types and such take a lot of load off the GC (and yes, ref counting is a GC), but still, tracing GCs have much better performance on server workloads. (Reference counting when an object is shared between multiple cores requires atomic increments/decrements, and that is very expensive.)
Sure, though RC can't get away from pauses either - ever seen a C++ program seemingly hang at termination? That's a large object graph recursively running its destructors. And the worst thing is that it runs on the mutator thread (the thread doing the actual work).
Also, Java has ZGC that basically solved the pause time issue, though it does come at the expense of some throughput (compared to their default GC).
The only silver bullet we know of is building on existing libraries. These are also non-accidentally the top 3 most popular languages according to any ranking worthy of consideration.
First, we allow main methods to omit the infamous boilerplate of public static void main(String[] args), which simplifies the Hello, World! program to:
class HelloWorld {
    void main() {
        System.out.println("Hello, World!");
    }
}
Second, we introduce a compact form of source file that lets developers get straight to the code, without a superfluous class declaration:
Third, we add a new class in the java.lang package that provides basic line-oriented I/O methods for beginners, thereby replacing the mysterious System.out.println with a simpler form:
Perhaps it is coming sooner than you think...
It all started with adding Value types, now syntactic refinements à la Go... Who knows? :-)
You'll be very happy.
edit: hold on wait, java doesn't have Value types yet...
/jk
refinement: the process of removing impurities or unwanted elements from a substance.
refinement: the improvement or clarification of something by the making of small changes.
public static void in a class with factory of AbstractFactoryBuilderInstances...? right..? Yes, say that again?
We are talking about removing unnecessary syntactic constructs, not adding as some would do with annotations in order to have what? Refinement types perhaps? :)
> public static void in a class with factory of AbstractFactoryBuilderInstances
That's not syntax. Factory builders have nothing to do with syntax and everything to do with code style.
The oxymoron is implying syntax refinements would be inspired by Go of all things, a language with famously basic syntax. I'm not saying it's bad to have basic syntax. But obviously modern Java has a much more refined syntax and it's not because it looks closer to Go.
I always find 'Java is verbose' to be a novice argument from Go coders, when there is so much boilerplate on the Go side of things that's nicely handled on the Java side.
Every function call is 3-5 lines in Go. For any problem which needs to handle errors, the Go code is generally >2x the Java LOC. Go is a language that especially suffers from the "code padding" problem.
It's rich to complain about verbosity coming from Go.
Nonetheless, Java has eased the psvm requirements, you don't even have to explicitly declare a class and a void main method is enough. [1] Not that it would matter for any non-script code.
An expert Ruby programmer can do wonders and be insanely productive, but I think there is a size from which it doesn't scale as nicely (both from a performance and a larger team perspective).
PHP's frameworks are fantastic and they hide a lot from an otherwise minefield of a language (though steadily improved over the years).
Both are decent choices if this is what you/your developers know.
Absolutely no on Java. Even if the core language has seen improvements over the years, choosing Java almost certainly means that your team will be tied to using proprietary / enterprise tools (IntelliJ) because every time you work at a Java/C# shop, local environments are tied to IDE configurations. Not to mention Spring -- now every code review will render "Large diffs are not rendered by default." in Github because a simple module in Java must be a new class at least >500 LOC long.
Local environments are not tied to IDEs at all, but you are doing yourself a disservice if you don't use a decent IDE irrespective of language - they are a huge productivity boost.
And are you stuck in the XML times or what? Spring Boot is insanely productive - as a matter of fact, Go is significantly more verbose than Java, with all the unnecessary if errs.
Local environments are not literally tied to IDEs, but they effectively are in any non-trivially sized project. And the reason is because most Java shops really do believe "you are doing yourself a disservice if you don't use a decent IDE irrespective of language." I get along fine with a text editor + CLI tools in Deno, Lua, and Zig. Only when I enter Java world do the wisest of the wise say "yeah there is a CLI, but I don't really know it. I recommend you download IntelliJ and run these configs instead."
Yes Spring Boot is productive. So is Ruby on Rails or Laravel.
Any production-grade project will use either Maven or Gradle for builds. There are CI/CD pipelines, lints, etc, how would all these work if you could only build through an IDE?
Sure, there are some awfully dated companies that still send changed files over email to each other with no version control, I'm sure some of those are stuck with an IDE config, but to be honest where I have seen this most commonly were some Visual Studio projects, not Java. Even though you could find any of these for any other language, you just need to scale your user base up. A language that hasn't even hit 1.0 will have a higher percentage of technically capable users, that's hardly a surprise.
>Only when I enter Java world do the wisest of the wise say "yeah there is a CLI, but I don't really know it. I recommend you download IntelliJ and run these configs instead."
Then they obviously don't know their tooling well, and I would hesitate to call a jr 'the wisest of the wise'
There are real pain points with async/await, but I find the criticism there often overblown. Most of the issues go away if you go pure async, mixing older sync code with async is much more difficult though.
My experience is mostly with C#, where async/await works very well. You do need to know some basics to avoid problems, but that's the case for essentially every kind of concurrency. They all have footguns.
Count Rust. From what I can see, it's becoming very popular in the microservices landscape. Not hard to imagine why. Multithreading is a breeze. Memory use is low. Latency is great.
I used go for years, and while it's able to get small things up and running quickly, bigger projects soon become death-by-a-thousand-cuts.
Debugging is a nightmare because it refuses to even compile if you have unused X (which you always will have when you're debugging and testing "What happens if I comment out this bit?").
The bureaucracy is annoying. The magic filenames are annoying. The magic field names are annoying. The secret hidden panics in the standard library are annoying. The secret behind-your-back heap copies are annoying (and SLOW). All the magic in go eventually becomes annoying, because usually it's a naively repurposed thing (where they depend on something that was designed for a different purpose under different assumptions, but naively decided to depend on its side effects for their own ever-so-slightly-incompatible machinery - like special file names, and capitalization even though not all characters have such a thing .. was it REALLY such a chore to type "pub" for things you wanted exposed?).
Now that AI has gotten good, I'm rather enjoying Rust because I can just quickly ask the AI why my types don't match or a gnarly mutable borrow is happening - rather than spending hours poring over documentation and SO questions.
I haven't done serious Rust development since AI got good, but I did have a brief play last December and it's shocking how good they are at Rust. It feels like the verbose syntax and having tons of explicit information everywhere just makes it breeze through problems that would trip up a human for ages.
I once described this "debugging" problem to one of the creators and he did not even understand the problem. It is so amateurish you wonder if they ever dipped a toe outside the academic world.
Btw, AI sucks at Go. One would have guessed that such a simple lang would suit ChatGPT. Turns out ChatGPT is much better at Java, C#, Python, and many other langs than at Go.
I wrote a book on Go, so I'm biased. But when I started using Go more than a decade ago, it really felt like a breath of fresh air. It made coding _fun_ again: less boilerplate-heavy than Java, simple enough to pick up, and performance was generally good.
There's no single 'best language', and it depends on what your use-cases are. But I'd say that for many typical backend tasks, Go is a choice you won't really regret, even if you have some gripes with the language.
Often, when I have some home DIY or woodworking problem, I reach for my trusty Dremel:
* The Dremel is approachable: I don't have to worry about cutting off my hand with the jigsaw or set up a jig with the circular saw. I don't have to haul my workpiece out to the garage.
* The Dremel is simple: One slider for speed. Apply spinny bit to workpiece.
* The Dremel is fun: It fits comfortably in my hand. It's not super loud. I don't worry about hurting myself with it. It very satisfyingly shaves bits of stuff off things.
In so many respects, the Dremel is a great tool. But 90% of the time when I use it, it ends up taking me five times as long (but an enjoyable 5x!) and the end result is a wobbly, scratchy mess. I curse myself for not spending the upfront willpower to use the right tool for the job.
I find myself doing this with all sorts of real and software tools: Over-optimizing for fun and ease-of-entry and forgetting the value of the end result and using the proper tool for the job.
I think of this as the "Dremel effect" and I try to be mindful of it when selecting tools.
Most of my coding these days is definitely in the 'for fun' bucket given my current role. So I'd rather take 5x and have fun.
That said, I don't think Go is only fun, I think it's also a viable option for many backend projects where you'd traditionally have reached for Java / C#. And IMO, it sure beats the recent tendency of having JS/Python powering backend microservices.
For the most part I've loved Go since just before 1.0 through today. Nits can surely be picked, but "it's still not good" is a strange take.
I think there is little to no chance it can hold on to its central vision as the creators "age out" of the project, which will make the language worse (and render the tradeoffs pointless).
I think allowing it to become pigeonholed as "a language for writing servers" has cost, and will continue to cost, important mindshare that instead jumps to Rust or remains in Python, etc.
Maybe it's just fun, like harping on about how bad Visual Basic was, which was true but irrelevant, as the people who needed to do the things it did well got on with doing so.
Yep, most of what the author complains about are trivial issues you could find in any language. For contrast, some real, deep-rooted language design problems with Go are:
- Zero values, lack of support for constructors
- Poor handling of null
- Mutability by default
- A static type system not designed with generics in mind
- `int` is not arbitrary precision [1]
- The built-in array type (slices) has poorly considered ownership semantics [2]
They are forcing people to write Typescript code like it’s Golang where I am right now (amongst other extremely stupid decisions - only unit test service boundaries, do not pull out logic into pure functions, do not write UI tests, etc.). I really must remember to ask organisations to show me their code before joining them.
(I realise this isn’t who is hiring, but email in bio)
I really try not to throw anymore with typescript, I do error checking like in Go. When used with a Go backend, it makes context switching really easy...
If you don't like Go, then just let go. I hope nobody forces you to use it.
Some critique is definitely valid, but some of it just sounds like they didn't take the time to grasp the language. It's trade-offs all the way down. For example, there is a lot I like about Rust, but it's still not my favorite language.
Disagree. Most critiques of Go I've read have been weak. This one was decent. And I say that as a big enjoyer of Go.
That said, I really wish there were a revamp where they did things right in terms of nil, scoping rules, etc. However, they've committed to never breaking existing programs (honorable, understandable), so the design space is extremely limited. I prefer dealing with local awkwardness and even excessive verbosity over systemic issues any day.
Few things are truly forced upon me in life but walking away from everything that I don't like would be foolish. There is compromise everywhere and I don't think entering into a tradeoff means I'm not entitled to have opinions about the things I'm trading off.
I don't think the article sounds like someone didn't take the time to grasp the language. It sounds like it's talking about the kind of thing that really only grates on you after you've seriously used the language for a while.
This article was a well-thought-out one from someone who has obviously really used Go to build real things.
I quite like Go and use it when I can. However, I wish there were something like Go, without these issues. It's worth talking about that. For instance, I think most of these critiques are fair but I would quibble with a few:
1. Error scope: yes, this causes code review to be more complex than it needs to be. It's a place for subtle, unnecessary bugs.
2. Two types of nil: yes, this is super confusing.
3. It's not portable: Go isn't as portable as C89, but it's pretty damn portable. It's plenty portable to write a general-purpose pre-built CLI tool in, for instance, which is about my bar for "pragmatic portability."
4. Append ownership & other slice weirdness: yes.
5. Unenforced `defer`: yes, similar to `err`, this introduces subtle bugs that can only be overcome via documentation, careful review, and boilerplate handling.
6. Exceptions on top of err returns: yes.
7. utf-8: Hasn't bitten me, but I don't know how valid this critique is or isn't.
8. Memory use: imo GC is a selling-point of the language, not a detriment.
In my opinion, the section on data ownership contained the most egregious and unforgivable example of go's flaws. The behavior of append in that example is the kind of bug-causing or esoteric behavior that should never make it into any programming language. As a regular writer of go code, I understand why this particular quirk of the language exists, but I hope I never truly "grasp" it to the extent that I forgive it.
I'm surprised people in these comments aren't focusing more on the append example.
Sure but life choices are one thing, but this critique is still valuable. I learned a thing or two, and also think go can improve (I understand it's because I don't grok the language but I still prefer map to append in a loop)
Technically, the term "billion dollar mistake", coined in 1965, would now be a "10 billion dollar mistake" in 2025. Or, if the cost is measured in terms of housing, it would be a "21 billion dollar mistake".
The billion dollar mistake was made in 1965 but the term was coined in 2009, defined as the following:
> I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
I don't agree with most of the article but I believe I know where it comes from.
Golang's biggest shortcoming is that the fact that it touches bare metal isn't visible clearly enough. It provides many high-level features, which creates this ambience of "we've got you", but it fails to properly educate its users that they are going to get dirt on their hands.
Take a slice, for example: even the name means "part of", but in reality it's closer to a "box full of pointers" - what happens when you modify pointer+1? Or "two types of nil": there is a difference between having two bytes (a simplification), one for the struct type and the other for the address of that struct, and having just a NULL - the difference between knowing a house doesn't exist, and being confident the house exists while saying it's in the middle of a volcano beneath the ocean.
The Foo99 critique is another example. If you wanted not 99 loops but 10 billion loops, each with a mere 10 bytes, you'd need ~100 GiB of memory just to exit. If you reused the address block, you'd only use... 10 bytes.
I also recommend trying to implement lexical scope defer in C and putting them in threads. That's a big bottle of fun.
I think it ultimately boils down to what kind of engineer one wants to be. I don't like hand-holding and would rather be left on my own with a rain of unit tests following my code, so Go, Zig, and C (from the low-level languages) just work for me. Some prefer Rust or high-level abstractions. That's also fine.
But IMO poking at Go that it doesn't hide abstractions is like making fun of football of being child's play because not only it doesn't have horses but also has players using legs instead of mallets.
> I believe I know where it comes from […] poking at Go that it doesn't hide abstractions
Author here.
No, this is not where it comes from. I've been coding C for more than 30 years, Go for maybe 12-15, and currently prefer Rust. I enjoy C++ (yes, really) and getting all those handle-less knives to fit together.
No, my critique of Go is that it did not take the lessons learned from decades of theory, what worked and didn't work.
I don't fault Go for its leaky abstractions in slices, for example. I do fault it for creating bad abstraction APIs in the first place, handing out footguns when they are avoidable. I know to avoid the footgun of appending to slices while other slices of the same array may still be accessible elsewhere. But I think it's indefensible to have created that footgun in the year Go was created.
Live long enough, and anybody will make a silly mistake. "Just don't make a mistake" is not an option. That's why programming language APIs and syntax matters.
As for bare metal; Go manages to neither get the benefits possible of being high level, and at the same time not being suitable for bare metal.
It's a missed opportunity. Because yes, in 2007 it's not like I could have pointed to something that was strictly better for some target use cases.
I don't share the experience of Go not being suitable for bare metal. But I do have experience with high-level languages doing similar things through "innovative" thinking. I've seen int overflows in Rust. I've seen libraries, implemented in Elixir, that waited for a UDP packet to be rebroadcast before sending another.
No Turing complete language will ever prevent people from being idiots.
It's not only programming language API and syntax. It's a conceptual complexity, which Go has very low. It's a remodeling difficulty which Rust has very high. It's implicit behavior that you get from high stack of JS/TS libraries stitched together. It's accessibility of tooling, size of the ecosystem and availability of APIs. And Golang crosses many of those checkboxes.
All the examples you've shown in your article were "huh? isn't this obvious?" to me. With your experience in C, I have no idea why you wouldn't want to reuse the same allocation multiple times, instead of keeping all of them separately while reserving allocation space for possibly less than you need.
Even if you assumed all of this should be on the stack, you would still crash or bleed memory through implicit allocations that outlive the stack.
Add 200 goroutines and how does that (pun intended) stack?
Is fixing those perceived footguns really a missed opportunity? Go is getting stronger every year, and while it's hated by some (and I get it; some people like the Rust approach better, and that's _fine_), it's used more and more as a mature and stable language.
Many applications don't even need to worry about the GC. And if you're developing some critical application, pair it with Zig and enjoy cross-compilation sweetness, as close to bare metal as possible, with all the pipes that are needed.
> With your experience in C I have no idea you why you don't want to reuse same allocation multiple times and instead keeping all of them separately while reserving allocation space for possibly less than you need.
Which part are you referring to, here?
> Even if you'd assume all of this should be on stack you still would crash or bleed memory through implicit allocations that exit the stack.
What do you mean by this? I don't mean to be rude, but this sounds confused if you understand how memory works. What do you mean by an allocation that exits the stack bleeding memory?
> All the examples you've shown in your article were "huh? isn't this obvious?" to me.
It is. None of this was new to me. In C++ defining a non-virtual destructor on a class hierarchy is also not new to me, but a fair critique can be made there too why it's "allowed". I do feel like C++ can defend that one from first principles though, in a way that Go cannot.
I'm not sure what you mean by the foo99 thing. I'm guessing this is about defer inside a loop?
> Is fixing those perceived footguns really a missed opportunity?
In practice, none of these things mentioned in the article have been an issue for me, at all. (Upvoted anyway)
What has been an issue for me, though, is working with private repositories outside GitHub (and I have to clarify that, because working with private repositories on GitHub is different, because Go has hardcoded settings specifically to make GitHub work).
I had hopes for the GOAUTH environment variable, but either (1) I'm more dumb and blind than I thought I already was, or (2) there's still no way to force Go to fetch a module using SSH without trying an HTTPS request first. And no, `GOPRIVATE="mymodule"` and `GOPROXY="direct"` don't do the trick, not even combined with Git's `insteadOf`.
Definitely not just you. At my previous job we had a need to fetch private Go modules from Gitlab and, later, a self-hosted instance of Forgejo. CTO and I spent a full day or so doing trial and error to get a clean solution. If I recall correctly, we ultimately resorted to each developer adding `GOPRIVATE={module_namespace}` to their environment and adding the following to their `.netrc`:
```
machine {server} # e.g. gitlab.com
login {username}
password {read_only_api_key} # Must be actual key and not an ENV var
```
Worked consistently, but not a solution we were thrilled with.
- I’ve seen a lot of debate here comparing Go’s issues (like nil handling or error scoping) to Rust’s strengths.
- As someone who’s worked with C/C++ and Fortran, I think all these languages have their own challenges—Go’s simplicity trades off against Rust’s safety guarantees, for example.
- Could someone share a real-world example where Go’s design caused a production issue that Rust or another language would’ve avoided?
- I’m curious how these trade-offs play out in practice.
Sorry, I don't do Go/Rust coding, still on C/C++/Fortran.
It's a niche use case to have software that loads plugins, and it just so happens those plugins are written in Go? No, it's not a niche case. If all the programming you do in Go is web servers, then sure, you won't see this.
Recently I was in a meeting where we were considering adopting Go more widely for our backend services, but a couple of the architect-level guys brought up the two-types-of-nil issue and ultimately shot it down. I feel like they were being a little dramatic about it, but it is startling to me that it's 2025 and the team still has not fixed it. If the only thing you value in language design is never breaking existing code, even when by any definition that existing code is already broken, eventually the only thing using your language will be existing code.
This has already been explained many times, but it's so much fun I'll do it again. :-)
So: The way Go presents it is confusing, but this behavior makes sense, is correct, will never be changed, and is undoubtedly depended on by correct programs.
The confusing thing for people used to C++ or C# or Java or Python or most other languages is that in Go, nil is a perfectly valid pointer receiver for a method to have. The method resolution happens statically at compile time, and as long as the method doesn't try to deref the pointer, all is good.
It still works if you assign to an interface.
package main

import "fmt"

type Dog struct{}
type Cat struct{}

type Animal interface {
    MakeNoise()
}

func (*Dog) MakeNoise() { fmt.Println("bark") }
func (*Cat) MakeNoise() { fmt.Println("meow") }

func main() {
    var d *Dog = nil
    var c *Cat = nil
    var i Animal = d
    var j Animal = c
    d.MakeNoise()
    c.MakeNoise()
    i.MakeNoise()
    j.MakeNoise()
}
This will print
bark
meow
bark
meow
But the interface method lookup can't happen at compile time. So the interface value is actually a pair: the pointer to the type, and the instance value. The type is not nil, hence the interface value is something like (&Cat, nil) and (&Dog, nil) in each case, which is not the interface zero value, (nil, nil).
But it's super confusing because Go type-coerces a nil struct pointer to a non-nil (&type, nil) interface value. There's probably some naming or syntax way to make this clearer.
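A stripped-down sketch of that coercion, reusing the Dog/Animal names from above (the `wrap` helper is illustrative):

```go
package main

import "fmt"

type Animal interface{ MakeNoise() }

type Dog struct{}

func (*Dog) MakeNoise() { fmt.Println("bark") }

// wrap stores a possibly-nil *Dog in an interface. The result is a
// non-nil interface value: the pair (type *Dog, value nil).
func wrap(d *Dog) Animal { return d }

func main() {
	var d *Dog // nil pointer
	i := wrap(d)
	fmt.Println(d == nil) // true
	fmt.Println(i == nil) // false: interface holds (type *Dog, value nil)
}
```

The surprise usually bites when a function returns a concrete error pointer type and the caller compares the resulting `error` interface against nil.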
The underlying reason, which you hint on, is that in Go (unlike Python, Java, C#… even C++) the “type” of an “object” is not stored alongside the object.
A struct{a, b int32} takes 8 bytes of memory. It doesn't use any extra bytes to “know” its type, to point to a vtable of “methods,” to store a lock, or any other object “header.”
Dynamic dispatch in Go uses interfaces, which are fat pointers that store both the type and a pointer to an object.
With this design it's only natural that you can have nil pointers, nil interfaces (no type and no pointer), and typed interfaces to a nil pointer.
This may be a bad design decision, it may be confusing. It's the reason why data races can corrupt memory.
But saying, as the author does, that "The reason for the difference boils down to again, not thinking, just typing" is just lazy.
Just as lazy as it is arguing Go is bad for portability.
I've written Go code that uses syscalls extensively and runs in two dozen different platforms, and found it far more sensible than the C approach.
Yeah, I totally agree -- given Go's design, the behavior makes sense (and changing the behavior just to make it more familiar to users of languages that fundamentally work differently would be silly).
However, the non-intuitive punning of nil is unfortunate.
I'm not sure what the ideal design would be.
Perhaps just making an interface not comparable to nil, but instead something like `unset`.
type Cat struct{}
type Animal interface{}

func main() {
    var c *Cat = nil
    var a Animal = c
    if a == nil { // compile error, can't compare interface to nil
    }
    if a == unset { // false, hopefully intuitively
    }
}
Still, it's a sharp edge you hit once and then understand. I am surprised people get so bothered by it...it's not like something that impairs your use of the language once you're proficient.
(E.g. complaints about nil existing at all, or error handling, are much more relatable!)
(Side note: Go did fix the scoping of captured variables in for/range loops, which was a backwards-incompatible change, but they justified it by empirically showing it fixed more bugs than it caused (very reasonable). C# made the same change with the same justification earlier, which was the inspiration for Go.)
Yeah, it blew my mind when I first learned Go had this problem -- like, people have already tripped over this many times! I was pleasantly surprised to see them fix it though.
I deeply, seriously believe that you should have written the words "It's super confusing", meditated on that for a minute, and then left it at that. It is super confusing. That's it. Nothing else matters. I understand why it is the way it is. I'm not stupid. As you said: it's super confusing, which is relevant when you're picking languages other people at your company (interns, juniors) have to write in.
> “The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
Architect-level is complaining about language quirks? That's low on my priorities for languages. I'd worry more about maturity, tooling support, library support, ease of learning, and availability of developers.
I think our end-state decision, IIRC, was to just expand our usage of TypeScript; which also has Golang beat on all those verticals you list. More mature, way better tooling, way more libraries, easier to hire for, etc.
Though, thinking back, someone should have brought up TypeScript's at least three different ways to represent nil (undefined, null, NaN, and a few others). It's at least a little better in TS, because unlike Go, the type-checker doesn't actively lie to you about how many different states of undefined you might be dealing with.
I like Go, but my main annoyance is deciding when to use a pointer or not use a pointer as variable/receiver/argument. And if its an interface variable, it has a pointer to the concrete instance in the interface 'struct'. Some things are canonically passed as pointers like contexts.
It just feels sloppy and I'm worried I'm going to make a mistake.
This confused me too. It is tricky because sometimes it's more performant to copy the data rather than use a pointer, and there's not a clear boundary as to when that is the case. The advice I was given was "profile your code and make your decision data-driven". That didn't make me happy.
Now I always use pointers consistently for the readability.
Yup, that's it. If you're going to modify a field in the receiver, or want to pass a field by reference, you're going to need a pointer. Otherwise, a value will do, unless ... that weird interface thing makes you. I guess that's the problem?
I use value receivers 80% of the time. A common misunderstanding is that value receivers cost performance vs pointer receivers; for small structs the copy is negligible, and most structs are small anyway, so they're safe to copy. Go also automatically translates between value and pointer receivers when calling methods. And if I see a pointer, I see something that can be mutated (or is very large); in fact, if I see a pointer, I think "here we go... will it be mutated?". Having written 400,000 LOC in Go, I rarely see this issue.
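A minimal illustration of the receiver distinction being discussed (the `Counter` type is invented for the example):

```go
package main

import "fmt"

type Counter struct{ n int }

// IncVal has a value receiver: it mutates a copy,
// so the caller's Counter is unchanged.
func (c Counter) IncVal() { c.n++ }

// IncPtr has a pointer receiver: the mutation is
// visible to the caller.
func (c *Counter) IncPtr() { c.n++ }

func main() {
	c := Counter{}
	c.IncVal()
	fmt.Println(c.n) // 0
	c.IncPtr() // Go automatically takes &c here
	fmt.Println(c.n) // 1
}
```

The automatic `&c` in the last call is the "back-and-forth translation" mentioned above: it works on addressable values, which covers most everyday code.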
I like Go and Rust, but sometimes I feel like they lack tools that other languages have just because they WANT to be different, without any real benefit.
Whenever I read Go code, I see a lot more error handling code than usual because the language doesn't have exceptions...
And sometimes Go/Rust code is more complex because it also lacks some OOP tools, and there are no tools to replace them.
So, Go/Rust has a lot more boilerplate code than I would expect from modern languages.
For example, in Delphi, an interface can be implemented by a property:
type
TMyClass = class(TInterfacedObject, IMyInterface)
private
FMyInterfaceImpl: TMyInterfaceImplementation; // A field containing the actual implementation
public
constructor Create;
destructor Destroy; override;
property MyInterface: IMyInterface read FMyInterfaceImpl implements IMyInterface;
end;
This isn't possible in Go/Rust. And the Go documentation I read strongly recommended using Composition, without good tools for that.
This "new way is the best way, period ignore good things of the past" is common.
When MySQL didn't have transactions, the documentation said "perform operations atomically" without saying exactly how.
MongoDB didn't have transactions until version 4.0. They said it wasn't important.
When Go didn't have generics, there were a bunch of "patterns" to replace generics... which in practice did not replace them.
The lack of inheritance in Go/Rust leaves me with the same impression. The new patterns do not replace inheritance or the other tools.
"We don't have this tool in the language because people used it wrong in the old languages." Don't worry, people will use the new tools wrong too!
Go allows deferring the implementation of an interface to a member of a type, via embedding. It is somewhat unintuitive, and I think the field has to be an unnamed (embedded) one.
Similarly, if a field implements a trait in Rust, you can expose it via `AsRef` and `AsMut`: just return a reference to it.
These are not ideal tools, and I find the Go solution rather unintuitive, but they solve the problems that I would've solved with inheritance in other languages. I rarely use them.
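A sketch of the Go embedding approach being described (all names are illustrative): the embedded field's promoted method satisfies the interface, which is roughly what the Delphi `implements` clause achieves.

```go
package main

import "fmt"

type Noisy interface{ Noise() string }

type Impl struct{}

func (Impl) Noise() string { return "honk" }

// Wrapper satisfies Noisy by embedding an implementation.
// The embedded (unnamed) field promotes Noise() onto Wrapper.
type Wrapper struct {
	Impl
}

func main() {
	var n Noisy = Wrapper{}
	fmt.Println(n.Noise()) // honk
}
```

Wrapper can still override `Noise` with its own method if it needs to intercept the call, which is the closest Go gets to selective delegation.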
Thanks. I had been searching for this for a project in the past and couldn't find it in Go or Rust. Before posting, I asked ChatGPT, and it said it wasn't possible...
Been using Go for two years now, coming from C. Totally fair points. Go’s quirks can feel more like landmines than design decisions, especially when coming from languages that handle things like RAII, error scope, or nil with more grace. But part of Go’s charm (and curse) is its unapologetic minimalism. It’s not trying to be elegant, just predictable and maintainable at scale. Saying “no sane person” would choose X might feel cathartic, but it shuts down understanding of why rational teams do choose Go and often thrive with it. Go’s not for everyone, but it does what it does on purpose.
Ouch!! Pascal's lack of popularity certainly isn't due to the fact that it supports such nice enumerated types (or sets for that matter). I think he was just pointing out that such nice things have existed (and been known to exist) for a long time and that it's odd that a new language couldn't have borrowed the feature.
Pascal evolved into Modula-2, which Wirth then simplified into Oberon. His student Griesemer did his dissertation on extending Oberon for parallel programming on supercomputers. Concurrently, Pike found Modula-2 an inspiration for some languages he wrote in the 80s and 90s. He got together with Griesemer and Ken Thompson to rework one of those languages, Newsqueak, into Golang. So that's where Pascal is today.
People want sum types because sum types solve a large set of design problems, while being a concept old enough to appear back in SML in the 1980s. One of the best-phrased complaints I've seen against Go's design is the claim that the Go language team ignored 30+ years of programming language design, because the language really does seem to reintroduce design issues and footguns that were solved decades before work on it even started.
A popular language is always going to attract some hate. Also, these kinds of discussions can be useful for helping the language evolve.
But everyone knows in their heart of hearts that a few small language warts definitely don't outweigh Go's simplicity and convenience. Do I wish it had algebraic data types, sure, sure. Is that a deal-breaker, nah. It's the perfect example of something that's popular for a reason.
It is easily one of the most productive languages. No fuss, no muss, just getting stuff done.
I agree with just about everything in the post. I've been bit a time or two by the "two flavors of null." That said, my most pleasant and most productive code bases I've worked in have all been Go.
Some learnings. Don't pass sections of your slices to things that mutate them. Anonymous functions need recovers. Know how all goroutines return.
every language has its problems; Go I think is pretty good despite them. not saying points raised in the article are invalid, you def have to be careful, and I hate the "nil interface is not necessarily nil" issue as much as anyone.
It's hard to find a language that will satisfy everyone's needs. Go I find better for smaller, focused applications/utilities... can definitely see how it would cause problems at an "enterprise" level codebase.
I both agree with these points, and also think it absolutely doesn't matter. Go is the best language if you need to ship quickly and have solid performance. Also Go + AI works amazingly well. So in some ways you can actually move faster compared to languages like Node and Python these days.
In 2015 I wrote an article "How to complain about Go" to mock this type of articles that completely miss the big picture and the real world impact of "imperfect" language. Glad it's still relevant :)
This has always been my takeaway with Go. An imperfect language for imperfect developers, chosen for organizations (not people) to ensure a baseline usefulness of their engineers from junior to senior. Do I like it? No. Would I ever choose it willingly? No. But when the options at the time were Javascript or untyped Python, it may have seemed like a more attractive option. Python was also dealing with a nasty 2-to-3 upgrade at the time that looks foolish in comparison to Golang's automatic formatting and upgrade mechanisms.
• errors handled by truthy if or try syntax
• all 0s and nils are falsey
• #if PORTABLE put(";}") #end
• modifying! methods like "hi".reverse!()
• GC can be paused/disabled
• many more ease of use QoL enhancements
No, this has been the case as long as Go has been around; then you look, and it's some C or C++ developer with specific needs. That's okay, it's not for everyone.
I think with C or C++ devs, those who live in glass houses shouldn’t throw stones.
I would criticize Go from the point of view of more modern languages that have powerful type systems like the ML family, Erlang/Elixir or even the up and coming Gleam. These languages succeed in providing powerful primitives and models for creating good, encapsulating abstractions. ML languages can help one entirely avoid certain errors and understand exactly where a change to code affects other parts of the code — while languages like Erlang provided interesting patterns for handling runtime errors without extensive boilerplate like Go.
It’s a language that hobbles developers under the aegis of “simplicity.” Certainly, there are languages like Python which give too much freedom — and those that are too complex like Rust IMO, but Go is at best a step sideways from such languages. If people have fun or get mileage out of it, that’s fine, but we cannot pretend that it’s really this great tool.
My biggest nitpick against Go was, is and still is the package management. Rust did it so nice and NuGet (C#/.NET) got it so right that Microsoft added it as a built-in thing for Visual Studio, it was originally a plugin and not from Microsoft whatsoever, now they fully own it which is fine, and it just works.
Cargo is amazing, and you can do amazing things with it, I wish Go would invest in this area more.
Also funny you mention Python, a LOT of Go devs are former Python devs, especially in the early days.
". They are likely the two most difficult parts of any design for parametric polymorphism. In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."
And still there are more modern idioms and language features that ML had in the 70s but are missing from Go. But, these have the fatal flaw of Not being Invented Here.
Not really; no one other than the original authors thought of that. The authors had an issue with C++ compile times and were sponsored by their manager to work on this Go side project of theirs.
Google's networking services keep being written in Java/Kotlin, C++, and nowadays Rust.
> Has Go become the new PHP? Every now and then I see an article complaining about Go's shortcomings.
These sorts of articles have been commonplace since even before Go released 1.0 in 2012. In fact, most (if not all) of these complaints could have been written identically back then. The only thing missing from this post that could make me believe it truly was written back then would be a complaint about Go not having generics, which were added a few years ago.
People on HN have been complaining about Go since Go was a weird side-project tucked away at Google that even Google itself didn't care about and didn't bother to dedicate any resources to. Meanwhile, people still keep using it and finding it useful.
The last 20% is also deliberately never done. It's the way they like to run their language. I find it frustrating, but it seems to work for some people.
Go is a pretty good example of how mediocre technology that would never have taken off on its own merits benefits from the rose tinted spectacles that get applied when FAANG starts a project.
I don’t buy this at all. I picked up Go because it has fast compilation speed, produces static binaries, can build useful things without a ton of dependencies, is relatively easy to maintain, and has good tooling baked in. I think this is why it gained adoption vs Dart or whatever other corporate-backed languages I’m forgetting.
Go _excels_ at API glue. Get JSON as string, marshal it to a struct, apply business logic, send JSON to a different API.
Everything for that is built in to the standard library and by default performant up to levels where you really don't need to worry about it before your API glue SaaS is making actual money.
I tried out one project because of these attributes and then scrapped it fairly quickly in favor of rust. Not enough type safety, too much verbosity. Too much fucking "if err != nil".
The language sits in an awkward space between rust and python where one of them would almost always be a better choice.
I’m almost with you. If there was a language with a fast compiler, excellent tooling, a robust standard library, static binaries, and an F#-like type system, I’d never use anything else.
Rust simply doesn’t cut it for me. I’m hoping Roc might become this, but I’m not holding my breath.
I find Rust's stdlib to be lacking vs Go, and so the average Rust project has a lot of dependencies. To me, Rust feels like the systems-programming equivalent to Node + NPM. Also, the compilation speed was really painful last time I used it. I'm used to the speed of Zig, Hare, Go, Bun. Rust makes me want to jab myself in the eye with a spork.
The other jarring example of this kind of deferring of logical thinking to big corps was people defending Apple's soldering of memory and SSDs, especially on this site, until some Chinese lad proved that all the imagined reasons why Apple had to do such and such were bs post hoc rationalisation.
The same goes with Go. If you spend enough time, every little while you see the disillusionment of some hardcore fans, even from Go's core team, and they start asking questions, but always starting with things like "I know this is Go, and holy reasons exist, and I am committing a sin by questioning, but why X or Y". It is comedy.
Go is the best language for me because
I develop fast with it,
don't have that many bugs,
it builds fast and
I'm usually just fine having a garbage collector
The dependency management is great too
Go nearly gave me carpal tunnel with the vast quantities and almost the same but not quite the same repetitive code patterns it brings along with it. I’d never use it again.
I could reply with skill issue. But a more constructive comment would be to use AI to auto-complete, in the same line, what you would have to type anyway. Not as a way to produce large swaths of code.
Carbon exists only for interoperating with and transitioning off of C++. Creating a new code base in carbon doesn’t really make sense, and the project’s readme literally tells you not to do that.
> Existing modern languages already provide an excellent developer experience: Go, Swift, Kotlin, Rust, and many more. Developers that can use one of these existing languages should.
bar, err := foo()
if err != nil {
    return err
}
if err = foo2(); err != nil {
    return err
}
Sounded more like nitpicking.
If you really care about scope while still being able to use `bar` later down, the code should be written as:
bar, err := foo()
if err != nil {
    return err
}
err = foo2() // Just reuse `err` plainly
if err != nil {
    return err
}
which actually overwrites `err`, as opposed to "shadowing" it.
The confusing thing here is that the difference between `if err != nil` and `if err = call(); err != nil` is not just style; the latter also introduces a scope that captures whatever variables get created before the `;`.
If you really REALLY want to use the same `if` style, try:
if bar, err := foo(); err != nil {
    return err
} else if bar2, err := foo2(); err != nil {
    return err
} else {
    // use `bar` and `bar2` here
    return ...
}
Cross-compiling Go is easy. Static binaries work everywhere. The cryptographic library is the foundation of various CAs like Let's Encrypt and is excellent.
The green threads are very interesting, since you can create thousands of them at low cost, and that makes different designs possible.
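A minimal sketch of that cost profile: spawning ten thousand goroutines and waiting for them all is unremarkable code (the helper name is illustrative), since each goroutine starts with a small stack that grows on demand.

```go
package main

import (
	"fmt"
	"sync"
)

// squares computes n squares concurrently, one goroutine each.
// Ten thousand goroutines is a perfectly ordinary number in Go.
func squares(n int) []int {
	var wg sync.WaitGroup
	results := make([]int, n)
	for i := range results {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = i * i
		}(i)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(squares(10000)[9999]) // 99980001
}
```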
I think this complaining about defer is a bit trivial. The actual major problem for me is the way imports work: the fact that it knows about GitHub, and the way it's difficult to replace a dependency there with some other one, including a local one. The forced layout of files, cmd directories, etc.
I can live with it all, but modules are the thing I have wasted the most time on and struggled with the most.
Would the interface nil example be clearer if checking for `nil` didn't use the `==` operator? For example, with a hypothetical `is` operator:
package main

import "fmt"

type I interface{}
type S struct{}

func main() {
    var i I
    var s *S
    fmt.Println(s, i)                       // nil nil
    fmt.Println(s is nil, i is nil, s == i) // t,t,f: Not confusing anymore?
    i = s
    fmt.Println(s, i)                       // nil nil
    fmt.Println(s is nil, i is nil, s == i) // t,f,t: Still not confusing?
}
Of course, this means you have to precisely define the semantics of `is` and `==`:
- `is` for interfaces checks both value and interface type.
- `==` for interfaces uses only the value and not the interface type.
- For structs/value, `is` and `==` are obvious since there's only a value to check.
There are a couple of minor errors in this post, but mostly it consists of someone getting extremely overexcited about minor problems in Golang that they have correctly identified.
Author here. I may not be able to deny the second part, but I would love to hear anything you think is factually incorrect or that I may have been unclear about. Always happy to be corrected.
(Something that's not a minor error, which someone else pointed out, is that Python isn't strictly refcounted. Yeah, that's why I emphasized "almost" and "pretty much". I can't do anything about that kind of critique.)
Oh, I meant that you were mistaken about handling non-UTF-8 filenames (see https://news.ycombinator.com/item?id=44986040), and that in 90% of cases a deferred mutex unlock makes things worse instead of better.
The general reason you need a mutex is that you are mutating some data structure from one valid state to another, but in between it's in an inconsistent state. If other code sees that inconsistent state, it might crash or otherwise misbehave. So you acquire the mutex beforehand and release it once the structure is in the new valid state.
But what happens if you panic during the modification? The data structure might still be in an inconsistent state! But now it's unlocked! So other threads that use the inconsistent data will misbehave, and now you have a very tricky bug to fix.
This doesn't always apply. Maybe the mutex is guarding a more systemic consistency condition, like "the number in this variable is the number of messages we have received", and nothing will ever crash or otherwise malfunction if some counts are lost. Maybe it's really just providing a memory fence guarding against a torn read. Maybe the mutex is just guarding a compare-and-swap operation written out as a compare followed by a swap. But in cases like these I question whether you can really panic with the mutex held!
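A sketch of the failure mode described above (the `pair` type and its invariant `a == b` are invented for illustration): the deferred unlock dutifully releases the mutex even though the guarded invariant is broken.

```go
package main

import (
	"fmt"
	"sync"
)

// pair must always satisfy a == b; the mutex guards updates.
type pair struct {
	mu   sync.Mutex
	a, b int
}

// setBoth updates both fields; step stands in for arbitrary code
// that may panic partway through the update.
func (p *pair) setBoth(v int, step func()) {
	p.mu.Lock()
	defer p.mu.Unlock() // unlocks even on panic...
	p.a = v
	step() // if this panics, a != b but the mutex is released anyway
	p.b = v
}

func main() {
	p := &pair{}
	func() {
		defer func() { recover() }()
		p.setBoth(1, func() { panic("mid-update") })
	}()
	// The mutex is free again, but the invariant is broken:
	fmt.Println(p.a, p.b) // 1 0
}
```

Any other goroutine that now takes the lock will see `a != b` and trust it, which is exactly the "very tricky bug" the comment warns about.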
This is why Java deprecated Thread.stop. (But Java does implicitly unlock mutexes when unwinding during exception handling, and that does cause bugs.)
This is only vaguely relevant to your topic of whether Golang is good or not. Explicit error handling arguably improves your chances of noticing the possibility of an error arising with the mutex held, and therefore handling it correctly, but you correctly pointed out that because Go does have exceptions, you still have to worry about it. And cases like these tend to be fiendishly hard to test—maybe it's literally impossible for a test to make the code you're calling panic.
I think a lot of people got on the Go train because of Google and not necessarily because it was good. There was a big adoption in Chinese tech scene for example. I personally think Rust/Go/Zig and other modern languages suffer a bit from trying too hard not to be C/C++/Java.
Go was a breath of fresh air and pretty usable right from the start. It felt like a neat little language with - finally - a modern standard library. Fifteen years ago, that was a welcome change. I think it's no surprise that Go and Node.js both got started and took off around the same time. People were looking something modern, lightweight, and simple and both projects delivered that.
This post is just attention-grabbing rage bait. The listed issues are superficial unless the person is a bit far on the spectrum. There is no good data point weighing the issues against real-world problems, i.e. how much they cost. Even the point about RAM is weak without data.
I've written a fair chunk of Go in $dayjob and I have to say it's just... boring. I know that sounds like a weird thing to complain about, but I just can't get enthused about anything I write in Go. It's just... meh. Not sure why that is; I guess it doesn't really click for me like other languages have in the past.
No, it's absolutely meant to be boring, by design. It's also a downside, obviously, but it's easily compensated for by working on something that's already challenging. The language staying out of your way is quite useful in such cases.
Another annoying thing Go proponents say is that it is simple. It is not. And even if it was, the code you write with a simple language is not automatically simple. Take the k8s control plane for example; some of the most convoluted and bulky code that exists, and it’s all in Go.
Fascinating. Coming from C++ I can't imagine not having RAII. That seems so wordy and painful. And that nil comparison is...gross.
I don't get how you can assign an interface to be a pointer to a structure. How does that work? That seems like a compile error. I don't know much about Go interfaces.
There were points in this article that made me feel like Rob Schneider in Demolition Man saying "He doesn't know about the three sea shells!" but there were a couple points made that were valid.
the nil issue. An interface, when assigned a struct pointer, is no longer nil even if that pointer is nil - probably a mistake. Valid point.
append in a func. Definitely one of the biggest issues is that slices share their backing array (passed by reference, in effect). They did this to save memory and speed, but the append issue becomes a monster unless abstracted. Valid point.
err in scope for the whole func. You defined it, of course it is. Better to reuse a generic var than constantly instantiate another. The lack of try catch forces you to think. Not a valid point.
defer. What is the difference between a scope block and a function block? I'll wait.
> If you stuff random binary data into a string, Go just steams along, as described in this post.
> Over the decades I have lost data to tools skipping non-UTF-8 filenames. I should not be blamed for having files that were named before UTF-8 existed.
What I intended to say with this is that ignoring the problem of invalid UTF-8 (which could be valid ISO-8859-1), with no error handling, or the other way around, has lost me data in the past.
Compare this to Rust, where a path name is of a different type than a mere string. And if you need to treat it like a string and you don't care if it's "a bit wrong" (because it's for being shown to the user), then you can call `.to_string_lossy()`. But it's harder to accidentally fail to handle that case when an exact name match does matter.
When exactness matters, `.to_str()` returns `Option<&str>`, so the caller is forced to deal with the situation that the file name may not be UTF-8.
Being sloppy with file name encodings is how data is lost. Go is sloppy with strings of all kinds, file names included.
Thanks for your reply. I understand that encoding the character set in the type system is more explicit and can help find bugs.
But forcing all strings to be UTF-8 does not magically help with the issue you described. In practice I've often seen the opposite: Now you have to write two code paths, one for UTF-8 and one for everything else. And the second one is ignored in practice because it is annoying to write. For example, I built the web server project in your other submission (very cool!) and gave it a tar file that has a non-UTF-8 name. There is no special handling happening, I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Forcing UTF-8 does not "fix" compatibility in strange edge cases, it just breaks them all. The best approach is to treat data as opaque bytes unless there is a good reason not to. Which is what Go does, so I think it is unfair to blame Go for this particular reason instead of the backup applications.
> It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
You can debate whether it is sloppy but I think an error is much better than silently corrupting data.
> The best approach is to treat data as opaque bytes unless there is a good reason not to
This doesn't seem like a good approach when dealing with strings which are not just blobs of bytes. They have an encoding and generally you want ways to, for instance, convert a string to upper/lowercase.
Can't say I know the best way here. But Rust does this better than anything I've seen.
I don't think you need two code paths. Maybe your program can live its entire life never converting away from the original form. Say you read from disk, pick out just the filename, and give to an archive library.
There's no need to ever convert that to a "string". Yes, it could have been a byte array, but taking out the file name (or maybe final dir plus file name) are string operations, just not necessarily on UTF-8 strings.
And like I said, for all use cases where it just needs to be shown to users, the "lossy" version is fine.
> I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Haha, touche. But yes, it's less sloppy. Would you prefer that the files were silently skipped? You've created your archive, you started the webserver, but you just can't get it to deliver the page you want.
In order for tarweb to support non-UTF-8 in filenames, the programmer has to actually think about what that means. I don't think it means doing a lossy conversion, because that's not what the file name was, and it's not merely for human display. And it should probably not be the bytes either, because tools will likely want to send UTF-8 encoded.
Or they don't. In either case unless that's designed, implemented, and tested, non-UTF-8 in filenames should probably be seen as malformed input. For something that uses a tarfile for the duration of the process's life, that probably means rejecting it, and asking the user to roll back to a previous working version or something.
> Forcing UTF-8 does not "fix" compatibility in strange edge cases
Yup. Still better than silently corrupting.
Compare this to how for Rust kernel work they apparently had to implement a new Vec equivalent, because dealing with allocation failures is a different thing in user and kernel space[1], and Vec push can't fail.
Similarly, Go string operations cannot fail. And memory allocation has failure modes and reasons that string operations don't.
[1] a big separate topic. Nobody (almost) runs with overcommit off.
But there is no silent corruption when you pass the data as opaque bytes, you just get some placeholder symbols when displayed. This is how I see the file in my terminal and I can rm it just fine.
And yes, question marks in the terminal are way better than applications not working at all.
The case of non-UTF-8 being skipped is usually a characteristic of applications written in languages that don't use bytes for their default string type, not the other way around. This has bitten me multiple times with Python2/3 libraries.
And it's perfect for most business software, because most businesses are not focused on building good software.
Go has a good-enough standard library, and Go can support a "pile-of-if-statements" architecture. This is all you need.
Most enterprise environments are not handled with enough care to move beyond "pile-of-if-statements". Sure, maybe when the code was new it had a decent architecture, but soon the original developers left and then the next wave came in and they had different ideas and dreamed of a "rewrite", which they sneakily started but never finished, then they left, and the 3rd wave of developers came in and by that point the code was a mess and so now they just throw if-statements onto the pile until the Jira tickets are closed, and the company chugs along with its shitty software, and if the company ever leaks the personal data of 100 million people, they aren't financially liable.
For too many aspects for my liking, Go moves complexity out of the language and into your code where you get to unavoidably deal with the cognitive load. It's fine if you can keep things small and simple, but beyond a certain complexity, it's a hard pass for me.
Anyone want to try to explain what he's on about with the first example?
bar, err := foo()
if err != nil {
	return err
}

if err := foo2(); err != nil {
	return err
}
The above (which declares a new value of err scoped to the second if statement) should compile right? What is it that he's complaining about?
EDIT: OK, I think I understand; there's no easy way to have `bar` be function-scoped and `err` be if-scoped.
I mean, I'm with him on the interfaces. But the "append" thing just seems like ranting to me. In his example, `a` is a local variable; why would assigning a local variable be expected to change the value in the caller? Would you expect the following to work?
func f(a *MyStruct) {
	a = &MyStruct{...}
}
If not, why would you expect `a = append(a, ...)` to work?
Oh, I see. I mean, yeah, the relationship between slices and arrays is somewhat subtle; but it buys you some power as well. I came to golang after decades of C, so I didn't have much trouble with the concept.
I'm afraid I can only consider that a taste thing.
EDIT: One thing I don't consider a taste thing is the lack of the equivalent of a "const *". The problem with the slice thing is that you can sort of sometimes change things but not really. It would be nice if you could be forced to pass either a pointer to a slice (such that you can actually allocate a new backing array and point to it), or a non-modifiable slice (such that you know the function isn't going to change the slice behind your back).
That might be it, but I wondered about that one, as well as the append complaint. It seems like the author disagrees with the scoping rules, but they aren't really any different from those of a lot of other languages.
If someone really doesn't like the reuse of err, there's no reason why they couldn't create separate variables, e.g. err_foo and err_foo2. There's no rule forcing you to reuse err.
Well no, the second "if" statement is a red herring. Both of the following work:
bar, err := foo()
if err != nil {
	return err
}
if err = foo2(); err != nil {
	return err
}

and

bar, err := foo()
if err != nil {
	return err
}
if err := foo2(); err != nil {
	return err
}
He even says as much:
> Even if we change that to :=, we’re left to wonder why err is in scope for (potentially) the rest of the function. Why? Is it read later?
My initial reaction was: "The first `err` is function-scope because the programmer made it function-scope; he clearly knows you can make them local to the if, so what's he on about?`
It was only when I tried to rewrite the code to make the first `err` if-scope that I realized the problem I guess he has: OK, how do you make both `err` variable if-scope while making `bar` function-scope? You'd have to do something like this:
var bar MyType
if lbar, err := foo(); err != nil {
	return err
} else {
	bar = lbar
}
Which is a lot of cruft to add just to restrict the scope of `err`.
Congratulations, you have found a few pain points in a language. Now as a scientific exercise apply the same reasoning to a few others. Will the number of issues you find multiplied by their importance be greater or lower than the score for Go? There you go, that's the entire problem - Go is bad, but there is no viable alternative in general.
I get bitten by the "nil interface" problem if I'm not paying a lot of attention since golang makes a distinction between the "enclosing type" and the "receiver type"
package main

import "fmt"

type Foo struct {
	Name string
}

func (f *Foo) Kaboom() {
	fmt.Printf("hello from Kaboom, f=%s\n", f.Name)
}

func NewKaboom() interface{ Kaboom() } {
	var p *Foo = nil
	return p
}

func main() {
	obj := NewKaboom()
	fmt.Printf("obj == nil? %v\n", obj == nil)
	// The next line will panic (because the method receives a nil *Foo)
	obj.Kaboom()
}
go run fred.go
obj == nil? false
panic: runtime error: invalid memory address or nil pointer dereference
As a long-time Go programmer I didn't understand the comment about two types of nil because I have never experienced that issue, so I dug into it.
It turns out to be nothing but a misunderstanding of what the fmt.Println() statement is actually doing. If we use a more advanced print statement then everything becomes extremely clear:
package main

import (
	"fmt"

	"github.com/k0kubun/pp/v3"
)

type I interface{}
type S struct{}

func main() {
	var i I
	var s *S
	pp.Println(s, i)                        // (*main.S)(nil) nil
	fmt.Println(s == nil, i == nil, s == i) // true true false
	i = s
	pp.Println(s, i)                        // (*main.S)(nil) (*main.S)(nil)
	fmt.Println(s == nil, i == nil, s == i) // true false true
}
The author of this post has noted a convenience feature, namely that fmt.Println() tells you the state of the thing in the interface and not the state of the interface, mistaken it as a fundamental design issue and written a screed about a language issue that literally doesn't exist.
Being charitable, I guess the author could actually be complaining that putting a nil pointer inside a nil interface is confusing. It is indeed confusing, but it doesn't mean there are "two types" of nil. Nil just means empty.
The author is showing the result of s==nil and i==nil, which are checks that you would have to do almost everywhere (the so called "billion dollar mistake")
It's not about Printf. It's about how these two different kinds of nil values sometimes compare equal to nil, sometimes compare equal to each other, and sometimes not.
Yes there is a real internal difference between the two that you can print. But that is the point the author is making.
It's a contrived example which I have never really experienced in my own code (and at this point, I've written a lot of it) or any of my team's code.
Go had some poor design features, many of which have now been fixed, some of which can't be fixed. It's fine to warn people about those. But inventing intentionally confusing examples and then complaining about them is pretty close to strawmanning.
> It's a contrived example which I have never really experienced in my own code (and at this point, I've written a lot of it) or any of my team's code.
It's confusing enough that it has an FAQ entry and that people tried to get it changed for Go 2. Evidently people are running in to this. (I for sure did)
That's really my problem with these kind of critiques.
EVERY language has certain pitfalls like this. Back when I wrote PHP for 20+ years I had a Google doc full of every stupid PHP pitfall I came across.
And they were always almost a combination of something silly in the language, and horrible design by the developer, or trying to take a shortcut and losing the plot.
I believe you that you've never hit it, it's definitely not an everyday problem. But they didn't make it up, it does bite people from time to time.
It's sort of a known sharp edge that people occasionally cut themselves on. No language is perfect, but when people run into them they rightfully complain about it
Author here. No, I didn't misunderstand it. Interface variables have two types of nil. Untyped, which does compare to nil, and typed, which does not.
What are you trying to clarify by printing the types? I know what the types are, and that's why I could provide the succinct weird example. I know what the result of the comparisons are, and why.
And the "why" is "because there are two types of nil, because it's a bad language choice".
I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
> Author here. No, I didn't misunderstand it. Interface variables have two types of nil. Untyped, which does compare to nil, and typed, which does not.
There aren't two types of nil. Would you call an empty bucket and an empty cup "two types of empty"?
There is one nil, which means different things in different contexts. You're muddying the waters and making something which is actually quite straightforward (an interface can contain other things, including things that are themselves empty) seem complicated.
> I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
Sure, I've seen pointer-to-pointer dereferences fail for the same reason in C. It's not particularly different.
God, this sort of article is so boring. Go is a great language, as evidenced by the tremendous amount of excellent software that’s been written in it. Are there some rough points? Sure.
But that’s all this is, is a list of annoyances the author experiences when using Go. Great, write an article about the problems with Go, but don’t say “therefore it’s a bad language”.
While I agree with many of the points brought up, none of them seems like such a huge issue that it's even worth discussing, honestly. So you have different taste than the original designers. Who cares? What language do you say is better? I can find just as many problems with that language.
> Wait, what? Why is err reused for foo2()? Is there’s something subtle I’m not seeing? Even if we change that to :=, we’re left to wonder why err is in scope for (potentially) the rest of the function. Why? Is it read later?
The first time it's assigned nil, the second time it's overwritten in case there's an error in the 2nd function. I don't see the author's issue? It's very explicit.
Author here: I'm not talking about the value. I'm talking about the lifetime of the variable.
After checking for nil, there's no reason `err` should still be in scope. That's why it's recommended to write `if err := foo(); err != nil`, because after that, one cannot even accidentally refer to `err`.
I'm giving examples where Go syntactically does not allow you to limit the lifetime of the variable. The variable, not its value.
You are describing what happens. I have no problem with what happens, but with the language.
I gave an example in the post, but to spell it out: because a typoed variable is not caught, e.g. as an unused variable.
The example from the blog post would fail, because `return err` referred to an `err` that was no longer in scope. It would syntactically prevent accidentally writing `foo99()` instead of `err := foo99()`.
I'll have to read the rest later, but this was an unforced error on the author's part. There is nothing unclear about that block of code. If err isn't nil, it was set, and we're no longer in the function. If it is nil, why waste an interface handle?
> Though Python is almost entirely refcounted, so one can pretty much rely on the __del__ finalizer being called.
yeah no. you need an acyclic structure to maybe guarantee this, in CPython. other Python implementations are more normal in that you shouldn't rely on finalizers at all.
I love Python, but the sheer number of caveats and warnings for __del__ makes me question if this person has ever read the docs [0]. My favorite WTF:
> It is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. This is called object resurrection.
This reads like your generic why-JS-sucks / why-C-sucks / why-Python-sucks post.
Use the language where it makes sense. You do know that if you have an issue the language fails at, you can solve that particular problem in another language and... call that code?
We used to have a node ts service. We had some computationally heavy stuff, we moved that one part to Go because it was good for that ONE thing. I think later someone ported that one thing to Rust and it became a standalone project.
Idk. It’s just code. Nobody really cares, we use these tools to solve problems.
Go passes on a lot of ideas that are popular in academic language theory and design, with mixed but I think mostly positive results for its typical use cases.
Its main virtues are low cognitive load and encouraging simple straightforward ways of doing things, with the latter feeding into the former.
Languages with sophisticated powerful type systems and other features are superior in a lot of ways, but in the hands of most developers they are excuses to massively over-complicate everything. Sophomore developers (not junior but not yet senior) love complexity and will use any chance to add as much of it as they can, either to show off how smart they are, to explore, or to try to implement things they think they need but actually don't. Go somewhat discourages this, though devs will still find a way of course.
Experienced developers know that complexity is evil and simplicity is actually the sign of intelligence and skill. A language with advanced features is there to make it easier and simpler to express difficult concepts, not to make it more difficult and complex to express simple concepts. Every language feature should not always be used.
Oh yeah. Said another way, it discourages nerd-sniping, which in practice is a huge problem with functional programming and highly expressive type systems.
You end up creating these elegant abstractions that are very seductive from a programmer-as-artist perspective, but usually a distraction from just getting the work done in a good enough way.
You can tell that the creators of Go are very familiar with engineer psychology and what gets them off track. Go takes away all shiny toys.
Go is the kind of language you use at your job, and you necessarily need to have dozens of linters and automated code quality checks set up to catch all the gotchas, like the stuff with "err" here, and nobody is ever going to get any joy from any of it. the entire exercise of "you must return and consume an error code from all functions" has been ridiculous from go's inception, it looked ridiculous to me back when I saw it in 2009, and now that I have to use it for k8s stuff at work, it's exactly as ridiculous as it seemed back then.
With all of that, Go becomes the perfect language for the age of LLMs writing all the code. Let the LLMs deal with all the boilerplate and misery of Go, while at the same time its total lack of elegance is also well suited to LLMs which similarly have the most dim notions of code elegance.
None of these objections seem at all serious to me, and then the piece wraps up with "Why do I care about memory use? RAM is cheap." Excuse me? Memory bloat affects performance and user experience with every operation. Careful attention to software engineering should avoid or minimize these problems and emphasize the value of being tidy with memory use.
I wrote a small explainer on the typed-vs-untyped nil issue. It is one of the things that can actually bite you in production. Easy to miss it in code review.
If you run the code, you will see that calling read() on ControlMessage causes a panic even though there is a nil check. However, it doesn't happen for Message. See the read() implementation for Message: we need to have a nil check inside the pointer-receiver struct methods. This is the simplest solution. We have a linter for this. The ecosystem also helps, e.g protobuf generated code also has nil checks inside pointer receivers.
After spending some time in lower level languages Go IMO makes much more sense. Your example:
First one - you have an address to a struct, you pass it, all good.
Second case: you set the address of the struct to "nil". What is nil? It's an address like any other. Maybe it's 0x000000 or something else. At this point, from the memory perspective, it exists, but the OS will prevent you from touching anything the NULL pointer points at.
Because you don't touch ANYTHING nothing fails. It's like a deadly poison in a box you don't open.
The third example is the same as the second one. You have an IMessage but it points to NULL (instead of NULL pointing to the deadly poison).
And in fourth, you finally open the box.
Is it magic knowledge? I don't think so, but I'm also not surprised about how you can modify data through slice passing.
IMO the biggest Go shortcoming is selling itself as a high-level language, while it touches more bare metal than people are used to touching.
What does this mean? Do they just use recover and keep bad data?
> The standard library does that. fmt.Print when calling .String(), and the standard library HTTP server does that, for exceptions in the HTTP handlers.
Apart from this, most of it doesn't seem like that big of a deal, except for `append`, which is truly bad syntax. If you're appending in place, don't return the value.
What's popular and what's good are rarely (if ever) the same thing.
Python sucks balls but it has more fanboys than a K-pop idol. Enforced whitespace? No strong typing? A global interpreter lock? Garbage error messages? A package repository you can't search (only partially because it's full of trash packages and bad forks), with random names and no naming convention? A complete lack of standardization in installing or setting up applications/environments? 100 lines to run a command and read the stdout and stderr in real time?
The real reason everyone uses Python and Go is because Google used them. Otherwise Python looks like BASIC with objects and Go is an overhyped niche language.
All these lame folks complaining about "what Go could have been"... is this not HACKER news? Can't you go and build your own, "better" language? But you won't, you'll just complain.
Show me a programming language that does not have annoying flaws and I'll show you a programming language that does not yet exist, and probably won't ever exist.
I really like Go. It scratches every itch that I have. Is it the language for your problems? I don't know, but very possibly that answer is "no".
Go is easy to learn, very simple (this is a strong feature, for me) and if you want something more, you can code that up pretty quickly.
The blog article author lost me completely when they said this:
> Why do I care about memory use? RAM is cheap.
That is something that only the inexperienced say. At scale, nothing is cheap; there is no cheap resource if you are writing software for scale or for customers. Often, single bytes count. RAM usage counts. CPU cycles count. Allocations count. People want to pretend that they don't matter because it makes their job easier, but if you want to write performant software, you'd better have those CPU cache lines in mind, and if you have those in mind, you have the memory usage of your types in mind.
> At scale, nothing is cheap; there is no cheap resource if you are writing software for scale or for customers. Often, single bytes count. RAM usage counts. CPU cycles count. Allocations count
Well if maximalist performance tuning is your stated goal, to the point that single bytes count, I would imagine Go is a pretty terrible choice? There are definitely languages with a more tunable GC and more cache line friendly tools than Go.
But honestly, your comment reads more like gatekeeping, saying someone is inexperienced because they aren't working with software at the same scale as you. You sound equally inexperienced (and uninterested) with their problem domain.
As someone who has written Go for >10 years and has built some bigger codebases with it, these are my takes on the article's claims:
:Error variable Scope
-> Yes, it can be confusing at the beginning, but once you have some experience it doesn't really matter. Would it be nice to scope it down? Sure, but this feels like something blown up into an "issue" when there are other things far more important for the Go team to revisit. Regarding error handling in Go: some hate it, some love it. I personally like it (yes, I really do), so I think it's more a preference than a "bad" thing.
:Two types of nil
-> Funny, I never encountered this in >10 years of Go with a LOT of pointer juggling, so I wonder in which reality this hits you where it can't be avoided. Though it is confusing, I admit.
:It’s not portable
-> I have no opinion here, since I work on Unix systems only and have my binaries compiled specifically for them, shrug. Don't see any issue here either.
:append with no defined ownership
-> I mean... seriously? Your test case, while the results may be unexpected, is a super weird one. Why would you append mid-slice? If you think about what these functions do under the hood, your attempt actually feels like you WANT to produce strange behaviour, and things like that can be done in any language.
:defer is dumb
-> Here I 100% agree - from my pov it leads to massive resource waste, and in certain situations it can also create strange errors, but I'm not motivated to explain this - I'll just say that defer, while it seems useful, is from my pov a bad thing and should not be used.
:The standard library swallows exceptions, so all hope is lost
-> "So all hope is lost" - I mean, you already left the realm of objectivity long before, but this really tops it. I wrote some quite big Go applications and never had a situation where I could not handle an exception simply by adjusting my code in a way that prevents it from even happening. Again - I feel like someone is just in search of things to complain about that could simply be avoided. (Also, in case someone comes up with a super specific once-in-a-million case: always keep in mind that language design doesn't orient itself around the least-occurring thing.)
:Sometimes things aren’t UTF-8
-> I won't bother to read another whole article; if it's important, include an example. I have dealt with different encodings (web crawler) and I could handle all of them.
:Memory use
-> What you describe is one of the design decisions I'm not absolutely happy with, the memory handling. But then, one of my Go projects is an in-memory graph storage/database - which in one case ran for ~2 years without a restart and had about 18GB of data stored in it. It has a lot of mutex handling (regarding your earlier complaint about exceptions: never had one), and it btw ran as the backend of an internet-facing service, so it wasn't just fed internal data.
--------------------
Finally, I want to say: often things come down to personal preference. I could spend days raging about JavaScript, Java, C++ or some other languages, but what for? Pick the language that fits your use case and your liking; don't pick one that doesn't and then complain about it.
Also, just to show I'm not a big "Go is the best" fanboy, because it isn't the best - there are things to criticize, like the previously mentioned memory handling.
While I still think you just created memory leaks in your app, Go had this idea of "arenas", which would let code partly manage memory itself and therefore enable much more memory-efficient applications. This has stalled lately and I REALLY hope the Go team will pick it up again and make it a stable thing to use. I would probably update all of my bigger codebases to use it.
Also - and this is something that annoys me a LOT because it cost me many hours - the Go plugin system. I wrote an architecture to orchestrate processing, and for certain reasons I wanted to implement the orchestrated "things" as plugins. But the plugin system as it currently stands can only be described as the torments of hell. I messed with it for about 3 years until I recently dropped the plugin functionality and added the stuff directly. Plugins are a very powerful thing and a good plugin system could be great, but in its current state I would recommend no one touch it.
These are just two points; I could list some more, but the point I want to get to is: there are real things you can criticize, instead of problems you create yourself or language design decisions you just don't like. I'm not sure if such articles are the rage of someone who is just bored, or ragebait to make people read them. Either way it's not helping anyone.
Other commenters have. I have. Not everyone will. Doesn't make it good.
:append with no defined ownership
I've seen it. Of course one can just "not do that", but wouldn't it be nice if it were syntactically prevented?
:It’s not portable ("just Unix")
I also only work on Unix systems. But if you only work on amd64 Linux, then portability is not a concern. Supporting BSD and Linux is where I encounter this mess.
:All hope is lost
All hope is lost specifically for the idea of not needing to write exception-safe code. If panics always crashed the program, that'd be fine. But no coding standard can save you from the standard library, so yes, all hope of being able to pretend that panic exits the program is lost.
You don't need to read my blog posts. Looking forward to reading your, much better, critique.
I say switching to Go is like a different kind of Zen. It takes time to settle in and get in the flow of Go... Unlike the others, the LSP is fast; the developer, not so much. Once you've lost all will to live you become quite proficient at it. /s
I've been writing small Go utilities for myself since the Go minor version number was <10
I can still check out the code to any of them, open it and it'll look the same as modern code. I can also compile all of them with the latest compiler (1.25?) and it'll just work.
No need to investigate 5 years of package manager changes and new frameworks.
I was like "Have I ever actually heard that?" and the answer turns out to be "No" so now I have (it's a Metallica track about suicidal ideation, whether it's good idea to listen to it while writing Go I could not say and YMMV).
defer is no worse than Java's try-with-resources. Neither is true RAII, because in both cases you, the caller, need to remember to write the wordy form ("try (...) {" or "defer ...") instead of the plain form ("..."), which will still compile but silently do the wrong thing.
Sure, true RAII would be an improvement over both, but the author's point is that Java is an improvement over Go, because the resource acquisition is lexically scoped, not function-scoped. Imagine if Java's `try (...) { }` didn't release the resource when the try block ends, but rather when the wrapping method returns. That's how Go's defer works.
defer is not block scoped in Go, it's function scoped. So if you want to defer a mutex unlock it will only be executed at the end of the function even if placed in a block. This means you can't do this (sketch):
func (f *Foo) foo() {
    // critical section
    {
        f.mutex.Lock()
        defer f.mutex.Unlock()
        // something with the shared resource
    }
    // The lock is still held here, but you probably didn't want that
}
You can call Unlock directly, but then if there's a panic it won't be unlocked like it would be in the above. That can be an issue if something higher in the call stack prevents the panic from crashing the entire program, it would leave your system in a bad state.
This is the key problem with defer. It operates a lot like a finally block, but only on function exit which means it's not actually suited to the task.
And as the sibling pointed out, you could use an anonymous function that's immediately called, but that's just awkward, even if it has become idiomatic.
Is there anything that soothes devs more than developing a superiority complex of their particular tooling? And then the unquenchable thirst to bash "downwards"? I find it so utterly pathetic.
I don't really care if you want that. Everyone should know that that's just the way slices work. Nothing more nothing less.
I really don't give a damn about that, I just know how slices behave, because I learned the language. That's what you should do when you are programming with it (professionally).
The author obviously knows that too, otherwise they wouldn't have written about it. All of these issues are just how the language works, and that's the problem.
I am fine with the subsequent example, too. If you read up about slices, then that's how they are defined and how they work. I am not judging, I am just using the language as it is presented to me.
Then you seem to be fine with inconsistent ownership and a behavioral dependence on the underlying data rather than the structure.
You really don't see why people would point out a definition that changes underneath you as a bad definition? They're not arguing the documentation is wrong.
The definition is perfectly consistent. append is in-place if there's enough capacity (and the programmer can check this directly with cap() if they want), and otherwise it allocates a new backing array.
This was an interesting read and very educational in my case, but each time I read an article criticizing a programming language it's written by someone who hasn't done anything better.
It's a shame because it is just as effective as pissing in the wind.
If you're saying someone can't credibly criticize a language without having designed a language themselves, I'll ask that you present your body of work of programming language criticisms so I know if you have "produced something better" in the programming language criticism space.
Of course, by your reasoning this also means you yourself have designed a language.
I'll leave out repeating your colorful language if you haven't done any of these things.
> If you're saying someone can't credibly criticize a language without having designed a language themselves
Actually I think that's a reasonable argument. I've not designed a language myself (other than toy experiments) so I'm hesitant to denigrate other people's design choices because even with my limited experience I'm aware that there are always compromises.
Similarly, I'm not impressed by literary critics whose own writing is unimpressive.
Who would be qualified to judge those critics' writing as good or bad? Critics already qualified as good writers? Who vetted them, then? It'd have to be a stream of certified good authors all the way back.
No, I stick by my position. I may not be able to do any better, but I can tell when something’s not good.
(I have no opinion on Go. I’ve barely used it. This is only on the general principle of being able to judge something you couldn’t do yourself. I mean, the Olympics have gymnastic judges who are not gold medalists.)
I’ve never been a rock star, but I think Creed sucks.
I really don’t like your logic. I’m not a Michelin chef, but I’m qualified to say that a restaurant ruined my dessert. While I probably couldn’t make a crème brûlée any better than theirs, I can still tell that they screwed it up compared to their competitor next door.
For example, I love Python, but it’s going to be inherently slow in places because `sum(list)` has to check the type of every single item to see what __add__ function to call. Doesn’t matter if they’re all integers; there’s no way to prove to the interpreter that a string couldn’t have sneaked in there, so the interpreter has to check each and every time.
See? I’ve never written a language, let alone one as popular as Python, but I’m still qualified to point out its shortcomings compared to other languages.
This is the mindset that makes me want to throttle the golang authors.
Golang makes it easy to do the dumb, wrong, incorrect thing that looks like it works 99.7% of the time. How can that be wrong? It works in almost all cases!
The problem is that your code is littered with these situations everywhere. You don’t think to test for them, it’s worked on all the data you fed it so far, and then you run into situations like the GP’s where you lose data because golang didn’t bother to think carefully about some API impedance mismatch, can’t even express it anyway, and just drops things on the floor when it happens.
So now your user has irrecoverably lost data, there’s a bug in your bug tracker, and you and everyone else who uses go has to solve for yet another a stupid footgun that should have been obvious from the start and can never be fixed upstream.
And you, and every other golang programmer, get a steady and never-ending stream of these types of issues, randomly selected for, for the lifetime of your program. Which one will bite you tomorrow? No idea! But the more people who use it, the more data you feed it, the more clients with off-the-beaten-track use-cases, the more and more it happens.
Oops, non-UTF-8 filename. Oops, can’t detect the difference between an empty string in some JSON or a nil one. Oops, handed out a pointer and something got mutated out from under me. Oops, forgot to defer. Oops, maps aren’t thread-safe. Oops, maps don’t have a sane zero value. And on and on and fucking on and it never goddamn ends.
And it could have ended, if only Rob Pike and co. hadn't just shipped literally the first thing they wrote with zero forethought.
my favorite example of this was the go authors refusing to add monotonic time into the standard library because they confidently misunderstood its necessity
(presumably because clocks at google don't ever step)
then after some huge outages (due to leap seconds) they finally added it
now the libraries are a complete mess because the original clock/time abstractions weren't built with the concept of multiple clocks
and every go program written is littered with terrible bugs due to use of the wrong clock
https://github.com/golang/go/issues/12914 (https://github.com/golang/go/issues/12914#issuecomment-15075... might qualify for the worst comment ever)
Joda-Time is an excellent library, and indeed it was basically the basis for Java's time API - and, for that matter, for pretty much any modern language's time API. Given the history, Java has basically always had the best time library available at the time.
Should and could golang have been so much better than it is? Would golang have been better if Pike and co. had considered use-cases outside of Google, or looked outward for inspiration even just a little? Unambiguously yes, and none of the changes would have needed it to sacrifice its priorities of language simplicity, compilation speed, etc.
It is absolutely okay to feel that go is a better language than some of its predecessors while at the same time being utterly frustrated at the very low-hanging, comparatively obvious, missed opportunities for it to have been drastically better.
Go’s more chaotic approach to allow strings to have non-Unicode contents is IMO more ergonomic. You validate that strings are UTF-8 at the place where you care that they are UTF-8. (So I’m agreeing.)
WTF-8 has some inconvenient properties. Concatenating two strings requires special handling. Rust's opaque types can patch over this but I bet Go's WTF-8 handling exposes some unintuitive behavior.
There is a desire to add a normal string API to OsStr but the details aren't settled. For example: should it be possible to split an OsStr on an OsStr needle? This can be implemented but it'd require switching to OMG-WTF-8 (https://rust-lang.github.io/rfcs/2295-os-str-pattern.html), an encoding with even more special cases. (I've thrown my own hat into this ring with OsStr::slice_encoded_bytes().)
The current state is pretty sad yeah. If you're OK with losing portability you can use the OsStrExt extension traits.
IMO the differences with Windows are such that I’m much more unhappy with WTF-8. There’s a lot that sucks about C++ but at least I can do something like
Mind you this sucks for a lot of reasons, one big reason being that you're directly exposed to the differences between path representations on different operating systems. Despite all the ways that this (above) sucks, I still generally prefer it over the approaches of Go or Rust.
The problem with this, as with any lack of static typing, is that you now have to validate at _every_ place that cares, or carefully track whether a value has already been validated, instead of validating once and letting the compiler check that it happened.
Validation is nice but Rust’s principled approach leaves me high and dry sometimes. Maybe Rust will finish figuring out the OsString interface and at that point we can say Rust has “won” the conversation, but it’s not there yet, and it’s been years.
Except when it doesn’t and then you have to deal with fucking Cthulhu because everyone thought they could just make incorrect assumptions that aren’t actually enforced anywhere because “oh that never happens”.
That isn’t engineering. It’s programming by coincidence.
> Maybe Rust will finish figuring out the OsString interface
The entire reason OsString is painful to use is because those problems exist and are real. Golang drops them on the floor and forces you pick up the mess on the random day when an unlucky end user loses data. Rust forces you to confront them, as unfortunate as they are. It's painful once, and then the problem is solved for the indefinite future.
Rust also provides OsStrExt if you don't care about portability, which removes many of these issues.
I don’t know how that’s not ideal: mistakes are hard, but you can opt into better ergonomics if you don’t need the portability. If you end up needing portability later, the compiler will tell you that you can’t use the shortcuts you opted into.
It seems like there's some confusion in the GGGGGP post, since Go works correctly even if the filename is not valid UTF-8 .. maybe that's why they haven't noticed any issues.
https://github.com/golang/go/issues/32334
oops, looks like some files are just inaccessible to you, and you cannot copy them.
Fortunately, when you try to delete the source directory, Go's standard library enters an infinite loop, which saves your data.
https://github.com/golang/go/issues/59971
I guess the issue isn't so much about whether strings are well-formed, but about whether the conversion (eg, from UTF-16 to UTF-8 at the filesystem boundary) raises an error or silently modifies the data to use replacement characters.
I do think that is the main fundamental mistake in Go's Unicode handling; it tends to use replacement characters automatically instead of signalling errors. Using replacement characters is at least conformant to Unicode but imo unless you know the text is not going to be used as an identifier (like a filename), conversion should instead just fail.
The other option is using some mechanism to preserve the errors instead of failing quietly (replacement) or failing loudly (raise/throw/panic/return err), and I believe that's what they're now doing for filenames on Windows, using WTF-8. I agree with this new approach, though would still have preferred they not use replacement characters automatically in various places (another one is the "json" module, which quietly corrupts your non-UTF-8 and non-UTF-16 data using replacement characters).
Probably worth noting that the WTF-8 approach works because strings are not validated; WTF-8 involves converting invalid UTF-16 data into invalid UTF-8 data such that the conversion is reversible. It would not be possible to encode invalid UTF-16 data into valid UTF-8 data without changing the meaning of valid Unicode strings.
I think this is sensible, because the fact that Windows still uses UTF-16 (or more precisely "Unicode 16-bit strings") in some places shouldn't need to complicate the API on other platforms that didn't make the UCS-2/UTF-16 mistake.
It's possible that the WTF-8 strings might not concatenate the way they do in UTF-16 or properly enforced WTF-8 (which has special behaviour on concatenation), but they'll still round-trip to the intended 16-bit string, even after concatenation.
Most of the time if there's a result, there's no error. If there's an error, there's no result. But don't forget to check every time! And make sure you don't make a mistake when you're checking and accidentally use the value anyway, because even though it's technically meaningless it's still nominally a meaningful value since zero values are supposed to be meaningful.
Oh and make sure to double-check the docs, because the language can't let you know about the cases where both returns are meaningful.
The real world is messy. And golang doesn't give you advance warning on where the messes are, makes no effort to prevent you from stumbling into them, and stands next to you constantly criticizing you while you clean them up by yourself. "You aren't using that variable any more, clean that up too." "There's no new variables now, so use `err =` instead of `err :=`."
Nothing? Neither Go nor the OS require file names to be UTF-8, I believe
It breaks. Which is weird because you can create a string which isn't valid UTF-8 (eg "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98") and print it out with no trouble; you just can't pass it to e.g. `os.Create` or `os.Open`.
(Bash and a variety of other utils will also complain about it not being valid UTF-8; neovim won't save a file under that name; etc.)
If you stuff random binary data into a string, Go just steams along, as described in this post.
Over the decades I have lost data to tools skipping non-UTF-8 filenames. I should not be blamed for having files that were named before UTF-8 existed.
Yes, that was my assumption when bash et al also had problems with it.
You can do something like WTF-8 (not a misspelling, alas) to make it bidirectional. Rust does this under the hood but doesn’t expose the internal representation.
In general, Windows filenames are Unicode and you can always express those filenames by using the -W APIs (like CreateFileW()).
The upshot is that since the values aren’t always UTF-16, there’s no canonical way to convert them to single byte strings such that valid UTF-16 gets turned into valid UTF-8 but the rest can still be roundtripped. That’s what bastardized encodings like WTF-8 solve. The Rust Path API is the best take on this I’ve seen that doesn’t choke on bad Unicode.
In Linux, they’re 8-bit almost-arbitrary strings like you noted, and usually UTF-8. So they always have a convenient 8-bit encoding (I.e. leave them alone). If you hated yourself and wanted to convert them to UTF-16, however, you’d have the same problem Windows does but in reverse.
Score another for Rust's Safety Culture. It would be convenient to just have &str as an alias for &[u8] but if that mistake had been allowed all the safety checking that Rust now does centrally has to be owned by every single user forever. Instead of a few dozen checks overseen by experts there'd be myriad sprinkled across every project and always ready to bite you.
https://github.com/rust-lang/rfcs/issues/2692
Of the top of my head, in order of likely difficulty to calculate: byte length, number of code points, number of grapheme/characters, height/width to display.
Maybe it would be best for Str not to have len at all. It could have bytes, code_points, graphemes. And every use would be precise.
The answer here isn't to throw up your hands, pick one, and other cases be damned. It's to expose them all and let the engineer choose. To not beat the dead horse of Rust, I'll point that Ruby gets this right too.
Similarly, each of those "views" lets you slice, index, etc. across those concepts naturally. Golang's string is the worst of them all. They're nominally UTF-8, but nothing actually enforces it. But really they're just buckets of bytes, unless you send them to APIs that silently require them to be UTF-8 and drop them on the floor or misbehave if they're not.
Height/width to display is font-dependent, so it can't just be a property of a "string"; it needs an object with additional context.
FWIW the docs indicate that working with grapheme clusters will never end up in the standard library.
If your API takes &str, and tries to do byte-based indexing, it should almost certainly be taking &[u8] instead.
I mean, really, neither should be the default. You should have to pick chars or bytes on use, but I don't think that's palatable; most languages have chosen one or the other as the preferred form. Some had the joy of being forward-thinking in the 90s and were built around UCS-2, later extended to UTF-16, so they've got 16-bit 'characters' with some code points that take two of them. Of course, dealing with operating systems means dealing with whatever they have as well as what the language prefers (or, as discussed elsewhere in this thread, pretending it doesn't exist to make easy things easier and hard things harder).
However no &str is not "an alias for &&String" and I can't quite imagine how you'd think that. String doesn't exist in Rust's core, it's from alloc and thus wouldn't be available if you don't have an allocator.
It's far better to get some � when working with messy data instead of applications refusing to work and erroring out left and right.
So that means that for 99% of scenarios, the difference between char[] and a proper utf8 string is none. They have the same data representation and memory layout.
The problem comes in when people start using string like they use string in PHP. They just use it to store random bytes or other binary data.
This makes no sense with the string type. String is text, but now we don't have text. That's a problem.
We should use byte[] or something for this instead of string. That's an abuse of string. I don't think requiring strings to be text is too constraining - that's what a string is!
One of the great advances of Unix was that you don't need separate handling for binary data and text data; they are stored in the same kind of file and can be contained in the same kinds of strings (except, sadly, in C). Occasionally you need to do some kind of text-specific processing where you care, but the rest of the time you can keep all your code 8-bit clean so that it can handle any data safely.
Languages that have adopted the approach you advocate, such as Python, frequently have bugs like exception tracebacks they can't print (because stdout is set to ASCII) or filenames they can't open when they're passed in on the command line (because they aren't valid UTF-8).
We can try to shove it into objects that work on other text but this won't work in edge cases.
Like if I take text on Linux and try to write a Windows file with that text, it's broken. And vice versa.
Go lets you do the broken thing. In Rust, or even using libraries in most languages, you can't. You have to specifically convert between them.
That's what I mean when I say "storing random binary data as text". Sure, Windows' almost-UTF-16 abomination is kind of text, but not really. It's its own thing, which requires a different type of string OR converting it to a normal string.
It may be legacy cruft downstream of poorly thought out design decisions at the system/OS level, but we're stuck with it. And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
There is room for languages that explicitly make the tradeoff of being easy to use (e.g. a unified string type) at the cost of not handling many real world edge cases correctly. But these should not be used for serious things like backup systems where edge cases result in lost data. Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
Yes, this is why all competent libraries don't actually use a plain string for paths. They have their own path data type, because it really is a different data type.
Again, you can do the Go thing and just use the broken string, but that's dumb and you shouldn't. They should look at C++ std::filesystem, it's actually quite good in this regard.
> And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
I agree, even PHP does a better job at this than Go, which is really saying something.
> Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
I would agree.
What is different about it? I don't see any constraints here relevant to having a different type. Note that this thread has already confused the issue, because they said filename and you said path. A path can contain /, it just happens to mean something.
If you want a better abstraction to locations of files on disk, then you shouldn't use paths at all, since they break if the file gets moved.
Typically the way you do this is you have the constructor for path do the validation or you use a static path::fromString() function.
Also paths breaking when a file is moved is correct behavior sometimes. For example something like openFile() or moveFile() requires paths. Also path can be relative location.
Can it? If you want to open a file with invalid UTF8 in the name, then the path has to contain that.
And a path can contain the path separator - it's the filename that can't contain it.
> For example something like openFile() or moveFile() requires paths.
macOS has something called bookmark URLs that can contain things like inode numbers or addresses of network mounts. Apps use it to remember how to find recently opened files even if you've reorganized your disk or the mount has dropped off.
IIRC it does resolve to a path so it can use open() eventually, but you could imagine an alternative. Well, security issues aside.
Stuff like this matters a great deal on the standard library level.
You should always be able to iterate the code points of a string, whether or not it's valid Unicode. The iterator can either silently replace any errors with replacement characters, or denote the errors by returning eg, `Result<char, Utf8Error>`, depending on the use case.
All languages that have tried restricting Unicode afaik have ended up adding workarounds for the fact that real world "text" sometimes has encoding errors and it's often better to just preserve the errors instead of corrupting the data through replacement characters, or just refusing to accept some inputs and crashing the program.
In Rust there's bstr/ByteStr (currently being added to std), awkward having to decide which string type to use.
In Python there's PEP-383/"surrogateescape", which works because Python strings are not guaranteed valid (they're potentially ill-formed UTF-32 sequences, with a range restriction). Awkward figuring out when to actually use it.
In Raku there's UTF8-C8, which is probably the weirdest workaround of all (left as an exercise for the reader to try to understand .. oh, and it also interferes with valid Unicode that's not normalized, because that's another stupid restriction).
Meanwhile the Unicode standard itself specifies Unicode strings as being sequences of code units [0][1], so Go is one of the few modern languages that actually implements Unicode (8-bit) strings. Note that at least two out of the three inventors of Go also basically invented UTF-8.
[0] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...
> Unicode string: A code unit sequence containing code units of a particular Unicode encoding form.
[1] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...
> Unicode strings need not contain well-formed code unit sequences under all conditions. This is equivalent to saying that a particular Unicode string need not be in a Unicode encoding form.
If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.
https://doc.rust-lang.org/std/primitive.str.html#invariant
> Constructing a non-UTF-8 string slice is not immediate undefined behavior, but any function called on a string slice may assume that it is valid UTF-8, which means that a non-UTF-8 string slice can lead to undefined behavior down the road.
Again, this is the same "simplistic" versus "just the right abstraction" tradeoff; this approach just smudges the complexity over a much larger surface area.
If you have a byte array that is not utf-8 encoded, then just... use a byte array.
The entire point of UTF-8 (designed, by the way, by the group that designed Go) is to encode Unicode in such a way that these byte string operations perform the corresponding Unicode operations, precisely so that you don't have to care whether your string is Unicode or just plain ASCII, so you don't need any error handling, except for the rare case where you want to do something related to the text that the string semantically represents. The only operation that doesn't really map is measuring the length.
Every single thing you listed here is supported by the &[u8] type. That's the point: if you want to operate on data without assuming it's valid UTF-8, you just use &[u8] (or allocating Vec<u8>), and the standard library offers what you'd typically want, except for the functions that assume the string is valid UTF-8 (like e.g. iterating over code points). If you want those, you need to convert your &[u8] to &str, and the process of conversion forces you to check for conversion errors.
So you naturally write another one of these functions that takes a `&str` so that it can pass to another function that only accepts `&str`.
Fundamentally no one actually requires validation (ie, walking over the string an extra time up front), we're just making it part of the contract because something else has made it part of the contract.
So, literally any program anyone writes in Rust will crash if you attempt to pass it that filename, if it uses the manual's recommended way to accept command-line arguments. It might work fine for a long time, in all kinds of tests, and then blow up in production when a wild file appears with a filename that fails to be valid Unicode.
This C program I just wrote handles it fine:
(I probably should have used O_TRUNC.)
Here you can see that it does successfully copy that file:
The Rust manual page linked above explains why they think introducing this bug by default into all your programs is a good idea, and how to avoid it:
> Note that std::env::args will panic if any argument contains invalid Unicode. If your program needs to accept arguments containing invalid Unicode, use std::env::args_os instead. That function returns an iterator that produces OsString values instead of String values. We've chosen to use std::env::args here for simplicity because OsString values differ per platform and are more complex to work with than String values.
I don't know what's "complex" about OsString, but for the time being I'll take the manual's word for it.
So, Rust's approach evidently makes it extremely hard not to introduce problems like that, even in the simplest programs.
Go's approach doesn't have that problem; this program works just as well as the C program, without the Rust footgun:
(O_CREATE makes me laugh. I guess Ken did get to spell "creat" with an "e" after all!)
This program generates a much less clean strace, so I am not going to include it.
You might wonder how such a filename could arise other than as a deliberate attack. The most common scenario is when the filenames are encoded in a non-Unicode encoding like Shift-JIS or Latin-1, followed by disk corruption, but the deliberate attack scenario is nothing to sneeze at either. You don't want attackers to be able to create filenames your tools can't see, or turn to stone if they examine, like Medusa.
Note that the log message on error also includes the ill-formed Unicode filename:
But it didn't say ζ. It actually emitted a byte with value 129, making the error message ill-formed UTF-8. This is obviously potentially dangerous, depending on where that logfile goes, because it can include arbitrary terminal escape sequences. But note that Rust's UTF-8 validation won't protect you from that, or from things like this:

I'm not bagging on Rust. There are a lot of good things about Rust. But its string handling is not one of them.

> Note that &[u8] would allow things like null bytes, and maybe other edge cases.
Yes, and that's a good thing. It allows any code that gets &str/String to assume that the input is valid UTF-8. The alternative would be that every single time you write a function that takes a string as an argument, you have to analyze your code, consider what would happen if the argument was not valid UTF-8, and handle that appropriately. You'd also have to redo the whole analysis every time you modify the function. That's a horrible waste of time; it's much better to:
1) Convert things to String early, and assume validity later, and
2) Make functions that explicitly don't care about validity take &[u8] instead.
This is, of course, exactly what Rust does: I am not aware of a single thing that &str allows you to do that you cannot do with &[u8], except things that do require you to assume it's valid UTF-8.
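The "validate at the boundary, assume validity afterwards" pattern translates to Go too, if you want the same guarantee there; a sketch (`toValidString` is a name I made up):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// toValidString is a hypothetical boundary check: accept arbitrary
// bytes, validate once, and only then treat the result as known-good
// UTF-8 in the rest of the program.
func toValidString(b []byte) (string, bool) {
	if !utf8.Valid(b) {
		return "", false
	}
	return string(b), true
}

func main() {
	if s, ok := toValidString([]byte("héllo")); ok {
		fmt.Println("valid:", s)
	}
	if _, ok := toValidString([]byte{0x81}); !ok {
		fmt.Println("rejected invalid UTF-8")
	}
}
```

Nothing in Go forces this check on you, though, which is exactly the difference being debated.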
Doesn't this demonstrate my point? If you can do everything with &[u8], what's the point in validating UTF-8? It's just a less universal string type, and your program wastes CPU cycles doing unnecessary validation.
You're meant to use `unsafe` as a way of limiting the scope of reasoning about safety.
Once you construct a `&str` using `from_utf8_unchecked`, you can't safely pass it to any other function without looking at its code and reasoning about whether it's still safe.
Also see the actual documentation: https://doc.rust-lang.org/std/primitive.str.html#method.from...
> Safety: The bytes passed in must be valid UTF-8.
Because 99.999% of the time you want it to be valid and would like an error if it isn't? If you want to work with invalid UTF-8, that should be a deliberate choice.
[]rune is for sequences of Unicode code points. rune is an alias for int32. string, I think, is basically a read-only []byte.
Consider:
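(The snippet didn't survive formatting; presumably it was something along these lines, using a 6-byte string made of two 3-byte runes:)

```go
package main

import "fmt"

func main() {
	s := "日本" // two characters, 6 bytes of UTF-8
	fmt.Println(len(s)) // 6: len counts bytes, not characters
	for i, r := range s {
		// range over a string decodes runes, not bytes
		fmt.Println(i, string(r))
	}
}
```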
How many times does that loop over 6 bytes iterate? The answer is that it iterates twice, with i=0 and i=3.

There are also quite a few standard APIs that behave weirdly if a string is not valid UTF-8, which wouldn't be the case if it were just a bag of bytes.
They could support passing the filename as `string | []byte`. But wait, Go does not even have union types.
This is one of the minor errors in the post.
I've said this before, but much of Go's design looks like it's imitating the C++ style at Google. The comments where I see people saying they like something about Go it's often an idiom that showed up first in the C++ macros or tooling.
I used to check this before I left Google, and I'm sure it's becoming less true over time. But to me it looks like the idea of Go was basically "what if we created a Python-like compiled language that was easier to onboard than C++ but which still had our C++ ergonomics?"
But certainly, anyone will bring their previous experience to the project, so there must be some Plan 9 influence in there somewhere.
Then make it valid UTF-8. If you try to solve the long tail of issues in a commonly used library function, it's going to cause a lot of pain. This approach is better. If someone has a weird problem like file names with invalid characters, they can solve it themselves, even publish a package. Why complicate 100% of uses to solve 0.01% of issues?
I think you misunderstand. How do you do that for a file that exists on disk that's trying to be read? Rename it for them? They may not like that.
I know it's mostly a matter of taste, but darn, it feels horrible. And there are no default parameter values, and the error handling smells bad, and no real stack trace in production. And the "object orientation" syntax, adding some ugly reference to each function. And the pointers...
It took me back to my C/C++ days. Like programming with 25 year old technology from back when I was in university in 1999.
Many compiled languages are very slow to compile however, especially for large projects, C++ and rust being the usual examples.
I feel people who complain about rustc compile times must be new to using compiled languages…
Make use of binary libraries, export templates, incremental compilation and linking with multiple cores, and if using VC++ or clang vLatest, modules.
It still isn't Delphi fast, but becomes more manageable.
Sure it's good compared to like... C++. Is go actually competing with C++? From where I'm standing, no.
But compared to what you might actually use Go for... The tooling is bad. PHP has better tooling, dotnet has better tooling, Java has better tooling.
And sure, it is welcome from a dev POV on one hand, though from an ecosystem perspective, more languages are not necessarily good as it multiplies the effort required.
Especially given how the language was criticised back in 1996.
Are Java AOT compilation times just as fast as Go?
Why not? Machine code is not all that special - C++ and Rust are slow due to optimizations, not because machine code is the target. Go "barely does anything", just spits out machine code almost as is.
Java AOT via GraalVM's native image is quite slow, but it has a different way of working (doing all the Java class loading and initialization and "baking" that into the native image).
But those are not rules. If you're doing stuff for fun, check out QBE <https://c9x.me/compile/> or Plan 9 C <https://plan9.io/sys/doc/comp.html> (which Go was derived from!)
It's really not. Proebsting's Law applies.
Given that, compilers/languages should be optimized for programmer productivity first and code speed second.
It feels often like the two principles they stuck/stick to are "what makes writing the compiler easier" and "what makes compilation fast". And those are good goals, but they're only barely developer-oriented.
Like, yes, those ideas have frequently been driven too far and have led to their own pain points. But people also seem to frequently rediscover that removing them entirety will lead to pain, too.
(1) "Generics are too complicated and academical and in the real world we only need them for a small number of well-known tasks anyway, so let's just leave them out!"
(2) The amount of code that does need generics but now has to work around the lack of them piles up, leading to an explosion of different libraries, design patterns, etc, that all try to partially recreate them in their own way.
(3) The language designers finally cave and introduce some kind of generics support in a later version of the language. However, at this point, they have to deal with all the "legacy" code that is not generics-aware and with runtime environments that aren't either. It also somehow has to play nice with all the ad-hoc solutions that are still present. So the new implementation has to deal with a myriad of special cases and tradeoffs that wouldn't be there in the first place if it had been included in the language from the beginning.
(4) All the tradeoffs give the feature a reputation of needless complexity and frustrating limitations and/or footguns, prompting the next language designer to wonder if they should include them at all. Go to (1) ...
The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Python is a mess with the GIL and async libraries that are hard to reason with. C, C++, Java, etc. need external libraries to implement threading, which can't be reasoned about in the context of the language itself.
So, go is a perfect fit for the http server (or service) usecase and in my experience there is no parallel.
Elixir handling 2 million websocket connections on a single machine back in 2015 would like to have a word.[1] This is largely thanks to the Erlang runtime it sits atop.
Having written some tricky Go (I implemented Raft for a class) and a lot of Elixir (professional development), it is my experience that Go's concurrency model works for a few cases but largely sucks in others and is way easier to write footguns in Go than it ought to be.
[1]: https://phoenixframework.org/blog/the-road-to-2-million-webs...
I recently realized that there is no easy way to "bubble up a goroutine error", and I wrote some code to make sure that was possible, and that's when I realized, as usual, that I was rewriting part of the OTP library.
The whole supervisor mechanism is so valuable for concurrency.
Java does not need external libraries to implement threading, it's baked into the language and its standard libraries.
What do you mean by this for Java? The library is the runtime that ships with Java, and while they're OS threads under the hood, the abstraction isn't all that leaky, and it doesn't feel like they're actually outside the JVM.
Working with them can be a bit clunky, though.
I believe it’s the only system you know. But it’s far from the only one.
I'd love to see a list of these, with any references you can provide.
You wanted sources, here's the chapter on tasks and synchronization in the Ada LRM: http://www.ada-auth.org/standards/22rm/html/RM-9.html
For Erlang and Elixir, concurrent programming is pretty much their thing so grab any book or tutorial on them and you'll be introduced to how they handle it.
In Go's category, there's Java, Haskell, OCaml, Julia, Nim, Crystal, Pony...
Dynamic languages are more likely to have green threads but aren't Go replacements.
You list three that don't, and then you go on to list seven languages that do.
Yes, not many languages support concurrency like Go does...
So it's really Go vs. Java, or you can take a performance hit and use Erlang (valid choice for some tasks but not all), or take a chance on a novel paradigm/unsupported language.
That's 6 languages, a non-exhaustive list of them, that are either properly mainstream and more popular than Go or at least well-known and easy to obtain and get started with. All of which have concurrency baked in and well-supported (unlike, say, C).
EDIT: And one more thing, all but Elixir are older than Go, though Clojure only slightly. So prior art was there to learn from.
Source: spent the last few weeks at work replacing a Go program with an Elixir one instead.
I'd use Go again (without question) but it is not a panacea. It should be the default choice for CLI utilities and many servers, but the notion that it is the only usable language with something approximating CSP is idiotic.
I thought it was a seldom mentioned fact in Go that CSP systems are impossible to reason about outside of toy projects so everyone uses mutexes and such for systemic coordination.
I'm not sure I've even seen channels in a production application used for anything more than stopping a goroutine, collecting workgroup results, or something equally localized.
But yeah, the CSP model is mostly dead. I think the language authors' insistence that goroutines should not be addressable or even preemptible from user code makes this inevitable.
Practical Go concurrency owes more to its green threads and colorless functions than its channels.
Functions are colored: those taking context.Context and those that don't.
But I agree, this is very faint coloring compared to async implementations. One is free to context.Background() liberally.
Once you know about it, though, it's easy to avoid. I do think, especially given that the CSP features of Go are downplayed nowadays, this should be addressed more prominently in the docs, with the more realistic solutions presented (atomics, mutexes).
It could also potentially be addressed using 128-bit atomics, at least for strings and interfaces (whereas slices are too big, taking up 3 words). The idea of adding general 128-bit atomic support is on their radar [2] and there already exists a package for it [3], but I don't think strings or interfaces meet the alignment requirements.
[1]: https://research.swtch.com/gorace
[2]: https://github.com/golang/go/issues/61236
[3]: https://pkg.go.dev/github.com/CAFxX/atomic128
There are also runtimes like e.g. Hermes (used primarily by React Native), where there's support for separating operations between the graphics thread and other threads.
All that being said, I won't dispute OP's point about "handling concurrency [...] within the language" - multithreading and concurrency are baked into Go in a more fundamental way than in JavaScript. But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
> Within a worker thread, worker.getEnvironmentData() returns a clone of data passed to the spawning thread's worker.setEnvironmentData(). Every new Worker receives its own copy of the environment data automatically.
M:1 threaded means that the user space threads are mapped onto a single kernel thread. Go is M:N threaded: goroutines can be arbitrarily scheduled across various underlying OS threads. Its primitives (goroutines and channels) make both concurrency and parallelism notably simpler than most languages.
> But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
I’d personally disagree in this context. Almost every language has pthread-style cro-magnon concurrency primitives. The context for this thread is precisely how go differs from regular threading interfaces. Quoting gp:
> The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Yes other languages have threading, but in go both concurrency and parallelism are easier than most.
(But not erlang :) )
Basically OP was saying that JavaScript can run multiple tasks concurrently, but with no parallelism since all tasks map to 1 OS thread.
The tasks run concurrently, but not in parallel.
Granted, many people don't ever need to handle that kind of throughput. It depends on the app and the load put on it - many people don't realize that. Which is fine! If it works, it works. But if you do fall into the need for concurrency, yeah, you probably don't want to be using Node - even the newer versions. You certainly could do worse than golang. It's good we have some choices out there.
The other thing I always say is the choice in languages and technology is not for one person. It's for the software and team at hand. I often choose languages, frameworks, and tools specifically because of the team that's charged with building and maintaining. If you can make them successful because a language gives them type safety or memory safety that rust offers or a good tool chain, whatever it is that the team needs - that's really good. In fact, it could well be the difference between a successful business and an unsuccessful one. No one really cares how magical the software is if the company goes under and no one uses the software.
The issue is that it was a bit outdated in the choice of _which_ things to choose as the one Go way. People expect a map/filter method rather than a loop with off-by-one risks, a type system with the smartness of TypeScript (if less featured and more heavily enforced), error handling is annoying, and so on.
I get that it’s tough to implement some of those features without opening the way to a lot of “creativity” in the bad sense. But I feel like go is sometimes a hard sell for this reason, for young devs whose mother language is JavaScript and not C.
I agree with this. I feel like Go was a very smart choice to create a new language to be easy and practical and have great tooling, and not to be experimental or super ambitious in any particular direction, only trusting established programming patterns. It's just weird that they missed some things that had been pretty well hashed out by 2009.
Map/filter/etc. are a perfect example. I remember around 2000 the average programmer thought map and filter were pointlessly weird and exotic. Why not use a for loop like a normal human? Ten years later the average programmer was like, for loops are hard to read and are perfect hiding places for bugs, I can't believe we used to use them even for simple things like map, filter, and foreach.
By 2010, even Java had decided that it needed to add its "stream API" and lambda functions, because no matter how awful they looked when bolted onto Java, it was still an improvement in clarity and simplicity.
Somehow Go missed this step forward the industry had taken and decided to double down on "for." Go's different flavors of for are a significant improvement over the C/C++/Java for loop, but I think it would have been more in line with the conservative, pragmatic philosophy of Go to adopt the proven solution that the industry was converging on.
After Go added generics in version 1.18, you can just import someone else's generic implementations of whatever of these functions you want and use them all throughout your code and never think about it. It's no longer a problem.
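Or write them yourself in a few lines; a sketch (the `Map`/`Filter` names and signatures here are my own, not from any particular package):

```go
package main

import "fmt"

// Map applies f to every element of s and returns the results.
func Map[T, U any](s []T, f func(T) U) []U {
	out := make([]U, 0, len(s))
	for _, v := range s {
		out = append(out, f(v))
	}
	return out
}

// Filter keeps the elements of s for which keep returns true.
func Filter[T any](s []T, keep func(T) bool) []T {
	out := make([]T, 0, len(s))
	for _, v := range s {
		if keep(v) {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	evens := Filter([]int{1, 2, 3, 4}, func(n int) bool { return n%2 == 0 })
	fmt.Println(Map(evens, func(n int) int { return n * n })) // [4 16]
}
```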
Do they? After too many functional battles I started practicing what I'm jokingly calling "Debugging-Driven Development" and just like TDD keeps the design decisions in mind to allow for testability from the get-go, this makes me write code that will be trivially easy to debug (specially printf-guided debugging and step-by-step execution debugging)
Like, adding a printf in the middle of a for loop, without even needing to understand the logic of the loop. Just make a new line and write a printf. I grew tired of all those tight chains of code that iterate beautifully but later when in a hurry at 3am on a Sunday are hell to decompose and debug.
It's just that a ridiculous amount of steps in real world problems can be summarised as 'reshape this data', 'give me a subset of this set', or 'aggregate this data by this field'.
Loops are, IMO, very bad at expressing those common concepts briefly and clearly. They take a lot of screen space, usually accesory variables, and it isn't immediately clear from just seing a for block what you're about to do - "I'm about to iterate" isn't useful information to me as a reader, are you transforming data, selecting it, aggregating it?.
The consequence is that you usually end up with tons of lines like
userIds = getIdsfromUsers(users);
where the function is just burying a loop. Compare to:
userIds = users.pluck('id')
and you save the buried utility function somewhere else.
I think it's a bad trade-off, most languages out there are moving away from it
So for a large loop, code like

for i, value := range source { result[i] = value * 2 + 1 }

would be 2x faster than a loop like

for i, value := range source { intermediate[i] = value * 2 }
for i, value := range intermediate { result[i] = value + 1 }
For example, Rust iterators are lazily evaluated with early-exits (when filtering data), thus it's your first form but as optimized as possible. OTOH python's map/filter/etc may very well return a full list each time, like with your intermediate. [EDIT] python returns generators, so it's sane.
I would say that any sane language allowing functional-style data manipulation will have them as fast as manual for-loops. (that's why Rust bugs you with .iter()/.collect())
I always encounter these upsides once every few years when preparing leetcode interviews, where this kind of optimization is needed for achieving acceptable results.
In daily life, however, most of these chunks of data to transform fall in one of these categories:
- small size, where readability and maintainability matters much more than performance
- living in a db, and being filtered/reshaped by the query rather than code
- being chunked for atomic processing in a queue or similar (usual when importing a big chunk of data).
- the operation itself is a standard algorithm that you just consume from a standard library that handles the loop internally.

Much like trees and recursion, most of us don't flex that muscle often. Your mileage may vary depending on the domain, of course.
I assume, anyway. Maybe the Go debugger is kind of shitty, I don't know. But in PHP with xdebug you just use all the fancy array_* methods and then step through your closures or callables with the debugger.
I agree.
The Go std-lib is fantastic.
Also no dependency-hell with Go, unlike with Python. Just ship an oven-ready binary.
And what's the alternative?

Java? Licensing sagas requiring the use of divergent forks. Plus Go is easier to work with, perhaps especially for server-side deployments.

Zig? Rust? Complex learning curve. And having to choose e.g. Rust crates re-introduces dependency hell and the potential for supply-chain attacks.
Yeah, these are sagas only, because there is basically one, single, completely free implementation anyone uses on the server-side and it's OpenJDK, which was made 100% open-source and the reference implementation by Oracle. Basically all of Corretto, AdoptOpenJDK, etc are just builds of the exact same repository.
People bringing this whole license topic up can't be taken seriously; it's like saying that Linux is proprietary because you can pay for support at Red Hat.
So you mean all those universities and other places that have been forced to spend $$$ on licenses under the new regime also can't be taken seriously? Are you saying none of them took advice and had nobody on staff to tell them OpenJDK exists?
Regarding your Linux comment, some of us are old enough to remember the SCO saga.
Sadly Oracle have deeper pockets to pay more lawyers than SCO ever did ....
This info is actually quite surprising to me, never heard of it since everywhere I know switched to OpenJDK-based alternatives from the get-go. There was no reason to keep on the Oracle one after the licencing shenanigans they tried to play.
Why do these places kept the Oracle JDK and ended up paying for it? OpenJDK was a drop-in replacement, nothing of value is lost by switching...
See link/quote in my earlier reply above.
Weird, to me that is a strong argument. Choose your stewards.
I don't know what/which university you talk about, but I'm sure they were also "forced to pay $$$" for their water bills and whatnot. If they decided to go with paid support, then.. you have to pay for it. In exchange you can a) point your finger at a third-party if something goes wrong (which governments love doing/often legally necessary) b) get actual live support on Christmas Eve if needed.
Quote from this article:[1]
[1] https://www.theregister.com/2025/06/13/jisc_java_oracle/

Also, as another topic, Oracle is doing audits specifically because their software doesn't phone home to check licenses and stuff like that - which is a crucial requirement for their intended target demographics: big government organizations, safety-critical systems, etc. A whole country's healthcare system, or a nuclear power plant, can't just stop because someone forgot to pay the bill.
So instead, Oracle just visits companies that have a license with them and checks what is being used to determine whether it's in accord with the existing contract. And yeah, in this respect I have also heard a couple of stories where a company was not using the software per the letter of the contract, e.g. accidentally enabling this or that, and at the audit the Oracle salesman said they would ignore the mistake if the company subscribed to a larger package - which most managers will gladly accept, as they can avoid the blame. That is a questionable business practice, but it still doesn't have anything to do with OpenJDK.
The article tries very hard to draw a connection between the licensing costs for the universities and Oracle auditing random java downloads, but nobody actually says that this is what happened.
The waiver of historic fees goes back to the last licensing change where Oracle changed how licensing fees would be calculated. So it seems reasonable that Oracle went after them because they were paying customers that failed to pay the inflated fees.
I’m only a casual user of both but how are rust crates meaningfully different from go’s dependency management?
There is a difference between "small" and Rust's, which is, for all intents and purposes, non-existent.
I mean, in 2025, not having crypto in stdlib when every man and his dog is using crypto? Or http when every man and his dog are calling REST APIs?
As the other person who replied to you said. Go just allows you to hit the ground running and get on with it.
Having to navigate the world of crates, unofficially "blessed" or not is just a bit of a re-inventing the wheel scenario really....
P.S. The Go stdlib is also well maintained, so I don't really buy the specific "dead batteries" claim either.
I'm not and I'm glad the core team doesn't have to maintain an http server and can spend time on the low level features I chose Rust for.
Also, as mentioned by another comment, an HTTP or crypto library can become obsolete _fast_. What about HTTP3? What about post-quantum crypto? What about security fixes? The stdlib is tied to the language version, thus to a language release. Keeping such code independent allows it to evolve much faster, be leaner, and be more composable. So yes, the library is well maintained, but it's tied to the Go version.
Also, it enables breaking API changes if absolutely needed. I can name two precendents:
- in rust, time APIs in chrono had to be changed a few times, and the Rust maintainers were thankful it was not part of the stdlib, as it allowed massive changes
- otoh, in Go, it was found out that net.IP has an absolutely atrocious design (it's essentially just a []byte). Tailscale wrote a replacement that's now in the net/netip subpackage, but the old net.IP is set in stone. (https://tailscale.com/blog/netaddr-new-ip-type-for-go)
And if you're engaging in CS then Go is probably the last language you should be using. If, however, what you're interested in is programming, the fundamental data structures there are arrays and hashmaps, which Go has built in. Everything else is niche.
> Also, as mentioned by another comment, an HTTP or crypto library can become obsolete _fast_. What about HTTP3? What about post-quantum crypto? What about security fixes? The stdlib is tied to the language version, thus to a language release. Having such code independant allows is to evolve much faster, be leaner, and be more composable. So yes, the library is well maintained, but it's tied to the Go version.
The entire point is to have a well supported crypto library. Which Go does and it's always kept up to date. Including security fixes.
As for post-quantum: https://words.filippo.io/mlkem768/
> - otoh, in Go, it was found out that net.IP has an absolutely atrocious design (it's essentially just a []byte). Tailscale wrote a replacement that's now in the net/netip subpackage, but the old net.IP is set in stone. (https://tailscale.com/blog/netaddr-new-ip-type-for-go)
Yes, and? This seems to me to be the perfect way to handle things - at all times there is a blessed high-quality library to use. As warts of its design get found out over time, a new version is worked on and released once every ~10 years.
A total mess of barely-supported libraries that the userbase is split over is just that - a mess.
That works well for go and Google but I'm not sure how easily that'd be to replicate with rust or others
The downside of a small stdlib is the proliferation of options, and you suddenly discover(ed?, it's been a minute) that your async package written for Tokio won't work on async-std and so forth.
This has often been the case in Go too - until `log/slog` existed, lots of people chose a structured logger and made it part of their API, forcing it on everyone else.
e.g. iirc. Rust has multiple ways of handling Strings while Go has (to a big extent) only one (thanks to the GC)
No, none outside of stdlib anyway in the way you're probably thinking of.
There are specialized constructs which live in third-party crates, such as rope implementations and stack-to-heap growable Strings, but those would have to exist as external modules in Go as well.
you can go `uv run script.py` and it'll automatically fetch the libraries and run the script in a virtual environment.
Still no match for Go though, shipping a single cross-compiled binary is a joy. And with a bit of trickery you can even bundle in your whole static website in it :) Works great when you're building business logic with a simple UI on top.
You really come to appreciate when these batteries are included with the language itself. That Go binary will _always_ run but that Python project won't build in a few years.
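The "bundle your whole static website" trick mentioned above is the standard `embed` package (Go 1.16+); a minimal sketch, assuming a local `static/` directory of assets exists at build time:

```go
package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
)

// The contents of static/ are compiled into the binary itself,
// so nothing needs to ship alongside it.
//
//go:embed static
var assets embed.FS

func main() {
	// Strip the "static/" prefix so files are served from "/".
	site, err := fs.Sub(assets, "static")
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/", http.FileServer(http.FS(site)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```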
I would presume only a go.mod entry would specify whether it really is v3.0.0 or v3.0.1
Also, for future generations, don't use that package https://github.com/go-yaml/yaml#this-project-is-unmaintained
Yeah, but you still have to install `uv` as a pre-requisite.
And you still end up with a virtual environment full of dependency hell.
And then of course we all remember that whole messy era when Python 2 transitioned to Python 3, and then deferred it, and deferred it again....
You make a fair point, of course it is technically possible to make it (slightly) "cleaner". But I'll still take the Go binary thanks. ;-)
No, there is no dependency hell in the venv.
Python 2 to 3: are you really still kicking that horse? It's dead...please move on.
The ergonomics for this use case are better than in any language I ever used.
But it is being put. Read newsletters like "The Go Blog" and "Go Weekly" - it's been improving constantly. Language changes require lots of time to be done right, but the language is evolving.
I have absolutely no idea how go would solve this problem, and in fact I don't think it does at all.
> The Go std-lib is fantastic.
I have seen worse, but I would still not call it decent considering this is a fairly new language that could have done a lot more.
I am going to ignore the incredible amount of asinine and downright wrong stuff in many of the most popular libraries (even the basic ones maintained by google) since you are talking only about the stdlib.
Off the top of my head, I found inconsistent tag management for structs (json defaults, omitzero vs omitempty), not even errors on tag typos, the reader/writer pattern that forces you to write custom connectors between the two, bzip2 has a reader but no writer, the context linked list for K/V. Just look at the consistency of the interfaces in the "encoding" pkg and cry; the package `hash` should really be `checksum`. Why do `strconv.Atoi`/`Itoa` still exist? Time.Add() vs Time.Sub()...
It's chock full of inconsistencies. It forces me to look at the documentation every single time I haven't used something for more than a couple of days. No, the autocomplete with the 2-line documentation does not include the potential pitfalls that are explained only at the top of the package.
And please don't get me started on the wrappers I had to write around stuff in the net library to make it a bit more consistent or just less plain wrong. net/url.Parse!!! I said don't get me started on this package! nil vs NoBody! ARGH!
None of this is stuff at the language level (of which there is plenty to say).
None of it is a dealbreaker per se, but it adds attrition and becomes death by a billion cuts.
I don't even trust any parser written in go anymore, I always try to come up with corner cases to check how it reacts, and I am often surprised by most of them.
Sure, there are worse languages and libraries. Still not something I would pick up in 2025 for a new project.
Yes, my favourite is the `time` package. It's just so elegant how it's just a number under there; the nominal type system truly shines. And using it is a treat. What do you mean I can do `+= 8*time.Hour` :D
It's simplistic, and that's nice for small tools or scripts, but at scale it becomes really brittle since none of the edge cases are handled.
Internally, time.Duration is a single 64-bit count, while time.Time is two more complicated 64-bit fields plus a location.
In Go, `int * Duration = error`, but `Duration * Duration = Duration`!
If you have an int variable hours := 8, you have to cast it before multiplying.
This is also true for simple int and float operations.
Something like 3*f (with f a float64) is valid, but x := 3 would need float64(x)*f to be valid. Same is true for addition etc.

Other than having to periodically remember what 0-padded milliseconds are or whatever, this isn't a huge deal.
https://pkg.go.dev/time#Layout
The code was on the hot path of their central routing server, handling billions (with a B) of messages a second or something crazy like that.
You're not building Discord, the GC will most likely never be even a blip in your metrics. The GC is just fine.
I don't have a lot of experience with the malloc languages at scale, but I do know that heap fragmentation and GC fragmentation are very similar problems.
There are techniques in GC languages to avoid GC like arena allocation and stuff like that, generally considered non-idiomatic.
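One such technique that the stdlib itself blesses is sync.Pool; a sketch of reusing buffers instead of letting the GC churn through short-lived allocations (render is an illustrative name):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable buffers instead of allocating a fresh one
// per call; the GC never sees most of these allocations.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // reset before returning to the pool
		bufPool.Put(buf)
	}()
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("world"))
}
```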
This tends to be true for most languages, even the ones with easier concurrency support. Using it correctly is the tricky part.
I have no real problem with the portability. The area I see Go shining in is stuff like AWS Lambda where you want fast execution and aren't distributing the code to user systems.
In what universe?
Is it the best or most robust or can you do fancy shit with it? No
But it works well enough to release reliable software along with the massive linter framework that's built on top of Go.
I wonder why that ended up being necessary... ;)
I quite often see devs introducing them in other languages like TypeScript, but it just doesn't work as well when it's introduced in userland (usually you just end up with a small island of the codebase following this standard).
I think they only work if the language is built around it. In Rust, it works, because you just can't deref an Optional type without matching it, and the matching mechanism is much more general than that. But in other languages, it just becomes a wart.
As I said, some kind of type annotation would be most go-like, e.g.
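One possible shape, purely hypothetical syntax that is not valid Go (and not necessarily what was meant):

```
// '!' here is an invented marker for "may not be dereferenced
// without a nil check"; Go has no such annotation today.
func process(ptr *Config!) {
    if ptr != nil {
        use(*ptr) // allowed: inside the nil check
    }
    use(*ptr)     // compile error under this hypothetical rule
}
```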
You would only be allowed to touch *ptr inside an `if ptr != nil { ... }`. There's a linter from Uber (nilaway) that works like that, except for the type annotation. That proposal would break existing code, so perhaps an explicit marker for non-nil pointers is needed instead (but that's not very ergonomic, alas).

Go has chosen explicit over implicit everywhere except initialization—the one place where I really needed "explicit."
[1] ZGC has basically decoupled the heap size from the pause time, at that point you get longer pauses from the OS scheduler than from GC.
I got insta-rejected in an interview when I said this in response to the interview panel's question about 'thoughts about golang'.
Like, they said 'interview is over' and showed me the (virtual) door. I was stunned lol. This was during peak golang mania. Not sure what happened to rancherlabs.
It’s part trying to keep a common direction and part fear that dislike of their tech risks the hire not staying for long.
I don’t agree with this approach, don’t get me wrong, but I’ve seen it done and it might explain your experience.
Well, so long as you don't care about compatibility with the broad ecosystem, you can write a perfectly fine Optional yourself:
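For example, a minimal home-grown Optional might look like this (names are illustrative, not a standard type):

```go
package main

import "fmt"

// Optional is a sketch of a DIY option type; nothing else in the
// ecosystem knows about it, which is the compatibility catch.
type Optional[T any] struct {
	value T
	ok    bool
}

func Some[T any](v T) Optional[T] { return Optional[T]{value: v, ok: true} }
func None[T any]() Optional[T]    { return Optional[T]{} }

// Get mirrors the familiar "comma ok" idiom.
func (o Optional[T]) Get() (T, bool) { return o.value, o.ok }

// GetOr returns the value or a fallback when absent.
func (o Optional[T]) GetOr(fallback T) T {
	if o.ok {
		return o.value
	}
	return fallback
}

func main() {
	a := Some(42)
	b := None[int]()
	fmt.Println(a.GetOr(0), b.GetOr(-1))
}
```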
But you probably do care about compatibility with everyone else, so... yeah, it really sucks that the Go way of dealing with optionality is slinging pointers around.

There aren't many possibilities for nil errors in Go once you eliminate the self-harm of abusing pointers to represent optionality.
For JSON, you can't encode Optional[T] as nothing at all. It has to encode to something, which usually means null. But when you decode, the absence of the field means UnmarshalJSON doesn't get called at all. This typically results in the default value, which of course you would then re-encode as null. So if you round-trip your JSON, you get a materially different output than input (this matters for some other languages/libraries). Maybe the new encoding/json/v2 library fixes this, I haven't looked yet.
Also, I would usually want Optional[T]{value:nil,exists:true} to be impossible regardless of T. But Go's type system is too limited to express this restriction, or even to express a way for a function to enforce this restriction, without resorting to reflection, and reflection has a type erasure problem making it hard to get right even then! So you'd have to write a bunch of different constructors: one for all primitive types and strings; one each for pointers, maps, and slices; three for channels (chan T, <-chan T, chan<- T); and finally one for interfaces, which has to use reflection.
You hear that Rob Pike? LOL. All those years he shat on Java, it was so irritating. (Yes schadenfreude /g)
I shouldn't fault the creators. They did what they did, and that is all and good. I am more shocked by the way it has exploded in adoption.
Would love to see a coffeescript for golang.
It's not viable to use, but: https://github.com/borgo-lang/borgo
In fact, this was so surprising to me that I only found out about it when I wrote code that processed files in a loop, and it started crashing once the list of files got too big, because defer didn't close the handles until the function returned.
When I asked some other Go programmers, they told me to wrap the loop body in an anonymous func and invoke that.
Other than that (and some other niggles), I find Go a pleasant, compact language, with an efficient syntax, that kind of doesn't really encourage people trying to be cute. I started my Go journey rewriting a fairly substantial C# project, and was surprised to learn that despite it having like 10% of the features of C#, the code ended up being smaller. It also encourages performant defaults, like not forcing GC allocation at every turn, very good and built-in support for codegen for stuff like serialization, and no insistence to 'eat the world' like C# does with stuff like ORMs that showcase you can write C# instead of SQL for RDBMS and doing gRPC by annotating C# objects. In Go, you do SQL by writing SQL, and you do gRPC by writing protobuf specs.
Right now it's function scope; if you need it lexical scope, you can wrap it in a function.
Suppose it were lexical scope and you needed it function scope. Then what do you do?
You can just introduce a new scope wherever you want with {} in sane languages, to control the required behavior as you wish.
I can't recall ever needing that (but that might just be because I'm used to lexical scoping for defer-type constructs / RAII).
Another example I found in my code is a conditional lock. The code runs through a list of objects it might have to update (note: it is only called in one thread). As an optimization, it doesn't acquire a lock on the list until it finds an object that has to be changed. That allows other threads to use/lock that list in the meantime instead of waiting until the list scan has finished.
I now realize I could have used an RWLock...
Defer a bulk thing at the function scope level, and append files to an array after opening them.
Would be nice to have both options though. Why not a “defer” package?
With lexical scope, there are only three ways to safely exit the scope:
1. Reaching the end of the procedure, in which case you don’t need a defer
2. A ‘return’, in which case you’re also exiting the function scope
3. A ‘break’ or ‘continue’, which admittedly could benefit from a lexically scoped defer, but those are also generally trivial to break into their own functions; and arguably should be, if your code is complex enough that you’ve got enough branches to want a defer.
If Go had other control flows like try/catch, and so on and so forth, then there would be a stronger case for lexical defer. But it’s not really a problem for anyone aside those who are also looking for other features that Go also doesn’t support.
try (SomeResource foo = SomeResource.open()) { method(foo); }
or
public void method() { try(...) { // all business logic wrapped in try-with-resources } }
To me it seems like lexical scoping can accomplish nearly everything functional scoping can, just without any surprising behavior.
But that’s a moot point because I appreciate it’s just an example. And, more importantly, Go doesn’t support the kind of control flows you’re describing anyway (as I said in my previous post).
A lot of the comments here about ‘defer’ make sense in other languages that have different idioms and features to Go. But they don’t apply directly to Go because you’d have to make other changes to the language first (eg implementing try blocks).
Using `defer...recover` is computationally expensive within hot paths. And since Go encourages errors to be surfaced via the `error` type, when writing idiomatic Go you don't actually need to raise exceptions all that often.
So panics are reserved for instances where your code reaches a point that it cannot reasonably continue.
This means you want to catch panics at boundary points in your code.
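A typical boundary-point recover looks roughly like this (handle is an illustrative per-task wrapper):

```go
package main

import "fmt"

// handle is a boundary point (think: per-request handler or worker loop).
// Recovering here converts a panic into an error, so one broken task
// doesn't take down the whole process.
func handle(task func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("task panicked: %v", r)
		}
	}()
	task()
	return nil
}

func main() {
	fmt.Println(handle(func() { panic("unreachable state") }))
	fmt.Println(handle(func() {}))
}
```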
Given that global state is an anti-pattern in any language, you'd want to wrap your mutex, file, whatever operations in a `struct` or its own package and instantiate it there. So you can have a destructor on that which is still caught by panic and not overly reliant on `defer` to achieve it.
This actually leads to my biggest pet peeve in Go. It's not `x, err := someFunction()` and nor is it `defer/panic`, these are all just ugly syntax that doesn't actually slow you down. My biggest complaint is the lack of creator and destructor methods for structs.
The `NewClass`-style way of initialising types is an ugly workaround, and it constantly requires checking whether libraries require manual initialisation before use. Not a massive time sink, but it's not something the IDE can easily hint to you, so you do get pulled from your flow to either Google that library or check what `New...` functions are defined in the IDE's syntax completion. Either way, it's a distraction.
The lack of a destructor, though, does really show up all the other weaknesses in Go. It then makes `defer` so much more important than it otherwise should be. It means the language then needs runtime hacks for you to add your own dereference hooks[0][1] (this is a problem I run into often with CGO where I do need to deallocate, for example, texture data from the GPU). And it means you need to check each struct to see if it includes a `Close()` method.
I've heard the argument against destructors is that they don't catch errors. But the counterargument to that is the `defer x.Close()` idiom, where errors are ignored anyway.
I think that change, and tuples too so `err` doesn't always have to be its own variable, would transform Go significantly into something that feels just as ergonomic as it is simple to learn, and without harming the readability of Go code.
[0] https://medium.com/@ksandeeptech07/improved-finalizers-in-go...
[1] eg https://github.com/lmorg/ttyphoon/blob/main/window/backend/r...
2. mechanic is tied to call stack / stack unwinding
3. it feels natural when you're coming from C with `goto fail`
(yes it annoys me when I want to defer in a loop & now that loop body needs to be a function)
Since they didn't want to have a 'proper' RAII unwinding mechanism, this is the crappy compromise they came up with.
It can be also a source of bugs where you hang onto something for longer than intended - considering there's no indication of something that might block in Go, you can acquire a mutex, defer the release, and be surprised when some function call ends up blocking, and your whole program hangs for a second.
But I do definitely agree that the dynamic nature of defer and it not being block-scoped is probably not the best
We pre-allocate a bunch of static buffers and re-use them. But that leads to a ton of ownership issues, like the append footgun mentioned in the article. We've even had to re-implement portions of the standard library because they allocate. And I get that we have a non-standard use case, and most programmers don't need to be this anal about memory usage. But we do, and it would be really nice to not feel like we're fighting the language.
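For readers who haven't hit it, the append footgun mentioned above boils down to two appends sharing one pre-allocated backing array:

```go
package main

import "fmt"

// clobber demonstrates the append aliasing footgun: two appends off the
// same zero-length slice write into the same backing array.
func clobber() (int, int) {
	buf := make([]int, 0, 4) // pre-allocated, reused buffer
	a := append(buf, 1, 2)   // writes 1, 2 into buf's array
	b := append(buf, 9)      // writes 9 into the same array, over a[0]
	return a[0], b[0]
}

func main() {
	x, y := clobber()
	fmt.Println(x, y) // both 9: b's write silently clobbered a
}
```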
https://github.com/golang/go/issues/73581
What I would absolutely love is a compacting garbage collector, but my understanding is Go can’t add that without breaking backwards compatibility, and so likely will never do that.
You made a poor choice of language for the problem. It'd be a good fit for C/C++/Rust/Zig.
I've heard the term "beaten path" used for these languages, or languages that an organization chooses to use and forbids the use of others.
I don't completely get this. If your memory requirements are strict, this makes little to no sense to me. I was programming J2ME games 20 years ago for Nokia devices. We were trying to fit games into 50-128kb of RAM, and all of this with Java of all languages. No sane Java developer would have looked at that code without fainting - no dynamic allocations, everything was static, byte and char were the most common data types used. Images in the games were shaved, no headers, nothing. You really have to think it through if you've got memory constraints on your target device.
So it’s not like we’re running on a machine with only kilobytes of RAM, but we do want to minimize our usage.
Go is a case of the emperor having no clothes. Telling people that they just don’t get it or that it’s a different way of doing things just doesn’t convince me. The only thing it has going for it is a simple dev experience.
> having to write a for loop to get the list of keys of a map
We now have the stdlib "maps" package, you can do:
With the wonder of generics, it's finally possible to implement that.

Now if only Go was consistent about methods vs functions, maybe then we could have "keys := someMap.Keys()" instead of it being a weird mix like `http.Request.Headers.Set("key", "value")` but `map["key"] = "value"`
Or 'close(chan x)' but 'file.Close()', etc etc.
For example, you're not allowed to write the following:
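Reconstructing the kind of thing that's disallowed (Set and Map are illustrative names; the commented-out method is the part that won't compile):

```go
package main

import "fmt"

type Set[T comparable] struct{ m map[T]struct{} }

func NewSet[T comparable](items ...T) Set[T] {
	s := Set[T]{m: make(map[T]struct{})}
	for _, it := range items {
		s.m[it] = struct{}{}
	}
	return s
}

// What you'd like to write, but methods may not introduce their own
// type parameters, so this is a compile error:
//
//	func (s Set[T]) Map[U comparable](f func(T) U) Set[U]
//
// The workaround is a top-level generic function:
func Map[T, U comparable](s Set[T], f func(T) U) Set[U] {
	out := NewSet[U]()
	for it := range s.m {
		out.m[f(it)] = struct{}{}
	}
	return out
}

func main() {
	s := NewSet(1, 2, 3)
	labels := Map(s, func(n int) string { return fmt.Sprint(n * 10) })
	fmt.Println(len(labels.m))
}
```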
That fails because methods can't have type parameters, only structs and functions. It hurts the ergonomics of generics quite a bit.

And, as you rightly point out, the stdlib is largely pre-generics, so now there's a bunch of duplicate functions, like "sort.Strings" and "slices.Sort", "atomic.Pointer" and "atomic.Value", quite possibly a sync/v2 soon https://github.com/golang/go/issues/71076, etc.
The old non-generic versions also aren't deprecated typically, so they're just there to trap people that don't know "no never use atomic.Value, always use atomic.Pointer".
This also hurts discoverability. `slices`, `maps`, `iter`, `sort` are all top-level packages you simply need to know about to work efficiently with iteration. You cannot just `items.sort().map(foo)`, guided and discoverable by auto-completion.
Generics can only be on functions, not methods, because of its type system. So don't hold your breath; modifying this would be a breaking change.
Happy to not be in that community, happy to not have to write (or read) Go these days.
And frankly, most of the time I see people gushing about Go, it's for features that trivially exist in most languages that aren't C, or are entirely subjective like "it's easy" (while ignoring, you know, reality).
As someone who's been doing Go since 2015, working on dozens of large codebases counting probably a million lines total, across multiple teams, your criticisms do not ring true.
Go is no worse than C when it comes to extensibility, or C# or Java for that matter. Go programs are only extensible to the extent (ha) developers design their codebases right. Certainly, Go trades expressivity for explicitness more than some languages. You're encouraged to have fewer layers of abstraction and be more concrete and explicit. But in no way does that impede being able to extend code. The ability to write modular, extensible programs is a skill that must be learned, not something a programming language gives you for free.
It sounds like you worked on a poorly constructed codebase and assumed it was Go's fault.
I think Java and C# offer clearly more straightforward ways to extend and modify existing code. Maybe the primary ways extension in Java and C# works are not quite the right ones for every situation.
The primary skill necessary to write modular code is first knowing what the modular interfaces is and second being able to implement it in a clean fashion. Go does offer a form of interfaces. But precisely because it encourages you to be highly explicit and avoid abstraction, it can make it difficult for you to implement the right abstraction and therefore complicate the modular interfaces.
Programming is hard. I don’t think adopting a kind of ascetic language like Go makes programming easier overall. Maybe it’s harder to be an architecture astronaut in Go, but only by eliminating entire classes of abstraction that are sometimes just necessary. Sometimes, inheritance is the right abstraction. Sometimes, you really need highly generic and polymorphic code (see some of the other comments for issues with Go’s implementation of generics).
It’s faster than Node or Python, with a better type system than either. It’s got a much easier learning curve than Rust. It has a good stdlib and tooling. Simple syntax with usually only one way to do things. Error handling has its problems but I still prefer it over Node, where a catch clause might receive just about anything as an “error”.
Am I missing a language that does this too or more? I’m not a Go fanatic at all, mostly written Node for backends in my career, but I’ve been exploring Go lately.
I feel like I could write this same paragraph about Java or C#.
https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...
If I hit a point where I need to do that in pretty much any other language, I'll cast about for some way to avoid doing it for a while (to include finding a different dependency to replace that one) because it's almost always gonna be a time-suck and may end up yielding nothing useful at all without totally unreasonable amounts of time spent on it, so I may burn time trying and then just have to abandon the effort.
In go, I unhesitatingly hop right in and find what I need fast, just about every time.
It's the polar opposite of something like Javascript (or Typescript—it doesn't avoid this problem) where you can have three libraries and all three read like a totally different language from the one you're writing, and also like totally different languages from one another. Ugh. This one was initially written during the "everything should be a HOF" trend and ties itself in knots to avoid ever treating the objects it's implicitly instantiating all over the place as objects... this one uses "class" liberally... this one extensively leans on the particular features of prototypal inheritance, OMG, kill me now... this one imports Lodash, sigh, here we go... et cetera.
At the larger codebase go company I worked at, the general conclusion was: Go is a worse Java. The company should've just used Java in the end.
Give me an apples to oranges comparison. With routing, cookies, authN/authz, SQL injection, cross site scripting protection, etc.
What's important is how good primitives you have access to. Java has platform and virtual threads now (the latter simplifying a lot of cases where reactive stuff was prevalent before) with proper concurrent data structures.
In C#, for example, there are multiple ways, but you should generally be using the modern approach of async/Task, which is trivial to learn and used exclusively in examples for years.
Anyway, assuming you're talking about TypeScript, I'm surprised to hear that you prefer Go's type system to TypeScript's. There are definitely cases where you can get carried away with TypeScript types, but due to that expressiveness I find it much more productive than Go's type system (and I'd make the same argument for Rust vs. Go).
Regarding Typescript, I actually am a big fan of it, and I almost never write vanilla JS anymore. I feel my team uses it well and work out the kinks with code review. My primary complaint, though, is that I cannot trust any other team to do the same, and TS supports escape hatches to bypass or lie about typing.
I work on a project with a codebase shared by several other teams. Just this week I have been frustrated numerous times by explicit type assertions of variables to something they are not (`foo as Bar`). In those cases it’s worse than vanilla JS because it misleads.
When you write JavaScript (or TypeScript that gets transpiled), it's not as easy to assume the target is Node (V8). It could be Bun (JavaScriptCore), Deno, a browser, etc.
There are languages with fewer warts, but they're usually more complicated (e.g. Rust), because most of Go's problems are caused by its creators' fixation with simplicity at all costs.
Even AST-based macro systems have tricky problems like nontermination and variable capture. It can be tough to debug why your compiler is stuck in an infinite macro expansion loop. Macro systems that solve these problems, like the R⁵RS syntax-rules system, have other drawbacks like very complex implementations and limited expressive power.
And often there's no easy way to look at the code after it's been through the macro processor, which makes bugs in the generated code introduced by buggy macros hard to track down.
By contrast, if your code generator hangs in an infinite loop, you can debug it the same way you normally debug your programs; it doesn't suffer from tricky bugs due to variable capture; and it's easy to look at its output.
Agree on node/TS error handling. It’s super whack
Given Python's substantial improvements recently, I would put it far ahead of the structural typing done in Go, personally.
Python, for a number of years at this point, has had structural (!) pattern matching with unpacking, type-checking baked in, with exhaustiveness checking (depending on the type checker you use). And all that works at "type-check time".
It can also facilitate type-state programming through class methods.
Libraries like Pydantic are fantastic in their combination of ergonomics and type safety.
The prime missing piece is sum types, which need language-level support to work well.
Go is simplistic in comparison.
Go (and lots of other languages...) wreck it on dependency management and deployment, though. :-/ As the saying goes, "it was easier to invent Docker than fix Python's tooling".
I didn't really use it much until the last few years. It was limited and annoyingly verbose. Now it's great, you don't even have to do things like explicitly notate covariant/contravariant types, and a lot of what used to be clumsy annotation with imports from typing is now just specified with normal Python.
And best of all, more and more libraries are viewing type support as a huge priority, so there's usually no more having to download type mocks and annotation packages and worry about keeping them in sync. There are some libraries that do annoying things like adding " | None" after all their return types to allow themselves to be sloppy, but at least they are sort of calling out to you that they could be sloppy instead of letting it surprise you.
It's now good and easy enough that it saves me time to use type annotations even for small scripts, as the time it saves from quickly catching typos or messing up a return type.
Like you said, Pydantic is often the magic that makes it really useful. It is just easy enough and adds enough value that it's worth not lugging around data in dicts or tuples.
My main gripe with Go's typing has always been that I think the structural typing of its interfaces is convenient, but really it's convenient in the same way that duck typing is. In the same way that duck typing makes a hunter with a duck call the same as a duck, an F16 and a VCR are both things that have an ejection feature.
That’s the selling point for me. If I’m coming to a legacy codebase that no one still working there wrote, I pray it is Go, because then it just keeps working through upgrading the compiler and, generally, the libraries used.
For NodeJS development, you would typically write it in Typescript - which has a very good type system.
Personally I have also written serverside C# code, which is a very nice experience these days. C# is a big language these days though.
Go is a reasonably performant language that makes it pretty straightforward to write reliable, highly concurrent services that don't rely on heavy multithreading - all thanks to the goroutine model.
There really was no other reasonably popular, static, compiled language around when Go came out.
And there still barely is - the only real competitor that sits in a similar space is Java with the new virtual threads.
Languages with async/await promise something similar, but in practice are burdened with a lot of complexity (avoiding blocking in async tasks, function colouring, ...)
I'm not counting Erlang here, because it is a very different type of language...
So I'd say Go is popular despite the myriad of shortcomings, thanks to goroutines and the Google project street cred.
The change from Java 8 to 25 is night and day. And the future looks bright. Java is slowly bringing in more language features that make it quite ergonomic to work with.
I have no desire to go back to Java no matter how much the language has evolved.
For me C# has filled the void of Java in enterprise/gaming environments.
It's fast enough, easy enough (being very similar now to TypeScript), versatile enough, well-documented (so LLMs do a great job), broad and well-maintained first party libraries, and the team has over time really focused on improving terseness of the language (pattern matching and switch expressions are really one thing I miss a lot when switching between C# and TS).
EF Core is also easily one of the best ORMs: super mature, stable, well-documented, performant, easy to use, and expressive. Having been in the Node ecosystem for the past year, there's really no comparison for building fast with less papercuts (Prisma, Drizzle, etc. all abound with papercuts).
It's too bad that it seems that many folks I've chatted with have a bad taste from .NET Framework (legacy, Windows only) and may have previously worked in C# when it was Windows only and never gave it another look.
Which means if you write C#, you'll encounter a ton of devs who come from an enterprise, banking or govt background, who think doing a 4 layer enterprise architecture with DTOs and 5 line classes is the only way you can write a CRUD app, and the worst of all you'll se a ton of people who learned C# in college a decade ago and refuse to learn anything else.
EF is great, but most people use it because they don't have to learn SQL and databases.
Blazor is great, but most people use it because they don't want to learn Frontend dev, and JS frameworks.
"Modern C#" (if we can differentiate that) has a lot of nice amenities for modeling like immutable `record` types and named tuples. I think where EF really shines is that it allows you to model the domain with persistence easily and then use DTOs purely as projections (which is how I use DTOs) into views (e.g. REST API endpoints).
I can't say for the broader ecosystem, but at least in my own use cases, EFC is primarily used for write scenarios and some basic read scenarios. But in almost all of my projects, I end up using CQRS with Dapper on the read side for more complex queries. So I don't think that it's people avoiding SQL; rather it's teams focused on productivity first.
WRT to Blazor, I would not recommend it in place of JS except for internal tooling (tried it at one startup and switched to Vue + Vite). But to be fair, modern FE development in JS is an absolute cluster of complexity.
And when the front end is C# so is the back end.
It was actually really good for the time and lightyears ahead of whatever Flash was doing.
But people rather used all kinds of hacks to get Flash working on Linux and OSX rather than use Moonlight.
[0] https://en.wikipedia.org/wiki/Microsoft_Silverlight
A big chunk of their strategy at the time was around how to completely own the web. I celebrated every time their attempts failed.
After a while people got tired of doing updates.
That said, if on the JVM, just use Kotlin.
and with the GraalVM, JavaScript/Node, Python, R, and Ruby.
among many others.
I'm also reminded about the time that Tomcat stopped being an application you deploy to and just being an embedded library in the runtime! It was like the collective light went on that Web containers were just a sham. That didn't prevent employers from forcing me to keep using Websphere/WAS because "they paid for that and by god they're going to use it!" Meanwhile it was totally obsolete as docker containers just swept them all by the wayside.
I wonder what "WebSphere admins" are doing these days? That was once a lucrative role to be able to manage those Jython configs, lol.
Modern Java communities are slowly adopting the common FP practice "making illegal states unrepresentable" and call it "data oriented programming". Which is nice for those of us who actively use ADT. I no longer need to repeatedly explain "what is Option<?>?" or "why ADT?" whenever I use them; I could just point them to those new resources.
Hopefully, this shift will steer the Java community toward a saner direction than the current cargo cult which believed mutable C-struct (under guise of "anemic domain model") + Garbage Collector was OOP.
Like, there are 10 million Java devs, there is a whole lot of completely brand new development going in any language, let alone in such a huge one.
This simply isn’t true. 60% of the ecosystem has moved beyond Java 8 in the last poll.
The JVM is a runtime, just like what Go has. It allows for the best observability of any platform (you can literally connect to a prod instance and check e.g. the object allocations) and has stellar performance and stability.
(Similar to how Python is finally getting its act together with the uv tool.)
Though Gradle is more than fine with the Kotlin DSL nowadays.
Go, with all its faults, tries very hard to shun complexity, which I've found over the years to be the most important quality a language can have. I don't want a language with many features. I want a language with the bare essentials that are robust and well designed, a certain degree of flexibility, and for it to get out of my way. Go does this better than any language I've ever used.
> Go, with all its faults, tries very hard to shun complexity
The whole field is about managing complexity. You don't shun complexity, you give tools to people to be able to manage it.
And Go goes the low end of the spectrum, of not giving enough features to manage that complexity -- it's simplistic, not simple.
I think the optimum is actually at Java - it is a very easy language with not much going on (compared to, say, Scala), but just enough expressivity that you can have efficient and comfortable-to-use libraries for all kinds of stuff (e.g. a completely type-safe SQL DSL)
If you don't think that exists in Java, spend some time in the Maven documentation or Spring documentation https://docs.spring.io/spring-framework/reference/index.html https://maven.apache.org/guides/getting-started/ Then imagine yourself a beginner to programming trying to make sense of that documentation
you try to keep the easy things easy + simple, and try to make the hard things easier and simpler, if possible. Simple ain't easy
I don't hate Java (anymore), it has plenty of utility (like, say... Jira). But when I'm writing golang I pretty much never think "oh, I wish I was writing Java right now." No thanks
Without it, you either write that complexity yourself or fail to even recognize why it is necessary in the first place, e.g. failing to realize the existence of SQL injections, Cross-Site Scripting, etc. Backends have some common requirements and it is pretty rare that your problem wouldn't need these primitives, so as a beginner, I would advise learning the framework as well, the same way you would learn how to fly a plane before attempting it.
For other stuff, there is no requirement to use Spring - vanilla java has a bunch of tools and feel free to hack whatever you want!
Great, pretty much every language ever can do the equivalent. Not what anyone is talking about.
> Java is the epitome of backwards and forward-compatible changes,
Is the number of companies stuck on Java 8 lower than 50% yet? [1]
[1]: https://www.jetbrains.com/lp/devecosystem-2023/java/
Go already has a breaking change.
> Java 8
Yes
Complexity exists in all layers of computing, from the silicon up. While we can't avoid the complexity of real-world problems, we can certainly minimize the complexity required for their solutions. There is an infinite number of problems caused primarily by the self-induced complexity of our software stacks and the hardware they run on. Choosing a high-level language that deliberately tries to avoid these problems is about the only say I have in this matter, since I don't have the skill nor patience to redo decades of difficult work smarter people than me have done.
Just because a language embraces simplicity doesn't mean that it doesn't provide the tools to solve real world problems. Go authors have done a great job of choosing the right set of trade-offs, unlike most other language authors. Most of the time. I still think generics were a mistake.
Java is great if you stick to a recent version and update on a regular basis. But a lot of companies hate their own developers.
You can get a large portion of what graal native offers by using AppCDS and compressed object headers.
Here's the latest JEP for all that.
https://openjdk.org/jeps/483
Also, quite a few libraries have metadata now denoting these extra reflection targets.
But nonetheless you are right in general, but depends on your use case.
Every single piece of Go 1.x code scraped from the internet and baked in to the models is still perfectly valid and compiles with the latest version.
Which Google uses far more commonly than Go, still to this day.
As for hot swap, I haven't heard it being used for production, that's mostly for faster development cycles - though I could be wrong. Generally it is safer to bring up the new version, direct requests over, and shut down the old version. It's problematic to just hot swap classes, e.g. if you were to add a new field to one of your classes, how would old instances that lack it behave?
And lists are slower than arrays, even if they provide functional guarantees (everything is a tradeoff…)
That said, pretty much everything else about it is amazing though IMHO and it has unique features you won’t find almost anywhere else
It can’t match it for performance. There’s no mutable array, almost everything is a linked list, and message passing is the only way to share data.
I primarily use Elixir in my day job, but I just had to write high performance tool for data migration and I used Go for that.
P.S. Swift, anyone?
And even without types (which are coming and are looking good), Elixir's pattern matching is a thousand times better than the horror of Go error handling
Also, Java has ZGC that basically solved the pause time issue, though it does come at the expense of some throughput (compared to their default GC).
For ML/data: python
For backend/general purpose software: Java
The only silver bullet we know of is building on existing libraries. These are also non-accidentally the top 3 most popular languages according to any ranking worthy of consideration.
----- https://openjdk.org/jeps/512 -----
First, we allow main methods to omit the infamous boilerplate of public static void main(String[] args), which simplifies the Hello, World! program to:
Second, we introduce a compact form of source file that lets developers get straight to the code, without a superfluous class declaration. Third, we add a new class in the java.lang package that provides basic line-oriented I/O methods for beginners, thereby replacing the mysterious System.out.println with a simpler form.

edit: hold on wait, java doesn't have Value types yet... /jk
An oxymoron if I've ever heard one.
refinement: the process of removing impurities or unwanted elements from a substance.
refinement: the improvement or clarification of something by the making of small changes.
public static void in a class with factory of AbstractFactoryBuilderInstances...? right..? Yes, say that again?
We are talking about removing unnecessary syntactic constructs, not adding as some would do with annotations in order to have what? Refinement types perhaps? :)
That's not syntax. Factory builders have nothing to do with syntax and everything to do with code style.
The oxymoron is implying syntax refinements would be inspired by Go of all things, a language with famously basic syntax. I'm not saying it's bad to have basic syntax. But obviously modern Java has a much more refined syntax and it's not because it looks closer to Go.
Nonetheless, Java has eased the psvm requirements, you don't even have to explicitly declare a class and a void main method is enough. [1] Not that it would matter for any non-script code.
[1] https://openjdk.org/jeps/495
PHP's frameworks are fantastic and they hide a lot from an otherwise minefield of a language (though steadily improved over the years).
Both are decent choices if this is what you/your developers know.
But they wouldn't be my personal first choice.
Local environments are not tied to IDEs at all, but you are doing yourself a disservice if you don't use a decent IDE irrespective of language - they are a huge productivity boost.
And are you stuck in the XML times or what? Spring Boot is insanely productive - as a matter of fact, Go is significantly more verbose than Java, with all the unnecessary if errs.
Local environments are not literally tied to IDEs, but they effectively are in any non-trivially sized project. And the reason is because most Java shops really do believe "you are doing yourself a disservice if you don't use a decent IDE irrespective of language." I get along fine with a text editor + CLI tools in Deno, Lua, and Zig. Only when I enter Java world do the wisest of the wise say "yeah there is a CLI, but I don't really know it. I recommend you download IntelliJ and run these configs instead."
Yes Spring Boot is productive. So is Ruby on Rails or Laravel.
Sure, there are some awfully dated companies that still send changed files over email to each other with no version control, I'm sure some of those are stuck with an IDE config, but to be honest where I have seen this most commonly were some Visual Studio projects, not Java. Even though you could find any of these for any other language, you just need to scale your user base up. A language that hasn't even hit 1.0 will have a higher percentage of technically capable users, that's hardly a surprise.
Then they obviously don't know their tooling well, and I would hesitate to call a jr 'the wisest of the wise'
My experience is mostly with C#, but async/await works very well there in my experience. You do need to know some basics to avoid problems, but that's the case for essentially every kind of concurrency. They all have footguns.
Most users writing basic async CRUD servers won't notice, but you very much do if you write complex, highly concurrent servers.
That can be a viable tradeoff, and is for many, but it's far from being as fool-proof as Go.
Debugging is a nightmare because it refuses to even compile if you have unused X (which you always will have when you're debugging and testing "What happens if I comment out this bit?").
The bureaucracy is annoying. The magic filenames are annoying. The magic field names are annoying. The secret hidden panics in the standard library are annoying. The secret behind-your-back heap copies are annoying (and SLOW). All the magic in go eventually becomes annoying, because usually it's a naively repurposed thing (where they depend on something that was designed for a different purpose under different assumptions, but naively decided to depend on its side effects for their own ever-so-slightly-incompatible machinery - like special file names, and capitalization even though not all characters have such a thing .. was it REALLY such a chore to type "pub" for things you wanted exposed?).
Now that AI has gotten good, I'm rather enjoying Rust because I can just quickly ask the AI why my types don't match or a gnarly mutable borrow is happening - rather than spending hours poring over documentation and SO questions.
Go people will yell at you because you aren't using the right tool.
Yeah, Go is too rigid with their principles.
Btw, AI sucks at Go. One would have guessed that such a simple lang would suit ChatGPT. Turns out ChatGPT is much better at Java, C#, Python and many other langs than Go.
I’d say Terraform was the worst. But that shouldn’t be a surprise given its niche
There's no single 'best language', and it depends on what your use-cases are. But I'd say that for many typical backend tasks, Go is a choice you won't really regret, even if you have some gripes with the language.
* The Dremel is approachable: I don't have to worry about cutting off my hand with the jigsaw or set up a jig with the circular saw. I don't have to haul my workpiece out to the garage.
* The Dremel is simple: One slider for speed. Apply spinny bit to workpiece.
* The Dremel is fun: It fits comfortably in my hand. It's not super loud. I don't worry about hurting myself with it. It very satisfyingly shaves bits of stuff off things.
In so many respects, the Dremel is a great tool. But 90% of the time when I use it, it ends up taking me five times as long (but an enjoyable 5x!) and the end result is a wobbly scratchy mess. I curse myself for not spending the upfront willpower to use the right tool for the job.
I find myself doing this with all sorts of real and software tools: Over-optimizing for fun and ease-of-entry and forgetting the value of the end result and using the proper tool for the job.
I think of this as the "Dremel effect" and I try to be mindful of it when selecting tools.
Most of my coding these days is definitely in the 'for fun' bucket given my current role. So I'd rather take 5x and have fun.
That said, I don't think Go is only fun, I think it's also a viable option for many backend projects where you'd traditionally have reached for Java / C#. And IMO, it sure beats the recent tendency of having JS/Python powering backend microservices.
I think there is little to no chance it can hold on to its central vision as the creators "age out" of the project, which will make the language worse (and render the tradeoffs pointless).
I think allowing it to become pigeonholed as "a language for writing servers" has cost and will continue to cost important mindshare that instead jumps to Rust or remains in Python, etc.
Maybe it's just fun, like harping on about how bad Visual Basic was, which was true but irrelevant, as the people who needed to do the things it did well got on with doing so.
Usually, as here, objections to Go take the form of technically-correct-but-ultimately-pedantic arguments.
The positives of go are so overwhelmingly high magnitude that all those small things basically don’t matter enough to abandon the language.
Go is good enough to justify using it now while waiting for the slow-but-steady stream of improvements from version to version to make life better.
- Zero values, lack of support for constructors
- Poor handling of null
- Mutability by default
- A static type system not designed with generics in mind
- `int` is not arbitrary precision [1]
- The built-in array type (slices) has poorly considered ownership semantics [2]
Notable mentions:
- No sum types
- No string interpolation
[1]: https://github.com/golang/go/issues/19623
[2]: https://news.ycombinator.com/item?id=39477821
(I realise this isn’t who is hiring, but email in bio)
myfunc(arg: string): Value | Err
I really try not to throw anymore with typescript, I do error checking like in Go. When used with a Go backend, it makes context switching really easy...
Some critique is definitely valid, but some of it just sounds like they didn't take the time to grasp the language. It's trade-offs all the way. For example, there is a lot I like about Rust, but it's still not my favorite language.
That said, I really wish there was a revamp where they did things right in terms of nil, scoping rules, etc. However, they've committed to never breaking existing programs (honorable, understandable), so the design space is extremely limited. I prefer dealing with local awkwardness and even excessive verbosity over systemic issues any day.
I don't think the article sounds like someone didn't take the time to grasp the language. It sounds like it's talking about the kind of thing that really only grates on you after you've seriously used the language for a while.
I quite like Go and use it when I can. However, I wish there were something like Go, without these issues. It's worth talking about that. For instance, I think most of these critiques are fair but I would quibble with a few:
1. Error scope: yes, this causes code review to be more complex than it needs to be. It's a place for subtle, unnecessary bugs.
2. Two types of nil: yes, this is super confusing.
3. It's not portable: Go isn't as portable as C89, but it's pretty damn portable. It's plenty portable to write a general-purpose pre-built CLI tool in, for instance, which is about my bar for "pragmatic portability."
4. Append ownership & other slice weirdness: yes.
5. Unenforced `defer`: yes, similar to `err`, this introduces subtle bugs that can only be overcome via documentation, careful review, and boilerplate handling.
6. Exceptions on top of err returns: yes.
7. utf-8: Hasn't bitten me, but I don't know how valid this critique is or isn't.
8. Memory use: imo GC is a selling-point of the language, not a detriment.
I'm surprised people in these comments aren't focusing more on the append example.
:^/
> I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
Golang's biggest shortcoming is that the fact that it touches bare metal isn't made visible clearly enough. It provides many high-level features, which creates this ambience of "we got you", but it fails to properly educate its users that they are going to get dirt on their hands.
Take a slice, for example: even the name suggests "part of", but in reality it's closer to a "box full of pointers". What happens when you modify pointer+1? Or "two types of nil": there is a difference between having two bytes (a simplification), one of a struct type and the other an address to that struct, and having just a NULL. It's the difference between knowing a house doesn't exist and being confident the house exists while saying it's in the middle of a volcano beneath the ocean.
The Foo99 critique is another example. If you had not 99 loops but 10 billion, each holding a mere 10 bytes, you'd need 100GiB of memory just to exit. If you reused the address block, you'd only use... 10 bytes.
I also recommend trying to implement lexical scope defer in C and putting them in threads. That's a big bottle of fun.
I think that it ultimately boils down to what kind of engineer one wants to be. I don't like hand holding and rather be left on my own with a rain of unit tests following my code so Go, Zig, C (from low level Languages) just works for me. Some prefer Rust or high level abstractions. That's also fine.
But IMO poking at Go that it doesn't hide abstractions is like making fun of football of being child's play because not only it doesn't have horses but also has players using legs instead of mallets.
Author here.
No, this is not where it comes from. I've been coding C for more than 30 years, Go for maybe 12-15, and currently prefer Rust. I enjoy C++ (yes, really) and getting all those handle-less knives to fit together.
No, my critique of Go is that it did not take the lessons learned from decades of theory, what worked and didn't work.
I don't fault Go for its leaky abstractions in slices, for example. I do fault it for creating bad abstraction APIs in the first place, handing out footguns when they are avoidable. I know to avoid the footgun of appending to slices while other slices of the same array may still be accessible elsewhere. But I think it's indefensible to have created that footgun in the year Go was created.
Live long enough, and anybody will make a silly mistake. "Just don't make a mistake" is not an option. That's why programming language APIs and syntax matters.
As for bare metal; Go manages to neither get the benefits possible of being high level, and at the same time not being suitable for bare metal.
It's a missed opportunity. Because yes, in 2007 it's not like I could have pointed to something that was strictly better for some target use cases.
No Turing complete language will ever prevent people from being idiots.
It's not only programming language API and syntax. It's a conceptual complexity, which Go has very low. It's a remodeling difficulty which Rust has very high. It's implicit behavior that you get from high stack of JS/TS libraries stitched together. It's accessibility of tooling, size of the ecosystem and availability of APIs. And Golang crosses many of those checkboxes.
All the examples you've shown in your article were "huh? isn't this obvious?" to me. With your experience in C, I have no idea why you wouldn't want to reuse the same allocation multiple times, instead of keeping all of them separately while reserving allocation space for possibly less than you need.
Even if you'd assume all of this should be on stack you still would crash or bleed memory through implicit allocations that exit the stack.
Add 200 of goroutines and how does that (pun intended) stack?
Is fixing those perceived footguns really a missed opportunity? Go is getting stronger every year and while it's hated by some (and I get it, some people like Rust approach better it's _fine_) it's used more and more as a mature and stable language.
Many applications don't even worry about GC. And if you're developing some critical application, pair it with Zig and enjoy cross-compilation sweetness with as bare metal as possible with all the pipes that are needed.
Which part are you referring to, here?
> Even if you'd assume all of this should be on stack you still would crash or bleed memory through implicit allocations that exit the stack.
What do you mean by this? I don't mean to be rude, but this sounds confusing if you understand how memory works. What do you mean an allocation that exits the stack would bleed memory?
It is. None of this was new to me. In C++ defining a non-virtual destructor on a class hierarchy is also not new to me, but a fair critique can be made there too why it's "allowed". I do feel like C++ can defend that one from first principles though, in a way that Go cannot.
I'm not sure what you mean by the foo99 thing. I'm guessing this is about defer inside a loop?
> Is fixing those perceived footguns really a missed opportunity?
In my opinion very yes.
What has been an issue for me, though, is working with private repositories outside GitHub (and I have to clarify that, because working with private repositories on GitHub is different, because Go has hardcoded settings specifically to make GitHub work).
I had hopes for the GOAUTH environment variable, but either (1) I'm more dumb and blind than I thought I already was, or (2) there's still no way to force Go to fetch a module using SSH without trying an HTTPS request first. And no, `GOPRIVATE="mymodule"` and `GOPROXY="direct"` don't do the trick, not even combined with Git's `insteadOf`.
```
machine {server}              # e.g. gitlab.com
login {username}
password {read_only_api_key}  # Must be the actual key and not an ENV var
```
Worked consistently, but not a solution we were thrilled with.
- As someone who’s worked with C/C++ and Fortran, I think all these languages have their own challenges—Go’s simplicity trades off against Rust’s safety guarantees, for example.
- Could someone share a real-world example where Go’s design caused a production issue that Rust or another language would’ve avoided?
- I’m curious how these trade-offs play out in practice.
Sorry, I don't do Go/Rust coding, still on C/C++/Fortran.
A simple one: if you create two separate libraries in Go and try to link them with an application, you will have a terrible time.
I've run into this same issue: https://github.com/golang/go/issues/65050
https://www.youtube.com/watch?v=xuv9A7CJF54&t=440s
that's not a production issue, and it's a very niche use case
So: The way Go presents it is confusing, but this behavior makes sense, is correct, will never be changed, and is undoubtedly depended on by correct programs.
The confusing thing for people used to C++ or C# or Java or Python or most other languages is that in Go nil is a perfectly valid pointer receiver for a method to have. The method resolution lookup happens statically at compile time, and as long as the method doesn't try to deref the pointer, all good.
It still works if you assign to an interface.
But the interface method lookup can't happen at compile time. So an interface value is actually a pair: a pointer to the type, and the instance value. The type is not nil, hence the interface value is something like (&Cat, nil) and (&Dog, nil) in each case, which is not the interface zero value of (nil, nil). But it's super confusing, because Go coerces a nil struct pointer to a non-nil (&type, nil) interface value. There's probably some naming or syntax way to make this clearer.
But the behavior is completely reasonable.
A struct{a, b int32} takes 8 bytes of memory. It doesn't use any extra bytes to “know” its type, to point to a vtable of “methods,” to store a lock, or any other object “header.”
Dynamic dispatch in Go uses interfaces which are fat pointers that store the both type and a pointer to an object.
With this design it's only natural that you can have nil pointers, nil interfaces (no type and no pointer), and typed interfaces to a nil pointer.
This may be a bad design decision, it may be confusing. It's the reason why data races can corrupt memory.
But saying, as the author, “The reason for the difference boils down to again, not thinking, just typing” is just lazy.
Just as lazy as it is arguing Go is bad for portability.
I've written Go code that uses syscalls extensively and runs in two dozen different platforms, and found it far more sensible than the C approach.
However, the non-intuitive punning of nil is unfortunate.
I'm not sure what the ideal design would be.
Perhaps just making an interface not comparable to nil, but instead something like `unset`.
Still, it's a sharp edge you hit once and then understand. I am surprised people get so bothered by it... it's not like something that impairs your use of the language once you're proficient. (E.g. complaints about nil existing at all, or error handling, are much more relatable!)
> “The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
I do think there probably would've been some more elegant syntax or naming convention to make it less confusing.
Though, thinking back, someone should have brought up TypeScript's at least three different ways to represent nil (undefined, null, NaN, a few others). It's at least a little better in TS, because unlike in Go the type-checker doesn't actively lie to you about how many different states of undefined you might be dealing with.
You were right, it's a niche and therefore pretty much irrelevant issue. They may as well have rejected Python due to its "significant whitespace".
It just feels sloppy and I'm worried I'm going to make a mistake.
Now I always use pointers consistently for the readability.
And also when I want a value with stable identity I'd use a pointer.
It's annoying to need to think about whether I'm working with an interface type or a concrete type.
And if use pointers everywhere, why not make it the default?
I like Go and Rust, but sometimes I feel like they lack tools that other languages have just because they WANT to be different, without any real benefit.
Whenever I read Go code, I see a lot more error handling code than usual because the language doesn't have exceptions...
And sometimes Go/Rust code is more complex because it also lacks some OOP tools, and there are no tools to replace them.
So, Go/Rust has a lot more boilerplate code than I would expect from modern languages.
For example, in Delphi, an interface can be implemented by a property:
This isn't possible in Go/Rust. And the Go documentation I read strongly recommended using composition, without good tools for that. This "the new way is the best way, period; ignore the good things of the past" attitude is common.
When MySQL didn't have transactions, the documentation said "perform operations atomically" without saying exactly how.
MongoDB didn't have transactions until version 4.0. They said it wasn't important.
When Go didn't have generics, there were a bunch of "patterns" to replace generics... which in practice did not replace them.
The lack of inheritance in Go/Rust leaves me with the same impression. The new patterns do not replace the inheritance or other tools.
"We don't have this tool in the language because people used it wrong in the old languages." Don't worry, people will use the new tools wrong too!
Similarly, if a field implements a trait in Rust, you can expose it via `AsRef` and `AsMut`: just return a reference to it.
These are not ideal tools, and I find the Go solution rather unintuitive, but they solve the problems that I would've solved with inheritance in other languages. I rarely use them.
https://www.tiobe.com/tiobe-index/
If you prefer, I can provide the same example in C, C++, D, Java, C#, Scala, Kotlin, Swift, Rust, Nim, Zig, Odin.
C# thankfully was designed by someone that appreciates type systems, maybe you should revisit it.
But I guess Go devs love to type their beloved boilerplate, it gives fuzzy feelings.
And concretely, _I_ want Sum types in Go. I also want them in C# and every other language I might have to use.
But everyone knows in their heart of hearts that a few small language warts definitely don't outweigh Go's simplicity and convenience. Do I wish it had algebraic data types, sure, sure. Is that a deal-breaker, nah. It's the perfect example of something that's popular for a reason.
It is easily one of the most productive languages. No fuss, no muss, just getting stuff done.
Some learnings. Don't pass sections of your slices to things that mutate them. Anonymous functions need recovers. Know how all goroutines return.
It's hard to find a language that will satisfy everyone's needs. Go I find better for smaller, focused applications/utilities... can definitely see how it would cause problems at an "enterprise" level codebase.
These days, it seems like languages keep chasing paradigms and over adapt to moving targets.
Look at what Rust and Swift have become. C# has stayed relatively sane somehow, but it's not what I'd call independent.
There is a crack in everything, that's how the light gets in.
https://github.com/pannous/goo/
- errors handled by truthy if or try syntax
- all 0s and nils are falsey
- #if PORTABLE put(";}") #end
- modifying! methods like "hi".reverse!()
- GC can be paused/disabled
- many more ease-of-use QoL enhancements
I would criticize Go from the point of view of more modern languages that have powerful type systems like the ML family, Erlang/Elixir or even the up and coming Gleam. These languages succeed in providing powerful primitives and models for creating good, encapsulating abstractions. ML languages can help one entirely avoid certain errors and understand exactly where a change to code affects other parts of the code — while languages like Erlang provided interesting patterns for handling runtime errors without extensive boilerplate like Go.
It’s a language that hobbles developers under the aegis of “simplicity.” Certainly, there are languages like Python which give too much freedom — and those that are too complex like Rust IMO, but Go is at best a step sideways from such languages. If people have fun or get mileage out of it, that’s fine, but we cannot pretend that it’s really this great tool.
Cargo is amazing, and you can do amazing things with it, I wish Go would invest in this area more.
Also funny you mention Python, a LOT of Go devs are former Python devs, especially in the early days.
Go 1.0 release date: 2012
Standard ML '97: 1997
"[...] They are likely the two most difficult parts of any design for parametric polymorphism. In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."
https://go.googlesource.com/proposal/+/master/design/go2draf...
Google's networking services keep being writen in Java/Kotlin, C++, and nowadays Rust.
These sorts of articles have been commonplace since even before Go released 1.0 in 2012. In fact, most (if not all) of these complaints could have been written identically back then. The only thing missing from this post that could make me believe it truly was written back then would be a complaint about Go not having generics, which were added a few years ago.
People on HN have been complaining about Go since Go was a weird side-project tucked away at Google that even Google itself didn't care about and didn't bother to dedicate any resources to. Meanwhile, people still keep using it and finding it useful.
It is infuriating because it is close to being good, but it will never get there - now due to backwards compatibility.
Also Rob Pike quote about Go's origins is spot on.
Go _excels_ at API glue. Get JSON as string, marshal it to a struct, apply business logic, send JSON to a different API.
Everything for that is built in to the standard library and by default performant up to levels where you really don't need to worry about it before your API glue SaaS is making actual money.
The language sits in an awkward space between rust and python where one of them would almost always be a better choice.
But, google rose colored specs...
Sure? Depends on use case.
> too much verbosity
Doesn't meaningfully affect anything.
> Too much fucking "if err != nil".
A surface level concern.
> The language sits in an awkward space between rust and python where one of them would almost always be a better choice.
Rust doesn't have a GC so it's stuck to its systems programming niche. If you want the ergonomics of a GC, Rust is out.
Python? Good, but slow, packaging is a joke, dynamic typing (didn't you mention type safety?), async instead of green threads, etc., etc.
Rust simply doesn’t cut it for me. I’m hoping Roc might become this, but I’m not holding my breath.
The other jarring example of this kind of deferring logical thinking to big corps was people defending Apple's soldering of memory and SSDs, especially so on this site, until some Chinese lad proved that all the imagined reasons for why Apple had to do such and such were BS post-hoc rationalisation.
The same goes for Go, but if you spend enough time, every little while you see the disillusionment of some hardcore fans, even from Go's core team, and they start asking questions, but always starting with things like "I know this is Go and holy reasons exist and I am doing a sin to question, but why X or Y". It is comedy.
Go is a super productive powerhouse for me.
AI solved my issues with carpal tunnel.
And when I'm feeling fancy, I don't even type, just command AI by voice. "handle error case".
Could you quote which paragraph you're talking about?
AFAIK, interoperability with C++ code is just one of their explicit goals; they only place that as the last item in the "Language Goals" section.
If you really care about scope while being able to use `bar` later down, the code should be written as:
which actually overwrites `err`, as opposed to "shadowing" it.

The confusing part here is that the difference between `if err != nil` and `if err = call(); err != nil` is not just style: the latter also introduces a scope that captures whatever variables get created before the `;`.
If you really REALLY want to use the same `if` style, try:
The green threads are very interesting since you can create 1000s of them at a low cost and that makes different designs possible.
I think this complaining about defer is a bit trivial. The actual major problem for me is the way imports work: the fact that it knows about GitHub, and how difficult it is to replace a dependency there with some other one, including a local one. The forced layout of files, cmd directories, etc. etc.
I can live with it all but modules are the things which I have wasted the most time and struggled the most.
Use `replace` in `go.mod`, or `go.work` if you're hacking on it locally?
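For reference, a sketch of what that looks like in `go.mod` (module paths here are placeholders):

```
module example.com/app

go 1.22

require github.com/some/dep v1.2.3

// Use a local checkout of the dependency instead of the published version.
replace github.com/some/dep => ../dep
```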
You don't need to have a cmd directory. I see it a lot in Go projects but I'm not sure why.
- `is` for interfaces checks both value and interface type.
- `==` for interfaces uses only the value and not the interface type.
- For structs/value, `is` and `==` are obvious since there's only a value to check.
I also review a lot of Go code. I see these problems all the time from other people.
[0] https://news.ycombinator.com/item?id=44985378
The article's points feel overly simplistic/shallow and lack the depth you'd expect from an experienced Go programmer.
(Something that's not a minor error, which someone else pointed out, is that Python isn't strictly refcounted. Yeah, that's why I emphasized "almost" and "pretty much". I can't do anything about that kind of critique.)
The kind of general reason you need a mutex is that you are mutating some data structure from one valid state to another, but in between, it's in an inconsistent state. If other code sees that inconsistent state, it might crash or otherwise misbehave. So you acquire the mutex beforehand and release it once it's in the new valid state.
But what happens if you panic during the modification? The data structure might still be in an inconsistent state! But now it's unlocked! So other threads that use the inconsistent data will misbehave, and now you have a very tricky bug to fix.
This doesn't always apply. Maybe the mutex is guarding a more systemic consistency condition, like "the number in this variable is the number of messages we have received", and nothing will ever crash or otherwise malfunction if some counts are lost. Maybe it's really just providing a memory fence guarding against a torn read. Maybe the mutex is just guarding a compare-and-swap operation written out as a compare followed by a swap. But in cases like these I question whether you can really panic with the mutex held!
This is why Java deprecated Thread.stop. (But Java does implicitly unlock mutexes when unwinding during exception handling, and that does cause bugs.)
This is only vaguely relevant to your topic of whether Golang is good or not. Explicit error handling arguably improves your chances of noticing the possibility of an error arising with the mutex held, and therefore handling it correctly, but you correctly pointed out that because Go does have exceptions, you still have to worry about it. And cases like these tend to be fiendishly hard to test—maybe it's literally impossible for a test to make the code you're calling panic.
I can see why people pick it, but it's a major step up in convenience rather than a major step up in the evolution of the programming language itself.
The distinction you're making here does not exist IMO. Convenience is the entire point of design.
It's a good language for teams, for sure, though.
Talk about hyperbole.
I don't get how you can assign an interface to be a pointer to a structure. How does that work? That seems like a compile error. I don't know much about Go interfaces.
the nil issue. An interface, when assigned a struct pointer, is no longer nil even if that pointer is nil - probably a mistake. Valid point.
append in a func. Definitely one of the biggest issues is that slices are by ref. They did this to save memory and speed but the append issue becomes a monster unless abstracted. Valid point.
err in scope for the whole func. You defined it, of course it is. Better to reuse a generic var than constantly instantiate another. The lack of try catch forces you to think. Not a valid point.
defer. What is the difference between a scope block and a function block? I'll wait.
> Over the decades I have lost data to tools skipping non-UTF-8 filenames. I should not be blamed for having files that were named before UTF-8 existed.
Umm.. why blame Go for that?
What I intended to say with this is that ignoring the problem of invalid UTF-8 (which could be valid ISO-8859-1) with no error handling, or the other way around, has lost me data in the past.
Compare this to Rust, where a path name is a different type from a mere string. And if you need to treat it like a string and you don't care if it's "a bit wrong" (because it's being shown to the user), then you can call `.to_string_lossy()`. But it's harder to accidentally not handle that case when an exact name match does matter.
When exactness matters, `.to_str()` returns `Option<&str>`, so the caller is forced to deal with the situation that the file name may not be UTF-8.
Being sloppy with file name encodings is how data is lost. Go is sloppy with strings of all kinds, file names included.
But forcing all strings to be UTF-8 does not magically help with the issue you described. In practice I've often seen the opposite: Now you have to write two code paths, one for UTF-8 and one for everything else. And the second one is ignored in practice because it is annoying to write. For example, I built the web server project in your other submission (very cool!) and gave it a tar file that has a non-UTF-8 name. There is no special handling happening, I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Forcing UTF-8 does not "fix" compatibility in strange edge cases, it just breaks them all. The best approach is to treat data as opaque bytes unless there is a good reason not to. Which is what Go does, so I think it is unfair to blame Go for this particular reason instead of the backup applications.
You can debate whether it is sloppy but I think an error is much better than silently corrupting data.
> The best approach is to treat data as opaque bytes unless there is a good reason not to
This doesn't seem like a good approach when dealing with strings which are not just blobs of bytes. They have an encoding and generally you want ways to, for instance, convert a string to upper/lowercase.
I don't think you need two code paths. Maybe your program can live its entire life never converting away from the original form. Say you read from disk, pick out just the filename, and give to an archive library.
There's no need to ever convert that to a "string". Yes, it could have been a byte array, but taking out the file name (or maybe final dir plus file name) are string operations, just not necessarily on UTF-8 strings.
And like I said, for all use cases where it just needs to be shown to users, the "lossy" version is fine.
> I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Haha, touche. But yes, it's less sloppy. Would you prefer that the files were silently skipped? You've created your archive, you started the webserver, but you just can't get it to deliver the page you want.
In order for tarweb to support non-UTF-8 in filenames, the programmer has to actually think about what that means. I don't think it means doing a lossy conversion, because that's not what the file name was, and it's not merely for human display. And it should probably not be the bytes either, because tools will likely want to send UTF-8 encoded.
Or they don't. In either case unless that's designed, implemented, and tested, non-UTF-8 in filenames should probably be seen as malformed input. For something that uses a tarfile for the duration of the process's life, that probably means rejecting it, and asking the user to roll back to a previous working version or something.
> Forcing UTF-8 does not "fix" compatibility in strange edge cases
Yup. Still better than silently corrupting.
Compare this to how for Rust kernel work they apparently had to implement a new Vec equivalent, because dealing with allocation failures is a different thing in user and kernel space[1], and Vec push can't fail.
Similarly, Go string operations cannot fail. And memory allocation can fail for reasons that string operations can't.
[1] a big separate topic. Nobody (almost) runs with overcommit off.
But there is no silent corruption when you pass the data as opaque bytes, you just get some placeholder symbols when displayed. This is how I see the file in my terminal and I can rm it just fine.
And yes, question marks in the terminal are way better than applications not working at all.
The case of non-UTF-8 being skipped is usually a characteristic of applications written in languages that don't use bytes for their default string type, not the other way around. This has bitten me multiple times with Python2/3 libraries.
Go has a good-enough standard library, and Go can support a "pile-of-if-statements" architecture. This is all you need.
Most enterprise environments are not handled with enough care to move beyond "pile-of-if-statements". Sure, maybe when the code was new it had a decent architecture, but soon the original developers left and then the next wave came in and they had different ideas and dreamed of a "rewrite", which they sneakily started but never finished, then they left, and the 3rd wave of developers came in and by that point the code was a mess and so now they just throw if-statements onto the pile until the Jira tickets are closed, and the company chugs along with its shitty software, and if the company ever leaks the personal data of 100 million people, they aren't financially liable.
Every piece of code looks the same and can be automatically, neutrally, analysed for issues.
Don't use it I guess and ignore all the X is not good posts for language X you do decide to use?
EDIT: OK, I think I understand; there's no easy way to have `bar` be function-scoped and `err` be if-scoped.
I mean, I'm with him on the interfaces. But the "append" thing just seems like ranting to me. In his example, `a` is a local variable; why would assigning a local variable be expected to change the value in the caller? Would you expect the following to work?
If not, why would you expect `a = append(a, ...)` to work?

I think you may need to re-read. My point is that it DOES change the value in the caller (well, sometimes). That's the problem.
I'm afraid I can only consider that a taste thing.
EDIT: One thing I don't consider a taste thing is the lack of the equivalent of a "const *". The problem with the slice thing is that you can sort of sometimes change things but not really. It would be nice if you could be forced to pass either a pointer to a slice (such that you can actually allocate a new backing array and point to it), or a non-modifiable slice (such that you know the function isn't going to change the slice behind your back).
If someone really doesn't like the reuse of err, there's no reason why they couldn't create separate variables, e.g. err_foo and err_foo2. There's no rule forcing you to reuse err.
> Even if we change that to :=, we’re left to wonder why err is in scope for (potentially) the rest of the function. Why? Is it read later?
My initial reaction was: "The first `err` is function-scope because the programmer made it function-scope; he clearly knows you can make them local to the if, so what's he on about?`
It was only when I tried to rewrite the code to make the first `err` if-scope that I realized the problem I guess he has: OK, how do you make both `err` variable if-scope while making `bar` function-scope? You'd have to do something like this:
Which is a lot of cruft to add just to restrict the scope of `err`.

What in the javascript is this.
It turns out to be nothing but a misunderstanding of what the fmt.Println() statement is actually doing. If we use a more advanced print statement then everything becomes extremely clear:
The author of this post has noted a convenience feature, namely that fmt.Println() tells you the state of the thing in the interface and not the state of the interface, mistaken it for a fundamental design issue, and written a screed about a language issue that literally doesn't exist.

Being charitable, I guess the author could actually be complaining that putting a nil pointer inside a nil interface is confusing. It is indeed confusing, but it doesn't mean there are "two types" of nil. Nil just means empty.
It's not about Printf. It's about how these two different kind of nil values sometimes compare equal to nil, sometimes compare equal to each other, and sometimes not
Yes there is a real internal difference between the two that you can print. But that is the point the author is making.
Go had some poor design features, many of which have now been fixed, some of which can't be fixed. It's fine to warn people about those. But inventing intentionally confusing examples and then complaining about them is pretty close to strawmanning.
It's confusing enough that it has an FAQ entry and that people tried to get it changed for Go 2. Evidently people are running in to this. (I for sure did)
EVERY language has certain pitfalls like this. Back when I wrote PHP for 20+ years I had a Google doc full of every stupid PHP pitfall I came across.
And they were always almost a combination of something silly in the language, and horrible design by the developer, or trying to take a shortcut and losing the plot.
It's sort of a known sharp edge that people occasionally cut themselves on. No language is perfect, but when people run into them they rightfully complain about it
What are you trying to clarify by printing the types? I know what the types are, and that's why I could provide the succinct weird example. I know what the result of the comparisons are, and why.
And the "why" is "because there are two types of nil, because it's a bad language choice".
I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
Edit, according to this comment this two-types-of-null bites other people in production: https://news.ycombinator.com/item?id=44983576
There aren't two types of nil. Would you call an empty bucket and an empty cup "two types of empty"?
There is one nil, which means different things in different contexts. You're muddying the waters and making something which is actually quite straightforward (an interface can contain other things, including things that are themselves empty) seem complicated.
> I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
Sure, I've seen pointer-to-pointer dereferences fail for the same reason in C. It's not particularly different.
But that’s all this is, is a list of annoyances the author experiences when using Go. Great, write an article about the problems with Go, but don’t say “therefore it’s a bad language”.
While I agree with many of the points brought up, none of them seems like such a huge issue that it's even worth discussing, honestly. So you have different taste than the original designers. Who cares? What language do you say is better? I can find just as many problems with that language.
Also, defer is great.
The first time it's assigned nil; the second time it's overwritten in case there's an error in the 2nd function. I don't see the author's issue? It's very explicit.
After checking for nil, there's no reason `err` should still be in scope. That's why it's recommended to write `if err := foo(); err != nil`, because after that, one cannot even accidentally refer to `err`.
I'm giving examples where Go syntactically does not allow you to limit the lifetime of the variable. The variable, not its value.
You are describing what happens. I have no problem with what happens, but with the language.
The example from the blog post would fail, because `return err` referred to an `err` that was no longer in scope. It would syntactically prevent accidentally writing `foo99()` instead of `err := foo99()`.
yeah no. you need an acyclic structure to maybe guarantee this, in CPython. other Python implementations are more normal in that you shouldn't rely on finalizers at all.
> It is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. This is called object resurrection.
[0]: https://docs.python.org/3/reference/datamodel.html#object.__...
Reading: cyclic GC, yes, the section I linked explicitly discusses that problem, and how it’s solved.
Yes, yes. Hence the words "almost" and "pretty much". For exactly this reason.
Use the language where it makes sense. You do know what if you have an issue that the language fails at, you can solve that particular problem in another language and.. call that code?
We used to have a node ts service. We had some computationally heavy stuff, we moved that one part to Go because it was good for that ONE thing. I think later someone ported that one thing to Rust and it became a standalone project.
Idk. It’s just code. Nobody really cares, we use these tools to solve problems.
Its main virtues are low cognitive load and encouraging simple straightforward ways of doing things, with the latter feeding into the former.
Languages with sophisticated powerful type systems and other features are superior in a lot of ways, but in the hands of most developers they are excuses to massively over-complicate everything. Sophomore developers (not junior but not yet senior) love complexity and will use any chance to add as much of it as they can, either to show off how smart they are, to explore, or to try to implement things they think they need but actually don't. Go somewhat discourages this, though devs will still find a way of course.
Experienced developers know that complexity is evil and simplicity is actually the sign of intelligence and skill. A language with advanced features is there to make it easier and simpler to express difficult concepts, not to make it more difficult and complex to express simple concepts. Every language feature should not always be used.
You end up creating these elegant abstractions that are very seductive from a programmer-as-artist perspective, but usually a distraction from just getting the work done in a good enough way.
You can tell that the creators of Go are very familiar with engineer psychology and what gets them off track. Go takes away all shiny toys.
With all of that, Go becomes the perfect language for the age of LLMs writing all the code. Let the LLMs deal with all the boilerplate and misery of Go, while at the same time its total lack of elegance is also well suited to LLMs which similarly have the most dim notions of code elegance.
Here's the accompanying playground: https://go.dev/play/p/Kt93xQGAiHK
If you run the code, you will see that calling read() on ControlMessage causes a panic even though there is a nil check. However, it doesn't happen for Message. See the read() implementation for Message: we need to have a nil check inside the pointer-receiver struct methods. This is the simplest solution. We have a linter for this. The ecosystem also helps, e.g protobuf generated code also has nil checks inside pointer receivers.
First one - you have an address to a struct, you pass it, all good.
Second case: you set the address of the struct to "nil". What is nil? It's an address like any other. Maybe it's 0x000000 or something else. At this point, from a memory perspective, it exists, but the OS will prevent you from touching anything the NULL pointer points to.
Because you don't touch ANYTHING nothing fails. It's like a deadly poison in a box you don't open.
Third example is the same as the second one. You have an IMessage, but it points to NULL (instead of NULL pointing to the deadly poison).
And in fourth, you finally open the box.
Is it magic knowledge? I don't think so, but I'm also not surprised about how you can modify data through slice passing.
IMO the biggest Go shortcoming is selling itself as a high-level language while it touches more bare metal than people are used to touching.
> The standard library does that. fmt.Print when calling .String(), and the standard library HTTP server does that, for exceptions in the HTTP handlers.
Apart from this, most of it doesn't seem like that big of a deal, except for `append`, which is truly bad syntax. If you're doing an in-place append, don't return the value.
This means that if you do:
And `get_something()` panics, then the program continues with a locked mutex. There are more dangerous things than a deadlocked program, of course.

It's non-optional to use defer, and thus to write exception-safe code, even if you never use exceptions.
Python sucks balls but it has more fanboys than a K-pop idol. Enforced whitespace? No strong typing? A global interpreter lock? Garbage error messages? A package repository you can't search (only partially because it's full of trash packages and bad forks) with random names and no naming convention? A complete lack of standardization in installing or setting up applications/environments? 100 lines to run a command and read the stdout and stderr in real time?
The real reason everyone uses Python and Go is because Google used them. Otherwise Python looks like BASIC with objects and Go is an overhyped niche language.
I really like Go. It scratches every itch that I have. Is it the language for your problems? I don't know, but very possibly that answer is "no".
Go is easy to learn, very simple (this is a strong feature, for me) and if you want something more, you can code that up pretty quickly.
The blog article author lost me completely when they said this:
> Why do I care about memory use? RAM is cheap.
That is something that only the inexperienced say. At scale, nothing is cheap; there is no cheap resource if you are writing software for scale or for customers. Often, single bytes count. RAM usage counts. CPU cycles count. Allocations count. People want to pretend that they don't matter because it makes their job easier, but if you want to write performant software, you'd better have those CPU cache lines in mind, and if you have those in mind, you have the memory usage of your types in mind.
Well if maximalist performance tuning is your stated goal, to the point that single bytes count, I would imagine Go is a pretty terrible choice? There are definitely languages with a more tunable GC and more cache line friendly tools than Go.
But honestly, your comment reads more like gatekeeping, saying someone is inexperienced because they aren't working with software at the same scale as you. You sound equally inexperienced (and uninterested) with their problem domain.
I chuckled
Never had any problems with Go as it makes me millions each year.
:Error variable scope -> Yes, it can be confusing at the beginning, but with some experience it doesn't really matter. Would it be cool to scope it down? Sure, but this feels like something blown up into an "issue" when there are other things I'd consider a lot more important for the Go team to revisit. Regarding error handling in Go: some hate it, some love it. I personally like it (yes, I really do), so I think it's more a preference than a "bad" thing.
:Two types of nil -> Funny, I never encountered this in >10 years of Go with a LOT of pointer juggling, so I wonder in which reality this hits you in a way that can't be avoided. Though it is confusing, I admit.
:It’s not portable -> I have no opinion here since I work on Unix systems only and compile my binaries per target, shrug. I don't see any issue here either.
:append with no defined ownership -> I mean... seriously? Your test case, while the results may be unexpected, is a super weird one. Why would you append to a field like that? If you think about what these functions do under the hood, your attempt actually feels like you WANT to produce strange behaviour, and things like that can be done in any language.
:defer is dumb -> Here I 100% agree - from my POV it leads to massive resource wasting, and in certain situations it can also create strange errors, but I'm not motivated to explain this - I'll just say that defer, while it seems useful, is from my POV a bad thing and should not be used.
:The standard library swallows exceptions, so all hope is lost -> "So all hope is lost" - I mean, you had already left the realm of objectivity long before, but this really tops it. I've written some quite big Go applications and I never had a situation where I couldn't handle an exception simply by adjusting my code so that I prevent it from even happening. Again, I feel like someone is just in search of things to complain about that could simply be avoided. (Also, in case someone comes up with a super specific once-in-a-million case: always keep in mind that language design doesn't orient itself around the rarest cases.)
:Sometimes things aren’t UTF-8 -> I won't bother to read another whole article; if it's important, include an example. I have dealt with different encodings (web crawler) and I could handle all of them.
:Memory use -> What you describe is one of the design decisions I'm not absolutely happy with: the memory handling. But then, one of my Go projects is an in-memory graph storage/database, which in one case ran for ~2 years without restart and had about 18GB of data stored in it. It has a lot of mutex handling (regarding your earlier complaint about exceptions: never had one), and it ran as the backend of an internet-facing service, so it wasn't just fed internal data.
--------------------
Finally, I want to say: often things come down to personal preference. I could spend days raging about JavaScript, Java, C++ or some other languages, but what for? Pick the language that fits your use case and your liking; don't pick one that doesn't and complain about it.
Also, just to show I'm not just a big "golang is the best" fanboy, because it isn't: there are things to criticize, like the previously mentioned memory handling.
While I still think you just created memory leaks in your app: Go had this idea of "arenas", which would enable code to partly manage memory itself and therefore allow much more memory-efficient applications. This has stalled lately and I REALLY hope the Go team will pick it up again and make it a stable thing to use. I would probably update all of my bigger codebases to use it.
Also - and this is something that has annoyed me A LOT because it cost me a lot of hours - the Go plugin system. I wrote an architecture to orchestrate processing, and for certain reasons I wanted to implement the orchestrated "things" as plugins. But the plugin system as it is right now can only be described as the torments of hell. I messed with it for about 3 years until I recently dropped the plugin functionality and added the stuff directly. Plugins are a very powerful thing and a good plugin system could be a great thing, but in its current state I would recommend no one touch it.
These are just two points; I could list some more. But the point I want to get to is: there are real things you can criticize, instead of things that you create yourself or language design decisions that you just don't like. I'm not sure if such articles are the rage of someone who is just bored, or ragebait to make people read them. Either way, it's not helping anyone.
:Two types of nil
Other commenters have. I have. Not everyone will. Doesn't make it good.
:append with no defined ownership
I've seen it. Of course one can just "not do that", but wouldn't it be nice if it were syntactically prevented?
:It’s not portable ("just Unix")
I also only work on Unix systems. But if you only work on amd64 Linux, then portability is not a concern. Supporting BSD and Linux is where I encounter this mess.
:All hope is lost
All hope is lost specifically on the idea of not needing to write exception-safe code. If panics always crashed the program, then that'd be fine. But no coding standard can save you from the standard library, so yes, all hope of being able to pretend that panic exits the program is lost.
You don't need to read my blog posts. Looking forward to reading your, much better, critique.
I say switching to Go is like a different kind of Zen. It takes time, to settle in and get in the flow of Go... Unlike the others, the LSP is fast, the developer, not so much. Once you've lost all will to live you become quite proficient at it. /s
I can still check out the code to any of them, open it and it'll look the same as modern code. I can also compile all of them with the latest compiler (1.25?) and it'll just work.
No need to investigate 5 years of package manager changes and new frameworks.
ISTG if I get downvoted for sharing my opinion I will give up on life.
This is the key problem with defer. It operates a lot like a finally block, but only on function exit which means it's not actually suited to the task.
And as the sibling pointed out, you could use an anonymous function that's immediately called, but that's just awkward, even if it has become idiomatic.
I don't really care if you want that. Everyone should know that that's just the way slices work. Nothing more nothing less.
I really don't give a damn about that, i just know how slices behave, because I learned the language. That's what you should do when you are programming with it (professionally)
Just like every PHP coder should know that the ternary operator associativity is backwards compared to every other language.
If you code in a language, then you should know what's bad about that language. That doesn't make those aspects not bad.
For anyone interested, this article explains the fundamentals very well, imo: https://go.dev/blog/slices-intro
You really don't see why people would point a definition that changes underneath you out as a bad definition? They're not arguing the documentation is wrong.
"Consistent" is necessary but not sufficient for "good".
If that's your argument then there are no bad design decisions for any language.
It's a shame because it is just as effective as pissing in the wind.
Of course, by your reasoning this also means you yourself have designed a language.
I'll leave out repeating your colorful language if you haven't done any of these things.
Actually I think that's a reasonable argument. I've not designed a language myself (other than toy experiments) so I'm hesitant to denigrate other people's design choices because even with my limited experience I'm aware that there are always compromises.
Similarly, I'm not impressed by literary critics whose own writing is unimpressive.
No, I stick by my position. I may not be able to do any better, but I can tell when something’s not good.
(I have no opinion on Go. I’ve barely used it. This is only on the general principle of being able to judge something you couldn’t do yourself. I mean, the Olympics have gymnastic judges who are not gold medalists.)
I really don’t like your logic. I’m not a Michelin chef, but I’m qualified to say that a restaurant ruined my dessert. While I probably couldn’t make a crème brûlée any better than theirs, I can still tell that they screwed it up compared to their competitor next door.
For example, I love Python, but it’s going to be inherently slow in places because `sum(list)` has to check the type of every single item to see what __add__ function to call. Doesn’t matter if they’re all integers; there’s no way to prove to the interpreter that a string couldn’t have sneaked in there, so the interpreter has to check each and every time.
See? I’ve never written a language, let alone one as popular as Python, but I’m still qualified to point out its shortcomings compared to other languages.