Good-bye core types; Hello Go as we know and love it
The only thing that bothered me was hearing an interview with the Go devs where one of the key devs sounded like generics would never make their way into Go; the way he seemed so adamantly against such a feature put me off. But now that generics are in, I might start doing some of my side projects in Go just to force myself to become more familiar with it.
First, to understand how to handle errors differently, you have to understand how errors are different.
Like, is a person's age an error? Your gut reaction is almost certainly "What? No. A person's age isn't an error." Yet soon enough you're writing age verification checks like `if age < 18 { /* not an adult */ }` and `if age < 21 { /* not old enough to drink in the USA */ }` – with all the exact same problems `if err != nil {}` has. Clearly it is an error in certain contexts.
Keep going and you start to wonder what branching situation isn't error handling. So, really, it seems to me what we really want is a better way to express branching operations. "if" is one of the earliest additions to programming languages, so it stands to reason that it is getting a little long in the tooth.
The effort to improve error handling is clearly there. Core team member Ian Lance Taylor submitted a new proposal and built a reference implementation just within the last few months. There have been ~200 error handling proposals! It is a super hard problem, though. A "tiny bit more" thinking is not sufficient.
So, really, it seems to me what we really want is a better way to express branching operations.
There is a fairly widely adopted and battle-proven pattern. It's called monads. Basically, you put a "context" over parts of your code and then some type of branching becomes implicit.
Good languages generalize it and allow their developers to create and use their own contexts. Other languages at least have special cases for very common cases. async/await is such an example.
If by fairly widely adopted you mean it is adopted in a handful of languages nobody uses, sure.
> Other languages at least have special cases for very common cases.
Where they usually screw it up. Look at the canonical example of a "monad" in Rust:
let result = divide(3.0, 2.0);
match result {
    Just(x) => println!("Answer: ", x),
    Nothing => println!("division failed; we'll get 'em next time."),
}
That is just "if" by another name!If by fairly wide-adopted you mean it is adopted in a handful of language nobody uses, sure.
Please precisely define your criteria so that I can end this discussion by giving you a concrete counter example.
That is just "if" by another name!
It appears you haven't understood the difference. Here's a counter example:
let first_result = divide(10, ???);
let second_result = divide(20, ???);
let final_result = first_result.or(second_result).unwrap_or(42);
println!("Result: {}", final_result);
Logic being: do two divisions and print the result of the first one, or, if it failed, of the second one, or fall back to 42. We can go on with using lists of results, results of results, and doing conditional calculations, and so on and so on – all of that without writing a single if.
Now do that in go without using if. Won't look as nice for sure.
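For reference, the plain Go version of the same fallback logic – a rough sketch, with a stand-in divide signature of my own – looks something like this:

package main

import (
    "errors"
    "fmt"
)

// divide is a stand-in with an assumed signature; the real one could differ.
func divide(a, b float64) (float64, error) {
    if b == 0 {
        return 0, errors.New("division by zero")
    }
    return a / b, nil
}

func main() {
    // Try two divisions, take the first that succeeds, otherwise fall back to 42.
    final := 42.0
    if first, err := divide(10, 0); err == nil {
        final = first
    } else if second, err := divide(20, 2); err == nil {
        final = second
    }
    fmt.Println("Result:", final)
}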
You introduced the term. It is on you to define it. My stab in the dark to try and get us to an understanding may have failed, but that's where you would logically come in and explain yourself in better detail.
> so that I can end this discussion by giving you a concrete counter example.
A single counter example? I definitely would have never guessed that by wide-adoption you were thinking something singular. No wonder you've been so afraid to share your definition.
> Logic being, do two divisions and print the result of the first one, or if it failed, of the second one, or fall back to 42.
That too is just "if" with different syntax, and worse syntax – treading into Perl one-liner territory, but I concede that this particular example may not be a good basis on which to start from.
And my example is not "if". There is no if. Maybe there is "if" being used in the underlying functions? Again, eventually everything is machine code, but then every discussion about PLs is meaningless...
It doesn't literally include the keyword "if", but you've only rewritten an "if" statement in different syntax – and with worse syntax at that. It doesn't take things to a higher level.
But, again, I understand that this is probably not a good example on which to start from. We only ended up with it because it is a common snippet found elsewhere. How about you pick something that is a better demonstration?
> Again, eventually everything is machine code
Right, but we have higher level languages that enable expressing ideas beyond what the machine is directly capable of. The "if" statement itself is one such abstraction, albeit a fairly low-level one.
but you've only rewritten an "if" statement in different syntax
I don't understand what you mean. There is, to my knowledge, no different syntax for if-statements. In my code example there simply is no if being used.
Care to elaborate?
Can we agree that the ternary operator in C provides a way to write if-statements with different syntax?
> In my code example there simply is no if being used.
Assuming we do agree, there is no literal "if" being used when using the ternary operator. However, it doesn't take the concept to a higher level. It still leans on the exact same conceptual if-statement expression, albeit written in a slightly different way.
The same is true of your code example.
Can we agree that the ternary operator in C provides a way to write if-statements with different syntax?
I can neither agree nor disagree because I don't know, but we are not talking about C and my code example didn't use a ternary operator either, so from my POV it doesn't matter.
async/await was a better example. That moves something to a higher level of abstraction. However, it is not clear how to make that generalize.
It is clear. Just not to you. Check Haskell's do-notation, Scala's for-comprehensions, or F#'s do!-notation and how they work. They all generalize over what async/await does (all with slightly different syntax, though).
It matters because it is an attempt to explain what I am trying to convey. Technical people in particular tend to not share a common vernacular, so it is likely that I am using words in ways that do not match your understanding of them.
Communication is hard. I may have fallen short in my attempt, but surely we can try again? What do you hope to gain from this standoffishness?
> It is clear. Just not to you.
Fair enough, but that's the whole reason we're here, isn't it? To change that. If it were already clear, what purpose would the discussion serve?
So, let's go back to the opening comment with the age check problem. That is the explicit case where a higher level abstraction is being sought, and I think it serves as a good place to think about generalization without getting caught in the weeds.
I looked at Haskell's do notation and don't see how it helps. I prompted several LLMs in hopes that maybe they could resolve it, but in every case they reverted to using the exact same if-statements we started with. We're going to have to rely on your wisdom here.
Fair enough, but that's the whole reason we're here, isn't it? To change that.
I'm afraid to tell you that I can't change that with my explanation. You will have to actually apply this in practice for a while – long enough that you have gotten used to it. And then compare it to how you work in golang. Because otherwise the code will just look unfamiliar to you and you'll potentially suffer from the blub paradox.
Of course this takes tons of time, so I can't expect you (or anyone) to do it. I can only tell how it looks from my POV since I've done it both ways.
I did. But you said it doesn't matter. Now it does matter?
> I'm afraid to tell you that I can't change that with my explanation.
No need for an explanation. Code will suffice. Perhaps I cannot write it myself, but I will eventually be able to understand it.
> Because otherwise the code will just look unfamiliar to you
That may be true in the first instant, but time marches forward and familiarity grows. This isn't an issue.
Is the real problem here that the code is so long and arduous that you don't have the time to write it? I don't think that satisfies the intent, if so. The idea is that the abstraction should be better than an if-statement, not the same or worse.
Assuming we do agree, there is no literal "if" being used when using the ternary operator. However, it doesn't take the concept to a higher level.
So does it mean that you take back your original claim of "if in a different syntax" then and move the goalpost to "higher level"?
If so, I think "higher level" is too subjective and I'm tired of discussing this topic.
No need for an explanation. Code will suffice. Perhaps I cannot write it myself, but I will eventually be able to understand it.
Then please do me a favor and explain the requirements again, because I'm not sure what exactly you have in mind for the age constraint(s).
That may be true in the first instant, but time marches forward and familiarity grows. This isn't an issue.
I don't think so. You have to actually use it yourself, over time. At least that's true for everyone I know. Otherwise the abstraction will not and can not feel better to you, because you assess it in the wrong context.
It's like showing Prolog code to someone who is used to C. That just doesn't work. I can write you the code, but I doubt it will help you. You already said the other example with the divide function didn't look great, so from there it only gets worse for you if that style is unfamiliar to you.
Take back what now? I said that your code example demonstrated an "if" statement written in different syntax; that it did not approach things from a higher level. The same would be true if you had written it with a ternary operator, if the language in question had such a feature. How does restating the same thing in a slightly different way lead to some kind of contradiction or whatever it is you are seeing here?
> If so, I think "higher level" is too subjective and I'm tired of discussing this topic.
I suspect once again you're getting too caught up in what words mean to you and not what they mean to me. As much as I'd like to use words as you know them, I haven't quite figured out how to drill into your brain to extract that information. Best I can do is use what I have and work with you to clarify intent as needed.
But we've already discussed that to death. There was no need to say it again – especially if it has tired you. Why did you bring it up again, exactly...?
> You have to actually use it yourself, over time.
Time I have. All we lack is your code.
> I can write you the code, but I doubt it will help you.
Humorously, you've put way more effort into telling me this than the supporting if-statement approach would have required. Does this once again suggest that what you envision is so long and arduous that you are avoiding it because of the effort required?
Otherwise you may as well try me. If it doesn't help we're still in the same place with much less energy expenditure on your part as compared to whatever this is.
> It's like showing Prolog code to someone who is used to C. That just doesn't work.
No, it works just fine. Prolog is perfectly understandable to read. It is harder to write without first understanding a bunch of technical nuance, but we don't need the uninitiated to write code here.
Why not admit that you can't write the code in question? This act isn't fooling anyone.
Take back what now?
Sorry, it was late and you indeed mentioned that it wasn't making it "high level". My bad. Unfortunately, "high level" is quite subjective and I'm not willing to put the time to discuss if something falls under this definition or not.
Why not admit that you can't write the code in question? This act isn't fooling anyone.
I asked you to clarify what exactly you want to see (which you didn't quote, even though you quoted basically everything else in my reply). In this particular thread there was no mention of age; that was in one of the other threads. So, before I write code for something that you don't actually want:
Again, please explain the requirements for the code. What exactly do you want to see written in this style?
Uh... https://news.ycombinator.com/item?id=43487167
With respect, are you okay? Between this and your other comment [https://news.ycombinator.com/item?id=43502693] that was completely off the rails, I'm worried for you. You seem to have lost your sense of what is going on.
If it is simply that you are posting too much and can't keep things straight, making up stories like that you could code something but won't because I wouldn't be able to understand your brilliance to cover your tracks, maybe it's time to stop using the internet for a while?
I ask because it's very much not "canonical": Just and Nothing are Haskell terms, not Rust terms. You'd expect at least Some and None.
Regardless, I do agree that in this specific circumstance, you're emulating an if. There's even a form in this case that emulates it even more closely:
if let Some(x) = divide(3.0, 2.0) {
    println!("Answer: {}", x);
} else {
    println!("division failed; we'll get 'em next time.");
}
But these simple cases don't show off where Option/Result truly shine.

Sent one in: https://en.wikipedia.org/w/index.php?title=Monad_(functional... we'll see if it gets reverted. Honestly, I find this whole section kind of awkward. I fully agree with you that it doesn't really show off why this stuff works, and works well. Real-world code doesn't get written like this.
A "tiny bit more" thinking is not sufficient.
They could've at least looked at what other languages were doing at the time, and that would have been much better than what they ended up with. However, as with many other aspects of Go, the creators ignored existing work and went with what seemed like improvements to problems from decades ago.
Other languages at the time were doing the exact same thing, except, maybe, wrapping the error in a monad such that you have to check the error before doing anything with value, but still with "if" or an if-like construct.
But that is unnecessary in Go as it believes that values should always be useful – which means you don't need to even consider the error to use the value.
you don't need to even consider the error to use the value
That's how you get garbage (but valid) values fed as input to your code, eventually resulting in garbage output produced on the other end of the pipe. Always fun to debug.
If you face a function author who doesn't know what the hell they are doing and, as a result, foolishly feeds you garbage, then yes, it is possible for that garbage to propagate unknowingly. But a function author who doesn't know what the hell they are doing is going to be a problem no matter what. No language construct ever conceived can save you from someone who doesn't know what the hell they are doing; they will do foolish things in every language ever created. Expecting a language feature to fix that is not a reasonable position.
If a function is written sensibly, the value can never be garbage. How could it be?
First things first: how can you read from a file that doesn't exist? Even you questioned that later, so this is a strange question. – Do you mean if you try to open a file that doesn't exist? nil would be a reasonable value. It is how Go signifies the absence of something. nil is always useful, and `if file == nil` will tell you that the file isn't there. No need to observe the error.
If you need to know why the file isn't there then, sure, you're also going to have to look at the error. Sometimes that is important. In the case of opening a file you realistically will need to know why it failed to open so that you can resolve the condition, but in other cases you don't need to worry about why it failed. Checking the error would be unnecessary in those cases, assuming the function author has some understanding of how to make even half-decent APIs.
Let's assume here, for the sake of discussion, that you have no need to know why the file could not be opened. Again, how do you envision garbage propagating into the rest of your program?
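To make the nil-check idea concrete, here's a minimal sketch (os.Open really does return a nil *os.File alongside its error on failure; loadDefaults is a made-up fallback):

package main

import (
    "fmt"
    "os"
)

// loadDefaults is a hypothetical fallback for when the config file is absent.
func loadDefaults() []byte { return []byte("{}") }

// readConfig shows the "check the value, not the error" pattern described above.
func readConfig() []byte {
    f, _ := os.Open("config.json")
    if f == nil {
        // The file isn't there (or couldn't be opened); here we don't care why.
        return loadDefaults()
    }
    defer f.Close()
    buf := make([]byte, 4096)
    n, _ := f.Read(buf)
    return buf[:n]
}

func main() {
    fmt.Println(string(readConfig()))
}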
But also, let's say the read function in question is named ReadInt32. Then what does it return on error?
But that is unnecessary in Go as it believes that values should always be useful – which means you don't need to even consider the error to use the value.
Is this true? Are values always useful? You shouldn't even need to consider the error?
For example, if GPS errored while calling an Uber, is it useful to ignore the error and instead display the Uber driver as being in the middle of the ocean? https://www.reddit.com/r/uberdrivers/comments/zjpv69/well_ho...
`if err != nil {}`
What would you like instead?
1. Errors quickly lose their usefulness if you simply pass them up the stack. Even just on the surface, if you don't even know where the error came from, good luck making sense of it. Other languages have tried to avoid that problem by having things like "sidecar" handlers that can handle the error elsewhere, but in the end that's just moving code around. You haven't actually solved the overhead of needing to do something with the error.
2. Errors simply passed up the stack are, more often than not, going to leak implementation details. Consider a function that fetches data from a SQL database, where the underlying operations produce a "SQLNoResults" error. If you let that flow through, callers are going to start to rely on it. Now, imagine new requirements dictate that you need to fetch the data from an HTTP service instead. If you continue to simply pass the error along, now callers are going to get a "HTTPNotFound" error instead, breaking their usage. Not a good situation.
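To illustrate, the usual Go answer is to translate at the package boundary – a sketch, with ErrNotFound and the store type invented for the example:

package store

import (
    "database/sql"
    "errors"
    "fmt"
)

// ErrNotFound is the stable, caller-facing error; callers never see sql.ErrNoRows.
var ErrNotFound = errors.New("record not found")

type UserStore struct{ db *sql.DB }

// UserName translates storage-level errors at the boundary.
func (s *UserStore) UserName(id int) (string, error) {
    var name string
    err := s.db.QueryRow("SELECT name FROM users WHERE id = ?", id).Scan(&name)
    if errors.Is(err, sql.ErrNoRows) {
        return "", ErrNotFound
    }
    if err != nil {
        return "", fmt.Errorf("loading user %d: %w", id, err)
    }
    return name, nil
}

If the data later comes from an HTTP service instead, only the translation inside UserName changes; callers keep checking errors.Is(err, ErrNotFound).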
Errors quickly lose their usefulness if you simply pass them up the stack.
Nope, not in languages where you have a stacktrace attached.
Errors simply passed up the stack are, more often than not, going to leak implementation details.
That's why in a good language you can just wrap them at the right level of abstraction.
Like you say, the stack trace needs to be attached. For that you, at very least, need a "sidecar" handler if not done so in the same execution path, as we discussed earlier. Did you, uh, forget to read the thread?
> That's why in a good language you can just wrap them at the right level of abstraction.
You can move the logic around, but you can't avoid it, as we discussed earlier. Did you, uh, forget to read the thread?
Like you say, the stack trace needs to be attached. For that you, at very least, need a "sidecar" handler if not done so in the same execution path, as we discussed earlier. Did you, uh, forget to read the thread?
I just read it again, but I'm not sure what you mean. And sorry, I'm not familiar with that terminology (sidecar), but from the perspective of the user/developer, does it matter? As a system or library developer, I don't need to do anything – I'll have the stacktrace available when I need it. There is no extra code necessary. (One has to be mindful of the performance, but that's it.)
You can move the logic around, but you can't avoid it, as we discussed earlier. Did you, uh, forget to read the thread?
Doesn't have to do anything with moving logic around.
Let's say you are function foo and you call other functions and one of them is bar and it will fail with barError. Then, to avoid breaking your (= foo's) consumers if bar changes its internals, you simply wrap bar's error with your own. That can be as simple as doing `bar.mapError(barError -> fooError(cause = barError))` and that's it.
That is all I wanted to say.
Then, depending on the language, you still don't have to repeatedly do "`if err != nil {}`" or so. There are enough alternatives, e.g. monadic error handling like in Haskell or macros like in Rust.
You were familiar with it earlier – you couldn't have sensibly replied otherwise. How did you manage to lose it in the meantime?
> That can be as simple as doing `bar.mapError(barError -> fooError(cause = barError))` and that's it.
At the end of the day is that really any different than: `err = errors.Join(MyError{}, err)`?
But you've still just moved logic around (e.g. into mapError/Join). You've not changed what needs to be done.
At the end of the day is that really any different than: `err = errors.Join(MyError{}, err)`?
In the end, everything is machine code. You tell me if that is any different or not.
But you've still just moved logic around (e.g. into mapError/Join)
To improve backwards compatibility, yeah. Somehow the error needs to be changed.
But: with a stacktrace and a good language, this is a single line of code. No if/else etc. needed, even in the case of multiple different errors in different places in foo.
And my impression was that this is what we were discussing here - ergonomics of error handling.
Code is ultimately written for humans, not machines. If we only cared about the machine you could flip toggle switches and not worry about all these pesky human problems found in understanding code.
> You tell me if that is any different or not.
I don't think there is. But I may have missed your intent. The question was posed to ensure that we are on the same page. If you leave it up to me, we are on the same page, which means your earlier comment really doesn't work. There is no `if err != nil` to be found.
But: with a stacktrace and a good language, this is a single line of code.
Why can't it be a single line of code in Go? In fact, at one point Go even did include the stack trace in that single line of code in some pre-release work, but real-world usage determined that nobody ever used it (all the information you need is already there without a stack trace!), so it was stricken before final delivery. You can still do it yourself if you want, though. Errors are not magic.
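For instance, attaching a stack trace yourself is only a handful of lines – a sketch, not an endorsement of any particular library:

package trace

import (
    "fmt"
    "runtime"
)

// tracedError pairs an error with the stack captured at wrap time.
type tracedError struct {
    err   error
    stack []byte
}

func (e *tracedError) Error() string { return fmt.Sprintf("%v\n%s", e.err, e.stack) }
func (e *tracedError) Unwrap() error { return e.err }

// WithStack returns err annotated with the current goroutine's stack trace.
func WithStack(err error) error {
    if err == nil {
        return nil
    }
    buf := make([]byte, 4096)
    n := runtime.Stack(buf, false)
    return &tracedError{err: err, stack: buf[:n]}
}

Call sites stay a single line: return WithStack(err).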
Why can't it be a single line of code in Go?
Because Golang (to my knowledge) has no syntax that supports that.
You are a bit hard to discuss with, but I want to show good will, so I'll try to explain and hope you can appreciate that! :-)
Golang (just like most, but not all, languages!) has one default way of doing things. Which is: execute each line (or statement / expression) sequentially.
That's why you can write `loadMissiles(); fireMissiles()` and it works.
But it could be different. Imagine a language where each of those is, by default, executed in parallel. There are academic languages that actually work like that.
How would you then do something sequentially? By rewriting your code: `var result = loadMissiles(); fireMissiles(result)`. This is a semantic enforcement of sequential execution.
Now let's change this a little bit and add a `.then()` method onto every value (even `null` if the language has that). Then we rewrite the code:
`loadMissiles().then(result -> fireMissiles(result))`.
Looks familiar? If we add builtin error-handling then we just have re-invented javascript promises and this is not a coincidence.
Now, there is a duality to that – executing pieces of code independently of each other, so non-sequentially (whether it is actually run in parallel or not does not matter, as long as the outcome is the same, minus performance implications of course).
How would one do that? By adding a new method, let's call it `all()` that accepts a list of expressions. Unlike methods like .fold or .reduce, there is no way for the elements inside the list of expressions to interact with each other. That means even in a language that is "sequential by default" these expressions can (or could) be executed in parallel without a problem. This is basically Promise.all() in javascript.
Two more final steps.
First step: we have now invented promises (including sequential and non-sequential execution), which describe asynchronous computations. But how about other things? Let's think of results. They are similar – sometimes we need a successful result to continue (sequential), sometimes we can execute logic non-sequentially. How about optionality? Well, it's basically like a result where the error carries no information, so same thing. How about parsers? Sometimes we need to parse something and then decide how to keep parsing based on the result (sequential) – sometimes we can parse multiple things non-sequentially. What about resources? Sometimes we need a database connection to open a network connection (sequential). Sometimes we can do both non-sequentially.
And so on. See the pattern? Let's call those things "contexts" and then allow developers to define those contexts themselves, because we certainly can't foresee all contexts that exist in the world. Certain things are necessary to allow developers to do that, including some kind of parametric polymorphism (like generics).
Second step:
Now that we have those contexts, we can use them. But it would be nice to write code in the same way as "normal context" code (whatever that means for our language). So we should have some syntax to help with context switches – ideally for both sequential and non-sequential logic, and ideally generalized rather than specialized to single contexts.
Different languages have different strategies for the second step. Golang doesn't have anything like that (well, to my knowledge, I'm not a golang dev). It certainly doesn't have a generalized version though, that is for sure.
Therefore to come back to:
Why can't it be a single line of code in Go?
The answer is: because it lacks the syntax from the second step and – to my knowledge – a way to define contexts (at least typesafe ones; maybe unsafe ones are possible), and in particular the syntax to deal with them (without having to call .then() or – worse – if/else).
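To make the first step concrete in Go terms: with generics you can define such a context by hand, but without syntax support every step remains an explicit call – a rough sketch only:

package result

// Result is a hand-rolled "context" for computations that may fail.
type Result[T any] struct {
    value T
    err   error
}

func Ok[T any](v T) Result[T]        { return Result[T]{value: v} }
func Err[T any](err error) Result[T] { return Result[T]{err: err} }

// Then runs f only if the previous step succeeded; otherwise the error flows through.
// (It has to be a free function: Go methods can't introduce new type parameters.)
func Then[A, B any](r Result[A], f func(A) Result[B]) Result[B] {
    if r.err != nil {
        return Err[B](r.err)
    }
    return f(r.value)
}

// OrElse falls back to a default value when the computation failed.
func (r Result[T]) OrElse(fallback T) T {
    if r.err != nil {
        return fallback
    }
    return r.value
}

Chaining then means writing Then(Then(...), ...) by hand, which is exactly the missing second-step syntax.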
The single line was already demonstrated...
> That's why you can write `loadMissiles(); fireMissiles()` and it works.
Maybe.
func loadMissiles() {
    go func() {
        // Do the things.
    }()
}
Maybe not.

Get back to us when you gain at least a surface understanding of how computers work.
> See the pattern?
All that just to convert one type/value to another? That is complete and utter insanity.
Did you write this piece before reading the thread and decide to arbitrarily dump it upon us, totally oblivious to what is happening around you, to satisfy your sunk cost fallacy pangs?
2. There are places where you want to abstract your errors, but those places are not every function call or even most function calls.
Do you mean often you want multiple segments of code to all do the same thing on error? If there is only one segment then you well and truly have just moved things around.
Multiple segments all doing the same thing on error would give more justification to centralizing functionality, but at the same time if you have multiple segments of code all doing the same thing you've probably not thought your design through. Papering over design mistakes with language features is commonly done, but I'm not sure it is something to strive for.
> but those places are not every function call or even most function calls.
If the original error is your own you don't need to abstract it, but if you are passing your own errors through multiple levels of indirection you've, again, probably not thought your design through very well. Papering over design mistakes with language features is commonly done, but I'm not sure it is something to strive for.
Do you mean often you want multiple segments of code to all do the same thing on error? If there is only one segment then you well and truly have just moved things around.
If a segment has 6 function calls and you want the same error handling for each one, you can't get rid of the boilerplate with the current language.
If the original error is your own
Assume the original error is not my own then.
My deepest sympathies for the person who has to respond to the resultant error once you've collected your paycheque and have moved on to the next project. Which of the six functions produced the error? Nobody knows. That may be all well and good for a contrived internet comment example, but if you write real code like that someone's life is soon going to become a living hell.
Every other language has recognized that you can't have the same error handling for each of the function calls. Even those with special error handling semantics have special ways to ensure that the handling can differ in each case. Why do you think it would work in Go?
> Assume the original error is not my own then.
Then you've forever hitched your horse to their code. A better or more performant replacement comes along in the future and you want to use it instead? Too bad. You can't without breaking your own API – and for what reason?
But, okay, we accept that you like to live life on the edge (or come from the Javascript world and thus don't know any better), and if the people using your functions start having breakage, too bad, so sad. However, if you're just passing values through from another package, what are you really offering your callers in the first place? Why don't they just use the other package directly?
Then you've forever hitched your horse to their code. A better or more performant replacement comes along in the future and you want to use it instead? Too bad. You can't without breaking your own API – and for what reason?
No, I did not say that. What I said is that the place to prevent that is not every function call. If my code goes 4 functions deep, I need at least one of them to handle errors I didn't cause, or convert them into my own errors for the sake of a stable API. But many of the other functions can pass errors through.
If it only calls one function and isn't part of the public API it is likely that you can get away with it. There is a time and place for that, but if that time and place is most of the time like your earlier comment indicated and something that can be counted as many in this comment... I'd like to see this codebase because I am highly skeptical that it is something anyone would ever want to work on[1].
If it calls two or more functions, then you're back to the "which function was it?" problem.
[1] And, as it happens, Google actually commissioned a study on how frequently that kind of code is actually written based on open source projects and other code they had access to when evaluating an error handling proposal. They found it to be an unusual case. It being "most" or "many" is definitely limited to within your works, not something applicable in general. There just might be a reason why your ways haven't caught on, but thrill me!
// Do this call
value, err := function()
// if there is an error, return it
if err != nil { return err }
// otherwise give me the value
// rest of the code goes here
Rust's `?` operator on Result<T,E> types is flipping fantastic, puts all of the following to shame.
// can forget to check err
thing, err := getThing()
if err != nil {
    panic(err)
}

// More verbose; now you could possibly forget to assign thing
var thing Thing
if t, err := getThing(); err != nil {
    panic(err)
} else {
    thing = t
}

// What I end up doing half the time when I've got a string of many
// calls that may return err as a result of this
var whatIActuallyWant string
if first, err := getFirst(); err != nil {
    return err
} else if second, err := doWith(first); err != nil {
    return err
} else if final, err := doFinally(second); err != nil {
    return err
} else {
    whatIActuallyWant = final
}
It's actually to the point that in quite a few projects I've worked on I've added this:

func must[T any](value T, err error) T {
    if err != nil {
        panic(err)
    } else {
        return value
    }
}
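Typical call sites, for context (assuming the usual imports; only sensible where panicking on failure is acceptable, e.g. during start-up):

// Fail fast at start-up instead of threading err through every call.
cfg := must(os.ReadFile("config.json"))
port := must(strconv.Atoi(os.Getenv("PORT")))
listener := must(net.Listen("tcp", fmt.Sprintf(":%d", port)))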
type errHandler struct {
    err error
}

func (eh *errHandler) getFirst() string {
    // stuff
    if err != nil { eh.err = err }
    return result
}

func (eh *errHandler) doWith(input string) string {
    if eh.err != nil {
        return ""
    }
    // stuff
    if err != nil { eh.err = err }
    return result
}

func (eh *errHandler) doFinally(input string) string {
    if eh.err != nil {
        return ""
    }
    // stuff
    if err != nil { eh.err = err }
    return result
}

func (eh *errHandler) Err() error {
    return eh.err
}

func main() {
    eh := &errHandler{}
    first := eh.getFirst()
    second := eh.doWith(first)
    final := eh.doFinally(second)
    if err := eh.Err(); err != nil {
        panic(err)
    }
}
func foo() (final int, err error) {
    defer func() {
        if r := recover(); r != nil {
            if e, ok := r.(failure); ok {
                err = e
            } else {
                panic(r)
            }
        }
    }()
    first := getFirst()
    doWith(first)
    final = doFinally()
    return
}
encoding/json does it. It's okay if you understand the tradeoffs.

But look at what you could have written:
func foo() (int, error) {
    first, err := getFirst()
    if err != nil {
        return 0, ErrFirst
    }
    err = doWith(first)
    if err != nil {
        return 0, ErrDo
    }
    final, err := doFinally()
    if err != nil {
        return 0, ErrFinally
    }
    return final, nil
}
This one is actually quite nice to read, unlike the others, and provides a better experience for the caller too – which is arguably more important than all other attributes.

func foo() (int, error) {
    first := getFirst()?
    doWith(first)?
    return doFinally()
}
or this:

func foo() (int, error) {
    first := getFirst() % ErrFirst
    doWith(first) % ErrDo
    return doFinally() % ErrFinally
}
The first one is a significant upgrade over the exception version. It cuts out half the code and makes the early return points explicit.

I think something similar to the second one is also nice to read, and it gives the same improved experience to the caller as your suggestion.
Albeit a contrived suggestion for the sake of brevity. In the real world you are going to need to write something more like:
first, err := getFirst()
var err1 *fooError
var err2 *barError
switch {
case errors.As(err, &err1):
    return nil, FirstError1{err1.Blah()}
case errors.As(err, &err2):
    return nil, FirstError2{err2.Meh()}
case errors.Is(err, io.EOF):
    return nil, EOF{}
// ...
case err != nil:
    return nil, FirstError{err}
}
And that is where eyes start to gloss over. The trouble with errors is that they quickly explode exponentially. Programmers long to distill all possible errors into one logical operation to not have to actually think about all the cases, since that is hard and programmers are lazy, but that is not sufficient for a lot of programming problems.

The cutesy shortcuts like ? and % operators are fine for some classes of programming problems, to be sure, but there are numerous languages that are already designed for those classes of problems. Does Go even need to consider travelling into those spaces? In the original Go announcement it was made explicitly clear that it was designed for a very particular need and was never intended to be a general purpose programming language.
I'm certainly not the gatekeeper. If Go wants to move away from its roots and become the must-have language for the classes of problems where something like ? is a wonderful fit, so be it. But, from my point of view, putting energy into tackling the big problems is more interesting. There should be plenty of room for improvement in the above code without losing what it stands for. But that is going to require a lot more deep thought than I've seen put in and programmers are lazy, so...
Does Go even need to consider travelling into those spaces?
Oh come on. Changing how one common piece of boilerplate is written is not travelling into new spaces or moving away from Go's roots.
Isn't that what your tests are for? Linters aren't normally intended to stop you from creating undefined behaviour.
It is not like Rust negates the need for those tests. Remembering to handle an error is not sufficient. You also need to ensure that you handle it correctly and define a contract to ensure that the intent is documented for human consumption and remains handled correctly as changes are made. Rust is very much a language designed around testing like every other popular language.
What you do need to do is document how the function is intended to behave. If, for example, your function opens a file, you need to describe to other developers what is expected to happen when the file cannot be open.
"The compiler won't let me forget to handle the error" is not sufficient to answer that. That you need to handle the error is a reasonable assumption, but upon error... Should it return a subsequent error? Should it try to open a file on another device? Should it fall back to using a network resource? That is what you need to answer.
And tests are the way to answer it. It is quite straightforward to do so: You write a test that sees the file open failure occur and check that the expected result happened (it returned the right error, it returned the right result from the network resource, etc.). Other programmers can then read your example to understand what is expected of the function. This is as necessary in Rust as it is in Go as it is in any other language you are conceivably going to be using. Otherwise, once you are gone, how will anyone ever know what it is supposed to do? As changes occur through the ongoing development cycle, how will they ever ensure that they haven't broken away from your original intent?
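A sketch of what such a test can look like in Go – loadSettings and defaultSettings are hypothetical names standing in for the function under discussion:

package settings

import (
    "path/filepath"
    "testing"
)

// TestLoadSettingsMissingFile documents the contract for the open-failure path:
// fall back to the defaults and return no error. (loadSettings/defaultSettings
// are hypothetical names standing in for the function being described.)
func TestLoadSettingsMissingFile(t *testing.T) {
    missing := filepath.Join(t.TempDir(), "does-not-exist.json")

    got, err := loadSettings(missing)
    if err != nil {
        t.Fatalf("loadSettings(%q) returned %v, want fallback to defaults", missing, err)
    }
    if got != defaultSettings() {
        t.Errorf("loadSettings(%q) = %+v, want %+v", missing, got, defaultSettings())
    }
}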
So, once you've written the necessary tests – those that are equally necessary in Rust as in any other language – how, exactly, are you going to forget to handle the error? You can't! It's impossible.
I don't know why this silly thought persists. It is so painfully contrived. If one is a complete dummy who doesn't understand the software development process perhaps they can go out of their way to make it a problem, but if one is that much of dummy they won't be able to grasp the complexities of Rust anyway, so...
PHP also ignored errors out of the box (it even has @ to suppress error output), but error_reporting(-1); is basically how every sane framework starts, and PHP 8 set the default level to E_ALL[1] (in 5.3, accessing an undefined variable was also only a notice, as far as I remember[2]).
Python and NodeJS abort on error by default. (Python had exceptions before user-defined classes, back in the last millennium.[3])
...
Rob Pike said in 2015 "don't just check errors, handle them gracefully."[4]
I think Scala/ZIO has the most powerful and comprehensive and compact[5] error handling machinery that I have experience with, that's where it's the easiest to live up to Mr Pike's admonition.
...
All are fine. I hope/predict Go will pull a Bash or PHP and will introduce flags to turn some/most unchecked errors into aborts. (Or perhaps people will build tooling and alternative stdlib eventually.)
Result types are nice because it allows library developers to encode a shitton of really useful semantic information into them, and it's really easy to handle them however the downstream user wishes. (Rust offers .unwrap(), TypeScript ! (assume non-null), and JS itself has the ? optional chaining operator.)
[1] https://php.watch/versions/8.0/error-display-E_ALL
[2] https://stackoverflow.com/a/69291454/44166
[3] https://python-history.blogspot.com/2009/03/how-exceptions-c...
The only thing that bothered me was hearing an interview with the Go devs where one of the key devs sounded like generics would never make their way into Go; the way he seemed so adamantly against such a feature put me off
Everything about the development trajectory of Go so far indicates that What Is Right And Good at any given moment is largely determined by whatever makes building the compiler easier, not by what makes the lives of external developers easier – until the external developers get loud enough about how the language is failing to learn from the mistakes of the past that the internal team relents with an "ok, ok, you win, our bad".
And if I never see another apologist refrain of "You don't need <x>. Just use this code generator to flood your repo with thousands of lines of project-specific-for-no-good-reason boilerplate" again it will be too soon.
Go isn’t perfect, but it’s wildly more productive in my experience than any other language, and that matters a lot more to me than being able to be maximally expressive or abstract.
I wish we could use some boring language that works, like C# or Kotlin.
In any case, designing and even implementing a PL better than Go is not a particularly hard thing to do. Making it popular in this day and age, on the other hand, generally requires a large corporation backing you.
"It's because people who like Go are stupid and companies need stupid people to write stupid code" may have made you feel smug and secure during the last 15 years, but what about the next 15? Or the 15 after that? Are you still going to be complaining about the stupid 70 year old wheel made by that dumb guy Pike that you hate so much?
Argument from success disregards the simple fact that, in this day and age, no language succeeds without massive corporate backing - and, conversely, a large corporation can throw money at a language to prop it up in a situation where it would struggle to get market share otherwise. So, no, the fact that Go is as popular as it is, is not particularly interesting. If it actually overtook Java - its most direct spiritual competitor - then yeah, I'd consider that a more serious data point.
There are already many better languages out there.
Not really. Even if you're just looking for a reasonably productive mainstream language with effortless native, static compilation Go is likely the only language that fits the bill.
struct Foo { int X; }
class Bar { Foo F; }
the value of (new Bar().F.X) is zero. (That is, you can now author default constructors and field initializers for structs in C#; the main "hole" that remains is the use of 'default(MyStruct)', but at least it is explicit and you get what you ask for.)
new Derived();

class Base {
    public Base() { Foo(); }
    public virtual void Foo() {}
}

class Derived : Base {
    public string s; // non-null!
    public Derived() { s = "abc"; }
    public override void Foo() { WriteLine(s == null); }
}
This will compile without warnings even with #nullable enable, and will print True at runtime. Note that this could also be rewritten without virtual methods by doing a downcast or pattern match: void Foo(Base b) { WriteLine(((Derived)b).s == null); }
C++ solves this problem by making all virtual calls dispatch to Base until Base() completes (which involves swapping vtable pointers at runtime as needed), and then doing the same in reverse for destructors; it also makes the type of object Base for all other purposes like dynamic_cast. That is, in effect, in C++ the actual type of the object changes as it is constructed or destructed. Which is great from a theoretical point of view, but very counter-intuitive unless you understand the problem it's trying to solve.

That aside, with respect to default(T), the other problem is arrays. If T is non-nullable and you ask for a new T[1], what should the value of the element be? C# lies and allows it to be null even for non-nullable reference types, which means that (new string[1])[0].ToString() is an NRE with no compiler diagnostic even with #nullable enabled. In C++, you actually cannot do this at all if T doesn't have a default constructor, but this then means that the language needs stuff like in-place new and explicit destructor calls to actually make a generic dynamic array possible to implement.
It's not as common a problem in C#; in practice you just initialize the array with the elements you need. Most arrays in regular code will have their elements initialized or materialized from LINQ anyway.
I agree that NRTs are "leaky" (and wish they did it better), but it seems I don't run into the issues you do, and they are a massive productivity improvement; their static-analysis-based nature also helps avoid some of the ceremony caused by e.g. Option<T> in Rust.
On the snippet above – it will print True only when Foo is called inside the constructor, before the field has been assigned. It will print False otherwise. I do not see this as an issue, although it is indeed subtle.
And to be clear, I'm not saying that nullability checking is a bad idea! On the contrary, it's great. It's especially great when the type system doesn't lie to you (i.e. if something being not nullable actually means that it can never be null), but even partial enforcement is better than nothing.
However, the OP to whom I responded specifically complained about "idea of the zero values for uninitialized struct fields" in Go, and then in the same breath mentioned "language that works, like C#" - which is rather ironic given that C# does, in fact have this exact thing, and I tried to explain why it needs it.
I’m playing devil’s advocate.
https://shkspr.mobi/blog/2019/11/you-are-not-the-devils-advo...
Again, I worked with Go in a production setting for 3+ years
The amount of time you spent doing whatever you were doing is not relevant. The google search you seek is mere keystrokes away.
Since generics were introduced in Go 1.18, I’ve used them exactly zero times. I’ve had zero need for them. I still haven’t encountered generics in any of the real-world code I work with — and honestly, I’ve dreaded the day I’d see something like `func Foo[K comparable, V any, R any, F ~func(K, V) R](m map[K]V, f F) map[K]R {` That’s exactly the kind of code I try to avoid — unless I plan to replace every developer with an AI.
Anyway, I like seeing this slight reversion in favor of simplicity, I think it's the right call for where Go's targeted: being a better Java for teams of mid-tier engineers.
there's literally almost no reason to use Go in that case.
I work in C# and C++ day to day now, and in $PREV_JOB I used Go and C++. my go builds on a similar size project were quicker than the linter in my C# project is right now. Go's killer feature IMO is that it's _almost_ scripting level iteration speed.
Basically, the mental model required for coding in go is low load. That's a great feature.
as you say, you get near instant linting feedback about a lot
Sorry, it wasn't clear. I could run `go build && ./myapp` and have my application running quicker than `dotnet format` finishes. Linting in .net is slower than compiling in go.
Agree on everything else. It has its share of footguns, for sure, but so does every language.
Overall, the tooling could be faster, but because it is JIT-based and performs heavy unbound reflection, it's not very amenable to NativeAOT compilation in its current form. The startup latency imposed by the JIT is most noticeable if you are used to frequently running 'dotnet build' and 'dotnet run'. Also keep in mind that both invoke the full build system; it's closer to what Cargo does than to what Go tooling does. Are you using the .NET 9 SDK? Another feature I suggest looking at is hot reload with 'dotnet watch'. It can shorten iteration cycles for back-end work substantially.
It does not matter on CI, it matters locally where you should be using something else.
And of course if you are looking for an excuse to use Go (which is a worse language), fixing this or any other "issue" will not help - there will always be another reason.
StackOverflow survey (self-reported) for 2024 shows it at #5, leaving out HTML/CSS and SQL[0]
DevJobsScanner shows it at #4 via scraping job postings[1]
It definitely has heavy adoption; well above Rust and Go despite what we see here on HN.
> Does it have any actual advantages over Java?
The language evolves faster and is more akin to Kotlin than to Java, IMO. The DX is fantastic and there are a few gems like LINQ, Entity Framework, and Roslyn source generators. Modern C# can be very dense yet still highly legible.

C# switch expressions with pattern matching (not switch-case), for example[2], are fantastic.
[0] https://survey.stackoverflow.co/2024/technology
[1] https://www.devjobsscanner.com/blog/top-8-most-demanded-prog...
[2] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
For a quick comparison, check out https://typescript-is-like-csharp.chrlschn.dev/
Does it have any actual advantages over Java?
Probably not. Why learn C# if you already know Java? Similarly, why learn Java if you already know C#?
Java is also generally less nice to work with – Maven and Gradle are a way bigger PITA than the .NET CLI (which is similar to Cargo and the Go CLI), NuGet, and MSBuild. Base C# syntax lends itself to more streamlined expression of business logic (e.g. with pattern matching, tuples, records and their deconstruction).
In Java, there are odd issues and resulting method gymnastics caused by generics with type erasure, and many of its base containers don't unify nicely as the ones in C# do to IEnumerable<T> or Span<T>.
Writing highly concurrent + parallelized code is way more cumbersome (and generally less efficient) with the current rendition of virtual threads, completable futures or even upcoming structured concurrency API than doing so with .NET 'Task<T>'s and their composition.
ASP.NET Core is much faster and more focused than Spring Boot. EF Core is way more powerful and significantly terser to use than Hibernate, JPA or, to an extent, JOOQ.
You can also relatively easily ship fully self-contained and relatively compact (with trimming) applications, often as a single file. With additional effort, NativeAOT provides native compilation and smaller-than-Go binaries while having much wider support across ecosystem than GraalVM Native Image within JVM space (e.g. it's one command away to get a gRPC-based ASP.NET Core microservice template which compiles to fully native binary, it also does not use any special tricks - just regular code).
They are very much not 1:1 languages. Depending on the domain, there may very large differences in developer productivity, level of comfort and effort required to achieve a competitive implementation.
Java strengths lie in its comparatively larger and more diverse ecosystem in enterprise space alongside certain high-profile projects, predominantly by Apache foundation. On technical merits it does have less to offer.
The main technical exception - Java has superior GC implementation(s).
I'd say if your goal to expand your horizons, then it's more important to pick a problem that can't be nicely solved with Java. In that case, C# will offer more pleasant and moderately familiar experience over C, C++ or, to an extent, Rust (which is another great language to learn).
Yes, the JVM currently sucks at value types and low-level C++-like coding, and .NET is great there, but not everyone needs those capabilities, and when they do, most don't shy away from doing some JNI.
On the technical level, .NET doesn't have GraalVM-like tooling (GraalVM is a whole compiler framework, not a plain AOT compiler); the MSR Phoenix project was canceled; Longhorn was canceled, so nothing Android-like; there are no real-time GCs or bare-metal deployments like PTC, Aicas and microEJ, and no VMs for M2M, copiers, or telephone switches.
As for EF, I stand by my view that I'd rather use Dapper with stored procedures.
Kotlin is an option on the JVM
I definitely liked writing Kotlin code in the past, but we had a medium-sized Kotlin web API at a previous job, and a very large C++ app. The C++ app was quicker to compile than the Kotlin app on many, many occasions, and the toolchain and IDE integration situation reminded me (not in a good way) of working with Eclipse – even with IntelliJ.
Compile times might be an issue, I know it was an issue in the past.
C++ can actually be quite fast to compile, if the right decisions about how to approach the build infrastructure and code style were taken – which usually is not the case, hence its reputation for slowness.
Here's some data of 12M scraped job offerings:
https://www.devjobsscanner.com/blog/top-8-most-demanded-prog...
JavaScript: 31.42%
Python: 19.68%
Java: 18.51%
C#: 11.90%
C/C++: 8.29%
Go: 2.38%
Rust: 0.39%
I was a bit shocked about the Rust numbers. I'd expected it to be slightly above Go. Anyway. C# is strong. Java even more.
Why's that?
Go was designed explicitly to serve the particular needs of a particular area of software development that allegedly sees more than average development activity. In fact, its designers have expressed some surprise that people found it useful in other areas of programming. Rust, on the other hand, tries to be much more general purpose. Being a jack of all trades master of none, so to speak, is a great technical quality, but without particular focus it is much harder to get the numbers up.
It is quite similar to why Javascript blows all of the other languages out of the water. It would be surprising if Rust had more usage like it would be surprising if C had more usage than Javascript.
It's also popular in game dev, mostly due to Unity, but you have other good options for using C# in game dev as well. I can't think of another kind of software company that uses it a lot other than ones that integrate deeply into the Microsoft stack.
As a language, it was far, far ahead of Java for many years (the long dark tea-time of Java 7).
Do real organizations actually use C#?
Sure. For me the best thing about dotnet is that you most likely find an official solution to most "basic" things needed to develop microservices (I intentionally call these basic because you don't want to worry about lots of things at this level). Go on the other hand excels at cross-cut and platform development.
AOT is pretty good when it works.
It's easy to get things simple and wrong, and hard to get things simple and right. And sometimes complexity is there for a good reason, but if you don't work hard to understand, you'll fall into the "simple and wrong" camp often.
Brainfuck the language is simple to use and implement. It has only 8 commands. Brainfuck programs are virtually unreadable, unless you can get your head around it.
The advantage of trying to solve complexity is that if you can solve it, you've solved it once and exactly once for everyone involved.
E.g. that whole `iota` thing is hardly a good example of "simple and obvious" language design compared to enums in... just about everything else.
I actually disagree with this specific take. I do agree that iota is an unnecessary bit of cleverness (especially with that name) but I'd much rather a language have nothing than the pile of lies and garbage that are C enums. At least then it's not pretending.
The only godsend of C is that code written in the 1970's can still be compiled today -- half a century later. You can write code in your 20's in C and still be assured it will still work a half century later in your 70s. (as long as you don't use system libraries which might change.... etc.)
A lot of people complain about Rust compile times. But honestly, I'd rather work in a language that is trying to solve complexity rather than push it off on to the user.
As I wrote at the time on Lambda The Ultimate back in 2012,
There’s a very high probability that something like Cockroach would use C++ if Go had never existed, so Rob Pike was sort of right, if you squint. On the other hand, if Cockroach were started today it would probably be written in Rust.
My friend tells me the primary use case for Go is microservices of no more than one page's worth of code, deployed in Kubernetes.
I think that's correct. Anything larger is just masochism.
There will be a revolt at some point, the question will be to what? Rust? Probably not. Maybe a C++ resurgence...
Why would anybody choose C++ over Rust in 2025? The biggest (valid) criticisms against Rust are that it is difficult to learn and use, but C++ is like 1000 times harder. When people say Rust is complicated they’re comparing it to modern GCed languages, not to C++.
In fact, tell them to write a brainfuck transpiler in brainfuck to transpile Go to brainfuck, to make it easier for you to communicate in their native tongue directly.
https://stackoverflow.com/questions/16836860/how-does-the-br...
But honestly if you're a professional programmer, you should constantly be asking yourself how do I reduce complexity for others first, not myself. And that's where golang gets it wrong. Golang asks very specifically first and foremost how do they get their compiler right -- even if it comes at the potential expense of the users.
I mean golang is not brainfuck by any margin, and it is reasonable for what it tries to do. But, in my experience, if you're writing code longer than a page, golang is probably the wrong language.
Designing a language for clarity and maintainability is a laudable goal, and so is choosing to use one. Chasing complexity, or reaching for the latest trendy language that lets you "express yourself" in ten different ways to do the same thing, isn't what makes someone an S-tier engineer.
But it sure looks good on your resume!
That cuts me right to the bone.
I do like to dabble in F# still.
I think the ultimate goal of making a programming language is to cause the least friction for a programmer trying to get real work done, and in my experience Go's great from that point of view. Language bells and whistles may be exciting, but often don't pay their way in terms of real world productivity, IMHO.
But sure, it makes it easier for programmers at all levels to get started, and to get real work done.
It is this condescending attitude that I feel many golang advocates (online) share that makes me shiver.
What are you implying, that devs using other languages don't get "real work" done?
Let's wait until Go has the same history, maturity and reach of e.g. Java and then let's see how well it will hold up in comparison.
In the time you get everyone on the team to agree whether you should use Maven or Gradle, which testing framework to use, or figure out how to autoformat your code, your Go program will be done.
Let's wait another 15 years and then compare the new languages at that time against golang. Then let's see how golang is doing in comparison.
* everyone agrees to use Cargo
* everyone agrees to use `cargo test` (what even is a “testing framework”)?
* everyone agrees to use `cargo fmt`
What’s the advantage of go here?
By the way, the formatting situation is actually worse in Go because there are both gofmt and gofumpt used in the wild; gofmt itself has different behavior depending on which flags are passed, and there are additional linters people use to e.g. ban long lines that for some reason the formatters don’t cover.
I don't know why we're talking about Rust in the first place, but an obvious advantage would be compilation time and iteration speed in general.
I used Go for most of my own projects and as I got deeper into it I began to realize its warts, but the worst was that you can't get performance by "sharing memory by communicating"--channels are slow. Reading the non-idiomatic stdlib implementation shows the difference between who it's made by and who it's for (which isn't the authors).
There was a sort of misunderstood dream in the early days of Go that it would make fanning out and using your 24 cores easy, as empowered by channels: this is still not easy in Go, although it may be easier and less error-prone than C.
In the intervening decade, Python has made, say, a parallel for loop immensely easier.
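For what it's worth, the kind of micro-benchmark behind the "channels are slow" claim above looks something like this (a rough sketch with made-up benchmark names; absolute numbers vary by machine, run with go test -bench .):
package bench

import (
    "sync"
    "testing"
)

// Round-trip a value through a buffered channel.
func BenchmarkChannel(b *testing.B) {
    ch := make(chan int, 1)
    for i := 0; i < b.N; i++ {
        ch <- i
        <-ch
    }
}

// Guard a plain counter with a mutex instead.
func BenchmarkMutex(b *testing.B) {
    var mu sync.Mutex
    n := 0
    for i := 0; i < b.N; i++ {
        mu.Lock()
        n++
        mu.Unlock()
    }
    _ = n
}
On typical hardware the channel round trip costs noticeably more per operation than the mutex-guarded increment, which is the gap being pointed at.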
What's the difference? The opposite, so to speak, of system is script, and I don't think system management falls into the scripting category. A system management system is a system too. But that isn't what they were talking about anyway. They were talking in the context of building servers (think like a HTTP server). That was clearly spelt out.
I understand that the Rust crowd has reimagined system to mean something akin to kernel, much like they have reimagined enums to be akin to sum types. Taking established words and coming up with entirely new meanings for them is what they like to do. But that reimagining has no applicability outside of their little community. This is not how the industry in general considers it.
Abstractions make large systems easier to understand, not harder. Each line of Go is easy to understand, but whole programs are not.
Well, in the real world a lot of people have to work on teams where many of their co-workers never grow beyond beginner level. So anything that can be done to reduce the burden of having to deal with them is welcome. Not everyone gets to sit in the Silicon Valley ivory tower beside the greats.
> your career lasts 40 years
40 years is a peculiar number. If it is your passion, you should easily be able to see 60-70 years (assuming you live to an average age), and if you are only in it for the paycheque the comparatively high salary offers you retirement long before 40 years comes around.
IMO it is better said that Go is designed as being a good language for Senior and Junior developers, where mid-tiers will probably hate it.
It's not just types, either. Look at the signature for the built-in sort, which is amazingly cumbersome to use. A generic wrapper around it hides all the ugly.
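Presumably that's sort.Sort and its sort.Interface; a quick sketch of the contrast with a generic wrapper (slices.SortFunc, Go 1.21+):
package main

import (
    "fmt"
    "slices"
    "sort"
)

// Pre-generics: sort.Sort wants a type implementing Len/Less/Swap.
type byLen []string

func (s byLen) Len() int           { return len(s) }
func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }
func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

func main() {
    a := []string{"gopher", "go", "gale"}
    sort.Sort(byLen(a))
    fmt.Println(a)

    // Generic wrapper: the boilerplate disappears.
    b := []string{"gopher", "go", "gale"}
    slices.SortFunc(b, func(x, y string) int { return len(x) - len(y) })
    fmt.Println(b)
}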
The language reserves syntax for itself; you can't overload a[b] to mean anything other than either array/slice indexing or indexing into a map. You have to write methods like ".Get" or something, there's no __getattr__ or anything.
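A toy illustration of that point (hypothetical type, not from any real library):
package main

import "fmt"

// No operator overloading: a custom container exposes Get/Set methods
// rather than letting you index it with c["hits"].
type Counter struct{ m map[string]int }

func (c *Counter) Set(k string, v int) { c.m[k] = v }
func (c *Counter) Get(k string) int    { return c.m[k] }

func main() {
    c := &Counter{m: map[string]int{}}
    c.Set("hits", 1)
    fmt.Println(c.Get("hits"))
}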
It’s more what you must implement along with your new data structure to make it usable in the language that was the cry of the generics advocates.
In fact I created a struct that you can range over just a couple of days ago.
I can’t say I love the way Go implemented it though. But it does work.
I can’t say I love the way Go implemented it though.
Why not?
1. It's not obvious how it should be implemented “correctly”
2. It isn't clear at first glance how the resulting code works (particularly the yield callback, which has “magical” properties that take a little effort to grok)
3. Requires calling a method
Example code: https://github.com/lmorg/Ttyphoon/blob/321738f289e4791e9674d...
I did write this at something like 11pm so it’s entirely possible I’ve done this completely wrong though.
Also please ignore the weird use of mutexes here too.
I’m also aware that sync.Map could/should have been used here. This struct was more of an experiment than anything that will ultimately find its way into production code.
But I suspect they meant a struct which contains/encapsulates data which can be ranged over.
EDIT: Had to check, yeah I had to implement the iter.Seq type to do it
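For readers who haven't seen the pattern, here is a minimal sketch of a rangeable struct via iter.Seq (Go 1.23+); this is not the linked code, just an illustration:
package main

import (
    "fmt"
    "iter"
)

// Ring is a made-up container whose elements can be ranged over.
type Ring struct {
    items []string
}

// All returns an iterator; the range body is passed in as yield.
func (r *Ring) All() iter.Seq[string] {
    return func(yield func(string) bool) {
        for _, it := range r.items {
            if !yield(it) { // stop early if the loop breaks
                return
            }
        }
    }
}

func main() {
    r := &Ring{items: []string{"a", "b", "c"}}
    for v := range r.All() { // note: requires calling a method
        fmt.Println(v)
    }
}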
Go generics are a bit of a blip in this to be honest, A) because it is a big change, and B) because it can be difficult to use (generic functions defined on types cannot use generic parameters that aren't defined on that type, for example).
But in a way, I also think the constraints help avoid the overuse of generics. I have seen Java and Typescript projects where developers had way too much fun playing around with the type system, and the resulting code is actually quite unclear.
In conclusion, I pray the Go team strive to and continue to be conservative with the language.
generic functions defined on types cannot use generic parameters that aren't defined on that type, for example
This is bonkers to me, why???
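Concretely, the restriction is that a method on a generic type cannot introduce type parameters of its own; a sketch (hypothetical Set type):
package main

import "fmt"

type Set[T comparable] struct{ m map[T]struct{} }

// Rejected by the compiler: methods must not declare their own type parameters.
//
//	func (s Set[T]) Map[U any](f func(T) U) []U { ... }
//
// The usual workaround is a top-level generic function instead:
func Map[T comparable, U any](s Set[T], f func(T) U) []U {
    out := make([]U, 0, len(s.m))
    for k := range s.m {
        out = append(out, f(k))
    }
    return out
}

func main() {
    s := Set[int]{m: map[int]struct{}{1: {}, 2: {}}}
    fmt.Println(Map(s, func(x int) string { return fmt.Sprint(x * 10) }))
}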
- need overloading to support instantiation
- need type functions/associated types. For example what type is inside this container?
- need operator overloading so you can substitute generic types into assignment expressions. For example notions of equality.
- need duck typing or traits to communicate capabilities
The two simplifications which have historically worked are to use dynamic types, or to use code generators.
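(For context, the route Go 1.18 eventually took for the operator/equality items above is type sets in interface constraints rather than user-defined operators; a minimal sketch:)
package main

import "fmt"

// A hand-rolled subset of constraints.Ordered, for illustration.
type Ordered interface {
    ~int | ~int64 | ~float64 | ~string
}

// '>' is permitted because every type in the constraint's type set supports it.
func Max[T Ordered](a, b T) T {
    if a > b {
        return a
    }
    return b
}

func main() {
    fmt.Println(Max(3, 5), Max("ant", "bee"))
}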
In Haskell this is something like map :: (a->b) -> C a -> C b
Can you share what that would look like in Go with type classes?
You can also define an fmap interface that doesn't actually map the type, but can apply functions that do not change the type: https://go.dev/play/p/836wr3nuw4U
But currently I don't think it's possible to combine this, that is, to have an fmap as it is in Haskell. You would need the capability to add generic parameters to an interface method. It could look something like this:
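The comment doesn't show the example; a plausible sketch is below. The commented-out interface is the hypothetical syntax (not valid Go today), while the shape-preserving version is what's expressible now:
package main

import "fmt"

// Expressible today: a map where the element type cannot change (A -> A).
type Mappable[A any] interface {
    MapSame(f func(A) A) Mappable[A]
}

// What a real fmap needs is a method-level type parameter, roughly
// (NOT valid Go today):
//
//	type Functor[A any] interface {
//		Fmap[B any](f func(A) B) Functor[B]
//	}

type List[A any] struct{ xs []A }

func (l List[A]) MapSame(f func(A) A) Mappable[A] {
    out := make([]A, len(l.xs))
    for i, x := range l.xs {
        out[i] = f(x)
    }
    return List[A]{xs: out}
}

func main() {
    l := List[int]{xs: []int{1, 2, 3}}
    fmt.Println(l.MapSame(func(x int) int { return x * 2 }))
}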
C# can do this because it has a JIT and can compile such methods dynamically (and rejig vtable if needed). In an AOT-compiled language like Go, it would either need to treat this pessimistically and instantiate every possible implementation of every method that could be virtually dispatched anywhere in the program, or else it needs to do what Swift does and generate code that is generic at runtime - i.e. pass some kind of type descriptor with information like size of type and everything else that's needed to handle it, and then the code would look at that type descriptor and do the right thing; this works, but it's non-trivial, and generated code is very slow.
Indeed this involves expensive lookups for generic virtual methods. It is also not very friendly to Native AOT's binary size once you start having many generic instantiations of the type with such method(s). In the case of Go, I'd assume it would require making both its runtime and code reachability analysis more complex to make it work, and they decided to simply shift the responsibility and ceremony onto the programmer, which is the standard in Go design.
They are likely the two most difficult parts of any design for parametric polymorphism. In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier.
https://go.googlesource.com/proposal/+/master/design/go2draf...
It seems there are tons of things the designers banned because they are bad in C, and didn’t supply any replacement because the C++ version is overly complicated or hard to use, and they weren’t aware of anything better.
Metaprogramming is a perfect example. C macros are bad, C++ metaprogramming facilities are baroque, so let’s not have any metaprogramming features whatsoever and tell people to rely on code generation or reflection instead.
(There is a very curious but prevalent phenomenon where highly intelligent people let a governing ideology do the thinking for them and refuse to countenance ideologically discordant possibilities. (Ancient, too, as we see from the ideological reaction of the Pythagoreans to the existence of √2.))
There is a very curious but prevalent phenomenon where highly intelligent people let a governing ideology do the thinking for them and refuse to countenance ideologically discordant possibilities.
Highly intelligent people experience having correct intuition -- a product of reasoning occurring at below a conscious level -- a lot, to the point where they may learn to intrinsically trust their intuition to be correct and be highly confident it will prove correct. However, while intuition can often be a manifestation of that kind of unconscious reasoning, it can also be a product of aesthetic/ideological (really, the same thing) preference, among other biases.
Work started in February 1999, while .NET 1.0 was released in early 2002.
https://learn.microsoft.com/en-us/archive/blogs/dsyme/netc-g...
Like you’d never get the amazing type safety of libraries like prisma and kysely (an ORM and a query builder) if you couldn’t make such ridiculously expressive generics. But, only a teeny tiny subset of TS devs can work on generics that complex. I definitely can’t!
It’s a tradeoff like anything else.
But things I encounter in the regular non-library world are usually recursive types that have specific constraints. A couple years ago I had my first foray into typescript generics, and was so stumped that I actually gave up. I was trying to map the type of one nested object to another. This[2] is the stackoverflow post from the legendary jcalz that saved me! Check jcalz's link to the TS playground
[1]https://github.com/kysely-org/kysely/blob/master/src/query-b... [2]https://stackoverflow.com/questions/72461962/is-it-possible-...
eg https://github.com/jakearchibald/idb
Honestly most complicated types boil down to a handful of concepts (mapped types, conditionals, recursion) with a few tricks for working around sharp corners (deferring evaluation for performance, controlling whether a union distributes).
I've even seen someone solve 8-queens in types using a formal grammar but no clue how that worked. Speaking of, if anyone else has other examples of defining a grammar or ast in typescript, I'd love to see it.
Honestly most complicated types boil down to a handful of concepts (mapped types, conditionals, recursion) with a few tricks for working around sharp corners (deferring evaluation for performance, controlling whether a union distributes).
I've even seen someone solve 8-queens in types using a formal grammar but no clue how that worked. Speaking of, if anyone else has other examples of defining a grammar or ast in typescript, I'd love to see it.
That's ninety lines of type constraints.
Couldn't find a proper example on the spot in an open source project.
[1] https://github.com/sindresorhus/type-fest/blob/main/source/g...
It’s a tradeoff like anything else.
Could not disagree harder with the sentiment. Selling your kidneys for $20 is a tradeoff too. You can't just throw "it's a tradeoff" around as a go-to way to end discussion and critical thought.
Didn't mean it as a discussion ender, I think it's the type of tradeoff that anybody really needs to think about when they start down the rabbit hole of crafting the perfect generic type. Like for day to day stuff, I agree with the grug brained developer[1].
But if I'm writing something that needs to be used in a super specific way, I think it's worth the extra effort/complexity to ensure that other devs can't use it incorrectly. I'm usually responsible for making smaller components that other devs can build from, and it's amazing what you can communicate through types.
But in a way, I also think the constraints help avoid the overuse of generics. I have seen Java and Typescript projects where developers had way too much fun playing around with the type system, and the resulting code is actually quite unclear.
Have you seen some C++ code where they make heavy use of templating?
Maybe we get sum types by 1.30 :)
This is all solvable - C++ and Rust both do it, for example. But it introduces a lot of complexity to the language.
It means that now you need to have rules in the language for definitely-initialized fields (since you can't just default-init them to null as there is no null).
You don't need rules, you just don't allow for uninitialized fields.
Which means that you need full-fledged constructors for everything, and syntax for invoking them in all cases where they may be needed
Rust doesn't have constructors. It's not needed.
- e.g. think about what should happen if you try to create an array of structs with ref fields in them.
struct Ref<'a> {
x: &'a i32,
}
fn main() {
let a = 5;
let array = [
Ref { x: &a },
Ref { x: &a },
];
}
No big deal.
This all being said, that doesn't mean that I think zero values are a mistake in Go, exactly. They make sense with the design of the language. But I do think both sides of this have tradeoffs, and I personally prefer the tradeoffs of no nulls and zero values.
you just don't allow for uninitialized fields
It sounds simple, but it's the kind of thing that has very far-reaching effects across the language. If you start with this premise and then design everything else around it, things feel natural. If you take a language already designed around the notion of null values and bolt things on, either idiomatic code changes massively, or you leave enough loopholes in the type system to still let people write the stuff they have always done and that always worked (even though it's technically unsound).
And yes, of course it's a tradeoff. For my own part, I also think that null values and the simplicity that comes with them in some things aren't worth the trouble that they bring. I'm also not a fan of Go, to put it mildly. But given where the language is already, and given their explicit stated design goals (which I disagree with), I can totally see why on this particular issue they went with nulls just to keep things simple that were traditionally simple.
FWIW my personal opinion is that the resulting complexity is necessary, and we as engineers just have to bite the bullet and deal with that mental overhead (and if it means that coding becomes too complicated for some, so be it). But the market ultimately decides; and it decided in favor of lots of code that's cheap and fast to write even at the expense of bugs, so we have tools catering specifically to that.
Rust can do this with less complexity by embracing choices that, while conceptually simple, are very unorthodox and unintuitive, such as everything being a move rather than a copy with few exceptions (whereas in pretty much every other PL it has always been the other way around, if move semantics is supported at all).
I wrote that, after 15 years of baggage, there's nothing purposely ridiculous about stating that Go sum types must have a zero value (either nil or something else).
Either Go sum types have a zero value; or they can't be used everywhere a type can be used in Go; or you're radically and backwards incompatibly changing the language.
But adding sumtypes in 2025 and using a default nil or zero value/type to it, yeah that I do call ridiculous and I stand by it.
It might be a good decision even (I don't know), but it's still ridiculous by my understanding of that word.
Additionally, for it to fit the 2025 implementation of the runtime, its representation in memory must have a fixed size, with fixed locations for any pointers, and the memory representation of the zero value must be zero.
The runtime can change, but for the runtime to change to accommodate new concepts, the change can't obliterate the expectations of 15 years of existing code.
Given those restrictions, you can either not have sum types (which is the current state of affairs, and will be for the foreseeable future), or you can pick a zero value for your sum types.
If you find having a zero value for sum types ridiculous, you're simply rejecting sum types in Go, which is fine. There is, after all, a reason the proposal wasn't accepted.
Otherwise, we're all happy to accept suggestions that meet the criteria of not breaking compatibility or wreaking havoc with existing code.
Requiring a value to be provided at usage/definition site would go a long way. Also, of course, having to provide a default value by hand when initializing an array. Go also has easy-to-use callbacks, so having also APIs that take a callback returning a sentinel/default value should be easy enough. The problem isn't having defaults, but having pervasive defaults that are set in stone by the language itself, even in places where it doesn't make sense.
For low-level code that fiddles with uninitialized memory buffers and allocators, things might get complicated if zero values are not allowed. (Rust's struggles around `MaybeUninit` are a poster-child example of the ensuing complexity.) However, I think a very Go-style solution would have been for the type system to allow zero-initialized types, but for everything around the provided APIs and language semantics to make creating them hard. It's a similar solution to "our strings are UTF-8, but we don't check, and the world doesn't explode if they are not". I generally dislike this kind of "worse is better" design, but it certainly fits Go very well.
The main problem with Go, arguably, would be what to do with null pointers. They could have gone with separate reference types for "definitely not null" and "may be null", with a specialized if-like check-and-coerce-operation, without having full generics and sum types like Rust does.
Because Go has a GC and thus is capable of having arbitrary object graphs, there shouldn't be a problem of initializing multiple values with a reference to the same object, so user providing a default value and the API cloning it to fill the array should work. And in case of a "may be null" ref, having it to be null is not a problem.
[1] https://www.infoq.com/news/2019/07/go-try-proposal-rejected/
every interface being zeroable seems to be embedded quite deeply in the language.
Not just every interface, every single type. And it’s at the very core of the language. For any type T you can name, you can write
var v T
And it’ll give you a v you can interact with, no opt out. You can barely opt out of implicit shallow copies via hacks based around lock method names.
Go has `const` for values that can be evaluated at compile time, but no way to mark something as immutable when it can only be evaluated at run time.
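To make the `var v T` point above concrete, a quick sketch (made-up struct):
package main

import "fmt"

type user struct {
    name    string
    friends []string
}

func main() {
    var s string         // ""
    var p *int           // nil
    var m map[string]int // nil: reads are fine, writes panic
    var u user           // every field zeroed
    fmt.Println(s == "", p == nil, m == nil, u.name == "", u.friends == nil)
}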
Do you trust yourself to write perfect code 100% of the time? No? Then padlock it is.
Const support in languages never makes all modifications to data accessed through the variable locked out, just the top level, which makes it much more difficult to ensure that the assumptions about immutability hold without constantly doing deep copies or having to double and triple check that your Const definitions are correct.
Const often leads to a false sense of security.
If I have a type Foo with a field that is a pointer to a mutable value, instantiating a Const Foo just means I’m always pointing at the same mutable value, not that I have an unchangeable Foo.
Yes, and that's fine? For instance, how would you encode a graph operation where you want the graph structure to be immutable and the content of the nodes to be variable?
not the object
//Constructor goes here
}
Rect r1 = new Rect(1,2);
Is this not sufficient to create an immutable rectangle that I can pass around safely in multi-threaded code?
Unless the object is immutable, like String, Integer, Long, ImmutableCollections, etc. Or your own immutable objects.
Exactly. You can have immutable primitives. You can have immutable classes. And you can combine them to form thread-safe immutable classes.
It's much better to have immutable bindings/references so that nothing that mutates the object can be done through them. Rust does it very well, for example. Even C++ has a good version of this.
But I was glad it existed because it enabled a whole set of valuable business automation at the time.
package main
import "fmt"
// Define types for our "sum type"
type Success struct {
Value string
}
type Error struct {
Message string
}
// Interface for our sum type
type Result interface {
isResult()
}
// Implement the interface
func (s Success) isResult() {}
func (e Error) isResult() {}
// Pattern matching using type switch
func handleResult(r Result) string {
switch v := r.(type) {
case Success:
return fmt.Sprintf("Success: %s", v.Value)
case Error:
return fmt.Sprintf("Error: %s", v.Message)
default:
// A default case isn't required by the type switch, but it catches new, unhandled types at run time
panic("Unhandled result type")
}
}
func main() {
result := Success{Value: "Operation completed"}
fmt.Println(handleResult(result))
}
Here is a practical example of it:
package main
import (
"fmt"
"time"
)
// Common interface for our "sum type"
type Notification interface {
Send() string
isNotification() // marker method
}
// Email notification
type EmailNotification struct {
To string
Subject string
Body string
}
func (n EmailNotification) Send() string {
return fmt.Sprintf("Email sent to %s with subject '%s'", n.To, n.Subject)
}
func (EmailNotification) isNotification() {}
// SMS notification
type SMSNotification struct {
PhoneNumber string
Message string
}
func (n SMSNotification) Send() string {
return fmt.Sprintf("SMS sent to %s", n.PhoneNumber)
}
func (SMSNotification) isNotification() {}
// Push notification
type PushNotification struct {
DeviceToken string
Title string
Message string
ExpiresAt time.Time
}
func (n PushNotification) Send() string {
return fmt.Sprintf("Push notification sent to device %s", n.DeviceToken)
}
func (PushNotification) isNotification() {}
// Function that handles different notification types
func ProcessNotification(notification Notification) {
// Type switch for pattern matching
switch n := notification.(type) {
case EmailNotification:
fmt.Printf("Processing Email: %s\n", n.Send())
fmt.Printf("Email details - To: %s, Subject: %s\n", n.To, n.Subject)
case SMSNotification:
fmt.Printf("Processing SMS: %s\n", n.Send())
fmt.Printf("SMS length: %d characters\n", len(n.Message))
case PushNotification:
fmt.Printf("Processing Push: %s\n", n.Send())
timeToExpiry := time.Until(n.ExpiresAt)
fmt.Printf("Push expires in: %v\n", timeToExpiry)
default:
// This catches any future notification types that we haven't handled
fmt.Println("Unknown notification type")
}
}
// Function to record notifications in different ways based on type
func LogNotification(notification Notification) string {
timestamp := time.Now().Format(time.RFC3339)
switch n := notification.(type) {
case EmailNotification:
return fmt.Sprintf("[%s] EMAIL: To=%s Subject=%s",
timestamp, n.To, n.Subject)
case SMSNotification:
return fmt.Sprintf("[%s] SMS: To=%s",
timestamp, n.PhoneNumber)
case PushNotification:
return fmt.Sprintf("[%s] PUSH: Device=%s Title=%s ExpiresAt=%s",
timestamp, n.DeviceToken, n.Title, n.ExpiresAt.Format(time.RFC3339))
default:
return fmt.Sprintf("[%s] UNKNOWN notification type", timestamp)
}
}
func main() {
// Create different notification types
email := EmailNotification{
To: "user@example.com",
Subject: "Important Update",
Body: "Hello, this is an important update about your account.",
}
sms := SMSNotification{
PhoneNumber: "+1234567890",
Message: "Your verification code is 123456",
}
push := PushNotification{
DeviceToken: "device-token-abc123",
Title: "New Message",
Message: "You have a new message from a friend",
ExpiresAt: time.Now().Add(24 * time.Hour),
}
// Process notifications
fmt.Println("=== Processing Notifications ===")
ProcessNotification(email)
fmt.Println()
ProcessNotification(sms)
fmt.Println()
ProcessNotification(push)
// Log notifications
fmt.Println("\n=== Logging Notifications ===")
fmt.Println(LogNotification(email))
fmt.Println(LogNotification(sms))
fmt.Println(LogNotification(push))
// We can also store different notification types in a slice
notifications := []Notification{email, sms, push}
fmt.Println("\n=== Processing Notification Queue ===")
for i, notification := range notifications {
fmt.Printf("Item %d: %s\n", i+1, LogNotification(notification))
}
}
The marker method pattern isNotification() prevents other types that happen to have a Send() method from being considered notifications.
Ideally, sum types would be concrete/non-nilable somehow.
type NotificationWrapper struct {
// This field holds the actual notification data
Value interface{}
}
And use it like:
func NewPushNotification(token, title, message string, expires time.Time) NotificationWrapper {
return NotificationWrapper{
Value: PushNotification{
DeviceToken: token,
Title: title,
Message: message,
ExpiresAt: expires,
},
}
}
And destructure it with something like:
func ProcessNotification(notification NotificationWrapper) string {
switch n := notification.Value.(type) {
Roll it all up in a nice syntax with a preprocessor, haha:
type NotificationWrapper struct {
// This field holds the actual notification data
value interface{}
}
Within `ProcessNotification` you'd also want to always assert that the struct is initialized; you have to handle that error case via `err` or `panic`.
With a true sum type, you could be assured that the value is always concrete, removing the need for that error handling, which otherwise complicates the business logic of your application. Errors that should be safeguarded against by the compiler become run-time checks, eating up CPU cycles, or panics, potentially leading to crashes which can only be caught by tests (not ruled out by the act of compiling).
Most languages settle for something close and users defend their choice/Stockholm syndrome.
Also for refactoring they seem to do ok. Things like change this to a function, extract this type etc.
They excel in snippets like “get unique items in this array” or “sort this by property x” kind of stuff where you could easily write or find an answer.
Oh I also like to use them for code review. Not that I’d blindly trust one but you can have another eye to look at your pr (i use claude code for this and love it) and see if you introduced any side effects or missed something.
For anything more complex, like having one write some feature from scratch… meh. I haven’t had much luck. Also they seem to fuck up royally in even a mildly complex project if you do not isolate your request like I mentioned above.
Do you mean that AI can help write perfectly memory safe code and so new languages shouldn’t have a garbage collector?
I wish more languages supported that, we did some crazy things with template metaprogramming.
If the type of the argument to `close` is a type parameter all types in its type set must be channels with the same element type. It is an error if any of those channels is a receive-only channel.
That doesn't seem to be true - intuitively, element type doesn't seem relevant at all for `close` since it doesn't affect any elements, and it compiles just fine when using a type set that has two different element types: https://go.dev/play/p/IQjTfea9XXy?v=gotip
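For the record, the shape of the playground example is roughly this (it reportedly compiles under gotip, which is the point of the nitpick; released versions that still demand a core type may reject it):
package main

// A type parameter whose type set mixes channels of different element types.
func closeEither[C chan int | chan string](c C) {
    close(c)
}

func main() {
    closeEither(make(chan int))
    closeEither(make(chan string))
}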
</nitpick>
Seems like a solid documentation improvement! Hopefully this also helps accelerate some of the flexibility-extensions like shared fields (or a personal hope: "any struct type" plz! it's super useful for strongly encouraging safe habits in high level APIs, currently there's no way to say "should not be a primitive type").