Unexpected productivity boost of Rust
I know that Rust provides some additional compile-time checks because of its stricter type system, but it doesn't come for free - it's harder to learn and arguably to read
I like to call it getting "union-pilled", and once you become familiar it's really hard to accept statically-typed languages that lack them.
(* Expressions. OCaml requires type names to be lowercase. *)
type exp =
    UnMinus of exp
  | Plus of exp * exp
  | Minus of exp * exp
  | Times of exp * exp
  | Divides of exp * exp
  | Power of exp * exp
  | Real of float
  | Var of string
  | FunCall of string * exp
  | Fix of string * exp
;;
let rec tokenizer s =
  let (ch, chs) = split s in
  match ch with
    ' ' -> tokenizer chs
  | '(' -> LParTk :: (tokenizer chs)
  | ')' -> RParTk :: (tokenizer chs)
  | '+' -> PlusTk :: (tokenizer chs)
  | '-' -> MinusTk :: (tokenizer chs)
  | '*' -> TimesTk :: (tokenizer chs)
  | '^' -> PowerTk :: (tokenizer chs)
  | '/' -> DividesTk :: (tokenizer chs)
  | '=' -> AssignTk :: (tokenizer chs)
  | ch when (ch >= 'A' && ch <= 'Z') ||
            (ch >= 'a' && ch <= 'z') ->
      let (id_str, chs) = get_id_str s
      in (Keyword_or_Id id_str) :: (tokenizer chs)
  | ch when (ch >= '0' && ch <= '9') ->
      let (fl_str, chs) = get_float_str s
      in (RealTk (float_of_string fl_str)) :: (tokenizer chs)
  | '$' -> if chs = "" then [] else raise (SyntaxError "")
  | _ -> raise (SyntaxError (SyntErr ()))
;;
Hint, this isn't Rust.

important but slightly subtle language features from the late '70s
Programming-language researchers didn't start investigating linear (or affine) types till 1989. Without the constraint that vectors, boxes, strings, etc, are linear, Rust cannot deliver its memory-safety guarantees (unless Rust were radically changed to rely on a garbage collecting runtime).
it's a damning indictment of programming culture that people did not adopt pre-Rust ML-family languages
In pre-Rust ML-family languages, it is harder to reason about CPU usage, memory usage and memory locality than it is in languages like C and Rust. One reason for that is the need in pre-Rust ML-family langs for a garbage collector.
In summary, there are good reasons ML, Haskell, etc, never got as popular as Rust.
Programming-language researchers didn't start investigating linear (or affine) types till 1989.
Sure, but as ModernMech said, the vast majority of Rust's benefits come from having sum types and pattern matching.
In pre-Rust ML-family languages, it is harder to reason about CPU usage, memory usage and memory locality than it is in languages like C and Rust.
Marginally harder for the first two and significantly harder for the last, sure. None of which is enough to matter in the overwhelming majority of cases where Rust is seeing use.
Sure, but as ModernMech said, the vast majority of Rust's benefits come from having sum types and pattern matching.
Doubt. There were lots of languages giving you just that, and they never had this amount of hype. See Scala, OCaml, Haskell, etc.
Rust has one unique ability, and many shared by other languages. It's quite clearly popular for the former (though languages are packages, so of course it's a well put together language all around).
And while this was necessary to Rust's success, I don't think it was sufficient, insofar as it also needed a good deal of corporate backing, a great and welcoming community, and luck to be at the right place at the right time.
Haskell never tried to be more than an academic language targeting researchers. OCaml never had a big community or corporate backing. Scala never really had a niche; the most salient reason to use it is if you're already in the Java ecosystem and you want to write functional code. The value propositions for each are very different, so these languages didn't receive the same reaction as Rust despite offering similar features.
https://scastie.scala-lang.org/fnquHxAcThGn7Z8zistthw
This wouldn't compile in Rust. Scala is an okay language; its main benefit, as far as I can tell, is that it's a way to write JVM code without having to write Java.
1) If something is technically possible, programmers will not only do it but abuse it.
2) You can't enforce good programming practice at scale using norms.
Linters, and (as the sibling points out) the addition of a recent compiler flag (which is kind of an admission that it is an issue), are the opposite of the approach Rust takes, which is to design the language to not allow these things at all.
you didn't check an FFI call properly, but that happens in Rust too)
Which is why FFI is unsafe in Rust, so nulls are opt-in rather than opt-out. Having sensible security defaults is also a key lesson of good software engineering practice.
1) If something is technically possible, programmers will not only do it but abuse it.
2) You can't enforce good programming practice at scale using norms.
Not quite. Programmers will take the path of least resistance, but they won't go out of their way to find a worse way to do things. `unsafe` and `mem::transmute` are part of Rust, but they don't destroy Rust's safety merits, because programmers are sufficiently nudged away from them. The same is true with unsafePerformIO in Haskell or null in Scala or OO features in OCaml. Yes it exists, but it's not actually a practical issue.
the addition of a recent compiler flag (which is kind of an admission that it is an issue)
Not in the way you think; the compiler flag is an admission that null is currently unused in Scala. The flag makes it possible to use Kotlin-style null in idiomatic Scala by giving it a type. (And frankly I think it's a mistake)
is the opposite approach Rust takes, which is to design the language to not allow these things at all.
Every language has warts, Rust included. Yes, it would be better to not have null in Scala. But it's absolutely not the kind of practical issue that affected adoption (except perhaps via FUD, particularly from Kotlin advocates). Null-related errors don't happen in real Scala codebases (just as mem::transmute-related errors don't happen in real Rust codebases). Try to find a case of a non-FFI null actually causing an issue.
So if we are speaking of optimizing compilers, there is MLton, which optimizes aggressively while ensuring that the application doesn't blow up in strange ways.
The problem is not people getting to learn these features from Rust (glad that they do); the issue is that they think Rust invented them.
the issue is that they think Rust invented them
Sorry, my post wasn't to imply Rust invented those things. My point was Rust's success as a language is due to those features.
Of course there's more to it, but what Rust really does right is blend functional and imperative styles. The "match" statement is a great way to bring functional concepts to imperative programmers, because it "feels" like a familiar switch statement, but with super powers. So it's a good "gateway drug" if you will, because the benefit is quickly realized ("Oh, it caught that edge case for me before it became a problem at runtime, that would have been a headache...").
From there, you can learn how to use match as an expression, and then you start to wonder why "if" isn't an expression in every language. After that you're hooked.
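To make that concrete, here is a minimal sketch; the `Shape` enum and `area` function are made up for illustration, not from any particular codebase:

```rust
// A hypothetical Shape enum, just to illustrate match as an expression.
enum Shape {
    Circle(f64),
    Rect(f64, f64),
}

// match is an expression: each arm yields a value, and forgetting a
// variant is a compile-time error, not a runtime surprise.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

fn main() {
    assert_eq!(area(&Shape::Rect(2.0, 3.0)), 6.0);
}
```

If a new `Triangle` variant were added to `Shape` later, every `match` over it would fail to compile until the new case is handled, which is exactly the edge-case catching described above.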
Yes it's a damning indictment of programming culture that people did not adopt pre-Rust ML-family languages, but it could be worse, they could be not adopting Rust either.
I'll say for a long time I've been quite pleased with the general direction of the industry in terms of language design and industry trends around things like memory safety. For a good many years we've seen functional features being integrated into popular imperative languages, probably since map/reduce became a thing thanks to Google. So I'll give us all credit for coming around eventually.
I'm more dismayed by the recent AI trend of asking an AI to write Python code and then just going with whatever it outputs. I can't say that seems like a step forward.
Sure, rewrites are most often better simply by being rewrites, but the kind of parallel processing they do may not be feasible in C.
And fwiw I've used unions in TypeScript extensively and I'm not convinced that they're a good idea. They give you a certain flexibility in writing code, yes; whether that flexibility leads to good design choices, idk.
You could create your own Result<T, Error> type in TS but people don't really do that outside of ecosystems like Effect because there isn't usually a reason to.
C is statically typed, but its type system tracks much less.
But whereas interfaces typically require you to define up front what your class implements, Rust gives you a late-bound-ish (still compile time, but not defined in the original type) / inversion-of-control way to take whatever you've got and define new things for it. In most languages what types a thing has is defined by the library, but Rust not just allows but is built entirely around taking very simple abstract things and constructing bigger and bigger toolkits of stuff around them. Very non-zero-sum in ways that languages rarely are.
There's a ton of similarity to extension methods, where more can get added to the type. But traits/impls are built much more deeply into Rust; they are how everything works. Extension methods are also, afaik, just methods, whereas with Rust you are really adding new traits that an existing, defined-elsewhere thing can express.
I find it super shocking (and not because duh) that Rust's borrow checking gets all the focus, because the type system is such a refreshing, open-ended, late-defined reversal of type-system dogma, of defining everything ahead of time. It seems like such a superpower of Rust that you can keep adding typiness to a thing, keep expanding what a thing can do. The inversion here is, imo, one of the real, largely unseen sources of glory for why Rust keeps succeeding: you don't need to fully consider the entire type system of your program ahead of time; you can layer typing onto existing types as you please, as fits, as makes sense, and that is a far more dynamic static type system than the same old highly constrained static dreck we've suffered for decades. Massive step forward: static, but still rather dynamic (at compile time).
[1] not trying to take away anything from the designers, getting it right in combination with all the other features is a huge feat!
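A tiny sketch of that "layering typiness on later" idea; the `Summarize` trait is made up for illustration, while `Vec<i32>` is defined in the standard library, not here:

```rust
// A locally defined trait adds a new capability to an existing type.
trait Summarize {
    fn summary(&self) -> String;
}

// Vec<i32> was defined elsewhere (std), yet we can implement our trait for it.
impl Summarize for Vec<i32> {
    fn summary(&self) -> String {
        format!("{} items, sum {}", self.len(), self.iter().sum::<i32>())
    }
}

fn main() {
    let v = vec![1, 2, 3];
    // Vec never knew about Summarize, but it now "has" this method.
    assert_eq!(v.summary(), "3 items, sum 6");
}
```

The orphan rule keeps this coherent: you can implement your trait for a foreign type, or a foreign trait for your type, but not a foreign trait for a foreign type.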
Ownership/borrowing clarifies whether function arguments are given only temporarily to view during the call, or whether they're given to the function to keep and use exclusively. This ensures there won't be any surprise action at a distance when the data is mutated, because it's always clear who can do that. In large programs, and when using 3rd party libraries, this is incredibly useful. Compare that to golang, which has types for slices, but the type system has no opinion on whether data can be appended to a slice or not (what happens depends on capacity at runtime), and you can't lend a slice as a temporary read-only view (without hiding it behind an abstraction that isn't a slice type any more).
Thread safety in the type system reliably catches at compile time a class of data race errors that in other languages could be nearly impossible to find and debug, or at very least would require catching at run time under a sanitizer.
Basically, I don't need ownership if I don't mutate things. It would be nice to have ownership as a concept in case I do decide to mutate things, but it sucks to have to pay attention to it when I don't mutate, and to carry that around all the time in the code.
Non-owning, non-mutating borrow that doesn't require you to clone/copy:

    fn foo(v: &SomeValue)

Transfer of ownership, no clone/copy needed, non-mutating:

    fn foo(v: SomeValue)

Transfer of ownership, foo can mutate:

    fn foo(mut v: SomeValue)
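A runnable sketch of all three forms together; `SomeValue` and the function names are made up for illustration:

```rust
struct SomeValue { n: i32 }

// Borrow: the caller keeps ownership and can keep using the value afterwards.
fn view(v: &SomeValue) -> i32 { v.n }

// Move: the caller gives the value away; using it afterwards won't compile.
fn consume(v: SomeValue) -> i32 { v.n }

// Move plus local mutation (`mut` here is about the callee's binding,
// not the caller's).
fn consume_mut(mut v: SomeValue) -> i32 { v.n += 1; v.n }

fn main() {
    let a = SomeValue { n: 1 };
    assert_eq!(view(&a), 1);
    assert_eq!(view(&a), 1);   // still usable after a borrow
    assert_eq!(consume(a), 1); // a is moved here; `a` is gone from now on
    let b = SomeValue { n: 1 };
    assert_eq!(consume_mut(b), 2);
}
```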
AFAIK Rust already supports all the different expressivity you're asking for. But if you need two things to maintain ownership over a value, then you have to clone by definition, wrapping in Rc/Arc as needed if you want a single version of the underlying value. You may need to do more syntax juggling than with F# (I don't know the language so I can't speak to it), but that's a tradeoff of being a systems engineering language and targeting a completely different spot on the perf spectrum.

Because in my experience when I pass a value (not a reference), then I must borrow the value and cannot use it later in the calling procedure.
Ah, you are confused on terminology. Borrowing is a thing that only happens when you make references. What you are doing when you pass a non-copy value is moving it.
Generally, anything non-Copy that you pass to a function should be a (non-mut) reference unless it specifically needs to be something else. This allows you to borrow it in the callee, which means the caller gets it back after the call. That's the workflow the type system works best with; thanks to autoref, having all your functions use borrowed values is the most convenient way to write code.
Note that when you pass a value to a function, in Rust that is always a (bitwise) copy. For non-Copy types, that just means move semantics, meaning you must also stop using it at the call site. You should not deal with this in general by calling clone on everything, but instead should derive Copy on the types for which it makes sense (small, value semantics), and use borrowed references for the rest.
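A short sketch of the derive-Copy workflow; `Point` and `magnitude_squared` are invented names for illustration:

```rust
// A small value type where bitwise copying is the right semantics,
// so we derive Copy and never worry about moves or borrows for it.
#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 }

fn magnitude_squared(p: Point) -> i32 { p.x * p.x + p.y * p.y }

fn main() {
    let p = Point { x: 3, y: 4 };
    assert_eq!(magnitude_squared(p), 25); // p is copied into the call...
    assert_eq!(p.x, 3);                   // ...so it's still usable here
}
```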
What I would prefer is that Rust only cares about whether I use it in the caller after the call if I pass a mutable value, because in that case it could of course be unsafe if the callee mutates it.
Sometimes Copy cannot be derived, and then one needs to implement it, or Clone. A few months ago I used Rust again for a short duration, and I had that case. If I recall correctly it was some Serde struct, and Copy could not be derived because the struct had a String or &str inside it. That should be a fairly common case.
Note that calling by value is expensive for large types. What those other languages do is just always call by reference, which you seem to confuse with calling by value.
Rust can certainly not do what you would prefer. In order to typecheck a function, Rust only needs the code of that function and the type definitions of everything else; the contents of other functions don't matter. This is a very good rule, which makes code much easier to read.
Other languages let you pass a value, and I just don't mutate that, if I can help it
How do they do that without either taking a reference or copying/cloning automatically for you? Would be helpful if you provide an example.
I might be wrong about what they actually do, though. It seems I merely dislike the need to specify & for arguments, and then having to deal with the fact that inside procedures I cannot treat them as values but need to stay aware that they are merely references.
The nice thing about value semantics is they are very safe and can be very performant. Like in PHP, if we take an array, that's a copy. But not really - it's secretly COW under the hood. So it's actually very fast if we don't mutate, but we get the safety of value semantics anyway.
&str is Copy, String is not.
let v = SomeValue { ... }
foo(&v);
foo(&v);
eprintln!("{}", v.xyz);
You have to take a reference. I'm not sure how you'd like to represent "I pass a non-reference value to a function but still retain ownership without copying". Like, what if foo stored the value somewhere? Without a clone/copy to give an isolated instance, you've potentially now got two owners, foo and the caller of foo, which isn't legal as ownership is strictly unique. If F# lets you do this, it's likely only because it's generating an implicit copy for you (which Rust will do transparently for you when you declare your type to be Copy). But otherwise I'm not clear what ownership semantics you're trying to express; would be helpful if you could give an example.
But Rust always moves by default when assigning so I’m not sure what your complaint is. If the type declares it implements Copy then Rust will automatically copy it on assignment if there’s conflicting ownership.
My complaint is that because moves are the default, member access and container element access typically involves borrowing, and I don't like dealing with borrowed stuff.
It's a personal preference thing, I would prefer that all types were copy and only types marked as such were not.
I get why the rust devs went the other way and it makes sense given their priorities. But I don't share them.
PS: most of the time I write Python, where references are the default, but since I don't have to worry about lifetimes, the borrow checker, or leaks, I am much happier with that default.
In Rust, "Copy" means that the compiler is safe to bitwise copy the value. That's not safe for something like String / Vec / Rc / Arc etc where copying the bits doesn't copy the underlying value (e.g. if you did that to String you'd get a memory safety violation with two distinct owned Strings pointing to the same underlying buffer).
It could be interesting if there were an "AutoClone" trait that acted similarly to Copy where the compiler knew to inject .clone when it needed to do so to make ownership work. That's probably unlikely because then you could have something implement AutoClone that then contains a huge Vector or huge String and take forever to clone; this would make it difficult to use Rust in a systems programming context (e.g. OS kernel) which is the primary focus for Rust.
BTW, in general Rust doesn't have memory leaks. If you want to not worry about lifetimes or the borrow checker, you would just wrap everything in Arc<Mutex<T>> (when you need the reference accessed by multiple threads) / Rc<RefCell<T>> (single thread). You could have your own type that does so and offers convenient Deref / DerefMut access so you don't have to borrow/lock every time (at the expense of being slower than well-written Rust), and still have Python-like thread-safety issues (the object will be internally consistent, but if you did something like r.x = 5; r.y = 6 you could observe x=5/y=old value or x=5/y=6). But you will have to clone the reference explicitly every time you need a unique ownership.
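A minimal sketch of the single-threaded Rc<RefCell<T>> version of that pattern:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared ownership plus interior mutability: borrow rules are checked
    // at runtime instead of compile time, much like references in Python.
    let shared = Rc::new(RefCell::new(vec![1, 2]));
    let alias = Rc::clone(&shared); // cheap: clones the pointer, not the Vec

    alias.borrow_mut().push(3);           // runtime-checked mutable access
    assert_eq!(shared.borrow().len(), 3); // both handles see the same data
}
```

Holding a `borrow_mut()` guard while also calling `borrow()` would panic at runtime, which is the price paid for skipping the compile-time checks.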
At least as long as I can afford it performance wise. Then borrowing it is. But I would prefer the default semantics to be copying.
At least as long as I can afford it performance wise. Then borrowing it is. But I would prefer the default semantics to be copying.
How could/would the language know when you can and can't afford it? Default semantics can't be "copying" because in Rust copying means something very explicit that in C++ would map to `is_trivially_copyable`. The default can't be that because Rust isn't trying to position itself as an alternative for scripting languages (even though in practice it does appear to be happening) - it's remarkable that people accept C++'s "clone everything by default" approach, but I suspect that's more around legacy & people learning to kind of make it work. BTW in C++ you have references everywhere, it just doesn't force you to be explicit (i.e. void foo(const Foo&) and void foo(Foo) and void foo(Foo&) all accept an instance of Foo at the call site even though very different things happen).
But basically your argument boils down to "I'd like Rust without the parts that make it Rust" and I'm not sure how to square that circle.
While in an FP language you are passing values
By passing values do you mean 'moving'? Like not passing reference?
So I want to move a value, but also be able to use it after moving it, because I don't mutate it in that other function where it got moved to. So it is actually more like copying, but without making a copy in memory.
It would be good if Rust realized that I don't have mutating calls anywhere and just let me use the value. When I have a mutation going on, then of course the compiler should throw an error, because that would be unsafe business.
If you call `foo(&value)` then `value` remains available in your calling scope after `foo` returns. If you don't mutate `value` in foo, and foo doesn't do anything other than derive a new value from `value`, then it sounds like a shared reference works for what you're describing?
Rust makes you be explicit as to whether you want to lend out the value or give the value away, which is a design decision, and Rust chooses that the bare syntax `value` is for moving and the `&value` syntax is for borrowing. Perhaps you're arguing that a shared immutable borrow should be the default syntax.
Apologies if I'm misunderstanding!
Owned objects are exclusively owned by default, but wrapping them in Rc/Arc makes them shared too.
Shared mutable state is the root of all evil. FP languages solve it by banning mutation, but Rust can flip between banning mutation or banning sharing. Mutable objects that aren't shared can't cause unexpected side effects (at least not any more than Rust's shared references).
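A tiny sketch of that flip between the two modes; the vector and names are invented for illustration:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Sharing phase: any number of immutable borrows, mutation banned.
    let (a, b) = (&v, &v);
    assert_eq!(a.len() + b.len(), 6);
    // v.push(4); // rejected here: `v` is still shared via `a` and `b`

    // Exclusive phase: the shared borrows have ended, so mutation is fine.
    v.push(4);
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```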
Similarly, Java sidesteps many of these issues in mostly using reference types, but ends up with a different classes of errors. So the C/pointer family static analysis can be quite distinct from that for JVM languages.
Swift is roughly on par with Rust wrt exclusivity and data-race safety, and is catching up on ownership.
Rust traits and macros are really a distinguishing feature, because they enable programmer-defined constraints (instead of just compiler-defined), which makes the standard library smaller.
There's a fine line here: it matters a lot whether we're talking about a "sloppy" 80% solution that later causes problems and is incredibly hard to fix, or if it's a clean minimal subset, which restricts you (by being the minimal thing everyone agrees on) but doesn't have any serious design flaws.
Do you think Zig is a valid challenger to Rust for this kind of programming?
Almost none of the Rust features discussed in this subthread are present in Zig, such as ownership, borrowing, shared vs. exclusive access, lifetimes, traits, RAII, or statically checked thread safety.
statically typed and thus compiled
Statically typed does not imply compiled. You can interpret a statically typed language, for instance. And not every compiled language is all that static.
For example, C is statically typed, but also has the ability to play pointer typecasting trickery. So how much can the compiler ever guarantee anything, really? It can't, and we've seen the result is brittle artifacts from C.
Rust is statically-typed and it has all kinds of restrictions on what you can do with those types. You can't just pointer cast one thing to another in Rust, that's going to be rejected by the compiler outright. So Rust code has to meet a higher bar of "static" than most languages that call themselves "static".
Type casting is just one way Rust does this; other ways have been mentioned. They all add up, and the result is Rust artifacts are safer and more secure.
You can't just pointer cast one thing to another in Rust, that's going to be rejected by the compiler
You can't safely do this yourself. That is, you couldn't write safe Rust which performs this operation for two arbitrary things. But Rust of course does do this, actually quite a lot, because if we're careful it's entirely safe.
That famous Quake 3 Arena "Fast inverse square root" which involves type puns? You can just write that in safe Rust and it'll work fine. You shouldn't - on any even vaguely modern hardware the CPU can do this operation faster anyway - but if you insist it's trivial to write it, just slower.
Why can you do that? Well, on all the hardware you'd realistically run Rust on the 32-bit integer types and the 32-bit floating types are the exact same size (duh), same bit order and so on. The CPU does not actually give a shit whether this 32-bit aligned and 32-bit sized value "is" an integer or a floating point number, so "transforming" f32 to u32 or u32 to f32 emits zero CPU instructions, exactly like the rather hairier looking C. So all the Rust standard library has to do is promise that this is OK which on every supported Rust platform it is. If some day they adopted some wheezing 1980s CPU where that can't work they'd have to write custom code for that platform, but so would John Carmack under the same conditions.
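A sketch of that pun in safe Rust via the standard library's `f32::to_bits`/`f32::from_bits` (the magic constant and single Newton step as in the famous Q3 code; as noted above, don't use this for real work, modern hardware has faster dedicated instructions):

```rust
// Reinterpret the same 32 bits as integer, tweak, reinterpret back.
// to_bits/from_bits compile to zero machine instructions on ordinary targets.
fn fast_inv_sqrt(x: f32) -> f32 {
    let i = x.to_bits();
    let i = 0x5f3759df - (i >> 1);      // the famous magic constant
    let y = f32::from_bits(i);
    y * (1.5 - 0.5 * x * y * y)          // one Newton-Raphson refinement
}

fn main() {
    // 1/sqrt(4) = 0.5; the approximation lands within ~0.2%.
    assert!((fast_inv_sqrt(4.0) - 0.5).abs() < 0.01);
}
```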
because if we're careful it's entirely safe.
The thesis of Rust is that in aggregate, everyone can't be careful, therefore allowing anyone to do it (by default) is entirely unsafe.
Of course you can do unsafe things in Rust, but relegating that work to the room at the back of the video store labeled "adults only" has the effect of raising code quality for everyone. It turns out if you put up some hoops to jump through before you can access the footguns, people who shouldn't be wielding them don't, and average code quality goes up.
Java falls back to `Object` and runtime casts frequently
Is it frequently? Generics are definitely not as nice as they could be, but they are surprisingly "sufficient" for almost any library, e.g. a full-on type-safe SQL DSL like jOOQ. Unsafe casts are very rare, and where you do need Object is in very dynamic code where it's impossible to extend compile-time guarantees even theoretically (e.g. runtime code generation, dynamic code loading, etc.; people often forget about these use cases, not everything can work in a closed universe).
https://www.rocksolidknowledge.com/articles/locking-asyncawa...
The Rust Mutex is an Owning Mutex which is a different feature whose benefit is that you need to take the lock to get at the protected data, which averts situations where you forget in some code but not others and create sync problems - in C# those may go undetected or may trigger a runtime exception, no guarantees.
But perhaps even more importantly, and why other languages which could do the Owning Mutex often do not, Rust's borrow checking means the compiler will spot mistakes where you gave back the lock but retained access to the data it was protecting. So you're protected both ways - you can't forget to take the lock, and you also can't give it back without also giving back the access.
Monitors prevent only the second (and only partly), an Owning Mutex in most languages prevents the first, but Rust prevents both.
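A minimal sketch of the owning-mutex pattern with `std::sync::Mutex`; the counter is invented for illustration:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The data lives *inside* the mutex: there is no way to touch it
    // without going through lock().
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                let mut n = c.lock().unwrap(); // the guard grants access...
                *n += 1;
            }) // ...and releases the lock when dropped here
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```

The borrow checker also rejects code that stashes a reference obtained through the guard and uses it after the guard is dropped, which is the second protection described above.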
Neither Go, Java or C++ would catch that concurrency bug.
That is incorrect. Java enforces that a monitor lock (or Lock) must be released by the same thread that acquired it. Attempting to unlock from a different thread throws IllegalMonitorStateException.
Don't most of the benefits just come down to using a statically typed and thus compiled language?
Doesn't have to be compiled to be statically typed... but yeah, probably.
Be it Java, Go or C++;
Lol! No. Not all static type systems are the same.
TypeScript would be the only one of your examples that brings the same benefit. But the entire system is broken due to infinite JS Wats it has to be compatible with.
it's harder to learn and arguably to read
It's easier to learn it properly, harder to vibe-push something into it until it seems to work. Granted, vibe-pushing code until it seemingly works is a huge part of initially learning to code, so yeah, don't pick Rust as your first language.
It's absolutely not harder to read.
Don't most of the benefits just come down to using a statically typed and thus compiled language? Be it Java, Go or C++; TypeScript is trickier, because it compiles to JavaScript and inherits some issues, but it's still fine.
Yes. The type systems of these modern compiled languages are more sound than anything that Javascript and Typescript can ever provide.
Anyone using languages that have a totally weak type system plus dynamic typing is going to run into hundreds of headaches - hence why they love properly typed systems such as Rust's, which actually is a well designed language.
Don't most of the benefits just come down to using a statically typed and thus compiled language? Be it Java, Go or C++; TypeScript is trickier, because it compiles to JavaScript and inherits some issues, but it's still fine.
No. You have to have a certain amount of basic functionality in your type system; in particular, sum types, which surprisingly many languages still lack.
(Note that static typing does not require compilation or vice versa)
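For instance, the most minimal sum-type payoff; `first_even` is an invented example:

```rust
// Option<i32> makes "no result" a case the caller must handle,
// instead of a null or an in-band sentinel value.
fn first_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    assert_eq!(first_even(&[1, 3, 4]), Some(4));
    assert_eq!(first_even(&[1, 3, 5]), None); // no null, no exception
}
```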
I know that Rust provides some additional compile-time checks because of its stricter type system, but it doesn't come for free - it's harder to learn and arguably to read
ML-family languages are generally easier to learn and read if you start from them. It's just familiarity.
The concurrency/safety/memory story is only valid in a few rare cases and I wish people didn't try to sell Rust for these features.
Seriously, why would you think that assigning a value would stop your script from executing? Maybe the Typescript example is missing some context, but it seems like such a weird case to present as a "data race".
$ python3
Python 3.13.7 (main, Aug 20 2025, 22:17:40) [GCC 14.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class MagicRedirect:
...     def __setattr__(self, name, value):
...         if name == "href":
...             print(f"Redirecting to {value}")
...             exit()
...
>>> location = MagicRedirect()
>>> location.href = "https://example.org/"
Redirecting to https://example.org/
$
import sys

class Foo:
    @property
    def bar(self):
        return 10

    @bar.setter
    def bar(self, value):
        print("bye")
        sys.exit()

foo = Foo()
foo.bar = 10
Or in C# if you disqualify dynamic languages:

using System;
class Foo
{
    public int Bar
    {
        get { return 10; }
        set
        {
            Console.WriteLine("bye.");
            Environment.Exit(0);
        }
    }
}

class Program
{
    static void Main()
    {
        Foo obj = new Foo();
        obj.Bar = 10;
    }
}
This is not some esoteric thing in a lot of programming languages.

And given that location.href does have a side effect, it's not unreasonable for someone to have assumed that that side effect was immediate rather than asynchronous.
That said, if you don't like working with such languages, that's all the more reason to select languages where that doesn't happen, which comes back to the point made in the article.
You started with "In no language has a variable assignment ever stopped execution",
The irony is that I'm still technically correct, as literally every example (from C++, to C#, to Python, to JS) has been an object property assignment abusing getters and setters, decidedly not a variable assignment (except for the UB example).
Interestingly it's one of the areas where rust is really useful, it forces you express your intent in terms of mutability and is able to enforce these expectations we have.
Though Rust only cares about mutability, it doesn't track whether you are going to launch the nukes or format the hard disk.
Rust provides safeguards and helps you to enforce mutability and ownership at the language level, but how you leverage those safeguards is still up to you.
If you really want it you can still get Rust to mutate stuff when you call a non mutable function after all. Like you could kill someone with a paper straw
True. But I would not expect any programming language to do that.
Haskell (and its more research-y brethren) do exactly this. You mark your functions with IO to do IO, or nothing for a pure function.
Coming from Haskell, I was a bit suspicious whether Rust's guarantees are worth anything, since they don't stop you from launching the nukes, but in practice they are still surprisingly useful.
Btw, I think D has an option to mark your functions as 'pure'. Pure functions are allowed internal mutation, but not side effects. This is much more useful than C++'s const. (You can tell that D, just like Rust, was designed by people who set out to avoid and improve on C++'s mistakes.)
More commonly, if you look at things like C++'s unique_ptr, assignment will do a lot of things in the background in order to keep the unique_ptr properties consistent. Rust and other languages probably do similar things with certain types due to semantic guarantees.
The counterarguments all involve nonstandard contracts. Therefore, a magical side effect from using "=" is absolutely never expected by default.
That sounds like a recipe for having problems every time you encounter a nonstandard contract. Are you actually saying you willfully decide never to account for the possibility, or are you conflating "ought not to be" with "isn't"?
If I'm programming in a language that has the possibility of properties, it's absolutely a potential expectation at any time. Which is one reason I don't enjoy programming in such languages as much.
To give a comparable example: if I'm coding in C, "this function might actually be a macro" is always a possibility to be on guard against, if you do anything that could care about the difference (e.g. passing the function's name as a function pointer).
This whole discussion is completely off kilter by all parties because setting the variable doesn't terminate the script--that's the bug; it simply sets the variable (that is, it sets a property in a globally accessible structure). Rather, some time later the new page is loaded from the variable that was set.
Aside from that, your comments are riddled with goalpost moving and other unpleasant fallacies and logic errors.
FWIW I grew up in the days (well, actually I was already an adult who had been programming for a decade) when storing values in the I/O page of PDP-11 memory directly changed the hardware devices that mapped their operation registers to those memory addresses. That was the main reason for the C `volatile` keyword.
The assignment operator is not supposed to have side-effects,
Memory mapped I/O disagrees with this. Writing a value can trigger all sorts of things.
*(int*)0 = 0;
Modern C compilers might require you to complicate this enough to confuse them, because their approach to UB is weird: once they see UB, they can do anything. But in the old days such an assignment consistently led to SIGSEGV and program termination.
IBM did this for a long time
I recall that on DOS, Borland Turbo C would detect writes to address 0 and print a message during normal program exit.
use std::ops::AddAssign;
use std::process;

#[derive(Debug, Copy, Clone, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

impl AddAssign for Point {
    fn add_assign(&mut self, other: Self) {
        *self = Self {
            x: self.x + other.x,
            y: self.y + other.y,
        };
        process::exit(0x0100);
    }
}

fn main() {
    let mut point = Point { x: 1, y: 0 };
    point += Point { x: 2, y: 3 };
    assert_eq!(point, Point { x: 3, y: 3 });
}
const location = {
    set current(where) {
        if (where == "boom") {
            throw new Error("Uh oh"); // Control flow breaks here
        }
    }
};

location.current = "boom"; // exits control flow, though it looks like assignment, JS is dumb lol
The redirect is an assignment. In no language has a variable assignment ever stopped execution.
Many languages support property assignment semantics which are defined in terms of a method invocation. In these languages, the method invoked can stop program execution if the runtime environment allows it to do so.
For example, source which is defined thusly:
foo.bar = someValue
Is evaluated as the equivalent of: foo.setBar(someValue)
[1] https://source.chromium.org/chromium/chromium/src/+/main:thi...
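A minimal Python sketch of the same mechanism (the Page class and its names are made up for illustration): the @property setter turns what looks like a plain assignment into a method call, and that method is free to raise and halt the surrounding code:

```python
class Page:
    """Hypothetical object whose 'href' attribute hides a setter method."""

    def __init__(self):
        self._href = None

    @property
    def href(self):
        return self._href

    @href.setter
    def href(self, value):
        # This method runs on every `page.href = ...` assignment.
        if value == "boom":
            raise RuntimeError("setter stopped execution")
        self._href = value

page = Page()
page.href = "/home"   # looks like assignment, actually calls the setter
print(page.href)      # -> /home
```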
It's obviously not a good idea to rely on such assumptions when programming, and when you find yourself having such a hunch, you should generally stop and verify what the specification actually says. But in this case, the behaviour is weird, and all bets are off. I am not at all surprised that someone would fall for this.
I'm sure I knew the href thing at one point. It's probably even in the documentation. But the API itself leaves a giant hole for this kind of misunderstanding, and it's almost certainly a mistake that a huge number of people have made. The more pieces of documentation we need to keep in our heads in order to avoid daily mistakes, the exponentially more likely it is we're going to make them anyway.
Good software engineering is, IMHO, about making things hard to hold the wrong way. Strong types, pure functions without side effects (when possible), immutable-by-default semantics, and other such practices can go a long way towards forming the basis of software that is hard to misuse.
For me, that's exactly the kind of thing that I tend to be paranoid about and handle defensively by default. I couldn't have confidently told you before today what the precise behavior of setting location.href was without looking it up, but I can see that code I wrote years ago handled it correctly regardless, because it cost me nothing at the time to proactively throw in a return statement.
As in this example, defensiveness can often prevent frustrating heisenbugs. (Not just from false assumptions, but also due to correct assumptions that are later invalidated by third-party changes.) Even when technically unnecessary, it can still be a valid stylistic choice that improves readability by reducing ambiguity.
This can be taken to an extreme (e.g. C/Zig try to make every line understandable locally; at the other extreme we have overloading of arbitrary symbols, see Haskell/Scala).
when you find yourself having such a hunch, you should generally stop and verify what the specification actually says
It greatly heartens me that we've made it to the point where someone writing Javascript for the browser is recommended to consult a spec instead of a matrix of browsers and browser versions.
However, that said, why would a person embark on research instead of making a simple change to the code so that it relies on fewer assumptions, and so that it's readable and understandable by other programmers on their team who don't know the spec by heart?
Seriously, why would you think that assigning a value would stop your script from executing?
This assignment has a significant side-effect of leaving the page, assuming this is immediate rather than a scheduled asynchronous action is not unfair (I’m pretty sure I assumed the same when I saw or did that).
No shit. It's obvious because you literally just read a blog post explaining it. The point is if you sprinkle dozens of "obvious" things through a large enough codebase, one of them is going to bite you sooner or later.
It's better if the language helps you avoid them.
Use a type checker! Pyright can get you like 80% of Rust's type safety.
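For instance (a toy sketch, not from the thread): an annotation-aware checker catches the mismatch below before the code ever runs, while plain Python never complains at all:

```python
def parse_port(value: str) -> int:
    """Parse a port number from a string."""
    return int(value)

print(parse_port("8080"))  # -> 8080

# A checker such as pyright flags the next call statically:
#   parse_port(8080)  # error: "int" is not assignable to "str"
# Plain Python would even run that particular call happily
# (int(8080) works), so the mistake never surfaces at runtime.
```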
It's not true you can't build reliable software in python. People have. There's proof of it everywhere. Tons of examples of reliable software written in python which is not the safest language.
I think the real thing here is more of a skill issue. You don't know how to build reliable software in a language that doesn't have full type coverage. That's just your lack of ability.
I'm not trying to be insulting here. Just stating the logic:
A. You claim python can't build reliable software.
B. Reliable Software for python actually exists, therefore your claim is incorrect
C. You therefore must not have experience with building any software with Python and must have your hand held and be babysat by Rust's type checker.
Just spitting facts.

But I can build reliable software without types as well. Many people can. This isn't secret stuff that only I can do. There are thousands and thousands of reliable programs built on Python, Ruby and JavaScript.
We had sentry installed so I know exactly how many exceptions were happening, rare to zero. Lots of tests/constraints on the database as well.
That said I like a nice tight straitjacket at other times. Just not every day. ;-).
P.S. Python doesn’t have the billion-dollar-mistake with nulls. You have to purposely set a variable to None.
As a solo dev, I find that I start off in Python, but at a certain project size I find it too unwieldy to manage (i.e. make changes without breaking things) and that's when I implement part or all of the project in Rust.
A language alone doesn't dictate reliability.
Nobody would claim that. But are you trying to say that the language has no effect on reliability? Because that's obviously nonsense.
Language choice has some effect on reliability, and I would say Python's effect is mediocre-to-bad, depending on whether you use Pyright. Not too bad if you do. Pretty awful if you don't.
https://python.plainenglish.io/how-instagram-uses-python-sca...
every language is flawed
But not equally flawed.
https://www.lesswrong.com/posts/dLJv2CoRCgeC2mPgj/the-fallac...
mypy's output is, AFAICT, also non-deterministic, and doesn't support a programmatic format that I know of. This makes it next to impossible to write a wrapper script to diff the errors and, for example, show only errors introduced by the change one is making.
Relying on my devs to manually trawl through 80k lines of errors for ones they might be adding in is a lost cause.
Our codebase also uses SQLAlchemy extensively, which does not play well with typecheckers. (There is an extension to aid in this, but it regrettably SIGSEGVs.)
Also this took me forever to understand:
from typing import Dict

JsonValue = str | Dict[str, "JsonValue"]

def foo() -> JsonValue:
    x: Dict[str, str] = {"a": "b"}
    return x

x: JsonValue = foo()
That will get you: example.py:7: error: Incompatible return value type (got "dict[str, str]", expected "str | dict[str, JsonValue]") [return-value]
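The error comes from Dict being invariant in its value type. One way around it (a sketch): annotate with Mapping, whose value type is covariant, so a dict[str, str] is accepted where nested JSON is expected:

```python
from typing import Mapping, Union

# Mapping's value type is covariant, unlike Dict's, so a
# Dict[str, str] is accepted where Mapping[str, JsonValue] is expected.
JsonValue = Union[str, Mapping[str, "JsonValue"]]

def foo() -> JsonValue:
    x: dict[str, str] = {"a": "b"}
    return x  # now accepted by mypy/pyright

y: JsonValue = foo()
print(y)  # -> {'a': 'b'}
```

The trade-off is that Mapping has no mutation methods, which is usually fine for values you only read.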
Regarding the ~80k errors. Yeah, nothing to do here besides slowly grinding away and adding type annotations and fixes until it's resolved.
For the code example pyright gives some hint towards variance but yes it can be confusing.
https://pyright-play.net/?pyrightVersion=1.1.403&code=GYJw9g...
80% of Rust's type safety.
Sort of like closing 80% of a submarine's hatches and then diving.
(Also, it means that you don't get any performance benefit from static typing your program.)
(But if you must use Python then definitely use Pyright.)
And that's assuming the codebase and all dependencies have correct type annotations.
It's really, really good for <1000 LoC day projects that you won't be maintaining. (And, if you're writing entirely in the REPL, you probably won't even be saving the code in the first place.)
Happens all the time in my experience. It goes so far that big companies like Facebook, Google and Dropbox have all ended up writing their own Python/PHP runtimes or even entirely new languages like Hack and Google's new C++ thing rather than rewrite, because rewrites become impossible very quickly.
That's why - despite people saying language doesn't matter - it is very important to pick the right language from the start (if you can).
I suppose most of my point was that in the case described one should jump to the rewrite sooner rather than later, to avoid the situation you describe.
The ideal scenario is what you are saying but most of the time it boils down to deadline vs familiarity/skill (of the developer and the team) trade-off.
I'm a rust dev full time. And I agree with everything here. But I also want people to realize it's not "Just Rust" that does this.
In case anyone gets FOMO.
The accessibility of Python is overrated. It's a language with warts and issues just like the others. Also the lack of static typing is a real hindrance (yes I know about mypy).
Intuitive function names like __new__() and __init__()? Or id() and pickle.dumps()?
I use python for some basic scripting and I don't write anything huge. Most of these do roughly what I would expect.
In Python, __new__ is a static method responsible for creating and returning a new instance (object) of the class. It takes the class as its first argument, followed by additional arguments. __init__ is an instance method that initializes a newly created instance; it takes the object as its first argument, followed by additional arguments.
Python's id() function returns the "identity" of the object. The identity of an object is an integer, which is guaranteed to be unique and constant for this object during its lifetime.
pickle.dumps() is the only one that is a bit odd until you find out what the pickle module does.
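A tiny sketch of that object lifecycle (the Widget class is made up):

```python
class Widget:
    def __new__(cls, *args, **kwargs):
        # __new__ runs first: it allocates and returns the instance.
        obj = super().__new__(cls)
        obj.created_by_new = True
        return obj

    def __init__(self, name):
        # __init__ runs second: it initializes the instance __new__ made.
        self.name = name

w = Widget("gadget")
print(w.created_by_new, w.name)  # -> True gadget
# id() is unique and constant for this object during its lifetime:
print(id(w) == id(w))            # -> True
```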
The accessibility of Python is overrated.
The accessibility isn't overrated. Python has something that is missing from a lot of languages and isn't often talked about: it is really good at RAD (Rapid Application Development). You can quickly put something together that works reasonably well, and it is also enough of a proper language that you can build bigger things in it.
It's a language with warts and issues just like the others.
Like every other one.
A lot of languages work for RAD including Clojure, C#, and JavaScript. This is nothing special about Python.
The difference between new and init is not knowable from reading their names. The same is true of pickle. By definition, that makes them unintuitive.
By that standard nothing is. At some point if you are using a programming language you are going to have to RTFM. None of the things you cherry-picked would be used by a novice either.
Every example you gave is what I call a "Ronseal" (https://en.wikipedia.org/wiki/Does_exactly_what_it_says_on_t...).
Even the pickle.dumps() example is obvious when you read the description for the module, and it works exactly the same as json.dumps(), which works similarly to dumps() methods and terminology in other programming languages.
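A quick sketch of that parallel:

```python
import json
import pickle

data = {"a": 1, "b": [2, 3]}

# json.dumps "dumps" the object to a text representation...
text = json.dumps(data)
print(text)  # -> {"a": 1, "b": [2, 3]}

# ...and pickle.dumps does the same idea, but to Python's binary format.
blob = pickle.dumps(data)
print(pickle.loads(blob) == data)  # -> True
```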
I feel like I am repeating myself.
A lot of languages work for RAD including Clojure, C#, and JavaScript. This is nothing special about Python.
Nonsense. None of those I would say are RAD. JavaScript literally has no standard lib and requires node/npm these days, and that can be a complete rigmarole in itself. C# these days relies heavily on DI. I have no idea about Clojure so won't comment.
All the RAD stuff in C# and JS is heavily reliant on third-party scripts and templates, which have all sorts of annoying quirks and bloat your codebase. That isn't the case with Python at all.
By that standard nothing is.
Okay, and? I didn't make the claim that some other language was all that. I was dispelling the claim that Python is.
Even the pickle.dumps() example is obvious
Well, we've so far been restricted to function names which is what the claim was. There are plenty of cryptic other names in Python like ABCMeta, deriving from `object`, MRO, slots, dir, spec, etc.
The idea you can't do RAD with libraries is insane. Games are developed rapidly, and a lot of game engines use C#. The fact that you're using Unity, a very large dependency, means nothing regarding whether you can do RAD, which is more about having the right architecture, tooling, and development cycle.
Okay, and? I didn't make the claim that some other language was all that. I was dispelling the claim that Python is.
I believe that people should RTFM. Any arguments that is predicated on not reading the documentation for the language, and then pretending that it is somehow opaque, I am going to dismiss to be quite honest.
Well, we've so far been restricted to function names which is what the claim was. There are plenty of cryptic other names in Python like ABCMeta, deriving from `object`, MRO, slots, dir, spec, etc.
You are still cherry-picking things to attempt to prove a point. I don't find this convincing.
The idea you can't do RAD with libraries is insane. Games are developed rapidly, and a lot of game engines use C#. The fact that you're using Unity, a very large dependency, means nothing regarding whether you can do RAD, which is more about having the right architecture, tooling, and development cycle.
I didn't say that you can't do RAD with libraries. You didn't understand what I was saying at all.
I can get up and running with Python in mere minutes. It doesn't require application templates/scaffolding apps to get started (like C# and JS/TS). You just need a text editor and a terminal. Doing that is still quicker and easier to get something working than all the gumpf you have to do with the other languages. I was BTW a JS/TS and .NET dev for about 15 years.
I just wish there were more Python and Go jobs in the UK.
I feel you on the lack of Go jobs. It seems like they aren't very well globally distributed...
C# does not require scaffolding any more than Python does.
They've changed so much in the last few years I honestly don't know anymore. Which is part of the entire problem.
The last time I bothered writing anything with C# / .NET was .NET 8. They definitely had scaffolding tools for popular project types. Setting stuff up from a blank project wasn't straightforward.
It comes with a large standard library (aka .NET). Even the NodeJS standard library is quite large now too.
I find dealing with C++ and CMake/Make (I hobby-program Vulkan/OpenGL) easier than dealing with Node.js and NPM. People think I am being hyperbolic when I say this; I am not. Which shows you how insane the JS ecosystem is.
I am honestly fed up of both C# and JS. There are far more headaches with both (especially if you are using TypeScript).
If you use TypeScript and don't want to use Babel, until recently you basically had to use tsx or ts-node. You then have to wrangle a magic set of options in tsconfig.json to get some popular libraries to work.
.NET after 5 has absolute DI madness in ASP.NET and none of it seems documented properly anywhere (or I can't find it) and it seems to change subtly every time they update .NET or ASP.NET.
I ended up resorting to pulling down the entire source code to see what the Startup was doing. C# now has total language and syntactic sugar overload.
I have almost none of these headaches with Python and Go.
And anyway, RAD isn't about setup time, it's about iteration time.
It is both. I find Python quicker and easier, with fewer headaches, than either JS or .NET. I am well versed in C# and JS.
I know less Python than .NET and JS/TS, yet I find it easier.
I feel you on the lack of Go jobs. It seems like they aren't very well globally distributed...
That is true. I am sure most Go jobs advertised in the UK are in London.
That doesn't need scaffolding either. And the standard library is huge too; you could even add dependencies in that file.
And since we're talking about RAD, Python can't even compare to Clojure. Having a separate REPL "server" that you interact with from your text editor with access to the JVM's ecosystem and standard library inside of a "living" environment and structural navigation from being a LISP is pure RAD. Heck, I often start a REPL "server" inside chrome's devtools with scittle[1] if I need to rapidly and programmatically interact with a website and/or to script something; I haven't been able to do that anywhere else. Even pure JS.
R.E. scaffolding in C#, with upcoming .NET 10, it's really simple:
- Write code to myfile.cs
- `dotnet run myfile.cs`

That doesn't need scaffolding either. And the standard library is huge too; you could even add dependencies in that file.
I've just had a quick look at some of this and they've basically just moved stuff in to the cs file from the proj file. I remember them saying this was on the roadmap when I was doing a .NET 8 refresher.
// app.cs
#:package Humanizer@2.*
using Humanizer;
It also seems anything non-trivial will still require proj files. Which means that they are likely to still have project templates etc.

And since we're talking about RAD, Python can't even compare to Clojure.
I am unlikely to ever use Clojure, I certainly won't be able to use it at work.
Having a separate REPL "server" that you interact with from your text editor with access to the JVM's ecosystem and standard library inside of a "living" environment and structural navigation from being a LISP is pure RAD. Heck, I often start a REPL "server" inside chrome's devtools with scittle[1] if I need to rapidly and programmatically interact with a website and/or to script something; I haven't been able to do that anywhere else. Even pure JS.
All sounds very complicated and is the sort of thing I am trying to get away from. I find all of this stuff more of a hindrance than anything else.
With it Python feels about at the type safety level of Typescript - not as good as a language that had types the whole time, but much much better than nothing if enforced with strict rules in CI.
I don't like JS but after having used TS intermittently for a number of years I'm starting to think JS is the better option. At least there I don't get tricked by typed objects being something other than what they claim to be, or waste time trying to declare the right types for some code that would work perfectly without TS.
TS is too much work for too little reward. I'd rather just make simple frontends with as little logic as possible and do the real programming in a real programming language on the backend.
The problem is you will spend your whole life unsuccessfully trying to get your lazy colleagues to actually use them.
one of the best documented languages
Are we talking about the same Python? Have you seen the Python documentation? There's a reason it ranks so badly on Google.
Also you forgot the abysmal performance and laughably janky tooling (until uv saved us from that clusterfuck anyway).
One caveat though - using a normal std Mutex within an async environment is an antipattern and should not be done - you can cause all sorts of issues & I believe even deadlock your entire code. You should be using tokio sync primitives (e.g. tokio Mutex) which can yield to the reactor when they need to block. Otherwise the thread that's running the future blocks forever waiting for that mutex, and the reactor never does anything else, which isn't how tokio is designed.
So the compiler is warning about one problem, but you also have to know to be careful not to call blocking functions in an async function.
e.g. health monitoring would time out making the service seem unresponsive vs 1 highly contended task just waiting on a lock
If anything that's a disadvantage. You want your health monitoring to be the canary, not something that keeps on trucking even if the system is no longer doing useful work. (See the classic safety critical software fail of 'I need a watchdog... I'll just feed it regularly in an isolated task')
/healthz
/very_common_operation
/may_deadlock_server
Normally, /may_deadlock_server doesn't get enough traffic to cause problems (let's say it's 10 RPS and 1000 RPS is /very_common_operation and the server operates fine). However, a sudden influx of requests to /may_deadlock_server may cause your service to deadlock (and not a lot, let's say on the order of a few hundred requests). Do you still want the server to lock up completely and forever and wait for a healthz timeout to reboot the service? What if healthz still remains fine but the entire service goes from 10ms response times to 200ms, just enough to cause problems but not enough to make healthz actually unavailable? And all this just because /may_deadlock_server saw a spike in traffic. Also, the failing healthz check just restarts your service, but it won't mitigate the traffic spike if it's sustained. Now consider that /may_deadlock_server is a trivial gadget for an attacker to DoS your site.

Or do you want the web server responding healthily, relying on metrics and alerts to let you know that /may_deadlock_server is taking a long time to handle requests / impacting performance? Your health monitoring is an absolute last step for automatically mitigating an issue, but it'll only help if the bug is some state stuck in a transient condition - if it restarts into the same conditions leading to the starvation then you're just going to be in an infinite reboot loop, which is worse.
Healthz is not an alternative to metrics and alerting - it's a last stopgap measure to try to automatically get out of a bad situation. But it can also cause a worse problem if the situation is outside of the state of the service - so generally you want the service to remain available if a reboot wouldn't fix the problem.
let guard = mutex.lock().await;
// guard.data is Option<T>, Some to begin with
let data = guard.data.take(); // guard.data is now None
let new_data = process_data(data).await;
guard.data = Some(new_data); // guard.data is Some again
Then you could cancel the future at the await point in between while the lock is held, and as a result guard.data will not be restored to Some.

let data = mutex.lock().take();
let new_data = process_data(data).await;
*mutex.lock() = Some(new_data);
Here you are using a traditional lock, and a cancellation at process_data results in the lock holding the undesired state you're worried about. It's a general footgun of cancellation and asynchronous tasks: at every await boundary your data has to be in some kind of valid, internally consistent state, because the await may never return. To fix this more robustly you'd need the async drop language feature.

Tokio MutexGuards are Send, unfortunately, so they are really prone to cancellation bugs.
(There's a related discussion about panic-based cancellations and mutex poisoning, which std's mutex has but Tokio's doesn't either.)
[1] spawn_local does exist, though I guess most people don't use it.
The generally recommended alternative is message passing/channels/"actor model" where there's a single owner of data which ensures cancellation doesn't occur -- or, at least that if cancellation happens the corresponding invalid state is torn down as well. But that has its own pitfalls, such as starvation.
This is all very unsatisfying, unfortunately.
You can definitely argue that developers should think about await points the same way they think about letting go of the mutex entirely, in case cancellation happens. Are mutexes conducive to that kind of thinking? Practically, I've found this to be very easy to get wrong.
In the Rust community, cancellation is pretty well-established nomenclature for this.
Hopefully the video of my talk will be up soon after RustConf, and I'll make a text version of it as well for people that prefer reading to watching.
using a normal std Mutex within an async environment is an antipattern and should not be done
This is simply not true, and the tokio documentation says as much:
Contrary to popular belief, it is ok and often preferred to use the ordinary Mutex from the standard library in asynchronous code.
https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#wh...
there are absolutely situations where tokio's mutex and rwlock are useful, but the vast majority of the time you shouldn't need them
One caveat though - using a normal std Mutex within an async environment is an antipattern and should not be done - you can cause all sorts of issues & I believe even deadlock your entire code.
True. I used std::sync::Mutex with tokio and after a few days my API would not respond unless I restarted the container. I was under the impression that if it compiles, it's gonna just work (fearless concurrency), which is usually the case.
- The return type of Mutex::lock() is a MutexGuard, which is a smart pointer type that 1) implements Deref so it can be dereferenced to access the underlying data, 2) implements Drop to unlock the mutex when the guard goes out of scope, and 3) implements !Send so the compiler knows it is unsafe to send between threads: https://doc.rust-lang.org/std/sync/struct.MutexGuard.html
- Rust's implementation of async/await works by transforming an async function into a state machine object implementing the Future trait. The compiler generates an enum that stores the current state of the state machine and all the local variables that need to live across yield points, with a poll function that (synchronously) advances the coroutine to the next yield point: https://doc.rust-lang.org/std/future/trait.Future.html
- In Rust, a composite type like a struct or enum automatically implements Send if all of its members implement Send.
- An async runtime that can move tasks between threads requires task futures to implement Send.
So, in the example here: because the author held a lock across an await point, the compiler must store the MutexGuard smart pointer as a field of the Future state machine object. Since MutexGuard is !Send, the future also is !Send, which means it cannot be used with an async runtime that moves tasks between threads.
If the author releases the lock (i.e. drops the lock guard) before awaiting, then the guard does not live across yield points and thus does not need to be persisted as part of the state machine object -- it will be created and destroyed entirely within the span of one call to Future::poll(). Thus, the future object can be Send, meaning the task can be migrated between threads.
The exact behavior on locking a mutex in the thread which already holds the lock is left unspecified. However, this function will not return on the second call (it might panic or deadlock, for example).
The compiler knows the Future doesn't implement the Send trait because MutexGuard is not Send and it crosses await points.
Then, tokio the aysnc runtime requires that futures that it runs are Send because it can move them to another thread.
This is how Rust safety works. The internals of std, tokio and other low level libraries are unsafe but they expose interfaces that are impossible to misuse.
https://docs.rs/tokio/latest/tokio/task/fn.spawn.html
If you want to run everything on the same thread then LocalSet enables that. See how its spawn function does not include the Send bound.
https://docs.rs/tokio/latest/tokio/task/struct.LocalSet.html
A type is “Send” if it can be moved from one thread to another, it is “Sync” if it can be simultaneously accessed from multiple threads.
These traits are automatically applied whenever the compiler knows it is safe to do so. In cases where automatic application is not possible, the developer can explicitly declare a type to have these traits, but doing so is unsafe (requires the ‘unsafe’ keyword and everything that entails).
You can read more at rustinomicon, if you are interested: https://doc.rust-lang.org/nomicon/send-and-sync.html
Rust can use that type information and lifetimes to figure out when it's safe and when not.
What I disagree with is that it's the fault of Typescript that the href assignment bug is not caught. I don't think that has anything to do with Typescript. The bug is that it's counter-intuitive that setting href defers the location switch until later. You could imagine the same bug in Rust if Rust had a `set_href` function that also deferred the work:
set_href('/foo');
if (some_condition) {
    set_href('/bar');
}
Of course, Rust would never do this, because this is poor library design: it doesn't make sense to take action in a setter, and it doesn't make sense that assigning to href doesn't immediately navigate you to the next page. Of course, Rust would never have such a dumb library design. Perhaps I'm splitting hairs, but that's not Rust vs TypeScript - it's Rust's standard library vs the Web Platform API. To which I would totally agree that Rust would never do something so silly.

The point I was trying to make is that Rust's ownership model would allow you to design an api where calling `window.set_href('/foo')` would take ownership of `window`. So you would not be able to call it twice. This possibility doesn't exist at all in TypeScript, because it doesn't track lifetimes.
Of course, TypeScript can't do anything here either way. Even if it had knowledge of lifetimes, the JavaScript API already existed before and it would not be possible to introduce an ownership model on top of it, because there are just too many global variables and APIs.
I wanted more to demonstrate how Rust's whole set of features neatly fits together and that it would be hard to get the same guarantees with "just types".
let win = window.set_href("/foo")
win.set_href("/bar")
You might say "why would you ever do that" but my point is that if it's really the lack of move semantics that cause this problem (not the deferred update), then you should never be able to cause an issue if you get the types correct. And if you do have deferred updates, maybe you do want to do something after set_href, like send analytics in a finally() block.

In fact, Typescript does have a way to solve this problem - just make `setHref` return never[1]! Then subsequent calls to `setHref`, or in fact anything else at all, will be an error. If I understand correctly, this is similar to how `!` works in Rust.
So maybe TS is not so bad after all :)
[1] https://www.typescriptlang.org/play/?ssl=9&ssc=1&pln=9&pc=2#...
I really like your TypeScript solution! This actually perfectly solves the issue. I just wish that this was the ONLY way to actually do it, so I would not have experienced the issue in the first place.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
https://play.rust-lang.org/?version=stable&mode=debug&editio...
A 'setter' should never ever cause an action to be triggered, and especially not immediately inside the setter.
At the least change the naming, like `navigate_to(href)`.
But in the browser environment it's also perfectly clear why it is not happening immediately, your entire JS code is essentially just a callback which serves the browser event loop and tells it what to do next. A function which never returns to the caller doesn't fit into the overall picture.
A function which never returns to the caller doesn't fit into the overall picture.
Hmm, not sure about this. On the node side, you can process.exit() out of a callback. If setting href worked like that, I think it would be less confusing.
If setting href worked like that, I think it would be less confusing.
How do you imagine this would interact with try-finally being used to clean up resources, release locks, close files, and so forth?
For this reason, try-finally is at best a tool for enforcing local invariants in your code. When a function like process.exit() completely retires the current JavaScript environment, there’s no harm in skipping `finally` blocks.
set_href('/foo');
let future = doSomethingElse()
block_on(future)
if (some_condition) {
set_href('/bar');
}
This code makes the bug clearer: doSomethingElse is effectively allowing the page to exit. This would be no different in many apps, even in Rust.

The browser does not start a process when you set `window.location.href`. It starts a process after your code exits and lets the event loop run other tasks. The `await` in the example code is what allows other tasks to run, including the task to load a new page (or quit an app, etc.) - the task that was added when you set `window.location.href`.
If that's not clear
// task 1
window.location.href = '/foo' // task2 (queues task2 to load the page)
let content = await response.json(); // adds task3 to load json
// which will add task4
// to continue when finished
// task4
if (content.onboardingDone) {
window.location.href = "/dashboard";
} else {
window.location.href = "/onboarding";
}
task2 runs after task1. task1 exits at the `await`. task2 clears out all the tasks, so task3 and task4 never run.

The author expects the side-effect - navigation to a new page - of the window.location.href setter to abort the code running below it. This obviously won't happen because there is no return in the first if-statement.
*Simplified*, the semantics of `await` are just syntactic sugar:
const value = await someFunction()
console.log(value);
is syntactic sugar for:
return someFunction().then(function(value) {
// this gets executed after the return IF
// something else didn't remove all events like
// loading a new page
console.log(value);
});
I specifically mean this part: "Rust would never have such a dumb library design".
One could then also say that Rust programmers would never make such a cyclical argument.
I'm scared of Ruby because I catch bugs at runtime all the time, but here's the thing: it ends up working before a commit, it was easy enough to get there, and it's satisfying to read and edit the code. Now whether I can keep going like this if the project becomes bigger is the question.
The location.href issue is really a JavaScript problem that has been inherited by TS. Because JS allows modifying attributes, the browser kind of has to take the change into account. But it's not like Ruby's `exit`. The page is still there until the next page loads, and this makes total sense once you know it.
It's just so brittle. How can anyone think this is a good idea?
Like, how can anyone think that requiring the user to always remember to explicitly write `mutex.unlock()` or `defer mutex.unlock()` instead of just allowing optional explicit unlock and having it automatically unlock when it goes out of scope by default is a good idea? Both Go and Zig have this flaw. Or, how can anyone think that having a cast that can implicitly convert from any numeric type to any other in conjunction with pervasive type inference is a good idea, like Rust's terrible `as` operator? (I once spent a whole day debugging a bug due to this.)
As a side note, I hate the `as` cast in Rust. It's so brittle and dangerous it doesn't even feel like a part of the language. It's like a JavaScript developer snuck in and added it without anyone noticing. I hope they get rid of it in an edition.
As a side note, I hate the `as` cast in Rust. It's so brittle and dangerous it doesn't even feel like a part of the language. It's like a JavaScript developer snuck in and added it without anyone noticing. I hope they get rid of it in an edition.
Rust language hat on: I hope so too. We very much want to, once we've replaced its various use cases.
We have `.into()` for lossless conversions like u32 to u64.
We need to fix the fact that `usize` doesn't participate in lossless conversions (e.g. even on 64-bit you can't convert `usize` to `u64` via `.into()`).
We need to fix the fact that you can't write `.into::<u64>()` to disambiguate types.
And I'm hoping we add `.trunc()` for lossy conversion.
And eventually, after we provide alternatives for all of those and some others, we've talked about changing `as` to be shorthand for `.into()`.
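A small sketch of the difference in today's Rust: `as` silently truncates, while the trait-based conversions make the intent explicit. (`.trunc()` above is a proposal, so this sketch uses the existing `From`/`TryFrom` instead.)

```rust
use std::convert::TryFrom;

fn main() {
    let big: u64 = 0x1_0000_002A; // 2^32 + 42

    // Lossy: `as` just keeps the low 32 bits, with no warning.
    let truncated = big as u32;
    assert_eq!(truncated, 42);

    // Explicitly fallible: `TryFrom` reports the overflow instead.
    assert!(u32::try_from(big).is_err());

    // Lossless widening: `From`/`Into` only exist where no data is lost.
    let small: u32 = 7;
    let widened: u64 = u64::from(small);
    assert_eq!(widened, 7);
}
```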
Not to mention this sort of proliferation of micro-calls for what should be <= 1 instruction has a cost to debug performance and/or compile times (though this is something that should be fixed regardless).
A method call like `.trunc()` is still going to be abysmally less ergonomic than `as`. It relies on inference or turbofish to pick a type, and it has all the syntactic noise of a function call on top of that.
If `as` gets repurposed for safe conversions (e.g. u32 to u64), there's some merit to the more hazardous conversions being slightly noisier. I'm all for them being no noisier than necessary, but even in my most conversion-heavy code (which has to convert regularly between usize and u64), I'd be fine writing `.into()` or `.trunc()` everywhere, as long as I don't have to write `.try_into()?` or similar.
Not to mention this sort of proliferation of micro-calls for what should be <= 1 instruction has a cost to debug performance and/or compile times (though this is something that should be fixed regardless).
I fully expect that such methods will be inlined, likely even in debug mode (e.g. `#[inline(always)]`), and compile down to the same minimal instructions.
I'd be fine writing `.into()` or `.trunc()`
Yes, this is specifically what I'm disagreeing with.
I fully expect that such methods will be inlined, likely even in debug mode (e.g. `#[inline(always)]`), and compile down to the same minimal instructions.
That's the cost to compile time I mentioned.
We need to fix the fact that you can't write `.into::<u64>()` to disambiguate types.
This confused me too at first. You have to do `u64::from(_)` right? It makes sense in a certain way, similar to how you have to do `Vec::<u64>::new()` rather than `Vec::new::<u64>()`, but it is definitely more annoying for `into`.
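A quick sketch of that asymmetry - the turbofish attaches where the type parameter actually lives:

```rust
fn main() {
    // The type parameter is on the type `Vec<T>`, so the turbofish goes there:
    let v = Vec::<u64>::new();
    assert!(v.is_empty());

    // `Into`'s type parameter is on the trait, not the method, so
    // `.into::<u64>()` is rejected; you disambiguate from the `From`
    // side instead:
    let n = u64::from(42u32);
    assert_eq!(n, 42);
}
```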
We need to fix the fact that you can't write `.into::<u64>()` to disambiguate types.
Yes, that would be great. In the meantime, if you can't wait but want something like this, you can DIY it via an extension trait.
It's very easy to write it yourself, this is all it takes:
pub trait To {
fn to<T>(self) -> T where Self: Into<T> {
<Self as Into<T>>::into(self)
}
}
// blanket impl so every type gets the method
impl<T> To for T {}
Now whenever this trait is in scope, you get to simply do .to::<u64>() and it does exactly what Into does. If you prefer adding a tiny dependency over copy-pasting code, I've also published a crate that provides this: https://crates.io/crates/to_method

And eventually, after we provide alternatives for all of those and some others, we've talked about changing `as` to be shorthand for `.into()`.
Whoa, that could be awesome. It's always felt a bit unfortunate that you can't write `val.into::<SomeExplicitType>()` - because the type parameter is on the trait, not the method. Of course, `SomeExplicitType::from` works, but sometimes that slightly upsets the flow of code.
Having just `val as SomeExplicitType` might be really nice for that common case. I do wonder if it'd feel too magic… but I'm optimistic to see what the lang team comes up with.
(Imagine if Python 3 let you import Python 2 modules seamlessly.)
And it doesn't exactly help to compile newer software on an older OS.
Another painful bugbear is when I'm converting to/from usize and I know that it is really either going to be a u64 or maybe u32 in a few cases, and I don't care about breaking usize=u128 or usize=u16 code. Give me a way to say that u32 is Into<usize> for my code!
You can call functions inside your function Main, but these functions can't call any further functions (the exception being flat helper functions defined inside your function).
I think it would save a huge chunk of time by just having all programs really nice and flat. You'd naturally gravitate towards mechanisms that make programs flat.
It is a tradeoff that makes some things easier. And probably the compiler is not mature enough to catch this mistake yet, but it will be at some point.
Zig being an auteur language is a very good thing from my perspective, for example you get this new IO approach which is amazing and probably wouldn’t happen if Andrew Kelley wasn’t in the position he is in.
I have been using Rust to write storage engines for the past couple of years, and its async and IO systems have many performance mistakes. The whole ecosystem feels like it is basically designed for writing web servers.
An example is a file format library using IO traits everywhere and using buffered versions for whatever reason. Then you get a couple of extra memcpy calls that are copying huge buffers. Combined with the global-allocation-everywhere approach, it generates a lot of page faults, which tanks performance.
Another example was file format and plain compute libraries using async IO traits everywhere, which forces everything to be Send + Sync + 'static, which makes it basically impossible to use in a single-threaded context with local allocators.
Another example is a library using Vec everywhere, even when the final size is known, generating memcpys as the Vec grows. The language just makes it too easy.
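A sketch of that last point: when the final length is known, `Vec::with_capacity` does one allocation up front instead of repeated grow-and-copy cycles. (The element count here is made up for illustration.)

```rust
fn main() {
    // Growing a Vec from empty reallocates (and memcpys) repeatedly
    // as its capacity doubles.
    let mut grown: Vec<u64> = Vec::new();
    for i in 0..1_000 {
        grown.push(i);
    }

    // If the final size is known up front, reserve it once:
    let mut sized: Vec<u64> = Vec::with_capacity(1_000);
    for i in 0..1_000 {
        sized.push(i);
    }

    assert_eq!(grown, sized);
    // The pushes never exceeded the reserved capacity, so no
    // reallocation happened after the single up-front one.
    assert!(sized.capacity() >= 1_000);
}
```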
I’m not saying Rust is bad, it is a super productive ecosystem. But it is good that Zig is able to go deeper and enable more things. Which is possible because one guy can just say “I’ll break the entire IO API so I can make it better”.
if you are worried about this
Obviously nobody knows they've made this mistake, that's why it is important for the compiler to reject the mistake and let you know.
I don't want to use an auteur language, the fact is Andrew is wrong about some things - everybody is, but because it's Andrew's language too bad that's the end of the discussion in Zig.
I like Rust's `break 'label value`. It's very rarely the right thing, but sometimes, just sometimes, it's exactly what you needed and going without is very annoying. However IIUC for some time several key Rust language people hated this language feature, so it was blocked from landing in stable Rust. If Rust was an auteur language, one person's opinion could doom that feature forever.
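For anyone who hasn't seen it, a small sketch of `break 'label value` - a labeled `loop` can evaluate to a value, even when the `break` comes from inside a nested loop (the data and label names here are invented):

```rust
fn main() {
    let data = [3, 7, 12, 5];

    // The outer `loop` is an expression; `break 'search value` both
    // exits it and supplies its value.
    let first_big = 'search: loop {
        for &n in &data {
            if n > 10 {
                break 'search Some(n);
            }
        }
        break 'search None;
    };

    assert_eq!(first_big, Some(12));
}
```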
Here's a page of non-bugs: https://www.reddit.com/r/ProgrammingLanguages/comments/1hd7l...
(To be clear to others, it's not even that this is 100% a bad thing, but people love to shit on "design by committee" so much, it helps to have a bit of the opposite)
What's happening is that the compiler knows the two errors come from disjoint error sets, but it promotes them both to anyerror.
Details at https://github.com/ziglang/zig/issues/25046
About the article itself: the part of wrapping a structure in a mutex because it is accessed concurrently is a bit of a red flag. If you are really working in large codebases, you'd like to avoid having to do that: you'd much rather encapsulate that structure in a service and make sure all access to the structure is queued. Much simpler and less chance for nasty deadlocks.
To do a fair analysis the author should have compared with a language like Java, Scala or C#.
Wouldn't that be unfair to them? They still use null or Nullable. They don't have ADTs, their thread-safe invariant is maintained by docs, etc.
Java doesn't have a discriminated union for sure (and neither does C#, as of 8.0). It does have a `|` operator that can cast two objects to the nearest common ancestor.
Having nullable support is the issue. I've played around with it in C#. Nullables are horrible. And I say this as someone who was formerly in the "Option<T> is horrible" camp.
You can easily cause type confusion, and the number of times a non-nullable value has turned out to be null (or at least appeared as null in the debugger) was greater than zero. To be fair, there was reflection and code generation involved.
From another comment of mine,
type Exp =
UnMinus of Exp
| Plus of Exp * Exp
| Minus of Exp * Exp
| Times of Exp * Exp
| Divides of Exp * Exp
| Power of Exp * Exp
| Real of float
| Var of string
| FunCall of string * Exp
| Fix of string * Exp
;;
Into the Java ADTs that you say Java doesn't have for sure:
public sealed interface Exp permits UnMinus, Plus, Minus, Times, Divides, Power, Real, Var, FunCall, Fix {}
public record UnMinus(Exp exp) implements Exp {}
public record Plus(Exp left, Exp right) implements Exp {}
public record Minus(Exp left, Exp right) implements Exp {}
public record Times(Exp left, Exp right) implements Exp {}
public record Divides(Exp left, Exp right) implements Exp {}
public record Power(Exp base, Exp exponent) implements Exp {}
public record Real(double value) implements Exp {}
public record Var(String name) implements Exp {}
public record FunCall(String functionName, Exp argument) implements Exp {}
public record Fix(String name, Exp argument) implements Exp {}
And a typical ML-style evaluator, just for the kicks:
public class Evaluator {
public double eval(Exp exp) {
return switch (exp) {
case UnMinus u -> -eval(u.exp());
case Plus p -> eval(p.left()) + eval(p.right());
case Minus m -> eval(m.left()) - eval(m.right());
case Times t -> eval(t.left()) * eval(t.right());
case Divides d -> eval(d.left()) / eval(d.right());
case Power p -> Math.pow(eval(p.base()), eval(p.exponent()));
case Real r -> r.value();
case Var v -> context.valueOf(v.name());
case FunCall f -> eval(funcTable.get(f.functionName()), f.argument());
case Fix fx -> eval(context.valueOf(fx.name()), fx.argument());
};
}
}
I advise you to update your language knowledge to Java 24, C# 13, Scala 3.
I'll update it when I need it :P I'm pretty sure I'll never need Scala 3.
Into the Java ADTs that you say Java doesn't have for sure
That still seems like casting to a common object + a bunch of instanceof. But I guess if it looks like a duck and walks like a duck.
C# 13
Wait. You mentioned C# has ADTs via sealed classes IIUC. Why the hell do they have a https://github.com/dotnet/csharplang/issues/8928 ticket for discriminated unions then?
It implies that there are some differences between sealed interfaces and discriminated unions. Perhaps how they handle value types (structs and ref structs).
Java and Scala certainly have ADTs, while Scala and C# have nullable types support.
Also, you failed to mention that generic ADTs in Java are still abysmal (it's relevant, because the topic started as boosting productivity in Java with ADTs; I don't find ClassCastException as really boosting anything outside my blood pressure):
sealed interface Result<T, E> permits Ok, Error { }
record Error<E>(E error) implements Result<Object, E> {}
record Ok<T>(T value) implements Result<T, Object> {}
//public <T, E> Object eval(Result<T, E> exp) {
// return switch (exp) {
// case Error<E> error -> error.error;
// case Ok<T> ok -> ok.value;
// };
//}
public <T, E> Object eval(Result<T, E> exp) {
return switch (exp) {
case Error error -> (E) error.error;
case Ok ok -> (T) ok.value;
};
}
Guess I'll wait for Java 133 to get reified generics.

Though due to its nullable-by-default type system and backward compatibility, there's a decent amount of footguns if you're trying to mix Java's FP & ADT with code that utilizes nulls.
About your code example, you could just do something like this to avoid explicit casting
sealed interface Result<T,E> {
record Ok<T,E>(T value) implements Result<T,E> {}
record Error<T,E>(E error) implements Result<T,E> {}
public static <T,E> Object eval(Result<T,E> res) {
if (res instanceof Error<T,E>(E e)) // Rust's if let
System.out.println(e);
return switch (res) {
case Ok(T v) -> v;
case Error(E e) -> e;
};
}
}
The new "pattern matching" feature of `instanceof` is certainly handy to avoid stupid ClassCastException.Given the expert level, you are certainly aware that most ML languages don't have reified polymorphism on their implementations, right?
Because your English reading comprehension missed the fact I was talking about the nullability improvements.
Thank you :P Never claimed to be a native English speaker.
Were there any meaningful changes in nullability semantics between C# 8.0 and C# 14.0? The issue I encountered was related to a complex game engine doing some runtime reflection, dependency injection, code gen and so forth.
I also never claimed to be an ML expert. But whether they are reified or not doesn't change my point that ADTs in Java, much like generics, look like a design afterthought.
Apart from that, Scala uses the Option type instead of null, and it has ADTs. And I don't know about C#, but in Java and Scala using low-level mutexes is considered a code smell; the standard libraries provide higher-level concurrent data structures.
On top of that, Scala provides several widely used IO frameworks that make dealing with concurrency and side effects much simpler and less error-prone; it would win the comparison with Rust here.
On top of that, Scala provides several widely used ...
With Scala is less what you miss, and more what you have.
- Custom operators. Those are great for DSLs and rendering your code to look like utter gibberish, and confuse IDEs.
- SBT. Shudder.
- Scala 2 codebases.
Anyways, IMO it would have been much more interesting and useful to compare Rust with Scala/Java/C#, whatever the outcome of the comparison would be.
Your last point is of course not relevant when you're considering which language to pick for your new enterprise application.
Sure, I might be writing a new app, but what if some dependency is stuck on Scala 2.12 or earlier?
As for your latter point, I still think Rust would be more productive than all of those listed. I would expect Scala to give Rust the biggest run for its money. C# could probably be closest to Rust perf-wise. Or not - depends on the workload.
The code will compile just fine. The Zig compiler will generate a new number for each unique `error.*`.
This is wild. I assume there's at least tooling to catch this kind of error, right?
If the author had written `FileError.AccessDenid`, this would not have compiled, as it would be comparing with the `FileError` error set.
The global error set is pretty much never used, except when you want to allow users to provide their own errors, so you allow the method to return `anyerror`.
The error presented in this example would not be written by any Zig developer. Heck, before this example I didn't even know that you could compare directly to the global error set, and I maintain a small library.
Zig and Rust do not have the same scope. I honestly do not think they should be compared. Zig is better compared to C, and Rust is better compared to C++.
The languages are very different in scope, scale, and design goals; yes. That means there's tradeoffs that might make one language or the other more suitable for a particular person or project, and that means it can be interesting and worthwhile to talk about those tradeoffs.
In particular, Rust's top priority is program correctness -- the language tries hard not to let you write "you're holding it wrong" bugs, whereas Zig tends to choose simplicity and explicitness instead. That difference in design goals is the whole point of the article, not a reason to dismiss it.
I don't know enough Zig to have a qualified opinion on the particular example (besides being very surprised it compiled). However, I thought this post from the front page the other day had more practical and thoughtful examples of this kind of thing: https://www.openmymind.net/Im-Too-Dumb-For-Zigs-New-IO-Inter...
The error presented in this example would not be written by any Zig developer.
No True Scotsman fallacy. It was written by the Zig developer who wrote it.
E.g., "No true Scotsman would hate haggis!" "You're wrong, my friend Angus hates haggis, and he's a Scotsman through and through." "Well, if he hates haggis, then he isn't a true Scotsman!"
The first speaker isn't changing his definitions, so he's not actually engaging in the fallacy. Rather, he's insisting on his own idiosyncratic definition of what standards you must meet to be considered a "true" Scotsman, and insisting that Angus doesn't meet his standard.
But that's enough digression on "No True Scotsman". We now return you to your regularly-scheduled arguing over code. :-)
is only a fallacy when it involves changing your definitions after counterexamples are presented
That is false.
Edit: The fact is that we have the counterexample: the piece of code written by a Zig developer who somehow isn't actually a Zig developer. Where the counterexample comes from, who presents it, and when isn't relevant to whether this is a fallacy. The Wikipedia article overstresses the order of things, but that is never an issue with a fallacy. There are thousands upon thousands of examples where, e.g., someone claims that people aren't Christians because they don't follow Christ's teachings and that claim is called out as a No True Scotsman fallacy--it implicitly redefines what "Christian" is for the sake of denying that a Christian actually is a Christian, in order to preserve some claim of some virtue of Christianity in the face of clear evidence to the contrary.
As for the second part of your edit, arguing about definitions of who fits into the group is commonly mistaken as "no true Scotsman". For example, just because you call yourself a Scotsman doesn't necessarily make you one: if you have no Scottish ancestry, do not live in Scotland, and do not hold a Scottish passport, few people would agree with you that you are a Scotsman just because you assert that you are. They would insist on some standard for calling you a Scotsman that goes beyond "I say that I am one, therefore I clearly am one". Or, to use the example of Christians: millions of people around the world would say "no true Christian would deny the divinity of Christ: it's a fundamental teaching of Christianity". That is not a "no true Scotsman" argument, despite the presence of people who call themselves Christians who do deny the divinity of Christ. It is, rather, saying "You call yourself a Scotsman but you do not measure up to the commonly-accepted standard of being a Scotsman".
OTOH, your example of people saying "He's not really a Christian because he doesn't live up to Christ's teaching" is a "no true Scotsman" fallacy, attempting to redefine the commonly-accepted standard. Because nearly every Christian knows that we (I'm a Christian myself) never fully live up to Christ's teaching, and can always do better. So trying to say "He's not a true Christian because he doesn't measure up" is to redefine the standard in a way that would exclude pretty much everybody, and that's not at all what it means.
Short version: membership in certain groups can have standards (you're not a Zig developer if you've never written a single line of Zig code), and asserting those standards is not necessarily this fallacy, even if those standards are disputed by some people. (E.g., the case of groups that call themselves Christian while denying the divinity of Christ, which everyone NOT part of those groups would say is a fundamental part of the standard for being a true Christian).
*EDIT:* In my last sentence I wrote "everyone NOT part of those groups"; I should say "every Christian NOT part of those groups" if I wanted to be fully accurate. Because many non-Christians would deny that accepting the divinity of Christ is part of the definition, but pretty much all Christians who do accept that Christ was indeed God in human form will agree that that doctrine is one of the fundamental requirements, and that if you deny it you can't truthfully call yourself a Christian. (Hopefully I phrased that in a way that's both accurate and comprehensible.)
To say "No true Scotsman would dislike haggis" is to assert "If A, then B": If you are a true Scotsman, then you will like haggis. The response "Angus doesn't like haggis" is asserting "not B". To which the response "therefore he's not a true Scotsman" is asserting "not A". But "if A, then B" logically implies "if not B, then not A". Therefore when the person's definitions don't change, *it is not a fallacy*. It might be wrong — his definition of a "true" Scotsman might be a false premise — but the conclusion logically follows from the premise, so it is not a fallacy.
It’s more like picking up a fork and being surprised to find out that it’s burning hot without any visible difference.
As I stated before, this error wouldn't exist in any codebase in the first place: look how the method that fails returns a `FileError` and not an `anyerror`.
It could be rightly argued that it still shouldn't compile though.
Like here in `std/tar.zig`: https://github.com/ziglang/zig/blob/50edad37ba745502174e49af...
Or here in `main.zig`: https://github.com/ziglang/zig/blob/50edad37ba745502174e49af...
And in a bunch of other places: https://github.com/search?q=repo%3Aziglang%2Fzig+%22%3D%3D+e...
I already commented on Zig compiler/stdlib code itself, but here's Tigerbeetle and Bun, the two biggest(?) Zig codebases:
https://github.com/search?q=repo%3Atigerbeetle%2Ftigerbeetle...
https://github.com/search?q=repo%3Aoven-sh%2Fbun%20%22%3D%3D...
https://github.com/tigerbeetle/tigerbeetle/blob/b173fdc82700...
https://github.com/tigerbeetle/tigerbeetle/blob/b173fdc82700... (different file, same check.)
If I just need to check for one specific error and do something, why do I need a switch?
In Rust you have both `match` (like switch) and `if let`, which pattern matches just one variant, but both are properly checked by the compiler to allow only valid values.
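A sketch of that distinction, using an error enum modeled loosely on the thread's `FileError` example (the enum and its variants are invented for illustration):

```rust
// Hypothetical enum modeled on the thread's FileError example.
#[derive(Debug)]
enum FileError {
    AccessDenied,
    NotFound,
}

fn describe(err: &FileError) -> &'static str {
    // `match` must cover every variant, and a typo like
    // `FileError::AccessDenid` is a compile error, not a fresh value.
    match err {
        FileError::AccessDenied => "access denied",
        FileError::NotFound => "not found",
    }
}

fn main() {
    let err = FileError::AccessDenied;

    // `if let` when only one variant matters:
    if let FileError::AccessDenied = err {
        println!("bailing out early");
    }

    assert_eq!(describe(&err), "access denied");
    assert_eq!(describe(&FileError::NotFound), "not found");
}
```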
There is nothing in our domain of distributed systems based on SaaS products, mobile OSes, and managed cloud environments that would profit from a borrow checker.
In Rust lifetimes for references are part of the type, so &'a str and &'b str could be different types, even though they're both string slice references.
Beyond that, Rust tracks two important "thread safety" properties called Sync and Send, so if your Thing ends up needing to be Send (because another thread gets given this type) but it's not Send, that's a type error, just as surely as if it lacked some other property needed for whatever you do with the Thing, like not being totally ordered (Ord) or not being convertible into an iterator (IntoIterator).
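A small sketch of Send in practice: `Arc` works across `thread::spawn`, while swapping in `Rc` would be rejected at compile time (so that case is only described in a comment, not shown):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(vec![1, 2, 3]);

    // `Arc<Vec<i32>>` is Send, so the closure that captures it can be
    // handed to another thread.
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || shared.iter().sum::<i32>())
    };
    assert_eq!(handle.join().unwrap(), 6);

    // Replacing Arc with Rc would be a type error at the `spawn` call:
    // "`Rc<Vec<i32>>` cannot be sent between threads safely", because
    // Rc's non-atomic refcount makes it !Send.
}
```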
I have found that Rust's strong safety guarantees give me much more confidence when touching the codebase. With that extra confidence I'm much more willing to refactor even critical parts of the app, which has a very positive effect on my productivity, and long-term maintainability.
That is usually why you have tests for your code. But if you have no tests a programming language with a strict compiler is of course more helpful. But the best is to write tests. Then you can also refactor code with confidence written in "sloppy" programming languages.
Tests should be used where you can't prove correctness statically. But it's better if you can.
The ultimate end point of this is formal verification, where you need very few - if any - runtime tests. But formally verifying software is extremely difficult, so you can't usually do that.
But yeah usually it's a good idea to have a few tests anyway. You just need fewer tests the more properties you can prove formally.
But for some reason writing tests always reminds me of the xkcd "Standards" comic (https://xkcd.com/927/). Instead of "fixing standards by creating another standard", now it's catching code bugs with more code.
At least for type system, it gets maintained by language maintainers, not project maintainers.
On the other hand, that strictness is precisely what leads people to end up with generally reasonable code.
match foo {
(3..=5, x, BLABLABLA) => easy(x),
_ => todo!("I should actually implement this for non-trivial cases"),
}
The nice thing about todo!() is that it type checks - obviously it always diverges, so the type match is trivial - but it means this compiles and, so long as we don't cause the non-trivial case to happen, it even works at runtime.

fn foo() -> impl Display {
NotDisplay::new()
}
and a test references `foo`, then it gets replaced for the purposes of the test with:

fn foo() -> impl Display {
panic!("`NotDisplay` doesn't `impl Display`")
}
This should not be part of the language, but part of the toolchain. Same thing for the borrow checker.
Adding `return todo!()` works well enough for some cases, but not all, because it can't confirm against impl Trait return types.
And these strategies are things that people need to be taught about, individually. I'm not saying that the current state is terrible, just that there might be things we can do to make them better.
I'm proposing this not as a specific feature but as a general strategy: everything the compiler can conceivably recover from in a way that allows the rest of the application to run, should.
I do think that'd be useful in a variety of cases, especially for testsuites. I don't think I'd want to go as far as trying to guess how to substitute `Arc`/`Mutex`/`RwLock` for a failed borrow, but there are a few different strategies that do seem reasonably safe.
In addition to the automatic todo!() approach, there's the approach of compile-time tainting of branches of the item tree that failed to compile. If something doesn't compile, turn it into an item that when referenced makes the item referencing it also fail to compile. That would then allow any items that do compile to get tested in the testsuite.
Adding `return todo!()` works well enough for some cases, but not all, because it can't confirm against impl Trait return types.
Not in the fully general case, but ! does implement Display, so it would work in the case you posted.
Although you said "mode of operation" and I can't get behind that idea, I think the choice to just wrap overflow by default for the integer types in release builds was probably a mistake. It's good that I can turn it off, but it shouldn't have been the default.
[1] https://downloads.haskell.org/~ghc/7.10.3-rc1/users_guide/ty...
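For reference, the integer methods that make the overflow behavior explicit, independent of the build mode - a tiny sketch:

```rust
fn main() {
    // In a debug build, `u8::MAX + 1` panics; in a release build it
    // wraps. The explicit methods behave the same in both modes:
    assert_eq!(u8::MAX.wrapping_add(1), 0);         // wraps around
    assert_eq!(u8::MAX.checked_add(1), None);       // reports overflow
    assert_eq!(u8::MAX.saturating_add(1), u8::MAX); // clamps at the max
}
```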
I personally see Rust as an ideal "second system" language, that is, you solve a business case in a more forgiving language first, then switch (parts) to Rust if the case is proven and you need the added performance / reliability.
Assigning a value to 'window.location.href' doesn't immediately redirect you, like I thought it would.
That's not a "Typescript" or language issue, that's a DOM/browser API weirdness
Think about what is happening. When you build for the browser as a target, TSC basically transpiles the TypeScript code to JavaScript. The browser APIs that TypeScript provides are a bunch of type definitions. When it transpiles the code, the only thing it can really do is check that the types match up.
It’s just a logic bug.
E.g., the code doesn’t match their own English description of the logic: “If yes, redirect to the specific page. If not, go to the dashboard or onboarding page.”
The code is missing the “if not” (probably best expressed using an “else” clause following the if block).
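A minimal sketch of that control-flow fix, written in Rust to match the rest of the thread. The route strings and flags are hypothetical, not from the original code:

```rust
// Matches the English description quoted above: "If yes, redirect to the
// specific page. If not, go to the dashboard or onboarding page." The
// explicit else branches ensure exactly one target is chosen before any
// navigation side effect happens.
fn resolve_redirect(has_specific_page: bool, onboarded: bool) -> &'static str {
    if has_specific_page {
        "/specific-page"
    } else if onboarded {
        "/dashboard"
    } else {
        "/onboarding"
    }
}

fn main() {
    assert_eq!(resolve_redirect(true, false), "/specific-page");
    assert_eq!(resolve_redirect(false, true), "/dashboard");
    assert_eq!(resolve_redirect(false, false), "/onboarding");
}
```

With the navigation target computed once up front, whether the redirect itself happens synchronously no longer matters.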
It's fair to point out that the browser API is confusing. You might not think of setting a property as kicking off an asynchronous operation, especially if it seems to have an instantaneous effect at first.
But the basic control-flow logic of that code is wrong. Confusion about whether a side effect from an API call might bail you out of your error is beside the point.
Languages are a collection of tradeoffs, so I'm pretty sure you could find such examples for any two languages in existence. That also makes these kinds of comparisons ~useless.
For example, Python is, AFAIK, the leading language for getting something done quickly. At least as per AoC leaderboards. It's a horrible language to have in production, though (experienced it with a 700k+ LOC codebase).
Rust is also OK for AoC, but (based on stats I saw) you will need about 2x the time to implement. In production software that's definitely worth it, because of the lower cost of fixing stupid mistakes, but a code snippet will not show you that.
https://en.wikipedia.org/wiki/Programming_in_the_large_and_p...
It's always funny when I see these kinds of posts.
That's the main reason why I chose to compare it with TypeScript, which is also a statically typed language but just can't catch some issues that Rust can.
I kind of wish Haxe had taken TypeScript's place, as it is simply a better language overall.
> It's always funny when I see these kinds of posts.
What's funny about it?
This is an internal tool, not our main product, so I don't see any value in spending extra time writing needlessly long-winded "did the user pass the right thing" tests or getting a static analyzer going. I just chose a language with as few build-system/static-analyzer/built-in-unit-testing/separate-runtime-installation headaches as possible, to get things going more easily.
That was three years ago and the tool is about 4x more featureful than when it started.
They are very productive for first drafts. Then the code tends to “melt” as more people work on it or you do refactors because there is no static type system to catch obvious problems or enforce any order.
With Rust, they frontload the complexity, so it's considered to be "hard to learn". But I've got to say, Rust's "complexities" have allowed me to build a taller software tower than I've ever been able to build before in any other language I've used professionally (C/C++/Java/Swift/Javascript/Python).
And that's the thing a lot of people don't get about Rust, because you can only really appreciate it once you've climbed the steep learning curve.
At this point I've gone through several risky and time-consuming (weeks) refactors of a substantial Rust codebase, and every time it's worked out I'm amazed it wasn't the kind of disaster I've experienced refactoring in other languages, where the refactor has to be abandoned because it got so hairy and everyone lost all hope and motivation.
They don't tell you about that kind of pain in the Python tutorial when they talk about how easy it is to not have to type curly braces and have dynamic types everywhere. And you don't really find that pleasure in Rust until you've built enough experience and code to have to do a substantial refactor. So I can understand why the Rust value proposition is dubious for people who are new to the language and programming in general.
I had to explain this to my students today using XKCD #1987, that although Python is considered a universally "simple" and "easy to use" language, a lot of the complexity in that ecosystem is paid for on the backend.
No, that's just an unrelated coincidence. Python happens to be a good language with awful tooling, but there are also good languages with good tooling and awful languages with awful tooling.
Presumably the lock is intended to be used for blocking until the commit is created, which would only be guaranteed after the await. Releasing the lock after submitting the transaction to the database but before getting confirmation that it completed successfully would probably result in further edge cases. I'm unfamiliar with Rust's async, but is there a join/select that should be used to block, after which the lock should be unlocked?
You can ask any professional python programmer how much time they've spent trying to figure out the methods that are callable on the object returned by some pytorch function, and they will all tell you it's a challenge that occurs at least weekly. You can ask any C++ programmer how much time they've spent debugging segfaults. You can ask any java programmer how much time they've spent debugging null pointer exceptions. These are all common problems that waste an incredible amount of time, that simply do not occur to anywhere close to the same extent in Rust.
It's true that you can get some of these benefits by writing tests. But would tests have prevented the issue that OP mentioned in his post, where acquiring a mutex from one thread and releasing it from another is undefined? It's highly doubtful, unless you have some kind of intensive fuzz-testing infrastructure that everyone talks about and no one seems to actually have. And what is more time-efficient: setting up that infrastructure, running it, seeing that it detects undefined behavior at the point of the mutex being released, and realizing that it happened because the mutex was sent to a different thread? Or simply getting a compile error the moment you write the code that says "hey pal, mutex guards can't be moved to a different thread". Plus, everyone who's worked on a codebase with a lot of tests can tell you that you sometimes end up spending more time fixing tests than you do actually writing code. For whatever reason, I spend much less time fixing types than fixing tests.
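The compile-time guarantee described above comes from the fact that std's `MutexGuard` is not `Send`: the guard can only be dropped on the thread that acquired it, and sharing happens through `Arc` instead. A minimal sketch (the counter function is hypothetical, just for illustration):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each increment a shared counter through a Mutex.
fn parallel_count(n: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..n {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // The guard is acquired AND dropped on this same thread.
            // Trying to send the MutexGuard itself to another thread is
            // a compile error (MutexGuard is !Send) -- the "hey pal"
            // diagnostic described above, instead of runtime UB.
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    assert_eq!(parallel_count(4), 4);
}
```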
There is a compounding benefit as well. When you can refactor easily (and unit tests often do not make refactoring much easier...), you can iterate on your code's architecture until you find one that meshes naturally with your domain. And when your requirements change and your domain evolves, you can refactor again. If refactoring is too expensive to attempt, your architecture will become more and more out-of-sync with your domain until your codebase is unmaintainable spaghetti. If you imagine a simple model where every new requirement either forces you into refactoring your code or spaghettifying your code, and assume that each instance of spaghettification induces a 1% dev-speed slowdown, you can see that these refactors become basically essential: after 100 new requirements, the spaghetti coder will be operating at 36% of the productivity of the counterfactual person who did all the refactors. Seen this way, it's clear that you have to do the refactors, and then a major component of productivity is whether you can do them quickly, an area where it's widely agreed Rust excels.
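The 36% figure above is just the compounding slowdown worked out; a quick check of the arithmetic (the 1%-per-requirement model is the toy assumption from the comment, not a measured number):

```rust
// Productivity left after `requirements` compounding slowdowns of
// `slowdown_per_req` each: (1 - s)^n.
fn productivity_after(requirements: u32, slowdown_per_req: f64) -> f64 {
    (1.0 - slowdown_per_req).powi(requirements as i32)
}

fn main() {
    let p = productivity_after(100, 0.01);
    // 0.99^100 is roughly 0.366, i.e. ~36% of baseline productivity.
    assert!((p - 0.366).abs() < 0.001);
}
```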
There are plenty of places we can look at Rust and find ourselves wanting more. But that doesn't mean we shouldn't be proud of what Rust has accomplished. It has finally brought many of the innovations of ML and Haskell to the masses, and innovated new type-system features on top of that, leading to a very productive and pleasantly-designed language.
(I also left this comment on reddit, and am copying it here.)
The problem is not with TypeScript or even JavaScript but an odd Browser API where mutating some random value of an object results in a redirect on the page, but not synchronously.
Even if the language of the browser were Rust, there's nothing about the type system specifically that would have caught this bug (as far as I can tell, anyways. Presumably there's something in the background periodically reading the value of `href` and updating the page accordingly, but since that background job only would have needed read and not write access to the variable, I don't think the borrow checker would have helped here)
Setting the value of `href` navigates to the provided URL. [0]
It would have been caught, because this API (setters) is impossible in Rust. At best, you'd have a `.set_href(String).await`, which would suspend until the location has been updated and the value stabilized. At worst, you'd have a public `.href` field, but because the setter pattern is impossible, you'd know there must be some process checking for and scheduling updates.
[0] https://developer.mozilla.org/en-US/docs/Web/API/Location/hr...
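The explicit-setter idea above can be sketched like this. Note that `Location` and `set_href` here are entirely hypothetical types invented for illustration, not a real browser binding, and a real version would likely be async rather than blocking:

```rust
// Hypothetical location type: no property setters exist in Rust, so
// navigation has to be an explicit method call.
struct Location {
    href: String,
}

impl Location {
    // An explicit setter can block (or, in async code, be awaited)
    // until the navigation has actually taken effect, so callers never
    // observe a half-updated value the way an implicit property
    // assignment allows.
    fn set_href(&mut self, url: &str) -> &str {
        self.href = url.to_string();
        // ...navigation would complete here before returning...
        &self.href
    }
}

fn main() {
    let mut loc = Location { href: String::from("/") };
    assert_eq!(loc.set_href("/dashboard"), "/dashboard");
}
```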
They definitely could have made it more ergonomic though. Pin is super confusing and there are a disappointing number of footguns, e.g. it's very easy to mess up loop/select and that's super common.
Related: https://man7.org/linux/man-pages/man2/futex.2.html#:~:text=%...
If you want this behavior, it's relatively simple to implement your own mutex on top of futex, but no one is going to expect the behavior it provides.
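As a rough illustration of the point above: a hand-rolled lock (sketched here as a trivial spinlock rather than a real futex-based mutex, to stay portable) has no guard tied to the locking thread, so it *can* be released from a different thread, which std's guard-based `Mutex` API deliberately prevents:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Minimal spinlock: no RAII guard, so lock and unlock are decoupled.
struct RawLock {
    locked: AtomicBool,
}

impl RawLock {
    const fn new() -> Self {
        RawLock { locked: AtomicBool::new(false) }
    }
    fn lock(&self) {
        // Spin until we flip `locked` from false to true.
        while self.locked.swap(true, Ordering::Acquire) {
            std::hint::spin_loop();
        }
    }
    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

fn main() {
    let lock = Arc::new(RawLock::new());
    lock.lock(); // acquired on the main thread...
    let l2 = Arc::clone(&lock);
    // ...and released from a different thread -- exactly the behavior
    // no one expects, which is why std ties unlocking to guard drop.
    thread::spawn(move || l2.unlock()).join().unwrap();
    lock.lock(); // succeeds: the other thread released it
    lock.unlock();
}
```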
And I find event loops vs. concurrency via mutexes to be an apples-to-oranges comparison. They both provide some form of concurrency, but not nearly in the same way.
Again, TypeScript is hardly considered a language in its own right; it's a tool used to keep JavaScript under some control on large projects. So comparing Rust and TypeScript at the language level is not a great match.
As for fearless refactoring, don't get me started. I experienced this the first time I ported a vanilla JS backend to a TypeScript version. It was awesome. I won't say it works in quite the same way as it does in Rust, but man, if you've ever ported a REST API written in JavaScript to TypeScript, you'd experience a similar effect.
What’s remarkable is that (a) I have very little Rust experience overall (mostly a Python programmer), (b) very little virtio experience, and (c) essentially no experience working with any of the libraries involved. Yet, I pulled off the refactor inside of a week, because by the time the project actually compiled, it worked perfectly (with one minor Drop-related bug that was easily found and fixed).
This was aided by libraries that went out of their way to make sure you couldn’t hold them wrong, and it shows.
- compile times
- compile times
- long compile times
It isn't that big of a deal in small projects, but if you pull in dependencies or have a lot of code, you need to think about compilation units upfront.
One thing I like about the JVM is hot code reloading. Most of the time, changes inside a method/function take effect immediately.
[1] https://github.com/rui314/mold [2] https://github.com/mozilla/sccache [3] https://github.com/rust-lang/rustc_codegen_cranelift
Most likely you'll have years before it's an issue and there are mitigations.
I have found that Rust's strong safety guarantees give me much more confidence when touching the codebase. With that extra confidence I'm much more willing to refactor even critical parts of the app, which has a very positive effect on my productivity, and long-term maintainability.
That's great, but the graph at the top shows your productivity more than doubling as the size of the project increases, which seems very dubious. Perhaps this is just intended as visual hyperbole, but it triggers my BS detector.
I’ve had similar experiences working with a large 1m+ SLOC Haskell codebase. It was straightforward to make large refactors because of the type system.
And we weren’t even using fancy things like linear types. Just plain old Haskell with a sprinkling of dependent types in core, critical sections.
Rust to me makes a lot more sense. The compiler gives reasonable errors, the code structure is clean and it's obvious where everything should go.
I just can't deal with Typescript at all. There is a sense of uncertainty in TypeScript that is just unbearable. It is really hard to be confident about the code.