Java at 30: Interview with James Gosling
Java syntax isn't perfect, but it is consistent and predictable. And hey, if you're using IntelliJ IDEA or Eclipse (and not Notepad, Atom, etc.), it's just pressing Ctrl-Space all day and you're fine.
Java memory management seems weird from a Unix Philosophy POV, until you understand what's happening. Again, not perfect, but a good tradeoff.
What do you get for all of these tradeoffs? Speed and memory safety. But with that you still have dynamic invocation capabilities (making things like interception possible) and hotswap/live redefinition (things that C/C++ cannot do).
Perfect? No, but very practical for the real world use case.
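That interception capability is easy to demo with a JDK dynamic proxy. A minimal sketch (the `Greeter` interface and all names here are made up for illustration):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

public class InterceptDemo {
    public static void main(String[] args) {
        Greeter target = name -> "Hello, " + name;
        // Wrap the target in a proxy that intercepts every interface call,
        // then forwards it; this is the mechanism AOP-style frameworks build on.
        Greeter proxied = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (Object p, Method m, Object[] a) -> {
                    System.out.println("intercepted: " + m.getName());
                    return m.invoke(target, a);
                });
        System.out.println(proxied.greet("world"));
    }
}
```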
Java the language eventually drove me away because the productivity was so poor until it started improving around 2006-2007.
Now I keep an eye on it for other languages that run on the JVM: JRuby, Clojure, Scala, Groovy, Kotlin, etc.
IMO JRuby is the most interesting since you gain access to 2 very mature ecosystems by using it. When Java introduced Project Loom and made it possible to use Ruby's Fibers on the JVM via Virtual Threads it was a win for both.
Charles Nutter really doesn't get anywhere close to enough credit for his work there.
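The Loom feature mentioned above is now plain Java API. A minimal sketch (Java 21+; the task counts are arbitrary):

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        AtomicInteger done = new AtomicInteger();
        // One cheap virtual thread per task; blocking calls park the
        // virtual thread instead of pinning an OS thread.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                pool.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofMillis(10)); // blocking is cheap here
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("completed: " + done.get());
    }
}
```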
You can take pretty much any code written for Java 1.0 and still build and run it on Java 24. There are exceptions (sun.misc.Unsafe usage, for example) but they are few and far between. More so than for nearly any other language, backwards compatibility has been key to Java. Heck, there's a pretty good chance you can take a jar compiled for 1.0 and still use it to this day without recompiling it.
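To make that concrete, here's a hypothetical snippet written in pure Java 1.0 style (Vector, Enumeration, no generics) that still compiles and runs unchanged on current JDKs, at most with an unchecked-operations warning:

```java
import java.util.Enumeration;
import java.util.Vector;

// Idiomatic Java 1.0 (1996): pre-generics collections, Enumeration iteration.
public class OldSchool {
    public static void main(String[] args) {
        Vector v = new Vector();
        v.addElement("thirty");
        v.addElement("years");
        for (Enumeration e = v.elements(); e.hasMoreElements(); ) {
            System.out.println(e.nextElement());
        }
    }
}
```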
Both Ruby and Python, with pedigrees nearly as old as Java's, have made changes to their languages which make things look better, but ultimately break things. Heck, C++ tends to have so many undefined quirks and common compiler extensions that it's not uncommon to see code that only compiles with specific C++ compilers.
Although Python is pretty close, if you exclude Windows (and don't we all want to do that?).
This takes nothing away from Java and the Java ecosystem though. The JVM allows around the same number of target systems to run not one language but dozens. There’s JRuby, Jython, Clojure, Scala, Kotlin, jgo, multiple COBOL compilers that target JVM, Armed Bear Common Lisp, Eta, Sulong, Oxygene (Object Pascal IIRC), Rakudo (the main compiler for Perl’s sister language Raku) can target JVM, JPHP, Renjin (R), multiple implementations of Scheme, Yeti, Open Source Simula, Redline (Smalltalk), Ballerina, Fantom, Haxe (which targets multiple VM backends), Ceylon, and more.
Perl has a way to inline other languages, but is only really targeted by Perl and by a really ancient version of PHP. The JVM is a bona fide target for so many. Even LLVM intermediate code has a tool to target the JVM, so basically any language with an LLVM frontend. I wouldn’t be surprised if there’s a PCode to JVM tool somewhere.
JavaScript has a few languages targeting it. WebAssembly has a bunch and growing, including C, Rust, and Go. That’s probably the closest thing to the JVM.
> I can run basically any Perl code back to Perl 4 (March 1991) on Perl 5.40.2 which is current.
Yes, but can you _read_ it? I'm only half joking. Perl has so many ways to do things, many of them obscure but preferable for specific cases. It's often a write-only language if you can't get ahold of the dev who wrote whatever script you're trying to debug.
I wonder if modern LLMs could actually help with that.
> Yes, but can you _read_ it?
Java was marketed (at least in its early days) as a WORA language - WRITE ONCE RUN ANYWHERE.
Perl was unmarketed as a WORM language - WRITE ONCE READ MANY (TIMES). ;)
jk, i actually like perl somewhat.
but I think Larry and team went somewhat overboard with that human-style linguistics stuff that they applied to perl.
Can you read arbitrary code written by developers from around the world in PL/1, Ada, Forth, APL, or even C++? Big languages have lots of syntax choices, yes. That doesn't mean the syntax needs to be abused.
Even C has an obfuscated code contest that’s been going on for decades.
There's also Groovy.
I wonder what other languages run on the JVM. What about Perl, Icon, SNOBOL, Prolog, Forth, Rexx, Nim, MUMPS, Haskell, OCaml, Ada, Rust, BASIC, Rebol, Haxe, Red, etc.?
Partly facetious question, because I think there are some limitations in some cases that prevent it (not sure, but a language being too closely tied to Unix or hardware could be why), but also serious. Since the JVM platform has all that power and performance, some of these languages could benefit from that, I'm guessing.
#lazyweb
There is an OCaml-Java. NetRexx targets the JVM. For Prolog there are JIProlog and TuProlog at least.
Red basically is REBOL. Yes, Red targets IA-32, ARM, JVM, AVM2, x64, and the CLR.
I’ve seen some experiments for running Perl on the JVM. Rakudo can target the JVM for Raku, which is Perl’s sister language.
For Ada, gnat can target the JVM. https://docs.adacore.com/gnatvm-docs/jgnat_ug.html
For Forth there are a number of implementations. JVMForth, jForth, Misty Beach Forth, HolinJ Forth, bjforth, and xforth at least.
I’ve seen a couple different Java libraries for SNOBOL-style matching but I’ve never seen a SNOBOL tool that targets the JVM.
MUMPS has M4J.
Rust is interesting. There are JVMs written in Rust. There’s support for the JNI for Rust for interoperability. I’m not aware of a JVM target for Rust, though. However, Rust still uses LLVM as its primary code generator. As I mentioned, there are LLVM IC to JVM compilers.
Basic isn’t really a single language. For something like MS Visual Basic, there’s Jabasco. There’s JVMBasic. MBC transpiles Basic to C or C++, so you could use clang to put its output on the JVM. PuffinBASIC is a Basic interpreter written in Java. GLBasic compiles to C++, so again with LLVM all things are possible here. BCX Basic also outputs C or C++. There’s something just called “BASIC Compiler” that is both written in Java and compiles its source to JVM bytecode. The basgo compiler outputs Go code, so anywhere you can target golang code you can target Basic code with basgo, including the JVM. I’m sure there are a lot more. These are different versions of Basic on the source side, some of them similar to one another.
Nimlvm is a Nim compiler to LLVM intermediate code. So once again, as a chain of steps it can be done.
Speaking of chaining translators/transpilers/compilers, did I mention there’s a WebAssembly to JVM compiler? There are actually more than one. Chicory is one and asmble is another. There’s something called “Happy New Moon with Report”. There’s also a WASM written in Scala called Swam. I’m sure I’m missing some.
So the huge and growing list of languages that target WASM can also be chain-translated to target the JVM. That includes C, Rust, Nim, TypeScript, C++, Forth, Go, F#, Lua, Zig, and more. https://wasmlang.org/
So if it targets the JVM, it can run on the JVM. But also if it targets WebAssembly, C, C++, LLVM IC, Lua, Go, and more it can also through other tools target the JVM. Or if it has an interpreter that runs on the JVM because it’s written in Java, Scala, Clojure, or some other language you can get it there. If you really want to get exotic and esoteric, there’s an x86 emulator in WASM out there and you could probably run that on one of the JVM WASM interpreters.
Haskell
https://github.com/Frege/frege
https://github.com/typelead/eta
Of the others you mentioned, I bet there's a couple JVM Prologs out there, but haven't encountered any myself.
I can run THE SAME CODE on DOS, BeOS, Amiga, Atari ST, any of the BSDs, Linux distros, macOS, OS X, Windows, HP/UX, SunOS, Solaris, IRIX, OSF/1, Tru64, z/OS, Android, classic Mac, and more.
No, you really can't. Not anything significant anyway. There are too many deviations between some of those systems to allow you to run the same code.
There are differences, but they’re usually esoteric ( https://perldoc.perl.org/perlport#PLATFORMS ).
I don’t know if it is a me problem or if I’m missing the right incantations to set up the environment or whatever. Never had that many problems with Java.
But I’m a Java and Ruby person so it might really be missing knowledge.
I no longer shy away from writing <500 LOC utility/glue scripts in python thanks to uv.
E.g. poetry, venv, and pyenv have been mentioned in just the next few comments below yours, and this is just one example. I have seen other such seeming confusion and different statements by different people about what package management approach to use for Python.
Prior to that I would frequently have issues (and still have issues with one-off random scripts that use system python).
> Moreso than nearly any other language backwards compatibility has been key to java.
The Java 8 and 9+ divide very much works against this. It was a needed change (much like Python 2 vs 3) but nowhere near pleasant, especially if some of your dependencies used the old Java packages that were removed in, say, OpenJDK 11.
Furthermore, whenever you get the likes of Lombok or something that has any dynamic compilation (I recall Jaspersoft having issues after version upgrade, even 7 to 8 I think), or sometimes issues with DB drivers (Oracle in particular across JDK versions) or with connection pooling solutions (c3p0 in particular), there's less smooth sailing.
Not to say that the state of the ecosystem damns the language itself and overall the language itself is pretty okay when it comes to backwards compatibility, though it's certainly not ideal for most non-trivial software.
I agree it was somewhat painful, but not nearly to the level of the 2->3 change which ended up changing python syntax. Most dependencies worked throughout the change and still work.
> Furthermore, whenever you get the likes of Lombok or something that has any dynamic compilation ... there's less smooth sailing.
Not sure about c3p0, but Lombok goes out of its way to inject itself into javac internals in order to make its output changes. It's using the most internal of internal code in order to make that @Getter annotation work. Plenty of other annotation processing APIs are completely unaffected by javac updates because they choose not to use internal APIs. Immutables, AutoValue, Dagger 2: all examples of dynamic compilation that continue to work regardless of the version of Java. Lombok is a horrible little dependency that I wish people would abandon. It's making a mess and then complaining that it's somehow Java's fault because Lombok decided it needed access to the AST.
I get it, things have broken. But what has been broken is literally the undefined and non-public APIs, which went so far as to use names like `sun.misc.Unsafe` just to try and ward people off from using them (with the Javadocs to boot, which told people not to use them).
And even with the break, the Java devs went out of their way to make stand-in APIs.
It always made me wonder why I hear about companies who are running very old versions of Java though. It always seemed like backwards compatibility would make keeping up to date with the latest an almost automatic thing.
Another problem is crashes. The Java runtime is highly reliable, but bugs still happen.
(It's still a valid point. It's just not the point you labeled it as.)
    int record = 1;
    double var = 2;

even though `var` and `record` are now used to create and define things. Java is, in my opinion, a complete mess. And I think it's weird how anybody could like it past the 1990s.
C++ code not being compilable by later compilers hasn't been true since pre-standard C++. We're talking about the 1980s now.
https://blog.habets.se/2022/08/Java-a-fractal-of-bad-experim...
Which also points to another thing where Java compatibility shines. One can have a GUI application that is from nineties and it still runs. It can be very ugly especially on a high DPI screen, but still one can use it.
> You can take pretty much any code written for Java 1.0 and you can still build and run it on Java 24.
This is not my experience with Java at all. I very often have to modify $JAVA_HOME.
The whole spec was great with the exception of entity beans.
It provided things that are still not available in anything else... why do we store configuration/credentials in git (encrypted, but still)?
And the contexts were easy to configure/enter.
Caucho's Resin: a highly underrated app server. Maybe not underrated, but at least not very well known.
In the early 2000's, I used to work on JEE stuff for my day job, then go home and build PHP-based web apps. PHP was at least 10x more productive.
Just a lot of boilerplate code, but the overall architecture and structure of JEE is still very sound.
What did you like about JEE? I worked in that world for years and don't miss it in the least.
I dunno it just worked for me. But I kept using the standards, no vendor lockin stuff, which bea etc always wanted.
I think the servlet design is great, the whole packaging/deployment model is great. And then well the session beans were overkill in general, so they were swapped out quite early by me. Swapped jsp out for I think velocity templates. And that application is still alive, running, and on the same platform.
And Java meant, at least for me and how I configured it, proper debugging and IDE support (love Eclipse), hot code reload, easy releases, repeatability, ci/cd. The last 20 years, no __significant__ improvements in my opinion.
I never understood the fight against ORMs. In the end you'll simply be writing your own framework
I grew to be a big fan of JBoss and was really disappointed when the Torquebox project stopped keeping up (Rubyized version of JBoss).
After about 10+ years Spring kind of took over JEE.
Omg. Spring was just like moving code to XML. Different now, but still.
What I miss from JEE:
- single file ear/war deployment, today that’s a docker image
- the whole resource api from Java (filesystem/jar doesn’t matter). It means you don’t necessarily have to unpack the jar
- configuration / contexts for settings, + UI for it, database connections etc. Docker kind of works, but most images fail to achieve this. Docker Compose kind of takes care of things.
- as said before.. all Java still runs everywhere
> And hey, if you're using an Idea or Eclipse (and not notepad, atom, etc),
Java's tools are really top notch. Using IntelliJ for Java feels a whole new different world from using IDEs for other languages.
Speaking of Go, does anyone know why the Go community is not hot on developing containers for concurrent data structures? I see Mutex this and lock that scattered all over Go code, while in the Java community the #1 advice on writing concurrency code is to use Java's amazing containers. Sometimes, I do miss java.util.concurrent and JCTools.
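For anyone who hasn't leaned on java.util.concurrent, the style being praised here looks roughly like this: no explicit locks in user code, with the containers handling contention internally. A sketch with hypothetical counters:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class ConcurrentCounters {
    public static void main(String[] args) throws InterruptedException {
        // The map and the adders are both thread-safe; no Mutex in sight.
        ConcurrentHashMap<String, LongAdder> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> {
                for (int j = 0; j < 1000; j++) {
                    // computeIfAbsent is atomic per key; LongAdder scales under contention.
                    counts.computeIfAbsent("hits", k -> new LongAdder()).increment();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(counts.get("hits").sum());
    }
}
```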
The patterns are available; it's up to the community to apply proper concurrency patterns.
You can use platform threads, user-space threads, language-provided "green" threads, goroutines, continuations or whatever you wish for concurrency management, but that's almost orthogonal to data safety.
It’s not that you need locking to use threads. You need locking to stop threads from ruining any shared resource/data they are both trying to touch at the same time.
https://github.com/crossbeam-rs/crossbeam/blob/master/crossb...
One very common queue implementation you can use to implement actors is the crossbeam-deque
I can't find any references to a "crossbeam dequeue" outside of Rust sources. Is this a neologism for a "very common" pattern, or just very common in Rust?
The generic name is just Deque. https://en.wikipedia.org/wiki/Double-ended_queue
Don't communicate by sharing memory; share memory by communicating.
The overuse of Mutex and Lock comes from developers bringing along patterns from other languages where they are used to communicating via shared memory. So this aspect of the language just doesn't click as well for many people at first. How long it takes you to get it depends on your experience. But then I have also encountered Rust people that will look down on Java but had no idea buffered I/O had higher throughput than unbuffered.
Unbuffered IO is a tradeoff. For certain use cases it does help, because throughput isn't everything. I'm sure Buffered is better in the average use case, but that doesn't mean you would never need unbuffered.
Depending on the situation, channels can absolutely be higher overhead and not worthwhile.
Like streaming arrays one byte at a time through the channel.
Such devs just aren't very good, and hear "Google internally recommends not using them in many situations" but jump to inferring that means all of their situations qualify.
> but that doesn't mean you would never need unbuffered.
Note that this was never claimed.
In fact my experience has been that overuse of channels is a code smell that a lot of new Go developers fall into and later regret. There's a reason the log package uses a mutex for synchronization.
In general I think channels are great for connecting a few large chunks of your program together. Concurrency is great but also not every function call benefits from being turned into a distributed system.
I think that it would be a great idea to develop more concurrent go data structures with generics and I suspect inertia is what's keeping the community from doing it.
My credentials, such as they are: been writing Go since 1.0, worked at Google, and taught Go classes, as well as owned some of the original Go services (the downloads server, aka payload server).
> Java memory management seems weird from a Unix Philosophy POV, till you understand whats happening. Again, not perfect, but a good tradeoff.
The GC story is just great, however. Pretty much the best you can get in the entire ecosystem of managed-memory languages.
You have different GC algorithms implemented, and you can pick and tune the one that best fits your use-case.
The elephant in the room is of course ZGC, which has been delivering great improvements in lowering the Stop-the-world GC pauses. I've seen it consistently deliver sub-millisecond pauses whereas other algorithms would usually do 40-60 msec.
Needless to say, you can also write GC-free code, if you need that. It's not really advertised, but it's feasible.
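One way that GC-free style tends to look, as a sketch (not from any particular production codebase): preallocate everything up front, then keep the hot path to primitives and reused arrays so steady-state execution never allocates.

```java
// All state is preallocated in the constructor; record() touches only
// primitives and a reused ring buffer, so the hot path triggers no GC.
public class RollingStats {
    private final long[] window;   // preallocated ring buffer, never resized
    private int next;
    private long sum;
    private int count;

    RollingStats(int capacity) { this.window = new long[capacity]; }

    // Hot path: no 'new', no boxing, no iterators.
    void record(long value) {
        if (count == window.length) {
            sum -= window[next];   // evict the oldest value
        } else {
            count++;
        }
        window[next] = value;
        sum += value;
        next = (next + 1) % window.length;
    }

    long mean() { return count == 0 ? 0 : sum / count; }

    public static void main(String[] args) {
        RollingStats stats = new RollingStats(4);
        for (long v : new long[] {1, 2, 3, 4, 100}) stats.record(v);
        System.out.println(stats.mean()); // (2+3+4+100)/4 = 27
    }
}
```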
> Needless to say, you can also write GC-free code, if you need that. It's not really advertised, but it's feasible.
It is not feasible under the JVM type system. Even once Valhalla gets released it will carry restrictions that will keep that highly impractical.
It's much less needed with ZGC but even the poster child C# from the GC-based language family when it comes to writing allocation-free and zero-cost abstraction code presents challenges the second you need to use code written by someone who does not care as much about performance.
The downside is that you sacrifice a lot of the benefits of guard rails of the language and tooling for what may not end up being much savings, depending on your workload.
> The downside is that you sacrifice a lot of the benefits of guard rails of the language and tooling for what may not end up being much savings, depending on your workload.
I think that's mostly done in organisations where there's time, budget, and willingness to optimize as far as possible.
Sacrificing the guardrails doesn't make sense for "general public" software but makes tremendous sense in environments where latency is critical and the scale is massive. But then again, in those environments there are people handsomely paid to have a thorough understanding of the software and keep it working (making updates, implementing features, etc).
I worked on a software that was written to be garbage-free whenever it could. Latency (server-side latency, i mean) in production (so real-world use case, not synthetic benchmark) was about 7-8 microseconds per request (p99.9) and STW garbage collection was at around 5msec (G1GC, p50, mostly young generation) or ~40 msec (p99.9, full gc) and was later lowered to ~800-900 microseconds with ZGC.
I know it might sound elitist but the real difference here are... Skill issues. Some people will just shun java down and yap about rewriting in rust or something like that, while some other people will benefit from the already existing Java ecosystem (and tooling) and optimize what they need to get the speed they're targeting.
I know I'll be downvoted by the rust evangelism task force, but meh.
Is data access in such a project purely from in-memory sources?
yes.
I suspect most people shun it because database access (especially if the DB itself is over the network on a different machine) is already slow enough that ZGC / zero allocation won't be noticed.
Database access, if you're talking network latency plus query/transaction processing time, is essentially irrespective of the language being used, so that's not a good reason to shun Java as a language/runtime anyway.
> The elephant in the room is of course ZGC, which has been delivering great improvements in lowering the Stop-the-world GC pauses. I've seen it consistently deliver sub-millisecond pauses whereas other algorithms would usually do 40-60 msec.
As someone who's always been interested in gamedev, I genuinely wonder whether that would be good enough to implement cutting-edge combo modern acceleration structures/streaming systems (e.g. UE5's Nanite level-of-detail system.)
I have the ability to understand these modern systems abstractly, and I have the ability to write some high-intensity nearly stutter-free gamedev code that balances memory collection and allocation for predictable latency, but not both, at least without mistakes.
> As someone who's always been interested in gamedev, I genuinely wonder whether that would be good enough to implement cutting-edge combo modern acceleration structures/streaming systems (e.g. UE5's Nanite level-of-detail system.)
The GC would be the least of your problems.
Java is neat, but the memory model (on which the GC relies) and lack of operator overloading does mean that for games going for that level of performance would be incredibly tedious. You also have the warm up time, and the various hacks to get around that which exist.
Back when J2ME was a thing there was a mini industry of people cranking out games with no object allocation, everything in primitive arrays and so on. I knew of several studios with C and even C++ to Java translators because it was easier to write such code in a subset of those and automatically translate and optimize than it was to write the Java of the same thing by hand.
There are no value types (outside primitives) and everything is about pointer chasing. And surely if there were less pointer chasing it'd be easier to do the GC work at the same time.
I'd also throw in what was possibly their greatest idea that sped adoption and that's javadoc. I'm not sure it was a 100% original idea, but having inline docs baked into the compiler and generating HTML documentation automatically was a real godsend for building libraries and making them usable very quickly. Strong typing also lined up nicely with making the documents hyper-linkable.
Java was really made to solve problems for large engineering teams moreso than a single developer at a keyboard.
> javadoc
Indeed. Many languages have something similar to Javadoc, yet somehow I haven't encountered anything quite as good as Javadoc, and I can't explain why or exactly how it's better. I admit I haven't tried that hard either. But I suspect it's down to the nature of the language and how, with well designed libraries at least (and not all are, certainly), there is a nice decomposition of modules, packages, classes/interfaces, and methods that leads to everything somehow having a correct place, and the Javadoc just follows. The strong typing is another contributor, where 90% of the time you can just look at the signature and infer what is intended. Finally, the old-fashioned frames-based HTML typically used with Javadoc is a great benefit.
Also, I've found I experience less reluctance to author Javadoc for some reason. Again, part of this is due to strong types, and much of the legwork being correctly generated in nearly every case.
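For anyone who hasn't written much of it, a small hypothetical class shows the shape Javadoc encourages: a one-sentence summary, details, then tagged parameter/return/throws docs that the `javadoc` tool turns into linked HTML.

```java
/**
 * A half-open interval {@code [start, end)} over integers.
 *
 * <p>Instances are immutable.</p>
 */
public final class IntRange {
    private final int start;
    private final int end;

    /**
     * Creates a range.
     *
     * @param start inclusive lower bound
     * @param end   exclusive upper bound; must be {@code >= start}
     * @throws IllegalArgumentException if {@code end < start}
     */
    public IntRange(int start, int end) {
        if (end < start) throw new IllegalArgumentException("end < start");
        this.start = start;
        this.end = end;
    }

    /**
     * Returns whether {@code value} falls inside this range.
     *
     * @param value the value to test
     * @return {@code true} if {@code start <= value < end}
     */
    public boolean contains(int value) {
        return value >= start && value < end;
    }

    public static void main(String[] args) {
        System.out.println(new IntRange(1, 5).contains(4)); // true
    }
}
```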
Lombok, when used with moderation, is wonderful. Mockito is magic, of a good kind. Maven still gets it done for me; I've yet to care about any problems Gradle purports to solve, and I think that's down to not creating the problems that Gradle is designed to paper over in the first place.
Today, if I had my choice of one thing I'd like to see in Java that doesn't presently exist it's Python's "yield". Yes, there are several ways to achieve this in Java. I want the totally frictionless generators of Python in Java.
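The closest stand-in Java offers today is probably an infinite lazy Stream, where each element is computed on demand like successive yields. A sketch using Fibonacci, the classic generator example:

```java
import java.util.stream.Stream;

public class LazyFib {
    public static void main(String[] args) {
        // Stream.iterate produces an unbounded lazy sequence of (a, b) pairs;
        // nothing is computed until a terminal operation pulls elements.
        Stream<long[]> pairs = Stream.iterate(new long[] {0, 1},
                p -> new long[] {p[1], p[0] + p[1]});
        pairs.limit(8)
             .map(p -> p[0])
             .forEach(n -> System.out.print(n + " ")); // 0 1 1 2 3 5 8 13
        System.out.println();
    }
}
```

It works, but the ceremony around state that Python's `yield` hides (here, threading the pair through each step) is exactly the friction being complained about.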
Java performance isn't the fastest, that's ok, a close 3rd place behind C/CPP ain't bad.
When Java got popular, around 1999-2001, it was not a close third behind C (or C++).
At that time, on those machines, the gap between programs written in C and programs written in Java was about the same as the gap right now between programs written in Java and programs written in pure Python.
A mix of K&R C, C89, C++ARM compilers catching up with WG21 work, POSIX flavours, and lovely autoconf scripts.
And yet many of us embraced Java,
That was my point - performance was never one of the primary considerations for enterprises. They had more important considerations.
That is one of the things I have been doing across JVM/ART, CLR, V8 for decades now, more so if we include Perl and Tcl in the mix. I seldom write full-blown 100% C or C++ code; when I reach for them, it's to write native libraries or implement language bindings.
> Java performance isn't the fastest, that's ok, a close 3rd place behind C/CPP ain't bad. And you're still ahead of Go, and 10x or more ahead of Python and Ruby.
Fastest at what, exactly?
With the JVM you basically outsource all the work you need to do in C/C++ to optimize memory management and a typical developer is going to have a hell of a time beating it for non-trivial, heterogenous workloads. The main disadvantage (at least as I understand) is the memory overhead that Java objects incur which prevent it from being fully optimized the way you can with C/C++.
So giving the entire system to the JVM, performing some warmup prior to a service considering itself “healthy”, and the JVM was reasonably fast. It devoured memory and you couldn’t really do anything else with the host, but you got the Java ecosystem (for better or worse).
There was a lot of good tooling that is missing from other platforms, but also a ton of overhead that I am happy to not have to deal with at the moment.
Using the function application syntax for primitives like + is nice because you get the same flexibility of normal functions, like variable argument length: (+ a b c).
Clojure is a little bit less uniform than other Lispy languages in that it has special brackets for lists (square brackets) and for maps (curly brackets), but that's pretty much it.
(BTW Clojure, as a Lisp dialect, has almost no syntax. You can learn THAT in 5 minutes. The challenge is in training your programming mind to think in totally new concepts)
and that's before you throw in real multithreading
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
The fact that you specifically mention Go explains a lot. btw c# is faster than Java, so not third place, it's more a 5th~
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Benchmarks aren’t all that useful since usually the bottleneck is File IO, external api calls, db calls, user latency or a combo of the 4.
Go’s main advantage in this matchup would be conciseness.
Go code looks so clean and nice, Java doesn’t.
I personally don’t care for Java, but I have bills so it’s always in my back pocket. Things happen; sometimes you need to write Java to eat.
Fred, you really should just take the job.
Martha you don’t understand, it’s Java with an Oracle DB.
Fred, I know it’s bad, I know you still have nightmares about the 4 types of string in Java 8, but it’s not just us now.
With a tear in his eye, fear in his heart.
Martha, if I must… I’ll open Eclipse once more.
> I personally don’t care for Java, but I have bills so it’s always in my back pocket. Things happen, sometimes you need to write Java to eat.
I write Java to pay bills and my eyes and fingers thank me every day for sparing them from a sea of if err != nil. I won't even go(!) into the iota stupidity compared to Java's enums.
I built a small phone app with it. The best way to describe it is incomplete. I think the philosophy is to make things more minimal?
A lot of frameworks don’t exactly do everything they need to.
I’ll still pick it over Rust, which seems to like punishing the developer.
Java, .NET and languages like that also lend themselves pretty well to tools (and even LLMs) understanding what's going on, their runtimes are pretty good and platform differences don't give you too many issues from what I've seen. Though when it comes to the frameworks or libraries you might use (e.g. how often you will see the likes of Spring Boot in enterprise projects) will definitely leave some performance on the table[1] and have plenty of awkward and confusing situations along the way[2], especially if you're unlucky enough to have to work on codebases that have been around for over a decade. Old Java projects really suck sometimes, though maybe that applies to many of the old projects, regardless of tech.
Overall, I quite like them and am pretty productive, plus I unironically think that using Maven is pleasant enough (even adding custom repos isn't too convoluted[3]) and the modern approach of self contained .jar files that can just be run with a JDK install instead of separately having to manage Tomcat or GlassFish (or TomEE or Payara nowadays, I guess) is a step in the right direction!
Though while I do like packages such as Apache Commons and they're very useful, I also very much enjoy using something like Go more recently, it's easier to get started making simple web apps with it, less ceremony and overhead, more just getting to writing code.
[1] https://www.techempower.com/benchmarks/#section=data-r23&l=z...
[2] https://blog.kronis.dev/blog/it-works-on-my-docker (a short rant of mine, but honestly I very much prefer when you have configuration done explicitly in the code, not some layered abstractions with cryptic failure modes; Dropwizard is way nicer in that regard than Spring Boot)
[3] https://maven.apache.org/guides/mini/guide-multiple-reposito... and https://maven.apache.org/guides/mini/guide-mirror-settings.h...
> Java syntax isn't perfect, but it is consistent, and predictable
This is something I greatly value with the recent changes to Java. They found a great way to include sealed classes, switch expressions, Project Loom, and records that feel at home in the existing Java syntax.
The tooling with heap dump analyzers, thread dump analyzers, GC analyzers is also top notch.
Hearing the work he and others did to gradually introduce pattern matching without painting themselves into a corner was fascinating and inspiring.
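A small sketch (hypothetical types) of how those features compose: a sealed hierarchy of records, exhaustively matched by a switch expression with no default branch needed.

```java
// The compiler knows every permitted subtype of Shape, so the switch
// below is checked for exhaustiveness (Java 21+).
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double w, double h) implements Shape {}

public class ShapeDemo {
    static double area(Shape s) {
        // Pattern matching for switch: each case binds a typed variable.
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rect r -> r.w() * r.h();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4))); // 12.0
    }
}
```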
Only have to deal with gigalines of log4j excreta filling up disks.
> Java performance isn't the fastest, that's ok, a close 3rd place behind C/CPP ain't bad. And you're still ahead of Go, and 10x or more ahead of Python and Ruby.
Of course, rank depends on what we include or exclude from the rankings, for example:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
:versus:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Edit: 1.4, not 1.7
It's a shame IMO that it's not seen as a "cool" option for startups, because at this point the productivity gap compared to other languages is small, if not nonexistent.
But nobody seems to talk about or care about C# except for Unity. Microsoft really missed the boat on getting mindshare for it back in the day.
See https://mckoder.medium.com/the-achilles-heel-of-c-why-its-ex...
> they are heavily discouraged throughout Java code
That's so ignorant. Read the article please.
- https://news.ycombinator.com/item?id=43226624
- https://news.ycombinator.com/item?id=43584056
- https://news.ycombinator.com/item?id=36736326
And more. I'm not sure what you found in (checked) exceptions. If you'd like explicit error handling, we have a holy grail in the form of Rust, which solves it beautifully with the `?` operator's early returns, error type conversions, and a clean disambiguation between the error and panic models. I'd prefer to use that one, as it actually reduces boilerplate and improves correctness, the opposite of the outcome of using checked exceptions.
I'm not sure what you found in (checked) exceptions.
I could copy/paste the entire article here... but it would be easier if you could take a gander: https://mckoder.medium.com/the-achilles-heel-of-c-why-its-ex...
Summary:
Crashy code: You have no compiler-enforced way to know what exceptions might be thrown from a method or library.
More crashy code: If a method starts throwing a new exception, you might not realize you need to update your error handling.
Dead code: If a method stops throwing an exception, old catch blocks may linger, becoming dead code.
I'd prefer to use that one as it actually reduces boilerplate and improves correctness, the opposite to the outcome of using checked exceptions.
Reducing boilerplate is not a valuable goal in and of itself. The question is, does the boilerplate buy you something? I think that with checked exceptions it does. Having an explicit type signature for what errors a function can raise improves correctness a great deal because the compiler can enforce the contracts of those functions.
I agree that the Rust approach is good too, though I don't agree it has any strong advantages over the way Java does things. Both approaches are equally respectable in my view.
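To make that compiler-enforced contract concrete, here is a minimal illustration (the method names and messages are mine): a `throws` clause a caller cannot silently ignore.

```java
import java.io.IOException;

public class CheckedDemo {
    // The throws clause is part of the method's signature: callers must
    // either catch IOException or declare it themselves. That's the
    // "explicit type signature for what errors a function can raise".
    static String load(String name) throws IOException {
        if (name.isEmpty()) throw new IOException("no name given");
        return "contents of " + name;
    }

    public static void main(String[] args) {
        try {
            System.out.println(load("config.txt")); // contents of config.txt
        } catch (IOException e) {
            // Removing this catch (without a throws clause on main)
            // is a compile-time error, not a runtime surprise.
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```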
Your linked blog is pretty wild. Only throw RuntimeExceptions to crash? Why not just exit if that's the proper thing to do?
If you treat all C# exceptions as RuntimeExceptions, then it satisfies the blog anyhow.
https://jessewarden.com/2021/07/why-functional-programmers-a...
While composing methods in stream style is convenient, methods that can throw exceptions warrant more careful coding, so convenience should not always be the priority.
The problem is that you then need a way to capture exception specifications as generic type parameters to properly propagate contracts, which complicates the type system quite a bit. Which is why Java ultimately went with the much simpler proposal that didn't even try to tackle this.
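One common workaround today is to capture the thrown type as a generic parameter on a hand-rolled functional interface (ThrowingFunction below is illustrative, not a JDK type), which also hints at the type-system complexity the comment describes:

```java
// Illustrative only: ThrowingFunction is hand-rolled, not part of the JDK.
@FunctionalInterface
interface ThrowingFunction<T, R, E extends Exception> {
    R apply(T t) throws E;
}

public class GenericThrows {
    // The wrapper's throws clause is expressed in terms of E, so the
    // exception contract survives the generic layer instead of being
    // widened to a blanket "throws Exception".
    static <T, R, E extends Exception> R call(ThrowingFunction<T, R, E> f, T arg) throws E {
        return f.apply(arg);
    }

    public static void main(String[] args) throws java.io.IOException {
        // E is inferred as IOException from the lambda body.
        String s = call(path -> {
            if (path.isEmpty()) throw new java.io.IOException("empty path");
            return path;
        }, "ok");
        System.out.println(s); // ok
    }
}
```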
https://www.artima.com/articles/the-trouble-with-checked-exc...
They believed that people would just catch Exception most of the time anyway
Their belief was wrong. Microsoft now recommends against catching Exception.
The article you linked to is addressed at the bottom of this article: https://mckoder.medium.com/the-achilles-heel-of-c-why-its-ex...
Java kept growing and wound up everywhere. It played nice with Linux. Enterprise Mac developers didn't have trouble writing it with IntelliJ. It spread faster because it was open.
Satya Nadella fixed a lot of Microsoft's ills, but it was too late for C# to rise to prominence. It's now the Github / TypeScript / AI era, and Satya is killing it.
The one good thing to say about Ballmer is that he kicked off Azure. Microsoft grew from strength to strength after that.
Satya Nadella fixed a lot of Microsoft's ills,
which ones?
Microsoft isn't thought of as evil anymore, but is open source and Linux friendly. GitHub, VScode, Typescript. Azure is booming, ...
But the big one: stock price.
VSCode itself, for all the promotional materials about how it's open source, is officially "a distribution of the Code - OSS repository with Microsoft-specific customizations released under a traditional Microsoft product license".
But wait, you might say, it's just like Chrome vs Chromium - so long as we have the OSS edition, it's all good! But, unless you're writing JS or TS, you need extensions to do anything useful. Python is an extension. So is C#, and C++. And all of these are partially closed source - e.g. code completion for all three, or debugging for both C# and C++.
Worse yet, the licenses for those closed source parts specifically prohibit their installation and use in anything other than the official closed source VSCode distro. And this isn't just verbiage - there are actual runtime checks in all these products that block attempts to use them in VSCodium, Cursor etc.
The same goes for the official VSCode extension gallery / marketplace - you can't legally use it from anything other than the official VSCode. Enforcing that is trickier, but even here Microsoft managed to find a way to frustrate its users: it used to be possible to download a .vsix from the Marketplace, but that feature has been removed recently, precisely because people were using that in conjunction with Cursor etc.
Much open source, indeed.
Microsoft has been historically much less aggressive with lawyers compared to Oracle.
https://www.theregister.com/2025/05/09/users_advised_to_revi...
You can easily just not use the Oracle JDK, though, unless you're running commercial software which requires running on the Oracle runtime to get technical support.
As others have said, the problem is not the runtime, but libraries: many major .NET libraries have been going fully commercial, you can't really trust the ecosystem anymore.
Moreover, a lot of these libraries are well-supported to this day. For example, Hibernate (the best ORM in the business) is over two decades old, and has just released a new version. I recently consulted for my former client (from 15 years ago), and I still recognized most parts of the stack that I set up way back then.
Nevertheless, as a platform, the JVM and JDK were fantastic and miles ahead of most alternatives during the late 1990s and 2000s. The only platform for large development that offered some compelling advantages was Erlang, with BEAM and OTP.
Aside from early versions being rushed, I feel that Java's success and adoption were the bigger issue. While Microsoft could iterate quickly and break backwards compatibility with major versions of C# and the .NET runtime, Java was deliberately moving at a much slower pace.
It was really from 2007 on (.NET 3.5 / C# 3.0) that C# started to get major features at an ever increasing pace while Java significantly stagnated for quite a long time.
So really, Sun and Oracle could have definitely moved faster around Java 6 and 7, the Java 8 release took a long time given the feature set.
I feel that records could have come quicker, their implementation isn't exactly ground breaking. Avoiding the async/await route was a smart call though, and Loom could probably not have happened much earlier.
Valhalla is another can of worms entirely
For example, Go does not understand cgroups limits and needs an external package to solve this. .NET can read and accommodate those natively. It also ships with an excellent epoll-based socket engine implementation. It's on par with Go (though I'm not sure which one is better; .NET performs really well on high-throughput workloads).
But nobody seems to talk about or care about C# except for Unity. Microsoft really missed the boat on getting mindshare for it back in the day.
There was this guy Miguel de Icaza. From following the open source ecosystem at the time, it seemed to be his personal mission to promote independent clones of a bunch of Microsoft technologies like C# on his own time, even though Microsoft didn't ask him to do it.
I don't think I ever understood why someone would do this. It's like in the 2000s where people seemed to think you could solve all technical problems by inventing new kinds of XML.
https://web.archive.org/web/20000815075927/http://www.helixc...
Miguel de Icaza has been stanning for Microsoft technologies, literally since the nineties.
C# is extremely popular in all kinds of 'boring' industries. Having worked in areas like logistics and civil engineering, C# is everywhere.
MS does have an uphill PR battle though.
Rust feels like walking through a minefield, praying you never meet a lifetime problem that's going to ruin your afternoon's productivity. (I recently lost an afternoon on something that could very well be a known compiler bug, but on a method with such a horrible signature that I can never be sure; in the end I recoded the thing with macros instead.)
The feeling of type safety is satisfying, I agree. But calling the overall experience a "joy"?
I've been trying rust for the past 2 months fulltime ... recently lost an afternoon on something that could very well be a known compiler bug
With respect, at two months, you're still in the throes of the learning curve, and it seems highly unlikely you've found a compiler bug. Most folks (myself included) struggled for a few months before we hit the 'joyful' part of Rust.
Simply using axum with code using multiple layers of async was enough.
But then again, it looked like this bug (the error message is the same); however, at this point I'm really unsure if it's exactly the same. The error message and the method signature were so atrocious that I just gave up and found a simpler design using macros that dodged the bullet.
Go felt the same way (but to a much lesser degree): you feel like you're bumping into language limitations, but once you learn to do it "simply" in Go, your style will have changed into something much more elegant.
As for the bug in question, it has been quite "popular" for about 5 years now, and is actively tracked : https://github.com/rust-lang/rust/issues/110338. Nothing really weird. Just async hitting the limits of the current rust design.
The other one is thread safety, due to the compiler-enforced ownership semantics that prevent threads from accessing shared data unless they do so in a well-defined way.
And macros are a part of that!
Rust's macros on the other hand are excellent, and more languages should have expressive macros like that.
Other languages use constructs like context managers or try-with-resources to capture this, but these constructs are very limited and make it very hard or impossible for these resource types to be put into a container and passed between threads. In Rust this is trivial and actually just works.
Garbage collectors usually give much weaker guarantees about when objects are freed, so destructors (which are sometimes not even available, as in JS) might only be called much later. You can't rely on a GC to unlock a mutex for you. But in Rust it happens when the guard is dropped: always, immediately after it's last needed.
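For comparison, Java's try-with-resources (mentioned above) scopes cleanup to a lexical block; this toy sketch of mine uses a method reference as a throwaway AutoCloseable to unlock at block exit, which also shows the limitation the parent describes: the obligation cannot easily travel with the lock to another thread.

```java
import java.util.concurrent.locks.ReentrantLock;

public class ScopedCleanup {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        // A method reference can serve as a throwaway AutoCloseable, but the
        // cleanup is tied to this lexical block -- unlike a Rust guard, the
        // lock cannot easily be handed off with its cleanup attached.
        try (AutoCloseable unlock = lock::unlock) {
            System.out.println("locked: " + lock.isLocked()); // locked: true
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        System.out.println("locked after block: " + lock.isLocked()); // locked after block: false
    }
}
```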
Why do I need to? Why can't I let the garbage collector deal with it?
Determinism.
With Rust lifetimes, you can statically prove when resources will be released. Garbage collectors provide no such guarantees. It has been shown that garbage-collected languages have a space-performance tradeoff: you need five times as much RAM to achieve the same performance, even with a "good" GC, as the same program with explicit memory management:
However, at the moment I still feel I'm using layers upon layers of complex type definitions in order to get anything done. Just using an object's reference across async calls in a safe manner leads to insane types and type constraints, which read like ancient Egyptian scripture. And at every layer, I feel like changing anything could blow everything up with lifetimes.
The language has this very special feel of something both advanced and extremely raw and low-level at the same time. It's very unique.
Also, it’s worth saying, you probably don’t need async.
Then you want to declare an async function that takes an async closure over that dependency. And you end up with a total garbage of a method signature.
As for async, the server-side ecosystem is totally filled with async everywhere now. I don't think it's realistic to hope to escape those issues in any real-world project anyway, so I thought I might as well learn to get comfortable with async.
I've been trying rust for the past 2 months fulltime
Rust has a horrid learning curve
I've programmed for decades in many languages, and I felt the same as you
Persevere.
Surrender! to compile
Weather the ferocious storm
You will find, true bliss
However, at some point you have to ask yourself why you're accepting all those challenges. Is it worth it? When was the last time I faced a race condition when developing a backend?
The reason I started with Rust was a very specific need to build a cross-platform library (including wasm), and that was a good justification; I'm happy that I did. However, now that I'm using it for the server as well and face the same kinds of challenges, I seriously question whether this is a wise choice.
Of all the languages I've had to work with when trying to get to know unfamiliar code-bases, it's the Go codebases I've been quickest to grok, and they've yielded the fewest surprises, since the code I'm looking for is almost always where I expect it to be.
Simple example: JAX-RS running on top of Java SE. I agree, JAX-RS is not what one might call "simple". It IS complex, or I should say, it CAN be complex. But on the Happy Path, staying in the middle of the road, it's pretty sweet for knocking out HTTP-backed services. The Jersey reference implementation will do most anything you need (including stuff not "included" in raw JAX-RS). No need for a container, no need for a lot of that stuff and all the hangers-on. The core runtime is pretty broad and powerful.
Consider my current project, it uses the built in Java HTTP server. Which works! It's fast, it's functional, it's free. (Yes, it's in a com.sun.net... package, but it's not going anywhere.) It's awkward to use. It's aggravatingly raw. It follows the tenet "why be difficult, when, with just a little effort, you can be impossible."
So, I wrote a simple "servlet-esque-ish" inspired layer for response and request handling, a better path based regex-y router, and a nicer query parser for queries and forms. 500 lines. Add on a JSON library and I can process JSON-y web request/response, easily. (I also wrote my own Multipart processor -- that was another 500 lines, boy that was fun, but most folks don't need that.)
A little bit of code and the built in server is MUCH easier to use. No tomcat, no deploys, zip. ...and no dependencies (save the JSON library).
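For anyone curious what the raw built-in server looks like before any such uplifting layer, here is a minimal self-contained sketch (the handler path and response text are mine); it starts the server on an ephemeral port, hits it with the JDK's own HTTP client, and shuts down:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class MiniServer {
    static String fetchHello() throws Exception {
        // Port 0: let the OS pick a free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            HttpRequest req = HttpRequest
                .newBuilder(URI.create("http://localhost:" + port + "/hello"))
                .build();
            return HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString())
                .body();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchHello()); // hello
    }
}
```

It is, as described, awkward and raw, which is exactly why a few hundred lines of servlet-esque wrapping pay off.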
Something all of these cool frameworks and such have shown me is what's really nice to have, but at the same time, just what isn't really necessary to get work done. I mean, CDI is really neat. Very cool. But, whoo boy. So I have a single singleton to handle application life cycle and global services. It works great with tests. I have a 50 line Event Bus. I have a 100 line "Workflow Engine". 150 line java.util.Logger wrapper (which is mostly, you know, wrapper). I wrote that way back whenever they introduced varargs to java (Java 5? 6?). The modern Java logging landscape is just...oh boy. I'm content with JUL -- I can make it work.
My current project is "magic free". I think @Override is the single annotation anywhere in it. But it's comfortable to use, and the cognitive load is quite low (outside of the actual application itself, which is NOT low -- sheesh). No swearing at frameworks. It's all my fault :).
Anyway, the point is that "simple Java" lurks in there. It needs a bit of uplifting, but not a lot.
It's an uphill battle to convince my co-workers to do things my way.
While operator overloading and infix functions aren't a Java anti-pattern, I also think the language would be improved by their removal.
Using "fmt.Errorf" is lean and painless compared to defining custom errors.
This is the whole story of Go, they pick something established and reimplement a heavily cut down version of it for "reasons", then slowly catch up to competition over the next decade or so.
In practice you have to use a combination of error wrapping and custom stack trace errors for your production logs to be useful on failure. The stdlib errors really should have stack traces.
The main area they get excessively lengthy is in certain frameworks and testing tools that can add like 100 lines to the trace.
stacktraces where you need 3 vertical monitors stacked together
If you wrote code with such deep stacktraces, it's all on you.
There's a performance cost to all that excessive stack depth too, often.
Also, as open-source folks say, "a rewrite is always better". It also serves as a good security review. But companies typically don't have the resources to do complete rewrites every so often; I've only seen it at Google.
I found it hard taking over an existing Rails project - it felt frail to me, that any small change might have unexpected consequences.
Whereas when I've taken over Java projects - or come in late to an existing team - I felt quite confident getting started, even if it is a bit of a mess.
My other issue with the JVM is how much of a black box it is from a platform perspective, which makes debugging a PITA with standard ops tools like strace, gdb, etc. The JVM's overallocation of memory robs the kernel of real insight into how the workload is actually performing. When you use the JVM, you are completely locked in, and god help you if there isn't a JVM expert around to debug your thing and unravel how it translates to a platform implementation.
Then of course there's the weird licensing, its association with Oracle, managing JDK versions, its lack of "it" factor in 2025, and a huge boatload of legacy holding it back (which is not unique to Java).
I have successfully navigated my career with minimal exposure to Java, and nowadays there's a glut of highly performant languages with GC that support minimal runtimes, static compilation, and just look like regular binaries such that the problems solved by something like the Java or Python VMs just aren't as relevant anymore - they just add operational complexity.
To reiterate, I admire JG just like any tech person should. Java's success is clear and apparent, but I'm glad I don't have to use it.
My other issues with the JVM is how much of a black box it is from a platform perspective, which makes debugging a PITA
Java has one of the greatest debugging capabilities ever: dynamic breakpoints, conditional breakpoints, hell you can even restart a stack frame after hot-deploying code without a restart. You can overwrite any variable in memory, set uncaught exception breakpoints, and even have the JVM wait for a debugger to connect before starting. There is no equivalent in any other language that does _all_ of these things. And to top it off, there is no equivalent to IDEA or Eclipse for any other language.
For runtime dynamics, JMX/JConsole is good enough for daily use, and Java Flight Recorder gives you deep insight, even in a system you don't have direct access to. Hell, even running jstack on a JVM is a good debug tool. If those don't do the trick, there's plain old HPROF (similar to other languages) and the Eclipse Memory Analyzer.
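Much of what jstack reports is also available in-process through the JMX management beans; a small illustrative sketch (not from the thread):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpDemo {
    public static void main(String[] args) {
        // The same data jstack shows, obtained programmatically via JMX.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            System.out.println(info.getThreadName() + " " + info.getThreadState());
        }
    }
}
```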
Then of course there's the weird licensing,
The JVM is open source. There are no licensing issues. OpenJDK can be freely downloaded and run in production without restrictions on any use. If you really want to buy a JVM from Oracle... well thats your prerogative.
its lack of "it" factor in 2025,
sdkman
a huge boatload of legacy holding it back
what legacy code?
dynamic breakpoints, conditional breakpoints, hell you can even restart a stack frame after hot-deploying code without a restart. You can overwrite any variable in memory, set uncaught exception breakpoints, and even have the JVM wait for a debugger to connect before starting. There is no equivalent in any other language that does _all_ of these things
.NET + C# can do all of these things.
what legacy code?
The Java API has its fair share of baggage due to its extreme backward compatibility. Boolean.getBoolean[1] is one of the more accessible examples of a bad API that exists only for legacy reasons, but there are quite a number of them.
[1] https://docs.oracle.com/javase/8/docs/api/java/lang/Boolean....
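To make the gotcha concrete (a small demo of mine, not from the linked docs): Boolean.getBoolean looks up a system property with the given name rather than parsing its argument.

```java
public class BooleanGotcha {
    public static void main(String[] args) {
        // Boolean.getBoolean does NOT parse its argument -- it reads a
        // system property with that name. A classic legacy-API trap.
        System.out.println(Boolean.getBoolean("true"));        // false: no such property
        System.out.println(Boolean.parseBoolean("true"));      // true: actually parses

        System.setProperty("feature.enabled", "true");
        System.out.println(Boolean.getBoolean("feature.enabled")); // true
    }
}
```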
Mentioning Java and Python in the same breath in the context of performance is really odd. Python is nowhere near the JVM when it comes to performance.
I strongly urge reading some elementary tutorials to educate yourself.
See https://www.baeldung.com/java-application-remote-debugging for CLI based remote debugging
But most people use IDE's.
See https://www.jetbrains.com/help/idea/debugging-your-first-jav...
and https://www.jetbrains.com/help/idea/tutorial-remote-debug.ht...
Java's debugging experience is better than any language out there - with the possible exception of Common LISP. I always cry when I need to debug a project written in another language after so much comfort using Java.
My other issues with the JVM is how much of a black box it is from a platform perspective, which makes debugging a PITA
You state how you don't really use java, but the above confirms it.
Java debugging and diagnostic tooling is second to none.
And I think there is some parallel with the kernel vs GC and mmap vs buffer pools - the GC simply has better context in the scope of the application. With other processes in the picture, though, yeah there is some provisioning complexity there.
OpenJDK, the de facto standard version used by everyone, is licensed under the GPL version 2 with the classpath exception.
No offence, but you simply aren’t well informed.
For instance, Java introduced the fork/join pool for work stealing and recommended it for short-lived tasks that decomposed into smaller tasks. .NET decided to simply add work-stealing to their global thread pool. The result: sync-over-async code, which is the only way to fold an asynchronous library into a synchronous codebase, frequently results in whole-application deadlocks on .NET, and this issue is well-documented: https://blog.stephencleary.com/2012/07/dont-block-on-async-c...
Notice the solution in this blog is "convert all your sync code to async", which can be infeasible for a large existing codebase.
There are so many other cases like this that I run into. While there have been many mistakes in the Java ecosystem they've mostly been in the library/framework level so it's easier to move on when people finally realize the dead end. However, when you mess up in the standard library, the runtime, or language, it's very hard to fix, and Java seems to have gotten it more right here than anywhere else.
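For reference, the fork/join decomposition style mentioned above looks roughly like this (a toy range-sum of my own; the splitting threshold is arbitrary):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private final long lo, hi; // sums the half-open range [lo, hi)

    SumTask(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        // Small enough: sum directly instead of splitting further.
        if (hi - lo <= 1_000) {
            long sum = 0;
            for (long i = lo; i < hi; i++) sum += i;
            return sum;
        }
        // Split: fork one half into the pool (eligible for work stealing),
        // compute the other half inline, then join.
        long mid = (lo + hi) / 2;
        SumTask left = new SumTask(lo, mid);
        left.fork();
        long right = new SumTask(mid, hi).compute();
        return left.join() + right;
    }

    public static void main(String[] args) {
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(0, 1_000_000));
        System.out.println(sum); // 499999500000
    }
}
```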
But reading your message it doesn't sound like it.
The thread pool implementation has been tweaked over the years to reduce the impact of this problem. The latest tweak that will be in .NET 10:
https://github.com/dotnet/runtime/pull/112796
I’m not sure a thread pool implementation can be immune to misuse (many tasks that synchronously block on the completion of other tasks in the pool). All you can do is add more threads or try to be smarter about the order tasks are run. I’m not a thread pool expert, so I might have no idea what I’m talking about.
And it's still stable, fast and reliable with a massive ecosystem of stable, fast and reliable libraries and software. With good developer tooling, profilers and debuggers to go with it. And big enterprise support teams from RedHat, Oracle, IBM, etc. throwing in their (paid) support services.
It might not be the best language in any of the categories (speed - runtime and compile time, tooling, ecosystem, portability, employee pool), but there's pretty much almost no languages that are as good in all categories at once.
And to top it off, JVM can host other languages so it can easily interoperate with more modern takes on language design like Kotlin while still running on pretty much all major operating systems used in the wild and most CPU architectures as well. It'll run on your car's SoC, your phone and on your server. In many cases, using the same libraries and same code underneath.
syntactically compatible with C++
Not. And certainly not semantically.
You can literally write code that will compile in both.
An example, please.
if (n <= 0) return 0;
if (n == 1) return 1;
int a = 0, b = 1, temp;
for (int i = 2; i <= n; i++) {
temp = a + b;
a = b;
b = temp;
}
return b;
}
...if only the return type was "Crow", then you could .eat() that...
I think Java succeeded for the same reasons C++ succeeded - built on familiar syntax, reasonably free and "supported by" a large company. Java being a decent language is a consequence of its success more than of its original design.
Microsoft had C#, at one point IBM pushed SmallTalk. C++ for these environments is doable but going to slow you down at development a lot, as well as being much harder to secure.
At that time the dynamic alternative was Perl, and that remained true basically until Rails came along.
I would say that many things in IT are not chosen on technical merits alone. You have people that do not want to accrue any blame. Back then, by choosing what IBM endorses or what Microsoft endorses, you absolve yourself of fallout from if and when things go wrong.
Back in the 90s, it felt like IBM, Redhat, Sun kind of, sort of, got together and wanted to keep Microsoft from taking over the Enterprise space by offering Java solutions.
In the late 90s, I got stuck making a scheduling program in Java, but it had to run on the 16-bit Windows systems of the time. That was a huge pain, because the 16-bit version didn't have all the capabilities that management was expecting based on the hype. These days, I sometimes have to install enormous enterprise applications that tie up north of 32G of RAM even though they're just basic hardware management tools that would take a fraction of that if built in something like C++ with a standard GUI library. I manage to avoid Java most of the time, but it's been an occasional thorn in my side for 30 years.
The key thing I think with Java is the programming model & structure scale well with team size and with codebase size, perhaps even in a way that tolerates junior developers, outsourced sub-teams, and even lower quality developers. All of those things end up becoming part of your reality on big Enterprise products, so if the language is somehow adding some tolerance for it that is a good thing.
The other things around Syntax and such that people complain about? Those are often minor considerations once the team size and code base size get large enough. Across my career there has always been the lone guy complaining that if we did everything in a LISP derived language everything would be perfect. But that guy has almost always been the guy who worked on a small tool off by himself, not on the main product.
Java has changed a tremendous amount as well. A modern Java system has very little in common with something written before Generics and before all the Functional code has been added. Where I work now we have heavily exploited the Functional java add-ons for years, it has been fantastic.
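The functional additions referred to are, at their core, lambdas and the Stream API; a small taste (the names and data are invented):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FunctionalJava {
    // One declarative pipeline: filter, map, sort, and join.
    static String tidy(List<String> names) {
        return names.stream()
                .filter(n -> n.length() > 2)
                .map(String::toUpperCase)
                .sorted()
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        System.out.println(tidy(List.of("ada", "grace", "barbara", "al"))); // ADA, BARBARA, GRACE
    }
}
```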
Back then, most organisations had deployed Windows for staff but needed to run things on Sun servers. Java was a godsend as a free and actually cross-platform solution that let devs work on Windows and run the same thing on the corporate server infra without changes. The culture at the time would not consider deploying scripting languages for full-scale applications acceptable, so Java, with its C++-like structure but built-in cross-platform capabilities and a generous stack of batteries-included libraries (for the time), was an absolute godsend.
Things tend to form fractal systems of systems for efficiency. A cleanly delineated org chart maps to a cleanly delineated codebase.
How did he build something adopted by so many enterprises?
It does some things at scale very well and has been afforded the performance improvements of very smart people for 30y.
It’s not to say the language isn’t verbose, but one of my favourite features was the ability to write code in other languages right inside a Java app, pretty well in-line, thanks to the JVM and JSR-223.
It was possible to write Ruby or Python code via Jruby or Jython and run it in the JVM.
Clojure also runs on the JVM.
https://docs.oracle.com/javase/8/docs/technotes/guides/scrip...
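A hedged sketch of the JSR-223 entry point: whether any engine is found depends entirely on what's on the classpath (modern JDKs ship none by default; JRuby or Jython would have to be added as dependencies).

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class ScriptHosts {
    public static void main(String[] args) throws Exception {
        // JSR-223: discover whatever script engines are on the classpath.
        ScriptEngineManager manager = new ScriptEngineManager();
        for (ScriptEngineFactory f : manager.getEngineFactories()) {
            System.out.println(f.getEngineName() + " -> " + f.getNames());
        }

        // With JRuby on the classpath, evaluating Ruby is one call.
        ScriptEngine ruby = manager.getEngineByName("jruby");
        if (ruby != null) {
            ruby.eval("puts 'hello from ruby'");
        } else {
            System.out.println("no jruby engine on classpath");
        }
    }
}
```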
Decent tooling. Been around for long enough that a lot of the quirks of it are well known and documented. Basically it's a blue collar programming language without too many gotchas. Modern(ish) day Cobol.
(I'm predominantly a Java dev still, even after diversions over the years to Javascript, Python and C#).
GC. Single file modules. No "forward". The Collection suite. Fast compiles.
The magic of the ClassLoader. The ClassLoader, that was insightful. I don't know how much thought went into that when they came up with it, but, wow. That ClassLoader is behind a large swath of Java magic. It really hasn't changed much over time, but boy is it powerful.
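A toy illustration of the parent-delegation model behind much of that magic (this LoggingClassLoader is mine, not a JDK class):

```java
public class LoaderDemo {
    // Logs every class it is asked for, then follows the standard
    // parent-delegation model. Purely illustrative.
    static class LoggingClassLoader extends ClassLoader {
        LoggingClassLoader(ClassLoader parent) { super(parent); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            System.out.println("loading: " + name);
            return super.loadClass(name, resolve);
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader loader = new LoggingClassLoader(ClassLoader.getSystemClassLoader());
        Class<?> c = loader.loadClass("java.util.ArrayList");
        // Delegation means we get the very same class the parent defines.
        System.out.println(c == java.util.ArrayList.class); // true
    }
}
```

Swap the delegation step for your own bytecode lookup and you have plugins, hot deployment, app-server isolation, and so on.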
When I started Java, I started it because of the nascent Java web stack of the day: early servlets and JSP. I picked it because of two things. One, JSPs were just Servlets. A JSP was compiled down into a Servlet, and shazam, Servlets all the way down. Two, a single-language stack. Java in JSPs, Java in Servlets, Java in library code. Java everywhere. In contrast to the MS ASP (pre .NET) world.
Mono-language meant my page building controller folks could talk to my backend folks and share expertise. Big win.
Servlets were a great model. Filters were easy and powerful. Free sessions. Free database connection pools in the server. I mean, we had that in '98, '99.
And, of course, portability. Our first project was using Netscape's server, which was spitting up bits 2 weeks before we went live, so we switched to JRun in a day or two (yay standard-ish things...). Then, Management(tm) decided "No, Sun/Oracle, we're going NT/SQL Server". Oh no. But, yup, we transitioned to that in a week. A month later, the CTO was fired, and we went back to Sun/Oracle.
Java EE had a rough start, but it offered a single thing nobody else was offering. Not out of the box. Not "cheap", and that was a transaction manager, and declarative transactions on top of that. We're talking about legit "Enterprise grade" transaction manager. Before you had Tuxedo, or MS MTS. Not cheap, not "out of the box", not integrated. JBoss came out and gave all that tech away. Then Sun jumped on with early, free, Sun Java Enterprise 8 which begat Glassfish which was open source. Glassfish was amazing. Did I mention that the included message queues are part and parcel of the integrated, distributed transaction model for Java EE? Doesn't everyone get to rollback their message queue transactions when their DB commit fails? Message Driven Beans, sigh, warms my heart.
There were certainly some bad decisions in early Java EE. The component model was far too flexible for 95% of the applications and got in the way of the Happy Path. Early persistence (BMP, CMP) was just Not Good. We punted on those almost right away and just stuck with Session Beans for transaction management and JDBC. We were content with that.
The whole "everything is remote" over CORBA IIOP and such. But none of that really lasted. EJB 3 knocked it out of the park with local beans, annotations in lieu of XML, etc. Introduction of the JPA. Modern Jakarta EE is amazing, lightweight, stupid powerful (and I'm not even talking Spring, that whole Other Enterprise Stack). There's lots of baggage in there, you just don't have to use it. JAX-RS alone will take you VERY far. Just be gentle, Java Enterprise offers lots and lots of rope.
None of this speaks to the advances in the JVM. The early HotSpot JIT was amazing. "Don't mind me, I'm just going to seamlessly sneak in some binary compiled code where that stack machine stuff was a nano-second ago. I've been watching it, this is better. Off you go!" Like presents from Santa. Then there's the current rocket ship that is JDK development (this is good and bad; I still do not like the Java 9 JPMS module stuff, I think it's too intrusive for the vast majority of applications). But OpenJDK, the Graal stuff. Sheesh, I just get all light headed thinking about it.
Along with the JVM we have the JDK and its trivial install. Pretty sure I have, like, 20 of them installed on my machine, swapped out with a PATH and JAVA_HOME change. The JVM is our VM, the Servlet container is our container, Maven is our dependency manager. Our WAR files are self-contained. And none of it goes stomping on our computer's files like Paul Bunyan and Babe making lakes in Minnesota.
It's no wonder I was struggling to grok all the talk about VMs, Dockers, and containers and all that stuff folks mess with to install software. We never had to deal with that. It just was not an issue.
I can distribute source code, with a pom.xml, and a mvnw wrapper script, and anyone can build that project with pretty much zero drama. Without breaking everything on their system. And whatever IDE they're using can trivially import that project. It's also fast. My current little project, > 10K lines of code, < 3s to clean/build/package.
Obviously, there's always issues. The Stories folks hear are all true. The legacy stuff, the FactoryInterfaceFactoryImpl stuff. The Old Days. It's all real. It's imperfect.
But, boy, is it impressive. (And, hey, portable GUI folks, Java FX is pretty darn good...)
Gosling primarily uses the NetBeans IDE for development, praising its open source, Apache-licensed nature and dedicated community. He expresses frustration with developers who cling to outdated tools: “The thing that drives me nuts the most are people who are madly grasping the ’80s or the ’70s — people who still want to use Vi, which was high-tech in the ’70s.”
This, from one of the key developers in Emacs's history and genesis.
He moved on, others keep trying to live in the past.
As for Rust, there is a reason the large majority is either on VSCode or RustRover.
the large majority is on VSCode
Here, fixed it for ya. But since when is VSCode an IDE? It is just an extensible editor, not very far from Emacs or neovim. We'll see how it plays out, but I assume the IDE is a dead concept. No one develops new IDEs anymore, besides JetBrains's usual money-milking of "IDEA + plugin => new ${name}IDE".
When people discuss Go tooling, it feels like Renaissance folks rediscovering Roman city engineering.
VSCode is certainly an Integrated Development Editor, and it is such a dead concept that the one behind it is Erich Gamma, one of the key figures in the Visual Age and Eclipse lineage of IDEs.
The biggest difference is that one hardly needs to code extensions, or manually configure them, most of the time; a simple press of the install button is all that is needed to get any extension going, many of which are graphical, taking full advantage of the Web platform.
You mean tooling as it was already kind of available on Turbo Pascal for MS-DOS?
Spare me your stories of Turbo Pascal. I am not a youngster, I am not easily impressed by name-dropping $some_old_thing, and I do not care about old men yelling at clouds. You may program in Turbo Pascal if you like it.
When people discuss Go tooling, it feels like Renaissance folks rediscovering Roman city engineering.
People talk about Go tooling because it is good, lightweight, editor-agnostic, helpful, and provided with the language. There are multiple languages which fulfil some of those criteria, but not many fulfilling all of them.
VSCode is certainly an Integrated Development Editor
If VSCode is an IDE, then also emacs and neovim and your lamenting has no meaning. And of course, the one behind VSCode is Atom which was heavily influenced by Sublime text.
The biggest difference is that one hardly needs to code extensions, or manually configure them, most of the time; a simple press of the install button is all that is needed to get any extension going, many of which are graphical, taking full advantage of the Web platform.
So, basically a neovim distribution? Got it!
Emacs could be an IDE, if it came with the whole Lisp Machine for the ride, sadly it is only a subset of the whole experience.
VSCode has zero lines of Atom code in it; it started at Azure as the Monaco project.
Does a neovim distribution handle graphical development plugins, without spawning external windows?
Yeah, right.
Apparently many people need the stories, given how much they boost Go for things that are prior art.
No one but you here cares about prior art. And not even you, most likely; otherwise you would write your code in some kind of Lisp, which was first to most of these things.
Emacs could be an IDE, if it came with the whole Lisp Machine for the ride, sadly it is only a subset of the whole experience.
Lisp is just an implementation detail in (GNU) Emacs, even though it is what makes it a delight to configure. The rest is done, as everywhere, with plugins and built-in package management.
VSCode has zero lines of Atom code in it; it started at Azure as the Monaco project.
VSCode has everything from Atom in it, since it is basically a rewrite of Atom, using Electron, which was written for Atom.
Does a neovim distribution handle graphical development plugins, without spawning external windows?
What should it be, UML plugins? The coroner called; he wants his dead things back. But most stuff is handled with overlays these days, with Telescope.
Can I put my cursor on a defun in Visual Studio Code, Eclipse, or IntelliJ, mash C-M-x, and immediately have that functionality available in my running editor session with all my work? I can in Emacs, and that capability has allowed me to smooth out the rough spots in many a workflow.
Lisp is not just an "implementation detail", it's central to what makes Emacs great. GNU Emacs is a running Lisp image, which you can extend and shape at will as you work on something else. A phenomenal tool.
For programming purposes Emacs is not that different from any modern editor: there are some built-in capabilities, and the rest you get from ELPA/MELPA/Git. It has no proper plugin architecture, which is a gift and a curse at the same time, but it mostly works out. Where Emacs shines compared to vscode/neovim/etc. is that it has very solid support for prose: org-mode, denote, and Prot's color schemes, which are that good. That is why I continue to use it, even though I am not interested in Lisp per se these days and could easily replace it for programming with any other advanced editor.
As long as I am expressing gratitude, I would also like to call out the Clojure team for developing a wonderful ecosystem on top of Java and the JVM.
It must be wonderful to do work that positively affects the lives of millions of people.
I took the severance package when Taligent imploded, dropped everything I was doing at the time, and have been working with Java and its related software ever since.
It remains a shame that it didn't launch with generics though, and I still think operator overloading would have been good. Had it done so I think a lot more people would have stuck around for when the performance improved with HotSpot.
I think it's incredible with hindsight how Java countered many of the mid 90s C++ problems, especially by avoiding multiple inheritance.
This is because Java is based on an older language called Objective-C that doesn't have multiple inheritance :)
It's not based on C++, that's just the other OO language from the era people usually think of.
This is because Java is based on an older language called Objective-C that doesn't have multiple inheritance :)
No it's not, certainly not any more than it's "based" on Smalltalk.
I haven't seen things quite so bad on the .NET side at this client. Yes there's a ton of legacy ASP.NET apps. But there are also a lot of .NET Core apps. They haven't quite made it to the post Core versions of .NET, but it's still a healthier state than I see with Java. I guess all of this to say that modern versions of "ancient" programming languages are great and really do improve things. But chances are if you're working with an ancient programming language you'll be stuck maintaining legacy shit and won't ever get to utilize the shiny stuff.
This is keeping in mind that your average programmer will never even try to interview for FAANG never mind grind leetcode and programming language trivia for weeks like seems so common here.
Linux support is an afterthought and it shows. And you never know if it might be dropped next year.
The API
Which one?
These people could not care less about engaging with the subject, they are here because they feel obliged to engage in a moment of hatred of what they think is an enemy tribe.
And the bit where you got angry because I didn't reply quick enough on an internet forum shows that perhaps you need to improve your manners.
If you are a Java shop everything just works so why touch it?
In Java there's no equivalent to daemon() (unless you go out of your way to call into libc), and Java doesn't support SOCK_DGRAM for Unix sockets, so no syslog either.
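That gap is easy to check on a recent JDK; a minimal sketch, assuming JDK 16+ (JEP 380 added stream-oriented Unix sockets only) and a writable temp directory:

```java
import java.io.IOException;
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.channels.DatagramChannel;
import java.nio.channels.ServerSocketChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class UnixSocketSupport {
    // SOCK_STREAM over AF_UNIX works since JDK 16 (JEP 380).
    static boolean streamSupported() {
        Path p = Path.of(System.getProperty("java.io.tmpdir"),
                "demo-" + System.nanoTime() + ".sock");
        try (ServerSocketChannel server =
                     ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(UnixDomainSocketAddress.of(p));
            return true;
        } catch (IOException | UnsupportedOperationException e) {
            return false;
        } finally {
            try { Files.deleteIfExists(p); } catch (IOException ignored) {}
        }
    }

    // SOCK_DGRAM over AF_UNIX (what /dev/log-style syslog needs) is expected
    // to be rejected: JEP 380 gave Unix-domain support to stream channels only.
    static boolean datagramSupported() {
        try (DatagramChannel ch = DatagramChannel.open(StandardProtocolFamily.UNIX)) {
            return true;
        } catch (IOException | UnsupportedOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("SOCK_STREAM: " + streamSupported());
        System.out.println("SOCK_DGRAM:  " + datagramSupported());
    }
}
```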
.net seems to have the same issues.
everything just works" is true only for a very very narrow definition of "everything" which leaves out "daemon that works decently
For those interested as to why: https://news.ycombinator.com/item?id=43396171
A few more arguments while we're at it:
https://dotnet.microsoft.com/en-us/platform/telemetry (Linux leads with 77% of all systems invoking .NET CLI commands)
https://github.com/dotnet/runtime/blob/main/src/libraries/Sy... (first-class epoll/kqueue integration with async, much like the one Go has with goroutines via netpoll)
https://github.com/dotnet/runtime/blob/main/src/coreclr/gc/u... (GC implementation is cgroups-aware, unlike Go)
I went to a Java school. I remember my operating systems class involved writing simulated OS code in Java (for example, round robin for context switching). The argument was that it would be easier to understand the algorithms if the hardware complexities were minimized. I understand that sentiment, but I don't think Java was the right choice. Python would have accomplished the same task even better (understanding algorithms). I think there was a huge influence from industry to teach college students Java from day one. I had taught myself BASIC and some C back in high school, so it was a bit of a step backwards to learn a high-level language just to do simulated low-level OS programming.
Even as early as Java 1.1 and 1.2 he was not particularly involved in making runtime, library, or even language decisions, and later he wasn't key to generics, etc.
Mark Reinhold has been the hands-on lead since 1.1, first integrating early JITs, HotSpot, and the 1.2 10X class explosion, and has been running the team all the way through Oracle's purchase: making the JVM suitable for dynamic languages like Kotlin and Clojure, open-sourcing it, moving to a faster release cadence, pushing the JVM method and field handles that form the basis for modern language features, migrating between GCs, and on and on.
As far as I can tell, everything that makes Java great has come down to Mark Reinhold pushing and guiding.
I have no love for Oracle the big bad company. But I am deeply grateful they've managed to keep that group moving forward.
Java is a great success story. Though, to be fair, James Gosling was the spark but has not been the steward.
That's like saying Linus was only the spark for git because he spent two weeks hacking it from scratch.
The whole world uses git now.
Why couldn't we have had these things for Lisp?* I mean, if 1/1000 of the intellectual horsepower that's been thrown at Java had been thrown at Lisp, we'd all be driving to work in orbit-capable flying cars that used a teaspoon of fuel per year.
* Of course Lisp invented the insanely great IDE around 1984 but then everybody forgot about it and had to rediscover the idea 30 years later.
To me, Clojure is an "almost-Lisp" because of its lack of cons cells, its use of all the brackets on the keyboard, and its dependence on the JVM, which can't do tail calls.
I love Common Lisp because it compiles down to the metal and you can write code with it that starts instantly and runs very fast.
But all the above is more about personal taste than anything else, so maybe I should try Clojure again.
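The "can't do tail calls" point is visible from plain Java: the JVM performs no tail-call elimination, so even a call in tail position grows the stack, which is why Clojure makes you write `recur`/`loop` instead of self-recursion. A small sketch:

```java
public class TailCalls {
    // Tail-recursive on paper, but the JVM does not eliminate the tail call,
    // so a deep enough count overflows the stack.
    static long countDownRecursive(long n) {
        if (n == 0) return 0;
        return countDownRecursive(n - 1); // tail position, NOT eliminated
    }

    // What Clojure's recur compiles to, conceptually: a plain loop.
    static long countDownLoop(long n) {
        while (n > 0) n--;
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countDownLoop(10_000_000L)); // prints 0
        try {
            countDownRecursive(10_000_000L);
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError: no tail-call elimination");
        }
    }
}
```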
For a simple IPv4 address, normally representable using 4 bytes/32 bits, Java uses 56 bytes. The reason is that the Inet4Address object takes 24 B and the InetAddressHolder object takes another 32 B. The InetAddressHolder can contain not only the address but also the address family and the original hostname that was possibly resolved to the address.
For an IPv6 address, normally representable using 16 bytes/128 bits, Java uses 120 bytes. An Inet6Address contains the InetAddressHolder inherited from InetAddress and adds an Inet6AddressHolder that has additional information, such as the scope of the address and a byte array containing the actual address. This is an interesting approach, especially when compared to the implementation of UUID, which uses two longs for storing its 128 bits of data.
Java's approach causes 15x overhead for IPv4 and 7.5x overhead for IPv6, which seems excessive. What am I missing here? Can or should this be streamlined?
For my part, most of the Java code that I have written that needs to use IP addresses needs somewhere between 1 and 10 of them, so I'd never notice this overhead. If you want to write, like, a BGP server in Java I guess you should write your own class for handling IP addresses.
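Such a class can be tiny. A sketch of a hypothetical `PackedIpv4` (not a JDK API) that packs the four octets into one int, much as `UUID` packs its 128 bits into two longs:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class PackedIpv4 {
    // Pack four octets into a single int: 4 bytes of payload instead of
    // Inet4Address's object-plus-holder layout.
    static int pack(byte[] octets) {
        if (octets.length != 4) throw new IllegalArgumentException("not IPv4");
        return ((octets[0] & 0xFF) << 24) | ((octets[1] & 0xFF) << 16)
             | ((octets[2] & 0xFF) << 8)  |  (octets[3] & 0xFF);
    }

    // Back to dotted-quad text form.
    static String unpack(int addr) {
        return ((addr >>> 24) & 0xFF) + "." + ((addr >>> 16) & 0xFF) + "."
             + ((addr >>> 8) & 0xFF) + "." + (addr & 0xFF);
    }

    public static void main(String[] args) throws UnknownHostException {
        // getByName with a literal address parses it without a DNS lookup.
        int packed = pack(InetAddress.getByName("192.168.1.10").getAddress());
        System.out.println(unpack(packed)); // prints 192.168.1.10
    }
}
```

Stored in an `int[]`, that's 4 bytes per address plus array overhead, which is the kind of layout a BGP-server-in-Java would want.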
Gosling, unsurprisingly, designed Java with the NeWS model in mind, where web pages were programs, not just static HTML documents. When I got him to sign my copy of "The Java Programming Language", I asked him if Java was the revenge of NeWS. He just smiled.
I wonder what Gosling thinks of the fact that NeWS ultimately won in the end, even on Microsoft systems.
We could not depend on the printer to stay functional, though. Have you heard of a Winmodem? SPARCprinters were essentially that: they were configured as a "dumb display device" where all the imaging logic was contained in the software and run on the server. A page was written in PostScript, rendered on the print server, and dispatched to the printer as if it were a framebuffer/monitor.
Unfortunately, for whatever reason, the server software was not so reliable, or the printer hardware wasn't reliable, and because of this peculiar symbiotic parasitism, whenever our printer wedged, our server was also toast. Every process went into "D" for device wait; load averages spiked and all our work ground to a halt. We would need to pull the worker off the desktop, reboot the whole server, and start over with the printer.
That printer haunted my dreams, all through my transition from clerk, to network operator, to sysadmin, and it wasn't until 2011 that I was able to reconcile with printers in general. I still miss SunOS 4 and the whole SPARC ecosystem, but good riddance to Display PostScript.
Many of Java's novel language choices have proven unfavorable in the long run (e.g. everything is a class, and even its syntax was needlessly verbose and ceremonious from day one) and all of what makes it a halfway decent language these days are good ideas that originated in other languages, often eons ago, which Java, for some reason, often elects to rebrand with its own terminology.
That said, the maintainers also do a phenomenal job managing the evolution of the language and preserving compatibility, but from a pure programming language design standpoint it's largely a messy amalgam of great ideas from a bunch of other places awkwardly realized. Great, robust ecosystem, great platform, great management, mediocre language design.
Finally managed to get a job offer (after being unemployed for a bit) doing Python. It's starting to look like demand for JVM experience is beginning to wane. Might be time to move on anyway :shrug:
I'm old... as long as there's a steady paycheck involved, I'll code in whatever language you say.
Though, currently working on a little personal project in Scala. :)
It may not be cool to use Java for startups, but we do and are immensely productive with it.
Being older now (40+ ;-), I would suggest just using the tool that gets the job done.
In today's world, that is Java or C#. I'm highly advocating for the latter, because its ecosystem feels much more tightly integrated: I can spin up whatever application for every use case with C# in a minute, the language still evolves massively, there is enough HR-power on the market, and .NET is now cross-platform.
The language is just elegant and very efficient; it makes the job much easier.
Jasmin was written because, at the time we wrote the Java Virtual Machine book for O'Reilly, Sun had not published an assembler format for the Java Virtual Machine. Generating a binary Java .class file is pretty fiddly. It's like creating an a.out (or .exe) file by hand. Even using a Java package like JAS (a Java API for creating class files, used internally by Jasmin and written by KB Sriram), you need to know a lot about the philosophy of the Java Virtual Machine before you can write something at the Virtual Machine level and generate a Java class.
We wanted something that made it very easy for a student or programmer to explore the Java Virtual Machine, or write a new language which targets the VM, without getting into the details of constant pool indices, attribute tables, and so on.
https://en.wikipedia.org/wiki/Gosling_Emacs
Gosling Emacs was especially noteworthy because of the effective redisplay code, which used a dynamic programming technique to solve the classical string-to-string correction problem. The algorithm was quite sophisticated; that section of the source was headed by a skull-and-crossbones in ASCII art, warning any would-be improver that even if they thought they understood how the display code worked, they probably did not.