dminik 3 hours ago

One aspect that I find interesting is that Rust projects often seem deceptively small.

First, dependencies don't translate easily to the perceived size. In C++, dependencies in large projects are often vendored (or avoided entirely). And so it is easy to look at your ~400000 line codebase and go "it's slow, but there's a lot of code here after all".

Second (and a much worse problem) are macros. They hit the same issue: a macro that expands to tens or hundreds of lines can very quickly take your 10000 line project and turn it into a million line behemoth.
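
As a minimal sketch (a hypothetical macro, not from any real crate), even a tiny macro_rules! definition multiplies the number of items the compiler must type-check and lower; `cargo expand` makes the generated code visible:

```rust
// Hypothetical example: three short entries in the invocation below
// expand into three full function items the compiler must process.
macro_rules! make_getters {
    ($($name:ident: $ty:ty = $val:expr),* $(,)?) => {
        $(fn $name() -> $ty { $val })*
    };
}

make_getters! {
    answer: u32 = 42,
    greeting: &'static str = "hello",
    scale: f64 = 2.5,
}

fn main() {
    // A derive macro on a large type works the same way, just with a
    // much bigger multiplier per annotated item.
    assert_eq!(answer(), 42);
    assert_eq!(greeting(), "hello");
    assert_eq!(scale(), 2.5);
}
```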

Third are generics. They suffer from the same problem: every separate generic instantiation eats compiler CPU.
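
A quick sketch of why: each concrete type a generic function is used with produces its own compiled copy (monomorphization), so the backend does codegen once per instantiation, not once per function:

```rust
// Each call with a new concrete type below creates a separate
// monomorphized copy of `largest` for the backend to compile.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &item in &items[1..] {
        if item > max {
            max = item;
        }
    }
    max
}

fn main() {
    assert_eq!(largest(&[1u32, 5, 3]), 5);      // instantiates largest::<u32>
    assert_eq!(largest(&[1.5f64, 0.5]), 1.5);   // instantiates largest::<f64>
    assert_eq!(largest(&['a', 'z', 'm']), 'z'); // instantiates largest::<char>
}
```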

But I do want to offer a bit of an excuse for Rust, because these are great features. They turn what would have taken 100000 lines of C or 25000 lines of C++ into a few thousand lines of Rust.

However, there is definitely overuse here that makes the ecosystem seem slow. For instance, at work we use async-graphql. The library itself is great, but it's an absolute proc-macro hog. There are issues about the performance that have been open in the repository for years. You can literally feel the compiler getting slower with each type you add.

  • jvanderbot 2 hours ago

    You can literally feel the compiler getting slower for each type you add.

    YSK: The compiler performance is IIRC exponential in the "depth" of types. And oh boy does GraphQL love their nested types.

taylorallred 20 hours ago

So there's this guy you may have heard of called Ryan Fleury who makes the RAD debugger for Epic. The whole thing is made with 278k lines of C and is built as a unity build (all the code is included into one file that is compiled as a single translation unit). On a decent windows machine it takes 1.5 seconds to do a clean compile. This seems like a clear case-study that compilation can be incredibly fast and makes me wonder why other languages like Rust and Swift can't just do something similar to achieve similar speeds.

  • lordofgibbons 20 hours ago

    The more your compiler does for you at build time, the longer it will take to build; it's that simple.

    Go has sub-second build times even on massive codebases. Why? Because it doesn't do a lot at build time. It has a simple module system, a (relatively) simple type system, and leaves a whole bunch of stuff to be handled by the GC at runtime. It's great for its intended use case.

    When you have things like macros, advanced type systems, and robustness guarantees you want enforced at build time... then you have to pay for that.

    • duped 19 hours ago

      I think this is mostly a myth. If you look at Rust compiler benchmarks, while typechecking isn't _free_, it's also not the bottleneck.

      A big reason that amalgamation builds of C and C++ can absolutely fly is because they aren't reparsing headers and generating exactly one object file so the linker has no work to do.

      Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.

      Codegen is also a problem. Rust tends to generate a lot more code than C or C++, so even once the compiler is done with most of its typechecking work, the backend and assembler have a lot to churn through.

      • benreesman 4 hours ago

        The meme that static linking is slow or produces anything other than the best executables is demonstrably false and the result of surprisingly sinister agendas. Get out readelf and nm and ps sometime and do the arithmetic: most programs don't link much of glibc (and its static link is broken by design; musl is better at just about everything). Matt Godbolt has a great talk about how dynamic linking actually works that should give anyone pause.

        DLLs got their start when early windowing systems didn't quite fit on the workstations of the era in the late 80s / early 90s.

        In about 4 minutes both Microsoft and GNU were like, "let me get this straight, it will never work on another system and I can silently change it whenever I want?" Debian went along because it gives distro maintainers degrees of freedom they like and don't bear the costs of.

        Fast forward 30 years and Docker is too profitable a problem to fix by the simple expedient of calling a stable kernel ABI on anything, and don't even get me started on how penetrated everything but libressl and libsodium are. Protip: TLS is popular with the establishment because even Wireshark requires special settings and privileges for a user to see their own traffic, security patches my ass. eBPF is easier.

        Dynamic linking moves control from users to vendors and governments at ruinous cost in performance, props up bloated industries like the cloud compute and Docker industrial complex, and should die in a fire.

        Don't take my word for it, swing by cat-v.org sometime and see what the authors of Unix have to say about it.

        I'll save the rant about how rustc somehow manages to be slower than clang++ and clang-tidy combined for another day.

        • duped 2 hours ago

          I think you're confused about my comment and this thread - I'm talking about build times.

          • benreesman 9 minutes ago

            You said something false and important and I took the opportunity to educate anyone reading about why this aspect of their computing experience is a mess. All of that is germane to how we ended up in a situation where someone is calling rustc with a Dockerfile and this is considered normal.

        • jrmg 3 hours ago

          …surprisingly sinister agendas.

          Dynamic linking moves control from users to vendors and governments at ruinous cost in performance, props up bloated industries...

          This is ridiculous. Not everything is a conspiracy!

          • benreesman 12 minutes ago

            I didn't say anything was a conspiracy, let alone everything. I said inferior software is promoted by vendors on Linux, as well as on MacOS and Windows, with unpleasant consequences for users, in a way that serves those vendors and the even more powerful institutions to which they are beholden. Sinister intentions are everywhere in this business (go read the opinions of the people who run YC); that's not even remotely controversial.

            In fact, if there were anything remotely controversial about the bunch of extremely specific, extremely falsifiable claims I made, one imagines your rebuttal would have mentioned at least one.

            I said inflammatory things (Docker is both arsonist and fireman, at ruinous cost), but they're fucking true. That Alpine in the Docker jank? It links musl!

          • k__ 2 hours ago

            That's an even more reasonable fear than trusting trust, and people seem to take that seriously.

      • treyd 17 hours ago

        Not only does it generate more code, the initially generated code before optimizations is also often worse. For example, heavy use of iterators means a ton of generics being instantiated and a ton of call code for setting up and tearing down call frames. This gets heavily inlined and flattened out, so in the end it's extremely well-optimized, but it's a lot of work for the compiler. Writing it all out classically with for loops and ifs is possible, but it's harder to read.
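
        As a rough illustration, the two functions below compute the same thing; the iterator chain instantiates generic adapter types (`Map`, `Filter`) that the optimizer later inlines and flattens into something resembling the explicit loop:

```rust
// Iterator version: several generic adapters are instantiated and
// then flattened away by inlining during optimization.
fn sum_even_squares_iter(xs: &[i32]) -> i32 {
    xs.iter().map(|x| x * x).filter(|x| x % 2 == 0).sum()
}

// Classic version: roughly what the optimizer reduces the above to.
fn sum_even_squares_loop(xs: &[i32]) -> i32 {
    let mut total = 0;
    for &x in xs {
        let sq = x * x;
        if sq % 2 == 0 {
            total += sq;
        }
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4];
    assert_eq!(sum_even_squares_iter(&xs), sum_even_squares_loop(&xs));
    assert_eq!(sum_even_squares_iter(&xs), 20); // 4 + 16
}
```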

        • estebank 2 hours ago

          For loops are sugar around an Iterator instantiation:

            for i in 0..10 {}
          
          translates to roughly

            let mut iter = Range { start: 0, end: 10 }.into_iter();
            while let Some(i) = iter.next() {}
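
          A runnable version of the same desugaring (a sketch; the real expansion goes through `IntoIterator` and also handles drop scopes):

```rust
// The sugared loop.
fn sum_with_for() -> i32 {
    let mut sum = 0;
    for i in 0..10 {
        sum += i;
    }
    sum
}

// Roughly what the compiler generates: an explicit Iterator driven
// by `while let`.
fn sum_desugared() -> i32 {
    let mut sum = 0;
    let mut iter = (0..10).into_iter();
    while let Some(i) = iter.next() {
        sum += i;
    }
    sum
}

fn main() {
    assert_eq!(sum_with_for(), sum_desugared());
    assert_eq!(sum_with_for(), 45); // 0 + 1 + ... + 9
}
```
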
      • fingerlocks 16 hours ago

        The Swift compiler is definitely bottlenecked by type checking. For example, as a language requirement, generic types are left more or less intact after compilation. They are type checked independently of how they are used. This is unlike C++ templates, which effectively copy-paste the generic with the resolved types at every point of instantiation.

        This has tradeoffs: increased ABI stability at the cost of longer compile times.

        • slavapestov an hour ago

          > This has tradeoffs: increased ABI stability at the cost of longer compile times.

          Nah. Slow type checking in Swift is primarily caused by the fact that functions and operators can be overloaded on type.

          Separately-compiled generics don't introduce any algorithmic complexity and are actually good for compile time, because you don't have to re-type check every template expansion more than once.

        • willtemperley 12 hours ago

          A lot can be done by the programmer to mitigate slow builds in Swift. Breaking up long expressions into smaller ones and using explicit types where type inference is expensive for example.

          I’d like to see tooling for this to pinpoint bottlenecks - it’s not always obvious what’s making builds slow.

          • never_inline 4 hours ago

            > Breaking up long expressions into smaller ones

            If it improves compile time, that sounds like a bug in the compiler or the design of the language itself.

          • ykonstant 9 hours ago

            >I’d like to see tooling for this to pinpoint bottlenecks - it’s not always obvious what’s making builds slow.

            I second this enthusiastically.

            • glhaynes 6 hours ago

              I'll third it. I've started to see more and more cargo culting of "fixes" that I'm extremely suspicious do nothing aside from making the code bulkier.

        • windward 8 hours ago

          >This is unlike C++ templates which are effectively copy-pasting the resolved type with the generic for every occurrence of type resolution.

          Even this can lead to unworkable compile times, to the point that code is rewritten.

      • the-lazy-guy 5 hours ago

        > Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.

        Could you expand on that, please? Every time you run a dynamically linked program, it is linked at runtime (unless it explicitly avoids linking unnecessary stuff by dlopening things lazily, which pretty much never happens). If it is fine to link on every program launch, linking at build time should not be a problem at all.

        If you want to have link time optimization, that's another story. But you absolutely don't have to do that if you care about build speed.

      • windward 8 hours ago

        >Codegen is also a problem. Rust tends to generate a lot more code than C or C++

        Wouldn't you say a lot of that comes from the macros and (by way of monomorphisation) the type system?

        • jandrewrogers an hour ago

          Modern C++ in particular does a lot of similar, albeit not identical, codegen due to its extensive metaprogramming facilities. (C is, of course, dead simple.) I've never looked into it too much but anecdotally Rust does seem to generate significantly more code than C++ in cases where I would intuitively expect the codegen to be similar. For whatever reason, the "in theory" doesn't translate to "in practice" reliably.

          I suspect this leaks into both compile-time and run-time costs.

      • blizdiddy 4 hours ago

        Go is static by default and still fast as hell

        • vintagedave 4 hours ago

          Delphi is static by default and incredibly fast too.

    • ChadNauseam 19 hours ago

      That the type system is responsible for Rust's slow builds is a common and enduring myth. `cargo check` (which just does typechecking) is actually usually pretty fast. Most of the build time is spent in the code generation phase. Some macros do cause problems, as you mention, since the code that contains a macro must be compiled before the code that uses it, so they reduce parallelism.

      • rstuart4133 19 hours ago

        > Most of the build time is spent in the code generation phase.

        I can believe that, but even so it's caused by the type system monomorphising everything. When you use qsort from libc, you are using pre-compiled code from a library. When you use slice::sort(), you get custom assembly compiled to suit your application. Thus, there is a lot more code generation going on, and that is caused by the tradeoffs they've made with the type system.

        Rust's approach gives you all sorts of advantages, like fast code and strong compile-time type checking. But it comes with warts too, like fat binaries, and a bug in slice::sort() can't be fixed by just shipping a new std dynamic library, because there is no such library. It's been recompiled, just for you.

        FWIW, modern C++ (like boost) that places everything in templates in .h files suffers from the same problem. If Swift suffers from it too, I'd wager it's the same cause.

        • badmintonbaseba 7 hours ago

          It's partly by the type system. You can implement a std::sort (or slice::sort()) that just delegates to qsort or a qsort-like implementation and have roughly the same compile time performance as just using qsort straight.

          But not having to is a win, as the monomorphised sorts are just much faster at runtime than having to do an indirect call for each comparison.
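
          A rough sketch of the two approaches: the qsort-style version below funnels every comparison through a trait-object (indirect) call, so one compiled body can serve any comparator, while `sort_by` with a closure monomorphizes and can inline the comparison:

```rust
use std::cmp::Ordering;

// qsort-style: the comparator is a trait object, so each comparison
// is an indirect call, in exchange for one shared compiled body.
fn sort_dyn(items: &mut [i32], cmp: &dyn Fn(&i32, &i32) -> Ordering) {
    items.sort_by(|a, b| cmp(a, b)); // `cmp` is dispatched indirectly
}

fn main() {
    let mut v = vec![3, 1, 2];
    sort_dyn(&mut v, &|a, b| a.cmp(b));
    assert_eq!(v, [1, 2, 3]);

    // Monomorphized path: the closure has a unique type, `sort_by` is
    // instantiated for it, and the comparison can be inlined.
    let mut w = vec![30, 10, 20];
    w.sort_by(|a, b| b.cmp(a)); // descending, inlinable comparator
    assert_eq!(w, [30, 20, 10]);
}
```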

          • estebank 2 hours ago

            This is a pattern a crate author can rely on (write a function that uses generics and immediately delegates to a function that uses trait objects, or converts to the needed types eagerly, so the common logic gets compiled only once), and there have been multiple efforts to have the compiler do that automatically. It has been called polymorphization and it comes up every now and then: https://internals.rust-lang.org/t/add-back-polymorphization/...

      • tedunangst 18 hours ago

        I just ran cargo check on nushell, and it took a minute and a half. I didn't time how long it took to compile, maybe five minutes earlier today? So I would call it faster, but still not fast.

        I was all excited to conduct the "cargo check; mrustc; cc" is 100x faster experiment, but I think at best, the multiple is going to be pretty small.

        • ChadNauseam 17 hours ago

          Did you do it from a clean build? In that case, it's actually a slightly misleading metric, since Rust needs to compile macros in order to typecheck code that uses them (and therefore must also compile all the code the macros depend on). My bad for suggesting it, haha. Incremental cargo check is often a better way of seeing how long typechecking takes, since usually you haven't modified any macros that need to be recompiled. On my project at work, incremental cargo check takes `1.71s`.

          • estebank 2 hours ago

            Side note: There's an effort to cache proc macro invocations so that they get executed only once if the item they annotate hasn't changed: https://github.com/rust-lang/rust/pull/129102

            There are multiple caveats to providing this to users (we can't assume that macro invocations are idempotent, so the new behavior would have to be opt-in, and this only benefits incremental compilation), but it's on our radar.

        • CryZe 8 hours ago

          A ton of that is actually still doing codegen (for the proc macros for example).

    • cogman10 19 hours ago

      Yes but I'd also add that Go specifically does not optimize well.

      The compiler is optimized for compilation speed, not runtime performance. Generally speaking, it does well enough, especially because its use case is often applications where "good enough" is good enough (i.e., IO-heavy applications).

      You can see that with "gccgo". Slower to compile, faster to run.

      • cherryteastain 18 hours ago

        Is gccgo really faster? Last time I looked, it seemed abandoned (stuck at Go 1.18, no generics support) and was not really faster than the "actual" compiler.

        • cogman10 13 hours ago

          Digging around, looks like it's workload dependent.

          For pure computational workloads, it'll be faster. However, anything with heavy allocation will suffer, as apparently the gccgo GC and GC-related optimizations aren't as good as those of the standard gc toolchain.

    • Mawr 10 hours ago

      Not really. The root reason behind Go's fast compilation is that it was specifically designed to compile fast. The implementation details are just a natural consequence of that design decision.

      Since fast compilation was a goal, every part of the design was examined through a rough "can this be a horrible bottleneck?" lens, and discarded if so. For example, the import (package) system was designed to avoid the horrible, inefficient mess of C++. It's obvious that you never want to compile the same package more than once and that you need to support parallel package compilation. These may be blindingly obvious, but if you don't think about compilation speed at design time, you'll get this wrong and will never be able to fix it.

      As far as optimizations vs compile speed goes, it's just a simple case of diminishing returns. Since Rust has maximum possible performance as a goal, it's forced to go well into diminishing-returns territory, sacrificing a ton of compile speed for minor performance improvements. Go has far more modest performance goals, so it can get 80% of the possible performance for only 20% of the compile cost. Rust can't afford to relax its stance because it's competing with languages like C++, and to some extent C, that are willing to go to any length to squeeze out an extra 1% of performance.

    • phplovesong 2 hours ago

      That's not really true. As a counterexample, OCaml has a very advanced type system, full type inference, generics and all that jazz. Still, it's on par with, or even faster to compile than, Go.

    • jstanley 5 hours ago

      > Go has sub-second build times even on massive code-bases.

      Unless you use sqlite, in which case your build takes a million years.

      • Groxx 4 hours ago

        Yeah, I deal with multiple Go projects that take a couple minutes to link the final binary, much less build all the intermediates.

        Compilation speed depends on what you do with a language. "Fast" is not an absolute, and for most people it depends heavily on community habits. Rust habits tend to favor extreme optimizability and/or extreme compile-time guarantees, and that's obviously going to be slower than simpler code.

    • Zardoz84 19 hours ago

      Dlang compilers do more than any C++ compiler (metaprogramming, a better template system, and compile-time execution), and they're hugely faster. Language syntax design has a role here.

  • dhosek 20 hours ago

    Because Rust and Swift are doing much more work than a C compiler would? The analysis necessary for the borrow checker is not free, and likewise for a lot of other compile-time checks in both languages. C can be fast because it effectively does no compile-time checking of things beyond basic syntax, so you can call foo(char) with foo(int) and other unholy things.

    • steveklabnik 20 hours ago

      The borrow checker is usually a blip on the overall graph of compilation time.

      The overall principle is sound though: it's true that doing some work is more than doing no work. But the borrow checker and other safety checks are not the root of compile time performance in Rust.

      • kimixa 18 hours ago

        While the borrow checker is one big difference, it's certainly not the only thing the rust compiler offers on top of C that takes more work.

        Stuff like inserting bounds checks puts more work on the optimization passes and the codegen backend, as they simply have to deal with more instructions. And that then puts more symbols and larger sections into the linker's input, slowing it down. Even if the frontend "proves" a check is unnecessary, that calculation isn't free. Many of those features are related to "safety" due to the goals of the language. I doubt the syntax itself really makes much of a difference, as the parser isn't normally high in the profiled times either.

        Generally it provides stricter checks that are normally punted to a linter tool in the c/c++ world - and nobody has accused clang-tidy of being fast :P
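
        For instance (a contrived sketch), every safe index below carries a bounds check plus a panic path that the backend must either emit or prove dead:

```rust
// Each `xs[i]` compiles to a bounds check and a potential panic
// branch; those are real instructions the optimizer, codegen, and
// linker have to process unless they can be proven unreachable.
fn sum_first_three(xs: &[u64]) -> u64 {
    xs[0] + xs[1] + xs[2]
}

fn main() {
    assert_eq!(sum_first_three(&[1, 2, 3, 4]), 6);
}
```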

        • simonask 7 hours ago

          It truly is not about bounds checks. Index lookups are rare in practical Rust code, and the amount of code generated from them is minuscule.

          But it _is_ about the sheer volume of stuff passed to LLVM, as you say, which comes from a couple of places, mostly related to monomorphization (generics), but also many calls to tiny inlined functions. Incidentally, this is also what makes many "modern" C++ projects slow to compile.

          In my experience, similarly sized Rust and C++ projects seem to see similar compilation times. Sometimes C++ wins due to better parallelization (translation units in Rust are crates, not source files).

    • taylorallred 20 hours ago

      These languages do more at compile time, yes. However, I learned from Ryan's discord server that he did a unity build in a C++ codebase and got similar results (just a few seconds slower than the C code). Also, you could see in the article that most of the time was being spent in LLVM and linking. With a unity build, you nearly cut out the link step entirely. Rust and Swift do some sophisticated things (Hindley-Milner, generics, etc.) but I have my doubts that those things cause the most slowdown.

    • drivebyhooting 20 hours ago

      That's not a good example. foo(int) is analyzed by the compiler and a type conversion is inserted. The language spec might be bad, but this isn't letting the compiler cut corners.

    • jvanderbot 19 hours ago

      If you'd like the rust compiler to operate quickly:

      * Make no nested types - these slow compile times a lot

      * Include no crates, or ones that emphasize compiler speed

      C is still v. fast though. That's why I love it (and Rust).

      • windward 8 hours ago

        >Make no nested types

        I wouldn't like it that much

    • Thiez 20 hours ago

      This explanation gets repeated over and over again in discussions about the speed of the Rust compiler, but apart from rare pathological cases, the majority of time in a release build is not spent doing compile-time checks, but in LLVM. Rust has zero-cost abstractions, but the zero cost refers to runtime; sadly there's a lot of junk generated at compile time that LLVM has to work to remove. Which it does, very well, but at the cost of slower compilation.

      • vbezhenar 19 hours ago

        Is it possible to generate less junk? It sounds like the compiler developers took shortcuts which could be improved over time.

        • LtdJorge 8 hours ago

          Well, zero-cost abstractions are still abstractions. It's not junk per se, but things that will be optimized out if the IR has enough information to do so safely; basically lots of extra metadata to actually prove to LLVM that these things are safe.

        • zozbot234 18 hours ago

          You can address the junk problem manually by having generic functions delegate as much of their work as possible to non-generic or "less" generic functions, where a "less" generic function is one that depends only on a known subset of type traits, such as size or alignment. Delegating this way can help the compiler generate fewer redundant copies of your code, even if it can't avoid monomorphization altogether.

          • andrepd 9 hours ago

            Isn't something like this blocked on the lack of specialisation?

            • dwattttt 5 hours ago

              I believe the specific advice they're referring to has been stable for a while. You take your generic function & split it into a thin generic wrapper, and a non-generic worker.

              As an example, say your function takes anything that can be turned into a String. You'd write a generic wrapper that does the ToString step, then change the existing function to just take a String. That way when your function is called, only the thin outer function is monomorphised, and the bulk of the work is a single implementation.

              It's not _that_ commonly known, as it only becomes a problem for a library that becomes popular.

              • estebank 2 hours ago

                To illustrate:

                  fn foo<S: Into<String>>(s: S) {
                      fn inner(s: String) { ... }
                      inner(s.into())
                  }
        • rcxdude 18 hours ago

          Probably, but it's the kind of thing that needs a lot of fairly significant overhauls in the compiler architecture to really move the needle on, as far as I understand.

  • vbezhenar 19 hours ago

    I encountered one project in the 2000s with a few dozen KLoC of C++. It compiled in a fraction of a second on an old computer. My hello world with Boost took a few seconds to compile. So it's not just about the language; it's about structuring your code and using features with heavy compilation cost. I'm pretty sure that you can write Doom with C macros and it won't compile fast. I'm also pretty sure that you can write Rust code in a way that compiles very fast.

    • taylorallred 19 hours ago

      I'd be very interested to see a list of features/patterns and the cost that they incur on the compiler. Ideally, people should be able to use the whole language without having to wait so long for the result.

      • vbezhenar 18 hours ago

        So there are a few distinctive patterns I observed in that project. Please note that many of these patterns are considered anti-patterns by many people, so I don't necessarily suggest using them.

        1. Use pointers, and do not include the header file for a class if you only need a pointer to that class. I think that's a pretty established pattern in C++. So if you want to declare a pointer to a class in your header, you just write `class SomeClass;` instead of `#include "SomeClass.hpp"`.

        2. Do not use the STL or IOstreams. That project used only libc and the POSIX API. I know that the author really hated the STL and considered its inclusion in the standard library a huge mistake.

        3. Avoid generic templates unless absolutely necessary. Templates force you to write your code in the header file, so it'll be parsed multiple times for every include, compiled into multiple copies, etc. And even when you use templates, try to split the class into generic and non-generic parts, so some code can be moved from header to source. Generally prefer run-time polymorphism to generic compile-time polymorphism.

        • dieortin 16 hours ago

          Why use C++ at that point? Also, pre-declaring classes instead of including the corresponding headers has quite a few drawbacks.

          • maccard 9 hours ago

            References, for one. Also there’s a huge difference between “avoid templates unless necessary” and “don’t use templates”.

          • kortilla 15 hours ago

            RAII? shared pointers?

      • kccqzy 19 hours ago

        Templates as one single feature can be hugely variable. Its effect on compilation time can be unmeasurable. Or you can easily write a few dozen lines that will take hours to compile.

    • herewulf 8 hours ago

      My anecdata would be that the average C++ developer puts includes inside every header file, which include more headers, to the point where everything is including everything else, a single .cpp file draws in huge swaths of unnecessary code, and the project takes eons to compile on a fast computer.

      That's my 2000s development experience. Fortunately I've spent a good chunk of the 2010s and most of the 2020s using other languages.

      The classic XKCD compilation comic exists for a reason.

  • tptacek 20 hours ago

    I don't think it's interesting to observe that C code can be compiled quickly (so can Go, a language designed specifically for fast compilation). It's not a problem intrinsic to compilation; the interesting hard problem is to make Rust's semantics compile quickly. This is a FAQ on the Rust website.

  • weinzierl 8 hours ago

    This is sometimes called amalgamation, and you can do it in Rust as well, either manually or with tools. The point is that, apart from very specific niches, it is just not a practical approach.

    It's not that it can't be done but that it usually is not worth the hassle and our goal should be for compilation to be fast despite not everything being in one file.

    Turbo Pascal is a prime example for a compiler that won the market not least because of its - for the time - outstanding compilation speed.

    In the same vein, a language can be designed for fast compilation. Pascal in general was designed for single-pass compilation, which made it naturally fast. All the necessary forward declarations were a pain though, and the victory of languages that are not designed for single-pass compilation proves that, while doable, it was not worth it in the end.

  • ceronman 20 hours ago

    I bet that if you take those 278k lines of code and rewrite them in simple Rust, without using generics, or macros, and using a single crate, without dependencies, you could achieve very similar compile times. The Rust compiler can be very fast if the code is simple. It's when you have dependencies and heavy abstractions (macros, generics, traits, deep dependency trees) that things become slow.

    • taylorallred 18 hours ago

      I'm curious about that point you made about dependencies. This Rust project (https://github.com/microsoft/edit) is made with essentially no dependencies, is 17,426 lines of code, and on an M4 Max it compiles in 1.83s debug and 5.40s release. The code seems pretty simple as well. Edit: Note also that this is 10k more lines than the OP's project. This certainly makes those deps suspicious.

      • MindSpunk 16 hours ago

        The 'essentially no dependencies' claim isn't entirely true. It depends on the 'windows' crate, which is Microsoft's auto-generated Win32 bindings. The 'windows' crate is huge and would lead to hundreds of thousands of LoC being pulled in.

        There are some other dependencies in there that are only used when building for tests/benchmarking, like serde, zstd, and criterion. You would need to be certain you're building only the library and not the test harness to be sure those aren't being built too.

    • 90s_dev 20 hours ago

      I can't help but think the borrow checker alone would slow this down by at least 1 or 2 orders of magnitude.

      • steveklabnik 19 hours ago

        Your intuition would be wrong: the borrow checker does not take much time at all.

      • tomjakubowski 19 hours ago

        The borrow checker is really not that expensive. On a random example, a release build of the regex crate, I see <1% of time spent in borrowck. >80% is spent in codegen and LLVM.

      • FridgeSeal 19 hours ago

        Again, as has been often repeated and backed up with data, the borrow checker is a tiny fraction of a Rust app's build time; the biggest chunk of time is spent in LLVM.

  • john-h-k 7 hours ago

    My C compiler, which is pretty naive and around ~90,000 lines, can compile _itself_ in around 1 second. Clang can do it in like 0.4.

    The simple truth is a C compiler doesn’t need to do very much!

  • ben-schaaf 14 hours ago

    Every claim I've seen about unity builds being fast just never rings true to me. I just downloaded the rad debugger and ran the build script on a 7950x (about as fast as you can get). A debug build took 5s, a release build 34s with either gcc or clang.

      Maybe it's an MSVC thing - it does seem to have some multi-threading stuff. In any case, raddbg non-clean builds take longer than any of my Rust projects.

    • maccard 9 hours ago

      I use unity builds day in day out. The speed up is an order of magnitude on a 2m+ LOC project.

      If you want to see the difference download unreal engine and compile the editor with and without unity builds enabled.

      My experience has been the polar opposite of yours - similar size rust projects are an order of magnitude slower than C++ ones. Could you share an example of a project to compare with?

      • ben-schaaf 2 hours ago

        > If you want to see the difference download unreal engine and compile the editor with and without unity builds enabled.

        UE doesn't use a full unity build, it groups some files together into small "modules". I can see how this approach may have some benefits; you're trading off a faster clean build for a slower incremental build.

        I tested compiling UnrealFrontend, and a default setup with the hybrid unity build took 148s. I noticed it was only using half my cores due to memory constraints. I disabled unity and upped the parallelism and got 182s, so 22% slower while still using less memory. A similarly configured unity build was 108s, so best case is ~2x.

        On the other hand, only changing the file TraceTools/SFilterPreset.cpp resulted in 10s compilation time under a unity build, and only 2s without unity.

        I can see how this approach has its benefits (and drawbacks). But to be clear this isn't what projects like raddbg and sqlite3 are doing. They're doing a single translation unit for the entire project. No parallelism, no incremental builds, just a single compiler invocation. This is usually what I've seen people mean by a unity build.

        > My experience has been the polar opposite of yours - similar size rust projects are an order of magnitude slower than C++ ones. Could you share an example of a project to compare with?

        I just did a release build of egui in 35s, about the same as raddbg's release build. This includes compiling dependencies like wgpu, serde and about 290 other dependencies which add up to well over a million lines of code.

        Note I do have mold configured as my linker, which speeds things up significantly.

      • almostgotcaught 2 hours ago

        How many LOC is unreal? I'm trying to estimate whether making LLVM compatible with UNITY_BUILD would be worth the effort.

        EDIT: I signed up to get access to Unreal to take a look at how they do unity builds, and it turns out they have their own build tool (not CMake) that orchestrates the build. So does anyone know (can someone comment) whether unity builds for them (Unreal) means literally one file for literally all project source files, or if it's "higher-granularity" like UNITY_BUILD in CMake (i.e., a single file per object)?

        • Culonavirus an hour ago

          At least 10M (from what I remember, maybe more now)

  • motorest 5 hours ago

    > This seems like a clear case-study that compilation can be incredibly fast (...)

    Have you tried troubleshooting a compiler error in a unity build?

    Yeah.

  • glandium 15 hours ago

    That is kind of surprising. The sqlite "unity" build has about the same number of lines of C and takes a lot longer than that to compile.

  • Aurornis 20 hours ago

    > makes me wonder why other languages like Rust and Swift can't just do something similar to achieve similar speeds.

    One of the primary features of Rust is the extensive compile-time checking. Monomorphization is also a complex operation, which is not exclusive to Rust.

    C compile times should be very fast because it's a relatively low-level language.

    On the grand scale of programming languages and their compile-time complexity, C code is closer to assembly language than modern languages like Rust or Swift.

  • js2 20 hours ago

    "Just". Probably because there's a lot of complexity you're waving away. Almost nothing is ever simple as "just".

    • pixelpoet 20 hours ago

      At a previous company, we had a rule: whoever says "just" gets to implement it :)

      • forrestthewoods 19 hours ago

        I wanted to ban “just” but your rule is better. Brilliant.

    • taylorallred 20 hours ago

      That "just" was too flippant. My bad. What I meant to convey is "hey, there's some fast compiling going on here and it wasn't that hard to pull off. Can we at least take a look at why that is and maybe do the same thing?".

      • steveklabnik 20 hours ago

        > "hey, there's some fast compiling going on here and it wasn't that hard to pull off. Can we at least take a look at why that is and maybe do the same thing?".

        Do you really believe that nobody over the course of Rust's lifetime has ever taken a look at C compilers and thought about if techniques they use could apply to the Rust compiler?

        • taylorallred 20 hours ago

          Of course not. But it wouldn't surprise me if nobody thought to use a unity build. (Maybe they did. Idk. I'm curious).

          • steveklabnik 20 hours ago

            Rust and C have differences around compilation units: Rust's already tend to be much larger than C on average, because the entire crate (aka tree of modules) is the compilation unit in Rust, as opposed to the file-based (okay not if you're on some weird architecture) compilation unit of C.

            Unity builds are useful for C programs because they tend to reduce header processing overhead, whereas Rust does not have the preprocessor or header files at all.

            They also can help with reducing the number of object files (down to one from many), so that the linker has less work to do; this is already sort of done (though not down to literally one) due to what I mentioned above.

            In general, the conventional advice is to do the exact opposite: breaking large Rust projects into more, smaller compilation units can help do less "spurious" rebuilding, so smaller changes have less overall impact.

            Basically, Rust's compile time issues lie elsewhere.
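            The "more, smaller compilation units" advice in practice usually means a Cargo workspace. A minimal hypothetical layout (member names invented, not from the thread):

            ```toml
            # Top-level Cargo.toml: each member compiles as its own crate,
            # so editing `web` does not force `core` to rebuild.
            [workspace]
            resolver = "2"
            members = ["core", "web", "cli"]
            ```
            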

          • ameliaquining 19 hours ago

            Can you explain why a unity build would help? Conventional wisdom is that Rust compilation is slow in part because it has too few translation units (one per crate, plus codegen units which only sometimes work), not too many.

  • troupo 9 hours ago

    There's also Jonathan Blow's jai where he routinely builds an entire game from scratch in a few seconds (hopefully public beta will be released by the end of this year).

  • TZubiri 7 hours ago

    I guess you can do that, but if for some reason you needed to compile separately, (suppose you sell the system to a third party to a client, and they need to modify module 1, module 2 and the main loop.) It would be pretty trivial to remove some #include "module3.c" lines and add some -o module3 options to the compiler. Right?

    I'm not sure what Rust or docker have to do with this basic issue, it just feels like young blood attempting 2020 solutions before exploring 1970 solutions.
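    For what it's worth, the `#include "module3.c"` idea above is exactly how a unity build works: include the .c files directly and invoke the compiler once. A tiny hypothetical sketch (file names invented):

    ```shell
    # Create a "module" and a main file that includes it directly.
    cat > module1.c <<'EOF'
    int add(int a, int b) { return a + b; }
    EOF
    cat > main.c <<'EOF'
    #include <stdio.h>
    #include "module1.c"   /* unity build: include the .c, not a header */
    int main(void) { printf("%d\n", add(2, 3)); return 0; }
    EOF

    cc -o app main.c   # one translation unit, one compiler invocation
    ./app              # prints 5
    ```

    Splitting a module back out is then just replacing the `#include "module1.c"` with a header and compiling module1.c separately.
    
    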

  • rowanG077 8 hours ago

    C hardly requires any high-effort work at compile time. No templates, no generics, super simple types, no high-level structures.

    • dgb23 5 hours ago

      Are we seeing similar compilation speed when a Rust program doesn't use these types of features?

  • maxk42 19 hours ago

    Rust is doing a lot more under the hood. C doesn't track variable lifetimes or ownership, has no generics, doesn't handle dependency management, and doesn't do compile-time execution (beyond the limited language that is the preprocessor). The rust compiler also makes intelligent (scary intelligent!) suggestions when you've made a mistake: it needs a lot of context to be able to do that.

    The rust compiler is actually pretty fast for all the work it's doing. It's just an absolutely insane amount of additional work. You shouldn't expect it to compile as fast as C.

rednafi 18 hours ago

I’m glad that Go went the other way around: compilation speed over optimization.

For the kind of work I do — writing servers, networking, and glue code — fast compilation is absolutely paramount. At the same time, I want some type safety, but not the overly obnoxious kind that won’t let me sloppily prototype. Also, the GC helps. So I’ll gladly pay the price. Not having to deal with sigil soup is another plus point.

I guess Google’s years of experience led to the conclusion that, for software development to scale, a simple type system, GC, and wicked fast compilation speed are more important than raw runtime throughput and semantic correctness. Given the amount of networking and large-scale infrastructure software written in Go, I think they absolutely nailed it.

But of course there are places where GC can’t be tolerated or correctness matters more than development speed. But I don’t work in that arena and am quite happy with the tradeoffs that Go made.

  • paldepind2 6 hours ago

    > I guess Google’s years of experience led to the conclusion that, for software development to scale, a simple type system, GC, and wicked fast compilation speed are more important than raw runtime throughput and semantic correctness.

    I'm a fan of Go, but I don't think it's the product of some awesome collective Google wisdom and experience. Had it been, I think they'd have come to the conclusion that statically eliminating null pointer exceptions was a worthwhile endeavor, just to mention one thing. Instead, I think it's just the product of some people at Google making a language the way they wanted to.

    • melodyogonna 2 hours ago

      But those people at Google were veteran researchers who wanted to make a language that could scale for Google's use cases; these things are well documented.

      For example, Ken Thompson has said his job at Google was just to find things he could make better.

      • nine_k 2 minutes ago

        They also built a language that can be learned in a weekend (well, now two) and is small enough for a fresh grad hire to pick up on the job.

        Go has a very low barrier to entry, but also a relatively low ceiling. The proliferation of codegen tools for Go is a testament to its limited expressive power.

        It doesn't mean that Go didn't hit a sweet spot. For certain tasks, it very much did.

  • mike_hearn 9 hours ago

    > fast compilation is absolutely paramount. At the same time, I want some type safety, but not the overly obnoxious kind that won’t let me sloppily prototype. Also, the GC helps

    Well, that point in the design space was already occupied by Java, which also has extremely fast builds. Go exists primarily because the designers wanted to make a new programming language, as far as I can tell. It has some nice implementation aspects, but it picked up its users mostly from the Python/Ruby/JS world rather than C/C++/Java, which was the original target market they had in mind (i.e. Google servers). Scripting language users were in the market for a language that had a type system but not one that was too advanced, and which kept the scripting "feel" of very fast turnaround times. But not Java, because that was old and unhip, and all the interesting intellectual space like writing libraries and giving conference talks was camped on already.

    • frollogaston 3 minutes ago

      Main feature of Golang was greenthreading. Java has had no good way to do IO-heavy multitasking, leading to all those async/promise frameworks and stuff that jack up your code. I cannot even read the Java code we have at work. Java recently got virtual threads, but even if that fixes the problem, it'll be a while before things change to that.

    • loudmax 3 hours ago

      As a system administrator, I vastly prefer to deploy Go programs over Java programs. Go programs are typically distributed as a single executable file with no reliance on external libraries. I can usually run `./program -h` and it tells me about all the flags.

      Java programs rely on the JVM, of which there are many variants. Run time options are often split into multiple XML files -- one file for logging, another to control the number of threads and so on. Checking for the running process using `ps | grep` yields some long line that wraps the terminal window, or doesn't fit neatly into columns shown in `htop` or `btop`.

      These complaints are mostly about conventions and idioms, not the languages themselves. I appreciate that the Java ecosystem is extremely powerful and flexible. It is possible to compile Java programs into standalone binaries, though I rarely see these in practice. Containers can mitigate the messiness, and that helps, up until the point when you need to debug some weird runtime issue.

      I wouldn't argue that people should stop programming in Java, as there are places where it really is the best choice. For example deep legacy codebases, or where you need the power of the JVM for dynamic runtime performance optimizations.

      There are a lot of places where Go is the best choice (eg. simple network services, CLI utilities), and in those cases, please, please deploy simple Go programs. Most of the time, developers will reach for whatever language they're most comfortable with.

      What I like most about Go is how convenient it is, by default. This makes a big difference.

    • rednafi 42 minutes ago

      Java absolutely does not fill in the niche that Go targeted. Even without OO theology, JVM applications are heavy and memory intensive. Plus the startup time of the VM alone is a show stopper for the type of work I do. Also yes, Java isn’t hip and you couldn’t pay me to write it anymore.

    • rsanheim 8 hours ago

      Java still had slow startup and warmup time circa 2005-2007, on the order of 1-3 seconds for hello world and quite a bit more for real apps. That is horrendous for anything CLI based.

      And you left out classloader/classpath/JAR dependency hell, which was horrid circa late 90s/early 2000s...and I'm guessing was still a struggle when Go really started development. Especially at Google's scale.

      Don't get me wrong, Java has come a long way and is a fine language and the JVM is fantastic. But the java of 2025 is not the same as mid-to-late 2000s.

      • mike_hearn 4 hours ago

        Maybe so, although I don't recall it being that bad.

        But Go wasn't designed for CLI apps. It was designed for writing highly multi-threaded servers at Google, according to the designers, hence the focus on features like goroutines. And in that context startup time just doesn't matter. Startup time of servers at Google was (in that era) dominated by cluster scheduling, connecting to backends, loading reference data and so on. Nothing that a change in programming language would have fixed.

        Google didn't use classloader based frameworks so that also wasn't relevant.

    • k__ 2 hours ago

      "it picked up its users mostly from the Python/Ruby/JS world rather than C/C++/Java"

      And with the increasing performance of Bun, it seems that Go is about to get a whooping by JS.

      (Which isn't really true, as most of the Bun perf comes from Zig, but they are targeting JS Devs.)

      • rednafi 36 minutes ago

        Runtimes like Bun, Deno, or type systems like TypeScript don’t change the fact it’s still JS underneath — a crappily designed language that should’ve never left throwable frontend code.

        None of these runtimes make JS anywhere even close to single-threaded Go perf, let alone multithreaded (goroutine) perf.

  • frollogaston 9 minutes ago

    Same but with Python and NodeJS cause I'm doing less performance-critical stuff. Dealing with type safety and slow builds would cost way more than it's worth.

  • galangalalgol 17 hours ago

    That is exactly what go was meant for and there is nothing better than picking the right tool for the job. The only foot gun I have seen people run into is that parallelism with mutable shared state through channels can be subtly and exploitably wrong. I don't feel like most people use channels like that though? I use rust because that isn't the job I have. I usually have to cram slow algorithms into slower hardware, and the problems are usually almost but not quite embarrassingly parallel.

    • bjackman 8 hours ago

      I think a lot of the materials that the Go folks put out in the early days encourage a very channel-heavy style of programming that leads to extremely bad places.

      Nowadays the culture seems to have evolved a bit. I now go into high alert mode if I see a channel cross a function boundary or a goroutine that wasn't created via errgroup or similar.

      People also seem to have chilled out about the "share by communicating" thing. It's usually better to just use a mutex and I think people recognise that now.
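      A minimal Go sketch of the "just use a mutex" point (illustrative code, not from the comment): the shared counter is guarded directly instead of funneling every update through a channel-owning goroutine.

      ```go
      package main

      import (
      	"fmt"
      	"sync"
      )

      // counter shares state by locking, not by communicating.
      type counter struct {
      	mu sync.Mutex
      	n  int
      }

      func (c *counter) inc() {
      	c.mu.Lock()
      	defer c.mu.Unlock()
      	c.n++
      }

      func main() {
      	var c counter
      	var wg sync.WaitGroup
      	for i := 0; i < 100; i++ {
      		wg.Add(1)
      		go func() {
      			defer wg.Done()
      			c.inc()
      		}()
      	}
      	wg.Wait()
      	fmt.Println(c.n) // always 100; no channel plumbing needed
      }
      ```
      
      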

      • rednafi 35 minutes ago

        This is true. I have been writing Go for years and still think channels are a bit too low level. They probably would've benefited from a different layer of abstraction.

  • mark38848 11 hours ago

    What are obnoxious types? Types either represent the data correctly or not. I think you can force types to shut up the compiler in any language including Haskell, Idris, PureScript...

    • Mawr 10 hours ago

      I'd say you already get like 70% of the benefit of a type system with just the basic "you can't pass an int where string is expected". Being able to define your own types based on the basic ones, like "type Email string", so it's no longer possible to pass a "string" where "Email" is expected gets you to 80%. Add Result and Optional types (or arguably just sum types if you prefer) and you're at 95%. Anything more and you're pushing into diminishing returns.
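      A quick Go sketch of the `type Email string` step (illustrative names): a defined type stops a typed string from being passed by accident, though untyped string constants still convert.

      ```go
      package main

      import "fmt"

      // Email is a distinct type wrapping string.
      type Email string

      func send(to Email) {
      	fmt.Println("sending to", to)
      }

      func main() {
      	addr := Email("a@example.com")
      	send(addr) // ok

      	var s string = "a@example.com"
      	_ = s
      	// send(s) // compile error: cannot use s (variable of type string) as Email
      }
      ```
      
      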

      • hgomersall 8 hours ago

        Well it depends what you're doing. 95% is like, just your opinion man. The rust type system allows, in many cases, APIs that you cannot use wrongly, or are highly resistant to incorrect usage, but to do that requires careful thinking about. To be clear, such APIs are just as relevant internally to a project as externally if you want to design a system that is long term maintainable and robust and I would argue is the point when the type system starts to get really useful (rather than diminishing returns).

        • rednafi 27 minutes ago

          > The rust type system allows, in many cases, APIs that you cannot use wrongly, or are highly resistant to incorrect usage, but to do that requires careful thinking about

          I need none of that guarantee and all of the compilation speed along with a language where juniors in my team can contribute quickly. Different problem space.

    • ratorx 8 hours ago

      This might work for the types you create, but what about all the code written in the language that expects the “proper” structure?

      > Types either represent the data or not

      This is definitely required, but it is only really the first step. Where types get really useful is when you need to change them later on. The key aspects here are how easily you can change them, and how much the language tooling can help.

    • throwawaymaths 3 hours ago

      > Types either represent the data correctly or not.

      No. two types can represent the same payload, but one might be a simple structure, the other one could be three or twenty nested type template abstractions deep, and created by a proc macro so you can't chase down how it was made so easily.

  • silverwind 7 hours ago

    You can have the best of both worlds: A fast, but sloppy compiler and slow, but thorough checkers/linters. I think it's ideal that way, but rust seems to have chosen to needlessly combine both actions into one.

  • danielscrubs 6 hours ago

    One day I would like to just change Pascal's syntax a bit to be Pythonic and just blow the socks off junior and Go developers.

    • the_sleaze_ 2 hours ago

      That's what they did to Erlang with Elixir and now there are a lot of people saying it's the Greatest Of All Time.

      I'd be interested in this project if you do decide to pursue it.

    • rednafi 32 minutes ago

      Sounds like the guy who wanted to write curl in a weekend. /s

  • ode 16 hours ago

    Is Go still in heavy use at Google these days?

    • fsmv 3 hours ago

      Go has never been in heavy use at Google

      • melodyogonna 2 hours ago

        Isn't it heavily used in Google Cloud?

    • hu3 15 hours ago

      What would they use for networking if not Go?

      • homebrewer 5 hours ago

        Last time I paid any attention to Google's high level conference presenters (like Titus Winters), they almost didn't use Go at all. Judging by the sibling comment, this hasn't changed much. For some reason people are thinking that half of Google is written in Go at this point, when in reality if you listen to what they themselves are saying, it's 99% C++ and Java, with a tiny bit of Python and other languages where it makes sense.

        It's just a project from a few very talented people who happen to draw their salary from Google's coffers.

      • surajrmal 9 hours ago

        C++ and Java. Go is still used, but it's never caught up to the big two.

ahartmetz 21 hours ago

That person seems to be confused. Installing a single, statically linked binary is clearly simpler than managing a container?!

  • jerf 21 hours ago

    Also strikes me as not fully understanding what exactly docker is doing. In reference to building everything in a docker image:

    "Unfortunately, this will rebuild everything from scratch whenever there's any change."

    In this situation, with only one person as the builder, with no need for CI or CD or whatever, there's nothing wrong with building locally with all the local conveniences and just slurping the result into a docker container. Double-check any settings that may accidentally add paths if the paths have anything that would bother you. (In my case it would merely reveal that, yes, someone with my username built it and they have a "src" directory... you can tell how worried I am about both those tidbits by the fact I just posted them publicly.)

    It's good for CI/CD in a professional setting to ensure that you can build a project from a hard drive, a magnetic needle, and a monkey trained to scratch a minimal kernel onto it, and bootstrap from there, but personal projects don't need that.
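    Concretely, "slurping the result into a docker container" can be as small as this (binary name and paths are hypothetical):

    ```dockerfile
    # Assumes the binary was already built on the host, e.g. `cargo build --release`.
    FROM debian:stable-slim
    COPY target/release/mysite /usr/local/bin/mysite
    CMD ["mysite"]
    ```
    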

    • scuff3d 18 hours ago

      Thank you! I got a couple minutes in and was confused as hell. There is no reason to do the builds in the container.

      Even at work, I have a few projects where we had to build a Java uber jar (all the dependencies bundled into one big jar) and when we need it containerized we just copy the jar in.

      I honestly don't see much reason to do builds in the container unless there is some limitation in my CICD pipeline where I don't have access to necessary build tools.

      • mike_hearn 9 hours ago

        It's pretty clear that this whole project was god-tier level procrastination so I wouldn't worry too much about the details. The original stated problem could have been solved with a 5-line shell script.

    • linkage 16 hours ago

      Half the point of containerization is to have reproducible builds. You want a build environment that you can trust will be identical 100% of the time. Your host machine is not that. If you run `pacman -Syu`, you no longer have the same build environment as you did earlier.

      If you now copy your binary to the container and it implicitly expects there to be a shared library in /usr/lib or wherever, it could blow up at runtime because of a library version mismatch.

      • missingdays 4 hours ago

        Nobody is suggesting to copy the binary to the Docker container.

        When developing locally, use `cargo test` in your cli. When deploying to the server, build the Docker image on CI. If it takes 5 minutes to build it, so be it.

  • hu3 21 hours ago

    From the article, the goal was not to simplify, but rather to modernize:

    > So instead, I'd like to switch to deploying my website with containers (be it Docker, Kubernetes, or otherwise), matching the vast majority of software deployed any time in the last decade.

    Containers offer many benefits. To name some: process isolation, increased security, standardized logging and mature horizontal scalability.

    • adastra22 20 hours ago

      So put the binary in the container. Why does it have to be compiled within the container?

      • hu3 20 hours ago

        That is what they are doing. It's a 2 stage Dockerfile.

        First stage compiles the code. This is good for isolation and reproducibility.

        Second stage is a lightweight container to run the compiled binary.

        Why is the author being attacked (by multiple comments) for not making things simpler when that was never claimed as the goal? They are modernizing it.

        Containers are good practice for CI/CD anyway.

        • AndrewDucker 20 hours ago

          I'm not sure why "complicate things unnecessarily" is considered more modern.

          Don't do what you don't need to do.

          • hu3 20 hours ago

            You realize the author is compiling a Rust webserver for a static website right?

            They are already long past the point of "complicate things unnecessarily".

            A simple Dockerfile pales in comparison.

        • MobiusHorizons 20 hours ago

          That’s a reasonable deployment strategy, but a pretty terrible local development strategy

          • taberiand 17 hours ago

            Devcontainers are a good compromise though - you can develop within a context that can be very nearly identical to production; with a bit of finagling you could even use the same dockerfile for the devcontainer, and the build image and the deployed image

        • adastra22 16 hours ago

          Because he spends a good deal of the intro complaining that this makes his dev practice slow. So don’t do it! It has nothing to do with docker but rather the fact he is wiping the cache on every triggered build.

    • dwattttt 19 hours ago

      Mightily resisting the urge to be flippant, but all of those benefits were achieved before Docker.

      Docker is a (the, in some areas) modern way to do it, but far from the only way.

    • a3w 19 hours ago

      Increased security compared to bare hardware, lower than VMs. Also, lower than Jails and RKT (Rocket) which seems to be dead.

    • eeZah7Ux 18 hours ago

      > process isolation, increased security

      no, that's sandboxing.

  • vorgol 20 hours ago

    Exactly. I immediately thought of the grug brain dev when I read that.

MangoToupe 20 hours ago

I don't really consider it to be slow at all. It seems about as performant as any other language of this complexity, and it's far faster than the 15 minute C++ and Scala build times I'd place in the same category.

  • mountainriver 14 hours ago

    I also don’t understand this, the rust compiler hardly bothers me at all when I’m working. I feel like this is due to how bad it was early on and people just sticking to that narrative

  • BanterTrouble 4 hours ago

    The memory usage is quite large compared to C/C++ when compiling. I use Virtual Machines for Demos on my YouTube Channel and compiling something large in Rust requires 8GB+.

    In C/C++ I don't even have to worry about it.

    • gpm 15 minutes ago

      I can't agree, I've had C/C++ builds of well known open source projects try to use >100GB of memory...

    • windward 4 hours ago

      I can't say the same. Telling people to use `-j$(nproc)` in lieu of `-j` to avoid the wrath of the OOM-killer is a rite of passage

  • randomNumber7 19 hours ago

    When C++ templates are Turing complete, it is pointless to complain about the compile times without considering the actual code :)

adastra22 20 hours ago

As a former C++ developer, claims that rust compilation is slow leave me scratching my head.

  • eikenberry 20 hours ago

    Which is one of the reasons why Rust is considered to be targeting C++'s developers. C++ devs already have the Stockholm syndrome needed to tolerate the tooling.

    • MyOutfitIsVague 20 hours ago

      Rust's compilation is slow, but the tooling is just about the best that any programming language has.

      • adastra22 9 hours ago

        Slow compared to what? I’m still scratching my head at this. My cargo builds are insanely fast, never taking more than a minute or two even on large projects. The only ahead-of-time compiled language I’ve used with faster compilation speed is Go, and that is a language specifically designed around (and arguably crippled by) the requirement for fast compilation. Rust is comparable to C compilation, and definitely faster than C++, Haskell, Java, Fortran, Algol, and Common Lisp.

      • GuB-42 17 hours ago

        How good is the debugger? "edit and continue"? Hot reload? Full IDE?

        I don't know enough Rust, but I find these aspects are seriously lacking in C++ on Linux, and it is one of the few things I think Windows has it better for developers. Is Rust better?

        • steveklabnik 14 hours ago

          > debugger

          I've only ever really used a debugger on embedded, where we used gdb. I know VS Code has a debugger that works, and I'm sure other IDEs do too.

          > edit and continue

          Hard to do in a pre-compiled language with no runtime, if you're asking about what I think you're asking about.

          > Hot reload

          Other folks gave you good links, but this stuff is pretty new, so I wouldn't claim that this is great and often good and such.

          > Full IDE

          I'm not aware of Rust-specific IDEs, but many IDEs have good support for Rust. VS Code is the most popular amongst users according to the annual survey. The Rust Project distributes an official LSP server, so you can use that with any editor that supports it.

          • izacus 8 hours ago

            So the answer is very clear "no" on all accounts, just like for other languages built by people who don't understand the value of good tooling.

        • adastra22 16 hours ago

          No idea because I never do that. Nor does any rust programmer I know. Which may answer your question ;)

    • galangalalgol 16 hours ago

      Also, modern C++ with value semantics is more functional than many other languages people might come to Rust from, which keeps the borrow checker from being as annoying. If people are used to making webs of stateful classes with references to each other, the borrow checker is horrific; but that is because that design pattern is horrific if you multithread it.

  • MobiusHorizons 20 hours ago

    Things can still be slow in absolute terms without being as slow as C++. The issues with compiling C++ are incredibly well understood and documented. It is one of the worst languages on earth for compile times. Rust doesn’t share those language level issues, so the expectations are understandably higher.

    • int_19h 18 hours ago

      But it does share some of those issues. Specifically, while Rust generics aren't as unstructured as C++ templates, the main burden is actually from compiling all those tiny instantiations, and Rust monomorphization has the same exact problem responsible for the bulk of its compile times.
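      A small illustration of that cost (a sketch, not code from the thread): every concrete type used with a generic function gets its own compiled copy, while a `dyn` version is compiled once and dispatched through a vtable.

      ```rust
      use std::fmt::Display;

      // Monomorphized: the compiler emits a separate copy per concrete T.
      fn show_generic<T: Display>(x: T) {
          println!("{x}");
      }

      // Type-erased: one copy in the binary, dispatched at runtime.
      fn show_dyn(x: &dyn Display) {
          println!("{x}");
      }

      fn main() {
          show_generic(1);   // instantiation #1: show_generic::<i32>
          show_generic(2.5); // instantiation #2: show_generic::<f64>
          show_dyn(&1);      // no new instantiation
          show_dyn(&2.5);    // still the same single copy
      }
      ```

      Each extra instantiation is more code for LLVM to optimize, which is where most of the build time goes.
      
      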

    • const_cast 16 hours ago

      Rust shares pretty much every language-level issue C++ has with compile times, no? Monomorphization explosion, turing-complete compile time macros, complex type system.

      • steveklabnik 14 hours ago

        There's a lot of overlap, but not that simple. Unless you also discount C issues that C++ inherits. Even then, there's subtleties and differences between the two that matter.

  • oreally 7 hours ago

    Classic case of:

    New features: yes

    Talking to users and fixing actual problems: lolno, I CBF

  • shadowgovt 20 hours ago

    I thoroughly enjoy all the work C does on encapsulation and on splitting compilation into separate compile and link steps... only to have C++ come along and undo almost all of it through the simple expedient of requiring templates for everything.

    Oops, changed one template in one header. And that impacts... 98% of my code.

namibj 20 hours ago

Incremental compilation good. If you want, freeze the initial incremental cache after a single fresh build to use for building/deploying updates, to mitigate the risk of intermediate states gradually corrupting the cache.

Works great with docker: upon new compiler version or major website update, rebuild the layer with the incremental cache; otherwise just run from the snapshot and build newest website update version/state, and upload/deploy the resulting static binary. Just set so that mere code changes won't force rebuilding the layer that caches/materializes the fresh clean build's incremental compilation cache.

  • maccard 9 hours ago

    The intermediates for my project are 150GB+ alone. Last time I worked with docker images that large we had massive massive problems.

AndyKelley 20 hours ago

My homepage takes 73ms to rebuild: 17ms to recompile the static site generator, then 56ms to run it.

    andy@bark ~/d/andrewkelley.me (master)> zig build --watch -fincremental
    Build Summary: 3/3 steps succeeded
    install success
    └─ run exe compile success 57ms MaxRSS:3M
       └─ compile exe compile Debug native success 331ms
    Build Summary: 3/3 steps succeeded
    install success
    └─ run exe compile success 56ms MaxRSS:3M
       └─ compile exe compile Debug native success 17ms
    watching 75 directories, 1 processes
  • whoisyc 20 hours ago

    Just like every submission about C/C++ gets a comment about how great Rust is, every submission about Rust gets a comment about how great Zig is. Like a clockwork.

    Edit: apparently I am replying to the main Zig author? Language evangelism is by far the worst part of Rust and has likely stirred up more anti Rust sentiment than “converting” people to Rust. If you truly care for your language you should use whatever leverage you have to steer your community away from evangelism, not embrace it.

    • AlienRobot 4 hours ago

      If you can't be proud about a programming language you made what is even the point?

  • qualeed 20 hours ago

    Neat, I guess?

    This comment would be a lot better if it engaged with the posted article, or really had any sort of insight beyond a single compile time metric. What do you want me to take away from your comment? Zig good and Rust bad?

    • kristoff_it 19 hours ago

      I think the most relevant thing is that building a simple website can (and should) take milliseconds, not minutes, and that -- quoting from the post:

      > A brief note: 50 seconds is fine, actually!

      50 seconds should actually not be considered fine.

      • qualeed 18 hours ago

        As you've just demonstrated, that point can be made without even mentioning Zig, let alone copy/pasting some compile time stuff with no other comment or context. Which is why I thought (well, hoped) there might be something more to it than just a dunk attempt.

        Now we get all of this off-topic discussion about Zig. Which I guess is good for you Zig folk... But it's pretty off-putting for me.

        whoisyc's comment is extremely on point. As the VP of community, I would really encourage thinking about what they said.

        • kristoff_it 17 hours ago

          > As you've just demonstrated, that point can be made without even mentioning Zig, let alone copy/pasting some compile time stuff with no other comment or context. Which is why I thought (well, hoped) there might be something more to it than just a dunk attempt.

          Having concrete proof that something can be done more efficiently is extremely important and, no, I haven't "demonstrated" anything, since my earlier comment would have had way less substance to it without the previous context.

          The comment from Andrew is not just random compiler stats, but a datapoint showing a comparable example having dramatically different performance characteristics.

          You can find in this very HN submission various comments that assume that Rust's compiler performance is impossible to improve because of reasons that actually are mostly (if not entirely) irrelevant. Case in point, see people talking about how Rust compilation must take longer because of the borrow checker (and other safety checks) and Steve pointing out that, no, actually that part of the compilation pipeline is very small.

          > Now we get all of this off-topic discussion about Zig.

          So no, I would argue the opposite: this discussion is very much on topic.

        • maccard 9 hours ago

          I disagree. Zig and go are perfect frames of reference to say “actually no, Rust really is slow. Here are examples for you to go and see for yourself”

  • taylorallred 20 hours ago

    @AndyKelley I'm super curious what you think the main factors are that make languages like Zig super fast at compiling where languages like Rust and Swift are quite slow. What's the key difference?

    • steveklabnik 20 hours ago

      I'm not Andrew, but Rust has made several language design decisions that make compiler performance difficult. Some aspects of compiler speed come down to that.

      One major difference is the way each project considers compiler performance:

      The Rust team has always cared to some degree about this. But, from my recollection of many RFCs, "how does this impact compiler performance" wasn't a first-class concern. And that also doesn't really speak to a lot of the features that were basically implemented before the RFC system existed. So while it's important, it's secondary to other things. And so while a bunch of hard-working people have put in a ton of work to improve performance, they also run up against these more fundamental limitations at the limit.

      Andrew has pretty clearly made compiler performance a first-class concern, and that's affected language design decisions. Naturally this leads to a very performant compiler.

      • rtpg 12 hours ago

        > Rust has made several language design decisions that make compiler performance difficult

        Do you have a list off the top of your head/do you know of a decent list? I've now read many "compiler slow" thoughtpieces by many people and I have yet to see someone point at a specific feature and say "this is just intrinsically harder".

        I believe that it likely exists, but would be good to know what feature to get mad at! Half joking of course

        • Mawr 10 hours ago

          You can have your house built fast, cheap, or well. Pick two; or a bit of all three that adds up to the same effort required. You can't have all three.

          You can't have a language with 100% of the possible runtime perf, 100% of the possible compile speed and 100% of the possible programmer ease-of-use.

          At best you can abuse the law of diminishing returns aka the 80-20 rule, but that's not easy to balance and you run the risk of creating a language that's okay at everything, but without any strong selling points, like the stellar runtime performance Rust is known for.

          So a better way to think about it is: Given Rust's numerous benefits, is having subpar compilation time really that big of a deal?

          • rtfeldman 14 minutes ago

            > Given Rust's numerous benefits, is having subpar compilation time really that big of a deal?

            As someone who uses Rust as a daily driver at work at zed.dev (about 600K LoC of Rust), and Zig outside of work on roc-lang.org (which was about 300K LoC of Rust before we decided to rewrite it in Zig, in significant part because of Rust's compilation speed), yes - it is an absolutely huge deal.

            I like a lot of things about Rust, but its build times are my biggest pain point.

        • steveklabnik 38 minutes ago

          Brian Anderson wrote up his thoughts here, and it's a good intro to the topic: https://www.pingcap.com/blog/rust-compilation-model-calamity...

          Let's dig into this bit of that, to give you some more color:

          > Split compiler/package manager — although it is normal for languages to have a package manager separate from the compiler, in Rust at least this results in both cargo and rustc having imperfect and redundant information about the overall compilation pipeline. As more parts of the pipeline are short-circuited for efficiency, more metadata needs to be transferred between instances of the compiler, mostly through the filesystem, which has overhead.

          > Per-compilation-unit code-generation — rustc generates machine code each time it compiles a crate, but it doesn’t need to — with most Rust projects being statically linked, the machine code isn’t needed until the final link step. There may be efficiencies to be achieved by completely separating analysis and code generation.

          Rust decided to go with the classic separate compilation model that languages like C use. Let's talk about that compared to Zig, since it was already brought up in this thread.

          So imagine we have a project, A, and it depends on B. B is a huge library, 200,000 lines of code, but we only use one function from it in A, and that function is ten lines. Yes, this is probably a bad project management decision, but we're using extremes here to make a point.

          Cargo will compile B first, and then A, and then link things together. That's the classic model. And it works. But it's slow: rust had to compile all 200,000 lines of code in B, even though we only are gonna need ten lines. We do all of this work, and then we throw it away at the end. A ton of wasted time and effort. This is often mitigated by the fact that you compile B once, and then compile A a lot, but this still puts a lot of pressure on the linker, and generics also makes this more complex, but I'm getting a bit astray of the main point here, so I'll leave that alone for now.

          Zig, on the other hand, does not do this. It requires that you compile your whole program all at once. This means that they can drive the compilation process beginning from main, in other words, only compile the code that's actually reachable in your program. This means that in the equivalent situation, Zig only compiles those ten lines from B, and never bothers with the rest. That's just always going to be faster.

          Of course, there are pros and cons to both of these decisions, Rust made the choice it did here for good reasons. But it does mean it's just going to be slower.

        • mike_hearn 9 hours ago

          Rust heavily uses value types with specialized generics, which explodes the work needed by the compiler. It can - sometimes - improve performance. But it always slows down compilation.

    • AndyKelley 20 hours ago

      Basically, not depending on LLVM or LLD. The above is only possible because we invested years into making our own x86_64 backend and our own linker. You can see all the people ridiculing this decision 2 years ago https://news.ycombinator.com/item?id=36529456

      • unclad5968 20 hours ago

        LLVM isnt a good scapegoat. A C application equivalent in size to a rust or c++ application will compile an order of magnitude quicker and they all use LLVM. I'm not a compiler expert, but it doesn't seem right to me that the only possible path to quick compilation for Zig was a custom backend.

        • MobiusHorizons 20 hours ago

          Be that as it may, many C compilers are still an order of magnitude faster than LLVM. Probably the best example is tcc, although it is not the only one. C is a much simpler language than rust, so it is expected that compilation should take less time for C. That doesn’t mean llvm isn’t a significant contributor to compilation speed. I believe cranelift compilation of rust is also much faster than the llvm path

          • unclad5968 19 hours ago

            > That doesn’t mean llvm isn’t a significant contributor to compilation speed.

            That's not what I said. I said it's unlikely that fast compilation cannot be achieved while using LLVM which, I would argue, is proven by the existence of a fast compiler that uses LLVM.

        • int_19h 18 hours ago

          It will compile an order of magnitude quicker because it often doesn't do the same thing - e.g. functions that are aggressively inlined in C++ or Rust or Zig would be compiled separately and linked normally, and generally there's less equivalent of compile-time generics in C code (because you have to either spell out all the instantiations by hand or use preprocessor or a code generator to do something that is two lines of code in C++).

      • zozbot234 19 hours ago

        The Rust folks have cranelift and wild BTW. There are alternatives to LLVM and LLD, even though they might not be as obvious to most users.

      • VeejayRampay 7 hours ago

        what is even the point of quoting reactions from two years ago?

        this is a terrible look for your whole community

        • elktown an hour ago

          Honestly I think it's good to highlight it. As a industry we're too hampered by "Don't even try that, use the existing thing" and it's causing these end results.

    • coolsunglasses 20 hours ago

      I'm also curious because I've (recently) compiled more or less identical programs in Zig and Rust and they took the same amount of time to compile. I'm guessing people are just making Zig programs with less code and fewer dependencies and not really comparing apples to apples.

      • kristoff_it 19 hours ago

        Zig is starting to migrate to custom backends for debug builds (instead of using LLVM) plus incremental compilation.

        All Zig code is built in a single compilation unit and everything is compiled from scratch every time you change something, including all dependencies and all the parts of the stdlib that you use in your project.

        So you've been comparing Zig rebuilds that do all the work every time with Rust rebuilds that cache all dependencies.

        Once incremental is fully released you will see instant rebuilds.

        • metaltyphoon 17 hours ago

          When does this land in Zig? Will aarch64 be supported?

          • mlugg 16 hours ago

            When targeting x86_64, the self-hosted backend is already enabled by default on the latest builds of Zig (when compiling in Debug mode). The self-hosted aarch64 backend currently isn't generally usable (so we still default to LLVM when targeting aarch64), but it's likely to be the next ISA we focus on codegen for.

            • metaltyphoon 16 hours ago

              I assume x86_64 is Linux only correct?

              • AndyKelley 14 hours ago

                Not quite- any ELF or MachO target is enabled by default already. Windows is waiting on some COFF linker bug fixes.

    • AlienRobot 4 hours ago

      One difference that Zig has is that it doesn't have multiline comments or multiline strings, meaning that the parser can parse any line correctly without context. I assume this makes parallelization trivial.

      There is ino operator overloading like C, so A + B can only mean one thing.

      You can't redeclare a variable, so foo can only map to one thing.

      The list goes on.

      Basically it was designed to compile faster, and that means many issues on Github have been getting rejected in order to keep it that way. It's full of compromises.

  • nicoburns 19 hours ago

    My non-static Rust website (includes an actual webserver as well as a react-like framework for templating) takes 1.25s to do an incremental recompile with "cargo watch" (which is an external watcher that just kills the process and reruns "cargo run").

    And it can be considerably faster if you use something like subsecond[0] (which does incremental linking and hotpatches the running binary). It's not quite as fast as Zig, but it's close.

    However, if that 331ms build above is a clean (uncached) build then that's a lot faster than a clean build of my website which takes ~12s.

    [0]: https://news.ycombinator.com/item?id=44369642

    • AndyKelley 19 hours ago

      The 331ms time is mostly uncached. In this case the build script was already cached (must be re-done if the build script is edited), and compiler_rt was already cached (must be done exactly once per target; almost never rebuilt).

  • ww520 20 hours ago

    Nice. Didn't realize zig build has --watch and -fincremental added. I was mostly using "watchexec -e zig zig build" for recompile on file changes.

  • vlovich123 20 hours ago

    Zig isn’t memory safe though right?

    • pixelpoet 20 hours ago

      It isn't a lot of things, but I would argue that its exceptionally (heh) good exception handling model / philosophy (making it good, required, and performant) is more important than memory safety, especially when a lot of performance-oriented / bit-banging Rust code just gets shoved into Unsafe blocks anyway. Even C/C++ can be made memory safe, cf. https://github.com/pizlonator/llvm-project-deluge

      What I'm more interested to know is what the runtime performance tradeoff is like now; one really has to assume that it's slower than LLVM-generated code, otherwise that monumental achievement seems to have somehow been eclipsed in very short time, with much shorter compile times to boot.

      • jorvi 14 hours ago

        > especially when a lot of performance-oriented / bit-banging Rust code just gets shoved into Unsafe blocks anyway. Even C/C++ can be made memory safe, cf.

        Your first claim is unverifiable and the second one is just so, so wrong. Even big projects with very talented, well-paid C or C++ devs eventually end up with CVEs, ~80% of them memory-related. Humans are just not capable of 0% error rate in their code.

        If Zig somehow got more popular than C/C++, we would still be stuck in the same CVE bog because of memory unsafety. No thank you.

        • dgb23 5 hours ago

          > If Zig somehow got more popular than C/C++, we would still be stuck in the same CVE bog because of memory unsafety. No thank you.

          Zig does a lot of things to prevent or detect memory safety related bugs. I personally haven't encountered a single one so far, while learning the language.

          > ~80% of them memory-related.

          I assume you're referencing the 70% that MS has published? I think they categorized null pointer exceptions as memory safety bugs as well among other things. Zig is strict about those, has error unions, and is strict and explicit around casting. It can also detect memory leaks and use after free among other things. It's a language that's very explicit about a lot of things, such as control flow, allocation strategies etc. And there's comptime, which is a very potent tool to guarantee all sorts of things that go well beyond memory safety.

          I almost want to say that your comment presents a false dichotomy in terms of the safety concern, but I'm not an expert in either Rust or Zig. I think however it's a bit broad and unfair.

      • vlovich123 19 hours ago

        > Even C/C++ can be made memory safe, cf. https://github.com/pizlonator/llvm-project-deluge

        > Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (each pointer in memory has a corresponding capability, not visible to the C address space)

        With significant performance and memory overhead. That just isn't the same ballpark that Rust is playing in although hugely important if you want to bring forward performance insensitive C code into a more secure execution environment.

        • mike_hearn 8 hours ago

          Fil-C has advanced a lot since I last looked at it:

          > Fil-C is currently 1.5x slower than normal C in good cases, and about 4x slower in the worst cases.

          with room for optimization still. Compatibility has improved massively too, due to big changes to how it works. The early versions were kind of toys, but if Filip's claims about the current version hold up then this is starting to look like a very useful bit of kit. And he has the kind of background that means we should take this seriously. There's a LOT of use cases for taking stuff written in C and eliminating memory safety issues for only a 50% slowdown.

    • kristoff_it 19 hours ago

      How confident are you that memory safety (or lack thereof) is a significant variable in how fast a compiler is?

    • ummonk 19 hours ago

      Zig is less memory safe than Rust, but more than C/C++. Neither Zig nor Rust is fundamentally memory safe.

      • Ar-Curunir 19 hours ago

        What? Zig is definitively not memory-safe, while safe Rust, is, by definition, memory-safe. Unsafe rust is not memory-safe, but you generally don't need to have a lot of it around.

        • ummonk 18 hours ago

          Safe Rust is demonstrably not memory-safe: https://github.com/Speykious/cve-rs/tree/main

          • steveklabnik 17 hours ago

            This is a compiler bug. This has no bearing on the language itself. Bugs happen, and they will be fixed, even this one.

            • ummonk 14 hours ago

              It's a 10 year old bug which will eventually be fixed but may require changes to how Rust handles type variance.

              Until you guys write an actual formal specification, the compiler is the language.

              • steveklabnik 14 hours ago

                It’s a ten year old bug because it has never been found in the wild, ever, in those ten years. Low impact, high implementation effort bugs take less priority than bugs that affect real users.

                The project is adopting Ferrocene for the spec.

                • ummonk 14 hours ago

                  Ferrocene is intended to document the behavior of the current version of the rustc compiler, so it's just an effort to formalize "the compiler is the language".

                  Yes, the soundness hole itself is low impact and doesn't need to be prioritized but it undermines the binary "Zig is definitively not memory-safe, while safe Rust, is, by definition, memory-safe" argument that was made in response to me. Now you're dealing with qualitative / quantitative questions of practical impact, in which my original statement holds: "Zig is less memory safe than Rust, but more than C/C++. Neither Zig nor Rust is fundamentally memory safe."

                  You can of course declare that Safe Rust is by definition memory safe, but that doesn't make it any more true than declaring that Rust solves the halting problem or that it proves P=NP. RustBelt is proven sound. Rust by contrast, as being documented by Ferrocene, is currently fundamentally unsound (though you won't hit the soundness issues in practice).

                  • _flux 4 hours ago

                    I believe these two statements should show the fundamental difference:

                    - If a safe Rust program exhibits a memory safety problem, it is a bug in the Rust compiler that is to be fixed - If a Zig program exhibits a memory safety problem, it is a bug in the Zig program that needs to be fixed, not in the compiler

                    Wouldn't you agree?

                    > Ferrocene is intended to document the behavior of the current version of the rustc compiler, so it's just an effort to formalize "the compiler is the language".

                    I must admit I haven't read the specification, but I doubt they attempt to be "bug for bug" compatible in the sense that the spec enumerates memory safety bugs present in the Rust compiler. But am I then mistaken?

                    • ummonk 2 hours ago

                      No, I don't agree. A compiler bug is something that gets fixed in a patch release after it's reported, or perhaps some platform-specific regression that gets fixed in the next release after it's reported. What we're discussing by contrast is a soundness hole in the language itself - one which will most likely require breaking changes to the language to close (i.e. some older programs that were perfectly safe will fail to compile as a side effect of tightening up the Rust language to prevent this soundness hole).

                      As to the Ferrocene specification, it explicitly states "Any difference between the FLS and the behavior of the Rust compiler is considered an error on our part and the FLS will be updated accordingly."

                      Proposals to fix the soundness hole in Rust either change the variance rules themselves, or require where clauses in certain places. Either of these changes would require corresponding changes to chapter 4 of the Ferrocene specification.

                      • steveklabnik an hour ago

                        > As to the Ferrocene specification, it explicitly states "Any difference between the FLS and the behavior of the Rust compiler is considered an error on our part and the FLS will be updated accordingly."

                        Right, this is from before it's adopted as the actual spec, because it was from outside the project, and so could not be.

                        Also, these goalposts are moving: it was "Rust doesn't have a spec" and now it's "I don't like the spec."

                        Fixing this soundness hole does not require a breaking change to the language. It is an implementation bug, not a problem with the language as specified. But even if it were, Rust's policies around soundness do allow for this, and the project has done it in the past.

                      • Ar-Curunir 2 hours ago

                        And Rust has and will make those breaking changes, while Zig will likely not. In fact there are documented and blessed ways to break memory safety in Zig, and no one is calling them soundness bugs!

                        I really don’t see how you can claim with a straight face that the two approaches are the same.

                    • vlovich123 2 hours ago

                      > If a safe Rust program exhibits a memory safety problem, it is a bug in the Rust compiler that is to be fixed - If a Zig program exhibits a memory safety problem, it is a bug in the Zig program that needs to be fixed, not in the compiler

                      That is the absolute best description of memory safety I’ve heard expressed.

        • Graziano_M 16 hours ago

          The second you have any `unsafe`, Rust is _by definition_ not memory-safe.

          • Ar-Curunir 8 hours ago

            By that definition, Python is not memory-safe, Java is not memory-safe, Go is not memory-safe, and so on. All of these languages contain escape hatches to do memory-unsafe stuff, yet no one is calling them memory unsafe.

            • ummonk 2 hours ago

              Go is more memory unsafe than Java or Rust. Data races in concurrent Go code can cause memory corruption, unlike in concurrent Java code. Safe Rust is designed to avoid data races altogether using static analysis.

          • Meneth 5 hours ago

            And the majority of the Rust standard library uses `unsafe`.

            • Measter 3 hours ago

              Prove it. Show me the stats that the standard library is over 50% unsafe.

        • rurban 6 hours ago

          By definition yes. There were a lot of lies to persuade managers. You can write a lot into your documentation.

          But by implementation and spec definitely not.

  • echelon 20 hours ago

    Zig is a small and simple language. It doesn't need a complicated compiler.

    Rust is a large and robust language meant for serious systems programming. The scope of problems Rust addresses is large, and Rust seeks to be deployed to very large scale software problems.

    These two are not the same and do not merit an apples to apples comparison.

    edit: I made some changes to my phrasing. I described Zig as a "toy" language, which wasn't the right wording.

    These languages are at different stages of maturity, have different levels of complexity, and have different customers. They shouldn't be measured against each other so superficially.

    • ummonk 19 hours ago

      This is an amusing argument to make in favor of Rust, since it's exactly the kind of dismissive statement that Ada proponents make about other languages including Rust.

    • steveklabnik 20 hours ago

      Come on now. This isn't acceptable behavior.

      (EDIT: The parent has since edited this comment to contain more than just "zig bad rust good", but I still think the combative-ness and insulting tone at the time I made this comment isn't cool.)

      • echelon 20 hours ago

        > but I still think the combative-ness and insulting tone at the time I made this comment isn't cool

        Respectfully, the parent only offers up a Zig compile time metric. That's it. That's the entire comment.

        This HN post about Rust is now being dominated by a cheap shot Zig one liner humblebrag from the lead author of Zig.

        I think this thread needs a little more nuance.

        • steveklabnik 20 hours ago

          FWIW, I think your revised comment is far better, even though I disagree with some of the framing, there's at least some substance there.

          Being frustrated by perceived bad behavior doesn't mean responding with more bad behavior is a good way to improve the discourse, if that's your goal here.

          • echelon 20 hours ago

            You're 100% right, Steve. Thank you for your voice of moderation. You've been amazing to this community.

            • steveklabnik 20 hours ago

              It's all good. I'm very guilty of bad behavior myself a lot of the time. It's on all of us to give gentle nudges when we see each other getting out of line. I deserve to be told the same if you see me doing this too!

        • Mawr 9 hours ago

          > Respectfully, the parent only offers up a Zig compile time metric. That's it. That's the entire comment.

          That's correct, but slinging cheap shots at each other is not how discussions on this site are supposed to be.

          > I think this thread needs a little more nuance.

          Yes, but your comment offers none.

feelamee 8 hours ago

> Vim hangs when you open it

you can enable word wrapping as a workaround ( `:set wrap`). Lifehack: it can be hard to navigate in such file with just `h, j, k, l`, but you can use `gh, gj, etc`. With `g` vim will work with visual lines, while without it with just lines splitted with LF/CRLF

  • mmh0000 7 hours ago

    With a little bit of vimrc magic you can make it transparent:

      "Make k/j up/down work more naturally by going to the next displayed line vs
      "going to the next logical line (for when word-wrapping is on):
      noremap k gk
      noremap j gj
      noremap <up> gk
      noremap <down> gj
      "Same as above, but for arrow keys in insert mode:
      inoremap <up> <Esc>gka
      inoremap <down> <Esc>gja
kenoath69 20 hours ago

Where is Cranelift mentioned

My 2c on this is nearly ditching rust for game development due to the compile times, in digging it turned out that LLVM is very slow regardless of opt level. Indeed it's what the Jai devs have been saying.

So Cranelift might be relevant for OP, I will shill it endlessly, took my game from 16 seconds to 4 seconds. Incredible work Cranelift team.

  • norman784 20 hours ago

    Nice, I checked a while ago and was no support for macOS aarch64, but seems that now it is supported.

  • lll-o-lll 19 hours ago

    Wait. You were going to ditch rust because of 16 second build times?

    • Mawr 9 hours ago

      "Wait. You were going to ditch subversion for git because of 16 second branch merge times?"

      Performance matters.

    • metaltyphoon 18 hours ago

      Over time that adds up when your coding consists of REPL like workflow.

    • kenoath69 13 hours ago

      Pulling out Instagram 100 times in every workday, yes, it's a total disaster

      • johnisgood 5 hours ago

        It may also contribute to smoking. :D Or (over-)eating... or whatever your vice is.

    • sarchertech 18 hours ago

16 seconds is infuriating for something that needs to be manually tested, like "does this jump feel too floaty?"

      But it’s also probable that 16 seconds was fairly early in development and it would get much worse from there.

duped 19 hours ago

A lot of people are replying to the title instead of the article.

> To get your Rust program in a container, the typical approach you might find would be something like:

If you have `cargo build --target x86_64-unknown-linux-musl` in your build process you do not need to do this anywhere in your Dockerfile. You should compile and copy into /sbin or something.

If you really want to build in a docker image I would suggest using `cargo --target-dir=/target ...`, then run with `docker run --mount type=bind,...` and copy out of the bind mount into /bin or wherever.

  • remram 2 hours ago

    The author dismissed that option saying "I value that docker build can have a clean environment every time", so this is self-inflicted.

  • edude03 16 hours ago

    Many docker users develop on arm64-darwin and deploy to x86_64 (g)libc, so I don't think that'll work generally.

    • duped 14 hours ago

      Those users are wrong :shrug:

ozgrakkurt 18 hours ago

The Rust compiler is very, very fast, but the language has too many features.

The slowness is because everyone writes code with generics and macros in Java Enterprise style in order to show they are smart with Rust.

This is really sad to see but most libraries abuse codegen features really hard.

You have to write a lot of things manually if you want fast compilation in rust.

Compilation speed of code just doesn’t seem to be a priority in general with the community.

  • aquariusDue 16 hours ago

    Yeah, for application code in my experience the more I stick to the dumb way to do it the less I fight the borrow checker along with fewer trait issues.

    Refactoring seems to take about the same time too so no loss on that front. After all is said and done I'm just left with various logic bugs to fix which is par for the course (at least for me) and a sense of wondering if I actually did everything properly.

    I suppose maybe two years from now we'll have people suggesting avoiding generics and tempering macro usage. These days most people have heard the advice about not stressing over cloning and unwrapping (though expect is much better imo) on the first pass, more or less.

    Something something shiny tool syndrome?

  • skeezyboy 4 hours ago

    >Compilation speed of code just doesn’t seem to be a priority in general with the community.

    they have only one priority, memory safety (from a certain class of memory bugs)

edude03 15 hours ago

First time someone I know in real life has made it to the HN front page (hey sharnoff, congrats) anyway -

I think this post (accidentally?) conflates two different sources of slowness:

1) Building in docker 2) The compiler being "slow"

They mention they could use bind mounts, yet want a clean build environment - personally, I think that may be misguided. Rust with incremental builds is actually pretty fast, and the time you lose fighting Docker's caching would likely be made up in build times - since you'd generally build and deploy way more often than you'd fight the cache (and in that case you'd delete the cache and build from scratch anyway)

So - for developers who build rust containers, I highly recommend either using cache mounts or building outside the container and adding just the binary to the image.
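
The cache-mount route the parent recommends can be sketched as a minimal BuildKit Dockerfile (image tags, paths, and the `myapp` binary name are illustrative, not from the article):

```dockerfile
# syntax=docker/dockerfile:1
FROM rust:1-alpine AS builder
WORKDIR /app
COPY . .
# Persist the cargo registry and target dir across builds so only
# changed crates are recompiled, while the image itself stays clean.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/app/target \
    cargo build --release && \
    cp target/release/myapp /usr/local/bin/myapp

FROM alpine
COPY --from=builder /usr/local/bin/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

The `cp` out of the cache mount matters: cached directories aren't part of the image layer, so the binary has to be copied somewhere persistent before the `RUN` step ends.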

2) The compiler being slow - having experienced OCaml, Go and Scala for comparison, the Rust compiler is slower than Go and OCaml, sure, but for non-interactive (i.e., not REPL-like) workflows this tends not to matter in my experience - realistically, using incremental builds in dev mode takes seconds, and once the code is working, you push to CI, at which point you can often accept the (worst case?) scenario that it takes 20 minutes to build your container, since you're free to go do other things.

So while I appreciate the deep research and great explanations, I don't think the Rust compiler is actually slow, just slower than what people might be used to coming from TypeScript or Go, for example.

amelius 8 hours ago

Meanwhile, other languages have a JIT compiler which compiles code as it runs. This would be great for development even if it turns out to be slower overall.

  • akkad33 8 hours ago

    Actually JITs can be faster than AOT compilation because they can be optimized for the architecture they are currently running on. There were claims that Julia, a JIT language, can beat C in some benchmarks.

    • amelius 7 hours ago

      In fact, JITs can be faster because they can specialize code, i.e. make optimizations based on live data.

smcleod 19 hours ago

I've got to say, when I come across an open source project and realise it's in Rust, I flinch a bit, knowing how incredibly slow the build process is. It's certainly been one of the deterrents to learning it.

s_ting765 7 hours ago

OP could have skipped all this by compiling with a cache on the host system and copying the statically linked binary back into the docker image build.

ecshafer 20 hours ago

The Rust compiler is slow. But if you want more features from your compiler you need to have a slower compiler, there isn't a way around that. However this blog post doesn't really seem to be around that and more an annoyance in how they deploy binaries.

aappleby 20 hours ago

you had a functional and minimal deployment process (compile copy restart) and now you have...

  • canyp 12 hours ago

    ...Kubernetes.

    Damn, this makes such a great ad.

tmtvl 21 hours ago

Just set up a build server and have your docker containers fetch prebuilt binaries from that?

kelnos 21 hours ago

> This is... not ideal.

What? That's absolutely ideal! It's incredibly simple. I wish deployment processes were always that simple! Docker is not going to make your deployment process simpler than that.

I did enjoy the deep dive into figuring out what was taking a long time when compiling.

  • quectophoton 19 hours ago

    One thing I like about Alpine Linux is how easy and dumbproof it is to make packages. It's not some wild beast like trying to create `.deb` files.

    If anyone out there is already fully committed to using only Alpine Linux, I'd recommend trying creating native packages at least once.

    • eddd-ddde 14 hours ago

      I'm not familiar with .deb packages, but one thing I love about Arch Linux is PKGBUILD and makepkg. It is ridiculously easy to make a package.

ic_fly2 20 hours ago

This is such a weird shooting-sparrows-with-a-cannon approach.

The local builds are fast, why would you rebuild docker for small changes?

Also, why does a personal page need so much Rust and so many dependencies? For a larger project with more complex stuff you'd have a test suite that takes time too. Run both in parallel in your CI and call it a day.

gz09 20 hours ago

Unfortunately, removing debug symbols in most cases isn't a good/useful option

  • magackame 20 hours ago

    What "most" cases are you thinking of? Also don't forget that a binary that weighs 10 MB in release can weigh 300 MB when compiled with debug symbols, which is way less practical to distribute.
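
For the distribution case, Cargo can drop debug data at build time via profile settings (these keys are documented in the Cargo book; shown here as a sketch, not as the parent's setup):

```toml
# Cargo.toml - keep optimizations, drop debug data from the shipped binary.
[profile.release]
debug = false          # don't generate debug info in the first place
strip = "debuginfo"    # or "symbols" to also strip the symbol table
```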

fschuett 5 hours ago

For deploying Rust servers, I use Spin WASM functions[1], so no Docker / Kubernetes is necessary. Not affiliated with them, just saying. I just build the final WASM binary and then the rest is managed by the runtime.

Sadly, the compile time is just as bad, but I think in this case the allocator is the biggest culprit, since disabling optimization will degrade run-time performance. The Rust team should maybe look into shipping their own bundled allocator, "native" allocators are highly unpredictable.

[1]: https://www.fermyon.com

b0a04gl 20 hours ago

rust prioritises build-time correctness: no runtime linker or no dynamic deps. all checks (types, traits, ownership) happen before execution. this makes builds sensitive to upstream changes. docker uses content-hash layers, so small context edits invalidate caches. without careful layer ordering, rust gets fully recompiled on every change.

TZubiri 7 hours ago

> Every time I wanted to make a change, I would:

> Build a new statically linked binary (with --target=x86_64-unknown-linux-musl)

> Copy it to my server

> Restart the website

Isn't it a basic C compiler feature that you can compile a file as an object, and then link the objects into a single executable? Then you only recompile the files you changed.

Not sure what I'm missing.

  • pornel 5 hours ago

    That's how Rust works already.

    The problem has been created by Docker which destroys all of the state. If this was C, you'd also end up losing all of the object files and rebuilding them every time.

jeden 6 hours ago

Why does the Rust compiler create such BIG executables?!

senderista 20 hours ago

WRT compilation efficiency, the C/C++ model of compiling separate translation units in parallel seems like an advance over the Rust model (but obviously forecloses opportunities for whole-program optimization).

  • woodruffw 20 hours ago

    Rust can and does compile separate translation units in parallel; it's just that the translation unit is (roughly) a crate instead of a single C or C++ source file.

    • EnPissant 20 hours ago

      And even for crates, Rust has incremental compilation.

RS-232 20 hours ago

Is there an equivalent of ninja for rust yet?

  • steveklabnik 20 hours ago

    It depends on what you mean by 'equivalent of ninja.'

    Cargo is the standard build system for Rust projects, though some users use other ones. (And some build those on top of Cargo too.)

jmyeet 17 hours ago

Early design decisions favored run-time over compile-time [1]:

> * Borrowing — Rust’s defining feature. Its sophisticated pointer analysis spends compile-time to make run-time safe.

> * Monomorphization — Rust translates each generic instantiation into its own machine code, creating code bloat and increasing compile time.

> * Stack unwinding — stack unwinding after unrecoverable exceptions traverses the callstack backwards and runs cleanup code. It requires lots of compile-time book-keeping and code generation.

> * Build scripts — build scripts allow arbitrary code to be run at compile-time, and pull in their own dependencies that need to be compiled. Their unknown side-effects and unknown inputs and outputs limit assumptions tools can make about them, which e.g. limits caching opportunities.

> * Macros — macros require multiple passes to expand, expand to often surprising amounts of hidden code, and impose limitations on partial parsing. Procedural macros have negative impacts similar to build scripts.

> * LLVM backend — LLVM produces good machine code, but runs relatively slowly.

> * Relying too much on the LLVM optimizer — Rust is well-known for generating a large quantity of LLVM IR and letting LLVM optimize it away. This is exacerbated by duplication from monomorphization.

> * Split compiler/package manager — although it is normal for languages to have a package manager separate from the compiler, in Rust at least this results in both cargo and rustc having imperfect and redundant information about the overall compilation pipeline. As more parts of the pipeline are short-circuited for efficiency, more metadata needs to be transferred between instances of the compiler, mostly through the filesystem, which has overhead.

> * Per-compilation-unit code-generation — rustc generates machine code each time it compiles a crate, but it doesn’t need to — with most Rust projects being statically linked, the machine code isn’t needed until the final link step. There may be efficiencies to be achieved by completely separating analysis and code generation.

> * Single-threaded compiler — ideally, all CPUs are occupied for the entire compilation. This is not close to true with Rust today. And with the original compiler being single-threaded, the language is not as friendly to parallel compilation as it might be. There are efforts going into parallelizing the compiler, but it may never use all your cores.

> * Trait coherence — Rust’s traits have a property called “coherence”, which makes it impossible to define implementations that conflict with each other. Trait coherence imposes restrictions on where code is allowed to live. As such, it is difficult to decompose Rust abstractions into, small, easily-parallelizable compilation units.

> * Tests next to code — Rust encourages tests to reside in the same codebase as the code they are testing. With Rust’s compilation model, this requires compiling and linking that code twice, which is expensive, particularly for large crates.

[1]: https://www.pingcap.com/blog/rust-compilation-model-calamity...

cratermoon 18 hours ago

Some code that can make Rust compilation pathologically slow is complex const expressions. Because the compiler can evaluate a subset of expressions at compile time[1], a complex expression can take an unbounded amount of time to evaluate. The long_running_const_eval lint will by default abort the compilation if the evaluation takes too long.

[1] https://doc.rust-lang.org/reference/const_eval.html
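
A small illustration of the mechanism (the function and constants here are mine, not from the reference): the loop below runs inside the compiler rather than in the produced binary, so making the argument large enough slows down compilation itself.

```rust
// A const fn used in a const context is evaluated at compile time;
// a pathological expression here makes *compilation* slow, and past
// a limit the long_running_const_eval lint aborts the build.
const fn sum_to(n: u64) -> u64 {
    let mut total = 0;
    let mut i = 1;
    while i <= n {
        total += i;
        i += 1;
    }
    total
}

// Computed entirely by the compiler; the binary just stores the result.
const SUM: u64 = sum_to(100_000);

fn main() {
    assert_eq!(SUM, 5_000_050_000);
    println!("ok");
}
```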

Devasta 19 hours ago

Slow compile times are a feature, get to make a cuppa.

  • zozbot234 19 hours ago

    > Slow compile times are a feature

    xkcd is always relevant: https://xkcd.com/303/

    • randomNumber7 18 hours ago

      On the other hand, you go mentally insane if you try to do something useful during the 5-10 minute compile times you often have with C++ projects.

      When I had to deal with this I would just open the newspaper and read an article in front of my boss.

      • PhilipRoman 3 hours ago

        Slow compile times really mess with your brain. When I wanted to test two different solutions, I would keep multiple separate clones (each one takes about 80GB, mind you) and do the manual equivalent of branch prediction by compiling both, just in case I needed the other one as well.

juped 19 hours ago

I don't think rustc is that slow. It's usually cargo/the dozens of crates that make it take a long time, even if you've set up a cache and rustc is doing nothing but hitting the cache.

OtomotO 20 hours ago

It's not. It's just doing way more work than many other compilers, due to a sane type system.

Personally I don't care anymore, since I do hotpatching:

https://lib.rs/crates/subsecond

Zig is faster, but then again, Zig isn't memory safe, so personally I don't care. It's an impressive language; I love the syntax, the simplicity. But I don't trust myself to keep all the memory-relevant invariants in my head anymore as I used to do many years ago. So Zig isn't for me. Simply not the target audience.

charcircuit 20 hours ago

Why doesn't the Rust ecosystem optimize around compile time? It seems a lot of these frameworks and libraries encourage doing things which are slow to compile.

  • int_19h 18 hours ago

    It would be more accurate to say that idiomatic Rust encourages doing things which are slow to compile: lots of small generic functions everywhere. And the most effective way to speed this up is to avoid monomorphization by using RTTI to provide a single compiled implementation that can be reused for different types, like what Swift does when generics cross the module boundary. But this is less efficient at runtime because of all the runtime checks and computations that now need to be done to deal with objects of different sizes, etc.; many direct or even inlined calls now become virtual, and so on.

    Here's a somewhat dated but still good overview of various approaches to generics in different languages including C++, Rust, Swift, and Zig and their tradeoffs: https://thume.ca/2019/07/14/a-tour-of-metaprogramming-models...
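
A minimal sketch of that tradeoff (names illustrative): the generic version gets one copy of machine code per concrete type it is used with, while the `dyn` version compiles once and dispatches through a vtable.

```rust
use std::fmt::Display;

// Monomorphized: rustc emits a separate copy of this function body for
// every concrete T it is instantiated with (fast calls, more codegen).
fn show_mono<T: Display>(x: T) -> String {
    format!("{x}")
}

// Type-erased: one compiled body shared by all callers, dispatched
// through a vtable at runtime (less codegen, indirect calls).
fn show_dyn(x: &dyn Display) -> String {
    format!("{x}")
}

fn main() {
    assert_eq!(show_mono(42), "42");   // instantiates show_mono::<i32>
    assert_eq!(show_mono("hi"), "hi"); // instantiates show_mono::<&str>
    assert_eq!(show_dyn(&42), "42");   // same machine code serves both
    assert_eq!(show_dyn(&"hi"), "hi");
    println!("ok");
}
```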

  • kzrdude an hour ago

    Lots of developers in the ecosystem avoid proc macros, for example. But going as far as avoiding monomorphization and generics is not that widespread.

  • nicoburns 18 hours ago

    It's starting to, but a lot of people are using Rust because they need (or want) the best possible runtime performance, so that tends to be prioritised a lot of the time.

  • steveklabnik 20 hours ago

    The ecosystem is vast, and different people have different priorities. Simple as that.

o11c 20 hours ago

TL;DR `async` considered harmful.

For all the C++ laughing in this thread, there's really only one thing that makes C++ slow - non-`extern` templates - and C++ gives you a lot more space to speed them up than Rust does.

  • int_19h 18 hours ago

    C++ also has async these days.

    As for templates, I can't think of anything about them that would speed things up substantially wrt Rust, aside from extern template and manually managing your instantiations in separate .cpp files. Otherwise it's fundamentally the same problem - recompiling the same code over and over again because it's parametrized with different types every time.

    Indeed, out of the box I would actually expect C++ to do worse because a C++ header template has potentially different environment in every translation unit in which that header is included, so without precompiled headers the compiler pretty much has to assume the worst...

    • sgt 9 minutes ago

      What happened with Zig and async? Last I heard they might never implement it.

leoh 14 hours ago

tl;dr: it’s slow because it finds far more bugs before runtime than literally any other mainstream compiled language

ac130kz 12 hours ago

tl;dr: as always, don't use musl if you want performance and compatibility.

  • ac130kz 21 minutes ago

    Some "smart" folks even downvote this advice. Yeah, I've seen articles on musl's horrible performance back in 2017-2018, and apparently it still holds, yet I get downvoted.