Roc with Richard Feldman
About their reasons for migrating from Rust to Zig
2025-11-13 63 min
Description & Show Notes
Building a new programming language from scratch is a monumental undertaking. In this episode, we talk to Richard Feldman, creator of the Roc programming language, about building a language that is fast, friendly, and functional. We discuss why the Roc team moved away from using Rust as a host language and instead is in the process of migrating to Zig. What was the decision-making process like? What can Rust learn from this decision? And how does Zig compare to Rust for this kind of systems programming work?
About Roc
Roc is a fast, friendly, functional programming language currently in alpha development. It's a single-paradigm functional language with 100% type inference that compiles to machine code or WebAssembly. Roc takes inspiration from Elm but extends those ideas beyond the frontend, introducing innovations like platforms vs applications, opportunistic mutation, and purity inference. The language features static dispatch, a small set of simple primitives that work well together, and excellent compiler error messages. Roc is already being used in production by companies like Vendr, and is supported by a nonprofit foundation with corporate and individual sponsors.
About Richard Feldman
Richard Feldman is the creator of the Roc programming language and author of "Elm in Action." He works at Zed Industries and has extensive experience with functional programming, particularly Elm. Richard is also the host of Software Unscripted, a weekly podcast featuring casual conversations about code with programming language creators and industry experts. He's a frequent conference speaker and teacher, with courses available on Frontend Masters. Richard has been a longtime contributor to the functional programming community and previously worked at NoRedInk building large-scale Elm applications.
Links From The Episode
- Zig - Better than Rust?
- Rust in Production: Zed - Our interview with Richard's colleague with more details about Zed
- Richard's blog post about migrating from Rust to Zig - Sent in by many listeners
- Elm - Initial inspiration for Roc
- NoRedInk - Richard's first experience with Elm
- Haskell - A workable substitute for Elm on the backend
- OCaml - Functional language, but pure functions are only encouraged
- F# - Similar shortcomings as OCaml
- Evan Czaplicki - Creator of Elm
- Ghostty - Terminal emulator from Mitchell Hashimoto with lots of code contributions in Zig
- bumpref - A tiny Rust crate that came out of this discussion, providing Arc::bump(), which is an alias for clone().
- RAII - Resource acquisition is initialization, developed for C++, now a core part of Rust
- Frontend Masters: The Rust Programming Language - Richard's course teaching Rust
- Rust by Example: From and Into - Traits for ergonomic initialising of objects in Rust
- The Rust Programming Language: Lifetime Annotations on Struct Definitions - Learn from Roc: try to avoid having lifetime type parameters
- Rust By Example: Box, stack and heap - Putting objects on the heap can slow down your application
- Design Patterns: Elements of Reusable Object-Oriented Software - Seminal book popularising many common patterns in use today, written by the so-called "Gang of Four"
- Casey Muratori: The Big OOPs - Game developer explaining why OOP was an obvious mistake for high performance code
- Alan Kay - Coined the term "object-oriented" while developing the Smalltalk language in the 70s
- Niklaus Wirth - Working on Modula, a modular programming language, at the same time
- Kotlin - A new and popular language, basically Java++
- Go - Popular "greenfield" language, i.e. not coupled to an existing language, not using the object-oriented paradigm
- Cranelift backend for Rust - A faster backend than LLVM, but still not released
- Andrew Kelley - Creator of Zig
- Software Unscripted - Richard's Podcast
- GPUI - Zed's own UI crate
- Structure of Arrays vs Array of structures - A big source of unsafe code in the Rust implementation of Roc
- The Zig Programming Language: comptime - Zig's replacement for Rust's proc-macros, with much broader utility
- crabtime - Comptime crate for Rust
- Roc - Roc's namesake, the mythical bird
- Rust in Production: Tweede Golf - Podcast episode with Volkert de Vries, one of the first contributors to Roc
Official Links
Transcript
Here's another episode of Rust in Production, a podcast about companies who
use Rust to shape the future of infrastructure.
My name is Matthias Endler from corrode, and today we talk to Richard Feldman
from Roc about their reasons for migrating from Rust to Zig.
Richard, thanks so much for taking the time for the interview.
Can you say a few words about yourself and about Roc, the programming language you're working on?
Yeah, sure. So I'm Richard. So my day job is I work at Zed Industries,
making the Zed code editor, which is all written in Rust.
So mostly what I spend my time on is writing Rust. On the side,
I have been developing for the past several years, the Roc programming language,
which I created, but I work on with a number of other people.
Originally, Roc's compiler was written in Rust completely. And now it's being
rewritten in Zig, which is what I assume we're going to talk about.
Precisely. Yeah, it's perfect. We had Zed in a previous episode as well; if you want to check that out, we will link it in the show notes. That was an amazing episode as well, and also for this one I have really high hopes, because your name was mentioned a couple of times by a couple of our listeners, because you wrote an article, or maybe you published a post, let's say, about your migration from Rust to Zig. But before we get into that: what is Roc, what makes it special, why did you start that project?
Yeah, so Roc's tagline is fast, friendly, functional.
And basically, it's a functional programming language. The goal is to be really, really fast.
We want it to be the fastest garbage collected language. That's sort of the goalpost that we set.
So I would say that Go right now, as far as I'm aware, is the fastest garbage collected language.
And we want to be faster than Go, not as fast as Rust or C++ or Zig or any of
those, because we want to be completely memory safe, guaranteed.
Speed-wise, basically the goal is to be competing with languages like Go, or maybe, you know, OCaml, which also has a good reputation for speed while being a functional programming language, as opposed to Go being very imperative. So yeah, that's the goal in terms of speed. In terms of friendliness, it's like: have really simple APIs, very heavily inspired by Elm. Roc is sort of an outgrowth of my really positive experiences with Elm, but Elm is targeted at compiling to JavaScript and front-end web development, whereas Roc compiles to machine code or to WebAssembly and is sort of trying to address all of these use cases that are not front-end web development.
Also, Roc's design has evolved quite a bit from Elm's. Now it's a much different language. When it started out, it was very similar to Elm. Now it's a lot less similar, but I think it keeps the essence of that ethos of simplicity and really nice APIs and really friendly compiler error messages and things like that.
Actually, Elm's error messages inspired Rust's, even though I think Rust is now more famous for its compiler errors.
So all of that is sort of what we're going for with Roc.
And take us back to the time when you started the project.
What was the ecosystem like? What were some of the other languages that you
got inspired by when you started with Roc?
And also, what was that initial spark that told you, hey, I need to do this,
and now is the best time for it?
Really, the inspiration for it came from an experience I had.
This is back when I was working at NoRedInk. We had this huge Elm front end,
and we had Ruby on Rails for the back end. And Rails had been the back end since long before we started using Elm.
And we were trying to move the back end to something more functional.
And kind of what we discovered was that there was just really this missing piece
in the ecosystem, or at least that was my perception of it.
We ended up going with Haskell and trying to use Haskell in kind of an Elm-like way.
But my feeling was what we really want is a functional language that's statically
type checked with type inference built around purity. So OCaml is a functional
language, but it's really not like pure functions everywhere.
It's sort of like pure functions are encouraged, but they're not really first class in OCaml.
Same thing with F#. And basically what we ended up concluding was that it just doesn't exist.
Like there is no Elm equivalent outside of the browser.
And that was kind of the spark for me to go and do it.
As far as what languages inspired it, really it was Elm plus like maybe little
bits and pieces of other languages, but it's very primarily like,
okay, how do I take this great experience we've had with Elm and translate it
to other domains besides just the browser?
And I didn't want to just do servers. The idea was, I'd been using this phrase "the long tail of domains": basically, there's lots of different use cases out there. It's not just servers. There's command-line apps, there's native graphical apps, there's microcontrollers, robotics. You know, there's this very long tail of different things you can do with programming, and I wanted to see: could we bring an Elm-like experience to all those different domains, in terms of being a very simple functional experience? Yeah.
And I could see a few ways how this could go, because the Rust compiler is not, or the first version of it wasn't, written in Rust, because Rust didn't exist, obviously. It was written in OCaml as well. And then you have Elm, which might be another way to implement a new language on top of, or maybe there's a language that I don't know of that you could use. But why did you choose Rust, and which alternatives did you consider?
Yeah, also a great question. So basically, the reason I chose Rust was that
I wanted the compiler to be as fast as possible.
And I was, to be totally honest, scared of trying to do it in a memory unsafe
language based on past experiences I'd had with segmentation faults that I really
struggled to debug and stuff like that.
When I was very young, like middle school era, I got into C++ because I wanted to do game dev.
And I have some scars from trying
to do that and bumbling around and just making a huge mess for myself.
And the pitch of Rust was: hey, you can have this really, really high performance ceiling. In other words, there's no mandatory garbage collector, there's no built-in boxing of things, you can be as fast as a C or C++ program, but you're going to have memory safety. You don't have to worry about segfaults and all these things that, you know, I had these bad experiences with in the past. That was the pitch that sort of sold me, because my thinking was: well, if I do the compiler in OCaml or Haskell or something like that, yeah, that's going to be a more familiar experience to me, but at some point I'm going to reach this point where I'm like, I want it to go faster and I can't, because I'm limited by the performance ceiling of this language.
I don't want that, and then at that point I'm like, what am I going to do? I'm going to say, oh, now I rewrite it in Rust? You know, this is something that's happening in the JavaScript ecosystem a lot: a lot of projects started off being written in JavaScript because that's what was familiar to people, and then they realized past a certain point that a compiler is one of those use cases where just more performance is better. It's not like you get down to a point where... you know, in a lot of graphical applications it's kind of like, well, as long as you're not dropping frames, as long as you're under the, you know, however many milliseconds you have within each frame, humans can't perceive a difference. It's like, it's fast enough, and if you make it faster, nobody notices. Compilers are not like that. Faster is pretty much always better. It's just a big part of the UX: you know, how long are you waiting for rebuilds and stuff like that. And so
my feeling was... actually, this came up in Elm when we were using Elm at NoRedInk. At some point we became the largest Elm code base; we had a couple hundred thousand lines, I don't remember exactly how big it was at the time, but with Elm 0.18 we were really feeling the pain of compile times. It's like, we'd had a really positive experience and it was becoming a very negative experience, just because we were waiting so long for rebuilds. And then when Elm 0.19 came out, Evan had rewritten a huge amount of the Elm compiler, which was written in Haskell, for performance. It took him a long time, months and months and months, but when it came out it was just this amazing breath of fresh air. All of our builds were fast again, like they used to be when our code base was small. And that was a really important lesson for me: yeah, take this stuff seriously. Compile times are a really big deal, and if you want to build a language that has a good end user experience, you know, fast, friendly, functional, part of that friendliness and user-friendliness
is committing to having a compiler that is really, really, really fast. And that's something that I've been completely unwilling to compromise on since the beginning of the project: the compiler's always got to be really fast.
Yeah. And that was in 2019. Correct me if I'm wrong here.
What was your level of Rust experience when you started with implementing Roc?
Was it your first big Rust project?
It was my first big Rust project. So previously in 2017, I had dabbled in Rust, I would say.
I'd written the Elm test runner originally in Node.js.
And it was honestly out of frustration with the Node.js ecosystem that I was
like, I just want to try something else. Why don't I try Rust?
I never actually shipped anything out of that project. It was like, I just started rewriting it in Rust, so I kind of got a feel for it. And the experience that I had initially with Rust was positive enough that I was like, yeah, this...
This seems like something I'm willing to commit to for this compiler,
even though I know this is going to be a very, very long, you know,
project that I'm going to spend way more time on than this little sort of side
project thing that was just kind of for fun.
And I mean, my feeling going into this was that this is a language that I'm
planning on putting multiple decades of my life into.
And so committing to Rust with just that minimal level of experience was a bit
of a leap of faith in the sense that I didn't have enough Rust experience to
know that I was going to like it that much long term.
But at the same time, I had enough experience.
It wasn't zero. It wasn't like I was sitting down and writing Hello World and Rust.
And then I was like, that's what this compiler is going to be in.
So I would say it was an informed decision, but my Rust experience at that point was pretty minimal.
When you think in decades, how does it shape the decision-making process?
I would say the main thing is that it gives me the comfort to make long-term investments.
So a lot of projects that I've worked on in my career have been for startups.
And startups are often like, yep, in 18 months, we're going to be out of money
and disappeared off the face of the planet. So it's really important that we
survive the next 18 months.
And so making a long-term investment has to be balanced against,
okay, if we make this long-term investment, is that investment going to cause
us to not exist long enough to reap the benefits of the long-term investment?
And there's always that constant tension.
With this project, my feeling was the opposite. It's like, I'm very comfortable
making long-term investments.
The main thing I was actually thinking about was, and this is again,
shaped by my early experiences with Elm and participating in having the good
fortune to live in the same city as Evan Czaplicki who created Elm and being
able to talk to him about these things.
But basically understanding that if I'm going to be making a long-term investment in the language...
It's important that I also communicate that clearly and set expectations clearly
to people who are using the language.
So from the very beginning, I've tried really hard to communicate,
hey, things are not set in stone here. We're going to be making breaking changes
to this language, et cetera.
The strongest way that I've done that is we intentionally, even though,
like you said, the first line of code was 2019. I actually started designing it in 2018.
But ever since then, we still do not have a numbered release.
We're planning on having our first numbered release in 2026, like 0.1.0, but the whole reason we've been doing that is just to communicate: you have to build from nightly, expect things to break. Despite this, we still do actually have one person who's been consistently using Roc in production at his job. But that's, to me, an appropriate level of cautiousness, where we only have one person doing it, even though lots of people have used Roc for hobby projects and stuff, because we are making big changes. And so those are all things that I see as long-term investments. I think if I were thinking more short term,
I might be like, okay, no, we already need to be backwards compatible.
We can't, you know, new releases need to not break things.
Rust and Gleam and Go are all languages that I think went 1.0 and had a strong
backwards compatibility guarantee relatively early on. And I think that really
helped them with adoption.
But to me, my goal is not immediate term adoption. I'm more thinking about what
we're trying to do is really ambitious.
And I really, really, really want to get it right, or at least as close to right
as we can get it before we make commitments to backwards compatibility.
So that's the type of thing that I think thinking in decades has as a benefit.
It's that I can say, yeah, we've been working on this for years,
but we still don't have a numbered release because I really want to clearly
communicate to people, things are going to break.
Please don't expect that we're going to have backwards compatibility because
we're not ready for that yet.
But I think we're actually almost to the point where we are ready for that,
which is pretty exciting milestone to be coming up on hopefully next year.
I want to talk about Rust, the
good parts, for a moment, because not everything was bad, I'm assuming.
In your post, you already mentioned a couple of really sane reasons to move away from Rust.
But can you talk to us a little bit about the good parts, the things that you
like about Rust, working with larger codebases, and what does it really feel
like to work in the trenches with such a language?
Yeah, there are lots of good parts to Rust. And to be clear,
I like Rust overall as a language.
I'm not like a Rust hater. I'm not like, oh, Rust is old and busted and Zig
is like the new hotness. It's very much the opposite.
It's more that for this particular project, I ended up deciding that if we're
going to do a rewrite anyway, then that rewrite should be in Zig rather than
rewriting it again in Rust, which we'll get to. But yeah, Rust's good parts.
So I mean, the first thing that I would mention is that, like I said,
I work at Zed full time, and I don't think it would make any sense for Zed to move away from Rust.
Like Rust seems like it just fits the way that Zed's codebase works like a glove.
If you wanted to rewrite Zed in something other than Rust, first of all,
that sounds like a really daunting proposition, not just because it's a lot
of work, but because Zed's codebase very, very heavily works.
Relies on a combination of features that Rust is really good at,
such as you have the memory safety, which when you have a wide contributor base,
we have Zed's open source.
So it's not just that we have people at Zed working on Zed, but also we get
all sorts of outside contributions.
It's really helpful in that context to be able to look at someone's pull request
and be able to say with a lot of confidence, okay, this is not introducing sneaky memory safety bugs.
Now, granted, you can make the point that, you know, lots of non-memory safe
languages have lots of contributors too, like Zig, for example.
Also Ghostty, which Mitchell Hashimoto made in Zig, has lots of contributors.
But I think it's definitely a selling point of Rust that when I look at a pull
request coming into Zed, I can say with a lot of confidence,
this is not going to introduce some very subtle memory safety bug that I'm going
to have a really difficult time debugging later.
Sort of the blast radius of what a new contribution can do in terms of mistakes is a lot smaller. I do think that's something we benefited from in the Rust version
of the Roc compiler as well.
So outside contributors, but also, of course, you know, within the code base,
it's the same thing. It's definitely a benefit to not have to worry about those things.
I wouldn't say I've gotten zero segmentation faults or memory safety errors
in either the original Rust compiler for Roc or in Zed.
Actually, literally yesterday we had a seg fault in CI.
And it was, well, it was in our new Windows code base, and some of the stuff that we're doing is very low level, you know, interacting directly with the operating system graphics APIs and drivers and stuff like that, because it's a very high performance code base. And you know,
Zed is a good example of the classic Rust model of if you're going to have to
have some unsafe code because you're doing just innately unsafe things,
maybe for performance, maybe for FFI, whatever, just having it very small and contained is helpful.
And so, you know, it is helpful to know that the surface area of that is smaller and more contained.
And if you look at Zed's code base, I think you see what you want to see in
a classic Rust code base, which is the amount of unsafe is a very,
very small percentage of the overall code base.
So yeah, I mean, I think all those things are great. Obviously the Cargo ecosystem is very large and there's lots of stuff you can get off the shelf, which is nice. There's this classic problem in a language where, if you were to time travel back to when Rust 1.0 came out, or, you know, even before that, everybody would say, well, ecosystem is a downside of Rust. They would say, well, there's no ecosystem, you know, how can you ignore the gigantic C and C++ ecosystem? Whereas of course now the ecosystem is the selling point. I don't know how people have amnesia about this, but it's like every language that has a big ecosystem today used to have no ecosystem. It just grew over time. That's just a normal part of a language growing. So it's totally reasonable for people to say, I don't want to use this language right now because right now the ecosystem is small, but it's a little bit of a pet peeve of mine when people act like that's just this permanent state. It's like, no, actually it's the opposite of that: the ecosystem always grows.
That's where all the ecosystems you think of as big came from.
But certainly at this point, the crates.io ecosystem is just,
you know, very large and you can pull all sorts of stuff off the shelf.
And that's a selling point. There are some things that we use at Zed that we couldn't get off the shelf in Zig; we would have to do like FFI to a C library or a C++ library. And if you're trying to pull stuff off the shelf in C++ and use that in Zig, that's a much bigger pain than trying to pull something off the shelf in C and use that. C is a lot easier to interoperate with in Zig. So yeah, the ecosystem is also a selling point. And I
think the biggest thing for Zed's code base is just the fact that you have, for lack of a better term, Drop. And because you have Drop, you have reference counting, automatic reference counting, and we use Arc all over the Zed code base. It's just everywhere, and we rely on it very heavily; we're constantly cloning Arc things. When I first got to the Zed code base, I was a little bit nervous, because I would see these .clone()s everywhere.
And my experience from writing the Roc compiler in Rust, which used very little Arc or Rc, was that, oh, if I see a .clone(), that's a hotspot. That's a big potential performance problem. And we had this convention that in the few places where we would use Arc, we would write Arc::clone, so it's really clear, like, don't worry, don't worry. It's okay. It's just a reference count bump. We're not deep cloning the whole thing.
But in Zed's code base, if you did that, you would be writing Arc::clone so many times, it would just be everywhere. So what happens instead is you just use .clone(), because that's way more concise, because that's the normal way that cloning is used in the Zed code base. But to do that, you do have to have that Drop, basically. You need to be able to have automatic... C++ people would call it RAII.
Zig, by design, does not have that. And so that's one of the examples of why the idea of rewriting Zed's code base in Zig sounds totally ridiculous to me. It would just be a terrible choice. Because for some projects, one language is a good fit for them, and then for other projects, another language is a better fit. So I'm comfortable making the claim that I am happy writing Rust at Zed as my day job, and I think that's the correct choice for Zed, and also, outside of work, having the Roc compiler be in Zig, and I think that's the right choice for the Roc compiler. So different projects, different needs.
That thing that you mentioned with Arc::clone and making it explicit, I really like that idea. That's such a great idea, because you show that the cost of cloning is really low. And of course, you couldn't do that in all places where you had a lot of usages of Arc, but I wondered if you could build a trait extension for that, so you had an arc_clone method on the type that was Arc, for example. And then it would be kind of, you know, explicit, but also weirdly strange. So I'm not 100% sure if you want to go down that route.
But no, that's... I mean, we're probably not going to do that in Zed because
I mean, the number of places we would have to switch to that is extremely high.
But yeah, you can call it bump or something. Like you're just bumping the reference
count. That's actually a really cool idea. I never thought of that.
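For illustration, here's a minimal sketch of what such an extension trait could look like in Rust. The Bump trait and the bump() naming below are hypothetical, loosely echoing the Arc::bump() idea from the bumpref crate linked above, not that crate's actual source:

```rust
use std::sync::Arc;

// Hypothetical extension trait adding a `bump()` alias for a reference-count
// bump, so call sites read as "cheap refcount increment" rather than "clone".
trait Bump {
    fn bump(&self) -> Self;
}

impl<T: ?Sized> Bump for Arc<T> {
    fn bump(&self) -> Self {
        // For Arc, `clone` only increments the reference count; no deep copy.
        Arc::clone(self)
    }
}

fn main() {
    let shared = Arc::new(vec![1, 2, 3]);
    // Reads as "bump the refcount" instead of a potentially expensive clone.
    let another_handle = shared.bump();
    println!("{}", Arc::strong_count(&another_handle)); // prints 2
}
```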
Yeah, nice. So you wrote a lot of Rust code, probably many tens of thousands,
maybe hundreds of thousands, maybe millions of lines of code by now. I don't know.
Probably hundreds of thousands. I don't think millions. I think I'm under 1 million probably.
So yeah, the Roc compiler at the point where we decided to switch over and
do the rewrite was at like 300,000 lines of Rust code.
Now I did not write all 300,000 lines. There's a lot of other contributors involved in that for sure.
But at the same time, I also wrote a lot of lines that ended up getting deleted
or rewritten, et cetera.
So I think between that and my work at Zed, I'm definitely sure it's multiple
hundreds of thousands of Rust code.
I also taught a course on Rust. I taught intro to Rust for frontendmasters.com.
So that was like an eight hour course just teaching people Rust from scratch.
I've not taught a Zig course there; I did teach a course, but... yeah, so I have a lot of hands-on experience in Rust, a lot of hours with it, and also I've taught it to beginners, which always requires, you know, learning a little bit of extra detail about things where you sort of have these gaps. As soon as you have to explain it to someone in depth, where they need to understand it, you sort of immediately realize where the gaps are and kind of have to go fill some of them in. So yeah, I definitely feel comfortable saying I feel like a Rust expert,
but not a Zig expert yet, I don't think. I've had a lot less time with Zig.
Because that kind of leads me to the next question, which is: with that level of experience, some patterns must have evolved more or less naturally, and there are probably things that you would call idiomatic Rust that maybe are not written down anywhere, or maybe are not in the book. What would you say are some things that you would consider ergonomic or "rustic" code? What does it look like? What is it like in practice?
Yeah, well, I mean, the first thing is of course: use unsafe as little as possible, ideally not at all if you can avoid it. I guess there's a lot of using traits for things. So for example, if you can initialize something, you probably want to use From and Into; that's kind of a standard practice in the ecosystem, just so you can make the conversions a little bit more ergonomic. Similarly, if you're going to be formatting it as a string, then okay, maybe you want to give it Display; you probably want to give it Debug. I think there's a lot of trait-centric thinking when I'm writing Rust code, in terms of what traits do I want to implement, even just the baseline.
Don't write standalone functions at the top level of your module; prefer to make an impl for the particular type and then, you know, add it to there, unless you're writing some function that's really not tied to any one particular type. Kind of the default is to think in terms of types, and in terms of traits, and then in terms of impls and putting things on there. Also, it's pretty straightforward, you know, to think about taking references to things, so like, you know, ampersand or ampersand mut. I say "mutt"; people at Zed usually say "mute", but I don't know, I've always said it that way. Since my first experience writing Rust was with other people who were doing Rust for the first time, we kind of developed our own terminology, I guess, or way of pronouncing things. And when you're looking at other people's code, you don't see how they pronounce it, but at Zed we pair program a lot. I think I'm the only one who says "mutt", everybody else says "mute", which makes sense, it's mutable. But anyway, so yeah.
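As a minimal sketch of the trait-centric style described here (the Meters newtype below is a made-up example, not from the Roc or Zed code bases):

```rust
use std::fmt;

// A made-up newtype, just to illustrate the idioms discussed above.
struct Meters(f64);

// Implement From so callers get ergonomic conversions (and Into for free).
impl From<f64> for Meters {
    fn from(value: f64) -> Self {
        Meters(value)
    }
}

// Implement Display for user-facing formatting.
impl fmt::Display for Meters {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} m", self.0)
    }
}

// Prefer methods in an impl block over free functions at module top level.
impl Meters {
    fn double(&self) -> Meters {
        Meters(self.0 * 2.0)
    }
}

fn main() {
    let distance: Meters = 21.0_f64.into(); // Into comes from the From impl
    println!("{}", distance.double());      // prints "42 m"
}
```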
So I think another sort of unwritten rule is that you really should try to avoid
putting references in data structures,
which is to say you should really try to avoid having lifetime type parameters
in your structs or what have you.
This is something that in the Roc code base, like the original one, we did not really do.
So we kind of took the approach of, well, yeah, I mean, we don't want to have
allocations for these things. So let's use Bumpalo for all of our arena allocations
and then store references to either stuff on the arena or stuff...
I guess, yeah, it always ended up being in the arena.
It wasn't really on the stack because then the lifetimes wouldn't work out.
But yeah, so that's another...
I don't know, unwritten rule, I guess. And it's not the type of thing where you necessarily need a rule, because as we discovered, if you do that, the ergonomics end up being pretty annoying, because you have this 'a that's just following you around everything you write, 'a 'a 'a everywhere. So I would say it's not that you can't do that in Rust, but rather, you know, you're asking what's idiomatic: I think that's something that people tend to avoid, having lifetime annotations be everywhere. It's more like, if you're going to have lifetime annotations, you kind of ideally want them to just be confined to a function signature, rather than sort of leaking out into your data structures. And a lot of people will solve that by using reference counting or things like that, or interior mutability or something like that. Of course, at that point, as soon as you get into Cell and things like that, it's sort of like, well, if you make a mistake, then now you're potentially, you know, dealing with a runtime panic, which is not great, or else having to do all these conditional things.
So yeah, we did a lot of that and I think kind of learned the hard way that
that's something that you should probably avoid when you're writing Rust code
is having lifetime annotations in your structs, like your data structures.
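To make the lifetime-parameter point concrete, here is a minimal illustrative sketch (not from the Roc code base) of how a reference stored in a struct forces a 'a parameter onto everything built on top of it:

```rust
// A struct that stores a reference must carry a lifetime parameter...
struct Expr<'a> {
    source: &'a str,
}

// ...and that 'a then follows you into every type and signature above it.
struct Module<'a> {
    exprs: Vec<Expr<'a>>,
}

fn first_source<'a>(module: &Module<'a>) -> Option<&'a str> {
    module.exprs.first().map(|expr| expr.source)
}

fn main() {
    let text = String::from("x = 1");
    let module = Module {
        exprs: vec![Expr { source: &text }],
    };
    println!("{:?}", first_source(&module));
}
```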
What else? Yeah, I mean, I guess in general...
Try to avoid boxing and, you know, doing things on the heap, but I would say Rust is less hardcore about avoiding those things than something like C or Zig is. So if you look at the lowest-level C operating system APIs, the operating system is really not going to be calling malloc for you, like, almost ever. It's really going to be a lot more doing things like, okay, you need to pass in what in Rust would be the equivalent of a mutable reference to something, where you're saying, give me a pointer to this data that you've already pre-initialized, and also tell me how many bytes I have to work with; I'm going to write into there. Like, the caller says, here's some memory that I've... in Rust this would be like MaybeUninit, basically, and it's like, hey, give me some data, I'm going to write to it, and then I'm going to give it back to you, you know, I'm going to return back to you, and don't worry, the data has been written there and I'll tell you how many bytes I wrote into there. That's a really common idiom that you see in C,
Whereas in Rust, it would tend to be the opposite. It would be more like,
no, no, don't worry. I'm just going to take care of that for you.
There might be some API that's like, give me a mutable reference to something
and I'll write into it, but that's a lot less common in Rust. So.
There's a pro and a con there. On the one hand, the APIs in Rust tend to be nicer looking; I would say they look a little bit more, dare I say, functional in some cases, where you're like, I'm going to call this, it's going to do some heap allocation with the global allocator and then return, you know, a String or a Vec or whatever. Whereas in C it's more like, yeah, give me a buffer, and it's like, well, what if the buffer is not big enough? Well, then you're probably going to have to call me again and I'll fill in the rest of the buffer after you resize it for me, or something like that. So you don't tend to see that type of thing in Rust; it tends to be more open to doing silent allocations than C or Zig.
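A minimal sketch of the two API shapes being contrasted here; both functions are made-up examples for illustration, not any specific OS or standard-library API:

```rust
// C-style: the caller supplies pre-allocated memory, the callee reports how
// many bytes it actually wrote. No hidden allocation inside the function.
fn fill_greeting(out: &mut [u8]) -> usize {
    let data = b"hello";
    let len = data.len().min(out.len());
    out[..len].copy_from_slice(&data[..len]);
    len
}

// Rust-style: the function silently allocates on the heap and hands back an
// owned value, which reads nicely but hides the allocation from the caller.
fn make_greeting() -> String {
    String::from("hello")
}

fn main() {
    let mut buf = [0u8; 16];
    let written = fill_greeting(&mut buf);
    println!("{}", String::from_utf8_lossy(&buf[..written]));

    println!("{}", make_greeting());
}
```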
From experience, I know that quite a few people that I would deem to be experts in Rust, or very experienced, are pretty lightweight on design patterns for some reason, and I wonder if you also saw that in working with larger code bases, and where you stand on that side of the spectrum.
So do you use a lot of design patterns in your code or are you also just like
leaning into what you mentioned, structs, traits, and combining those?
And do the design patterns evolve more or less naturally for you or is it something
that you consciously think about when you write code?
So let's be a little bit more specific. When you say design patterns,
are you thinking about sort of the concept as in like, you know,
patterns of design that come up like, you know, techniques and things like that?
I mean, of course, in that sense, I would just call those, I don't know,
idioms or something like that.
Usually when I think design patterns, I'm thinking like Gang of Four,
observer pattern, you know, things like that, visitor pattern and whatnot.
So certainly I'm not thinking in terms of Gang of Four stuff. I did a lot of enterprise Java early on in my career, and based on that experience, I've kind of, I don't know, sampled the object-oriented style, and I've sampled the functional style, and I've sampled the imperative procedural, not object-oriented style, which is the category I would put Rust and Zig in. And my conclusion is that imperative procedural is fine. Functional is fine.
Object-oriented is a mistake. I've been trying to sugarcoat that more and more over the years, trying to be diplomatic about it, but for the purposes of this podcast I'm just going to say: yeah, I don't think any of that was the right direction. I think it's going to turn out to be a dead end in programming that, you know, people are going to look back on as, like, that was the thing that people used to do. I don't think that's the future of programming. So yeah, I'm happy to have moved past that.
Oh, that's interesting, because you use different ways to encapsulate code then, and you use namespaces and modules and freestanding functions to group that logic.
But don't you still need some way to hold state, like structs or things where
you can maintain some sort of order of the events that come in,
like as in an actor system or so?
It's kind of a pretty bold statement to say OOP was a mistake.
What's the alternative then?
Well, it depends on who you ask, right? So if you talk to people like Casey Muratori and Jonathan Blow, and people like them who do a lot of high-performance game dev, saying OO was a mistake is, to them, just like, yeah, we've known this for a very long time, so let's talk about something interesting. In the broader programming world, I agree that it's a bold statement, but there are quite a lot of people who are just like, yeah, of course, it's not controversial at all.
So yeah, I mean, like you mentioned encapsulation. So for example,
encapsulation is a special case of modularity. So modularity is the idea of the boundary.
It's like saying it's just pub and private, right? Public and private interface.
So you can say like, here are the things that I'm exposing. Here are the things
that I'm intentionally not exposing.
Nobody outside my module can access my private things. Encapsulation just means modularity, but in an object-oriented context; it's like modularity applied to classes and objects. Well, if you don't have classes and objects, and you've just got, like you said, structs in Rust, modularity is fine. All we're talking about there is just: did I decide to make this field on my struct pub or not, right? Everything just sort of follows from that. It's the very basic thing of saying, I'm intentionally hiding these implementation details, and the reason that I'm hiding these implementation details is that, for example, I'm trying to create an invariant that I want to be enforced by the compiler. Like, I want to make sure that nobody is, you know, reaching in and messing with these internals, because if they did, it would violate my assumptions, and I want to really hardcore communicate that by giving anyone who tries to do that a compiler error. Or I might be making a library and I might want to say, I want to have really strong backwards compatibility guarantees, but I might want to change the internal structure of this thing. Right? I'm going to make these things pub and these things private.
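For illustration, a minimal sketch of the "invariant enforced by privacy" idea in Rust; the NonEmpty type and the collections module are made up for this example:

```rust
mod collections {
    // The field is private, so code outside this module can only construct
    // a NonEmpty through `new`, which enforces the "never empty" invariant.
    pub struct NonEmpty {
        items: Vec<i32>,
    }

    impl NonEmpty {
        pub fn new(first: i32) -> Self {
            NonEmpty { items: vec![first] }
        }

        pub fn push(&mut self, item: i32) {
            self.items.push(item);
        }

        pub fn first(&self) -> i32 {
            // Safe to index: the invariant guarantees at least one element.
            self.items[0]
        }
    }
}

fn main() {
    let mut xs = collections::NonEmpty::new(1);
    xs.push(2);
    println!("{}", xs.first());
    // xs.items.clear(); // compile error: `items` is private, invariant protected
}
```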
That's all it is. That's modularity. And then encapsulation is just modularity, but specifically modularity at the level of a class, and since Rust doesn't have classes, I would argue that, technically speaking, Rust doesn't have encapsulation; Rust just has modularity. But people use these terms pretty interchangeably, I think, in a lot of cases. Well, first of all, even though the term modularity I think might have come first, I don't know, now we're getting back into, like, Alan Kay coined the term object-oriented in the 70s, and, you know, Niklaus Wirth was working on Modula also in the 70s. I'm not sure exactly which term technically came first, but certainly the term encapsulation is more popular today.
So I think that's what more people associate with that boundary system.
But I think it's worth noting that from a historical perspective,
modularity is the thing that everybody seems to agree that they want now.
Literally right now, I think on Hacker News yesterday or the day before,
there was a front page article about modules in C++ and when are they coming?
So an observation I would make is that every language that just had encapsulation, which is to say the only way to do public and private was at the class boundary, every single one of them has ended up wanting a module system. Every, I should say, every popular one, okay? Like, I'm sure there are some, you know, obscure languages that don't have it; I'm sure Smalltalk will never, you know, consider adding a module system. But I mean, C++, Java, right? All these languages have ended up having some separate mechanism. So it's like, actually, we don't want to only be able to do that through classes, and that makes sense; modules are a really nice idea. But the question becomes: if you have public and private at the module level, do you also need a separate class thing, which also has its own public and private, or is module level sufficient? And it seems to me to be the case that module level turned out to be, you know, sufficient. And, you know, Rust is an example of a language that does not have objects, does not have classes, unless you want to count, like, you know, dyn objects, like trait objects and stuff like that. But that's just the name they chose for that; they are not objects in the object-oriented sense. They don't have classes, they don't have inheritance or any of that stuff. So yeah, I don't know, I don't want to get too far in the weeds on that, but I guess to sum it up, I would say that...
Actually, I gave a whole talk on this. I gave multiple talks on the subject
of object-oriented and whatnot, but I gave one called The Next Paradigm Shift
in Programming and also The Return of Procedural Programming.
The Return of Procedural Programming is a very on-the-nose talk about how OO got really big. If you look at the biggest new languages that have come out that are getting popular, they break down into two categories. One is they're just a plus-plus version of an already popular language. So for example, Kotlin is basically Java++: they took Java and then intentionally have seamless Java interop, and you could not possibly make a language that has seamless Java interop without having it be object-oriented, because Java is extremely object-oriented. But if you exclude those and look at languages that are not coupled to an existing, already popular language, what are the greenfield languages? And there's only two: there's Rust and there's Go. Those are the two languages that are getting popular that are greenfield, not based on and coupled to an existing, already popular language, and neither of them is object-oriented at all. They're just procedural languages, and hence the title of the talk, The Return of Procedural Programming. So if someone wants to make the case that object orientation is the future, you might want to ask: why is it that the only new popular object-oriented languages that are coming out and gaining actual traction in the real world are all coupled to legacy object-oriented languages, and nobody's making new object-oriented languages from scratch that are seeing popularity?
I can get behind that. Never seen it this way, but the way you explained it makes a lot of sense.
We will link those talks in the show notes.
I guess Rust kind of found a nice balance there, because that's also somewhat what I consider idiomatic Rust: to make use of separate impl blocks, to make use of composability with traits and so on. And you don't always need trait objects and that sort of thing. You ensure a lot of things at compile time and make it still composable, right? And you still have all the ways to do different kinds of visibility at your disposal: you can have pub, you can have pub(crate), you can have public modules and so on. So that's pretty cool. I'm sure we could talk about this for a long time, but I would like to shift gears to the first time you noticed that Rust might not have been the right choice for the language that you were building.
What was the first time when you noticed, well, no, I need to think about an alternative to Rust?
That's very easy. We talked early on about how a big part of programming language UX is compile times.
They got slow. When we had a small Rust code base, their compile times were
fine. Everything was nice and snappy.
And as it got bigger and bigger, they got slower and slower.
And now they're just quite slow.
I just did this morning. I did a scratch build of the Rust code base and it
took like 30-40 seconds somewhere in there and a rebuild still took about 10
seconds and in contrast our Zig project,
which granted this is apples and oranges. The Rust code base is like 300,000 lines.
A Zig rewrite is currently at about 100,000 lines.
The scratch build of the Zig one was like 13 seconds or something like that and then the,
the incremental rebuild was like 1.3 seconds. Now that's worth noting that that's
going to get much, much faster in the future because I'm working on an ARM64 Mac.
Zig has an alternative backend to LLVM, but it's only X64 right now.
The ARM one is still a work in progress. It hasn't shipped yet. It's not stable.
And once it does, then that 1.3 second rebuild is going to get much smaller still.
And also Zig still doesn't have full, like they have not fully realized their
goals in terms of incremental rebuilds, whereas Rust, I would say,
has at this point based on the constraints of the language with crates being
the smallest compilation unit rather than individual files.
So given all of that, it's already the case that we're seeing a much bigger gap between Zig and Rust than is just linear in terms of lines of code. Like, all else being equal, the Rust code base being 300,000 lines versus the Zig one being 100,000 lines, you would think that the Zig one would be maybe three times faster, but not ten times faster for incremental rebuilds. And also, it's not just that it's already ten times faster, which it is, it's one-tenth the incremental compilation time, but also that it's going to get much faster in an upcoming release, probably the next one after the current release, which is when, I'm assuming, the ARM64 backend is going to ship. And actually, people who are working on the Roc compiler on x64 processors are already benefiting from that much faster compilation time, because that one already has shipped.
So meanwhile, on the Rust side, I mean, LLVM is a very, very well-known culprit for making slow builds, and unfortunately you can't just, you know, do cargo check. That's not the whole thing; you want to run your tests, and in order to run your tests you have to do code generation, and that requires LLVM in Rust. There is the Cranelift backend, but I'm not kidding: when I started Roc, wrote the first lines of code, the Cranelift backend for Rust was a work in progress. It still hasn't landed. So that was 2019. And maybe it's right around the corner, but I can only hold my breath for so long.
Yeah. And I guess you probably used all the tricks at your disposal to make it faster.
You probably went through a lot of tutorials and tried a few things.
It's probably optimized to the max by now, the Rust code that is.
I wouldn't say we went all the way to the max. We certainly tried a lot of things.
So like the really obvious one that everybody says, oh, just try mold.
And mold made no difference whatsoever for us. I mean, we didn't even bother enabling it on CI, because we had people who were, like, daily driving Linux, and it was, oh, let's try it out, and they're like, I mean, it's a little faster. You know, mold for linking. We tried splitting up our crates along more boundaries, et cetera. I mean, really, the overall feeling that we ended up having was that even if we use the maximum number of tricks and try to contort the code base around just the number one goal of trying to get it to build as fast as possible, it's just still not going to be anywhere near as fast as Zig is at building. I mean, the Zig team is in the process of working towards hot code reloading, where you're running your Zig program and, while it's running, you make a change, you know, you save and it's instantly updated in the running program.
That's... it's inconceivable to me that Rust would ever have that, like, in my lifetime. I'm not saying it's impossible, I'm just saying that, based on, I don't know, the way things are organized right now. And I've talked with a lot of people in the Rust project about compile times. There are definitely some heroes who are really trying to fight the good fight, you know, and trying to make compile times a major priority. But unfortunately, it really feels like that's not the majority view right now. The majority view is not that compile times are a major priority or need to be a major priority. Or maybe it's that people think they should be a priority, but their ambitions for how much of an improvement they think is possible are way below what the ambition level is for Zig. Like, when I talk to Rust core team members, I've told them this, so I'm not saying anything here that I haven't said to people who work on Rust.
When I talk to them about the compile time situation, the response that I usually
get is something along the lines of, here's why it's really hard to make it faster.
When I talk to Zig people about compile times, it's the exact opposite.
They're like, I'm so sorry that it's not already as fast as the hardware could possibly make it be.
We're working really, really hard on making it be that whatever your hardware
is, we're using it to the maximum extent to give you the absolute lightning fast compile times.
The attitude difference is just totally night and day, and this was something that I picked up on ever since I met Andrew Kelley, who created Zig, at a conference: there's this big attitude difference in terms of how important compile times are to the roadmap. Every year I fill out the State of Rust survey, and, you know, I'm outing myself, whatever, if somebody reads that they could tell it's me even if I didn't put my name on it. They ask about, oh, do you want this feature, do you want that thing stabilized, you want this, that. Every year I'm just like, I don't care, I don't care, I don't care, just make it faster. All I care about is compile times. I don't care if this gets stabilized or that gets stabilized or this is available in the language. Everything in Rust is already, from a feature perspective, fine. All these little things like, oh, what if you have async functions? I'm like, I don't care, I'll just put them in a block, it's fine, I'll make an async block or closure. I don't care. I just care about... I want the compile times to be a lot, a lot, a lot lower than they are.
But of course, I don't seem to be in the majority on that in terms of Rust users either. It seems like, to be totally honest, a lot of Rust users are okay with the compile times being the way they are, and maybe that's because they're working on smaller code bases, I don't know, or maybe it's because they're coming from a Java or a C++ background and they're like, what's the problem? But coming from an Elm background, and also now, you know, having experienced Zig compile times, I just can't not be bothered by it a lot, all the time, knowing what's possible.
So yeah, I don't know. I've ranted on my podcast, which is Software Unscripted, by the way, a lot about Rust compile times.
We were thinking about a rewrite anyway. I don't know that if we would have
like seriously discussed doing it in Zig if Rust compile times were fast.
I think we would have just been like, no, we're going to rewrite,
but we're going to rewrite it in Rust again.
Okay. I'm probably in the camp where compile times are not that big of a problem for me nowadays. It used to be a problem, and then two things happened. First, modern ARM hardware is pretty decent at compiling Rust, so you have the M4 and it's much better than it used to be. And the other part was that the Rust compiler did actually get faster since 1.0. I hear that a lot from people that come from functional programming and from game dev, from people that need fast feedback loops, because this is kind of a development model that doesn't really fit well, or perfectly well, right now with Rust. I feel like if I were to write a game from scratch, I would also be cautious about maybe using Rust or not, I wouldn't be sure. But is that true? Does that ring true to you, where you say it's because of your background, maybe even before using Rust, you learned that there's a different way to build software?
I certainly think that the degree to which compile times bother you would depend,
sort of obviously, on what you're used to and what you, I don't know,
think of as sort of possible or normal.
Like if I'm used to Elm and like, you know, sub second, you know,
recompiles and stuff like that, then yeah, I mean, it's going to bother me when
I'm waiting 10 seconds, you know, to, to be able to build my thing or to run my tests.
Similarly, I mean, like at Zed, it's, it's just really obvious.
Like the classic example of the, the really obvious pain point here is I'm like,
I just need to scoot this thing over by two pixels because it's a little bit off.
I just need to make a little tweak here and I'm just sitting there waiting for
that. I'm like, if I were using a browser, JavaScript or something like that,
I wouldn't even need to think about this. I would just be like, refresh the page.
Now, granted, in modern front-end development, a lot of people have TypeScript
and Next.js and this and that, but those are self-inflicted wounds.
You have the option to not do that to yourself.
In Rust, there is no such option, as far as I'm aware. There's no way to say,
I have this big Rust code base.
I'm just making this small tweak to this one thing. I think if you wanted to do something like that, you would probably need to, I don't know, really aggressively reorganize your crates in a way where you can try to just make it so that, I don't know, the caching works out or something like that.
I mean, we've tried to do some of that at Zed, obviously, because compile times are such a pain point.
But yeah, if you need to make a change to something deep in the bowels of our
code base, it's always going to be super slow.
Something that's depended on by lots of things, there's just no way to get it to be fast. Whereas again, if you have a giant JavaScript code base, not TypeScript, you know, where you have to do the builds and everything, but just a giant vanilla JavaScript code base, obviously that has its own problems on many, many levels, but in terms of feedback loop it's still just going to be very fast. And yeah, so if that's the type of thing that you're used to, this sort of JavaScript- or Elm-like feedback loop, it's just very frustrating, I think, innately, to work on a large Rust code base and to have these slow feedback loops.
Yeah. Zed has sort of their own UI framework, as far as I remember, but there are systems like Dioxus and Leptos now that have hot reloading. It's an experimental feature, but they somehow made it work such that you can make certain modifications in your code, and it's kind of guarded by a certain way to write things. Not everything is hot reloadable, but some components are. But of course, you're not writing some sort of user interface with Roc, you're writing a compiler, so that's completely different, right? And I can see how that might be frustrating, where you make a single-line change and then you wait for 30 seconds for a clean build and so on. So that can certainly be true. Have you considered any alternatives, other languages that maybe have faster compile times, spoiler, or maybe not that much of a paradigm shift? Like, maybe, I don't know, Microsoft, for example, they rewrote C# in Rust, but also TypeScript in Go. Or, yeah, actually TypeScript and C# might also be alternatives. I don't know if you want to write a compiler in TypeScript necessarily, but perhaps C#, I don't know. Or have you considered any other languages other than Zig as well for the rewrite?
Well, this gets back to the performance ceiling, right? So, like,
I said, it's been a hard, non-negotiable goal that Roc, the Roc compiler, needs to be as fast as possible. It needs to be really, really fast. It needs to not just be kind of fast, it needs to be extremely, extremely fast. That's a really, really important thing to me about the compiler, which means that any garbage-collected language is immediately out. So Go is immediately out, C#, all of them, you know, no dice. So really, the only contenders I would say would be Rust, Zig, or some other language that basically supports memory unsafety. So even when we got very far into the Rust implementation of the Roc compiler, we were using unsafe more and more, and using it in the Rusty way where, you know, you try to minimize the blast radius, but to do things like struct-of-arrays and whatnot. And I don't know, maybe we should get into that, but yeah. Roc also has a goal of being, for a language that automatically manages memory for you and guarantees memory safety,
It tries to be very very fast and like i said like you know we want like run
faster than go is a goal one of the things that that makes me realize is just
how important memory unsafety is to performance like basically there are some
things that you cannot do or cannot do ergonomically,
if you have a hard requirement of like memory safety is
enforced there are just some things that require memory
on safety if you want them to go fast because in order to enforce memory
safety you have to do runtime checks so really really easy
A really, really easy example of this is bounds checks. Sometimes the compiler can elide them, but sometimes it can't. The way that Rust gets memory safety around buffer overruns is that if you have a slice or a Vec and you do an access at a particular point in there, that's going to be checked at runtime. If you use the square brackets, then it's going to panic if you have an index out of bounds, or if you use .get, then it's going to give you back an Option and you have to handle: what if it was in bounds, what if it was out of bounds? C does not have that. Rust, of course, does have the unsafe equivalent, but then you have to wrap it in unsafe, and then you avoid the bounds check. The point is that if you want maximum performance, you don't want that bounds check; you want to just not do it. And the downside is, if you make a mistake, then now you have undefined behavior. So obviously there's a trade-off there.
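To make this concrete, here is a small illustrative Rust sketch (not from the Roc or Zed code bases; the function and data are made up) showing the three options just described: checked indexing that panics, .get returning an Option, and the unsafe escape hatch that skips the bounds check, kept to the smallest possible scope.

```rust
fn sum_first_three(values: &[u64]) -> u64 {
    // Checked indexing: panics at runtime if `values` is empty.
    let a = values[0];

    // `get` returns an Option, so the out-of-bounds case must be handled explicitly.
    let b = values.get(1).copied().unwrap_or(0);

    // Unsafe escape hatch: no bounds check at all. If the index were out of
    // bounds this would be undefined behavior, so the `unsafe` block is kept
    // as small as possible and guarded by the length check around it.
    let c = if values.len() > 2 {
        unsafe { *values.get_unchecked(2) }
    } else {
        0
    };

    a + b + c
}

fn main() {
    println!("{}", sum_first_three(&[1, 2, 3, 4]));
}
```

In a fully memory-safe language with no unsafe escape hatch, only the first two options exist, which is exactly the performance ceiling being described here.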
But in a language like Roc, we don't offer the unsafe option. Even if you're in the hottest of hot loops, and the compiler is not able to elide this bounds check, and the bounds check is actually causing problems for you in practice somehow, you're stuck. Sorry, we don't support that. So the performance ceiling is higher on something like Rust or C, because Rust does allow you the option to say, no, you can do it without the bounds check if you're really, really sure, and C just doesn't have the bounds check in the first place. Zig, much like Rust, has the option of doing it with or without the bounds check.
When you're compiling, Zig has different optimization modes, one of which is ReleaseSafe, which is basically like what Rust does: always have the bounds check. You can also use ReleaseFast, which basically means we'll do the bounds check in debug builds, but in release builds we will not do it. That's kind of similar to how Rust will panic on overflow in debug builds, but then in release builds it'll just let the number overflow and the integer wraps around. Zig does that same thing for memory accesses if you compile with ReleaseFast as opposed to ReleaseSafe; ReleaseSafe will just keep the checks. So basically, those kinds of things are not accessible in a language like C# unless you want to drop into C FFI, and at that point I'm like, well, if I'm going to do FFI, why don't I just write it in one language?
So that's the type of thing that I see as the reason we did not consider languages like C# or Go. Even though they are plenty fast for garbage-collected languages, however fast they might be, I want that maximum performance ceiling to be available.
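For comparison, this is the Rust behavior being alluded to, again as an illustrative sketch: arithmetic overflow panics in debug builds but wraps in release builds by default (you can opt back into checks with `overflow-checks = true` in the release profile), which roughly mirrors Zig's Debug/ReleaseSafe versus ReleaseFast split for safety checks.

```rust
use std::hint::black_box;

fn main() {
    // `black_box` keeps the compiler from proving the overflow at compile
    // time, so the example demonstrates the runtime behavior.
    let x: u8 = black_box(255);

    // Debug build (`cargo run`): panics with "attempt to add with overflow".
    // Release build (`cargo run --release`): checks are off by default,
    // so the value wraps around to 0.
    let y = x + 1;
    println!("{y}");

    // If wrapping is what you actually want, say so explicitly;
    // this behaves the same in every build mode.
    let z = x.wrapping_add(1);
    println!("{z}");
}
```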
One of the killer features that keeps coming up in every single discussion I have with people who compare Zig and Rust, which are fundamentally two different languages, is that Zig has this very nice concept of comptime.
In Rust, we have not so much: we have derives, we have build.rs, and we have macro_rules. But that's more or less what we can do.
So from your perspective, knowing both concepts, what is your current feeling about comptime? Is it something that Rust should add? Or is it something that is very much native to Zig and the way Zig works, and there are better alternatives in Rust?
I don't think you could retrofit comptime onto Rust. I don't think that would make sense. The thing that's really cool about the way that Zig uses comptime, primarily, I would say, is that Zig has comptime instead of a whole laundry list of features that Rust has to try to do the same kind of job. So you learn this one concept. I mean, there are different aspects of the concept; it's not just, oh, here's comptime, it runs at compile time, the end. There's a whole supporting ecosystem around it. For example, if I want to do type introspection, that's just a thing I can do at compile time, but you do actually have to go learn how the type introspection stuff works, kind of like how in Rust you have to go learn proc macros. Though proc macros are a lot more complicated than comptime, which I would say is the other nice thing about comptime: it's not just that it's one powerful feature that takes the place of several other features in Rust, but also that it's conceptually pretty simple. The learning curve is not as high as something like, again, I'll pick on proc macros.
That said, I don't want to say that it's all upside and no downside.
I mean, one of the things that I miss from Rust when I'm working in Zig is that, because Rust has full parametric polymorphism, including for function arguments, Rust types tend to be more self-descriptive than Zig types are.
It's pretty common in Zig to see something where, at the type annotation level, it says something like, oh, this takes in any type. Now, this is not the same thing as in TypeScript, for example, where you have a type called any that's just this magical thing you can give anything to and it's all going to work out, unless it crashes at runtime or gives you horrible results. That's not what it means. What it means is just that this type is going to be resolved at compile time, rather than being fixed to a hard-coded type that you've written down. So you will still get a compiler error if you misuse that function and try to give it an incompatible type.
But again, the thing that I miss from Rust is that in Rust, I would be able
to see a type written out.
So just at a glance, I can say, what does this function take?
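As an illustration of that difference (an editorial sketch, not code from either compiler): in Rust, the trait bounds in a generic signature spell out what a caller can pass, whereas a Zig-style anytype parameter leaves that to the implementation and its documentation, even though misuse is still caught at compile time in both languages.

```rust
use std::fmt::Display;
use std::ops::Add;

// The signature alone says what you can pass: any copyable type that
// supports `+` and can be printed. There is no need to read the body.
fn describe_sum<T>(a: T, b: T) -> String
where
    T: Add<Output = T> + Copy + Display,
{
    format!("{} + {} = {}", a, b, a + b)
}

fn main() {
    println!("{}", describe_sum(2, 3));         // integers work
    println!("{}", describe_sum(1.5_f64, 2.5)); // so do floats
    // `describe_sum("a", "b")` is rejected at compile time, and the bounds
    // in the signature already tell you why.
}
```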
In Zig, it's much more common than it is in Rust that, in order to find out whether or not something is going to compile, I actually need to do a build. So the analogy here would be macros in Rust.
Sometimes I have some code and I'm like, okay, I'm looking at this.
And based on looking at it, I'm not totally sure if this is going to compile or not.
I need to actually compile it to figure out if it's going to work or not,
because it has to resolve all the macro stuff.
And then based on how the macros resolve, okay, maybe it's going to turn out
that something did or did not compile, and then I'll get an error.
So I'm still getting the error at compile time. But of course,
I would prefer to have it be more visually obvious, where I can just look at it and say, this is definitely going to work, or this isn't going to work, without actually having to wait for the compiler to go through the macro expansion to be able to tell whether or not it compiled. That analogy, I think, applies to Zig's comptime as well, where there's a higher number of cases where I can't just look at it and know for sure what its type is, or what types this function accepts or doesn't accept. I have to wait for the compiler to go through and give me that feedback. So the absolute fastest feedback loop is: I just look at it, and I can tell from my intuition, my experience with the language. The second fastest feedback loop is compile time. Then the third fastest would be runtime. So there are more things in Zig that are in that second case, where I can't just look at it, I have to actually run the compiler, than there are in Rust, specifically when it comes to type annotations, because comptime is used in type annotations like that.
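A tiny illustrative example of that macro analogy in Rust (the macro and struct are made up): whether each invocation below compiles depends on what the macro expands to inside the impl block, which you only learn by actually running the compiler, not by reading the call site.

```rust
// Generates a getter that returns a copy of a field.
macro_rules! getter {
    ($name:ident, $field:ident, $ty:ty) => {
        fn $name(&self) -> $ty {
            self.$field
        }
    };
}

struct Point {
    x: i32,
}

impl Point {
    // Expands to a valid method, so this compiles.
    getter!(x_value, x, i32);

    // This call site looks just as plausible, but its expansion would fail
    // to compile because `Point` has no field `y`:
    // getter!(y_value, y, i32);
}

fn main() {
    let p = Point { x: 7 };
    println!("{}", p.x_value());
}
```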
So this is kind of an inside-baseball thing, but I think anyone who's used both languages would pick up on it. Yeah, I do appreciate the conceptual simplicity of comptime and how it can take the place of all these other language features, making the language simpler, with a lower learning curve and all that good stuff. And it's very powerful. There are things you can use comptime for in Zig that I maybe haven't fully thought through, but it seems like it would be harder to wrangle all the different Rust features together to do the same thing.
But that is a downside: some of your type annotations are just not as self-descriptive, and in order to tell what a function takes, I either need to go read the implementation, or, more likely, I need to read the comments for the function that say, okay, here's what you can give this thing. I can't tell just by looking at the type annotation; I actually do need to go read some documentation just to know what I can pass to this that will compile, just like with a macro.
I'm not an expert on this, but as far as I know, Zig's comptime also takes the outer scope into consideration: if something was defined before the comptime call, then you can access it. I don't know if that is true, maybe correct me if I'm wrong. But there is a crate called crabtime now, which.
Oh yeah works for us and.
It takes a piece of code and then compiles that in a separate project, and then it inserts the result into your project.
Yeah, so it does not.
Access the outer scope, though.
But again, going back to it, to me the cool part about comptime is that it has something in common with what I love about functional programming, which is the subtractive aspect. The thing that I most like about comptime in Zig is all the features that Zig does not have because it has comptime. So if you add comptime to Rust, which already has all those other features, it just seems like it's moving in the wrong direction. Well, no, the thing that I like is the reduced feature set, the smaller set of simple primitives. Similarly, if you take something like JavaScript or TypeScript and say, oh, let me add some more functional programming stuff, it's like, well, no, what I like about it is the subtraction, the smaller tool set. I want a smaller tool set of simple primitives. I don't want to just add yet another way to do it to something that I think is already too complex in the JavaScript ecosystem.
Yeah, I can fully get behind that.
And also, whenever you talk through such a technical problem, or maybe a technical decision that was made, it always feels like your opinion is very balanced. You don't really take either side; you just pick what works best for you and for the team, right? And that's very refreshing to hear as well, because everything is backed by facts and a lot of knowledge. So that was pretty impressive. I guess we could talk about that for ages.
Unfortunately, we did.
Run out of time, but there's one final question that I commonly ask at the end, which is: what is your message to the Rust community?
I think the biggest thing would be: focus on the end user.
It's easy, because Rust is such a big language with so many different features, and there are so many competing concerns: I need to balance making this safe, and also making it well tested and well structured and easy to maintain. There are all these different things that you could be thinking about, and I think focusing on what the end user is going to get out of my program is the most important thing. That's got to be the north star, and don't lose sight of it when you're thinking about all these other things that you as a programmer could be thinking about. And I think that's something Zed really embraces, which I really appreciate about Zed. We brag about things like: this is a code editor that runs at 120 frames per second when you're scrolling through your tabs, you know, switching, like, code editors as fast as possible, and things like that. And I think that's how it ought to be. It shouldn't be that we're bragging that Zed is innately good because it's written in Rust, but rather that Zed is really, really fast, and writing it in Rust is the way that we achieve that.
That was pretty amazing, Richard. Thanks so much for taking the time. Where can people learn more about Roc?
Yeah, so Roc is spelled R-O-C. It's named after the mythical bird, not the rock like the inanimate object or the genre of music. So, roc-lang.org. Like I said, we're in the middle of a compiler rewrite, so you can definitely try it out, but stay tuned. This is going to come out, I'm assuming, somewhat before Advent of Code 2025, and we're hoping to have the new compiler ready to rock (pun actually retroactively intended) for Advent of Code 2025, so people can try it out. So far we're on track to be able to do that. There are going to be a lot of new design changes and awesome stuff for the language; it's going to be easier to get into, especially for beginners, and people can look forward to all that. But if you want to get involved sooner than that, or certainly if you want to contribute to the compiler or donate to what we're doing, we have a nonprofit foundation set up that people can donate through, so it's tax-exempt in the US. All of that is at roc-lang.org. And of course, if you want to follow any of the Rust stuff I'm doing, check out Zed at zed.dev. It's been my daily-driver editor for years now, I love it, and it's been just getting better and better.
Yeah, that was amazing. Greetings also to Conrad Irwin and Folkert de Vries, who were previous guests, as you mentioned. And Richard, thanks so much for taking the time.
Thank you.
Rust in Production is a podcast by corrode. It is hosted by me,
Matthias Endler, and produced by Simon Brüggen.
For show notes, transcripts, and to learn more about how we can help your company
make the most of Rust, visit corrode.dev.
Thanks for listening to Rust in Production.