Tweede Golf with Folkert de Vries
Matthias Endler and Folkert de Vries discuss Rust's role in modernizing infrastructure, including its use in the Roc compiler and in time synchronization protocols. They highlight the value of Rust for reliable software and the challenge of getting existing users to upgrade.
2024-02-08 73 min
Description & Show Notes
Have you ever wondered how computers stay in sync with the time? That is the
responsibility of the Network Time Protocol (NTP). Around since 1985, NTP is one
of the oldest protocols still in use on the internet and its reference
implementation, ntpd, written in C, is still widely used today.
That's a problem. C is a language that is not memory safe, and ntpd has had its
share of security vulnerabilities. Here is a list of CVEs.
As part of Project Pendulum, Folkert de Vries and his colleagues from Tweede
Golf have been working on a Rust implementation of NTP. I sit down with Folkert
to talk about the project, the challenges they faced, and the benefits of using
Rust for this kind of project.
Along the way, we learn about funding open source projects, the challenges of
establishing a new implementation of a protocol, and all sorts of other
interesting things that might or might not be related to NTP.
The Network Time Protocol (NTP) is a cornerstone of the internet. It provides accurate time synchronization on millions of devices, but its C-based implementation, which dates back to the 1980s, is showing its age. In this episode, we talk to Folkert de Vries, Systems Software Engineer at Tweede Golf, about their work on reimplementing NTP in Rust as part of Project Pendulum.
ntpd-rs is an open-source implementation of the Network Time Protocol, written entirely in Rust, with the goal of creating a modern, memory-safe NTP implementation.
About Tweede Golf
Tweede Golf is a Dutch software consultancy that specializes in safe and privacy-friendly software. They work on projects that are critical for creating a safe internet infrastructure, protecting citizens' privacy, and securing connected devices with Embedded Rust.
Tweede Golf is also an organizing partner of RustNL, a conference about the Rust programming language, which takes place in the Netherlands.
About Folkert de Vries
Folkert is a Systems Software Engineer at Tweede Golf, where he works on low-level protocols that ensure the safety and security of the internet and devices connected to it. He is an open source maintainer and polyglot, working with and extending languages as diverse as Rust, Elm, and Roc.
Links From The Show
- The Roc programming language
- ntpd-rs - Implementation of the Network Time Protocol in Rust
- Network Time Protocol (NTP)
- Precision Time Protocol (PTP)
- Simple Network Time Protocol (SNTP)
- sudo-rs - A memory safe implementation of sudo and su
- Fuzzing in Rust with cargo-fuzz
- Tokio Async Runtime
- Internet Security Research Group
- Sovereign Tech Fund
Transcript
This is Rust in Production, a podcast about companies who use Rust to shape
the future of infrastructure.
My name is Matthias Endler from corrode, and today we are talking to Folkert de
Vries from Tweede Golf about how to modernize critical time-syncing infrastructure with Rust.
Folkert, welcome to the show.
Thanks.
Can you just briefly talk about yourself and about your role at Tweede Golf?
Sure. So I've been using Rust for about four years. In my spare time,
I mostly work on the Roc programming language, where I'm one of the main contributors.
And it's been a really cool place to try out a lot of low-level stuff.
And then at work at Tweede Golf, I mostly work on systems programming projects,
decently low-level, and in recent years, mostly on our big open-source projects.
When you mentioned Roc, I know it's a Rust podcast, but maybe people might
still be interested in what it is.
Can you maybe say a few words about that and where Roc comes from?
Yeah, it's a pure functional language. So it is in that tradition.
It's sort of a combination of Haskell and Elm; those are the two main influences.
And really what makes Roc different is that it compiles to native code.
And so occasionally we also just compile straight to assembly.
That's a fun skill to have.
And we want to have a very high performance ceiling for this language.
So functional programming is typically associated with looking kind of fancy
but being kind of slow. But actually, with recent research,
we're able to eventually output very fast code.
You mentioned Haskell, you mentioned Elm. I wonder where Rust fits in there.
You still seem to be in the Rust space, you still seem to be working with Rust.
How does Rust tie into that picture?
This is interesting. So when we wanted to make the Roc compiler,
sort of the obvious choice would have been Haskell, given that we were functional
programmers and we quite liked Haskell.
But we had seen some existing Haskell projects, and we just knew that,
in terms of performance, it wasn't the best that it could really be.
So we knew that in theory with Rust or with like a low-level language,
we could do like, you know, SIMD and various other kinds of optimizations.
We had no idea how to do any of them, right?
Like we were functional programmers and had heard about this sort of optimization
stuff, you know, in university or other places, but we hadn't actually experienced that.
And so in practice, C++ would have been a non-starter for us.
But with Rust, we felt right at home, given its type system that was very familiar.
And then, you know, the rest of the language is sort of imperative.
But, you know, you are exposed to that anyway.
We know how to write a for loop, right?
So that is sort of how we got started with Rust. And, you know,
sometimes we look back at some of the old code, and it's quite bad in terms
of, you know, idiomatic Rust.
But over time, we got a lot better and we actually sort of descended down to
the lowest level of the computer where we're now reasoning about like cache
locality and using SIMD and minimizing the number of syscalls, that sort of stuff.
When you explained that, I was wondering if you used LLVM as a backend.
So for the uninitiated, LLVM is a compiler backend infrastructure that,
for example, powers Rust as well. Well, is that the same for Roc-lang,
or do you have your own compiler backend?
I know for the people that are listening and are not too technical,
apologies for the detour, but I'm just genuinely curious about it.
Yeah, so we have different modes, and this is in fact something that Rust will
also soon have, and that many languages are now looking at.
So LLVM is really great for getting the best final output.
So if you want a sort of super optimized program, LLVM is basically the state of the art.
The problem is that most of the time when you compile your code,
you don't really need that level of optimization.
You really just want to check whether the change you made sort of fixes the
test or, you know, like the program now runs correctly.
And then LLVM, the sort of trade-off that it makes is sort of a bad one for
you because it spends a lot of time optimizing a program.
And even if you ask it to sort of not do any work, it's still very slow.
And, you know, you will notice this all the time with Rust,
where compile times are very slow.
And often, this is really not because of the borrow checker or anything.
That is compiler code written by Rust developers, and they can make that fast.
The problem is usually LLVM, or a significant part of it is LLVM.
So a lot of languages are looking at alternative backends for development.
We have custom assembly backends for x86, AArch64, and WebAssembly.
They are pretty much on par with LLVM now, but they are blazingly fast.
So Roc can actually compile and link and then run Hello World faster than Python
can print it.
So like that is the sort of speed that your computer can actually achieve if you don't use LLVM.
We can't beat LLVM on output quality; thousands of hours of labor have gone into
having it produce very fast programs.
But it just makes the tradeoff of like, we're going to set everything up to optimize.
And then if you decide not to do that, you've wasted a lot of time and resources, basically.
Isn't it also true that if you are at a stage where you can compile a reasonably
sophisticated language, one that is similar to Haskell or Elm to some extent
(I'm talking about Roc in this case), couldn't you also use that backend
for Rust, which is also in this ML sort of space of languages,
pretty similar to Haskell?
Could you use the backend for Rust at some point?
That would require quite a bit of additional work, I think.
Like it's really specialized towards the sort of operations that Roc allows for.
Rust ultimately has a lot more primitives, in particular
things around pointers, that you would need to somehow translate.
It's totally feasible. It's like an extension that you could make.
But Rust is currently looking at the Cranelift backend, which is really cool.
There's a lot of sort of cool research that goes into how that one works.
And then also the GCC backend, which I believe is mostly about supporting more
embedded targets, but potentially also has performance improvements over the LLVM backend.
Rust does not have a garbage collector. Does Roc have one?
And if so, how does it impact the language design or the compiler design?
In Roc, we use reference counting
as our garbage collection strategy. So we
don't have a garbage collector in the classic sense, where you
run into garbage collection pauses and things like that. Effectively,
everything is wrapped in an Arc, in Rust terms, and so cloning is cheap.
But that reference count needs to be updated all the
time, so there is a certain overhead associated with that. Also, all of your allocations
are slightly bigger.
You need space for that reference count to live.
But generally speaking, that is a very,
like sort of good trade-off to make. We get very reliable performance.
You need something like this anyway.
And then for a functional language in particular, we also get mutation.
So normally in a functional language, you have this referential transparency
idea, where you can't actually update values in place.
So that means a lot of cloning unless you're smart, but we can actually allow for in-place updates.
Basically, when the reference count is one, you effectively get a mutable reference,
because there is just a single owner of that value, and you can make updates.
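To make that concrete in Rust terms, here is a minimal sketch using the standard library's `Rc`; Roc's actual runtime machinery is different, but the refcount-of-one rule is the same idea:

```rust
use std::rc::Rc;

fn main() {
    let mut value = Rc::new(vec![1, 2, 3]);

    // Reference count is 1: we are the sole owner,
    // so we can mutate in place without cloning.
    if let Some(v) = Rc::get_mut(&mut value) {
        v.push(4);
    }

    let shared = Rc::clone(&value);
    // Reference count is now 2: in-place mutation is no longer safe,
    // so get_mut returns None. Rc::make_mut would clone here instead,
    // which is essentially the trade-off Roc makes automatically.
    assert!(Rc::get_mut(&mut value).is_none());
    assert_eq!(*shared, vec![1, 2, 3, 4]);
}
```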
Feels like we went a bit ahead of ourselves. We kind of jumped right in,
and I would like to tie it back to the topic of this podcast, which
is NTP and other protocols that are useful for various things like timing.
But I guess before we jump into that, maybe we can take a quick step back and
say the overarching theme of what we talked about so far is,
in my opinion, performance and understanding how computers work.
Is that also what motivates you to work on these problems?
Yeah, it's really, you know, it's like I now end up being able to work with
a lot of things that I never really thought I would have worked on in university, for instance.
So I was this very sort of Haskell-y programming language theory sort of person.
And then Rust very naturally allowed me to sort of come into contact with a
lot of these low-level concepts, right?
Like, you can use Rust in a way that is quite similar to Python.
In practice, people will quibble with this.
But I think it's sort of true. Like, you don't need to understand a lot about
the machine to still write Rust code.
And then in practice, it is actually quite performant.
But very slowly, right, you click through the documentation,
you read some stuff, you click on go to source a couple of times,
and you will end up at these places where a lot of low-level details actually start to matter.
And I think this is really nice about Rust that like go to source generally
works even into the standard library. It's like occasionally you hit some macro
and that is kind of tricky, but generally you will find something that makes
sense and you can learn about it.
This is very different in say Haskell or Python or Java or whatever,
where the primitives are implemented in a different language and you have no idea how they work.
I kind of share the same background. Maybe I wasn't as deeply involved in Haskell
as you, but back in university, I did some Haskell.
And for some mathematics departments, I wrote some algorithms and implemented
some geometric functions.
And what I felt when I was introduced to Rust at some point was
that it's real-world Haskell, or pragmatic Haskell to some extent. It
felt like I could actually use all of the things that I learned from Haskell,
type theory and algebraic data
types and so on, in a real-world project. Did it feel the same to you?
Yeah. Well, I think if you come at it with that background, the type
system isn't actually that intimidating to you.
And there's just some syntax to learn, you know: what is a trait, what is an impl block,
that sort of stuff. You can pick that up very quickly.
And so I think with that background,
Rust isn't nearly as intimidating as when you are coming from, say,
a Java or a Python background, where Rust's approach to types and then lifetimes
is quite different from what you're used to.
Like to me, it made total sense. Like, okay, sure, we have a problem.
We're going to use the type system to sort of make that problem go away. Great.
Like I know exactly how that sort of general form of solution works.
And so that was very comfortable. But then like, as you say,
like actually it is much easier to get practical stuff done with Rust.
I also just think that it helps that Rust is a much more modern language,
whereas Haskell really shows signs of aging at this point.
It is over 30 years old.
At that sort of age, it just shows. It shows in the standard library.
It shows in an accumulation of features that we don't use anymore and have to
caution people to not use.
And so Rust has a sort of major advantage there in that it was able to learn
really from Haskell, from ML, from C++, from all the other languages.
Early on, you mentioned that Rust made
it really accessible for newcomers to learn
about the underlying implementation, and
you mentioned that you can click on
view source in the documentation. I do think
that this is an amazing feature. Are there other features like that, maybe
a little more hidden, that people might not know about, that
also enable you to learn how Rust works under the hood, or other things that
make the language more accessible to newcomers?
I think, with regards to how things work, it's also something that is
valued in the Rust community, that this sort of low-level detail is appreciated.
Like in a lot of other languages, people will say, oh, don't worry about that.
The runtime will take care of it. Perhaps even the compiler will take care of it.
But of course, in Rust, we know if you actually build the compiler or are quite
close to it, you will know exactly what limitations it has.
And also, you will know how fun it is to actually put a bunch of bytes in the
right order and have that constitute a program, right?
The sort of like hacking around with and really playing with computers,
really commanding the machine, I think is something that you can certainly find in the Rust community.
And, you know, in a couple of other communities as well at this point. It's
something you don't see all that often, but that excitement about really
commanding your hardware, that is really cool.
Yes, and there's this other notion of explicit versus implicit, where a lot of
things in Rust are not hidden; they are visible in plain sight. For example, how
vectors are implemented, or what allocates and what doesn't allocate.
Yeah, I was particularly... Well, I think a cool example of this is the mutex
synchronization primitive, which I'm sure I've seen in university somewhere.
Some professor must have mentioned it at some point, like this is a mutex.
But I don't think we ever really got to play with it.
And I certainly didn't understand how it actually works. Like,
how does this mutex know whether anyone else is currently also looking at the same piece of data?
How does it get notified that the lock is now available and it can be taken by the current thread?
And in Rust, you can actually just go to source and it's right there.
It's a bunch of more primitive values and you actually can understand how it works.
Like most other languages don't have this. Like even in C or C++,
if you go to the source code of like the libc or the standard library,
it is incredibly arcane.
It looks very different from sort of standard C source code,
in my view anyway, as not a very experienced C programmer.
Like that stuff just looks really weird.
And I mean, in Rust, you can just find something, find an implementation that makes sense.
That is very true. I can still remember that back in the day,
we needed to use mutexes in C, for example.
And the one thing that still pops up, or is still top of mind, is deadlocks.
And I do think that you could run into a lot of these situations with Rust as well.
I mean, it doesn't save you from deadlocks because essentially it's a logic
error, I would say. But at least if you use high-level primitives, it becomes
much, much harder to use them wrong.
Yeah, I also think that
in Rust it'll be really clear
when you need a mutex, whereas I think in C it might be much harder to
know: okay, I'm using this value over here, but is some other thread also
touching this right now? You sort of have no idea. Whereas in Rust,
the type checker will tell you, right? Like, oh, you're using this.
Or, well, first of all, you have an immutable reference, so you can't mutate that thing ever.
If you have a mutable reference, then you know it isn't shared between multiple threads.
And if you do want something that is both shared and mutable right now,
you need to wrap it in a mutex.
Then, of course, there's still logic errors you can make.
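A minimal sketch of the rule Folkert describes: the counter below is shared and mutable across threads, so the type system forces it into a `Mutex` (a plain `&mut` would not compile here):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared *and* mutable across threads: the type system
    // forces us to wrap the value in a Mutex.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // lock() blocks until the lock is available.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```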
That's one of my pet peeves with C, because even if it looks extremely explicit
about various things, it is actually quite implicit about what it does under the hood.
So a lot of things, a lot of details are conventions, or you need to be aware
that things return errors and so on.
In Rust, that is very rarely the case.
So it's the right kind of explicit, if you like. You don't need to deal with
memory management, which technically is also possible in C,
and you can be very explicit about it, but it allows you to make decisions which
can impact the program's design.
Yeah, for sure. I think it works out really well, the sort of approach
of, well, not exactly preventing deadlocks,
but certainly making them less likely to happen.
It feels like, imagine we have a conversation like this and there's someone
from the business side, maybe a CTO or a decision maker listening to us.
They might think these are problems that
only nerds care about, or at least people that
are highly technical. How do you bridge the gap between technology and
business then? Because at some point you need to be able to sell that, and maybe
explain why Rust or a technical solution is superior or gives you a business advantage.
Yeah. I mean, we would need to look more at, I guess, concrete cases.
But it's true, right? I need to eat, and so money needs to come in at
some point from someplace.
So I think you can ultimately distill this into, you know, value for users.
A lot of this low level stuff is really about user experience.
If you want to think about it that way, right? Like performance is ultimately user experience.
And I think we don't really appreciate that enough, but there's a lot of research
from like Google, right?
About how, if a page load is this many milliseconds longer, then this percentage
of people will just lose interest, right? This is very well established.
In practice, a lot of software is really slow and only people sort of excited
about this sort of low level stuff can fix that.
And when they do, it's like magic, kind of.
There have been a couple of recent, very interesting Rust projects,
actually. Like, I think there is a Python linter right now that is
like a thousand times faster than some of the other tooling in the Python space.
Like that is magic, right? For a Python programmer, like how did that happen?
But it happened because a couple of systems people got very excited and felt
like, hey, we can actually do much
better at this task than the existing implementations. Let's go do it.
And like, yeah. And then imagine the sort of like value that that actually has,
the sort of multiplier of this one person going really deep down into that rabbit
hole, coming up with a more efficient implementation.
But then all users of that tooling benefit. And of course, the usage of that tooling
is spreading very quickly, because a 1000x improvement is ridiculous.
You know, the sort of the amount of time saved, the amount of extra productivity
gains is really something that is almost hard to even capture, right?
So like, that is ultimately the power of this low level stuff.
Like, yes, it is very nerdy. Yes, it seems to be very distanced
from sort of the real world sometimes, but it has real outputs.
But at the same time, if you approach me and you said, oh yeah,
we will have a 10x or 100x improvement in performance.
And I ask you, how long will it take? And you say, yeah, it will take five or 10 years to get there.
I'm not sure if that convinces me right there because it's a big time investment
and it might be even hard to quantify.
That's true. So you
really need to pick your battles. There are
just certain areas, certain problems, where this sort of approach works
well. For some tasks, it really doesn't matter, and, you know, in our
spare time we might still play around with optimizing the last couple
of milliseconds or whatever,
but it doesn't have a lot of business value.
You need to be strategic about this, for sure.
Tweede Golf
Folkert, at Tweede Golf, you do a lot of open source work, I assume.
What does open source work look like at Tweede Golf?
How do you pick a project? How do you get started?
Can you name a few examples that you worked on?
What does that general process look like?
Yeah. We're a very small company, really, in a fairly unknown city in the eastern
part of the Netherlands. But
we do a lot of, I think, very interesting and sort of high quality work.
So historically, mostly in the web space, but then also now doing embedded.
We do a bunch of training and then we work on a couple of these like big open
source Rust projects right now.
So like, you know, we've always liked open source and might have contributed
occasionally to, you know, something or other.
But the big projects that we run now are sort of big funded open source projects
such that we can actually dedicate a good amount of like labor to them.
And so the current main ones are projects around sudo, PTP, and NTP,
where sudo is the "superuser do" command on Linux.
It's obviously very security critical that that implementation is sound.
And in that case, it also sort of needs to mirror the behavior of the existing implementation.
And then PTP and NTP stand for the Precision and Network Time Protocols.
And these are sort of important building blocks of the Internet.
And so that is sort of the main reason that those projects originally got started.
So sudo and NTP were originally funded by the Internet Security Research Group
through their Prossimo initiative. These are the people behind Let's Encrypt.
These are people that are very
passionate about the Internet and sort of preparing it for the future.
And this
initiative is basically their project to fund rewrites or implementations
of foundational building blocks of the Internet in memory-safe
languages, which in practice is a euphemism for Rust.
But for funding purposes, we need to say memory-safe language.
And if we're generous, we could include C-sharp or Swift in that definition.
But that's currently not something that we do.
Right. So is Let's Encrypt a corporate entity?
Is that the business that funds all those projects? Or does Let's Encrypt also
just belong to another organization?
I only vaguely remember that they started in the Mozilla realm somewhere,
but I don't know. I haven't followed up what they do now.
Yeah, so the ISRG and Let's Encrypt, they're all not-for-profit organizations.
They run, at least the projects that we do, basically on
money that ultimately comes from the big tech companies, and the ISRG facilitates
sort of putting that money to good use.
So they do a lot of, I don't know, the convincing to someone at,
you know, a major tech company like,
hey, you have this budget that is, you know, left over or hey,
you have this problem with this old piece of software that is not really maintained anymore.
That might be a bit of a security risk at this point.
Why don't you fund us to create an implementation in Rust, a modern one,
one that we can sort of guarantee will be maintained for the foreseeable future.
And so they really play this facilitating role with regards to our projects.
I think Let's Encrypt is something they run themselves. I'm not exactly sure
what the funding model for that is, but it is still a not-for-profit sort of
initiative that they set up.
Let's focus on NTP for a moment. What is NTP? What does it stand for? When was it introduced?
Give us some background on that.
Right. So NTP stands for the Network Time Protocol, and it is a protocol for
synchronizing time over the network.
And in practice, the network generally means the Internet. So the network is
an untrusted network that might span the globe.
So this really has been sort of with us since the start of the internet.
So it's like sort of around 40 years old at this point.
Very old. It's gone through a number of iterations. So the initial versions,
you know, have some flaws with the benefit of hindsight.
So we're currently on version 4 of the NTP specification.
And, you know, it runs on most devices around you that are connected
to the internet. They will occasionally synchronize their time over the internet,
just to be up to date and display the right time.
So for most consumer devices, that is all there is to it. Occasionally,
they reach out to a server and ask, hey, what is your time right now?
And they will just adopt that time basically.
But NTP is important for the internet because it is sort of crucial for a lot
of the infrastructure that keeps the internet together.
So in particular, like in a data center, you actually want all your devices,
all your machines to be sort of synchronized to one another with regards to,
you know, when did a backup happen?
You really want that timestamp to be accurate, right? Like if some error happens,
for instance, you want to be able to interleave the logs of two machines and
have that constitute a sort of an accurate global timeline.
If the machines are not correctly synchronized, you could have a message that
is received before it's even and send, and then good luck debugging that,
right? Like that is awful.
So just for sort of sanity, synchronization is very useful and very important
in a sort of a data center or cloud sort of context.
And then for security, you have this thing called TLS, right?
The sort of foundation of HTTPS.
And it relies on being able to expire certificates.
These certificates just expire at a certain point in time.
And if you were able to manipulate the clock of a machine somehow,
you could trick it into using an expired certificate, and that basically breaks
the whole security model of TLS.
So NTP is important both sort of for infrastructure reasons,
but also for security reasons.
And that means that our implementation also needs to be very secure.
You need that implementation to be good, right? Otherwise you'll run into issues.
I tried to remember when the last time was that
I manually set the clock on my computer, and it's probably been a decade by now.
Is it also powered by NTP under the hood? I'm on macOS; I don't know if it
supports all the platforms, or if different platforms use different protocols.
It's generally the same protocol, but most sort of consumer devices will
use a thing called Simple NTP, so SNTP.
And that just does a simple call of like, hey, server at Apple or Microsoft
or on Linux, there's this thing called the NTP pool.
And that is a big pool of sort of servers that provide time for you.
And it will just adopt whatever gets sent back. For the more serious
applications, like in a data center,
we actually want to have multiple sources of time, because one might get compromised,
or there might be a power outage or whatever.
And then you don't want to just desynchronize. And we also steer the clock
more accurately at that point.
So generally, what you can do is jump forward or backward.
That is what a laptop will do. I have a couple on a shelf here,
and if you boot them after three months, the clock will actually be quite far behind.
But if you want really good accuracy, what we actually do is sort of speed up
and slow down the clock slightly to sort of get its frequency to match the real frequency.
And this is tricky because the frequency actually depends on like the temperature
in the room, for instance.
So there isn't one true frequency. It always depends on the current circumstances of that machine.
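To put rough numbers on that drift: 20 parts per million is an assumed ballpark for a cheap crystal oscillator (not a figure from the episode), and even that adds up quickly:

```rust
fn main() {
    // Assumed frequency error of a cheap crystal: 20 parts per million.
    let ppm = 20.0_f64;
    let seconds_per_day = 86_400.0;

    let drift = seconds_per_day * ppm / 1e6;
    // ~1.73 s/day, or roughly a minute of error per month without NTP.
    println!("drift: {drift:.2} s/day");
}
```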
It is true that when you haven't used your
computer or your smartphone in a while, the
clock might be off. I realize now that it is actually a problem
that I also ran into, and then it takes a moment for it to be up to date again.
I wonder if the hardware vendors sometimes even do that on purpose and integrate
cheap hardware, because they think that NTP or SNTP will solve the problem.
Yeah. I mean, technically it runs on billions of devices, right?
So it's kind of a big market.
In theory, of course, your computer could contain an atomic clock and you would
never have this issue, right?
But in practice, we don't because those things are expensive,
they're heavy, and if you drop them, they generally don't work anymore.
So the clocks in our actual devices are generally fine, but they will just drift.
They will drift quite seriously, actually, if you have equipment that is accurate enough to measure that.
So at our office, we actually have a GPS antenna that is a good source of sort
of true time, the real accurate time in the absence of an atomic clock.
We had to put that thing out on the street, because our office turns out
to be a Faraday cage at GPS frequencies.
So there's just this lone antenna in the street to give us an accurate time signal.
And then we can actually measure, like on a Raspberry Pi, for instance,
how much it drifts versus the true signal that we get.
And it's very measurable very quickly. But of course, as a human,
you would only really notice once it exceeds perhaps a couple of seconds,
certainly a minute, right?
And it takes a while before the desynchronization gets that bad.
It's a very funny intersection between hardware and software, because you could
probably make the hardware more accurate in order to improve your timekeeping.
But at the same time, you can throw software at it and solve the problem at the software level.
But what I'm wondering about is,
if you wanted to improve the situation,
and you wanted to improve time synchronization with software, looking at what
NTP is and the implementation itself: would you say that there could be improvements
with regards to the protocol, or would you say the main improvements come from
fixing the software flaws that are in the existing implementations?
So there are certainly improvements that can be made. This is in part what PTP was designed for.
So PTP stands for Precision Time Protocol. It is more precise.
And so generally, it uses some additional hardware features to get more accurate timestamps.
But also, a lot of the sort of problem with the accuracy on the internet is
the structure of the internet.
So, in particular, you don't exactly know how far that signal needs to travel
between you and the server that has the time.
And that transmission delay and sort of variance in the transmission delay,
because on the internet, sort of the path from A to B can be different from
the return path from B back to A.
It can go through different hardware, basically, different wires.
The variance that that introduces, and the general noise in
your time signal basically, determine how accurate your synchronization is.
Now, it turns out that one of my colleagues is really into making better algorithms for that problem.
So we believe that we have the best synchronization algorithm out there today.
And so there's probably still further improvements that can be made algorithmically.
But fundamentally, you're fighting against the
noise in the network. And so what PTP allows you to do is say: okay, I have 20
meters of fiberglass cable right here, between these two devices. You
can program that in, and then it will account for that. And so long as that doesn't
change, you can reduce the noise in that way.
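For a sense of scale, here is the one-way delay for that 20-meter fiber run, using the common approximation that light in fiber travels at about two-thirds of c:

```rust
fn main() {
    let length_m = 20.0_f64;
    let c = 299_792_458.0; // speed of light in vacuum, m/s
    let v = c * 2.0 / 3.0; // rough speed of light in optical fiber

    let delay_ns = length_m / v * 1e9;
    // ~100 ns one-way: well below what NTP over the public
    // internet can resolve, but exactly the scale PTP cares about.
    println!("one-way delay: {delay_ns:.0} ns");
}
```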
Initially I thought, oh, why can't we just move from NTP to PTP?
But it feels like that's not feasible, because you need all of that information about the environment.
You need to encode that into the protocol to be more accurate.
But it feels like what you touched on before, this better algorithm,
could be the way forward.
And is that part of a newer version of NTP, or would you need a completely different
implementation, a completely different protocol, to make that happen?
Yeah, just to clarify, PTP assumes the network is trusted.
And so it can take a couple of shortcuts because of that.
NTP needs to sort of guard itself against potentially malicious input coming in.
And so what it does is if you listen to multiple servers, it will pick the most
reliable ones, the ones that agree with one another.
So it needs to expend some extra effort basically to guard against malicious input.
So what's sort of interesting about the NTP specification is,
one, there is a specification.
And in practice, this is mostly what our implementation is based on.
It's not really based on the existing implementations; we really started
from the ground up based on that specification.
The specification does specify a synchronization algorithm, but it is sort of
a public secret that that synchronization algorithm isn't very good.
It may have been at the time, but certainly now we just know more about sort
of how to deal with the noise in that signal that we can get more accurate synchronization.
So the best implementations use a custom algorithm today. And so technically,
they're not spec compliant, but practically, they behave as if they are.
They just give a more accurate time signal.
We are looking at contributing towards NTPv5, so the next version of the standard.
And in particular, something that we are trying to get in there is to not specify
the synchronization algorithm.
And to sort of leave that up to particular implementations, basically,
such that innovation in that space can happen without sort of being technically
not compliant with the spec.
But these are all things that you solve on the protocol level,
and you don't necessarily have to rewrite everything in Rust for that.
Maybe you can briefly touch on some of the flaws of the C implementation that
made you think that Rust was a better fit for this?
Yeah, so like, ultimately, this project started because the ISRG said,
like, okay, this is a fundamental building block of the internet,
it is not memory safe, there have been some incidents in the past.
And even though, you know, if an attacker can know what time you have,
that isn't so bad, it's sort of a springboard for further attacks, basically.
So I think there are good reasons both to have modern implementations of NTP
and then in particular for picking Rust for the implementation language.
So we sort of touched on this before. When you're looking at a 40-year-old C code
base, it is very hard to understand what happens.
And for instance, around synchronization, and that is what this sort of software
needs to do, it sends a bunch of messages over the network. So it is inherently
sort of a threaded application.
In C, that is very hard to understand. And so the existing implementations are
quite old, cross-platform C codebases, and they have a hard time attracting
new contributors for, I think, very obvious reasons.
And so that also means that the existing pool of maintainers is very small and
aging rapidly.
A lot of these folks have been
around since the invention of the protocol itself, and so
they're absolutely close to, if not past, their
retirement age. And they might decide that they don't want to maintain this
sort of software anymore, and then the rest of the internet would have a problem.
So I think there's a lot to gain from a modern implementation, where you just
write in a language that has good tooling, has good documentation, and in general
makes that implementation more accessible to more people,
in terms of understanding what the code does.
Then you think Rust is sort of an obvious choice at that point.
We can guarantee that there are no memory safety problems.
We get good performance. We use tokio
and async to sort of handle all of that for us.
So we don't need to do our own threading in the application at all.
And of course Rust has amazing tooling in terms of testing and fuzzing. Fuzzing is
very important because we are accepting untrusted input from the internet; we
want to make sure that, you know, we don't run into panics or infinite loops or stuff like that.
And also, Rust is nice to work with. And in practice, we see that we actually get
a bunch of contributions to the code base because it is a Rust code base. So
these are not contributions to the algorithmic side of NTP,
but more, for instance, supporting musl libc,
or FreeBSD, or something like that; much more around the edges, where you just
need a little bit of extra work to make our implementation run on those platforms.
And this is work that anyone, any Rust programmer, can do without intimate knowledge
of the internals of NTP.
I wonder what the setup of such a huge project looks like.
I certainly see that there's a funding case for this, but then you still need
to set up a project structure and have a timeline.
And you mentioned testing and you also mentioned fuzzing. I wonder how you set
that up, how you structure the project,
how you go from the idea to the implementation to the rollout and so on.
And maybe briefly for people that are not in the know, can you just quickly
explain what fuzzing is as well?
Yeah, let's talk about fuzzing. So fuzzing is the idea that you throw random,
in quotes, random input at some program.
And verify that it behaves correctly.
And here correctly doesn't necessarily mean that it gives the right output because
your input might basically be invalid input.
But because we effectively listen for messages from the internet,
well, some sort of error or a malicious attacker can send any sequence of bytes our way.
And we just need to make sure that our program remains operational even if we
get some weird sequence of bytes as our input.
So we want to make sure that we don't crash, but we handle these errors gracefully,
and also that we don't run into other sort of weird error conditions where we
might enter an infinite loop or just use a lot of resources that could be used
to do some sort of denial of service attack.
We use fuzzing for a lot of our parsing logic, for instance,
and then also in several other cases, just to verify the correctness of certain
data structures, for instance.
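With cargo-fuzz, a fuzz target for parsing logic can be as small as the sketch below; `parse_packet` here is a hypothetical stand-in, not the actual ntpd-rs parser:

```rust
// fuzz/fuzz_targets/parse_packet.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

// Hypothetical stand-in for the real packet parser: it may reject
// the input, but it must never panic.
fn parse_packet(data: &[u8]) -> Result<u8, &'static str> {
    let first = data.first().ok_or("empty packet")?;
    Ok(*first >> 6) // e.g. the leap-indicator bits of an NTP header
}

fuzz_target!(|data: &[u8]| {
    // Errors are fine; panics, hangs, and runaway allocation are bugs.
    let _ = parse_packet(data);
});
```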
Do you have any hard guarantees around the implementation itself that it doesn't
produce any errors? Is that written down somewhere?
And in general, how is the testing infrastructure and the general project infrastructure
for this sort of endeavor?
So in Rust, it's really hard to guarantee that you have no panics in your code.
There are also certain cases where we do want to panic, and this really makes things tricky.
There are some very hacky ways to guarantee that a Rust program doesn't panic.
You need to do some linker stuff.
But we do actually want to panic in certain cases where we recognize that the
signal that we get is so messy that we can't make a good decision about how to synchronize.
And then we basically decide, like, we're just going to stop right here instead
of going in a wrong direction.
We're assuming that having the clock go on as it currently is,
is better than us moving very radically, or potentially in an
incorrect direction.
So that makes it really tricky to guarantee that we have no panics.
And this is why the fuzzing is so important to just sort of make sure that for
any input that you give our program, it will not hit a panic.
Right, and in Rust, a lot of these panics are sort of hidden in some sort of slice index, or
an unwrap somewhere that is very easy to write sometimes. But we try to be very careful, basically.
This is unfortunately sort of a... Well, the fuzzing really helps,
but ultimately we also just need to be careful.
We need to be dedicated to not writing unwraps and other panics that can
happen in real program code.
If I was a critic, I
would wonder if I couldn't do the same thing in
a different code base. For example, with the existing C code base, I could also
run the fuzzer. And I could say this is just blind rewrite-it-in-Rust enthusiasm
around this project. What do you think about this? How would you answer such criticism?
So it's important to emphasize here that this is not a straight-up
port of an existing C program.
There are some other projects where that does happen and where it actually still
makes sense to do that rewrite in Rust.
But we are really just a new implementation, a from-the-ground-up new implementation
because for this sort of project,
we believed and still believe that we could do better in terms of architecture,
or create a more idiomatic Rust program, by just starting from scratch
based on the specification.
I think there are still guarantees that you can give about a Rust program that
you cannot reasonably give about a C program.
Of course, a lot of the fuzzing tooling does help.
Problems still occur, right? CVEs, like critical security vulnerabilities,
still happen in C codebases because it's very easy to miss something, actually.
In fact, we had one sort of mishap in our codebase where some part of the code just didn't get fuzzed.
And of course, there was some out-of-bounds access in there.
And it would panic if you sent it an incorrect input stream.
So you still need to actually be very careful even with that tooling.
I think the Rust compiler just gives us so much baseline security that we have
less to worry about because out-of-bounds access will at least panic, right?
It won't silently continue to run.
It will actually sort of fail loudly.
But then also, I think Rust has all of these other benefits:
it is a modern language, it has good documentation, it has a good beginner experience.
We can onboard new people onto the project easily. So
besides that, all of the people working on the implementation right now
are, you know, in their 20s or early 30s, so we have a bunch of decades that
we can remain active on this project, and hopefully we can keep that going.
But shouldn't you get an error if you do something strange with an input?
For example, I imagine that, let's say, you want to convert it to UTF-8 or so,
you would assume you get an error message that you can handle.
Or why would you get a panic in such a case? Is it because of the parsing logic?
Yeah, parsing is generally where we have untrusted input, basically.
And there, you sometimes just write a straight up index, say,
OK, I want the 11th element out of this slice.
And then if you give an invalid input to that function that does this slice
access, then you can run into issues there.
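The difference in a sketch: raw indexing panics on short input, while `slice::get` turns the same condition into an error the caller can handle:

```rust
// Panics if `data` has fewer than 11 bytes.
fn eleventh_unchecked(data: &[u8]) -> u8 {
    data[10]
}

// Same logic, but malformed input becomes a recoverable error.
fn eleventh(data: &[u8]) -> Result<u8, &'static str> {
    data.get(10).copied().ok_or("packet too short")
}

fn main() {
    assert_eq!(eleventh(&[0; 48]), Ok(0));
    assert!(eleventh(&[0; 4]).is_err());
    // eleventh_unchecked(&[0; 4]) would panic here.
    let _ = eleventh_unchecked(&[0; 48]);
}
```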
This is, of course, rare, because you test, or at least we test.
We have a very extensive test suite, which means that in practice,
the size of our code base is roughly comparable to the C code base,
but they generally seem to have way less in terms of testing of the actual individual bits of logic.
So of course, they will actually run the implementation and sort of verify that
it runs correctly in the wild.
But we have much more atomic tests, basically, of just the parsing logic, or
of just the spawning logic, or synchronization logic, et cetera.
Were you able to use any existing crates for the parsing logic,
or did you have to write that yourself?
So that is interesting, and this is true for a bunch of these projects,
because they are... Well, basically, we want to have very few dependencies for a couple of reasons.
First of all, generally, the logic that we have is quite simple,
and so we can just write out the actual logic.
Like, this is not complicated parsing. It's really: you get 48 bytes, they all mean
something, and you need to pull that apart. So we can write that without some
sort of specialized parser crate.
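A sketch of what such hand-rolled parsing can look like; the offsets follow RFC 5905's 48-byte header layout, but the code is illustrative rather than taken from ntpd-rs:

```rust
struct NtpHeader {
    leap: u8,                // 2 bits
    version: u8,             // 3 bits
    mode: u8,                // 3 bits
    stratum: u8,
    transmit_timestamp: u64, // seconds + fraction since 1900
}

fn parse(data: &[u8]) -> Result<NtpHeader, &'static str> {
    if data.len() < 48 {
        return Err("packet too short");
    }
    Ok(NtpHeader {
        leap: data[0] >> 6,
        version: (data[0] >> 3) & 0b111,
        mode: data[0] & 0b111,
        stratum: data[1],
        // Length was checked above, so try_into cannot fail here.
        transmit_timestamp: u64::from_be_bytes(data[40..48].try_into().unwrap()),
    })
}

fn main() {
    let mut packet = [0u8; 48];
    packet[0] = 0b00_100_011; // leap 0, version 4, mode 3 (client)
    let header = parse(&packet).unwrap();
    assert_eq!(header.version, 4);
}
```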
We originally had a bunch of additional dependencies, like clap for the command
line interface, and there's a couple of others in there that ultimately we decided
to remove, both because of simplicity,
reducing binary size, and also just supply chain attack risk, ultimately.
Also, build times get better when you do this sort of thing.
So over time, we got quite hardcore about eliminating dependencies.
We still have a bunch. I think it's like around 100 total if you walk the tree
of dependencies, basically.
This is mostly like tokio and rustls, I think, which are the two big parts of that tree.
But yeah, no, we try to
be actually quite hardcore about doing the
low-level stuff ourselves, because we're
just, I don't know, more lean, more agile. We can actually just write that
out and have full control over exactly what happens and how errors are handled. We
don't want some extra layer in between that might obscure what really goes on
or introduce inefficiencies.
Tokio is kind of a huge crate, or at least it has many subcrates.
What do you use it for exactly?
Yeah, so the architecture of this project is that we have this sort of pure
core of the program that implements all the logic. So this is parsing logic.
This is sort of there is a state machine in there that decides what to do with
the clock. This is very testable logic that, you know, in theory,
this would run on sort of any device.
Like it doesn't depend on any operating system capabilities.
And then we have the sort of outside layer that does all of the input output.
And because we want to maintain connections with multiple servers,
there's a bunch of timers in there for when to ask them next for what their
time is. We need to keep connections going without blocking the whole program, basically.
If we synchronize with three different servers, we want to run those requests
concurrently, but actually in parallel so that, well, that is just essential
for how this program works.
It means the core itself is synchronous and on the outer layer,
you have an async wrapper around it.
Okay, that makes sense.
Yeah, and that means like the core is very testable and very portable.
And then the outside layer, because of tokio, is actually also quite portable.
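In outline, that split looks something like the toy sketch below (assumed names, not the real ntpd-rs types; pool.ntp.org stands in for a configured server):

```rust
use std::time::Duration;
use tokio::{net::UdpSocket, time::timeout};

/// Pure, synchronous core: no I/O, so it is trivially unit-testable.
struct Core;

impl Core {
    /// Build the next client request (48-byte NTP packet).
    fn poll_message(&self) -> [u8; 48] {
        let mut msg = [0u8; 48];
        msg[0] = 0b00_100_011; // leap 0, version 4, mode 3 (client)
        msg
    }

    /// Feed a server response back into the state machine.
    fn handle_response(&mut self, _data: &[u8]) {
        // Update filter/synchronization state here.
    }
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let mut core = Core;

    // Async shell: sockets, timers, and concurrency live out here.
    let socket = UdpSocket::bind("0.0.0.0:0").await?;
    socket.connect("pool.ntp.org:123").await?;
    socket.send(&core.poll_message()).await?;

    let mut buf = [0u8; 1024];
    if let Ok(Ok(n)) = timeout(Duration::from_secs(5), socket.recv(&mut buf)).await {
        core.handle_response(&buf[..n]);
    }
    Ok(())
}
```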
There is some stuff at the very bottom with regards to actually updating the
clock and with regards to configuring sockets to sort of capture timestamps that we need.
And that is the sort of operating system specific bit that needs custom work
for Linux or Mac or FreeBSD.
With regards to that, do you sometimes wish that the tokio ecosystem
was a bit more granular, or maybe the entire async Rust ecosystem was a bit more
granular, and you could mix and match things, and there were clean interfaces
between those dependencies?
Whereas nowadays you have tokio as a dependency and you pretty much buy into
the entire ecosystem there.
Would it help you in your situation or doesn't it really matter that much after all?
And maybe the coherence of the entire ecosystem is also an advantage.
Yeah, for sure. I think for us, it really helps that we don't have to think
about threading a whole lot. It's just like tokio will take care of it.
We are careful with limiting through feature flags sort of what gets pulled in.
It's still quite a lot, but I think all of those parts make sense and are a
kind of complexity that we don't really want to bother with.
Tokio is a sort of well-regarded dependency, so in terms of supply chain risk,
this is just a risk that we're sort of willing to take.
Also, we don't upgrade immediately, so if something does happen,
we will have some time to sort it out; we can just not upgrade to newer versions, right?
So we're generally okay with that. I think tokio
just makes a lot of sense, and
it provides us with a lot of things that we just need and wouldn't want to do ourselves.
There are some cases where tokio actually
is a limitation, because we're doing
such low-level and sort of niche things.
But we actually got some stuff upstreamed into tokio, which is really nice.
So these days, it's always fun, right, if you can sort of throw some of your
unsafe code into a dependency and have it not be your maintenance burden anymore.
So that's worked out. And hey, right, everyone can use it now.
So that's been really fun. And also a sort of a cool part of the structure of
this project is that we have time to contribute back to the dependencies that we use.
And so when that makes sense, either for tokio
or for libc or whatever else
might come up in the future, we're very happy to do that.
And in general, these projects are very receptive and sort of active.
So that's been a good experience.
That means you put tokio through the test, and you maybe check it at the boundary
of what it's capable of.
I imagine that it's one of those areas that are not really well tested, or maybe
don't have that many use cases, at such critical pieces of infrastructure.
And probably you run into problems that they haven't even encountered yet, or even thought about.
Yeah, we were just doing such low-level things that the API support just wasn't quite there.
And so again, this is kind of cool, right? That like in Rust,
we can go to source and just learn about, like, okay, how does tokio
actually work, right?
And how can I make this, how can I implement this feature?
Because I didn't really know much about the internals of async before this project.
And now I know at least something about the tokio internals.
There's a lot to go through. But
I think that is really nice that you aren't limited by the current state of
your dependencies, even the current state of things.
You can actually go in there and sort of make it work yourself.
And at first, we did this just internally in our code base. We can take some shortcuts.
But of course, ultimately, the right solution, the more robust solution also,
is to integrate it into, in this case, tokio.
Looks like you made great progress here. And it also feels like you're almost
done with the implementation of the protocol; the implementation itself is quite sophisticated.
If I wanted to play around with what you've built so far, how could I get started?
And what's next for the project?
Yeah, so basically the implementation of the protocol is complete.
We are working on some improvements to the protocol.
So this is the next version of
NTP. I believe we are the only implementation that implements the current draft.
It's behind a feature flag. Don't worry. It won't actually make it into release
binaries, but we can play around with it.
We're also looking at making it easier to use a thing called NTS,
which basically uses TLS to establish connections.
So you get a bit of extra security on your connection; or at least you
know that you get your time from a reliable server.
So we've had a 1.0 release last fall.
And so you can go and download installers. We have nice installers for Linux anyway.
And this installer will also do a bunch of setup because you need some stuff in certain places.
On other platforms, you need to build from source and you then need to set that up.
But that will also work on FreeBSD and macOS because these are so close that
it was just so easy to support them that we might as well, basically.
What's the name of the package?
It's called ntpd-rs; that is the name of the repository.
I believe that is also the package name, in the sense that we are taking steps towards getting
included in common package repositories, for instance Debian and Fedora.
That takes a while to get your project in there, but we're sort of on that path
and collaborating with maintainers there to make that happen.
It takes a while in general. And then also because we are a Rust project,
there are just some extra setup things that need to happen and take a bit of time.
I think in general, that process is pretty good these days,
but it's still kind of different from what they're used to.
And so there is a little bit of extra work that needs to happen.
And at some point, will I be able to say apt-get install ntpd and it will pull the Rust version?
Well, it would be apt install ntpd-rs, because the name can't overlap with the
existing implementation.
But that is the plan, yes. Yes, like ultimately, we would like it to be that easy to install.
It would also be really cool if certain Linux distributions pick it as the default
NTP synchronization mechanism.
Because even in the sort of simple case for just a consumer device,
we can actually be more accurate with little extra resources used.
So hopefully that'll happen at some point.
And then will I be safer? Will it be more secure?
Are you doing certain things to make the setup as secure as possible for me?
Do I get a lot of benefits from the Rust implementation all of a sudden?
Mostly you just get that warm, fuzzy feeling of running Rust software.
I think on a personal machine, it's just nice to be running modern software.
I don't think the impact is that big. The impact is never that big for,
I think, your consumer individual device.
The real benefit is when you sort of deploy this at scale.
And so this is actually a sort of an interesting aspect of the project.
It's like, you know, we've made this new fancy implementation.
We're very happy with it. As far as we know, we have the best accuracy.
You know, we have good performance. The binary isn't excessively large
or anything. So we're doing well along all of these sort of technical dimensions.
And so like you hope that such a thing would sell itself.
And this turns out to not really be the case. Certainly rewriting in Rust alone is not enough.
People just will not upgrade if their existing setup is sort of fine.
They just won't touch it.
Right. So that is why we already invest heavily in making sure we have good
documentation, that we have good observability, and that, you know,
our performance is at least on par with, and hopefully better than, existing solutions.
We need to put in a lot of extra work to actually get existing users to move.
In particular because NTP is something that you forget about.
I assume most people don't really know about it; even though it is this sort
of vital underpinning of the internet, it's very unknown.
And once you have a good setup that doesn't cause you issues, why would you switch?
And so we really need to put in a lot of work. At this point,
I would say that, at least for the outside world, the fact that it's implemented in Rust should
be an implementation detail.
What you get is actually a very mature and modern implementation of this crucial dependency.
And yeah, that's sort of the approach we're going with.
Even without the Rust part, this implementation should be able to convince
users who generally don't care about what language their software is written in.
Now I have this small part written in Rust. Can I grow from within and rewrite other parts?
Does it make sense to spread out?
And are there any other protocols that you see might be a great fit for Rust in the same realm?
Yeah, so this is sort of interesting, I think, in terms of how do these projects happen.
So I mentioned that NTP was originally an ISRG project.
So they, I think, funded it for about a year, but the project's been going for
two years because we got a big grant from the Sovereign Tech Fund to continue
development, basically.
And so even though the money isn't sort of abundant necessarily,
I think it is less of a blocker for this sort of project at this point.
The main blockers are that you need to actually do the work, right? NTP is niche.
How many people really exist in the world who both know Rust and
are excited about this domain and want to improve it?
So you need a lot of domain knowledge, and then also the Rust knowledge,
the technical knowledge, to make a modern implementation.
Funding still requires work, but it is available. It is there.
Then the major issue at this point for us anyway is just how do we create that
adoption? How do we facilitate that?
Because, I mean, that is ultimately sort of crucial to making this project really successful, right?
We think that sort of along technical dimensions, it is very successful.
But in terms of real-world impact, you actually also need a lot of real-world use.
Yes, and I can imagine that if you do that again and repeat the same process
for a different protocol,
you can reuse a lot of what you built with regards to packaging and testing,
and the entire infrastructure that you have now is a bit of a template for future
projects. Do you agree?
For sure. Yeah. There's a lot of infrastructure that you can reuse. Absolutely.
So packaging is one. I think performance benchmarks are hopefully
another one that we can standardize on soon. But a lot of the knowledge
is actually much more intangible.
It's like, how do you get something accepted into the Debian package repository,
for instance? How do you reduce dependencies, and how
dedicated should you be in that regard? So there's a lot of experience that
you get from running a project like this. And actually, we've run a couple.
We will run more in the future, and you also see that it's much easier to get up to speed at this point.
And certainly also something that you learn is how to acquire the funding.
Once you've done that once, it's relatively easy to replicate.
You may not always end up getting everything that you want,
but at least the process is now pretty clear.
What is the process?
Well, for the ISRG, they have
their own agenda, their own sort of list of projects that they want to do.
If they can find the funding for one, they will pick an implementation partner.
So then it really matters that you sort of show that you have the technical
competence for that particular subject.
So for a lot of the timing-related projects, we have that experience now.
For other domains, you know, you have to show that you do or that you can acquire
that knowledge sort of along the way, right?
For the Sovereign Tech Fund, and there are initiatives in a lot of European
countries for something similar, you just have to write a good proposal.
And really what that comes down
to is making the public benefit of it obvious.
So for some projects that is easier than for others, but generally you can make
that story work for most of these foundational projects.
Can you imagine any critical software right now that we use on a day-to-day
that is severely underfunded,
where you say there's definitely a case for this, we need to have more eyeballs,
we need to have better funding?
So I think sort of paradoxically, a lot of this software isn't actually that obvious.
I worked on the Dutch system for emergency response messages,
basically the sort of system that will wake someone up because there is some
sort of accident and they need to get there.
So luckily, this is now modernized, right? But for a couple of years,
the state of that software was pretty bad.
It was just some big binary blob; the source code wasn't available.
It ran fine; as far as I know, there were never any incidents.
But that is not a good situation to have for something that crucial.
And you wouldn't really know about this, because that is not public knowledge, right?
But there's a lot of software like that in a lot of critical domains,
like energy, shipping, any form of transport.
A lot of these very vital sectors have a lot of this very dated software on very dated hardware.
That is a major issue, a major risk.
With regards to the internet, I know there's work currently going on;
TLS work is still continuing with rustls, of course. There is work on DNS.
I think that covers most of the major protocols for sort of the public internet, at least.
And there is now some branching out. You can see that sudo was also an ISRG project.
Their connection to the internet is already much less obvious,
but sudo is in the chain of running internet infrastructure.
And so ISRG still thought that it would be in scope and a
useful project to do. There is now more of a focus on media decoders.
So you may remember this recent problem on Apple devices, I think, with the font decoding.
There was a pretty serious bug.
This happens every so often. So either the image decoding or font decoding or
something like that will have a critical security vulnerability. Right.
And so that is the next domain, at least among internet-related ones,
where memory safety should have a huge benefit.
But there, the performance is much more crucial, right?
You can't trade performance away for safety there.
You need to maintain the performance and then also somehow guarantee the security.
So that is quite a tricky sort of engineering project where you want to limit
the amount of unsafe Rust code, but also you need some amount of unsafe Rust
code just to make sure that the performance is there.
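As a toy illustration of that pattern (not taken from any real decoder), the idea is to confine the unsafe code to a small, auditable block inside a safe function whose logic justifies it:

```rust
// Hypothetical example: a hot loop where we skip the bounds check.
// The function itself is safe to call; the unsafe surface is one line.
fn checksum(buf: &[u8]) -> u64 {
    let mut total: u64 = 0;
    let mut i = 0;
    while i < buf.len() {
        // SAFETY: the loop condition guarantees `i < buf.len()`, so the
        // unchecked access is always in bounds.
        total = total.wrapping_add(u64::from(unsafe { *buf.get_unchecked(i) }));
        i += 1;
    }
    total
}

fn main() {
    assert_eq!(checksum(&[1, 2, 3]), 6);
}
```

The point is not this particular loop (the compiler can often elide bounds checks on its own), but the shape: a tiny unsafe region wrapped in a safe API, so reviewers only need to audit a few lines to trust the whole thing.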
It's funny to think how a lot of our public infrastructure runs on very deprecated legacy software.
Yeah, don't think about that.
There was this story on Hacker News the other day about some German train provider
looking for a Windows 3.11 admin.
I saw that.
Maybe in this context, it might even make sense.
I don't know all the details, but for sure, there's a lot of legacy out there
and a lot of deprecated things.
And people don't know how it works. Maybe they don't even have the source code, as you mentioned.
So it is true; there might be future projects that are worth funding.
Yeah, there's this fun story about the Dutch tax collector's office that basically
can't collect tax anymore because they don't have any COBOL programmers. This
is a major issue where all of that stuff still runs on these very old mainframes,
and they just have a really hard time modernizing that infrastructure, where
it now impacts the actual performance of their core business, as it were.
This problem is everywhere.
I wonder if that's a bug or a feature, but that's a topic for another time.
No, no, no. Everyone agrees it is a bug, because in effect, they can't make
any changes to tax policy; they just can't get the computer systems to go along with that.
So yeah, it's a big problem. It's the most obvious one too, right?
Like there's a lot of stuff that is, of course, hidden under the surface,
but that would benefit hugely from having more modern software and hardware, really.
Time flies. We're already getting towards the end.
There's a tradition around here to ask a last final question.
That is, what would be your message to the Rust community?
Yeah, so I have this, I don't know if it's a thing, but I am very passionate
about compile-time performance.
And I think we should be more serious about, first of all, admitting that this is a problem.
Of course, the performance is better than C++, but doing better than C++ is
not hard and shouldn't be our benchmark.
Realistically, for all the projects that I do, compile times slow us down.
They are a major source of frustration and just lost
productivity. And again, even a small improvement here has
majorly outsized benefits for all the Rust programmers in the world,
which I believe is over a million, or over two million, now.
3.7 million as far as I'm aware.
So you know, even one second saved per day translates into huge benefits.
I think we can do much better than that. There are very cool existing initiatives
in that direction, like the parallel frontend, etc.
But I think having more of that focus is important.
Because of course, we all have our favorite unstable feature,
right, that we would like to see stabilized.
But the performance of the compiler just hits you every time you hit compile
or test or check or whatever.
And that is no joke; I think we really need to do better there collectively.
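For anyone who shares that frustration and wants numbers for their own project, cargo ships with a built-in timings report; the commands below are a minimal starting point.

```sh
# Profile where build time goes (the --timings flag is stable cargo):
cargo build --timings
# Writes an HTML report under target/cargo-timings/ with per-crate
# compile times and how well the build parallelizes.

# Day to day, `cargo check` skips code generation and is usually much
# faster than a full build for catching type errors.
cargo check
```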
And then separately, I would like to say something about the RustNL conference
that we will be organizing the 7th and 8th of May this year in Delft,
which is reasonably close to Amsterdam.
For everyone from outside of the Netherlands, it's very, very close.
And so that was a lot of fun last year. We had, well,
both national and international speakers, and it was just a
great time, and we hope to replicate that this year.
If you're in the area, come by. I will most likely be there, and I hope to see
a lot of people.
It was so much fun to talk to you today. I learned a lot about
protocols, so thanks a lot for all the input. It was amazing,
and I hope to see you in person at some point in time.
Awesome, thanks.
Rust in Production is a podcast by corrode and hosted by me, Matthias Endler.
For show notes, transcripts, and to learn more about how I can help your company
make the most of Rust, visit corrode.dev.
Thanks for listening to Rust in Production.