Rust in Production

Matthias Endler

Canonical with Jon Seager

About oxidizing Ubuntu with Rust

2025-11-27 58 min

Description & Show Notes

What does it take to rewrite the foundational components of one of the world's most popular Linux distributions? Ubuntu serves over 12 million daily desktop users alone, and the systems that power it, from sudo to core utilities, have been running for decades with what Jon Seager, VP of Engineering for Ubuntu at Canonical, calls "shaky underpinnings."

In this episode, we talk to Jon about the bold decision to "oxidize" Ubuntu's foundation. We explore why they're rewriting critical components like sudo in Rust, how they're managing the immense risk of changing software that millions depend on daily, and what it means to modernize a 20-year-old operating system without breaking the internet.

About Canonical

Canonical is the company behind Ubuntu, one of the most widely-used Linux distributions in the world. From personal desktops to cloud infrastructure, Ubuntu powers millions of systems globally. Canonical's mission is to make open source software available to people everywhere, and they're now pioneering the adoption of Rust in foundational system components to improve security and reliability for the next generation of computing.

About Jon Seager

Jon Seager is VP Engineering for Ubuntu at Canonical, where he oversees the Ubuntu Desktop, Server, and Foundations teams. Appointed to this role in January 2025, Jon is driving Ubuntu's modernization strategy with a focus on Communication, Automation, Process, and Modernisation. His vision includes adopting memory-safe languages like Rust for critical infrastructure components. Before this role, Jon spent three years as VP Engineering building Juju and Canonical's catalog of charms. He's passionate about making Ubuntu ready for the next 20 years of computing.

Links From The Episode

  • Juju - Jon's previous focus, a cloud orchestration tool
  • GNU coreutils - The most widely used implementation of commands like ls, rm, cp, and more
  • uutils coreutils - coreutils implementation in Rust
  • sudo-rs - For your Rust-based sandwich needs
  • LTS - Long Term Support, a release model popularized by Ubuntu
  • coreutils-from-uutils - List of symbolic links used for coreutils on Ubuntu, some still point to the GNU implementation
  • man: sudo -E - Example of a feature that sudo-rs does not support
  • SIMD - Single instruction, multiple data
  • rust-coreutils - The Ubuntu package with all its supported CPU platforms listed
  • fastcat - Matthias' blogpost about his faster version of cat
  • systemd-run0 - Alternative approach to sudo from the systemd project
  • AppArmor - The Linux Security Module used in Ubuntu
  • PAM - The Pluggable Authentication Modules, which handles all system authentication in Linux
  • SSSD - Enables LDAP user profiles on Linux machines
  • ntpd-rs - Time synchronization daemon written in Rust which may land in Ubuntu 26.04
  • Trifecta Tech Foundation - Foundation supporting sudo-rs development
  • Sequoia PGP - OpenPGP tools written in Rust
  • Mir - Canonical's Wayland compositor library, uses some Rust
  • Anbox Cloud - Canonical's Android streaming platform, includes Rust components
  • Simon Fels - Original creator of Anbox and Anbox Cloud team lead at Canonical
  • LXD - Container and VM hypervisor
  • dqlite - SQLite with a replication layer for distributed use cases, potentially being rewritten in Rust
  • Rust for Linux - Project to add Rust support to the Linux kernel
  • Nova GPU Driver - New Linux OSS driver for NVIDIA GPUs written in Rust
  • Ubuntu Asahi - Community project for Ubuntu on Apple Silicon
  • debian-devel: Hard Rust requirements from May onward - Parts of apt are being rewritten in Rust (announced a month after the recording of this episode)
  • Go Standard Library - Providing things like network protocols, cryptographic algorithms, and even tools to handle image formats
  • Python Standard Library - The origin of "batteries included"
  • The Rust Standard Library - Basic types, collections, filesystem access, threads, processes, synchronisation, and not much more
  • clap - Superstar library for CLI option parsing
  • serde - Famous high-level serialization and deserialization interface crate

Transcript

Here's Rust in Production, a podcast about companies who use Rust to shape the future of infrastructure. My name is Matthias Endler from corrode, and today we talk to Jon Seager from Canonical about oxidizing Ubuntu with Rust. Jon, thanks so much for taking the time for the interview today. Can you quickly introduce yourself and Canonical, the company you work for?
Jon
00:00:27
Of course. So my name is Jon Seager. I'm the VP of Engineering for Ubuntu, which is a Linux operating system. And Canonical is the company which ships Ubuntu, as well as a host of other open source utilities for building data centers and cloud applications and various other tools.
Matthias
00:00:44
You took over the Ubuntu management in January, as far as I remember, right? What drew you into that position?
Jon
00:00:52
So yeah, I did take over in January. I've actually been at Canonical for four and a half years. I'd spent a lot of time focusing on cloud orchestration, a tool called Juju, and some operators for things like databases. And at the tail end of last year, Mark Shuttleworth approached me and asked if I would think about taking over Ubuntu. We didn't have a VP who was kind of looking after Ubuntu. And he felt like there was some room for improvement in terms of the vision and what we were looking to achieve with Ubuntu, and perhaps how the teams were working, and what ultimately we wanted Ubuntu to be in the marketplace of Linux distributions. There's quite a spectrum from very exciting, new shiny things to very stable, less shiny things in the distribution market. And I think it was time for us to think about where we sat in that and what we wanted Ubuntu to be for the next 20 years. Last year was 20 years of Ubuntu, but with that success I think it's important to have a point of introspection and decide where next. We've had 20 years of success and we've enjoyed that, but what do we want Ubuntu to be as we look forward?
Matthias
00:01:57
I can't believe it's been 20 years already; it feels like yesterday to me. Now, you've been making quite some headlines recently with a very bold move, at least for some people, which is that in the next version of Ubuntu, you will make a bold step towards adopting Rust in the core part of the distribution. Can you talk a little bit about that?
Jon
00:02:24
Yes. So this was part of a set of kind of four initiatives or four themes that I started working towards with Ubuntu, one of which was modernization. And so the experiment that we're running at the moment is replacing GNU core utils with a Rust rewrite of the core utils project by an organization called UUtils and replacing sudo with a Rust rewrite called sudo-rs. And really, I think these projects are really impressive. I think they have great communities around them. I liked the level of enthusiasm I was seeing around these kind of core parts of a Linux distribution. And because of the kind of memory safety benefits that we get from Rust and the potential for there to be performance improvements, I thought they were an interesting place for us to start. And because they're so foundational to the experience that people have at the Linux command line, of course, it grabbed headlines. I think lots of people who use Linux for the first time and jump into the terminal, often the very first commands they type are from core utils, ls and rm and cp. And with sudo, my view is that's even more important because that's right at the security boundary between privileged and unprivileged users. And so for me, that one feels more critical from a security perspective, which is often what our paying customers are looking for from Ubuntu, right? It's security and resilience in the operating system that they ship for whatever their use case is.
Matthias
00:03:50
Yeah. Now, a critical person might say: wait, but you already had a battle-tested set of tools which were developed for 30-plus years. Why would you rewrite them? Wouldn't you introduce more problems than you would solve? What would you reply to that?
Jon
00:04:07
My reply to that is, that is true of every technological advancement in the history of humankind, right? Like, why do we need internal combustion engines when horses were great? They were great. And internal combustion engines have been useful to us, and whatever comes next will be useful to us again. I certainly haven't done this to throw any shade on the GNU coreutils project, or indeed the original sudo project, right? And one of the things that I liked about both of these projects was the attitude they had towards the original implementations. So in the process of rewriting coreutils, which started out as a hobby project for somebody to experiment with Rust, they uncovered bugs in GNU coreutils, which they filed issues for and worked with the maintainers on to correct. And similarly in sudo, the sudo-rs maintainers are in really regular contact with the original sudo maintainer. They have uncovered bugs in their implementation which turned out to be present in the original too. And so it is a bit of a collaborative effort, right? They are both a sort of modern interpretation of a set of well-known tools and utilities. I think one of the bits of context for Ubuntu is that part of our success has been in offering long-term support. The idea of an LTS only came about because of Ubuntu, right? And so when we ship a tool or a utility, when we ship some code, we will stand by that for 12 and in some cases 15 years. And so for me it's really important that we pick tools and utilities that have got momentum and community support, and are written in languages in which we can continue to employ talent to maintain and look after them. And I think increasingly Rust is gaining adoption in these kinds of system-level utilities. People are interested in writing Rust, they're interested in learning about it. And it's a way of us encouraging talent, but also encouraging contribution to Ubuntu.
I think with open source, the way it's gone, it's been in many ways so very successful. There are so many things that one could contribute to in open source these days. And I would like to focus very much on building the next generation of contributors to Linux, not just the applications that run on top of Linux, which is maybe taken for granted by some, I suppose.
Matthias
00:06:16
I guess you mentioned a very important point, which is the maintenance of the tools we all depend on. I don't know how many installations the coreutils have, but it must be in the billions. And those tools also run billions of times a day across all of those machines combined, I guess. But the main point is, this is a thing that people take for granted. And it's also, to some extent at least, a maintenance burden because of C, because of the way it was written, and so on. Can you elaborate on this part as well? The part of the project that helps people transition from C to Rust, and people maybe preferring to work in Rust codebases.
Jon
00:07:02
Yes, and this has been called out a couple of times in commentary about this move in Ubuntu, where I've referred to it as replacing 30-year-old code, like that's a bad thing. I have nothing against the GNU coreutils codebase. I suspect it's very good quality, right? It's been in production a long time. It's been finely tuned. But the fact remains that the single biggest vector for attack on modern systems, certainly in the last few years, has been through memory safety, right? It has been through where someone has missed an edge case, or failed to validate something, or missed a check for a null pointer. And whether we like it or not, whether we want to admit it, this is where the vast majority of security vulnerabilities come from. The opportunity with something like Rust coreutils is to have a generation of software where that is significantly less likely. Not impossible, but significantly less likely. And as you say, billions of invocations per day, potentially, right? In scripts, as part of other software, in provisioning, in day-to-day use by end users. If all of those invocations were of utilities that fundamentally cannot suffer from the same issues, and have the potential for higher performance, and an engaged community that's going to look after them for the next few years, I struggle to see a disadvantage in that. It doesn't have to be in spite of the original project, right? Like, there will even be use cases where we probably still use utilities from GNU coreutils for whatever reason. And I think that's okay. The two don't have to be fighting, right? They can coexist. And I think they will for a very long time.
Matthias
00:08:38
And the other misconception that a lot of people have is that the GNU coreutils are basically done. They are finished; there are no more changes that need to be made. But in reality, that's also not true. Even just this week, there were multiple commits to the repository. And the other part is that vulnerabilities also still get discovered in those tools. I just checked yesterday, and there was a CVE in sort which allowed you, if you controlled the input, to have access to memory that you shouldn't have access to, because of, I guess, just a mishandling of the input or an exception. So it's probably more than just language evangelism that you propose here, right? There's a genuine technical necessity to move to safer languages.
Jon
00:09:37
It's actually not language evangelism at all, in the sense that I have a few Rust projects, but it's a single-digit number of Rust projects, right? Like, I'm not a Rust person, if you will. I know Rust. I like the language. I've used it for a few projects. I think it is interesting. What is almost more interesting to me is the set of principles that the Rust community seems to rally around when it comes to safety and performance. Now, that doesn't mean all Rust software is better than all C software. It doesn't mean that everything that was written in C in Ubuntu will get replaced with something that has been written in Rust. But where there are compelling alternatives and engaged communities, they're on the table, right? Like I say, Ubuntu has been around for 20 years, and I hope it will be around for another 20. And it probably shouldn't look exactly like it does today in 20 years. That would be a failing, I think, for our open source community and for our customers. It needs to continue to evolve, and I think this is just a part of that.
Matthias
00:10:32
As someone who also sees the operational side of things, of maintaining a full distribution, and also maybe taking some of that risk to make sure that you can ship on time and so on: was adding Rust to the toolchain a huge operational burden?
Jon
00:10:50
So when I began this effort, we already had the Rust toolchain in Ubuntu, right? You could already get the Rust compiler; you could already get tools like rustup. Because we're downstream from Debian, and we and the Debian community have been working on how to package Rust software, this was a relatively easy transition. In fact, Rust coreutils and sudo-rs were already packages that existed in Ubuntu's archive. And so this is really a policy change, which says that we're going to bring them into what we call main, which is the part of the archive which has the longest and most stringent set of security promises from us, if you will. We're going to bring those packages into main, and we're going to ship them by default. The old ones will continue to be available. And in fact, one of the more genuinely complex things that we had to do here was some work on apt itself, to make sure that the path to switch back to the old implementation was as simple as possible. This is not about us forcing this upon everybody at all costs. The GNU coreutils will continue to be available on Ubuntu, and we have done actual engineering work in apt to make sure that going back is one command. And we'll publish that in the release notes. Same with sudo, right? sudo was a little easier. The reason we had to do the work in apt is because if, halfway through the process of swapping them, apt removed the existing version of coreutils, suddenly all the utilities it needed to install the new one weren't there anymore. So we had to do some work in terms of sequencing that. So yes, my view is that for 90%, 99% of our users this will be a no-op. They won't notice it, but it makes it easier for us to maintain, and it lowers the likelihood of issues like the one you just described in sort, which don't come around often, but they do still come around.
That's evidence that it does still happen.
Matthias
00:12:39
Do you get a lot of beta testers for the distributions usually? Do people test this stuff, or are you still sanding down the edges on the Rust integration?
Jon
00:12:50
Yeah, it's difficult to know exact numbers, but we definitely do get pretty good engagement. This change actually coincided with another change I made earlier this year, which was to do monthly snapshot releases. It's a slightly tangential subject, but part of my push towards modernizing Ubuntu was to streamline our release process. I challenged our release team to stop exercising it only every six months, when we actually wanted to release, but to basically exercise that process every month and put out snapshot releases. And as a side effect of that, we've had Ubuntu ISOs out that have had the Rust tools for months, right? Incrementally, more and more of the GNU coreutils were replaced with the Rusty ones. And so now, in the beta, it's looking like the vast majority will be; there are still a couple of incompatibilities, so some of the utilities will still be the GNU ones. But it means that people have had better access to it. I don't know exact numbers, but we certainly see responses to Discourse posts and blogs saying that people are trying it out. And we've also seen a significant uptick in bugs filed against the packages as people have been testing it and finding things, which has been really helpful to us.
Matthias
00:13:53
On a technical level, when you want to switch tool by tool, you just symlink that, and you make it the default in someone's shell, right?
Jon
00:14:01
Yeah, essentially it is just a symlink. If it's interesting for the show notes or something, there's a link where you can actually see the set of links that we have right now, where we're still pointing to GNU tools. It's going to shrink in the next couple of days, because the uutils folks just cut a new release which solves a bunch of issues that we had. So yeah, it's just a set of symlinks, essentially.
Matthias
00:14:20
Did you catch any surprises? Because in my understanding, that would probably be one of the hardest projects to work on, because you have so many undocumented features of the old tools that people start to depend on in their workflows. But now you kind of change it from underneath them, and you need to make it look as if it behaved exactly like the old thing, at least in the first version, right? Did you find any nasty surprises in that transition period?
Jon
00:14:53
So I think we should talk about coreutils and sudo-rs differently, because I think it's kind of interesting: the two projects actually approach this quite differently. So the uutils project aims to be bug-for-bug compatible with the original coreutils. If coreutils does a thing, uutils aims to do it identically, essentially. That's actually not the case in sudo-rs. But if we focus on coreutils still, there were a couple of things that just simply weren't implemented when we started. So an example of this was that most of the tools didn't have support for the dash capital Z ("zed", or "zee" for your American listeners) for essentially manipulating and viewing SELinux context information for files. That functionality just hadn't been implemented, and that was one of the things that we sponsored the uutils developers to implement before we shipped it. Now, we don't actually use SELinux by default on Ubuntu, but there are people in the community and people in our customer base who presumably do use SELinux with Ubuntu, and so we wanted to make sure that was present before we shipped it. Likewise, localization and internationalization: there was no infrastructure for translations. So sort worked great in English, and not so well in German or in French. And so part of the work that we sponsored was to essentially build infrastructure into the project to allow localizations. The story with sudo is a little bit different, because they don't aim to be 100% bug-for-bug compatible. They aim to be mostly compatible, with a view that some of the things in the original sudo were implemented well and in good faith, but that the ideas didn't age so well. Like, the idea about what the sudo tool should do and how it should behave: perhaps if we were starting again in 2025, we'd either do it differently, or we just wouldn't do it in the sudo tool at all.
And so there aren't many of these features, but an example would be: in the original sudo, you can pass it a dash capital E, and that will take your entire environment variable context from your unprivileged environment into the privileged environment. They chose not to implement that, on the grounds that they feel it is safer if people are conscious about which parts of the environment they're pushing into that privileged space. And so people who have the dash capital E in their scripts will find that it doesn't work. So either they can choose to say, okay, yep, maybe I should be a bit more careful, I need to take this variable and that variable, or, on their installations, they can switch back to the old version and carry on as they were.
Matthias
00:17:16
Wow. Do they get some exit code which tells them that this was the issue? Like, passing the entire environment into a privileged context sounds like a massive footgun. I'm not sure why they even added that in the first place, but do they get some error message which tells them how to do the right thing?
Jon
00:17:37
Well, they will get an error message from sudo-rs saying, I don't know what that switch is. So we haven't put code into sudo-rs to say, hey, we don't know what that switch is because you're using a nice shiny new version. In that sense, it is a little bit opaque, but part of the effort has been updating our own documentation to try and make sure this information is captured. And when people search, it will come up in the Ubuntu docs to say, you know, we've made this change, this is how you can get back, or these are the alternatives. And, unusually, mostly when Ubuntu releases a new feature, we just release it and write about it in the release notes. As a kind of exception here, we are actually going to write: we've done this thing, and here's how you can revert it, if you care.
Matthias
00:18:21
Yeah. Well, if you have tools so central, and I'm talking about both the coreutils and sudo-rs, then even the outputs become part of the API, the interface, because people do various things with that output which they probably shouldn't. They grep for certain keywords and so on. For example, as a German user, I know that some of these tools also have locale-dependent output. They might print something in German, whereas on a different machine they might print something in English, for example. How much attention do you have to pay to these details? And were there any things that, for example, were not solved in the Rust ecosystem specifically to handle all of these use cases?
Jon
00:19:16
So I think internationalization was the big one there. Because the uutils project is aiming for bug-for-bug compatibility, theoretically the output should be identical, and if it's not, it's a bug and they'll fix it. So we haven't really run into that much. There was an issue with the date tool where it failed to parse a certain kind of datetime; I forget the exact issue, but either the way it output a date or the way it interpreted one was different from the GNU tool, and that caused us a huge headache in our build infrastructure. So for a while, we were still using the GNU date tool. I believe that's now fixed. So we have seen issues like that. It's also worth mentioning why now in Ubuntu. Lots of people will know about Ubuntu, and they know about the LTS, but it's worth talking about our release model. Every two years in April, we release a new LTS: 20.04, 22.04, 24.04, 26.04. And that is where the huge majority of our user base lies. Lots of people will stay on an LTS until the next LTS comes around, then update. But every six months, we still do what we call interim releases: 25.04, 25.10, 26.10, 27.04, et cetera. We see dramatically fewer users here, particularly in the enterprise space, where people aren't generally running production workloads on these. And so these interim releases are the opportunity for us to test this out. We will get thousands, tens of thousands, maybe hundreds of thousands of users. We still want those to be very high quality, we want them to be stable, but it is our opportunity to try new things. An example of this is, I think, back in 17.10, I want to say, either 17.10 or 17.04, long before I joined Canonical: I seem to recall they tried switching to Wayland by default. And in the next interim release, they were like, that didn't go very well, we're going to go back. I think we're now back on Wayland by default.
But the point is, that was an example of where we used the interim release to try something that we thought was going to work. It didn't work out that well, so we switched it back. And I am really committed to getting this change across the line in the LTS, but not to the point of destruction, if you see what I mean. If we roll 25.10 out the door and this change doesn't go well, or it causes huge issues, then we'll have to roll it back. It doesn't mean we'll roll back forever, and it might be that we roll back some and not all of the utilities. But this is just the balance, this is the trade-off, right? We have to work out whether or not the thing is ready for 26.04, which is our next LTS. We still have seven or eight months to get ready for that LTS, and 25.10 is kind of a milestone on the way to it. It gives us a way to get new ideas into the hands of lots of people, to see whether it's ready for the LTS.
Matthias
00:22:02
You probably have a lot of integration tests anyway nowadays.
Jon
00:22:05
Yeah
Matthias
00:22:06
for these tools,
Jon
00:22:08
Certainly, the packages themselves are tested with a thing called autopkgtest, which we inherit from Debian. Most, if not all, of the packages in main have to have autopkgtest tests to be in main. And actually, that's where we've inflicted some pain on ourselves: where there have been differences in the utilities, we've seen them in our build infrastructure, because suddenly thousands of packages won't build, or there'll be an error, the test will fail. So that's been a useful tool for us for identifying issues.
Matthias
00:22:32
When people usually propose Rust, it's for a few reasons. The main one is improved security, but another one is better performance. Now, in your case, you're competing with C, which is already quite performant. You might even say there's a bit of a regression here, because there are still statements out there that Rust cannot be as fast as C, and that very much depends on your use case. But what would you say to that? Would you say, well, we take a performance hit and we accept it? Or is there even a possibility for the tools to be as fast as, or even faster than, the older versions?
Jon
00:23:11
I think in the mid to long term there's an opportunity for those tools to be faster, for a couple of reasons. There was an article that went around recently which showed that there were some performance shortcomings uncovered; I think the example was the checksum tool. I actually found a regression myself while we were at an engineering sprint, in wc, the word count utility, which turned out to be slower. I was attempting to do the One Billion Row Challenge with wc, and I found that it was slower. But the interesting thing to me is, that's not surprising. The software is newer, it's less mature than the GNU stuff. What is interesting to me is how quickly these issues get resolved, how quickly people step up to find ways. And when they do step up, they often exceed the performance of the original tools. So the issue that was filed about checksum being 17 times slower than the GNU one was filed by a Canonical engineer who works in our Foundations team, who was going out looking for problems to make sure that we ship it in the best shape we possibly can. That issue is now solved. The checksum tool in uutils, I believe, now outperforms the original. And the same with the word count utility; that took, I think, under a day, and it went from being 1.1 times slower to 1.3 times faster. And on your point about C versus Rust, can Rust be faster? I buy the argument that in some cases there's overhead, and that between a very, very optimized C program and a very, very optimized Rust program, the likelihood is that you could do better with C. I buy that argument. Okay, I get it. That said, things like threading are significantly friendlier in Rust from a developer experience perspective, and significantly more likely to be correct. So, you know, very, very talented C developers with a real eye for performance, I'm sure they can squeeze the absolute maximum performance out of C. The question is, how many engineers around the world can do it?
And I think in Rust, there's a likelihood that we will see a higher degree or a larger number of faster implementations, because the language makes some of the more advanced primitives a little bit more easy to hold, a little bit more easy to reason about, a little easier to debug, a little easier to test. So it is a trade-off, ultimately.
Matthias
00:25:24
Well, two things that I wanted to say here. Multi-threading is one; the other one, especially when it comes to, for example, the word count thing, wc, is SIMD support, right? I think SIMD support in Rust is pretty decent by now, so you could even dare to try and make use of those CPU features, which I'm not 100% sure the GNU coreutils do. I honestly doubt it, to be honest.
Jon
00:25:56
Yeah, I don't think they do, and you're absolutely right. So the speed-up that we got on the word count utility, I believe, was an assembly implementation, and I think it's the same for the checksum utility. And so a counter-argument there might be: well, what if I'm running on a processor that doesn't have those features? And the answer is: okay, it might be a bit slower. You can't literally have everything, you know? We can't win on binary size and all-out performance and security and maintainability and community, all at the same time, right? Well, maybe there are projects that have achieved that; it's very difficult to do. And so part of this initiative is trying to raise the bar in each of those as much as we can.
Matthias
00:26:38
Do you compile those tools for different architectures with different CPU features?
Jon
00:26:44
Yes. At the moment we support AMD64, ARM64, ARMHF, s390x, PowerPC64, and RISC-V. And the deal we have at Canonical with Ubuntu is: if we support an architecture, we support an architecture. We don't do cross-compilation. We buy actual hardware with those architectures. So we have actual mainframes in our data centers, s390x machines, on which we compile packages for the archive and test packages for the archive. We have also landed some work in Launchpad and in the archive that will allow us to deliver binaries that take advantage of microarchitectural variants. So if you're on a machine that has AMD64v3 support, you might be able to get access to a binary that's been compiled to take advantage of AMD64v3 features, where that makes sense, and the same would be true of the higher levels, and of ARMv8, ARMv9, and so on. And in the RISC-V ecosystem, I think we're seeing kind of an explosion of this: because the instruction set is in its infancy, people are building processors with wildly different capabilities, and so we've had to make sure that our build infrastructure has the capability to deal with that. That landed relatively recently; there's a blog post coming in a couple of weeks about it.
Matthias
00:28:00
And not many people know that some of these tools, especially the legacy ones, highly underuse modern CPUs. There are certain things that are just not really done, and they could be done, but it's not an easy task and it's a bit risky. Wouldn't that also be an opportunity to leverage some of those capabilities that have been in CPUs for a decade now, or longer?
Jon
00:28:29
And again, it's a trade-off between compatibility and performance, right? One of the nice things about GNU coreutils is that it works the same in lots of places, which perhaps will not be said of the uutils project, where they have conditional implementations based on CPU features. But where the vast majority of a huge data center, for example, in a cloud, has AMD64v3 and has these SIMD features and so on, surely we want to take advantage of those as much as we possibly can. Not just from the performance perspective: the other side of the same coin is power usage, how much energy is consumed by doing a task. And often using these implementations sees a huge advantage there too.
Matthias
00:29:12
Yeah, my first thought was, well, it's a userspace tool and probably not really that energy-hungry. But if you call it billions of times a day, then it adds up very quickly, right? And there's another misconception about coreutils which I hear a lot, which is that they are already optimized to a certain extent. I guess that's my second point: people think they can't really go in and improve those tools. But in reality, that's not the case. For example, I once wrote a faster version of cat just to see where the bottleneck was and if I could make it as fast as the GNU coreutils one. And mine was three times as fast. The reason was not that the code was specifically clever. It was just using a newer feature in the Linux kernel, the splice syscall, with which you can avoid one copy from kernel space to user space. Now, did I dare to send in a patch for that functionality to be added to GNU coreutils? No, of course not, because honestly, I'm not a C developer, and I was kind of afraid of the process. Some of these things are not accessible to everyday, normal developers; they have different processes and so on. So I hope that contributing to uutils might be easier in the future.
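In Rust, a minimal cat can get that kind of kernel fast path almost for free. The sketch below (an illustration, not the code Matthias wrote) just uses `std::io::copy`, which on Linux is specialized to use in-kernel copying mechanisms such as copy_file_range, sendfile, or splice when both ends are plain file descriptors, avoiding the extra userspace copy:

```rust
use std::fs::File;
use std::io;

// Minimal `cat`: stream a file into any writer. When the writer is stdout
// on Linux, std::io::copy can use kernel fast paths (copy_file_range /
// sendfile / splice) instead of bouncing bytes through userspace.
fn cat(path: &str, out: &mut impl io::Write) -> io::Result<u64> {
    let mut file = File::open(path)?;
    io::copy(&mut file, out)
}

fn main() -> io::Result<()> {
    // Self-contained demo: write a temp file, then cat it into a buffer.
    let path = std::env::temp_dir().join("cat_demo.txt");
    std::fs::write(&path, "hello from cat\n")?;
    let mut out = Vec::new();
    let copied = cat(path.to_str().unwrap(), &mut out)?;
    assert_eq!(copied, 15);
    assert_eq!(out, b"hello from cat\n");
    print!("{}", String::from_utf8_lossy(&out));
    Ok(())
}
```

Whether the fast path actually triggers depends on what the output file descriptor is, but the application code stays the same either way, which is exactly the accessibility point being made here.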
Jon
00:30:41
I think so. People have various opinions about code forges: GitHub, Microsoft, licenses. It's all a trade-off at the end of the day. And whatever Microsoft's motivations, which so far don't seem too impure to me, there are millions of developers on GitHub. That's a huge audience. And so bringing the development of something as foundational as the coreutils into a space where there exist millions of developers who know the contribution process and how to open pull requests and write comments and interact with people, again, I find that hard to put across as a bad thing.
Matthias
00:31:18
You said that sudo-rs has a slightly different focus than core utils. Can you elaborate on that a bit? I guess sudo was the first thing you will integrate. And why start with this? Wouldn't it be easier to start with a completely new tool instead of rewriting something so foundational in the core of the distribution?
Jon
00:31:40
So I think it's precisely because, I mean, we didn't write sudo-rs. Canonical didn't rewrite sudo. This rewrite existed. It is authored by a non-profit called the Trifecta Tech Foundation, an organization that aims essentially to provide resilient software for the public good, infrastructure-level software, if you will. And honestly, if there was going to be anywhere on the system where I would even trade performance for guarantees, it would be in the sudo tool. There are alternate approaches to privilege escalation on Linux. There's a newer approach in systemd called run0, which has been popular recently in some other distributions. I'm sure it's very good; the systemd project has written some great software. But the sudo-rs project gives us a ninety-something percent compatibility guarantee with memory safety. And to make it even more fun, they were absolutely delightful to work with. They were very excited to have their software in Ubuntu. They worked with us, they were really professional, they worked with our security team, they work with the original sudo developer. And so where you have this intersection of really motivated, really professional people and a programming language that enforces certain ways of thinking and working for safer software, it just seemed like a natural fit. The security boundary is such a sensitive part of the operating system that it felt like a natural place to start.
Matthias
00:33:07
Now, sudo-rs has 42,000 lines of code, and to some that might seem like very little code, to some that might seem like a lot of code. What does it do exactly that requires that many lines of code?
Jon
00:33:23
So there are a few things, really. There are the fundamentals of escalating the privilege of a process. There's all the configuration parsing, the different options that have evolved over time. One of the features that we asked to be added was AppArmor support, the ability for sudo and AppArmor to interact; AppArmor is the LSM, the Linux security module, that we use. They need to be able to talk to PAM, for example, and to SSSD for things like LDAP authentication. That is another difference from the original sudo, which can natively speak LDAP: the folks behind sudo-rs decided that wasn't something they wanted their tool to do, and they'd rather hand that off to SSSD, essentially. And, I mean, there are a lot of features, right? Localization support, for example, doesn't come for free. It comes with code overhead.
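Even the "simple" configuration-parsing part adds up fast. As a toy illustration of why (this is emphatically not sudo-rs's parser; the grammar and field names here are invented), parsing even one simplified sudoers-style rule already involves several fields and several failure modes:

```rust
// Hypothetical, drastically simplified rule: `user host=(run_as) command`
#[derive(Debug, PartialEq)]
struct Rule {
    user: String,
    host: String,
    run_as: String,
    command: String,
}

// Parse a single rule line; return None on any malformed input.
fn parse_rule(line: &str) -> Option<Rule> {
    let mut parts = line.split_whitespace();
    let user = parts.next()?.to_string();
    let spec = parts.next()?; // e.g. "ALL=(root)"
    let rest: Vec<&str> = parts.collect();
    let (host, run_as) = spec.split_once("=(")?;
    let run_as = run_as.strip_suffix(')')?;
    if rest.is_empty() {
        return None; // a rule without a command is invalid
    }
    Some(Rule {
        user,
        host: host.to_string(),
        run_as: run_as.to_string(),
        command: rest.join(" "),
    })
}

fn main() {
    let rule = parse_rule("alice ALL=(root) /usr/bin/systemctl restart nginx").unwrap();
    assert_eq!(rule.user, "alice");
    assert_eq!(rule.run_as, "root");
    println!("{:?}", rule);
}
```

Now multiply that by aliases, negations, defaults, includes, and twenty years of accumulated options, and 42,000 lines stops looking surprising.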
Matthias
00:34:13
So it's not only sudo-rs that you add. It's also an entire chain of dependencies and an ecosystem of Rust crates.
Jon
00:34:23
Right, and things like sudo-edit, so you can do the kind of visudo thing, right? Edit, like, it has its kind of own secure editor for editing the permissions. It is, yeah, it is a whole ecosystem. And the folks there have been diligent in the dependencies that they pull in, and they're very careful about it for obvious reasons. But yes, it is a whole ecosystem, right? It's a fairly big project.
Matthias
00:34:43
As a Rust consultant, I'm super happy to hear that because it adds support to the entire Rust ecosystem. But from your perspective, I would wonder: am I exposing myself to a lot of additional risk? Because a lot of that code hasn't been part of such a long-term project as Ubuntu for such a long time, and maybe hasn't seen a lot of the reality of production, let's say.
Jon
00:35:10
Yeah, it is a risk. You're absolutely right, it is a risk. In fact, of all of the challenges in the Rust community, I would say one of the largest is in creating an ecosystem of crates that we can all have trust in. It's easy for people to poke fun at npm, you know, left-pad, and there were some recent things with the color libraries. crates.io isn't that much different at the end of the day. Anyone can upload code there. And so what I would look for in projects like sudo-rs is diligence in how that ecosystem is used. Do you pull in an entire dependency that's maintained by somebody you don't know? Or do you vendor some of the code, following the licensing guidelines, and bring it in and maintain it yourself? These are all software engineering decisions at the end of the day that you weigh: you trade off being able to move fast against being able to move very safely. So it is a risk, but isn't any change to the world's most deployed Linux operating system a risk at the end of the day? We just have to balance that out. And again, that's why it's landed in 25.10 first. I suspect there will be people out there who can't wait for this to land to see if they can go and find a security vulnerability in sudo-rs that doesn't exist in the original, and I suspect they will probably succeed. I don't believe that it is possible to write software that is perfect all the time. What I'm gambling on is the long-term probability of that thing being safer in the round.
Matthias
00:36:40
And even if they find a security vulnerability, my assumption would be that it would make a lot of Rust code safer, unless they use unsafe blocks everywhere. I haven't checked how many unsafe lines of code are in there. Maybe you know; that would be interesting to know.
Jon
00:36:58
Not off the top of my head.
Matthias
00:36:59
No. We can double-check. Unless they use a lot of unsafe code, if a vulnerability gets discovered, that means it's a compiler bug, and that would probably be fixed immediately. And then a lot of other tools would no longer be exposed to that security vulnerability.
Jon
00:37:19
Yeah, I think this is part of what I see as Canonical's responsibility to the tech industry. The tech industry really rallied to Ubuntu 20 years ago, and we have enjoyed success from it. We have lots of commercial customers, we have enough revenue to fund 800 engineers on good salaries all around the world to work on open source and Linux, and that's a position of huge privilege. But I think part of our giving back to that community, in a sense, is to use the exposure that we have got to give projects like this a bit of exposure, to give them the benefit of all of those eyes and all that testing to progress them. So long as we do that in a careful and respectful way, that's just us evolving the OS and using our position to try and put some of these projects on a platform, where we think that their views and their approach align with ours.
Matthias
00:38:15
Other than sudo and coreutils, what else are you working on that is written in Rust?
Jon
00:38:21
Yeah, so we're considering ntpd-rs, which would be a time-syncing daemon written in Rust, also a Trifecta Tech Foundation project.
Matthias
00:38:32
What is that exactly? Can you elaborate on that a bit?
Jon
00:38:34
So if you think of ntpd or chrony or systemd-timesyncd, things that do time syncing for the system clock, essentially: we are looking at ntpd-rs, which is a Rust implementation of that daemon. I'm hoping we can try and do that for 26.04, but at this point it's very much a hopeful, not a given. If not, we'll be trying that out in 26.10 to see how it plays. And this goes quite nicely hand in hand with a recent move that we made, which is to enable NTS by default, the secure version of that protocol for time syncing. So think HTTP versus HTTPS, a similar sort of transition. We're also looking at switching the OpenPGP library that APT uses to Sequoia, which is a Rust implementation of OpenPGP. That would be for package signature verification. For various other utilities in the distro, we're doing some thinking about certificate revocation lists and how to handle those in the distribution, and I think there'll be some Rust involved there. Our own compositor, Mir, has some Rust code in it. Mir isn't shipped by default in Ubuntu at the moment, but it is shipped in a bunch of places; I think Fedora's LXQt spin uses Mir as a compositor, and there's some Rust in there. We have some work ongoing with real-time operating systems where some Rust exists. We have a cloud streaming platform called Anbox Cloud inside which there is some Rust. So there are a few places, right? We're using it in a bunch of different projects.
Matthias
00:40:01
It's always cool to hear that, because it's not the main selling point of all of these tools and applications. No one forced you to use Rust for these tools; you could have used anything else, but you saw value in using Rust specifically. And for a lot of companies now, Rust is no longer just a hype thing that they want to try. It's just a normal part of everyday infrastructure.
Jon
00:40:34
I would agree with that, and for a bunch of different reasons. I think we were quite tentative in our adoption of Rust at Canonical. We haven't been on the bandwagon jumping up and down about this for years; we're quite deliberate about the languages that we choose. We try not to have one team in Go, one in Erlang, one in OCaml, one in JavaScript. We don't have this huge proliferation of languages. It's generally Go, Python, C, and now we're adding Rust to that. And I just believe in using the right tool for the job at the end of the day. If we were doing something that involved lots of concurrency and a lot of asynchronous networking, I would probably choose Go at the moment. But if we're writing low-level systems utilities that are performance-critical or safety-critical, then I would choose Rust.
Matthias
00:41:22
What was the first language you mentioned? We don't use that here. I think you said Rust twice, right?
Jon
00:41:29
Oh, did I?
Matthias
00:41:31
I'm just kidding. No, Go is a very fine choice for networking services.
Jon
00:41:37
We have a lot of Go at Canonical, right? Juju is a huge Go code base. We have things like Pebble. It's creeping into MAAS. We have a lot of Go, and it's served us very well, I think. I do just believe in tools for the job. There are some things in the Rust ecosystem that are still relatively early that Go really excels at. And there are some things that Go can never do well. It's a garbage-collected language at the end of the day, and so in places like Mir, for example, you don't want a garbage collector kicking in when you're in the middle of drawing frames on the screen. It would naturally not be a fit.
Matthias
00:42:11
Another thing we haven't talked about yet is a project called Anbox that you mentioned in passing. What's that about?
Jon
00:42:20
So Anbox started out as a project for running Android applications on Linux, like an Android emulator type of thing. It was led by a chap called Simon Fels, who we hired some years ago, and he has been building Anbox at Canonical since. With that, we have built a commercial product called Anbox Cloud, which essentially sits on top of some of our other products, like LXD, the hypervisor that we maintain. And it is used for streaming Android applications from the cloud. One of the biggest use cases is a major mobile device manufacturer who uses it for game streaming to their devices. Where the graphical requirements of a game are much higher than you could ever get out of a mobile device, we can stream the video at very low latency and very high quality to the device, get the input stream of controls back from the device, and you're playing the game as if natively on the device, but you're actually streaming it from the cloud. It has other applications in VDI infrastructure, and one of the other big use cases we see is in automotive development. Think about someone sat at a car manufacturer working on an Android Auto dashboard, an in-car entertainment system: they can use Anbox to do local development against Android whilst they're sat at their workstations. Maybe that's streamed from the cloud, maybe it's local. And so there are some parts of Anbox which are moving to Rust, which perhaps were written in C or other languages before, and some combination of performance and ease of development has led them down that path.
Matthias
00:43:54
Earlier you said that you don't really impose Rust on anyone, so it's a bit of a team decision. And there was one project specifically where people transitioned to Rust from another language, which is dqlite. Can you elaborate a little bit on what that is and what language they used before?
Jon
00:44:18
Yeah, so this is ongoing; there isn't a Rust dqlite yet. In general, we are heavily encouraging our teams who have large C or C++ code bases to consider looking at Rust, whether that's building a feature in Rust or bringing in some library functionality in Rust. We are asking them to go and familiarize themselves with that ecosystem and think about it. dqlite is essentially distributed SQLite. You can think of it as a thin layer of the Raft consensus protocol over the top of SQLite, which allows you to do highly available SQLite. The bit that manages the VFS layer syncing is written in C. And we have been doing some work on dqlite: we're embedding it into Juju, it's in LXD, it's in a lot of our products. One of the considerations, as we're trying to think about how to improve the performance and the reliability of dqlite, is whether we should consider rewriting some or all of it in Rust. There have been some early prototyping efforts around that, and one of the interesting aspects there was the threading model in Rust, and whether that would allow us slightly easier development for some of the more thorny concurrency elements. So, not a hundred percent, but there's a reasonable chance that we will see that happening over the next year or so.
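One pattern Rust's ownership model makes easy to get right for this kind of "thorny concurrency" is a single-owner applier: many threads propose entries over a channel, and exactly one thread owns the state that applies them, so no locks are needed and the compiler rules out concurrent access. The sketch below is not dqlite or Raft, just an illustration of the pattern with invented names:

```rust
use std::sync::mpsc;
use std::thread;

enum Command {
    Apply(String),
    Shutdown,
}

// Spawn `proposers` threads that each submit one log entry, apply them all
// on a single owning thread, and return the resulting log.
fn run(proposers: usize) -> Vec<String> {
    let (tx, rx) = mpsc::channel::<Command>();

    // The applier thread owns `log` outright; nobody else can touch it.
    let applier = thread::spawn(move || {
        let mut log = Vec::new();
        while let Ok(cmd) = rx.recv() {
            match cmd {
                Command::Apply(entry) => log.push(entry),
                Command::Shutdown => break,
            }
        }
        log
    });

    // Proposer threads send entries concurrently through channel clones.
    let handles: Vec<_> = (0..proposers)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                tx.send(Command::Apply(format!("entry from {id}"))).unwrap();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    tx.send(Command::Shutdown).unwrap();
    applier.join().unwrap()
}

fn main() {
    let log = run(3);
    assert_eq!(log.len(), 3);
    println!("applied {} entries", log.len());
}
```

In C, the equivalent design relies on discipline; here, trying to share the `log` between threads without synchronization simply would not compile.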
Matthias
00:45:38
Are there any parts at Canonical that you wanted to mention that use Rust that we haven't touched on yet?
Jon
00:45:45
There are a number of projects that we haven't touched on. I know of projects in our hardware certification team. I know of Rust projects, again, in real-time operating systems work. I don't claim to know all of the projects using Rust at Canonical. They seem to be increasing, which I'm pleased about. There's also the Rust for Linux project, Rust in the Linux kernel. We employ lots of kernel engineers, tens of kernel engineers in every time zone, and so naturally we have to make sure that that team is up to speed on what's going on in the Rust for Linux project, because ultimately we support kernels for a really long time. We will be, for better or for worse, supporting kernels that have Rust kernel modules and drivers for at least the next 12 years, probably longer. So that's another area where I think it will grow over the next couple of years.
Matthias
00:46:31
Yeah. And that involves work like a Nova driver for NVIDIA GPU support and various other bits and pieces that are written in Rust now in the Linux kernel.
Jon
00:46:43
Even the Apple Silicon video drivers, we have Ubuntu Asahi as a kind of spin of Ubuntu, which is, again, somebody who works at Canonical, and my understanding is much of the graphics drivers for that hardware is written in Rust, so...
Matthias
00:46:56
Even all of it, as far as I'm aware.
Jon
00:46:59
Yeah, I think so.
Matthias
00:47:00
Which is a pretty crazy project in and of itself. Being able to run a fully free operating system on Apple Silicon is kind of cool.
Jon
00:47:09
Yeah, it's a remarkable project.
Matthias
00:47:11
Now, strategically, moving away from the technical side: as someone who is on the deciding end of things, and who is responsible for 12 million Ubuntu desktop users, what are the things that are important for Rust adoption to grow in companies like Canonical?
Jon
00:47:35
Yeah, so the numbers game is always a fun one. We have various ways of collecting information about our users, some basic telemetry; you can read about it, it's mostly opt-in and very easy to work around. The last numbers I heard were something along the lines of: we know of 12 million daily Ubuntu desktop devices. We also know that there's a whole bunch we don't know of, because they're behind corporate firewalls and proxies and things. And there's a bunch of servers, and goodness knows how many container instances, base images for containers that are scheduled around the world on Kubernetes. So we don't really know how many Ubuntu instances there are, but lots: definitely tens if not hundreds of millions, I would suggest. The decision-making process is probably not as complicated as it seems, in the sense that the number one thing we have to maintain is that Ubuntu stays a trusted and reliable and resilient operating system. It's become known as the Linux that everyone can use, right? Linux for human beings, that was the whole thing. And I don't want to jeopardize that. As excited as I am about things like NixOS and other fun Linux distributions, it would be a failing for us to turn Ubuntu into that. That's not what we are. There's a space for those more experimental and exotic distributions, but we have to build Linux for human beings, Linux for everyone: for students, for entrepreneurs, for huge businesses. And so my view is that I will always entertain suggestions for software that could be replaced, so long as it goes hand in hand with our vision of providing a resilient, performant, open source Linux operating system that gives people access to what we consider to be the very best of open source at that moment. To me, that's what it's really all about. It's about bringing open source to as many people as we possibly can.
And not just open source, but the best open source that we can find and can support.
Matthias
00:49:30
Do you interact with the Rust Foundation? And what about the future of sponsoring that work? You mentioned Trifecta, for example. How do you see that collaboration evolving?
Jon
00:49:45
I hope it will continue. Honestly, we don't have an open ticket to just write sponsorship for any Rust project on the planet, right? But certainly, it's not fair for us to suddenly impose a set of expectations on a community or a maintainer without first having a discussion and think about funding and helping them. So before we announced publicly that we were going to do core utils or sudo-rs, we spoke with the maintainers and said, hey, this is an idea we've had. Do you think the project is ready for it? And we need these things to be fixed. And what would that cost us? I think that's a reasonable conversation to have. And I intend to continue that. Again, this is us, in a sense, paying it forward. We've enjoyed great success from Ubuntu. We're in a position to be able to help other projects that we see as promising to progress and gain adoption. And that can't be free, right? It can't always be free. So, of course, in some cases, we will need to pay. I'm having a conversation, which maybe you'll hear about in a few weeks, with another maintainer this week about another project that we might fund because we're very interested in what they want to achieve. So, yeah, I think we will continue.
Matthias
00:50:49
And when will you rewrite Apt in Rust? Just kidding.
Jon
00:50:54
I don't know that that's on the cards just yet. But, I mean, I certainly speak frequently with the maintainer of APT, and it's not never going to happen; it doesn't have to never happen, if you see what I mean. I wouldn't be against it, but there would need to be a reason. Perhaps if we were going to introduce some major new functionality where it would be a good fit, then we could begin introducing Rust.
Matthias
00:51:16
Yeah, yeah. And I guess a bit more seriously, where do you see Rust in the Linux ecosystem in five years?
Jon
00:51:24
Honestly, I hope what we see is a continuation of where we have started. I am excited by things like graphics drivers in Rust and safety-critical code in Rust. I hope that things will have settled down a little bit; Rust in Linux has not been without its controversies, and I think that's natural. One of the observations I have about the kernel community is that it doesn't seem to be attracting lots and lots of new talent, and I think Rust could be an important way in which the kernel could attract new maintainers. So I hope that the kernel community, as many of them already do, continues to recognize that while there may be some trade-offs (people may have to learn new languages and adopt new ways of doing things, and perhaps build pipelines become more complicated), there is also a set of opportunities associated with landing Rust code in the kernel. My guess is that as it becomes more developed and more mainstream, the disconnect there will settle down a little bit. And I hope that happens sooner rather than later, so we can carry on with building great software and worry less about arguing about the logistics of building great software, ultimately.
Matthias
00:52:28
I would like to get back to a point that you mentioned earlier which keeps coming back to me, which is that crates.io is also just a registry and everyone can push code there. For someone who has such large requirements on long-term maintenance, isn't that a huge supply chain security risk? You end up trusting so many dependencies and sub-dependencies and so on, and you build a distribution out of them. Isn't that a lot of risk? And do you see ways in which Rust can improve there?
Jon
00:53:04
So it is a risk. I mean, anytime you use someone else's software, it's a risk, right? Anytime you bring in a dependency. We have strategies for managing that. We have a huge security team whose sole job is essentially to scour the internet for security reports and patch things in Ubuntu, patch things in Debian, try to patch things upstream, and make sure that the version of something we are shipping is not vulnerable. crates.io is an amazing piece of infrastructure. I use it myself; why would you not? But I have noticed a trend towards lots of small libraries, which is a trend that we saw in other ecosystems. I guess this is a hard problem to solve. I think one area where Rust could maybe benefit is a slightly stronger standard library and an approved approach to solving some common problems. There obviously is a standard library in Rust, and the language hasn't been around for that long, so perhaps that will come. But if I make a direct comparison between the development time I've spent in Go and the development time I've spent in Rust, I more frequently have to reach for external dependencies in Rust than I do in Go. That has been a long-touted benefit of Go, in fairness; I think it's a very strong language there. Python is also very strong in the standard library that it brings. Rust, to me, is less strong there, and as a result you end up with lots of small crates of unknown origin, essentially. So it makes it harder. But I think part of this will come with maturity. There are certain superstar libraries that everyone knows about, clap and serde and various others. But in configuration file handling, for example, I shudder to think how many implementations there are on crates.io of parsing a TOML config file. It seems like an interesting bifurcation of effort, in a sense, for something that's so common.
Matthias
00:54:52
It's a double-edged sword somehow. On one side, you expose yourself to a lot of, you know, side effects, and maybe you also accumulate a lot of functionality in the standard library if you follow this batteries-included approach. But on the other side, you push the burden more to the users, who now have to vet smaller crates.
Jon
00:55:16
Yeah, or choose not to bring a crate in and just vendor some of that code, right? That is always an option, and it's an approach I would generally take myself. Often you are at risk of bringing in thousands of lines of code from a library when you need a 30-line function from it. To a limited extent, this is all within a developer's control at the end of the day. I feel like it's about the community's approach to it, which can signpost people to make the right decisions more often.
Matthias
00:55:45
It's true. I always wonder why this isn't done more often, especially for the smaller functions and libraries that you use, as long as the license is compatible.
Jon
00:55:54
Exactly.
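As a concrete example of what "vendor a 30-line function" can look like: if all you need is a flat `key = "value"` config, a handful of lines of standard-library Rust may be enough instead of a full TOML dependency. This sketch handles only that tiny, invented subset; anything resembling real TOML should use a maintained parser:

```rust
use std::collections::HashMap;

// Parse a flat `key = "value"` config (a tiny TOML-ish subset).
// Skips blank lines and `#` comments; no tables, arrays, or escapes.
fn parse_flat_config(input: &str) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for line in input.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        if let Some((key, value)) = line.split_once('=') {
            // Strip optional surrounding quotes from the value.
            let value = value.trim().trim_matches('"');
            map.insert(key.trim().to_string(), value.to_string());
        }
    }
    map
}

fn main() {
    let cfg = parse_flat_config("# demo config\nname = \"uutils\"\nthreads = 4\n");
    assert_eq!(cfg["name"], "uutils");
    assert_eq!(cfg["threads"], "4");
    println!("{cfg:?}");
}
```

The trade-off is exactly the one discussed above: you now own these lines forever, but you owe trust to no one for them.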
Matthias
00:55:56
We're getting close to the end. And traditionally, the final question is, what's your message to the Rust community?
Jon
00:56:05
I think my message to the Rust community is twofold. One is we're hiring Rust developers. So, you know, reach out if that's of interest to you. And secondly, if you've got a really interesting Rust project and you think it's a great fit for some of the work that we're doing and you want to have a conversation, then reach out. I'm open to ideas. I don't know all of the high-quality Rust projects out there. And if there are implementations of things that are very widespread that you think we should be using, then let me know.
Matthias
00:56:33
John, thanks so much for the interview. And thanks to Canonical for supporting the Rust ecosystem.
Jon
00:56:39
Thank you very much. It's been a pleasure.
Matthias
00:56:41
Rust in Production is a podcast by corrode. It is hosted by me, Matthias Endler, and produced by Simon Brüggen. For show notes, transcripts, and to learn more about how we can help your company make the most of Rust, visit corrode.dev. Thanks for listening to Rust in Production.