Rust in Production

Matthias Endler

Zed with Conrad Irwin

Next to writing their own operating system, another dream shared by many developers is building their own text editor. Conrad Irwin, a software engineer at Zed, is doing just that. Zed is a fully extensible, open-source text editor written entirely in Rust. It's fast, lightweight, and comes with excellent language support out of the box.

2024-10-17 62 min

Description & Show Notes


In the first episode of the third season, I sit down with Conrad to discuss Zed's mission to build a next-generation text editor and why it was necessary to rebuild the very foundation of text editing software from scratch to achieve their goals.

About Zed Industries

Zed isn't afraid of daunting tasks. Not only have they built a text editor from scratch, but they've also developed their own GUI toolkit, implemented advanced parsing techniques like tree-sitter, and integrated multi-user collaboration features directly into the editor. Zed is a text editor built for the future, with meticulous attention to detail and a focus on exceptional performance.

About Conrad Irwin

Before joining Zed, Conrad worked on Superhuman, an email client renowned for its speed and efficiency. He is a seasoned developer with a deep understanding of performance optimization and building fast, reliable software. Conrad is passionate about open-source software and is a strong advocate for Rust. He's also an excellent pair-programming partner and invites people to join him while working on Zed.

Links From The Show

  • Superhuman - High-performance email client known for its speed and efficiency
  • Visual Studio Code - Popular, extensible code editor
  • Neovim - Vim-based text editor focused on extensibility and usability
  • gpui crate - Zed's custom GUI toolkit for building fast, native user interfaces
  • Leptos - Rust framework for building reactive web applications
  • Dioxus - Rust library for building cross-platform user interfaces
  • Tokio - Asynchronous runtime for Rust, powering many network applications
  • async-std - Asynchronous version of the Rust standard library
  • smol - Small and fast async runtime for Rust
  • Glommio - Thread-per-core Rust async framework with a Linux-specific runtime
  • isahc - HTTP client library that supports multiple async runtimes
  • Zed Editor YouTube channel - Official channel for Zed editor tutorials and updates
  • Tree-sitter - Parser generator tool and incremental parsing library
  • Semgrep - Static analysis tool for finding and preventing bugs
  • Zed release changelogs - Official changelog for Zed editor releases
  • matklad's blog post: "Flat Is Better Than Nested" - Discusses organizing large Rust projects with a flat structure
  • rust-analyzer - Advanced language server for Rust, providing IDE-like features
  • Protobuf Rust crate - Protocol Buffers implementation for Rust
  • Postcard - Compact serialization format for Rust, designed for resource-constrained systems
  • CBOR - Concise Binary Object Representation, a data format similar to JSON but more compact
  • MessagePack - Efficient binary serialization format
  • RON (Rusty Object Notation) - Simple readable data serialization format similar to Rust syntax
  • James Munns' blog - Embedded systems expert and Rust consultant's blog
  • Delve - Debugger for the Go programming language
  • LLDB - Next generation, high-performance debugger used with Rust and other LLVM languages


About corrode

"Rust in Production" is a podcast by corrode, a company that helps teams adopt Rust. We offer training, consulting, and development services to help you succeed with Rust. If you want to learn more about how we can help you, please get in touch.

Transcript

This is Rust in Production, a podcast about companies who use Rust to shape the future of infrastructure. My name is Matthias Endler from corrode, and today we're talking to Conrad Irwin from Zed about building a high-performance code editor in Rust. Conrad, thanks for having you. Can you quickly introduce yourself and Zed, the company you work for?
Conrad
00:00:24
Yep, I'm Conrad. I work at Zed, which is trying to build the next generation of text editor. Coding at the speed of thought is our motto. Prior to Zed, I built Superhuman, which is also focused on speed, building the fastest email client in the world. So I really like building tools that make people get what they need to get done faster and easier. And that's how I found myself working on Zed.
Matthias
00:00:46
It's pretty amazing. I have to say, I'm a Zed user myself. I completely switched from VS Code, and I can tell you the experience so far is fantastic. You did a great job, and this is also why I wanted to talk to you folks. First off, in your words: what's wrong with the existing editors? What are some pain points? Why do we need another editor? I guess you hear that a lot, but I want to hear the answer from you.
Conrad
00:01:13
Makes sense. I think there are kind of two approaches to think about. One of which is: as you look at the world today, everything is collaborative by default. People are designing in Figma, docs are in Google Docs, and then programmers are still stuck, each person editing their own file, and then git commit, git push, oh no, what happened to my git commits? It seems ludicrous that we don't have real-time collaboration for code. And so that was one of the key design things: how do we build real-time collaboration in, correctly? But we knew that if we wanted to persuade people to use that, it also needed to be better than what's out there. And if you compare it to something like VS Code, you can take the browser emulator and the extension APIs and all of that stuff that makes VS Code kind of slow and clunky, and rebuild it natively in Rust, with fast GPU-native rendering. So taking more of the approach of how a game would be built. On the other side, you have things like Vim, which are fast, and people love them for the speed and the ease, but they don't work with any of the modern tools. So you spend all your time configuring language servers, and breaking plugins and fixing plugins and breaking plugins. And so we wanted to make something that had this trio of: collaborative, extremely fast, and just works out of the box. So really helping people get their work done, not spending time configuring their editor, if that makes sense.
Matthias
00:02:37
If Zed didn't exist, which editor would you use?
Conrad
00:02:39
Well, before it did exist, I kept switching between Neovim and VS Code. VS Code because I liked the language servers and all of that stuff, but it would just frustrate me too often, so I'd go back to Neovim, where I knew how to be productive. And so I've spent a lot of my time making Zed's Vim mode work for me and for the other people who use it. So it's been fun.
Matthias
00:02:59
Yeah, and you did a great job there. I have to say, there are very few gaps in the Vim support by now. That wasn't the case just half a year ago, and seeing the rapid development is really, really good, really surprising. I also found myself switching between VS Code and Neovim. Every time I got sick of VS Code's laggy performance, I would switch to Neovim, only to find out that the configuration was a bit of a hassle, and so I had to switch back at some point.
Conrad
00:03:32
Exactly. And so you're kind of the ideal target user, someone who wants better things from their tools. And so building Zed the right way has been very fun.
Matthias
00:03:41
Did that also entail any differences in how Zed is architected in comparison to other editors? Or was it mostly Rust's performance that made it so snappy?
Conrad
00:03:53
Rust definitely helps a lot. It's not JavaScript, which is really nice. But one of the things that we do that is very different is that we render like a game renders. So each frame, every eight or sixteen milliseconds, we redraw the entire screen on the GPU. And that means that we are not CPU-bound when it comes to rendering stuff, unlike, say, a VS Code or anything HTML-based that has to do a lot of CPU work to get stuff onto the screen before it ever reaches the GPU. So that's kind of the biggest difference, I would say. Also, because of the time we started, language servers are built in. In VS Code, each language server is wrapped in an extension. In Zed, we do have some extensions that provide that, but the language server is really the thing it's based on. And then the other big piece of tech that we use a lot of is Tree-sitter. So when you're trying to do things like jumping to matching brackets, we don't have to parse the entire file. We already have a tree of what the syntax looks like, and we can just jump you to the other end of the node. And so building on sensible foundations means that most operations are faster. We're not trying to iterate over the whole file all the time, that kind of stuff.
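The bracket-jumping trick Conrad describes can be sketched in a few lines. This is a toy stand-in, not Zed's or Tree-sitter's actual data structures: the `Node` type and `matching_bracket` function are invented for illustration, but they show the idea that with a syntax tree already in hand, "jump to matching bracket" is just "jump to the other end of the node that starts or ends here", with no file scan.

```rust
// Illustrative sketch: a syntax node records the byte offsets of its opening
// and closing delimiters, so matching a bracket is a tree walk, not a scan.

#[derive(Debug)]
struct Node {
    start: usize,      // byte offset of the opening delimiter
    end: usize,        // byte offset of the closing delimiter
    children: Vec<Node>,
}

/// Find the innermost node with a delimiter at `offset` and return the
/// offset of the opposite end of that node.
fn matching_bracket(node: &Node, offset: usize) -> Option<usize> {
    if offset < node.start || offset > node.end {
        return None;
    }
    // Prefer the deepest (innermost) child that also contains the offset.
    for child in &node.children {
        if let Some(m) = matching_bracket(child, offset) {
            return Some(m);
        }
    }
    if offset == node.start {
        Some(node.end)
    } else if offset == node.end {
        Some(node.start)
    } else {
        None
    }
}

fn main() {
    // A tree for something like: fn main() { (1 + 2) }
    // where the block spans offsets 10..21 and the parens span 12..18.
    let tree = Node {
        start: 10,
        end: 21,
        children: vec![Node { start: 12, end: 18, children: vec![] }],
    };
    assert_eq!(matching_bracket(&tree, 10), Some(21)); // outer { matches }
    assert_eq!(matching_bracket(&tree, 12), Some(18)); // inner ( matches )
    println!("matching bracket of 12 is {:?}", matching_bracket(&tree, 12));
}
```

In a real editor the tree comes from Tree-sitter's incremental parser, so it is already up to date after every edit; the lookup itself stays this cheap.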
Matthias
00:05:00
Right. So the editing part works on the CPU, I would assume; that's the Tree-sitter part. And the rendering part, the UI, works on the GPU. Is that correct?
Conrad
00:05:10
Yeah, there's still some CPU work involved in rendering the screen, like when you change the UI. But the CPU creates this kind of tree of nodes, very similar to HTML, and then the GPU is responsible for taking that tree of nodes and flushing them out to pixels.
Matthias
00:05:25
And is that something that you have to handle on the Rust side, or does that more or less evolve automatically when you build the application with that in mind? Is it something where, for example, you have a hypervisor for the GPU and then some other thing that takes care of the state of the text files you currently have loaded? Or is it something that more or less happens automatically?
Conrad
00:05:49
It doesn't happen automatically. A lot of the code in Zed is built to handle those things. And so where Rust really helps is things like lifetimes: you close a file, that should be deallocated, and Rust makes really good guarantees about that kind of stuff. For things like the GPU pipelines, because Rust has good interoperability with C, we can just call straight into the graphics APIs and say: here are the buffers that we need you to render, GPU, go do that. And so there's a whole framework, GPUI, which is the framework that Zed uses for all of that, and a couple of other apps are using it. But mostly we built that to build Zed. And so once you've built that, then you can build an editor on top of it.
Matthias
00:06:29
Someone might listen and think: oh, why didn't they use Leptos or Dioxus or all the other Rust UI frameworks? Why did they have to invent their own thing? What would be your answer?
Conrad
00:06:42
Primarily speed of fixing things. Zed moves very quickly compared to any other software team I've worked on; I've never moved along so quickly. And so we're very careful to avoid putting things in the way that would make us move slowly, particularly unnecessary abstractions. We spend a lot of time trying to make the macOS integration feel right, and if we had to go through someone else's crate each time, fix it there, then pipe it all through, the overall development time would be slower. It also means that it's very concretely designed for Zed. There are no extra APIs that we don't need, or stuff that we don't want. And that becomes important. One good example of that is keyboard shortcut handling: it's very unlikely that someone else would have a keyboard shortcut system that can support Vim mode and all of the other things that we want to do with Zed. So skipping all of that and doing it ourselves makes some things easier. Fewer abstractions to get in the way.
Matthias
00:07:36
Absolutely, you can feel that when you use it, because it's snappy, it feels native. That, by the way, was one other thing that I always missed in Neovim: it didn't really feel like a native application. It felt like a terminal-emulated thing that ran inside some terminal, which it kind of was. But that also kept me away from other things like Emacs, for example. I do like all of these experiences, but I also want the native look and feel.
Because especially on macOS, if you're used to this, it's very hard to switch away from that experience again.
Conrad
00:08:12
Yeah, strongly agree. And we have some really interesting design decisions, like in Vim mode, the colon key brings up the command palette. One of the nice things about that is we can provide autocompletion, whereas in Vim there's no space to do that, because you don't have a proper graphical UI; you only have lines in the command panel. And so, yeah, it's definitely freeing to not be in a terminal emulator.
Matthias
00:08:36
Yeah. Let's see, which keyboard shortcuts do you support? What sort of keymaps, I would say? You support Vim. Then you support the normal command- or control-based keyboard shortcuts. Then you have actual chords, which are things like GD or GR. Anything else that I'm missing?
Conrad
00:09:00
One that JetBrains users love: if you enable the JetBrains keymap, double-tapping shift brings up the command palette. That was a fun one that we added recently, just to appease those people. I'm actually working on it here with a contributor right now, trying to automatically support non-Latin keyboards. So, for example, on macOS, if you do command-A on a Russian keyboard, it's obviously going to do the same thing: select all. But in Vim normal mode, A sees the Russian character instead and doesn't do anything. And so I've been trying to figure out how to make that work as well.
Matthias
00:09:36
Well, when you describe it, it sounds kind of straightforward, but you have to plan for this from the very beginning, because otherwise you won't manage to support all of these different ways of input, right?
Conrad
00:09:50
Exactly. And actually, one thing that helps us avoid planning is that because we own the whole thing, from the very top-level Swift layer all the way down to the GPUI layer, we can control how it works. But handling things like international input and making that work sensibly with keyboard shortcuts is not an obvious problem. And so I look forward to fixing all the remaining edge cases. But we're getting along with that right now.
Matthias
00:10:14
Can you just quickly and briefly describe to us how that part of Zed, the keyboard input, works? Because in my mind, it's event-based. You definitely don't want to block the rendering in any way, so you probably need to do that asynchronously or event-based. But I don't want to put words into your mouth. I want to hear it from you. How does that part work?
Conrad
00:10:40
Yeah. So, starting from the top: macOS gives us an event-based API where they say, hey, someone pressed a key down at this point. We then have to ask: okay, is this part of a multi-key sequence? So on a US keyboard, if you do option-backtick, you get into the mode where you can type an A with an accent on top. So we have to integrate with that. And that's just a weird edge case, because it has a re-entrant callback system: it gives you the event, then you call back into it, then it calls back into you, which Rust does not like; that's very, very unsafe as far as Rust is concerned. But once we've gone through a few rounds of that, we know: okay, the user pressed this key, this is the character it's going to generate, this is the key that was pressed. We then send that to the matcher: does either the character that was generated, or the key that was pressed with the modifiers (shift, control, command), match any of the defined bindings? If it matches a binding, we do the action. If it doesn't, then we input the text. And one of the things that we do to try and make the key bindings easier to program is that each component in the app (so the editor is one big component) has a bunch of key bindings. And then the pane, which contains the editor, has a bunch of key bindings. And so you can set them at the right level. And that means you can have multiple different meanings for a given key binding, depending on where you're focused right now, which is really important for something as complicated as this.
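The per-component binding idea Conrad ends on can be sketched as a dispatch over a focus stack. This is a hypothetical reduction, not Zed's real API: `Component`, `dispatch`, and the action names are all invented, but the shape is the same, with the innermost focused component winning when several components bind the same keystroke.

```rust
// Sketch: each focused component contributes bindings; lookup walks from the
// innermost component outward and the first match wins.

use std::collections::HashMap;

type Action = &'static str;

struct Component {
    name: &'static str,
    bindings: HashMap<&'static str, Action>,
}

/// Walk from the innermost focused component outward and return the first
/// binding that matches the keystroke, together with who claimed it.
fn dispatch(stack: &[Component], keystroke: &str) -> Option<(&'static str, Action)> {
    stack
        .iter()
        .rev() // innermost component is last in the stack
        .find_map(|c| c.bindings.get(keystroke).map(|&action| (c.name, action)))
}

fn main() {
    // Outermost first: workspace -> pane -> editor (action names are made up).
    let stack = vec![
        Component {
            name: "workspace",
            bindings: HashMap::from([("cmd-s", "workspace::Save")]),
        },
        Component {
            name: "pane",
            bindings: HashMap::from([("cmd-w", "pane::CloseItem")]),
        },
        Component {
            name: "editor",
            bindings: HashMap::from([("cmd-w", "editor::CloseCompletions")]),
        },
    ];
    // The editor overrides cmd-w; cmd-s falls through to the workspace.
    assert_eq!(dispatch(&stack, "cmd-w"), Some(("editor", "editor::CloseCompletions")));
    assert_eq!(dispatch(&stack, "cmd-s"), Some(("workspace", "workspace::Save")));
    println!("cmd-w goes to {:?}", dispatch(&stack, "cmd-w"));
}
```

Setting bindings "at the right level", as Conrad puts it, then falls out of the data structure: the same keystroke resolves differently depending on which components are on the focus stack.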
Matthias
00:12:05
And all of these events, do they end up in some sort of state machine? Or are they more or less modifying the state and then gone from the system again?
Conrad
00:12:18
A little bit of both. So, as you mentioned, we have the ability to do multi-key keystrokes. If you do a G and then an R, it goes to all references in Vim mode. And so that piece is a state machine: okay, could this be a pending match, is it something else? But once it's happened, it really just happens, and it's kind of a callback into the rest of the code. So you hit GR and it goes: okay, run the find-all-references code. That's probably, in that case, going to kick off an async task, because we don't want to run it on the main thread; we're going to communicate with the language server and ask it questions. So we kick it to the background, wait for the response, and then update the UI in response to that. So we're using a lot of the async Rust stuff to make that not happen on the main thread.
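The pending-match state machine for sequences like G-then-R can be sketched like this. It is an assumed toy model, not Zed's real matcher (the `Matcher` type and its outcomes are invented): each keystroke either completes a binding, keeps a prefix pending, or falls through to text input.

```rust
// Sketch of a multi-key binding matcher: keystrokes accumulate in `pending`
// until they either complete a binding, remain a prefix, or fail to match.

#[derive(Debug, PartialEq)]
enum MatchResult {
    Action(&'static str), // a full binding matched; run its action
    Pending,              // prefix of a longer binding; wait for more keys
    NoMatch,              // not a binding; insert the text instead
}

struct Matcher {
    bindings: Vec<(&'static [&'static str], &'static str)>,
    pending: Vec<String>,
}

impl Matcher {
    fn press(&mut self, key: &str) -> MatchResult {
        self.pending.push(key.to_string());
        let mut prefix_of_longer = false;
        for &(keys, action) in &self.bindings {
            let is_prefix = keys.len() >= self.pending.len()
                && keys.iter().zip(&self.pending).all(|(a, b)| a == b);
            if is_prefix {
                if keys.len() == self.pending.len() {
                    self.pending.clear();
                    return MatchResult::Action(action); // exact match
                }
                prefix_of_longer = true;
            }
        }
        if prefix_of_longer {
            MatchResult::Pending
        } else {
            self.pending.clear();
            MatchResult::NoMatch
        }
    }
}

fn main() {
    let mut m = Matcher {
        bindings: vec![
            (&["g", "r"][..], "editor::FindAllReferences"),
            (&["g", "d"][..], "editor::GoToDefinition"),
        ],
        pending: vec![],
    };
    assert_eq!(m.press("g"), MatchResult::Pending);
    assert_eq!(m.press("r"), MatchResult::Action("editor::FindAllReferences"));
    assert_eq!(m.press("x"), MatchResult::NoMatch);
    println!("g r matched find-all-references");
}
```

Once an `Action` comes back, the editor would hand the slow part (talking to the language server) to a background task, as Conrad describes, rather than running it on the main thread.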
Matthias
00:13:00
And this is probably also how you circumvent issues with self-referential structs in Rust. You described a mechanism where you had an event that would trigger another event, and you would have a callback upon a callback, maybe. And these things trigger some sort of PTSD in my mind, because if you do it the wrong way, you end up with something that doesn't compile anymore. And if you have a dispatcher in between, you can completely get rid of this problem, because async Rust solves it for you. Is that correct?
Conrad
00:13:38
I'm not sure exactly if async Rust solves that problem, but the GPUI framework has some tools for it. So mostly the UI is a tree; it's really the only data structure you can have in Rust. You have the root view that maintains handles to all of the subviews. But obviously, often you want to be able to refer back up the tree: okay, I'm an editor, but I know I'm rendered inside a pane, or I know I'm rendered inside something else. And so we have a system of strong pointers and weak pointers, and that works pretty well for avoiding it. But it is one area where I wish the compiler were better, because it can't tell you: hey, you have a smart pointer cycle, you have two strong pointers in each direction. But as long as you're somewhat careful to maintain the order in your head, it's mostly fine, and we don't really have that many problems with memory leaks. At least none that we've found yet.
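The strong-down, weak-up pattern Conrad describes can be shown with the standard library's `Rc`/`Weak` (GPUI has its own handle types; this is just the shape of the idea, with the `Pane` and `Editor` names invented for illustration): parents own children strongly, children point back up weakly, so there is no cycle and dropping the root frees everything.

```rust
// Sketch: the view tree owns downward with Rc and refers upward with Weak,
// so ownership stays acyclic and deallocation is deterministic.

use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Pane {
    editors: RefCell<Vec<Rc<Editor>>>, // strong: the pane owns its editors
}

struct Editor {
    pane: Weak<Pane>, // weak: back-pointer up the tree, no cycle
    title: String,
}

fn main() {
    let pane = Rc::new(Pane { editors: RefCell::new(vec![]) });
    let editor = Rc::new(Editor {
        pane: Rc::downgrade(&pane),
        title: "main.rs".to_string(),
    });
    pane.editors.borrow_mut().push(editor.clone());

    // While the pane is alive, the editor can reach back up the tree.
    assert!(editor.pane.upgrade().is_some());

    // Dropping the root frees the subtree: the weak back-pointer does not
    // keep the pane alive, and the pane's strong handles go away with it.
    drop(pane);
    assert!(editor.pane.upgrade().is_none());
    println!("no cycle: {} cleanly lost its pane", editor.title);
}
```

The compiler gap Conrad mentions is visible here too: if `Editor.pane` were accidentally an `Rc<Pane>`, the code would still compile, and the cycle would leak silently until someone noticed.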
Matthias
00:14:25
Are there any issues that you defer to runtime, or is all of that done at compile time?
Conrad
00:14:30
Yes. So we have a whole bunch of smart pointers for view components, because we need to be able to reference them from the renderer and also from the view tree. So they need to be kind of like an Arc, referenced from multiple places. We don't use an Arc directly; instead, we have our own wrapper around it that's a little bit more type-safe for some things. And so if you do have something like a double borrow, you get a panic instead of a compile-time check. It would be nice if the compiler supported that, but I understand why it doesn't. It would be really hard to check.
Matthias
00:15:02
Are the additional type safety guarantees that you mentioned in your own implementation of this arc alternative.
Conrad
00:15:07
So the framework gives you what it calls a view, and you can read it or update it, just like kind of an Arc-Mutex. What's nice about the view is that it supports a bunch of other things. A view can be rendered; a view can be converted into kind of a black-box "any view" that the renderer can deal with. And so when you're given a view handle, there are some things you can do with it. It's not just a lock around an arbitrary struct.
Matthias
00:15:33
Right. Is that something that I would be able to use as an external person, maybe as part of my application? I know that you open-sourced the entire code, but is that something that I could use in any other context?
Conrad
00:15:49
Yeah, I mean: yes, but. If you're using GPUI to build an app, you definitely would use it, and it gives you the whole suite of tools. If you were just trying to pull bits of GPUI out on their own, that's probably not a bit I would pull out, just because it's so tied into the rendering system, if that makes sense.
Matthias
00:16:07
Right. So these are abstractions that you built specifically for GPUI, things that were paper cuts in your working environment, and you just had the power to do it. So you went ahead and did it.
Conrad
00:16:21
Exactly. And so, yeah, it gives us all a bunch of nice guarantees.
Matthias
00:16:25
Is that a very common pattern at Zed, where maybe the ecosystem is just not there yet, or you have very specific requirements, so you have to build it yourself? Or does the ecosystem help you a lot?
Conrad
00:16:37
I would say we are very much on the rebuild side of the spectrum. There have been a lot of things where we haven't used what's out there. I think the other big one is Tokio. Tokio has its own async runtime, but on macOS it's not main-thread aware, and macOS has a whole bunch of cool scheduling stuff to do when you have a graphical app. There's a specific thread that Cocoa knows about: regardless of how busy the system is, we're going to keep this thread running, because it's visible to the user. And we wanted to be able to hook into that system. So we have our own async runtime. It's not Tokio, but it works in that more UI-based way: if you want UI updates, you have to be on the main thread; otherwise, you have to be on a background thread. And we have a bunch of helpers to make sure that your code is always running on the thread you expect.
Matthias
00:17:25
When I looked through the code, I actually saw that you use both Tokio and smol. I would assume that your own async runtime is built on top of smol. Or is that even a separate runtime, and you have three in total?
Conrad
00:17:36
We don't use the runtimes from Tokio or smol. I think we use a bunch of the helper stuff from them.
Matthias
00:17:41
Right.
Conrad
00:17:41
But I know there was a push to get rid of smol, because it's like: well, we don't really need anything from here. But that's kind of where the Rust ecosystem stuff comes in. It's a very big ecosystem with a whole bunch of stuff in it, and so it's very easy to end up with a small dependency on these things through someone else, even if we're not using the main parts of the libraries.
Matthias
00:17:58
Yeah, and I also would assume that if you took your async runtime and published it as a separate project, that would essentially mean that you would have to maintain it for external people. And it could also be a bit of an issue with regard to how you want to use that runtime inside Zed. So I'm assuming that you don't want to do that.
Conrad
00:18:20
I think we actually did. I was trying to look it up; it was something we were trying to do, and so we took that piece of GPUI and split it out into a separate crate. But I'm now looking and I can't see it within ten seconds, so I'll have to look it up after this and find it.
Matthias
00:18:36
It's very nice to know that you have your own runtime, because I'm kind of a big proponent of diversity. We have Tokio, we had async-std, we have smol, and there's Glommio and a couple of others. What's the state of the async Rust ecosystem in your mind?
I can already see that you have some stuff to say.
Conrad
00:19:02
A little side story: we're using a really old HTTP client right now called isahc, which was very popular four or five years ago, and the reason we're using it is that it's the only one that lets you plug and play the async runtime. So I really like the idea of having this diverse set of async runtimes, but the abstraction is not in quite the right place, because it seems like if you're building something, you can't just say: oh, give me any async runtime. That's not as easy to do as it is to say: okay, we'll just use Tokio, which is where everyone seems to be landing right now. And, you know, in the abstract it's good for people to solidify around Tokio; that seems to be where most of the energy is, because it gives you more batteries included. But suddenly you're in this situation where there are tools that we want to use, like Tokio's HTTP stuff, which we can't, because we don't use Tokio's runtime. It's on the back burner right now, but we need to upgrade the HTTP client to upgrade libssl, which we don't really want to link, and there's no alternative async HTTP client that isn't Tokio-based. And I don't want to build my own, please. So we'll have to figure that out. And that's kind of my experience with it: there are the day-to-day async pain points in the language itself, but the ecosystem is very much either Tokio, or you're on your own a little bit.
Matthias
00:20:17
Is that, in your opinion, something that will just solve itself at some point, once the ecosystem grows a bit? Or is it a systemic issue?
Conrad
00:20:25
I don't know the answer. I think, as with all of these things, it's part cultural, part technical. It's hard to write an async runtime because it's such an abstract thing to do. The end result is only a few hundred lines of code, but the right few hundred lines of code, if that makes sense. And so most people don't think to do it, which means, culturally: sure, let's just use Tokio, it's the most common one. And if we go that way, maybe that's okay, but if they don't support the async runtimes that everyone wants to use, then you can end up with these problems. I don't know. It'll be interesting to see how the community responds to it. I think the Rust community kind of likes rebuilding things over and over again, so I'm sure we'll be fine. It'll be interesting to see what comes up.
Matthias
00:21:10
If you compare it with Rust's error-handling story: we had multiple iterations and ended up with anyhow, thiserror, snafu, and all of these other abstractions, and all of these dependencies are relatively new in comparison to how old Rust is. That took a different turn, because it took some time to mature and we never really standardized on one error-handling library. Whereas in async Rust, pretty much from the beginning, people settled on Tokio, because, as you said, it's a much more complicated piece of software and the ecosystem just wasn't as mature yet.
But now we find ourselves in a situation where Tokio is the dominant async runtime. Is it just me, or do you also see a problem there? And what would be the resolution here?
Conrad
00:22:06
Yeah, I mean, I definitely see the problem, given the examples that we have. One of the things that I'm hopeful for is that as the language evolves (async Rust is still very beta, as I think they called it in the latest Rust planning blog post), it becomes easier to do both sides of things: both easier to use as a consumer, and also easier to create, as in: hey, here's my new async library for you. And if you look at something like jQuery, to use a very different example, it started out small, became very dominant very quickly, and then, as the underlying browser APIs improved, became less relevant. So that's the hope I'd see, if that makes sense. As the APIs get more sensible and people get more used to all of the concepts involved, it becomes easier to do both sides of the coin. So I have nothing against Tokio; it's a great piece of software. But, to your point earlier, I really like the diversity, being able to do things in multiple different ways.
Matthias
00:23:08
At the same time, it gives me hope when you say that, because the jQuery example had a good outcome in my book: we were able to experiment with these alternative JavaScript frameworks, and then browsers caught up and added some of the features into their own native implementations. I guess the same could happen in Rust, where we take parts of Tokio and stabilize them, put them into the standard library. But we have to be careful there. For example, there's an AsyncRead and an AsyncWrite trait, and technically you could already stabilize those, but it's a bit debated right now whether that's the right abstraction to settle on going forward.
Conrad
00:23:51
Exactly, and it takes time to figure those out, particularly for something like Rust, which is such a big, complicated project. Any extra complexity they bring into the standard library deserves a lot of scrutiny. But even with things like async traits, there's so much obvious stuff that needs improvement first that I think there's definitely time to figure it out, even if it feels like it's not figured out yet.
Matthias
00:24:14
Come on, what percentage of Zed is async?
Conrad
00:24:17
I would guess 50-50, but I don't know, we could have a look. Pretty much everything that requires accessing the disk is async. Anything that touches the network is async. And text editors do a lot of disk access and a lot of language server access. The stuff that's not async is really just: oh, you typed a key, okay, left, we'll move the cursor one position to the left. We can do that synchronously. But even things like search: if you search for a character, we kick that off to an async background thread so that we can keep rendering UI while we search. So pretty much anything that uses a lot of CPU is not on the main thread.
Matthias
00:24:52
Is that rule codified somewhere? Is it part of your code style, for example, that you separate sync and async Rust? Or is that something that evolved naturally?
Conrad
00:25:02
It's something that evolved naturally. So, back to GPUI again: it provides a context, roughly a way of storing global state, you know, but not in a global. What that lets you do is that code that is sync takes an app context, and code that is async takes an async context. So if you build something slow, you make it take an async context, and you can't call it on the main thread anymore. So it gets easy to manage at the type level. But it does require, when you're building a new feature that is CPU-bound, that you notice and put it in the background.
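The "slow code takes the async context" rule can be reduced to a tiny type-level sketch. This is a hypothetical simplification, not GPUI's real API (its context types carry far more machinery): the point is only that a CPU-bound function whose signature demands the background context cannot be called with the main-thread context by accident.

```rust
// Sketch: two marker context types color functions. Fast work accepts the
// main-thread context; slow work demands the background context, so the
// compiler forces you to hop threads before calling it.

use std::sync::mpsc;
use std::thread;

struct AppContext;      // only ever constructed on the main thread
struct AsyncAppContext; // handed out to background tasks

/// Fast, synchronous work: fine on the main thread.
fn move_cursor_left(_cx: &AppContext, column: usize) -> usize {
    column.saturating_sub(1)
}

/// Slow work: the signature demands the async context, so calling it with
/// the main-thread AppContext is a type error, not a runtime stall.
fn search_project(_cx: &AsyncAppContext, haystack: &str, needle: &str) -> usize {
    haystack.matches(needle).count()
}

fn main() {
    let cx = AppContext;
    assert_eq!(move_cursor_left(&cx, 5), 4);

    // To run the slow function we must explicitly hop to a background
    // thread, which is the only place an AsyncAppContext gets created.
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let async_cx = AsyncAppContext;
        tx.send(search_project(&async_cx, "fn a() fn b() fn c()", "fn"))
            .unwrap();
    });
    let hits = rx.recv().unwrap();
    assert_eq!(hits, 3);
    println!("found {hits} matches off the main thread");
}
```

This is the function-coloring observation from the exchange that follows: the "color" lives in the context parameter's type, so the main thread stays free for rendering by construction.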
Matthias
00:25:40
That's so funny when you say that, because you almost take the function coloring problem and make it a feature.
Conrad
00:25:45
Right.
Matthias
00:25:47
You take the slow parts and mark them as async, so that you know exactly that this is something you should not run on the main thread.
Conrad
00:25:56
Yep, yeah, exactly. I hadn't thought about it that way, but you're right. It's kind of nice: this is async, go away, I will be back later.
Matthias
00:26:04
And you can see it in the type system, too. Are there any other challenging aspects of building a text editor in Rust? Any unexpected hurdles?
Conrad
00:26:14
Well, I mean, beyond the complexity of the editor piece, one of the things that is tricky with Rust, and we touched on this a little earlier, is that the ownership rules make it challenging, because a text editor is fairly self-referential. You have all of the things that want to know about the editor, and the editor needs to be able to tell lots of things stuff. And so we have several different solutions for: okay, here's the editor, we'll send events this way, or you can observe it and get things out. But it's not as easy as it would be in a language like JavaScript to just say, okay, we'll plonk this thing on the side, and the editor can talk to it, and it can talk to the editor, and we don't have to worry about it. So that's probably the main piece that Rust makes tricky. Other than that, the hard bit is just building a text editor. There are so many features that people expect to just work.
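The event-based decoupling mentioned here ("we'll send events this way") can be sketched with a channel. This is an illustrative sketch, not Zed's actual event system: the editor never stores references to its observers, so no self-referential ownership arises.

```rust
use std::sync::mpsc;

// Sketch of sidestepping self-referential ownership with events
// (illustrative, not Zed's real design): the editor holds only a
// sender; observers pull from the receiver, so there is no cycle of
// references between the editor and the things watching it.
#[derive(Debug, PartialEq)]
enum EditorEvent {
    Edited(String),
}

struct Editor {
    events: mpsc::Sender<EditorEvent>,
}

impl Editor {
    fn insert(&mut self, text: &str) {
        // No back-reference to any observer is needed here.
        let _ = self.events.send(EditorEvent::Edited(text.to_string()));
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let mut editor = Editor { events: tx };
    editor.insert("hello");
    println!("{:?}", rx.recv().unwrap());
}
```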
Matthias
00:27:04
Yeah, because everyone wants a different set of features.
Conrad
00:27:07
That too. Vim mode is the classic example of that.
Matthias
00:27:11
Why is that?
Conrad
00:27:12
Because the people who want Vim mode can't live without it, and everyone else is like: I don't care about this. And that's true for, not every feature we build, but a number of them. We've been working on Jupyter Notebooks, for example, and the people who need Jupyter Notebooks love that feature. Everyone else is like: yeah, whatever, I don't use that. So trying to navigate the trade-offs of which features we build, and whom we make happy in what order, is a big problem. But that's definitely a text editor problem, not a Rust problem.
Matthias
00:27:42
In preparation for this interview, I also checked out your new YouTube channel, because you recently started a channel talking about Zed's internals, and I will link it in the show notes. It's pretty majestic. The one thing that I realized from these interviews was that you sometimes touched on a library called Tree-sitter. You mentioned it before, but it seems to be a bit of the secret sauce in there. Because apparently, and correct me if I'm wrong here, other editors are not built this way. They don't work on this level of abstraction, almost at the level of an AST, an abstract syntax tree, to modify text. Can you elaborate on this a bit? What is it? And also, do you agree that it's a really critical, central part of Zed?
Conrad
00:28:31
Yeah, definitely agree. So if you think about a programming text editor, one of the first features you want to build is syntax highlighting. And if you look at the really old editors, it's a hand-coded parser for each language that does it. That's not going to fly; we don't have time to build a hand-coded parser for every language. Then, maybe a decade or two ago, people started using regular expressions: cool, here's a regular expression that does it. In some cases, the KDE text editor was kind of influential early on here, it's a half-XML, half-regular-expression language, so you get a bit of recursion, a bit of regular expression, a mix in there. And these things are all fine, and they work for what they work for, but they only solve the syntax highlighting problem. So if you want to understand a little bit more about, okay, this text on the screen is just an array of Unicode bytes, but what does it mean? You need something that doesn't just look at it byte by byte, but really divides it up into syntax. And one of the Zed founders, Max, built Tree-sitter to do this in Atom. With that, we get syntax highlighting for free, because it understands the language. But we also get things like jumping to matching brackets, or, if you want to look at a file and ask what's in it, we have a thing called the outline view, so you can see all the things that are defined in there. And that's all powered, again, by Tree-sitter. So it's fundamental to the way we do programming language stuff, which can all be done by tree traversal instead of byte by byte, which is orders of magnitude faster.
Matthias
00:30:05
How does Tree-sitter work on a type level? Is it like an intermediate representation, where you map certain keywords in different languages to the same keyword in Tree-sitter?
Conrad
00:30:17
So, each language in Tree-sitter has its own kind of definitions. And then Zed has a couple of things that map from those definitions to our definitions. So each supported language has a mapping of, like: okay, in the Tree-sitter grammar there's a thing called comment; in the Zed code highlighting there's a thing called comment; those are the same thing. We have that mapping for each language. Similarly, if you want to extract all the function definitions from a file, this is the Tree-sitter query used to get that. And those queries can all run on a background thread. Tree-sitter itself is kind of crazy: it's a bunch of C libraries, one generated for each language. So we run those inside a WebAssembly module to avoid the obvious problems with running C libraries, and that's been good for reliability.
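The grammar-to-editor mapping can be pictured as a small per-language lookup table. The capture and highlight names below are made-up examples for illustration, not Zed's real definitions:

```rust
use std::collections::HashMap;

// Sketch of the per-language mapping described above: Tree-sitter
// grammars expose their own capture names, and the editor maps them
// onto its own highlight categories. Names here are illustrative.
fn highlight_map() -> HashMap<&'static str, &'static str> {
    HashMap::from([
        ("comment", "comment"),
        ("line_comment", "comment"),   // grammar-specific name
        ("string_literal", "string"),  // maps onto the editor's "string"
    ])
}

fn editor_highlight(capture: &str) -> &'static str {
    highlight_map().get(capture).copied().unwrap_or("plain")
}

fn main() {
    println!("{}", editor_highlight("line_comment"));
    println!("{}", editor_highlight("mystery_node"));
}
```

In the real editor these mappings live in query files per language rather than a hard-coded table, but the translation step is the same idea.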
Matthias
00:31:02
That's a very smart move. Where does the name Tree-sitter come from? Is it because it's almost a recursive structure, where you can think of it as a tree inside a tree inside a tree, with different levels of abstraction that you can iterate on? Or does it come from the abstract syntax tree?
Conrad
00:31:23
I don't know. What I do know is that the key feature of it, as opposed to a parser for a compiler, is that it's error-tolerant. So if you have, you know, a trailing quote mark, that's not going to throw it off; it can always do something. And it's optimized so that small edits in the text lead to small changes in the tree. So I guess it minds, it monitors or babysits your tree. Maybe that's where the name comes from, I don't know.
Matthias
00:31:48
Do you know Semgrep?
Conrad
00:31:51
I know the name, but you'd have to remind me.
Matthias
00:31:54
It's a sort of library where you can modify code based on, I would even say, something like a grammar, where you say: I want to do a code modification, and I give it an expression, and then it figures out what to do based on this expression. It's more high-level than a regular expression, and it's also more powerful, I would say. Can you do things like this in Zed as well, on the syntax level? I know it's not exposed to the user, but internally, could you do such modifications?
Conrad
00:32:31
Yes, is kind of the answer. So yeah, we don't have much built on that right now, but it's kind of there in the background. One of the most obvious things we do have: there are bindings for select more and select less, and they work on the Tree-sitter definition. So you can start in this quote, then expand until you have the whole function, or shrink back down to the thing you started with. But we don't really have much "go edit via the syntax tree" stuff right now. One small example of something we do have: in Vim mode we have an argument text object, so you can select an argument with via, you know, select inside an argument, and that uses the Tree-sitter grammar to find the argument that you're in.
Matthias
00:33:11
Every day I get an update, which is...
Conrad
00:33:13
Nice because they work.
Matthias
00:33:14
Flawlessly, and they work every single time. And I find myself reading the changelog a lot, because the changelog is nicely formatted; you can see what's going on. I wonder how you do these updates. How does the pipeline work to create them? How do you push them to the clients?
Conrad
00:33:30
And how.
Matthias
00:33:31
Do you make it so flawless.
Conrad
00:33:32
Keeping it simple and building it ourselves, I guess, are the two parts of that. So internally we have a bunch of branches, one for each kind of version. Right now we're on 149 stable, 150 preview. Any commits that get added to those branches don't do anything until you run a script, trigger-release, and you give it stable or preview. That kicks off a build that uploads to GitHub. And there's a lot involved in making a build of something like this, because you have x86, you have ARM64, you have Mac and Linux, so you end up with four or five different binary downloads that it creates. It uploads them all to GitHub and marks the release as preview. And then Joseph, usually, but it could be anyone, goes in, takes all the commit messages, and formats out the release notes. We have some tooling to help with most of that work. To your point, if you want to make them readable and nicely formatted, there's no automation like, oh yeah, we just pull it in from the PR and call it a day. That doesn't work well enough. We want to make sure that it's easy to understand what is changing, so we spend manual time on that. And then the auto-updater on the client side is just a loop: sleep for an hour; is there an update? If there is, download it, copy it into place, and then reboot.
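The client-side updater loop described here is simple enough to sketch. The version-check and install functions below are hypothetical stubs, not Zed's actual code:

```rust
use std::time::Duration;

// Hedged sketch of the auto-update loop as described ("sleep for an
// hour; is there an update? download, copy into place, reboot").
// latest_version and install are hypothetical stand-ins.
fn latest_version() -> String {
    "0.150.0".to_string() // stub: would query the release server
}

fn install(version: &str) -> String {
    format!("installed {version}") // stub: would download and swap the binary
}

fn update_once(current: &str) -> Option<String> {
    let latest = latest_version();
    // Only act when the server advertises something newer.
    (latest != current).then(|| install(&latest))
}

fn main() {
    // The real client loops forever: update_once, then sleep an hour.
    let _interval = Duration::from_secs(3600);
    println!("{:?}", update_once("0.149.0"));
}
```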
Matthias
00:34:49
Tree-sitter is written in C and wrapped in WebAssembly. That means you must run some sort of WebAssembly runtime. Is that a custom build, or do you use something off the shelf?
Conrad
00:35:00
No, we use something that's out there. I don't remember which one off the top of my head, but we use something that's out there. We have fixed some bugs in it, because we saw some crashes from it. So, you know, it's kind of fun.
Matthias
00:35:13
Is that also something that you use for the extensions?
Conrad
00:35:17
Extensions can register Tree-sitter grammars, and that's kind of the main interaction there. And then extensions themselves also have the ability to run some code in WebAssembly. Right now it's pretty limited. The most common use for an extension today is a language server, and trying to download the correct version of a language server requires a little bit of code, so that code runs in WebAssembly as well.
Matthias
00:35:42
And will you support traditional extensions at some point, too? What will that look like?
Conrad
00:35:48
Definitely yes, and who knows. One of the things that we really care about is the speed and performance piece of it. So when we were first talking about extensions, it was like, well, we could have a JavaScript runtime and run JavaScript extensions, but I don't think that's going to happen anymore. There are enough languages that are easy to compile down to WebAssembly that we'll do it there. The second piece that's tricky is that we have a completely custom UI framework that doesn't have any bindings for any other language. So how I imagine things going is that we continue down the approach we have today, which is that we expose simple things that you can extend. One is the AI features, which have some hook points where you can pull in more context for the model. Another is language servers. A third is themes. But it's not like, hey, you can run arbitrary code in our process, thank you very much. I think we'll keep it pretty tight for now. The next piece I really want to build is being able to bind keyboard shortcuts and then run some code that modifies the editor, and I think that's a solvable piece that we could ship by itself. So let's say you want, you know, SHA-256 support. You could imagine an extension that registers a new command and a keyboard shortcut that SHA-256s the file and gives you the answer. But trying to build something that allows you to render UI is a long way off, I think.
Matthias
00:37:12
I definitely appreciate the focus on stability and performance, because those are the main two reasons why I use it. I would like to keep it this way. That's kind of nice.
Conrad
00:37:24
Yeah, I've been doing a little bit of user research, talking to people working on very large code bases in VS Code. They really, really try not to restart their computer or turn off VS Code ever, because when they turn it back on, it takes five or six minutes to update all the extensions and spin around and show banners until it's ready to use. It's like: oh, we have to avoid that. And it's hard, because all those things do something useful for someone. But really trying to make sure that they don't get in the way, I think, is really important.
Matthias
00:37:51
And once the project grows, it will not get easier, because I checked yesterday: Zed is approaching 500,000 lines of Rust code, which is crazy.
Conrad
00:38:02
It's crazy.
Matthias
00:38:04
And I saw some interesting bits that I learned from you. For example, you have a flat hierarchy of crates. You still have one workspace; you keep it all in one workspace, and then you follow a very flat hierarchy. Can you elaborate on this design decision, and maybe other decisions that are reasonable, or maybe even necessary, after reaching a certain size of code base?
Conrad
00:38:34
Yeah. So the primary input to the crate structure is compile time, because 500,000 lines of Rust takes a long time to compile. So really the question is: what code gets recompiled a lot, and how do we reduce the amount of it? That's where the crate structure comes from. There's obviously a little bit around abstractions and not leaking between things, but really, primarily, it's the speed thing. We don't really want a deeply nested structure where visibility is tightly controlled; that's not important, we trust the whole code base. So it's about making sure that it's somewhat easy to find what you're looking for, and that when you make a change to the editor, you don't have to rebuild too much else of the code base to get into testing.
Matthias
00:39:23
It's funny, because lately I read an article by matklad, who is the author of rust-analyzer, and he mentioned this one thing about nested trees: you have to make a decision about where to put things, and it's a conscious decision that might be wrong, so eventually you might end up with a suboptimal structure for your project. So he advocated for a flat hierarchy, too, where you don't even have to make the decision, because it's flat anyway. If you are wondering where to put something, the answer is: put it into the root of your workspace.
Conrad
00:40:03
Right, yeah. And I like the pragmatism there. We do have some crates that are too big, but mostly it's pretty well factored out, I think.
Matthias
00:40:13
Any other such tips or perhaps even things that you would avoid now?
Conrad
00:40:19
So, one thing I learned: we did a rewrite of GPUI, the graphics framework, that was fairly heavy on the use of generics, and it ended up exporting a bunch of things that were instantiated multiple times with multiple different types. And the way that Rust compiles generics is to kind of copy-paste the code for each type. So we had some big compile-time regressions at that point, just because of the way the code was structured. One thing to look out for is: as often as you can, avoid having a generic type exported from a crate. If you have a function that takes generic arguments, you kind of want to keep that internal, because otherwise, every time someone uses it, you get another copy of that whole thing. And obviously, like everything, there's a trade-off. There are some things where it is worth the pain, but there are other things where, if you can avoid that kind of type boilerplate exploding, you should.
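A common way to limit this, sketched below under illustrative names: keep the exported generic function a thin shim over a single non-generic body, so only the shim gets monomorphized per type.

```rust
// Sketch of the monomorphization trade-off: Rust copy-pastes a generic
// function's body once per concrete type, so a widely used generic
// exported from a crate multiplies compile work across the workspace.
// Mitigation: a tiny generic shim forwarding to one non-generic body.
pub fn render_label<T: AsRef<str>>(text: T) -> String {
    render_label_inner(text.as_ref())
}

// Non-generic: compiled exactly once, regardless of how many
// different T types callers instantiate the shim with.
fn render_label_inner(text: &str) -> String {
    format!("label: {text}")
}

fn main() {
    // Two different T instantiations; only the thin shim duplicates.
    println!("{}", render_label("static str"));
    println!("{}", render_label(String::from("owned String")));
}
```

The standard library uses this pattern internally in several places for the same reason; the function names here are invented for the example.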
Matthias
00:41:10
And just to clarify, for the people who are listening: this also happens if you use these generics inside your workspace, in a different crate, because that's a separate compilation unit. That means even just exposing it within your project might be problematic at times. And what about lifetimes? Because you have a very high focus on performance; you want to make this thing as fast as possible. One suggestion from various people who come from systems-level programming languages like C++ and C is that you want to avoid allocations, make everything a view into memory as much as possible, and deal with the raw data as much as possible. Is that something that you have to follow to reach that level of performance, or are there ways around it?
Conrad
00:42:01
So for the hot code paths, yes, for sure. For things like the rendering pipeline, there are two frames that are just big static chunks of memory that we switch between, a fairly common approach. And Rust is actually kind of helpful for that: it tells you if you mess up, because you know which one is live and what you have access to at the moment. For most of the rest, not yet. Because if you think about the way it works: let's say you're on a 120-hertz screen, you have eight milliseconds to render every frame. So rendering a frame needs to be really fast, but the average person can only type a couple of characters every second. So it's kind of fine if it's "slow" to respond to a keypress, where slow means you have eight milliseconds and you're not doing that much work. So yeah, we use a fair amount of ref-counted pointers to keep the code sane, even though that wouldn't strictly be optimal, just because we're not using them often enough for it to be a problem.
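The two-frame scheme can be sketched as follows; this is an illustrative toy, not Zed's renderer:

```rust
// Minimal sketch of double-buffered frames: two preallocated pixel
// buffers that the renderer alternates between, so the hot path never
// allocates per frame. Illustrative only.
struct Frame {
    pixels: Vec<u32>, // allocated once, reused every frame
}

struct Renderer {
    frames: [Frame; 2],
    live: usize, // index of the frame currently being displayed
}

impl Renderer {
    fn new(width: usize, height: usize) -> Self {
        let blank = || Frame { pixels: vec![0; width * height] };
        Renderer { frames: [blank(), blank()], live: 0 }
    }

    // Flip buffers and hand out the now-writable back frame.
    fn begin_frame(&mut self) -> &mut Frame {
        self.live = 1 - self.live;
        &mut self.frames[self.live]
    }
}

fn main() {
    let mut r = Renderer::new(4, 4);
    r.begin_frame().pixels[0] = 0x00FF_FFFF;
    println!("live frame: {}", r.live);
}
```

Because only one frame is mutably borrowed at a time, the borrow checker enforces the "you know which one is live" property mentioned above.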
Matthias
00:42:55
And when you run into a performance problem, what's your approach? Do you benchmark before you make any changes? Do you just guess where the bottleneck is? Or do you have an intuition for it?
Conrad
00:43:08
Mostly, the Instruments tools from Xcode have been super helpful on Mac. Linux is a little newer, and there are some tools there, but I'm not as familiar with them. One of the really interesting things that's kind of on the back burner for me is that deallocating a frame on Linux can take nearly a millisecond or two, and we're like, that shouldn't be the case. So, you know, if anyone listening is a good Linux performance person, figuring that out would be great. When we run a profiler on it, it's like: why is dropping the frame taking so much time? Because you only have eight milliseconds, and if you're using two of them doing nothing, it's a complete waste of time.
Matthias
00:43:44
By frame, you mean what? A frame of memory in the kernel?
Conrad
00:43:49
A frame of pixels to update.
Matthias
00:43:51
Ah, okay. Do you use a lot of macros for code generation? or is that another thing that you tend to avoid in the hot paths of the code?
Conrad
00:44:01
Relatively few. We have a couple that are used a lot, but for most things, we just write the code out.
Matthias
00:44:07
And that's for ergonomics reasons or for other reasons?
Conrad
00:44:13
Mostly stylistic, I think. That's the way the code base is. But again, macros can be a performance problem; they haven't been for us. Is that because we got lucky by choosing the style? Or did someone, before my time, choose that style, and now we all copy it?
Matthias
00:44:27
Mm-hmm. When you ported Zed to Linux, were there any surprises that you hit, other than the mentioned issues with dropping frames?
Conrad
00:44:40
Yes, quite a lot. So macOS is actually a really nice target to develop against, because, similar to their reputation on iPhone, there's only really one platform you need to support. And sure, the APIs shift a little bit as time goes on, but you can look at one number, like: this is the version of Cocoa that you have, and you know all the libraries that you have. Linux is not like that at all, right down to the fact that about half of Linux users use X11, the old window server, and half of them use Wayland, the new window server. They work quite fundamentally differently, so we have two graphics pipelines on Linux, one for Wayland, one for X11. And that kind of fragmentation hits us at every layer of the stack. On macOS, if you want to choose a file, you just open the system file chooser. On Linux, well, they might not even have a system file chooser installed. Now what are you going to do? That was the most surprising thing for me: just how customized everyone's setup is, even for things I would consider surely just provided, like a file picker. So trying to navigate those trade-offs, making it work for as many people as possible without going truly insane, has been hard. Another good example is GPUs. macOS has a GPU; it works; always; you just use it. Linux has a GPU, but maybe the drivers are out of date, or the drivers are the wrong version, or they're closed source, or they crash. So we have a whole bunch of people who have tried to use Zed on Linux, and it just hasn't worked. It's like, well, when we try to talk to your GPU, it crashes. So is that our problem? Maybe. Is it your problem? Maybe. I don't know. We have to try and find more people who know how GPUs work under the hood and why they might not be working.
Matthias
00:46:20
I learned that it also compiles on Windows.
Conrad
00:46:23
We have a dedicated team of volunteers; there are three or four people who I see regularly doing Windows fixes and ports. We need a breather after Linux before we jump into the next platform, but it is something we'd like to have. Windows is going to be fun for different reasons than Linux. Some of the same problems; it's also fragmented, though less so. But the big one is that the file path separator is the wrong way around. We use Rust's PathBuf extensively internally, but if we allow collaboration between Linux and Windows, we can't represent a path in a PathBuf, because it might be a Windows UTF-16 path, or it might be a Linux UTF-8 path. So we need some kind of new file path abstraction that is not tied to the current system, which is one of the downsides of the way Rust does that.
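A path abstraction of the kind wished for here might look roughly like this. The type below is purely hypothetical, not Zed's design; it just shows a path that remembers which convention it came from instead of assuming the host OS:

```rust
// Hypothetical sketch of a platform-neutral path representation for
// collaboration: unlike std::path::PathBuf, which follows the host
// OS's rules, this type carries its own separator convention.
#[derive(Debug, PartialEq)]
enum RemotePath {
    Posix(String),   // '/'-separated
    Windows(String), // '\'-separated
}

impl RemotePath {
    fn file_name(&self) -> Option<&str> {
        match self {
            RemotePath::Posix(p) => p.rsplit('/').next(),
            RemotePath::Windows(p) => p.rsplit('\\').next(),
        }
    }
}

fn main() {
    let a = RemotePath::Posix("/home/conrad/main.rs".to_string());
    let b = RemotePath::Windows(r"C:\Users\conrad\main.rs".to_string());
    println!("{:?} {:?}", a.file_name(), b.file_name());
}
```

A real implementation would also need to handle the encoding difference (UTF-16 vs. UTF-8) that Conrad mentions; this sketch sidesteps that by assuming valid UTF-8.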
Matthias
00:47:15
And as you explained that, I wondered how I would test that. Would I have a really sophisticated test environment for different environments? Like, do you test it in VMs? Do you test it just with unit tests, or with manual testing? How does that part work?
Conrad
00:47:31
Testing in general, or cross-platform testing?
Matthias
00:47:35
I'm interested in both, yeah.
Conrad
00:47:36
But specifically cross-platform: cross-platform is a little manual, to be honest. The way the app is set up, you have a platform-specific layer, and then everything else is Rust. We have a test implementation of the platform-specific layer, so we can very easily test all the stuff that's not platform-specific, and it mostly just works. Sure, there are a couple of statements that depend on what platform you're on, but mostly the code is the same for everyone, and that is one of the nice things about Rust: it is just Rust. When it comes to testing platform integrations, like, back to keyboard shortcut handling: when you type these keys on this keyboard layout on macOS, it should do this. There, I have not figured out a better way than just getting yourself into that setup and trying it. So: to be determined.
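The "platform layer behind an interface, plus a test implementation" pattern can be sketched like this; the trait and method names are illustrative, not Zed's real platform trait:

```rust
// Sketch of a platform layer behind a trait, with a canned test
// implementation, so everything above the trait is testable without
// a real OS. Illustrative names only.
trait Platform {
    fn open_file_dialog(&self) -> Option<String>;
}

// Production would provide MacPlatform, LinuxPlatform, etc.
// Tests use a canned implementation instead.
struct TestPlatform {
    canned_pick: Option<String>,
}

impl Platform for TestPlatform {
    fn open_file_dialog(&self) -> Option<String> {
        self.canned_pick.clone()
    }
}

// Platform-independent code only ever sees the trait.
fn pick_file(platform: &dyn Platform) -> String {
    platform
        .open_file_dialog()
        .unwrap_or_else(|| "<cancelled>".to_string())
}

fn main() {
    let platform = TestPlatform { canned_pick: Some("src/main.rs".to_string()) };
    println!("{}", pick_file(&platform));
}
```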
Matthias
00:48:22
You focus on unit tests or integration tests for the rest of the code base.
Conrad
00:48:26
Pretty much integration tests, for the most part. So, as you know, we have collaboration, and because the server piece is also written in Rust and is part of the same repository, we boot up the server, we boot up Zed instances, and we talk through both of them. So we have full integration tests. And I kind of like that approach because, A, it lets you test interesting ordering properties: the test platform will reorder different async events that happen simultaneously, so that you get more test coverage. It also means you can refactor the code, and it doesn't break the tests. That's always been my gripe with unit tests: you change the code, and then you have to change the tests. So what's the point?
Matthias
00:49:04
How do you communicate with the server? JSON? Why exactly do you use protobuf and not anything else?
Conrad
00:49:13
Protobufs are kind of the classic solution to this. We're actually thinking about moving off them, because they don't integrate with Rust super well, and as all of our code is in Rust, it'd be nice to have something that integrates better. But one of the main challenges of a distributed system is that you have to be able to deal with messages sent from the past, so, like, forward compatibility. And then you also need to be able to deal with messages sent from the future: if someone is on a newer version of Zed than you, they could send you a message, and you need to be able to do something in that case that isn't just crash. There's not much that handles that. There are protobufs and a couple of other big, heavy solutions, but there seems to be a missing niche for a Rust-based thing that can solve this. Because one of the downsides of protobufs is that it generates a whole bunch of struct definitions, and it's like, well, we have all the structs defined in our codebase, and we have to map between the two. It would be nice if we could, more like Serde, just say: make this struct forward- and backward-compatible over the wire, please. But I haven't found anyone who's built that yet.
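The forward-compatibility property described here comes from tagging and length-prefixing every field, so old decoders can skip fields they don't recognize. A toy tag-length-value decoder illustrates the principle (this is an illustration, not protobuf's actual wire encoding):

```rust
// Why tagged wire formats survive messages "from the future": every
// field carries a tag and a length, so a decoder can skip over fields
// it doesn't recognize instead of crashing. Toy TLV format:
// [tag: u8][len: u8][len bytes of payload], repeated.
fn decode_name(mut buf: &[u8]) -> Option<String> {
    while buf.len() >= 2 {
        let tag = buf[0];
        let len = buf[1] as usize;
        if buf.len() < 2 + len {
            return None; // truncated message
        }
        if tag == 1 {
            // Tag 1 is the only field this "old" decoder knows about.
            return String::from_utf8(buf[2..2 + len].to_vec()).ok();
        }
        buf = &buf[2 + len..]; // unknown tag: skip it and keep going
    }
    None
}

fn main() {
    // A message from a newer peer: unknown tag 9 precedes the name field.
    let future_msg = [9, 1, 0xFF, 1, 2, b'h', b'i'];
    println!("{:?}", decode_name(&future_msg));
}
```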
Matthias
00:50:11
There are a few things that come to mind. One is Postcard by James Munns, which has a slightly different focus. It's a serialization format, yeah, but it's not based on Rust structs, as far as I remember. Then there's CBOR, which is another serialization format; I honestly don't know what that wire format looks like.
Conrad
00:50:37
But I...
Matthias
00:50:38
I think there's also one that is based on Rust structs themselves.
Conrad
00:50:43
We use one, I'm trying to remember what it's called, in a different part of the code base. It's MessagePack or something like that, which works fine, but it doesn't have any versioning support.
Matthias
00:50:55
I think the one that I meant was called RON.
Conrad
00:50:58
Okay, no, that's different again. It's like...
Matthias
00:51:01
A JSON-like thing, but with Rust structs. Because the one issue that I found with protobuf was that you need to carry the definition file around. You need to put it somewhere, and...
Conrad
00:51:13
Then you need to.
Matthias
00:51:14
Compile your stuff against whatever protobuf definition you have somewhere.
Conrad
00:51:18
And that can.
Matthias
00:51:19
Be a little annoying in the workflow.
Conrad
00:51:21
Definitely. And we have a build step that does it for you. But, for example, if you have an error in the protobuf file, it breaks everything, because the build step fails, and then you can't build, and you have to really dig in and find out why that is. But yeah, thanks for the Postcard tip, I will look into those. Thank you.
Matthias
00:51:38
Yeah shout out to james for building.
Conrad
00:51:41
That.
Matthias
00:51:43
I took a look at the issue tracker, and I found that one of the most often, if not the most often, requested features missing in Zed right now is debugger support. A lot of people might say: well, why haven't they added it yet? How can it be so complicated? Can you say a few words about that?
Conrad
00:52:07
Sure. I guess what makes this so complicated is that there are 50 different programming languages that we're trying to support. The other thing that makes it complicated is that it's actually a very fiddly piece of UI and UX. Obviously, there are lots of existing ones, so we can kind of copy them, but it's not just a copy-paste from VS Code or something like that. There's a lot to think about in how to build it, so that it not only works well, but feels intuitive, and you can actually understand how to use it. One of the things that's really interesting: there's a debugger protocol that's beginning to feel somewhat standard, which is the one that Chrome uses for its dev tools; that's the one that VS Code builds on. There are obviously other implementations, and some go directly to the debuggers. But what I imagine we'll do first is support the debug protocol, punt a little bit on the languages that don't work with that, and make it a language problem. I hope, if we do that, we can do it kind of like language servers: we get most of the benefit with a tenth of the work. But there's still all the UI to build, so you can look at local variables, jump to lines, and so on. If you start to think about all the things that a debugger can do, it's definitely a lot more than just play and pause.
Matthias
00:53:20
Which languages work best with this new protocol?
Conrad
00:53:24
So, JavaScript definitely works well with the protocol, because it was written for that. I know that Go's debugger, Delve, also has support for it. One thing I'm not sure about is whether LLDB, which is the debugger for Rust and other C-based languages, supports that protocol yet. But that's definitely a debugger that we would like to have support for.
Matthias
00:53:42
I'm really looking forward to that.
Conrad
00:53:44
Yeah, you and about 500 other people, I think, based on the upvotes.
Matthias
00:53:50
The pressure is on. But I'm sure that when it lands, it will be fine, because this is how I've experienced this editor, and this approach, so far: it's always very well thought...
Conrad
00:54:00
Through, which is great.
Matthias
00:54:02
Speaking of which, about things that look easy on the surface but are hard in practice: is there any peculiar bug that you remember? Anything that people take for granted, but that in reality is very hard to pull off?
Conrad
00:54:18
Yeah, a particularly unique bug, I guess. So Zed allows you to edit files of any size, and we had a bug where, if you had a file that was over about 100,000 lines long and you scrolled down, the line numbers would not be at the right position. You'd have the line of text, and the line number would be off by plus or minus a couple of pixels. We looked into it, and it turned out that because our graphics coordinates are float 32s, when we were multiplying the line number by the line height to figure out the distance from the very top of the file, it just didn't work out at all. So we ended up having to first subtract the first visible line and then do the offset, and that fixed it. But it's really interesting to think about: how do you have a file that's so long, and you can just edit it without having to rewrite the entire file every time?
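The bug is reproducible in a few lines. The numbers below are illustrative, but the mechanism is the one described: near nine million pixels, adjacent f32 values are a whole unit apart, so absolute positions round away from the truth, while offsets computed relative to the first visible line stay small and exact.

```rust
// Reproduction sketch of the line-number positioning bug (illustrative
// numbers): at large line counts, `line * line_height` exceeds f32's
// exact-integer range, so absolute y-positions drift by about a pixel,
// while positions relative to the first visible line are exact.
fn absolute_y(line: u32, line_height: f32) -> f32 {
    line as f32 * line_height // rounds: f32 spacing is 1.0 near 8.6 million
}

fn relative_y(line: u32, first_visible: u32, line_height: f32) -> f32 {
    (line - first_visible) as f32 * line_height // small values: exact
}

fn main() {
    let (first, line, h) = (523_117u32, 523_127u32, 16.5f32);
    let truth = (line - first) as f64 * h as f64; // 165.0 exactly
    let naive = absolute_y(line, h) - absolute_y(first, h);
    let fixed = relative_y(line, first, h);
    println!("truth {truth}, naive {naive}, fixed {fixed}");
}
```

With these values, both absolute products land on ties in f32 and round to different whole pixels, so the naive subtraction is a full pixel off, exactly the "plus or minus a couple of pixels" symptom.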
Matthias
00:55:07
Most text editors would even crash at this point or they would not even get to that point.
Conrad
00:55:11
Well, you can still open files that are big enough that they will crash; we don't do anything super clever around paging things in yet. But one of the things that we do do is, when you load a file, we break it up into a bunch of chunks. So as you're editing a file, we're not having to take a Vec<u8>, which is the underlying Rust string type, insert one byte in the middle, and reallocate the rest; that would be way too slow. When you insert into the middle, we use our CRDT to say: oh, it's just the first n characters from here, then this character, then these remaining characters. And because we're representing it as a tree, kind of like a rope, if you've heard of that data structure, everything is quick. And because that is collaborative, because it's a CRDT, it also works for collaboration natively. We know for each chunk who added it and in which order. So even as multiple people are concurrently editing the string, everyone ends up with the same representation without having to copy everything across.
Matthias
00:56:12
Exactly. And it's also elegant, because if you use two big vectors, you will probably have some jank at the boundary between them when you jump from one block to the other. But if you use a smarter data structure, you circumvent that issue altogether.
Conrad
00:56:31
And then one of the things that wraps that, which I think is pretty cool, is that we have to maintain where the line breaks are. We don't want to scan through the string and figure out where all the line breaks are, so we maintain a couple of indexes on top. We know, okay, in this half of the file there are 1,500 line breaks; in that half of the file there are more. As you scroll, we can quickly jump to the right part of the file without having to scan from the beginning and count the newlines. Using indexes like that to help us navigate the strings means that we're basically never doing anything that's O(n) in the size of the file, which is nice.
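A simplified sketch of that index. Zed keeps these summaries in a tree so lookups are logarithmic; a flat per-chunk newline count is used here to keep the idea visible, but it still shows the key trick: chunk summaries let you skip whole chunks without touching their bytes.

```rust
/// Text chunks plus a newline count per chunk (ASCII assumed for brevity).
struct LineIndex {
    chunks: Vec<String>,
    newlines: Vec<usize>, // newline count per chunk, maintained alongside edits
}

impl LineIndex {
    fn new(text: &str, chunk_size: usize) -> Self {
        let chunks: Vec<String> = text
            .as_bytes()
            .chunks(chunk_size)
            .map(|c| String::from_utf8(c.to_vec()).unwrap())
            .collect();
        let newlines = chunks
            .iter()
            .map(|c| c.bytes().filter(|&b| b == b'\n').count())
            .collect();
        LineIndex { chunks, newlines }
    }

    /// Byte offset where zero-based line `n` starts.
    fn line_start(&self, n: usize) -> usize {
        let mut remaining = n;
        let mut offset = 0;
        for (chunk, &count) in self.chunks.iter().zip(&self.newlines) {
            if remaining > count {
                // Skip the whole chunk using only its summary.
                remaining -= count;
                offset += chunk.len();
            } else {
                // Scan just this one chunk for the remaining newlines.
                for (i, b) in chunk.bytes().enumerate() {
                    if remaining == 0 {
                        return offset + i;
                    }
                    if b == b'\n' {
                        remaining -= 1;
                    }
                }
                return offset + chunk.len();
            }
        }
        offset
    }
}
```

With the summaries in a balanced tree instead of a Vec, skipping works the same way at every level, which is how scrolling to line 100,000 avoids counting newlines from the top of the file.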
Matthias
00:57:10
It sounds like a very hard problem that you have to figure out once, but then never have to touch again. Are there other things like this in the editor?
Conrad
00:57:19
Well, we've talked about one of them, which is Tree-sitter. Syntax highlighting for arbitrary languages is a very hard problem, but once you've solved it with something like Tree-sitter, you only have to solve it once. Beyond that, it's hard to say. I think the CRDT is one of the few parts of the codebase where I basically never find myself, because everything just works, which is nice.
Matthias
00:57:40
Let me tell you about my first interaction with Zed. I had a problem with Vim support, so I went into the issue tracker and found an issue that exactly described my problem. I read the last couple of comments, and what I found interesting was that you, Conrad, reached out to the people in this issue and said, "I'm open to hacking on this together." That was the first time I ever saw this level of interactivity in any project, because it's not only through issues; you would invite people to block time in your calendar to improve the tool together, working interactively. Is that something that you do a lot? How does that influence your workflow?
Conrad
00:58:32
Yeah. So, kind of back to the beginning, one of the things that I love working on is tools that help people, and I believe that collaboration in programming is way behind where it should be. So I have kind of a secret goal, which is to get more people trying it out and, you know, get more converts that way. But what I found is that it's comparatively very slow to have a discussion about how to fix an issue in GitHub issues, because it's kind of like email. You send something, maybe you wait a few days, someone replies back. You don't understand what they said, so it takes another few days to clarify. Instead, you can say, hey, let's work on this together. Some people opt out, either because they don't feel like they could be useful in Rust or they don't feel like their English is good enough. But for the people who say sure and join in, in half an hour of working together we can solve an issue that would have taken us each an hour or so of back and forth on GitHub. So I have a Calendly link with two hours on Tuesdays and two hours on Fridays that are just "pair with me" time. And it's fun to work on issues that other people feel are important. Sometimes, particularly with Vim, there are so many features where I'm like, I never even knew this existed and I don't understand why you need it, but you really, really want it. Great, let's build it together. Because then I'm getting you into Zed and getting more people helping out, and you feel amazing because you got to implement the thing you wanted to build. So it's been really helpful to do that. In general, obviously, the whole codebase is open source, and we intend to keep it that way. We want people to make Zed the editor that they want, so we really encourage people to send in changes. And they can be pretty major. There's been one contributor working tirelessly on what he calls Zen mode, which is: how do you get rid of all the UI?
And so, setting by setting, he's adding them all in so he can do that. But a lot of people just come in and say, hey, I hit this bug, here's a fix, let's do that. We're really trying to make sure that the community is making Zed the editor they want to see. It helps everyone. So I would strongly encourage you, whether you're using Zed already or not yet, to use it and then fix the bugs that you find.
Matthias
01:00:32
If someone out there wants to contribute to Zed now, how do they get started?
Conrad
01:00:37
Very simply: look in the issue tracker. There are about 2,000 issues. Some of them are tagged with, I can't remember the name, but it's like a first-issue tag. Otherwise, my suggestion to anyone trying to get into anything is to find something that irks you, or something that you miss from the editor you were in before, and build it, or file an issue about it and discuss it. We're very happy as a team to pair with people; it saves a lot of time. So filing an issue and trying to talk to someone on Discord helps a lot.
Matthias
01:01:03
We're coming to the end, and it's become a bit of a tradition around here to ask this one final question. What would be your statement to the Rust community?
Conrad
01:01:12
I think what it would be is remember to keep things simple for newcomers. I've been doing Rust for about a year now, and I just about feel like a beginner. So if we can make it simpler for people joining in, I think that will help with everything.
Matthias
01:01:26
Very nice final statement, Conrad. Thank you so much for taking the time. It was amazing.
Conrad
01:01:32
Thank you for organizing this. It was great to spend the time together.
Matthias
01:01:36
Rust in Production is a podcast by corrode. It is hosted by me, Matthias Endler, and produced by Simon Brüggen. For show notes, transcripts, and to learn more about how we can help your company make the most of Rust, visit corrode.dev. Thanks for listening to Rust in Production.