1Password with Andrew Burkhart
About securing logins with Rust
2025-06-26 64 min
Description & Show Notes
Handling secrets is extremely hard. You have to keep them safe (obviously), while at the same time you need to integrate with a ton of different systems and always provide a great user experience, because otherwise people will just find a way around your system. When talking to peers, a lot of people mention 1Password as a company that nailed this balance.
In today's episode, I talk to Andrew about how 1Password uses Rust to build critical systems that must never fail, how Rust helps them handle secrets for millions of users, and the lessons they learned when adopting Rust in their stack.
About 1Password
1Password is a password manager that helps users securely store and manage their passwords, credit card information, and other sensitive data. It provides a user-friendly interface and strong security features to protect users' secrets across multiple devices.
About Andrew Burkhart
Andrew is a Senior Rust Developer at 1Password in the Product Foundations org, on the Frameworks team, and specifically on the Core Platform squad handling the asynchronous frameworks other developers use to build features (i.e. requests into the Rust core from the native clients, data sync, etc.). He specializes in that synchronization process: getting data federated from cloud to clients to native apps and back.
Links From The Episode
- Backend for Frontend Pattern - Architectural pattern for creating dedicated backends for specific frontends
- typeshare - Generate types for multiple languages from Rust code
- zeroizing-alloc - 1Password's minimal secure heap zero-on-free implementation for Rust
- arboard - Cross-platform clipboard manager written in Rust
- passkey-rs - Pure Rust implementation of the WebAuthn Passkey specification
- WebAssembly (WASM) - Binary instruction format for portable execution across platforms
- tokio - The de facto standard async runtime for Rust
- Clippy - A collection of lints to catch common mistakes in Rust
- cargo-deny - Cargo plugin for linting dependencies, licenses, and security advisories
- Nix - Purely functional package manager for reproducible builds
- Nix Flakes - Experimental feature for hermetic, reproducible Nix builds
- direnv - Load and unload environment variables based on current directory
- ACM: Spotify Guilds - A study of the guilds in Spotify's agile model
- axum - Ergonomic and modular web framework built on tokio and tower
- tower - Library for building robust networking clients and servers
- tracing - Application-level tracing framework for async-aware diagnostics
- rusqlite - Ergonomic wrapper for SQLite in Rust
- mockall - Powerful mock object library for Rust
- pretty_assertions - Better assertion macros with colored diff output
- neon - Library for writing native Node.js modules in Rust
- nom-supreme - Parser combinator additions and utilities for nom
- crane - Nix library for building Cargo projects
- Rust in Production: Zed - High-performance code editor built in Rust
- tokio-console - Debugger for async Rust programs using tokio
- Rust Atomics and Locks by Mara Bos - Free online book about low-level concurrency in Rust
- The Rust Programming Language (Brown University Edition) - Interactive version of the Rust Book with quizzes
- Rustlings - Small exercises to get you used to reading and writing Rust code
Transcript
It's Rust in Production, a podcast about companies who use Rust to shape the
future of infrastructure.
I'm Matthias Endler from corrode, and today we talk to Andrew Burkhart from
1Password about securing logins with Rust.
Andrew, thanks so much for being a guest today. Can you say a few words about
yourself and about 1Password?
Yeah, definitely. Thank you for having me. So I'm Andrew Burkhart.
I'm a senior Rust developer at 1Password. I work in the Product Foundations org, on the Frameworks team.
So we specifically focus mostly on async Rust frameworks.
I've been here about three years now and working on data synchronization mostly,
which has been pretty interesting as we grow.
We spread kind of to other products that have come in. You know,
the password manager has kind of historically been our core business,
and that's what people have known us for for almost 20 years now.
But, you know, over the last few years, especially, we've grown a lot and acquired
a few companies and so forth and really spread ourselves across the security space,
you know, with various things from the password manager to, you know,
developer tools and a lot of like enterprise password management features.
And we have things like Connect servers and SCIM bridges that help with identity management, all these things.
And then there are the companies that we acquired, like Passage, which helps with passkeys; Kolide, which does on-device health checking; and Trelica, which handles provisioning and some things like that.
So, yeah, so across all those various areas, you know, sync has been a really,
really interesting use case to kind of combine all the various information we have in different ways.
And by sync, you mean actually combining all of the information from devices
or user profiles or what does that mean exactly?
Yeah, so the sort of summary, I guess, would probably be, you know,
federating or, you know, getting the data to be the same from clients to the
Rust core to the server and all the way back around to any clients that are relevant.
So that may be something as simple as, you know, you and I are on the same one
password account in the password manager and I add an item that you now need.
So we sync that data across, but it can also be other things like,
you know, if you're in the enterprise space,
just making sure that the data from one place talks to another and so forth,
so that if there is relevant context from one area, it's available in another, things like that.
Yeah, I can totally relate to that, because one of the worst developer experiences is if data that you expect to be there isn't there, or is in a sort of limbo. I think sync needs to be really reliable, and that's the core of the product. If that is not reliable, then everything else is just a nice user interface, but not a great user experience.
Yeah, definitely. And that's created a lot of, you know, patterns, I guess you would say, that we hold pretty strongly to. Like, sync here (we actually had conversations about this just this morning), sync here is always sort of a background task. So anything that you're doing, sync should essentially be a side effect of it, and it should happen asynchronously in the background. That way, you know, you kind of always assume that the data you have is current, because sync has been happening in the background, and therefore you have the expectation that that data has already been brought locally to your device, which gives the feature developers a little bit of freedom, because you don't have to go request things and wait for them.
I mean, there are cases where that has to happen, but in general,
the vast majority of the data has already been brought locally and cached and so forth.
So I can just act on that cache as though, you know, I'm the only one using
this data or something, which is convenient in a lot of cases.
It's sort of funny that you need async in order for something to be synced, but it does make sense if you think about it, because, well, it happens in the background, and a lot of things need to happen for it to look synchronous. But it's not easy.
Yeah, yeah, definitely. And you get into the nitty gritty details of like synchronous
versus synchronized and so forth. And we definitely overload the word sync probably way too much.
So take us back a couple of years ago when you started. You said you're at 1Password
for three years now, I guess.
What was your background back then? Where do you come from? And then what was
the state of Rust at 1Password when you started?
Yeah, so my prior job had been a lot of Kotlin and Spring Boot, which, you know, is a framework for Kotlin doing server-side stuff.
And I was doing a lot of that, like backend web programming.
I also worked on the front end a little bit with TypeScript and the Vue framework for UI.
But I wasn't really like a front end developer. I did, you know,
it's referred to, I think, as like the BFF pattern where like,
you have one server that manages a lot of different things.
And then you have all these front ends that manage pieces of that,
right? Like, for example, I worked on an app that did a lot of settings,
like behind the scenes settings.
And then we had a lot of other apps where actual end users would interact with
it. And the one main server powered it all.
But instead of muddying that server with, you know, all these either,
you know, like you could use something like GraphQL, I suppose.
But like instead of muddying it with all these APIs that supported vastly different
use cases, we created these BFF servers where each front end has its own back
end that then translates to the real sort of underlying back end.
So I'd always done kind of that mid-layer of development. And that was actually very helpful because, it turns out, getting into the state of Rust at 1Password, that's kind of how Rust is used here.
So each of our apps, you know, obviously we have quite a few between iOS and
Android and Mac and Windows and Linux and web and extension and all these different places.
All of them are native clients.
So they operate, you know, the iOS app is in Swift and so forth.
And then they are bundled with kind of like an embedded backend,
which is a Rust, we call it the core.
So the Rust core is inside of all of these and that is doing as much of the
business logic as we can to leave those native clients to just be display layer,
essentially a presentation layer.
And then that communicates to the actual cloud server and so forth.
And so Rust got picked up here. I don't actually know when it was,
but my understanding is it was originally picked up for the Windows app that was being built.
And over time, when we got from 1Password 7 to 1Password 8,
I think it was, again, it was slightly before me, but I think that was when
it was like, okay, we need more cross-platform functionality because we're building
so many features that were not originally just password management concerns.
We're building all these different things, and they need to work on all these
platforms, and we're implementing them over and over and over again.
And so when they decided they wanted to create sort of the singular core that
was going to power all the apps, they had already started to use Rust on Windows
and it had worked really well. And so a lot of it just got ported over.
And so when I got here, we were just in this kind of big growth phase back in
2022 and a ton of new people coming in. And so Rust was sort of proliferating
very quickly as the core was building, you know, all these developer features
and these different things.
And it was right about then we did a reorg of the development teams and created
the team that I'm now on, which is the frameworks team,
along with a few other teams that do, for example, like security specific development
or platform specific development, like handling the nuances of Windows or Android or whatever it might be.
And all of us kind of came together to design patterns and rules and so forth
around Rust development to try to sort of wrangle all these different things that were happening.
And that's kind of got us into the patterns that we have today,
which have been really nice to keep this giant core as maintainable as possible.
Nice. I want to get back to the patterns in a second. But before we do that,
what did you use before Rust? Was it very platform dependent?
Did you have this business logic in, for example, Kotlin and then Swift and so on?
Yeah, exactly. So, like I said, that was all before me.
But my understanding is that everything was basically built natively,
which gives you some benefits, right? There are pros and cons of that.
But at the scale that we're at now, it's just not feasible to have that much
code and have that many different implementations. And obviously, when you're building a security app, multiple implementations are a problem, right?
Anytime you have to do that, there is now the risk that one of these implementations
is vulnerable in some way that the other isn't.
And we're trying to compare, you know, Swift code to Kotlin code to TypeScript
to, you know, Go or whatever and so forth, and make sure that these are all
doing the exact same thing.
But those languages may have different underlying models for async or memory
management or whatever it is. And so it just wasn't feasible to continue on like that.
And so the Rust core really helped us. I mean, it just enabled features that weren't possible before.
In my idealized way to think about it,
what I hope would happen is that once you merge in all of that code into one
code base, into one Rust code base,
that you see how different platforms or different languages solve these problems
or different developers solve the same problem and some of them solve different edge cases.
And then when you merge that together, you get a thing that is stronger than
its parts. Did that happen?
Yeah, I think so. And you get the opposite too, right? Where it's like, you know, we'll get one component and it's like, what is this thing doing? It's like, oh, we ported that from this other version. Like, oh, that was the worst version, why did we pick that one? You know, and so you do see that. But yeah, I think in a lot of ways you definitely get the benefit of that, and we see that having native clients. I mean, we've got developers who have done Swift and Mac development for, you know, 20 years or whatever it might be, and so then they come at it and it's like, okay, hey, we need to build this feature.
You know, I need some help with the Rust side of it. And you see something and
you're like, that's not at all how I would have thought to do it.
And they're like, well, here's how we do it, you know, in Mac land.
And you start to get interesting new ideas, I think, about how you can handle,
you know, how you can tackle various problems.
So I think we get that just by nature of having all the different developers,
but we definitely saw it in the growth of the Rust core and trying to pull things together.
And now trying to add in some of the browser features that we had implemented
natively in the browser, now that we're trying to pull them into Rust and compile
them into Wasm, we're seeing some of those patterns again that we can learn
from and improve throughout.
Was the Rust ecosystem prepared for what 1Password asked it to do?
Or were there any gaps in the crate ecosystem, for example?
I think there were probably some. I would say, in general, yes.
I think the ecosystem was growing so fast, and there were so many people building interesting things, that I think it gave us a lot of freedom to innovate on top of those.
I think there were small gaps, which is where you've seen some of the crates that we've put out.
Gaps like, for example, having those native clients, there's no type safety
across the FFI necessarily.
If I'm writing Swift and then I send something into the FFI and then into the
Rust core, I'm just hoping the types line up.
So we created the typeshare crate, which is, it's not giving you necessarily
complete type safety, but it is at least making sure that you're not writing
the types on the client side.
They are generated from your Rust code so that if the Rust code changes,
the client side types change. You're not having to manually keep those in sync.
So I think that's been really helpful.
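To illustrate the idea, here is a minimal sketch of a typeshare-annotated type. The struct and its fields are invented for illustration; the real attributes are shown as comments so the sketch compiles without the typeshare and serde dependencies.

```rust
// In a real codebase this struct would carry #[typeshare] and serde derives:
//
//   #[typeshare]
//   #[derive(Serialize, Deserialize)]
//
// (shown as comments here so the sketch is dependency-free)
pub struct VaultItemSummary {
    pub id: String,
    pub title: String,
    pub favorite: bool,
}

// Running the typeshare CLI over this file would emit roughly this
// TypeScript for the web client (plus Swift/Kotlin equivalents):
//
//   export interface VaultItemSummary {
//       id: string;
//       title: string;
//       favorite: boolean;
//   }
```

The point is that the client-side definitions are generated from the Rust source, so renaming or retyping a field in Rust immediately shows up in every client language instead of drifting silently.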
And then there are other little things. Another one we released somewhat recently is called zeroizing-alloc, which is a wrapper around the global allocator. Normally, whenever memory is deallocated, it's sort of just dropped, right? It's free for the system to use for other things. We don't want to just let that sit around in case the system doesn't need it and that memory sits there forever, right? We can't have that when you're dealing with sensitive data. So the zeroizing-alloc crate helps by flipping those bytes all back to zeros during the deallocation process.
So there were little things like that, that obviously Rust as a language was
ready for. There just hadn't been that use case necessarily.
And so I think it created nice gaps for us to fill with some of those things.
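The core idea can be sketched in a few lines. This is a hypothetical zero-on-free allocator in the spirit of zeroizing-alloc, not 1Password's actual implementation:

```rust
use std::alloc::{GlobalAlloc, Layout, System};

/// Hypothetical zero-on-free allocator; details are illustrative.
pub struct ZeroizingAlloc;

unsafe impl GlobalAlloc for ZeroizingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Allocation is untouched; only freeing gets the extra step.
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // Overwrite the block with zeros before returning it to the
        // system, so secrets can't linger in freed memory.
        std::ptr::write_bytes(ptr, 0, layout.size());
        System.dealloc(ptr, layout);
    }
}

// Installed for a whole binary with:
//   #[global_allocator]
//   static ALLOC: ZeroizingAlloc = ZeroizingAlloc;
```

Reading freed memory is undefined behavior either way; the zeroing is defense in depth so secrets don't sit in pages the allocator hands back. `GlobalAlloc`'s default `realloc` funnels through these two methods, so reallocation gets the same treatment.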
Yeah, I came across this zeroizing-alloc crate.
Actually, I'd seen it before, because the link was purple. So that's how I know.
I'm surprised that no one else built such a crate before. Or did you look at
some and didn't really like them?
I don't know how we came across that. And I wonder to some degree,
like if you look at the crate, it's not particularly complex,
right? It's a fairly straightforward implementation.
And I've seen this in other cases where...
I think sometimes we look at a problem and we go, oh, it's so simple,
like I don't need to make a crate to deal with this problem.
You know, it's maybe a common problem, but it's like, I'll just throw this together.
And, you know, maybe people don't want the dependency or something like that.
I'll just kind of hand roll it very similarly or something along those lines.
I think sometimes that happens, right?
And people are obviously wary sometimes of dependencies, you know,
especially ones that do smaller things, right? We saw, I think,
in the TypeScript npm world, there was the left-pad incident,
right? Where all this thing is doing is handling left padding.
And it's so simple, but then when it's gone, now we have a big problem, you know.
And so that could be the reason that some of those things just don't get created.
It may not be, you know, I doubt we were the first people to think of it.
You know, it's probably something someone else had solved before.
They maybe just didn't make a crate for it or something like that.
Or maybe there is one out there and I just haven't seen.
What is the policy around creating crates at 1Password?
You know, this has been an interesting thing that I've seen here, right? Because 1Password, the password manager concept, is 20 years old, right?
And then the Rust app itself, you know, now is, you know, five,
six, seven years old. I don't know exactly how old, but it's been around a while.
You have these things that have grown so much. And for the most part, creating new crates internally is just a matter of code architecture: does it make sense, is it a necessary separation? A lot of the problems you deal with anywhere. Making crates public is sometimes tricky, because first you have to figure out, does this make sense to make public? Like zeroizing-alloc: we probably could have left that internal, but we figured it might help some people.
And it's not like a, you know, private, you know, secret thing that 1Password does.
I think a lot of people know that, you know, we have to deal with memory this way.
So you have some of those initial concerns about whether or not we should make this public.
And then it's like untangling it from everything else you've built over the last few years, right?
And, you know, there are things like our clipboard manager, it's called arboard, that's made public, you know. And you've got to go, okay, well, we had all these, you know, 1Password-specific assumptions.
Now we have to pull those out and make this just a dependency as though it were
a third party dependency to us as well, you know, and so a lot of it is just,
just small bits of process of untangling things.
But as far as deciding to make it public, that's pretty case-specific, I guess, or situation-specific.
And I think we've been trying to get more and more out there.
You know, we released, for example, the passkey RS crate, which I think is only
valuable to other password managers.
Like, I don't know if there's anybody else, because it really just validates, like, does verification of passkeys. And so I don't know that anyone else would ever need it.
But, you know, we released it because, one, it might help them.
And we really want to see passkeys get adopted in more places,
even if that's, you know, in a competing app.
We want to make sure that that's available and so forth. And it's also just
interesting, you know, rust that people might be able to learn from and so forth.
So, yeah, it definitely depends on the situation, but it's a bit of a process either way.
Maybe one of the listeners checks out that crate and builds a thing that we
both haven't anticipated.
It happens more often than I expect.
And the other thing is open sourcing software, whether it's Rust or not, is a huge lift for every company, because what a lot of people forget is everything other than the code: versioning, documentation, testing, CI, communication, and so on.
And also, to some extent, exposing how you build things at a certain company.
And also, a lot of developers are afraid to share how they write code.
And did you ever get any external contributions to make it worth it?
Yeah, I mean, especially with typeshare. I'm trying to think of what language it was. I want to say Scala or something like that. You know, we don't use Scala, and so we never built a typeshare integration with Scala, but someone from the community did. And so now, I want to say it's Scala, there is, you know, the capability to convert your Rust types into Scala and so forth.
And I think those are nice. I don't know that we've ever really open sourced anything with the hope that people would build features that we need.
You know, we generally are trying to do it sort of the other way around,
like, you know, make sure that our work is helping people where we can.
Like you said, it can be tricky, especially just from an operational standpoint, right?
I think I saw that we have like 150,000 businesses that use 1Password, or something like that, or more than that.
Wow.
So when you have that number of people, the number of needs is pretty high,
right? We have a lot of, this company needs this thing and this company needs this thing and so forth.
And if you open source something, you lose the ability to make breaking changes quickly.
You don't want to, I mean, I guess you technically could, but that's not a very
good open source project if you're just breaking all the time.
So that's one of the harder parts is when you're making that decision,
is this something we can open source?
And can we give, like you said, versioning, can we give any kind of guarantee? If we make version one, and version one is stable and safe and it's good, that's fine. But now if we need to make version two a breaking change, because we need it for something we're trying to build, well, now version one kind of becomes unsupported.
So if version two doesn't work for what you're trying to do,
well, now what if there's a security vulnerability with version one?
Now you've created these two branches that should be maintained by someone.
And if that happens a second time and a third time, you just create this web
of challenges. So, you know, it would be great if we could open source everything,
you know, and let everyone see all that.
And I'm sure there are paths to that. It's just not very easy.
Like you said, it's very challenging. It's not as simple as like,
well, just make it public.
You know, you've got to do a lot of management to make that work.
And, you know, as things ebb and flow, that can be really hard to keep up with,
you know, responsibly and reliably.
Yeah. And even if you didn't open source it, and even if you just used a crate
across different teams, that is still a challenge. Do you have a monorepo?
What does the structure look like right now?
Yeah, so it is a monorepo. We have various repos for services and things like
that, but the Rust core itself and all of the client apps built on it is a monorepo,
which gives us some benefits from sort of a build and release avenue, right?
Where everything, you don't have to bring in other, you know,
repos and so forth to keep everything up to date. It's all in one place.
But like you said, that means that, you know, merge conflicts can be a problem, and all these different things that are reliant on what you're working on.
And that's tricky, especially for a team sitting where we do, kind of at the very bottom of the whole stack. You know, when we make a change, there tends to be a very significant ripple effect, and you get into challenges like, how do we manage that, right?
Does our team, you know, if you have code that depends on that,
do we just tell you, hey, we're going to break this and you need to be ready
to work with us to do the new thing or do we fix it for you,
you know, and so forth. And there can be a lot of communication that needs to occur as you grow.
And, you know, you talk about The Mythical Man-Month, you know, that concept, right?
Of like throwing two more people
onto a project does not give you two more people's worth of time, right?
You have extra communication. There's all these new channels that have been
created between those people and the other people and so forth.
And it will test your processes and
your org structure and your code architecture, you know, at every turn.
And so when you get to hundreds of developers working on sometimes completely
different, you know, things, right? They're not even password management concepts.
They are, you know, they may be related to secrets, right? Like the SSH agent that 1Password has.
Well, that's related to some amount of secret, but it is not at all related,
you know, to profiles or settings in the password manager or things like that.
So you can get into challenging situations where you're trying to coordinate underlying frameworks across completely different feature teams. And yeah, that's problematic at times, and it definitely can slow things down,
but it also becomes a strength when you're learning from things.
Let's talk scale, because people like numbers.
Can you share some of the, let's say, more interesting numbers about
the code size and the number of people working with Rust and crates maintained and so on?
Yeah, so I think I just looked the other day, and we were at like five or six hundred thousand lines of Rust in the core, across like 600-ish crates. I think we're just shy of 600 crates, internal plus dependencies.
So it's a pretty big code base, right? And like I said, that core makes up as
much of the business logic as we can get it to.
So I don't know the exact spread of all of it because, you know,
when you're implementing a feature, you have to also implement it in the native clients, right?
So some amount, you know, every feature is going to have some Swift and some
Kotlin and TypeScript and all these different things.
And then you've got server-side stuff, and not all of our server-side stuff is in Rust. There's a lot of Go on the server side.
So I would say probably 50% of everything new that we write is in Rust,
but it's definitely spread across all those other things.
You know, writing an end-to-end feature will involve Go and Rust and Swift and Kotlin and TypeScript, and possibly Go on the CLI side and so forth.
So there's quite a bit that goes into that. But there's probably,
I think we have a couple hundred developers now working across all the different
products that 1Password and Kolide and Passage and Trelica and so forth have.
But we probably have, I don't know, I'd say 100 people that work pretty heavily
in Rust or maybe 50% or more in Rust, something like that. But that's definitely a bit of a guess.
Awesome. Which FFI bridge works the least well for you right now? Like, what gives you the most headaches at the moment?
It depends, I think. As far as the actual FFI goes, the most challenging ones are probably... well, I guess Android can be a little bit tricky sometimes, you know, especially just Android, the ecosystem's a little bit different.
You know, we were built originally as a macOS app. And so I think there's still
a little bit of that sort of, you know, Apple product DNA built in,
which, you know, we've tried to spread to all these different platforms and
make everything work equally well. But I'm sure there's still a little bit of that in there.
And then the Wasm boundaries can be tough.
I didn't know anything about Wasm until about a year ago, right?
I don't think I'd ever really touched it at all. I'd heard of it and knew nothing about it.
And then you find out little things like, I think there's like no built-in clock
in Wasm or something like that.
So you're like faking all these time-based things that you used to do.
And so that can be a little bit tough when you have concepts baked into how
the native clients communicate across the FFI that just don't work in Wasm.
And we end up with a lot of abstractions.
Like we have, you know, there's the Send and Sync traits in Rust, but we have a MaybeSend and a MaybeSync trait that handle, for people, you know, translating between, oh, is this going to be single-threaded or multi-threaded, and so forth.
And so the WASM boundaries are probably the toughest, to be honest.
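A common way to express such a trait, and possibly close to what 1Password does (their exact definitions aren't shown here), is a conditional alias that requires Send on native targets and nothing on wasm32, where everything is single-threaded:

```rust
// Hypothetical sketch of a "maybe Send" trait alias; names and details
// are illustrative, not 1Password's actual code.
#[cfg(not(target_arch = "wasm32"))]
pub trait MaybeSend: Send {}
#[cfg(not(target_arch = "wasm32"))]
impl<T: Send> MaybeSend for T {}

// On wasm32 there are no extra requirements.
#[cfg(target_arch = "wasm32")]
pub trait MaybeSend {}
#[cfg(target_arch = "wasm32")]
impl<T> MaybeSend for T {}

/// Framework code can then take one bound that adapts per platform.
pub fn spawn_task<F>(fut: F)
where
    F: std::future::Future<Output = ()> + MaybeSend + 'static,
{
    // On native this could hand off to tokio::spawn; on wasm32 to
    // wasm_bindgen_futures::spawn_local. Here we just drop the future.
    let _ = fut;
}
```

The same code then compiles for both a multi-threaded tokio runtime and a single-threaded browser environment, with the Send requirement enforced only where it actually matters.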
Okay. Yeah. The MaybeSend and MaybeSync sounds interesting.
Wouldn't that be a question mark Send and question mark Sync?
I think it could be. Yeah, I think what we have, so the way we've broken this
out, right, is because we want, you know, code to be reusable across platforms, right?
But it may not necessarily work quite the same. Let's say for passkeys, you know, or autofill: that happens completely differently on iOS than it does on Android than it does on, you know, Windows or wherever. And so there's all these different implementations. But you want to do the same process. It's just that all the underlying calls, you know, there may be syscalls or things like that, where we're reaching back out into the OS, and they work differently.
Or, you know, some of them, you have to do it in the moment,
some of them, you have to seed some operation at unlock, you know, or whatever.
And so what we try to do is we have these, you know, we have FFIs by platform.
And then we have like an app level, where you could have multiple platforms
that use the same app level crate.
But then what that does, that reaches out to services. And the service level
is where all of your actual functionality comes in.
For example, sync, there is a service that handles sync, and there's,
you know, a different service that handles autofill and so forth.
And then those reach out into underlying implementations of things.
And so what we do is we try and make a trait that is like, okay,
here is, you know, let's say, I don't know, passkeys or something like that.
And then underneath that, it's like, all right, get the passkeys.
But then below that, it's like, well, what do we need to get?
Which data are we getting?
When are we getting this? You know, how is it being provided?
Do we need to get anything from, you know, the enclave or the operating system
or whatever it is? And those get implemented by platform.
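The layering described here, a service written once against a trait with per-platform implementations underneath, could be sketched roughly like this; all names are invented for illustration:

```rust
// Hypothetical service/platform split: the service layer is written once
// against a trait, and each platform crate supplies its own implementation
// behind the FFI. Names are invented, not 1Password's actual code.
trait PasskeyStore {
    fn credential_ids(&self) -> Vec<String>;
}

struct IosStore;     // would call into iOS enclave / autofill APIs
struct WindowsStore; // would call into Windows Hello, etc.

impl PasskeyStore for IosStore {
    fn credential_ids(&self) -> Vec<String> {
        vec!["ios-credential".to_string()]
    }
}

impl PasskeyStore for WindowsStore {
    fn credential_ids(&self) -> Vec<String> {
        vec![
            "win-credential-1".to_string(),
            "win-credential-2".to_string(),
        ]
    }
}

// The shared service logic never knows which platform it's on:
fn autofill_candidate_count(store: &dyn PasskeyStore) -> usize {
    store.credential_ids().len()
}
```

Each platform crate wires its own store into the service at startup, so the feature logic stays in one place while the OS-specific calls vary underneath.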
And that gives us a little bit of flexibility in how we build things up. But what that means is, if I'm doing something that is generally,
you know, we have built this and have used it, you know, let's say four or five
years now, completely async, right?
And we've been using, you know, we've spun up a ton of threads to do this thing or whatever, or it's got to be on a background task or something like that.
Well, now maybe that doesn't work in wasm.
And so I think Send and Sync have been our easy traits to layer in: all right, put this bound on there to make sure we can keep these things moving forward. And then that becomes a big challenge throughout everything, right?
When you've got 500,000 lines of Rust and now you're trying to build in wasm, that's a lot different than starting from scratch and building in wasm support. We could have done things a lot differently, but there are hundreds of features across a dozen products or whatever that are trying to use this stuff, and going back and refactoring them maybe just doesn't add up right now; we don't have time for that.
So there are probably other ways we could use those mechanisms, or whatever they call them, to do this better. I think these are probably temporary solutions.
I fully get it because at some point you also need to ship things and this seems
like an escape hatch that kind of was a pragmatic approach to solving this problem.
What a lot of people forget is that these are apps that need releases in order to change; and if there are breaking changes, it's not that easy, because those pieces might be used in other parts of the application.
So the other thing I wondered was: is there a guideline for keeping the core synchronous Rust?
How deep does the async part go? And at what point do you stop using async?
And do you start having your own little, say, domain, which is sync?
Is there such a differentiation? Or would you say that doesn't make any sense
at your scale or for the project you do?
You know, it might make sense in some places. I don't know that we really do
it much, though. It's definitely pretty async down to the root of things.
I mean, the way our requests from the client into the core work is: you make this request over the FFI, which is sort of inherently async, right?
Because that's not an API where you're waiting; it's not baked in like the HTTP protocol, where you're going to get a response back in-line.
So you send this request into the core over the FFI. And then the FFI holds on to a sender, and it hands off that request to a thread that is watching the receiver.
And that thread dispatches invocations, these requests that do the work, handle the types and so forth, and then come back.
And the way that we handle that is: the front end sends in a callback, and once the request is done processing, that invocation dispatch loop executes the callback with the resulting data as an input.
That way, the client side keeps a registry of all the requests it has made, each with an identifier.
So when that callback gets executed, it can put the result into its local cache or store, or plug it into a React component or whatever it might be.
But the whole process is inherently async.
That's baked into everything, from the client side all the way down to the database and back.
So I'm sure there are some things where it's like, hey, we need to do multiple things and they need to be synchronous, in order, but they're a bit of a bubble that's still within an async flow end to end.
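A minimal std-only sketch of that sender/receiver/callback flow might look like this. The names (Request, process) and the channel shapes are illustrative, and the real core uses an async runtime rather than raw threads:

```rust
use std::sync::mpsc;
use std::thread;

// A request carries its payload plus the callback the client registered.
struct Request {
    payload: String,
    // The dispatch loop runs this with the resulting data once done.
    callback: Box<dyn FnOnce(String) + Send>,
}

// Stand-in for the actual work an invocation performs.
fn process(payload: &str) -> String {
    format!("processed: {payload}")
}

fn main() {
    // The FFI layer holds on to the sender...
    let (tx, rx) = mpsc::channel::<Request>();

    // ...and a background thread watches the receiver, dispatching
    // invocations and executing each callback when the work is done.
    let dispatch = thread::spawn(move || {
        for req in rx {
            let result = process(&req.payload);
            (req.callback)(result);
        }
    });

    // A client "request over the FFI": fire-and-forget plus a callback.
    let (done_tx, done_rx) = mpsc::channel();
    tx.send(Request {
        payload: "sync".to_string(),
        callback: Box::new(move |result| {
            done_tx.send(result).unwrap();
        }),
    })
    .unwrap();

    println!("{}", done_rx.recv().unwrap()); // prints "processed: sync"

    drop(tx); // close the channel so the dispatch loop exits
    dispatch.join().unwrap();
}
```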
The callback mechanism that you describe sounds a lot like the Waker in Tokio,
but just on a higher level, I would assume, like on a data level somehow.
Exactly, yeah. I guess it's like building our own hand-rolled Waker, in a way. The client technically sends off this request, and it's done, in theory.
Or technologically it is done. But it has set up this function that is somewhere in memory, and it has handed us off where this thing is.
So then, when we call that code with this data, it kicks off a new thing. It just so happens that we have, in code, tied those two pieces together, right?
So it's sort of like the mechanism of a Waker, except that we're not actually polling it or anything like that.
It is inherently async as to when that thing gets called back.
Did you have to use any unsafe code to pull it off?
I'm sure in the FFI there's some, just by the nature of how FFI works.
I'm sure the calls across the actual FFI boundary are wrapped in unsafe.
And then, we talked about zeroizing-alloc; I'm sure there's some amount of wrapping there to ensure that bytes are flipped to zeros in allocations that were not being handled by the Rust allocator.
But in general, the process is fairly safe, right?
You make the call over the FFI, and because that side is holding this object with the sender, once you send that across, that's fine; everything there can be done safely.
And then executing the actual callback probably technically has to be wrapped in unsafe, I would guess, because you are essentially triggering just a memory location.
But a lot of that is based on guarantees that we can make on both sides of the equation.
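For illustration, the kind of unsafe callback invocation being described might be sketched like this. It's simulated in-process; in reality the function pointer would arrive over the FFI from native code:

```rust
// A callback with C ABI, as it might be registered by a native client.
unsafe extern "C" fn on_done(result: i32) -> i32 {
    // In a real app this would hand the result back to native UI code.
    result * 2
}

// Invoking a foreign function pointer must be wrapped in unsafe:
// the compiler can't verify what lives on the other side.
fn run_callback(cb: unsafe extern "C" fn(i32) -> i32, value: i32) -> i32 {
    unsafe { cb(value) }
}

fn main() {
    assert_eq!(run_callback(on_done, 21), 42);
}
```

The unsafe block is confined to the single call site, which matches the "fairly safe overall, unsafe at the boundary" shape described here.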
So you mentioned 600 crates. And when I heard that number, I was, well, awestruck.
How do you not drown in dependency hell? How do you keep all of these crates up to date?
Yeah, I think you're assuming we aren't drowning in that. No, it's not too bad. I mean, part of it is automation, right?
Making sure that we're running dependency monitoring for vulnerabilities and stuff like that, to keep everything up to date, and making sure that we're not passing pipelines that have vulnerable dependencies.
And if we have a dependency that has become vulnerable, we're updating it quickly.
But beyond that, it can be tricky to manage all those dependencies.
A lot of it just comes with the architecture. We have directories, right? So we have, let's say, a data directory.
And in this data directory, okay, you are allowed to import anything else that's in the data directory, or any third-party dependency that's safe, but you can't import anything from services or apps and so forth. We create a bit of an inherent hierarchy to these things.
And so a lot of it is just making sure that we have clearly defined rules. App crates can import anything that they want, except the FFI.
And app crates should only be imported by the FFI; no one else should ever import an app crate. So there's our first level.
And then we go to services: all right, services can be imported by each other or by app crates, but nobody else.
And data can do this, right? You create a little bit of these rules, and then you end up with a ton of things that fall outside of them.
They may be crates that define types, or a server client, or all these different things. And that's where you get into the difficult parts.
If we have a server client, who's allowed to use that? Can anyone call it from anywhere, or should it ever talk to the data crates, or should we keep those separate inherently?
What about types? What if we have a type that is this way on the server and also that way in the database? Do we want to duplicate it, to let drift be okay, so that we're not modifying both all the time, or one all the time?
So you definitely get into challenges that come with having that many crates, and there's probably more automation that wants to happen. But the short answer, I guess, is that there is a lot of human process. We have different teams that own different concepts. They don't necessarily own specific bits of code.
They may, but in general, they own concepts.
And because a team owns a concept, we have automation that will say, okay, these bits of code definitely touch that concept.
Or just a lot of people going, okay, this should probably be reviewed by such-and-such team.
And just making sure that we keep everything as clean as possible.
When we do code review, we're looking at it to go: does this order of imports make sense?
And we'll have things like, oh, they implemented this on some model, but it doesn't need to be there; it was only being consumed in one place.
So let's just inline all of that. Let's get rid of that crate, that whole model concept, and inline it right where it was.
And then we may find out in the future, oh, now somewhere else wants to use it. Okay, let's pull it back out and put it back into the model, and we'll do that, and that's okay.
We want to avoid the "well, we might need it in the future," because that's when crates really become a problem.
We put it in this new crate because we might need it somewhere else? Let's not do that. Let's just inline it until we know we need it somewhere else, and then we'll deal with pulling it out.
And that's really the only way that you kind of bake in avoidance of crates just proliferating everywhere.
It sounds like you're treating crates like cattle. I wonder how much the Rust ecosystem helps with that, and how much of it is still manual, because there are tools like cargo-deny, for example.
Do you have any tools that you like, outside of, say, Clippy?
Yeah, a lot of Clippy here. Definitely cargo-deny as well.
Make sure that, again, if it's an FFI crate, you should only be importing certain things, or if it's an app crate, you shouldn't be importing certain things, and so forth.
Again, it probably wants more automation than we have.
A lot of it is just the code architecture, right? If we make sure that the data layer is self-contained, and then we make sure FFI-to-app-to-service is a self-contained flow, well, everything else becomes domain-specific.
We're not creating global things beyond that, wherever we can avoid it, at least.
So if your team, let's say, works on autofill, you get to use the data layer in prescribed ways, and then you import all your stuff into a service, which goes into an app, which goes into the FFI.
Anything in between there, that's up to you. If you create a circular dependency or whatever, that's your problem.
Or if you create a bad architecture between these things, well, you have to own that; you've got to maintain it.
So you're going to find the problems with that, and generally fix them yourself.
And so a lot of it is just basic architecture: keep the top layer contained, keep the bottom layer contained, and then everything in the middle is feature-specific or domain-specific, so that those bounded domains are dealing with themselves.
And there's a couple of other things that have this. We have a foundation directory, which deals with OS-specific calls; anything that's Windows-specific has its own area, and so forth.
So there are a few other things we do like that, where it's like, okay, that is maintained by this platform advancement team, and if you need changes there, you work with them to make those changes.
But you are just consuming those things, and any interweaving of crates that you do within your bounded domain needs to happen within the box you were provided.
It definitely does rely on a little bit of that automation, like you said, cargo-deny and Clippy and so forth, to ensure that we are cleaning things up where we can. But it's definitely a manual process a lot of the time.
I'm not sure if it's true, but I heard that 1Password uses Nix for their development environment.
Can you say a few words about that?
Yeah, so we definitely have a lot of Nix usage.
On the core side, at least, it powers a lot of our build processes and stuff like that, and we use it to manage our toolchains as well.
And that has been really, really nice, because there is so much toolchain stuff that you need when you're working across all these platforms.
You might need all of these Linux-specific tools, and these Mac-specific tools, and all the things that make Android Studio work, as well as internal tools, right?
We have the 1Password CLI, which we use for things, and the TypeShare tool, which is a CLI tool that is used for things. And whenever we make updates to TypeShare, that needs to become part of developers' toolchains.
So we use Nix to manage all of that.
That way, whenever there is an update, if I pull main and the Nix flake has changed on main, well, my toolchain just gets updated.
I don't even notice the difference; I have to wait a few seconds for it to pull everything in and build the toolchain, but from that point, I don't care.
And there's the direnv tool, which basically sets up a directory to automatically use the Nix flake as your toolchain for that directory. So I don't have to worry about version management.
I don't have to worry about: did I run the install script too long ago, and did somebody add something to it since then?
That's not my problem; Nix just handles all of that.
And it's been really amazing. I mean, there's a bit of Nix-specific knowledge: flakes, the language, all these different things can require a little bit of playing with to really understand what's happening. But it's been really powerful, and it's helped.
We have a concept here, which I know some other companies have as well. We call them guilds: cross-team groups around a shared interest in a topic. So there is a Rust guild, right?
And we have Rust developers on different teams. My team has, let's say, four Rust developers, and then some other developers.
But then the autofill team has a couple of Rust developers, and this other team, and so forth.
And we maybe don't collaborate all that directly, but in the Rust guild, we can just talk Rust.
We don't worry about interactions with other things; we just talk about how we want to write Rust, here's an interesting article about Rust, and so forth. And we have one of those for Nix as well.
So the build-and-release people are in there, and some of the people from my team who are just kind of Nix fans, and so forth.
And that helps us to make sure that if someone who maybe owns the toolchain is working on something and says, hey, I couldn't figure out how to do this, well, now there's a bunch of other people who work on this in their spare time, or have their home setup in Nix, who can say, oh, I've done that; here.
And we can augment the knowledge on various teams and really help to skill people up.
Like the Rust guild: we run a Rust study group that had, I think, 160 people in it or something like that the last time I looked.
And so it's people going through the book together, learning from some of the Rust experts that work here, working on projects together, all these different things. It helps us really raise the floor of skills in all these different domains.
About the external dependencies, so any of the popular crates,
are there things that you use from the ecosystem that you kind of wanted to mention?
Could be anything, really.
Yeah, I mean, obviously, like I said, we use Tokio a lot, and we use the whole Tokio ecosystem, right?
We're using Tokio itself for channels and all these different things.
We use axum for various services, and so forth.
Tower, the Tower middleware, is really, really helpful because it allows us to share things.
If we have one feature that's totally independent from some other feature, but they create a middleware that handles, say, authentication, great: now this one can use it too, because we auth the same way.
And so it allows a lot of sharing there.
Tracing is really critical. Again, with so many features and so many layers, tracing gives us a lot of visibility into: did I make something that just slowed everything down? Where is my slowdown occurring? So that's great.
Serde is obviously pretty critical as well, right? We have to do a lot of serializing and deserializing, coming in from the server and so forth. So that's really helpful.
And then some of our internal crates, like I mentioned: TypeShare, passkey-rs, and so forth.
The only other ones that I know of that come up a lot are rusqlite, which is important for SQLite interaction and makes things a lot easier, and then testing things: mockall is good, and we use cool_asserts.
We also have some developers here who have public crates that we use that are nice.
One of our developers, I think, works on Neon, which handles the Node.js bindings for the Electron apps.
Nathan West, who's here, made nom-supreme, which is a nom helper library; it's really cool, I really like nom-supreme.
And so there's a lot of really cool things.
One of my teammates, Ivan, also has a project called Crane, which helps with building Nix projects.
I don't think we're actually using Crane here. But it's funny, because we'll come across open-source projects, we'll talk to other teams.
We were talking to the team that built the Zed editor when we were at RustConf, and we opened up some communication with them, trying to help beta test some stuff they were working on.
And they were using Crane to build Zed. So I was like, oh, that's funny, that's something that one of the people here made.
It's kind of neat seeing the ecosystem come together.
Yeah, totally. The code sharing that happens between different companies using Rust is always very surprising.
And by the way, if someone wants to listen to the episode with Zed, it was a couple of episodes before this one. Those folks are doing amazing work.
We can't talk about all of the crates, but one thing specifically that I wanted to ask you about was the tracing ecosystem of crates. Specifically, do you use tracing in combination with the log crate, or do you use tracing for tracing plus logging, which is a very popular usage of that crate?
Yeah, so that is actually something that I think is coming about now. I mean, we've used tracing primarily for tracing, right, for sort of internal performance telemetry. And we have a lot going on there.
But one of the other things now, especially in our cloud infrastructure, is making sure that, well, we have services in various languages, especially now that all these other companies are coming into the fray.
There's Ruby and .NET, I think, and all these other things.
And so we want to make sure that our observability is not hindered by that.
So we have some intermediate things handling, let's say, logging output and making sure it is normalized, so that we can handle observing the whole ecosystem cross-service, right?
And so that's something I think we're actually doing now: using tracing and tracing-subscriber to make sure that our log output meets the standard, essentially, so that any given Rust app can output its logs in the right format and they're consumable by these other services.
Are there any things that you specifically like or dislike about tracing right now?
I don't know. I didn't work on the implementation of it too much, so I would say I don't have a lot that I dislike about it.
It's really nice to be able to just jump in, take some function, and go: I'll just add the tracing macro to this, right?
And now I'm getting data about it. I can instrument these things very quickly and easily, and I'm seeing everything that I want.
It's been a huge help. One of the things we were looking at was: when we do these invocations, they are sort of inherently async, but you may have invocations that are performance-heavy, right?
Something that has to, let's say, iterate through a giant list, and it just has to happen that way.
So we have a blocking thread pool; Tokio lets you run a blocking task on that blocking thread pool.
But you also have async tasks, and async tasks tend to be more cooperative in a lot of cases.
And so we got down to the point where it was like: we have hundreds of these invocations. How do we know which ones should be blocking and which should be async? If it's got an await, it needs to be async.
But out of the hundred that maybe don't have an await, which ones are better served on which thread pool?
And so we can use tracing, and just go through and literally flip one from blocking to async, do the minor syntax conversions we need to do, and A/B one against the other: okay, here was this one, here was that one; oh, it's faster that way, let's put it over there.
It makes it really nice to be able to just jump through all these things. I did that, I guess two years ago now: ran through every invocation as it was, then flipped them all to the other kind and saw which was faster, and had an MR with everything, here is what came up faster, with all my results from tracing.
So tracing is, I mean, it's incredible.
That's a really nice use case for tracing. The other thing is tokio-console, which is similar. I don't know if you've heard about it, but it shows you the state of Tokio, the futures being executed, and how long they take.
Did you hear about that?
Yeah, tokio-console is pretty awesome as well. I think Ivan, actually, the one that I mentioned who works on Crane, is also one of the maintainers of Tokio.
Ivan has helped me learn a lot. Like I said, I had no Rust background at all when I got here, and Ivan is a Rust expert.
We have a few others as well; like I mentioned, Nathan is absolutely incredible, just the things that they know when it comes to Rust.
But I feel like I've learned more from Ivan in the last three years than I learned in the prior seven or eight combined.
But yeah, showing me: when you've got the client app running, and it makes this request, and we're dispatching this one task, and that may kick off other tasks, and the database access is all asynchronous, so there's a queue of these requests, and then there are events that get triggered off of that, which may trigger other async tasks.
All this stuff is going on, and looking at it in the console is a huge help for getting visibility into what's really happening.
I see a log message, sync starting, and then a log message, sync completed.
But inside of that, there were so many tasks that spun up, and some were slow and some were fast, and so forth. So yeah, like you said, tokio-console is really great for understanding the futures that happened.
We use tracing, I think, more for function calls. In our case, at least, it's: we called this function, it is instrumented, I see this function took, well, 10 seconds would be a long time, let's say it took 100 milliseconds.
And inside of that, here are three others that took 10 milliseconds each, right? And I get that visibility.
But seeing the futures that kicked off, and how long those took, really does give you another level of visibility into what's happening under them.
If I understand correctly, you don't trace every future, but you kind of group
them into logical units, things that make sense to be executed together,
just to get some insights.
Yeah, and that's another thing that's really great about the way the Tokio tracing crate is set up: you have spans, right?
You can create spans, enter spans, all these different things, and we use those pretty heavily.
There are probably scenarios or code paths where individual futures are traced, but in general, a lot of the time it'll just be: when we're going into some block of code, how long is this taking? What is happening here?
So it's not always every single future; it's specifically what we're trying to instrument.
Having the console as well is very helpful, but those spans make it nice to have a logical grouping, like you said, of different concepts.
So I can see, within the span, how long the whole span takes, and then how long individual blocks of that span take. Whether it's multiple futures or one, I may or may not care how long the individual futures take, right?
Let's say I have some future that does some unit of work, and that unit of work has to happen; there's no other way to do it.
Maybe I don't necessarily care how long those futures take, but I do care how long this block takes.
And that may help me determine: oh, this is a Vec, and it doesn't need to be a Vec. Maybe, with the data sets we have, it makes more sense to spend the upfront time hashing this, or making it a BTreeSet or something like that, and ordering it all, so that the thing we most often do, inserting somewhere, gets that speed-up on insert.
And it's often worth the time to sort it. So yeah, the spans are really nice.
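As a std-only sketch of what a span gives you, here is a toy guard that times a logical block the way tracing spans do. In practice `tracing::info_span!` and subscribers do the real work with fields, nesting, and output routing; this only shows the shape of the idea:

```rust
use std::thread;
use std::time::{Duration, Instant};

// A guard that measures how long a logical block of work takes,
// reporting when it goes out of scope, loosely like a tracing span.
struct SpanTimer {
    name: &'static str,
    start: Instant,
}

impl SpanTimer {
    fn enter(name: &'static str) -> Self {
        SpanTimer { name, start: Instant::now() }
    }

    fn elapsed(&self) -> Duration {
        self.start.elapsed()
    }
}

impl Drop for SpanTimer {
    fn drop(&mut self) {
        // Report the span's duration when the guard is dropped.
        println!("span '{}' took {:?}", self.name, self.elapsed());
    }
}

fn main() {
    let span = SpanTimer::enter("sync");
    // Some unit of work inside the span; the individual futures or
    // steps inside it may or may not be measured on their own.
    thread::sleep(Duration::from_millis(20));
    assert!(span.elapsed() >= Duration::from_millis(20));
}
```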
Well, some people might listen to this and say: I will never get to this level of proficiency.
And maybe they're curious what it feels like, day to day, to work together with maybe 100 other Rust developers on the same code base.
What are some of the patterns that you see evolve over time?
What is the complexity like?
Is it a thing that people can teach themselves?
Are there any guidelines? What are some things that have evolved since you started working with Rust, where you would say: you should definitely follow this, these are the best practices that we see at 1Password?
Yeah, I mean, I would say first and foremost: when I got here, when they interviewed me, they tried to show me some Rust code, because I had never done it at that point.
It was essentially a number-aware sort, making sure that, you know, 1Password shows up in the right order in a list, or something like that.
And I could not get through it at the time. I was generally following, okay, it's trying to do this, but it was using these chunks, and I didn't really know what that was.
And then it did a little bit of memory stuff, and I had never touched memory, coming from TypeScript. So I had no clue what was going on.
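For the curious, a number-aware (natural) sort along those lines might be sketched like this. The chunking approach is a guess at what that interview exercise did, not the actual 1Password implementation:

```rust
use std::cmp::Ordering;

// Split a string into runs ("chunks") of digits and non-digits, e.g.
// "item10" -> [(false, "item"), (true, "10")].
fn chunks(s: &str) -> Vec<(bool, &str)> {
    let mut out = Vec::new();
    let bytes = s.as_bytes();
    let mut i = 0;
    while i < bytes.len() {
        let is_digit = bytes[i].is_ascii_digit();
        let mut j = i;
        while j < bytes.len() && bytes[j].is_ascii_digit() == is_digit {
            j += 1;
        }
        out.push((is_digit, &s[i..j]));
        i = j;
    }
    out
}

// Compare digit chunks numerically so "item2" sorts before "item10".
fn natural_cmp(a: &str, b: &str) -> Ordering {
    for (x, y) in chunks(a).iter().zip(chunks(b).iter()) {
        let ord = if x.0 && y.0 {
            // Both chunks are numeric: compare as numbers.
            let nx: u64 = x.1.parse().unwrap_or(u64::MAX);
            let ny: u64 = y.1.parse().unwrap_or(u64::MAX);
            nx.cmp(&ny)
        } else {
            // Otherwise, plain lexicographic comparison.
            x.1.cmp(y.1)
        };
        if ord != Ordering::Equal {
            return ord;
        }
    }
    // All shared chunks are equal: the shorter string sorts first.
    chunks(a).len().cmp(&chunks(b).len())
}

fn main() {
    let mut items = vec!["item10", "item2", "item1"];
    items.sort_by(|a, b| natural_cmp(a, b));
    assert_eq!(items, vec!["item1", "item2", "item10"]);
}
```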
And luckily, they hired me anyway. I don't know why, but they did.
And so the last three years are everything that I have learned when it comes to Rust; that's the only time I've ever seen it.
So it's definitely something you can pick up. I have not done it alone, right? I have had multiple teachers who are absolutely incredible when it comes to Rust.
And like I mentioned, we had that Rust study group, which I took part in in the first iteration, when my co-worker Tanner had set it up.
It had been so massively helpful to me that I ended up taking it over and running the second iteration, and now we've grown this into the whole Rust guild.
All that to say: anytime you can learn from other people, it will be massively helpful, I think, especially in Rust, because you may be coming from languages where the concepts just don't exist, right? And you're trying to learn all these brand-new things.
So that's a huge help. But you can learn it on your own, too.
There are a lot of good resources out there now. One of the things that's been really incredible in the Rust community is that a lot of the amazing Rust books are just free, right?
Like the Rust Atomics and Locks book from Mara is free; you can get it on the internet, and also buy it if you can.
There are so many of these things. The Rust book itself is really great.
One of the hardest things about getting from zero to productive, I found, and I spoke at RustConf a bit about this this past year: what we found teaching people is that the issue is not learning the Rust syntax.
It tends to be that good Rust requires a bit of engineering knowledge, and that's maybe not engineering knowledge that the book covers, right? And you and I talked about this a little bit: how do I know when I should make a new crate?
Well, the Rust book isn't going to tell you; that's not really a Rust problem. That's a your-code-base problem; it's an engineering problem.
Rust tends to expose your lack of knowledge in other areas. And again, I have no computer science background; I was a construction worker, and I worked at a drugstore, before I became a developer. So I had no idea about binary trees and sorting. I still really don't, to be honest.
But Rust has a way of bringing those things out in you, which is hard at first, but it also helps you shore them up, because you see where your gaps are.
And there is so much information out there that you can learn from. But it does take time, and not everybody has time, so that can be a bit difficult.
Anyway, that's a long way to say: you can learn on your own, but learn from others wherever you can.
Look at code, try to understand it. And then, getting into patterns, a lot of it has been thinking about two things heavily, I think.
First: why are you abstracting something? Because abstractions are really big in Rust, right?
Like I said, we use rusqlite, but we wrap every transaction so that we add some counter that we want to follow in debug mode.
And then we have to wrap, say, the call to insert a row, to increment that counter.
So you end up with abstractions on abstractions on abstractions, and if you don't know what you're looking at, those are even harder to follow.
So make sure your abstractions make sense, and if they have to be complicated for any reason, document them really well.
That's critical. Even if you're the one that's going to come back and read it, make sure you know why you made this abstraction.
That's really important. And then the second thing I think is trying to make
sure that you are not driving performance at the cost of simplicity, right?
You know, if I look at this and it's like, well, technically the fastest way
is creating this oneshot channel that watches, and once that gets
something, it kicks off a receiving loop on a receiver, and then it does this lookup.
If you can do that with a find over a Vec or whatever, just do it.
Unless you really, really need to eke out that performance,
keep things simple where you can. You'll get to places like 1Password
where it's like, yeah, we have to do all this zeroizing allocation and we can't log dynamic strings.
There's a whole loggable trait
and this whole process it has to go through to make it work.
But you don't need to understand the internals of that every time, right?
Just understand what you have to, and keep things simple wherever you can.
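The zeroizing allocation Andrew mentions (1Password open-sourced zeroizing-alloc, linked above) boils down to a global allocator that scrubs memory before freeing it, so secrets don't linger in deallocated heap blocks. Here's a rough sketch of the idea, not the real crate; a production version also needs volatile writes or compiler fences so the zeroing can't be optimized away.

```rust
use std::alloc::{GlobalAlloc, Layout, System};

// Wraps the system allocator and zeroes every block before it is
// freed, so secret material doesn't survive in released heap memory.
struct ZeroOnFree;

unsafe impl GlobalAlloc for ZeroOnFree {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // Overwrite the contents before handing the block back.
        std::ptr::write_bytes(ptr, 0, layout.size());
        System.dealloc(ptr, layout);
    }
}

#[global_allocator]
static ALLOC: ZeroOnFree = ZeroOnFree;

fn main() {
    // Heap data passes through the zeroizing allocator transparently.
    let secret = String::from("correct horse battery staple");
    println!("len = {}", secret.len());
    drop(secret); // the backing buffer is zeroed on free
}
```

This matches his point: application code never sees any of this. You opt in once at the allocator level and then keep the rest of the code simple.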
Even here, you know, we try to make sure that we're holding each other accountable
for like, it doesn't need to be that hard.
Keep it simple. Yes, it's technically faster this other way.
We don't have realistic data sets that need that speed up.
You know, when we get there, we'll do that. Yes, in the future,
we might have those and we might see a slowdown. And guess what,
when that happens, we'll fix it.
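The simplicity-first point can be made concrete with a small, hypothetical lookup. The straightforward iterator version below is the one Andrew is arguing for: reach for channels and background tasks only when profiling on realistic data says you must.

```rust
// A tiny lookup: find the first user matching a name.
// For realistic data sizes, a linear scan over a Vec is simple,
// obvious, and fast enough; no channels or tasks required.

#[derive(Debug)]
struct User {
    id: u32,
    name: String,
}

fn find_user<'a>(users: &'a [User], name: &str) -> Option<&'a User> {
    users.iter().find(|u| u.name == name)
}

fn main() {
    let users = vec![
        User { id: 1, name: "alice".into() },
        User { id: 2, name: "bob".into() },
    ];
    // The simple version; optimize away from it only when
    // measurement proves you need to.
    let hit = find_user(&users, "bob");
    println!("{:?}", hit.map(|u| u.id));
}
```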
And so forth. So I think a lot of it just comes back to Rust is complex.
Keep it simple wherever you can. Learn everything that you come across,
but, you know, take it one thing at a time. You will eventually get there.
And that pattern is true all the way up into 1Password where things are complex.
Keep it simple where you can and then learn things one at a time when they can't be simple.
I can totally get behind that. Makes a lot of sense. It matches with what I see out there.
Yeah.
We're getting close to the end, and traditionally the final question is:
what's your statement to the Rust community?
Yeah, so I mean, I think one of the, you know, there is some amount of,
like, thank you to the Rust community for keeping things open. Like I said,
there are so many books out there, and the Rust Book, and the Brown version of the
Rust Book is amazing with the quizzes and things.
That's all great. I think we also need to continue to or increase how often
we relate to people, right?
Meet people where they're coming from, understanding, you know,
what they do and don't know.
Obviously, with someone coming from my background, that was really critical
that people here did that, you know, and there's a lot of, you know,
acronyms or computer science knowledge.
And there are so many people in Rust that are so smart, it can be difficult,
you know, sometimes coming in and going, you know, they're like,
well, you should use a BTreeSet.
Why? What? I don't understand why that matters, you know, and so forth.
And just a lot of little things, right?
Understanding how you learned what you learned, you know? And it's like,
oh, well, in my OS class, we did this.
It's like, oh, I didn't have that. So there's a lot of that
that I think we can continue to improve on: finding paths to not just go
from zero to Rust, which I think we've started on, or have a really good
path to with the Rust Book, or to go from Haskell to Rust or C++ to Rust or whatever, right?
I think there is a missing middle gap of how do you go from TypeScript to Rust
or this or that where it's like, I understand variables. I understand all that.
So the whole beginning of the Rust book feels boring and I'm getting lost.
And then we go straight into memory or, you know, async and really how async
works. And there are these missing pieces sometimes.
And so I think finding practical applications of Rust is really critical.
One of the big things I push for a lot within 1Password in the Rust study group
is how can we build one cohesive thing?
So it's not learning all the Rust concepts one by one, and it's not getting
straight into the 1Password code base, which is terrifying.
It is: how do I learn these concepts and
apply them to one thing that I have now built? And now I
understand why a Tokio mutex and
a standard mutex are different, why it matters that they're different,
and why you might need both in different cases, right? I've seen it in use, and now
I can understand thread boundaries, or await boundaries I should say,
whatever it might be. And stuff starts to make sense to you because you've
had a real use case for it,
not just what it was when I learned it, right?
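The Tokio-vs-std mutex distinction he mentions is exactly the kind of thing that clicks with a small use case. Here is a minimal std-only sketch; the tokio lines appear in comments, since they would require the tokio crate.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// std::sync::Mutex blocks the calling thread, which is fine in
// synchronous, threaded code like this:
fn main() {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    println!("count = {}", *counter.lock().unwrap());

    // In async code the distinction kicks in: holding a std
    // MutexGuard across an .await point blocks the executor thread
    // instead of yielding, which is why tokio::sync::Mutex exists:
    //
    //     let lock = Arc::new(tokio::sync::Mutex::new(0));
    //     let mut guard = lock.lock().await; // yields, doesn't block
    //
    // Rule of thumb: std Mutex for short critical sections with no
    // .await inside; Tokio's Mutex when a guard must live across awaits.
}
```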
There's things like rustlings, which is incredible for exercise-based learning.
But at the end, I didn't see exercise six mesh with exercise 10.
And now I understand that those are two maybe related things.
And so a lot of people say, right, build stuff. That is the greatest way.
But that's not realistic for everyone. I was working before I became a developer.
I was working 50, 60 hours a week and then you know, coming home and trying
to keep, you know, my apartment clean and all these things.
And I did not have that much time left to just build things.
It took me a year or more just to try and figure out, you know,
React or whatever it was at the time.
And if it were Rust, I was trying to learn, I have no idea how long it would have taken, you know.
And so I think just finding ways to bridge those gaps and make cohesive learnings
that make concepts actually make sense holistically.
I think that's a really, really huge area that will help a lot of people close
the gap of Rust, which will help Rust adoption.
You know, I've seen that even here, where it's like, we may have something we
build in Go, because ramping that team up on Rust is just not feasible in the timeframe we have.
Well, if there's better resources to get from Go to Rust, that becomes a non-issue,
you know, and I think we'll see Rust grow the more we are open to bringing people
in from more intermediate spaces into Rust.
What an amazing and insightful answer. I can really get behind it.
I can fully relate to that.
Andrew, thanks so much for being on the podcast today.
Yeah, thank you very much for having me. I really, really enjoyed it.
Rust in Production is a podcast by corrode. It is hosted by me,
Matthias Endler, and produced by Simon Brüggen. For show notes,
transcripts, and to learn more about how we can help your company make the most
of Rust, visit corrode.dev.
Thanks for listening to Rust in Production.