Rust in Production

Matthias Endler

AMP with Carter Schultz

AMP's Carter Schultz discusses AI, robotics, sustainability, adaptability, Rust's benefits, and future innovation for recycling solutions.

2024-05-16 69 min

Description & Show Notes

Imagine you're faced with the challenge of building a system that can handle billions of recyclable items per year, with the goal of being better than a human at identifying what can be recycled. Material classification is a complex problem that requires a lot of data and a lot of processing power, and it is a cutting-edge field of research. Carter Schultz and his colleagues at AMP chose Rust to build the core of this system -- and it worked "shockingly well". In this interview, Carter, charismatic and witty, shares his experience of getting your hands dirty with Rust and building a cutting-edge, production-ready system which can now be replicated across the world.

About AMP

AMP gives waste and recycling leaders the power to harness artificial intelligence and solve the industry’s biggest challenges. The company uses cutting-edge technology to help waste and recycling facilities improve their operations and increase recycling rates.
AMP transforms the economics of the waste industry to make recycling more efficient, cost-effective, scalable, and sustainable.

About Carter Schultz

Carter Schultz is a Robotics Architect at AMP Robotics. He has a background in robotics and computer vision. Previous employers include Neya Systems and SpaceX. An engineer at heart, Carter is passionate about building systems that work reliably and efficiently, and he pushes the boundaries of what is possible with technology. He is a charismatic speaker and curious mind with a passion for teaching and learning.

About corrode

"Rust in Production" is a podcast by corrode, a company that helps teams adopt Rust. We offer training, consulting, and development services to help you succeed with Rust. If you want to learn more about how we can help you, please get in touch.

Transcript

This is Rust in Production, a podcast about companies who use Rust to shape the future of infrastructure. My name is Matthias Endler from corrode, and today we talk to Carter Schultz from AMP about recycling billions of items per year with Rust. Okay, welcome to the show, Carter. Can you quickly introduce yourself and the company AMP Robotics that you work for?
Carter
00:00:25
Yeah, so my name is Carter Schultz. I am the robotics architect at AMP Robotics, and we are trying to solve the world's recycling problem as best as we possibly can by just throwing a lot of AI at it. For the last 10 years, I've been writing an enormous amount of C++ code to make robots sort recycling. And for the last three years, we've started writing a lot of Rust code, and I'm very excited about it.
Matthias
00:00:48
When you say you try to solve the world's recycling problem, what is the scope that you target? I just did some research, and I learned that every American produces five pounds of garbage every single day. Or do you mostly tackle the American market right now?
Carter
00:01:08
We actually have robots in Europe, Asia, and North America. I think we have like 13 countries or something. So at least for our robotics business, we've kind of gone global with that. But really, the mission of the company was to solve waste in general. We wanted to fundamentally change humanity's relationship to waste. And so we're going wherever we can, and wherever we can find leverage, to just make less waste be created and to get more and more recycled, reused, and repurposed effectively. We have the corporate slides that we present to investors, but the waste problem is real. Humanity is throwing away hundreds of billions of dollars worth of material that we invested in and spent resources to get to quality and purity, and then we just throw it away. And if you want to talk about carbon sequestration options or ways to reduce energy usage or just make the world better, there's a huge opportunity in waste. And because it's not sexy and glamorous, not enough people are trying to solve it.
Matthias
00:02:09
I do think it's very sexy. I do think it's very important too. And a lot of people right now, engineers mostly, they look for meaning in their work. And I think you would be able to reach a lot of people that are interested in hard technical problems, because I assume that's what it is. How many pieces of material are we talking about? Pieces of recycling material that you try to sort with your systems?
Carter
00:02:37
So the stat we throw around a lot is in 2022, we sorted 5 billion items, which is, you know, it sounds very impressive and I'm very proud of it. It's still a very small percentage of the waste shed. Like we're trying to get to trillions of items as quickly as we can. And, you know, when we talk about like a recycling facility, they're typically rated in tons per hour. So tons of material that they take in at a moment. So we're building facilities that are in the, you know, like 20 tons an hour range of input material that they can take in. And that just means truckload after truckload after truckload in and out constantly. So the scale kind of boggles the mind when you really start to deal with it.
Matthias
00:03:16
Where's the bottleneck? Is it a technical bottleneck? Do the robots not move fast enough? Do you just need to build bigger plants?
Carter
00:03:25
There's bottlenecks everywhere. And I think there's no part of the process that's easy, and there's no part of the process that's cheap. One of the fundamental realities right now is that an enormous amount of the money spent to make recycling happen is actually being spent on diesel fuel. Recycling commodities have a very low bulk density, so proportional to the value of the commodity, it's very expensive to ship it around. And when you talk about the fuel it takes to send recycling trucks down every street in America and pick up recycling bins, and then bring that recycling to a facility to sort it, and then trucks that take it from that facility to other facilities that repurpose it, the amount of diesel that you can burn moving the material can often exceed the value of the material. Which is why a lot of people aren't the biggest fans of recycling and don't believe in it. But when you can move the sortation facilities closer to the producers, to the people producing the recycling, when you can move the repurposing facilities closer and have more of them, you cut that travel down substantially and can make the entire process a lot cheaper. One of the big things that the industry has basically fought with for most of the last century is that generally the processing plants that repurpose the material and the sortation facilities that sort it have really good economies of scale, where when you build those facilities giant, they're much more efficient. But when you build them giant, you have to truck the material a bigger distance to get it there. And that's kind of been one of the double-edged swords of the recycling industry for a really long time.
Matthias
00:05:01
Did you spend any engineering resources on making these facilities easier to build and maybe making them easier to ramp up locally where the garbage is located?
Carter
00:05:12
That's kind of the dream of our new technology and what we've been working on. We just recently did a big unveiling, I think last week, of this AMP ONE, which is the design of our facilities. We're actually showing it publicly to the world. Up until now, it's kind of been our trade secret as to how these facilities operate internally. But we've come up with what we think is a very scalable, simple piece of technology that can sort material, which you can then compose into modular facilities of various sizes and various effectiveness. So we're doing everything we can to take the technology unlocks that we have and figure out how to leverage them the most to make the biggest difference in the waste shed.
Matthias
00:05:55
When you explained that, I was wondering how much of it is a software and how much of it is a hardware problem. So do you see AMP Robotics as more of a software or more of a hardware company?
Carter
00:06:06
You know, at the start of the company, it was a software company. What made AMP Robotics possible, and how AMP Robotics got going, was that our CEO and founder realized that neural networks' ability to identify materials and classify them correctly was on an insane exponential curve of performance. And so, you know, six years ago, we had a neural network that exceeded human sorter accuracy at identifying recycling materials. And that kind of turns the industry on its head a little bit, where previously, you know, the best technology for identifying recycling materials was hyperspectral cameras that cost, you know, $100,000 for one camera. And now with a cheap little GPU and, you know, effectively a webcam, we can identify recycling better than that. And with that added ability to see things, a lot of things were unlocked. So with that sort of core camera system, our first real product we got traction with was the pick-and-place robots that we put behind it that manually did sorting. And those are good robots that serve a good purpose in the world. But when you have that core camera system that lets you just see everything that's on the conveyor belt, you can actually do a lot of other things with that camera system. So as time's gone on, that camera system's gotten to a very matured state where we really understand how to make the neural networks see well. And more and more of the company's investment has gone into the mechanical side and into developing new hardware and new capabilities to extract value from that neural network unlock.
Matthias
00:07:41
Okay, so you move into hardware a lot. And is it also true that all of these pieces of recycling are different to handle or do you have a very scalable or maybe a very commoditized system to handle all of the different pieces of garbage?
Carter
00:08:00
So the best system that, you know, kind of existed before us, and that we're now just repurposing in a new way, are actually air jet sorters. So most of what's sorting in our facilities are what we just call jets, which is, you know, the material goes off the end of a conveyor belt. It goes ballistic, and it's free-falling in air. And while it's falling in the air at the end of the conveyor belt, you hit it with a puff of air. That puff of air is incredibly good at handling like 99% of the materials that end up in the recycling stream. There's a small subset of stuff like phone books and frying pans that a puff of air can't really handle. But the vast majority of the sorting that's happening in one of our facilities is being done actually by these air jet sorters.
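The control problem behind those jets can be made concrete with a little ballistics: once the camera classifies an item, the controller has to predict when the item leaves the belt and how far it free-falls before the valve fires. The sketch below is purely illustrative; the function name, belt speed, and fall height are invented for the example, not AMP's actual parameters.

```rust
// Illustrative timing sketch for an air-jet sorter: all names and
// numbers are hypothetical, not AMP's real values.

const GRAVITY: f64 = 9.81; // m/s^2

/// Seconds from detection until the valve should fire, given the
/// distance left on the belt, the belt speed, and how far the item
/// should free-fall before the air puff hits it.
fn fire_delay(belt_distance_m: f64, belt_speed_mps: f64, fall_height_m: f64) -> f64 {
    let travel = belt_distance_m / belt_speed_mps; // time still on the belt
    let fall = (2.0 * fall_height_m / GRAVITY).sqrt(); // ballistic free-fall time
    travel + fall
}

fn main() {
    // Item detected 1.5 m before the belt edge, belt at 3 m/s,
    // fire after 0.2 m of free fall.
    let delay = fire_delay(1.5, 3.0, 0.2);
    println!("fire valve {delay:.3} s after detection");
}
```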
Matthias
00:08:43
It's kind of funny, because you started with this idea of building a software solution for what turned out to be mostly a hardware problem. So you went from building software for, you know, vision or object recognition, I guess, and then you transitioned to sorting robots. And now you think about sorting facilities, like AMP ONE, for example. Is that correct, first and foremost? And second, what's the next step?
Carter
00:09:11
Yeah, no, I think that's correct. And it's a really good lesson, I think, in how to be successful as a startup, which is you can't eat the whole sandwich in one go. You have to take bite-sized chunks at it. And when you're a tiny little startup just getting off the ground, hardware is really expensive to build. It's really expensive to invest in. So when you can focus on the software first, prove that you have some value there and get a toehold in the market, that's a really nice way to grow. And then when you have enough money to invest in building a good robotic sorter, you build that. And that robotic sorter becomes part of the flywheel, where now sales of those robots fund investments into more technology that let you build more and more things. So there's been this kind of turning flywheel of getting AMP going by just taking off the next bite-sized chunk of the problem that we have the ability to actually tackle and solve. The latest version of that has been building our own individual recycling facilities. The next step honestly comes in the problem of scale. We have a few facilities right now, and it's like, how quickly can we get to dozens and hundreds of facilities? And when you actually look at everything it takes to build a facility, boy, that's a hard thing to scale. It takes a lot of work and a lot of technical know-how and a lot of people and skills to figure out how to get buildings and permits and electrical runs and cables and conveyor belts and trucks and logistics. And, you know, the software that made everything possible isn't the hard problem anymore. It's the problem of building factories to build factories. Like, that's really hard.
Matthias
00:10:50
Couldn't you license out a lot of that technology to other vendors? Say, I want to use your technology, but I'm not an engineer, or maybe I'm not well-versed in building these systems. I would probably go to you and approach you and ask you, hey, can you just license your technology to me, and I build one facility myself?
Carter
00:11:12
Yeah, it's actually, you know, it's definitely something that's on the table. And it's something we've experimented with in the past. At the same time that AMP built its own robotic sorter, we actually did license our vision system to another company that built a robotic system off of our vision system. So like, we've done that in the past, and we've had some success with it. The biggest challenge that we've really had is that the pace of innovation of the vision technology has been so fast that it's really hard for people to keep up to date with it. Like, our neural nets are improving by 10 or 20 percent every year. And the ways that they're improving are not obvious ways that fit in the existing API of, like, the data that the neural net used to give you. It gives you new and different data that you have to then process and think about in different ways to extract the value from it. So right now, when we're building these facilities, we are intending in general to build and operate them ourselves. And we believe that our staff actually running the facility that's built is the best way to get the value out of it for the time being, because this technology is still really, really new. And we need people who are experts in it, and are able to adapt to it changing quickly, at the helm of actually operating it. Probably like five, ten years from now, when the technology hopefully settles down a little bit and I still have my hair and haven't ripped it all out, then it really becomes a commodity technology that everybody can have and everybody can use. And we have standard manuals and part sheets, and it's just an off-the-shelf standard thing. As it stands right now, every two years it's evolving rapidly and dynamically. One of the things that AMP struggles with a lot is, six years ago when we started doing vision-based neural networks, we were at the bleeding edge of it.
And it was almost impossible to get the neural networks that we were using working in the ways we were getting them to work. But every year that gets easier, and Google and NVIDIA are spending billions of dollars to make it easier and easier for other companies to catch up to us. So we can't just stop and say, oh yeah, no, the vision system's good enough and done. We keep iterating on it as hard as we possibly can.
Matthias
00:13:23
Right. In order to keep your competitive edge. And I guess that ties into what I had in mind when I thought about the drawbacks of licensing, because one thing you get with this very tight-knit environment is vertical integration. That means you can iterate very quickly: if you have an improvement in your neural network, maybe you can tweak your system such that it sorts more, or if you have your robotic arm and you have better software for it, you can update it right away, and then maybe it's faster. Not arms but jets, I mean. But the entire thing kind of improves from within. There's a positive feedback cycle.
Carter
00:14:08
And in particular, our vision system has unlocked that positive feedback cycle. The vast majority of recycling facilities that exist right now, how do they optimize how well that facility is working? Some guy goes out on the factory floor, looks at the conveyor belts, and says, oh, well, they look pretty good today, I guess, doing a good job. We don't have that. We have data. We have cameras on every single conveyor belt in the facility simultaneously measuring how well every piece of equipment is working and how the facility overall is performing. That allows us, live while we're running, to continuously adjust parameters in the facility and in real time measure the effect of those changes on the sortation performance. And that is the secret sauce. That unlocks better recycling than anybody has ever had and is, you know, the new amazing tech differentiator that we're really excited about. And while it's really easy for people to figure out, you know, how to build a neural net to identify recycling (well, not really easy, but it's getting easier), figuring out how to auto-tune a recycling facility to self-optimize on the material it's processing? There's no other way of doing that than by building a bunch of recycling facilities and trying to do it, which is really expensive. So it's building a technological moat that makes it harder for people to really come and contest our space in the market.
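The auto-tuning loop Carter describes (nudge a parameter, measure the live effect, keep what helps) can be sketched as a simple hill climber. Everything below is hypothetical: a real facility would be tuning many parameters at once against noisy, live purity measurements, not a clean closure.

```rust
// Minimal sketch of measure-and-adjust auto-tuning. The "facility" is a
// stand-in closure returning a purity metric; all names are invented.

fn auto_tune<F: Fn(f64) -> f64>(measure_purity: F, start: f64, step: f64, iters: usize) -> f64 {
    let mut param = start;
    let mut best = measure_purity(param);
    for _ in 0..iters {
        // Try nudging the parameter both ways; keep a change only if the
        // measured purity actually improved.
        for candidate in [param + step, param - step] {
            let purity = measure_purity(candidate);
            if purity > best {
                best = purity;
                param = candidate;
            }
        }
    }
    param
}

fn main() {
    // Stand-in for a live measurement: purity peaks when the parameter is 5.0.
    let simulated = |p: f64| 1.0 - (p - 5.0).powi(2);
    let tuned = auto_tune(simulated, 0.0, 0.5, 20);
    println!("tuned parameter: {tuned}");
}
```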
Matthias
00:15:28
What I hear from you is one word: disruption. And radical integration, and also maybe tackling problems from a completely different angle, from a data science angle, maybe from an engineering angle. And then I wonder: you have so many things going on, and maybe only so many innovation tokens to spend. And now we're talking about Rust, which is yet another thing in this entire toolchain, something that is maybe also still on the verge of getting adopted in a broader sense. Do you think that it's too much innovation and maybe you overstretched a bit? Or would you say, no, actually, it's been pretty smooth sailing so far?
Carter
00:16:17
You know, we've definitely overreached in a few areas. Like not every product that AMP has tried to build has kind of found a market fit and sold like gangbusters. So like no path is perfectly smooth and easy. There have been plenty of bumps along the way. I would not say Rust is one of those bumps. I think my view here is very colored, but I'm incredibly satisfied with our choice of Rust. And I'm, you know, we're doing everything we can to double down on that choice and continue to invest more in it because we really did see immediate positive dividends from it.
Matthias
00:16:52
Did people immediately embrace Rust or did you have to do any convincing to move the team towards Rust?
Carter
00:17:00
You know, I got incredibly lucky here. And this is part of the advantage of being a startup that's growing quickly. So, put yourself in my shoes: wind back, I think, three-ish years at this point. And we decide we're going to build our first real big production facility. We have a little tiny test facility that we built right by our headquarters so that we could work out some of the kinks. But this is the first facility that's going to go straight into 24-7 operation. I'm responsible for figuring out the control system for that facility. What is actually going to turn the conveyor belts on and off? What is going to coordinate all of the different camera systems together? What is going to be the overall SCADA system, like the supervisory control, for that facility? And how do we get that done? Oh, and by the way, we want the facility to start running six months from now. That was a hard problem that I was given. On the flip side, I was given, you know, kind of carte blanche; any way I could figure out to make that work was probably good enough. I was hiring a new team to be the facility software controls team. Like, I brought one engineer over from our robotic software side, but I was able to hire a new team in the model of what I thought this new problem needed, and it was completely greenfield. Like, we had a Python application that was actually running our little test facility, where we'd worked out some of the drivers and communication protocols. And we had a toy example of a factory control system up, but we already knew Python was not the right choice for it and was causing us problems in a few different ways. And so from that model, like we already had a beta, it was just, okay, build a facility control system as quickly as we can that is the right thing for us going forward. And at this point, I'd been playing with Rust on the side for probably a year and a half or two years.
I was pretty convinced that Rust was a good contender for building something like this. And I was also pretty convinced that the right thing for AMP as a company was to invest in building this control system from scratch ourselves. That like the ability to control the facility overall in really, really fine detail with our own software that we completely own was going to be a major technological advantage for us going forward. And it was worth doing it ourselves, doing it exactly the way we wanted, building an application that was bespoke for our needs and not cobbled together with kind of the off the shelf industrial solutions that exist.
Matthias
00:19:24
You had six months to build this from-scratch greenfield project. I wonder, when you hired people, what background did you look for? What sort of specialties did you need? How did you try to integrate the team? What were your metrics, your KPIs, at that time?
Carter
00:19:48
Good questions. And it's a little bit of chaos. I think we have to acknowledge the way that startups build things is not always with the best plan, and it's a lot more firefighting. So the requirements for what we needed to be built were not in any way written down at the start. They were being discovered as we tried to make things work. For backgrounds, I went for programmers that I thought were malleable, that were willing to learn things, that were hungry for new experiences. I looked a lot for programmers that had some kind of real-time-ish experience, had worked on video games or worked on embedded applications where some connection to a tight timing loop was pretty important. And I looked for programmers that were willing to get their hands dirty with hardware. Coming from a robotics background, programmers that just sit behind the keyboard all day are way less valuable than programmers that are willing to go out there and debug the wiring and debug the motor that's burning up and see the physical process that's happening and engage with both the physical and software side of it well. So we pulled from a pretty diverse background of people, like some from robotics, some from the video game industry, some from just general systems programming stuff. But everybody was eager to learn Rust. I mean, you know, people were coming with different levels of experience in different languages from beforehand. So there wasn't any resistance to the language.
Matthias
00:21:17
Walk us through the early tech stack at the time. You said Python. Did you use OpenCV? Did those machines run on Linux? How low-level was it?
Carter
00:21:28
Yeah. So each of the individual camera systems, that is one application, and at that point AMP had been working on that application for four or five years. And it's a C++-based application built around ROS, which is the Robot Operating System. It's not an operating system, but it's a framework for building robot software in a mixture of C++ and Python. And we'd pretty heavily invested in the ROS ecosystem. So most of the non-cloud, non-machine-learning programmers at the company at the time were C++ developers used to working in this ROS framework. And the ROS framework is really microservices for robots. You just write a bunch of individual processes that publish messages to each other on a message bus to represent your robotics application. So as an example of what one of the robotic applications looks like: there's a camera node that is pulling images off of the camera and publishing them on an image topic. There's then a node that's running the neural network, receiving those images, processing them with the neural network, and outputting the results of the neural network. And you build this kind of compute graph of processes connected together, each one receiving some inputs and producing some outputs, that kind of flows nicely from one side to the other side. And that core C++ application is responsible for getting the images off the camera, doing the neural network inference on them, and then also controlling the immediate sorting system that's attached to that camera. So that whole system runs in a very, very tight time-critical loop of getting the frame off, doing the processing, executing the actual sorting, whether that's turning on specific air jets or planning a path for the robot to go pick. So that's kind of the core existing robot application we have. It runs on what's basically equivalent to a regular desktop that's sitting at your desk. We had i7 processors, NVIDIA GPUs, nothing crazy or exceptional.
We actually run that application on a really wide range of hardware because we've been experimenting with different hardware over time. So for our facilities, it typically runs on a server, in a server rack, in a server room. And we have one server per kind of camera as our hardware matchup right now.
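The compute graph Carter describes (a camera node publishing frames to an image topic, an inference node subscribing and publishing classifications) can be sketched in Rust with plain channels. The node names and message types below are invented for illustration; real ROS nodes are separate processes on a shared message bus, not threads in one process.

```rust
// Sketch of a ROS-style compute graph: each "node" runs independently,
// consuming messages on one channel and publishing on another.
// All names are illustrative, not AMP's code.

use std::sync::mpsc;
use std::thread;

struct Frame { id: u32 }
struct Detection { frame_id: u32, label: &'static str }

fn run_pipeline(n_frames: u32) -> Vec<Detection> {
    let (img_tx, img_rx) = mpsc::channel::<Frame>();
    let (det_tx, det_rx) = mpsc::channel::<Detection>();

    // Camera node: publishes frames onto the "image topic".
    let camera = thread::spawn(move || {
        for id in 0..n_frames {
            img_tx.send(Frame { id }).unwrap();
        }
        // img_tx is dropped here, so downstream nodes finish cleanly.
    });

    // Inference node: consumes frames, publishes classifications.
    let inference = thread::spawn(move || {
        for frame in img_rx {
            // Stand-in for running the neural network on the image.
            det_tx.send(Detection { frame_id: frame.id, label: "PET bottle" }).unwrap();
        }
    });

    // A downstream sorter node would subscribe to the detection topic.
    let detections: Vec<Detection> = det_rx.iter().collect();
    camera.join().unwrap();
    inference.join().unwrap();
    detections
}

fn main() {
    let detections = run_pipeline(3);
    println!("classified {} frames; last: frame {} -> {}",
             detections.len(), detections[2].frame_id, detections[2].label);
}
```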
Matthias
00:23:41
And what time frame are we talking about for each frame? Is it comparable to a real-time game engine that maybe runs at 30 FPS or 60 FPS, so you have a couple milliseconds per frame?
Carter
00:23:54
It's slightly different from that, because we're not doing everything per frame. There's no one global clock that kind of runs that loop. Instead, everything's asynchronously happening at the maximum frame rate that it can really handle. So like we can pull images off the camera at up to 40 frames a second, but the neural network can't run that fast. It might be running at some, you know, frame rate of 10 frames a second. It's actually all over the place depending on application. And then the control loop that's actually doing the sorting might be running behind that at like a thousand hertz for the jet valves. And so different parts of the application are running at different speeds, and they're making the best decision they can at any given point based off of the most recent information they've received from the other parts of the application. And there's different timing requirements at every different part of that, which are hard to achieve, but you throw enough hardware at it, it's not too bad.
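The "every part runs at its own rate" pattern usually boils down to consumers sampling the most recent value rather than queueing every frame. Here's a minimal std-only sketch of that idea; the frame type and counts are stand-ins, not AMP's design.

```rust
// Latest-value pattern: a fast producer overwrites a shared slot, and
// slower consumers (the neural net, the 1 kHz valve loop) read whatever
// is newest when they wake up. Names are illustrative.

use std::sync::{Arc, Mutex};

/// Shared "latest frame" slot; a frame id stands in for real image data.
type LatestFrame = Arc<Mutex<Option<u64>>>;

/// Fast producer: overwrite, never queue. Stale frames are dropped.
fn publish(slot: &LatestFrame, frame_id: u64) {
    *slot.lock().unwrap() = Some(frame_id);
}

/// Slow consumer: sample the newest value, at its own cadence.
fn latest(slot: &LatestFrame) -> Option<u64> {
    *slot.lock().unwrap()
}

/// The camera publishes 40 frames while the consumer never wakes up...
fn demo_fast_producer() -> Option<u64> {
    let slot: LatestFrame = Arc::new(Mutex::new(None));
    for id in 0..40 {
        publish(&slot, id);
    }
    // ...so when the slower loop finally samples, only the newest frame counts.
    latest(&slot)
}

fn main() {
    println!("inference runs on frame {:?}", demo_fast_producer());
}
```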
Matthias
00:24:49
I can see how moving from Python to C++ gives you really great performance benefits. But I wonder, Rust is also, of course, very, very fast. It has a reputation for performance and safety, but at the same time, it also has a reputation for a very steep learning curve. And I wonder, once you hired this new team, and I assume some of them learned Rust on the job, how did that impact your team's productivity, especially under those tight deadlines?
Carter
00:25:23
You know, I... I have thoughts on Rust's learning curve. I don't think Rust has a very steep learning curve, especially when you compare it to the alternative we had in hand, which is C++. Like, if you honestly ask me how hard is C++ to learn versus how hard is Rust to learn, I firmly, firmly believe C++ is harder to learn. And the thing about C++ is you don't have to eat the whole sandwich at once. Like, you can learn a part of C++, but you don't know templates. And then you do dumb programming in C++ because you don't know templates, and then you try to use templates in C++ and you do it wrong and you get horrible, impossible-to-debug bugs that drive you crazy, because you don't know templates. Like, templates are a Turing-complete language in and of themselves, buried inside of C++. That's brutal. And the other thing that's brutal about C++ is there's not one way of doing anything. There's nine ways of doing anything, and I'll fire you if you do seven out of the nine of them, because they're dangerous and unsafe and bad practices and not things that should be done. So if you ask me, is it harder to teach someone to write Rust well or to write C++ well? I think it's harder to teach someone to write C++ well. And for people who are pretty experienced C++ devs, I think going from C++ to Rust feels like a breath of fresh air. There tends to be a pretty nice one-to-one pairing of concepts that you were using in C++: here's the Rust version of them, and they're a little easier to use, and they're a little more cleaned up. And so most of the people who I taught Rust to had a very smooth path to it, having come from a decent level of C++ experience before. I think Rust is pretty brutal to learn if you've only been in Python and JavaScript and you've never dealt with memory management and mutexes and atomic reference counters. Those are hard concepts to learn.
But if you're already struggling with all those concepts and already needing to deal with them in C++, then in Rust, I think it's presented better, it's packaged better, and there are fewer rough edges in the actual learning process. Like, the documentation for Rust is so much better than the documentation for C++ for the equivalent levels of complexity that they're dealing with.
Matthias
00:27:30
When you said you taught people how to use Rust, were there any specifics that you noticed that people got wrong initially, or some repetitive patterns, like over time a lot of people struggled with this specific thing in Rust? Maybe besides the obvious, like fighting with the borrow checker in the beginning.
Carter
00:27:53
Yeah, I think there were two biggest things that we ran across. The first one is general code architecture and layout. We had a lot of people who were just used to writing classes, and they thought in classes, and classes were how they wanted to organize their code. And so they would write a Rust struct that had all of the data they wanted in it, and then they would try to write a bunch of functions on that struct. But they didn't realize that by putting everything in one big pile, where now you're taking a reference to self, you have all these splitting-borrow problems, where you actually want to mutably borrow part of self and not mutably borrow the other part of it. And so they started writing these, you know, mondo structs with hundreds of impls that are individual functions on that struct, because that's just the C++ way to organize code. So we worked on breaking people of those habits and having them write smaller structs, leave the data a little more broken apart, write fewer functions that are member functions of those structs, and write more free functions that are just floating around. Like, in C++ it's dangerous to just have a function declared floating around somewhere. In Rust, I actually think it's better in general to write a free function that just takes in two things than to write functions on self all the time. Like, unless you really need all of the data of self, why are you making that a member function of that struct instead of a free function that's just operating on only the data it needs? So, sorry, that was part one of the answer: just general code style. The other thing that we most struggled with was just async in general. And this was something that even I struggled with in this project. You know, the previous architecture that we were coming from was writing this robotic application. And ROS's idea of microservices is essential to writing robust software in C++.
Because, you know, undefined behavior and memory corruption can come from any part of your code in your application anywhere and theoretically corrupt anything else nearby, a way that we build robust systems in C++ is we break them into multiple processes. And by breaking them into multiple processes, if the camera driver that we're using from the vendor has a bug in it that causes memory corruption, well, what does it do? It takes down the camera driver, the camera driver restarts, and the whole system keeps on trucking. The camera driver crashing doesn't take down the robot sorting, because they're actually isolated in different system processes. And we're trusting Linux's process isolation to keep one process from taking down another process. That turns out to be really good, but we naturally had this writing style of, okay, we write everything in individual microservices. And in C++, we had strongly erred towards the style of one thread per microservice, where we did not internally multi-thread any of those microservices, because we've continuously found multi-threading is the source of the vast majority of bugs in C++. And so if you don't do multi-threading internal to a process, but instead push everything over the message bus between the nodes, you can avoid a lot of classes of bugs. So we were coming from that style into Rust. And we started writing our application in that same way, where everything was a standalone, non-async node. It was receiving stuff synchronously. It was publishing synchronously. Every node had one control loop around it. And we were just publishing and subscribing on a message bus, and that worked pretty well. But then we started wanting to play with async, and we were like, oh, well, async should be better, and it should work well. And in fact, it really, really does.
What got really messy is when we started to mix async with that message bus pattern, where now we have one node that's subscribing to the message bus, but internal to that node, it's now spawning a bunch of separate actors that are internally forming a little message bus out of channels inside of that node to wrangle data between it, and it just got confusing and undebuggable. And I would say, like, I think one of the detriments of async Rust is that when you start writing complex things in async Rust, debugging them becomes really, really, really hard. Like, understanding the flow through your kind of async graph that you've made, of where data is being passed and where things are getting deadlocked and where things are getting hung up, is really, really hard. So, like, of the actual bugs that we wrote in Rust that we had to debug when we were writing it, I think 90% of them were deadlocks in one form or another, where we had code that was waiting on code that was waiting on other code that was stuck in a loop. But when you write a lot of async Rust, seeing a loop in kind of the data graph that can create a deadlock is not the easiest thing to spot. Like, in the way that async code is written, those are non-obvious when they appear. And we struggled with that a lot.
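The wait cycle Carter describes can be shown with a minimal, std-only sketch: two "actors" that each wait for a message from the other before replying. Here `recv_timeout` stands in for an await that never completes, so the demo terminates instead of hanging; in real async code the same shape just deadlocks silently.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn demo() -> bool {
    let (tx_ab, rx_ab) = mpsc::channel::<&str>(); // actor A -> actor B
    let (tx_ba, rx_ba) = mpsc::channel::<&str>(); // actor B -> actor A

    let a = thread::spawn(move || {
        // A wants to hear from B before it says anything...
        match rx_ba.recv_timeout(Duration::from_millis(100)) {
            Ok(_) => { let _ = tx_ab.send("reply from A"); false }
            Err(_) => true, // timed out: the cycle never resolved
        }
    });
    let b = thread::spawn(move || {
        // ...and B wants to hear from A first. That's the deadlock.
        match rx_ab.recv_timeout(Duration::from_millis(100)) {
            Ok(_) => { let _ = tx_ba.send("reply from B"); false }
            Err(_) => true,
        }
    });

    a.join().unwrap() && b.join().unwrap()
}

fn main() {
    println!("deadlocked: {}", demo()); // prints "deadlocked: true"
}
```

Spotting this cycle in two threads side by side is easy; spread across dozens of spawned tasks and channels, it is not, which matches Carter's "90% of our bugs were deadlocks" observation.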
Matthias
00:32:31
On one side, you have ROS, the robot operating system, with its processes. I almost said actor model: a way to separate the concerns through individual threads. And on the other side, you have the tokio, or more in general, Rust async model, which has work stealing and doesn't necessarily have thread-per-core. It's just a different way of modeling your application. It seems like it's an architectural problem as well. Were you fighting those different models, those different contexts, when you tried to integrate them? You mentioned deadlocks, right? That was kind of a symptom of that already.
Carter
00:33:16
We were definitely fighting it. And I think, you know, at the beginning, we wrote everything in Rust in the ROS model of it. I hate how similar those two words sound.
Matthias
00:33:24
Hehe
Carter
00:33:25
But we wrote everything as individual processes to start with. And I think that was a fine architecture. We then started playing around with async and mixing async in with that architecture. And that was frankly a bad idea, to just, like, mix it in a little bit like that. The two really don't match super well with each other. But what we found is that async Rust is actually better. It's more ergonomic. It's easier to use, and it has way, way lower performance overhead. And gradually over time, we've started re-architecting more and more towards that async model and away from the separate-process model. But it's been slow and gradual, and it's had to be done with some intention, because our early experiments into it led us a little bit astray.
Matthias
00:34:10
Why did you start with async Rust in the first place? Is it an I/O-bound problem that you're faced with? I thought that you mostly dealt with image recognition, which I would assume would be CPU-bound. What does async Rust give you?
Carter
00:34:25
Yeah, so, thing to remember: we have the individual robot application, which is responsible for, like, getting a frame in from a camera, processing that data, doing the motion planning. That's all still in C++ today. So, like, the core robotics application is still in C++. We're starting to add some Rust in there, but that's going slower. For the facility control system, what is the facility control system doing? It's talking to something like 450 Ethernet devices, which include all of those C++ applications that are doing the data, but also include I/O devices and VFD devices. And it's doing, frankly, very, very minimal compute on all of them. But what it needs to do is collect fresh data from all of them, ensure that it has a connection to all of them, ensure that data is fresh and valid and everything is still healthy, and then keep the facility ticking with small, minor adjustments. So for the main control loop of the Rust application, we don't need it to run faster than one hertz. Like, one update a second is actually really fast for a facility control system. The vast majority of what we're doing is just dealing with talking to hundreds of other devices that are all of the things in the facility, getting all of that data marshaled into a single central location that we can then process on.
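The "collect fresh data, check it's still fresh, tick slowly" loop Carter describes can be sketched with std threads and a channel. Device count, tick rate, and the staleness threshold here are shrunk for the demo (the real loop runs at roughly 1 Hz against hundreds of devices), and all names are invented:

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Pure freshness check: which devices haven't reported within `max_age`?
fn stale_devices(last_seen: &HashMap<u32, Instant>, now: Instant, max_age: Duration) -> Vec<u32> {
    let mut stale: Vec<u32> = last_seen
        .iter()
        .filter(|(_, t)| now.duration_since(**t) > max_age)
        .map(|(id, _)| *id)
        .collect();
    stale.sort();
    stale
}

fn main() {
    let (tx, rx) = mpsc::channel::<(u32, f64)>();
    // Simulated device readers pushing (id, reading) pairs.
    for id in 0..3u32 {
        let tx = tx.clone();
        thread::spawn(move || {
            let reports = if id == 2 { 1 } else { 5 }; // device 2 "dies" early
            for i in 0..reports {
                let _ = tx.send((id, i as f64));
                thread::sleep(Duration::from_millis(10));
            }
        });
    }
    drop(tx);

    let mut last_seen = HashMap::new();
    for _tick in 0..3 {
        thread::sleep(Duration::from_millis(20)); // demo tick; the real loop is ~1 s
        while let Ok((id, _value)) = rx.try_recv() {
            last_seen.insert(id, Instant::now());
        }
        let stale = stale_devices(&last_seen, Instant::now(), Duration::from_millis(30));
        println!("stale devices this tick: {stale:?}");
    }
}
```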
Matthias
00:35:39
Yeah, then it sounds like a perfect use case for async Rust.
Carter
00:35:43
Yeah. And, like, you know, we were having some bottlenecks in the beginning because, you know, VFDs are the variable frequency drives; they control each conveyor. We might have a hundred of those in a facility. We would spawn a separate process for each one of those VFDs at the very beginning. And we actually ran into bottlenecks, in like, hey, when we need to bring the facility down and bring it up, spawning each one of those VFD processes was taking a really long time and causing facility startup to be slow, because of weird annoyances in the drivers that we were using to talk to the VFDs. Turning that from 100 separate processes into one node that is the VFD reader, with 100 actors inside of it, each one handling an individual VFD, is where we're at today. And that's working very well for us.
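The one-node, many-actors shape Carter lands on can be sketched with one thread and command channel per VFD, all inside a single process. The command set and names are invented for illustration; spawning in-process actors is cheap compared to spawning OS processes, which is the bottleneck he hit:

```rust
use std::sync::mpsc;
use std::thread;

enum Cmd {
    SetSpeed(f64),
    Shutdown,
}

struct VfdHandle {
    cmd_tx: mpsc::Sender<Cmd>,
    join: thread::JoinHandle<f64>, // returns the last commanded speed
}

fn spawn_vfd_actor() -> VfdHandle {
    let (cmd_tx, cmd_rx) = mpsc::channel();
    let join = thread::spawn(move || {
        let mut speed = 0.0;
        while let Ok(cmd) = cmd_rx.recv() {
            match cmd {
                Cmd::SetSpeed(s) => speed = s, // a real actor would write to the drive here
                Cmd::Shutdown => break,
            }
        }
        speed
    });
    VfdHandle { cmd_tx, join }
}

fn main() {
    // 100 actors in one process instead of 100 separate OS processes.
    let vfds: Vec<VfdHandle> = (0..100).map(|_| spawn_vfd_actor()).collect();
    for v in &vfds {
        v.cmd_tx.send(Cmd::SetSpeed(42.0)).unwrap();
    }
    let mut last = 0.0;
    for v in vfds {
        v.cmd_tx.send(Cmd::Shutdown).unwrap();
        last = v.join.join().unwrap();
    }
    println!("last commanded speed: {last}");
}
```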
Matthias
00:36:31
So the VFDs control the conveyor belt. Is that something that you have under control, the software part of it? Or is that a proprietary driver that you maybe cannot change? And if it is controlled by you, is it in C++ or Rust?
Carter
00:36:49
It's in Rust right now. And so the VFDs provide communication interfaces. Like basically all industrial devices nowadays provide an API for you to control them. Most of the APIs are in some asinine industrial protocol that there's not great representation for. And unfortunately, the most common one, the like most common industrial language that the most devices speak is a protocol called Modbus, which is from like the 1970s, which is literally like read and write raw addresses in memory. There's like five commands and it's, hey, at this magic number address, tell me the 16 bits that are present there or write those 16 bits to it. So instead, you get a manual from the manufacturer that's about a thousand pages long that has just tables of this memory address controls this function. And here's what the bits do in it. It's very similar if you're doing like embedded programming where like on the microcontroller, you get all the special addresses for that microcontroller. We're basically doing the same thing, but it's over an Ethernet cable, which is a little bit crappy. But, you know, the actual driver, you know, there are Rust Modbus libraries that very happily produce and send Modbus packets. So like those aren't hard for us to use. But then we have to wrap them in, OK, we're using this specific VFD for this specific manufacturer. So here's all the addresses that we care about. And then here's how to convert, you know, 16-bit Modbus types to actually like useful types on our end to send the data back and forth. So there's a lot of just converting down to some dumb industrial protocol and then converting back up to, you know, regular types that are useful to work with.
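The "convert down to a dumb industrial protocol and back up to useful types" work Carter describes looks roughly like this. The register layout below (speed in hundredths of a hertz, a status word with bit flags) is invented for illustration; real layouts come from the manufacturer's thousand-page manual:

```rust
/// Speed register: unsigned 16-bit value in hundredths of Hz.
fn decode_speed_hz(raw: u16) -> f64 {
    raw as f64 / 100.0
}

/// Going back down: encode a commanded speed for writing to the drive.
fn encode_speed_hz(hz: f64) -> u16 {
    (hz * 100.0).round() as u16
}

/// Status word: individual bits mean individual things.
#[derive(Debug, PartialEq)]
struct VfdStatus {
    running: bool,
    faulted: bool,
}

fn decode_status(raw: u16) -> VfdStatus {
    VfdStatus {
        running: raw & 0x0001 != 0, // bit 0: drive running (hypothetical)
        faulted: raw & 0x0008 != 0, // bit 3: fault active (hypothetical)
    }
}

fn main() {
    println!("{} Hz", decode_speed_hz(6000)); // 60 Hz
    println!("{:?}", decode_status(0b0000_0000_0000_1001));
}
```

A Modbus crate handles producing and parsing the packets themselves, as Carter notes; this manufacturer-specific decode/encode layer is the part AMP writes by hand.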
Matthias
00:38:21
It sounds fun when you explain it like that. It sounds like a lot of fun. And it also reminds me of Oxide Computer, which also takes what was there before, and then they, you know, reimagine it and bring it back into the future. And they do something similar for server hardware. But I imagine that it is much of the same work: you try to make sense of what is out there, and you read those data sheets, and then you build your own abstractions. And you probably do that at the very edge of computation, so whenever possible you work on a high level, and only at the last possible moment you convert it or serialize it to something that is just a byte stream that the controller can understand.
Carter
00:39:01
Yeah. You know, it's important for us to not be locked to a particular VFD manufacturer. Like, we had to come up with a generic representation of a VFD that we could work with inside of our application, and then figure out how to convert that to different manufacturers' representations under the hood, and deal with, well, this VFD does or doesn't have this capability. And, you know, like, enums and Option are so useful and so good for representing things like that, of, like, this VFD does or does not have that capability. And making an application like a facility controller, you're really dealing with a lot of being generic over different hardware. That's a lot of the problem that we're dealing with. And Rust has just some great tools for that, that made that process very pleasurable to deal with. I'm going to take us back for a second, because earlier you asked the question: okay, you were a software company and then you transitioned into a hardware company. And I kind of agreed with that and answered that. But honestly, I'm going to put a twist on it. And I tell this to people a lot. The value of AMP is not in its software or in its hardware. It's in its systems integration. It's in the ability to put everything together in one package and have that package cohesively work together.
The weaknesses of the hardware have to be compensated by the software; the weaknesses of the software have to be compensated by the hardware. There's not, like, a bit of AMP that you can pull away and say, like, oh, well, they have really good VFD drivers, that's great. No, the VFD drivers don't matter. The VFD drivers are one small, tiny part in getting a machine with 10 million parts to work together. You know, we have a server room with dozens of computers in it, hundreds of Ethernet cables, switches, network diverters, converters. There's serial devices with different communication protocols. Like, there's just an enormous machine, and most of the value isn't in either the hardware or the software. It's in the capability to put it all together and actually have it work at the end of the day. And that's, like, a beautiful thing.
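Carter's earlier point about enums and Option modeling "this VFD does or does not have that capability" can be sketched like this. The capability set is invented for illustration:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Direction {
    Forward,
    Reverse,
}

/// Generic VFD representation the rest of the application works with.
struct Vfd {
    max_speed_hz: f64,
    /// None = this drive has no temperature sensor at all, which is
    /// different from Some(t) at any value of t.
    motor_temp_c: Option<f64>,
    /// Some drives are one-directional; the type makes that explicit.
    reverse_support: Option<Direction>,
}

fn temperature_alarm(vfd: &Vfd, limit_c: f64) -> bool {
    // No sensor means no alarm; the match forces us to pick that policy
    // explicitly instead of reading a garbage default value.
    match vfd.motor_temp_c {
        Some(t) => t > limit_c,
        None => false,
    }
}

fn main() {
    let cheap_drive = Vfd { max_speed_hz: 60.0, motor_temp_c: None, reverse_support: None };
    let nice_drive = Vfd {
        max_speed_hz: 120.0,
        motor_temp_c: Some(85.0),
        reverse_support: Some(Direction::Reverse),
    };
    println!("{}", temperature_alarm(&cheap_drive, 80.0)); // false
    println!("{}", temperature_alarm(&nice_drive, 80.0));  // true
}
```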
Matthias
00:40:57
I thought about products today, and I realized that if you build a great product, people will look at it and think it's obvious. It's simple; it does exactly what it is supposed to do. There are no extra parts that you don't need. And getting to a point where you build a product like this is very hard engineering work. And it feels like this is what you literally do: you take a very, very hard technical challenge, consisting of different levels of expertise and different disciplines, and you combine it into a very nice, vertically integrated product. On the outside, it looks like, yeah, pretty much magic. And then you go in and you realize that all of those things need to be addressed and the problems need to be worked around. And you end up with something that just works. It's polished.
Carter
00:41:55
Yeah. And honestly, you know, the only advice I can really give to people who are trying to do the same thing is to not try to polish, particularly up front. The biggest lesson I've taken away from AMP, and the biggest thing that I would say has made us successful, is, you know, commitment to the fact that the fastest way to build the right thing is to build the wrong thing twice first. We always, always, always try to build the simplest, dumbest thing. Like, what could possibly solve this problem? Well, yeah, it might not work, but let's build it and try it, see what's wrong with it, and then make adjustments from there. Like, we try to avoid over-engineering as much as humanly possible, everywhere. Like, what is the simplest possible thing we could do? Do that thing. And, you know, part of why we ended up with a Rust control system, you might say that flies in the face of it, but at the beginning we wanted the simplest possible thing that would turn all the VFDs on. Well, what could I do that would get me that the fastest? Well, I can walk in the back and I can write it myself, and it will take me a day. Okay, I go write that; I've done that. Now we have something that can turn the VFDs on. Well, what's the next thing we have to build? And then just one at a time, laying the bricks on top of each other, not worrying about what the cathedral is going to look like at the end of the day, and being ready to knock down a few walls and rebuild some stuff as you go, as you learn what really matters in the problem that you're working on.
Matthias
00:43:21
Okay, but coming back to Rust for a second, I had two questions on my mind. Why is it bad to have a free function in C++?
Carter
00:43:32
I would say it's just generally viewed as bad style because of, like, poor namespace control and poor module controls. If you aren't putting that function under a class, where I'm calling it with, like, dot through the class, it becomes easy to have namespace collisions with free-function names, if you are not being pedantic about namespaces and the other libraries that you're using aren't being pedantic about namespaces. So, like, Rust's module system, with, like, tighter includes, just makes that a little bit easier to deal with in general. That's probably my biggest answer to it. The other answer I think really comes from, like, problems of RAII and data management, where, like, when I have free functions in C++ that want to free things or allocate things or transfer memory ownership, you can't really express those well. And so in C++, I think the pattern that most people have kind of come to is: well, we have these functions that are operating on data, and they're the member functions of our class. And the class is concerned with the lifetime of the data, and the member functions are concerned with the processing of it, but they've bifurcated it, where there's this constructor function and there's this destructor function on the class that are going to deal with the memory management for them. And then the functions on the class can just be regular functions that don't think about that much. But when you just have free functions floating in space, they really don't have any ability to control the memory that's being passed in and out of them in effective ways. It's a little bit better nowadays with, like, how move has worked out, but it's hard. Like, most people get it wrong when they try to do it.
Matthias
00:45:04
Okay, if I understand correctly, then it's easier for the programmer to reason about memory usage if you talk about it on an object level, because all the memory that is associated with this very object is encapsulated in the object model, or the class itself. Whereas if you had a free function, it's not as easy to understand whether you can free memory, or what even the parts are that this function might allocate or forget to deallocate. And that makes, let's say, programming hygiene a little harder. And it puts the burden on the programmer. I see. And the second question was: how much of the Rust ecosystem can you actually leverage to solve some of your problems? Do you use any crates, or do you mostly write your own stuff? You mentioned that a lot of the things that you build are very purpose-built, but maybe there are things like the bitflags crate, or working with byte streams, that you can still use and leverage.
Carter
00:46:07
Yeah, honestly, we ended up using a ton. And it's one of the main reasons that we ended up choosing Rust: when we went looking for the crates that we would need, the ones we had C++ equivalents of, we found Rust equivalents for the things that we were looking for. The vast majority of the Rust crates that we're using are communication-related, both for the message bus that we're using between the nodes and for talking to the industrial devices. So, like, tokio_tungstenite, that does MQTT communication, we use that an enormous amount. Wait, I think that's... did I get that crate name right? I'm going to be embarrassed if I got that crate name wrong.
Matthias
00:46:47
There is a crate called tokio_tungstenite.
Carter
00:46:50
I think tungstenite's either the WebSocket one, which we use, or it's the MQTT one. I think it might be the WebSocket one. But we use that. We use a bunch of the communication infrastructure to talk all kinds of different protocols, whether it's Modbus, MQTT, OPC UA, like, JSON WebSocket stuff. All of that is being used at different points. And Rust's networking crates in particular are of exceptional quality, and modular enough that you can adapt them to the kind of needs you want. I'll also call out serde, or "ser-day," as some people pronounce it. It is such a gem. Like, the ability to just serialize and deserialize whatever we're working on into a bunch of different formats to send to different locations. Like, oh my God, the amount of boilerplate that serde has replaced in our application is absurd.
Matthias
00:47:40
What serde formats are we talking about?
Carter
00:47:43
JSON is definitely the heaviest-used one. There is a serde ROS message crate, which actually lets us convert to the byte-level protocol that, like, ROS uses. There's also a few oddball ones for, like, random systems that we'll talk to. Another great crate to just kind of mention and call out, if we're just shouting out crates: our user interfaces are all Angular applications that are written in TypeScript. That's kind of legacy from, like, that's how our company's been writing UIs the entire time I've worked there. So we have a lot of TypeScript knowledge on building UIs. And okay, now you want to talk between a Rust backend and a TypeScript frontend. There's an amazing crate, ts-rs, that for any Rust types you define will define TypeScript types for them, so that you can have closed-loop type safety between them. And it doesn't really matter what the serialization protocol is. Like, we end up using serde_json to define a Rust type, serialize it to JSON, and send it to the front end; the front end uses the ts-rs crate to get TypeScript types for it. But we have, you know, closed-loop type safety across both applications for practically free. Like, that was so easy to set up.
Matthias
00:48:48
Soon the front end will also be written in Rust, because there's Leptos and Tauri. Have you heard of them?
Carter
00:48:54
Yeah, and we keep, you know, I keep poking around with those. I think convincing our front-end developers to switch to Rust is a much harder pill to swallow than the C++ devs. Like, the C++ devs, there's not a single one of them who's like, yeah, I'll defend C++, it's a great language. Every single one of them is like, yeah, it's an old beat-up truck. It still runs, so it gets the job done. But boy howdy, would it be nice to drive something a little newer? So they're hungry for Rust. The front-end devs, like, yeah, you can make some arguments for Rust there, but, you know, they've got a really nice flow in TypeScript and Angular, and, like, performance is so much less critical, and reliability is so much less critical. And, you know, we're not delivering these UIs over the Internet to people publicly. They're internal HMIs within a closed-loop network on our facility. So even some of, like, the security things are a little less interesting, right?
Matthias
00:49:51
When you wrapped up the project, you wrote a blog post, and in that blog post you said something which struck a chord with me, which was that the project went shockingly well. Everyone that likes Rust probably would not wonder that it worked shockingly well, but you were kind of shocked by it. What were your presumptions going in? What did you anticipate? What might have been the problems that you would likely have to face, the setbacks, and so on?
Carter
00:50:28
Yeah. So I come from a lineage of having stood up a lot of complex systems. And particularly when you're building robots, software often gets the really short end of the stick, where you're told to write all of the software while mechanical and electrical are designing and building it. Mechanical and electrical run behind schedule, so the physical hardware is ready, you know, 24 hours before the demo to the board, and you have to test the software and integrate everything at midnight. Like, I have done the 1 a.m., get-the-software-on-the-hardware-for-the-first-time-just-before-the-demo schtick probably 10 times. This is one of the worst possible scenarios of that. Like, this is an entire factory. It's hundreds of devices, millions of points where things could go wrong. At this point, the Rust application had only been tested against maybe, like, 10 devices sitting on a benchtop in our lab. Like, it had never been plugged into the full facility scale, the full facility control. At this point, all of the code had been written against, you know, simulated targets. We also wrote the simulated devices in Rust to try to end-to-end test the equipment. And the scale of the whole facility is just massive. There's a lot, a lot, a lot of things to go wrong. Considering how short the project timeline was, considering this was the first major project that anyone on the team had done in Rust, considering most of the team were new to the company, the odds of that software just working the first time, and there not being a massive, glaring, critical bug that prevents the facility running, should have been zero. That application had no goddamn right just turning on and running correctly the first time. And I'm not going to say it was perfect. We probably fixed, like, two or three bugs on site in the first couple of days of running that caused a random problem here and there.
But the reality is, from, like, the first time we tried to power up the facility, the Rust application was running 99% smoothly. There were no major hiccups; it was not the blocker. So, you know, at the end of this six-month build of an entire facility, when it was time to turn on the facility and run, software did not cost us a single day of schedule. Like, we were able to test and develop the software in parallel to the hardware getting stood up. And at the end of the day, when we plugged them in together and it was like, okay, time to go integrate, it just worked. That never happens. That should not happen. I feel like an asshole getting up here and telling people that that happened, because I won the lottery ticket. That's not a reproducible thing that people should be like, oh, well, that's what I should expect out of Rust. No, we got lucky. I will fully admit that. But I do think Rust helped make that possible. Just given the volume of C++ code I've written in the past, I know that if you write 150,000 lines of C++ code, there will probably be 15 or 20 gnarly, hard-to-debug, just insane things going on: undefined behavior and segmentation faults and weird crap like that. We had zero of those at time of powering up the facility. Like, none of them appeared. And that was just cool. That's nice.
Matthias
00:53:34
Why would you say Rust allowed you to have fewer bugs in production?
Carter
00:53:42
A few things. The first, I would say: you know, in these types of extremely async programs that we're writing, like these applications where you're doing a ton of things in parallel and mixing the data in together, the majority of the bugs that we have historically encountered in C++ are memory safety bugs. They are data races. They are use-after-frees. They are, you know, shared pointers getting freed or not freed when they should be. That is just kind of one of the penalties that you pay when you write those kinds of applications in C++. With Rust's memory safety, with the borrow checker, there was fundamentally an entire class of bug that I am used to dealing with that we solved at compile time instead of solving at runtime. I'll also say this: I have used an enormous number of C++ libraries, and you can't build these kinds of applications purely from scratch yourself. A lot of people try to, because it gives them control over the code quality, but we pull in dozens of libraries in our C++ applications, and we find dozens and dozens of bugs in them. In all of the Rust crates that we pulled off the shelf to kind of integrate into our application, we have never filed a bug against a single one of the Rust crates that we've pulled in. We started to file one against the paho crate, which I think is the MQTT one; I'm getting that correct now. And working with the authors of that crate, we traced that bug into... they were wrapping the C library, and we found a bug in the C library that the Rust library was wrapping. Like, that's the only bug we found in a Rust crate we depend on in, like, three years of using the ecosystem. The quality just feels exceptional. It feels unbelievably high.
Matthias
00:55:21
That says a lot about supply chain security. That's a really nice benefit on top of all the other guarantees of Rust. But a lot of those crates that people use in production are still not 1.0. So they could technically have breaking changes in the future. How do you see the long-term sustainability of your facilities? How do you ensure that the factory runs as planned and that you can evolve with Rust over time?
Carter
00:55:57
Yeah, so this is at least kind of fundamental to the idea of writing our own facility control system. And if we were going to do this as a one-off for one facility, it would never have been worth it. The maintenance burden would be too high. The support would be, you know, it's too much investment for one facility. Our plan here is that we're writing a single application that we're going to deploy across eventually hundreds of facilities. All of the facilities are going to run the same application, simply configured to be different facilities. That's already true right now. We are running, I think, four or five facilities at the moment off of one Rust application. And that means that having a sustaining team whose job it is to keep that Rust application live and running and fresh is well worth the investment. Because when we develop a new feature for one of our facilities, it's immediately available at all of them. And all of them are sharing the benefit of continuing to evolve and keep that ecosystem. So it was at least acknowledged and planned up front that having resources go into the sustaining aspects of this would be desired. And definitely, we're using a ton of crates that haven't hit 1.0 yet. We are fairly regularly updating all of our crate dependencies. And my God, the library authors are fantastic about correctly obeying SemVer, about having good release notes. I wish release notes and changelogs got a little bit more standardized in how you document them, but we find the critical information we need. And also, Rust makes writing good tests substantially easier than C++ does. So we're able to update a bunch of crate dependencies, run our test suite, and have pretty damn high confidence it's going to then work on the actual factory afterwards. So, you know, we've gone through, you know,
10 releases of the core language, plus hundreds of crate updates, and it's been pretty seamless and smooth. And I know, like, that's not going to be everybody's mileage, and we are still careful about which crate dependencies we do pull in and use, and try to use, you know, trusted, well-supported ones that look like they're going to stick it out and continue to be there. But so far, you know, no problems.
Matthias
00:58:00
And it makes sense to think about this as a platform now, and have a team that makes sure that it runs smoothly and operations are predictable. Do you sometimes miss a tool for running Rust applications at scale or in production? Maybe something like a supervisor that would tell you the state of dependencies, or a thing that would tell you about upcoming breaking changes, or that sort of thing? Or would you say, in fact, no, it's pretty much smooth sailing from now on?
Carter
00:58:39
You know, I could imagine a tool like that being useful. I certainly think, you know, as the application continues to grow and there's more and more dependencies and more and more time goes on, the needs for that will be higher and higher. We're, you know, two, three years into this being a real application at this point and have not felt the need for that yet.
Matthias
00:58:58
That's a good verdict. Yeah. Now, if you reflect on the entire project, and if you were to start over, what would you do differently in terms of choosing and implementing Rust at AMP Robotics?
Carter
00:59:15
Yeah. You know, if I came with all of the Rust knowledge I have today, I would probably first and foremost say that I'm just not afraid of async in the way that I was, and that I should have leaned even harder into Rust's ability to do things in parallel safely and not have memory corruption bugs. And I probably would have not done the full microservices architecture that we started with. Doing it over from scratch, I probably would have written it entirely as one single process with one tokio runtime at the core of it, and everything as kind of actors within that tokio runtime, representing still the same compute graph, the same chain of things happening, but relying way more on tracing and way more on the Rust ecosystem's tools for dealing with the communication between those different actors than what we did, which is rely on a message bus. One of the big things that we decided is: okay, we're going to have these different processes communicating with each other; we need to use some message bus there. There are a lot of options for those message buses. We ended up picking MQTT, which is not what people traditionally think of for inter-process communication on one system. But one of the major advantages of MQTT is that the broker that's kind of brokering that communication can be set up to forward those messages to the cloud. And we immediately, for free, got historical cloud metrics and databases for any topic that we wanted to mirror to the cloud. We just changed a config flag, and suddenly that message, that topic field, is being backed up into Google, and we have databases on it. That was really nice and handy. But at the end of the day, it turned out MQTT ended up being quite a bit of a bottleneck for us. We had a few bugs with MQTT communication. We had to switch to async because there were some variable latency problems. And then there's the broker that we were using; there are many MQTT brokers available for running on your system.
We had broker crashes, which would take the whole stack down. So one of the core pieces of technology that we ended up relying on at the heart of our application was this message bus that we decided to build from off-the-shelf tools that weren't Rust tools, and it ended up being one of the biggest problems in the whole application at the end of the day. So certainly rethinking that message bus: instead of doing it as MQTT topics, doing it just with straight-up channels inside the code in the more idiomatic Rust way, and instead of breaking everything into multiple processes where we need to do IPC, doing it as actors that can have zero overhead in those message passes. Yeah, I think that would have set us off on a better foundation. Frankly, everything I just said there is stuff that we are likely to change about this application in the next couple of years as we continue to invest in it.
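The single-process actor design Carter describes, with in-memory channels replacing the MQTT broker, might look something like the sketch below. This is a simplified illustration with hypothetical names, not AMP's actual code, and it uses std threads and channels for brevity where the real system would use a tokio runtime with async tasks and `tokio::sync::mpsc`:

```rust
use std::sync::mpsc;
use std::thread;

// Messages flow through in-memory channels instead of a broker,
// so handing a frame to the next stage is just a move, no IPC.
struct Frame {
    id: u64,
}

struct Classified {
    id: u64,
    label: &'static str,
}

// The "classifier" stage as a pure function: easy to test in isolation.
fn classify(frame: &Frame) -> Classified {
    Classified { id: frame.id, label: "aluminum" }
}

fn main() {
    let (cam_tx, cam_rx) = mpsc::channel::<Frame>();
    let (cls_tx, cls_rx) = mpsc::channel::<Classified>();

    // Classifier actor: receives frames, emits classifications.
    let classifier = thread::spawn(move || {
        for frame in cam_rx {
            cls_tx.send(classify(&frame)).unwrap();
        }
        // cls_tx is dropped here, closing the downstream channel.
    });

    // Camera actor: produces a few frames, then hangs up.
    for id in 0..3 {
        cam_tx.send(Frame { id }).unwrap();
    }
    drop(cam_tx); // closing the channel lets the classifier exit cleanly

    classifier.join().unwrap();

    // Sorter actor: consumes classifications downstream.
    for c in cls_rx {
        println!("frame {} -> {}", c.id, c.label);
    }
}
```

The same compute graph survives: camera feeds classifier feeds sorter, but a crashed broker can no longer take the whole stack down, since there is no broker.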
Matthias
01:02:02
The one thing that comes to mind is if you start using normal in-memory channels, then you might have a hard time rebooting the system if you have state. But I wonder if you even have state in the system or if you start from a clean slate pretty much every time.
Carter
01:02:21
There is one microservice with state, and that state is as isolated as it is humanly possible to be. Every one of those actors tries to be a pure function. You know, actually winding back and taking a slightly different tack: one of the things that I absolutely loved about our microservice architecture, and something that I would try to figure out how to keep possible as we move that way, is that when we're developing on this application, our compile times were incredibly low, because we're not compiling the entire application. We're compiling one of those processes, right? There are some common crates that they all depend on, but each one of those processes that is a node is in its own crate. So I can compile that node independently. I can also stand up on my system a live running simulation of the whole facility with all of the nodes in place, and I can iteratively compile and run new versions of that one node, which then just drops into all of the existing running traffic. So I have a really short, quick iteration loop for developing on one part of the application. Compile times for the whole application might be several minutes. But if we're talking about just iterating on the one node you're currently developing, then from the time you finish typing a line of code to seeing the results of that node running, we often keep it under a second. You can recompile that one node and have it stood up and running, with messages traveling through it, almost instantaneously. Giving your developers really tight feedback loops like that is critical, and I think that's one of the challenges in a lot of bigger monolithic Rust applications.
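The per-node crate layout Carter describes can be sketched as a Cargo workspace where each node is its own binary crate depending on a shared common crate. The names below are hypothetical, not AMP's actual layout:

```toml
# Workspace root Cargo.toml (hypothetical layout)
[workspace]
members = [
    "common",           # shared message types and utilities
    "nodes/camera",     # each node is its own binary crate,
    "nodes/classifier", # so building one node only recompiles
    "nodes/sorter",     # that crate and its changed dependencies
]
```

With a layout like this, `cargo build -p classifier` rebuilds a single node while the rest of the simulated facility keeps running, which is what makes the sub-second iteration loop possible.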
Matthias
01:04:00
Do you have a shared types crate where you define the messages that go over the wire?
Carter
01:04:07
Yep, we have kind of one crate that has nothing but a bunch of structs with a bunch of derive macros on them. That one crate generates the struct definitions that all of the other crates use, as well as the TypeScript definitions that the front end uses. It also generates JSON Schema definitions that we use in other places. So one centralized types crate kind of untangles the dependency tree there. It does mean that we have a bottleneck in our compilation graph, but that crate is very quick to compile and doesn't change horrendously often.
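A centralized types crate like the one Carter describes could look roughly like this. The struct and field names are hypothetical; the interview does not name the specific derive crates, but in the ecosystem this is commonly done with serde for the wire format, ts-rs for TypeScript definitions, and schemars for JSON Schema, as noted in the comments:

```rust
// Sketch of a shared types crate (hypothetical names).
// In a real codebase these structs would additionally carry derives
// from external crates, for example:
//   #[derive(serde::Serialize, serde::Deserialize)]  // wire format
//   #[derive(ts_rs::TS)]                             // TypeScript defs
//   #[derive(schemars::JsonSchema)]                  // JSON Schema
// Every node crate (and the frontend codegen) depends on this one
// crate, so the message format is defined in exactly one place.

/// A classification result published by a vision node.
#[derive(Debug, Clone, PartialEq)]
pub struct Detection {
    pub object_id: u64,
    pub material: Material,
    pub confidence: f32,
}

/// Material classes a detection can be labeled with.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Material {
    Pet,
    Hdpe,
    Aluminum,
    Paper,
    Residue,
}

fn main() {
    let d = Detection {
        object_id: 1,
        material: Material::Aluminum,
        confidence: 0.97,
    };
    println!("{:?}", d.material);
}
```

Because only this crate defines the message shapes, a change to one struct propagates to every consumer (Rust nodes, TypeScript frontend, JSON Schema) through a single recompile of a small crate.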
Matthias
01:04:41
And finally, do you have a microservice called WALL-E? Because if not, then you should introduce one.
Carter
01:04:47
Why would we need one called... oh, because he does the trash. So our conference rooms at our headquarters are all named after robots. Like, WALL-E is the conference room next to my desk.
Matthias
01:04:56
Amazing.
Carter
01:04:57
We have a lot of WALL-E references, but we don't have a microservice named that. I will say, you know, the three hard problems in programming: off-by-one errors and naming. We deal a lot with naming things. Boy, it never gets easier, does it?
Matthias
01:05:12
It has become sort of a tradition around here to ask the guest at the end whether they had a message to the entire Rust community. Is there something that just comes to mind right now?
Carter
01:05:27
Yeah, and it's a really positive message. The Rust community has built something truly incredible. And I think, you know, sitting behind our keyboards and staring at our screens all day, it's easy to lose sight of the impact of better software in the real world. It's easy to forget that a slightly better programming language isn't just a nice-to-have thing that makes programmers' lives better. It's a thing that ends up having true real-world impact. And so, at least in a little bit of a way, every single person who's contributed to the Rust ecosystem, whether it's the language, the packages, the conferences, the documentation, or the incredible YouTube videos that make it really easy to teach people Rust, all of those things are helping achieve AMP's mission right now. Without the Rust community, it would have been harder for this recycling company to build better recycling facilities, and less material would have been recycled last year and the year before, if not for their efforts. So I just want to give a huge shout-out and a huge thank you to the entire community. You guys are making a positive impact in the world by making it easier to build better software that does have real-world impact, that does make a real-world difference, that does make the world a better place. And I almost feel bad that I get so much of that benefit and get to see it all. I'm doing everything I can to figure out how to give back to that ecosystem and continue the virtuous positive cycle. But, you know, it's incredible what's been built here. And it really does matter. And thank you.
Matthias
01:06:54
Where can people learn more about AMP Robotics if they got interested in the topic? Can you share some resources?
Carter
01:07:01
Yeah, so we changed our name, I think, literally last week. We dropped the word "Robotics" because we're trying to be more about the facilities now. That's our new goal. So the company is just called AMP now, which is fine. Not the hugest change, but we did just launch a brand new website. If you go to ampsortation.com, our new website, it really shows all of the different technologies we've built, and it shows our facilities. We just put up a couple of incredibly well-produced YouTube videos that give you tours through our facilities and show you what these air jet sorters look like and what our camera systems look like. It's the first time that we've really been able to publicly talk about what's inside these facilities, and it's incredible to see. So really, go check out ampsortation.com. The website looks slick. It's got a crazy scroll thing where, as you scroll through it, you zoom through the 3D model, which is a little gimmicky, but it's also really, really well done. And I'm just excited about it.
Matthias
01:07:59
Yes, check it out. Carter, thanks a lot for being a guest on the show. It was really incredible. I learned a lot. And thanks.
Carter
01:08:08
Thank you.
Matthias
01:08:10
Rust in Production is a podcast by corrode and hosted by me, Matthias Endler. For show notes, transcripts, and to learn more about how I can help your company make the most of Rust, visit corrode.dev. Thanks for listening to Rust in Production.