PubNub with Stephen Blum
PubNub CTO Stephen Blum discusses how adopting Rust improved memory usage and performance compared to the company's C and Python implementations. He highlights Rust's versatility while emphasizing low latency and the importance of code simplicity.
2023-12-28 57 min
Description & Show Notes
In this episode, we are joined by Stephen Blum, the CTO of PubNub, a company that has built an edge messaging network with over a billion connected devices. Stephen explains that while message buses like Kafka or RabbitMQ are suitable for smaller scales, PubNub focuses on the challenge of connecting mobile devices and laptops at web scale. The aim is instant signal delivery at massive scale, with low latency as the priority for a seamless user experience. To achieve this, PubNub's system is globally distributed: it runs on AWS with Kubernetes clusters spread across all of Amazon's zones, and GeoDNS routes each user to the closest region for the lowest possible latency.

Stephen goes on to discuss the challenges they faced building the system, particularly around memory management and cleanup. Segmentation faults and memory leaks caused runtime problems, outages, and potential data loss; PubNub had to provision extra memory to compensate for the leaks and spend engineering time finding and fixing the bugs. C was efficient, but it came with significant engineering costs. As a way out, PubNub started adopting Rust, and when they replaced one service with a Rust implementation they observed a 5x improvement in memory usage and performance.

Stephen also talks about choosing programming languages for the platform and the difficulty of finding and retaining C experts. Java was ruled out because of its perceived academic nature, and Go didn't make the list of options at the time. PubNub does now run Go services in production, but rewriting part of their PubSub bus in Go performed poorly compared to the existing C system, so Rust has become the language of choice for new services, given its popularity and impressive results. The conversation turns to performance considerations with Python and the use of PyPy as a just-in-time compiler: PyPy improved performance but required a lot of memory, which can be expensive, whereas Rust provided a significant boost in both memory and performance, making it the favorable choice for PubNub. On provisioning, PubNub budgets to stay as close to actual demand as possible, using Kubernetes and Horizontal Pod Autoscalers (HPAs) to adjust resources dynamically based on usage.

Integrating new services into PubNub's infrastructure involves both API-based communication and event-driven approaches: frameworks like Axum handle the API side, while Kafka with Protobuf is used for event sourcing, with JSON still used in some cases. Stephen explains that they chose Protobuf for high-traffic topics and wherever stability is crucial. The primary customer-facing API is JSON-based, but PubNub recognizes Protobuf's superior performance and uses it where it pays off, especially for values that JSON spells out as comparatively large character strings, such as booleans; they also discuss the advantages of enabling compression alongside Protobuf. The team reflects on the philosophy behind exploring Rust's potential for profit and its use in infrastructure and on devices such as IoT hardware. Rust's ability to produce small binaries is highlighted, and PubNub sees it as their top choice for reliability and performance; they are developing a Rust SDK for customers using IoT devices.
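To make the API-based side of that integration story a bit more concrete, here is a minimal sketch of a service endpoint written with Axum. The route, payload shape, and dependency choices (axum 0.7, tokio, serde, serde_json) are assumptions made for the illustration, not PubNub's actual API.

```rust
use axum::{routing::post, Json, Router};
use serde::{Deserialize, Serialize};

// Hypothetical request/response types; field names are made up for the sketch.
#[derive(Deserialize)]
struct PublishRequest {
    channel: String,
    message: serde_json::Value,
}

#[derive(Serialize)]
struct PublishResponse {
    accepted: bool,
}

// The handler deserializes the JSON body, would hand the message off to the
// bus in a real service, and returns a JSON response.
async fn publish(Json(req): Json<PublishRequest>) -> Json<PublishResponse> {
    println!("publish to {}: {}", req.channel, req.message);
    Json(PublishResponse { accepted: true })
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/publish", post(publish));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```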
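On the Protobuf-versus-JSON point, the sketch below shows how a value that JSON spells out as text (a boolean) collapses to a couple of bytes in Protobuf. The message type and fields are invented for the example and encoded with the prost, serde (with derive), and serde_json crates; this is not PubNub's schema.

```rust
use prost::Message;
use serde::Serialize;

// A toy presence-style event; the prost derive lets us define the message
// directly in Rust without a .proto file.
#[derive(Clone, PartialEq, Message, Serialize)]
struct PresenceEvent {
    #[prost(string, tag = "1")]
    channel: String,
    #[prost(bool, tag = "2")]
    online: bool,
}

fn main() {
    let event = PresenceEvent { channel: "chat-42".into(), online: true };

    // JSON writes out field names and the literal `true` as text...
    let as_json = serde_json::to_vec(&event).unwrap();
    // ...while Protobuf encodes the bool as one tag byte plus one value byte.
    let as_proto = event.encode_to_vec();

    println!("json: {} bytes, protobuf: {} bytes", as_json.len(), as_proto.len());
}
```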
The open-source nature of Rust, the ease of integrating it into existing projects, and its role in developing open standards are also praised. While acknowledging downsides like occasional instability and longer compilation times, Stephen remains impressed with Rust's capabilities. The conversation covers stability and safety in Rust, with Stephen expressing confidence in the compiler even when depending on alpha-quality software and packages, and noting that relying on Rust's native primitives for concurrency adds to that confidence (a small example follows below). The Rust ecosystem is seen as providing adequate coverage, although packages like librdkafka, which are pre-1.0, can be challenging to set up or deploy.

Stephen emphasizes simplicity in code and avoiding excessive abstraction, while acknowledging the benefits of features like generics and traits. He recommends resources such as David MacLeod's book, which focuses on learning Rust without overwhelming complexity. Expanding on knowledge sharing within the team, Stephen discusses how Rust advocates inside the company have encouraged its use, and the possibilities it holds for AI infrastructure platforms: they believe Rust could improve performance and reduce latency, particularly for CPU-bound tasks in AI, and they note its adoption in the data science field, for example around the Parquet data format. He highlights the importance of tooling improvements, strict standards, and eliminating unsafe code, and would like a linter that enforces a simplified subset of Rust to improve readability, maintainability, and testability (a rough sketch of such a policy appears below). On the balance between functional and object-oriented programming in Rust, he suggests object-oriented structure for organizing larger-scale code and functional paradigms within functions.

Onboarding Rust engineers is also addressed: should PubNub prioritize candidates with prior Rust experience, or train people who are already skilled in another language on the job? Recognizing the shortage of Rust engineers, Stephen encourages anyone interested in Rust to pursue a career at PubNub, pointing to resources like the company's website and LinkedIn page for tutorials and videos. He closes by emphasizing how important latency is to their edge messaging technology and invites listeners to try it out.
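As a small example of the "native primitives for concurrency" point above, the snippet below shares a counter across threads with nothing but the standard library's Arc and Mutex. It is a generic illustration, not PubNub code: the compiler simply refuses any path to the counter that does not go through the lock.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared state behind standard-library primitives: Arc for shared
    // ownership, Mutex for exclusive access.
    let connected = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let connected = Arc::clone(&connected);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // No way to reach the counter without taking the lock.
                    *connected.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(*connected.lock().unwrap(), 4_000);
}
```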
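And as a rough sketch of what a "simplified Rust" policy can look like with today's tooling, the crate-level attributes below use existing rustc and Clippy lints to forbid unsafe code and discourage panicky shortcuts. The particular lint selection is a hypothetical policy choice for illustration, not something prescribed in the episode.

```rust
// Crate-level attributes approximating a "simplified Rust" policy.
#![forbid(unsafe_code)] // no unsafe blocks anywhere in the crate
#![deny(clippy::unwrap_used, clippy::expect_used)] // force explicit error handling
#![warn(clippy::cognitive_complexity, clippy::too_many_lines)] // keep functions small

fn main() {
    // Under `cargo clippy`, an `.unwrap()` here would now be rejected,
    // nudging the code toward returning and handling `Result`s instead.
    println!("lints in effect");
}
```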
About PubNub
PubNub is a global Data Stream Network (DSN) and realtime infrastructure-as-a-service company. PubNub's primary product is a realtime publish/subscribe messaging API built on a global data stream network which is made up of a replicated network with multiple points of presence around the world. PubNub's primary headquarters are in San Francisco, California, with additional offices in Mountain View, California, Eindhoven, Netherlands, and Cambridge, UK.
About Stephen Blum
Stephen Blum is the founder and CTO of PubNub. He has worked in the realtime communications space for over 15 years, developing technologies that have been used by companies like Yahoo!, AOL, and Google. He is the author of several books on realtime communications and has been awarded several patents for his inventions.
Links
- PubNub: https://www.pubnub.com/
- PubNub on Twitter: https://twitter.com/pubnub
- Stephen Blum on Twitter: https://twitter.com/stephenlb
- Stephen Blum on LinkedIn: https://www.linkedin.com/in/stephenlb/