Agenda
Day 1
Registration, Ground Floor
Coffee served in Foyer
Running Start
Joran Dirk Greef | TigerBeetle
A Still-Secret Project!
Prof. Hannes Mühleisen | DuckDB
More information to follow at the end of May!
Building a Distributed Protocol
Dominik Tornow | Resonate HQ
Distributed protocols are the foundation of scalable and reliable systems — yet we often get lost in implementation details instead of grounding our designs in systems thinking. This talk offers a different path: we’ll explore how a small set of simple, well-crafted abstractions gives rise to complex distributed systems. We’ll walk through how we move from foundational ideas to working systems, how we reason across layers to build reliable systems from unreliable components, and how we ensure correctness through formal modeling and deterministic simulation testing. A talk for system thinkers and system builders who want to move beyond ad hoc solutions — toward understandable distributed protocols that power scalable and reliable distributed systems.

Distributed Async Await, a new programming model for the cloud, extends the simplicity of async-await to tame the complexity of the cloud. Building on the foundations of functions and promises, Distributed Async Await enables “simple code that just works”. In this talk, we will explore the theoretical and practical foundations that make this elegant approach possible. Gain a comprehensive understanding of async-await, functions, promises, and the vital concepts of concurrency and coordination in distributed systems. Take away actionable insights on how to replicate the delightful developer experience of traditional applications as you craft cloud-based, distributed applications. Design and develop cloud applications that are not only scalable and reliable but downright delightful to work on. Transform the way you approach cloud computing.
Coffee Break
Simplicity Is the New Black: Where Some Chase Scale for Scale’s Sake, Simplicity Is Your Competitive Edge
Stephanie Wang | MongoDB
Lately, we’ve seen a new wave of interest in distributing in-process, in-memory systems – projects like DeepSeek’s Smallpond (distributed DuckDB) and DataFusion for Ray are getting a lot of buzz. But let’s be honest: this isn’t a new trend. For decades, “going distributed” (i.e., horizontal scale-out via partitioning) has been the go-to move when things get big, or might get big someday. But here’s the thing: just because something can be distributed doesn’t mean it should be. In this talk, we challenge the idea that “distributed” is the right default. We’ll unpack what really happens when you scale out and show how those tradeoffs can crush performance and developer sanity if you’re not careful. Instead, we’ll explore how to make big problems small and only layer on distributed strategies when it’s clearly the right solution. This isn’t a talk against distributed systems – it’s a talk about earning them. You’ll walk away with a systems-thinking mindset that helps you scale with purpose, not panic. Because sometimes, the smartest way to go big… is to stay small… until you absolutely can’t.
Lunch (60min)
Big Data and AI at the CERN LHC: Navigating the Edge of Scale and Speed for Physics Discovery
Dr. Thea Klaeboe Aarrestad | ETH Zürich
The CERN Large Hadron Collider (LHC) generates an unprecedented O(10,000) exabytes of raw data annually from high-energy proton collisions. Managing this vast data volume while adhering to computational and storage constraints requires real-time event filtering systems capable of processing millions of collisions per second. These systems, leveraging a multi-tiered architecture of FPGAs, CPUs, and GPUs, must rapidly reconstruct and analyze collision events, discarding over 98% of the data within microseconds. As the LHC transitions to its high-luminosity era (HL-LHC), these data-processing systems — operating in radiation-shielded caverns 100 meters underground — must contend with data rates comparable to 5% of global internet traffic, alongside unprecedented event complexity. Ensuring data integrity for physics discovery demands efficient machine learning (ML) algorithms optimized for real-time inference, achieving extreme throughput and ultra-low latency.
Coffee Break
Introduction to Systems Programming
Loris Cro | Zig Software Foundation
Sometimes you hear about the amazing escapades of systems programmers who delve into the depths of a niche subject and save the day by fixing impossible bugs and increasing performance by orders of magnitude. These are all great adventures, but usually those are not stories where we feel we could be the protagonist, because most of us do not consider ourselves "systems programmers". In this talk I will tell you a different story about systems thinking at the application level, one that could very well have anybody in this room as its protagonist. Most importantly, I will tell you a story about software written by developers for developers. In other words, a story where we are both the protagonist and, at times, even the dastardly villain.
Building Software, Simply
matklad | TigerBeetle
One of the meta values of TigerBeetle is simplicity. Simplicity is hard, but it gets you all the nice things — performance, correctness, maintainability. In this talk, we'll uncover fundamental simplicity in how software is built, tested, documented, and released — seemingly "boring" aspects, which nonetheless are a foundation for everything else.
Wrap Up
Marina Pape | TigerBeetle
The Eye Filmmuseum remains open until midnight; attendees can stay on.
Opening Night Rooftop Event*
*Exclusively for Premium Ticket Holders, TigerBeetle Team and Speakers
Day 2
Registration, Ground Floor
Coffee served in Foyer
Running Start
Marina Pape | TigerBeetle
A Fresh Perspective on Fuzzing
Andrew Kelley | Zig Software Foundation
Have you ever heard of "beginner energy"? The idea is that when an enthusiastic newcomer approaches an age-old problem, although they have much to learn, they also bring new ideas and crucially, high expectations and a sense of optimism. If the person can learn wisdom from elders without becoming jaded themselves, they can be a key part of the movement to push the field forward. In this talk, I'll take you along my journey of discovering the world of fuzz testing, and while I still have much to learn, I'll share with you some new things that I've brought to the table, such as visualizations, interactivity, and toolchain integrations. Although it won't be groundbreaking, it will be entertaining for veterans and informative for newcomers.
New Shared-Log Abstractions for Modern Applications
Prof. Ram Alagappan | Co-Author of Protocol-Aware Recovery
Shared logs are at the heart of many modern applications. Every cloud provider today offers a shared-log service (e.g., AWS Kinesis, Google PubSub); open-source systems like Kafka, RedPanda, and others offer shared-log functionality; and many hyper-scalers use shared-log services for metadata. Perhaps surprisingly, despite years of research and the ubiquity of shared logs, all existing shared logs today suffer from high latencies. Our research group at the University of Illinois has been building new abstractions and designs to address the latency challenges in shared-log services. The first abstraction, LazyLog (SOSP 24), is a new shared log better suited for applications like message queues and event-driven databases that demand low-latency ingestion. The second one, SpecLog (OSDI 25), is a new shared-log design that reduces end-to-end latencies for critical applications like high-frequency trading, intrusion detection, and fraud monitoring. In this talk, I will describe the motivation, design, and benefits of these new shared logs.
Coffee Break
Lightning Talks
hosted by Natalie Vais | Spark Capital
Lunch (60min)
TBD
Amod Malviya | Udaan
Coffee Break
Jepsen 18: Wait, Are Databases Good Now?
Kyle Kingsbury | Jepsen
We trust our databases, queues, and other systems to store acknowledged writes, to serve them up later, and to isolate transactions from one another. But can we really trust them? Jepsen combines concurrent, generative tests with fault injection to measure distributed systems safety. We'll learn about Datomic, Bufstream, and TigerBeetle, and show how three unconventional systems ensure – or violate – key safety properties.
1000x: The Power of an Interface for Performance
Joran Dirk Greef | TigerBeetle
TBD
Wrap Up
Joran Dirk Greef | TigerBeetle
Follow @TigerBeetleDB for updates. Questions? Contact marina@tigerbeetle.com