Getting There
Delayed, frozen, and already talking about Ginkgo
I've never been to Boston before. Not really, anyway. I've bounced through Logan on connections and admired the harbor from the air, but I never had the opportunity or reason to stay. And for all the friends who had told me how lovely it is - it certainly (unfortunately) was not the case this time.
For readers who were there, or for anyone who paid attention to the North American news cycle in early 2026 - you may recall a certain "Apocalyptic Polar Vortex" that centered on the Northeast. I didn't need the news to tell me what I could feel on my face and through my coat; I want to go back in the summer.
For me, however, the conference started before I got to Boston, at PIT's Gate A6. We had boarded and deplaned 3 times because every time we got to the runway, BOS called and told us they were initiating another ground hold. Josh Kangas, my PI and mentor from CMU (who happened to be on the same flight) remarked after hour 5 that he wished he had just driven. He introduced me to Christopher Langmead, who felt the same. We were able to use the 5 hours to talk shop, discussing the OpenAI × Ginkgo Bioworks paper that had just dropped. Chris called it low-hanging fruit - not wrong - but that it was a sign of things to come - also not wrong. He remarked how he wished that more tech companies could understand that biology and chemistry are orders of magnitude more complicated than they think they are. There's no bitter lesson for biology yet - not that we've found. Still, the framing stuck with me for the rest of the week: low-hanging fruit, but a sign of things to come.
Sunday
The old guard and the GNU
The conference technically starts on Monday, but SLAS offers short courses on Sunday. I attended Streamlining Your Lab With Python, taught in part by Mark Russo and William Neil, men who have the energy of people who really, deeply know what they're doing and don't particularly need you to know that they know. Strong opinions on architecture that were formed before I was in high school and have only gotten more entrenched since. I loved talking to them. They remind me of my Dad.
The material was foundational - a fair bit below where I work day-to-day - but I picked up a few things, and I was glad to see PyLabRobot make it onto the last couple of slides. It felt like an acknowledgment from people who don't give acknowledgments lightly. What struck me more, though, was the room itself. Several students admitted, some sheepishly, that they'd signed up for the course to understand what their coding agents were building for them. I saw at least a few Claude Code instances open on laptops around me. The instructors had used LLM chatbots (they told us as much). They probably wouldn't pay for one. Their students would, and did, and were using them to write the very code the instructors were trying to teach - and had shown up anyway, trying to close the gap between what the agent produced and what they understood.
Nobody was hostile about any of this. It was just a distance, visible and quiet, between two groups of people who both want better lab software and are arriving at it from directions that don't quite face each other yet.
The delays had cost me dinner with Josh. I grabbed a pizza and a pint at the hotel bar instead, got good sleep, and spent an hour before bed working on a reinforcement learning agent I've been training to play a card game I invented (more on that elsewhere). Good way to decompress.
Monday
The floor is electric with the sound of robots
I had come in with a full calendar of talks scheduled back to back. I abandoned most of them. The expo floor is where SLAS actually happens, and it took me half of Day One to figure that out.
SLAS gave me a warm welcome and I was glad for it. Then I went to find the Trilobio booth.
The show floor. I spent more time here than I planned, and it was the right call.
I had noticed them setting up on Sunday. About a year ago I had interviewed there - first with Maximilian Schommer, co-founder and head of robotics. We went 45 minutes over our allotted time just talking about where the field was heading. Afterwards came a technical round I was underprepared for. I didn't get the role. Seeing the booth half-assembled had given me a feeling I couldn't quite name. Something like unfinished business, I guess. So I walked over, found a clear angle on their robot demo, and watched for a while to see what had changed over the last year.
"Is that Jon Potter?" Max was on the other side of the glass. "I've been following your journey on LinkedIn, man - how's it going?"
Warmth and relief, in that order. We spent the next forty minutes catching up. He mentioned - with what I choose to read as genuine warmth rather than conference politeness - that the role I'd interviewed for no longer exists at the company anyway; the direction changed. He's working now on solvers and schedulers for their parallelization problem, which is exactly the kind of thing I find interesting. The Trilobio radial-axis liquid handler is slow, but the architecture is the point: modular units that link and stack, designed from the start to be massively parallelizable. Get it faster and it becomes a serious contender. I still really admire what they're building.
The Opentrons Flex stacker. The conveyor concept has more headroom than the current implementation suggests.
I spent time at the Opentrons booth as well. The Flex stacker is interesting hardware - it moves labware on and off deck via conveyor belt, which opens up real throughput gains for labs that are deck-space constrained (which is most of them). The concept has more headroom than the current implementation suggests: mirror the design, connect two Flex decks, and you have a transfer line between instruments. The hardware is almost there. The scheduling and software layer is where the hard work lives, and that's a difficult problem (and a fun one, to the right people). I'm watching this space closely - the stacker would be useful for my lab today, and what it could become in two or three years is more interesting still.
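To see why the scheduling layer is where the work lives, here's a toy sketch (all names and numbers are mine, nothing from Opentrons): two decks joined by a conveyor, each plate needing a step on deck A, a fixed transfer, then a step on deck B, scheduled with the simplest greedy policy. Even this tiny flow-shop already forces you to reason about which resource is the bottleneck.

```python
# Toy transfer-line model (hypothetical, not any vendor's API): two decks
# joined by a conveyor. Each resource handles one plate at a time; we
# schedule greedily in arrival order.

CONVEYOR_TIME = 30.0  # seconds per transfer, assumed


def schedule(plates):
    """plates: list of (name, seconds_on_A, seconds_on_B).
    Returns (makespan, per-plate finish times) under a greedy policy."""
    free_a = free_conv = free_b = 0.0
    finish = {}
    for name, t_a, t_b in plates:
        start_a = free_a                  # deck A works in arrival order
        free_a = start_a + t_a
        start_c = max(free_a, free_conv)  # wait for the conveyor to clear
        free_conv = start_c + CONVEYOR_TIME
        start_b = max(free_conv, free_b)  # wait for deck B to clear
        free_b = start_b + t_b
        finish[name] = free_b
    return max(finish.values()), finish
```

Swap the greedy ordering for a real solver and add plate priorities, and you arrive at exactly the kind of problem the stacker's software layer has to solve.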
Tuesday & Wednesday
Bots that can see, labs in the cloud
On Tuesday, the Opentrons CEO gave a joint talk with NVIDIA. The through-line was AI-assisted protocol generation and a forthcoming camera system for mid-run QC using a vision-language model - 2026-2027 target, they said, with a VLA to follow in 2027-2028. The QC direction makes sense to me. Closed-loop error detection is exactly what makes automation trustworthy rather than just fast, and anyone who has run plates knows that the failure modes are mostly visible if something is watching. The open question is what it actually takes to make a VLM reliable enough to trust mid-run. Glare, meniscus, bubbles, edge effects - these are harder than they look in a demo, and the gap between "works in controlled conditions" and "works on a Tuesday afternoon when the light changed" is where most computer vision systems earn their keep or don't. The synthetic training data partnership with NVIDIA's world model is an interesting approach to that problem. What does the error rate look like on real plates, under real lab conditions? That's the number I want to see.
What I want to see from the broader ecosystem, and what I think the Opentrons roadmap is gesturing toward even if it isn't quite there yet: interoperability. Every platform is still an island. The labs that are clearly winning - and you could feel it on the floor - are the ones that have figured out how to bridge instruments rather than just optimize within a single workcell. The hardware is good enough. The scheduling and coordination layer is where the next decade of progress lives.
The Ginkgo Bioworks lab tour on Wednesday. The MagLev plate movers alone were worth the trip.
Wednesday included a tour of Ginkgo Bioworks, which was the highlight of the technical content for the week. MagLev plate movers. Isolated instrument racks. Tracks that open up the whole floor rather than a single deck, giving you parallelization that a workcell with a single arm simply cannot match. The OpenAI collaboration on experiment planning felt real and grounded. Seeing a working system does more for me than a paper. Ginkgo is also moving toward Cloud Lab services alongside their contractor model for helping other organizations build out their own automation infrastructure, which is a smart hedge on where the market goes. The whole operation has the energy of those RC train sets that rich retired engineers build in their basements. Made me think about Factorio. I think I fall into a very particular cluster of people who enjoy the Sisyphean tasks found in the field of logistics.
The Margins
Speakeasies, lobster rolls, education, and standardization
The best conference content tends to happen in the gaps between the scheduled parts. Tuesday night, the PyLabRobot team invited our group to a speakeasy. The invite came through channels that did not include Discord, because they have principles about that sort of thing. I respect the commitment to open-source even where I'd make a different call. These are good hackers building toward a hardware-agnostic control layer for liquid handlers - a world where the driver doesn't care which OEM made the pipette. The more traction they get, the better it is for everyone trying to do something the manufacturer didn't anticipate. Which, in my experience, is most of the interesting work.
PyLabRobot speakeasy, Tuesday night.
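The hardware-agnostic idea is easy to sketch in miniature. This is my own toy illustration of the concept, not PyLabRobot's actual API: protocol code is written once against an interface, and any vendor backend that satisfies it plugs in underneath.

```python
from typing import Protocol


class PipetteBackend(Protocol):
    """Minimal driver interface (hypothetical). A real control layer
    abstracts far more: deck layout, tips, labware, error recovery."""

    def aspirate(self, well: str, volume_ul: float) -> None: ...
    def dispense(self, well: str, volume_ul: float) -> None: ...


class SimulatedBackend:
    """Stand-in backend that just records operations."""

    def __init__(self) -> None:
        self.log: list[str] = []

    def aspirate(self, well: str, volume_ul: float) -> None:
        self.log.append(f"aspirate {volume_ul} uL from {well}")

    def dispense(self, well: str, volume_ul: float) -> None:
        self.log.append(f"dispense {volume_ul} uL into {well}")


def transfer(backend: PipetteBackend, src: str, dst: str, volume_ul: float) -> None:
    # Protocol logic written once against the interface: the same call
    # runs on any OEM whose driver satisfies PipetteBackend.
    backend.aspirate(src, volume_ul)
    backend.dispense(dst, volume_ul)
```

The payoff is exactly the one above: the day your protocol outgrows one manufacturer's deck, you swap the backend, not the protocol.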
Hamilton rented out the Boston science museum for Monday night. The exhibit rooms were open.
Josh Kangas and Kennedy McDaniel Bae hosted a forum on the state of education in lab automation, and I agreed with nearly everything said. This field has no standard pipeline for how people end up in it. No consistent job titles, no clear signal to prospective engineers that the work even exists. "Bioinformatician," "Automation Engineer," "Lab Scientist II" can all describe the same role, at the same company, depending on who wrote the job posting. The people doing this work are hard to find - and the field hasn't figured out yet how to reach them before they end up somewhere else entirely. But that's a solvable problem. I'm glad people are naming it at a conference this size. I recently saw Kennedy post that they had received some funding for this initiative, which made me smile.
Hamilton rented out the Boston science museum for Monday night and let several hundred automation engineers loose in it with good bourbon and lobster rolls. (I shudder at the price tag. What a monumental task Opentrons and Trilobio are up against; what Hamilton can afford. Not to mention the other 15 new liquid handler start-ups I saw on the floor.) The exhibit rooms were open. I spent most of the evening with Josh and Melody Wang from Generate Biomedicines - she was my mentor for my first CMU capstone - trading notes on the week's best ideas and its worst ones. Some drama about our respective companies. Conversations that only happen when everyone is slightly tired and slightly well-fed and the planetarium is right there.
A Hamilton liquid handler demonstration from the expo floor. The precision is hard to appreciate until you watch it move.
My Work
A frugal self-driving lab, presented
I presented a poster on work done with Josh Kangas, John Kitchin, and Stefan Bernhard at Carnegie Mellon. The short version: we built a closed-loop self-driving lab on an Opentrons OT-2 - six $70 cameras on 3D-printed mounts, a persistent protocol runtime that avoids the six-minute reboot penalty, and a Gaussian process that proposes the next experiment. We ran it with ~100 high school students using dye color matching and an acid-base Battleship game.
Students outperformed the AI in round one, consistently. By rounds three and four, the GP had the edge. The Battleship bracket ran fully automated. The point wasn't that the AI wins - it's that the loop works, it's affordable, and it teaches active learning in a way that's tangible. Open-source code and CAD are available.
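The loop in miniature - a toy sketch of the idea, not our actual code (kernel, lengthscale, and acquisition are all stand-ins): a Gaussian process scores candidate recipes drawn on the simplex (dye volumes summing to a fixed total) and proposes the next experiment by upper confidence bound.

```python
import numpy as np

rng = np.random.default_rng(0)


def rbf(A, B, lengthscale=0.25):
    """Squared-exponential kernel between two sets of recipes."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)


def gp_posterior(X, y, Xs, noise=1e-4):
    """Exact GP posterior mean and std at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(np.diag(rbf(Xs, Xs) - Ks.T @ sol), 0.0, None)
    return mu, np.sqrt(var)


def propose_next(X, y, n_candidates=2000, kappa=2.0):
    """Next recipe: Dirichlet samples respect the simplex constraint
    (volumes sum to the total); UCB trades exploration for exploitation."""
    cand = rng.dirichlet(np.ones(X.shape[1]), size=n_candidates)
    mu, sd = gp_posterior(X, y, cand)
    return cand[np.argmax(mu + kappa * sd)]
```

Run it with a score like negative color distance to the target and the proposals tighten around the best mix within a few rounds - which is the whole lesson the students were meant to take away.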
A Frugal Self-Driving Lab on Opentrons OT-2
Closed-loop color matching and acid-base "Battleship" on a low-cost OT-2 platform. Six fixed cameras on 3D-printed mounts, a persistent runtime architecture, Gaussian-process active learning with simplex-constrained recipe volumes. Deployed for ~100 high school students in 2025.
Students outperformed the AI in early rounds; the GP gained the edge by rounds three and four. Battleship ran a full single-elimination bracket with automated scoring. Designed to generalize across assay-like tasks without expensive optics or vendor-locked workflows.
Read the full abstract →

No delays on the flight home. I was tired in the way that comes from five days of conversations that actually went somewhere, with people who are building things they believe in. This field is smaller than it looks from the outside and larger than it feels when you're in it day to day. The people doing the most interesting work are mostly findable if you know where to look: they show up when the PyLabRobot team sends an invite, they argue about job titles in an education forum, they eat lobster rolls in a science museum and talk about their companies until the bourbon runs out.
I'm glad to be among them. Boston in February: go back in summer. SLAS: go back next year.