Case Study on Benchling

2021-07-11 16:00 · axial.substack.com

Surveying great inventors and businesses


Benchling is a biological design tool. Saji and the team have done an incredible job building a piece of software initially focused on a low-value academic market and then transitioning it toward drug development and industrial biotechnology. I was playing basketball at the RSF on Saturday wearing my old Benchling shirt from 2014. The shirt prompted a Berkeley biophysics grad student to strike up a conversation. He was working on predicting transcription factor dynamics (mainly on/off binding) - the interesting part was how he used Benchling to carry over some of the standard parts he created during his undergraduate career into his PhD work. This simple example shows the power of encoding network effects into a life sciences platform. As more people start using the product - for Benchling the editor tool, for Ginkgo the biofoundry, for Twist DNA synthesis - the experience, by some metric, gets better.

Benchling was founded in 2012 by Sajith Wickramasekara and Cory Li, two students at MIT with research experience. Saji had actually won the Siemens Competition before going to MIT. With these experiences, the vision was premised on seeing how outdated the software used by academic labs was and wanting to build something better. Benchling initially was a better plasmid editor - almost all labs were using the ApE tool, which worked, but its UX was rudimentary and its portability poor. When Benchling came out in 2012, it was a pleasure to edit a plasmid on; personally, it made my life a lot easier. In 2014/2015, the company started raising capital and thinking about using its beachhead in the academic market to get into industry. At the product level, CRISPR gene editing was taking off around this time - Benchling’s next feature was a guide design tool. Importantly, Ashutosh Singhal was brought on as a co-founder. Now, Benchling is rapidly growing with a wide customer base, but this was all premised on a simple plasmid editor focused on academic labs. By starting in this very poor market - academics are probably the worst customers to sell to, very cheap and hyper-rational - Benchling validated its biology-by-design model, with the long-term vision of automating every task in a lab.

  1. Benchling is using its increasing dominance in biological design to gain a beachhead in lab automation - setting itself up for future wars to own the mindshare of life scientists.

  2. Three important drivers of Benchling’s growth and future are the spend on human labor, how that labor’s time is spent, and how much of that work is actually reproducible.

  3. This has set Benchling up to design a business model that is unique in life sciences, if not for software in general; nevertheless, with its scale, the company can potentially show a new path for building software-focused life sciences companies and enable entirely new workflows in drug development and industrial biotechnology.

Technology

The laboratory notebook is still deeply stuck in the past. Often, the lab notebook is the only paper component of an experiment. It is the equivalent of a software engineer writing code on paper and transcribing it by hand into a machine each time it needs to run. If the lab is to become increasingly automated, having a ledger of experiments and their specifications is pretty important:


Source: Benchling

Benchling has built out a product suite similar to GitHub’s. Biologists upload their designs and experimental instructions (e.g. cloning, gene editing) the way software engineers share their code on GitHub. However, Benchling takes this a step further - mainly because life sciences still hasn’t been completely transformed by software - by allowing biologists to edit their “code” within the product. Whereas GitHub was mainly about sharing in an environment that already had plenty of text editors (Emacs versus Vim), Benchling enables both sharing designs and operating on them, because there was little viable competition for either at the time of founding. Centered around the lab notebook, Benchling has built a seamless piece of software that aggregates data often siloed off between people and institutions and, in the long term, allows version control for the life sciences. This is really important as software begins to drive versioning of biology in general - making an equivalent of Nike or iPhone updates for therapeutics, biomaterials, and more:

Source: Benchling
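
To make the version-control analogy concrete, here is a minimal sketch in Python of a git-like, content-addressed history for a sequence design. The DesignVersion and DesignHistory classes are hypothetical illustrations, not Benchling’s actual data model or API.

```python
# A minimal sketch of "version control for biology," assuming a git-like,
# content-addressed history of sequence designs. Hypothetical illustration,
# not Benchling's actual data model or API.
import hashlib
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class DesignVersion:
    sequence: str                 # plasmid sequence, e.g. "ATGGCC..."
    message: str                  # human-readable description of the edit
    parent: Optional[str] = None  # ID of the previous version, None for the first

    @property
    def version_id(self) -> str:
        # Content-addressed ID, analogous to a git commit hash
        payload = f"{self.parent}:{self.message}:{self.sequence}".encode()
        return hashlib.sha1(payload).hexdigest()[:12]


@dataclass
class DesignHistory:
    versions: Dict[str, DesignVersion] = field(default_factory=dict)
    head: Optional[str] = None

    def commit(self, sequence: str, message: str) -> str:
        # Each edit becomes a new, traceable version linked to its parent
        version = DesignVersion(sequence=sequence, message=message, parent=self.head)
        self.versions[version.version_id] = version
        self.head = version.version_id
        return self.head


history = DesignHistory()
history.commit("ATGGCC", "initial backbone")
history.commit("ATGGCCGAATTC", "add EcoRI site for cloning")
print(history.head, "->", history.versions[history.head].message)
```

Because every version points to its parent, a design shared between collaborators (or carried from an undergraduate project into a PhD) keeps its full edit history.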

Benchling’s core product offering is a set of single-purpose tools for:

  1. Analyzing gels

  2. Cloning

  3. Picking colonies to then clone

  4. Design of primers and guides

  5. Experimental tracking and workflow mapping

  6. Inventory tracking

These tools are connected to the lab notebook to automate many experimental design tasks. A scientist creates an experiment, designs its various parts, executes the experiment, and records the results. Right now, everything but the execution is done on Benchling. Importantly, the product tracks everything. Most research labs have been beholden to the “hit by the bus” phenomenon - many paper notebooks are written in a way that only the author can successfully decode the results. If they were ever hit by a bus, a lot of their discoveries would probably just be lost. Also, if their intentions were more nefarious, a paper notebook can be used to cover up deceptive results. Historically, a researcher would write observations in a notebook, paste pictures of gels into it, track inventory in a spreadsheet (maybe alongside a common lab notebook or folder), and use multiple applications to analyze their experiments. Researchers really love Benchling because the product puts all of that activity into one tool. Unifying disparate tools is a key lever for Benchling that strengthens its network-effects moat. More features attract more people within a lab and new customers, making sharing between individuals and groups easier. As more users are onboarded, companies have a single product to preserve institutional knowledge and intellectual property:

Source: Benchling
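
To illustrate the unification described above, here is a rough sketch of an experiment record that keeps the protocol, linked designs, inventory, and results in one structure. The Experiment class and its fields are hypothetical, not Benchling’s schema or API.

```python
# A minimal sketch of the "one unified record" idea: an experiment object that
# links the notebook entry, designs, inventory, and results instead of
# scattering them across paper, spreadsheets, and separate analysis apps.
# Hypothetical illustration, not Benchling's schema or API.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Experiment:
    title: str
    protocol: str                                             # notebook-style instructions
    designs: List[str] = field(default_factory=list)          # linked plasmid/guide IDs
    inventory: Dict[str, str] = field(default_factory=dict)   # reagent -> lot/location
    results: List[Dict] = field(default_factory=list)         # structured readouts

    def record_result(self, kind: str, **data) -> None:
        # Results stay attached to the designs and protocol that produced them,
        # so anyone in the lab can reconstruct the experiment later.
        self.results.append({"kind": kind, **data})


exp = Experiment(
    title="EcoRI insert cloning",
    protocol="digest backbone, ligate insert, transform, pick colonies",
    designs=["plasmid_v2_abc123"],
    inventory={"EcoRI": "freezer 2, box 4", "competent cells": "lot 0142"},
)
exp.record_result("gel", lane=3, band_kb=5.2, image="gel_scan_017.png")
print(exp.title, exp.results)
```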

With this product, Benchling has been able to attract over 100K individual users, setting it on the path toward the endgame of Saji’s vision of automating the lab. Right now, Benchling helps automate every task except execution for a life sciences experiment. Over the next decade, Benchling will increasingly move toward execution within its product, and unlike companies like Synthace and Elemental Machines, Benchling has a large and growing user base to rely on. These users are using Benchling to construct templates for plasmid editing, CRISPR guides, and other activities, which is creating standardized experimental instructions for the instruments themselves. Where a company like Synthace’s advantage is writing drivers for more machines, Benchling is owning the designs and instructions to improve the machine readability of the data itself. Benchling would be wise to copy parts of UiPath’s playbook - roughly, where Veeva built out a large niche in life sciences by somewhat copying Salesforce, Benchling could do something similar with UiPath. So Benchling, along with many other companies, will drive the increasing interoperability of life sciences work.
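
To make the idea of standardized, machine-readable instructions concrete, here is a rough sketch of what such a protocol template might look like, assuming a simple JSON-style structure that an instrument driver could consume. The schema, step names, and identifiers are hypothetical, not Benchling’s or Synthace’s actual formats.

```python
# A minimal sketch of a machine-readable protocol template. The schema and
# step names are hypothetical illustrations of the concept, not any vendor's
# actual format.
import json

pcr_protocol = {
    "name": "colony_pcr_v1",
    "inputs": {
        "template": {"type": "colony", "source_plate": "PLATE_A"},
        "primers": ["fwd_primer_001", "rev_primer_001"],
        "master_mix": {"volume_ul": 12.5},
    },
    "steps": [
        {"op": "transfer", "from": "master_mix", "to": "pcr_plate", "volume_ul": 12.5},
        {"op": "transfer", "from": "PLATE_A:A1", "to": "pcr_plate:A1", "volume_ul": 2.0},
        {"op": "thermocycle", "program": [
            {"temp_c": 98, "seconds": 30},
            {"cycles": 30, "block": [
                {"temp_c": 98, "seconds": 10},
                {"temp_c": 60, "seconds": 20},
                {"temp_c": 72, "seconds": 30},
            ]},
            {"temp_c": 72, "seconds": 120},
        ]},
    ],
}

# Because the steps are structured data rather than free-text notebook prose,
# a driver for a liquid handler or thermocycler can validate and execute them.
print(json.dumps(pcr_protocol, indent=2))
```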

Ultimately, Benchling can help increase the efficiency and scale of life sciences. Right now most lab work is labor intensive - monitoring an LC or doing a mass spec run; even a PCR can take a few hours. Automating liquid handling helps but doesn’t solve a lot of edge cases, so most lab work is often still bespoke. This leads to various inefficiencies in the field - lack of reproducibility, failure to run every needed control. Overall, many life scientists are still spending too much time on mechanical tasks rather than deeply thinking about solving new problems. Benchling and other companies are helping drive the abstraction of research from physical execution to protocol-level descriptions. This is going to take a very long time, so companies like Benchling are focused on feasible and valuable use cases now in order to be around when more lab tasks are actually tractable for automation. If anybody is going to drive this forward, Benchling has a great shot at making a dent, especially given its strong culture and history:

Source: Benchling

Market

Benchling right now is addressing mainly research and discovery work. There’s well over $100B at stake in clinical work, but that’s a different product set with a strong incumbent called Veeva. The R&D market for life sciences generates at least $40B in annual sales, most of it for preclinical drug development. This market is being influenced by a lot of forces, which may or may not matter for Benchling. The company’s market opportunity is centered around automation - design first, then execution - and the three market drivers for Benchling are the spend on human labor, how that time is spent, and how much of that work is actually reproducible:

Source: EY, Nature

Business model

Benchling’s business model is very consumer-oriented - focused on user growth, the number of designs uploaded, everything pointed toward scale. This approach is unusual among software companies in life sciences. Some of them end up trying to become drug companies themselves, where they think they can generate better economics, and others limit themselves to a few large customers - which is completely logical: with maybe 500 customers in life sciences (versus thousands in general enterprise software) that can afford $10M in focused software spend beyond, say, Workday/Salesforce, many companies measure themselves by different metrics than Benchling does.

By focusing on its number of users instead of squeezing out every last dollar from a relatively limited customer base, Benchling is achieving a scale that similar products will probably never reach - simply put, creating a network effect, the most powerful moat for software companies. This is probably going to lead to some interesting battles as Benchling swings this model into sub-fields with capable incumbents. Benchling is moving into LIMS, where there are some interesting companies - how much does a large user base matter there? Similarly for clinical trial software - does having a lot of scientists onboarded help dominate that market? By funneling customers through its academic product, Benchling has been acquiring higher-value customers. A lot of software tools comparable to Benchling, and in life sciences generally, focus solely on enterprise contracts. This can be valuable depending on the use case and product. However, by taking a road few have traveled successfully, Benchling took on a massive risk that a large number of users would one day translate into sales - a similar risk to the one Dropbox and many other software companies of the last decade took:

Source: Benchling

With revenues tripling in 2018, Benchling has many lessons to offer and is creating new opportunities for companies to build on top of:

  1. Showing that software sales in life sciences can increasingly go through the researcher - a phenomenon well known in other industries; Twilio has executed the developer-oriented approach the best. Jeff is a great guy too.

  2. Making a large user base in biology valuable by tying the value of the product to the number of people using it. A lot of tasks in life sciences are still not this way - a massive opportunity.

  3. Unbundling the biofoundry - companies like Benchling, Synthego, Culture, and others are showing that software can connect various experiments instead of relying on a centralized facility. Still a lot of work here to be done.

  4. Linking machines together through standardized data instead of creating custom packages to connect a very diverse set of firmware.

  5. An opportunity for Benchling and others to show that software and its scale can compel companies to provide a cut of the drug or product backend. I hope this happens - this will reconfigure a lot of investment theses.

  6. Enabling scalable parts construction and testing for drug development, such as CARs and antibodies - an opportunity companies like Serotiny, Asimov, and Distributed Bio are taking on.


The Scripps Research Institute

The Scripps Research Institute is an institution completely focused on life sciences research and advancing medical work. Originally financed by Ellen Browning Scripps (of a vast newspaper fortune) in 1924 as a medical center - she was inspired by the discovery of insulin to treat diabetes - and spun off as a separate research institution in 1993, Scripps has been at the forefront of using new tools to make medical breakthroughs and discover new drugs. With a high concentration of world-class labs and an entrepreneurial culture, Scripps has spun out countless companies and driven the approval of multiple new medicines (the most recent from the Kelly and Powers Labs). Given the quality of the culture, Scripps is positioned to support important companies.

Schultz Lab

Diversity-based approaches in chemistry.

Recent

  • Peter Schultz is a legend in chemical biology. He worked with Christopher Walsh (who worked with E.O. Wilson), so it was only natural for Schultz to pioneer the combination of chemistry with evolution. Schultz has invented various methods that are taught in every chemical biology graduate course and mentored some of the field’s leaders, from David Liu and Chris Anderson to Alice Ting and Kevan Shokat. Peter has been able to form a string of companies around his lab’s work, from Wildcat Discovery Technologies to Ambrx and Symyx Technologies. Schultz got his PhD from Caltech in 1984 and moved to UC Berkeley, where he pioneered the field of combinatorial chemistry - testing millions of molecules for drug-like properties in a high-throughput manner. He joined Scripps in 1999 to focus more on translational research, leading the Genomics Institute and Calibr along the way.

  • Two recent papers using genomic recoding to show that unnatural amino acids are a powerful way to improve protein stability. GRO and Synthorx are probably the most rigorous companies using these tools for new medicines: https://pubs.acs.org/doi/abs/10.1021/acschembio.9b00002 and https://pubs.acs.org/doi/abs/10.1021/jacs.8b07157

  • Characterizing the secretome, the set of secreted proteins, using barcodes and next-generation sequencing (NGS) to identify hits - https://cell.com/cell-chemical-biology/retrieve/pii/S2451945617301800

Past

Sharpless Lab

Selectively controlling chemical reactions.

Recent

  • Sharpless won a Nobel for pioneering work in stereoselective chemistry that formed the basis for click chemistry. He trained at Stanford, where he worked on cholesterol biosynthesis with Eugene van Tamelen and organometallic chemistry with James Collman, and did postdoctoral work with Konrad Bloch researching enzymes. In 1970, he joined the MIT faculty, went to Stanford after seven years, and came back to MIT after three years at Stanford. His early career focused on asymmetric epoxidation - transforming the flat C=C bond into a three-dimensional, chiral product. One of his students, Eric Jacobsen (who taught me organic chemistry), expanded upon this work to tolerate a more diverse set of functional groups. This is the work that earned Sharpless a Nobel. In 1990, he moved to Scripps and began the work that used asymmetric chemistry to invent click chemistry (a term his wife actually helped coin) - a toolkit of simple, reliable bond-forming reactions that work especially well in water at room temperature, making it incredibly useful in biological systems. The canonical example is copper-catalyzed azide-alkyne cycloaddition (CuAAC) - the reaction brings an azide and an alkyne together to form a new bond - and Sharpless used it as the first example of click chemistry. Sharpless and his group brought together decades of research to invent a toolkit that abstracts away a lot of complicated chemistry, allowing a biochemist to link a protein to a reporter or a materials scientist to link molecules to a surface, and in doing so created a whole new class of chemistry.

  • Inventing another biocompatible click chemistry method reliant on SuFEx transformations - https://onlinelibrary.wiley.com/doi/abs/10.1002/anie.201902489 - enabling easy S-N/S-O bond formation in cells with reactions that are compatible with DNA-encoded library (DEL) chemistry.


Past

Powers Lab

Studying protein folding and aggregation.

Recent

  • With Sharpless, describing inverse drug discovery, a target-agnostic method where an electrophilic molecule is screened against a set of proteins or cell lysates. Hits are targets that bind the molecule, and the drug hunter then determines whether the target is valuable, whereas conventional drug discovery tries to figure out whether the molecule is valuable. Covalent drugs can be dangerous because they form direct bonds with their targets and could generate serious off-target effects; as a result, most approved covalent drugs were only recognized as covalent retrospectively - inverse drug discovery is a useful framework to rationally execute what has been a serendipitous process - https://pubs.acs.org/doi/abs/10.1021/jacs.7b08366 - validating the method against 11 proteins and identifying a few covalent drug hits.

  • In E. coli, helping establish the set of rules for protein folding - https://www.cell.com/cell-reports/fulltext/S2211-1247(15)00271-5


Past

  • The paper forming the basis for the recent drug approval for cardiomyopathy - inventing a screening process to discover molecules that bind the serum protein transthyretin (TTR) - https://stm.sciencemag.org/content/3/97/97ra81.long - the assay measures fluorescence polarization, which is amplified when a fluorescent probe binds TTR and minimized when the probe tumbles freely in solution (see the expression below).
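
For reference, the standard definition of fluorescence polarization (textbook material, not taken from the paper itself):

```latex
% Fluorescence polarization: I_parallel and I_perp are emission intensities
% measured parallel and perpendicular to the excitation polarization.
P = \frac{I_{\parallel} - I_{\perp}}{I_{\parallel} + I_{\perp}}
% A small fluorescent probe free in solution tumbles quickly, scrambling the
% emission polarization (low P); bound to a large protein like TTR it tumbles
% slowly, so P increases -- the readout used to detect binding.
```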

Law Lab

Inventing broadly protective antibodies against ever-changing viruses.

Recent

  • When a virus first enters the body, an immune response is mounted and memory cells are stored in preparation for the next invasion. Viruses change their outer coating to subvert immune recognition, making it difficult to adapt since the immune system stays anchored to its response to the first infection. This is where studying unique immune responses gets interesting - each person’s repertoire is unlike anyone else’s. Using single-cell genomics and other tools to look into one-of-a-kind immune responses can uncover broad-acting antibodies with the potential to provide long-lasting vaccines against viruses. Moreover, this work has incredible value for making new medicines for Alzheimer’s and many other diseases. Companies like Evaxion and others are doing interesting work around this problem set.

  • Really interesting paper to set off designing proteins that can structurally change but maintain the same function. This is important for inventing protein therapeutics that are non-human/humanized, subverting the human immune system and enabling translation of microbial molecular tools into new medicines - https://www.biorxiv.org/content/biorxiv/early/2018/01/10/245985.full.pdf - delivering S. pyogenes and S. aureus Cas9s (chosen because of predicted lack of MHC overlap) via AAV and validating the lack of cross-reacting antibodies against the two Cas9s in mice.

  • Review of vaccine design against hepatitis C - https://www.frontiersin.org/articles/10.3389/fimmu.2018.01315/full - providing a good framework, mainly around structure-guided design, for viruses in general (e.g. X-Vax).

  • Doing a study to discover broad-acting antibodies against hepatitis C - https://www.pnas.org/content/pnas/115/29/7569.full.pdf - these super-antibodies are an important approach to developing universal vaccines.

Hansen Lab

Studying the molecular basis of pain with a focus on mechanosensation, the basis of touch and hearing.

Recent

  • Showing that activation of TREK-1, a channel with a role in sensation and a target of anesthetics, is heavily influenced by the thickness of the plasma membrane - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3155650 - it’s still not well understood how mechanosensitive receptors transmit information; this study of TREK-1 serves as a model for characterizing the biophysical mechanism.

  • Related to the TREK-1 work, showed that anesthetics disrupt lipid rafts (another interesting biological object; Erin O’Shea taught me this section of LS1A) to influence mechanosensitivity - https://www.biorxiv.org/content/10.1101/313973v2

Lander Lab

Using cryo-EM to study molecular machines.

Recent

  • Cryo-electron microscopy (cryo-EM) has been in a renaissance. The tool has transformed structural biology by making historically challenging molecules tractable to study. From first determining the structure of the T4 bacteriophage tail to characterizing a wide array of systems, cryo-EM is being driven by advances in image processing, more precise instruments, better electron-detection cameras, improved sample prep, and increasing automation. Compared to X-ray crystallography, the workhorse for structural biologists, cryo-EM doesn’t need crystals (although crystals are still very useful), just a relatively small amount of sample. The sample contains molecules in various conformations, so cryo-EM can capture functional macromolecular complexes in different states, whereas X-ray crystallography captures a single state. In cryo-EM - 3D reconstruction, to be more specific - a transmission electron microscope (TEM) records many views that are computationally reconstructed into a density map, mostly via Fourier inversion methods (the projection-slice relationship behind this is sketched just after this list). The method has been very powerful for helical macromolecules such as actin and phage tails. The long-term opportunity for cryo-EM is developing better methods to handle the large computational loads and using the tool to pursue hard drug targets.

  • Using cryo-EM to determine the structures of smaller macromolecules (i.e. below 100 kDa) has been pretty challenging. The Lander Lab made a recent set of breakthroughs by removing the phase plate and optimizing the amount of underfocus and TEM configuration - https://www.nature.com/articles/s41467-019-08991-8 - as cryo-EM gains more value for smaller structures, its capacity to aid structure-based drug design will become more powerful.

  • Invented a method (in an area where there are still many bespoke tools) to analyze cryo-EM data and accurately reconstruct a target’s density - https://www.cell.com/structure/fulltext/S0969-2126(18)30364-2

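For reference, the projection-slice relationship that underlies Fourier-inversion reconstruction (standard textbook material, not specific to the Lander Lab papers above):

```latex
% A 2D projection of the 3D density \rho along the z-axis:
p(x, y) = \int \rho(x, y, z)\, dz
% Its 2D Fourier transform equals a central slice of the 3D Fourier transform
% of \rho, perpendicular to the projection direction:
\hat{p}(k_x, k_y) = \hat{\rho}(k_x, k_y, 0)
% Many particle images at different orientations therefore sample \hat{\rho}
% on many central planes; inverting the 3D transform recovers the density map.
```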

Past

  • With Eva Nogales, a legend (I just saw her last evening at a restaurant in Berkeley where I was eating with a grad school buddy), an overview of cryo-EM and its use to complement other biophysical techniques - https://www.ncbi.nlm.nih.gov/pubmed/22835744

  • Using unnatural amino acid labeling (a tool the Schultz Lab pioneered) to selectively identify protein subunits during a cryo-EM run - https://www.ncbi.nlm.nih.gov/pubmed/26409249 - allowing easier labeling of internal domains, where most methods focus on termini.


Burton Lab

Pioneering rational vaccine design.

Recent

