October 5, 2022

Spatial omics: between technology development and early discoveries

There is no doubt that spatial omics technologies are rocking the field of human disease research, from oncology to cardiovascular disease and virology. Scientists are beginning to understand their potential: they are learning how to use the technology and what it can reveal, while also generating the data standards, analysis algorithms, and reference datasets the field needs. There is a feeling that this is a field in the making, that a new field in biology is being born. And with that, it's no surprise that the current focus is on technology development on one side and early adoption among researchers on the other.

In addition to some amazing talks showcasing early discoveries, I found the panel discussion on the technology itself really fascinating. The debate is friendly, but straight to the point: "You need to bring the cost down," says one of the panelists to representatives of technology makers. He continues, "or, meet us halfway and generate the reference datasets from healthy tissue, so that we can go straight into studying tumour tissue." Today, running a single spatial omics experiment can cost tens of thousands of dollars. Researchers are hopeful that costs will follow the same trajectory that brought DNA sequencing down to less than $1,000 per genome over the last decade, making the technology accessible to more laboratories.

Protein, RNA, and "true" multiomics

Back on the exhibition floor, as many as twelve technology vendors are battling for attention, old players and new faces included. Broadly speaking, the methods they commercialize divide into RNA-based (e.g. NanoString, 10X Genomics, Vizgen) and protein-based (e.g. Akoya, Fluidigm, Ionpath, Lunaphore). Some of the players claim they can do "true multiomics", i.e. analyze both RNA and proteins on the same tissue slide (e.g. NanoString, 10X Genomics, Akoya), although researchers are somewhat sceptical, arguing that some steps in the experimental procedures are simply not compatible. For example, RNA-based technologies use proteinase K, a reagent that chews away at protein domains, making antibody-based protein detection problematic for obvious reasons. The general consensus is that analysing both RNA and protein on the same slide comes with compromises. To work around this limitation, researchers happily use consecutive slices from the same block of tissue for multiomic analysis. Being only a few micrometres apart, the slices let you look at the same cell types, arranged in the same way within the tissue.

The RNA versus protein conversation continues, shifting towards panel design and the adoption of spatial biology in the clinic. "Get a pathologist in the room when you are designing your experiments!" one of the keynote speakers says, referring to the fact that if you are studying human disease your research might eventually translate into the clinic. She goes on to explain that pathologists ask different questions and reason on a very different wavelength: they don't want the whole genome, because they don't know what to do with it. From their point of view, a biomarker needs to be easy for clinicians and doctors to understand, easy to explain to a patient, and related to disease progression and drug response. They also tend to want to see protein rather than RNA, because that's most often the nature of a drug target. "Pathologists have been diagnosing cancer from $2 H&E slides for several decades… that's the barrier to adoption of spatial biology in the clinic! If we are to introduce a new way to diagnose, we need to take clinicians on the journey, working hand in hand with them to select the most suitable biomarkers."

The conversation then moves to the level of multiplexing, with a fellow researcher reminding the room that to even find a biomarker, one needs to start from a discovery phase, for which whole-transcriptome methods are better suited. So, rather than being a question of preference, RNA versus protein and high-plex versus low-plex becomes very much a function of the type of application and the scope of a project.


Spatial omics data from NSCLC tissue, obtained with NanoString technology.

What is the right level of multiplexing?

While technology providers battle over specs like resolution and level of multiplexing, the panel moderator asks: "Do you always need 1,000-plex?" The consensus in the room is: not always. You do need high-plex, up to whole-genome, for discovery applications, but the story is different when it comes to validation studies. A technology provider representative explains: "With our higher-resolution system, we can go up to 1,000-plex. You need ~250 markers just to profile the cell types you are looking at, so you have the rest to play with."

But researchers argue that at that level of multiplexing, experiments are still very expensive and time consuming, which makes things particularly challenging when moving from discovery to validation: "If you had asked me a year ago, I would have told you the more the better. But now, conscious of the price and of the time it takes to run these experiments, if I know what I am looking at I am happy to use a validation panel and look at 10-20 molecules." People in the room tend to agree. Somebody from the back of the room says, "Almost all the abstracts at the latest AACR in 2022 used 18 markers," backing up the argument. Designing smaller, targeted panels that balance plex level against cost will be key for validation applications.

However, at this early stage, there is an urgent need for discovery projects that will generate reference datasets: datasets that are key to understanding disease across patient populations with different biologies, and large enough to have clinical utility. For this, whole-genome methods, where the level of multiplexing goes up to 10,000-20,000 molecules measured simultaneously on the same tissue slide, will prove essential. At the moment these methods focus on detecting RNA, with technologies usually based on nucleic acid probe hybridization, molecular barcoding, and detection by either sequencing or fluorescence. Thanks to their whole-genome nature, these RNA-based approaches find their perfect application in discovery projects, where the scientific questions are more open-ended. "I can live without protein for a very long time," says a researcher, referring to the fact that she very happily uses technologies profiling the whole transcriptome, both traditional single-cell and spatial, for the entire discovery phase of a project and beyond. She continues, "I move to protein only when it's strictly necessary: protein detection suffers from less reliable reagents. Antibodies require more extensive validation and are more prone to specificity and batch variation issues," making another great point.

Handling the experimental data

"Every time I go to GitHub, I get a headache," says one of the conference speakers, referring to the plethora of computational methods for spatial biology and the lack of a standard method adopted by everyone in the field. She is frustrated about this, and hopes that, similarly to what happened with single-cell RNA sequencing analysis, there will soon be consensus on a consolidated suite of analysis tools for spatial biology.

It's clear that handling spatial biology data, and unlocking the knowledge held within it, is a massive task. In spatial biology, forms of AI/ML are used to identify and draw the boundaries around individual cells in a tissue (cell segmentation), to find regions of interest within a tissue, to predict where tumour versus non-tumour tissue is located on a slide, and to analyse and draw conclusions from the resulting datasets.
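To make the cell segmentation step a little more concrete, here is a minimal sketch of one classical approach (smoothing, Otsu thresholding, and watershed splitting of touching nuclei) built with scikit-image and scipy. It is not the pipeline of any particular spatial platform, and modern workflows increasingly rely on learned models, but the overall shape is the same: a stain image goes in, a labelled mask with one integer id per cell comes out. The toy image and parameter values below are purely illustrative.

```python
# Minimal cell segmentation sketch on a synthetic "nuclear stain" image.
# Assumes numpy, scipy, and scikit-image are installed; parameters are
# illustrative and not tied to any specific spatial omics platform.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(image: np.ndarray) -> np.ndarray:
    """Return a labelled mask where each detected nucleus gets a unique integer id."""
    smoothed = gaussian(image, sigma=2)             # denoise the stain channel
    mask = smoothed > threshold_otsu(smoothed)      # separate foreground from background
    distance = ndi.distance_transform_edt(mask)     # distance to background, peaks at nucleus centres
    peaks = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask) # split touching nuclei along distance valleys

# Toy image: two blurred bright "nuclei" on a dark background.
img = np.zeros((64, 64))
img[15:25, 15:25] = 1.0
img[35:50, 35:50] = 1.0
labels = segment_nuclei(gaussian(img, sigma=1))
print("cells found:", labels.max())
```

In a real experiment the input would be a stained microscopy channel rather than a toy array, and the resulting labels would feed directly into downstream steps such as per-cell expression quantification and cell typing.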

There is a lot of excitement in the field. Spatial omics technologies promise breakthroughs that will add a new dimension to the way we treat human disease, whether that means understanding a patient's biology, diagnosing disease, or developing new drugs.

It's clear, though, that for this to happen nobody can work in isolation. Researchers, pathologists, technology makers, software developers, mathematicians, data scientists and AI experts all need to join forces, working hand in hand to keep improving the technologies and making them more accessible, to develop the right computational tools to use them, and to design the right experiments from the discovery phase through to validation and clinical applications.

Spatial omics at Owkin

At Owkin we are working on a global initiative to create the largest spatial omics dataset in cancer. We believe that the convergence between AI and spatial omics will fuel the next revolution in cancer research. Do you want to know more? Watch this space or get in touch.

Authors
Davide Mantiero