Using Spatial Omics to Study Responder vs Non-Responder Regions in Preclinical Models

You run a controlled preclinical study: same model background, same dose, same schedule, same collection time. The bulk readout suggests the treatment has "some efficacy." Then you look at the tissue and realize the response isn't uniform: some regions look drug-engaged, while others look unchanged or adaptive.
That's usually the decision point for a pharmacology or MoA lead: not whether the average endpoint moved, but what separates responder-like regions from non-responder-like regions inside the same treated sample—and what that contrast implies for the next experiment.
Spatial omics helps by turning an averaged efficacy signal into localized response architectures—patterns you can locate, explain, and validate.
As broader context, spatial omics in drug discovery is increasingly used to link efficacy, mechanism, and tissue context when bulk measurements flatten the story.
Two guardrails keep these studies credible and reproducible: (1) treat ROI/region selection and QC as part of the experimental design (not a post hoc visualization step), and (2) report the region-definition logic clearly enough that another team could apply it to a matched cohort. Practical best-practice frameworks for spatial biology study design emphasize hypothesis-driven ROI strategy, acceptance criteria/QC, and orthogonal validation planning (see Krull et al., "A best practices framework for spatial biology studies in preclinical and translational research").
Key takeaways
- Responder vs non-responder biology isn't only a between-sample problem. In many studies, it appears first as regional variation within the same tissue section.
- Spatial omics is most valuable when it decomposes "average efficacy" into locatable, interpretable, verifiable local patterns.
- Strong design usually goes beyond treated vs control and organizes comparisons around response architecture (responder-like vs non-responder-like regions).
- High-value regional readouts often include pathway shifts, cell-state transitions, cell neighborhood rewiring, and morphology-aligned response patterns.
Why Responder vs Non-Responder Is Often a Regional Problem Before It Becomes a Sample-Level Problem
Responder and non-responder biology often shows up as regional variation inside tissue before it becomes a clean whole-sample label.
Why one treated sample can contain multiple response states
Within a single treated section, you may see strong-response regions next to weak-response regions, escape-like regions, and remodeling regions.
The practical shift is to stop treating "responder/non-responder" as a sample taxonomy and start treating it as a regional classification problem.
Why whole-sample labels hide mechanistically important zones
A sample can score as a responder on average while still containing non-responder-like pockets that matter because they can expand under selection pressure. Conversely, a weak global response can still contain a focal response area worth explaining because it may represent the conditions under which the treatment can engage tissue state.
If your next decision is dose, schedule, combination logic, or model selection, those focal contrasts are often more informative than the mean.
Why local response patterns matter more than an average endpoint
An average readout mainly tells you direction. A regional contrast is closer to mechanism because it forces you to explain divergence under matched exposure context.
In practice, local patterns help you test specific hypotheses, such as whether the limiting factor is target-engaged state transition, microenvironmental shielding, immune exclusion, or a barrier at the tissue interface.
Why preclinical models are well-suited for regional spatial comparison
Preclinical models often give you tighter control over dose, collection time, and handling, which makes regional contrasts easier to interpret and validate.

How to Define Responder and Non-Responder Regions in a Way That Supports Interpretation
Useful regional definitions are based on tissue evidence and comparison logic, not on arbitrary heatmap contrast or purely visual selection.
A good constraint: if you can't explain the region definition to a pathologist and a pharmacologist in one sentence, it probably won't support an actionable interpretation.
If you want a starting point for common assay setups that support region-aware labeling, CD Genomics' spatial transcriptomics services page provides an overview of categories teams often use in spatial projects.
Start with a clear response definition
Define what "response" means for this program. Different programs use "response" to mean different things, and mixing definitions produces uninterpretable labels.
In preclinical studies, response definitions often map to one of these buckets: target-engaged pathway suppression, immune activation aligned to expected mechanism, stromal remodeling that changes accessibility, injury reduction in an injury model, or entry into a target-engaged tissue state rather than a single marker change.
Use morphology and molecular evidence together
Avoid defining regions solely from molecular clusters. Interpretable responder/non-responder labels usually reflect tissue structure (where the region is), histology (what it looks like), and region-level molecular features (what it is doing) together.
This is how you keep the analysis from producing "floating domains" that are statistically distinct but detached from tissue architecture.
Distinguish non-response from low-quality or non-informative tissue
A non-responder region is not the same thing as a low-quality region. Necrosis, folding, tears, edge effects, low cellularity, or technical failure can all masquerade as "non-response" if you label by contrast alone.
If you don't separate these categories early, you risk spending the study explaining artifacts as resistance.
Build region labels that can be reused across samples
If every slide requires a brand-new labeling story, cross-sample comparison collapses. Reusable labels don't have to be simplistic; they just need reproducible criteria. Many teams standardize a small set of region classes (responder-like, non-responder-like, transition, non-informative) and then add program-specific sublabels only when needed.
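One way to make the label set above concrete is to encode it in code rather than in each analyst's head. The sketch below is illustrative: the class names mirror the four region classes from the text, while `RegionLabel`, its fields, and `usable_for_contrast` are hypothetical names chosen for this example, not part of any specific platform.

```python
from dataclasses import dataclass
from enum import Enum

class RegionClass(Enum):
    """A small, reusable label set shared across slides and cohorts."""
    RESPONDER_LIKE = "responder_like"
    NON_RESPONDER_LIKE = "non_responder_like"
    TRANSITION = "transition"
    NON_INFORMATIVE = "non_informative"

@dataclass
class RegionLabel:
    """One labeled region plus the evidence used to justify the class."""
    region_id: str
    region_class: RegionClass
    criteria: str   # the one-sentence, pathologist-readable rule
    qc_passed: bool

def usable_for_contrast(label: RegionLabel) -> bool:
    """Only QC-passing responder/non-responder regions enter the contrast."""
    return label.qc_passed and label.region_class in (
        RegionClass.RESPONDER_LIKE,
        RegionClass.NON_RESPONDER_LIKE,
    )
```

Keeping the criteria string attached to every label is the point: it forces each region to carry its own one-sentence justification across samples.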
What to Compare When You Study Response Heterogeneity in Preclinical Models
The most informative comparisons are not always treated versus untreated. Often they're responder-like versus non-responder-like regions within and across matched contexts.
CD Genomics summarizes common preclinical application patterns under spatial omics solutions for drug discovery.
Within-sample region comparisons
Responder-like vs non-responder-like regions inside the same treated sample is often the cleanest contrast because it reduces confounding from baseline model differences and sample handling.
Across-sample region comparisons
Across-sample comparisons test reproducibility by asking whether the same regional signatures recur in matched tissue contexts.
Time-point comparisons
With multiple time points, you can separate response establishment from escape. For example, an early responder-like region may transition into an adaptive state later, while a persistently non-responder-like region may show stable barriers across time.
Dose or exposure comparisons
Dose/exposure comparisons answer whether response architecture itself changes with exposure and whether there is a threshold at which regions flip into a responder-like state. Keep this focused on tissue response architecture rather than turning it into a PK topic.
Matched microenvironment comparisons
Matched microenvironment comparisons hold morphology and baseline context as constant as possible, then ask why one region responds while a similar region does not. This often forces the most honest explanation of divergence.

Which Spatial Readouts Best Explain Why Some Regions Respond and Others Do Not
The most useful spatial readouts explain why response diverges locally, not merely that regions are different.
Two study design ideas help keep this interpretable as the dataset grows: plan multi-modal integration logic up front (so transcript/protein/morphology evidence supports the same contrast), and avoid repeatedly sampling highly correlated tissue areas. Engelhardt and colleagues addressed the sampling side as an experimental design problem in Nature Communications (2024; "Optimizing the design of spatial genomic studies").
If your study uses more than one modality, CD Genomics' overview of spatial multi-omics integration is a practical primer on integration logic.
Region-restricted pathway activity
Pathway analysis becomes decision-grade when it is tied to your response definition and consistent across contexts. The key question isn't "do pathways differ," but whether pathway shifts track with responder status across regions and samples.
A useful mindset is to treat pathway results as a region-level hypothesis generator: what pathway shift is consistent with target engagement, and what shift looks like adaptation or compensatory signaling?
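One simple way to operationalize region-restricted pathway activity is a mean z-score of an MoA-aligned gene set per region. This is a minimal sketch, not a substitute for a dedicated pathway-scoring tool; the function name, the input dictionaries, and the gene set are all hypothetical.

```python
from statistics import mean

def pathway_score(region_expr, pathway_genes, gene_stats):
    """Region-level pathway activity as the mean z-score of a gene set.

    region_expr:   {gene: mean expression in this region}
    pathway_genes: MoA-aligned gene set (hypothetical)
    gene_stats:    {gene: (mean, sd) across all regions}, for z-scoring
    Genes missing from the region or with zero variance are skipped.
    """
    zs = []
    for g in pathway_genes:
        if g in region_expr and g in gene_stats:
            mu, sd = gene_stats[g]
            if sd > 0:
                zs.append((region_expr[g] - mu) / sd)
    return mean(zs) if zs else 0.0
```

Scoring every labeled region with the same function makes the "does pathway suppression track with responder status" question a direct comparison of numbers rather than a visual impression.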
Cell-state transitions
In responder/non-responder comparisons, state transitions often matter more than single markers. Responder-like regions may show a shift into a target-engaged state, while non-responder-like regions may retain baseline states or enter adaptive states.
Thinking in transitions also makes the next step clearer: what barrier prevents the transition, and what would you perturb to move it?
Neighborhood rewiring
Responder vs non-responder is often a neighborhood problem rather than a single-cell problem. Even when the same cell types are present, spatial arrangement can change what they can do.
Neighborhood readouts are most helpful when they point to a concrete mechanism candidate, such as stromal shielding, immune exclusion, interface disruption, or signaling isolation.
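A toy version of a neighborhood readout is to count, for each cell type, the types of its neighbors within a fixed radius. The sketch below uses a brute-force distance check on a small list of cells (real slides would need a spatial index); the data layout and radius are illustrative assumptions.

```python
from collections import Counter
from math import dist

def neighbor_composition(cells, radius):
    """Per-cell-type composition of neighbors within `radius`.

    cells: list of (x, y, cell_type) tuples; a toy stand-in for a
    segmented slide. Returns {cell_type: Counter of neighbor types},
    e.g. to ask whether tumor cells in non-responder-like regions
    are ringed by stroma (shielding) rather than immune cells.
    """
    comp = {}
    for i, (xi, yi, ti) in enumerate(cells):
        for j, (xj, yj, tj) in enumerate(cells):
            if i != j and dist((xi, yi), (xj, yj)) <= radius:
                comp.setdefault(ti, Counter())[tj] += 1
    return comp
```

Comparing these composition counters between responder-like and non-responder-like regions is what turns "the arrangement looks different" into a testable neighborhood claim.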
Morphology-aligned spatial patterns
Morphology is the scaffold that makes spatial results interpretable. High-value regional differences often align with tissue architecture: boundaries, interfaces, compartments, or injury patterns.
When a result is "just a colorful cluster" detached from histology, it's harder to convert into a validation plan.
A useful side note: when teams need to interpret localized injury or safety-associated patterns in preclinical sections, spatial readouts can be designed to support toxicology questions as well. CD Genomics outlines this category under spatial omics solutions for toxicology.
Which readouts most often guide follow-up experiments
A regional readout is most actionable when it localizes cleanly, aligns with tissue context, and suggests an orthogonal validation move.
For many programs, that means combining pathway evidence (what is active), state evidence (what changed), and neighborhood evidence (what context might be enabling or blocking the change).
What Strong Spatial Responder-vs-Non-Responder Studies Actually Show
Strong studies don't just separate two region types. They show a biologically coherent, tissue-aligned contrast that suggests what to validate next.
If you want a practical guide to finding spatial datasets for benchmarking your own region logic, CD Genomics' resource on how to find and use spatial omics datasets is a useful starting point.
Case example 1: Spatial drug-response heterogeneity can be mapped at high resolution
Tang and colleagues introduced SpaRx, a graph-based domain adaptation approach that integrates pharmacogenomics knowledge with single-cell spatial transcriptomics to infer heterogeneous cellular drug responses across spatial neighborhoods (Briefings in Bioinformatics, 2023; see "SpaRx: elucidate single-cell spatial heterogeneity of drug responses for personalized treatment").
For preclinical teams, the takeaway is the framing: localized sensitivity and resistance are not just a qualitative impression. They can be mapped as region-level hypotheses that you can test and validate.
Case example 2: Niche-specific signaling under the same therapy context can be separated spatially
Grant and colleagues used GeoMx digital spatial profiling in an immunocompetent mouse model of breast cancer bone metastasis treated with α-PD-1, showing that marrow and endosteal niches have distinct immune and tumor signaling programs (2026; "Digital spatial profiling of α-PD-1 treated breast cancer bone metastases reveals region-specific signaling and enrichment of immune-suppressive markers").
This design—comparing matched regions in defined niches—mirrors how responder vs non-responder questions emerge in preclinical models: the tissue is not a single environment.
Case example 3: Industry use cases treat efficacy and response as core preclinical spatial questions
Responder/non-responder framing is also reflected in how major platforms position spatial in preclinical pipelines. For example, 10x Genomics' pharma overview explicitly lists preclinical "research on efficacy and response" as a use case (see Single cell and spatial multiomics for drug development).
What these studies have in common
They share a few non-negotiables: clear comparison logic, region definitions that can be defended, contrasts aligned to tissue context, and outputs that point to validation.

How to Avoid the Most Common Mistakes in Regional Response Analysis
Most weak responder-vs-non-responder analyses fail because they confuse biology with artifact, over-interpret one region, or ignore validation logic.
CD Genomics' overview of spatial transcriptomics data analysis covers common analysis and interpretation pitfalls.
Treating any different region as a non-responder region
Difference is not non-response. Non-responder labels only make sense relative to a defined response.
Confusing poor tissue quality with biological non-response
Low-quality tissue can produce clean-looking "non-responder" signatures simply because there is less information.
It helps to reserve a specific label like "non-informative region" for areas dominated by necrosis, folding, tears, edges, low cellularity, or technical QC failure. You can exclude them from responder/non-responder contrasts without implying a resistance mechanism.
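This gating can be applied mechanically before any responder labeling. The sketch below is a minimal example under assumed inputs: the QC field names and threshold values are illustrative, not universal, and should be replaced by your platform's own metrics and pre-registered gates.

```python
def label_if_non_informative(region, min_cells=50, max_necrosis_frac=0.5):
    """Flag a region as non-informative before any responder labeling.

    `region` is a dict with hypothetical QC fields; the thresholds
    are illustrative placeholders, not recommended defaults.
    """
    if region.get("qc_failed", False):          # platform-level QC failure
        return "non_informative"
    if region.get("n_cells", 0) < min_cells:    # low cellularity
        return "non_informative"
    if region.get("necrosis_frac", 0.0) > max_necrosis_frac:
        return "non_informative"
    if region.get("artifact", False):           # tears, folds, edge effects
        return "non_informative"
    return "eligible"
```

Running this gate first means a region can only become "non-responder-like" after it has proven it carries enough information to be labeled at all.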
Krull and colleagues' best-practices framework emphasizes feasibility testing and acceptance criteria for ROI strategy and assay performance (2025; "A best practices framework for spatial biology studies in preclinical and translational research"). A related modeling reminder is that "domain identification" (finding coherent spatial areas) is not automatically the same thing as "response labeling." Gui and colleagues introduced a graph-contrastive approach for spatial domain identification (GRAS4T) in Computational and Structural Biotechnology Journal (2024; paper on PMC).
Over-interpreting a single hotspot
Hotspots are memorable but often not reproducible. If a conclusion depends on one region, either show it recurs across matched contexts or explicitly frame it as a hypothesis rather than a mechanism claim.
Ignoring region selection bias
Region selection is not neutral. If ROI choice is driven mainly by "what looks strongest," you'll amplify bias and reduce reproducibility. Define selection logic up front and treat it as part of the experimental design.
Trying too many analyses without a regional hypothesis
Spatial datasets can support many analyses. Without a regional hypothesis, analysis breadth turns into fragmentation: a long list of differences that can't be ranked into an actionable story.
A Practical Workflow for Turning Regional Response Patterns Into Actionable Hypotheses
The most useful workflow moves from region definition to contrast analysis to hypothesis ranking to validation.
A related decision-support resource is CD Genomics' overview on how to choose spatial transcriptomic technologies, which is helpful when you're deciding what resolution and coverage you need to support a specific regional contrast.
Step 1: Define a reproducible region logic
Start with a small set of classes (responder-like, non-responder-like, transition, non-informative). Add complexity only if it preserves interpretability.
Step 2: Run region-level contrast analysis
Keep the contrast aligned with the response definition and tissue context, and prioritize outputs that you can validate.
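At its simplest, a region-level contrast is a ranked difference between responder-like and non-responder-like summaries. The sketch below assumes per-class mean-expression dictionaries as input; real pipelines would add statistics and multiple-testing control, so treat this purely as the ranking logic.

```python
def rank_contrast(responder_mean, non_responder_mean, top_n=5):
    """Rank genes by region-level difference (responder minus non-responder).

    Inputs are {gene: mean expression} per region class; a stand-in
    for whatever region-level summary your platform produces.
    Returns the top_n genes by absolute difference.
    """
    diffs = {
        g: responder_mean[g] - non_responder_mean.get(g, 0.0)
        for g in responder_mean
    }
    return sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
```

Capping the output at a small `top_n` is deliberate: it keeps the contrast aligned with the "small set of decision-grade outputs" idea rather than an unranked list of every difference.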
Step 3: Rank the most plausible explanations
Instead of reporting every difference, prioritize a small number of explanations consistent with the regional patterns (for example: target-engaged state differences, stromal shielding, immune exclusion, or adaptive signaling).
Step 4: Choose the next validation move
Pick the next move explicitly—orthogonal staining, targeted validation assays, a follow-up spatial design, or a refined model design.
An example region-contrast workflow you can copy
The details vary by platform, but teams often need a concrete "what do we do next on Monday" template. The example below is illustrative (not a claim of universal thresholds) and is meant to make your region labels and contrasts auditable.
- Lock the response definition before clustering
Example: "Responder-like = region shows target-pathway suppression plus morphology-consistent tissue improvement vs adjacent tissue; Non-responder-like = lacks that suppression under acceptable QC."
- Pre-register ROI/region rules and minimum QC gates
Example QC gates you can adapt:
- Exclude regions dominated by tears/folds/edge effects or severe necrosis on H&E.
- Exclude spots/segments that fail platform QC (e.g., very low library complexity or abnormally low detected features relative to the slide distribution).
- Define 3–4 reusable region classes
Example: responder-like, non-responder-like, transition/interface, non-informative.
- Run a region-level contrast that matches the decision
Example contrasts:
- responder-like vs non-responder-like within the same treated section
- responder-like regions across animals at the same time point to test reproducibility
- Report a small set of decision-grade outputs
Example output package (kept intentionally small):
- A ranked pathway set aligned to the MoA (what changes where)
- A short list of state markers/genes/proteins with spatial maps (what transitions)
- A neighborhood summary (who is next to whom in responder-like vs non-responder-like)
- Choose one orthogonal validation move per top hypothesis
Example: follow-up IHC/IF for 2–4 markers at the responder/non-responder boundary; or a targeted panel in a second cohort to test whether the same regional signature recurs.
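The reproducibility contrast in the template above (responder-like regions across animals at the same time point) can be sketched as a simple consistency check. This is a minimal example under assumed inputs: the per-animal score pairs and the 80% consistency threshold are hypothetical choices for illustration.

```python
def signature_recurs(per_animal_scores, min_frac=0.8):
    """Check whether a responder > non-responder contrast recurs across animals.

    per_animal_scores: {animal_id: (responder_score, non_responder_score)},
    e.g. region-level pathway scores per animal. Returns
    (fraction_consistent, recurs) under an illustrative threshold.
    """
    if not per_animal_scores:
        return 0.0, False
    hits = sum(1 for r, n in per_animal_scores.values() if r > n)
    frac = hits / len(per_animal_scores)
    return frac, frac >= min_frac
```

A signature that holds in two of three animals fails an 80% bar, which is exactly the kind of result that should be reported as a hypothesis rather than a mechanism claim.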
If you want more detail on common analysis checkpoints and reporting pitfalls, CD Genomics summarizes a practical workflow in Spatial transcriptomics data analysis.

FAQ
How do I distinguish a true non-responder region from a low-quality region?
A true non-responder region should still be interpretable: morphology is intact, cellularity is sufficient, and technical QC is acceptable. It's defined relative to a response definition and shows evidence consistent with non-response under that definition. If the region looks "non-responsive" mainly because it is necrotic, torn, edge-affected, or low information, label it as non-informative and keep it out of responder/non-responder contrasts.
Can one preclinical sample contain both responder and non-responder regions?
Yes, and this is one of the most valuable use cases for spatial readouts. Within-sample regional contrasts reduce confounding and often generate clearer mechanistic hypotheses because you are comparing response states under the same model background and exposure context.
Do I need whole-section profiling to study response heterogeneity?
Not necessarily, but whole-section context often makes interpretation easier because it shows how responder and non-responder regions relate to each other spatially. ROI-only designs can work, but they require tighter control of selection bias and a clear rationale for how ROIs represent the tissue.
What is the most useful readout for comparing responder and non-responder regions?
There isn't a single best readout. In many preclinical programs, the most actionable evidence comes from a combination: region-restricted pathway shifts aligned with the response definition, state transitions that explain what changed under treatment, neighborhood patterns that reflect enabling or blocking context, and morphology alignment that keeps the contrast interpretable.
What makes a responder-vs-non-responder spatial study worth following up?
It's worth following up when region definitions are clear and reusable, contrasts reproduce across matched contexts, the explanation is consistent with tissue architecture, and the outputs point naturally to a validation move. If the result is only "these regions are different" with no ranked hypothesis or validation path, it usually won't change the next experiment.
How CD Genomics Can Support Regional Response Analysis in Preclinical Models
For research-use-only projects, CD Genomics can support preclinical spatial studies by helping teams compare responder-like and non-responder-like regions with tissue-aware workflows and downstream interpretation.
A high-level map of available capabilities is listed under spatial omics services.
Where existing capabilities fit best
CD Genomics is typically a fit when the project requires region-aware thinking rather than "one average profile per sample," for example when you need multi-region spatial analysis, responder/non-responder regional comparisons, or bioinformatics that keeps tissue context attached to the conclusions.
What to prepare before inquiry
It helps to align on model type, treatment/control (and optional dose) structure, time-point plan, response definition, and any expected responder/non-responder tissue patterns suggested by prior histology or endpoints.
What a good project kickoff should define
A strong kickoff defines how response will be operationalized, how regions will be selected, which comparisons are decision-critical, and what outputs would justify follow-up validation. When those are decided early, spatial analysis is more likely to produce testable hypotheses instead of a gallery of differences.
References
- Engelhardt et al. "Optimizing the design of spatial genomic studies". Nature Communications (2024).
- Krull et al. "A best practices framework for spatial biology studies in preclinical and translational research". (Best-practice guidance on ROI strategy, QC/acceptance criteria, and reporting).
- Tang et al. "SpaRx: elucidate single-cell spatial heterogeneity of drug responses for personalized treatment". Briefings in Bioinformatics (2023).
- Grant et al. "Digital spatial profiling of α-PD-1 treated breast cancer bone metastases reveals region-specific signaling and enrichment of immune-suppressive markers". (2026).
- CD Genomics. Spatial transcriptomics data analysis: workflow and tips.
- 10x Genomics. Single cell and spatial multiomics for drug development.