This article explains how genotype becomes phenotype through regulatory, environmental, and multi-omic layers, with a focus on experimental design for complex trait biology, precision breeding, and predictive biology workflows.
Genotype tells you what a biological system can potentially do. Phenotype tells you what that system actually did under a particular set of conditions. The distance between those two statements is where most modern biology now lives. Post-GWAS research has made one thing clear: for many traits, sequence alone is not a good enough predictor of outcome. The reason is not that genetics failed. The reason is that genetics is upstream, while phenotype is the integrated result of regulatory state, environment, timing, and molecular execution. Recent open-access work on gene-by-environment interaction, perturbation mapping, and high-dimensional phenotyping points in the same direction: the genotype–phenotype map is layered, nonlinear, and context dependent.
A more useful definition is this: genotype is information potential, while phenotype is functional realization under constraints. A variant may be present but inaccessible. An accessible locus may alter RNA without changing protein activity. A protein may change abundance without changing function because the decisive switch lies in localization or modification. A regulatory effect may be real but visible only under a specific pH range, heat regime, nutrient gradient, or developmental window. That is why serious genotype–phenotype mapping, or GPM, rarely succeeds through one assay or one endpoint. It succeeds by reconstructing the bridge between layers. In practical terms, a study may begin with Whole Genome Sequencing or Targeted Region Sequencing, but its explanatory power usually increases only after those data are connected to regulatory state, expression state, and phenotype-aware validation.
This overview focuses on research-use-only study design in experimental and non-clinical genotype–phenotype analysis. The goal is not to restate the obvious difference between genotype and phenotype. The goal is to decode the multi-omic architecture that sits between them, and to show how researchers can choose the right layer to measure first.

Figure 1. The Expanded Central Dogma Under Biological Noise
What it shows: A left-to-right cascade from DNA to chromatin, RNA, protein, metabolite, and final trait, with side inputs from epigenetic regulation, environmental variables, and post-translational control.
Why it matters: It reframes GPM as a distorted information flow rather than a clean pipeline.
How to read it: Follow the central path first, then read each side input as a source of amplification, buffering, delay, or rerouting.
The Classical Dichotomy Revisited
The textbook distinction still matters. Genotype is inherited sequence composition. Phenotype is observable output. But for complex traits, that definition is too thin. Modern biology rarely deals with a single endpoint trait emerging directly from one locus. Instead, researchers often work with trait assemblies: morphology, physiology, developmental timing, pathway activity, stress response, spectral signatures, growth kinetics, and molecular intermediates that behave like phenotypes long before any obvious visible trait appears. Open-access reviews on genotype-by-environment interactions emphasize that phenotype is generated by genotype, environment, and the interactions between the two. That sounds simple until you try to design a study that captures all three without compressing the biology too early.
The central dogma becomes more realistic when treated as a cascade with bottlenecks. Sequence must first become accessible in chromatin. Accessible DNA must be transcribed. Transcripts must be processed, localized, and translated. Proteins must fold, interact, and often be modified before they become functionally active. Downstream pathway outputs then move through metabolism, physiology, tissue context, and environmental exposure before they stabilize as phenotype. Every stage can distort the original signal. Some variants are buffered. Some are amplified. Some are delayed. Some are conditionally revealed. This is why a sequence-first study can be excellent for discovery but weak for explanation if it never measures the layers that carry the effect forward.
That practical point is often missed. Researchers sometimes frame weak genotype-to-trait prediction as evidence that the biology is too noisy. In many cases the biology is not noisy. The design is under-layered. If the expected mechanism lies in regulatory accessibility, then ATAC-Seq or ChIP-Seq may be the first interpretable bridge. If the expected signal lies in transcriptional response, RNA-Seq or Full-Length Transcripts Sequencing (Iso-Seq) may be more revealing. If the trait is strongly state-dependent, then the most useful phenotype may not be terminal morphology at all. It may be a measurable intermediate state.
The real question is therefore not whether genotype or phenotype matters more. The real question is where the causal path becomes legible enough to measure.
Quantifying Phenotypic Plasticity: The Reaction Norm and G×E Interaction
One genotype can produce different phenotypes under different conditions. That is the core idea of phenotypic plasticity. Many articles stop there. Serious study design cannot. Plasticity has to be quantified, and the classic tool for doing that is the reaction norm: a function describing how phenotype changes across an environmental gradient for a given genotype. Reaction norms matter because they turn “environment matters” into a measurable mapping problem. A flat slope implies robustness. A steep slope implies sensitivity. Crossing reaction norms imply that genotype ranking itself changes across environments, which is the core signature of genotype-by-environment interaction, or G×E.
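As a minimal sketch of what that quantification looks like in practice, the snippet below fits a linear reaction norm per genotype by ordinary least squares and checks for rank reversal at the gradient extremes. All genotype labels and trait values are invented for illustration.

```python
import numpy as np

# Hypothetical phenotype values for two genotypes across a temperature gradient,
# chosen only so that the two reaction norms cross.
temps = np.array([18.0, 21.0, 24.0, 27.0, 30.0])        # environmental axis
pheno = {
    "genotype_A": np.array([5.1, 5.3, 5.2, 5.4, 5.3]),  # flat slope: robust
    "genotype_B": np.array([6.0, 5.6, 5.1, 4.5, 3.9]),  # steep slope: sensitive
}

for name, y in pheno.items():
    # np.polyfit with degree 1 returns (slope, intercept) for a linear reaction norm.
    slope, intercept = np.polyfit(temps, y, 1)
    print(f"{name}: slope = {slope:+.3f} trait units per degree C")

# Crossing reaction norms: if the genotype ordering flips between the gradient
# extremes, ranking is environment-dependent -- the classic G x E signature.
flips = (pheno["genotype_A"][0] > pheno["genotype_B"][0]) != \
        (pheno["genotype_A"][-1] > pheno["genotype_B"][-1])
print("rank reversal across gradient:", flips)
```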
This matters because many studies still reduce environment to a binary design: control versus stress, high input versus low input, standard versus perturbed. That design is convenient, but it often underperforms when the real biology is thresholded or nonlinear. A slight temperature shift may do almost nothing until a response boundary is crossed. A narrow pH window may expose a genotype-specific defect that is invisible under standard medium. A nutrient limitation may change one phenotype only after a developmental transition or after compensatory reserves are depleted. If the design samples only two points, it can erase the most informative part of the curve.
A useful way to see why gradient design matters is to look at drought-adaptive trait work in wheat. A recent Frontiers study used high-throughput phenotyping and hyperspectral indicators to dissect drought-adaptive traits across multiple seasons. The important lesson was not just that drought mattered. It was that yield-related outputs, spectral indicators, and stress-linked traits did not all move together or at the same stage. The combined use of HTP and genome-wide data was useful precisely because one endpoint could not summarize the system. Some measurements were earlier and more diagnostic, while others reflected downstream adaptation rather than immediate damage. That is exactly the kind of situation in which a simple control-versus-stress comparison looks tidy but loses explanatory power.
The same lesson appears in more explicitly plasticity-focused work. Frontiers studies on plant phenotypic plasticity and reaction norm slopes show that the slope itself can be used as a plasticity trait and can be mapped genetically, rather than treating plasticity as an unstructured side effect. That is a crucial shift. It means researchers can ask not only which genotype has the best average phenotype, but which genotype maintains stable performance across environmental variation and which one shows sharp divergence only under particular stress combinations.

Figure 2. Reaction Norm and G×E Interaction Landscape
What it shows: One genotype source feeding into a broad environmental gradient, with diverging phenotype outputs and a compact inset of crossing reaction norm curves.
Why it matters: It makes G×E visible as a quantitative mapping problem rather than a vague reminder that environment matters.
How to read it: Move from left to right across the gradient and compare how trait outputs separate as pH, temperature, nutrient level, or stress intensity changes.
The standard variance framework still helps:
\[\sigma_P^2 = \sigma_G^2 + \sigma_E^2 + \sigma_{G\times E}^2 + \sigma_\epsilon^2\]
The practical point is not the equation itself. It is that the interaction term can be large. In breeding, that means the best genotype under one regime may not remain the best under another. In experimental systems, that means a variant can look neutral until the right environment reveals it. When G×E is expected, it is often better to anchor gradient measurements to explicit variant backgrounds using Genotyping by Sequencing (GBS) or SNP Fine Mapping, rather than treating genotype as a coarse label.
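For a balanced design, the components can be sketched directly from marginal means. The simulation below generates a genotype-by-environment grid and recovers rough method-of-moments estimates of each term; it deliberately ignores the small-sample bias corrections a real mixed-model analysis would apply, and every number in it is simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced design: g genotypes x e environments x r replicates.
g, e, r = 8, 6, 5
G = rng.normal(0, 1.0, size=(g, 1, 1))      # genotype main effects
E = rng.normal(0, 0.8, size=(1, e, 1))      # environment main effects
GxE = rng.normal(0, 0.6, size=(g, e, 1))    # interaction effects
eps = rng.normal(0, 0.5, size=(g, e, r))    # residual noise
y = 10.0 + G + E + GxE + eps                # simulated phenotype array

cell = y.mean(axis=2)                        # genotype-by-environment cell means
var_G = cell.mean(axis=1).var()              # spread of genotype means
var_E = cell.mean(axis=0).var()              # spread of environment means
interaction = cell - cell.mean(axis=1, keepdims=True) \
                   - cell.mean(axis=0, keepdims=True) + y.mean()
var_GxE = interaction.var()                  # what main effects cannot explain
var_res = (y - cell[:, :, None]).var()       # within-cell replicate noise

print(f"sigma2_G ~ {var_G:.2f}, sigma2_E ~ {var_E:.2f}, "
      f"sigma2_GxE ~ {var_GxE:.2f}, sigma2_res ~ {var_res:.2f}")
```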
Why binary designs often underperform
Binary designs are cheap. They are also biologically blunt. They work best when the expected response is monotonic, large, and immediate. They perform poorly when the expected response is buffered, delayed, or nonlinear. If the true curve has a plateau, then a steep transition, then recovery, a two-point comparison can produce at least three wrong interpretations: no effect, unstable effect, or weak effect. The underlying biology may be strong, but the design sampled the wrong coordinates.
This is one of the clearest places where extra words in the Methods section matter more than extra adjectives in the Discussion. A good G×E design specifies the environmental axis, the expected signal layer, the likely response window, and whether rank reversal is biologically plausible. Without that, many genotype–phenotype arguments remain underpowered even when the data volume looks impressive.
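A small simulation makes this failure mode concrete. The response curve below is hypothetical, built to have exactly that plateau-transition-recovery shape; a two-point contrast placed in the quiet regions reports almost nothing, while a gradient over the same axis exposes the transition.

```python
import numpy as np

def response(temp, threshold=26.0, width=0.8):
    """Hypothetical thresholded trait: flat until a boundary is crossed, then a
    sharp rise, then partial recovery at the hot end of the axis."""
    rise = 1.0 / (1.0 + np.exp(-(temp - threshold) / width))
    recovery = 1.0 / (1.0 + np.exp((temp - 31.0) / width))
    return 4.0 * rise * recovery

# Binary design: two coordinates that happen to straddle the quiet regions.
control, stress = 22.0, 33.0
print(f"binary contrast: {response(stress) - response(control):.2f}")  # looks ~ null

# Gradient design: the same curve sampled densely exposes the transition.
grid = np.linspace(20.0, 34.0, 29)
vals = response(grid)
print(f"gradient peak effect: {vals.max():.2f} at {grid[np.argmax(vals)]:.1f} C")
```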
The problem of phenotypic lag
A second major complication is phenotypic lag. Molecular layers respond on different clocks. Accessibility can change early. RNA often moves next. Protein activity may lag behind RNA. Metabolite pools can spike and collapse on yet another timescale. Stable morphology or performance often appears last. If the study samples too early, visible phenotype can look absent even though the causal program has already started. If it samples too late, early drivers may already be buffered, and the final measurement may over-credit secondary compensation.
This is not a theoretical nuisance. PLOS Biology work on phenotypic delay in bacterial systems showed that genetic change can remain phenotypically hidden for several generations, demonstrating how a real genotype effect can be delayed far beyond the mutation event itself. PLOS Computational Biology further showed that phenotypic delay is often neglected in evolutionary and predictive models even though it changes how genotype becomes observable phenotype. The core lesson transfers well beyond bacterial systems: the appearance of phenotype is not guaranteed to coincide with the earliest molecular cause.
A good design therefore assigns early, mid, and late sampling windows before the experiment starts. Early windows may capture chromatin or RNA change. Mid windows may be better for translation-linked or activity-linked states. Late windows may be needed for physiology, morphology, or stable performance. This is one reason Ribosome Profiling (Ribo-seq) can be useful in studies where RNA and final phenotype disagree. It helps determine whether the missing bridge lies at the translation layer rather than in the transcriptome itself.
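The scheduling logic can be written down before any data exist. In the sketch below, the peak times, pulse widths, and detection threshold are arbitrary placeholders; the point is that pre-assigned early, mid, and late windows each intercept different layers of the same causal chain.

```python
import numpy as np

# Hypothetical response kinetics after a perturbation at t = 0 (hours). Each
# layer peaks later than the one upstream of it; all numbers are illustrative.
layers = {                 # (peak time, pulse width) in hours
    "chromatin":  (2.0,  2.0),
    "RNA":        (6.0,  3.0),
    "protein":    (14.0, 5.0),
    "metabolite": (24.0, 6.0),
    "phenotype":  (48.0, 12.0),
}

def signal(t, peak, width):
    return np.exp(-0.5 * ((t - peak) / width) ** 2)  # Gaussian pulse shape

windows = {"early": 4.0, "mid": 16.0, "late": 48.0}  # pre-assigned sampling times
for window, t in windows.items():
    visible = [name for name, (p, w) in layers.items() if signal(t, p, w) > 0.5]
    print(f"{window:>5} window (t = {t:>4.0f} h): detectable layers = {visible}")
```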

Figure 3. Temporal Decoupling: Phenotypic Lag Across Molecular Layers
What it shows: A perturbation event followed by offset peaks in RNA, protein, metabolite, and phenotype layers.
Why it matters: It explains why a single endpoint can make a real causal effect look weak or inconsistent.
How to read it: Compare where each layer peaks over time, not just the order of the layers.
The Dark Matter of Phenotypes: Epistasis, Pleiotropy, and Polygenic Scores
Even after environment is modeled more carefully, many traits remain difficult to predict because the underlying genetic architecture is distributed. Three ideas dominate this part of the problem: epistasis, pleiotropy, and polygenicity.
Epistasis means the effect of one locus depends on the state of another. This is why effect size is often conditional rather than universal. A variant that appears neutral in one background can become important in another if the pathway context changes. A perturbation can be masked by compensatory routes in one genotype and exposed in another. In practice, this means one-locus interpretations often fail because they ignore network dependence. PLOS work on genotype–environment interactions and causal pathways underscores the importance of looking for intermediates rather than assuming direct one-step causation between genotype and phenotype.
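A minimal simulation shows why additive, one-locus analysis misses this dependence. The trait below is constructed so that locus A matters only when locus B is present; an additive least-squares fit smears that conditionality into misleading marginal effects, while adding the interaction term recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Hypothetical biallelic loci coded as 0/1 carrier status.
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)

# Simulated trait: a pure A x B interaction with no effect of A when B is absent.
y = 2.0 * a * b + rng.normal(0, 1.0, n)

X_add = np.column_stack([np.ones(n), a, b])          # additive-only design
X_int = np.column_stack([np.ones(n), a, b, a * b])   # design with interaction
for name, X in [("additive", X_add), ("with A x B", X_int)]:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    print(f"{name:>10} model: coefficients = {np.round(beta, 2)}, RSS = {rss:.1f}")
```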
Pleiotropy means one locus influences multiple phenotypes. This is one reason optimization can be so frustrating. A change that improves one output may worsen another. A locus may affect growth, architecture, stress response, and timing at once, not because the system is messy but because the biology is integrated. A 2023 PLOS Genetics study on multiple phenotype association used network logic to show that one SNP or gene can influence more than one phenotype and that genotype–phenotype networks help clarify whether a locus acts narrowly or broadly across a phenotype landscape. That matters for both breeding and engineering. “Improvement” should not be judged by one preferred endpoint alone. It should be judged by the broader neighborhood of affected traits.
Polygenicity means many loci with individually modest effects contribute to the same phenotype. Polygenic scores, or PGS, try to compress that architecture into a usable predictor. That is useful, but only up to a point. When trait architecture is strongly interaction-dependent, condition-dependent, or poorly measured, additive compression can hide crucial biology. Frontiers reviews on genotype-to-phenotype prediction using machine learning make this point clearly: model performance depends not just on algorithm choice but on data quality, environmental metadata, and the biological fit between measured variables and the target phenotype. Better algorithms cannot recover biology that the study never measured in a usable form.
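Mechanically, an additive PGS is nothing more than a weighted sum of allele dosages, which is exactly why it cannot represent interaction- or context-dependent architecture. The sketch below uses simulated dosages and effect sizes, not real GWAS estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
n_individuals, n_snps = 100, 500

# Simulated allele dosages (0, 1, 2) and per-SNP additive effect sizes, standing
# in for what would normally come from a GWAS summary-statistics file.
dosage = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)
effect = rng.normal(0, 0.05, size=n_snps)

# One matrix-vector product per cohort: score_i = sum_j dosage_ij * effect_j.
pgs = dosage @ effect

# Any G x E or epistatic dependence in the true architecture is invisible to
# this predictor by construction, whatever the training algorithm was.
print("PGS for first five individuals:", np.round(pgs[:5], 2))
```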
This is why “missing heritability” is often better understood as misplaced signal than absent signal. Some of the missing piece sits in rare or structural variation. Some sits in G×E. Some sits in epistasis. Some sits in trait definitions that are too coarse. Some sits in intermediate phenotypes that are closer to mechanism than the final endpoint. For discovery-oriented projects, Genome-wide Association Study (GWAS) and Pan Genome analyses remain valuable, but their biological yield rises sharply when followed by mechanistic layers rather than treated as the end of the story.

Figure 4. The Dark Matter of Phenotypes: Epistasis, Pleiotropy, and Polygenic Effects
What it shows: A 3D interaction network in which some loci feed multiple traits, many loci converge on one trait, and selected node pairs show interaction-dependent effects.
Why it matters: It turns “missing heritability” into an architecture problem instead of a vague statistical complaint.
How to read it: One-to-many edges indicate pleiotropy, many-to-one edges indicate polygenicity, and highlighted node pairs indicate epistatic dependence.
High-Throughput Phenotyping (HTP) and the “Phenome” Challenge
If genomics solved the problem of measuring variation at scale, phenomics is solving the harder problem of measuring consequence at scale. The task is harder because phenotype is dynamic. It changes with time, environment, developmental stage, and measurement modality. That is why phenotyping remained a bottleneck long after sequencing became routine. Hyperspectral imaging and related HTP approaches are now expanding the accessible phenotype space, but no single platform captures everything.
From manual observation to digital phenotyping
Manual scoring is slow and coarse. It is also poor at detecting transient states. Digital phenotyping replaces sparse observation with image streams, spectral signatures, sensor traces, and automated feature extraction. Frontiers and methods-primer literature on hyperspectral imaging show that the technology is useful precisely because it can characterize chemical and biological properties in a non-invasive way while preserving spatial information. That makes it powerful for early detection, repeated measurements, and trait decomposition rather than one-time scoring.
Imaging-based phenotyping
Imaging is strongest when the phenotype has a spatial face: morphology, architecture, color, lesion pattern, growth trajectory, canopy behavior, or visible stress onset. Hyperspectral imaging extends this further by capturing a richer waveband profile that can reveal biochemical differences before they are obvious in ordinary images. The trade-off is that imaging often sees consequence better than cause. It tells you that something changed, but not necessarily which molecular route produced the change.
MS-based metabolomics
Metabolomics sits closer to functional pathway output. It can report substrate allocation, redox behavior, defense chemistry, and biochemical state more directly than upstream sequence or expression layers. In many studies, metabolites behave like strong endophenotypes because they integrate several upstream steps into a state that is much closer to the trait. The trade-off is operational: chemistry-rich data are harder to scale densely over time and are more sensitive to sample handling.
Sensor-based physiology
Sensor-based physiology is strongest on time. It captures kinetics: gas exchange, fluorescence, thermal response, electrophysiology, and continuous environmental coupling. These methods are especially useful when the phenotype is not just the final value but the speed of adaptation, buffering capacity, or recovery trajectory. Their weakness is narrower molecular depth.
The precision gap
The main HTP problem is not simply platform availability. It is synchronization. Imaging is strong on space. Sensors are strong on time. Metabolomics is strong on chemistry. If those layers are sampled without temporal logic, the integrated phenotype map can look inconsistent even when the biology is coherent.
A good correction framework is simple. Anchor a real time zero to the perturbation event. Assign early, mid, and late windows to the layers expected to move. Pair every phenotype measurement with environment logging. Decide in advance whether the first interpretable signal is expected in morphology, physiology, chemistry, or a molecular intermediate. When these choices are made ahead of time, HTP becomes a causal mapping tool rather than a large-format camera attached to a weak design.
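One way to implement that synchronization is to anchor every stream to the shared time zero and align the sparser layers to a reference clock with a tolerance-bounded nearest-timestamp join. The streams, column names, and tolerance below are all hypothetical.

```python
import pandas as pd

# Hypothetical streams from three platforms, each timestamped in minutes after
# the perturbation event at t = 0. All values and column names are made up.
imaging = pd.DataFrame({"t_min": [0, 60, 120, 180],
                        "canopy_area": [1.00, 1.10, 1.12, 1.30]})
sensor = pd.DataFrame({"t_min": [5, 35, 65, 95, 125, 155, 185],
                       "gas_exchange": [9.8, 9.1, 8.0, 7.7, 7.9, 8.4, 8.8]})
metab = pd.DataFrame({"t_min": [30, 150],
                      "proline_au": [0.2, 1.4]})

# Use the space-rich imaging clock as the reference, then attach the other
# layers with a nearest-timestamp join bounded by an explicit tolerance.
aligned = imaging.sort_values("t_min")
for other in (sensor, metab):
    aligned = pd.merge_asof(aligned, other.sort_values("t_min"),
                            on="t_min", direction="nearest", tolerance=30)
print(aligned)
```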
A concrete example helps. In drought-adaptive wheat studies, imaging-based or hyperspectral phenotyping is often the best first layer when the goal is population-scale screening for architecture and stress-linked signatures. If the goal shifts to understanding why two lines with similar canopy signals later diverge in productivity, chemistry-proximal measurements become more informative. If the key question is how fast lines respond after water withdrawal or recovery, sensor-heavy physiology may matter more than either. Platform choice should therefore follow the expected first interpretable signal, not the prestige of the instrument.

Figure 5. High-Throughput Phenotyping Platform Comparison
What it shows: A three-lane dashboard comparing imaging, metabolomics, and sensor-based physiology across spatial resolution, temporal resolution, molecular depth, and throughput.
Why it matters: It turns platform choice into a structure-of-signal decision rather than a shopping list.
How to read it: Identify which dimension matters most for the expected first interpretable signal, then choose the platform that resolves it best.
Study Design Checklist
Before selecting assays, ask:
- Is the causal variant already known, or does the study still need broad discovery?
- Is the expected phenotype strongly dependent on pH, temperature, nutrients, or timing?
- Which layer is expected to move first: chromatin, RNA, protein activity, metabolism, physiology, or visible morphology?
- Is time-resolved sampling necessary to avoid phenotypic lag artifacts?
- Does the study need spatial or cell-state resolution?
- Will the final claim require perturbation-based validation rather than correlation alone?
For studies where tissue context and local organization matter, 10x Spatial Transcriptome Sequencing Service can complement digital phenotyping by linking structure to molecular state.
Molecular Intermediates: The Endophenotype Revolution
If genotype is too upstream and final phenotype is too downstream, then the most informative measurements often live in between. These intermediate states are often called endophenotypes. Their value is practical. They shorten the inferential distance between sequence and final trait.
Transcriptome as a proxy
The transcriptome is one of the most useful bridges because it is dynamic and scalable. It often captures pathway activation, state transitions, or stress response before visible phenotype changes. Single-cell and spatial approaches strengthen this further by showing that genotype often manifests as changes in state composition or trajectory rather than as one simple average fold change. This is why transcript-aware profiling is often the first serious follow-up after variant discovery. In workflow terms, RNA-Seq and Full-Length Transcripts Sequencing (Iso-Seq) are not just cataloging tools. They are bridge-layer tools.
Why mRNA is useful but incomplete
mRNA is informative, but it is not the trait. It reflects what the system is preparing to do more reliably than what it has already finished doing. A strong transcript shift may produce only a modest trait effect if translation is constrained or the pathway is buffered downstream. Conversely, a modest transcript shift can produce a large phenotype if the affected genes sit at a bottleneck. Disagreement between transcriptome and phenotype is therefore not necessarily a failure. It is often a sign that the gating layer lies elsewhere.
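A toy saturating link function illustrates both directions of this disagreement. The Hill-type curve below is an assumption chosen only for its shape: a two-fold transcript shift in the saturated regime barely moves the output, while the same relative shift near the bottleneck moves it substantially.

```python
def trait_output(transcript, vmax=10.0, k=1.0, hill=2.0):
    """Toy Hill-type link between transcript level and trait output.
    Parameters are arbitrary; only the saturating shape matters here."""
    return vmax * transcript**hill / (k**hill + transcript**hill)

# A large transcript shift in the saturated regime barely moves the trait...
print(f"4.0 -> 8.0 transcript: trait +{trait_output(8.0) - trait_output(4.0):.2f}")
# ...while the same two-fold shift near the bottleneck moves it a lot.
print(f"0.8 -> 1.6 transcript: trait +{trait_output(1.6) - trait_output(0.8):.2f}")
```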
Proteome and PTM as the final gate
That gating layer is often downstream of abundance. In many systems, the last decisive switch is not whether a protein exists, but whether it is active, stabilized, localized, complexed, or modified. This is why protein-state thinking matters even in transcript-rich workflows. A project can correctly identify a regulatory or transcriptional change and still fail to explain the phenotype if the decisive effect lies in downstream protein state.
Epigenome as pre-expression context
Upstream of transcriptomics sits the epigenome. Accessibility and methylation patterns help determine whether sequence potential becomes transcriptional reality. When the same genotype behaves differently across environments or developmental states, the difference often begins here. That is why Whole Genome Bisulfite Sequencing (WGBS) or EM-seq Service can be informative when the expected mechanism lies in regulation rather than coding change.
Choosing the right bridge layer
A common mistake in multi-omic design is to collect many layers without deciding which one is expected to become informative first. More data do not automatically create a better genotype–phenotype map. The right bridge depends on the expected biology.
If the hypothesis concerns enhancer use or accessibility, chromatin profiling should come first. If it concerns rapid response state, transcriptomics is often the best early bridge. If it concerns pathway execution, the decisive layer may be downstream of RNA. If it concerns output chemistry or resource allocation, a metabolite-linked layer may be closer to the final trait than transcript abundance. If it concerns local structure or developmental organization, spatial readouts may be necessary. PLOS Computational Biology work linking cell atlases and phenotypic foundations, as well as work on perturbative maps of transcriptional and morphological data, both reinforce the same point: cells and intermediate states often mediate the genotype-to-phenotype map more directly than bulk endpoint traits do.
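Stripped to its logic, the paragraph above is a lookup from hypothesis class to first bridge layer, and it can be written down as one. The mapping below is a simplification of that prose, not a validated decision tree, and the assay names are simply the ones used in this article.

```python
# A minimal decision helper encoding the bridge-layer heuristics above.
FIRST_BRIDGE = {
    "enhancer_or_accessibility": "chromatin profiling (ATAC-Seq / ChIP-Seq)",
    "rapid_response_state":      "transcriptomics (RNA-Seq / Iso-Seq)",
    "pathway_execution":         "translation or protein-state readout (Ribo-seq)",
    "output_chemistry":          "metabolite-linked profiling",
    "local_structure":           "spatial transcriptomics",
}

def first_bridge(hypothesis: str) -> str:
    return FIRST_BRIDGE.get(hypothesis, "unclassified: refine the hypothesis first")

print(first_bridge("rapid_response_state"))
```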
That is the practical meaning of the endophenotype revolution. It is not just about adding omics. It is about choosing the layer that makes the causal path shortest and most testable. For integrated studies, Multi-Omics Service becomes most valuable when it is used to identify that bridge rather than simply to increase data volume.

Figure 6. The Multi-Omic Hierarchy: From SNP to Endophenotype to Trait
What it shows: A continuous sequence from genomic variant to transcriptome, proteome, PTM layer, metabolome, endophenotype, and final trait.
Why it matters: It distinguishes assay-addressable intermediate checkpoints from the broader “noise and distortion” perspective of Figure 1.
How to read it: Treat each layer as a possible measurement opportunity and ask which one is most likely to make the causal path visible first.
Conclusion: Engineering Phenotypes Through Synthetic Biology
GPM is becoming increasingly interventional. The field is shifting from asking which loci correlate with a trait to asking which edits, in which contexts, produce predictable output. That is a much harder standard. A clean edit does not guarantee a clean phenotype. The edit still has to pass through regulatory state, expression, molecular execution, environment, and time before the outcome stabilizes. PLOS Computational Biology work on perturbative maps emphasizes that modern reverse-genetics datasets increasingly connect perturbation identity to high-dimensional readouts such as microscopy and RNA sequencing, allowing phenotype to be mapped as a structured landscape rather than as one endpoint like growth or lethality.
A useful engineering loop now looks like this. First, define the likely causal target from variant and trait data. Second, edit or perturb it. Third, measure the first mechanistic consequence in the most relevant layer. Fourth, expose the system to the context where the phenotype is expected to emerge. Fifth, compare predicted and observed output and update the model. This logic matters because failures in phenotype engineering are often informative. They tell you which layer was ignored: context, time, compensation, or the wrong intermediate bridge.
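Rendered as code structure, that loop is a skeleton like the one below. Every class and function in it is a stub standing in for a real model, edit, or assay; only the control flow, including the closing feedback step, is the point.

```python
from dataclasses import dataclass

@dataclass
class PredictiveModel:
    predicted_effect: float = 1.0      # current causal hypothesis (arbitrary units)
    tolerance: float = 0.1             # acceptable prediction error
    last_error: float = float("inf")

    def update(self, predicted: float, observed: float) -> None:
        self.last_error = abs(predicted - observed)
        # naive correction: pull the prediction toward the observation
        self.predicted_effect = 0.5 * (predicted + observed)

def edit_target(target: str) -> str:                           # step 2 stub
    return f"{target}:edited"

def measure_bridge_layer(system: str) -> float:                # step 3 stub
    return 0.8                                                  # e.g. transcript shift

def expose_and_phenotype(system: str, context: str) -> float:  # step 4 stub
    return 0.6                                                  # e.g. trait effect

model = PredictiveModel()
for round_idx in range(3):
    predicted = model.predicted_effect                          # step 1: predict
    system = edit_target("locus_X")                             # step 2: perturb
    bridge = measure_bridge_layer(system)  # step 3: mechanistic readout, used for
                                           # interpretation in a real study
    observed = expose_and_phenotype(system, "stress_gradient")  # step 4: context
    model.update(predicted, observed)                           # step 5: correct
    if model.last_error < model.tolerance:
        break
print(f"final predicted effect after correction: {model.predicted_effect:.2f}")
```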
For validation, the sequence event itself may be confirmed with CRISPR Sequencing, but mechanistic confidence usually improves when that confirmation is paired with specificity checks such as CRISPR Off-Target Validation and with the most relevant bridge-layer readout. For pooled workflows, CRISPR Screen Sequencing can help connect edit identity to downstream enrichment or depletion patterns before higher-content follow-up.

Figure 7. Designing Predictable Phenotypes: Edit, Measure, Validate
What it shows: A closed-loop workflow linking CRISPR editing, controlled context, multi-omic measurement, phenotype capture, predictive modeling, and iterative validation.
Why it matters: It reframes GPM as a design-and-test workflow instead of a static association exercise.
How to read it: Follow the forward loop from edit to phenotype first, then read the feedback arrow as model correction rather than experimental failure.
The strongest takeaway is simple. Genotype is not phenotype, but phenotype is not mysterious either. It is the integrated output of a layered biological computation. The closer a study gets to measuring that computation at the right time, under the right context, and at the right intermediate layer, the more predictive and engineerable the phenotype becomes.
FAQ
1. Why is genotype alone often insufficient to predict phenotype?
Because the sequence effect is filtered through accessibility, expression, translation, downstream activity control, environment, and timing before it becomes a stable trait.
2. What is a reaction norm in genotype–phenotype studies?
A reaction norm describes how a phenotype changes across an environmental gradient for a given genotype. It is one of the clearest tools for quantifying plasticity and G×E.
3. When is a gradient design better than a simple control-versus-stress comparison?
When the expected phenotype is thresholded, nonlinear, delayed, or likely to change rank across environments. Gradient designs capture response surfaces that binary contrasts often miss.
4. Why do multi-omic layers often disagree across time?
They often reflect different moments in the same causal chain. Accessibility, RNA, protein activity, metabolites, physiology, and visible phenotype move on different timescales.
5. What is phenotypic lag?
Phenotypic lag is the delay between early molecular response and later visible trait emergence. It is a major reason single-endpoint designs can be misleading.
6. How do you choose the right intermediate layer to measure first?
Choose the layer where the first interpretable signal is most likely to appear: chromatin for accessibility-driven hypotheses, RNA for rapid response, downstream activity-linked layers for execution, and spatial readouts for local context.
7. Do polygenic scores solve the genotype–phenotype problem?
They help in some settings, but predictive performance still depends strongly on trait architecture, data quality, metadata, and how well the training data capture relevant biology.
8. Why is high-throughput phenotyping still challenging?
Because phenotype is dynamic and multidimensional. The main difficulty is aligning space-rich, chemistry-rich, and time-rich measurements into one coherent mapping framework.
9. Why does synthetic biology increase the need for better genotype–phenotype maps?
Because editable sequence does not guarantee predictable output. Engineering phenotype requires understanding how an edit propagates through molecular layers and context before a stable trait appears.
References:
- Hecox-Lea B, et al. Gene by Environment Interactions reveal new regulatory aspects of a genotype–phenotype relationship. PLOS Genetics. 2022. DOI:10.1371/journal.pgen.1009988
- Rodrigues NTL, Bland T, Ng K, et al. Quantitative perturbation–phenotype maps reveal nonlinear responses underlying robustness of PAR-dependent asymmetric cell division. PLOS Biology. 2024. DOI:10.1371/journal.pbio.3002437
- Celik S, Hütter J-C, Carlos SM, et al. Building, benchmarking, and exploring perturbative maps of transcriptional and morphological data. PLOS Computational Biology. 2024. DOI:10.1371/journal.pcbi.1012463
- Raj A, van Oudenaarden A, et al. Cell atlases and the developmental foundations of the phenotype. PLOS Computational Biology. 2026. DOI:10.1371/journal.pcbi.1013944
- de Jong M, et al. Effective polyploidy causes phenotypic delay and influences bacterial evolvability. PLOS Biology. 2018. DOI:10.1371/journal.pbio.2004644
- Schmutzer M, Wagner A. Phenotypic delay in the evolution of bacterial antibiotic resistance. PLOS Computational Biology. 2020. DOI:10.1371/journal.pcbi.1007930
- Zeng H, et al. A novel method for multiple phenotype association studies based on genotype and phenotype networks. PLOS Genetics. 2023. DOI:10.1371/journal.pgen.1011245
- Danilevicz MF, et al. Plant genotype to phenotype prediction using machine learning. Frontiers in Genetics. 2022. DOI:10.3389/fgene.2022.822173
- Schrag TA, et al. A comparison of classical and machine-learning-based phenotype prediction in plants. Frontiers in Plant Science. 2022. DOI:10.3389/fpls.2022.932512
- Fu D, et al. Genetic analysis of phenotypic plasticity identifies BBX6-related response patterns using reaction norm slopes. Frontiers in Plant Science. 2023. DOI:10.3389/fpls.2023.1280331
- Elazab A, et al. High-throughput phenotyping using hyperspectral indicators for drought-adaptive traits in wheat breeding. Frontiers in Plant Science. 2024. DOI:10.3389/fpls.2024.1470520
- Hong D, Li C, Yokoya N, et al. Hyperspectral imaging. Nature Reviews Methods Primers. 2026. (Included for technical background on HTP; check reuse terms separately before republishing figures or adapted content, since publisher licensing may differ from CC BY defaults.)
Disclaimer: For research use only. Not for diagnostic or clinical use.