MLPA Test & Assay Workflow (RUO): What It Measures, Step-by-Step Method, Sample Requirements, and Deliverables

Multiplex ligation-dependent probe amplification (MLPA) is a targeted, probe-based assay workflow used to review copy number behavior at predefined loci, often at exon-level or gene-focused resolution. In RUO programs, its practical value is that it supports focused copy number review without requiring every project to move immediately into a broader discovery workflow. The assay logic remains consistent across foundational and workflow-focused sources: adjacent probes hybridize to target DNA, are ligated only when correctly matched, are amplified with universal primers, and are then separated by fragment analysis for comparative signal review.

Quick Start — What an "MLPA Test" Delivers in RUO Projects

Throughout this page, MLPA is described strictly as a research-use-only assay workflow for targeted copy number review at predefined loci. The outputs discussed here are intended for assay evaluation, internal project review, method comparison, and sample prioritization within RUO programs. They are not presented as diagnostic findings, treatment-guiding results, or patient-specific conclusions. Any interpretation must remain limited to assay scope, input quality, probe design, reference strategy, and the stated project context.

In a research-use-only setting, an "MLPA test" is best understood as a targeted copy number review workflow built around a defined probe mix, a comparative reference strategy, and a report package that can be checked by scientific or project stakeholders. It is useful when teams need to answer questions such as whether selected exons or loci show gain, loss, or stable relative signal behavior; whether a focused assay can help review a CNV-oriented hypothesis already narrowed by study design; or whether an outsourcing partner can provide a manageable output package for internal decision-making.

What you can claim scientifically in RUO language:

  • targeted copy number review across selected loci
  • relative signal-based copy number assessment against reference samples
  • assay-based follow-up for predefined genomic regions
  • internal project outputs for method comparison, workflow evaluation, or sample prioritization

Figure 1. MLPA RUO workflow and deliverables map. A project-level view linking DNA input, assay execution, fragment analysis, review checkpoints, and final deliverables such as ratio tables, QC notes, plots, and methods summary.

A well-run MLPA project should produce more than a single summary statement. At minimum, teams should expect a structured output package that can be reviewed later without reconstructing the assay logic from scratch. That package commonly includes a sample status overview, a table of probe- or target-level outputs, visual review plots, a QC summary, a concise methods summary, and a limitations note explaining what the assay did and did not cover.

New to MLPA terminology and principle? See our overview of what MLPA means and how the principle works.

Step-by-Step Workflow (From DNA to Report)

A practical way to understand MLPA is to separate it into two linked lanes: the wet-lab execution lane and the analysis/review lane. The wet-lab lane generates the fragment signal. The analysis lane determines whether that signal is interpretable, whether it is stable enough for internal review, and whether the package is ready to hand off.

1) Project intake and assay fit check

Before DNA reaches the bench, the project should already be narrowed to an operationally useful scope:

  • which loci, genes, or exon groups are relevant
  • whether the project is focused on targeted CNV review rather than broad discovery
  • how reference samples will be selected
  • whether repeat material is available if a sample underperforms
  • what final output format the receiving team actually needs

This intake step often determines whether MLPA fits the project scope or whether a broader method should be considered first. In some studies, a focused MLPA assay service is the right starting point; in others, a broader CNV sequencing service or a complementary CGH microarray service may better match the study scope.

This up-front fit check matters because the assay is most useful when the project question is already constrained to defined loci. If the team still needs discovery-scale breadth, it is usually better to settle that question before forcing a targeted assay into the wrong role.

2) Pre-QC: DNA quantity and quality checks

MLPA is often described as robust, but it is not indifferent to starting material. Poor input remains one of the most common reasons a project slows down or has to be repeated. Workflow references consistently place DNA preparation and fragment-analysis quality among the practical determinants of usable output.

At pre-QC, teams should review:

  • DNA amount: enough total material for the intended assay plus contingency
  • concentration window: workable for consistent setup and handling
  • buffer compatibility: clearly documented and not likely to interfere with reaction performance
  • integrity and handling history: especially for samples with uncertain storage records
  • identity and traceability: the physical tube must match the metadata sheet exactly

In practice, a useful outsourcing workflow usually classifies samples into three operational groups:

  • 1. Proceed — input appears acceptable for routine assay setup
  • 2. Proceed with caution — sample may be usable, but QC review should be stricter
  • 3. Hold / re-extract / resubmit — likely to waste time if pushed forward unchanged

For project managers, this is a scheduling issue. For platform leads, it is a reproducibility issue. In both cases, the lesson is the same: if the input record is weak, downstream ratios become harder to trust.
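As an illustration, the three-group triage above can be sketched as a simple rule set. Every threshold below (total amount, concentration window) is a hypothetical placeholder, not a universal MLPA acceptance criterion; real projects should substitute project-qualified ranges agreed with the service team.

```python
from dataclasses import dataclass

@dataclass
class SampleRecord:
    sample_id: str
    total_ng: float          # total DNA amount available
    conc_ng_ul: float        # measured concentration
    buffer_known: bool       # buffer identity documented?
    storage_documented: bool # storage/shipping history on record?

def triage(s: SampleRecord,
           min_total_ng: float = 200.0,      # placeholder reserve-inclusive target
           conc_window=(10.0, 100.0)) -> str:
    """Classify a sample into one of three operational groups."""
    lo, hi = conc_window
    # Severely short material: pushing forward unchanged likely wastes time.
    if s.total_ng < min_total_ng * 0.5:
        return "hold_resubmit"
    issues = 0
    if s.total_ng < min_total_ng:
        issues += 1
    if not (lo <= s.conc_ng_ul <= hi):
        issues += 1
    if not (s.buffer_known and s.storage_documented):
        issues += 1
    return "proceed" if issues == 0 else "proceed_with_caution"

sample = SampleRecord("S001", total_ng=350.0, conc_ng_ul=25.0,
                      buffer_known=True, storage_documented=True)
print(triage(sample))  # proceed
```

The value of scripting this is not the thresholds themselves but the audit trail: every sample's routing decision becomes reproducible rather than ad hoc.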

3) Hybridization

During hybridization, paired probes bind to adjacent target sequences. This adjacency requirement is central to MLPA specificity because only properly matched probe halves can support the ligation step. The foundational MLPA paper describes the method around this probe-binding and ligation logic, and the underlying mechanism still defines assay performance in modern workflows.

Operationally, readers do not need every protocol setting here. What matters is the consequence chain:

  • weak or inconsistent binding can destabilize later peak structure
  • target-related mismatch can suppress ligation opportunity
  • low-quality input may first reveal itself here, even if intake review looked acceptable

4) Ligation

Once adjacent probes are properly bound, they are ligated into amplifiable templates. This is the step that distinguishes correctly recognized targets from non-productive probe events. Because only ligated probe products proceed into amplification in the expected way, ligation efficiency strongly influences whether the downstream fragment profile is coherent.

In practical project terms, ligation is rarely visible to the client as a standalone deliverable, but it is highly visible in failure patterns. If multiple expected probe signals underperform together, or if the peak pattern looks globally weak, ligation performance becomes part of the troubleshooting logic.

5) Amplification

After ligation, the assay amplifies probe-derived products through universal primers. Importantly, MLPA amplifies ligated probe products, not the original genomic targets directly. That is one reason it can multiplex many predefined loci in one reaction while still producing size-coded outputs that can later be separated by capillary electrophoresis.

If the study instead needs sequence-level context, a broader target-space review, or direct read-based follow-up, a targeted region sequencing service or an amplicon sequencing service may fit better than forcing MLPA to answer a sequencing-oriented question.

6) Fragment analysis

After amplification, MLPA products are separated by size on a capillary electrophoresis platform, producing a set of peaks corresponding to the expected probe fragments. This stage is where assay chemistry becomes a structured data object that can be reviewed.

This step is critical because it determines whether the assay yields:

  • a clean peak architecture
  • stable sizing
  • sufficient relative intensity
  • a usable basis for ratio calculation and QC review
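As a rough illustration of how the peak-level part of this review can be automated, the sketch below checks that each expected probe fragment has a detected peak within a sizing tolerance and above a minimum height. The probe names, fragment sizes, tolerance, and height cutoff are illustrative assumptions, not values from a real probe mix or instrument.

```python
def check_peaks(expected_sizes, detected_peaks, tol_bp=1.5, min_height=100):
    """Return the probes whose expected peak is missing or too weak.

    expected_sizes: {probe_name: expected_fragment_size_bp}
    detected_peaks: [(size_bp, height), ...] from fragment analysis
    """
    flagged = []
    for probe, size in expected_sizes.items():
        # Collect heights of detected peaks within the sizing tolerance.
        hits = [h for s, h in detected_peaks if abs(s - size) <= tol_bp]
        if not hits or max(hits) < min_height:
            flagged.append(probe)
    return flagged

expected = {"P1": 130.0, "P2": 136.0, "P3": 142.0}
peaks = [(130.2, 1500), (136.4, 40), (148.0, 900)]  # P2 weak, P3 absent
print(check_peaks(expected, peaks))  # ['P2', 'P3']
```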

7) High-level analysis and packaging

At a high level, analysis includes fragment sizing, peak recognition, signal review, reference-based ratio calculation, QC flagging, and packaging of plots and tables for project review. This page focuses on workflow logic and output expectations rather than a full software-level interpretation manual.

That scope matters. Most readers on this page need to understand how the workflow moves from signal generation to reviewable outputs, not every software parameter used in detailed normalization or outlier handling.
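To make the ratio step concrete, the sketch below implements one common two-step scheme for relative methods of this kind: each probe peak is first scaled within its sample against designated reference probes, then divided by the median of the same probe's normalized value across reference runs. The probe names and peak heights are invented for illustration, and real MLPA analysis software applies additional normalization and outlier handling not shown here.

```python
from statistics import median

def normalize_sample(peaks: dict, ref_probes: list) -> dict:
    """Intra-sample step: scale every probe by the mean reference-probe signal."""
    ref_mean = sum(peaks[p] for p in ref_probes) / len(ref_probes)
    return {probe: height / ref_mean for probe, height in peaks.items()}

def probe_ratios(test_peaks, reference_runs, ref_probes):
    """Inter-sample step: normalized test value / median normalized reference value."""
    test_norm = normalize_sample(test_peaks, ref_probes)
    ref_norms = [normalize_sample(r, ref_probes) for r in reference_runs]
    return {
        probe: test_norm[probe] / median(rn[probe] for rn in ref_norms)
        for probe in test_peaks
    }

# Toy data: "EX2" carries roughly half the reference signal (deletion-like pattern).
refs = [
    {"REF1": 1000, "REF2": 1100, "EX1": 950, "EX2": 1000},
    {"REF1": 900,  "REF2": 1000, "EX1": 880, "EX2": 930},
]
test = {"REF1": 1000, "REF2": 1050, "EX1": 980, "EX2": 500}
ratios = probe_ratios(test, refs, ["REF1", "REF2"])
# ratios["EX2"] lands near 0.5 while the other probes stay near 1.0; review
# cutoffs for gain/loss candidates remain project-specific.
```

The sketch also shows why reference strategy is part of assay design: the denominator in the second step is built entirely from the reference runs, so unstable references destabilize every ratio.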

Figure 2. MLPA wet-lab and analysis swimlane with QC decision points. A two-lane workflow showing pre-QC, hybridization, ligation, amplification, fragment analysis, and review steps, with decision points for proceed, repeat, or resubmission.

Taken together, the workflow can be summarized as:

Input DNA → Pre-QC → Hybridization → Ligation → Amplification → Fragment analysis → Peak review → Reference-based ratio review → QC summary → Final output package

That is the real operational meaning of an MLPA assay in a B2B RUO program.

Sample Requirements & Submission Checklist (B2B-ready)

In outsourced MLPA work, delays are often caused less by assay chemistry than by poor submission readiness. A high-quality workflow cannot compensate indefinitely for unclear labels, uncertain storage history, or incomplete metadata.

Sample types

For this page, the working assumption is genomic DNA from RUO sources. What matters most is not the label alone, but whether the extracted material is suitable for stable probe-based analysis and can be traced reliably throughout the workflow.

Typical expectations include:

  • purified genomic DNA
  • clear sample IDs
  • known extraction or elution context
  • enough total material for the planned assay
  • documented storage and shipping conditions

Recommended input range and concentration window

Because assay design and project context vary, it is better to define project-qualified ranges than to imply one universal number. A useful service-side framing is:

  • a recommended input range for routine processing
  • a minimum reviewable range for constrained material
  • a reserve material target for repeats or confirmation work

The same applies to concentration. It should be high enough for consistent setup and low enough to avoid repeated handling variability. When concentration is outside the preferred working window, the service team should either normalize it or flag the sample before setup begins.

Storage and shipping conditions

Submission notes should make the sample history legible:

  • storage temperature history
  • freeze-thaw history if known
  • shipping temperature control
  • buffer identity
  • unusual handling notes or material constraints

Core labs handling larger batches should standardize this metadata early. Batch reproducibility begins before the assay begins.

Labeling and manifest alignment

Each sample should map cleanly to a metadata row containing:

  • sample ID
  • concentration
  • volume
  • total amount
  • buffer
  • storage condition
  • project group or batch label
  • special notes if relevant

This is not busywork. It is what prevents the team from confusing sample-related risk with assay-related risk later.
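A minimal reconciliation check along these lines can be scripted before shipment. The field name sample_id and the report keys below are assumptions for illustration, not a required manifest schema.

```python
def reconcile(tube_ids: list, manifest_rows: list) -> dict:
    """Compare physical tube IDs against the manifest's sample_id column."""
    tubes = set(tube_ids)
    manifest_ids = [row["sample_id"] for row in manifest_rows]
    duplicates = {s for s in manifest_ids if manifest_ids.count(s) > 1}
    return {
        # Tubes present physically but absent from the metadata sheet.
        "missing_from_manifest": sorted(tubes - set(manifest_ids)),
        # Manifest rows with no matching physical tube.
        "missing_tube": sorted(set(manifest_ids) - tubes),
        # Sample IDs listed more than once in the manifest.
        "duplicate_manifest_ids": sorted(duplicates),
    }

report = reconcile(
    ["S001", "S002", "S003"],
    [{"sample_id": "S001"}, {"sample_id": "S002"}, {"sample_id": "S002"}],
)
# S003 has no manifest row and S002 appears twice; both should be fixed
# before the shipment leaves the originating lab.
```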

Contamination avoidance and handling discipline

In this context, "contamination" is broader than obvious cross-sample carryover. It also includes process noise introduced by mixed handling conventions, inconsistent extraction residues, ambiguous labeling, or incomplete manifests.

A submission-ready project should therefore:

  • keep DNA preparation approaches as consistent as possible
  • align physical labels and spreadsheet entries exactly
  • confirm IDs before shipment
  • state handling limitations explicitly
  • avoid assuming that missing metadata can be reconstructed safely after receipt

If the project later needs a compact orthogonal follow-up at selected regions, a Sanger sequencing service or a multiplex PCR sequencing service may be more suitable than reopening the original submission package without a defined next-step plan.

Figure 3. Sample submission readiness checklist for MLPA projects. A submission-focused view linking sample ID, DNA amount, concentration, buffer, storage history, and manifest alignment to smoother assay execution.

Sample submission checklist

Before shipping samples for MLPA, confirm the following:

  • every tube has a unique and legible sample ID
  • the manifest matches the physical tubes exactly
  • DNA amount is sufficient for the intended assay and possible repeats
  • concentration is recorded in a consistent unit
  • buffer identity is stated
  • storage and shipping conditions are documented
  • any known sample limitations are written down
  • the target loci or assay scope are defined
  • control/reference expectations are stated
  • the desired output format is agreed in advance

Controls, Replicates, and How to Avoid Re-runs

This is the section where a project either becomes manageable or begins to drift.

Reference samples

MLPA is a relative method. That means the reference strategy is not an afterthought; it is part of the assay design. Teams should define:

  • which references are appropriate for the study context
  • whether they are processed in the same batch
  • whether they are stable enough for repeat use
  • how outlier references will be handled if they destabilize review

A weak reference strategy can make an otherwise well-executed assay harder to interpret.

Control behavior checks

Depending on probe design and workflow structure, control probes or internal behavior checks help determine whether the signal pattern is globally acceptable. They help identify:

  • low overall reaction quality
  • unstable peak behavior
  • batch-specific anomalies
  • weak comparative baselines

Replicate strategy

Replicates should match the operational goal of the project.

A practical framework is:

  • screening-oriented project: fewer repeats up front, more escalation for flagged samples
  • verification-oriented project: tighter repeat expectations around key samples
  • batch comparison project: stronger emphasis on shared controls and inter-run stability
  • limited-material project: define early when repeat is justified and when resubmission is more sensible

For project managers, replicates affect timing and reserve material. For platform leads, replicates affect confidence in batch-to-batch consistency. The right answer is not "repeat everything." The right answer is "repeat what meaningfully reduces uncertainty."

Common causes of reruns

The most common rerun triggers are usually:

  • insufficient or inconsistent input DNA
  • incomplete or mismatched metadata
  • weak or distorted peak profiles
  • unstable references or poorly matched controls
  • borderline outputs that need a clearer next-step decision

Troubleshooting guide: symptom → likely cause → practical fix

  • Globally weak peak pattern → likely cause: low input, degraded DNA, or inefficient setup → fix: re-check intake metrics, verify concentration, repeat only if reserve material exists
  • Inconsistent behavior across part of the probe set → likely cause: local target issue or sample-specific quality problem → fix: review whether the issue is sample-specific or panel-wide; consider repeat or complementary follow-up
  • Batch-to-batch instability → likely cause: inconsistent prep or weak reference strategy → fix: harmonize extraction, quantification, and reference handling
  • Sample fails review despite acceptable intake → likely cause: hidden handling issue or storage-related stress → fix: review sample history and consider resubmission
  • Ratios are directionally suggestive but unstable → likely cause: borderline signal, weak controls, or comparison uncertainty → fix: avoid over-interpretation; repeat or redirect to a better-fit method

For deeper guidance on study planning, see our resource on MLPA for CNV study design and interpretation.

Deliverables You Should Expect (Files + Reporting Depth)

For B2B users, a service is only as useful as its outputs. "Assay completed" is not a deliverable. A deliverable is a package that lets another scientist, project lead, or platform manager understand what was run, how stable the workflow was, and what the result package does and does not support.

Core files

A solid MLPA output package should typically include:

  • 1. Ratio table
    Probe-level or target-level outputs with clear sample mapping.
  • 2. Plots
    Per-sample or per-probe visual outputs that make directionality and outliers easier to review.
  • 3. QC summary
    Pass/flag notes, repeat notes, and caveats that matter for internal handoff.
  • 4. Methods summary
    Assay scope, workflow summary, and analysis frame in RUO language.
  • 5. Limitations note
    A short statement on target scope and what was not covered.

Deliverables summary table

  • Ratio table: sample IDs, probe/target outputs, normalized values (supports internal review)
  • Plots: per-sample or per-probe visual review (highlights directionality and outliers)
  • QC summary: pass/flag notes, repeats, caveats (supports confident handoff)
  • Methods summary: assay scope, workflow summary, analysis frame (improves traceability)
  • Limitations note: target scope and non-covered areas (prevents over-interpretation)

Optional but useful formats

Depending on project maturity, teams may also want:

  • CSV or XLSX exports
  • PDF report package
  • data dictionary
  • batch summary sheet
  • sample manifest reconciliation notes

The most useful output package is not the longest one. It is the one that makes internal review easier without pretending the assay covers more than it does.

For a more detailed review framework, see our guide to MLPA QC, normalization, and reporting.

Choosing MLPA vs Other Methods (Short Decision Preview)

Before selecting MLPA, teams should screen the project against a small set of operational questions. This keeps the comparison anchored to study scope rather than vague platform preference.

  • Are the loci already defined? If yes, MLPA becomes more attractive; if no, consider a broader discovery workflow. Likely next step: decide between targeted and discovery scope.
  • Is sequence-level context required? If yes, a sequencing-based method may fit better; if no, MLPA remains viable. Likely next step: choose by data need.
  • Is compact copy number review the main goal? If yes, MLPA is a strong fit; if no, consider a broader platform. Likely next step: align deliverables to scope.
  • Are references and controls stable and available? If yes, proceed with targeted design; if no, fix study design first. Likely next step: avoid a weak comparative setup.

MLPA is often the better fit when the project is focused on predefined loci, mainly interested in relative copy number review, and looking for a workflow with manageable output structure. It is less attractive when the study is still in a discovery phase, requires sequence-level context, or needs broad multi-class variant coverage in a single platform.

In other projects, a broader CNV sequencing workflow or a complementary microarray-based approach may be a better fit for the study scope. A good comparison does not ask which platform is "best" in general. It asks which one answers the actual project question with the right balance of scope, interpretability, and operational burden.

For a broader method comparison, see MLPA vs ddPCR vs qPCR vs NGS for CNV.

Decision Checklist: When to Use MLPA and When Not to Use It

Use MLPA when:

  • the loci of interest are already defined
  • targeted copy number review is the main need
  • the team wants a compact assay workflow
  • the sample map and metadata can be kept consistent
  • a manageable handoff package is more useful than broad discovery breadth

Do not default to MLPA when:

  • the project is still discovery-oriented
  • unknown loci are central to the study question
  • sequence-level context matters as much as copy number review
  • the available sample quality is too uncertain
  • the comparative reference strategy has not been stabilized

Quality Control and Troubleshooting

A useful QC section should help readers avoid two errors: over-trusting weak outputs and repeating weak samples without a plan.

Practical QC checkpoints

  • Sample intake: verify concentration, amount, labeling, buffer, and storage notes (weak intake discipline creates downstream ambiguity)
  • Reaction readiness: verify enough material for setup and repeat (reduces stalled workflows)
  • Fragment profile: verify clean peak architecture and expected separation (supports interpretable review)
  • Reference behavior: verify a stable comparative baseline (MLPA is a relative method)
  • Output consistency: verify a stable directional pattern and ratio logic (reduces over-calling of borderline results)
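For the output consistency checkpoint, a ratio-level flag can be sketched as below. The 0.7 and 1.3 cutoffs are commonly cited starting points for relative copy number review, but final thresholds are assay- and project-specific and should be qualified before use.

```python
def flag_ratio(ratio: float, low: float = 0.7, high: float = 1.3) -> str:
    """Flag a normalized probe ratio for internal review (placeholder cutoffs)."""
    if ratio < low:
        return "loss_candidate"
    if ratio > high:
        return "gain_candidate"
    return "within_reference_range"

print(flag_ratio(0.52))  # loss_candidate
```

Flags of this kind mark candidates for review, not conclusions; borderline values near either cutoff are exactly the outputs the troubleshooting guidance above tells teams not to over-interpret.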

Frequent pitfalls

Pitfall 1: Treating all DNA as equivalent
Even when amount looks adequate, sample history may still affect performance.

Pitfall 2: Under-defining references
If the comparative baseline is weak, the final package becomes less convincing.

Pitfall 3: Sending partial metadata
Missing buffer or storage information may matter only when a borderline sample fails, but that is exactly when it becomes expensive.

Pitfall 4: Asking MLPA to answer discovery-scale questions
The assay is useful because it is focused. That focus should be treated as design logic, not as a limitation to hide.

Pitfall 5: Compressing the report too aggressively
A ratio table without QC context is often not enough for a confident internal handoff.

FAQ

1) What does an MLPA assay measure in an RUO project?

It measures relative copy number behavior across predefined loci represented by the selected probe set.

2) Is MLPA a sequencing method?

No. It is a probe-based ligation and amplification workflow followed by fragment analysis.

3) What sample type is usually expected?

Typically genomic DNA from RUO sources, with clear labeling, suitable amount, known buffer context, and documented handling history.

4) How many targets can MLPA assess at once?

The exact number depends on assay design, but the original MLPA publication described relative quantification of up to 40 nucleic acid sequences in one reaction, illustrating the method's multiplex design logic.

5) What is the most common reason for avoidable reruns?

Usually poor input DNA, incomplete metadata, unstable references, or borderline fragment profiles.

6) What deliverables should I ask for?

At minimum: a ratio table, review plots, a QC summary, a methods summary, and a limitations note.

7) When should MLPA be preferred over a broader platform?

When the project is already narrowed to known loci and a targeted copy number workflow is a better match than a discovery-scale platform.

8) Does MLPA always remove the need for orthogonal follow-up?

Not necessarily. In some projects it is the targeted review method itself. In others, project-critical assay signals may still need confirmation by a better-fit follow-up method.

9) Can MLPA support multi-sample or batch-oriented projects?

Yes, provided sample submission, reference handling, and batch discipline are controlled carefully.

10) What should a project manager ask before starting?

Ask about sample acceptance criteria, reserve material expectations, reference strategy, rerun logic, report format, and how project status will be communicated.

References:

  1. Schouten JP, McElgunn CJ, Waaijer R, Zwijnenburg D, Diepvens F, Pals G. Relative quantification of 40 nucleic acid sequences by multiplex ligation-dependent probe amplification. Nucleic Acids Research. 2002;30(12):e57. DOI: 10.1093/nar/gnf056
  2. Ohnesorg T, Turbitt E, White SJ. The Many Faces of MLPA. In: Park DJ, ed. PCR Protocols. Methods in Molecular Biology. 2011;687:193-205. DOI: 10.1007/978-1-60761-944-4_13 (Springer)
  3. Stuppia L, Antonucci I, Palka G, Gatta V. Use of the MLPA Assay in the Molecular Diagnosis of Gene Copy Number Alterations in Human Genetic Diseases. International Journal of Molecular Sciences. 2012;13(3):3245-3276. DOI: 10.3390/ijms13033245
  4. Thermo Fisher Scientific. MLPA Assays on the SeqStudio Genetic Analyzer. Application note supporting fragment-analysis execution and review logic.
  5. MRC Holland. MLPA Technique. Workflow-focused source supporting hybridization, ligation, amplification, and fragment-analysis logic.
For research purposes only, not intended for clinical diagnosis, treatment, or individual health assessments.