Functional validation of a qPCR instrument

Jan Hellemans - Nov 21, 2017

There is a saying: “quantitative PCR is easy to perform, but hard to do right”. With high-quality instruments, robust reagents and pre-designed (pre-validated) assays, even novice users can easily generate qPCR data. The challenge lies in ensuring that the qPCR data accurately reflect the quantity you are interested in. Many variables may negatively impact results: use of non-validated reference genes, non-specific assays, trace amounts of genomic DNA that may be co-amplified, non-calibrated pipets, or instruments with excessive well-to-well variation. In this blog post, I will describe a method to assess instrument-related measurement error.

Both the optics and the temperature control of a qPCR instrument can affect the outcome. In a quality-controlled environment, it is common to have a cycler’s temperature tested with calibrated temperature probes, either as a service or by investing in such a monitoring system. This comes at a relatively high cost and ignores the fact that other aspects may also affect the quality of the data generated by a qPCR instrument. That is why Biogazelle uses a functional validation test that is very affordable and can easily be run in any lab without the need for specific equipment or extensive training. We call this “run homogeneity” testing because it is based on assessing the homogeneity (or the lack thereof) of the results generated by your qPCR instrument.

The principle of run homogeneity testing is straightforward. A single qPCR mix containing all the required elements for a qPCR reaction (master mix, template, primers (and probe)) is prepared and distributed across an entire plate, chip, rotor, or array, collectively called a ‘run’ according to RDML nomenclature. I advise using standard conditions (same mix, same volume), a robust assay, and an assay-sample combination that yields Cq values around 25. Very low Cq values are probably not representative, and very high Cq values suffer from Poisson noise: with only a handful of template copies per well, random sampling variation in copy number inflates the Cq spread and may be incorrectly interpreted as suboptimal instrument performance. Such a test can be set up and analyzed in less than half a day of work and at a cost of less than 100 EUR.
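To make the Poisson argument concrete, here is a minimal R sketch (R because the analysis tool mentioned below is R-based) that simulates the Cq spread caused by copy-number sampling alone. The helper name and copy numbers are illustrative assumptions, and the simulation assumes perfect doubling efficiency.

```r
# Illustration only (assumed setup, not part of the Biogazelle tool):
# simulate the Cq spread caused purely by Poisson sampling of template
# copies, assuming perfect doubling so that a well with n copies shows
# Cq = Cq_ref - log2(n / mean_copies) relative to the plate average.
set.seed(42)

poisson_cq_sd <- function(mean_copies, n_wells = 384) {
  copies <- rpois(n_wells, lambda = mean_copies)
  copies <- copies[copies > 0]            # wells with 0 copies yield no Cq
  cq_shift <- -log2(copies / mean_copies)
  sd(cq_shift)
}

poisson_cq_sd(10000)  # ~0.014 cycles: negligible at comfortable Cq values
poisson_cq_sd(10)     # ~0.45 cycles: Poisson noise alone dominates at high Cq
```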

I encourage you to run this functional validation test when acquiring a new instrument, after instrument maintenance, in case of concerns about instrument performance (e.g. when issues are observed) and on a regular basis. Regular testing not only serves as a preventive measure but also allows trend analysis: is there a gradual decline in instrument performance that would otherwise have remained undetected?

Analysis of a run homogeneity test should include at least an evaluation of Cq values, but may also include Tm values (for DNA-binding dye assays) or end-point fluorescence values. We typically combine an objective, statistical data analysis with a visual inspection of a plate heat map to screen for spatial effects. Relevant statistics are the standard deviation of the Cq values (representing overall variability), the number of missing values (which a variance analysis alone would overlook) and the number of data points within a given Cq interval. The data can be processed in a spreadsheet, but the plate heat map may be more challenging for some users. For the quarterly validation of our four 384-well qPCR instruments, I developed an R tool that automates the data analysis. The tool is available as a free web service (offered as-is without support; monthly usage limits apply).
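As an illustration of what such an analysis involves, the statistics and heat map could be computed in R along the following lines. This is a minimal sketch, not the actual web tool: the function names are hypothetical, the 384-well row-major layout is an assumption, and so is centering the Cq interval on the plate median.

```r
# Minimal sketch of the statistics described above (not the actual web
# tool). Assumes `cq` is a numeric vector of 384 Cq values in row-major
# plate order, with NA for missing reactions.
run_homogeneity_stats <- function(cq, interval = 0.5) {
  list(
    sd_cq       = sd(cq, na.rm = TRUE),   # overall variability
    n_missing   = sum(is.na(cq)),         # invisible to a variance analysis
    frac_within = mean(abs(cq - median(cq, na.rm = TRUE)) <= interval / 2,
                       na.rm = TRUE)      # fraction inside the Cq interval
  )
}

# Plate heat map (rows A-P, columns 1-24) to screen for spatial effects
plot_plate_heatmap <- function(cq, n_rows = 16, n_cols = 24) {
  plate <- matrix(cq, nrow = n_rows, ncol = n_cols, byrow = TRUE)
  image(t(plate)[, n_rows:1], axes = FALSE, main = "Cq plate heat map")
}
```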

The interpretation of results may be relative (among instruments or as a function of time, i.e. trend analysis) or absolute (based on custom requirements). As always, universal absolute requirements cannot be given because they are determined by your instrument and reagent choice, the number of PCR replicates in a real study and, more importantly, by your experiment-specific requirements (screening for big differences versus diagnosing small changes). I would personally be quite strict on missing values (zero tolerance if repeated tests indicate an instrument issue rather than a mere pipetting mistake), aim for a standard deviation below 0.2 to 0.25, and require 95% and 100% of replicates to fall within 0.5 and 1 cycle, respectively. The smaller Cq interval is used because it matches our ongoing PCR replicate quality control; the larger interval because no replicate should deviate to such a degree.
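For concreteness, these acceptance criteria could be encoded on top of the hypothetical run_homogeneity_stats() helper sketched above; the thresholds simply mirror the ones stated in this paragraph.

```r
# Sketch of the acceptance criteria above, building on the hypothetical
# run_homogeneity_stats() helper: zero missing values, SD below 0.25,
# and 95% / 100% of reactions within 0.5 / 1 cycle, respectively.
passes_validation <- function(cq) {
  s05 <- run_homogeneity_stats(cq, interval = 0.5)
  s10 <- run_homogeneity_stats(cq, interval = 1.0)
  s05$n_missing == 0 &&
    s05$sd_cq < 0.25 &&
    s05$frac_within >= 0.95 &&
    s10$frac_within == 1        # no replicate may deviate this much
}
```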

The figures show the outcome of a successful and a failed functional qPCR instrument validation test.

[Figure: successful functional qPCR instrument validation test]
[Figure: failed functional qPCR instrument validation test]


Topics: quality control, testing, homogeneity

Jan Hellemans

Jan Hellemans is co-founder and CEO of Biogazelle. He obtained a Master of Science in Biotechnology (2000) and a PhD in Medical Genetics (2007). He is the author of multiple peer-reviewed papers and developer of the qBase software. As an expert in qPCR, he has been teaching qPCR courses since 2008.
