Tips on how to evaluate new qPCR test systems

qPCR has become the gold standard for many research and molecular diagnostic laboratories. Formal validation of assays and performance assessment of qPCR began with the MIQE guidelines, drafted by a group of opinion leaders coordinated by Professor Stephen Bustin. These guidelines specify what information about assays and test performance should be reported when submitting a scientific report for publication.

The European Commission funded the SPIDIA project to generate results and tools for the preanalytical phase of molecular diagnostics. The International Organization for Standardization (ISO) launched eight new projects within “Clinical laboratory testing and in vitro diagnostic test systems”. The National Institute of Standards and Technology (NIST) has made standard reference materials (SRMs) available for genetic analyses, and the Clinical and Laboratory Standards Institute (CLSI) has published guidelines and protocols to validate test performance. Many of these tests have been implemented for qPCR applications in the GenEx software from MultiD Analyses.

Imagine your lab has to design a new qPCR assay, prepare for a technology transfer, or qualify a new qPCR instrument or a change of real-time master mix. Sounds like weeks of hard work ahead? Not necessarily. The example from TATAA Biocenter experts below (evaluation of a new high-throughput qPCR instrument using the ValidPrime assay and calibrated human genomic DNA) shows that the evaluation can be done smoothly.

A new high-throughput qPCR instrument with integrated liquid-handling technology was evaluated using the highly characterized ValidPrime assay (TATAA Biocenter) for the quantification of human genomic DNA (hgDNA) that had been calibrated against the Human DNA Quantitation Standard (SRM 2372) from the National Institute of Standards and Technology (NIST). Performance parameters include PCR efficiency, limit of detection (LoD) and limit of quantification (LoQ), which allow simple and direct comparison with other platforms.

The test system is based on Calibrated Human genomic DNA (hgDNA, TATAA Biocenter) with a stock concentration of 188 ng/μl. The PCR mix was prepared using GrandMaster Mix (TATAA Biocenter) (Table 1).

Calibration samples were prepared by serial 1:2 dilutions with multiple replicates at each concentration (Table 2) and analyzed with the ValidPrime assay, a highly optimized assay specific for a single locus present in one copy per haploid human genome. A two-step PCR protocol was used (Table 3).
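For orientation, the expected number of target copies per reaction in such a series can be estimated from the DNA mass: ValidPrime detects one copy per haploid genome, and a haploid human genome weighs roughly 3.3 pg. The sketch below is illustrative only; the starting amount and number of dilution steps are not the values in Table 2:

```python
# Estimate expected ValidPrime target copies per reaction for a 1:2 dilution series.
# Assumes one target copy per haploid genome and ~3.3 pg DNA per haploid human
# genome; the starting amount is arbitrary (the real series is given in Table 2).
PG_PER_HAPLOID_GENOME = 3.3  # approximate pg of DNA per haploid human genome

mass_pg = 330.0  # illustrative starting amount of hgDNA per reaction, in pg
for step in range(12):
    copies = mass_pg / PG_PER_HAPLOID_GENOME
    print(f"dilution step {step:2d}: {mass_pg:8.2f} pg/reaction ≈ {copies:7.1f} target copies")
    mass_pg /= 2  # serial 1:2 dilution
```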


Fig. 1 shows the qPCR standard curve, plotted as Cq versus log10 of the concentration of the calibration samples. The spread of the replicates increases with decreasing number of target molecules.
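From the slope of such a standard curve the PCR efficiency follows as E = 10^(−1/slope) − 1, where a slope of about −3.32 corresponds to perfect doubling (100 % efficiency). A minimal sketch, with made-up calibration points in place of the actual data behind Fig. 1:

```python
import numpy as np

# Illustrative calibration data: target copies per reaction and measured Cq
# values (made up for demonstration; the real data are those plotted in Fig. 1).
copies = np.array([10000, 5000, 2500, 1250, 625, 312, 156, 78])
cq     = np.array([22.1, 23.1, 24.2, 25.2, 26.2, 27.3, 28.3, 29.4])

# Fit the standard curve: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), cq, 1)

# Perfect doubling per cycle gives slope = -1/log10(2) ≈ -3.32 and E = 100 %
efficiency = 10 ** (-1.0 / slope) - 1
print(f"slope = {slope:.3f}, PCR efficiency = {100 * efficiency:.1f} %")
```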

This is due to a phenomenon known as sampling ambiguity. As the number of target molecules per reaction volume gets low, reproducibility is compromised. For example, if the average concentration is four molecules per microliter and we analyze replicate samples, many of them will contain four molecules; due to chance, however, some samplings will contain three or perhaps only two molecules, while others will have five or even six.
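This effect is easy to reproduce in silico. The sketch below (a toy simulation, not part of the original study; the choice of 100,000 volumes and four molecules per volume is arbitrary) distributes molecules at random over equal reaction volumes and tallies how many each volume receives:

```python
import numpy as np

rng = np.random.default_rng(1)

# Physically simulate sampling: distribute molecules uniformly at random
# over many equal reaction volumes and count how many land in each one.
n_volumes = 100_000
mean_per_volume = 4
n_molecules = n_volumes * mean_per_volume

wells = rng.integers(0, n_volumes, size=n_molecules)  # volume index of each molecule
counts = np.bincount(wells, minlength=n_volumes)      # molecules per volume

for k in range(9):
    frac = np.mean(counts == k)
    print(f"{k} molecules: {100 * frac:5.1f} % of volumes")
```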

Under ideal conditions the frequency of samplings containing a given number of molecules is given by the Poisson distribution (Fig. 2).

From this it follows that, in the absence of measurement error, only about 20 % of the samplings actually contain four molecules; 20 % contain three molecules, 15 % contain two, 16 % contain five, and so on. This sampling variation is insignificant when the average number of molecules per sampling is large, but it makes a substantial contribution at low concentrations.
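These percentages follow directly from the Poisson probability mass function, P(k) = λᵏ·e^(−λ)/k! with λ = 4; a quick check:

```python
from math import exp, factorial

lam = 4  # average number of molecules per reaction volume

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of a sampling containing exactly k molecules when the mean is lam."""
    return lam**k * exp(-lam) / factorial(k)

for k in range(7):
    print(f"P({k} molecules) = {100 * poisson_pmf(k, lam):4.1f} %")
# P(2) ≈ 14.7 %, P(3) ≈ 19.5 %, P(4) ≈ 19.5 %, P(5) ≈ 15.6 %
```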

The LoD is defined as “the lowest amount of analyte in a sample that can be detected with (stated) probability, although perhaps not quantified as an exact value”. In molecular diagnostics we commonly report results and draw conclusions at a confidence level of 95 %. Using this as the criterion, we can calculate the theoretical LoD due to sampling ambiguity, i.e., the uncertainty introduced by analyzing only a subset of the total sample. The theoretical LoD is the lowest concentration at which 95 % of the samplings are positive, so that the risk of a false negative result is no more than 5 %. A sampling is negative only when it contains zero molecules, so the Poisson distribution gives e^(−λ) = 0.05, i.e., λ = ln 20 ≈ 3 molecules per reaction volume. For a real test the LoD is usually higher because of handling and measurement errors.
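A two-line check of the theoretical figure:

```python
from math import exp, log

lod = log(20)  # λ solving exp(-λ) = 0.05
print(f"theoretical LoD ≈ {lod:.2f} molecules per reaction")
print(f"fraction of negative samplings at λ = 3: {100 * exp(-3):.1f} %")
```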

Fig. 3 analyzes the data in Fig. 1 for the LoD, showing the frequency of positive calls as a function of log2 of the hgDNA concentration. Intersecting the curve at 95 % we read out an LoD of 2.95 molecules.
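Reading off the LoD amounts to interpolating the positive-call frequency at 95 % on the log2 concentration axis. A minimal sketch, with made-up call frequencies in place of the actual Fig. 3 data:

```python
import numpy as np

# Illustrative data (made up; the real values are those behind Fig. 3):
# average target molecules per reaction and the fraction of replicates
# that gave a positive call.
molecules     = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
frac_positive = np.array([0.38, 0.63, 0.86, 0.98, 1.00])

# Fig. 3 plots positive-call frequency versus log2 concentration, so
# interpolate the 95 % crossing on the log2 axis and transform back.
lod = 2 ** np.interp(0.95, frac_positive, np.log2(molecules))
print(f"empirical LoD ≈ {lod:.2f} molecules per reaction")
```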

This is virtually the theoretical limit, showing that the experimental error here is negligible compared with the sampling error.

The limit of quantification (LoQ) is defined as “that concentration of analyte leading to a signal which cannot be confounded with the blank (background) and that can be quantified”. The LoQ can be determined from the spread of replicates at each concentration of the standard curve. Criteria have to be set, and these depend on the particular test and its requirements. A common criterion is that the LoQ corresponds to the lowest concentration that gives a relative standard deviation, also known as the coefficient of variation (CV), of no more than 35 % on the back-calculated concentrations.
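In practice the criterion can be applied by back-calculating each replicate’s concentration from its Cq via the standard curve, computing the CV per dilution level, and taking the lowest level that stays at or below 35 %. A sketch with illustrative numbers (the curve parameters and replicate Cq values are made up):

```python
import numpy as np

# Standard curve parameters (illustrative; obtained by fitting Cq versus
# log10 concentration as for Fig. 1): Cq = slope * log10(conc) + intercept
slope, intercept = -3.45, 35.8

# Made-up replicate Cq values per nominal concentration (molecules/reaction)
replicates = {
    1000: [25.42, 25.48, 25.45, 25.51],
    100:  [28.88, 28.95, 29.02, 28.80],
    10:   [32.30, 32.55, 32.10, 32.75],
    5:    [33.10, 34.30, 32.95, 34.40],
}

loq = None
for conc in sorted(replicates, reverse=True):
    # Back-calculate each replicate's concentration from its Cq
    back = 10 ** ((np.array(replicates[conc]) - intercept) / slope)
    cv = np.std(back, ddof=1) / np.mean(back)  # coefficient of variation
    print(f"nominal {conc:5d}: CV = {100 * cv:5.1f} %")
    if cv <= 0.35:
        loq = conc  # lowest concentration so far meeting the criterion
print(f"LoQ = {loq} molecules per reaction (CV ≤ 35 %)")
```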