The Luminex assay is a versatile and widely used platform for large-scale analyte quantification, such as the quantification of cytokines (Dupont et al., 2005), antibodies (Lachmann et al., 2013), disease biomarkers (Lucas et al., 2017), specific DNA sequences (Dunbar, 2006), and protein-protein interactions (Blazer et al., 2010). This technique increases the scale of the more commonly used enzyme-linked immunosorbent assay (ELISA) by concurrently measuring multiple target analytes (Fulton et al., 1997). Rather than capturing the target antigen on a coated plate and using a colorimetric substrate to quantify its presence, the Luminex assay captures the target on coated beads in suspension and uses fluorescence as the quantifiable reporter (Kellar and Iannone, 2002; Fulton et al., 1997; Dupont et al., 2005). This simultaneous detection of multiple analytes substantially reduces the required sample volume and hands-on experimental time and permits large-scale quantification of multiple targets (Kellar and Iannone, 2002; Lachmann et al., 2013). Despite its strengths, the assay can suffer losses in specificity and sensitivity introduced by various factors, including antibody cross-reactivity, manufacturing variability, sample autofluorescence, and instrument drift (Rountree et al., 2024; Elshal and McCoy, 2006; Kellar and Iannone, 2002). Both experimental and analytical techniques can be employed to minimize this error (Curion and Theis, 2024; Eckels et al., 2013). Here, we propose a comprehensive normalization method to analytically reduce the error introduced by background fluorescence and machine drift.
Antibody cross-reactivity, sample autofluorescence, bead fluorescence, and machine noise all contribute to the measured background fluorescence of each sample, thereby compromising the specificity of analyses (Rountree et al., 2024; Kellar and Iannone, 2002). Several experimental controls can be used to analytically correct for this background fluorescence. The most commonly used control is a bead coated with a blocking protein, often BSA, that is included in the bead mixture and referred to as the blank bead. Additional controls include a blank well containing the bead mixture without sample serum and a negative well containing sample serum without the bead mixture. Typically, the background fluorescence for each experimental sample is corrected by subtracting the measured fluorescence of the blank bead from the measured fluorescence of the target-coated bead (Rausch et al., 2016; Dunbar, 2006; Kellar and Iannone, 2002). However, blank beads exhibit a high propensity for non-specific binding, leading to overestimation of background fluorescence (Shadbahr et al., 2023; Cox et al., 2012). This overcorrection can decrease the sensitivity of the assay, as it artificially reduces the true signal of the target analytes. Therefore, we propose correcting for background fluorescence using both the blank bead and a negative control bead, coated with a target protein for which all patients are known to be seronegative, combined through an orthogonal regression, a regression method that accounts for error in both variables and thereby prevents overcorrection.
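As a rough sketch of this idea, the snippet below fits an orthogonal (total least squares) regression of negative-control-bead fluorescence against blank-bead fluorescence across samples using scipy.odr, then estimates each sample's background from the fitted line rather than from the raw blank-bead signal. The array names (blank_mfi, negctrl_mfi, target_mfi) are hypothetical placeholders, and the exact correction in our method may differ; this illustrates only the general technique.

```python
import numpy as np
from scipy import odr

def orthogonal_background_fit(blank_mfi, negctrl_mfi):
    """Fit a line to (blank, negative-control) fluorescence pairs with
    orthogonal distance regression, which allows error in both variables."""
    model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
    data = odr.RealData(blank_mfi, negctrl_mfi)
    fit = odr.ODR(data, model, beta0=[0.0, 1.0]).run()
    return fit.beta  # (intercept, slope)

# Hypothetical per-sample median fluorescence intensities (MFI)
blank_mfi   = np.array([120.0, 95.0, 150.0, 110.0])    # blank (BSA) bead
negctrl_mfi = np.array([80.0, 70.0, 100.0, 78.0])      # seronegative-target bead
target_mfi  = np.array([900.0, 450.0, 1200.0, 300.0])  # target-coated bead

intercept, slope = orthogonal_background_fit(blank_mfi, negctrl_mfi)
# Estimating background from the fitted line tempers the blank bead's
# tendency to overestimate background through non-specific binding.
background = intercept + slope * blank_mfi
corrected_mfi = target_mfi - background
```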
Another major consideration with the Luminex assay is the technological variation that occurs between runs. Specifically, gradual and often subtle changes in the machine's performance can result from environmental changes, mechanical wear, and calibration inconsistencies (Rountree et al., 2024; Dunbar, 2006). Correcting for machine drift is essential for reliably executing longitudinal studies and multi-plate analyses and for increasing the accuracy and reproducibility of the data. To address the error introduced by machine drift, a standard curve is run on each plate. Commonly, the linear range of the standard curve is used to fit a linear regression model that calculates a correction for combining data from multiple plates (Rausch et al., 2016; Ellington et al., 2010). However, these linear models can oversimplify the data, as a true calibration curve is often non-linear (Cox et al., 2012; Kingsmore, 2006). Therefore, we propose using a generalized additive model (GAM), a flexible regression technique that captures non-linear relationships, rather than a linear model to fit a smooth curve to correct for machine drift.
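A minimal sketch of this step, assuming the pygam package and hypothetical standard-curve data, is shown below. Here the plate normalization factor (PNF) is illustratively defined as the mean ratio of a reference plate's fitted curve to the current plate's fitted curve over the standard range; the exact formulation in our method may differ.

```python
import numpy as np
from pygam import LinearGAM, s

def fit_standard_curve(log_conc, mfi):
    """Fit a smooth, potentially non-linear standard curve with a GAM.
    Few spline basis functions are used because standards are few."""
    return LinearGAM(s(0, n_splines=5)).fit(log_conc.reshape(-1, 1), mfi)

# Hypothetical standard-curve data for a reference plate and a test plate
log_conc = np.log10(np.array([1.0, 10.0, 100.0, 1000.0, 10000.0]))
ref_mfi  = np.array([50.0, 300.0, 1800.0, 9000.0, 21000.0])
test_mfi = np.array([60.0, 340.0, 2100.0, 10500.0, 24000.0])

ref_gam = fit_standard_curve(log_conc, ref_mfi)
test_gam = fit_standard_curve(log_conc, test_mfi)

# Illustrative PNF: average ratio of the reference curve to this plate's
# curve, evaluated on a fine grid across the standard range.
grid = np.linspace(log_conc.min(), log_conc.max(), 100).reshape(-1, 1)
pnf = np.mean(ref_gam.predict(grid) / test_gam.predict(grid))
normalized = test_mfi * pnf  # apply to any MFI measured on the test plate
```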
While some target antigens measured in a Luminex assay will resemble a normal distribution, some targets, such as the detection of antibodies, are binary and exhibit a bimodal distribution. For binary measures, there is often a need to determine the fluorescence level that separates the positive mode from the negative mode (Japp et al., 2021). The data are often simply split in half to assign positive or negative status, but this type of split fails to incorporate the complexity of human sampling, where it is unlikely that a specific antibody target is present or absent in exactly half of the population (Sharma and Jain, 2014; Japp et al., 2021). This method of splitting may result in poor accuracy, leading to decreased sensitivity and reproducibility (Shadbahr et al., 2023). Therefore, we propose using a computational clustering method, which identifies natural groupings within the data, to determine a cut point based on the true distribution of the data.
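As one concrete possibility, the snippet below uses a two-component Gaussian mixture model from scikit-learn to place the cut point between the seronegative and seropositive populations on the log-MFI scale. The variable names and simulated data are hypothetical, and other clustering algorithms (e.g., k-means) could be substituted; this is a sketch of the general approach rather than our exact procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bimodal_cutpoint(log_mfi):
    """Fit a two-component Gaussian mixture and return the value between
    the component means at which posterior membership switches."""
    X = np.asarray(log_mfi).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    lo, hi = np.sort(gmm.means_.ravel())
    grid = np.linspace(lo, hi, 1000).reshape(-1, 1)
    post = gmm.predict_proba(grid)
    idx = np.argmin(np.abs(post[:, 0] - post[:, 1]))
    return float(grid[idx, 0])

# Simulated log-transformed MFI with two underlying populations
rng = np.random.default_rng(0)
log_mfi = np.concatenate([rng.normal(2.0, 0.3, 300),   # seronegative mode
                          rng.normal(4.0, 0.4, 100)])  # seropositive mode
cut = bimodal_cutpoint(log_mfi)
seropositive = log_mfi > cut
```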
Given the prevalence of such issues when using the Luminex assay, it is essential to devise methods that address these limitations in order to fully realize the strengths of this otherwise powerful assay. We propose a novel three-step solution to address the current shortcomings in normalization: (1) using an orthogonal regression on the measured fluorescence of the blank and negative control beads to correct for background fluorescence, (2) using a GAM to calculate a plate normalization factor (PNF) to correct for machine drift, and (3) using a clustering algorithm to determine the division between the two populations in a binary assay.