I have searched the web for a detailed explanation of how to perform such a validation experiment, but unfortunately couldn't find a satisfactory one. I came across the following sources:
Thermo Scientific Tech Tip #58: IMHO a well-written description, but there are still some issues
- Clear in describing terms, with straightforward calculations
- Troubleshooting of the results is outlined
- Not designed to be performed in one 96-well plate ELISA
- No mention of acceptance criteria for either validation test
- No mention of
Quansys Biosciences: a practical design to implement
- Both spike-and-recovery and linearity-of-dilution experiments can be performed in one 96-well plate ELISA
- Justified introduction of the endogenous sample, which is diluted the same amount as the spiked sample
- Addition of
- Mention of acceptance criteria for both types of experiments
- No mention of troubleshooting
- A somewhat messy plate design
R&D spike-and-recovery protocol: brief but useful
- Clear on design
- Mention of acceptance criteria, at least for spike recovery (80-120%)
- Mention of
- Hint about the possibility of diluting the neat sample until it gives a reading
- Only one sample was used for assessment, rather than averaging over at least 3 samples
- No mention of any criteria for linearity of dilution
- Q1: How can we combine the two experiments into one plate to spare samples and materials?
- Q2: Can we dilute sample with assay diluent instead of sample diluent?
- Q3: If sample material is limited, can we skip the neat sample and add a 1:2 dilution instead to spare material?
- Q4: In preparing spikes in Thermo Scientific Tech Tip #58, why were 10µl of spike stock solution added to 50µl of sample? What governs this addition; is there a ratio to stick to, or is it arbitrary? Can we add 90µl sample and 10µl stock solution?
Please feel free to expand on these questions, and of course answers are more than welcome to achieve the following aim:
The best design for both validation tests (i.e., spike-and-recovery and linearity-of-dilution) in one 96-well ELISA plate, using as little sample material as possible, with calculations explained based on averages and a clear troubleshooting plan.
PS: I don't have enough reputation to add important tags to improve searchability, e.g.:
- There are a lot of recipes out there for extracting protein from human tissues, but they all boil down to one thing: preserving the tissue proteins as much as possible while obtaining a reasonable yield for downstream applications, using an extraction buffer, additional techniques, and protease inhibitors
- ELISA is one of those downstream applications you might be interested in, but one important question remains: is the sample matrix you have obtained valid for the ELISA assay? Keep in mind, your sample is not serum; it is a mixture of tissue, extraction buffer, protease inhibitors, and other things as well
- Two well-known assays are usually performed to address this question: spike-and-recovery (SAR) and linearity-of-dilution (LOD). Both are specific to the analyte you want to measure in your samples (e.g., cytokines, factors, Igs), and also to the tissue and to the assay kit
- Often you want to perform these two assays with minimal resources: less time, less kit material, and, most importantly, less sample material. This can be quite challenging and depends on many issues that are out of the scope of this post
- If you are interested in performing these two assays using only one 96-well ELISA plate to examine the validity of different sample matrices and sample buffers (lysis buffer, sometimes called extraction buffer) for a certain analyte, say X, then the scheme below is for you:
- In this scheme, recovery is examined in 3 types of solutions: assay buffer, sample buffer, and samples. These differ in their complexity. Typically, the assay buffer is optimized to detect the standard protein provided with the commercial kit
- The idea of spike-and-recovery is that you add a certain amount from the standard stock solution into the wells containing the solution to be tested (e.g., sample buffer or samples) and measure them, to see whether you can recover that amount again, and how much of it you recover in %. If, for any reason, you cannot recover that amount compared with a control well, where the same amount was added to assay buffer, then something in the test solution is interfering with the assay. The specific amount you add into the wells is called the spike, and it should be the same across all tested wells (be consistent). Make sure that when the spike is diluted in the well, its final concentration can still be measured by the assay and lies inside your standard curve. E.g., if your standard curve runs from 4 to 500 pg/mL and each well has 100µl of sample, you may add whatever spike volume still gives a reading inside the curve: you could add 10µl from the highest stock (500 pg/mL), ending up with roughly a 1/10 dilution, i.e., about 50 pg/mL, which is fine as it lies inside the 4-500 pg/mL range. Alternatively, you can add 50µl sample + 50µl spike (500 pg/mL) for a 1/2 dilution, i.e., 250 pg/mL, which is also fine. So it is up to you to choose, keeping in mind that the final concentration should lie inside your standard curve
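The spike arithmetic above can be scripted as a quick sanity check. This is an illustrative sketch: the helper names are made up, and the 4-500 pg/mL range is just the example range used above, not a kit value.

```python
# Hypothetical spike-dilution check (illustrative, not kit-specific).

def spiked_concentration(spike_vol_ul, spike_conc, sample_vol_ul):
    """Final spike concentration after dilution into the sample well."""
    return spike_conc * spike_vol_ul / (spike_vol_ul + sample_vol_ul)

def inside_curve(conc, low=4.0, high=500.0):
    """True if the concentration lies inside the standard-curve range."""
    return low <= conc <= high

# 10 µl of 500 pg/mL stock into 100 µl sample is strictly a 1/11 dilution
# (~45 pg/mL), close to the 1/10 approximation used in the text.
c1 = spiked_concentration(10, 500.0, 100)
# 50 µl sample + 50 µl spike is a 1/2 dilution, 250 pg/mL.
c2 = spiked_concentration(50, 500.0, 50)
print(round(c1, 1), inside_curve(c1))  # 45.5 True
print(c2, inside_curve(c2))            # 250.0 True
```

Both choices land inside the curve, so either spiking scheme is acceptable here.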
- For the linearity-of-dilution, it is obvious that you need a high-concentration solution from which you can make a two-fold (or whatever fold you like) serial dilution whose readings still lie inside the standard curve. The best candidates for this are the high spiked wells. The LOD tells you about the effect of different dilutions on the precision of the assay
- From the LOD you will get at the end an average percentage from at least 3 samples (a calculation example will be provided later). I added a fourth sample, but this fourth one can be a different sample type, say samples from tissue culture, so it is up to you to re-design this scheme and be innovative
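A minimal sketch of the LOD calculation described above, with illustrative numbers and hypothetical helper names: each reading is back-calculated by its dilution factor, expressed as a percentage of the first back-calculated value, and the percentages can then be averaged over several samples.

```python
# Sketch of a linearity-of-dilution (LOD) calculation; numbers and helper
# names are illustrative, not from any kit protocol.

def percent_linearity(measured, dilution_factors):
    """measured[i] is the concentration read off the standard curve at
    dilution 1/dilution_factors[i]; each back-calculated value
    (reading x dilution factor) is expressed as % of the first one."""
    back = [m * d for m, d in zip(measured, dilution_factors)]
    return [100.0 * b / back[0] for b in back]

def average_linearity(samples, dilution_factors):
    """Average the % linearity at each dilution over several samples."""
    per_sample = [percent_linearity(s, dilution_factors) for s in samples]
    return [sum(col) / len(col) for col in zip(*per_sample)]

factors = [2, 4, 8]            # two-fold series
s1 = [400.0, 210.0, 95.0]      # pg/mL read at 1/2, 1/4, 1/8
print([round(p) for p in percent_linearity(s1, factors)])  # [100, 105, 95]
```

Perfect linearity would give 100% at every dilution; acceptance windows (e.g., 80-120%) are up to the criteria you adopt.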
- Choosing the best dilution is not an easy task; it sometimes involves compromises. For example, if you have a harsh sample buffer, you may not get good recovery at 1/2, 1/4, or 1/8 dilutions of the assay buffer, whereas recovery is nearly good at 1/16; that dilution factor is then most probably the one you should go for in your assay. On the other hand, 1/16 might not be detectable by your ELISA, which is limited by its sensitivity. Here, the E wells can give you a clue as to whether you detect anything in these unspiked samples; that is why I recommend you include in this scheme at least one sample with a high expected value of the analyte in question (this is not always easy to predict), here sample 3. One more thing: if you get good recovery at 1/16, DO NOT go to 1/32 or 1/64, since with these higher dilutions you are reducing the chance of detecting the analyte, which is limited by your assay's sensitivity
- In most cases, you will see that the dilution factor giving good recovery in the sample buffer (columns 3 and 4) coincides with that of the spiked samples (columns 5, 7, and 9). This gives you assurance that you are heading in the right direction. If sample 4 is extracted with a different sample buffer, then clearly you should not expect to end up with the same dilution factor (please, be reasonable!)
CAMPATH-1H is a humanised monoclonal antibody against the CD52 antigen which is being developed for treatment of chronic lymphocytic leukaemia (CLL), autoimmune disease and prevention of transplant rejection. Measurement of antibody serum levels is important for optimising dose regimens but difficult owing to the low concentration compared with normal human IgG.
After consideration of various methods, a suitable assay was developed based on indirect immunofluorescence. Test samples were incubated with target cells (HUT-78, a human T cell line) and the CAMPATH-1H was detected by binding of a fluorescent-labelled anti-human Ig using a flow cytometer. Robustness of the assay was demonstrated under a range of experimental conditions. Because of the low affinity of CAMPATH-1H, only a weak signal was seen at low concentrations. The limit of detection was 0.15 μg/ml and the limit of quantitation was 0.25 μg/ml. Since serum samples were diluted at least 1:2, the lowest concentration which can be measured in patient serum was 0.5 μg/ml. The overall precision (coefficient of variation) was ±13% and the overall accuracy (bias) was +9%. There was a low incidence of false-positive results (<2%) in normal or pre-treatment patient serum. Quantitative recovery was obtained from serum samples spiked with CAMPATH-1H and stored under a variety of conditions, including being treated at 56 °C for 30 min and frozen and thawed up to four times.
This validated assay is suitable for the measurement of CAMPATH-1H levels in clinical trials and the same principles may be applied to any other cell-binding monoclonal antibody.
The outbreak of the novel coronavirus disease 2019 (COVID-19) occurred in late 2019 and has now spread worldwide, resulting in a global pandemic. COVID-19 is caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which has infected more than 164 million people and caused more than 3.4 million deaths. The high morbidity and mortality of COVID-19 far exceed those of seasonal influenza and other diseases [2,3,4]. At present, SARS-CoV-2 is still spreading globally, causing long-term effects on human health and normal activities. Since no specific medicine or treatment for COVID-19 is available, accurate diagnosis and a series of prevention and control measures have become the most effective means of preventing its spread.
Current diagnostic approaches mainly fall into two categories: nucleic acid testing based on RT-PCR technology and antibody testing based on immunochromatography [6,7,8,9,10,11]. The RNA test for SARS-CoV-2, pioneered by the Centers for Disease Control (CDC), has been deemed the “gold standard” for clinical diagnosis. However, drawbacks such as the long hours needed to perform it and the need for specialized reagents, equipment, and trained operators restrict its application on a large scale [12, 13]. Besides RT-PCR, two isothermal techniques have been developed for the rapid and sensitive detection of viral RNA: loop-mediated isothermal amplification (LAMP) and recombinase polymerase amplification (RPA) [14,15,16,17,18]. However, previous studies showed that the positive rate of viral RNA testing is only 30–60%, which suggests a high false-negative rate of nucleic acid detection for COVID-19 [12, 19,20,21]. Several limitations require further research, such as differences among respiratory samples and inappropriate sample collection, transfer, and processing [22, 23]. In addition, degradation of purified RNA, the presence of RT-PCR inhibitors, or genomic mutations may cause false-negative results [22, 24, 25]. Immunoassays have also been used to detect the antibodies created by the body in response to SARS-CoV-2 infection [12, 22, 26]. Detection of microbe-specific IgM and IgG antibodies in circulating blood is a traditional method to identify whether a person has been infected with a pathogen. In COVID-19, IgM and IgG antibodies can arise nearly simultaneously in serum within 2 to 3 weeks after the onset of illness [9, 28, 29]. Several methods for detecting IgG and IgM have been developed for the rapid diagnosis of COVID-19, such as ELISA and magnetic chemiluminescence immunoassay [30,31,32,33,34]. Liu et al. used two ELISA kits based on the SARS-CoV-2 spike and nucleocapsid proteins to detect IgM and IgG antibodies and evaluated their diagnostic feasibility. Chen et al. reported a rapid and sensitive lateral flow immunoassay that used lanthanide-doped polystyrene nanoparticles to detect anti-SARS-CoV-2 IgG in human serum. Although antibody testing has been used in the auxiliary diagnosis of COVID-19, some tests may cross-react with other coronaviruses, such as those that cause the common cold. Therefore, the development of new techniques with improved diagnostic accuracy for COVID-19 is in high demand. The spike (S) and nucleocapsid (N) proteins of SARS-CoV-2 are two promising antigen biomarkers for the diagnosis of COVID-19 in human blood, as they play key roles in receptor recognition, virus replication, and the immune response [38,39,40]. Many experts believe that the detection of viral protein antigens could be helpful for the diagnosis of COVID-19, in line with experience with SARS-CoV protein antigen detection [41,42,43]. Unlike nucleic acid detection with its exponential amplification, proteins cannot be directly amplified, so the detection of minute amounts of protein demands ultrasensitive techniques.
Single molecule array (Simoa) is a digital ELISA. Simoa was developed by David Walt’s group for the detection of proteins with extremely high analytical sensitivity, which can be 1000 times higher than that of traditional ELISAs. In digital ELISA, the fluorescence produced by the enzyme–substrate reaction is confined in femtoliter-sized microwells. Since each microwell can only fit one bead, the presence of a single protein molecule can be detected via fluorescence read-out. Herein, we propose a digital ELISA method to simultaneously and ultrasensitively detect S-RBD protein and N-protein via Simoa and magnetic-bead encoding technology. This work identifies the most reliable antibody pairs for detecting the two proteins through a selection process and optimizes the Simoa reagents for the highest sensitivity and dynamic range using recombinant proteins. Furthermore, the proposed assay has high potential for the diagnosis of COVID-19 and offers opportunities to monitor patients by testing S-RBD protein and N-protein in the blood.
Decreased kidney function may have a wide range of causes, such as infections, toxins, genetic disorders, metabolic dysfunction like type 2 diabetes, or autoimmune diseases. Various underlying pathological changes may result in acute kidney injury (AKI) or promote the development of chronic kidney disease (CKD). AKI is defined as a sudden reduction in the glomerular filtration rate (GFR) and renal output, which results in the accumulation of nitrogenous waste, whereas CKD is characterized by structural or functional abnormalities of the kidney with implications for health over a period of at least three months. Albuminuria, blood urea nitrogen (BUN) and estimated glomerular filtration rate (eGFR) are the classical guideline-endorsed biomarkers for the classification of CKD and are strong predictors of renal disease progression and morbidity in humans, but also in rodents. Currently, huge effort is being undertaken to define new biomarkers or biomarker panels for the prognosis and diagnosis of kidney disease, as well as for a deeper understanding of renal pathology and the identification of potential therapeutic targets.
Endostatin could be one of these promising markers, as it has gained increasing interest over recent years. Of note, endostatin was already discovered in 1997 by O’Reilly and colleagues; it is a 20 kDa inhibitor of endothelial cell proliferation in vitro and a potent inhibitor of angiogenesis and tumor growth in vivo. It is the C-terminal fragment of type XVIII collagen and emerges mainly during extracellular matrix remodeling. Recent studies have shown that altered expression [6–8] and increased circulating endostatin concentrations [9,10] are associated with impaired kidney function. Importantly, endostatin could also be used as a predictive marker for AKI in critically ill patients.
Vav-BCL2 transgenic mice (BCL2 tg) are prone to develop follicular lymphoma with age and kidney disease, i.e. glomerulonephritis of an autoimmune type. Previously, a synergism between the ETV6/RUNX1 fusion product, resulting from the t(12;21) translocation in humans, and the anti-apoptotic molecule BCL2 was identified using a new mouse model. After intercrossing ETV6/RUNX1 and BCL2 single transgenic mice, the resulting double transgenic animals (E/R tg BCL2 tg) harbored elevated B cell numbers and autoreactive immunoglobulin titers compared with BCL2 tg mice. This resulted in profound deposition of immune complexes in glomeruli and accelerated development of immune complex glomerulonephritis.
To further identify the biological function of endostatin in different mouse models of kidney pathogenesis, there is a need to measure endostatin concentrations in serum and plasma reliably and reproducibly. As no high-quality quantification tool has been available, this study presents the development, validation and application of an enzyme-linked immunosorbent assay (ELISA) for measuring rodent endostatin levels, which is also available commercially. The biological relevance of endostatin as an early biomarker could be shown by determining endostatin concentrations in BCL2 transgenic and ETV6/RUNX1-BCL2 double transgenic mice with impaired kidney function.
Materials and Methods
Plant Material, Reagents and Chemicals
A total of 38 peanut genotypes (see Supplementary Table 1) were used while improving and standardizing the ELISA protocol. The seeds of these genotypes were stored after harvest at 10 °C and taken out only when needed for an experiment. ELISA components such as peanut allergen standards, monoclonal antibodies (MAbs; primary and secondary biotinylated antibodies), and enzyme conjugates were purchased from INDOOR Biotechnologies Inc. (Charlottesville, VA, USA) for the estimation of five important peanut allergens, namely Ara h 1, Ara h 2, Ara h 3, Ara h 6, and Ara h 8. These include monoclonal antibody 2C12 (clone 2C12 A11 A3) and biotinylated antibody 2F7 (clone 2F7 C12 D10) for Ara h 1; monoclonal antibody 1E8 and biotinylated monoclonal antibody 4G9 for Ara h 3; monoclonal antibody 3B8 (clone 3B8 B5) and biotinylated antibody 3E12 (clone 3E12 C4 B3) for Ara h 6; monoclonal antibody 1C4 (clone 1C4 G4 C8) and a polyclonal rabbit anti-Ara h 2 antibody (AH2) for Ara h 2; and capture antibody 4G6 and a polyclonal antiserum raised in rabbit for Ara h 8, also purchased from INDOOR Biotechnologies Inc. Conjugated goat anti-rabbit IgG was purchased from Jacksons Laboratories (Bar Harbor, USA; Cat No. 111-036-046). ABTS™ (Cat No. 11204521001) and all other reagents and chemicals were purchased from Sigma-Aldrich Co. (Oakville, ON, Canada).
Allergen Standards and Enzyme Solution Preparation
The allergen standards were purchased from INDOOR Biotechnologies Inc. (Charlottesville, VA, USA). These standards were isolated from lightly roasted peanut flour (Runner cultivar) and purified by affinity chromatography. The purified standards were supplied in phosphate buffer, received on ice and stored at 4 °C until further use. Purified natural Ara h 1 (Lot 39285, conc. 20,000 ng/ml), Ara h 2 (Lot 39158, conc. 2,500 ng/ml), Ara h 3 (Lot 39051, conc. 1,250 ng/ml), Ara h 6 (Lot 39198, conc. 1,000 ng/ml), and Ara h 8 (Lot 39033, conc. 2,500 ng/ml) were used as allergen standards for each assay. The standard concentrations ranged from 2,000 to 4 ng/ml for Ara h 1, 250 to 0.5 ng/ml for Ara h 2, 125 to 0.24 ng/ml for Ara h 3, 100 to 0.2 ng/ml for Ara h 6, and 250 to 0.49 ng/ml for Ara h 8. ABTS is a water-soluble peroxidase substrate that yields a measurable green end product for use in ELISA methods. ABTS™ (Sigma-Aldrich Cat No. 11204521001) was dissolved to 1 mM in ABTS buffer, which contains 0.1 M anhydrous citric acid and 0.2 M dibasic sodium phosphate heptahydrate (Na2HPO4·7H2O): 274 mg of ABTS™ was dissolved in 500 ml of ABTS buffer and stored in an amber bottle at 4 °C until use.
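As an arithmetic check of the substrate preparation above, 274 mg of ABTS in 500 ml should indeed come out at roughly 1 mM. The molecular weight used below (~548.7 g/mol, ABTS diammonium salt) is an assumed value, not stated in the text.

```python
# Verify that 274 mg ABTS in 500 ml gives ~1 mM.
# MW of 548.7 g/mol (ABTS diammonium salt) is an assumption.

def molarity_mM(mass_mg, mw_g_per_mol, volume_ml):
    grams = mass_mg / 1000.0
    liters = volume_ml / 1000.0
    return grams / mw_g_per_mol / liters * 1000.0  # mol/L -> mmol/L

print(round(molarity_mM(274, 548.7, 500), 2))  # 1.0
```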
Development of Protocol for Allergen Estimation
The ELISA is a double-antibody sandwich format based on the specific interaction between antigen and antibody: the peanut allergen proteins are sandwiched between a capture antibody and a biotinylated detection antibody conjugated with streptavidin peroxidase (Figure 1). ABTS™ was used for color development. The color intensity depends on the concentration of allergen protein present in the specific sample and was measured with an iMark microplate reader (Bio-Rad) at 405 nm. Microplate Manager (MPM) software was used to analyze the optical density values of standards and samples. A schematic representation shows the key steps involved in developing the ELISA protocol for estimating allergens in peanut seeds (Figure 1).
Figure 1. Systematic diagram showing the protocol for allergen estimation in peanut seed through sandwich ELISA.
Grinding of Seeds, Homogenization and Purification
Sample extracts were prepared by grinding two grams of peanut seeds into a fine powder and dissolving it in 40 ml of PBS-T (0.05% Tween in phosphate-buffered saline, pH 7.4) containing 1 M NaCl in 50 ml Falcon tubes (Sarstedt No. 55.476). After 2 h of gentle stirring on a rocking platform at room temperature, the aqueous phase was collected by centrifugation at 2,500 rpm at 4 °C for 20 min. The aqueous phase was subsequently centrifuged at 3,500 rpm for 10 min at room temperature to remove residual traces and insoluble particles. Protein extracts were stored until use.
Dilution Factor for Different Peanut Allergens
Dilution of a sample extract is critical for an ELISA, as it determines whether antibody and target antigen concentrations fall within the detection range. The concentration of a specific allergen in a sample was estimated by multiplying the concentration found from the graph by the dilution factor (36, 37). By using different dilutions, we established the detection range of target antigen and antibody concentrations and standardized the dilution factors for detecting each allergen protein present in peanut seeds. Each sample was tested at three different dilutions. The major allergen proteins Ara h 1 (1/1,000, 1/2,000, and 1/4,000), Ara h 2 (1/5,000, 1/10,000, and 1/20,000), Ara h 3 (1/5,000, 1/10,000, and 1/20,000), and Ara h 6 (1/40,000, 1/80,000, and 1/160,000) were diluted over a high range compared with the minor allergen protein Ara h 8, which was diluted over a low range (1/10, 1/20, and 1/40). The optimal concentration of HRP-conjugated streptavidin was determined in the same way.
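The back-calculation described above (concentration from the curve × dilution factor, keeping only readings inside the standard range) can be sketched as follows. The helper and the sample readings are illustrative; the ranges are those quoted for each allergen standard.

```python
# Illustrative back-calculation helper; readings are made-up numbers.
# Ranges (ng/ml) are the standard ranges quoted for each allergen.
RANGES = {"Ara h 1": (4, 2000), "Ara h 2": (0.5, 250),
          "Ara h 3": (0.24, 125), "Ara h 6": (0.2, 100),
          "Ara h 8": (0.49, 250)}

def back_calculate(allergen, readings):
    """readings: {dilution_factor: concentration from the curve (ng/ml)}.
    Returns dilution-corrected values for readings inside the valid range."""
    low, high = RANGES[allergen]
    return {df: c * df for df, c in readings.items() if low <= c <= high}

# Ara h 2 sample read at three dilutions; the 1/5,000 well is above range.
result = back_calculate("Ara h 2", {5000: 300.0, 10000: 150.0, 20000: 74.0})
print(result)  # {10000: 1500000.0, 20000: 1480000.0}
```

The two in-range dilutions agree within a few percent, which is the consistency one looks for when choosing dilution factors.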
Steps Involved in ELISA
Antibody coating and blocking
Polystyrene microtiter plates (NUNC Maxisorp, Roskilde, Denmark) were coated with mAb at 10 μL/10 ml in 50 mM carbonate buffer (100 μL/well). After overnight incubation at 4 °C, the coated wells were washed three times with washing buffer (phosphate buffer containing 0.05% Tween 20), blocked with 1% BSA for 30 min at room temperature, and washed three more times with washing buffer.
Capture of allergen samples and standard
The standard and samples were diluted in washing buffer containing 1% bovine serum albumin fraction V (Sigma Aldrich Cat No. 10735086001). The standard of each allergen was diluted to make 10 serial doubling dilutions in dilution buffer. Subsequently, the allergen samples (100 μL/well) with three different dilutions were added in respective wells and incubated at room temperature for 1 h.
Addition of detection antibody
After incubation, plates were washed three times; the specific biotinylated anti-Ara h mAb, diluted 1/1,000 in washing buffer containing BSA (1 mg/mL), was added to the wells (100 μL/well) and incubated for 1 h at room temperature.
After three washes, HRP-conjugated Streptavidin diluted to 1/1,000 in washing buffer containing BSA (1 mg/mL) was added to the wells and incubated for 1 h at room temperature.
Addition of the substrate
The wells were washed three times and 100 μL of 1 mM ABTS, a colorimetric substrate that forms a colored product when catalyzed by the enzyme, was added to each well. Color development was observed after 5 min.
Detection through ELISA reader
The optical density (OD) was measured at 405 nm using a Bio-Rad microplate reader, and the data were processed using Microplate Manager V 6.1 (Bio-Rad Laboratories).
Spike and Recovery Studies
To test the accuracy of peanut allergen estimation, a known amount of each allergen was spiked into the peanut extracts. The spiked standard concentrations ranged from 100 to 1,600 ng/ml for Ara h 1, 25 to 200 ng/ml for Ara h 2, 6.25 to 100 ng/ml for Ara h 3, 10 to 160 ng/ml for Ara h 6, and 10 to 100 ng/ml for Ara h 8. These known amounts of the individual standards (Ara h 1, Ara h 2, Ara h 3, Ara h 6, and Ara h 8) were spiked into peanut extract, and their final content in the extract was calculated afterwards. The recovery was calculated as Recovery (%) = (B − C)/A × 100, where A = known amount of peanut standard spiked, B = measured concentration of the spiked extract, and C = measured concentration of the unspiked peanut extract.
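With the symbols defined above, the recovery calculation is Recovery (%) = (B − C)/A × 100 (reconstructed from the definitions of A, B and C). A minimal sketch, with illustrative example numbers:

```python
# Spike-recovery calculation: A = amount of standard spiked in,
# B = measured concentration of the spiked extract,
# C = measured concentration of the unspiked extract.

def percent_recovery(spiked_amount_A, spiked_reading_B, unspiked_reading_C):
    return (spiked_reading_B - unspiked_reading_C) / spiked_amount_A * 100.0

# E.g., spike 100 ng/ml into an extract that reads 40 ng/ml on its own;
# a spiked reading of 135 ng/ml corresponds to 95% recovery.
print(percent_recovery(100.0, 135.0, 40.0))  # 95.0
```

A result inside a window such as 80-120% would normally be considered acceptable recovery.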
Statistical Analysis and Sample Analysis Design
All experiments were conducted with three replications for each dilution factor (DF). Mean ELISA plate readings (optical density, OD) for each standard and sample were used to plot a standard curve in an Excel spreadsheet, with each allergen standard's concentration values on the Y axis and the respective OD values on the X axis. Using the regression equation, we estimated the specific allergen concentration in each sample. Data are expressed as means ± standard deviation (SD). Allergen standards and samples were placed in a 96-well plate format as shown in Figure 2. Statistical analysis of the data was performed using SigmaPlot 11.0.
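The regression step can be sketched in a few lines. This is an illustrative ordinary least-squares fit with made-up standard values, mirroring the axes described above (OD on X, concentration on Y); the text performs the equivalent fit in Excel.

```python
# Standard-curve regression sketch; standards below are illustrative.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

od = [0.1, 0.2, 0.4, 0.8]          # OD at 405 nm of standards (X axis)
conc = [25.0, 50.0, 100.0, 200.0]  # standard concentration, ng/ml (Y axis)
slope, intercept = fit_line(od, conc)

def estimate(sample_od, dilution_factor):
    """Concentration from the regression line, times the dilution factor."""
    return (slope * sample_od + intercept) * dilution_factor
```

A sample well reading OD 0.3 at a 1/1,000 dilution would then back-calculate to 75,000 ng/ml of allergen in the extract with these made-up standards.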
Figure 2. Diagram showing the 96-well plate design and the serial dilution of allergen standard and peanut samples in three replicates and three dilutions. Wells A1 to A10 and B1 to B10 contain the serial dilution of a specific allergen standard; A11 to A12 and B11 to B12 are blanks; the C, D, E, F, G, and H wells contain unknown peanut samples at three different dilutions for the specific allergen.
Blocking buffers consist of formulations of proteins designed to prevent non-specific binding to the plate. An optimal blocking buffer maximizes the signal-to-noise ratio and does not react with the antibodies or target protein. If cross-reactivity is observed, then a different blocking agent should be tested. If repeated cross-reactivity is observed, it may be advisable to switch to a non-mammalian protein blocking agent such as salmon serum or a protein-free blocking solution.
Some systems may benefit from the addition of a surfactant such as Tween 20 (a gentle non-ionic detergent) to the blocking solution. Surfactants can help to minimize hydrophobic interactions between the blocking protein and the antigen or antibodies. Typically a final concentration of 0.05% (v/v) Tween 20 is used. In addition, blocking buffers should be used in sufficient volumes to completely coat the wells. For example, 400 μL is generally used for each well of a 96-well plate.
4 DIM LIGHT MELATONIN ONSET
Lewy and Sack 37 showed that the time of onset of melatonin in plasma is an excellent marker of circadian rhythmicity in humans. Further, it was apparent that exposure to bright light in the late evening not only acutely suppressed the onset of pineal melatonin production, but also delayed the appearance of melatonin on the subsequent night. In those early studies, melatonin was measured using a sensitive mass spectrometry assay that reported daytime levels in the range of 1 pg/mL, using 1 mL of sample. They used a threshold approach to define the DLMO as the time of day at which the concentration exceeds 10 pg/mL. Since then, many different methods have been proposed to define circadian phase based on melatonin rhythmicity (Figure 2); these have included the hockey-stick method, 38 the time at which the level passes 2 standard deviations of the basal level, 39 and the time at which melatonin synthesis ceases (SynOff). 40 Each of these methods has its strengths and weaknesses, as discussed by Klerman et al. 41 Some methods, such as the SynOff and 25%-of-maximum methods, require multiple samples to be collected across the night and are quite impractical for most clinical or experimental purposes.
The obvious advantage of collecting saliva rather than blood is that the former is noninvasive and can be done in nonclinical settings. This has led to the widespread use of saliva melatonin measurements to determine the DLMO. In doing so, there has been a need to alter the definition of the DLMO for saliva melatonin to match that obtained from plasma, because the levels measured are 30%-40% lower; thus, rather than the original plasma DLMO threshold of 10 pg/mL, a saliva threshold of 4 pg/mL is appropriate. 42 Indeed, in a massive study of 1848 saliva DLMO collections (5 samples), a DLMO threshold of 4 pg/mL was used successfully for 1408 subjects (76%), while 213 subjects (11.5%) had values that never exceeded 4 pg/mL and only 136 (7.3%) had saliva melatonin levels that were consistently above 4 pg/mL during the collection window. 43
Oxytocin (OT) and Vasopressin (AVP) are phylogenetically conserved neuropeptides with effects on social behavior, cognition and stress responses. Although OT and AVP are most commonly measured in blood, urine and cerebrospinal fluid (CSF), these approaches present an array of challenges including concerns related to the invasiveness of sample collection, the potential for matrix interference in immunoassays, and whether samples can be collected at precise time points to assess event-linked endocrine responses.
We validated enzyme-linked immunosorbent assays (ELISAs) for the measurement of salivary OT and AVP in domestic dogs.
Both OT and AVP were present in dog saliva and detectable by ELISA and high performance liquid chromatography – mass spectrometry (HPLC–MS). OT concentrations in dog saliva were much higher than those typically detected in humans. OT concentrations in the same samples analyzed with and without sample extraction were highly correlated, but this was not true for AVP. ELISA validation studies revealed good accuracy and parallelism, both with and without solid phase extraction. Collection of salivary samples with different synthetic swabs, or following salivary stimulation or the consumption of food led to variance in results. However, samples collected from the same dogs using different techniques tended to be positively correlated. We detected concurrent elevations in salivary and plasma OT during nursing.
Comparison with existing methods
There are currently no other validated methods for measuring OT/AVP in dog saliva.
OT and AVP are present in dog saliva, and ELISAs for their detection are methodologically valid.
Colorectal cancer (CRC) is a broadly occurring and lethal cancer, with approximately 1.4 million new cases and 700,000 deaths yearly. CRC outcome dramatically improves with early detection followed by curative resection [2,3,4]; thus CRC screening is recommended for all U.S. patients over 50 years of age [5, 6]. The gold standard screening test is colonoscopy, with some stool-based tests also performing well [7, 8]. However, compliance with CRC screening recommendations is low; by some measures, only 40% of the population for which screening is recommended will undergo testing.
A low-burden CRC screening test, such as a blood-based test, has been widely sought. However, it has proven difficult to find a blood-based CRC signal with performance matching that of colonoscopy or of stool-based tests across average-risk patients. The blood-based CRC signal may be stronger in patients with more advanced disease, such as those with symptoms of colorectal neoplasia. If so, the appearance of symptoms would offer an opportunity to provide low-burden CRC testing with higher performance. Interest in a low-burden CRC test for symptomatic patients has also come from the clinical community. By itself, the increased CRC prevalence in the symptomatic population (10.9% in a Danish symptomatic cohort, as compared to 0.5–0.7% in the average-risk population [7, 9]) would seem sufficient incentive for patients to follow clinicians’ recommendations to have colonoscopies. However, the compliance rate in the symptomatic population is estimated to be only 63.6% (unpublished observations). A low-burden CRC test for this population would support patient risk stratification, resulting in increased personalized incentive and increased colonoscopy compliance [11, 12].
Given the attractiveness of a low-burden CRC test for symptomatic patients, several groups have focused efforts here [10, 11, 13, 14]. The highest performing test, and the only validated test to date, was a blood-based test developed earlier in our laboratory [10, 14] using a sample set mirroring the composition of the intent-to-test (ITT) symptomatic population. The specific symptoms present in this population (abnormal bowel habits, abdominal pain, rectal bleeding, unexplained weight loss, meteorism, anemia, and/or palpable mass) indicated a likelihood of increased CRC risk, which was borne out by the study colonoscopy results; hence we term these patients “high risk.” The positive predictive value (PPV) of this test was 31%, meaning that 31% of the patients with positive test results had CRC uncovered during colonoscopy. This was a dramatic improvement over the positive CRC rate from asymptomatic screening colonoscopy alone (0.5–0.7% [7, 9]) and over the positive CRC rate within the symptomatic population without additional stratification (10.9%). The strong performance of this earlier test demonstrated that a low-burden test for symptomatic patients provides information that may dramatically improve colonoscopy compliance among these high-risk patients: a positive result indicates much more certainty about the usefulness of further testing.
In the present paper, we report the development of a new blood-based CRC test. The new test is directed to the ITT population of symptomatic patients and was developed using a much larger sample set (4435 vs 922 patient samples) and an assay format with more robust analytical performance: electrochemiluminescence antibody-based assays. These assays offer greater dynamic range and higher sensitivity than the ELISA format used in the earlier test. The new test’s validated classifier algorithm, developed by applying feature selection and machine learning to concentration measures of 27 proteins in the 4435-patient sample set, has significantly better specificity, resulting in a higher PPV. The new test offers a low-burden, high-quality, high-performance CRC risk assessment to clinicians serving patients presenting with CRC symptoms. Results can be used to manage these patients’ choices about further CRC testing.
CSF BACE activity assay
In order to determine the best assay format for assaying BACE enzymatic activity in biological extracts or fluids such as CSF, we initially tested a solution-based assay and compared it with either a BACE capture assay format or a substrate capture assay format (see Supplemental Information) to determine the signal and background properties of these different assays. In the solution-based BACE enzymatic assay, which was superior to the capture-based assay formats (Fig. 1), a biotin labeled 15