
Is it possible to have a scientific review of a method if the author doesn't have direct experience of it?


It seems like it is possible to conduct scientific research without actually getting close to the sample/specimen. However, what if the "sample" of the research is a method? For example, there are many methods to dissect a whale. Is it possible to have a scientific review of a method if the author hasn't had direct experience with that method, but only knows its rules or steps, and other research discussing it? And if the answer is yes, then how can they be reasonably confident that they aren't missing something (the unknown unknowns)?

I suppose there are different cases to consider:

  • The author has no direct experience with whales
  • The author has direct experience with whales, but no direct experience with any method of dissecting a whale
  • The author has direct experience with whales and with methods of dissecting a whale

According to this meta post, this question invites answers that dissect what the question means. Such an analysis of the possible interpretations of the question can constitute an interesting philosophical exercise.


I laid out the argument that science is what scientists do here: Philosophical assumptions underlying science

One-off or near-unique events can still be investigated by science. Chicxulub, gamma-ray bursts, the Big Bang - we study these and are as 'far' from them as it's possible to be, in space and time. There was a first-ever dissection of a whale done with science in mind. A less experienced scientist isn't necessarily less scientific - indeed, a great deal of Nobel Prize-winning work was done by scientists before the age of 30.

if the answer is yes, then how can they be reasonably confident that they aren't missing something (the unknown unknowns)?

We develop domain-specific toolboxes. Consilience, or convergence of evidence, is a good start. Correcting for cognitive biases, with tools like double-blind trials, might be needed. Galen caused medics to ignore their own eyes during dissections for over a millennium, because he had not dissected humans, only pigs, chimpanzees, etc.

I think of the detection of gravitational waves as a masterclass in how science is done. Each source of noise was systematically minimised, given the budget, with steps like damping, multiple separated detectors to eliminate local noise, and error bars chased to below the signal range (though in fact this scale was unknown).
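To make the multiple-detector point concrete, here is a minimal numerical sketch (a toy illustration of cross-correlating two instruments, not LIGO's actual matched-filter pipeline; all numbers are made up): two simulated detectors share a weak common signal but carry independent noise, and correlating their outputs suppresses the noise while the shared signal survives.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
t = np.arange(n)

signal = 0.1 * np.sin(2 * np.pi * t / 500)  # weak common signal (stand-in)
det_a = signal + rng.normal(0.0, 1.0, n)    # detector A: signal + its own noise
det_b = signal + rng.normal(0.0, 1.0, n)    # detector B: independent noise

# In either detector alone, the signal sits ~14x below the noise floor:
print(np.std(det_a), np.std(signal))        # ~1.0 vs ~0.07

# Cross-correlating the two streams averages the independent noise toward
# zero, while the shared signal contributes its mean-square power:
print(np.mean(det_a * det_b))               # ~0.005, well above the ~0.001
                                            # statistical fluctuation at this n
```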

In the whale example, there are clearly transferable skills from dissection and other surgical procedures. Literature may indicate key whale-specific issues - for rabbits, I think the gall bladder should be removed quickly or it 'taints' the meat, which might be relevant to investigations. Sperm whales alone produce ambergris, which might warrant specific investigation in that case. Research on whale strandings has focused on ear damage, which has required developing more detailed knowledge of the minutiae of otherwise healthy dead whales - it's thought military sonar has been causing deafening sound bursts, resulting in certain microscopic air bubbles and causing pods to flee the water. There's always going to be the cutting edge of research doing new techniques, and workaday routine science - but the boundaries of both continuously expand.

There's a whole deeper level about hypothesis generation and falsifiability, creating tractable abstract models, and scientific paradigms, but that doesn't seem like what you are asking. It seems like you are trying to get at something deeper that you haven't made explicit…

I think the summary is: being better able to review methodologies and to think through unknown unknowns is what we call being a better scientist. Often, before the science is done, we don't know who is good, so we use proxies like exams and interviews. But doing these things well results in noteworthy, insightful science, and we try to elevate those who can do that to supervise and teach.


Yes. Science is done on datasets and theoretical models,1 not necessarily on direct experience:

  • Underground nuclear test
  • Neutron star

I can only add that, in general, we don't need to experience something directly in order to feel it or live through it. We can think it by daydreaming it. After doing that for your whole life, it gets pretty real. That's our superpower as humans -- sharing our individual experiences with each other.

1 Except that nowadays theoretical models get routinely neglected in favor of data science, resulting in a frightening number of peer-reviewed studies that fit the data first and ask the theoretical questions later.


Is it possible to have a scientific review of a method if the author doesn't have direct experience of it? - Biology

By the end of this section, you will be able to:

  • Identify the shared characteristics of the natural sciences
  • Understand the process of scientific inquiry
  • Compare inductive reasoning with deductive reasoning
  • Describe the goals of basic science and applied science

Figure 1. Formerly called blue-green algae, the (a) cyanobacteria seen through a light microscope are some of Earth’s oldest life forms. These (b) stromatolites along the shores of Lake Thetis in Western Australia are ancient structures formed by the layering of cyanobacteria in shallow waters. (credit a: modification of work by NASA; scale-bar data from Matt Russell; credit b: modification of work by Ruth Ellison)

Figure 2. Biologists may choose to study Escherichia coli (E. coli), a bacterium that is a normal resident of our digestive tracts but which is also sometimes responsible for disease outbreaks. In this micrograph, the bacterium is visualized using a scanning electron microscope and digital colorization. (credit: Eric Erbe; digital colorization by Christopher Pooley, USDA-ARS)

Like geology, physics, and chemistry, biology is a science that gathers knowledge about the natural world. Specifically, biology is the study of life. The discoveries of biology are made by a community of researchers who work individually and together using agreed-on methods. In this sense, biology, like all sciences, is a social enterprise like politics or the arts. The methods of science include careful observation, record keeping, logical and mathematical reasoning, experimentation, and submitting conclusions to the scrutiny of others. Science also requires considerable imagination and creativity; a well-designed experiment is commonly described as elegant or beautiful. Like politics, science has considerable practical implications, and some science is dedicated to practical applications, such as the prevention of disease (see Figure 2). Other science proceeds largely motivated by curiosity. Whatever its goal, there is no doubt that science, including biology, has transformed human existence and will continue to do so.


Intermittent fasting: Surprising update

There’s a ton of incredibly promising intermittent fasting (IF) research done on fat rats. They lose weight, their blood pressure, cholesterol, and blood sugars improve… but they’re rats. Studies in humans, almost across the board, have shown that IF is safe and incredibly effective, but really no more effective than any other diet. In addition, many people find it difficult to fast.

But a growing body of research suggests that the timing of the fast is key, and can make IF a more realistic, sustainable, and effective approach for weight loss, as well as for diabetes prevention.

The backstory on intermittent fasting

IF as a weight loss approach has been around in various forms for ages, but was highly popularized in 2012 by BBC broadcast journalist Dr. Michael Mosley’s TV documentary Eat Fast, Live Longer and book The Fast Diet, followed by journalist Kate Harrison’s book The 5:2 Diet based on her own experience, and subsequently by Dr. Jason Fung’s 2016 bestseller The Obesity Code. IF generated a steady positive buzz as anecdotes of its effectiveness proliferated.

As a lifestyle-leaning research doctor, I needed to understand the science. The Obesity Code seemed the most evidence-based summary resource, and I loved it. Fung successfully combines plenty of research, his clinical experience, and sensible nutrition advice, and also addresses the socioeconomic forces conspiring to make us fat. He is very clear that we should eat more fruits and veggies, fiber, healthy protein, and fats, and avoid sugar, refined grains, processed foods, and for God’s sake, stop snacking. Check, check, check, I agree. The only part that was still questionable in my mind was the intermittent fasting part.

Intermittent fasting can help weight loss

IF makes intuitive sense. The food we eat is broken down by enzymes in our gut and eventually ends up as molecules in our bloodstream. Carbohydrates, particularly sugars and refined grains (think white flours and rice), are quickly broken down into sugar, which our cells use for energy. If our cells don’t use it all, we store it in our fat cells as, well, fat. But sugar can only enter our cells with insulin, a hormone made in the pancreas. Insulin brings sugar into the fat cells and keeps it there.

Between meals, as long as we don’t snack, our insulin levels will go down and our fat cells can then release their stored sugar, to be used as energy. We lose weight if we let our insulin levels go down. The entire idea of IF is to allow the insulin levels to go down far enough, and for long enough, that we burn off our fat.

Intermittent fasting can be hard… but maybe it doesn’t have to be

Initial human studies that compared fasting every other day to eating less every day showed that both worked about equally for weight loss, though people struggled with the fasting days. So, I had written off IF as no better or worse than simply eating less, only far more uncomfortable. My advice was to just stick with the sensible, plant-based, Mediterranean-style diet.

New research is suggesting that not all IF approaches are the same, and some are actually very reasonable, effective, and sustainable, especially when combined with a nutritious plant-based diet. So I’m prepared to take my lumps on this one (and even revise my prior post).

We have evolved to be in sync with the day/night cycle, i.e., a circadian rhythm. Our metabolism has adapted to daytime food, nighttime sleep. Nighttime eating is well associated with a higher risk of obesity, as well as diabetes.

Based on this, researchers from the University of Alabama conducted a study with a small group of obese men with prediabetes. They compared a form of intermittent fasting called "early time-restricted feeding," where all meals were fit into an early eight-hour period of the day (7 am to 3 pm), or spread out over 12 hours (between 7 am and 7 pm). Both groups maintained their weight (did not gain or lose), but after five weeks the eight-hour group had dramatically lower insulin levels and significantly improved insulin sensitivity, as well as significantly lower blood pressure. The best part? The eight-hour group also had significantly decreased appetite. They weren’t starving.

Just changing the timing of meals, by eating earlier in the day and extending the overnight fast, significantly benefited metabolism even in people who didn’t lose a single pound.

Why might changing timing help?

But why does simply changing the timing of our meals to allow for fasting make a difference in our body? An in-depth review of the science of IF, recently published in the New England Journal of Medicine, sheds some light. Fasting is evolutionarily embedded within our physiology, triggering several essential cellular functions. Flipping the switch from a fed to a fasting state does more than help us burn calories and lose weight. The researchers combed through dozens of animal and human studies to explain how simple fasting improves metabolism: it lowers blood sugar; lessens inflammation, which improves a range of health issues from arthritic pain to asthma; and even helps clear out toxins and damaged cells, which lowers risk for cancer and enhances brain function. The article is deep, but worth a read!

So, is intermittent fasting as good as it sounds?

I was very curious about this, so I asked the opinion of metabolic expert Dr. Deborah Wexler, Director of the Massachusetts General Hospital Diabetes Center and associate professor at Harvard Medical School. Here is what she told me. "There is evidence to suggest that the circadian rhythm fasting approach, where meals are restricted to an eight to 10-hour period of the daytime, is effective," she confirmed, though generally she recommends that people "use an eating approach that works for them and is sustainable to them."

So, here’s the deal. There is some good scientific evidence suggesting that circadian rhythm fasting, when combined with a healthy diet and lifestyle, can be a particularly effective approach to weight loss, especially for people at risk for diabetes. (However, people with advanced diabetes or who are on medications for diabetes, people with a history of eating disorders like anorexia and bulimia, and pregnant or breastfeeding women should not attempt intermittent fasting unless under the close supervision of a physician who can monitor them.)

4 ways to use this information for better health

  1. Avoid sugars and refined grains. Instead, eat fruits, vegetables, beans, lentils, whole grains, lean proteins, and healthy fats (a sensible, plant-based, Mediterranean-style diet).
  2. Let your body burn fat between meals. Don’t snack. Be active throughout your day. Build muscle tone.
  3. Consider a simple form of intermittent fasting. Limit the hours of the day when you eat, and for best effect, make it earlier in the day (between 7 am and 3 pm, or even 10 am and 6 pm, but definitely not in the evening before bed).
  4. Avoid snacking or eating at nighttime, all the time.

Sources

Effects of intermittent fasting on health, aging, and disease. de Cabo R, Mattson MP. New England Journal of Medicine, December 2019.


The Vision

For many years I have had a vision for an online technical journal publishing the very best creationist research at the highest possible level of professionalism and presentation. Previously the best creationist researchers had to wait four or five years for the next International Conference on Creationism to submit their papers to a forum and publication in which they could be assured of the highest standards of peer review, presentation, and dissemination. Furthermore, the paper (rather than online) publication of the ICC Proceedings has restricted the dissemination of this top-quality research. Even though the ICC is now moving to concurrent electronic publication, timely dissemination is still restricted and delayed by cost considerations.

It is my hope that now, because of both the rapid publication we will be offering in the Answers Research Journal (within two to three months of receipt) and the free-of-charge online publication to ensure the widest possible dissemination of their cutting-edge research results, leading creationist researchers around the world will make publication in the Answers Research Journal their first consideration. In this way I am confident the Answers Research Journal will quickly be at the forefront in setting the trend for online creationist technical publications, a “must-see” for all serious creationist researchers and students.


Guide for Authors

MethodsX publishes the small but important customizations you make to methods every day. By releasing the hidden gems from your lab book, you can get credit for the time, effort and money you've put into making methods work for you. And because it is open access, it is even more visible and citable, giving your work the exposure it deserves.

MethodsX provides an outlet for technical information that can be useful for others working in the same field, and help them save time in their own research, while giving you the deserved credit for your efforts. Since this is relevant for any field doing experimental work, MethodsX welcomes submissions from all research areas.

MethodsX puts the technical aspects of your work into the spotlight. Publish essential details of the tweaks you have made to a method, without spending time on writing up a traditional article with detailed background and contextual information. Your MethodsX article showcases the work you've done to customize a method. It's that simple. All you need to include is:

  • an abstract to outline the customization
  • a graphical abstract (visual) to illustrate what you've done
  • the method(s) in sufficient detail to help people replicate it, including any relevant figures, tables, etc.
  • up to 25 references to the original description of the method you're using

While keeping the focus on the technical aspects of the work, evidence of the efficiency of the method and/or a comparison with pre-existing protocols needs to be provided. This should be immediately evident to the reader.

To see some examples please click here.

  1. The article is written in standard English and clearly understandable
  2. The manuscript adheres to the MethodsX format; see the author guidelines
  3. Control data supporting the claims made are included: evidence of the efficiency of the method and/or a comparison with pre-existing protocols is provided
  4. The method should describe a change over established practices

Note that manuscripts not adhering to the above points may be rejected without full peer review.

Please select and download the correct template to prepare your article: the Methods article template or the Protocol article template.

Should you have a proposal or an idea for a thematic issue, please complete the thematic issue proposal form and send it to the Editorial Office (Ms. Divya Pillai, [email protected])

Some recent thematic issues covered these hot topics:
Microplastics analysis - Published 2020
Nanomaterials for analytical applications - Upcoming for 2021
Non-animal methods in toxicology - Upcoming for 2021
Microfluidics for various applications - Upcoming for 2021
Advanced mass spectrometric analysis for environmental and food safety - Upcoming for 2021

For any questions contact us at: [email protected]


To watch our five-minute overview highlighting the most important information for authors, see here

The MethodsX Editorial and Review process
MethodsX aims to have a transparent and quick editorial process. All submitted articles that conform to the MethodsX format will be sent out for review. As the content of a MethodsX article is purely technical, reviewers are asked to focus on the technical aspects of the manuscript. Are the procedures suggested by the authors plausible? Are the methods clear and logical to follow, so that someone else could reproduce them easily?

Authors are invited to revise and resubmit their manuscript when reviews are overall positive and request textual adjustments only. If extensive additional experiments are required, authors will be advised that their manuscript cannot be accepted for publication. Of course, every author will be welcome to resubmit their manuscript anew in the future.
MethodsX is a community effort, by researchers for researchers. We appreciate the work not only of the authors submitting, but also of the reviewers who provide valuable input to each submission. We therefore publish a standard "reviewer thank you" note in each published article and give the reviewers the choice to be named or to remain anonymous.

When submitting, you are encouraged to submit a list of potential reviewers (including their names, institutional email addresses, and institutional affiliations). When compiling this list of potential reviewers, please consider the following important criteria: they must be knowledgeable about the subject area of the manuscript; they must not be from your own institution; at least two of the suggested reviewers should be from a country other than the authors'; and they should not have recent (less than four years old) joint publications with any of the authors. However, the final choice of reviewers is at the editors' discretion.
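As a sketch of how these criteria compose, here is a small hypothetical helper (the Reviewer fields and function name are my own assumptions, not anything MethodsX provides) that flags suggestions violating the rules above:

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    institution: str
    country: str
    years_since_joint_publication: float  # with any author; float("inf") if none

def check_reviewer_list(reviewers, author_institutions, author_country):
    """Return a list of problems with a proposed reviewer list."""
    problems = []
    for r in reviewers:
        if r.institution in author_institutions:
            problems.append(f"{r.name}: same institution as an author")
        if r.years_since_joint_publication < 4:
            problems.append(f"{r.name}: joint publication with an author within 4 years")
    if sum(r.country != author_country for r in reviewers) < 2:
        problems.append("fewer than two suggested reviewers from another country")
    return problems
```

The editors still choose freely; a clean list simply avoids an avoidable desk query.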

Please note that no cover letter to the Editor is required. Should you have comments or questions for the Editor, you can submit them in a free text box in the course of the submission process (submission step entitled Enter Comments).

Open access and Copyright
This journal is fully open access: all articles will be immediately and permanently free for everyone to read and download. Upon acceptance of an article, authors will be asked to complete an 'Exclusive License Agreement', under which authors retain copyright (for more information on this see here). Permitted reuse is defined by the following Creative Commons user license:
Creative Commons Attribution (CC BY): lets others distribute and copy the article, create extracts, abstracts, and other revised versions, adaptations or derivative works of or from an article (such as a translation), include it in a collective work (such as an anthology), and text or data mine the article, even for commercial purposes, as long as they credit the author(s), do not represent the author as endorsing their adaptation of the article, and do not modify the article in such a way as to damage the author's honor or reputation.

To provide Open Access, this journal has a publication fee of USD 600 which needs to be met by the authors or their research funders upon acceptance.

Retained author rights
As an author you (or your employer or institution) retain certain rights, including copyright; for details, see here

Role of the funding source
You are requested to identify who provided financial support for the conduct of the research and/or preparation of the article and to briefly describe the role of the sponsor(s), if any, in the design of the study; in the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication. If the funding source(s) had no such involvement, this should be stated.

Funding body agreements and policies
Elsevier has established agreements and developed policies to allow authors whose articles appear in journals published by Elsevier, to comply with potential manuscript archiving requirements as specified as conditions of their grant awards. To learn more about existing agreements and policies please visit the agreements page.

Elsevier supports responsible sharing
Find out how you can share your research published in Elsevier journals.

Ethics in publishing
For information on Ethics in publishing and Ethical guidelines for journal publication see the publishing ethics and author ethics pages.

Human and animal rights
If the work involves the use of animal or human subjects, the author should ensure that the work described has been carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans; EU Directive 2010/63/EU for animal experiments; and the Uniform Requirements for Manuscripts Submitted to Biomedical Journals. Authors should include a statement in the manuscript that informed consent was obtained for experimentation with human subjects. The privacy rights of human subjects must always be observed.

Informed consent and patient details
Studies on patients or volunteers require ethics committee approval and informed consent, which should be documented in the paper. Appropriate consents, permissions and releases must be obtained where an author wishes to include case details or other personal information or images of patients and any other individuals in an Elsevier publication. Written consents must be retained by the author, and copies of the consents or evidence that such consents have been obtained must be provided to Elsevier on request. For more information, please review the Elsevier Policy on the Use of Images or Personal Information of Patients or other Individuals. Unless you have written permission from the patient (or, where applicable, the next of kin), the personal details of any patient included in any part of the article and in any supplementary materials (including all illustrations and videos) must be removed before submission.

Conflict of interest
All authors are requested to disclose any actual or potential conflict of interest including any financial, personal or other relationships with other people or organizations within three years of beginning the submitted work that could inappropriately influence, or be perceived to influence, their work. See here for further information and an example of a Conflict of Interest form can be found here.

Submission declaration and verification
MethodsX is a platform to publish detailed information on your research methods; this includes both new methods and adjustments or customizations to methods that have already been published. For example, we want to capture changes you have made to a known method to make it work in another organism, system or environment. Your submission can also be an extension of a previously published original research paper, in which case your MethodsX paper will include all of the technical details that might not have been included in your research paper.

Note that any published work your article relates to, such as the original method or your own research paper, should be cited in your MethodsX paper. Examples are available here

A paper is accepted for publication on the understanding that it has not been submitted simultaneously to another journal in the English language.

Authorship
All authors should have made substantial contributions to all of the following: (1) the conception and design of the study, or acquisition of data, or analysis and interpretation of data, (2) drafting the article or revising it critically for important intellectual content, (3) final approval of the version to be submitted.

Changes to authorship
This policy concerns the addition, deletion, or rearrangement of author names in the authorship of accepted manuscripts:
Before the accepted manuscript is published in an online issue: Requests to add or remove an author, or to rearrange the author names, must be sent to the Journal Manager from the corresponding author of the accepted manuscript and must include: (a) the reason the name should be added or removed, or the author names rearranged and (b) written confirmation (e-mail, fax, letter) from all authors that they agree with the addition, removal or rearrangement. In the case of addition or removal of authors, this includes confirmation from the author being added or removed. Requests that are not sent by the corresponding author will be forwarded by the Journal Manager to the corresponding author, who must follow the procedure as described above. Note that: (1) Journal Managers will inform the Journal Editors of any such requests and (2) publication of the accepted manuscript in an online issue is suspended until authorship has been agreed.
After the accepted manuscript is published in an online issue: Any requests to add, delete, or rearrange author names in an article published in an online issue will follow the same policies as noted above and result in a corrigendum.

Electronic Submission
Please note: for initial submission we only need to receive a pdf file containing all elements of your article (title, abstract, graphical abstract, methods with all figures/tables included, references). Supplementary material can however be uploaded separately.

Only upon revision will we need ALL original source files.

Always keep a backup copy of the electronic file for reference and safety. Full details of electronic submission and formats can be obtained from the journal authors pages or from Elsevier's Author Services.

  • Method articles describe new methods in all research areas.
  • Protocol articles focus on the life and health science areas.

Please select and download the correct template to prepare your article: Method article template or Protocol article template.

Language (usage and editing services)
Please write your text in good English (American or British usage is accepted, but not a mixture of these). Authors who feel their English language manuscript may require editing to eliminate possible grammar or spelling mistakes and to conform to correct scientific English may wish to use the English Language Editing service available from Elsevier's WebShop or visit our support center for more information.

Graphical abstract
A Graphical abstract is mandatory for this journal. It should summarize the contents of the article in a concise, pictorial form designed to capture the attention of a wide readership online. Authors must provide images that clearly represent the work described in the article. Upon revision, graphical abstracts should be submitted as a separate file. Image size: please provide an image with a minimum of 531 × 1328 pixels (h × w) or proportionally more. The image should be readable at a size of 5 × 13 cm using a regular screen resolution of 96 dpi. Preferred file types: TIFF, EPS, PDF or MS Office files. See here for examples.
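As an illustration only, a submission-side sanity check of the stated minimum could look like this (a sketch using the Pillow library, if available; the journal's submission system remains the final arbiter):

```python
from PIL import Image

MIN_H, MIN_W = 531, 1328  # minimum height x width in pixels, per the guide

def check_graphical_abstract(path: str) -> bool:
    """Report whether an image meets the stated minimum pixel dimensions."""
    with Image.open(path) as img:
        w, h = img.size  # Pillow reports (width, height)
    ok = h >= MIN_H and w >= MIN_W
    print(f"{w} x {h} px -> {'ok' if ok else 'below the stated minimum'}")
    return ok
```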

Image manipulations
All Western blots should be presented with molecular weights noted, replicated, quantified and with statistical analysis. Cropping of the image is acceptable but must be clearly indicated. Merging images together to give the appearance of one image is not acceptable. The method of normalization to total protein, or where appropriate, a loading control (e.g. cell signaling studies) should be explicitly stated in the text. Images may be subjected to analysis for manipulation prior to publication and authors may be requested to provide copies of the original data.

Data references
This journal encourages you to cite underlying or relevant datasets in your manuscript by citing them in your text and including a data reference in your Reference List. Data references should include the following elements: author name(s), dataset title, data repository, version (where available), year, and global persistent identifier. Add [dataset] immediately before the reference so we can properly identify it as a data reference. This identifier will not appear in your published article.
Example: [dataset] [5] M. Oguro, S. Imahiro, S. Saito, T. Nakashizuka, Mortality data for Japanese oak wilt disease and surrounding forest compositions, Mendeley Data, v1, 2015. http://dx.doi.org/10.17632/xwj98nb39r.1.
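For illustration, a throwaway formatter that assembles those elements in the order of the journal's example (the function is hypothetical; the bracketed list number, like [5], comes from the reference list itself and is omitted here):

```python
def format_data_reference(authors, title, repository, version, year, doi_url):
    """Assemble a data reference from the elements listed above."""
    version_part = f", {version}" if version else ""  # version "where available"
    return (f"[dataset] {authors}, {title}, {repository}"
            f"{version_part}, {year}. {doi_url}")

print(format_data_reference(
    "M. Oguro, S. Imahiro, S. Saito, T. Nakashizuka",
    "Mortality data for Japanese oak wilt disease and surrounding forest compositions",
    "Mendeley Data", "v1", 2015,
    "http://dx.doi.org/10.17632/xwj98nb39r.1"))
```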

Color artwork
Please make sure that artwork files are in an acceptable format (TIFF, EPS or MS Office files) and with the correct resolution. If, together with your accepted article, you submit usable color figures, Elsevier will ensure, at no additional cost, that these figures appear in color on the Web (e.g., on ScienceDirect and other sites). For further information on the preparation of electronic artwork, please see here

Abbreviations
Define abbreviations that are not standard in the field in a footnote to be placed on the first page of the article. Those abbreviations that cannot be avoided in the abstract must be defined at their first mention there, as well as in the footnote. Ensure consistency of abbreviations throughout the article.
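Consistency checks of this kind are easy to rough out in a few lines; the sketch below (my own heuristic, not a journal tool) lists capitalized abbreviations that never appear with a parenthesized first-mention definition:

```python
import re

def undefined_abbreviations(text: str):
    """List abbreviations whose first appearance is not a '(ABBR)' definition."""
    abbrevs = set(re.findall(r"\b[A-Z]{2,}\b", text))
    missing = []
    for a in sorted(abbrevs):
        first = text.find(a)
        if not (first > 0 and text[first - 1] == "("):
            missing.append(a)
    return missing

print(undefined_abbreviations(
    "Intermittent fasting (IF) is popular. IF and TRF differ."))
# ['TRF']  -> used but never defined
```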

Database linking
Elsevier encourages authors to connect articles with external databases, giving their readers one-click access to relevant databases that help to build a better understanding of the described research. Please refer to relevant database identifiers using the following format in your article: Database: xxxx (e.g., TAIR: AT1G01020; CCDC: 734053; PDB: 1XFN). See database linking for more information and a full list of supported databases.

Footnotes
Footnotes should be used sparingly. Number them consecutively throughout the article. Many Word processors build footnotes into the text, and this feature may be used. Should this not be the case, indicate the position of footnotes in the text and present the footnotes themselves separately at the end of the article. Do not include footnotes in the Reference list.

Table footnotes
Indicate each table footnote with a superscript lowercase letter.

Journal abbreviations source
Journal names should be abbreviated according to the list of title word abbreviations here.

Supplementary material
Authors can also submit supplementary material (such as raw data). Each supplementary material file should have a short caption which will be placed at the bottom of the article, where it can assist the reader and also be used by search engines. Note that supplementary material will not appear in printable pdf files.

Video data
Elsevier accepts video material and animation sequences to support and enhance your scientific research. Authors who have video or animation files that they wish to submit with their article are strongly encouraged to include links to these within the body of the article. This can be done in the same way as a figure or table by referring to the video or animation content and noting in the body text where it should be placed. All submitted files should be properly labeled so that they directly relate to the video file's content. In order to ensure that your video or animation material is directly usable, please provide the files in one of our recommended file formats with a preferred maximum size of 50 MB. Video and animation files supplied will be published online in the electronic version of your article in Elsevier Web products, including ScienceDirect: https://www.sciencedirect.com. Please supply 'stills' with your files: you can choose any frame from the video or animation or make a separate image. These will be used instead of standard icons and will personalize the link to your video data. For more detailed instructions please visit our video instruction pages. Note: since video and animation cannot be embedded in the print version of the journal, please provide text for both the electronic and the print version for the portions of the article that refer to this content.

Submission checklist
Check the correct template has been used to prepare your article: Method article template or Protocol article template.

  • Keywords
  • All figure captions
  • All tables (including title, description, footnotes)
  • Manuscript has been 'spell-checked' and 'grammar-checked'
  • All references mentioned in the Reference list are cited in the text, and vice versa (a cross-check like the sketch after this list can catch mismatches)
  • Permission has been obtained for use of copyrighted material from other sources (including the Web)
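The reference/citation cross-check in particular lends itself to automation. A minimal sketch (assuming a simple numeric citation style like "[5]"; real reference managers handle this far more robustly):

```python
import re

def cross_check(manuscript_text: str, reference_list: str):
    """Return (cited but unlisted, listed but uncited) reference numbers."""
    cited = set(re.findall(r"\[(\d+)\]", manuscript_text))
    listed = set(re.findall(r"^\s*\[?(\d+)[\].]", reference_list, re.M))
    return sorted(cited - listed), sorted(listed - cited)

unlisted, uncited = cross_check(
    "As shown in [1] and [3].",
    "[1] A. Author, 2020.\n[2] B. Author, 2021.")
print(unlisted)  # ['3']  cited in the text but missing from the list
print(uncited)   # ['2']  listed but never cited
```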

For any further general information please visit our Support Center.

Use of the Digital Object Identifier
The Digital Object Identifier (DOI) may be used to cite and link to electronic documents. The DOI consists of a unique alphanumeric character string which is assigned to a document by the publisher upon the initial electronic publication. The assigned DOI never changes. Therefore, it is an ideal medium for citing a document, particularly 'Articles in press', because they have not yet received their full bibliographic information. Example of a correctly given DOI (in URL format; here, an article in the journal Physics Letters B): https://doi.org/10.1016/j.physletb.2010.09.059
When you use a DOI to create links to documents on the web, the DOIs are guaranteed never to change.
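A rough shape check before submission can catch mangled identifiers; this sketch uses a common DOI heuristic pattern, not the full specification:

```python
import re

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")  # heuristic, not the full spec

def doi_to_url(doi: str) -> str:
    """Turn a bare DOI into the citable URL form shown above."""
    if not DOI_PATTERN.match(doi):
        raise ValueError(f"does not look like a DOI: {doi!r}")
    return f"https://doi.org/{doi}"

print(doi_to_url("10.1016/j.physletb.2010.09.059"))
# https://doi.org/10.1016/j.physletb.2010.09.059
```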

Online proof correction
Corresponding authors will receive an e-mail with a link to our ProofCentral system, allowing annotation and correction of proofs online. The environment is similar to MS Word: in addition to editing text, you can also comment on figures/tables and answer questions from the Copy Editor. Web-based proofing provides a faster and less error-prone process by allowing you to directly type your corrections, eliminating the potential introduction of errors.
If preferred, you can still choose to annotate and upload your edits on the PDF version. All instructions for proofing will be given in the e-mail we send to authors, including alternative methods to the online version and PDF.
We will do everything possible to get your article published quickly and accurately - please upload all of your corrections within 48 hours. It is important to ensure that all corrections are sent back to us in one communication. Please check carefully before replying, as inclusion of any subsequent corrections cannot be guaranteed. Proofreading is solely your responsibility. Note that Elsevier may proceed with the publication of your article if no response is received.

AUTHOR INQUIRIES
For inquiries relating to the submission of articles (including electronic submission) please visit this journal's homepage. For detailed instructions on the preparation of electronic artwork, please visit Artwork instructions. You can also track accepted articles here, check our Journal Authors page and/or contact the Elsevier Support Center. Contact details for questions arising after acceptance of an article, especially those relating to proofs, will be provided by the publisher.


Big Ideas Articles & More

Here at Greater Good, we cover research into social and emotional well-being, and we try to help people apply findings to their personal and professional lives. We are well aware that our business is a tricky one.

Summarizing scientific studies and applying them to people’s lives isn’t just difficult for the obvious reasons, like understanding and then explaining scientific jargon or methods to non-specialists. It’s also the case that context gets lost when we translate findings into stories, tips, and tools for a more meaningful life, especially when we push it all through the nuance-squashing machine of the Internet. Many people never read past the headlines, which intrinsically aim to overgeneralize and provoke interest. Because our articles can never be as comprehensive as the original studies, they almost always omit some crucial caveats, such as limitations acknowledged by the researchers. To get those, you need access to the studies themselves.

And it’s very common for findings to seem to contradict each other. For example, we recently covered an experiment that suggests stress reduces empathy—after having previously discussed other research suggesting that stress-prone people can be more empathic. Some readers asked: Which one is correct? (You’ll find my answer here.)

But probably the most important missing piece is the future. That may sound like a funny thing to say, but, in fact, a new study is not worth the PDF it’s printed on until its findings are replicated and validated by other studies—studies that haven’t yet happened. An experiment is merely interesting until time and testing turn its finding into a fact.

Scientists know this, and they are trained to react very skeptically to every new paper. They also expect to be greeted with skepticism when they present findings. Trust is good, but science isn’t about trust. It’s about verification.

However, journalists like me, and members of the general public, are often prone to treat every new study as though it represents the last word on the question addressed. This particular issue was highlighted last week by—wait for it—a new study that tried to reproduce 100 prior psychological studies to see if their findings held up. The result of the three-year initiative is chilling: The team, led by University of Virginia psychologist Brian Nosek, got the same results in only 36 percent of the experiments they replicated. This has led to some predictably provocative, overgeneralizing headlines implying that we shouldn’t take psychology seriously.

Despite all the mistakes and overblown claims and criticism and contradictions and arguments—or perhaps because of them—our knowledge of human brains and minds has expanded dramatically during the past century. Psychology and neuroscience have documented phenomena like cognitive dissonance, identified many of the brain structures that support our emotions, and proved the placebo effect and other dimensions of the mind-body connection, among other findings that have been tested over and over again.

These discoveries have helped us understand and treat the true causes of many illnesses. I’ve heard it argued that rising rates of diagnoses of mental illness constitute evidence that psychology is failing, but in fact, the opposite is true: We’re seeing more and better diagnoses of problems that would have compelled previous generations to dismiss people as “stupid” or “crazy” or “hyper” or “blue.” The important thing to bear in mind is that it took a very, very long time for science to come to these insights and treatments, following much trial and error.

Science isn’t a faith, but rather a method that takes time to unfold. That’s why it’s equally wrong to uncritically embrace everything you read, including what you are reading on this page.

Given the complexities and ambiguities of the scientific endeavor, is it possible for a non-scientist to strike a balance between wholesale dismissal and uncritical belief? Are there red flags to look for when you read about a study on a site like Greater Good or in a popular self-help book? If you do read one of the actual studies, how should you, as a non-scientist, gauge its credibility?

I drew on my own experience as a science journalist, and surveyed my colleagues here at the UC Berkeley Greater Good Science Center. We came up with 10 questions you might ask when you read about the latest scientific findings. These are also questions we ask ourselves before we cover a study.

1. Did the study appear in a peer-reviewed journal?

Peer review—submitting papers to other experts for independent review before acceptance—remains one of the best ways we have for ascertaining the basic seriousness of the study, and many scientists describe peer review as a truly humbling crucible. If a study didn’t go through this process, for whatever reason, it should be taken with a much bigger grain of salt.

2. Who was studied, where?

Animal experiments tell scientists a lot, but their applicability to our daily human lives will be limited. Similarly, if researchers only studied men, the conclusions might not be relevant to women, and vice versa.

This was actually a huge problem with Nosek’s effort to replicate other people’s experiments. In trying to replicate one German study, for example, they had to use different maps (ones that would be familiar to University of Virginia students) and change a scale measuring aggression to reflect American norms. This kind of variance could explain the different results. It may also suggest the limits of generalizing the results from one study to other populations not included within that study.

As a matter of approach, readers must remember that many psychological studies rely on WEIRD (Western, educated, industrialized, rich and democratic) samples, mainly college students, which creates an in-built bias in the discipline’s conclusions. Does that mean you should dismiss Western psychology? Of course not. It’s just the equivalent of a “Caution” or “Yield” sign on the road to understanding.

3. How big was the sample?

In general, the more participants in a study, the more valid its results. That said, a large sample is sometimes impossible or even undesirable for certain kinds of studies. This is especially true in expensive neuroscience experiments involving functional magnetic resonance imaging, or fMRI, scans.

And many mindfulness studies have scanned the brains of people with many thousands of hours of meditation experience—a relatively small group. Even in those cases, however, a study that looks at 30 experienced meditators is probably more solid than a similar one that scanned the brains of only 15.
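The intuition can be made quantitative with one line of arithmetic: under the usual assumption of roughly normal measurement noise, the standard error of a group mean shrinks with the square root of the sample size, so 30 participants do not buy twice the precision of 15, only about 1.4 times as much. A minimal sketch (the spread value is made up):

```python
import math

sigma = 10.0  # assumed spread (standard deviation) of the measured quantity
for n in (15, 30, 120):
    se = sigma / math.sqrt(n)  # standard error of the group mean
    print(f"n={n:4d}  standard error of the mean = {se:.2f}")
# n=15 -> 2.58, n=30 -> 1.83: doubling the sample narrows error bars
# by sqrt(2), not by 2.
```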

4. Did the researchers control for key differences?

Diversity or gender balance aren’t necessarily virtues in a research study; it’s actually a good thing when a study population is as homogeneous as possible, because it allows the researchers to limit the number of differences that might affect the result. A good researcher tries to compare apples to apples, and control for as many differences as possible in her analysis.

5. Was there a control group?

One of the first things to look for in methodology is whether the sample is randomized and involved a control group; this is especially important if a study is to suggest that a certain variable might actually cause a specific outcome, rather than just be correlated with it (see next point).

For example, were some in the sample randomly assigned a specific meditation practice while others weren’t? If the sample is large enough, randomized trials can produce solid conclusions. But, sometimes, a study will not have a control group because it’s ethically impossible. (Would people still divert a trolley to kill one person in order to save five lives, if their decision killed a real person, instead of just being a thought experiment? We’ll never know for sure!)

The conclusions may still provide some insight, but they need to be kept in perspective.

6. Did the researchers establish causality, correlation, dependence, or some other kind of relationship?

I often hear “Correlation is not causation” shouted as a kind of battle cry, to try to discredit a study. But correlation—the degree to which two or more measurements seem to change at the same time—is important, and is one step in eventually finding causation—that is, establishing that a change in one variable directly triggers a change in another.

The important thing is to correctly identify the relationship.
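A toy simulation shows how easily a hidden common cause manufactures correlation with no causation in either direction (the variable names are illustrative; the effects and noise are assumed linear and normal):

```python
import numpy as np

rng = np.random.default_rng(1)
heat = rng.normal(size=10_000)              # hidden common cause
ice_cream = heat + rng.normal(size=10_000)  # driven by heat
sunburn = heat + rng.normal(size=10_000)    # also driven by heat, not by ice cream

r = np.corrcoef(ice_cream, sunburn)[0, 1]
print(f"correlation = {r:.2f}")             # ~0.5, yet neither causes the other
```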

7. Is the journalist, or even the scientist, overstating the result?

Language that suggests a fact is “proven” by one study, or which promotes one solution for all people, is most likely overstating the case. Sweeping generalizations of any kind often indicate a lack of humility that should be a red flag to readers. A study may very well “suggest” a certain conclusion, but it rarely, if ever, “proves” it.

This is why we use a lot of cautious, hedging language in Greater Good, like “might” or “implies.”

8. Is there any conflict of interest suggested by the funding or the researchers’ affiliations?

A recent study found that you could drink lots of sugary beverages without fear of getting fat, as long as you exercised. The funder? Coca-Cola, which eagerly promoted the results. This doesn’t mean the results are wrong. But it does suggest you should seek a second opinion.

9. Does the researcher seem to have an agenda?

Readers could understandably be skeptical of mindfulness meditation studies promoted by practicing Buddhists or experiments on the value of prayer conducted by Christians. Again, it doesn’t automatically mean that the conclusions are wrong. It does, however, raise the bar for peer review and replication. For example, it took hundreds of experiments before we could begin saying with confidence that mindfulness can indeed reduce stress.

10. Do the researchers acknowledge limitations and entertain alternative explanations?

Is the study focused on only one side of the story or one interpretation of the data? Has it failed to consider or refute alternative explanations? Do the researchers demonstrate awareness of which questions their methods answer and which they do not?

I summarize my personal stance as a non-scientist toward scientific findings as this: curious, but skeptical. I take it all seriously and I take it all with a grain of salt. I judge it against my experience, knowing that my experience creates bias. I try to cultivate humility, doubt, and patience. I don’t always succeed; when I fail, I try to admit fault and forgive myself. My own understanding is imperfect, and I remind myself that one study is only one step in understanding. Above all, I try to bear in mind that science is a process, and that conclusions always raise more questions for us to answer.


Is it possible to have a scientific review of a method if the author doesn't have direct experience of it? - Biology

The scientific format may seem confusing for the beginning science writer due to its rigid structure, which is so different from writing in the humanities. One reason for using this format is that it is a means of efficiently communicating scientific findings to the broad community of scientists in a uniform manner. Another reason, perhaps more important than the first, is that this format allows the paper to be read at several different levels. For example, many people skim Titles to find out what information is available on a subject. Others may read only Titles and Abstracts. Those wanting to go deeper may look at the Tables and Figures in the Results, and so on. The take-home point here is that the scientific format helps to ensure that at whatever level a person reads your paper (beyond title skimming), they will likely get the key results and conclusions.


The Sections of the Paper

Most journal-style scientific papers are subdivided into the following sections: Title, Authors and Affiliation, Abstract, Introduction, Methods, Results, Discussion, Acknowledgments, and Literature Cited, which parallel the experimental process. This is the system we will use. This website describes the style, content, and format associated with each section.

The sections appear in a journal-style paper in the following prescribed order:

  Experimental process                Section of Paper
  What did I do in a nutshell?        Abstract
  What is the problem?                Introduction
  How did I solve the problem?        Materials and Methods
  What did I find out?                Results
  What does it mean?                  Discussion
  Who helped me out?                  Acknowledgments (optional)
  Whose work did I refer to?          Literature Cited
  Extra information                   Appendices (optional)

Section Headings:

Main Section Headings: Each main section of the paper begins with a heading, which should be capitalized, centered at the beginning of the section, and double-spaced from the lines above and below. Do not underline the section heading OR put a colon at the end.

Example of a main section heading:

INTRODUCTION

Subheadings: When your paper reports on more than one experiment, use subheadings to help organize the presentation. Subheadings should be capitalized (first letter in each word), left-justified, and either bold italics OR underlined.

Example of a subheading:

Effects of Light Intensity on the Rate of Electron Transport


Title, Authors' Names, and Institutional Affiliations

  • The title should be centered at the top of page 1 (DO NOT use a title page - it is a waste of paper for our purposes); the title is NOT underlined or italicized.
  • The authors' names (PI or primary author first) and institutional affiliation are double-spaced from and centered below the title. When there are more than two authors, the names are separated by commas, except for the last, which is separated from the previous name by the word "and" (see the sketch below this list).
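A tiny sketch of that authors-line rule (a hypothetical helper that simply restates the convention above in code):

```python
def format_authors(names):
    """Join author names with commas and 'and' before the final name."""
    if len(names) <= 1:
        return "".join(names)
    if len(names) == 2:
        return f"{names[0]} and {names[1]}"
    return ", ".join(names[:-1]) + " and " + names[-1]

print(format_authors(["A. Smith", "B. Jones", "C. Lee"]))
# A. Smith, B. Jones and C. Lee
```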

ABSTRACT

    The Abstract should include:

    • the question(s) you investigated (or purpose) (from the Introduction); state the purpose very clearly in the first or second sentence.
    • the basic design of the study, clearly expressed; name or briefly describe the basic methodology used, without going into excessive detail; be sure to indicate the key techniques used.
    • those results which answer the questions you were asking; identify trends, relative change or differences, etc.
    • the implications of the answers your results gave you, clearly stated.

    The Abstract should NOT include:

    • lengthy background information,
    • references to other literature,
    • elliptical (i.e., ending with …) or incomplete sentences,
    • abbreviations or terms that may be confusing to readers,
    • any sort of illustration, figure, or table, or references to them.

    INTRODUCTION

    • Establish the context of the work being reported. This is accomplished by discussing the relevant primary research literature (with citations) and summarizing our current understanding of the problem you are investigating
    • State the purpose of the work in the form of the hypothesis, question, or problem you investigated and,
    • Briefly explain your rationale and approach and, whenever possible, the possible outcomes your study can reveal.
    • Begin your Introduction by clearly identifying the subject area of interest. Do this by using key words from your Title in the first few sentences of the Introduction to get it focused directly on the topic at the appropriate level. This ensures that you get to the primary subject matter quickly without losing focus, or discussing information that is too general. For example, in the mouse behavior paper, the words hormones and behavior would likely appear within the first one or two sentences of the Introduction.
    • Establish the context by providing a brief and balanced review of the pertinent published literature that is available on the subject. The key is to summarize (for the reader) what we knew about the specific problem before you did your experiments or studies. This is accomplished with a general review of the primary research literature (with citations) but should not include very specific, lengthy explanations that you will probably discuss in greater detail later in the Discussion. The judgment of what is general or specific is difficult at first, but with practice and reading of the scientific literature you will develop a firmer sense of your audience. In the mouse behavior paper, for example, you would begin the Introduction at the level of mating behavior in general, then quickly focus to mouse mating behaviors and then hormonal regulation of behavior. Lead the reader to your statement of purpose/hypothesis by focusing your literature review from the more general context (the big picture, e.g., hormonal modulation of behaviors) to the more specific topic of interest to you (e.g., role/effects of reproductive hormones, especially estrogen, in modulating specific sexual behaviors of mice).
    • What literature should you look for in your review of what we know about the problem? Focus your efforts on the primary research journals - the journals that publish original research articles. Although you may read some general background references (encyclopedias, textbooks, lab manuals, style manuals, etc.) to get yourself acquainted with the subject area, do not cite these, because they contain information that is considered fundamental or "common" knowledge within the discipline. Cite, instead, articles that reported specific results relevant to your study. Learn, as soon as possible, how to find the primary literature (research journals) and review articles rather than depending on reference books. The articles listed in the Literature Cited of relevant papers you find are a good starting point to move backwards in a line of inquiry. Most academic libraries support the Citation Index - an index which is useful for tracking a line of inquiry forward in time. Some of the newer search engines will actually send you alerts of new papers that cite particular articles of interest to you. Review articles are particularly useful because they summarize all the research done on a narrow subject area over a brief period of time (a year to a few years in most cases).
• Be sure to clearly state the purpose and/or hypothesis that you investigated. When you are first learning to write in this format it is okay, and actually preferable, to use a pat statement like, "The purpose of this study was to…" or "We investigated three possible mechanisms to explain the… (1) blah, blah… (2) etc." It is most usual to place the statement of purpose near the end of the Introduction, often as the topic sentence of the final paragraph. It is not necessary (or even desirable) to use the words "hypothesis" or "null hypothesis", since these are usually implicit if you clearly state your purpose and expectations.
• Provide a clear statement of the rationale for your approach to the problem studied. For example: State briefly how you approached the problem (e.g., you studied oxidative respiration pathways in isolated mitochondria of cauliflower). This will usually follow your statement of purpose in the last paragraph of the Introduction. Why did you choose this kind of experiment or experimental design? What are the scientific merits of this particular model system? What advantages does it confer in answering the particular question(s) you are posing? Do not discuss here the actual techniques or protocols used in your study (this will be done in the Materials and Methods); your readers will be quite familiar with the usual techniques and approaches used in your field. If you are using a novel (new, revolutionary, never used before) technique or methodology, the merits of the new technique/method versus the previously used methods should be presented in the Introduction.

    MATERIALS AND METHODS

• the organism(s) studied (plant, animal, human, etc.) and, when relevant, their pre-experiment handling and care, and when and where the study was carried out (only if location and time are important factors); note that the term "subject" is used ONLY for human studies.
• if you did a field study, provide a description of the study site, including the significant physical and biological features, and the precise location (latitude and longitude, map, etc.)
• the experimental OR sampling design (i.e., how the experiment or study was structured. For example, controls, treatments, what variable(s) were measured, how many samples were collected, replication, the final form of the data, etc.)
• the protocol for collecting data, i.e., how the experimental procedures were carried out, and
• how the data were analyzed (qualitative analyses and/or statistical procedures used to determine significance, data transformations used, what probability was used to decide significance, etc.).
• NOTE: For laboratory studies you need not report the date and location of the study UNLESS it is necessary information for someone to have who might wish to repeat your work or use the same facility. Most often it is not. If you have performed experiments at a particular location or lab because it is the only place to do it, or one of a few, then you should note that in your methods and identify the lab or facility.
• NOTE: Very frequently the experimental design and data collection procedures for an experiment cannot be separated and must be integrated together. If you find yourself repeating lots of information about the experimental design when describing the data collection procedure(s), you can likely combine them and be more concise.
• NOTE: Although tempting, DO NOT say that you "recorded the data," i.e., in your lab notebook, in the Methods description. Of course you did, because that is what all good scientists do, and it is a given that you recorded your measurements and observations.
• Statistical software used: Sometimes it is necessary to report which statistical software you used; this is at the discretion of your instructor or the journal.
• how the data were summarized (means, percents, etc.) and how you are reporting measures of variability (SD, SEM, 95% CI, etc.)
  • this lets you avoid having to repeatedly indicate you are using mean ± SD or SEM.
  • any other numerical (e.g., normalizing data) or graphical techniques used to analyze the data
  • what probability (a priori) was used to decide significance, usually reported as the Greek letter alpha (see the sketch after this list).
  • NOTE: You DO NOT need to say that you made graphs and tables.
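To make these reporting conventions concrete, here is a minimal sketch, in Python with entirely invented data and names, of the kinds of analysis choices such a Methods passage would declare in prose: a normalization step, the summary convention (mean ± SD), and the a priori alpha.

```python
import numpy as np

ALPHA = 0.05  # a priori significance level ("alpha = 0.05")

# Invented raw measurements, normalized to the maximum value -- the kind
# of transformation a Methods section would mention in one clause.
raw = np.array([102.0, 98.5, 110.2, 95.1, 104.8])
normalized = raw / raw.max()

# Summarized once as mean ± SD, matching the convention stated in Methods.
print(f"{normalized.mean():.2f} ± {normalized.std(ddof=1):.2f} "
      f"(n={normalized.size}); significance judged at alpha = {ALPHA}")
```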

      Here is some additional advice on particular problems common to new scientific writers.

      Problem : The Methods section is prone to being wordy or overly detailed.

• Avoid repeatedly using a single sentence to relate a single action; this results in very lengthy, wordy passages. A related sequence of actions can be combined into one sentence to improve clarity and readability:

      Problematic Example : This is a very long and wordy description of a common, simple procedure. It is characterized by single actions per sentence and lots of unnecessary details.

      "The petri dish was placed on the turntable. The lid was then raised slightly. An inoculating loop was used to transfer culture to the agar surface. The turntable was rotated 90 degrees by hand. The loop was moved lightly back and forth over the agar to spread the culture. The bacteria were then incubated at 37 C for 24 hr."

      Improved Example : Same actions, but all the important information is given in a single, concise sentence. Note that superfluous detail and otherwise obvious information has been deleted while important missing information was added.

      "Each plate was placed on a turntable and streaked at opposing angles with fresh overnight E. coli culture using an inoculating loop. The bacteria were then incubated at 37 C for 24 hr."

      Best: Here the author assumes the reader has basic knowledge of microbiological techniques and has deleted other superfluous information. The two sentences have been combined because they are related actions.

      "Each plate was streaked with fresh overnight E. coli culture and incubated at 37 C for 24 hr."



      • Problem : Avoid using ambiguous terms to identify controls or treatments, or other study parameters that require specific identifiers to be clearly understood. Designators such as Tube 1, Tube 2, or Site 1 and Site 2 are completely meaningless out of context and difficult to follow in context.

      Problematic example : In this example the reader will have no clue as to what the various tubes represent without having to constantly refer back to some previous point in the Methods.

      " A Spec 20 was used to measure A 600 of Tubes 1,2, and 3 immediately after chloroplasts were added (Time 0) and every 2 min. thereafter until the DCIP was completely reduced. Tube 4's A 600 was measured only at Time 0 and at the end of the experiment."

Improved example: Notice how the substitution of explicit treatment and control identifiers clarifies the passage both in the context of the paper, and if taken out of context.

      "A Spec 20 was used to measure A 600 of the reaction mixtures exposed to light intensities of 1500, 750, and 350 uE/m2/sec immediately after chloroplasts were added (Time 0) and every 2 min. thereafter until the DCIP was completely reduced. The A 600 of the no-light control was measured only at Time 0 and at the end of the experiment."

      1. Function : The function of the Results section is to objectively present your key results, without interpretation, in an orderly and logical sequence using both text and illustrative materials (Tables and Figures). The results section always begins with text, reporting the key results and referring to your figures and tables as you proceed. Summaries of the statistical analyses may appear either in the text (usually parenthetically) or in the relevant Tables or Figures (in the legend or as footnotes to the Table or Figure). The Results section should be organized around Tables and/or Figures that should be sequenced to present your key findings in a logical order. The text of the Results section should be crafted to follow this sequence and highlight the evidence needed to answer the questions/hypotheses you investigated. Important negative results should be reported, too. Authors usually write the text of the results section based upon the sequence of Tables and Figures.

2. Style : Write the text of the Results section concisely and objectively. The passive voice will likely dominate here, but use the active voice as much as possible. Use the past tense. Avoid repetitive paragraph structures. Do not interpret the data here. The transition into interpretive language can be a slippery slope. Consider the following two examples. The first sticks to describing the observed differences:

      The duration of exposure to running water had a pronounced effect on cumulative seed germination percentages (Fig. 2). Seeds exposed to the 2-day treatment had the highest cumulative germination (84%), 1.25 times that of the 12-h or 5-day groups and 4 times that of controls.

• In contrast, this example strays subtly into interpretation by referring to optimality (a conceptual model) and tying the observed result to that idea:

      The results of the germination experiment (Fig. 2) suggest that the optimal time for running-water treatment is 2 days. This group showed the highest cumulative germination (84%), with longer (5 d) or shorter (12 h) exposures producing smaller gains in germination when compared to the control group.

      Things to consider as you write your Results section:

      What are the "results"? : When you pose a testable hypothesis that can be answered experimentally, or ask a question that can be answered by collecting samples, you accumulate observations about those organisms or phenomena. Those observations are then analyzed to yield an answer to the question. In general, the answer is the " key result".

The above statements apply regardless of the complexity of the analysis you employ. So, in an introductory course your analysis may consist of visual inspection of figures and simple calculations of means and standard deviations; in a later course you may be expected to apply and interpret a variety of statistical tests. Your instructor will tell you the level of analysis that is expected.

For example, suppose you asked the question, "Is the average height of male students the same as that of female students in a pool of randomly selected Biology majors?" You would first collect height data from large random samples of male and female students. You would then calculate the descriptive statistics for those samples (mean, SD, n, range, etc.) and plot these numbers. In a course where statistical tests are not employed, you would visually inspect these plots. Suppose you found that male Biology majors are, on average, 12.5 cm taller than female majors; this is the answer to the question.

• Notice that the outcome of a statistical analysis is not a key result, but rather an analytical tool that helps us understand what our key result is.
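As an illustration of moving from observations to a key result, here is a hedged sketch of the height example above. The numbers are simulated, not real data; the point is the workflow of summarizing each sample and stating the difference in means.

```python
import numpy as np

# Simulated heights (cm); the population parameters are invented so the
# example is self-contained. In a real study these would be measurements.
rng = np.random.default_rng(0)
males = rng.normal(180.5, 5.1, size=34)
females = rng.normal(168.0, 7.6, size=34)

for label, sample in [("males", males), ("females", females)]:
    print(f"{label}: mean={sample.mean():.1f} cm, SD={sample.std(ddof=1):.1f} cm, "
          f"n={sample.size}, range={sample.min():.1f}-{sample.max():.1f} cm")

# The key result is the answer to the question: the difference in means.
print(f"difference in means: {males.mean() - females.mean():.1f} cm")
```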

Differences, directionality, and magnitude : Report your results so as to provide as much information as possible to the reader about the nature of differences or relationships. For example, if you are testing for differences among groups, and you find a significant difference, it is not sufficient to simply report that "groups A and B were significantly different". How are they different? How much are they different? It is much more informative to say something like, "Group A individuals were 23% larger than those in Group B", or, "Group B pups gained weight at twice the rate of Group A pups." Report the direction of differences (greater, larger, smaller, etc.) and the magnitude of differences (% difference, how many times, etc.) whenever possible. See also below about use of the word "significant."

Organize the results section based on the sequence of Tables and Figures you'll include. Prepare the Tables and Figures as soon as all the data are analyzed and arrange them in the sequence that best presents your findings in a logical way. A good strategy is to note, on a draft of each Table or Figure, the one or two key results you want to address in the text portion of the Results. Simple rules to follow related to Tables and Figures:

      • Tables and Figures are assigned numbers separately and in the sequence that you will refer to them from the text.
        • The first Table you refer to is Table 1, the next Table 2 and so forth.
        • Similarly, the first Figure is Figure 1, the next Figure 2, etc.
        • Each Table or Figure must include a brief description of the results being presented and other necessary information in a legend.
• Table legends go above the Table; tables are read from top to bottom.
• Figure legends go below the figure; figures are usually viewed from bottom to top.
• When referring to a Figure from the text, "Figure" is abbreviated as Fig., e.g., Fig. 1. Table is never abbreviated, e.g., Table 1.

The body of the Results section is a text-based presentation of the key findings which includes references to each of the Tables and Figures. The text should guide the reader through your results, stressing the key results which provide the answers to the question(s) investigated. A major function of the text is to provide clarifying information. You must refer to each Table and/or Figure individually and in sequence (see numbering sequence), and clearly indicate for the reader the key results that each conveys. Key results depend on your questions; they might include obvious trends, important differences, similarities, correlations, maximums, minimums, etc.

          • Do not reiterate each value from a Figure or Table - only the key result or trends that each conveys.
          • Do not present the same data in both a Table and Figure - this is considered redundant and a waste of space and energy. Decide which format best shows the result and go with it.
          • Do not report raw data values when they can be summarized as means, percents, etc.

Statistical test summaries (test name, p-value) are usually reported parenthetically in conjunction with the biological results they support. Always report your results with parenthetical reference to the statistical conclusion that supports your finding (if statistical tests are being used in your course). This parenthetical reference should include the statistical test used and the level of significance (test statistic and DF are optional). For example, if you found that the mean height of male Biology majors was significantly larger than that of female Biology majors, you might report this result and your statistical conclusion as follows:

          "Males (180.5 ± 5.1 cm n=34) averaged 12.5 cm taller than females (168 ± 7.6 cm n=34) in the AY 1995 pool of Biology majors (two-sample t-test, t = 5.78, 33 d.f., p < 0.001) ."

          If the summary statistics are shown in a figure, the sentence above need not report them specifically, but must include a reference to the figure where they may be seen:

          "Males averaged 12.5 cm taller than females in the AY 1995 pool of Biology majors (two-sample t-test, t = 5.78, 33 d.f., p < 0.001 Fig. 1) ."

Note that the report of the key result would be identical in a paper written for a course in which statistical testing is not employed; the statistical portion would simply not appear, except for the reference to the figure.
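For writers generating such sentences from their own analyses, the following sketch shows one way to produce the parenthetical report programmatically. The data are simulated to resemble the quoted summary statistics, so the computed t and p will only approximate the quoted values, and the degrees of freedom are those of the standard equal-variance two-sample test (n1 + n2 - 2).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
males = rng.normal(180.5, 5.1, size=34)
females = rng.normal(168.0, 7.6, size=34)

t, p = stats.ttest_ind(males, females)  # two-sample t-test, equal variances
print(
    f"Males ({males.mean():.1f} ± {males.std(ddof=1):.1f} cm; n=34) averaged "
    f"{males.mean() - females.mean():.1f} cm taller than females "
    f"({females.mean():.1f} ± {females.std(ddof=1):.1f} cm; n=34) "
    f"(two-sample t-test, t = {t:.2f}, {males.size + females.size - 2} d.f., "
    f"p = {p:.2g})."
)
```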

          • Avoid devoting whole sentences to report a statistical outcome alone.
• Use and over-use of the word "significant": Your results will read much more cleanly if you avoid overuse of the word significant in any of its forms.
• In scientific studies, the use of this word implies that a statistical test was employed to make a decision about the data; in this case the test indicated a larger difference in mean heights than you would expect to get by chance alone. Limit the use of the word "significant" to this purpose only.
• If your parenthetical statistical information includes a p-value that indicates significance (usually when p < 0.05), it is unnecessary (and redundant) to use the word "significant" in the body of the sentence (see example above) because we all interpret the p-value the same way.
• Likewise, when you report that one group mean is somehow different from another (larger, smaller, increased, decreased, etc.), it will be understood by your reader that you have tested this and found the difference to be statistically significant, especially if you also report a p-value < 0.05.

            Present the results of your experiment(s) in a sequence that will logically support (or provide evidence against) the hypothesis, or answer the question, stated in the Introduction. For example, in reporting a study of the effect of an experimental diet on the skeletal mass of the rat, consider first giving the data on skeletal mass for the rats fed the control diet and then give the data for the rats fed the experimental diet.

            Report negative results - they are important! If you did not get the anticipated results, it may mean your hypothesis was incorrect and needs to be reformulated, or perhaps you have stumbled onto something unexpected that warrants further study. Moreover, the absence of an effect may be very telling in many situations. In any case, your results may be of importance to others even though they did not support your hypothesis. Do not fall into the trap of thinking that results contrary to what you expected are necessarily "bad data". If you carried out the work well, they are simply your results and need interpretation. Many important discoveries can be traced to "bad data".

            Always enter the appropriate units when reporting data or summary statistics.

• for an individual value you would write, "the mean length was 10 m", or, "the maximum time was 140 min."
• When including a measure of variability, place the unit after the error value, e.g., "…was 10 ± 2.3 m".
• Likewise place the unit after the last in a series of numbers all having the same unit. For example: "lengths of 5, 10, 15, and 20 m", or "no differences were observed after 2, 4, 6, or 8 min. of incubation" (as in the sketch following this list).
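A tiny, purely illustrative helper makes the unit-placement rules mechanical; the function names here are invented for this sketch.

```python
def with_error(value: float, error: float, unit: str) -> str:
    """Unit appears once, after the error value: '10 ± 2.3 m'."""
    return f"{value} ± {error} {unit}"

def with_series(values: list, unit: str) -> str:
    """Unit appears once, after the last number: '5, 10, 15, and 20 m'."""
    *head, last = [str(v) for v in values]
    return f"{', '.join(head)}, and {last} {unit}"

print(with_error(10, 2.3, "m"))           # was 10 ± 2.3 m
print(with_series([5, 10, 15, 20], "m"))  # lengths of 5, 10, 15, and 20 m
```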

            DISCUSSION

            1. Function : The function of the Discussion is to interpret your results in light of what was already known about the subject of the investigation, and to explain our new understanding of the problem after taking your results into consideration. The Discussion will always connect to the Introduction by way of the question(s) or hypotheses you posed and the literature you cited, but it does not simply repeat or rearrange the Introduction. Instead, it tells how your study has moved us forward from the place you left us at the end of the Introduction.

            Fundamental questions to answer here include:

            • Do your results provide answers to your testable hypotheses? If so, how do you interpret your findings?
• Do your findings agree with what others have shown? If not, do they suggest an alternative explanation or perhaps an unforeseen design flaw in your experiment (or theirs)?
            • Given your conclusions, what is our new understanding of the problem you investigated and outlined in the Introduction?
            • If warranted, what would be the next step in your study, e.g., what experiments would you do next?

2. Style : Use the active voice whenever possible in this section. Watch out for wordy phrases; be concise and make your points clearly. Use of the first person is okay, but too much use of the first person may actually distract the reader from the main points.

3. Approach : Organize the Discussion to address each of the experiments or studies for which you presented results; discuss each in the same sequence as presented in the Results, providing your interpretation of what they mean in the larger context of the problem. Do not waste entire sentences restating your results; if you need to remind the reader of the result to be discussed, use "bridge sentences" that relate the result to the interpretation:

            "The slow response of the lead-exposed neurons relative to controls suggests that. [ interpretation ]".

You will necessarily make reference to the findings of others in order to support your interpretations. Use subheadings, if need be, to help organize your presentation. Be wary of mistaking the reiteration of a result for an interpretation, and make sure that no new results are presented here that rightly belong in the Results.

            You must relate your work to the findings of other studies - including previous studies you may have done and those of other investigators. As stated previously, you may find crucial information in someone else's study that helps you interpret your own data, or perhaps you will be able to reinterpret others' findings in light of yours. In either case you should discuss reasons for similarities and differences between yours and others' findings. Consider how the results of other studies may be combined with yours to derive a new or perhaps better substantiated understanding of the problem. Be sure to state the conclusions that can be drawn from your results in light of these considerations. You may also choose to briefly mention further studies you would do to clarify your working hypotheses. Make sure to reference any outside sources as shown in the Introduction section.

Do not introduce new results in the Discussion. Although you might occasionally include in this section tables and figures which help explain something you are discussing, they must not contain new data (from your study) that should have been presented earlier. They might be flow diagrams, accumulation of data from the literature, or something that shows how one type of data leads to or correlates with another, etc. For example, if you were studying a membrane-bound transport channel and you discovered a new bit of information about its mechanism, you might present a diagram showing how your findings help to explain the channel's mechanism.

ACKNOWLEDGMENTS (include as needed)

            If, in your experiment, you received any significant help in thinking up, designing, or carrying out the work, or received materials from someone who did you a favor by supplying them, you must acknowledge their assistance and the service or material provided. Authors always acknowledge outside reviewers of their drafts (in PI courses, this would be done only if an instructor or other individual critiqued the draft prior to evaluation) and any sources of funding that supported the research. Although usual style requirements (e.g., 1st person, objectivity) are relaxed somewhat here, Acknowledgments are always brief and never flowery.

            LITERATURE CITED

1. Function : The Literature Cited section gives an alphabetical listing (by first author's last name) of the references that you actually cited in the body of your paper. Instructions for writing full citations for various sources are given on a separate page. A complete format list for virtually all types of publication may be found in Huth and others (1994).

            NOTE : Do not label this section "Bibliography" . A bibliography contains references that you may have read but have not specifically cited in the text. Bibliography sections are found in books and other literary writing, but not scientific journal-style papers.

            APPENDICES


            Function : An Appendix contains information that is non-essential to understanding of the paper, but may present information that further clarifies a point without burdening the body of the presentation. An appendix is an optional part of the paper, and is only rarely found in published papers.

            Headings : Each Appendix should be identified by a Roman numeral in sequence, e.g., Appendix I, Appendix II, etc. Each appendix should contain different material.

            Some examples of material that might be put in an appendix (not an exhaustive list) :

            • raw data
            • maps (foldout type especially)
            • extra photographs
            • explanation of formulas, either already known ones, or especially if you have "invented" some statistical or other mathematical procedures for data analysis.
            • specialized computer programs for a particular procedure
            • full generic names of chemicals or compounds that you have referred to in somewhat abbreviated fashion or by some common name in the text of your paper.
• diagrams of specialized apparatus.

            Figures and Tables in Appendices

            Figures and Tables are often found in an appendix. These should be formatted as discussed previously (see Tables and Figures), but are numbered in a separate sequence from those found in the body of the paper. So, the first Figure in the appendix would be Figure 1, the first Table would be Table 1, and so forth. In situations when multiple appendices are used, the Table and Figure numbering must indicate the appendix number as well (see Huth and others, 1994).

            Department of Biology, Bates College, Lewiston, ME 04240


            Experimentation in modern practice

            Like all scientific research, the results of experiments are shared with the scientific community, are built upon, and inspire additional experiments and research. For example, once Alhazen established that light given off by objects enters the human eye, the natural question that was asked was "What is the nature of light that enters the human eye?" Two common theories about the nature of light were debated for many years. Sir Isaac Newton was among the principal proponents of a theory suggesting that light was made of small particles. The English naturalist Robert Hooke (who held the interesting title of Curator of Experiments at the Royal Society of London) supported a different theory stating that light was a type of wave, like sound waves. In 1801, Thomas Young conducted a now classic scientific experiment that helped resolve this controversy. Young, like Alhazen, worked in a darkened room and allowed light to enter only through a small hole in a window shade (Figure 5). Young refocused the beam of light with mirrors and split the beam with a paper-thin card. The split light beams were then projected onto a screen, and formed an alternating light and dark banding pattern – that was a sign that light was indeed a wave (see our Light I: Particle or Wave? module).

            Figure 5: Young's split-light beam experiment helped clarify the wave nature of light.

            Approximately 100 years later, in 1905, new experiments led Albert Einstein to conclude that light exhibits properties of both waves and particles. Einstein's dual wave-particle theory is now generally accepted by scientists.

Experiments continue to help refine our understanding of light even today. In addition to his wave-particle theory, Einstein also proposed that the speed of light was unchanging and absolute. Yet in 1998 a group of scientists led by Lene Hau showed that light could be slowed from its normal speed of 3 × 10⁸ meters per second to a mere 17 meters per second with a special experimental apparatus (Hau et al., 1999). The series of experiments that began with Alhazen's work 1000 years ago has led to a progressively deeper understanding of the nature of light. Although the tools with which scientists conduct experiments may have become more complex, the principles behind controlled experiments are remarkably similar to those used by Pasteur and Alhazen hundreds of years ago.

            Summary

            Manipulating and controlling variables are key aspects that set experimentation apart from other scientific research methods. This module highlights the principles of experimentation through examples from history, including the work of Alhazen in 1000 CE and Louis Pasteur in the 1860s.

            Key Concepts

            Experimentation is a research method in which one or more variables are consciously manipulated and the outcome or effect of that manipulation on other variables is observed.

            Experimental designs often make use of controls that provide a measure of variability within a system and a check for sources of error.

            Experimental methods are commonly applied to determine causal relationships or to quantify the magnitude of response of a variable.


            In science and engineering, there are essentially two ways of repairing faults and solving problems.

            Reactive management consists in reacting quickly after the problem occurs, by treating the symptoms. This type of management is implemented by reactive systems, [3] [4] self-adaptive systems, [5] self-organized systems, and complex adaptive systems. The goal here is to react quickly and alleviate the effects of the problem as soon as possible.

            Proactive management, conversely, consists in preventing problems from occurring. Many techniques can be used for this purpose, ranging from good practices in design to analyzing in detail problems that have already occurred, and taking actions to make sure they never reoccur. Speed is not as important here as the accuracy and precision of the diagnosis. The focus is on addressing the real cause of the problem rather than its effects.

            Root-cause analysis is often used in proactive management to identify the root cause of a problem, that is, the factor that was the main cause of that problem.

            It is customary to refer to the root cause in singular form, but one or several factors may in fact constitute the root cause(s) of the problem under study.

            A factor is considered the root cause of a problem if removing it prevents the problem from recurring. A causal factor, conversely, is one that affects an event's outcome, but is not the root cause. Although removing a causal factor can benefit an outcome, it does not prevent its recurrence with certainty.

Examples

            Imagine an investigation into a machine that stopped because it overloaded and the fuse blew. [6] Investigation shows that the machine overloaded because it had a bearing that wasn't being sufficiently lubricated. The investigation proceeds further and finds that the automatic lubrication mechanism had a pump which was not pumping sufficiently, hence the lack of lubrication. Investigation of the pump shows that it has a worn shaft. Investigation of why the shaft was worn discovers that there isn't an adequate mechanism to prevent metal scrap getting into the pump. This enabled scrap to get into the pump, and damage it.

            The apparent root cause of the problem is therefore that metal scrap can contaminate the lubrication system. Fixing this problem ought to prevent the whole sequence of events recurring. The real root cause could be a design issue if there is no filter to prevent the metal scrap getting into the system. Or if it has a filter that was blocked due to lack of routine inspection, then the real root cause is a maintenance issue.

            Compare this with an investigation that does not find the root cause: replacing the fuse, the bearing, or the lubrication pump will probably allow the machine to go back into operation for a while. But there is a risk that the problem will simply recur, until the root cause is dealt with.
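The logic of this example can be sketched in a few lines of Python, under the simplifying assumption that each investigative "why" is recorded as a single effect-to-cause link; the (apparent) root cause is then simply the end of the chain. The wording of the entries is condensed from the example above.

```python
# Each investigative "why" recorded as effect -> immediate cause.
immediate_cause = {
    "machine stopped": "fuse blew",
    "fuse blew": "machine overloaded",
    "machine overloaded": "bearing insufficiently lubricated",
    "bearing insufficiently lubricated": "lubrication pump under-performing",
    "lubrication pump under-performing": "pump shaft worn",
    "pump shaft worn": "metal scrap can enter the pump",
}

def apparent_root_cause(problem: str) -> str:
    """Follow the chain until no further cause is recorded."""
    while problem in immediate_cause:
        problem = immediate_cause[problem]
    return problem

print(apparent_root_cause("machine stopped"))
# -> "metal scrap can enter the pump"
# Replacing the fuse, bearing, or pump fixes a causal factor: the machine
# restarts, but the chain can replay until the scrap problem is removed.
```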

Cost-benefit analysis

The above doesn't include cost/benefit analysis: does the cost of modifying one or more machines to add a filter exceed the cost of the downtime until the fuse is replaced? This situation is sometimes referred to as the cure being worse than the disease. [7] [8] A different illustration of the cure being worse than the disease is the tradeoff between the claimed benefits of population decline, which in the short term leaves fewer payers in pension/retirement systems, and halting that decline while conceding the need for higher taxes to cover the cost of building more schools. [9]

            Root-cause analysis is used in many application domains.

Manufacturing and industrial process control

            The example above illustrates how RCA can be used in manufacturing. RCA is also routinely used in industrial process control, e.g. to control the production of chemicals (quality control).

IT and telecommunications

            Root-cause analysis is frequently used in IT and telecommunications to detect the root causes of serious problems. For example, in the ITIL service management framework, the goal of incident management is to resume a faulty IT service as soon as possible (reactive management), whereas problem management deals with solving recurring problems for good by addressing their root causes (proactive management).

            Another example is the computer security incident management process, where root-cause analysis is often used to investigate security breaches. [10]

            RCA is also used in conjunction with business activity monitoring and complex event processing to analyze faults in business processes.

Health and safety

            In the domains of health and safety, RCA is routinely used in medicine (diagnosis) and epidemiology (e.g., to identify the source of an infectious disease), where causal inference methods often require both clinical and statistical expertise to make sense of the complexities of the processes. [11]

            RCA is also used in environmental science (e.g., to analyze environmental disasters), accident analysis (aviation and rail industry), and occupational safety and health. [12] In the manufacture of medical devices, [13] pharmaceuticals, [14] food, [15] and dietary supplements, [16] root cause analysis is a regulatory requirement.

Systems analysis

            Despite the different approaches among the various schools of root cause analysis and the specifics of each application domain, RCA generally follows the same four steps:

            1. Identification and description: Effective problem statements and event descriptions (as failures, for example) are helpful and usually required to ensure the execution of appropriate root cause analyses.
            2. Chronology: RCA should establish a sequence of events or timeline for understanding the relationships between contributory (causal) factors, the root cause, and the problem under investigation.
            3. Differentiation: By correlating this sequence of events with the nature, the magnitude, the location, and the timing of the problem, and possibly also with a library of previously analyzed problems, RCA should enable the investigator(s) to distinguish between the root cause, causal factors, and non-causal factors. One way to trace down root causes consists in using hierarchical clustering and data-mining solutions (such as graph-theory-based data mining). Another consists in comparing the situation under investigation with past situations stored in case libraries, using case-based reasoning tools.
            4. Causal graphing: Finally, the investigator should be able to extract from the sequences of events a subsequence of key events that explain the problem, and convert it into a causal graph.

            To be effective, root cause analysis must be performed systematically. A team effort is typically required. For aircraft accident analyses, for example, the conclusions of the investigation and the root causes that are identified must be backed up by documented evidence. [17]
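As a toy illustration of step 4 (not a method prescribed by any particular RCA school), suppose the correlated event sequence has already been reduced to effect-to-cause edges in steps 2 and 3; a "root" is then any cause with no recorded cause of its own, and several roots may coexist. All names below are invented.

```python
# effect -> list of contributing causes, assumed already extracted from
# the event timeline.
causes = {
    "service outage": ["disk full", "failover misconfigured"],
    "disk full": ["log rotation disabled"],
}

def roots(effect: str) -> set[str]:
    """Walk the causal graph upward; causes with no recorded cause are roots."""
    parents = causes.get(effect, [])
    if not parents:
        return {effect}
    return set().union(*(roots(p) for p in parents))

print(roots("service outage"))
# -> {'log rotation disabled', 'failover misconfigured'}
# More than one root can emerge, matching the note below that a problem
# may have several root causes.
```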

Transition to corrective actions

The goal of RCA is to identify the root cause of the problem. The next step is to trigger long-term corrective actions to address the root cause identified during RCA and make sure that the problem does not resurface. Correcting the problem is not formally part of RCA; it belongs to the subsequent steps of a problem-solving process, known as fault management in IT and telecommunications, repair in engineering, remediation in aviation, environmental remediation in ecology, therapy in medicine, etc.

Without delving into the idiosyncrasies of specific problems, several general conditions can make RCA more difficult than it may appear at first sight.

            First, important information is often missing because it is generally not possible, in practice, to monitor everything and store all monitoring data for a long time.

Second, gathering data and evidence, and classifying them along a timeline of events up to the final problem, can be nontrivial. In telecommunications, for instance, distributed monitoring systems typically manage between a million and a billion events per day. Finding a few relevant events in such a mass of irrelevant ones is like looking for the proverbial needle in a haystack.

            Third, there may be more than one root cause for a given problem, and this multiplicity can make the causal graph very difficult to establish.

Fourth, causal graphs often have many levels, and root-cause analysis terminates at a level that is "root" in the eyes of the investigator. Looking again at the machine example above, a deeper investigation could reveal that the maintenance procedures at the plant included periodic inspection of the lubrication subsystem every two years, while the current lubrication subsystem vendor's product specified a 6-month period. Switching vendors may have been due to management's desire to save money, and a failure to consult with engineering staff on the implications of the change for maintenance procedures. Thus, while the "root cause" shown above may have prevented the quoted recurrence, it would not have prevented other, perhaps more severe, failures affecting other machines.


            Common Questions about Science and “Alternative” Health Methods

            The scientific method is a set of tools for thinking about and investigating the natural world. Scientists make hypotheses about how the world works and then conduct experiments to test them. To be testable, hypotheses must be falsifiable. That is, it must be possible to design tests that can either support them or refute them.

            Q. What does that have to do with health?

            A. The scientific community seeks to test the validity of ideas about the nature and treatment of disease. Judgments are based on the scientific method. Over the last 150 years, most of the progress in medicine—and all the other sciences—has resulted from its use.

            Q. Who makes the judgments?

            A. Scientists who conduct experiments they consider significant usually report their results to a peer-reviewed journal. The journal editor sends copies to other scientists who are experts in the same field. They check whether the work is accurate, up-to-date, and adheres to the principles of scientific investigation. The paper is then accepted, rejected, or returned to the author with suggestions for revision. Peer review thus serves as a tool for weeding out sloppy work and unwarranted conclusions. Publication in a peer-reviewed journal indicates that the paper has met that journal’s standards. Of course, not all journals enjoy equal status in the scientific community. Publication by a journal like Nature, Science, the New England Journal of Medicine, or JAMA (Journal of the American Medical Association) is quite a feather in a scientist’s cap!

            Q. What about testimonials? Can’t personal experience demonstrate what works?

            A. “Testimonials” are personal accounts of someone’s experiences with a therapy. They are generally subjective: “I felt better,” “I had more energy,” “I wasn’t as nauseated,” “The pain went away,” and so on. Testimonials are inherently selective. People are much more likely to talk about their “amazing cure” than about something that didn’t work for them. The proponents of “alternative” methods can, of course, pick which testimonials they use. For example, let’s suppose that if 100 people are sick, 50 of them will recover on their own even if they do nothing. So, if all 100 people use a certain therapy, half will get better even if the treatment doesn’t do anything. These people could say “I took therapy X and my disease went away!” This would be completely honest, even though the therapy had done nothing for them. So, testimonials are useless for judging treatment effectiveness. For all we know, those giving the testimonial might be the only people who felt better. Or, suppose that of 100 patients trying a therapy, 10 experienced no change, 85 felt worse, and 5 felt better. The five who improved could quite honestly say that they felt better, even though nearly everyone who tried the remedy stayed the same or got worse!
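The arithmetic of that scenario is easy to check with a short simulation. This is illustrative only: the 50% spontaneous recovery rate is taken from the example above, and the "therapy" is assumed to do nothing at all.

```python
import random

random.seed(1)
# 100 sick people; each has a 50% chance of recovering on their own,
# exactly as in the example above. The therapy has zero effect.
recovered = [random.random() < 0.5 for _ in range(100)]

testimonials = sum(recovered)  # only the recoveries speak up
print(f"{testimonials} glowing testimonials for a therapy with zero effect")
# Without the denominator (everyone who tried it) and a control group,
# a pile of honest testimonials tells us nothing about effectiveness.
```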

            Q. I still don’t see how scientists can be any more accurate. Aren’t they just offering their observations as “testimonials”? How do we know they aren’t mistaken?

            A. Scientists use randomized controlled trials (RCTs) to solve this problem. RCTs examine groups of patients and use statistics to determine what works. To make reliable conclusions, scientists use several ‘rules’:

            Inclusion criteria must be strict. That is, they make certain that the people studied actually have the condition they are trying to treat. If you’re trying a new remedy for cancer, but you don’t in fact have cancer, your experience won’t be very helpful to those who do.

            All (or nearly all) the people in the trial must be accounted for. We can see why this is important if we return to our example of the disease in which 5% of the people get better. If you just hear about the 5 people who got better, you might be convinced that the therapy is a great idea. But, what if the other 95 people given the therapy got worse than they would have without it? Suddenly, the 5% doesn’t look quite so rosy!

            The people being treated are compared to a control group. This lets us compare the group getting the therapy with patients not getting the therapy. For example, if in our example 5% of the treatment group got better and 5% of an untreated control group got better, we could conclude that the therapy was ineffective. If 5% in the treatment group got better but 10% in the control group got better, we might decide that the therapy was actually causing harm. Notice that even when this study demonstrates harm (twice as many people get better without the remedy as with it), there still could be some people who could testify they were cured!

Finally, randomized controlled trials aim for objectivity. Scientists try to measure the progress of the disease without referring only to how the patient "feels," since feelings can change even if the disease is staying the same or getting worse. To increase objectivity, patients are assigned randomly to the control or treatment groups, which avoids the bias of putting patients whom the scientist thinks will do well into the treatment group. Ideally, neither scientists nor patients should know who receives what until the experiment is completed, a setup called "double-blind" testing.

            Q. Why all the concern about the control group and the random assignment? Wouldn’t it be simpler to just give the treatment to the patients and see what happens to them? After all, we know that the people without the treatment won’t get any better!

A. Good point. This issue is at the heart of the RCT. In the 1950s, scientists found that roughly one out of three patients would feel improved even when given a pharmacologically inert substance such as a sugar pill. This is called "the placebo effect." The way we perceive our body's experiences can be altered by our state of mind and our beliefs. The number of people who respond to placebos can be even higher, especially if the patient or the doctor giving the treatment fervently believes it will work. This is why we use a "control" or "placebo" group: the group being tested gets, say, the pill we want to study, and the control group gets a sugar pill. Both groups might show some improvement, but if they both improve by the same amount, then we conclude that this is from the placebo effect. We randomize patients to one group or the other so that the scientist cannot influence who gets which therapy. As mentioned, the beliefs of the doctor giving the treatment can increase the placebo effect, so blinding ensures they will treat everyone equally.
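A short simulation shows why comparing arms matters. It is illustrative only: the one-in-three placebo figure is taken from the paragraph above, and the "drug" is assumed inert. Both groups improve, and only the comparison reveals that the therapy adds nothing.

```python
import random

random.seed(2)
PLACEBO_RATE = 1 / 3   # the 1950s figure cited above
DRUG_EFFECT = 0.0      # assumption: the therapy under test is inert

def improved(on_drug: bool) -> bool:
    """One patient's outcome: placebo response plus any real drug effect."""
    return random.random() < PLACEBO_RATE + (DRUG_EFFECT if on_drug else 0.0)

# Random assignment keeps the two arms comparable.
arms = {"treatment": [], "placebo": []}
for _ in range(1000):
    arm = random.choice(list(arms))
    arms[arm].append(improved(arm == "treatment"))

for arm, outcomes in arms.items():
    print(f"{arm}: {100 * sum(outcomes) / len(outcomes):.1f}% improved")
# Both arms improve at roughly the same rate, so the improvement is the
# placebo effect; only the control-group comparison makes that visible.
```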

            Q. Are you saying that testimonials aren’t good for anything?

A. No. Testimonials can be great places to start looking for answers, but they should not be considered the end of the journey. Many scientific discoveries start with an observation that leads to a hypothesis that eventually can be tested with a randomized controlled trial. However, people who offer only testimonials probably have little better to offer. After all, it is possible to get a testimonial from someone for nearly anything. In the early 20th century, quack doctors sold medicines that were radioactive or gave patients bits of radioactive metal to wear near their skin. Many patients gave enthusiastic testimonials. They may have sincerely felt they were better, but experience showed that it wasn't doing them any favors; it ultimately made them much worse.

            Q. It sounds like you are suggesting that scientists are much wiser and smarter than other ‘normal’ people.

            A. Just the opposite. The scientific method is not a way of saying that scientists have all the answers. Scientists use it because they realize how easy it is to be deceived or to fool ourselves even without knowing it, especially when we dearly want something to be true. That’s why science always tests remedies in a way that could show that they were ineffective. We should all be open to the fact that we could be wrong, and design our tests accordingly.

            “Freedom of Choice”

            Q. OK, I can see why scientists work the way they do. But this process takes time—don’t you think that some sick people that aren’t being helped by scientific medicine get impatient and want to try something else?

            A. You hit the nail on the head. This is the issue for the patient, which is why I have considerable sympathy for those who seek out dubious therapies. However, I have less for those that peddle them without being totally honest and forthright. The key issue for the patient is, “What will help ME?” However, physicians, policymakers, and society have a somewhat different question. Society must deliver the best possible health care to the largest number of people, in a timely fashion, with only limited resources. So, it should attempt (through the scientific method) to determine which therapies are effective. Of course patients are free to do anything they like, but should society, insurance companies, etc. have to pay for anything that patients decide they want? If I decide that bathing in water filled with gold dust is the cure for my ailment, should you [as a taxpayer or insurance purchaser] have to foot the bill if the process doesn’t work? We also expect our doctors to give us good and reliable health care, both ethically and legally. Should physicians be held professionally or criminally responsible if they do not try, say, coffee enemas for cancer just because someone claims they help? Intelligent choice depends on the ability to separate what works from what is merely wishful thinking. The scientific method offers the best way to do this.

            Q. Even with the scientific method, isn’t it possible that scientists could be motivated by unethical desires in getting their therapies proved? Don’t try to tell me that drug companies aren’t equally interested in getting their drugs marketed and accepted!

A. Exactly. That's why claims must be backed by evidence. The scientific approach is designed to weed out ideas that we wish were true, but aren't. Of course drug companies want their drugs used and sold, but they have to prove that they work and are safe enough. Why shouldn't every therapy be held to the same standard? Anyone profiting from a remedy or cure has a vested interest in selling it. Scientific scrutiny is the only way to know if we are getting our money's worth. A return to the days of unregulated health care (as in the 19th century) probably isn't in anyone's interest except those who want to make money without proving they are providing good value and honest advertising.

            Q. Why shouldn’t we have a system in which people can go anywhere they want to get the health treatment they want?

            A. Most people would say that we have such a system. The tougher question is who should pay for unproven treatments. What are the limits of that which should be covered by insurance or government intervention? Science is the only objective standard on which we can all base measurements.

            Q. If I am looking for new shoes or a new car, I have many choices. Why should government regulate or interfere with the claims made for various health-care approaches?

A. The free market does permit people to seek out anyone or anything they want for health care (assuming it is legal!). There are several differences between buying a pair of shoes or a car and choosing health care.

It is fairly clear what cars or shoes do when we examine them. We can try on the shoes, test drive the car, compare their appearance or specifications, etc. The trial-and-error period of deciding can be postponed indefinitely, and we are even expected (in the absence of fraud) to satisfy ourselves about what we are purchasing. Indeed, the legal maxim "buyer beware" (caveat emptor) presupposes such an approach.

The need for health care is not readily controllable or postponable. People don't plan to be hurt or sick, but it happens. We need the help now, not next week after we've shopped around a bit. And time is often of the essence: trying on five pairs of wrong shoes before we find one that fits is no loss; trying five useless therapies before hitting on the right one isn't so great a proposition! Furthermore, we cannot control the type of disease that we have and the treatment we will require. If I buy a car, I can settle for a Ford Pinto if I can't afford a Mercedes. People in liver failure can't decide to "settle" for a few aspirin if that's all they can afford.

            Treating disease always has an element of uncertainty. Scientific health care is based on a statistical approach that determines which therapies offer the greatest odds of helping. Because diseases can wax and wane, and because the body has a marvelous ability to heal itself, it is very difficult to determine through one person’s experience whether a therapy should be recommended to everyone.

            It is difficult for non-experts (which is almost all of us, in most areas) to make intelligent decisions about healthcare. The body is mysterious to many people, and biology is probably among the most complex sciences. We depend on people with years of training and experience (hopefully) to advise us in areas in which we do not have the time, means, education, or (in some cases) even the consciousness to learn enough to make a truly informed and rational decision.

We count on our physicians to select what we need to understand to make a decision, and so we trust them to get it right. This is an enormous trust, which partly explains the respect given physicians and the abuse heaped on them when they fail us. Thus, there must be means in place to protect the public from those who would give inaccurate advice. The public is free to choose, but part of being "free" is the ability to clearly discern exactly what is being chosen.

            There is, in my view, a “social contract” when one goes to a medical doctor. We ought to be able to trust that we are getting the best currently proven therapy. Patients should not have to worry about whether the physicians they choose are quacks. If they choose to go elsewhere, that is their right—but free choice is hampered if patients have no way to distinguish between proven and unproven/disproven therapies. If science-based medicine doesn’t meet your needs, you are of course free to look elsewhere, but you should realize that you are entering less-charted waters with much less assurance of reliability. It might be worthwhile to ask why some who sell methods which aren’t proven want to blur or hide that distinction.

            Gregory Smith, B. Med. Sci., is a member of the MD class of 2000 at the University of Alberta (Edmonton). As of July 1, 2000, he will be a resident in Family Medicine at McGill University, Montreal. This article is based on exchanges on the healthfraud-discuss list, which is open to anyone who agrees to abide by its rules. Feedback to me is welcome.