8.7: Review Questions - Biology


1. Explain how selective and differential media work.

2. Mannitol salt agar (MSA) is selective for which bacteria? How does the medium select for this population?

3. A MacConkey plate inoculated with an unknown bacterium shows no growth. What can this tell you?

4. How does HE (Hektoen enteric) agar select for and differentiate bacterial groups?

5. What is the advantage of a multi-test system like the Enterotube?

6. Why is a combination of both phenotypic and genotypic methods ideal for identifying and characterizing a bacterium?




Reproducible science provides the critical standard by which published results are judged and central findings are either validated or refuted [1]. Reproducibility also allows others to build upon existing work and use it to test new ideas and develop methods. Advances over the years have resulted in the development of complex methodologies that allow us to collect ever increasing amounts of data. While repeating expensive studies to validate findings is often difficult, a whole host of other reasons have contributed to the problem of reproducibility [2, 3]. One such reason has been the lack of detailed access to underlying data and statistical code used for analysis, which can provide opportunities for others to verify findings [4, 5]. In an era rife with costly retractions, scientists have an increasing burden to be more transparent in order to maintain their credibility [6]. While post-publication sharing of data and code is on the rise, driven in part by funder mandates and journal requirements [7], access to such research outputs is still not very common [8, 9]. By sharing detailed and versioned copies of one’s data and code researchers can not only ensure that reviewers can make well-informed decisions, but also provide opportunities for such artifacts to be repurposed and brought to bear on new research questions.

Opening up access to the data and software, not just the final publication, is one of the goals of the open science movement. Such sharing can lower barriers and serve as a powerful catalyst to accelerate progress. In an era of limited funding, there is a need to leverage existing data and code to the fullest extent to solve both applied and basic problems. This requires that scientists share their research artifacts more openly, with reasonable licenses that encourage fair use while providing credit to original authors [10]. Besides overcoming the social challenges around these issues, existing technologies can also be leveraged to increase reproducibility.

All scientists use version control in one form or another at various stages of their research projects, from data collection all the way to manuscript preparation. This process is often informal and haphazard, with multiple revisions of papers, code, and datasets saved as duplicate copies with uninformative file names (e.g. draft_1.doc, draft_2.doc). As authors receive new data and feedback from peers and collaborators, maintaining those versions and merging changes can result in an unmanageable proliferation of files. One solution to these problems is to use a formal Version Control System (VCS), which has long been used in the software industry to manage code. A key feature common to all types of VCS is the ability to save versions of files during development, along with informative comments referred to as commit messages. Every change and its accompanying notes are stored independently of the files, which obviates the need for duplicate copies. Commits serve as checkpoints to which individual files or an entire project can be safely reverted when necessary. Most traditional VCS are centralized, which means they require a connection to a central server that maintains the master copy. Users with appropriate privileges can check out copies, make changes, and upload them back to the server.
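The checkpoint workflow described above can be sketched with a few commands; the repository location, author identity, and file names below are all illustrative:

```shell
# A minimal Git session: create a repository, record a checkpoint with an
# informative commit message, and review the history.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "A. Researcher"   # identity recorded with each commit
git config user.email "a@example.org"
printf 'site,temp_c\nA,12.4\n' > field_data.csv
git add field_data.csv
git commit -q -m "Enter first batch of temperature readings"
git log --oneline                       # one line per checkpoint (commit)
```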

Among the suite of version control systems currently available, Git stands out in particular because it offers features that make it desirable for managing artifacts of scientific research. The most compelling feature of Git is its decentralized and distributed nature. Every copy of a Git repository can serve either as the server (a central point for synchronizing changes) or as a client. This ensures that there is no single point of failure. Authors can work asynchronously without being connected to a central server and synchronize their changes when possible. This is particularly useful when working from remote field sites where internet connections are often slow or non-existent. Unlike other VCS, every copy of a Git repository carries a complete history of all changes, including authorship, that can be viewed and searched by anyone. This feature allows new authors to build from any stage of a versioned project. Git also has a small footprint and nearly all operations occur locally.

By using a formal VCS, researchers can not only increase their own productivity but also make it easier for others to fully understand, use, and build upon their contributions. In the rest of the paper I describe how Git can be used to manage common scientific outputs before moving on to larger use cases and the benefits of this workflow. Readers should note that I do not aim to provide a comprehensive review of version control systems or even of Git itself. There are other comparable alternatives, such as Mercurial and Bazaar, which provide many of the features described below. My goal here is to broadly outline some of the advantages of using one such system and how it can benefit individual researchers, collaborative efforts, and the wider research community.

How Git can track various artifacts of a research effort

Before delving into common use-cases, I first describe how Git can be used to manage familiar research outputs such as data, code used for statistical analyses, and documents. Git can be used to manage them not just separately but also in various combinations for different use cases such as maintaining lab notebooks, lectures, datasets, and manuscripts.

Manuscripts and notes

Version control can operate on any file type, including ones most commonly used in academia such as Microsoft Word documents. However, since these file types are binary, Git cannot examine their contents and highlight sections that have changed between revisions. In such cases, one would have to rely solely on commit messages or scan through file contents. The full power of Git is best leveraged when working with plain-text files. These include data stored in non-proprietary spreadsheet formats (e.g. comma-separated files versus xls), scripts from programming languages, and manuscripts stored in plain-text formats (LaTeX and markdown versus Word documents). With such formats, Git not only tracks versions but can also highlight which sections of a file have changed.

In Microsoft Word documents, the track changes feature is often used to solicit comments and feedback. Once those comments and changes have been accepted or rejected, any record of their existence disappears forever. When changes are submitted using Git, a permanent record of author contributions remains in the version history and is available in every copy of the repository.
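As a sketch of the plain-text advantage, the following session (file names hypothetical) commits a markdown manuscript and then asks Git to highlight exactly which line changed:

```shell
# Plain-text manuscripts let Git show exactly which lines changed.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "A. Researcher"
git config user.email "a@example.org"
printf 'Introduction\n\nSampling began in 2012.\n' > manuscript.md
git add manuscript.md
git commit -q -m "Draft introduction"
printf 'Introduction\n\nSampling began in March 2012.\n' > manuscript.md
git diff manuscript.md                  # highlights the one changed line
```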


Data

Data are ideal for managing with Git. These include data manually entered via spreadsheets, recorded as part of observational studies, or retrieved from sensors (see also the section on managing large data). With each significant change or addition, commit messages can log those activities (e.g. “Entered data collected between 12/10/2012 and 12/20/2012”, or “Updated data from temperature loggers for December 2012”). Over time this process avoids a proliferation of files, while the Git history maintains complete provenance that can be reviewed at any time. When errors are discovered, earlier versions of a file can be reverted without affecting other assets in the project.
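A minimal sketch of this provenance workflow, with hypothetical file names and commit messages modeled on the examples above; the final command restores one file from the previous commit without touching anything else:

```shell
# Commit data with provenance messages; revert a single file when an
# error is discovered.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "A. Researcher"
git config user.email "a@example.org"
printf 'logger,temp_c\nL1,4.2\n' > loggers.csv
git add loggers.csv
git commit -q -m "Update data from temperature loggers for December 2012"
printf 'logger,temp_c\nL1,999\n' > loggers.csv   # a bad entry slips in
git add loggers.csv
git commit -q -m "Enter data collected between 12/10/2012 and 12/20/2012"
# Restore just this file from the previous commit; other files are untouched.
git checkout HEAD~1 -- loggers.csv
cat loggers.csv
```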

Statistical code and figures

When data are analyzed programmatically using software such as R and Python, code files start out small and often become more complex over time. Somewhere along the way, inadvertent mistakes such as misplaced subscripts and incorrectly applied functions can lead to serious errors down the line. When such errors are discovered well into a project, comparing versions of the statistical scripts provides a way to quickly trace the source of the problem and recover from it.
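Tracing such an error might look like the following sketch, assuming a hypothetical R script tracked in Git; the diff between two commits pinpoints the changed subscript:

```shell
# Compare a script across commits to locate the source of an error.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "A. Researcher"
git config user.email "a@example.org"
printf 'x <- data[[1]]\nmean(x)\n' > analysis.R
git add analysis.R
git commit -q -m "Compute mean of the first column"
printf 'x <- data[[2]]\nmean(x)\n' > analysis.R   # subscript changed here
git add analysis.R
git commit -q -m "Refactor analysis"
# The diff between the two commits shows exactly what changed in the script.
git diff HEAD~1 HEAD -- analysis.R
```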

Similarly, figures often undergo multiple revisions before the version that gets published. Without version control, one would have to deal with multiple copies and use imperfect information, such as file creation dates, to determine the sequence in which they were generated. Without additional information, figuring out why certain versions were created (e.g. in response to comments from coauthors) also becomes difficult. When figures are managed with Git, the commit messages (e.g. “Updated figure in response to Ethan’s comments regarding use of normalized data.”) provide an unambiguous way to track versions.

Complete manuscripts

When all of the above artifacts are used in a single effort, such as writing a manuscript, Git can collectively manage their versions in a powerful way for both individual authors and groups of collaborators. This process avoids the rapid multiplication of unmanageable files with uninformative names (e.g. final_1.doc, final_2.doc, final_final.doc, final_KR_1.doc, etc.), as illustrated by the popular cartoon strip PhD Comics.

Use cases for Git in science

Day-to-day decisions made over the course of a study are often logged for review and reference in lab notebooks. Such notebooks contain important information useful both to future readers attempting to replicate a study and to thorough reviewers seeking additional clarification. However, lab notebooks are rarely shared along with publications or made public, although there are some exceptions [11]. Git commit logs can serve as proxies for lab notebooks if clear yet concise messages are recorded over the course of a project. One of the fundamental features of Git that makes it so useful to science is that every copy of a repository carries a complete history of changes available for anyone to review. These logs can easily be searched to retrieve versions of artifacts like data and code. Third-party tools can also be leveraged to mine Git histories from one or more projects for other types of analyses.
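Searching a commit history like a notebook can be sketched as follows (repository contents and messages are illustrative); `git log` filters by keyword, date, or author:

```shell
# Search a commit history the way one would search a lab notebook.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "A. Researcher"
git config user.email "a@example.org"
echo "4.2" > loggers.csv
git add loggers.csv
git commit -q -m "Updated data from temperature loggers for December 2012"
echo "Methods draft" > notes.md
git add notes.md
git commit -q -m "Outline methods section"
git log --grep="temperature" --oneline                         # by keyword
git log --since="2012-12-01" --author="Researcher" --oneline   # by date/author
```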

Facilitating collaboration

In collaborative efforts, authors contribute to one or more stages of the manuscript preparation such as collecting data, analyzing them, and/or writing up the results. Such information is extremely useful for both readers and reviewers when assessing relative author contributions to a body of work. With high profile journals now discouraging the practice of honorary authorship [12], Git commit logs can provide a highly granular way to track and assess individual author contributions to a project.

When projects are tracked using Git, every single action (such as additions, deletions, and changes) is attributed to an author. Multiple authors can choose to work on a single branch of a repository (the ‘master’ branch), or in separate branches and work asynchronously. In other words, authors do not have to wait on coauthors before contributing. As each author adds their contribution, they can sync those to the master branch and update their copies at any time. Over time, all of the decisions that go into the production of a manuscript from entering data and checking for errors, to choosing appropriate statistical models and creating figures, can be traced back to specific authors.
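Attribution can be inspected with standard Git commands; this sketch simulates two authors in one repository by switching the configured identity (names are hypothetical):

```shell
# Summarize and inspect per-author contributions.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "Alice"
git config user.email "alice@example.org"
echo "Introduction." > manuscript.md
git add manuscript.md
git commit -q -m "Draft introduction"
git config user.name  "Bob"                # second author takes over
git config user.email "bob@example.org"
echo "Methods." >> manuscript.md
git add manuscript.md
git commit -q -m "Add methods section"
git shortlog -sn HEAD                      # commit counts per author
git blame manuscript.md                    # line-by-line authorship
```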

With the help of a remote Git hosting service, keeping various copies in sync with each other becomes effortless. While most changes are merged automatically, conflicts need to be resolved manually, which would also be the case with most other workflows (e.g. using Microsoft Word with track changes). By syncing changes back and forth with a remote repository, every author can update their local copy as well as push their changes to the remote version at any time, all the while maintaining a complete audit trail. Mistakes or unnecessary changes can easily be undone by reverting either the entire repository or individual files to earlier commits. Since commits are attributed to specific authors, errors or requests for clarification can also be appropriately directed. Perhaps most importantly, this workflow ensures that revisions do not have to be emailed back and forth. While cloud storage providers like Dropbox alleviate some of these annoyances and also provide versioning, the process is not user-controlled, making it hard to discern what and how many changes have occurred between two points in time.
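A sketch of this sync-and-undo workflow, using a local bare repository as a stand-in for a hosted remote; `git revert` undoes a change while preserving the audit trail:

```shell
# Sync with a remote, then undo a change while keeping full history.
hub=$(mktemp -d)
git init -q --bare "$hub"               # local stand-in for a hosted remote
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "A. Researcher"
git config user.email "a@example.org"
echo "v1" > notes.txt
git add notes.txt
git commit -q -m "First version"
git remote add origin "$hub"
git push -q origin HEAD                 # publish local history to the remote
echo "v2" > notes.txt
git add notes.txt
git commit -q -m "A change we later decide to undo"
git revert --no-edit HEAD               # new commit restoring v1; history intact
cat notes.txt
```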

In a recent paper led by Philippe Desjardins-Proulx [13], all of the authors successfully collaborated using only Git and GitHub. In this particular Git workflow, each of us cloned a copy of the main repository and contributed our changes back to the lead author. Figures 1 and 2 show the list of collaborators and a network diagram of how and when changes were contributed back to the master branch.

A list of contributions to a project on GitHub.

Backup and failsafe against data loss

Collecting new data and developing methods for analysis are often expensive endeavors requiring significant amounts of grant funding. Protecting such valuable products from loss or theft is therefore paramount. A recent study found that the vast majority of data and code are stored on lab computers or web servers, both of which are prone to failure and often become inaccessible over time. One survey of 1,000 studies found that data were still accessible for only 72% of them [14, 15]. Hosting data and code publicly not only ensures protection against loss but also increases the visibility of research efforts and provides opportunities for collaboration and early review [16].

While Git provides powerful features that can be leveraged by individual scientists, Git hosting services open up a whole new set of possibilities. Any local Git repository can be linked to one or more Git remotes, which are copies hosted on remote cloud servers. Git remotes serve as hubs for collaboration, where authors with write privileges can contribute at any time while others can download up-to-date versions or submit revisions with author approval. There are currently several Git hosting services, such as SourceForge, Google Code, GitHub, and BitBucket, that provide free Git hosting. Among them, GitHub has surpassed other source code hosts like Google Code and SourceForge in popularity and hosts over 4.6 million repositories from 2.8 million users as of December 2012 [17–19]. While these services are usually free for publicly open projects, some research efforts, especially those containing embargoed or sensitive data, will need to be kept private. There are multiple ways to deal with such situations. For example, certain files can be excluded from Git’s history, others maintained as private submodules, or entire repositories can be made private and opened to the public at a future time. Some Git hosts like BitBucket offer unlimited public and private accounts for academic use.

Managing a research project with Git provides several safeguards against short-term loss. Frequent commits synced to remote repositories ensure that multiple versioned copies are accessible from anywhere. In projects involving multiple collaborators, the presence of additional copies makes it even more difficult to lose work. While Git hosting services protect against short-term data loss, they are not a solution for more permanent archiving, since none of them offer any such guarantees. For long-term archiving, researchers should submit their Git-managed projects to academic repositories that are members of CLOCKSS. Outputs stored in such repositories (e.g. figshare) are archived over a network of redundant nodes, ensuring availability across geographic and geopolitical regions indefinitely.

Freedom to explore new ideas and methods

Git tracks the development of projects along timelines referred to as branches. By default, there is always a master branch (line with blue dots in Figure 3). For most authors, working with this single branch is sufficient. However, Git provides a powerful branching mechanism that makes it easy to explore alternate ideas in a structured and documented way without disrupting the central flow of a project. For example, one might want to try an improved simulation algorithm, a novel statistical method, or a more compelling way to plot figures. When working on a single master branch, changes that do not work out would have to be reverted to an earlier commit, and frequent reverts on a master branch can be disruptive, especially when projects involve multiple collaborators. Branching provides a risk-free way to test new algorithms, explore better data visualization techniques, or develop new analytical models. When branches yield desired outcomes, they can easily be merged into the master copy, while unsuccessful efforts can be deleted or left as-is to serve as a historical record (illustrated in Figure 3).
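Branch-based exploration can be sketched as follows (branch and file names are hypothetical); the merge keeps the successful experiment, while an unsuccessful branch could instead be deleted or left in place:

```shell
# Explore an alternative model on a branch without disturbing the main line.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "A. Researcher"
git config user.email "a@example.org"
echo "lm(y ~ x)" > model.R
git add model.R
git commit -q -m "Baseline linear model"
git checkout -q -b poisson-model        # branch off to experiment
echo "glm(y ~ x, family = poisson)" > model.R
git add model.R
git commit -q -m "Try a Poisson model"
git checkout -q -                       # back to the main line, untouched
git merge -q --no-edit poisson-model    # it worked: merge; otherwise delete it
```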

Branches can prove extremely useful when responding to reviewer questions about the rationale for choosing one method over another since the Git history contains a record of failed, unsuitable, or abandoned attempts. This is particularly helpful given that the time between submission and response can be fairly long. Additionally, future users can mine Git histories to avoid repeating approaches that were never fruitful in earlier studies.

Mechanism to solicit feedback and reviews

While it is possible to leverage most of Git’s core functionality at the local level, Git hosting services offer additional features such as issue trackers, collaboration graphs, and wikis. These can easily be used to assign tasks, manage milestones, and maintain lab protocols. Issue trackers can be repurposed as a mechanism for soliciting both feedback and review, especially since comments can easily be linked to particular lines of code or blocks of text. Early comments and reviews for this article were also solicited via GitHub Issues.

Increase transparency and verifiability

Methods sections in papers are often succinct in order to adhere to strict word limits imposed by journal guidelines. This practice is especially common when describing well-known methods, where authors assume a certain degree of familiarity among informed readers. One unfortunate consequence is that any modifications to the standard protocol implemented in a study (typically noted in internal lab notebooks) may not be available to reviewers and readers. However, seemingly small decisions, such as choosing an appropriate distribution to use in a statistical method, can have a disproportionately strong influence on the central finding of a paper. Without access to a detailed history, a reviewer competent in statistical methods has to trust that the authors carefully met the necessary assumptions, or engage in a long back-and-forth discussion, thereby delaying the review process. Sharing a Git repository can alleviate these kinds of ambiguities and allow authors to point to commits where key decisions were made before certain approaches were chosen. Journals could facilitate this process by allowing authors to submit links to their Git repositories alongside manuscripts and sharing them with reviewers.

Managing large data

Git is extremely efficient at managing small data files such as those routinely collected in experimental and observational studies. However, when data files are particularly large, as in bioinformatics studies (on the order of tens of megabytes to gigabytes), managing them with Git can slow down Git operations. With large data files, the best practice is to exclude them from the repository and track only changes in metadata. This protocol is especially suitable when large datasets do not change often over the course of a study. In situations where the data are large and undergo frequent updates, one can leverage third-party tools such as git-annex and still seamlessly use Git to manage a project.
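One way to implement this practice is a `.gitignore` entry for the raw data directory plus a small tracked metadata file; the paths and fields here are illustrative:

```shell
# Exclude large raw data from version control; track only metadata.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "A. Researcher"
git config user.email "a@example.org"
printf 'raw/\n' > .gitignore            # Git ignores everything under raw/
mkdir raw
: > raw/sequences.fastq                 # stands in for a multi-gigabyte file
printf 'file: raw/sequences.fastq\nretrieved: 2012-12-01\n' > raw_metadata.yml
git add .gitignore raw_metadata.yml
git commit -q -m "Track metadata for raw sequence data"
git status --short                      # empty: raw/ is ignored, not tracked
```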

Lowering barriers to reuse

A common barrier that prevents someone from reproducing or building upon an existing method is a lack of sufficient detail about it. Even in cases where methods are adequately described, the use of expensive proprietary software with restrictive licenses makes them difficult to reuse [20]. Sharing code with licenses that encourage fair use with appropriate attribution removes such artificial barriers and encourages readers to modify methods to suit their research needs, improve upon them, or find new applications [10]. With open source software, analysis pipelines can easily be forked or branched from public Git repositories and modified to answer other questions. Although this process of depositing code somewhere public with appropriate licenses involves additional work for the authors, the overall benefits outweigh the costs. Making all research products publicly available not only increases citation rates [21–23] but can also increase opportunities for collaboration by increasing overall visibility. For example, Niedermeyer & Strohalm [24] describe their struggle to find appropriate software for comprehensive mass spectrum annotation; they eventually found an open source package which they were able to extend. In particular, the authors cite the availability of complete source code along with an open license as the motivation for their choice. Examples of such collaboration and extension are likely to become more common with increased availability of fully versioned projects with permissive licenses.

A similar argument can be made for data as well. Even publications that deposit data in persistent repositories rarely share the original raw data; the versions submitted are often cleaned and finalized datasets. In cases where no datasets are deposited, the only data accessible are likely the mean values reported in the main text or appendix of a paper.
Raw data can be leveraged to answer questions not originally intended by the authors. For example, research areas that address questions about uncertainty often require messy raw data to test competing methods. Thus, versioned data provide opportunities to retrieve copies as they existed before later modifications reduced their utility for other uses.

Git makes it easy to track individual contributions through time, ensuring appropriate attribution and accountability. This screenshot shows a subset of commits (colored dots) by four authors over a period spanning November 17th, 2012 to January 26th, 2013.



With the completion of increasing numbers of genome sequences has come an explosion in the development of both computational and experimental techniques for deciphering the functions of genes, molecules and their interactions. These include theoretical methods for deducing function, such as analysis of protein homologies, structural domain predictions, phylogenetic profiling and analysis of protein domain fusions, as well as experimental techniques, such as microarray-based gene expression and transcription factor binding studies, two-hybrid protein-protein interaction screens, and large-scale RNA interference (RNAi) screens. The result is a huge amount of information and a current challenge is to extract meaningful knowledge and patterns of biological significance that can lead to new experimentally testable hypotheses. Many of these broad datasets, however, are noisy and the data quality can vary significantly. While in some circumstances the data from each of these techniques are useful in their own right, the ability to combine data from different sources facilitates interpretation and potentially allows stronger inferences to be made. Currently, biological data are stored in a wide variety of formats in numerous different places, making their combined analysis difficult: when information from several different databases is required, the assembly of data into a format suitable for querying is a challenge in itself. Sophisticated analysis of diverse data requires that they are available in a form that allows questions to be asked across them and that tools for constructing the questions are available. The development of systems for the integration and combined analysis of diverse data remains a priority in bioinformatics. Avoiding the need to understand and reformat many different data sources is a major benefit for end users of a centralized data access system.

A number of studies have illustrated the power of integrating data for cross-validation, functional annotation and generating testable hypotheses (reviewed in [1, 2]). These studies have covered a range of data types, some looking at the overlap between two different data sets, for example, protein interaction and expression data [3–6] or protein interaction and RNAi screening results [7], and some combining the information from several different types of data [8–12]. Studies with Saccharomyces cerevisiae, for example, have indicated that combining protein-protein interaction and gene expression data to identify potential interacting proteins that are also co-expressed is a powerful way to cross-validate noisy protein interaction data [3–6]. A recent analysis integrated protein interactions, protein domain models, gene expression data and functional annotations to predict nearly 40,000 protein-protein interactions in humans [9]. In addition, combining multiple data sets of the same type from several organisms not only expands coverage to a larger section of the genomes of interest, but can help to verify inferences or develop new hypotheses about particular 'events' in another organism. Alternatively, finding the intersection between different data sets of the same type can help identify a subset of higher-confidence data [2]. In addition to the examination of different data sources within an organism, predicted orthologues and paralogues allow cross-validation of datasets between different organisms. For example, the identification of so-called interologues (putative interacting protein pairs whose orthologues in another organism also apparently interact) can add confidence to interactions [13].

Biological data integration is a difficult task and a number of different solutions to the problem have been proposed (for example, see [14, 15] for reviews). A number of projects have already tackled the task of data integration and querying, and the methods used by these different systems differ greatly in their aims and scope (for a review of the different types of systems, see [15]). Some, for example, do not integrate the data themselves but perform fast, indexed keyword searches over flat files. An example of such a system is SRS [16]. Other systems send queries out to several different sources and use a mediated middle layer to integrate the information (so-called mediated systems, such as TAMBIS [17], DiscoveryLink [18] and BioMoby [19]). Although these systems can provide a great range of data and computational resources, they are sensitive to network problems and data format changes. In addition, such systems run into performance issues when running complex queries over large result sets. Finally, like FlyMine, some systems integrate all the data into one place - a data warehouse (for example, GUS [20], BioMart [21], Biozon [22], BioWarehouse [23], GIMS [24], Atlas [25] and Ondex [26]). Our objective was to make a freely available system built on a standard platform using a normalized schema while still allowing warehouse performance. This resulted in the development of InterMine [27], a generic system that underpins FlyMine. A particular feature of InterMine is the way it makes use of precomputed tables to enhance performance. Another key component is the use of ontologies, which provide a standardized system for naming biological entities and their relationships; this aspect is based on the approach taken by the Chado schema [28]. For example, a large part of the FlyMine data model is based on the Sequence Ontology (a controlled vocabulary for describing biological sequences) [29].
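The performance idea behind precomputed tables can be shown with a toy example: a join that many queries would repeat is materialized once at build time, so user queries read a single table. The schema and data here are invented, and SQLite is used purely for illustration; it is not a claim about InterMine's actual platform or schema.

```python
# Illustration only: the "precomputed table" idea, with a hypothetical
# gene/protein schema. The join is computed once, at database build time,
# so that later queries avoid repeating it.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE gene    (id INTEGER PRIMARY KEY, symbol TEXT);
CREATE TABLE protein (id INTEGER PRIMARY KEY, gene_id INTEGER, name TEXT);
INSERT INTO gene VALUES (1, 'zen'), (2, 'bcd');
INSERT INTO protein VALUES (10, 1, 'Zen-P'), (11, 2, 'Bcd-P');
""")

# Precompute the gene-to-protein join once ...
db.execute("""
CREATE TABLE gene_protein AS
SELECT g.symbol AS symbol, p.name AS protein
FROM gene g JOIN protein p ON p.gene_id = g.id
""")

# ... so that user queries hit one flat table instead of joining.
rows = db.execute(
    "SELECT protein FROM gene_protein WHERE symbol = 'zen'").fetchall()
print(rows)  # [('Zen-P',)]
```

The cost is storage and a rebuild step whenever the source tables change, which suits a warehouse that is rebuilt periodically rather than updated continuously.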
This underlying architecture is discussed in more detail under 'System architecture'.

Another objective for FlyMine was to provide access to the data for bioinformatics experts as well as bench biologists with limited database (or bioinformatics) knowledge. FlyMine provides three kinds of web-based access. First, the Query Builder provides the most advanced access, allowing the user to construct their own complex queries. Second, a library of 'templates' provides a simple form-filling interface to predefined queries that can perform simple or complex actions. It is very straightforward to convert a query constructed in the Query Builder into a template for future use. Finally, a Quick Search facility allows users to browse the data available for any particular item in the database and, from there, to explore related items. This level of query flexibility combined with a large integrated database provides a powerful tool for researchers.

Below we briefly outline the data sources available in the current release of FlyMine and provide details of how these data can be accessed and queried. This is followed by examples illustrating some of the uses of FlyMine and the advantage of having data integrated into one database. Finally, we describe our future plans, and how to get further help and information.

The aim of FlyMine is to include large-scale functional genomic and proteomic data sets for a range of model organisms, with the main focus currently being on Drosophila and Anopheles species. So far we have loaded a wide range of data and these are summarized in Table 1.

Currently, we can load any data that conform to several different formats: GFF3 [30] for genome annotation and genomic features (for example, DNase I footprints, microarray oligonucleotides and genome tiling amplimers), PSI-MI [31, 32] for data describing protein interactions or complexes, MAGE [33, 34] for microarray expression data, XML files from the UniProt Knowledgebase (UniProtKB) [35, 36], and the OBO flat file format describing the Gene Ontology (GO) [37] and gene association files for GO annotations [38]. In addition, we can also import data from the Ensembl [39, 40], InterPro [41, 42] and DrosDel [43, 44] database schemas to the FlyMine data model, enabling data from these databases to be loaded and updated regularly. Several smaller-scale data sources that currently do not conform to any standard have also been incorporated, such as RNAi data, orthologue data generated by InParanoid [45, 46] and three-dimensional protein structural domain predictions (K Mizuguchi, unpublished).
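Of the formats listed, GFF3 is the simplest to illustrate: nine tab-separated columns, the last of which holds `key=value` attribute pairs. The following is a minimal sketch of a reader for it (real GFF3 parsers also handle directives, percent-escaping, multi-line features and feature hierarchies); the example record is shaped like a FlyBase gene line but is included only for illustration.

```python
# A minimal reader for the tab-separated GFF3 genome-annotation format.
# Sketch only: production parsers handle escaping, directives and
# parent/child feature relationships, which are omitted here.

def parse_gff3(lines):
    features = []
    for line in lines:
        line = line.rstrip("\n")
        if not line or line.startswith("#"):    # skip comments/directives
            continue
        (seqid, source, ftype, start, end,
         score, strand, phase, attrs) = line.split("\t")
        attributes = dict(kv.split("=", 1) for kv in attrs.split(";") if kv)
        features.append({
            "seqid": seqid, "type": ftype,
            "start": int(start), "end": int(end),
            "strand": strand, "attributes": attributes,
        })
    return features

example = [
    "##gff-version 3",
    "2L\tFlyBase\tgene\t7529\t9484\t.\t+\t.\tID=FBgn0031208;Name=CG11023",
]
genes = parse_gff3(example)
print(genes[0]["attributes"]["Name"])  # CG11023
```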

When building FlyMine, data are parsed from source formats and loaded into a central database. Queries are executed on this database with no need to access the original source data. Overlapping data sets are integrated by common identifiers, for example, genes from different sources may be merged by identifier or symbol. FlyMine is rebuilt with updated versions of source data about every three months. Table 2 summarizes the current number of objects in some of the main FlyMine classes.
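The merging step described above, combining records for the same object from different sources when they share an identifier, can be sketched as follows. The source names, identifiers and fields here are invented for illustration and do not reflect FlyMine's actual merge rules.

```python
# Sketch of identifier-based merging: records from different sources that
# share an identifier are combined into one object. Data are hypothetical.

def merge_by_identifier(sources):
    merged = {}
    for source in sources:
        for record in source:
            key = record.get("id") or record.get("symbol")
            # Later sources add fields to (or overwrite) the merged record.
            merged.setdefault(key, {}).update(record)
    return merged

flybase = [{"id": "FBgn0031208", "symbol": "CG11023", "chromosome": "2L"}]
uniprot = [{"id": "FBgn0031208", "protein": "Q9V9V5"}]

genes = merge_by_identifier([flybase, uniprot])
print(genes["FBgn0031208"]["chromosome"], genes["FBgn0031208"]["protein"])
```

A real integration system must also resolve conflicts and identifier synonyms; this sketch simply lets the last source win, which is the simplest possible policy.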


As one starting point, the FlyMine website presents the principal data types grouped together as different 'aspects' (Figure 1a), such as protein interactions or gene expression. Each aspect provides background information on the origin of particular source datasets, including literature references and links to source databases, access to convenient derivative bulk datasets for browsing or export in standard format, as well as pre-defined template queries and classes for use as starting points in the query builder (Figure 1b). The template queries available for a particular aspect range from simple queries to retrieve just one data type to more complex queries that incorporate data from other aspects. Thus, aspects allow researchers to easily focus on a particular type of data, while still being able to query multiple data types at once and, for instance, easily identify relevant template queries.

Aspects. FlyMine groups data into 'aspects', each of which provides a 'homepage' for a different category of data. (a) The aspects available in FlyMine release 6.0. Each aspect page can be accessed by clicking on its icon or title. (b) Example aspect page: Genomics Aspect. Each aspect provides background information on the origin of each of its source datasets through a short description, and references if available. Likewise, links are provided to any source databases. Convenient bulk datasets are made available for browsing, or export in standard format. In addition, relevant template queries and classes for use as starting points in the query builder are provided.



8.7 million species exist on Earth, study estimates

Correction: An earlier version of this article misstated the number of acres in a hectare. A hectare covers 2.47 acres, not 100 acres. This version has been corrected.

For centuries scientists have pondered a central question: How many species exist on Earth? Now, a group of researchers has offered an answer: 8.7 million.

Although the number is still an estimate, it represents the most rigorous mathematical analysis yet of what we know — and don’t know — about life on land and in the sea. The authors of the paper, published Tuesday evening by the scientific journal PLoS Biology, suggest that 86 percent of all terrestrial species and 91 percent of all marine species have yet to be discovered, described and catalogued.

The new analysis is significant not only because it gives more detail on a fundamental scientific mystery but because it helps capture the complexity of a natural system that is in danger of losing species at an unprecedented rate.

Marine biologist Boris Worm of Canada’s Dalhousie University, one of the paper’s co-authors, compared the planet to a machine with 8.7 million parts, all of which perform a valuable function.

“If you think of the planet as a life-support system for our species, you want to look at how complex that life-support system is,” Worm said. “We’re tinkering with that machine because we’re throwing out parts all the time.”

He noted that the International Union for Conservation of Nature produces the most sophisticated assessment of species on Earth, a third of which it estimates are in danger of extinction, but its survey monitors less than 1 percent of the world’s species.

For more than 250 years, scientists have classified species according to a system established by Swedish scientist Carl Linnaeus, which orders forms of life in a pyramid of groupings that move from very broad — the animal kingdom, for example — to specific species, such as the monarch butterfly.

Until now, estimates of the world’s species ranged from 3 million to 100 million. Five academics from Dalhousie University refined the number by compiling taxonomic data for roughly 1.2 million known species and identifying numerical patterns. They saw that within the best-known groups, such as mammals, there was a predictable ratio of species to broader categories. They applied these numerical patterns to all five major kingdoms of life, which exclude microorganisms and virus types.

The researchers predicted there are about 7.77 million species of animals, 298,000 of plants, 611,000 of fungi, 36,400 of protozoa and 27,500 of chromists (which include various algae and water molds). Only a fraction of these species have been identified, including just 7 percent of fungi and 12 percent of animals, compared with 72 percent of plants.
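As a quick arithmetic check, the per-kingdom predictions quoted above do sum to roughly the headline figure of 8.7 million eukaryotic species:

```python
# Summing the per-kingdom predictions quoted in the article.
predictions = {
    "animals":   7_770_000,
    "plants":      298_000,
    "fungi":       611_000,
    "protozoa":     36_400,
    "chromists":    27_500,
}
total = sum(predictions.values())
print(total)                   # 8742900
print(round(total / 1e6, 1))   # 8.7
```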

“The numbers are astounding,” said Jesse Ausubel, who is vice president of the Alfred P. Sloan Foundation and co-founder of the Census of Marine Life and the Encyclopedia of Life. “There are 2.2 million ways of making a living in the ocean. There are half a million ways to be a mushroom. That’s amazing to me.”

Angelika Brandt, a professor at the University of Hamburg’s Zoological Museum who discovered multiple species in Antarctica, called the paper “very significant,” adding that “they really try to find the gaps” in current scientific knowledge.

Brandt, who has uncovered crustaceans and other creatures buried in the sea floor during three expeditions to Antarctica, said the study’s estimate that 91 percent of marine species are still elusive matched her own experience of discovery. “That is exactly what we found in the Southern Ocean deep sea,” Brandt said. “The Southern Ocean deep sea is almost untouched, biologically.”

Researchers are still pushing to launch a series of ambitious expeditions to catalogue marine life over the next decade, including a group of Chilean scientists who hope to investigate the eastern Pacific and a separate group of Indonesian researchers who would probe their region’s waters.

One of the reasons so many species have yet to be catalogued is that describing and cataloguing them in the scientific literature is a painstaking process, and the number of professional taxonomists is dwindling.

Smithsonian Institution curator Terry Erwin, a research entomologist, said fewer financial resources and a shift toward genetic analysis has cut the number of professional taxonomists at work. Erwin noted that when he started at the Smithsonian in 1970 there were 12 research entomologists, and now there are six.

“Unfortunately, taxonomy is not what cutting-edge scientists feel is important,” Erwin said.

In a companion essay in PLoS Biology, Oxford University zoologist Robert M. May wrote that identifying species is more than a “stamp collecting” pastime, to which a Victorian physicist once compared it. He noted that crossing conventional rice with a new variety of wild rice in the 1970s made rice farming 30 percent more efficient.

“To the contrary, we increasingly recognise that such knowledge is important for full understanding of the ecological and evolutionary processes which created, and which are struggling to maintain, the diverse biological riches we are heir to,” he wrote. “It is a remarkable testament to humanity’s narcissism that we know the number of books in the U.S. Library of Congress on 1 February 2011 was 22,194,656, but cannot tell you — to within an order of magnitude — how many different species of plants and animals we share our world with.”

Erwin said researchers would continue to search for the best way to quantify global diversity beyond the new method. Erwin himself has been using a biodegradable insecticide since 1972 to fog trees in the Amazon and kill massive amounts of insects, which he and his colleagues have classified. Based on such sampling, Erwin posited in 1982 that there were roughly 30 million species of terrestrial arthropods — insects and their relatives — worldwide.

Extrapolating from that sample to determine a global total, he said, was a "mistake, one which others have repeated."

Erwin added that he still thinks counting actual specimens is the best route, noting that he and his students determined in 2005 that there are more than 100,000 species of insects in a single hectare (or 2.5 acres) of the Amazon. Noting that insects and their relatives account for 85 percent of life on Earth, he wondered why there's such a fuss about counting the rest of the planet's inhabitants: "Nothing else counts."

Number of species on Earth tagged at 8.7 million

Most precise estimate yet suggests more than 80% of species still undiscovered.

There are 8.7 million eukaryotic species on our planet — give or take 1.3 million. The latest biodiversity estimate, based on a new method of prediction, dramatically narrows the range of 'best guesses', which was previously between 3 million and 100 million. It means that a staggering 86% of land species and 91% of marine species remain undiscovered.

Camilo Mora, a marine ecologist at the University of Hawaii at Manoa, and his colleagues at Dalhousie University in Halifax, Canada, have identified a consistent scaling pattern among the different levels of the taxonomic classification system (order, genus, species and so on) that allows the total number of species to be predicted. The research is published in PLoS Biology [1] today.

Mora argues that knowing how many species there are on Earth is one of the most important questions in science. "Finding this number satisfies a basic scientific curiosity," he says.

Bob May, a zoologist at the University of Oxford, UK, who wrote a commentary on the work [2], agrees. "Knowing how many plants and animals there are on the planet is absolutely fundamental," he says. He also highlights the practical significance: "Without this knowledge, we cannot even begin to answer questions such as how much diversity we can lose while still maintaining the ecosystem services that humanity depends upon."

But the unstinting efforts of field taxonomists are not going to provide the number any time soon. In the more than 250 years since Swedish biologist Carl Linnaeus began the science of taxonomy, 1.2 million species have been identified and classified — less than 15% of Mora's new total. At this pace, May estimates that it will take another 480 years to complete the job of recording all species.

The catalogue of life

Instead, scientists have tried to predict the total number of species from the number already known. Some of the estimates amount to little more than educated guesses. "These predictions are unverifiable and experts change their mind," says Mora. Other approaches use assumptions that he describes as "unreliable and easy to break".

Mora's method is based on an analysis of the taxonomic classification for all 1.2 million species currently catalogued. Linnaeus's system forms a pyramid-like hierarchy — the lower the category, the more entities it contains. There are more species than genera, more genera than families, more families than orders and so on, right up to the top level, domain.

Mora and his colleagues show that a consistent numerical trend links the numbers in each category, and that this can be used to predict how many entities there should be in poorly catalogued levels, such as species, from the numbers in higher levels that are much more comprehensively described.
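A toy version of this extrapolation can make the idea concrete. This is not the authors' actual statistical model: the rank counts below are invented, and a simple log-linear fit stands in for their analysis. Counts of taxa tend to grow roughly geometrically down the hierarchy, so a trend fitted over well-catalogued higher ranks can be extended one step to the species level.

```python
# Toy illustration of predicting the species-level count from the numbers
# of higher taxa (NOT the authors' model; rank counts are invented).
import math

ranks  = [1, 2, 3, 4, 5]            # phylum .. genus, as rank indices
counts = [30, 90, 270, 810, 2430]   # taxa catalogued at each rank

# Least-squares fit of log(count) = a + b * rank.
n = len(ranks)
logs = [math.log(c) for c in counts]
mean_x, mean_y = sum(ranks) / n, sum(logs) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ranks, logs))
     / sum((x - mean_x) ** 2 for x in ranks))
a = mean_y - b * mean_x

# Extrapolate one rank further, to species (rank 6).
predicted_species = math.exp(a + b * 6)
print(round(predicted_species))  # 7290
```

With these artificial counts (each rank triples the last) the fit is exact and the prediction is simply the next term of the geometric series; real taxonomic data are noisier, which is why the published estimate carries an uncertainty of about 1.3 million.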

However, the method does not work for prokaryotes (bacteria and archaea) because their higher taxonomic levels are not as well catalogued as those of eukaryotes. A conservative 'lower bound' estimate of about 10,000 prokaryote species is included in Mora's total but, in reality, they are likely to number in the millions.

"The unique thing about this approach is that we are able to validate it," he says. "By testing the predictions against well catalogued groups such as mammals, birds, reptiles and amphibians, we were able to show that we could predict the correct number of species."

The analysis also reveals that some groups are much better known than others. For example, some 72% of the predicted 298,000 plant species on land have already been documented, in comparison with only 12% of predicted land animal species and 7% of predicted land fungi species.

May is impressed. "I like this approach. Not only is it imaginative and novel, but the number it comes up with is within the range of my own best estimate!"

Study Easy Questions and Answers on Blood

Blood is a means of transporting substances throughout the body. Blood distributes nutrients, oxygen, hormones, antibodies and specialized defense cells to tissues, and collects waste, such as nitrogenous wastes and carbon dioxide, from them.

The Components of Blood


2. What elements make up blood?

Blood is made of a liquid and a cellular portion. The fluid part is called plasma and contains several substances, including proteins, lipids, carbohydrates and mineral salts. The cellular components of blood are also known as blood corpuscles and they include erythrocytes (red blood cells), leukocytes and platelets.


Hematopoiesis, Bone Marrow and Stem Cells

3. What is hematopoiesis?

Hematopoiesis is the formation of blood cells and the other elements that make up blood.

4. Where does hematopoiesis occur?

Hematopoiesis occurs in the bone marrow (mainly within flat bones), where erythrocytes, leukocytes and platelets are made and in lymphoid tissue, which is responsible for the maturation of leukocytes and which is found in the thymus, spleen and lymph nodes.

5. In which bones can bone marrow chiefly be found? Is bone marrow made of bone tissue?

Bone marrow can mainly be found in the internal cavities of flat bones, such as vertebrae, the ribs, the shoulder blades, the sternum and the hips.

Bone marrow is not made of bone tissue, although it is a connective tissue just like bone tissue.

6. What are blood stem cells?

Stem cells are undifferentiated cells able to differentiate into other types of specialized cells.

The stem cells of the bone marrow produce differentiated blood cells. Depending on stimuli from specific growth factors, stem cells are turned into red blood cells, leukocytes and megakaryocytes (the cells that form platelets). Research shows that the stem cells of the bone marrow can also differentiate into muscle, nervous and hepatic cells.

Red Blood Cells and Hemoglobin

7. What are the other names for erythrocytes? What is the function of these cells?

Erythrocytes are also known as red blood cells (RBCs) or red corpuscles. Red blood cells are responsible for transporting oxygen from the lungs to tissues.

8. What is the name of the molecule in red blood cells that transports oxygen?

The respiratory pigment of red blood cells is called hemoglobin.

9. What is the molecular composition of hemoglobin? Does the functionality of hemoglobin as a protein depend on its tertiary or quaternary structure?

Hemoglobin is a molecule made of four polypeptide chains, each bound to an iron-containing molecular group called a heme group. Therefore, the molecule contains four polypeptide chains and four heme groups.

As a protein composed of polypeptide chains, the functionality of hemoglobin depends upon the integrity of its quaternary structure.

10. On average, what is the lifespan of a red blood cell? Where are they destroyed? Where do heme groups go after the destruction of hemoglobin molecules?

On average, red blood cells live for around 120 days. The spleen is the main organ where old red blood cells are destroyed.

During the destruction of red blood cells, the heme groups turn into bilirubin and this substance is then captured by the liver and later excreted to the bowels as a part of bile.

11. What are the functions of the spleen? Why can people still live after a total splenectomy (surgical removal of the spleen)?

The spleen has many functions: it participates in the destruction of old red blood cells; specialized leukocytes mature in it; it helps regenerate the hematopoietic tissue of the bone marrow when necessary; and it can act as a sponge-like organ to retain or release blood into the circulation.

It is possible to live after a total splenectomy because none of the functions of the spleen is both vital and exclusive to this organ.

Anemia Explained

12. What is anemia? What are the four main types of anemia?

Anemia is a low concentration of hemoglobin in the blood.

The four main types of anemia are nutrient-deficiency anemia, anemia caused by blood loss, hemolytic anemia and aplastic anemia.

Nutrient-deficiency anemia is caused by a dietary deficiency in fundamental nutrients necessary for the production or functioning of red blood cells, such as iron (iron deficiency anemia), vitamin B12 and folic acid.

Anemia caused by blood loss occurs in hemorrhagic conditions or in diseases such as peptic ulcerations and hookworm disease.

Hemolytic anemia is caused by the excessive destruction of red blood cells, for example, in diseases such as malaria or in hypervolemic conditions (excessive water in blood causing lysis of red blood cells).

Aplastic anemia occurs from deficiencies in hematopoiesis and occurs when bone marrow is injured by cancer from other tissues (metastasis), by autoimmune diseases, by drug intoxication (such as sulfa drugs and anticonvulsants) or by chemical substances (such as benzene, insecticides, paints, herbicides and solvents in general). Some genetic diseases also affect bone marrow, causing aplastic anemia.

White Blood Cells

13. What is the difference between white and red blood cells? What are leukocytes?

Red blood cells are called erythrocytes and white blood cells are called leukocytes.

Leukocytes are cells specialized in the defense of the body against foreign agents and are part of the immune system.

14. What are the different types of leukocytes and how are they classified into granulocytes and agranulocytes?

The types of leukocytes are lymphocytes, monocytes, neutrophils, eosinophils and basophils. Granulocytes are those with a cytoplasm that contains granules (when viewed under electron microscopy): neutrophils, eosinophils and basophils are granulocytes. Agranulocytes are the other leukocytes: lymphocytes and monocytes.

15. What is the generic function of leukocytes? What are leukocytosis and leukopenia?

The generic function of leukocytes is to participate in the defense of the body against foreign agents that penetrate it or are produced inside the body.

Leukocytosis and leukopenia are clinical conditions in which a blood sample contains an abnormal count of leukocytes. When the leukocyte count in a blood sample is above the normal level for the individual, it is called leukocytosis. When the leukocyte count is lower than the expected normal level, it is called leukopenia. The multiplication of these defense cells, leukocytosis, generally takes place when the body is suffering from infections or in cancer of these cells. A low count of these defense cells, or leukopenia, occurs when some diseases, such as AIDS, attack the cells or when immunosuppressive drugs are used.

In general, the body uses leukocytosis as a defense reaction when it is facing infectious or pathogenic agents. The clinical condition of leukocytosis is therefore a sign of infection. Leukopenia occurs when there is a deficiency in the production (for example, in bone marrow diseases) or excessive destruction of leukocytes (for example, in the case of HIV infection).
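The threshold logic described above can be sketched as a tiny classifier. This is purely illustrative and not from the text: the reference range used here (roughly 4,000 to 11,000 cells/µL for adults) is a commonly cited approximation, and actual limits vary by laboratory.

```python
# Illustrative sketch: classify a white blood cell count relative to an
# assumed adult reference range (~4,000-11,000 cells/uL). The exact
# limits are laboratory-dependent; these numbers are an assumption.

def classify_wbc(count_per_ul, low=4000, high=11000):
    """Return 'leukocytosis', 'leukopenia', or 'normal' for a WBC count."""
    if count_per_ul > high:
        return "leukocytosis"   # e.g. infection or leukemia
    if count_per_ul < low:
        return "leukopenia"     # e.g. marrow disease, HIV, immunosuppressants
    return "normal"

print(classify_wbc(15000))  # leukocytosis
print(classify_wbc(2500))   # leukopenia
print(classify_wbc(7000))   # normal
```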

Platelets and Hemostasis

16. What are the mechanisms to contain hemorrhage called?

The physiological mechanisms to contain hemorrhage (one of them is blood clotting) are generically called hemostasis, or hemostatic processes.

17. How are platelets formed? What is the function of platelets? What are the clinical consequences of the condition known as thrombocytopenia?

Platelets, also known as thrombocytes, are fragments of large bone marrow cells called megakaryocytes. Through their aggregation and adhesion properties, they are directly involved in blood clotting, and they also release substances that activate other hemostatic processes.

Thrombocytopenia is a clinical condition in which the blood platelet count of an individual is lower than normal. In this situation, the person becomes susceptible to hemorrhages.

The Coagulation Cascade

18. How does the body know that the coagulation process must begin?

When a tissue wound involves injury to a blood vessel, the platelets and the endothelial cells of the damaged vessel wall release substances (platelet factors and tissue factors, respectively) that trigger the clotting process.

19. How can the blood coagulation (clotting) process be described?

Blood clotting encompasses a sequence of chemical reactions whose products are enzymes that catalyze the subsequent reactions (that is why clotting reactions are called cascade reactions). In the plasma, thromboplastinogen transforms into thromboplastin, a reaction triggered by tissue and platelet factors released after injury to a blood vessel. Along with calcium ions, thromboplastin then catalyzes the transformation of prothrombin into thrombin. Thrombin then catalyzes a reaction that produces fibrin from fibrinogen. Fibrin, as an insoluble substance, forms a network that traps red blood cells and platelets, thus forming the blood clot and containing the hemorrhage.
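The cascade sequence described above can be sketched as a simple ordered pipeline. This is a deliberately simplified illustration following the text's three steps; real coagulation involves many more factors and feedback loops.

```python
# Minimal sketch of the simplified clotting cascade described in the text.
# Step names follow the answer above; this is illustrative, not a model
# of the full coagulation pathway.

def clotting_cascade(vessel_injured, calcium_present=True):
    """Return the ordered products of the simplified cascade (empty if not triggered)."""
    if not vessel_injured:
        return []  # no tissue/platelet factors released, cascade not triggered
    products = []
    # Tissue and platelet factors convert thromboplastinogen into thromboplastin.
    products.append("thromboplastin")
    if calcium_present:
        # Thromboplastin, along with Ca2+ ions, catalyzes prothrombin -> thrombin.
        products.append("thrombin")
        # Thrombin catalyzes fibrinogen -> fibrin, whose insoluble network
        # traps red blood cells and platelets, forming the clot.
        products.append("fibrin")
    return products

print(clotting_cascade(True))   # ['thromboplastin', 'thrombin', 'fibrin']
print(clotting_cascade(False))  # []
```

Note how each product is required before the next step can run, which is why the reactions are called cascade reactions.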

20. What are clotting factors?

Clotting factors are substances (enzymes, coenzymes, reagents) necessary for the clotting process to happen. In addition to the triggering factors and reagents already described (tissue and platelet factors, thromboplastinogen, prothrombin, fibrinogen, calcium ions), other substances participate in the blood clotting process as clotting factors. One of these is factor VIII, the deficiency of which causes hemophilia A, and another is factor IX, the deficiency of which causes hemophilia B.

21. In what organ are most of the clotting factors produced? What is the role of vitamin K in blood coagulation?

Most clotting factors are produced in the liver.

Vitamin K participates in the activation of several clotting factors and is essential for the proper functioning of blood coagulation.

Hemophilia Explained

22. What is factor VIII? What is the genetic disease in which this factor is absent?

Factor VIII has the function of activating factor X, which in turn is necessary for the transformation of prothrombin into thrombin during the clotting cascade. Hemophilia A is the X-linked genetic disease in which the individual does not produce factor VIII and as a result is more susceptible to severe hemorrhages.

23. How is hemophilia treated? Why is hemophilia rare in females?

Hemophilia is medically treated with the administration of factor VIII, in the case of hemophilia A, or of factor IX, in the case of hemophilia B, by means of blood or fresh frozen plasma transfusions.

Both hemophilia A and B are X-linked recessive diseases. For a girl to be hemophilic, both of her X chromosomes must carry the affected allele, whereas boys, who have only one X chromosome, are affected by a single copy. A girl with only one affected chromosome does not present the disease, since the normal gene on the unaffected X chromosome produces the clotting factor.
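The X-linked inheritance pattern above can be made concrete by enumerating offspring genotypes. This sketch is not from the text: it assumes a hypothetical cross between a carrier mother and an unaffected father, with "X" for the normal allele and "x" for the hemophilia allele.

```python
from itertools import product

# Illustrative sketch (assumed cross, not from the text): enumerate offspring
# genotypes for an X-linked recessive allele "x" (hemophilia) vs normal "X".
# Mother is a carrier (Xx); father is unaffected (XY).
mother_gametes = ["X", "x"]
father_gametes = ["X", "Y"]

def is_affected(genotype):
    """A male (one X plus Y) is affected by a single 'x'; a female needs two."""
    alleles = sorted(genotype)
    if "Y" in alleles:
        return "x" in alleles        # hemizygous male
    return alleles == ["x", "x"]     # homozygous recessive female

offspring = [m + f for m, f in product(mother_gametes, father_gametes)]
affected = [g for g in offspring if is_affected(g)]
print(offspring)  # ['XX', 'XY', 'xX', 'xY']
print(affected)   # ['xY'] -> only a son can be affected in this cross
```

In this cross, a daughter can at most be a carrier (xX), which is why hemophilic females require an affected father as well and are therefore rare.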

24. What is the epidemiological association between hemophilia and HIV infection?

Since hemophilic patients need frequent transfusions of clotting factors (VIII or IX) they are more susceptible to contamination by infectious agents present in the blood from which the transfused elements come. In the past, blood banks did not usually perform HIV detection tests and many hemophilic patients have become infected with the virus.

Anticoagulation and Fibrinolysis

25. What are anticoagulants? What are the practical applications of anticoagulants, such as heparin, in Medicine?

Anticoagulants are substances that block clotting reactions and therefore stop the coagulation process. Ordinarily, anticoagulants circulate in the plasma, since under normal conditions blood must be maintained fluid.

In Medicine, anticoagulants such as heparin are used in surgeries in which the tissue injuries caused by the surgical act could trigger undesirable systemic blood clotting. They are also used to prevent the formation of thrombi inside the blood vessels of patients at an increased risk of thrombosis.

26. What is dicoumarol? What is the role of this substance in the clotting process and what are some examples of its toxicity?

Dicoumarol is an anticoagulant drug. Due to its molecular structure, dicoumarol competes with vitamin K to bind to substrates, thus blocking the formation of clotting factors and interrupting the production of prothrombin. Dicoumarol is found in some decomposing vegetables and can cause severe internal hemorrhages when those vegetables are accidentally ingested. Coumarinic anticoagulants cannot be administered during pregnancy since they pass the placental barrier and can cause fetal hemorrhages.

27. Streptokinase is a substance used in the treatment of acute myocardial infarction. What is the function of this substance?

Substances known as fibrinolytics, such as streptokinase and urokinase, can destroy thrombi (clots formed inside blood vessels, capillaries or within the chambers of the heart) and are used in the treatment of obstructions of the coronary arteries or other blood vessels.

Streptokinase destroys the fibrin network and as a result dissolves the thrombotic clot. Its name is derived from the bacteria that produce it, streptococci.
