Information

Do cells store information other than permanent (chromosome) information?

The brain stores information in neurons (i.e., neural networks), and cells store information in DNA. But DNA is permanent information, and there is a lot of potential temporary information in the cell as well. The brain handles temporary information in short-term memory; maybe there is something parallel in the cell. I am wondering whether cells store any temporary or localized information related to their processes, such as gene expression or their interaction with the external environment.

Maybe this would look like a small recombinant DNA strand that is used for creating some temporary proteins, or something similar, to store information temporarily. I am wondering whether anything like this exists in cells and/or has been studied.


Information About Animal Cells

Fascinated by various aspects of eukaryotic cell biology? Here’s a basket of random interesting information about animal cells that you’ll love reading!

Before we embark upon our journey towards unveiling sundry interesting facts about animal cells, let’s learn a few details about cells in general. Broadly, cells can be classified under two categories – prokaryotic cells and eukaryotic cells.

A prokaryotic cell lacks a nucleus and membrane-bound organelles, and its DNA is not organized into chromosomes enclosed within a nucleus. Eukaryotic cells, on the other hand, each have a nucleus, and all cellular matter, including the DNA, is contained within membranes. The DNA is also organized into chromosomes.

All bacteria – both eubacteria and archaebacteria – are prokaryotes. Every other living organism, in the animal kingdom as well as the plant kingdom, is eukaryotic, meaning its cells have a true nucleus that encloses the genetic material. Let’s move on to more such information about animal cells.

Random Interesting Facts About Animal Cells

Here is a list of some random information on animal cells that will leave you amazed by the time you’re done reading the last sentence of this article. Here we go!

  • Animal cells are nanoscale chemical factories that are remarkably self-sufficient! The cells themselves manufacture almost everything that constitutes them. For instance, material for the cell membrane is processed and packaged by an organelle located near the nucleus, known as the Golgi complex.
  • This same organelle also combines proteins, lipids and carbohydrates into membrane-bound vesicles that are then ejected from the cell to be used elsewhere in the body. This is a very important part of the secretory function of cells.
  • Ribosomal RNA, a key component of the ribosomes that build proteins from amino acids – among the primary building blocks of life – is produced by the nucleolus, a structure contained inside the nucleus.
  • The size of a single, random animal cell can fall anywhere between 1 and 100 micrometers. To put it in other words, no matter how large or small different types of animal cells are in relation to each other, they are still too small to be seen with the naked eye.
  • Unlike prokaryotic cells, which reproduce via binary fission, eukaryotic cells divide via the process of mitosis; organisms reproduce sexually when gametes of opposite sexes fuse together. After all, sperm and ova are nothing but gametes, or sex cells! This is how new organisms are conceived and born.
  • Tissues are formed when cells of similar structures, composition and characteristics bundle together.
  • Animal cells have an inbuilt self-destruct system which is resorted to when a cell becomes damaged beyond repair or gets severely infected. This cellular suicide is known as apoptosis, and when it fails to take place, the result can be cancer, where new cells are produced but the older, damaged ones do not die out to make way for them!
  • Contrary to widespread belief, the nucleus of a cell is rarely at the very center of the cell! It can sit almost anywhere within the cell, and an off-center position is in fact the usual case!
  • A single animal cell has the complete blueprint of all the information that is needed to create a complete organism from it! The genetic matter in the cells is nothing but encoded information about the biology and characteristics of the complete organism!
  • The mitochondria contained inside animal cells use oxygen to convert nutrients into usable energy. They are, thus, the powerhouses that keep the entire cell up and running!
  • The nucleus is the brain of the cell: it stores all genetic information in the form of DNA, and it controls and regulates all other cellular functions, including growth, metabolism, reproduction, apoptosis, protein synthesis, etc.
  • The plasma membrane that contains all cellular matter inside it is like a semipermeable wall that allows for molecular exchange through it.
  • Most single-celled eukaryotic organisms, such as paramecium, have external thread- and tube-like structures (cilia and flagella) all over them to help in locomotion.

  • The cytoplasm contains a kind of biological scaffolding, known as the cytoskeleton, which is composed of proteins and which helps cells maintain their shape.
  • Although tiny themselves, cells are not the tiniest particles on earth! Each cell is made up of a number of even tinier particles known as atoms!

Loved it? I’m sure you did! Also, I hope you will leave this page with more knowledge regarding animal cells than what you came here with! As for me, I myself learned a lot while writing this article and I intend to continue my study of animal cells even after I’ve concluded this article!

Bioinformatics

Bioinformatics has become an important part of many areas of biology. In experimental molecular biology, bioinformatics techniques such as image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. It plays a role in the text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, [2] RNA, [2] [3] proteins [4] as well as biomolecular interactions. [5] [6] [7] [8]

History

Historically, the term bioinformatics did not mean what it means today. Paulien Hogeweg and Ben Hesper coined it in 1970 to refer to the study of information processes in biotic systems. [9] [10] [11] This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems). [9]

Sequences

Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. A pioneer in the field was Margaret Oakley Dayhoff. [12] She compiled one of the first protein sequence databases, initially published as books [13] and pioneered methods of sequence alignment and molecular evolution. [14] Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences released with Tai Te Wu between 1980 and 1991. [15] In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and ΦX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses and were thus proof of the concept that bioinformatics would be insightful. [16] [17]

Goals

To study how normal cellular activities are altered in different disease states, the biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data. This also includes nucleotide and amino acid sequences, protein domains, and protein structures. [18] The actual process of analyzing and interpreting data is referred to as computational biology. Important sub-disciplines within bioinformatics and computational biology include:

  • Development and implementation of computer programs that enable efficient access to, management and use of, various types of information.
  • Development of new algorithms (mathematical formulas) and statistical measures that assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences.

The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches, however, is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include: pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, the modeling of evolution and cell division/mitosis.

Bioinformatics now entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.

Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes.

Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.

Relation to other fields

Bioinformatics is a science field that is similar to but distinct from biological computation, and it is often considered synonymous with computational biology. Biological computation uses bioengineering and biology to build biological computers, whereas bioinformatics uses computation to better understand biology. Bioinformatics and computational biology involve the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology.

Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics.

Since the phage ΦX174 was sequenced in 1977, [19] the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billion nucleotides. [20]
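
BLAST-like tools gain their speed by first locating short exact word matches ("seeds") shared between a query and each database sequence, and only then extending the promising ones into full alignments. Below is a minimal pure-Python sketch of the seeding step only; the sequences and word size are made up, and real BLAST adds scoring matrices, extension and E-value statistics.

```python
# Toy illustration of the "seeding" step behind BLAST-style search:
# index every k-mer of the subject, then report each k-mer of the
# query that also occurs in the subject. Not the actual BLAST algorithm.

def kmer_positions(seq, k):
    """Map each k-mer in seq to the positions where it occurs."""
    index = {}
    for i in range(len(seq) - k + 1):
        index.setdefault(seq[i:i + k], []).append(i)
    return index

def find_seeds(query, subject, k=4):
    """Return (query_pos, subject_pos, kmer) for every shared k-mer."""
    index = kmer_positions(subject, k)
    seeds = []
    for i in range(len(query) - k + 1):
        kmer = query[i:i + k]
        for j in index.get(kmer, []):
            seeds.append((i, j, kmer))
    return seeds

print(find_seeds("ACGTACGGA", "TTACGTACG"))
```

Each reported seed marks a diagonal in the comparison matrix that a real search tool would then try to extend into a longer local alignment.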

DNA sequencing

Before sequences can be analyzed, they have to be obtained from a data storage bank such as GenBank. DNA sequencing is still a non-trivial problem, as the raw data may be noisy or afflicted by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing.
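
Base callers conventionally report per-base confidence as Phred quality scores, defined as Q = -10 * log10(p), where p is the estimated probability that the call is wrong. A small sketch of the conversion:

```python
# Phred quality scores: the standard way base callers report the
# estimated per-base error probability, Q = -10 * log10(p_error).
import math

def phred_from_error(p_error):
    """Convert an error probability to a Phred quality score."""
    return -10 * math.log10(p_error)

def error_from_phred(q):
    """Convert a Phred quality score back to an error probability."""
    return 10 ** (-q / 10)

# A Q30 base call has a 1-in-1000 chance of being wrong.
print(phred_from_error(0.001))   # 30.0
print(error_from_phred(30))      # 0.001
```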

Sequence assembly

Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The so-called shotgun sequencing technique (which was used, for example, by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) [21] generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced today, and genome assembly algorithms are a critical area of bioinformatics research.
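
As a toy illustration of the assembly task itself, the sketch below greedily merges the two fragments with the longest suffix/prefix overlap until one sequence remains. The reads are made up and error-free; production assemblers instead build overlap or de Bruijn graphs and must cope with sequencing errors and repeats.

```python
# Greedy toy assembler: repeatedly merge the pair of fragments with
# the longest suffix/prefix overlap. Illustrative only.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(fragments, min_len=3):
    frags = list(fragments)
    while len(frags) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_i is None:            # no overlaps left: concatenate
            return "".join(frags)
        merged = frags[best_i] + frags[best_j][best_len:]
        frags = [f for k, f in enumerate(frags)
                 if k not in (best_i, best_j)] + [merged]
    return frags[0]

print(greedy_assemble(["ACGTAC", "GTACGGT", "CGGTTT"]))  # ACGTACGGTTT
```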

Genome annotation

In the context of genomics, annotation is the process of marking the genes and other biological features in a DNA sequence. This process needs to be automated because most genomes are too large to annotate by hand, not to mention the desire to annotate as many genomes as possible, as the rate of sequencing has ceased to pose a bottleneck. Annotation is made possible by the fact that genes have recognisable start and stop regions, although the exact sequence found in these regions can vary between genes.
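
A minimal sketch of that idea is an open-reading-frame (ORF) scan: look in each reading frame for an ATG start codon followed in-frame by a stop codon (TAA, TAG or TGA). Real gene finders such as GeneMark use statistical sequence models and also handle the reverse strand, overlapping genes and (in eukaryotes) introns; the sequence below is invented.

```python
# Minimal ORF scan over the three forward reading frames.
STOP = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=2):
    """Return (start, end, sequence) for ATG...stop stretches."""
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            if start is None and codon == "ATG":
                start = i                       # open a candidate ORF
            elif start is not None and codon in STOP:
                if (i - start) // 3 >= min_codons:
                    orfs.append((start, i + 3, dna[start:i + 3]))
                start = None                    # close it at the stop
    return orfs

print(find_orfs("CCATGGCTTGATTT"))  # [(2, 11, 'ATGGCTTGA')]
```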

The first description of a comprehensive genome annotation system was published in 1995 [21] by the team at The Institute for Genomic Research that performed the first complete sequencing and analysis of the genome of a free-living organism, the bacterium Haemophilus influenzae. [21] Owen White designed and built a software system to identify the genes encoding all proteins, transfer RNAs, ribosomal RNAs (and other sites) and to make initial functional assignments. Most current genome annotation systems work similarly, but the programs available for analysis of genomic DNA, such as the GeneMark program trained and used to find protein-coding genes in Haemophilus influenzae, are constantly changing and improving.

Following the goals that the Human Genome Project left to achieve after its closure in 2003, a new project developed by the National Human Genome Research Institute in the U.S. appeared. The so-called ENCODE project is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to generate large amounts of data automatically at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error).

Computational evolutionary biology

Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to:

  • trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone,
  • compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation,
  • build complex computational population genetics models to predict the outcome of the system over time [22]
  • track and share information on an increasingly large number of species and organisms

Future work endeavours to reconstruct the now more complex tree of life.

The area of research within computer science that uses genetic algorithms is sometimes confused with computational evolutionary biology, but the two areas are not necessarily related.

Comparative genomics

The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. It is these intergenomic maps that make it possible to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. [23] Ultimately, whole genomes are involved in processes of hybridization, polyploidization and endosymbiosis, often leading to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.

Many of these studies are based on the detection of sequence homology to assign sequences to protein families. [24]

Pan genomics

Pan genomics is a concept introduced in 2005 by Tettelin and Medini which eventually took root in bioinformatics. The pan-genome is the complete gene repertoire of a particular taxonomic group: although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (these are often housekeeping genes vital for survival), and the dispensable/flexible genome, a set of genes present in only one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan-genome of bacterial species. [25]
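
In set terms, once genes have been clustered into families, the pan-genome is the union of the per-genome gene sets, the core genome is their intersection, and the flexible genome is the difference. A minimal sketch with invented strain names and gene families (the clustering that tools like BPGA actually perform is assumed to be done already):

```python
# Pan-genome arithmetic on made-up gene family sets.
genomes = {
    "strain_A": {"dnaA", "gyrB", "recA", "tox1"},
    "strain_B": {"dnaA", "gyrB", "recA", "cap5"},
    "strain_C": {"dnaA", "gyrB", "recA"},
}

pan_genome = set.union(*genomes.values())          # every gene seen
core_genome = set.intersection(*genomes.values())  # shared by all
flexible = pan_genome - core_genome                # dispensable part

print("pan:", sorted(pan_genome))
print("core:", sorted(core_genome))        # ['dnaA', 'gyrB', 'recA']
print("flexible:", sorted(flexible))       # ['cap5', 'tox1']
```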

Genetics of disease

With the advent of next-generation sequencing we are obtaining enough sequence data to map the genes of complex diseases such as infertility, [26] breast cancer [27] or Alzheimer's disease. [28] Genome-wide association studies are a useful approach to pinpoint the mutations responsible for such complex diseases. [29] Through these studies, thousands of DNA variants have been identified that are associated with similar diseases and traits. [30] Furthermore, the possibility of using genes in prognosis, diagnosis or treatment is one of the most essential applications. Many studies discuss both the promising ways to choose the genes to be used and the problems and pitfalls of using genes to predict disease presence or prognosis. [31]

Analysis of mutations in cancer

In cancer, the genomes of affected cells are rearranged in complex or even unpredictable ways. Massive sequencing efforts are used to identify previously unknown point mutations in a variety of genes in cancer. Bioinformaticians continue to produce specialized automated systems to manage the sheer volume of sequence data produced, and they create new algorithms and software to compare the sequencing results to the growing collection of human genome sequences and germline polymorphisms. New physical detection technologies are employed, such as oligonucleotide microarrays to identify chromosomal gains and losses (called comparative genomic hybridization), and single-nucleotide polymorphism arrays to detect known point mutations. These detection methods simultaneously measure several hundred thousand sites throughout the genome, and when used in high-throughput to measure thousands of samples, generate terabytes of data per experiment. Again the massive amounts and new types of data generate new opportunities for bioinformaticians. The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
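
As a toy version of change-point analysis on such data, the sketch below places a single breakpoint in a simulated log2 copy-number track at the position where splitting the signal into two constant-mean segments most reduces the squared error. Real pipelines use hidden Markov models or recursive, multi-breakpoint segmentation; the data here are simulated.

```python
# One-breakpoint change-point detection on a simulated log2 ratio track.
import random

def best_breakpoint(x):
    """Index of the single split that most reduces squared error."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    total = sse(x)
    best_i, best_gain = None, 0.0
    for i in range(1, len(x)):
        gain = total - (sse(x[:i]) + sse(x[i:]))
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i

random.seed(0)
# 50 probes at normal copy number (log2 ratio ~ 0), then 50 probes
# in a heterozygous deletion (log2 ratio ~ -1), both with noise.
track = ([random.gauss(0.0, 0.2) for _ in range(50)] +
         [random.gauss(-1.0, 0.2) for _ in range(50)])
print(best_breakpoint(track))   # close to 50
```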

Two important principles can be used to analyze cancer genomes bioinformatically, pertaining to the identification of mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations, which need to be distinguished from passengers. [32]

With the breakthroughs that next-generation sequencing technology is providing to the field of bioinformatics, cancer genomics could drastically change. These new methods and software allow bioinformaticians to sequence many cancer genomes quickly and affordably. This could create a more flexible process for classifying types of cancer by analysis of cancer driver mutations in the genome. Furthermore, tracking patients while the disease progresses may be possible in the future with sequencing of cancer samples. [33]

Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.

Analysis of gene expression

The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. [34] Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
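
A minimal sketch of that signal-versus-noise question for one gene at a time: compare replicate measurements between two conditions with a log2 fold change and a two-sample t-test (SciPy). The expression values are invented, and real analyses add normalization, variance modeling and multiple-testing correction.

```python
# Per-gene differential expression on made-up replicate values.
import math
from scipy import stats

expression = {   # gene -> (normal replicates, tumor replicates)
    "GENE1": ([5.1, 4.9, 5.3], [9.8, 10.4, 10.1]),   # looks up-regulated
    "GENE2": ([7.0, 7.2, 6.9], [7.1, 6.8, 7.2]),     # looks unchanged
}

for gene, (normal, tumor) in expression.items():
    mean_n = sum(normal) / len(normal)
    mean_t = sum(tumor) / len(tumor)
    log2fc = math.log2(mean_t / mean_n)         # effect size
    t_stat, p_value = stats.ttest_ind(tumor, normal)
    print(f"{gene}: log2FC={log2fc:+.2f}  p={p_value:.4f}")
```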

Analysis of protein expression

Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. Bioinformatics is very much involved in making sense of protein microarray and HT MS data; the former approach faces similar problems as microarrays targeted at mRNA, while the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples where multiple, but incomplete, peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays. [35]

Analysis of regulation

Gene regulation is the complex orchestration of events by which a signal, potentially an extracellular signal such as a hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process.

For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments.

Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). One can then apply clustering algorithms to that expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods.
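
As a small sketch of the clustering step, the example below runs k-means from scikit-learn on invented per-gene expression profiles (rows are genes, columns are conditions); genes with similar profiles land in the same cluster, which is the set one would then scan for shared promoter motifs.

```python
# k-means clustering of made-up gene expression profiles.
import numpy as np
from sklearn.cluster import KMeans

genes = ["geneA", "geneB", "geneC", "geneD"]
profiles = np.array([
    [1.0, 2.0, 4.0, 8.0],   # rising across conditions
    [1.1, 2.1, 3.9, 7.8],   # rising (co-expressed with geneA)
    [8.0, 4.0, 2.0, 1.0],   # falling
    [7.9, 4.2, 1.9, 1.1],   # falling (co-expressed with geneC)
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
for gene, label in zip(genes, labels):
    print(gene, "-> cluster", label)
```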

Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. This is relevant as the location of these components affects the events within a cell and thus helps us to predict the behavior of biological systems. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases.

Microscopy and image analysis

Microscopic pictures allow us to locate both organelles and molecules. They may also help us distinguish between normal and abnormal cells, e.g. in cancer.

Protein localization

The localization of proteins helps us to evaluate the role of a protein. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. Protein localization is thus an important component of protein function prediction. There are well developed protein subcellular localization prediction resources available, including protein subcellular location databases, and prediction tools. [36] [37]

Nuclear organization of chromatin

Data from high-throughput chromosome conformation capture experiments, such as Hi-C and ChIA-PET, can provide information on the spatial proximity of DNA loci. Analysis of these experiments can determine the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as Topologically Associating Domains (TADs), that are organised together in three-dimensional space. [38]

Protein structure prediction

Protein structure prediction is another important application of bioinformatics. The amino acid sequence of a protein, the so-called primary structure, can be easily determined from the sequence of the gene that codes for it. In the vast majority of cases, this primary structure uniquely determines a structure in its native environment. (Of course, there are exceptions, such as the bovine spongiform encephalopathy (mad cow disease) prion.) Knowledge of this structure is vital in understanding the function of the protein. Structural information is usually classified as one of secondary, tertiary and quaternary structure. A viable general solution to such predictions remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time.

One of the key ideas in bioinformatics is the notion of homology. In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In the structural branch of bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. In a technique called homology modeling, this information is used to predict the structure of a protein once the structure of a homologous protein is known. This currently remains the only way to predict protein structures reliably.
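
A deliberately crude sketch of annotation transfer by homology: given two already-aligned sequences, compute percent identity and, above some threshold, tentatively transfer the known gene's function to the unknown one. The sequences, annotation and threshold are all illustrative; real pipelines rely on alignment statistics such as BLAST E-values rather than raw identity.

```python
# Homology-based function transfer on a made-up pre-aligned pair.
def percent_identity(aln_a, aln_b):
    """Percent identity over non-gap columns of an alignment."""
    assert len(aln_a) == len(aln_b)
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

known_seq, known_function = "MKT-AYIAKQR", "oxygen transport"
unknown_seq = "MKTQAYIAKQR"   # function not yet characterized

identity = percent_identity(known_seq, unknown_seq)
if identity > 40:             # arbitrary illustrative threshold
    print(f"{identity:.0f}% identity: may share function: {known_function}")
```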

One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes and shared ancestor. [39]

Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling.

Another aspect of structural bioinformatics is the use of protein structures for virtual screening models, such as quantitative structure–activity relationship (QSAR) models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in simulations of, for example, ligand-binding studies and in silico mutagenesis studies.

Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both.

Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.

Molecular interaction networks

Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field.

Other interactions encountered in the field include protein–ligand (including drug) and protein–peptide interactions. Molecular dynamics simulation of the movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions.

Literature analysis

The growth in the amount of published literature makes it virtually impossible to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:

  • Abbreviation recognition – identify the long-form and abbreviation of biological terms
  • Named entity recognition – recognizing biological terms such as gene names
  • Protein–protein interaction – identify which proteins interact with which proteins from text

The area of research draws from statistics and computational linguistics.

High-throughput image analysis

Computational technologies are used to accelerate or fully automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems augment an observer's ability to make measurements from a large or complex set of images, by improving accuracy, objectivity, or speed. A fully developed analysis system may completely replace the observer. Although these systems are not unique to biomedical imagery, biomedical imaging is becoming more important for both diagnostics and research. Some examples are:

  • high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, Bioimage informatics)
  • clinical image analysis and visualization
  • determining the real-time air-flow patterns in breathing lungs of living animals
  • quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury
  • making behavioral observations from extended video recordings of laboratory animals
  • infrared measurements for metabolic activity determination
  • inferring clone overlaps in DNA mapping, e.g. the Sulston score

High-throughput single cell data analysis

Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.

Biodiversity informatics

Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools.

Ontologies and data integration

Biological ontologies are directed acyclic graphs of controlled vocabularies. They are designed to capture biological concepts and descriptions in a way that can be easily categorised and analysed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis.
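
A small sketch of why the graph structure matters: annotating a gene with one term implicitly annotates it with every ancestor term, which a simple traversal of the DAG can collect. The identifiers and edges below are made up, GO-style placeholders.

```python
# Transitive closure over a tiny, made-up ontology DAG.
parents = {   # child term -> parent terms ("is_a" edges)
    "GO:receptor_binding": ["GO:binding", "GO:signaling"],
    "GO:binding": ["GO:molecular_function"],
    "GO:signaling": ["GO:molecular_function"],
    "GO:molecular_function": [],
}

def ancestors(term):
    """All terms reachable by repeatedly following parent edges."""
    seen, stack = set(), [term]
    while stack:
        for parent in parents[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(ancestors("GO:receptor_binding")))
```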

The OBO Foundry was an effort to standardise certain ontologies. One of the most widespread is the Gene Ontology, which describes gene function. There are also ontologies which describe phenotypes.

Databases are essential for bioinformatics research and applications. Many databases exist, covering various information types: for example, DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases may contain empirical data (obtained directly from experiments), predicted data (obtained from analysis), or, most commonly, both. They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. These databases vary in their format, access mechanism, and whether they are public or not.

Some of the most commonly used databases are listed below.

  • Used in biological sequence analysis: GenBank, UniProt
  • Used in structure analysis: Protein Data Bank (PDB)
  • Used in finding protein families and motifs: InterPro, Pfam
  • Used for next-generation sequencing: Sequence Read Archive
  • Used in network analysis: metabolic pathway databases (KEGG, BioCyc), interaction analysis databases, functional networks
  • Used in design of synthetic genetic circuits: GenoCAD

Software tools for bioinformatics range from simple command-line tools, to more complex graphical programs and standalone web-services available from various bioinformatics companies or public institutions.

Open-source bioinformatics software

Many free and open-source software tools have existed and continued to grow since the 1980s. [40] The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases have helped to create opportunities for all research groups to contribute to both bioinformatics and the range of open-source software available, regardless of their funding arrangements. The open source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration.

The range of open-source software packages includes titles such as Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD. To maintain this tradition and create further opportunities, the non-profit Open Bioinformatics Foundation [40] has supported the annual Bioinformatics Open Source Conference (BOSC) since 2000. [41]

An alternative method to build public bioinformatics databases is to use the MediaWiki engine with the WikiOpener extension. This system allows the database to be accessed and updated by all experts in the field. [42]

Web services in bioinformatics

SOAP- and REST-based interfaces have been developed for a wide variety of bioinformatics applications allowing an application running on one computer in one part of the world to use algorithms, data and computing resources on servers in other parts of the world. The main advantages derive from the fact that end users do not have to deal with software and database maintenance overheads.
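
A minimal sketch of consuming such a REST-style service from Python with the requests library; the base URL and endpoint path below are placeholders rather than a real service, so substitute the documented endpoints of whichever resource you actually use.

```python
# Generic REST client sketch; the endpoint is hypothetical.
import requests

BASE_URL = "https://example.org/api"   # placeholder, not a real service

def fetch_record(accession):
    """Fetch one record as JSON from a hypothetical REST endpoint."""
    resp = requests.get(
        f"{BASE_URL}/sequence/{accession}",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()            # turn HTTP errors into exceptions
    return resp.json()

# record = fetch_record("P69905")     # uncomment against a real service
```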

Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). [43] The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions; such resources range from a collection of standalone tools with a common data format under a single, standalone or web-based interface, to integrative, distributed and extensible bioinformatics workflow management systems.

Bioinformatics workflow management systems

A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a bioinformatics application. Such systems are designed to

  • provide an easy-to-use environment for individual application scientists themselves to create their own workflows,
  • provide interactive tools for the scientists enabling them to execute their workflows and view their results in real-time,
  • simplify the process of sharing and reusing workflows between the scientists, and
  • enable scientists to track the provenance of the workflow execution results and the workflow creation steps.

BioCompute and BioCompute Objects

In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. [44] Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. [45] These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University.

It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse, of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff. [46]

In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was released as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute Object allows for the JSON-ized record to be shared among employees, collaborators, and regulators. [47] [48]
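
As a sketch of the underlying idea, a pipeline captured as a structured, shareable JSON record might look like the snippet below; the field names are illustrative only, not the official BioCompute Object schema.

```python
# Illustrative (non-standard) JSON record describing a pipeline.
import json

record = {
    "name": "variant-calling demo pipeline",
    "version": "0.1",
    "steps": [
        {"tool": "aligner", "inputs": ["reads.fq", "ref.fa"],
         "outputs": ["aln.bam"]},
        {"tool": "variant_caller", "inputs": ["aln.bam"],
         "outputs": ["calls.vcf"]},
    ],
}

print(json.dumps(record, indent=2))   # serialized form to share or review
```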

Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on their website under a Creative Commons license. The 4273π project or 4273pi project [49] also offers open source educational materials for free. The course runs on low cost Raspberry Pi computers and has been used to teach adults and school pupils. [50] [51] 4273π is actively developed by a consortium of academics and research staff who have run research level bioinformatics using Raspberry Pi computers and the 4273π operating system. [52] [53]

MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization (UC San Diego) and Genomic Data Science Specialization (Johns Hopkins) as well as EdX's Data Analysis for Life Sciences XSeries (Harvard). The University of Southern California offers a Master's in Translational Bioinformatics focusing on biomedical applications.

There are several large conferences that are concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).


Privacy Policy

The HLRCC Family Alliance website subscribes to the HONcode principles of the international Health On the Net Foundation, to assure you of the highest quality of health information.

Medical information on this site is reviewed by our Medical, Research and Support Council.

Information provided on the website is designed to support, not replace, the relationship that exists between a patient or site visitor and his or her physician.

The website does not accept advertising. It is supported solely by donations from people with HLRCC, their friends, their families and supporters, and physicians and researchers interested in HLRCC.

Privacy

Personal details that you provide to the HLRCC Family Alliance, including e-mail addresses, are kept entirely confidential. These details are shared within this organization among staff and volunteers for the purpose of providing service to you, but are never shared with, rented or sold to other organizations. All staff and volunteers have made confidentiality agreements to protect your information. To verify your information or send corrections, please contact us at [email protected]

Information you submit to us voluntarily through the website is stored on our secure server using SSL encryption technology.

Computer Tracking and Cookies

The website is not set up to track, collect or distribute personal information not entered by visitors. Our site logs do generate certain kinds of non-identifying site usage data, such as the number of hits and visits to our sites. This information is used for internal purposes by technical support staff to provide better services to the public and may also be provided to others, but again, the statistics contain no personal information and cannot be used to gather such information.

The website also recognizes the online site where a visitor searched to find a subject which brought them to the HLRCCFA website, but we cannot identify the visitor or the visitor’s address. Site information is used to help us serve these search sites with the correct information about our material. No personal information is collected.

A cookie is a small amount of data that is sent to your browser from a Web server and stored on your computer’s hard drive. HLRCCFA does not use cookies in its web pages. We do not generate personal data, do not read personal data from your machine and do not store any information other than what you voluntarily submit to us.

Problems or Complaints with HLRCCFA Privacy Policy

If you have a complaint about HLRCCFA compliance with this privacy policy, you may contact us at [email protected]

Links to Third Party Sites

The links included within the service may let you leave this site. The linked sites are not under the control of HLRCC Family Alliance and HLRCCFA is not responsible for the contents of any linked site, or any link contained in a linked site, or any changes or updates to such sites. These links are provided as a convenience only, and the inclusion of any link does not imply endorsement by HLRCCFA of the site or any association with their operators.

If you have technical questions about this site, please contact: [email protected]


Parts of the Brain: Structures and Their Functions

The brain is made up of three essential parts: the cerebrum, the cerebellum, and the brainstem.

1. Cerebrum

The cerebrum is the largest part of the human brain. It has a rough surface (cerebral cortex) with gyri and sulci. It can also be divided into 2 parts: the left hemisphere and the right hemisphere.

Although the hemispheres look identical, the left and right hemispheres have particular functions. While the left hemisphere (logical side) controls language and speech, the right hemisphere (creative side) is responsible for translating visual information.

Based on function, the cerebrum is further divided into 4 different lobes: the frontal, parietal, temporal, and occipital lobes. Each lobe has different functions:

Frontal Lobe

The frontal lobe rests just below the forehead and controls our reasoning, organization, speech, problem-solving, attention, and emotions.

Parietal Lobe

The parietal lobe lies at the upper rear of our brain. This lobe manages our complex behaviors, including senses such as touch and vision, as well as spatial awareness.

The parietal lobe also relays sensory information from different parts of the body, helps us process and learn a language, and maintains the body’s positioning and movement.

Occipital Lobe

The occipital lobe is located at the rear of our brain. This lobe is responsible for our visual awareness, including visual attention, optical recognition, and spatial awareness.

It also controls our ability to interpret body language, like facial expressions, gestures, and body postures.

Temporal Lobe

Your temporal lobe sits close to your ears and is associated with interpreting and translating auditory stimuli. For example, your temporal lobe allows you to focus on one voice at a loud party.

This lobe also helps you understand spoken language and process general knowledge, and it stores your verbal and visual memories.

2. Cerebellum

The cerebellum, also known as the little brain, is located in the back of the brain. It sits just below the occipital lobes and on top of the pons. Just like the cerebrum, the cerebellum has two equal hemispheres and a wrinkly surface.

Although the cerebellum is small, it contains numerous neurons. It can help coordinate the movement of body muscles, especially the fine movement of hands and feet. The function of the cerebellum also includes maintaining posture, equilibrium, body balance, and even speech.

3. Brainstem

The brain stem is the posterior part of the brain that connects the brain with the spinal cord. Its structures work together to regulate essential life functions, including body temperature, breathing, heartbeat, and blood pressure.

In addition, the brain stem coordinates the fine movement of the face and limbs. Functions of this area include sneezing, vomiting, swallowing, and movement of the eyes and mouth.

The brain stem comprises the midbrain, pons, and medulla, all of which have specific functions.

Midbrain

The midbrain consists of the tegmentum and the tectum and is located at the top of the brain stem. It plays a key role in controlling voluntary motor function and transferring messages. In addition, it controls eye movement and processes auditory and visual information.

Pons

The pons is the largest structure in the brain stem and is found above the medulla, underneath the midbrain, and in front of the cerebellum. It functions as a bridge between several parts of the nervous system, including the cerebrum and cerebellum. The pons also contains many vital nerves, such as

  • The trigeminal nerve – This nerve controls facial muscles involved in chewing, biting, and swallowing.
  • The abducens nerve – The abducens nerve allows the eyes to look from side to side.
  • The vestibulocochlear nerve – This nerve controls hearing and balance.

The pons also helps regulate sleep cycles, breathing patterns, and reflexes.

Medulla

The medulla is a cone-shaped structure located in front of the cerebellum. The prominent role of the medulla oblongata is regulating involuntary (autonomic) functions, including breathing, digestion, sneezing, swallowing, and heart rate.

4. Limbic System

The limbic system is a complex brain structure that lies deep in the cerebrum. It contains the thalamus, hypothalamus, hippocampus, and amygdala.

Since it plays a significant role in controlling our emotions and forming our memories, it is often called our "emotional brain" or "childish brain."

Thalamus

The thalamus is a small mass of grey matter that relays sensory information from the spinal cord, brainstem, and other parts of the brain to the cerebral cortex.

The thalamus acts as a relay station through which signals received by the body from the outside enter the brain. In addition, it is also related to consciousness, memory, and sleep.

Hypothalamus

The hypothalamus is a part of the brain that sits right below the thalamus. Although the hypothalamus is a tiny part of the brain, it has one of the most crucial and busiest roles.

The primary function of the hypothalamus is maintaining homeostasis in the body. It’s also responsible for releasing hormones, regulating body temperature, controlling appetite, and managing sexual behavior.

Amygdala

The amygdala is a small, almond-shaped structure in the limbic system that processes strong emotions like fear, aggression, and anxiety.

The amygdala is located close to the hippocampus. It contains many receptor sites and is also involved in perceiving certain emotions and in storing and retrieving emotional memories.

Hippocampus

The principal role of the hippocampus is forming, organizing, and storing short- and long-term memories.

The hippocampus also helps form new memories and links emotions, feelings, and sensations such as specific smell and sound to these memories.

Pituitary Gland

The pituitary gland is a small, pea-shaped gland located at the brain’s base, just behind the bridge of the nose. The pituitary gland produces different hormones that regulate many of the body’s processes, including growth, sexual development, metabolism, and reproduction.

5. Skull

The skull is a fusion of bones that protects the brain and the brainstem and outlines the face. The 8 bones that protect your brain from injury are:

  • 1 frontal bone
  • 2 parietal bones
  • 1 occipital bone
  • 2 temporal bones
  • 1 sphenoid bone
  • 1 ethmoid bone

Brain Conditions: When the Brain’s Structure Is Damaged

Your brain is one of the most complex organs in the human body, and if one of the brain’s structures is damaged, it could lead to a brain condition.

For example, if your Broca’s area is damaged, you may have trouble moving your tongue, and your speech may be slow and poorly articulated. Other conditions that could affect the brain include:

Brain aneurysm: When the wall of an artery in the brain balloons outward, it forms a brain aneurysm. If the aneurysm ruptures, it could cause a stroke.

Brain tumor: When any tissue in your brain starts growing abnormally, it could be a sign of a benign or malignant tumor.

Intracerebral hemorrhage: Bleeding inside the brain can cause difficulty speaking or difficulty walking.

Concussion: When there’s a heavy blow to the head, you may experience a concussion and temporarily lose brain function.

Cerebral edema: Electrolyte imbalance in the brain could lead to swelling of the brain tissue.

Glioblastoma: Glioblastoma is a brain tumor that develops very rapidly and creates pressure on the brain. It is usually aggressive and can be very difficult to cure.

Meningitis: When the lining around the brain or spinal cord becomes inflamed from an infection, you may have meningitis. Other symptoms associated with meningitis include headache, fever, sleepiness, neck pain, and stiff neck.

Encephalitis: Encephalitis usually arises when tissue in the brain becomes inflamed. It’s usually a result of a viral infection and could cause fever, headache, and confusion.

Traumatic brain injury: A severe head injury could lead to permanent brain damage. Other symptoms include mental impairment and personality and mood changes.

Parkinson’s disease: Degeneration of nerves in the brain could lead to the development of Parkinson’s disease. People with Parkinson’s disease may experience hand tremors and problems with their coordination and movement.

Epilepsy: Although there’s no exact cause for epilepsy, head injuries and several strokes can trigger epilepsy. People with epilepsy may also experience seizures.

Dementia: When nerve cells in the brain malfunction or degenerate slowly, it could lead to dementia. Strokes and alcohol abuse could also cause dementia.

Alzheimer’s disease: Alzheimer’s disease was formerly known as senile dementia. Here, the nerves in the brain degenerate, causing progressive dementia.

Brain abscess: A brain abscess occurs when there’s a pocket of infection in the brain. Brain abscesses are usually caused by bacteria and may require either antibiotics or surgical removal.

How to maintain a healthy brain

As we age, certain brain areas start to shrink, especially areas that are important for learning and storing memories. The good news is that you can follow some tips to keep your brain in excellent health and slow down mental decline. Here are some tips you can use to maintain a healthy brain.

Mental exercises like crossword puzzles, regular reading, or learning a new language help improve your mental fitness. They stimulate nerve cells and may even trigger the development of new brain cells.

Injuries to the head can cause concussions and other severe brain injuries. You can protect your head by wearing helmets or other protective gear when you’re playing contact sports.

Regular physical exercise doesn’t only help your muscles; it helps your brain too. Exercising improves blood flow in your body, including your brain.

Moderate exercise also lowers blood pressure, reduces mental stress, and could trigger the development of new nerve cells.

Smoking isn’t only bad for your general health; it could also lead to cognitive decline in the brain.


Biologists Unravel Key Events of Cell Division

Each day in a healthy human body, at least a trillion cells divide. White blood cells proliferate into fresh legions of T cells, B cells, macrophages and other gladiators of the immune system. The cells that line the stomach divide daily to keep a seamless seal around the belly's caustic juices. As the uppermost layer of the skin sloughs off, newborn dermal cells poke their way up from below. Hair grows; nails grow. Cell division is synonymous with life.

The mystery of how a cell knows when to divide and when to cease division is one of the fundamental puzzles of biology. And lately, through an extraordinary convergence of research from a broad spectrum of disciplines, scientists have made enormous progress in unraveling the pivotal molecular events that control cell division.

Some of the new results were reported last week at a conference at Rockefeller University in New York presented by the General Motors Cancer Research Foundation.

"So many things are coming together from fields that have been developing on their own for years," said Dr. Raymond Erikson, a molecular biologist at Harvard Medical School who attended the meeting. "There's a new synergism that is really exhilarating."

Biologists believe they are close to a deep understanding of the cell cycle, the intricate dance that begins when a cell awakens from its normal state of rest and glissades with balletic precision through the replication of its chromosomes and the apportioning of them into two progeny cells.

This knowledge has coalesced with a swiftness exceptional even for basic biology, which has grown accustomed to a dizzying pace ever since the advent of recombinant DNA technology.

"Our knowledge of the cell cycle compared to just two or three years ago is really the difference between day and night," said Dr. David Beach of Cold Spring Harbor Laboratory on Long Island.

As part of the new dawn, biologists have identified two types of proteins that are indispensable to beginning and completing cell division. Either species of protein on its own is useless -- "a little lump of clay," said Dr. Joan Ruderman of Harvard Medical School, a pioneer in the field.

But when the proteins clasp together they take on the vitality of young lovers, galvanizing a cascade of changes in the cell that culminates in division. The paired proteins seem to work by altering the shapes and duties of a string of other proteins in the cell, and scientists have identified many of those target proteins.

But the coupling is short-lived: researchers have discovered that after each cycle of division, one of the two proteins rapidly disintegrates, an event that seems to protect against untrammeled cell growth.

So exquisitely do the master molecules perform their job that nature has decided to make wide use of them. Among the many exciting findings about the cell cycle is this universality of the central players: the same proteins commanding cell division in primitive cells like yeast are also at the helm in human tissue.

"From my point of view, one of the most astonishing realizations has been that nature used the same elements over and over to control different parts of the cell cycle," said Dr. Steven Reed of the Research Institute of Scripps Clinic in La Jolla, Calif. Clues to Cancer

Scientists are also beginning to knit together the findings about the cell cycle with recent studies of the hormones and peptides in the bloodstream known to stimulate cell growth.

By combining discoveries about the growth signals that bombard the cell from the outside with knowledge of the internal machinery that orchestrates growth, scientists hope to form a complete and finely detailed portrait of the dividing cell. That information will in turn permit them to better understand cell division gone awry, the hallmark of cancer.

"I got into this business for the intellectual satisfaction of it," said Dr. Ruderman, "but I really believe it will tie into something useful."

Some scientists say an understanding of the nuts and bolts of the cell cycle could provide novel ways to attack cancer cells. They point out that however aggressive and deranged tumor cells become, they still must proceed through the steps of cell division. Hence, they theoretically could be blocked at particular points in the cycle.

But scientists in the cell cycle field say that for all the satisfaction they derive from their work, the pace is beginning to exhaust them. "It's a little frightening," said Dr. Reed. "The ground is always shifting under your feet, and what's true today may be tomorrow's old news. You have to spend a lot of time on the phone or plugged into the rumor mill."

The Process: Signals For Division

Far from being a fledgling specialty, the study of the cell cycle is among the classic problems of biology, though until recently it interested only a small corps of scientists.

Researchers realized that cells of any sort do not divide willy-nilly, but rather must work their way through defined stages. And before a cell progresses from one stage to the next, it takes a break, apparently to assess whether all the major chores of the previous stage have been completed.

During division, said Dr. Tim Hunt of Cambridge University in England, "the cell comes to a couple of checkpoints, when it asks a number of questions that have to be answered. It may ask, 'Have my chromosomes been replicated? If not, hold off and let me know when it's done so I can move on to the next phase.' " Researchers sought to pinpoint the precise signals that allowed the cell to progress from one phase to the next.

The first wedge into the conundrum came in 1970, when scientists discovered a compound that, when injected into immature frog eggs, forced the egg cells toward maturity by pushing them from one stage of cell division to the next. Scientists purified the protein and experimented with it, but nobody knew quite what it was doing.

In an unrelated pursuit, Dr. Ruderman and Dr. Hunt were analyzing protein production in clam eggs, and in 1980 they made a dramatic discovery. During each cycle of cell division in the eggs, one protein was created en masse and then destroyed en masse. The scientists named the mysterious protein "cyclin," because it rose and fell with the cell cycle.

They quickly found cyclin in other primitive marine organisms, but at first they had difficulty spotting the same class of proteins in higher animals. "We could see it clearly in clams, starfish and sea urchins, but when we were depressed we thought maybe only the weird sea creatures bothered to have it," said Dr. Hunt.

At the Center of the Process

But eventually cyclins were found in other families of animals, and scientists realized that cyclins were somehow at the center of cell division. Only when a fresh batch of the protein is created within the cell can division proceed, and only when that batch is destroyed can division end.

The Ringmasters: Proteins Act In Concert

In yet another line of research, geneticists identified a peculiar mutation in yeast cells that stopped the cells cold at a particular stage in division: after the chromosomes had been replicated, but before the double set of chromosomes had been divided into two cells. The mutation indicated that a critical protein, needed to catalyze the splitting up of the cell, had been deactivated.

The scientists called the protein CDC, for cell division cycle, and appended a different number to the end depending on who was doing the appending.

The disparate threads of research have come together only within the last couple of years. Scientists have learned that the CDC protein in yeast cells is the same as the protein that prods frog eggs toward maturity. What is more, the same protein has been identified in mammals, including humans, and seems to be crucial to cell division everywhere.

But scientists studying it at first were perplexed, because the protein always seemed to be present in the cell. How then could it know the difference between a quiet cell, a dividing cell, or a cell somewhere in between?

Two Proteins Must Mate

The answer proved to be cyclin. Scientists have lately discovered that the CDC protein must find its mate in cyclin before it can do anything about cell division. Cyclin seems to spark CDC to life and allow the colossal task of masterminding cell division to begin.

Somehow, a signal that has yet to be identified spurs the production of cyclin in the cell. That cyclin then joins with the CDC protein, and the division machinery starts up.

More intriguing still, the same CDC protein unites with a different cyclin at different stages of the division process. That molecular fickleness is indispensable and explains why cyclin must be degraded rapidly at every step of the complex cycle: to free the CDC protein for mating with a new type of cyclin.
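
The dynamics described here - steady cyclin synthesis, activation of the complex past a threshold, then rapid destruction that resets the system - can be caricatured in a few lines of code. The sketch below is a deliberately crude toy with invented thresholds and rates (it is not any published cell-cycle model, and the real control network is far richer), but it reproduces the rise-and-crash oscillation the researchers observed.

```python
# A crude relaxation-oscillator caricature of the cyclin/CDC cycle.
# All thresholds and rate constants are invented for illustration.

def simulate_cyclin(steps=4000, dt=0.01):
    cyclin = 0.0        # cyclin concentration (arbitrary units)
    degrading = False   # has the active complex triggered cyclin destruction?
    trace = []
    for _ in range(steps):
        # Past a high threshold the cyclin-CDC complex switches destruction
        # on; destruction persists until cyclin is nearly gone, freeing the
        # CDC protein for its next cyclin partner.
        if cyclin > 2.5:
            degrading = True
        elif cyclin < 0.2:
            degrading = False
        synthesis = 1.0                    # steady cyclin production
        decay = 8.0 if degrading else 0.2  # rapid destruction once triggered
        cyclin += (synthesis - decay * cyclin) * dt
        trace.append(cyclin)
    return trace

trace = simulate_cyclin()
print(f"cyclin oscillates between ~{min(trace[1000:]):.2f} and ~{max(trace[1000:]):.2f}")
```

The hysteresis - destruction switches on at a high cyclin level and off only at a low one - is what sustains the oscillation instead of letting cyclin settle at a fixed value.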

The various cyclins are created afresh at every point in division, whether before the moment the cell must make new DNA, or during the time when the engorged cell is ready to split down the middle.

"Nature is using a single molecular system to carry out totally different mechanisms" in the cell cycle, said Dr. Beach. "Why keep one subunit around and simply change its specificity subunit? Why not use an entirely new system at each point? We don't know. We haven't found another example of that yet in the cell." Supporting Players Cadre of Proteins Gets Into the Act

What those different mechanisms may be is also becoming clearer, as researchers locate the subordinate proteins in the cell that heed the commanding couple's call to action. Regardless of the stage in division or which type of cyclin is participating in the union, the protein complex does its job of prodding the cell through division with the same technique.

The couple coaxes other proteins into the cell division process by adding phosphate molecules to the deputy proteins, which changes their form and function.

Among the proteins that the CDC-cyclin complex transforms early in cell division are ones that can help generate a new copy of DNA to bestow on a daughter cell. These proteins include molecules that latch onto the existing strand of DNA and help untwirl it, a crucial first step in the replication of its prized genetic information.

Other proteins, when catalyzed by their new phosphate accessories, also clamp onto the DNA and switch on huge batteries of genes, which then perform specialized duties in the baroque operation of synthesizing more DNA.

At a later stage in the cell cycle, once the genetic material has been duplicated, CDC, together with a new cyclin mate, activates important proteins shaped like little spindles. These help yank apart duplicated chromosomes in preparation for division.

The same basic cascading events, with the powerful protein couple doling out phosphate molecules to its minions, seem to apply from yeast to humans, although the complexity soars as the evolutionary ladder is scaled.

"There could be as many as eight different cyclins in mammalian cells," said Dr. Tony Hunter of the Salk Institute in La Jolla, Calif. "And there are probably multiple, parallel pathways that all have to be modulated and regulated before a cell takes its next step in the cycle." Outside Influence The Body And the World

Researchers' big task now is to work their way from the inside of the cell back out to the world around the cell, where signals to divide originate in blood hormones and other growth-promoting molecules in the body.

Here, too, scientists are fast gathering clues. They have begun to understand how certain genes that seem to tune in to growth signals from the bloodstream or from neighboring cells may then communicate with the executors of the cell cycle buried within.

Some of these liaison genes have been found to help spawn cancer when mutated.

"What's really wonderful is that we can begin to think about how all these family members in the cell are talking to each other," said Dr. George F. Vande Woude of the Frederick Cancer Research and Development Center in Maryland. "Some work before cell division, some during, but all are common partners in the dialogue."

In one especially provocative discovery, Dr. Ed Harlow of Cold Spring Harbor Laboratory has focused on a gene called the retinoblastoma gene, known to become mutated in a wide variety of human cancers, including tumors of the eye, bone, breast and bladder.

In its normal state, the retinoblastoma gene acts as an anti-tumor gene, preventing wild cell division. Dr. Harlow's work suggests that the gene could operate in healthy cells by listening to a wide array of external signals. If the combined messages do not suggest the need for division, the gene could keep the cell in repose by repressing the activity of the CDC protein, among other functions.

Should the outside world signal division time, the retinoblastoma molecule could free the protein to mate with a stimulating cyclin partner.

Search for the Full Story

The retinoblastoma gene is an important component of cell growth, but it is only one link in the chain.

Scientists will not be able to boast a complete mastery of cell division until they understand the signals, from the first, faint whisper that new cells are needed, to the end point, when one cell obligingly becomes two. Tracing the pathway back to the original signal that set cell division in motion is a formidable task, but scientists have a rough idea of the sequence.

Initiating signals must come from hormones or other growth-promoting factors in the blood. For example, wounded tissue might release growth hormones to spur surrounding cells to proliferate into a scab.

The hormones would prompt a cell to divide by linking to receptors, proteins studding the surface of the cell that are designed to catch signals from the outside.

Stimulated by the hormones, the receptors would then begin relaying the division signal inward, perhaps by jostling other proteins located a bit deeper within the cell. Like a bucket being passed along a fire brigade, the signal would be carried further and further into the heart of the cell. At some point, the signal must ignite the rapid production of cyclins. The cyclins then meet with CDC proteins and kick the division machinery into action.

But this is little more than a model. Researchers must still identify the vast lineup of proteins between the outside world and the interior machinery of the cell cycle.

"A lot of our understanding of where signals go in the cell and how they're interpreted has been a black box," said Dr. Reed. "But there are plenty of creative people in this field and it's very trendy. So give us a few more years and maybe there won't be any black box left."


Do cells store information other than permanent (chromosome) information - Biology

Answering this most profound question in philosophy and in science gives us plausible answers to many, if not most, of the "great questions" of all time about the nature of reality.

The answer also tells us how the universe itself was and is now being continuously created.

That is why I call myself the information philosopher and encourage others to become information philosophers.

Answering deep philosophical questions with words and concepts has sadly been a failure. We need to get behind the words to the underlying information structures, material, linguistic, and mental.

Although analytic language philosophy is still widely taught, it has made little progress for decades. Professors discover (rediscover) the same ancient problems, forever republishing old concepts with new names and acronyms.

Philosophy has become the history of philosophy.

An information philosopher studies the origin and evolution of information structures, the foundations for all our ideas.

Information philosophy is a dualist philosophy, both materialist and idealist. It is a correspondence theory, explaining how immaterial ideas represent material objects.

In a deterministic universe, information is constant. Logical and mathematical philosophers follow Gottfried Leibniz and Pierre-Simon Laplace, who said a super-intelligent being who knew the information at one instant would know all the past and future. They deny that new information can be created.

An information structure is an object whose elementary particle components have been connected and arranged in an interesting way, as opposed to being dispersed at random throughout space like the molecules in a gas. Information philosophy explains who or what is doing the arranging.

A gas of microscopic material particles in equilibrium is in constant motion, the motion we call heat. But its macroscopic properties, like pressure, temperature, and volume, its total matter and energy, are unchanging. It is said to have maximum possible entropy, or disorder. It contains minimal, possibly zero, internal information, apart from that in the atoms and molecules.

When the second law of thermodynamics was discovered in the nineteenth century, physicists predicted that increasing entropy would destroy all information, and the universe would end in a "heat death." That is not happening.

Many philosophers, philosophers of science, and scientists themselves, still see deterministic "laws of nature" as models for their work. The great success of Newtonian mechanics inspires them to develop mechanical, material and energetic, explanations for biological and mental processes.

Information is neither matter nor energy, although it needs matter to be embodied and energy to be communicated. Why should it become the preferred basis for all philosophy?

As almost everyone knows, matter and energy are conserved. This means that there is just the same total amount of matter and energy today as there was at the universe's origin.

But then what accounts for all the change that we see, the new things under the sun?
It is information, which is not conserved and has been increasing since the beginning of time, despite the second law of thermodynamics, with its increasing entropy, which destroys both order and information.

But the physics of the early universe, famously the first three minutes according to Steven Weinberg, shows us a state near maximum possible entropy for the earliest moments.

How can the universe have begun in equilibrium, near maximal disorder, yet today be in the highly ordered, information-rich state we see around us? This is the fundamental question of information philosophy.

The answer was given to me by my Harvard colleague and mentor David Layzer in the 1970's. In short, it is the expansion of the universe, which continually increases the space available to the limited number of particles, giving them more room and more possibilities to arrange themselves into interesting information structures. This is the basis of a cosmic creation process for all interesting things.

Cosmic creation is only possible because the expansion of space increases faster than the gas particles can get back to equilibrium, making room for the growth of order and information.

I now know that this powerful insight was first seen by Arthur Stanley Eddington in his 1934 book New Pathways in Science.

As pointed out by David Layzer in 1975, Eddington's arrow of time (the direction of entropy increase) points not only to increasing disorder but also to increasing information.
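
Layzer's argument can be stated in one line (a compressed paraphrase, not a quotation): the expanding universe's maximum possible entropy grows faster than its actual entropy, and the widening gap is the room in which order and information can grow.

```latex
% Layzer's growth-of-order argument, compactly (paraphrase):
% S_max(t) is the maximum entropy the universe could have at time t,
% S(t) its actual entropy; expansion keeps S_max ahead of S, and the
% gap measures the potential order, or information.
I(t) \;\propto\; S_{\max}(t) - S(t),
\qquad \frac{dS_{\max}}{dt} > \frac{dS}{dt} \geq 0
```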

At the earliest times, purely physical forces (electromagnetic, nuclear, and gravitational) changed the arrangement of the most fundamental particles of matter and energy, quarks, electrons, gluons, and photons, into information structures like atoms and molecules, then planets, stars and galaxies.

Billions of years later, living things became active information structures. Living things control the flow of matter and energy through themselves and do their own arranging of their matter and energy constituents!

New immaterial information is forever emerging. Human beings are creating new ideas!

Purely physical objects like planets, stars, and galaxies are passive information structures, entirely controlled by fundamental physical forces - the strong and weak nuclear forces, electromagnetism, and gravitation. These objects do not control themselves.

Living things, you and I, are active dynamic growing information structures, forms through which matter and energy continuously flow. And the communication of biological information controls those flows!

Before life as we know it, some information structures blindly replicated their information. Some of these replication processes were fatal mistakes, but very rarely the mistake was an improvement, with better reproductive success. In life today, those random errors produce some of the variations followed by natural selection which adapts living things to their environments.

Even the smallest living things develop behaviors, sensing information about and reacting to their environment, sending signals between their parts (cells, organelles) and to other living things nearby. These behaviors can be interpreted as intentions, goals, and agency, introducing purpose into the universe.

The goals and purposes of living things are not the "final goal" or purpose of Aristotle's Metaphysics that he called "telos" (τέλος). Teleology is the idea that there is a cosmic purpose that preceded the creation of the universe and which points toward an end goal. Teleology underlies many theologies, in which a creator God embodies the telos, just as a sculptor previsualizes the statue within a block of marble. In many religions, the creator thus predestines or predetermines all the events in the universe, a theological idea that fit well with the mechanical and deterministic laws of Nature discovered by Isaac Newton in the seventeenth-century age of enlightenment.

Information is the modern spirit, the ghost in the machine, the mind in the body. It is the soul. When we die, it is information that perishes, unless the future preserves it. The matter remains.

Information philosophers think that if we don't remember the past, we don't deserve to be remembered by the future. This is especially true for the custodians of knowledge.

In the natural sciences the most important references are usually the most recent. In the humanities and social sciences the opposite is often true. The earliest references were invented ideas that became traditional beliefs, now deeply held without further justification.

This website is not based on the work of a single thinker. It includes the work of over five hundred philosophers and scientists, critically analyzed over six decades by this information philosopher, with extensive quotations from the original thinkers and PDFs of major parts of their work (sometimes in the original language).

Information philosophy can explain the fundamental metaphysical connection between materialism and idealism. It replaces the determinism and metaphysical necessity of eliminative materialism and reductionist naturalism with metaphysical possibilities.

Unactualized possibilities exist in minds as immaterial ideas. They are the alternative actions and choices that are the basis for our two-stage model of free will.

The existence (perhaps metaphysical) of alternative possibilities explains how both new ideas and new species arise by chance, the consequence of quantum indeterminism.

Neurobiologists question the usefulness of quantum indeterminism in the brain and mind. But it is the sometimes random firing of particular neurons and their subsequent wiring together that records an individual's experiences, experiences distinctly different in ways that contribute to each unique "self" - what it's like to be me.

Faced with a new experience, the experience recorder and reproducer (ERR) causes some neurons to "play back" those encoded past experiences that are similar in some way to the current experience. The "playback" is complete with the emotions that were attached to the original experiences. Memory of and learning from diverse past experiences provides the context that adds "meaning" to the current experience. The number of past experiences recalled may be very large.

William James described this as a "blooming, buzzing confusion." He called for us to focus attention on the alternative possibilities in his "stream of consciousness." These possibilities are the past experiences of the audience members whose hands are raised in Bernard Baars's "Theater of Consciousness," that give them something relevant to add to the conversation.

Some information enthusiasts claim that information is the fundamental stuff of the universe. It is not. The universe is fundamentally composed of discrete particles of matter and energy. Information describes the arrangement of the matter. Where the arrangement is totally random, there is no information. The organized information in living things has a purpose, to survive and to increase.

Information is the form in all discriminable concrete objects as well as the content in non-existent, merely possible, thoughts and other abstract entities. Information is the disembodied, de-materialized essence of anything.

Perhaps the most amazing thing about information philosophy is its discovery that abstract and immaterial information can exert an influence over concrete matter, explaining how mind can move body, how our thoughts can control our actions, deeply related to the way the quantum wave function (randomly) controls the probabilities of locating quantum particles.

But the random generation of alternative possibilities for thought and action does not mean that our actions themselves are random, provided that the deliberative choice of one action is adequately determined.

Information philosophy goes beyond a priori logic and its puzzles, beyond analytic language and its paradoxes, beyond philosophical claims of necessary truths, to a contingent physical world that is best represented as made of dynamic, interacting information structures.

The creation of new information structures exposes the error of determinism. In a deterministic universe there is no increase of information. All the past, present, and future information is present to the eyes of a super-intelligence, as Pierre-Simon Laplace argued.

Isaac Newton's classical mechanical laws of motion are not only deterministic, they are reversible in time. It is believed by many that if time could be reversed, the entire universe would proceed back in time to its earliest state, like a motion picture played backwards.

Information philosophy has discovered the origin of irreversibility in the early work on quantum mechanics by Albert Einstein. Quantum indeterminism and irreversibility in turn contribute to the origin of information structures, which we have found in the work of Arthur Stanley Eddington and David Layzer. Thirdly, quantum indeterminism and the creation of information structures are the bases for our two-stage model of free will, which we trace back to the thought of William James.

Information is said by some to be a conserved quantity, just like matter and energy. This is not the case. Determinism is a false belief, originating either in the tragic idea of an omniscient and omnipotent God or in the Newtonian idea that unbreakable laws of nature completely control every event, so there can be no human freedom in a completely determined world.

Indeed, belief in determinism is the modern residue of the traditional belief in an overarching explanation - a determinative reason - for everything.

Knowledge can be defined as information in minds - a partial isomorphism of the information structures in the external world. Information philosophy is a correspondence theory.

Sadly, there is no isomorphism, no information in common, between words and objects. As the great Swiss linguist and structuralist philosopher Ferdinand de Saussure pointed out, the connection between most signifiers (words and other symbols) and the things signified (objects and concepts) is arbitrary, a connection established only by cultural convention. This arbitrariness accounts for much of the failure of analytic language philosophy in the past century.

Although language is an excellent tool for human communications, it is arbitrary, ambiguous, and ill-suited to represent the world directly. Human languages cannot "picture" reality, despite the hopes of early logical positivists like Ludwig Wittgenstein.

Information is the lingua franca of the universe.

The extraordinarily sophisticated connections between words and objects are made in human minds, mediated by the brain's experience recorder and reproducer (ERR). Words stimulate neurons to start firing and to play back those experiences that include relevant objects.

Neurons that were wired together in our earliest experiences fire together at later times, contextualizing our new experiences, giving them meaning. And by replaying emotional reactions to similar earlier experiences, it makes them "subjective experiences," giving us the feeling of "what it's like to be me" and solving the "hard problem" of consciousness.

Beyond words, a dynamic information model of an information structure in the world is presented immediately to the mind as a simulation of reality experienced for itself.

Without words and related experiences previously recorded in our mental experience recorders, we could not comprehend words. They would be mere noise, with no meaning.


By comparison, a diagram, a photograph, an animation, or a moving picture can be seen and mostly understood by human beings, independent of their native tongue.

The basic elements of information philosophy are dynamic models of information structures. They go far beyond logic and language as a representation of the fundamental, metaphysical, nature of reality.

Visual and interactive models "write" directly into our mental experience recorders.

Computer animated models can incorporate all the laws of nature, from the differential equations of quantum physics to the myriad information processes of biology.

Computer simulations are not only our most accurate knowledge of the physical world, they are among the best teaching tools ever devised. We can transfer knowledge non-verbally to coming generations in most of the world's population via the Internet and nearly ubiquitous smartphones.

Consider the dense information in Drew Berry's real-time animations of molecular biology. These are the kinds of dynamic models of information structures that we believe can best explain the fundamental nature of reality - "beyond logic and language."

If you think about it, everything you know is pure abstract information. Everything you are is an information structure, a combination of matter and energy that embodies, communicates, and most important, processes your information. Everything that you value contains information.

And while the atoms, molecules, and cells of your body are important, many only last a few minutes and most are completely replaced in just a few years. But your immaterial information, from your original DNA to your latest experiences, will be with you for your lifetime.

You are a creator of new information, part of the cosmic creation process. Your free will depends on your unique ability to create freely generated thoughts, multiple ideas in your mind as alternative possibilities for your willed decisions and responsible actions.

Anyone with a serious interest in philosophy should understand how information is created and destroyed, because information is much more fundamental than the logic and language tools philosophers use today. Information philosophy goes "beyond logic and language."

Information is the sine qua non of meaning. This I-Phi website aims to provide a deep understanding of information that should be in every philosopher's toolbox.

We will show why information should actually be the preferred basis for the critical analysis of current problems in a wide range of disciplines - from information creation in cosmology to information in quantum physics, from information in biology (especially evolution) to psychology, where it offers a solution to the classic mind-body problem and the problem of consciousness. And of course in philosophy, where failed language analysis can be replaced or augmented by immaterial information analysis as a basis for justified knowledge, objective values, human free will, and a surprisingly large number of problems in metaphysics.

Above all, information philosophy hopes to replace beliefs with knowledge. Instead of the primitive idea of an other-worldly creator, we propose a comprehensive explanation of the creation of this world that has evolved into the human creativity that invents such ideas.
The "miracle of creation" is happening now, in the universe and in you and by you.

But what is information? How is it created? Why is it a better tool for examining philosophical problems than traditional logic or linguistic analysis? And what are some examples of classic problems in philosophy, in physics, and in metaphysics with information philosophy solutions?

But in order to remain philosophy, interested philosophers must examine our proposed information-based solutions and evaluate them as part of the philosophical dialogue.

  • The Problem of Free Will
  • The Mind-Body Problem
  • The Problem of Knowledge
  • The Problem of Value
  • The Problem of Consciousness
  • The Problem of Mental Causation
  • The Problem of Other Minds
  • The Problem of Universals
    (The Ontological Status of Ideas)

Information analysis also makes significant progress on a number of the classic problems in metaphysics, many of these virtually unchanged since they were identified as puzzles and paradoxes over two millennia ago, such as The Statue and Lump of Clay, The Ship of Theseus, Dion and Theon, or Tibbles, the Cat, The Growing Problem, The Debtor's Paradox, The Problem of the Many, and The Sorites Problem.

  • Abstract Entities (vs. Material Beings)
  • Parts and Wholes (and Differences)
  • Counterfactuals (or Contingency)
  • Perdurance and Endurance
  • The Arrow of Time
  • The Collapse of the Wave Function
  • The Einstein-Podolsky-Rosen Paradox
  • The Emergence of Classical Physics from Quantum
  • The Interpretation of Quantum Mechanics
  • The Problem of Measurement
  • The Problem of Microscopic Reversibility
  • The Problem of Macroscopic Recurrence
  • The Universe Horizon and Flatness Problems
    (Why Is There Something Rather Than Nothing?)
  • Schrödinger's Cat
  • The Role of the Observer in Quantum Mechanics
  • The Two-Slit Experiment
  • The Relation Between Waves and Particles
  • The Mystery of Entanglement

A common definition of information is the act of informing - the communication of knowledge from a sender to a receiver that informs (literally shapes) the receiver. Often used as a synonym for knowledge, information traditionally implies that the sender and receiver are human beings, but many animals clearly communicate. Information theory studies the communication of information.

Information philosophy extends that study to the communication of information content between material objects, including how it is changed by energetic interactions with the rest of the universe.

We call a material object with information content an information structure. While information is communicated between inanimate objects, they do not process information, which we will show is the defining characteristic of living beings and their artifacts.

The sender of information need not be a person, an animal, or even a living thing. It might be a purely material object, a rainbow, for example, sending color information to your eye.

The receiver, too, might be merely physical, a molecule of water in that rainbow that receives too few photons and cools to join the formation of a crystal snowflake, increasing its information content.

Information theory, the mathematical theory of the communication of information, says little about meaning in a message, which is roughly the use to which the information received is put. Information philosophy extends the information flows in human communications systems and digital computers to the natural information carried in the energy and material flows between all the information structures in the observable universe.

A message that is certain to tell you something you already know contains no new information. It does not increase your knowledge, or reduce the uncertainty in what you know, as information theorists put it.

If everything that happens was certain to happen, as determinist philosophers claim, no new information would ever enter the universe. Information would be a universal constant. There would be "nothing new under the sun." Every past and future event could in principle be known by a god-like super-intelligence with access to a fixed totality of information (Laplace's Demon).
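
To make this concrete, here is a minimal Python sketch (my illustration, not from the text) of Shannon's measure of surprise: a message that was certain to arrive carries zero bits of new information.

```python
import math

def surprisal_bits(p):
    """Bits of new information in receiving a message of probability p."""
    return math.log2(1 / p)

print(surprisal_bits(1.0))    # 0.0  -- a certain message tells you nothing new
print(surprisal_bits(0.5))    # 1.0  -- a fair coin flip carries one bit
print(surprisal_bits(0.001))  # ~10  -- improbable news is highly informative
```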

Physics tells us that the total amount of mass and energy in the universe is a constant. The conservation of mass and energy is a fundamental law of nature. Some mathematical physicists, including leading ones, erroneously think that information should also be a conserved quantity, a constant of nature.

But information is neither matter nor energy, though it needs matter to be embodied and available energy to be communicated. Information can be created and destroyed. The material universe creates it. The biological world creates it and utilizes it. Above all, human minds create, process, and preserve abstract information, the sum of human knowledge that distinguishes humanity from all other biological species and that provides the extraordinary power humans have over our planet, for better or for worse.


We propose information as an objective value, the ultimate sine qua non.

Information philosophy claims that man is not a machine and the brain is not a computer. Living things process information in ways far more complex, if not faster, than the most powerful information processing machines. What biological systems and computing systems have in common is the processing of information, as we shall explain.

Whereas machines are assembled, living things assemble themselves. They are both information structures, patterns, through which matter and energy flows, thanks to flows of negative entropy (available energy) coming from the Sun and the expanding universe. And they both can create new information, build new structures, and maintain their integrity against the destructive influence of the second law of thermodynamics with its increasing positive entropy or disorder.

Biological evolution began when the first molecule replicated itself, that is, duplicated the information it contained. But duplication is mere copying. Biological reproduction is a much more sophisticated process in which the germ or seed information of a new living thing is encoded in a data or information structure (a genetic code) that can be communicated to processing systems that produce another instance of the given genus and species.

Ontologically random imperfections in the processing systems, along with the deliberate introduction of random noise (for example, in sexual recombination), produce the variations that are selected by evolution based on their reproductive success. Errors are not restricted to the genetic code; they occur throughout the development of each individual up to the present.

Cultural evolution is the creation and communication of new information that adds to the sum of human knowledge. The creation and evolution of information processing systems in the universe has culminated in minds that can understand and reflect on what we call the cosmic creation process.

How is information created?

Ex nihilo, nihil fit, said the ancients: nothing comes from nothing. But information is no (material) thing. Information is physical, but it is not material. Information is a property of matter; it is the form that matter can take. We can thus create something (immaterial) from nothing! But we shall find that it takes a special kind of energy (free or available energy, with negative entropy) to do so, because it involves the rearrangement of matter.

Energy transfer to or from an object increases or decreases the heat in the object. Entropy transfer does not change the heat content; it represents only a different organization or distribution of the matter in the body. Increasing entropy represents a loss of organization or order, or, more precisely, information. Maximum entropy is maximum disorder and minimal information.

As you read this sentence, new information is (we hope) being encoded/embodied in your mind/brain. Permanent changes in the synapses between your neurons store the new information. New synapses are made possible by free energy and material flows in your metabolic system, a tiny part of the negative entropy flows that are coursing throughout the universe. Information philosophy will show you how these tiny mental flows allow you to comprehend and control at least part of the cosmic information flows in the universe.

Cosmologists know that information is being created because the universe began some thirteen billion years ago in a state of minimal information. The "Big Bang" started with the most elementary particles and radiation. How matter formed into information structures, first atoms, then the galaxies, stars, and planets, is the beginning of a story that will end with understanding how human minds emerged to understand our place in the universe.

The relation between matter and information is straightforward. The embodied information is the organization or arrangement of the matter plus the laws of nature that describe the motions of matter in terms of the fundamental forces that act between all material particles.

The relation between information and energy is more complex, and has led to confusion about how to apply mathematical information theory to the physical and biological sciences. Material systems in an equilibrium state are maximally disordered, have maximum entropy, no negative entropy, and no information other than the bulk parameters of the system.

In the case of the universe, the initial parameters were very few, the amount of radiant energy (the temperature) and the number of elementary particles (quarks, gluons, electrons, and photons) per unit volume, and the total volume (infinite?). These parameters, and their changes (as a function of time, as the temperature falls) are all the information needed to describe a statistically uniform, isotropic universe and its evolution.

Information philosophy will explain the process of information creation in three fundamental realms - the purely material, the biological, and the mental.

The first information creation was a kind of "order out of chaos," when matter in the early universe opened up spaces allowing gravitational attraction to condense otherwise randomly distributed matter into highly organized galaxies, stars, and planets. It was the expansion - the increasing space between material objects - that drove the universe away from thermodynamic equilibrium (maximum entropy and disorder) and in some places created negative entropy, a quantitative measure of orderly arrangements that is the basis for all information.

Purely material objects react to one another following laws of nature, but they do not in an important sense create or process the information that they contain. It was the expansion, moving faster than the re-equilibration time, and the gravitational forces, that were responsible for the new structures.

A qualitatively different kind of information creation was when the first molecule on earth to replicate itself went on to duplicate its information exponentially. Here the prototype of life was the cause for the creation of the new information structure. Accidental errors in the duplication provided variations in replicative success. Most important, besides creating their information structures, biological systems are also information processors. Living things use information to guide their actions.

With the appearance of life, agency and purpose appeared in the universe, although some philosophers hold that life gives us only the "appearance of purpose."

The third process of information creation, and the most important to philosophy, is human creativity. Almost every philosopher since philosophy began has considered the mind as something distinct from the body. Information philosophy can now explain that distinction. The mind can be considered the immaterial information in the brain. The brain, part of the material body, is a biological information processor. The stuff of mind is the information being processed and the new information being created. As some philosophers have speculated,
mind is the software in the brain hardware.

Most material objects are passive information structures.

Living things are information structures that actively process information. They communicate it between their parts to build, maintain, and repair their (material) information structure, through which matter and energy flow under the control of the information structure itself.

Resisting the second law of thermodynamics locally, living things increase entropy globally much faster than non-living things. But most important, living things increase their information content as they develop. Humans learn from their experiences, storing knowledge in an experience recorder and reproducer (ERR).

Mental things (ideas) are pure abstractions from the material world, but they have control (downward causation) over the material and biological worlds. This enables agent causality. Human minds create information structures, but their unique creation is the collection of abstract ideas that constitutes the sum of human knowledge. It is these ideas that give humanity extraordinary control over the material and biological worlds.

It may come as a surprise for many thinkers to learn that the physics involved in the creation of all three types of information - material, biological, and mental - includes the same two-step sequence of quantum physics and thermodynamics at the core of the cosmic creation process.

The most important information created in a mind is a recording of an individual's experiences (sensations). Recordings are played back (automatically and perhaps mostly unconsciously) as a guide to evaluate future actions (volitions) in similar situations. The particular past experiences reproduced are those stored in the brain located near elements of the current experience (association of ideas).
Just as neurons that fire together wire together, neurons that have been wired together will later fire together.

Sensations are recorded as the mental effects of physical causes.
Sensations are stored as retrievable information in the mind of an individual self. Recordings include not only the five afferent senses but also the internal emotions - feelings of pleasure, pain, hopes, and fears - that accompany an experience. They constitute "what it's like" for a particular being to have an experience.

Volitions are the mental causes of physical effects.
Volitions begin with 1) the reproduction of past experiences that are similar to the current experience. These become thoughts about possible actions and the (partly random) generation of other alternative possibilities for action. They continue with 2) the evaluation of those freely generated thoughts followed by a willful selection (sometimes habitual) of one of those actions.

Volitions are followed by 3) new sensations coming back to the mind indicating that the self has caused the action to happen (or not). This feedback is recorded as further retrievable information, reinforcing the knowledge stored in the mind that the individual self can cause this kind of action (or sometimes not).

Many philosophers and most scientists have held that all knowledge is based on experience. Experience is ultimately the product of human sensations, and sensations are just electrical and chemical interactions with human skin and sense organs. But what of knowledge that is claimed to be mind-independent and independent of experience?

Why is information better than logic and language for solving philosophical problems?

Broadly speaking, modern philosophy has been a search for truth, for a priori, analytic, certain, necessary, and provable truth.

But all these concepts are mere ideas, invented by humans, some aspects of which have been discovered to be independent of the minds that invented them, notably formal logic and mathematics. Logic and mathematics are systems of thought, inside which the concept of demonstrable (apodeictic) truth is useful, but with limits set by Kurt Gödel's incompleteness theorem. The truths of logic and mathematics appear to exist "outside of space and time." Gottfried Leibniz called them "true in all possible worlds," meaning their truth is independent of the physical world. We call them a priori because their proofs are independent of experience, although they were initially abstracted from concrete human experiences.

Analyticity is the idea that some statements, propositions in the form of sentences, can be true by the definitions or meanings of the words in the sentences. This is correct, though limited by verbal difficulties such as Russell's paradox and numerous other puzzles and paradoxes. Analytic language philosophers claim to connect the words with objects, material things, and thereby tell us something about the world. Some modal logicians (cf. Saul Kripke) claim that words that are names of things are necessary a posteriori, "true in all possible worlds." But this is nonsense, because we invented all those words and worlds. They are mere ideas.

Perhaps the deepest of all these philosophical ideas is necessity. Information philosophy can now tell us that there is no such thing as absolute necessity. There is of course an adequate determinism in the macroscopic world that explains the appearance of deterministic laws of nature, of cause and effect, for example. This is because macroscopic objects consist of vast numbers of atoms and their individual random quantum events average out. But there is no metaphysical necessity. At the fundamental microscopic level of material reality, there is an irreducible contingency and indeterminacy. Everything that we know, everything we can say, is fundamentally empirical, based on factual evidence, the analysis of experiences that have been recorded in human minds.

So information philosophy is not what we can logically know about the world, nor what we can analytically say about the world, nor what is necessarily the case in the world. There is nothing that is the case that is necessary and perfectly determined by logic, by language, or by the physical laws of nature. Our world and its future are open and contingent, with possibilities that are the source of new information creation in the universe and source of human freedom.

For the most part, philosophers and scientists do not believe in ontological possibilities, despite their invented "possible worlds," which are on inspection merely multiple "actual worlds." They are "actualists." This is because they cannot accept the idea of ontological chance. They hope to show that the appearance of chance is the result of human ignorance, that chance is merely an epistemic phenomenon.

Now chance, like truth, is just another idea, just some more information. But what an idea! In a self-referential virtuous circle, it turns out that without the real possibilities that result from ontological chance, there can be no new information. Information philosophy offers cosmological and biological evidence for the creation of new information in the universe. So it follows that chance is real, fortunately something that we can keep under control. We are biological beings that have evolved, thanks to chance, from primitive single-cell communicating information structures to multi-cellular organisms whose defining aspect is the creation and communication of information.

The theory of communication of information is the foundation of our "information age." To understand how we know things is to understand how knowledge represents the material world of embodied "information structures" in the mental world of immaterial ideas.

All knowledge starts with the recording of experiences. The experiences of thinking, perceiving, knowing, believing, feeling, desiring, deciding, and acting may be bracketed by philosophers as "mental" phenomena, but they are no less real than other "physical" phenomena. They are themselves physical phenomena.
They are just not material things.

Information philosophy defines human knowledge as immaterial information in a mind, or embodied in an external artifact that is an information structure (e.g., a book), part of the sum of all human knowledge. Information in the mind about something in the external world is a proper subset of the information in the external object. It is isomorphic to a small part of the total information in or about the object. The information in living things, artifacts, and especially machines, consists of much more than the material components and their arrangement (positions over time). It also consists of all the information processing (e.g., messaging) that goes on inside the thing as it realizes its entelechy or telos, its internal or external purpose.

All science begins with information gathered from experimental observations, which are themselves mental phenomena. Observations are experiences recorded in minds. So all knowledge of the physical world rests on the mental. All scientific knowledge is information shared among the minds of a community of inquirers. As such, science is a collection of thoughts by thinkers, immaterial and mental, some might say fundamental. Recall Descartes' argument that the experience of thinking is that which for him is the most certain.

The analysis of language, particularly the analysis of philosophical concepts, which dominated philosophy in the twentieth century, has failed to solve the most ancient philosophical problems. At best, it claims to "dis-solve" some of them as conceptual puzzles. The "problem of knowledge" itself, traditionally framed as "justifying true belief," is recast by information philosophy as the degree of isomorphism between the information in the physical world and the information in our minds. Information psychology can be defined as the study of this isomorphism.

We shall see how information processes in the natural world use arbitrary symbols (e.g., nucleotide sequences) to refer to something, to communicate messages about it, and to give the symbol meaning in the form of instructions for another process to do something (e.g., create a protein). These examples provide support for both theories of meaning as reference and meaning as use.
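
As a concrete illustration of that last point (mine, not the author's), the genetic code can be sketched as exactly such an arbitrary symbol table: a codon's "meaning" is the instruction it gives the translation machinery.

```python
# A minimal sketch of arbitrary symbols acquiring meaning as instructions:
# codons are conventional three-letter symbols whose "meaning" is what the
# ribosome is told to do. Only a few entries of the standard code are shown.
GENETIC_CODE = {
    "AUG": "Met",  # methionine; also the "start" signal
    "UUU": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "UAA": None,   # "stop": a symbol whose meaning is an instruction to halt
}

def translate(mrna):
    """Read an mRNA string three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = GENETIC_CODE[mrna[i:i + 3]]
        if amino_acid is None:
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```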

Note that just as language philosophy is not the philosophy of language, so information philosophy is not the philosophy of information. It is rather the use of information as a tool to study philosophical problems, some of which are today yielding tentative solutions. It is time for philosophy to move beyond logical puzzles and language games.

What are the processes that create emergent information structures in the universe?

How is information created in spite of the second law of thermodynamics?

  1. Universal Gravitation
  2. Quantum Cooperative Phenomena (e.g., crystallization, the formation of atoms and molecules)

Negative entropy is an abstract thermodynamic concept that describes energy with the ability to do work, to make something happen. This kind of energy is often called free energy or available energy.
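
One standard way to make "available energy" precise, added here for concreteness (a textbook relation, not the author's own formula), is the Helmholtz free energy:

```latex
% F is the portion of the internal energy U available to do work at
% temperature T; the entropy S measures the unavailable portion.
% Lower entropy (more order) means more available energy.
F = U - TS
```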

In a maximally disordered state (called thermodynamic equilibrium) there can be matter in motion, the motion we call heat. But the average properties - density, pressure, temperature - are the same everywhere. Equilibrium is formless. Departures from equilibrium are when the physical situation shows differences from place to place. These differences are information.

The second law of thermodynamics then simply means that isolated systems will eliminate differences from place to place until all properties are uniformly distributed. Natural processes spontaneously destroy information. Consider the classic case of what happens when we open a perfume bottle.

In the late nineteenth century, Ludwig Boltzmann revolutionized thermodynamics with his kinetic theory of gases, based on the ancient assumption that matter is made up of collections of atoms. He derived a mathematical formula for entropy S as a function of the probabilities of finding a system in all the possible microstates of a system. When the actual macrostate is one with the largest number W of microstates, entropy is at a maximum, and no differences (information) are visible.
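
For concreteness, here is the formula the paragraph describes (the version engraved on Boltzmann's tombstone), together with the Gibbs generalization to unequal microstate probabilities:

```latex
% Boltzmann entropy: W is the number of microstates consistent with the
% observed macrostate; k_B is Boltzmann's constant.
S = k_B \ln W
% Gibbs form, for microstates occupied with probabilities p_i:
S = -k_B \sum_i p_i \ln p_i
```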

Boltzmann could not prove his "H-Theorem" about entropy increase. His contemporaries challenged a "statistical" entropy increase on grounds of microscopic reversibility and macroscopic recurrence (both problems solved by information philosophy). He could not prove the existence of atoms.

In the early twentieth century, just before Boltzmann died, Albert Einstein formulated a statistical mechanics that put Boltzmann's law of increasing entropy on a firmer mathematical basis. Einstein's work predicted the size of the minuscule fluctuations around equilibrium that Boltzmann had expected. Einstein showed that entropy does not, in fact, continually increase: it can decrease randomly in short-lived bursts of locally higher density or organized motion. Though these fluctuations are quickly extinguished, Einstein showed that the occasionally correlated motions of invisible atoms explain the visible "Brownian motion" of tiny particles like pollen grains.

Einstein's calculations led to predictions that were quickly confirmed, proving the existence of the discrete atoms that had been hypothesized for centuries. Sadly, Boltzmann may not have known of Einstein's proofs of his work. Later Einstein saw the same fluctuations in radiation, supporting his revolutionary hypothesis of light quanta, now called photons. Although this is rarely appreciated, it was Einstein who showed that both matter and energy are discrete, discontinuous particles. His most famous equation, E = mc², shows they are convertible into one another. He also showed that the interaction of matter and radiation, of atoms and photons, always involves ontological chance. This bothered Einstein greatly, because he thought his God should not "play dice."

Late in life, Einstein said that if matter and energy cannot be described with the local continuous analytical functions in space and time needed for his field theories, then all his work would be "castles in the air." But the loss of classical deterministic ideas - which have ossified much of philosophy, crippling philosophical progress - is more than offset by the indeterminism of an open future and Einstein's belief in the "free creation of new ideas."

In the mid-twentieth century, Claude Shannon derived the mathematical formula for the communication of information. John von Neumann found it to be identical to Boltzmann's formula for entropy, though with a minus sign (negative entropy). Where Boltzmann entropy counts the possible microstates of a physical system, Shannon entropy counts the possible messages that can be communicated.

Shannon found that new information cannot be created unless there are multiple possible messages. This in turn depends on the ontological chance discovered by Einstein. In a deterministic universe, the total information at all times would be a constant. Information would be a conserved quantity, like matter and energy. "Nothing new under the Sun." But it is not constant, though many philosophers, mathematical physicists, and theologians (God's foreknowledge) still think so. Information is being created constantly in our universe. And we are co-creators of the information, including Einstein's "new ideas."
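To see the Shannon principle in miniature (a small sketch of our own, not Shannon's notation), the entropy of a message source is zero when only one message is possible, and new information becomes possible only as the alternatives multiply:

    import math

    def shannon_entropy(probabilities):
        """Shannon entropy H = -sum(p * log2(p)), in bits."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # One certain message: no surprise, no new information possible.
    print(shannon_entropy([1.0]))        # 0.0 bits

    # Four equally likely messages: two bits of new information each.
    print(shannon_entropy([0.25] * 4))   # 2.0 bits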

Because "negative" entropy (order or information) is such a positive quantity, we chose in the 1970's to give it a new name - "Ergo," and to call the four phenomena or processes that create negative entropy "ergodic," for reasons that will become clear. But today, the positive name "information" is all that we need to do information philosophy.

It begins with the expansion of the universe. If the universe had not expanded, it would have remained in the original state of thermodynamic equilibrium. We would not be here.

To visualize the departure from equilibrium that made us possible, remember that equilibrium is when particles are distributed evenly in all possible locations in space, and with their velocities distributed by a normal law - the Maxwell-Boltzmann velocity distribution. (The combination of position space and velocity or momentum space is called phase space). When we open the perfume bottle, the molecules now have a much larger phase space to distribute into. There are a much larger number of phase space "cells" in which molecules could be located. It of course takes them time to spread out and come to a new equilibrium state (the Boltzmann "relaxation time.")

When the universe expands, say grows to ten times its volume, it is just like the perfume bottle opening. The matter particles must redistribute themselves to get back to equilibrium. But suppose the universe expansion rate is much faster than the equilibration or relaxation time. The universe is out of equilibrium, and in a flat, ever-expanding, universe it will never get back!
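A back-of-the-envelope version of the perfume-bottle picture (the mole of molecules below is our illustrative assumption): for an ideal gas expanding freely into ten times its volume, the entropy increase is N k ln(V2/V1):

    import math

    k_B = 1.380649e-23   # Boltzmann's constant, J/K
    N = 6.022e23         # one mole of molecules (illustrative)

    delta_S = N * k_B * math.log(10.0)   # volume ratio V2/V1 = 10
    print(f"Entropy increase: {delta_S:.1f} J/K")   # ~19.1 J/K

    # The gas gains an enormous number of new phase-space cells to occupy;
    # the information "all the molecules are in the bottle" is destroyed.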

In the earliest moments of the universe, material particles were in equilibrium with radiation at extraordinarily high temperatures. When quarks formed neutrons and protons, they were short-lived, blasted back into quarks by photon collisions. As the universe expanded, the temperature cooled, the space per photon increased, and the mean free time between photon collisions increased, giving larger particles a better chance to survive. The expansion red-shifted the photons, decreasing the average energy per photon, and eventually reducing the number of high-energy photons that dissociate matter. The mean free path of photons was nevertheless very short, because they were being scattered by collisions with free electrons.

When temperature declined further, to 5000 degrees, about 400,000 years after the "Big Bang," the electrons and protons combined to make hydrogen and (with neutrons) helium atoms.
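As a rough sanity check on those numbers (our own calculation, not part of the original argument): at 5000 K a typical thermal photon carries far less than the 13.6 eV needed to ionize hydrogen, so the newly formed atoms could finally survive:

    k_B_eV = 8.617333e-5   # Boltzmann's constant, eV/K

    T = 5000.0                            # K, around the era of recombination
    typical_photon = 2.7 * k_B_eV * T     # mean blackbody photon energy ~ 2.7 kT
    ionization = 13.6                     # eV, to strip hydrogen's electron

    print(f"{typical_photon:.2f} eV vs {ionization} eV")   # ~1.16 eV << 13.6 eV
    # Most photons can no longer knock electrons off hydrogen,
    # so stable atoms persist and the photons stream freely.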

Density fluctuations meant that there were slight differences from place to place in the newly formed hydrogen gas clouds. The force of universal gravitation then worked to pull relatively formless matter into spherically symmetric stars and planets. This is the original "order out of chaos," although that phrase is now most associated with work on deterministic chaos theory and complexity theory, as we shall see.

Two of our "ergodic" phenomena - gravity and quantum cooperative phenomena - pull matter together that was previously separated. Galaxies, stars, and planets form out of inchoate clouds of dust and gas. Gravity binds the matter together. Subatomic particles combine to form atoms. Atoms combine to form molecules. They are held together by quantum mechanics. In all these cases, a new visible information structure appears.

In order for these structures to stay together, the motion (kinetic) energy of their parts must be radiated away. This is why the stars shine. When atoms join to become molecules, they give off photons. The new structure is now in a (negative) bound energy state. It is the radiation that carries away the positive entropy (disorder) needed to balance the new order (information) in the visible structure.

In the cases of chaotic dissipative structures and life, the ergodic phenomena are more complex, but the result is similar, the emergence of visible information. (More commonly it is simply the maintenance of high-information, low-entropy structures.) These cases appear in far-from-equilibrium situations where there is a flow of matter and energy with negative entropy through the information structure. The flow comes in with low entropy but leaves with high entropy. Matter and energy are conserved in the flow, but information in the structure can increase. Remember, information is not a conserved quantity like matter and energy.

Information is neither matter nor energy, though it uses matter when it is embodied and energy when it is communicated. Information is the immaterial arrangement of the matter and energy.

This vision of life as a visible form through which matter and free energy flow was first seen by Ludwig von Bertalanffy in 1939, though it was made more famous by Erwin Schrödinger's landmark 1944 essay What Is Life?, where he claimed that life "feeds on negative entropy."

Note that the roughly 300 K photons (300 K is the average temperature of the Earth) are dissipated into the dark night sky, on their way to the cosmic microwave background. The Sun-Earth-night sky system is a heat engine, with a hot energy source and a cold energy sink, that converts the temperature difference not into mechanical energy (work) but into biological energy (life).

When new information is created and embodied in a physical structure, two physical processes must occur.

    Process 1. A quantum process - the "collapse of the wave function" - in which multiple possibilities are reduced to a single actuality.

    Process 1b. A thermodynamic process in which an amount of positive entropy greater than the new negative entropy (information) is carried away from the new structure, satisfying the second law of thermodynamics.

Given this new stable information, to the extent that the resulting quantum system can be approximately isolated, the system will deterministically evolve according to von Neumann's Process 2, the unitary time evolution described by the Schrödinger equation.

The first two physical processes (1 and 1b) are parts of the information solution to the "problem of measurement," to which must be added the role of the "observer." We shall see that the observer involves a mental Process 3.

The discovery and elucidation of the first two as steps in the cosmic creation process casts light on some classical problems in philosophy and physics, since it is the same two-step process that creates new biological species and explains the freedom and creativity of the human mind.

The cosmic creation process generates the conditions without which there could be nothing of value in the universe, nothing to be known, and no one to do the knowing. Information itself is the ultimate sine qua non.

    The first of these is the order out of chaos, when the randomly distributed matter in the early universe first gets organized into information structures.

This was not possible before the first atoms formed, about 400,000 years after the Big Bang. Information structures like the stars and galaxies did not exist until about 400 million years after the Big Bang. As we saw, gravitation was the principal driver creating information structures.

Nobel prize winner Ilya Prigogine discovered another ergodic process that he described as the "self-organization" of "dissipative structures." He popularized the slogan "order out of chaos" in an important book. Unfortunately, the "self" in self-organization led to some unrealizable hopes in cognitive psychology. There is no self, in the sense of a person or agent, in these physical phenomena.

Both gravitation and Prigogine's dissipative systems produce a purely physical/material kind of order. The resulting structures contain information. There is a "steady state" flow of information-rich matter and energy through them. But they do not process information. They have no purpose, no "telos."

In his famous essay, "What Is Life?," Erwin Schrödinger noted that life "feeds on negative entropy" (or information). He called this "order out of order."

This kind of biological processing of information first emerged about 3.5 billion years ago on the earth. It continues today on multiple emergent biological levels, e.g., single cells, multi-cellular systems, organs, etc., each level creating new information structures and information processing systems not reducible to (caused by) lower levels and exerting downward causation on them.

And this downward causal control is extremely fine, managing the motions and arrangements of individual atoms and molecules.

Biological systems are cognitive systems, using internal "subjective" knowledge to recognize and interact with their "objective" external environment, communicating meaningful messages to their internal components and to other individuals of their species with a language of arbitrary symbols, taking actions to maintain themselves and to expand their populations by learning from experience.

With the emergence of life, "purpose" also entered the universe. It is not the pre-existent "teleology" of many idealistic philosophies (the idea of "essence" before "existence"), but it is the "entelechy" of Aristotle, who saw that living things have within them a purpose, an end, a "telos." To distinguish this evolved telos in living systems from teleology, modern biologists use the term "teleonomy."

This kind of information can be highly abstract mind-stuff, pure Platonic ideas, the stock in trade of philosophers. It is neither matter nor energy (though embodied in the material brain), a kind of pure spirit or ghost in the machine. It is a candidate for the immaterial dualist "substance" of René Descartes, though it is probably better thought of as a "property dualism," since information is an immaterial property of all matter.

The information stored in the mind is not only abstract ideas. It contains a recording of the experiences of the individual. In principle every experience may be recorded, though not all may be reproducible/recallable.

The negative entropy (order, or potential information) generated by the universe expansion is a tiny amount compared to the increase in positive entropy (disorder). Sadly, this is always the case when we try to get "order out of order," as can be seen by studying entropy flows at different levels of emergent phenomena.

In any process, the positive entropy increase is always at least equal to, and generally orders of magnitude larger than, the negative entropy in any created information structures, to satisfy the second law of thermodynamics. The positive entropy is named for Boltzmann, since it was his "H-Theorem" that proved entropy can only increase overall - the second law of thermodynamics. And negative entropy is called Shannon, since his theory of information communication has exactly the same mathematical formula as Boltzmann's famous principle

    S = k log W,

where S is the entropy, k is Boltzmann's constant, and W is the number of microstates of the given macrostate of the system (its "thermodynamic probability").

Material particles are the first information structures to form in the universe. They are quarks, baryons, and atomic nuclei, which eventually combine with electrons to form atoms, and eventually molecules, when the falling temperature becomes low enough. These material particles are attracted by the force of universal gravitation to form the gigantic information structures of the galaxies, stars, and planets.

Microscopic quantum mechanical particles and huge self-gravitating systems are stable and have extremely long lifetimes, thanks in large part to quantum stability. Stars are another source of radiation, after the original Big Bang cosmic source, which has cooled down to about 3 kelvin (3 K) and shines as the cosmic microwave background radiation.

Our solar radiation has a high color temperature (5000 K) and a low energy-content temperature (273 K). It is out of equilibrium, and it is the source of all the information-generating negative entropy that drives biological evolution on the Earth. Note that the light falling on Earth is less than a billionth of that which passes by and is lost in space.

A tiny fraction of the solar energy falling on the earth gets converted into the information structures of plants and animals. Most of it gets converted to heat and is radiated away as waste energy to the night sky.

Every biological structure is a quantum mechanical structure. Quantum cooperative phenomena allow DNA to maintain its stable information structure over billions of years in the constant presence of chaos and noise. And biological structures contain astronomical numbers of particles, allowing them to average over the random noise of individual quantum events, becoming "adequately determined."

The stable information content of a human being survives many changes in the material content of the body during a person’s lifetime. Only with death does the mental information (spirit, soul) dissipate - unless it is saved somewhere.

The total mental information in a living human is orders of magnitude less than the information content and information processing rate of the body. But the cultural information structures created by humans outside the body, in the form of external knowledge like this book, and the enormous collection of human artifacts, now rival the total biological information content.

We can simplify this to define the Shannon Principle. No new information can be created in the universe unless there are multiple possibilities, only one of which can become actual.

An alternative statement of the Shannon principle is that in a deterministic system, information is conserved, unchanging with time. Classical mechanics is a conservative system that conserves not only energy and momentum but also conserves the total information. Information is a "constant of the motion" in a determinist world.
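A toy illustration of information as a "constant of the motion" (entirely our own construction): a deterministic, invertible rule never merges two distinct states, so their distinguishability - their information - is conserved, while a many-to-one rule destroys it:

    # Deterministic, invertible dynamics: a permutation of four states.
    reversible_step = {0: 2, 1: 0, 2: 3, 3: 1}

    # Many-to-one dynamics: distinct states merge and information is lost.
    irreversible_step = {0: 0, 1: 0, 2: 3, 3: 3}

    def evolve(step, states, n=5):
        for _ in range(n):
            states = {step[s] for s in states}
        return states

    print(evolve(reversible_step, {0, 1, 2, 3}))    # {0, 1, 2, 3}: still distinguishable
    print(evolve(irreversible_step, {0, 1, 2, 3}))  # {0, 3}: two states' histories erased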

Quantum mechanics, by contrast, is indeterministic. It involves irreducible ontological chance.

An isolated quantum system is described by a wave function ψ which evolves - deterministically - according to the unitary time evolution of the linear Schrödinger equation.

The possibilities of many different outcomes evolve deterministically, but the individual actual outcomes are indeterministic.

This sounds a bit contradictory, but it is not. It is the essence of the highly non-intuitive quantum theory, which combines a deterministic "wave" aspect with an indeterministic "particle" aspect.

In his Mathematical Foundations of Quantum Mechanics, John von Neumann distinguished two fundamental processes:

    Process 1. A non-causal process, in which the measured electron winds up randomly in one of the possible physical states (eigenstates) of the measuring apparatus plus electron.

The probability for each eigenstate is given by the square of the coefficient cn in the expansion of the original system state (wave function ψ) in an infinite set of wave functions φn that represent the eigenfunctions of the measuring apparatus plus electron.

This is as close as we get to a description of the motion of the "particle" aspect of a quantum system. According to von Neumann, the particle simply shows up somewhere as a result of a measurement.

Information physics says that the particle shows up whenever a new stable information structure is created, information that can be observed.

Von Neumann claimed there is another major difference between these two processes. Process 1 is thermodynamically irreversible. Process 2 is in principle reversible. This confirms the fundamental connection between quantum mechanics and thermodynamics that is explainable by information physics.

Information physics establishes that process 1 may create information. It is always involved when information is created.

Process 2 is deterministic and information preserving.

The first of these processes has come to be called the collapse of the wave function.

It gave rise to the so-called problem of measurement, because its randomness prevents it from being a part of the deterministic mathematics of process 2.

But isolation is an ideal that can only be approximately realized. Because the Schrödinger equation is linear, a wave function | ψ > can be a linear combination (a superposition) of another set of wave functions | φn >,

    | ψ > = Σn cn | φn >,

where the squared coefficients |cn|² are the probabilities of finding the system in the possible state | φn > as the result of an interaction with another quantum system.

Quantum mechanics introduces real possibilities, each with a calculable probability of becoming an actuality, as a consequence of one quantum system interacting (for example colliding) with another quantum system.
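A small numerical sketch of such an expansion (the coefficients and the unitary matrix are arbitrary choices of ours): Process 2 deterministically rotates the coefficients cn, while Process 1 indeterministically selects one actual outcome with probability |cn|²:

    import numpy as np

    rng = np.random.default_rng()

    # An arbitrary superposition over three eigenstates |phi_n>.
    c = np.array([3, 4, 5], dtype=complex)
    c = c / np.linalg.norm(c)                 # normalize: probabilities sum to 1

    # Process 2 (deterministic): unitary evolution only rotates the phases here.
    U = np.diag(np.exp(1j * 0.7 * np.arange(3)))
    c = U @ c

    probs = np.abs(c) ** 2                    # Born rule: |c_n|^2
    print(probs)                              # [0.18 0.32 0.5 ]

    # Process 1 (indeterministic): one possibility becomes actual.
    print("measured eigenstate:", rng.choice(3, p=probs))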

It is quantum interactions that lead to new information in the universe - both new information structures and information processing systems. But that new information cannot subsist unless a compensating amount of entropy is transferred away from the new information.

Even more important, it is only in cases where information persists long enough for a human being to observe it that we can properly describe the observation as a "measurement" and the human being as an "observer." So, following von Neumann's "process" terminology, we can complete his admittedly unsuccessful attempt at a theory of the measuring process by adding an anthropomorphic Process 3 - the conscious observer's recording of the new information in a mind.

The classical laws of motion, with their implicit determinism and strict causality, emerge when microscopic events can be ignored.

Information philosophy interprets the wave function ψ as a "possibilities" function. With this simple change in terminology, the mysterious process of a wave function "collapsing" becomes a much more intuitive discussion of possibilities, with mathematically calculable probabilities, turning into a single actuality, faster than the speed of light.

Information physics is standard quantum physics. It accepts the Schrödinger equation of motion, the principle of superposition, the axiom of measurement (now including the actual information "bits" measured), and - most important - the projection postulate of standard quantum mechanics (the "collapse" so many interpretations deny).

But a conscious observer is not required for a projection, for the wave-function "collapse", for one of the possibilities to become an actuality. What it does require is an interaction between (quantum) systems that creates irreversible information.

In less than two decades of the mid-twentieth century, the word information was transformed from a synonym for knowledge into a mathematical, physical, and biological quantity that can be measured and studied scientifically.

In 1929, Leo Szilard connected an increase in thermodynamic (Boltzmann) entropy with any increase in information that results from a measurement, solving the problem of "Maxwell's Demon," a thought experiment suggested by James Clerk Maxwell, in which a local reduction in entropy is possible when an intelligent being interacts with a thermodynamic system.
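Szilard's analysis puts a minimum thermodynamic price on each bit of information the demon acquires - k T ln 2 of entropy-producing work per bit (the room-temperature number below is our illustration):

    import math

    k_B = 1.380649e-23   # Boltzmann's constant, J/K
    T = 300.0            # room temperature, K

    cost_per_bit = k_B * T * math.log(2)
    print(f"{cost_per_bit:.3e} J per bit")   # ~2.871e-21 J

    # The demon's measurement generates at least this much dissipation
    # elsewhere, so the second law survives its local entropy reduction.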

In the early 1940s, digital computers were invented by von Neumann, Shannon, Alan Turing, and others. Their machines could run a stored program to manipulate stored data, processing information, as biological organisms had been doing for billions of years.

Then in the late 1940s, the problem of communicating digital data signals in the presence of noise was first explored by Shannon, who developed the modern mathematical theory of the communication of information. Norbert Wiener wrote in his 1948 book Cybernetics that "information is the negative of the quantity usually defined as entropy," and in 1949 Leon Brillouin coined the term "negentropy."

Finally, in the early 1950s, inheritable characteristics were shown by Francis Crick, James Watson, and George Gamow to be transmitted from generation to generation in a digital code.

A living being is a form through which passes a flow of matter and energy (with low or negative entropy). Genetic information is used to build the information-rich matter into an information-processing structure that contains a very large number of hierarchically organized information structures.

All biological systems are cognitive, using their internal information structure to guide their actions. Even some of the simplest organisms may learn from experience. The most primitive minds are experience recorders and reproducers.

In humans, the information-processing structures create new actionable information (knowledge) by consciously and unconsciously reworking and reusing the experiences stored in the mind.

Emergent higher levels exert downward causation on the contents of the lower levels, ultimately supporting mental causation and free will.

When ribosomes assemble some 574 amino acids into four polypeptide chains (two alpha and two beta globins), each globin traps an iron atom in a heme group at its center to form the hemoglobin protein. This is downward causal control of the amino acids, the heme groups, and the iron atoms by the ribosome. The ribosome is an example of Erwin Schrödinger's emergent "order out of order," life "feeding on the negative entropy" of digested food.

Notice the absurdity of the idea that the random motions of the transfer RNA molecules, each holding a single amino acid, are carrying pre-determined information about where they belong in the protein being built.

Determinism is an emergent property and an ideal philosophical concept, unrealizable except approximately in the kind of adequate determinism that we experience in the macroscopic world, where the determining information is part of the higher-level control system.

The total information in multi-cellular living beings can develop to be many orders of magnitude more than the information present in the original cell. The creation of this new information would be impossible for a deterministic universe, in which information is constant.

Immaterial information is perhaps as close as a physical or biological scientist can get to the idea of a soul or spirit that departs the body at death. When a living being dies, it is the maintenance of biological information that ceases. The matter remains.

Biological systems are different from purely physical systems primarily because they create, store, and communicate information. Living things store information in a memory of the past that they use to shape their future. Fundamental physical objects like atoms have no history.

And when human beings export some of their personal information to make it a part of human culture, that information moves closer to becoming immortal.

Human beings differ from other animals in their extraordinary ability to communicate information and store it in external artifacts. In the last decade the amount of external information per person may have grown to exceed an individual's purely biological information.

Since the 1950's, the science of human behavior has changed dramatically from a "black box" model of a mind that started out as a "blank slate" conditioned by environmental stimuli. Today's mind model contains many "functions" implemented with stored programs, all of them information structures in the brain. The new "computational model" of cognitive science likens the brain to a computer, with some programs and data inherited and others developed as appropriate reactions to experience.

The ERR (Experience Recorder and Reproducer) model stands in contrast to the popular cognitive science or "computational" model of a mind as a digital computer. No algorithms, data addressing schemes, or stored programs are needed for the ERR model.

The physical metaphor is a non-linear random-access data recorder, where data is stored using content-addressable memory (the memory address is the data content itself). Simpler than a computer with stored algorithms, a better technological metaphor might be a video and sound recorder, enhanced with the ability to record - and replay - smells, tastes, touches, and critically essential, feelings.

The biological model is neurons that wire together during an organism’s experiences, in multiple sensory and limbic systems, such that later firing of even a part of the wired neurons can stimulate firing of all or part of the original complex.

A conscious being is constantly recording information about its perceptions of the external world, and most importantly for ERR, it is simultaneously recording its feelings. Sensory data such as sights, sounds, smells, tastes, and tactile sensations are recorded in a sequence along with pleasure and pain states, fear and comfort levels, etc.

All these experiential and emotional data are recorded in association with one another. This means that when the experiences are reproduced (played back in a temporal sequence), the accompanying emotions are once again felt, in synchronization.

The ability to reproduce an experience is critical to learning from past experiences, so as to make them guides for action in future experiences. The ERR model is the minimal mind model that provides for such learning by living organisms.

The ERR model does not need computer search, retrieval, and decision algorithms to reproduce past experiences. All that is required is that relevant past experiences “play back” whenever they are stimulated by present experiences that resemble the past experiences in one or more ways.

All or most of these relevant past experiences appear before the mind as alternative possibilities for evaluation as thoughts and actions. Decisions can be made based on the relative values of past outcomes.

Neuroscientist Donald Hebb's insight that "neurons that fire together wire together" is widely accepted today. The ERR model of information philosophy is built on the simple consequence of Hebb's work that "neurons that have been wired together will fire together."
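As a toy model of such wiring and playback (a textbook Hopfield-style sketch of Hebbian learning - our choice of illustration, not anything from the ERR literature), a pattern stored in Hebbian weights is reproduced from a partial cue:

    import numpy as np

    # One stored "experience": firing (+1) and silent (-1) neurons.
    experience = np.array([1, -1, 1, 1, -1, 1, -1, -1])

    # Hebbian wiring: neurons that fire together get positive weights.
    W = np.outer(experience, experience).astype(float)
    np.fill_diagonal(W, 0.0)

    # A partial cue: only the first three neurons are stimulated.
    cue = np.array([1, -1, 1, 0, 0, 0, 0, 0], dtype=float)

    # Wired together, fire together: one update recalls the whole pattern.
    recalled = np.sign(W @ cue)
    print(np.array_equal(recalled, experience))   # True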

Neuroscientists and philosophers of mind have long asked how diverse signals from multiple locations in the brain, carried over multiple pathways, appear so unified in the brain. The ERR model offers a simple solution to this "binding" problem. Experiences are bound at their initial recording. They do not have to be re-associated by some central processing unit looking up where experiences may have been distributed among the various memory or sensory-motor areas of the brain.

The ERR model may also throw some light on the problem of "qualia" and of "what it's like to be" a particular organism.

Modern philosophy seeks knowledge in logical reasoning with clear and unchanging concepts. Its guiding lights are thinkers like Parmenides, Plato, and Kant, who sought unity and identity, being and universals.

Information philosophy, by contrast, is a story about invention, about novelty, about biological emergence and new beginnings unseen and unseeable beforehand, a past that is fixed but an ambiguous future that can be shaped by teleonomic changes in the present.

Its model thinkers are Heraclitus, Protagoras, Aristotle, and Hegel, for whom time, place, and particular situations mattered.

Information philosophy is built on probabilistic laws of nature. The fundamental challenge for information philosophy is to explain the emergence of stable information structures from primordial and ever-present chaos, to account for the phenomenal success of deterministic laws when the material substrate of the universe is irreducibly chaotic, noisy, and random, and to understand the concepts of truth, necessity, and certainty in a universe of chance, contingency, and indeterminacy.

Determinism and the exceptionless causal and deterministic laws of classical physics are the real illusions. Determinism is information-preserving. In an ideal deterministic Laplacian universe, the present state of the universe is implicitly contained in its earliest moments. There is "nothing new under the sun."

This ideal determinism does not exist. The "adequate determinism" behind the laws of nature emerged from the early years of the universe when there was only the indeterministic chaos of "thermodynamic equilibrium" and its maximal entropy or disorder.

In a random noisy environment, how can anything be regular and appear determined? It is because the macroscopic consequences of the law of large numbers average out microscopic quantum fluctuations to provide us with a very adequate determinism for large objects.
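A quick numerical illustration of that averaging (our own): the relative size of random fluctuations in a sum of N independent events shrinks as 1/sqrt(N), which is utterly negligible at macroscopic scales:

    import math

    for N in (100, 10**6, 6.022e23):   # a few events ... a mole of atoms
        print(f"N = {N:.0e}: relative fluctuation ~ {1 / math.sqrt(N):.1e}")

    # N = 1e+02: ~1.0e-01  (quantum noise plainly visible)
    # N = 1e+06: ~1.0e-03
    # N = 6e+23: ~1.3e-12  (adequate determinism for macroscopic objects)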

Information Philosophy is an account of continuous information creation, a story about the origin and evolution of the universe, of life, and of intelligence from an original quantal chaos that is still present in the microcosmos. More than anything else, it is the creation and maintenance of stable information structures, despite the destructive entropic requirements of the second law of thermodynamics. Creation of living information structures distinguishes biology from physics and chemistry.

Living things store useful information in a memory of the past that they can use to shape the future. The "meaning" in the information is their use of it. Some get their information "built-in" via heredity. Some learn it from experience. Others invent it!

Ancient philosophy, before the advent of medieval theology with John Duns Scotus and Thomas Aquinas, and medieval philosophy, before the beginning of modern philosophy with René Descartes, covered the same wide range of questions now addressable by information philosophy.

In the 1950's, we studied the then leading philosophies of positivism and existentialism.

Bertrand Russell, with the help of G. E. Moore, Alfred North Whitehead, and Ludwig Wittgenstein, proposed logic and language as the proper foundational basis, not only of philosophy, but also of mathematics and science. Wittgenstein's Tractatus imagined that the totality of true propositions could capture all the knowledge of modern science ("or the whole corpus of the natural sciences").

Their logical positivism and the variation called logical empiricism developed by Rudolf Carnap and the Vienna Circle proved to be failures in grounding philosophy, mathematics, or science.

On the continent, existentialism was the rage. We read Friedrich Nietzsche, Martin Heidegger, and Jean-Paul Sartre.

We wrote that "Values without freedom are useless. Freedom without values is absurd."

This was a chiasmus like Immanuel Kant's great figure ("thoughts without content are empty, intuitions without concepts are blind"), rephrased by Charles Sanders Peirce as "Idealism without Materialism is Empty. Materialism without Idealism is Blind."

In the 1960's, we formulated arguments that cited "pockets of low entropy," in apparent violation of the second law, as the possible basis for anything with objective value. We puzzled over the origin of "negative entropy," since the universe was believed to have started in thermodynamic equilibrium and the second law of thermodynamics says that (positive) entropy can only increase.

In the late 1960's, we developed a two-stage model of free will and called it Cogito, a term often associated with the mind and with thought. With deference to Descartes, the first modern philosopher, we called "negative entropy" Ergo. While thermodynamics calls it "negative," information philosophy sees it as the ultimate "positive" and deserving of a better name. We thought that Ergo etymologically suggests a fundamental kind of energy ("erg" zero), e.g., the "Gibbs free energy," G0, that is available to do work because it has low entropy.

In the early 70's, we decided to call the sum of human knowledge the Sum, to complete the triple wordplay on Descartes' proof of his existence.

We saw a great battle going on in the universe - between originary chaos and emergent cosmos. The struggle is between destructive chaotic processes that drive a microscopic underworld of random events versus constructive cosmic processes that create information structures with extraordinary emergent properties that include adequately determined scientific laws - despite, and in many cases making use of, the microscopic chaos.

Since the destructive chaos is entropic, we repurposed a term from statistical mechanics and called the anti-entropic processes creating information structures ergodic. The embedded Ergod resonated.

Created information structures range from galaxies, stars, and planets, to molecules, atoms, and subatomic particles. They are the structures of terrestrial life from viruses and bacteria to sentient and intelligent beings. And they are the constructed ideal world of thought, of intellect, of spirit, including the laws of nature, in which we humans play a role as co-creator.

Information is constant in a deterministic universe. There is "nothing new under the sun." The creation of new information is not possible without the random chance and uncertainty of quantum mechanics, plus the extraordinary temporal stability of quantum mechanical structures.

It is of the deepest philosophical significance that information is based on the mathematics of probability. If all outcomes were certain, there would be no "surprises" in the universe. Information would be conserved and a universal constant, as some mathematicians mistakenly believe. Information philosophy requires the ontological uncertainty and probabilistic outcomes of modern quantum physics to produce new information.

But at the same time, without the extraordinary stability of quantized information structures over cosmological time scales, life and the universe we know would not be possible. That stability is the consequence of an underlying digital nature. Quantum mechanics reveals the architecture of the universe to be discrete rather than continuous, to be digital rather than analog. Digital information transfers are essentially perfect. All analog transfers are "lossy."
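A toy contrast between the two kinds of transfer (our own construction): repeated analog copying accumulates noise, while digital copying re-quantizes - and thus restores - the signal at every generation:

    import random

    signal = [0.0, 1.0, 1.0, 0.0, 1.0]     # the "message," as voltage-like levels

    analog, digital = signal[:], signal[:]
    for _ in range(100):                    # 100 generations of copying
        noise = [random.gauss(0, 0.05) for _ in signal]
        analog = [a + n for a, n in zip(analog, noise)]           # noise accumulates
        digital = [round(d + n) for d, n in zip(digital, noise)]  # re-quantized each copy

    print(analog)    # drifted away from the original levels
    print(digital)   # still exactly [0, 1, 1, 0, 1]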

Moreover, the "correspondence principle" of quantum mechanics and the "law of large numbers" of statistics ensure that macroscopic objects can normally average out microscopic uncertainties and probabilities to provide the "adequate determinism" that shows up in all our "Laws of Nature."

Information philosophy explores some classical problems in philosophy with deeper and more fundamental insights than are possible with the logic and language approach of modern analytic philosophy.

By exploring the origins and evolution of structure in the universe, information philosophy transcends humanity and even life itself, though it is not a mystical metaphysical transcendence.

Information philosophy uncovers the creative process working in the universe to which we owe our existence, and therefore perhaps our reverence for its "providence."

Information philosophy locates the fundamental source of all values not in humanity ("man the measure"), not in bioethics ("life the ultimate good"), but in the origin and evolution of information in the cosmos.

Information philosophy is an idealistic philosophy, a process philosophy, and a systematic philosophy, the first in many decades. It provides important new insights into the Kantian transcendental problems of epistemology, ethics, freedom of the will, god, and immortality, as well as the mind-body problem, consciousness, and the problem of evil.

In physics, information philosophy (or information physics) provides new insights into the problem of measurement, the paradox of Schrödinger's Cat, the two paradoxes of microscopic reversibility and macroscopic recurrence that Josef Loschmidt and Ernst Zermelo used to criticize Ludwig Boltzmann's explanation of the entropy increase required by the second law of thermodynamics, and finally information provides a better understanding of the entanglement and nonlocality phenomena that are the basis for modern quantum cryptography and quantum computing.

Finally, a new philosophy of biology should be based on the deep understanding of organisms as information users, information creators, information communicators, and at the higher levels, information processors, including humans, who have learned to store information externally and transfer it between the generations culturally. Except for organisms that can extract information through photosynthesis from the negative entropy (free or available energy) streaming from the sun, most living things destroy other cells to extract the information needed to maintain their own low-entropy state of organization. Most life feeds on other life.

And most life communicates with other life. Even single cells, before the emergence of multicellular organisms, developed communication systems between the cells that are still visible in slime molds and social amoebae today. In a multicellular organism, every cell has some level of communication with all the others. Most higher level organisms share communal information that makes them stronger as a social group than as independent individuals. The sum of human knowledge has amplified the power of humanity, for better or worse, to a level that can control the environmental conditions on all of planet Earth.

Information biology is the hypothesis that all biological evolution should be viewed primarily as the development of more and more powerful users, creators, and communicators of information. Seen through the lens of information, humans are the current end product of information processing systems. With the emergence of life and mind, purpose (telos) appeared in the universe. The teleonomic goal of each cell is to become two cells, which replicates its information content. The purpose of each species is to improve its reproductive success relative to other populations. The purpose of human populations then is to use, to add to, and to communicate human knowledge in order to maximize the human capital per person.

Like love, the information that is shared by educating others is not used up. Information is not a scarce economic good. The more that information is communicated, the more of it there is, in human minds (not brains), and in the external stores of human knowledge. These include books of course, but in the future they will be the interconnected knowledge bases of the world wide web, including www.informationphilosopher.com, since books are expensive and inaccessible for many.

The first thing we must do for the young is to teach them how to teach themselves by accessing these knowledge systems with handheld devices that will some day be available for all the world's children, beyond one laptop per child to one smartphone per child.

Based on insights into the discovery of the cosmic creation process, the Information Philosopher proposes three primary ideas that are new approaches to perennial problems in philosophy. They are likely to change some well-established philosophical positions. Even more important, they may reconcile idealism and materialism and provide a new view of how humanity fits into the universe.

    An explanation or epistemological model of knowledge formation and communication. Knowledge and information are neither matter nor energy, but they require matter for expression and energy for communication. They seem to be metaphysical.

    A foundation for objective value in the creation and preservation of information structures, and for disvalue in their entropic destruction.

    A two-stage model of free will, separating the free generation of alternative possibilities from an adequately determined decision.

Briefly, we find positive value (or good) in information structures. We see negative value (or evil) in disorder and entropy tearing down such structures. We call energy with low entropy "Ergo" and call anti-entropic processes "ergodic." We recognize that "ergodic" is itself too esoteric and thus not likely to be widely accepted. Perhaps the most positive term for what we value is just "information" itself! Our first categorical imperative is then "act in such a way as to create, maintain, and preserve information as much as possible against destructive entropic processes."

Our second ethical imperative is "share knowledge/information to the maximum extent." Like love, our own information is not diminished when we share it with others.

Our third moral imperative is "educate (share the knowledge of what is right) rather than punish." Knowledge is virtue. Punishment wastes human capital and provokes revenge.

Briefly, we separate "free" and "will" in a two-stage process - first the free generation of alternative possibilities for action (which creates new information), then an adequately determined decision by the will. We call this two-stage view our Cogito model and trace the idea of a two-stage model in the work of two dozen thinkers back to William James in 1884.

This model is a synthesis of adequate determinism and limited indeterminism, a coherent and complete compatibilism that reconciles free will with both determinism and indeterminism.

David Hume thought he had reconciled freedom with determinism. We reconcile free will with indeterminism and an "adequate" determinism.

Because it makes free will compatible with both a form of determinism (really determination) and with an indeterminism that is limited and controlled by the mind, the leading libertarian philosopher Bob Kane suggested we call this model "Comprehensive Compatibilism."

The problem of free will cannot be solved by logic, language, or even by physics. Man is not a machine, and the mind is not a computer. Free will is a property of a biophysical information-processing system.

All three ideas depend on understanding modern cosmology, physics, biology, and neuroscience, but especially the intimate connection between quantum mechanics and the second law of thermodynamics that allows for the creation of new information structures.

All three are based on the theory of information, which alone can establish the existential status of ideas, not just the ideas of knowledge, value, and freedom, but other-worldly speculations in natural religion like God and immortality.

All three have been anticipated by earlier thinkers, but can now be defended on strong empirical grounds. Our goal is less to innovate than to reach the best possible consensus among philosophers living and dead, an intersubjective agreement between philosophers that is the surest sign of a knowledge advance.

This Information Philosopher website aims to be an open resource for the best thinking of philosophers and scientists on these three key ideas and a number of lesser ideas that remain challenging problems in philosophy - on which information philosophy can shed some light.

Among these are the mind-body problem (the mind can be seen as the realm of information in its free thoughts, the body an adequately determined biological system creating and maintaining information); the common-sense intuition of a cosmic creative process, often anthropomorphized as a God or divine Providence; the problem of evil (chaotic entropic forces are the devil incarnate); and the "hard problem" of consciousness (agents responding to their environment, and originating new causal chains, based on information processing).

Philosophy is the love of knowledge or wisdom. Information philosophy (I-Phi or ΙΦ) qualifies and quantifies knowledge as meaningful actionable information. Information philosophy reifies information as an immaterial entity that has causal power over the material world!

What is information that merits its use as the foundation of a new method of inquiry?

Abstract information is neither matter nor energy, yet it needs matter for its concrete embodiment and available usable energy for its communication. Information is the modern spirit, the ghost in the machine. It is the stuff of thought, the immaterial substance of philosophy.

Information is a powerful diagnostic tool. It is a better abstract basis for philosophy, and for science as well, especially physics, biology, and neuroscience. It is capable of answering questions about metaphysics (the ontology of things themselves), epistemology (the existential status of ideas and how we know them), and idealism itself.

Information philosophy is now more than the solution to three fundamental problems we identified in the 1960's and '70's. I-Phi is a new philosophical method, capable of solving multiple problems in both philosophy and physics. It needs young practitioners, presently tackling some problem, who might investigate the problem using this new methodology.

Note that, just as the philosophy of language is not linguistic philosophy, information philosophy is not the philosophy of information, which is mostly about computers and cognitive science, the computational theory of mind.

Philosophers like Ludwig Wittgenstein labeled many of our problems “philosophical puzzles.” Bertrand Russell called them “pseudo-problems.” Analytic language philosophers thought many of these problems could be “dis-solved,” revealing them to be conceptual errors caused by the misuse of language.

Information philosophy takes us past logical puzzles and language games, not by diminishing philosophy and replacing it with science, but by using information as a new tool to attack philosophy's classic problems.

The language philosophers of the twentieth century thought that they could solve (or at least dis-solve) the classical problems of philosophy. They did not succeed. Information philosophy, by comparison, now has cast a great deal of light on some of those problems. It needs more information philosophers to join us to make more progress.

To recap, when information is stored in any structure, two fundamental physical processes occur. First is a "collapse" of a quantum mechanical wave function, reducing multiple possibilities to a single actuality. Second is a local decrease in the entropy corresponding to the increase in information. Entropy greater than that must be transferred away from the new information structure to satisfy the second law of thermodynamics.

These quantum level processes are susceptible to noise. Information stored may have errors. When information is retrieved, it is again susceptible to noise. This may garble the information content. In information science, noise is generally the enemy of information. But some noise is the friend of freedom, since it is the source of novelty, of creativity and invention, and of variation in the biological gene pool.

Biological systems have maintained and increased their invariant information content over billions of generations, coming as close to immortality as living things can. Philosophers and scientists have increased our knowledge of the external world, despite logical, mathematical, and physical uncertainty. They have created and externalized information (knowledge) that can in principle become immortal. Both life and mind create information in the face of noise. Both do it with sophisticated error detection and correction schemes. The scheme we use to correct human knowledge is science, a two-stage combination of freely invented theories and adequately determined experiments. Information philosophy follows that example.
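As a minimal picture of such error correction (a textbook three-fold repetition code - our example, not anything specific to biology), a single noisy copy is simply outvoted:

    def encode(bits):
        # Repetition code: store each bit three times.
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode(stored):
        # Majority vote over each triple corrects any single flipped copy.
        return [int(sum(stored[i:i + 3]) >= 2) for i in range(0, len(stored), 3)]

    message = [1, 0, 1, 1]
    stored = encode(message)
    stored[4] ^= 1                       # noise flips one stored copy

    print(decode(stored) == message)     # True: the error was detected and corrected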


Our goal is for the website to contain all the great philosophical discussions of the three original problem areas we identified in the 1970's - COGITO (freedom), ERGO (value), and SUM (knowledge) - plus potential solutions for several classic problems in philosophy and physics, many of which had been designated "pseudo-problems" or relegated to "metaphysics."

We have now shown that information philosophy is a powerful diagnostic tool for addressing metaphysical problems. See The Metaphysicist.

In the left-hand column of all I-Phi pages are links to nearly five hundred philosophers and scientists who have made contributions to these great problems. Their web pages include the original contributions of each thinker, with examples of their thought, usually in their own words, and where possible in their original languages as well.





Creation implications

The research underlines once again the very limited capacity of mutations and natural selection to create the complex features that characterize all living things, such as metabolic pathways involving multiple enzymes, or nano-machines such as helicases, kinesins, ATP synthase, and the bacterial flagellum, or even a truly novel enzyme. The sorts of mutational changes involved with these nylon-digesting bacteria, which slightly modify an existing enzyme, do not explain the origin of such things as brand-new enzymes.

Organisms have clearly been designed to be able to adapt, but the sorts of adaptations we see give no support to the "big picture" claim that all biological complexity on earth came about by mutations and natural selection.

Figure 3: The basic reaction that produces nylon (when repeated many times). The blue box shows the amide bond, which is common in biology (e.g. in proteins).

Kawai (2010) pointed out that while nylon degradation has been readily achieved in a range of microbes, degradation of polyethylene or polypropylene has not, probably because there is almost no existing catalytic activity for these compounds among the many known enzymes. Existing catalytic activity would give a target for mutations to tweak, but with no target there is nothing to tweak. The implication is that such catalytic activity would require brand-new enzymes, not just slight modifications of existing ones, and so is likely out of reach of natural processes.


Affiliations

GenQA, John Radcliffe Hospital, Oxford University Hospitals NHS Trust, Oxford, UK

Department of Genetics University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

MLL-Munich Leukemia Laboratory, Munich, Germany

Department of Clinical Genetics, Erasmus MC, University medical center, Rotterdam, The Netherlands

Hematopathology Section, Hospital Clinic, Barcelona, Spain

Laboratori de Citogenètica Molecular, Servei de Patologia, Grup de Recerca,Translacional en Neoplàsies Hematològiques, Cancer Research Program, imim-Hospital del Mar, Barcelona, Spain

Viapath Genetics laboratories, Guys Hospital, London, UK

West Midlands Regional Genetics Laboratory, Birmingham Women’s Hospital, Birmingham, UK

Department of Cytogenetics, Nottingham University Hospital, Nottingham, UK

Haematological Malignancy Diagnostic Service, St James’s University Hospital, Leeds, UK

Oncogénomique laboratory, Hematology department, Lausanne University Hospital, Vaudois, Switzerland

Oncology Cytogenetics Service, The Christie NHS Foundation Trust, Manchester, UK

Laboratorio di Citogenetica e genetica moleculaire, Laboratorio Analisi, Humanitas Research Hospital, Rozzano, Milan, Italy

Prague Center of Oncocytogenetics, Institute of Clinical Biochemistry and Laboratory Diagnostics, General University Hospital and First Faculty of Medicine, Charles University in Prague, Prague, Czech Republic

GenQA, John Radcliffe Hospital, Oxford University Hospitals NHS Trust, Oxford, UK

