Information quality in proteomics

David A. Stead*, Norman W. Paton, Paolo Missier, Suzanne M. Embury, Cornelia Hedeler, Binling Jin, Alistair J.P. Brown, Alun Preece

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



Proteomics, the study of the protein complement of a biological system, is generating increasing quantities of data from rapidly developing technologies employed in a variety of different experimental workflows. Experimental processes, e.g. for comparative 2D gel studies or LC-MS/MS analyses of complex protein mixtures, involve a number of steps: from experimental design, through wet and dry lab operations, to publication of data in repositories and finally to data annotation and maintenance. The presence of inaccuracies throughout the processing pipeline, however, results in data that can be untrustworthy, thus offsetting the benefits of high-throughput technology. While researchers and practitioners are generally aware of some of the information quality issues associated with public proteomics data, there are few accepted criteria and guidelines for dealing with them. In this article, we highlight factors that impact on the quality of experimental data and review current approaches to information quality management in proteomics. Data quality issues are considered throughout the lifecycle of a proteomics experiment, from experiment design and technique selection, through data analysis, to archiving and sharing.

Original language: English
Pages (from-to): 174-188
Number of pages: 15
Journal: Briefings in Bioinformatics
Issue number: 2
Publication status: Published - 1 Mar 2008


Keywords:
  • Information management
  • Information quality
  • Proteomics
  • Quality assessment
  • Standards


