26.12.2017 | What's New in Intensive Care
What’s new in ICU in 2050: big data and machine learning
Authors:
Sébastien Bailly, Geert Meyfroidt, Jean-François Timsit
Published in: Intensive Care Medicine | Issue 9/2018
Excerpt
The amount of digitalized data that the world produces today is by all measures unprecedented and spectacular. Social media, e-commerce, and the Internet of Things generate approximately 2.5 quintillion bytes per day, an amount that equals 100 million Blu-ray discs, or almost 30,000 GB per second. Data grows exponentially, and 90% of all data on the Internet has been created since 2016. This trend will continue in the coming decades [1]. Such datasets of unimaginable size cannot be maintained with traditional database management technology, or examined with traditional statistical techniques. The general term for methods to manage and analyze such unstructured datasets is “big data”. Although the term is often ill-defined and improperly used, the five “Vs” concept is a good summary: Volume, Velocity, Variety, Veracity, and Value, referring to, respectively, the large quantity of data; the speed of acquisition; the diversity of data sources; the uncertain data quality; and the potential to extract value from the data. The last of these is without a doubt the “V” that matters most. In medicine, the first datasets amenable to big data approaches were generated when techniques to process genomic data became available, for instance for tumor genotyping in oncology [2]. …