
Open Access 29.09.2020 | Review

Potential use of deep learning techniques for postmortem imaging

Authors: Akos Dobay, Jonathan Ford, Summer Decker, Garyfalia Ampanozi, Sabine Franckenberg, Raffael Affolter, Till Sieberth, Lars C. Ebert

Published in: Forensic Science, Medicine and Pathology | Issue 4/2020

Abstract

The use of postmortem computed tomography in forensic medicine, in addition to conventional autopsy, is now a standard procedure in several countries. However, the large number of cases, the large amount of data, and the lack of postmortem radiology experts have pushed researchers to develop solutions that automate diagnosis by applying deep learning techniques to postmortem computed tomography images. Because deep learning techniques require a good understanding of image analysis and mathematical optimization, the goal of this review is to provide the community of postmortem radiology experts with the key concepts needed to assess the potential of such techniques and how they could affect their work.

Introduction

Postmortem computed tomography (PMCT) has been shown to be a valuable tool in forensic medicine. For instance, a meta-analysis by Ampanozi et al. concluded that PMCT is reliable in detecting skeletal fractures [1]. Furthermore, PMCT angiography helps to add soft tissue contrast to these images and is highly sensitive to soft tissue and organ findings. It is, therefore, well suited for the detection of hemorrhages. Depending on the jurisdiction, PMCT and postmortem computed tomography angiography (PMCTA) are being used as triage tools and/or as additional investigation methods to complement autopsy [1].
PMCT scans can consist of well over 10,000 single images. Although in practice only forensically relevant findings need to be analyzed, this can place a substantial workload on forensic pathologists. In clinical radiology, similar workload issues exist [2], and machine learning approaches are being developed to address them [3, 4]. For these reasons, automated analysis of PMCT images was one of the research focuses identified by the first postmortem radiology and imaging research summit, which was organized by the International Society of Forensic Radiology and Imaging, the International Association of Forensic Radiographers, the National Institute of Justice of the United States of America, and the Netherlands Forensic Institute [5]. The aim of this article is to present the technical background and underlying concepts of deep learning and to highlight its potential use for analyzing PMCT in postmortem radiology based on experiences from clinical radiology.
Image analysis for postmortem computed tomography (PMCT) differs significantly from image analysis in clinical radiology. The clinical radiologist often has a defined focus for an examination, and only a limited area is scanned to minimize the radiation dose and scanning time for the patient. In postmortem radiology, the entire body is captured at high resolution to ensure that every pathology, anatomical anomaly and foreign body is documented. Furthermore, the image quality has to be sufficient for identification, visualization and complex reconstructions [6]. As the radiation dose does not need to be considered, the scan protocol is optimized for image quality, which leads to more detailed images and therefore larger datasets. In clinical radiology, the primary aim is to exclude or diagnose a pathology or injury correctly to choose the appropriate treatment. In postmortem radiology, determining the cause and manner of death is the primary focus of attention. In addition, PMCT data can be used in the reconstruction of the sequence of events, such as traffic accidents or homicides [7, 8]. In unknown persons, PMCT data are used for identification via comparison with images obtained before death (for example dental radiographs [9]).
Additionally, the exclusion of findings can be equally important. This has an impact on how the PMCT images are obtained and analyzed. The scan protocols usually document the entire body, with additional higher-resolution scans of the thorax, abdomen, head and teeth, which can add up to 10,000 single images or more [10]. It is common practice in many countries that the same extensive protocol is used independent of the expected findings. This means that the postmortem radiology expert always has to assess the entire CT dataset for forensically relevant pathologies, injuries and the presence of foreign bodies [11]. In some cases, segmentation is also required to estimate volumes and weights or as a basis for advanced visualizations or 3D prints to be presented in court. Due to the number of images available and depending on the complexity of the case, reading can take hours. Organ segmentation (the act of isolating relevant structures) can be equally time-consuming. A lack of trained postmortem radiology experts, in conjunction with the costs of a long reading process, may sometimes limit the use of radiology in death investigations. To increase the quality of the image analysis and decrease the costs, new tools tailored to the specific needs of postmortem radiology experts are required. These include, but are not limited to, automatic organ segmentation and weight estimation, injury and fracture detection, and foreign body detection and identification.
As image data are inherently digital in nature, computational methods are an obvious basis for tool creation. Using conventional image processing methods, the developer of an algorithm has to identify the relevant structures, choose the right set of image processing techniques and fine-tune the algorithm until the selected feature is detected correctly. The problems with this approach are that developing and fine-tuning such an algorithm is time-consuming and requires expert knowledge both of the structure to be identified and of the image processing techniques available. Algorithms based on conventional image processing can also lack robustness for images with high variance or contrast gradients, such as magnetic resonance imaging (MRI) images or noisy CT scans. One way to overcome these issues is the use of specialized techniques from artificial intelligence (AI), so-called deep learning techniques [12]. In this article, we aim to give researchers in the field of postmortem radiology a starting point to implement deep learning techniques themselves. We present the technical background and underlying concepts of deep learning and highlight its potential use in postmortem radiology for analyzing PMCT data based on experiences from clinical radiology.

Understanding deep learning techniques

AI is a subcategory of computer science that tries to mimic human intelligence to solve complex problems. Examples of classical AI systems are early chess computers and handwriting recognition. While many AI techniques are purely algorithmic, a subset of AI called machine learning (ML) takes a more data-driven approach. With ML, algorithms analyze sample data to adjust their behavior, thus learning from the data rather than just following a predefined procedure. An example of ML is spam filtering in email programs, which performs statistical analyses of emails that are marked as spam to categorize incoming emails. Finally, deep learning (DL), which is a type of machine learning, uses artificial neural networks (ANNs) to analyze the data. ANNs mimic, in a simplified manner, how nerve cells function and communicate with each other to express complex behavior from relatively simple building blocks. The basic building block of an ANN is the aptly named artificial neuron. Such a neuron sums multiple weighted inputs to produce an output that is then passed on for further processing. A timeline of major milestones in AI can be seen in Fig. 1 [13–22].
The idea of artificial neurons can be traced back to 1943, when Warren McCulloch and Walter Pitts proposed modeling the nervous system as a network of logical units that integrate several inputs by summing them and producing an output. Frank Rosenblatt developed this idea further into the concept of the perceptron [23, 24]. Interconnected systems of perceptrons evolved into what we now call artificial neural networks (ANNs), or simply neural networks (NNs): a class of algorithms whose goal is to classify a set of data into categories. The architecture of an artificial neuron mirrors that of a biological neuron: dendrites represent separate input channels, each with its specific weight; the overall input signal is integrated in the cell body; and if the accumulated signal exceeds a certain threshold, an output signal is produced. Figure 2 provides a simple illustration of how the concept of NNs evolved from the nervous system.
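To make the weighted-sum-and-threshold idea concrete, the following minimal sketch implements a single artificial neuron in Python with NumPy. The input values, weights, bias and step activation are illustrative choices, not taken from any of the cited works.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias, activation=lambda s: 1.0 if s > 0 else 0.0):
    """Weighted sum of the inputs plus a bias, passed through an activation (here a simple threshold)."""
    s = np.dot(inputs, weights) + bias
    return activation(s)

# Example: three input channels, each with its own weight
x = np.array([0.2, 0.8, -0.5])
w = np.array([0.4, 0.6, 0.9])
print(artificial_neuron(x, w, bias=0.1))  # prints 1.0 if the accumulated signal exceeds the threshold
```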
Mathematically, NNs are constructed from interconnected nodes arranged in layers, and each node of a layer receives inputs from nodes of the previous layer. The arrangement of layers and their connections is called the network topology; in a fully connected network, every node of a layer is connected to every node of the previous layer. Depending on the amount of input signal, some nodes propagate the signal while others stop it. The layers are divided into three groups: the input layer, the output layer and the hidden layers. The input layer is directly connected to the dataset, while the output layer can be seen as a real-valued vector. The intermediate layers are called hidden because their values are not directly used for classification. A proper classification relies on finding the optimal value for each individual weight in the network, and fixing the weights by exploring all possible combinations is unrealistic for large networks. The backpropagation algorithm instead uses an objective function (cost function) constructed from the values of the output layer and the target values; the discrepancy between them constitutes the error that we want to minimize. Taking the first derivative of the objective function gives the direction of its gradient, which is multiplied by a constant parameter (the learning rate) to adjust the weights. The error is backpropagated from the output layer toward the input layer, and the weights of all hidden layers are adjusted in a similar way by applying the chain rule from one layer to the next. Finally, a softmax function is used to normalize the values of the output layer into a probability distribution that gives the likelihood associated with each category represented in the output layer. Figure 3 depicts the full cycle of how NNs are trained to obtain a model that can be used to classify the input data. The complexity of an NN grows with the complexity of the data. For instance, an image of 512 × 512 pixels (the typical size of a single slice of a PMCT dataset) would need an input layer with 262,144 nodes, one for each pixel in the image, as well as numerous hidden layers for processing them. Optimizing the weights of such NNs was unrealistic with the computers of the 1980s, so the question was how to reduce the complexity to make it feasible for a computer to classify images. Convolutional neural networks (CNNs) were proposed as a solution to reduce network complexity. A graphical depiction of this process can be seen in Fig. 4.
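As a concrete illustration of the training cycle described above (forward pass, objective function, backpropagation and a weight update scaled by a constant learning rate), the following is a minimal sketch using PyTorch. The layer sizes, the two output categories and the random dummy data are assumptions made purely for illustration and do not correspond to any model from the cited literature.

```python
import torch
import torch.nn as nn

# A small fully connected network for a single 512 x 512 slice,
# flattened into an input layer of 262,144 nodes.
model = nn.Sequential(
    nn.Flatten(),                # 512 * 512 pixels -> 262,144 input nodes
    nn.Linear(512 * 512, 128),   # one hidden layer
    nn.ReLU(),
    nn.Linear(128, 2),           # output layer, e.g. "finding" vs. "no finding"
)

loss_fn = nn.CrossEntropyLoss()  # applies softmax and builds the objective (cost) function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # lr is the constant scaling the gradient

x = torch.randn(4, 1, 512, 512)  # a batch of four dummy slices
y = torch.tensor([0, 1, 0, 1])   # dummy target categories

loss = loss_fn(model(x), y)      # error between the output layer and the targets
loss.backward()                  # backpropagation: gradients computed layer by layer via the chain rule
optimizer.step()                 # adjust the weights along the negative gradient
```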
CNNs have existed since the late 1980s [25] and were initially developed for handwritten digit classification. Through convolution and pooling, the size of an image is typically reduced to a few thousand values. The resulting image constitutes a feature map of the original image; visually, the two images are different. The convolution applies a square filter, typically only a few pixels in size, across the original image. In other words, the convolution is a mathematical procedure that uses the pixels in the direct neighborhood of a focal pixel to reinforce the information content: local features of an object can be reinforced or blurred out by this means. The filtered values are then passed through a rectified linear unit (ReLU), which eliminates any negative values, and in the next step only the pixel with the maximum intensity in each local area is retained. This step is called maximum pooling, and it reduces the image size. The whole cycle of filtering, rectifying and pooling the pixels of the original image is repeated several times and with several sets of filters. All the resulting feature maps are finally combined into a single array that forms the input layer of the NN.
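The convolution, rectification and pooling cycle can be sketched in a few lines of PyTorch. The number of filters, the filter sizes and the final classification layer below are arbitrary illustrative choices, not an architecture taken from the cited literature.

```python
import torch
import torch.nn as nn

# Two rounds of filtering, rectifying and pooling, followed by flattening
# the feature maps into a single array for a small classifier.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 8 square filters of 3 x 3 pixels
    nn.ReLU(),                                   # eliminate negative values
    nn.MaxPool2d(2),                             # keep the maximum in each 2 x 2 area
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # a second set of filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),                                # combine all feature maps into one array
    nn.Linear(16 * 128 * 128, 2),                # input layer of the classifying network
)

print(cnn(torch.randn(1, 1, 512, 512)).shape)    # torch.Size([1, 2]) for one dummy 512 x 512 slice
```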

Medical image analysis

Medical image analysis can be traced back to the 1970s, when various mathematical models and rule-based algorithms were used. In the 1990s, supervised techniques, such as shape models and atlas-based segmentation methods, were used to identify organs and extract features for statistical classification in computer-aided diagnosis. An important revolution in image analysis and image manipulation came with the development of graphics processing units (GPUs), dedicated electronic circuits separate from the main processor (CPU) designed specifically to accelerate the manipulation of graphic content. Since Steinkraus and colleagues [26] demonstrated the value of GPUs for ML, and with the rapid development of GPU hardware compared with multicore CPUs, CNNs have become a popular approach for image processing [27]. Dedicated hardware is now commercially available to perform such calculations on large datasets.
CNNs started to gain popularity when the winning team of the 2012 ImageNet challenge used them for their model [28]. ImageNet is a public repository of human-annotated Internet images organized by concept (http://image-net.org). The ImageNet challenge was created by the ImageNet community to foster innovation in image classification. Since then, numerous studies have been published showing how CNNs, together with other ML techniques, are used to improve medical image analysis, such as that for CT, MRI, positron emission tomography (PET) and radiography. State-of-the-art results were achieved in mammographic mass classification [29, 30], segmentation of lesions in the brain [31], leak detection in airway tree segmentation [32], lung pattern classification [33], prostate segmentation [34, 35], nodule classification [36, 37], breast cancer metastasis detection in lymph nodes [38, 39], expert-level skin lesion classification [40], and bone suppression in radiographs [41]. Most CNN approaches are based on processing 2D images. Medical CT data add a level of complexity, as they are 3D in nature and require 3D volumetric segmentation and analysis [42, 43]. Therefore, there is a need to develop CNNs for 3D volumetric data.
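The step from 2D to 3D processing can be illustrated by comparing a 2D and a 3D convolution; the channel counts and volume dimensions below are assumptions chosen only to show the shapes involved, with a PMCT volume treated as depth × height × width.

```python
import torch
import torch.nn as nn

conv2d = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)  # operates on single slices
conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)  # operates on whole volumes

slice_2d = torch.randn(1, 1, 512, 512)       # one CT slice
volume_3d = torch.randn(1, 1, 64, 512, 512)  # a stack of 64 slices

print(conv2d(slice_2d).shape)   # torch.Size([1, 8, 512, 512])
print(conv3d(volume_3d).shape)  # torch.Size([1, 8, 64, 512, 512])
```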

Possible applications of deep learning techniques in PMCT

Previous work on the segmentation of abdominal organs in radiological data has used multi-atlas [44], patch-based [45] and probabilistic atlas methods [46]. More recently, a fully convolutional network (FCN) was developed for medical image segmentation; the flowchart of the network visually follows a U-shape, which is why it was labeled U-Net [47]. The U-Net architecture was subsequently extended to 3D volumetric data by Çiçek et al. [48]. Larsson, Zhang and Kahl proposed a two-step method in which the organ is first registered using its center of gravity and then a CNN performs voxelwise binary classification [49]. This method has the apparent advantage of delivering more reliable results for organs with high anatomical variability. There is no reference in the literature on automated segmentation of organs using 3D PMCT images. Automated segmentation of organs based on PMCT images could be used, for instance, to estimate organ weight and detect anomalies such as hemorrhagic pericardial effusion [50, 51]. Knowing the exact position of an organ can also help in planning procedures such as CT-guided needle placement [52].
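As a rough illustration of the U-shaped encoder-decoder idea behind U-Net, the following sketch shows a deliberately tiny 3D network with a single skip connection. It is not the published U-Net or 3D U-Net architecture [47, 48]; all channel counts and volume sizes are illustrative.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-shaped encoder-decoder with one skip connection (illustrative only)."""

    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool3d(2)
        self.bottleneck = nn.Sequential(nn.Conv3d(8, 16, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose3d(16, 8, kernel_size=2, stride=2)
        self.dec = nn.Sequential(nn.Conv3d(16, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv3d(8, out_ch, 1)                  # per-voxel class scores

    def forward(self, x):
        e = self.enc(x)                                      # encoder features, kept for the skip connection
        b = self.bottleneck(self.down(e))                    # coarse features at half resolution
        d = self.dec(torch.cat([self.up(b), e], dim=1))      # upsample and merge with the skip connection
        return self.head(d)                                  # voxelwise logits, e.g. organ vs. background

seg = TinyUNet()
print(seg(torch.randn(1, 1, 32, 64, 64)).shape)  # torch.Size([1, 2, 32, 64, 64])
```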
Age estimation based on facial images can be traced back to the work of Kwon and Lobo [53]. Since then, several techniques have been applied, using either geometric ratios of anthropomorphic features or support vector machines on key landmarks. Approaches that specifically use DL to extract the features were introduced in later studies [54–56].
In mass disaster events, PMCT can help speed up the process of identifying victims [57]. PMCT also allows for sex, age, ethnicity and stature estimation in the investigation of unknown remains [58]. Age estimation techniques utilize a variety of features, such as sternal rib ends, sacroiliac joint morphology, the pubic symphysis and cranial sutures. Additionally, dentition staging and development are highly useful in age assessment of juveniles [59]. Some techniques, such as statistical shape modeling, require 3D models extracted from PMCT data, and these models could be automatically extracted using automated segmentation [60].
In a postmortem examination, the time of death is estimated using several parameters, such as lividity, body stiffness, body temperature [61], presence of insects, environmental factors, and others. With PMCT, additional postmortem changes, such as gas formation inside the body of the deceased, can be assessed [62].
Fracture detection using DL has produced reliable results on conventional radiographs obtained in the hospital; Kim and MacKinnon, for example, retrained the pretrained Inception (version 3) model by transfer learning to identify fractures on lateral wrist radiographs [63].
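A transfer-learning setup in the spirit of this approach might look like the sketch below, which loads an ImageNet-pretrained Inception v3 from torchvision, freezes the feature extractor and trains a new two-class head; the data, labels and hyperparameters are hypothetical and not those of the cited study.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.inception_v3(weights="IMAGENET1K_V1")  # pretrained weights (recent torchvision versions)
for p in model.parameters():
    p.requires_grad = False                           # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, 2)         # new head: fracture vs. no fracture

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(2, 3, 299, 299)                       # dummy radiographs at Inception's input size
y = torch.tensor([0, 1])
model.train()
out = model(x)                                        # in training mode Inception also returns auxiliary logits
logits = out[0] if isinstance(out, tuple) else out
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()
```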
CT scans of the head not only show overt pathologies, such as hemorrhage, tumors and fractures, but also make it possible to derive the sequence of complex fractures and the direction of an inflicted gunshot [64–66]. The use of deep learning for head CT scans has been investigated in several studies. For instance, Chilamkurthy et al. used natural language processing to derive labels from radiology reports and deep learning to detect key findings, such as intracranial hemorrhage, on head CT scans [67]. A similar study was performed by Arbabshirani et al., in which the authors used a CNN approach instead [68]. To date, no studies have been published on the use of machine learning for gunshot injuries.
For future research, we propose the following three-step plan of action. Because machine learning requires large amounts of structured data, the first step is to set up appropriate databases, if possible in a collaborative effort to increase the volume of data; similar databases have already been developed for clinical radiology. The second step is to adapt existing methods from clinical radiology. Because PMCT data can differ from clinical CT data (e.g., due to internal livores or putrefaction), existing network topologies need to be retrained with postmortem data. This approach will help research groups in the field of forensic imaging build up their own expertise in machine learning. Finally, new network topologies can be developed that target image processing problems specific to postmortem imaging.
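For the first step, a structured PMCT database could be exposed to a deep learning framework through a thin data-loading layer such as the hypothetical sketch below. The file layout (one NumPy volume per case plus a labels.csv file) and the intensity normalization are assumptions for illustration only, not an existing database format.

```python
import csv
from pathlib import Path

import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class PMCTDataset(Dataset):
    """Hypothetical PMCT case database: <case_id>.npy volumes plus a labels.csv with columns case_id,label."""

    def __init__(self, root):
        self.root = Path(root)
        with open(self.root / "labels.csv", newline="") as f:
            self.items = [(row["case_id"], int(row["label"])) for row in csv.DictReader(f)]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        case_id, label = self.items[idx]
        volume = np.load(self.root / f"{case_id}.npy")           # depth x height x width, in HU
        volume = np.clip(volume, -1000, 1000) / 1000.0           # simple intensity normalization
        return torch.from_numpy(volume).float().unsqueeze(0), label  # add a channel dimension

# loader = DataLoader(PMCTDataset("pmct_cases"), batch_size=2, shuffle=True)
```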

Conclusion

The use of deep learning techniques to automate CT image analysis can be found in various fields of medicine. Studies reporting the successful implementation of machine learning techniques to classify medical images cover applications ranging from fracture detection to the detection of pathologies such as cancer and skin anomalies. Convolutional neural networks are the dominant choice of deep learning algorithm for medical diagnosis. This can partly be explained by the large number of existing frameworks, such as TensorFlow, Keras or PyTorch. With little programming knowledge and a few lines of code, these frameworks enable researchers to build complex CNNs. Therefore, most of the work lies in collecting enough data to train a model and in preprocessing the data to make them compatible with these frameworks. Additional work is also necessary to develop appropriate network architectures and topologies.
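As an illustration of how little code such a framework requires, the following Keras sketch defines and compiles a small CNN classifier for 512 × 512 slices. The architecture and the two output classes are arbitrary examples; training would of course require a suitably labeled dataset.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(512, 512, 1)),
    layers.Conv2D(8, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10)  # once labeled PMCT-derived images are available
```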
Although there is already much literature on the use of deep learning techniques in clinical CT image analysis, there is little on the application of DL in forensic medicine and PMCT to date. Many possible applications in this specific area remain to be investigated.

Key points

1. Deep learning techniques could help to compensate for the lack of postmortem radiology experts.
2. Numerous studies show how convolutional neural networks improve 2D medical image analysis.
3. 3D volumetric data add another level of complexity for deep learning techniques.
4. Little research on applying deep learning techniques in forensic medicine can be found.

Acknowledgements

AD and LE thank the Emma Louise Kessler Foundation for funding this work. The authors thank Michael Bolliger for the artwork in Figs. 2, 3, and 4.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


References
1. Ampanozi G, Halbheer D, Ebert LC, Thali MJ, Held U. Postmortem imaging findings and cause of death determination compared with autopsy: a systematic review of diagnostic test accuracy and meta-analysis. Int J Legal Med. 2020;134:321–37.
2. Andriole KP, Wolfe JM, Khorasani R, et al. Optimizing analysis, visualization, and navigation of large image data sets: one 5000-section CT scan can ruin your whole day. Radiology. 2011;259:346–62.
3. Choy G, Khalilzadeh O, Michalski M, Do S, Samir AE, Pianykh OS, et al. Current applications and future impact of machine learning in radiology. Radiology. 2018;288:318–28.
4. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional neural networks for radiologic images: a radiologist's guide. Radiology. 2019;290:590–606.
5. Aalders MC, Adolphi NL, Daly B, Davis GG, de Boer HH, Decker SJ, et al. Research in forensic radiology and imaging; identifying the most important issues. J Forensic Radiol Imaging. 2017;8:1–8.
6. Ford JM, Decker SJ. Computed tomography slice thickness and its effects on three-dimensional reconstruction of anatomical structures. J Forensic Radiol Imaging. 2016;4:43–6.
7. Buck U, Naether S, Braun M, Bolliger S, Friederich H, Jackowski C, et al. Application of 3D documentation and geometric reconstruction methods in traffic accident analysis: with high resolution surface scanning, radiological MSCT/MRI scanning and real data based animation. Forensic Sci Int. 2007;170:20–8.
8. Flach PM, Ampanozi G, Germerott T, Ross SG, Krauskopf A, Thali MJ, et al. Shot sequence detection aided by postmortem computed tomography in a case of homicide. J Forensic Radiol Imaging. 2013;1:68–72.
9. Franco A, Thevissen P, Coudyzer W, Develter W, Van de Voorde W, Oyen R, et al. Feasibility and validation of virtual autopsy for dental identification using the Interpol dental codes. J Forensic Legal Med. 2013;20:248–54.
10. Flach PM, Gascho D, Schweitzer W, Ruder TD, Berger N, Ross SG, et al. Imaging in forensic radiology: an illustrated guide for postmortem computed tomography technique and protocols. Forensic Sci Med Pathol. 2014;10:583–606.
11. Schweitzer W, Bartsch C, Ruder TD, Thali MJ. Virtopsy approach: structured reporting versus free reporting for PMCT findings. J Forensic Radiol Imaging. 2014;2:28–33.
12. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer. 2018;18:500–10.
13. Turing MA. I. – Computing machinery and intelligence. Mind. 1950;49:433–60.
14. Weizenbaum J. ELIZA – a computer program for the study of natural language communication between man and machine. CACM. 1966;9:36–45.
15. Shortliffe EH, Davis R, Axline SG, Buchanan BG, Green C, Cohen SN. Computer-based consultations in clinical therapeutics: explanation and rule acquisition capabilities of the MYCIN system. Comp Biomed Res. 1975;8:303–20.
16. Fukushima K. Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybernetics. 1980;36:193–202.
17. Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Collins A, Smith EE, editors. Readings in cognitive science. Morgan Kaufmann; 1988. p. 399–421.
18. LeCun Y, Jackel LD, Bottou L, Cartes C, Denker JS, Drucker H, et al. Learning algorithms for classification: a comparison on handwritten digit recognition. In: Oh JH, Kwon C, Cho S, editors. Neural networks: the statistical mechanics perspective. Singapore: World Scientific; 1995. p. 261–76.
19. Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: a large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL; 2009. p. 248–55.
20. Taigman Y, Yang M, Ranzato MA, Wolf L. DeepFace: closing the gap to human-level performance in face verification. IEEE Conference on Computer Vision and Pattern Recognition. Columbus, OH; 2014. p. 1701–8.
21. Wang FY, Zhang JJ, Xinhu Z, et al. Where does AlphaGo go: from Church-Turing thesis to AlphaGo thesis and beyond. IEEE/CAA J Automatica Sinica. 2016;3:113–20.
22. Minsky M, Papert SA, Bottou L. Perceptrons: an introduction to computational geometry. Massachusetts: MIT Press; 2017.
23. McCulloch WS, Pitts W. The statistical organization of nervous activity. Biometrics. 1948;4:91–9.
24. Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev. 1958;65:386–408.
25. LeCun Y, Boser B, Denker JS, Henderson D, Howard RE, Hubbard W, et al. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989;1:541–51.
26. Steinkraus D, Simard P, Buck I. Using GPUs for machine learning algorithms. 12th International Conference on Document Analysis and Recognition. 2005. p. 1115–9.
27. Lan Q, Wang Z, Wen M, Zhang C, Wang Y. High performance implementation of 3D convolutional neural networks on a GPU. Comp Intell Neurosci. 2017;Article ID 8348671:1–8.
28. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. CACM. 2017;60:84–90.
29. Kooi T, Litjens G, Ginneken B, Gubern-Mérida A, Sánchez CI, Mann R, et al. Large scale deep learning for computer aided detection of mammographic lesions. Med Im Anal. 2016;35:303–12.
30. Becker AS, Marcon M, Ghafoor S, Wurnig MC, Frauenfelder T, Boss A. Deep learning in mammography: diagnostic accuracy of a multipurpose image analysis software in the detection of breast cancer. Investig Radiol. 2017;52:434–40.
31. Havaei M, Guizard N, Larochelle H, Jodoin PM. Deep learning trends for focal brain pathology segmentation in MRI. In: Machine learning for health informatics: state-of-the-art and future challenges. New York: Springer International Publishing; 2016. p. 125–48.
32. Charbonnier JP, van Rikxoort EM, Setio AAA, Schaefer-Prokop CM, van Ginneken B, Ciompi F. Improving airway segmentation in computed tomography using leak detection with convolutional networks. Med Im Anal. 2017;36:52–60.
33. Anthimopoulos M, Christodoulidis S, Ebner L, Christe A, Mougiakakou S. Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans Med Imaging. 2016;35:1207–16.
34. Xiong J, Jiang L, Li Q. Automatic segmentation of the prostate on 3D CT images by using multiple deep learning networks. Proceedings of the 2018 5th International Conference on Biomedical and Bioinformatics Engineering. 2018. p. 62–7.
35. Liu C, Gardner SJ, Wen N, Elshaikh MA, Siddiqui F, Movsas B, et al. Automatic segmentation of the prostate on CT images using deep neural networks (DNN). Int J Radiat Oncol Biol Phys. 2019;104:924–32.
36. Causey JL, Zhang J, Ma S, Jiang B, Qualls JA, Politte DG, et al. Highly accurate model for prediction of lung nodule malignancy with CT scans. Sci Reports. 2018;8:9286.
37. Guitao C, Huang T, Hou K, Cao W, Liu P, Zhang J. 3D convolutional neural networks fusion model for lung nodule detection on clinical CT scans. 2018 IEEE International Conference on Bioinformatics and Biomedicine. Madrid, Spain; 2018.
38. Bejnordi E, Veta BM, van Diest PJ, van Ginneken B, Karssemeijer N, Litjens G, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA. 2017;318:2199–210.
39. Golden JA. Deep learning algorithms for detection of lymph node metastases from breast cancer. JAMA. 2017;318:2184–6.
40. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115–8.
41. Yang W, Chen Y, Liu Y, Zhong L, Qin G, Lu Z, et al. Cascade of multi-scale convolutional neural networks for bone suppression of chest radiographs in gradient domain. Med Image Anal. 2017;35:421–33.
42. Drozdzal M, Chartrand G, Vorontsov E, Shakeri M, Di Jorio L, Tang A, et al. Learning normalized inputs for iterative estimation in medical image segmentation. Med Image Anal. 2017;44:1–13.
43. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88.
44. Rohlfing T, Brandt R, Menzel R, Maurer CR. Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. NeuroIm. 2017;21:1428–42.
45. Coupé P, Manjón JV, Fonov V, Pruessner J, Robles M, Collins DL. Patch-based segmentation using expert priors: application to hippocampus and ventricle segmentation. NeuroIm. 2014;54:940–54.
46. Park H, Bland PH, Meyer CR. Construction of an abdominal probabilistic atlas and its application in segmentation. IEEE Transact Med Imaging. 2003;22:483–92.
47. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, editors. Medical image computing and computer-assisted intervention – MICCAI 2015. Lecture Notes in Computer Science. Volume 9351. Cham: Springer; 2015.
48. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016. 2016.
49. Larsson M, Zhang Y, Kahl F. Robust abdominal organ segmentation using regional convolutional neural networks. Appl Soft Comput. 2018;70:465–71.
50. Ebert LC, Heimer J, Schweitzer W, Sieberth T, Leipner A, Thali M, et al. Automatic detection of hemorrhagic pericardial effusion on PMCT using deep learning – a feasibility study. Forensic Sci Med Pathol. 2017;13:426–31.
51. Jackowski C, Thali MJ, Buck U, Aghayev E, Sonnenschein M, Yen K, et al. Noninvasive estimation of organ weights by postmortem magnetic resonance imaging and multislice computed tomography. Investig Radiol. 2006;41:572–8.
52. Aghayev E, Thali MJ, Sonnenschein M, Jackowski C, Dirnhofer R, Vock P. Post-mortem tissue sampling using computed tomography guidance. Forensic Sci Int. 2007;166:199–203.
53. Kwon YH, Lobo DV. Age classification from facial images. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition CVPR-94. 1994.
54. Dong Y, Liu Y, Lian S. Automatic age estimation based on deep learning algorithm. Neurocomp. 2016;187:4–10.
55. Rodríguez P, Gonfaus GCJM, Roca FX, Gonzàlez J. Age and gender recognition in the wild with deep attention. Pattern Recog. 2017;72:563–71.
56. Wang X, Li R, Zhou Y, Kambhamettu C. A study of convolutional sparse feature learning for human age estimate. 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017). 2017.
57. O'Donnell C, Iino M, Mansharan K, Leditscke J, Woodford N. Contribution of postmortem multidetector CT scanning to identification of the deceased in a mass disaster: experience gained from the 2009 Victorian bushfires. Forensic Sci Int. 2011;205:15–28.
58. Uldin T. Virtual anthropology – a brief review of the literature and history of computed tomography. Forensic Sci Res. 2017;2:165–73.
59. Franklin D. Forensic age estimation in human skeletal remains: current concepts and future directions. Legal Med. 2010;12:1–7.
60. Fliss B, Luethi M, Fuernstahl P, Christensen AC, Sibold K, Thali MJ, et al. CT-based sex estimation on human femora using statistical shape modeling. Physical Anthropol. 2019;169:279–86.
61. Henssge C, Madea B. Estimation of the time since death in the early post-mortem period. Forensic Sci Int. 2004;144:167–75.
62. Egger C, Vaucher P, Doenz F, Palmiere C, Mangin P, Grabherr S. Development and validation of a postmortem radiological alteration index: the RA-index. Int J Legal Med. 2012;126:559–66.
63. Kim DH, MacKinnon T. Artificial intelligence in fracture detection: transfer learning from deep convolutional neural networks. Clin Radiol. 2018;73:439–45.
64. Grassberger M, Gehl A, Püschel K, Turk EE. 3D reconstruction of emergency cranial computed tomography scans as a tool in clinical forensic radiology after survived blunt head trauma – report of two cases. Forensic Sci Int. 2011;207:e19–23.
65. van Kan RAT, Haest IIH, Lahaye MJ, Hofman PAM. The diagnostic value of forensic imaging in fatal gunshot incidents: a review of literature. J Forensic Radiol Imaging. 2017;10:9–14.
66. Flach PM, Egli TC, Bolliger SA, Berger N, Ampanozi G, Thali MJ, et al. "Blind spots" in forensic autopsy: improved detection of retrobulbar hemorrhage and orbital lesions by postmortem computed tomography (PMCT). Legal Med. 2014;16:274–82.
67. Chilamkurthy S, Ghosh R, Tanamala S, Biviji M, Campeau NG, Venugopal VK, et al. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet. 2018;392:2388–96.
68. Arbabshirani MR, Fornwalt BK, Mongelluzzo GJ, Suever JD, Geise BD, Patel AA, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. Npj Dig Med. 2018;1:9.
Metadata
Title: Potential use of deep learning techniques for postmortem imaging
Authors: Akos Dobay, Jonathan Ford, Summer Decker, Garyfalia Ampanozi, Sabine Franckenberg, Raffael Affolter, Till Sieberth, Lars C. Ebert
Publication date: 29.09.2020
Publisher: Springer US
Published in: Forensic Science, Medicine and Pathology / Issue 4/2020
Print ISSN: 1547-769X
Electronic ISSN: 1556-2891
DOI: https://doi.org/10.1007/s12024-020-00307-3
