Published: 02.05.2023

Automated Urine Cell Image Classification Model Using Chaotic Mixer Deep Feature Extraction

Authors: Mehmet Erten, Ilknur Tuncer, Prabal D. Barua, Kubra Yildirim, Sengul Dogan, Turker Tuncer, Ru-San Tan, Hamido Fujita, U. Rajendra Acharya

Published in: Journal of Imaging Informatics in Medicine | Issue 4/2023

Abstract

Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel Arnold Cat Map (ACM)- and fixed-size patch-based mixer algorithm with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer that generates mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) DenseNet201 pre-trained on ImageNet1K, which extracts 1,920 deep features each from the raw input image and its six corresponding mixed images, concatenated to form a final feature vector of length 13,440; (3) iterative neighborhood component analysis, which selects the most discriminative feature vector of optimal length 342, determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) a shallow kNN classifier evaluated with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model is both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
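Layer (1) of the pipeline, the ACM-based mixer, can be illustrated with a minimal sketch. This is a hypothetical reading of the abstract, assuming the discrete Arnold Cat Map permutes the 14 × 14 grid of fixed-size 16 × 16 patches of a resized 224 × 224 grayscale image, and that the six mixed images come from successive ACM iterations; the authors' exact transform may differ, and the function names below are illustrative, not from the paper.

```python
import numpy as np

def arnold_cat_map_indices(n, iterations=1):
    # Discrete Arnold Cat Map on an n x n index grid:
    # (x, y) -> (x + y mod n, x + 2y mod n).
    # The map matrix [[1, 1], [1, 2]] has determinant 1, so the
    # mapping is a bijection (a permutation of grid positions).
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for _ in range(iterations):
        xs, ys = (xs + ys) % n, (xs + 2 * ys) % n
    return xs, ys

def acm_patch_mixer(image, patch=16, iterations=1):
    # Split a square image into fixed-size patches and scramble the
    # patch grid with the Arnold Cat Map.
    h = image.shape[0]
    n = h // patch  # 224 // 16 = 14 patches per side
    xs, ys = arnold_cat_map_indices(n, iterations)
    mixed = np.empty_like(image)
    for i in range(n):
        for j in range(n):
            src = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            ti, tj = xs[i, j], ys[i, j]
            mixed[ti * patch:(ti + 1) * patch, tj * patch:(tj + 1) * patch] = src
    return mixed

# Generate six progressively mixed images from one resized input,
# as in layer (1); each would then be fed to DenseNet201 alongside
# the raw image for feature extraction.
img = np.arange(224 * 224, dtype=np.float32).reshape(224, 224)
mixed_images = [acm_patch_mixer(img, iterations=k) for k in range(1, 7)]
```

Because the cat map is a permutation, every mixed image contains exactly the original patches in scrambled positions, which is what lets the same pre-trained backbone extract complementary features from each variant.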
Metadata
Title
Automated Urine Cell Image Classification Model Using Chaotic Mixer Deep Feature Extraction
Authors
Mehmet Erten
Ilknur Tuncer
Prabal D. Barua
Kubra Yildirim
Sengul Dogan
Turker Tuncer
Ru-San Tan
Hamido Fujita
U. Rajendra Acharya
Publication date
02.05.2023
Publisher
Springer International Publishing
Published in
Journal of Imaging Informatics in Medicine / Issue 4/2023
Print ISSN: 2948-2925
Electronic ISSN: 2948-2933
DOI
https://doi.org/10.1007/s10278-023-00827-8