Dataset | Catheter diameter (mm) | Volume number | Probe type and frequency range (MHz) | Spatial size per voxel (mm) | Volume size (lat. \(\times\) az. \(\times\) ax.) |
---|---|---|---|---|---|
Ex vivo 1 | 2.3† | 10 | TEE 2-7 | 0.4 | \(179\times{175}\times{92}\) |
Ex vivo 2 | 2.3‡ | 33 | TEE 2-7 | 0.4 | \(174\times{174}\times{178}\) to \(197\times{197}\times{202}\) |
Ex vivo 3 | 2.3‡ | 10 | TEE 2-7 | 0.6 | \(120\times{69}\times{92}\) to \(193\times{284}\times{190}\) |
Ex vivo 4 | 2.3§ | 12 | TTE 1-5 | 0.7 | \(137\times{130}\times{122}\) |
Datasets and experimental results
Datasets
Voxel-of-interest selection
Method | Recall (%) | Precision (%) | \(F_2\) score (%) |
---|---|---|---|
GF-SVM [3] | \(29.9\pm{25.4}\) | \(9.2\pm{8.8}\) | \(19.0\pm{15.5}\) |
MF-AdaB [4] | \(61.2\pm{17.6}\) | \(28.4\pm{16.6}\) | \(45.5\pm{15.6}\) |
3D-UNet [10] | \(30.3\pm26.3\) | \(11.9\pm12.7\) | \(21.4\pm19.5\) |
Share-ConvNet | \(72.3\pm{19.6}\) | \(46.4\pm{8.5}\) | \(63.8\pm{14.3}\) |
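The table above reports the \(F_2\) score, which weights recall more heavily than precision. A minimal sketch of the \(F_\beta\) computation is below; the input values are illustrative only, since the reported scores are computed per volume and then averaged, so they cannot be reproduced from the mean recall and precision alone:

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score as a fraction in [0, 1]; beta = 2 emphasizes recall."""
    if precision + recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative values only (the reported scores are averaged per volume):
score = f_beta(precision=0.464, recall=0.723)
print(round(score, 4))  # 0.6504
```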
Voxel classification
Comparison with existing methods
Comparison with different ConvNet methods
- When compared to 3D-ConvNet, our Share-ConvNet achieves better recall and a higher \(F_2\) score, while 3D-ConvNet achieves better precision. However, because it takes 3D data cubes as input, 3D-ConvNet has far more parameters and therefore requires a large amount of training data; in contrast, the Share-ConvNet is much simpler. In terms of efficiency, 3D-ConvNet takes about 10 min per volume on average, almost \(5\times\) longer than the orthogonal-slice approaches.
- The IND-ConvNet, which is designed with multiple branches, delivers performance comparable to the proposed Share-ConvNet, since both networks fuse high-level information with a similar intuition. However, IND-ConvNet trains an individual ConvNet for each slice, which is computationally expensive and introduces redundancy.
- Compared to RGB-ConvNet, the Share-ConvNet achieves consistently better performance. This can be explained by the fact that RGB-ConvNet combines the spatial correlation among the different slices in a lower-level feature space.
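The orthogonal-slice input shared by RGB-ConvNet, IND-ConvNet, and Share-ConvNet can be sketched as follows. This is a minimal illustration, not the authors' implementation: the zero-padding at volume borders and the \(31\times31\) patch size are assumptions, and the function name and parameters are hypothetical.

```python
import numpy as np

def orthogonal_slices(volume, center, half=15):
    """Extract three orthogonal 2D patches around a candidate voxel.
    Zero-pads the volume so border voxels still yield full patches."""
    padded = np.pad(volume, half, mode="constant")
    z, y, x = (c + half for c in center)  # coordinates in the padded volume
    xy = padded[z, y - half:y + half + 1, x - half:x + half + 1]
    xz = padded[z - half:z + half + 1, y, x - half:x + half + 1]
    yz = padded[z - half:z + half + 1, y - half:y + half + 1, x]
    return np.stack([xy, xz, yz])  # (3, 2*half+1, 2*half+1)

vol = np.random.rand(92, 175, 179).astype(np.float32)
patches = orthogonal_slices(vol, (10, 20, 30))
print(patches.shape)  # (3, 31, 31)
```

Share-ConvNet would then pass all three patches through one shared-weight ConvNet, IND-ConvNet through a separate branch per patch, and RGB-ConvNet would stack them as the three channels of a single input.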
Method | MF-AdaB | RGB-ConvNet | IND-ConvNet | 3D-ConvNet |
---|---|---|---|---|
Share-ConvNet | 3.2e−14 | 3.2e−6 | 0.26 | 4.4e−3 |
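The \(p\) values above come from paired t tests over matched per-volume scores. A minimal sketch of the paired t statistic, with hypothetical per-volume \(F_2\) scores (in practice, `scipy.stats.ttest_rel` returns the statistic and the two-sided \(p\) value directly):

```python
import math

def paired_t(a, b):
    """Paired t statistic over matched per-volume scores; the two-sided
    p value follows from the t distribution with len(a) - 1 dof."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-volume F2 scores for two methods:
a = [0.70, 0.65, 0.72, 0.68, 0.74]
b = [0.60, 0.58, 0.66, 0.55, 0.63]
t_stat = paired_t(a, b)
print(round(t_stat, 2))  # 7.3
```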
Method | Recall (%) | Precision (%) | \(F_2\) score (%) |
---|---|---|---|
Share-ConvNet-NoBoost | \(92.4\pm8.6\) | \(12.0\pm8.5\) | \(35.2\pm17.2\) |
Share-ConvNet-NoWeight | \(45.5\pm20.9\) | \(71.3\pm13.7\) | \(47.6\pm20.4\) |
Share-ConvNet-Combine | \(72.3\pm19.6\) | \(46.4\pm8.5\) | \(63.8\pm14.3\) |
Paired t test between methods
Method | Recall (%) | Precision (%) | \(F_2\) score (%) | Time (s) |
---|---|---|---|---|
VOI-90k-IND-ConvNet | \(53.3\pm17.7\) | \(58.8\pm11.7\) | \(53.4\pm15.3\) | \(6.9\pm0.4\) |
VOI-190k-IND-ConvNet | \(62.6\pm19.2\) | \(52.6\pm10.7\) | \(59.2\pm15.9\) | \(15.1\pm1.3\) |
IND-ConvNet | \(69.8\pm{20.1}\) | \(47.7\pm{11.0}\) | \(62.8\pm{16.1}\) | \(110.5\pm59.0\) |
VOI-90k-Share-ConvNet | \(53.7\pm16.4\) | \(59.1\pm11.0\) | \(53.9\pm13.9\) | \(6.5\pm0.4\) |
VOI-190k-Share-ConvNet | \(63.1\pm17.8\) | \(53.0\pm10.0\) | \(59.8\pm14.1\) | \(14.1\pm1.2\) |
Share-ConvNet | \(72.3\pm{19.6}\) | \(46.4\pm{8.5}\) | \(63.8\pm{14.3}\) | \(103.4\pm55.7\) |
Method | EE (mm) | SE (mm) | VS (%) | AHD (voxel) |
---|---|---|---|---|
MF-AdaB | \(3.33\pm2.76\) | \(2.91\pm2.55\) | \(67.3\pm20.7\) | \(6.71\pm7.72\) |
Share-ConvNet | \(2.25\pm1.91\) | \(1.83\pm1.28\) | \(76.7\pm13.5\) | \(1.72\pm1.85\) |
VOI-90k-Share-ConvNet | \(2.07\pm1.22\) | \(1.71\pm1.00\) | \(77.3\pm11.6\) | \(1.56\pm2.32\) |
VOI-190k-Share-ConvNet | \(2.08\pm1.22\) | \(1.73\pm0.99\) | \(77.8\pm11.6\) | \(1.64\pm1.82\) |
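Among the localization metrics, AHD is reported in voxels. Assuming it denotes the symmetric average Hausdorff distance between the detected and ground-truth voxel sets (the function below is a hypothetical sketch, not the authors' implementation), it can be computed as:

```python
import numpy as np

def avg_hausdorff(pred, gt):
    """Symmetric average Hausdorff distance between two voxel coordinate
    sets of shape (N, 3) and (M, 3), in voxel units."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M)
    # mean of nearest-neighbour distances in both directions
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

pred = np.array([[0, 0, 0], [1, 0, 0]], dtype=float)
gt = np.array([[0, 0, 0]], dtype=float)
print(avg_hausdorff(pred, gt))  # 0.25
```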