Bruus Morgan posted an update 1 day, 8 hours ago
Lysosome-targetable selenium-doped carbon nanodots for in situ scavenging of free radicals in living cells and mice.
Moreover, when combined with cognitive scores, the proposed method achieved an AUC of 85%. These results are competitive with state-of-the-art methods evaluated on the same dataset.

Accurate vertebral body (VB) detection and segmentation are critical for spine disease identification and diagnosis. Existing automatic VB detection and segmentation methods may produce false positives on background tissue or inaccurate results on the target VB, because they usually cannot take both the global spine pattern and the local VB appearance into consideration concurrently. In this paper, we propose a Sequential Conditional Reinforcement Learning network (SCRL) to tackle the simultaneous detection and segmentation of VBs from MR spine images. The SCRL, for the first time, applies deep reinforcement learning to VB detection and segmentation. It innovatively models the spatial correlation between VBs from top to bottom as a sequential dynamic-interaction process, thereby globally focusing detection and segmentation on each VB in turn (a toy sketch of this sequential focusing idea appears below). Simultaneously, SCRL perceives the local appearance of each target VB comprehensively, thereby achieving accurate detection and segmentation results. Specifically, SCRL seamlessly combines three parts: 1) an Anatomy-Modeling Reinforcement Learning Network that dynamically interacts with the image and focuses an attention-region on the VB; 2) a Fully-Connected Residual Neural Network that learns rich global context information about the VB, including both detailed low-level features and abstracted high-level features, to detect an accurate bounding-box of the VB from the attention-region; and 3) a Y-shaped Network that learns comprehensive, detailed texture information about the VB, including multi-scale, coarse-to-fine features, to segment the boundary of the VB from the attention-region. On 240 subjects, SCRL achieves accurate detection and segmentation results, with an average detection IoU of 92.3%, segmentation Dice of 92.6%, and classification mean accuracy of 96.4%. These results demonstrate that SCRL can serve as an efficient aided-diagnosis tool to assist clinicians in diagnosing spinal diseases.

Instrument segmentation plays a vital role in 3D ultrasound (US) guided cardiac intervention. Efficient and accurate segmentation during the operation is highly desirable, since it can facilitate the procedure, reduce operational complexity, and therefore improve the outcome. Nevertheless, current image-based instrument segmentation methods are neither efficient nor accurate enough for clinical use. Recently, fully convolutional neural networks (FCNs), both 2D and 3D, have been applied to various volumetric segmentation tasks. However, a 2D FCN cannot exploit the 3D contextual information in volumetric data, while a 3D FCN demands high computation cost and a large amount of training data. Moreover, with limited computational resources, a 3D FCN is commonly applied in a patch-based fashion, which is too slow for clinical applications. To address these issues, we propose POI-FuseNet, which consists of a patch-of-interest (POI) selector and a FuseNet. The POI selector efficiently selects the regions of interest containing the instrument, while FuseNet combines 2D and 3D FCN features to hierarchically exploit contextual information. Furthermore, we propose a hybrid loss function, consisting of a contextual loss and a class-balanced focal loss, to improve the segmentation performance of the network (a toy sketch of such a focal loss is also given below).
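As a companion to the SCRL abstract above, here is a minimal, hypothetical PyTorch sketch of the sequential top-to-bottom focusing idea: an agent walks an attention region down a sagittal slice, one vertebral body at a time. The action set, policy network, step logic, and sizes are illustrative placeholders, not the authors' implementation, which also learns a reward and couples this loop to the detection and segmentation branches.

```python
# Hypothetical sketch of sequential attention focusing, in the spirit of
# SCRL; everything here (actions, network, sizes) is a toy placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

ACTIONS = ["down", "up", "shrink", "grow", "stop"]  # assumed action set

class PolicyNet(nn.Module):
    """Scores actions from a cropped attention region (a stand-in for the
    paper's anatomy-modeling reinforcement-learning network)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, len(ACTIONS)),
        )

    def forward(self, x):
        return self.net(x)

def step(box, action, img_h):
    """Apply one action to an attention box (y, x, h, w)."""
    y, x, h, w = box
    if action == "down":
        y = min(img_h - h, y + h // 2)
    elif action == "up":
        y = max(0, y - h // 2)
    elif action == "shrink":
        h, w = max(8, h - 4), max(8, w - 4)
    elif action == "grow":
        h, w = h + 4, w + 4
    return (y, x, h, w)

# Toy rollout over a random "sagittal slice": the agent emits one
# attention region per step; in the full method, a detector and a
# segmenter would then run on each focused region.
img = torch.rand(1, 1, 256, 128)
policy = PolicyNet()
box, regions = (0, 32, 32, 64), []
for _ in range(10):
    y, x, h, w = box
    crop = F.interpolate(img[:, :, y:y + h, x:x + w], size=(32, 32))
    action = ACTIONS[policy(crop).argmax(dim=1).item()]
    if action == "stop":
        break
    box = step(box, action, img_h=256)
    regions.append(box)
print(regions)
```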
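For the class-balanced focal loss in POI-FuseNet's hybrid loss, here is a minimal sketch for binary voxel-wise segmentation. The per-batch frequency weighting below is an assumption for illustration; the exact balancing term in the paper may differ.

```python
# Minimal sketch of a class-balanced focal loss for binary voxel-wise
# segmentation; the exact formulation in POI-FuseNet may differ.
import torch

def class_balanced_focal_loss(logits, targets, gamma=2.0, eps=1e-6):
    """logits, targets: tensors of shape (N, ...); targets in {0, 1}.
    Per-batch class frequencies set the weight, so the rare instrument
    class is up-weighted against the dominant background."""
    p = torch.sigmoid(logits)
    pos_frac = targets.float().mean().clamp(eps, 1 - eps)
    alpha = 1.0 - pos_frac                       # weight for positives
    pt = torch.where(targets.bool(), p, 1 - p)   # prob. of the true class
    w = torch.where(targets.bool(), alpha, 1 - alpha)
    return (-w * (1 - pt).pow(gamma) * (pt + eps).log()).mean()

# Toy usage on a random volume (batch, depth, height, width): sparse
# foreground mimics a thin catheter in a large ultrasound volume.
logits = torch.randn(2, 16, 32, 32)
targets = (torch.rand(2, 16, 32, 32) > 0.95).long()
print(class_balanced_focal_loss(logits, targets).item())
```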
On the collected challenging ex-vivo dataset of RF-ablation catheters, our method achieved a Dice score of 70.5%, superior to state-of-the-art methods. In addition, starting from the model pre-trained on the ex-vivo dataset, our method can be adapted to an in-vivo guidewire dataset from a different cardiac procedure, where it achieves a Dice score of 66.5%. More importantly, with the POI-based strategy, segmentation time drops to around 1.3 seconds per volume, which shows that the proposed method is promising for clinical use.

Accurate vertebrae recognition is crucial for spinal disease localization and subsequent treatment planning. Although vertebrae detection has been studied for years, reliably recognizing vertebrae in arbitrary spine MRI images remains a challenge. The similar appearance of different vertebrae and the pathological deformations of the same vertebra make classification difficult in images with different fields of view (FOV). In this paper, we propose a Category-consistent Self-calibration Recognition System (Can-See) to accurately classify the labels and precisely predict the bounding boxes of all vertebrae, with improved discriminative capability for vertebrae categories and self-awareness of false-positive detections. Can-See is designed as a two-step detection framework: (1) a hierarchical proposal network (HPN) that perceives the existence of vertebrae; HPN leverages the correspondence between hierarchical features and multi-scale anchors to detect objects, and this correspondence tackles the image scale/resolution challenge; (2) a Category-consistent Self-calibration Recognition Network (CSRN) that classifies each vertebra and refines its bounding box. CSRN leverages the dictionary-learning principle to preserve the most representative features, and it imposes a novel category-consistent constraint that forces vertebrae with the same label to have similar features (a toy sketch of such a constraint appears at the end of this post). CSRN then innovatively formulates message passing within the deep learning framework, leveraging the label-compatibility principle to self-calibrate wrong pre-recognitions. Can-See is trained and evaluated on a large and challenging dataset of 450 MRI scans. The results show that Can-See achieves high performance (testing accuracy reaches 0.955) and outperforms other state-of-the-art methods.

Lung cancer follow-up is a complex, error-prone, and time-consuming task for clinical radiologists. Several lung CT scans, taken at different time points for a given patient, must be inspected individually in search of possibly cancerous nodules. Radiologists mainly focus on nodule size, density, and growth to assess malignancy. In this study, we present a novel method based on a 3D siamese neural network for the re-identification of nodules in a pair of CT scans of the same patient, without the need for image registration. The network was integrated into a two-stage automatic pipeline to detect, match, and predict nodule growth given pairs of CT scans. On an independent test set, the method achieved a nodule detection sensitivity of 94.7%, an accuracy of 88.8% for temporal nodule matching, and a sensitivity of 92.0% with a precision of 88.4% for nodule growth detection.
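For the nodule re-identification step, here is a minimal PyTorch sketch of a 3D siamese matcher: one shared 3D CNN embeds a patch from each scan, and the embedding distance scores whether both patches show the same nodule. The architecture and patch size are illustrative assumptions, not the paper's exact network.

```python
# Toy 3D siamese matcher for nodule re-identification across two CT
# scans; the encoder and patch size are illustrative placeholders.
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, dim),
        )

    def forward(self, x):
        return self.net(x)

class Siamese3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder3D()  # weights shared across both inputs

    def forward(self, a, b):
        # Distance in embedding space: small means "same nodule".
        return torch.norm(self.encoder(a) - self.encoder(b), dim=1)

# Toy usage: two 32^3 patches from scans of the same patient at
# different time points; matching happens in embedding space, so no
# image registration is needed.
model = Siamese3D()
t0 = torch.rand(1, 1, 32, 32, 32)
t1 = torch.rand(1, 1, 32, 32, 32)
print(model(t0, t1).item())
```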
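Returning to the Can-See abstract above, here is a toy PyTorch sketch of a category-consistent constraint: features of vertebrae sharing a label are pulled toward a shared, learnable class prototype. This center-loss-style penalty is a stand-in for CSRN's dictionary-learning formulation, which differs in detail.

```python
# Toy category-consistent constraint: same-label vertebra features are
# pulled toward one learnable prototype per label (a center-loss-style
# stand-in for CSRN's dictionary-based formulation).
import torch
import torch.nn as nn

class CategoryConsistentLoss(nn.Module):
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        # One learnable prototype ("dictionary atom") per vertebra label.
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        # Penalize squared distance to each feature's class prototype.
        return ((feats - self.prototypes[labels]) ** 2).sum(dim=1).mean()

# Toy usage: 6 vertebra features of dimension 64, labels in [0, 25);
# added to the classification loss, this pulls same-label features
# together, sharpening category discrimination.
loss_fn = CategoryConsistentLoss(num_classes=25, feat_dim=64)
feats = torch.randn(6, 64)
labels = torch.randint(0, 25, (6,))
print(loss_fn(feats, labels).item())
```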