Tomographic imaging using penetrating waves generates cross-sectional views of the internal anatomy of a living subject. For artefact-free volumetric imaging, projection views from a large number of angular positions are required. Here we show that a deep-learning model trained to map projection radiographs of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view. We demonstrate the feasibility of the approach with upper-abdomen, lung, and head-and-neck computed tomography scans from three patients. Volumetric reconstruction via deep learning could be useful in image-guided interventional procedures such as radiation therapy and needle biopsy, and might help simplify the hardware of tomographic imaging systems.
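The core idea — learning a mapping from a single 2D projection back to a 3D volume — can be illustrated with a toy linear model. The sketch below is a minimal, hypothetical illustration (not the paper's deep-learning architecture): it simulates single-view parallel-beam projections of tiny random volumes, fits a least-squares map from projection to volume, and applies it to a held-out case. Because one view cannot uniquely determine a volume, any such model relies on priors learned from training data, which is exactly the gap the paper's deep network fills.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # tiny N x N x N volume, for illustration only

def project(vol):
    # single-view parallel-beam projection: integrate along one axis
    return vol.sum(axis=0)

# toy training set: random "phantom" volumes and their projections
volumes = rng.random((500, N, N, N))
X = np.array([project(v).ravel() for v in volumes])  # inputs: 2D projections
Y = volumes.reshape(500, -1)                         # targets: 3D volumes

# linear least-squares "model" mapping one projection to a full volume;
# the real method replaces this with a trained deep network
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# reconstruct a held-out phantom from its single projection view
v_true = rng.random((N, N, N))
v_hat = (project(v_true).ravel() @ W).reshape(N, N, N)
print(v_hat.shape)  # (8, 8, 8)
```

With unstructured random phantoms the reconstruction is necessarily poor; the example only shows the shape of the problem — a severely underdetermined inverse mapping that a learned model regularises with anatomy-specific priors.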
#DigitalSurgery: The Robot Will Assist the Surgeon Now. @ShafiAhmed5 on the convergence of #AR, #VR, #AI & #Robotics on augmenting the clinician of the future. https://t.co/hpWqv5D2Yw Join us next week for #xMed 2019. https://t.co/La9S00SM8Z #MedEd #hcldr #surgery #digitalHealth
— Exponential Medicine (@ExponentialMed) October 30, 2019