Artificial Intelligence Applications in Urology

RECENT ADVANCES IN applications of artificial intelligence (AI) in urology were reviewed during a plenary session of the annual meeting of the American Urological Association (AUA2021) in a panel discussion moderated by Prokar Dasgupta, MD, Professor of Surgery and Chair in Robotic Surgery and Urological Innovation at King’s College London. Dr. Dasgupta recently co-authored a review on AI applications in urologic oncology.1 During the discussion, Jaime Landman, MD, gave an overview of applications of AI in urologic imaging and outcomes, and Andrew J. Hung, MD, reviewed recent research in assessing surgical competence with AI. Dr. Landman is Chair of the Department of Urology at the University of California, Irvine (UCI), and Dr. Hung is Assistant Professor of Urology and Director of Robotic Simulation & Education at the Keck School of Medicine of the University of Southern California (USC).

“Although in its infancy, AI has already upended entire industries and revolutionized the world as we know it,” Dr. Landman noted. “It is essentially our clear duty to understand and apply this force to medicine and optimize patient outcomes for urologic diseases.” The field of AI can be categorized according to ability into narrow, general, and super AI, he explained. Narrow AI is already being implemented in many ways in society, while super AI has the potential to offer intelligence superior to that of humans.

The two main techniques in developing AI models in medicine are machine learning and deep learning. “Machine learning models use human intelligence to code information into a format that a computer algorithm can understand, and then use statistics and repetition to derive the correct answer. Conversely, deep learning, like convolutional neural networks (CNNs), can take information from the world as found and then learn how to understand these data to produce the right answer,” Dr. Landman explained. “These algorithms all need data, and the field of urology is ripe for innovation in this space, with data from electronic medical records (EMRs) already being harvested to be utilized for future research.”
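
To make the distinction concrete, the sketch below contrasts the two paradigms in Python. It is purely illustrative (the feature names, data, and network are assumptions, not anything presented in the session): classical machine learning operates on features a human has already encoded, while a CNN ingests raw pixels and learns its own representations.

```python
# A minimal sketch contrasting the two paradigms Dr. Landman describes.
# All feature names, data, and shapes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

# --- Machine learning: a human first encodes domain knowledge as features ---
# e.g., hypothetical per-lesion features: [volume_cc, ADC_mean, PSA_density]
X_features = np.array([[0.8, 1.1, 0.12], [2.4, 0.7, 0.31], [0.5, 1.3, 0.09]])
y = np.array([0, 1, 0])  # 0 = benign, 1 = clinically significant
clf = LogisticRegression().fit(X_features, y)

# --- Deep learning: a CNN learns its own features from raw pixels ---
class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),  # two output classes
        )

    def forward(self, x):
        return self.net(x)

images = torch.randn(4, 1, 64, 64)  # stand-in for a batch of raw MR slices
logits = TinyCNN()(images)          # the network ingests pixels directly
```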

Among current applications of AI in urologic oncology imaging, Dr. Landman cited a study from the UCI Department of Radiology in which CNNs were applied to automate the process of segmenting the peripheral and transition zones on prostate MR images.2 Assigned Prostate Imaging Reporting and Data System (PI-RADS) scores often show significant variability in sensitivity and specificity between radiologists, he observed.

“This is exactly the kind of problem for which AI is better equipped than humans. Images serve as questions and pathology supplies the answers, and the AI can rapidly find higher-order patterns to predict outcomes while removing subjectivity from the problem,” he declared. “Studies like this provide insight into how AI can be utilized to improve real-world diagnostic results, such as for PI-RADS.”

In AI-assisted pathology, many researchers have been working to develop algorithms that can read prostate biopsy histology and produce Gleason grade scores, Dr. Landman noted. One study conducted by Google Health and the Medical University of Graz, Austria, demonstrated in a retrospective cohort of 2807 prostate cancer patients who underwent radical prostatectomy at a single center that AI-based Gleason pattern quantification provided improved risk stratification over pathologists’ reports alone.3 Dr. Landman explained that by incorporating ground-truth data, i.e., cancer-specific outcomes, the researchers were able to train AI to produce risk scores that were more accurate than scores generated by pathologists. “While no longer a Gleason score, the AI score was more accurate,” he stressed.

Other novel applications of AI that can affect day-to-day practice were demonstrated in a study that used deep learning to augment and improve cystoscopy in bladder tumor detection.4 The researchers trained a deep learning algorithm called CystoNet using cystoscopy images from histologically confirmed bladder tumors, Dr. Landman explained. The tumors were imaged manually, and the algorithm was able to tag tumors in real time as urologists performed cystoscopy. The system had a per-tumor sensitivity of 90.9% and a specificity of 98.6% in the validation cohort of patients. It could also provide continuous feedback to the urologist performing cystoscopy by continually re-evaluating screen content, Dr. Landman added.
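
For reference, the snippet below shows how such per-tumor metrics are computed; the counts are invented solely to reproduce the reported rates and are not from the CystoNet study.

```python
# A minimal sketch of the metrics reported for CystoNet. The counts below are
# made-up assumptions chosen only to reproduce the reported rates:
# sensitivity = TP / (TP + FN) over confirmed tumors,
# specificity = TN / (TN + FP) over tumor-free frames.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

print(f"sensitivity: {sensitivity(tp=50, fn=5):.1%}")    # -> 90.9%
print(f"specificity: {specificity(tn=985, fp=14):.1%}")  # -> 98.6%
```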

In the subspecialty of endourology, Dr. Landman and his colleagues at the Curiosity and Innovation Laboratory at UCI have used machine learning to develop a smartphone (iPhone) application (app) that can rapidly identify kidney stone composition (presented elsewhere at AUA2021).5 When coupled in real time to a KARL STORZ ureteroscope video feed, the app was able to recognize a struvite stone in a basin of saline. Overall, the app correctly identified 94% of stones in a saline basin model and 87% of stones in an ex vivo pig kidney model, compared with 58% identified by three fellowship-trained endourologists in the pig model.

Another UCI project has addressed problems involved in handling stone disease (urolithiasis). “One of the major hindrances is accurately characterizing stone burden,” Dr. Landman said. “The 3D technologies currently used are very time-intensive; it can take a lot of time to accurately segment stones slice by slice on CT scan and calculate volumes,” he explained. “To address this, we trained a CNN to take CT scans and generate 3D renderings of kidney stones and calculate their volumes.” A total of 770 stones were identified from 119 patients. “Using an analytical technique called the Dice score, we evaluated the degree of overlap between AI-calculated volume and hand-segmented volumes,” he said. AI-segmented volumes performed better than the validated European Association of Urology (EAU) ellipsoid formula, but not as well as manually segmented stone volumes, he reported. “We now have the tools that hold real potential,” he declared. “If we can automate this process so that we rapidly gather these data, further intelligence systems can help us better understand what we need to do to treat these patients.” More details about this study will be presented at the World Congress of Endourology (WCET 2021), September 23-25, 2021.
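
The Dice score Dr. Landman mentions is a standard overlap measure for comparing an AI-generated segmentation against a hand-segmented ground truth. A minimal sketch, with illustrative masks in place of real CT segmentations:

```python
# Dice score: 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical).
# The two masks below are illustrative stand-ins for an AI-segmented stone and a
# manually segmented stone on one CT slice.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    return 2.0 * intersection / denominator if denominator else 1.0

ai_mask = np.zeros((64, 64), dtype=bool)
ai_mask[20:40, 20:40] = True
manual_mask = np.zeros((64, 64), dtype=bool)
manual_mask[22:42, 22:42] = True
print(f"Dice: {dice_score(ai_mask, manual_mask):.3f}")
```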

“Just thinking about what comes next for AI and its applications to urology and medicine, we can’t but be utterly amazed,” Dr. Landman said at the close of his presentation. He urged the medical community in general, and urological communities specifically, to adopt new ideas as they embrace innovation, especially as industries around healthcare are undergoing data-driven revolutions.

Dr. Hung began by explaining how he and his colleagues at the Center for Robotic Simulation & Education at USC record both surgical video and system data, including instrument tracking and events, using a novel recorder (“dVLogger”) that directly captures surgeon manipulations on the da Vinci® surgical system to calculate automated performance metrics (APMs). This approach was used in a pilot study that demonstrated how APMs for preselected steps in robot-assisted radical prostatectomy (RARP) vary significantly between experienced surgeons and novice surgeons.6 Given the difficulty and complexity of analyzing APM data, “early on, we realized with such an immense amount of data, machine learning seemed like a natural partner,” he explained. Utilizing machine learning on APM data, Dr. Hung and his colleagues were able to predict “with an 85% accuracy whether patients were going to stay 1-2 days in the hospital or more than 2 days after prostatectomy.”7
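
As an illustration of the kind of model described (not the group’s actual pipeline, and with entirely synthetic APM features), a classifier can be trained on per-case APMs to predict short versus longer hospital stay:

```python
# A minimal sketch, assuming synthetic APM features: a classifier predicting
# short (1-2 day) vs. longer (>2 day) stay from automated performance metrics.
# Feature names, distributions, and labels are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cases = 200
# Hypothetical APMs per case: instrument path length (cm), camera movement
# time (s), dominant-instrument velocity (cm/s), step duration (s)
apms = rng.normal(loc=[300, 45, 2.5, 900], scale=[60, 10, 0.5, 180],
                  size=(n_cases, 4))
# Synthetic label loosely tied to step duration, standing in for real outcomes
los_over_2_days = (apms[:, 3] + rng.normal(0, 120, n_cases) > 950).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(model, apms, los_over_2_days, cv=5).mean())
```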

Dr. Hung and his group further showed that a deep learning model using APMs and clinical features (DeepSurv) was able to predict time to urinary continence recovery after RARP.8 They utilized 3 data sets: a set of 16 clinicopathological features, 492 APMs (41 for each of 12 surgical steps), and a combined set of 508 clinicopathological features and APMs. The study demonstrated that surgeon APMs ranked higher than clinicopathological features in predicting the return of urinary continence at 3 and 6 months, suggesting that surgeon experience plays a greater role in this outcome. Within the feature ranking of the DeepSurv model, only APMs (no patient features) ranked in the top 10; 5 of the top-ranked features were metrics measured during the vesicourethral anastomosis, and 3 involved right instrument articulation.
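
DeepSurv-style models couple a feed-forward network to the Cox proportional hazards partial likelihood, so that censored time-to-event outcomes such as time to continence recovery can be modeled. The sketch below outlines that idea under stated assumptions (layer sizes, feature counts, and data are illustrative); it is not the published model:

```python
# A minimal DeepSurv-style sketch: a network outputs a per-patient log-risk
# score, trained with the negative Cox partial likelihood. The 508 inputs echo
# the combined feature set above (16 clinical features + 492 APMs); everything
# else is an illustrative assumption.
import torch
import torch.nn as nn

class DeepSurvNet(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),  # log-risk score per patient
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def cox_partial_likelihood_loss(log_risk, time, event):
    # Sort by survival time descending so each patient's risk set is a prefix.
    order = torch.argsort(time, descending=True)
    log_risk, event = log_risk[order], event[order]
    log_cumsum = torch.logcumsumexp(log_risk, dim=0)  # log-sum over risk set
    # Only uncensored patients (event == 1) contribute terms.
    return -((log_risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

x = torch.randn(32, 508)                    # synthetic feature batch
time = torch.rand(32) * 26                  # weeks to continence recovery
event = torch.randint(0, 2, (32,)).float()  # 1 = recovered, 0 = censored
model = DeepSurvNet(508)
loss = cox_partial_likelihood_loss(model(x), time, event)
loss.backward()
```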

Based on the results of the second study, Dr. Hung and his colleagues have focused on the vesicourethral anastomotic step of RARP and have looked at obtaining detailed performance metrics of the maneuvers that make up suturing. Their recent analysis using surgeon skill metrics and patient factors to predict urinary continence recovery after RARP was performed in a similar manner to their previous study.9 However, as Dr. Hung explained, “granular suturing performance metrics (detailed APMs)” were added to the summary APMs. “What stood out was that technical skills improved by 10 points the ability for us to predict time to urinary continence recovery,” Dr. Hung recalled. These results further reinforced the concept that surgeon experience is likely the most important factor in predicting patient continence recovery after RARP.

As a practical application, APMs can be utilized in a training environment, Dr. Hung noted, describing how calculation of APMs from motion tracking and system events data, plus synchronized surgical video during RARP, was used as a basis to develop a standardized training tutorial for robotic vesicourethral anastomosis.10 Every segment of the anastomosis can be quantified and analyzed through APMs, Dr. Hung explained. Analysis of 70 vesicourethral anastomoses revealed that more experienced surgeons performed the anastomotic maneuver in a predictable manner, whereas less experienced surgeons in training performed in a random manner, took longer, made more needle driving attempts, and caused more tissue trauma. Based on this work, “we created a step-by-step tutorial from the very first anastomosis to the last, all driven by APMs,” identifying the ideal gestures to utilize, Dr. Hung reported.

Recognizing that reviewing suture gestures, although important, is time-consuming, USC Urology has partnered with the California Institute of Technology (Caltech) to determine which gestures are being utilized for each stitch using a deep learning model. “In our early work, we have been able to predict with 70% accuracy which gestures were used,” Dr. Hung revealed. This work has been extended to analysis of renal hilar dissection.11 By disassembling tissue dissection into basic gestures, they have been able to create a classification system that can distinguish between experts and novices. They found that experts completed most (5 of 9) dissection gestures more efficiently than novices (P≤0.033). Further, “experts exhibited different gesture choices in all anatomic zones,” Dr. Hung added. For example, experts used more blunt dissection (peel/push) and less single sharp dissection (hot cut) while dissecting the renal vein (P<0.001). These data can also be utilized to predict patient outcomes such as time to continence recovery. “Decoding the first 100 dissection gestures identified using a machine learning model could predict with 80% accuracy a patient’s recovered function 12 months after surgery,” he said.
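
As a rough sketch of how a deep learning model might map instrument motion to gesture labels (an assumption for illustration, not the USC/Caltech model), a recurrent network can classify short kinematic windows into one of a fixed set of gestures:

```python
# A minimal gesture-classification sketch: an LSTM maps a short window of
# instrument kinematics to one of a fixed set of dissection gesture classes.
# Channel count, window length, and data are illustrative assumptions.
import torch
import torch.nn as nn

N_GESTURES = 9  # e.g., the 9 dissection gestures mentioned above

class GestureClassifier(nn.Module):
    def __init__(self, n_channels: int = 6, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_GESTURES)

    def forward(self, x):           # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)  # final hidden state summarizes the window
        return self.head(h_n[-1])   # gesture logits

# A 2-second window of instrument pose/velocity sampled at 30 Hz (illustrative).
window = torch.randn(8, 60, 6)
logits = GestureClassifier()(window)
predicted_gesture = logits.argmax(dim=1)
```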

The “take-home messages” from this work investigating how AI can assess surgical competency are 1) that APMs and technical skills predict patient outcomes; 2) computer vision can recognize surgical gestures; and 3) machine learning can recognize ideal surgical patterns that lead to ideal surgical recovery, Dr. Hung concluded.

Akhil Abraham Saji, MD is a urology resident at New York Medical College / Westchester Medical Center. His interests include urology education and machine learning applications in urologic care. He is a founding and current member of the EMPIRE Urology New York AUA section team.

References

  1. Brodie A, Dai N, Teoh JY, Decaestecker K, Dasgupta P, Vasdev N. Artificial intelligence in urological oncology: An update and future applications. Urol Oncol. 2021;39(7):379-399. DOI: 10.1016/j.urolonc.2021.03.012
  2. Bardis M, Houshyar R, Chantaduly C, et al. Segmentation of the prostate transition zone and peripheral zone on MR images with deep learning. Radiol Imaging Cancer. 2021;3(3):e200024. DOI: 10.1148/rycan.2021200024
  3. Wulczyn E, Nagpal K, Symonds M, et al. Predicting prostate cancer specific-mortality with artificial intelligence-based Gleason grading. Commun Med. 2021;1(1):1-8. DOI: 10.1038/s43856-021-00005-3
  4. Shkolyar E, Jia X, Chang TC, et al. Augmented bladder tumor detection using deep learning. Eur Urol. 2019;76(6):714-718. DOI: 10.1016/j.eururo.2019.08.032
  5. Peta A, Brevik A, Avad M, et al. Initial experience with real-time detection of kidney stone composition using artificial intelligence, machine learning, and smartphone technology. J Urol. 2021;206(3S, suppl):e388. Abstract V05-07. DOI: 10.1097/JU.0000000000002012.07
  6. Hung AJ, Chen J, Jarc A, Hatcher D, Djaladat H, Gill IS. Development and validation of objective performance metrics for robot-assisted radical prostatectomy: a pilot study. J Urol. 2018;199(1):296-304. DOI: 10.1016/j.juro.2017.07.081
  7. Hung AJ, Chen J, Che Z, et al. Utilizing machine learning and automated performance metrics to evaluate robot-assisted radical prostatectomy performance and predict outcomes. J Endourol. 2018;32(5):438-444. DOI: 10.1089/end.2018.0035
  8. Hung AJ, Chen J, Ghodoussipour S, et al. A deep learning model using automated performance metrics and clinical features to predict urinary continence recovery after robot-assisted radical prostatectomy. BJU Int. 2019;124(3):487-495. DOI: 10.1111/bju.14735
  9. Trinh L, Mingo S, Vanstrum EB, et al. Survival analysis using surgeon skill metrics and patient factors to predict urinary continence recovery after robot-assisted radical prostatectomy. Eur Urol Focus. 2021; published online April 12, 2021. DOI: 10.1016/j.euf.2021.04.001
  10. Chen J, Oh PJ, Cheng N, et al. Use of automated performance metrics to measure surgeon performance during robotic vesicourethral anastomosis and methodical development of a training tutorial. J Urol. 2018;200(4):895-902. DOI: 10.1016/j.juro.2018.05.080