Oral Presentation, Hunter Cell Biology Meeting 2022

Seeing what the AI is thinking: visual analytics explaining AI models in biomedical imaging (#47)

Wenzhao Wei 1, Sacha Haidinger 1, Erik Meijering 1, John G Lock 2
  1. School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
  2. School of Medical Sciences, University of New South Wales, Sydney, NSW, Australia

Biomedical imaging is producing a tidal wave of data with the potential to transform human health. Yet, in scale and complexity, this data is beyond the scope of human analysis. Only artificial intelligence (AI) – particularly Deep Learning – can fully leverage this wealth of information for human betterment, guiding interpretation of complex research- or diagnostic-imaging data to understand disease mechanisms and predict patient responses to therapies.

But how can we trust our health to AI models that we cannot comprehend? Overcoming this ‘explainability gap’ is vital for the uptake of AI models that may literally save lives, and is the central driver of the emerging field of Explainable AI. This question is pertinent to biomedical image data ranging from MRI and CT scans to single-cell imaging, as exemplified herein.

We introduce “BioDive 2.0”, an updated virtual reality-based visual analytics software tool developed to immerse biomedical researchers in 3D representations of “what the AI is thinking” (1). This visual environment can guide interpretation of data embeddings derived through representation learning, whether achieved via statistical (e.g. t-SNE, UMAP), machine learning or deep learning methods. To exemplify the interplay of BioDive with deep learning-based data embeddings, we will also introduce our world-first 2-stage Variational Autoencoder architecture for single-cell image representation learning (2). Using BioDive to directly couple biomedical images to AI-learned data representations enables intuitive comparison of AI interpretations with our own visual perception and expert understanding. Supporting the potential for human-in-the-loop feedback to improve AI models, this fusion of visual analytics and AI begins to bridge the explainability gap, empowering health advances based on biomedical image analysis.
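
To illustrate the kind of input such a visual environment explores, the minimal sketch below projects per-cell feature vectors into three dimensions and exports the coordinates alongside image identifiers for coupling back to the source images. This is a hypothetical example only, not the authors' BioDive or 2-stage Variational Autoencoder pipeline; the file names, parameter values and the use of the umap-learn library are assumptions made for illustration.

  # Illustrative sketch only: reduce per-cell feature vectors to a 3D embedding
  # for visual exploration. NOT the authors' BioDive or 2-stage VAE pipeline;
  # file names and parameters are assumptions for this example.
  import numpy as np
  import umap  # from the umap-learn package

  # Hypothetical inputs: one row of learned/extracted features per cell image,
  # plus matching image identifiers.
  features = np.load("cell_features.npy")            # shape: (n_cells, n_features)
  image_ids = np.loadtxt("cell_ids.txt", dtype=str)  # shape: (n_cells,)

  # Project to 3 dimensions so each cell becomes a point in a 3D scatter
  # that can be linked back to its source image for inspection.
  embedding = umap.UMAP(n_components=3, n_neighbors=30, min_dist=0.1,
                        random_state=0).fit_transform(features)

  # Export coordinates with image identifiers for a visualisation tool to load.
  np.savetxt("embedding_3d.csv",
             np.column_stack([image_ids, embedding]),
             fmt="%s", delimiter=",",
             header="image_id,x,y,z", comments="")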

  1. Lock, J.G., D. Filonik, R. Lawther, N. Pather, K. Gaus, S. Kenderdine, and T. Bednarz. 2018. Visual Analytics of Single Cell Microscopy Data Using a Collaborative Immersive Environment. International Conference on Virtual Reality Continuum and its Applications in Industry (VRCAI ’18).
  2. Wei, W., S. Haidinger, J. Lock, and E. Meijering. 2021. Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings. Lect. Notes Comput. Sci. 487–497. doi:10.1007/978-3-030-87589-3_50.