KinemaNet: Kinematic descriptors of deformation of ONH images for glaucoma progression detection

The University of Memphis

Datasets

We provide two distinct datasets, one synthetic and one naturalistic, for training, evaluating, and analyzing the performance of techniques and algorithms for deformation and strain field estimation. The datasets are designed to support studies of both small- and large-range deformations, multi-frame sequences, and non-rigid motion.

I. Synthetic Speckle Pattern Dataset [click download]

This dataset consists of randomly generated synthetic speckle patterns that are deformed by random deformation fields, producing deforming sequences with ground-truth deformation fields. In addition, we released a PyPI package, Specklegen, which allows users to generate additional deforming speckle datasets with greater control over image size, sequence length, and motion range; a minimal generation sketch appears below.

Our synthetic speckle pattern dataset, the corresponding ground-truth deformation and strain fields, and our KinemaNet estimates.
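
For readers who want to reproduce this kind of data without Specklegen, the following is a minimal sketch of the generation recipe described above: a random speckle image is warped by a smooth random deformation field, which then serves as the ground truth. The array shapes and parameter values are illustrative assumptions, not the Specklegen defaults.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    rng = np.random.default_rng(0)
    H, W = 256, 256

    # Speckle pattern: low-pass filtered noise, normalized to [0, 1].
    speckle = gaussian_filter(rng.random((H, W)), sigma=2.0)
    speckle = (speckle - speckle.min()) / (speckle.max() - speckle.min())

    # Smooth random deformation field (u, v) in pixels; the smoothing scale
    # keeps it spatially coherent and the amplitude sets the motion range.
    amplitude = 3.0
    u = gaussian_filter(rng.standard_normal((H, W)), sigma=16) * amplitude
    v = gaussian_filter(rng.standard_normal((H, W)), sigma=16) * amplitude

    # Backward warp: the deformed frame samples the source at (y - v, x - u),
    # so (u, v) is the ground-truth displacement at each pixel.
    yy, xx = np.mgrid[0:H, 0:W].astype(np.float64)
    deformed = map_coordinates(speckle, [yy - v, xx - u], order=1, mode='reflect')

Repeating this step with fresh fields, or integrating a field over time, yields a multi-frame deforming sequence.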

II. Deforming Rubber Dataset

A. Uniaxial Rubber Testing [click download]

This dataset provides realistic deforming image sequences. The sequences were created by subjecting rubber samples of different geometries to uniaxial stretch testing (UStretch, CellScale, Canada). During this process, a specified strain was applied laterally at one end of each sample while the other end was held stationary.

Uniaxial rubber deformation and our KinemaNet deformation field estimates from image sequence inputs (magnitude in pixels/frame).

B. Rubber Material Modeling [click download]

To validate the KinemaNet deformation field estimates for the deforming rubber samples in (A), we built finite element models (FEM) of the samples in COMSOL. The material modeling simulates the uniaxial stretch testing of the above samples to generate ground-truth-equivalent deformation and strain fields.

Displacement field results (ground-truth equivalent) from the FEM modeling of the rubber samples (magnitudes in mm).
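
Because the FEM fields are reported in mm while KinemaNet outputs pixel displacements, validation requires a unit conversion. Below is an illustrative comparison sketch; mm_per_pixel is a hypothetical calibration constant of the imaging setup and is not specified on this page.

    import numpy as np

    def mean_endpoint_error(u_est_px, v_est_px, u_fem_mm, v_fem_mm, mm_per_pixel):
        """Mean endpoint error (mm) between estimated and FEM displacement fields."""
        du = u_est_px * mm_per_pixel - u_fem_mm
        dv = v_est_px * mm_per_pixel - v_fem_mm
        return np.mean(np.hypot(du, dv))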

Abstract

In this work, we developed, to the best of our knowledge, the first optical flow based elastography kinematic descriptors of the retinal optic nerve head (ONH) from multiple confocal images. These kinematic descriptors comprise strain descriptors, which capture local shape changes associated with tissue expansion and contraction, and vorticity descriptors, which capture local rotations and spins; together they characterize the underlying deformation during progressive glaucoma. Our approach utilizes our recent multi-frame optical flow estimation algorithm, SSTM, which estimates highly accurate flow fields from image sequences by modeling their space-time dependencies. These flow estimates are used to approximate the deformation field, from which the elastography kinematic descriptors are derived. Our estimated descriptors demonstrated exceptional performance in classifying ONHs with progressive glaucoma from normal ONHs on both the LEGS and DIGS datasets. Moreover, we introduce an elastography-based event-analysis visualization tool that captures major tissue deformation events occurring in the ONH. This visualization tool has significant potential for diagnostic application in identifying localized tissue damage and relative rates of glaucoma progression from follow-up ONH scans.
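
To make the descriptors concrete, the sketch below computes the small-strain tensor and vorticity of a dense displacement field with central finite differences. It illustrates the kind of kinematic quantities described above under an infinitesimal-strain assumption; it is not the KinemaNet implementation itself.

    import numpy as np

    def kinematic_descriptors(u, v):
        """u, v: (H, W) displacement components in pixels."""
        du_dy, du_dx = np.gradient(u)        # axis 0 is y, axis 1 is x
        dv_dy, dv_dx = np.gradient(v)
        eps_xx = du_dx                       # normal strain along x
        eps_yy = dv_dy                       # normal strain along y
        eps_xy = 0.5 * (du_dy + dv_dx)       # shear strain
        vorticity = dv_dx - du_dy            # local rotation / spin
        dilatation = eps_xx + eps_yy         # expansion (+) vs. contraction (-)
        return eps_xx, eps_yy, eps_xy, vorticity, dilatation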

KinemaNet Architecture Overview

KinemaNet architecture: learning-based optical flow estimation is used to estimate the deformation field and the four principal components of strain.
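
Since SSTM outputs per-frame-pair flow (pixels/frame), obtaining the total deformation over a sequence involves composing the flows, for example by tracking each pixel forward. The sketch below assumes flows[k] is a (2, H, W) array mapping frame k to frame k+1; that data layout is our assumption, not SSTM's actual interface.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def accumulate_flow(flows):
        """Compose per-frame flows into a total (u, v) deformation field."""
        _, H, W = flows[0].shape
        yy, xx = np.mgrid[0:H, 0:W].astype(np.float64)
        px, py = xx.copy(), yy.copy()        # tracked pixel positions
        for f in flows:
            # Sample each flow at the current (sub-pixel) positions.
            fu = map_coordinates(f[0], [py, px], order=1, mode='nearest')
            fv = map_coordinates(f[1], [py, px], order=1, mode='nearest')
            px, py = px + fu, py + fv
        return px - xx, py - yy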

KinemaNet Results on Retinal Dataset

KinemaNet estimates of the von Mises strain distribution in glaucomatous and normal eyes.
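
For reference, a von Mises equivalent strain like the one shown above can be computed from the 2-D strain components as follows. This sketch assumes plane strain (eps_zz = 0), which may differ from the assumptions used in the paper.

    import numpy as np

    def von_mises_strain(eps_xx, eps_yy, eps_xy):
        """Von Mises equivalent strain under a plane-strain assumption."""
        tr = eps_xx + eps_yy
        dxx = eps_xx - tr / 3.0              # deviatoric normal strains
        dyy = eps_yy - tr / 3.0
        dzz = -tr / 3.0
        return np.sqrt((2.0 / 3.0) * (dxx**2 + dyy**2 + dzz**2 + 2.0 * eps_xy**2))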

BibTeX


    @article{ferede2023sstm,
      author  = {Ferede, Fisseha Admasu and Balasubramanian, Madhusudhanan},
      title   = {SSTM: Spatiotemporal recurrent transformers for multi-frame optical flow estimation},
      journal = {Neurocomputing},
      year    = {2023}
    }