
HyperEye: Susceptibility Weighted Imaging-based Informatics Tools for Brain Tumor Studies



Summary


A brain tumor is any intracranial mass created by an abnormal and uncontrolled growth of cells either normally found in the brain itself (primary tumors) or spread from cancers located primarily in other organs (metastatic tumors). Non-invasive, high-resolution imaging modalities such as magnetic resonance imaging (MRI) play a central role in the detection and diagnosis of brain tumors. Current clinical diagnosis of brain tumors depends essentially on slice-by-slice review by radiologists. However, plain visual evaluation is insensitive and subject to operator error. Furthermore, a slice is only a two-dimensional measurement of a process that occurs in three dimensions.

The goal of Wayne State researchers is to develop medical imaging systems with immediate economic and technological applications, in which informatics tools use quantitative data extracted from medical images to help radiologists and physicians make timely and better clinical decisions. The overall objective of the proposed research, the next step in pursuit of this goal, is to develop HyperEye, a suite of 3D MRI-based informatics tools that enable more sensitive and quantitative tests to uncover evidence of brain tumors, and to integrate these tools into an advanced medical image database system.




Software and Datasets

Further details are available here.


Topics

  • Salient Region Detection and Feature Extraction in 3D Visual Data



    Saliency detection and local feature extraction for 2D images have received extensive attention in recent years. In this work, we propose saliency detection and feature extraction techniques for 3D visual data. Our algorithm works directly in 3D scale space and detects interesting regions at different scales. We then extract a local descriptor based on a gradient location-orientation histogram that is invariant to scale and rotation of the 3D object. The proposed methodology has been tested on 3D synthetic and Magnetic Resonance Imaging (MRI) data sets. The performance of the algorithm is evaluated by the repeatability of saliency detection and descriptor matching under 3D transformations and in the presence of noise.
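
    The following sketch illustrates, in Python, how such a 3D scale-space detector can be organized: a Gaussian scale space is built over the volume, difference-of-Gaussian responses are computed, and voxels that are local extrema across space and scale are kept. The sigma schedule and threshold are assumed values, and the gradient location-orientation descriptor itself is omitted; this is a minimal sketch, not the implementation used in this work.

        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

        def detect_3d_saliency(volume, sigmas=(1.0, 1.6, 2.56, 4.1), threshold=0.02):
            """Detect salient points as DoG extrema in a 3D scale space (illustrative)."""
            volume = volume.astype(np.float64)
            # Gaussian scale space: one smoothed copy of the volume per sigma.
            scales = [gaussian_filter(volume, s) for s in sigmas]
            # Difference-of-Gaussian responses between adjacent scales.
            dog = np.stack([scales[i + 1] - scales[i] for i in range(len(scales) - 1)])
            # A voxel is salient if it is an extremum of its 3x3x3x3 neighborhood
            # in (scale, z, y, x) and its response magnitude exceeds the threshold.
            footprint = np.ones((3, 3, 3, 3), dtype=bool)
            is_max = dog == maximum_filter(dog, footprint=footprint)
            is_min = dog == minimum_filter(dog, footprint=footprint)
            keep = (is_max | is_min) & (np.abs(dog) > threshold)
            # Return (scale_index, z, y, x) coordinates of the detected regions.
            return np.argwhere(keep)

        # Example on a synthetic volume containing a single bright blob.
        vol = np.zeros((32, 32, 32))
        vol[16, 16, 16] = 100.0
        vol = gaussian_filter(vol, 2.0)
        print(detect_3d_saliency(vol)[:5])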

    Further details of this work are available here.



  • 3D Reconstruction from 2D Images with Hierarchical Continuous Simplices



    This work presents an effective framework for the reconstruction of volumetric data from a sequence of 2D images. The 2D images are first aligned to generate an initial 3D volume, followed by the creation of a tetrahedral domain using the Carver algorithm. The resulting tetrahedralization preserves both the geometry and topology of the original dataset. A solid model is then reconstructed using simplex splines with fitting and fairing procedures. The reconstructed heterogeneous volumetric model can be quantitatively analyzed and easily visualized. Our experiments demonstrate that the approach achieves high accuracy in data reconstruction. The proposed techniques and algorithms can be applied to reconstruct heterogeneous solid models with complex geometry and topology from other visual data.
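
    The first stage described above, aligning the 2D slices into an initial 3D volume, can be sketched as follows with a simple FFT-based translational registration. The translation-only motion model is an assumption made for illustration; the Carver tetrahedralization and the simplex-spline fitting and fairing stages are not shown.

        import numpy as np
        from scipy.ndimage import shift as nd_shift

        def translation_by_phase_correlation(reference, moving):
            """Estimate the integer (dy, dx) shift that aligns `moving` to `reference`."""
            cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))
            cross_power /= np.abs(cross_power) + 1e-12
            correlation = np.fft.ifft2(cross_power).real
            dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
            # Wrap shifts larger than half the image size to negative offsets.
            if dy > reference.shape[0] // 2:
                dy -= reference.shape[0]
            if dx > reference.shape[1] // 2:
                dx -= reference.shape[1]
            return dy, dx

        def stack_aligned_slices(slices):
            """Align each slice to its predecessor and stack into an initial 3D volume."""
            aligned = [slices[0].astype(np.float64)]
            for img in slices[1:]:
                dy, dx = translation_by_phase_correlation(aligned[-1], img)
                aligned.append(nd_shift(img.astype(np.float64), (dy, dx), order=1))
            return np.stack(aligned)   # shape: (num_slices, height, width)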

    Further details of this work are available here.



  • Integrative Information Visualization of Multimodality Neuroimaging Data



    This work presents a novel integrative information visualization framework for cross-subject neuroimaging data analysis. The framework can integrate multimodal information captured by different imaging modalities with population-based statistical information drawn from different subjects. In this framework, accurate registration of cortical structures is the foundation for information integration across the population. We present a non-rigid inter-subject brain surface registration method using conformal structure and spherical thin-plate splines. The spherical thin-plate splines explicitly match prominent homologous landmarks while interpolating a global deformation field on the spherical domain, registering brain surfaces in a transformed space. An approach for integrative information fusion and visualization is then presented to handle multimodality neuroimaging data. The entire framework demonstrates its usefulness in multimodality neuroimaging data analysis across subjects.
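
    A minimal sketch of the landmark-driven interpolation step is given below, using SciPy's Euclidean thin-plate-spline radial basis functions on unit-sphere coordinates as a stand-in for true spherical thin-plate splines; the landmark counts and the re-projection step are illustrative assumptions, not the paper's construction.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def landmark_driven_spherical_warp(src_landmarks, dst_landmarks, surface_points):
            # Interpolate a smooth global deformation of the spherical parameter domain
            # that carries each source landmark onto its homologous target landmark.
            displacement = dst_landmarks - src_landmarks
            tps = RBFInterpolator(src_landmarks, displacement, kernel='thin_plate_spline')
            warped = surface_points + tps(surface_points)
            # Re-project onto the unit sphere so the result stays in the spherical domain.
            return warped / np.linalg.norm(warped, axis=1, keepdims=True)

        # Toy example: six homologous landmark pairs nudged along the unit sphere.
        rng = np.random.default_rng(0)
        src = rng.normal(size=(6, 3)); src /= np.linalg.norm(src, axis=1, keepdims=True)
        dst = src + 0.1 * rng.normal(size=(6, 3)); dst /= np.linalg.norm(dst, axis=1, keepdims=True)
        pts = rng.normal(size=(200, 3)); pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        print(landmark_driven_spherical_warp(src, dst, pts).shape)   # (200, 3)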

    Further details of this work are available here.



  • Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines



    In this work, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. We first develop an accurate and efficient algorithm to reconstruct a high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately and simultaneously represent the geometric, material, and other properties of the object. Through tight coupling with Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation during simulation, without interpolation or resampling. We have applied the framework to biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact, and compared our simulation results with ground truth obtained through intra-operative magnetic resonance imaging and real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique.
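
    The coupling with Lagrangian mechanics amounts to time-integrating an equation of motion of the form M a + D v + K u = f for the generalized coordinates (e.g., spline control-point displacements). The sketch below shows one common way to do this with a semi-implicit Euler step on assumed dense mass, damping, and stiffness matrices; it illustrates only the integration step, not the paper's formulation.

        import numpy as np

        def simulate_lagrangian_dynamics(M, D, K, f, u0, v0, dt=1e-3, steps=1000):
            """
            Integrate M*a + D*v + K*u = f(t) with a semi-implicit Euler step:
                (M + dt*D + dt^2*K) v_new = M*v + dt*(f - K*u)
                u_new = u + dt*v_new
            Returns the trajectory of generalized coordinates u.
            """
            u, v = u0.copy(), v0.copy()
            A = M + dt * D + dt**2 * K            # constant system matrix
            trajectory = [u.copy()]
            for step in range(steps):
                rhs = M @ v + dt * (f(step * dt) - K @ u)
                v = np.linalg.solve(A, rhs)
                u = u + dt * v
                trajectory.append(u.copy())
            return np.array(trajectory)

        # Toy example: a single damped degree of freedom pulled by a constant force.
        M = np.array([[1.0]]); D = np.array([[0.2]]); K = np.array([[10.0]])
        traj = simulate_lagrangian_dynamics(M, D, K, lambda t: np.array([1.0]),
                                            np.zeros(1), np.zeros(1))
        print(traj[-1])   # settles near the static solution u = f / K = 0.1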

    Further details of this work are available here.



  • Geodesic Distance-Weighted Shape Vector Image Diffusion



    This work proposes a novel and efficient surface matching and visualization framework based on geodesic distance-weighted shape vector image diffusion. Based on conformal geometry, our approach uniquely maps a 3D surface to a canonical rectangular domain and encodes the shape characteristics of the surface (e.g., mean curvatures and conformal factors) in the 2D domain to construct a geodesic distance-weighted shape vector image, in which the distances between sampling pixels are not uniform but are the actual geodesic distances on the manifold. Through the geodesic distance-weighted shape vector image diffusion, we create a multiscale diffusion space in which cross-scale extrema can be detected as robust geometric features for the matching and registration of surfaces. Statistical analysis and visualization of surface properties across subjects therefore become readily available. Experiments on scanned surface models show that our method is robust for feature extraction and surface matching even under noise and resolution changes. We have also applied the framework to real 3D human neocortical surfaces and demonstrated the excellent performance of our approach in statistical analysis and integrated visualization of multimodality volumetric data over the shape vector image.
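
    As an illustration of the diffusion and feature-detection machinery, the sketch below diffuses a multi-channel shape vector image with neighbor weights derived from per-pixel geodesic distances and then detects cross-scale extrema of the scale differences. The weighting scheme, 4-neighborhood, boundary handling, and thresholds are simplifying assumptions, not the exact construction used in this work.

        import numpy as np
        from scipy.ndimage import maximum_filter, minimum_filter

        def geodesic_weighted_diffusion(shape_image, neighbor_geodesic_dists,
                                        sigma=1.0, iterations=10):
            # One diffusion scale of a shape vector image (H, W, C): each pixel is
            # repeatedly averaged with its 4 neighbors, weighted by the actual
            # geodesic distances to them on the surface (array of shape (4, H, W),
            # ordered up/down/left/right).  Boundaries are handled by rolling.
            img = shape_image.astype(np.float64).copy()
            w = np.exp(-neighbor_geodesic_dists**2 / (2.0 * sigma**2))   # (4, H, W)
            shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
            for _ in range(iterations):
                num = img.copy()
                den = np.ones(img.shape[:2])
                for k, (dy, dx) in enumerate(shifts):
                    num += w[k][..., None] * np.roll(img, (dy, dx), axis=(0, 1))
                    den += w[k]
                img = num / den[..., None]
            return img

        def cross_scale_extrema(scale_stack, threshold=1e-3):
            # Detect features as extrema of adjacent-scale differences (stack: S, H, W, C).
            response = np.diff(scale_stack, axis=0).sum(axis=-1)        # (S-1, H, W)
            fp = np.ones((3, 3, 3), dtype=bool)
            ext = ((response == maximum_filter(response, footprint=fp)) |
                   (response == minimum_filter(response, footprint=fp)))
            return np.argwhere(ext & (np.abs(response) > threshold))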

    Further details of this work are available here.



  • Isoperimetric Co-clustering Algorithm (ICA) for Pairwise Data Co-clustering



    Data co-clustering refers to the problem of simultaneously clustering two data types. Typically, the data are stored in a contingency or co-occurrence matrix C whose rows and columns represent the two data types to be co-clustered. An entry Cij of the matrix signifies the relation between the object represented by row i and the object represented by column j. Co-clustering derives sub-matrices from the larger data matrix by simultaneously clustering its rows and columns. We present a novel graph-theoretic approach to data co-clustering in which the two data types are modeled as the two vertex sets of a weighted bipartite graph. We use the Isoperimetric Co-clustering Algorithm (ICA), a new method for partitioning the bipartite graph. ICA requires only the solution of a sparse system of linear equations, instead of the eigenvalue or SVD problem used in the popular spectral co-clustering approach. Our theoretical analysis and extensive experiments on publicly available datasets demonstrate the advantages of ICA over other approaches in terms of quality, efficiency, and stability in partitioning the bipartite graph.
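
    A sketch of the computational core follows: build the bipartite graph of the contingency matrix, ground a reference vertex, solve the sparse isoperimetric system L0 x0 = d0, and threshold the solution to obtain row and column labels. The median threshold used here is a simplification of the ratio-minimizing threshold, and the toy matrix is assumed for illustration.

        import numpy as np
        from scipy.sparse import csr_matrix, diags, bmat
        from scipy.sparse.linalg import spsolve

        def isoperimetric_bipartite_coclustering(C):
            # Bipartition the bipartite graph of an m x n contingency matrix C:
            # ground the highest-degree vertex, solve L0 x0 = d0, and threshold the
            # solution.  Rows and columns on the same side form a co-cluster.
            m, n = C.shape
            B = csr_matrix(C)
            W = bmat([[None, B], [B.T, None]], format='csr')     # bipartite adjacency
            d = np.asarray(W.sum(axis=1)).ravel()                # vertex degrees
            L = (diags(d) - W).tocsr()                           # graph Laplacian
            ground = int(np.argmax(d))                           # grounded reference vertex
            keep = np.ones(m + n, dtype=bool); keep[ground] = False
            x = np.zeros(m + n)
            x[keep] = spsolve(L[keep][:, keep].tocsc(), d[keep])
            labels = (x > np.median(x)).astype(int)              # simple threshold
            return labels[:m], labels[m:]                        # row labels, column labels

        # Toy contingency matrix with two dominant, weakly linked co-clusters.
        C = np.array([[5, 4, 0, 1],
                      [4, 5, 1, 0],
                      [0, 1, 5, 4],
                      [1, 0, 4, 5]], dtype=float)
        rows, cols = isoperimetric_bipartite_coclustering(C)
        print(rows, cols)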

    Further details of this work are available here.



  • Semi-supervised NMF for Homogeneous Data Clustering



    Traditional clustering algorithms are inapplicable to many real-world problems in which limited knowledge from domain experts is available. Incorporating this domain knowledge can guide a clustering algorithm and consequently improve the quality of clustering. We propose SS-NMF, a semi-supervised non-negative matrix factorization framework for data clustering. In SS-NMF, users provide supervision for clustering in the form of pairwise constraints on a few data objects, specifying whether they "must" or "cannot" be clustered together. Through an iterative algorithm, we perform a symmetric tri-factorization of the data similarity matrix to infer the clusters. Theoretically, we show the correctness and convergence of SS-NMF and demonstrate that it provides a general framework for semi-supervised clustering. Through extensive experiments on publicly available datasets, we demonstrate the superior clustering performance of SS-NMF.
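
    The sketch below illustrates the two ingredients named above: folding must-link/cannot-link constraints into the similarity matrix (here by a simple additive reward/penalty, an illustrative choice) and inferring clusters through multiplicative updates for the symmetric tri-factorization A ~ G S G^T. The exact SS-NMF update rules and their convergence analysis are given in the paper.

        import numpy as np

        def ss_nmf_symmetric(A, must_link, cannot_link, k, reward=1.0, penalty=1.0,
                             iters=200, eps=1e-9, seed=0):
            # Adjust the similarity matrix with the pairwise constraints, then factor
            # A ~ G S G^T with G, S >= 0 using standard multiplicative updates.
            # Cluster of object i = argmax_j G[i, j].
            A = A.astype(np.float64).copy()
            for i, j in must_link:
                A[i, j] += reward; A[j, i] += reward
            for i, j in cannot_link:
                A[i, j] = max(A[i, j] - penalty, 0.0); A[j, i] = A[i, j]

            rng = np.random.default_rng(seed)
            n = A.shape[0]
            G = rng.random((n, k)); S = rng.random((k, k)); S = (S + S.T) / 2
            for _ in range(iters):
                GS = G @ S
                G *= (A @ GS) / (G @ S @ G.T @ GS + eps)
                S *= (G.T @ A @ G) / (G.T @ G @ S @ G.T @ G + eps)
            return np.argmax(G, axis=1), G, S

        # Usage: labels, G, S = ss_nmf_symmetric(similarity, [(0, 1)], [(0, 5)], k=2)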

    Further details of this work are available here.



  • Consistent Isoperimetric High-order Co-clustering (CIHC) for High-order Data Co-clustering



    Many real-world clustering problems arising in data mining applications are heterogeneous in nature. Heterogeneous co-clustering involves the simultaneous clustering of objects of two or more data types. While pairwise co-clustering of two data types has been well studied in the literature, research on high-order heterogeneous co-clustering is still limited. We propose a graph-theoretical framework for addressing star-structured co-clustering problems, in which a central data type is connected to all the other data types. Partitioning this graph leads to a co-clustering of all the data types under the constraints of the star structure. Although graph partitioning approaches have been adopted before to address star-structured heterogeneous co-clustering problems, the main contribution of this work lies in an efficient algorithm for partitioning the star-structured graph. Computationally, our algorithm is very fast, as it requires only the solution of a sparse, overdetermined system of linear equations. Theoretical analysis and extensive experiments on toy and real datasets demonstrate the quality, efficiency, and stability of the proposed algorithm.
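
    The sketch below illustrates the kind of sparse, overdetermined least-squares problem involved: the two bipartite subgraphs (central type versus type 1, central type versus type 2) share the central-type unknowns, so stacking their grounded isoperimetric systems yields more equations than unknowns, solved here with lsqr. This assembly and the median thresholding are simplifying assumptions rather than the exact CIHC formulation.

        import numpy as np
        from scipy.sparse import csr_matrix, diags, bmat, hstack, vstack
        from scipy.sparse.linalg import lsqr

        def cihc_sketch(C1, C2):
            # C1: central x type-1 relational matrix; C2: central x type-2.
            m, n1 = C1.shape
            _, n2 = C2.shape

            def grounded_system(C):
                # Grounded isoperimetric system of one bipartite subgraph
                # (central vertex 0 is grounded in both subsystems).
                W = bmat([[None, csr_matrix(C)], [csr_matrix(C).T, None]], format='csr')
                d = np.asarray(W.sum(axis=1)).ravel()
                L = (diags(d) - W).tocsr()
                keep = np.ones(W.shape[0], dtype=bool); keep[0] = False
                return L[keep][:, keep].tocsr(), d[keep]

            L1, d1 = grounded_system(C1)     # unknowns: central[1:], type-1 vertices
            L2, d2 = grounded_system(C2)     # unknowns: central[1:], type-2 vertices

            # Shared central unknowns come first; pad each block system accordingly.
            A = vstack([hstack([L1[:, :m - 1], L1[:, m - 1:], csr_matrix((L1.shape[0], n2))]),
                        hstack([L2[:, :m - 1], csr_matrix((L2.shape[0], n1)), L2[:, m - 1:]])])
            b = np.concatenate([d1, d2])
            x = lsqr(A.tocsr(), b)[0]        # sparse overdetermined least squares

            x_full = np.concatenate([[0.0], x])            # reinsert the grounded vertex
            labels = (x_full > np.median(x_full)).astype(int)
            return labels[:m], labels[m:m + n1], labels[m + n1:]

        # Usage: central_labels, t1_labels, t2_labels = cihc_sketch(C1, C2)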

    Further details of this work are available here.



  • Semi-supervised NMF for Heterogeneous Data Clustering



    Co-clustering heterogeneous data has recently attracted extensive attention due to its impact on important applications such as text mining, image retrieval, and bioinformatics. However, co-clustering data without any prior knowledge or background information is still a challenging problem. In this work, we propose a Semi-Supervised Non-negative Matrix Factorization (SS-NMF) framework for data co-clustering. Specifically, our method computes new relational matrices by incorporating user-provided constraints through simultaneous distance metric learning and modality selection. Using an iterative algorithm, we then perform tri-factorizations of the new matrices to infer the clusters of the different data types and their correspondence. Theoretically, we prove the convergence and correctness of SS-NMF co-clustering and show the relationship between SS-NMF and other well-known co-clustering models. Through extensive experiments on publicly available text, gene expression, and image data sets, we demonstrate the superior performance of SS-NMF for heterogeneous data co-clustering.
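
    The sketch below shows only the tri-factorization step performed on a (constraint-adjusted) relational matrix, using generic multiplicative updates for R ~ F S G^T; the distance metric learning and modality selection described above are not included, and the update rules are the standard ones rather than those derived in the paper.

        import numpy as np

        def nmtf_coclustering(R, k_rows, k_cols, iters=300, eps=1e-9, seed=0):
            # Non-negative matrix tri-factorization R ~ F S G^T with multiplicative
            # updates; row cluster of i = argmax F[i, :], column cluster of j = argmax G[j, :].
            R = R.astype(np.float64)
            rng = np.random.default_rng(seed)
            m, n = R.shape
            F = rng.random((m, k_rows)); S = rng.random((k_rows, k_cols)); G = rng.random((n, k_cols))
            for _ in range(iters):
                F *= (R @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
                G *= (R.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
                S *= (F.T @ R @ G) / (F.T @ F @ S @ G.T @ G + eps)
            return np.argmax(F, axis=1), np.argmax(G, axis=1)

        # Usage: row_labels, col_labels = nmtf_coclustering(R_adjusted, k_rows=3, k_cols=3)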

    Further details of this work are available here.





  • Exemplar-based Visualization of Large Document Corpus



    With the rapid growth of the World Wide Web and electronic information services, text corpora are becoming available online at an incredible rate. By displaying text data in a logical layout (e.g., color graphs), text visualization presents a direct way to observe documents as well as to understand the relationships between them. In this work, we propose a novel technique, Exemplar-based Visualization (EV), to visualize an extremely large text corpus. Capitalizing on recent advances in matrix approximation and decomposition, EV presents a probabilistic multidimensional projection model in the low-rank text subspace with a sound objective function. The topic proportions of each document are obtained through iterative optimization and embedded into a low-dimensional space using parameter embedding. By selecting representative exemplars, we obtain a compact approximation of the data, which makes the visualization highly efficient and flexible. In addition, the selected exemplars neatly summarize the entire data set and greatly reduce the cognitive overload in the visualization, leading to an easier interpretation of a large text corpus. Empirically, we demonstrate the superior performance of EV through extensive experiments on publicly available text data sets.
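
    A minimal pipeline in the same spirit is sketched below (not the paper's exact probabilistic model): obtain each document's topic proportions by a low-rank non-negative factorization, select as exemplars the documents that carry each topic most strongly, and embed the proportions into 2D for display. The factorization, exemplar rule, and t-SNE embedding are stand-ins chosen for illustration.

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.manifold import TSNE

        def exemplar_based_visualization_sketch(X, n_topics=10, exemplars_per_topic=3):
            # X: non-negative documents-by-terms matrix (e.g., TF-IDF).
            # Assumes more documents than the t-SNE perplexity (30).
            W = NMF(n_components=n_topics, init='nndsvda', max_iter=400).fit_transform(X)
            proportions = W / (W.sum(axis=1, keepdims=True) + 1e-12)   # per-document topic mix
            # Exemplars: for each topic, the documents that carry it most strongly.
            exemplars = np.unique(np.argsort(-proportions, axis=0)[:exemplars_per_topic].ravel())
            # 2D coordinates of all documents for display.
            coords = TSNE(n_components=2, init='pca', perplexity=30).fit_transform(proportions)
            return coords, exemplars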

    Further details of this work are available here.

    A demo of the Exemplar-based Visualization software is available here.

    The 10Pubmed data set used by the software is available here.



  • Intrinsic Geometric Scale Space by Shape Diffusion



    This work formalizes a novel intrinsic geometric scale space (IGSS) of 3D surface shapes. The intrinsic geometry of a surface is diffused by means of the Ricci flow to generate a geometric scale space. We rigorously prove that this multiscale shape representation satisfies the axiomatic causality property. Within this theoretical framework, we further present a feature-based shape representation derived from IGSS processing, which is shown to be theoretically plausible and practically effective. By integrating the concept of scale-dependent saliency into the shape description, this representation is not only highly descriptive of local structures but also exhibits several desirable characteristics of global shape representations, such as compactness, robustness to noise, and computational efficiency. We demonstrate the capabilities of our approach through salient geometric feature detection and highly discriminative matching of 3D scans.
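
    A bare-bones illustration of curvature flow on a triangle mesh, in the spirit of the discrete Ricci/Yamabe flow, is sketched below: per-vertex conformal factors are evolved so that vertex curvatures (angle deficits) move toward a uniform target, with edge lengths rescaled conformally at each step. The scaling convention, step size, and toy mesh are assumptions for illustration; the IGSS construction and causality analysis are in the paper.

        import numpy as np
        from itertools import combinations

        def discrete_ricci_flow(faces, edge_lengths, n_vertices, target=None,
                                step=0.1, iters=200):
            # Evolve per-vertex conformal factors u with du = step * (target - K),
            # where K is the angle-deficit curvature and lengths are rescaled as
            # l_ij = exp(u_i + u_j) * l0_ij (one common convention).
            u = np.zeros(n_vertices)
            l0 = dict(edge_lengths)                      # {(i, j) with i < j: length}
            if target is None:                           # genus-0: total curvature 4*pi
                target = np.full(n_vertices, 4 * np.pi / n_vertices)

            def curvature(u):
                length = {e: np.exp(u[e[0]] + u[e[1]]) * l for e, l in l0.items()}
                K = np.full(n_vertices, 2 * np.pi)
                for f in faces:
                    for v in f:                          # corner angle at v (law of cosines)
                        a = length[tuple(sorted([x for x in f if x != v]))]   # opposite edge
                        b, c = (length[tuple(sorted((v, x)))] for x in f if x != v)
                        K[v] -= np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1, 1))
                return K

            for _ in range(iters):
                K = curvature(u)
                u += step * (target - K)                 # flow toward the target curvature
                u -= u.mean()                            # remove the global scaling mode
            return u, curvature(u)

        # Toy example: a slightly irregular tetrahedron flowed toward uniform curvature.
        faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
        lengths = dict(zip(combinations(range(4), 2), [1.0, 1.1, 0.95, 1.05, 1.0, 1.08]))
        u, K = discrete_ricci_flow(faces, lengths, 4)
        print(np.round(K, 3))        # vertex curvatures approach the uniform value pi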

    Further details of this work are available here.



  • Selection–fusion Approach for Classification of Datasets with Missing Values



    This work proposes a new approach based on missing-value pattern discovery for classifying incomplete data. The approach is particularly designed for the classification of datasets with a small number of samples and a high percentage of missing values, where available missing-value treatment approaches do not usually work well. Based on the pattern of the missing values, the proposed approach finds subsets of samples for which most of the features are available and trains a classifier for each subset. It then combines the outputs of the classifiers. Subset selection is translated into a clustering problem, allowing the derivation of a mathematical framework for it. A trade-off is established between the computational complexity (number of subsets) and the accuracy of the overall classifier. To deal with this trade-off, a numerical criterion is proposed for predicting the overall performance. The proposed method is applied to seven datasets from the popular University of California, Irvine data mining archive and an epilepsy dataset from Henry Ford Hospital, Detroit, Michigan (eight datasets in total). Experimental results show that the classification accuracy of the proposed method is superior to that of the widely used multiple imputation method and four other methods. They also show that the level of superiority depends on the pattern and percentage of missing values.
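
    A sketch of the selection-fusion idea using scikit-learn components follows: cluster the missing-value patterns, keep for each subset the features observed in most of its samples, train one classifier per subset on its complete rows, and fuse by majority vote the predictions of the classifiers applicable to each test sample. The clusterer, base classifier, availability threshold, and fusion rule are illustrative choices, not those selected by the paper's numerical criterion.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression

        def selection_fusion_fit_predict(X_train, y_train, X_test, n_subsets=3,
                                         availability=0.9, seed=0):
            # Assumes missing values are NaN and class labels are non-negative integers.
            miss_train = np.isnan(X_train)
            patterns = KMeans(n_clusters=n_subsets, n_init=10, random_state=seed)
            assignment = patterns.fit_predict(miss_train.astype(float))

            models = []
            for c in range(n_subsets):
                rows = assignment == c
                # Keep features observed in at least `availability` of the subset's samples.
                feats = np.where(miss_train[rows].mean(axis=0) <= 1 - availability)[0]
                complete = rows & ~np.isnan(X_train[:, feats]).any(axis=1)
                if feats.size and complete.sum() > 1 and len(set(y_train[complete])) > 1:
                    clf = LogisticRegression(max_iter=1000).fit(
                        X_train[np.ix_(complete, feats)], y_train[complete])
                    models.append((feats, clf))

            preds = np.zeros(len(X_test), dtype=int)
            for i, x in enumerate(X_test):
                votes = [clf.predict(x[feats].reshape(1, -1))[0]
                         for feats, clf in models if not np.isnan(x[feats]).any()]
                preds[i] = np.bincount(votes).argmax() if votes else 0  # fallback class
            return preds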

    Further details of this work are available here.

