David Mayerich,1 Guoning Chen,1 Hank Childs2
1University of Houston and 2University of Oregon
Project Overview
Visualization is the branch of computer science devoted to analyzing data through visual representation. Scientific visualization exploits the spatial properties of data collected from domains such as engineering, climate science, and biomedicine. Scientific visualization depends heavily on computer graphics, integrating modern hardware to facilitate visual analysis, but it is ultimately about transforming data into a form that can be rendered, with the goal of enabling insight into complex scientific data.
Scientific Visualization Needs the Unreal Engine
Next-generation imaging systems and simulations produce multidimensional data at unprecedented rates. These data sets contain complex structures that are challenging to represent using only explicit geometry or implicit volumes. Innovative tools and data structures are therefore required for data manipulation and pre-processing. However, current visualization platforms, such as Imaris, Amira, ParaView, ImageJ, and MeVisLab, lack low-level access to rendering and processing pipelines and provide limited virtual reality (VR) support. Further, many of these tools invested in OpenGL version 1.0 twenty years ago and have been reluctant to embrace modern graphics, resulting in a significant gap between commercially-available and academic visualization tools. This commitment to legacy implementations creates an opportunity: a new scientific visualization platform based on Unreal Engine would have significant advantages for end users with respect to performance and interface.
With the advent of new data acquisition methods, such as high-throughput microscopy, the scientific community needs a customizable visualization platform that enables both (1) high-level access to state-of-the-art rendering techniques and (2) direct access to low-level features for developing application-specific tools.
Our aim is to develop Chimera, an open-source toolkit that supports next-generation scientific data by integrating Unreal Engine's state-of-the-art rendering and virtual reality visualization with modern algorithms for scientific visualization. Unreal Engine provides an ideal foundation for this platform for two reasons:
- A consistently-updated rendering pipeline, motivated by game developers, to support state-of-the-art tools for interactive graphics, including VR and broad hardware support.
- Low-level access at critical points in the rendering pipeline, enabling integration of domain-specific tools developed by experts for data processing and manipulation.
Benefits for Unreal Engine
Unreal Engine has the potential to become the standard platform for scientific visualization. Chimera will expand the Unreal Engine into the following areas:
- Biomedical research: comparison of complete organ phenotypes for disease models
- Clinical research: precision medicine based on pathology acquired from complete 3D biopsy images, with a focus on cancer research
- Education: integration into grade-school, high-school, and college courses on disease and tissue function
- Computational science: physics-based simulations for phenomena including astrophysics, nuclear reactors, climate change, molecular dynamics, combustion, etc.
The Unreal Chimera project will foster a community to establish Unreal Engine as the next-generation tool for GPU-based scientific visualization for modern data on modern hardware. The scientific visualization community comprises several thousand researchers, with a premier conference (IEEE Visualization) regularly attracting over 1000 attendees. This community primarily delivers techniques to stakeholders through open-source software. It is typical to see inter-institutional and interdisciplinary collaboration between visualization scientists to deliver viable platforms to domain specialists. In some cases, such as VTK, ParaView, and VisIt, the resulting software has had over one million downloads, which speaks to the size of stakeholder communities. The impact for Epic would be two-fold: (1) provide a top-tier visualization platform for domain scientists, and (2) foster new developers utilizing Unreal Engine in diverse and novel ways to impact visualization research.
Use-Cases for Unreal Engine: Chimera
In Situ Scientific Visualization
Modern physics-based and computational simulations rely on supercomputing to generate massive data sets that are starting to exceed one trillion grid points (10K x 10K x 10K), with multivariate components at each grid point (e.g., density, temperature, pressure, and velocity) and thousands of time steps [1]. Even the most advanced storage systems do not provide sufficient space to meaningfully represent simulated phenomena. As a result, visualization is increasingly performed in situ, alongside the simulation. As simulations become more complex, static images representing single time points are being replaced by massively reduced “extracts” that can be explored by domain scientists during and after simulation. For example, in situ processing may generate an isosurface that can be stored and explored far more efficiently than the full volume. Since in situ processing is a relatively new paradigm, very few tools are dedicated to exploring these extracts. We believe this represents a significant opportunity for Chimera to enable graphics-intensive exploration of these extracts.
Large-scale Vector/Tensor Fields
Vector fields are of paramount importance in scientific, engineering, and medical applications for studying dynamical systems. Chimera will integrate algorithms critical to vector field visualization, including a variety of numerical integration methods to extract particle trajectories and depict flow patterns, and clustering algorithms to facilitate level-of-detail representation for dense and complex line-based data. Dr. Chen’s work in vector field visualization [2], [3] will provide the foundation for Chimera's algorithms, which will integrate with Unreal Engine by generating compatible data structures through Blueprints.

Higher-order tensor fields, commonly seen in mechanical engineering and brain imaging, are increasingly used to describe high-order physical properties of objects and spaces. Stress tensors characterize the anisotropic reaction to external forces within an object, while diffusion tensors describe anisotropic diffusion behavior in brain imaging. By leveraging Dr. Chen's expertise in this area [4], Chimera will include the latest advances for visualizing tensor fields defined on surfaces and within bounded volumes.
Large-Scale Subcellular Tissue Atlases
Exploring organ-scale tissue models at subcellular resolution is critical for understanding, modeling, and treating disease. This is challenging because 1 cm3 of tissue - around the size of a single rodent organ - is represented by terabytes of raw volumetric image data. These images encode complex three-dimensional (3D) structures that are difficult to search and visualize. Chimera will enable construction and exploration of quantitative multi-terabyte, multi-spectral 3D tissue maps, allowing biomedical experts to explore the cellular and molecular structure of whole organs and enabling routine use of high-throughput imaging and tissue mapping within biology, biomedicine, and education. Chimera will leverage Dr. Mayerich's expertise in biomedical visualization and large-scale imaging [5]–[7] to produce algorithms addressing needs in emerging large-volume microscopy systems.

Development Plan
Chimera will be developed through the following three subprojects:
Subproject 1 - Develop a foundational platform for scientific visualization in Unreal Engine. We will integrate existing tools commonly used in visualization and large-scale data processing as Unreal Engine blueprints. This includes existing libraries such as the Visualization Toolkit (VTK) [8], the Insight Toolkit (ITK) [9], and SIproc [10] to facilitate data processing and produce meshes and textures compatible with Unreal. Where possible, algorithms will be updated to integrate heterogeneous parallel acceleration, since Unreal Engine users will have access to GPU hardware. This platform will be distributed as an open-source library under the MIT software license and hosted on a public Git server with user tutorials.
Subproject 2 - Develop an application for exploring in situ extracts. With this subproject, we will provide Unreal: Chimera with the ability to explore in situ extracts. With the initial version, we will focus on three use cases -- rendering surfaces, explorable images, and flow reconstruction.
Subproject 3 - Develop task-specific tools for subcellular tissue atlases (STAs). We will develop and integrate three components into Unreal Engine to enable the proposed subcellular tissue atlases: (1) dynamic connected-component and mesh data structures to store sparse and complex geometry (such as neural and microvascular networks), (2) OpenVDB volumetric structures for storing multi-channel sparse implicit data for rendering using volumetric and ray-tracing approaches, and (3) a control scheme for VR display and navigation of large tissue models.
Post-project outreach and development. We will organize workshops and tutorials at the IEEE Visualization and Virtual Reality (VR) conferences to ensure broad awareness of our developed Unreal blueprints in the research community. We will also integrate the developed toolkits with other projects and future funding applications to ensure their continuous and sustainable improvement.
Subproject 1 - Develop a framework for generic scientific visualization in Unreal Engine
Our research laboratories at the University of Houston and the University of Oregon focus heavily on biomedical imaging and in situ visualization research. However, existing software limits our ability to integrate new data structures and algorithms with cutting-edge rendering. Research tools such as Imaris and Amira are expensive and far behind the state of the art, while open-source tools like ImageJ, MeVisLab, and ParaView lack components required for modern and next-generation data, such as support for terabyte-scale data sizes. To address these challenges, we will first create a foundational platform for Unreal Chimera by incorporating existing visualization software from the Visualization Toolkit (VTK) [8], Insight Toolkit (ITK) [9], and SIproc [10] to facilitate data processing and produce Unreal-compatible components, such as meshes and textures. C/C++ code from these open-source platforms will be integrated into Unreal Engine blueprints through wrapper classes, maintaining compatibility with VTK, ITK, and SIproc inputs where possible. This will give Unreal Chimera users access to powerful tools, established through decades of visualization research, through Unreal Engine's visual scripting platform. This will reduce the need for extensive programming expertise among domain specialists and facilitate further development of Chimera by extending these blueprint wrappers.
Subproject 2 - Develop an application for exploring in situ extracts
For this subproject, we will develop an application based on Unreal Engine that delivers modern graphics capabilities to domain scientists performing massive-scale supercomputing simulations. This subproject will add three foundational features required for in situ visualization, while maintaining the flexibility for future development and additional features over time. These features include:
- Rendering surfaces. This feature is well-aligned with traditional Unreal use cases. Its main purpose is to provide needed functionality, as well as a platform for demonstrating the benefits of Unreal over legacy visualization software.
- Explorable images. In situ extracts often take the form of images, although rather than color they contain information such as field value, depth, and surface normal. We will use Unreal to take these images as input and create new (color) images. This use case is highly aligned with the new “Cinema” approach [11]. Cinema is currently used to generate explorable images, but there is an opportunity for visualization programs that consume these images so domain scientists can explore their simulation results; this will be our focus.
- Flow reconstruction. In this third use case, pathlines derived from simulation vector fields will be saved to disk, and Chimera will then be used to display new flow visualization techniques derived from these pathlines.

Subproject 3 - Develop task-specific tools for subcellular tissue atlases (STAs).
Biological tissue at the cellular level is confounding in two critical ways:
- Three-dimensional structural complexity and variability. Microscopic tissues are densely packed with tremendously complex heterogeneous structures, including microvascular and neuronal networks that direct nutrient transport and define brain connectivity.
- Molecular and protein complexity. Cellular-level interactions are mediated by a large variety of proteins, each with distinct functions that facilitate nutrient transport, immune response, and communication between cells such as neurons.
Advances in instrumentation enable the collection of increasingly large snapshots of tissue with sub-cellular resolution at the cm3 scale, generating 3D and 4D images easily exceeding several terabytes (Figure 1). Their structural and molecular complexity poses major challenges for visualization and analysis. For example, diseases such as Alzheimer's induce structural and molecular changes both at the cellular level and across the entire brain. Once appropriate multispectral images are obtained, algorithms must cope with this tremendous complexity to produce usable morphological and molecular maps that enable exploration, quantification, and modeling.

Proposed Work
The following outlines the proposed approach for representing, building, and using a subcellular tissue atlas (STA). The necessary data structures and their integration will be described first, which will include a Bayesian expression of uncertainty maintained throughout the entire pipeline. The proposed methods for building an atlas will be outlined next, followed by a description of required algorithms that will be adapted or developed. Finally, examples will be provided for tissue atlas navigation, visualization, and analysis.
Representation: The proposed atlas representation will focus on three critical requirements:
- Compact - Studies requiring comprehensive tissue maps will span multiple scales (from micro to macro), requiring dynamic access to large tissue regions at high spatial resolution. With sample sizes of tens of gigabytes per cm3, the proposed representation must take efficient advantage of molecular and structural sparsity.
- Fast analytics - The size and complexity of atlas volumes necessitates that the majority of any analysis be computationally driven. The atlas must support efficient implementations of existing segmentation and analysis tools, such as those proposed in FARSIGHT [12], [13], as well as specialized algorithms proposed in this project.
- Biologically-based queries - Tissue structure is highly heterogeneous and many samples will be impractical to align to predefined coordinate spaces, such as stereotaxic coordinates [14], [15]. Tissue maps must therefore allow queries relative to biological structures, such as cells, microvessels, and neuronal processes.

The subcellular tissue atlas (STA) will integrate two distinct data types:
- Geometric data - Structures with surface-like characteristics will be expressed as parametric manifolds embedded within the three-dimensional atlas. Cells and cell nuclei are representable as closed surfaces with defined borders. Fibrous structures (microvessels and neuronal processes) form interconnected meshworks described as a two-dimensional tube-like surface with graph-based connectivity.
- Mapped molecular data - Molecular distributions encode complex features describing the role, class, or activity of a surface. The distribution of molecules within a cell is defined by an implicit function and assigned to the cell surface (Figure 4). The molecules that lie near the surface of a neuron or microvessel may (depending on the molecule) be assigned to the geometry representing a dendrite or capillary.
Geometric data for closed surfaces, such as cell membranes and the surfaces of nuclei, will be represented using adaptively-sampled genus-0 meshes parameterized in spherical coordinates (θ, φ). Features associated with the surface (e.g., uncertainty) will be encoded using a sparse encoding framework similar to mesh colors [16], which allows GPU acceleration and interpolation for interactive rendering [17]. Any closed surface si(θ, φ) can be thought of as a two-dimensional function that returns a vector containing a 3D position along with any additional interpolated properties.

Network geometry, such as neurons and microvessels, will be represented using an integration of mesh-, curve-, and graph-based strategies. Network connectivity is represented as a graph G = (V, E), where each vertex in V is a fiber connection (i.e., a bifurcation) and each edge ei(t) ∈ E is a parametric curve describing the associated fiber centerline. The graph G therefore has 3D context: every vertex has a 3D position and every edge is a 3D parametric path. Any 1D values, such as fiber radius, are encoded along the associated path and returned as a component of ei(t). The network surface is represented as a mesh, with each component linked to the closest edge in G.
Molecular distributions will be encoded by adapting OpenVDB [18], an Academy Award-winning data structure designed by the movie industry to perform physically-based simulations with sparse implicit functions on infinite grids. At its core, OpenVDB is a B+ tree with additional properties suited for molecular encoding, profiling, and analytics-guided visualization, including (1) constant-time random access, (2) fast expansion, contraction, and merging of grids, (3) a compact representation with low overhead, (4) cache coherence, and (5) compatibility with streaming parallel (i.e., GPU) computing platforms. Atlas components will be hierarchically linked to enable biologically-based queries (Figure 5).
Post-project Outreach and Development

Open Access Data Repository
Our laboratory will commit to maintaining a continuous open-access data repository for distributing data collected throughout the course of this project. This includes a variety of microscopy data sets designed to challenge current visualization paradigms and foster the development of new tools. Data sets will be posted to our existing NAS hardware and made available using peer-to-peer synchronization with Resilio Sync.
The Scientific Exploration and Discovery Workshop - High School and Grade School
A Scientific Exploration and Discovery Workshop will be developed, introducing middle-school and high-school students to interactive exploration using Unreal Engine with VR hardware. Our UH laboratories are active in the Chevron-sponsored College Girls Engineering the Future event, where students imaged samples with fluorescent and confocal microscopes and visualized the results in virtual reality using Amira (FEI) and basic VR tools (Figure 8).
Course and Tutorial Organization
We will organize courses and tutorials for both ACM SIGGRAPH and the IEEE Conference on Visualization. These tutorials will focus on (1) leveraging Unreal Engine for visualization using our proposed framework and (2) providing and promoting open-access data sets collected during the course of this project. Our goal is to foster the development of additional tools that strengthen our framework, making it robust to continuously changing visualization needs. Dr. Childs has extensive experience with course and tutorial development, with over twenty conference tutorials and short courses organized for the French, German, English (twice), and Saudi governments, ranging from half-day courses to week-long courses.
Project Management, Resources, and Personnel Qualifications
Project Manager Expertise
Dr. David Mayerich. Dr. Mayerich's research focuses on imaging and visualization of massive microscopy data. He has developed several high-throughput imaging techniques, such as knife-edge scanning microscopy (KESM) [5] and milling with ultraviolet excitation (MUVE) [7], and has broad expertise with light sheet microscopy (LSM) [24]. His work has been commercialized by Strateos, a Los Angeles based company focused on whole-organ drug efficacy studies. Several whole-brain data sets are publicly available via the Knife-Edge Scanning Microscope Brain Atlas [25]. His laboratory has developed highly parallel algorithms for segmenting and visualizing complex neuronal and microvascular network structures [26], [27], [20], [28].
Dr. Guoning Chen. Dr. Chen is an expert of scientific data visualization, focusing specifically on vector/tensor fields and complex geometry. He has developed and published numerous techniques for analyzing and visualizing physics-based data, including the motion of gas, fluid, and electronic particles [3], [29]–[31]. He has also introduced techniques applicable to field data [2], [4] typically seen in anisotropic materials, such as mechanical parts and organ tissues. Dr. Chen has collaborated with Dr. Mayerich on the effective representation and exploration of complex microvascular networks [28], heavily leveraged in subproject 3. In addition, Dr. Chen is an expert in 3D modeling and mesh generation. His work on geometric modeling has resulted in a number of effective and efficient techniques for modeling city street networks and generating high quality 3D volumetric meshes [32]–[36] for simulation. Mesh generation will be needed in the proposed tasks for the generation of geometric representation of the biomedical objects for rendering using the Unreal Chimera.
Dr. Hank Childs. Hank Childs has two decades of experience developing, deploying, and evangelizing visualization software. He began his career at Lawrence Livermore National Laboratory, where he was the architect of the VisIt project. After contributing over 200,000 lines of code to the project, which is still in active use nearly two decades later, he shifted to deployment and evangelization roles. Ultimately, Hank was the Chief Software Architect of three different multi-institution efforts that utilized VisIt, representing $20M in funding for software development and customer engagement. Hank then shifted to academia, where he has built a research group that currently consists of nine Ph.D. students. During his research career, he has published 100 peer-reviewed articles and received a Department of Energy Early Career Award. Hank continues to be active in visualization software, and is currently contributing code to VTK-m, a many-core visualization library, and Ascent, a flyweight in situ library. Hank is also the Deputy PI for ALPINE, a Department of Energy project for in situ processing, which will receive over $10M in funding over a four-year period.
Collaboration Plan. Both Dr. Mayerich and Dr. Chen have an existing collaboration and are jointly supervising Dr. Govyadinov, who will be primarily responsible for software development. Dr. Childs has an extensive background in software development and deployment, and will be primarily responsible for guiding the release of the final Unreal: Chimera product. All team members are familiar with open-source software deployment, and routine updates will be made available through our publicly accessible Git server (git.stim.ee.uh.edu).
Project Resources
Funding provided for the proposed project by Epic will enable a visualization framework and integration of algorithms with the Unreal Engine. However, our laboratories have existing resources to develop back-end tools that will be passed to our Unreal Engine development team for integration. The Unreal Engine MegaGrant will fund the following personnel resources:
Personnel Funded by the Unreal MegaGrant
Project Management (Unreal MegaGrant) - 1 person month per year for Drs. Guoning Chen and David Mayerich, and one third of a month per year for Dr. Childs. The remainder of their salary is covered by their respective universities and other related research support.
Project Personnel
- 12 months per year to support Dr. Pavel Govyadinov as a senior developer. Pavel has extensive experience in C/C++ development, CUDA, OpenGL, and Unreal Engine for VR applications. He has also presented multiple related papers at the IEEE Conference on Visualization.
- 12 months per year to support two graduate students. One graduate student will work under Dr. Govyadinov's supervision at the University of Houston. The second will work under Dr. Childs' supervision at the University of Oregon.
Personnel from Synergistic Sources
- Summer Research Interns - 1 top-tier undergraduate student will be allocated each summer to develop software for visualizing brain neurovasculature. This student will be selected as part of the Research Experience for Undergraduates (REU) program funded by the NSF I/UCRC BRAIN Center (http://brain.egr.uh.edu/).
- Senior Design Groups - Our laboratory will recruit 1 senior design group every year to develop key components for software deliverables. SD groups consist of 4 senior undergraduate students with expertise in C/C++ programming. Each group will spend 1 year working on a pre-specified deliverable that leverages the proposed software to demonstrate key features of Unreal Engine. SD groups in Y1 and Y2 will focus on general visualization, while the Y3 group will leverage more advanced tools for large-scale neurovascular visualization.
- UH Research Computing Data Core (RCDC) - The RCDC offers on-site courses available to all UH faculty, staff, and students in the areas of high performance computing, visualization, and GPU programming. These resources will be used to train, develop, and test the basic models proposed for this project and train personnel. In addition, Dr. Mayerich teaches a GPU programming course every year that will be available to project personnel.
Available Equipment
- VR Platforms and Workstations - Our laboratories have 1 Oculus Rift and 3 Valve Index VR systems, along with 12 high-performance GPU-enabled workstations (both Windows and Linux) for software development and testing.
- High Resolution Display Wall - Our laboratories have a 3x3 tiled display wall. Each LCD panel is 46" with a native resolution of 1920x1080 and 500 nits of brightness; the image-to-image distance between panels is approximately 3.7 mm. The display wall is powered by a rendering workstation with two Intel Xeon E5-2637 v4 processors (4 cores, 3.5 GHz, 15 MB cache), eight 16 GB DDR4-2133 ECC registered memory modules, two 512 GB SATA III solid state drives (SSDs), and four NVIDIA Quadro M4000 GPUs (8 GB GDDR5 each).
- Imaging Systems - Dr. Mayerich's laboratory houses an array of high-performance three-dimensional and fluorescent and multispectral microscopes that will be used to collect data for this project. This includes two confocal microscopes capable of 3D 100X imaging, a light sheet microscope, and a new platform for acquiring large-scale three-dimensional tissue images [7].
Synergistic Funding Sources
- The National Institutes of Health (NIH) / National Heart, Lung, and Blood Institute (NHLBI) - Dr. Mayerich is the PI for a $3.7M grant provided by NIH/NHLBI to develop a new imaging system and visualization tools to explore embryonic microvascular development by combining optical coherence tomography (OCT) and light sheet microscopy (LSM). This instrumentation will produce aligned multi-modal 4D volumetric data, providing massive amounts of preliminary test data for the proposed Unreal Engine toolkit. This project includes 1 PhD student per year to develop data processing and visualization tools that will be integrated into the proposed project.
- The National Science Foundation (NSF) CAREER Award - Dr. Chen has been awarded an NSF CAREER Award for developing new methods for visualizing turbulent flow. This will provide funding for 1 PhD student per year to develop new data processing and visualization tools for integration into Unreal Engine.
- Department of Energy Exascale Computing Project - Dr. Childs is in year four of a seven-year grant to enable visualization on “exascale” computers (computers capable of 10^18 computations per second). He has a role on two projects: ALPINE, which generates in situ extracts, and VTK-m, which provides a many-core visualization library. These efforts will provide synergy for Subproject 2, both in obtaining extracts and in connecting with domain scientists who would want to use Unreal Chimera to visualize the extracts.
References
[1] S. Cielo, L. Iapichino, J. Günther, C. Federrath, E. Mayer, and M. Wiedemann, “Visualizing the world’s largest turbulence simulation,” arXiv:1910.07850 [astro-ph, physics:physics], Oct. 2019.
[2] G. Chen, K. Mischaikow, R. S. Laramee, and E. Zhang, “Efficient morse decompositions of vector fields,” IEEE Trans Vis Comput Graph, vol. 14, no. 4, pp. 848–862, Aug. 2008 PMID: 18467759.
[3] G. Chen, Q. Deng, A. Szymczak, R. S. Laramee, and E. Zhang, “Morse set classification and hierarchical refinement using Conley index,” IEEE Trans Vis Comput Graph, vol. 18, no. 5, pp. 767–782, May 2012 PMID: 21690641.
[4] G. Chen, D. Palke, L. Zhongzang, H. Yeh, P. Vincent, R. S. Laramee, and E. Zhang, “Asymmetric tensor field visualization for surfaces,” IEEE Trans Vis Comput Graph, vol. 17, no. 12, pp. 1979–1988, Dec. 2011 PMID: 22034315.
[5] D. Mayerich, L. Abbott, and B. McCormick, “Knife-edge scanning microscopy for imaging and reconstruction of three-dimensional anatomical structures of the mouse brain,” J Microsc, vol. 231, no. Pt 1, pp. 134–143, Jul. 2008 PMID: 18638197.
[6] D. Mayerich, J. Kwon, C. Sung, L. Abbott, J. Keyser, and Y. Choe, “Fast macro-scale transmission imaging of microvascular networks using KESM,” Biomed Opt Express, vol. 2, no. 10, pp. 2888–2896, Sep. 2011 PMCID: PMC3191452.
[7] J. Guo, C. Artur, J. L. Eriksen, and D. Mayerich, “Three-Dimensional Microscopy by Milling with Ultraviolet Excitation,” Sci Rep, vol. 9, no. 1, pp. 1–9, Oct. 2019.
[8] “VTK - The Visualization Toolkit.” [Online]. Available: https://vtk.org/.
[9] “ITK - Segmentation & Registration Toolkit.” [Online]. Available: https://itk.org/. [Accessed: 03-Sep-2019].
[10] S. Berisha, S. Chang, S. Saki, D. Daeinejad, Z. He, R. Mankar, and D. Mayerich, “SIproc: an open-source biomedical data processing platform for large hyperspectral images,” Analyst, vol. 142, no. 8, pp. 1350–1357, Apr. 2017 PMCID: PMC5386839.
[11] J. Ahrens, S. Jourdain, P. OLeary, J. Patchett, D. H. Rogers, and M. Petersen, “An Image-Based Approach to Extreme Scale in Situ Visualization and Analysis,” in SC ’14: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2014, pp. 424–434.
[12] B. Roysam, W. Shain, E. Robey, Y. Chen, A. Narayanaswamy, C. L. Tsai, Y. Al-Kofahi, C. Bjornsson, E. Ladi, and P. Herzmark, “The FARSIGHT project: associative 4D/5D image analysis methods for quantifying complex and dynamic biological microenvironments,” Microscopy and Microanalysis, vol. 14, no. S2, pp. 60–61, 2008.
[13] J. Luisi, A. Narayanaswamy, Z. Galbreath, and B. Roysam, “The FARSIGHT trace editor: an open source tool for 3-D inspection and efficient pattern analysis aided editing of automated neuronal reconstructions,” Neuroinformatics, vol. 9, no. 2–3, pp. 305–315, Sep. 2011 PMID: 21487683.
[14] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates: Hard Cover Edition. Elsevier, 2006.
[15] G. Paxinos and K. B. Franklin, Paxinos and Franklin’s the Mouse Brain in Stereotaxic Coordinates. Academic Press, 2019.
[16] C. Yuksel, J. Keyser, and D. H. House, “Mesh colors,” ACM Transactions on Graphics (TOG), vol. 29, no. 2, p. 15, 2010.
[17] C. Yuksel, S. Lefebvre, and M. Tarini, “Rethinking Texture Mapping,” Computer Graphics Forum, vol. 38, no. 2, pp. 535–551, 2019.
[18] K. Museth, J. Lait, J. Johanson, J. Budsberg, R. Henderson, M. Alden, P. Cucka, D. Hill, and A. Pearce, “OpenVDB: An Open-source Data Structure and Toolkit for High-resolution Volumes,” in ACM SIGGRAPH 2013 Courses, New York, NY, USA, 2013, pp. 19:1–19:1.
[19] D. M. Mayerich, L. Abbott, and J. Keyser, “Visualization of cellular and microvascular relationships,” IEEE Trans Vis Comput Graph, vol. 14, no. 6, pp. 1611–1618, Dec. 2008 PMID: 18989017.
[20] P. A. Govyadinov, T. Womack, J. L. Eriksen, G. Chen, and D. Mayerich, “Robust Tracing and Visualization of Heterogeneous Microvascular Networks,” IEEE Trans. Visual. Comput. Graphics, vol. 25, no. 4, pp. 1760–1773, Apr. 2019.
[21] F. Cassot, F. Lauwers, C. Fouard, S. Prohaska, and V. Lauwers-Cances, “A novel three-dimensional computer-assisted method for a quantitative study of microvascular networks of the human cerebral cortex,” Microcirculation, vol. 13, no. 1, pp. 1–18, 2006.
[22] F. Cassot, F. Lauwers, S. Lorthois, P. Puwanarajah, V. Cances-Lauwers, and H. Duvernoy, “Branching patterns for arterioles and venules of the human cerebral cortex,” Brain research, vol. 1313, pp. 62–78, 2010.
[23] S. G. Parker, J. Bigler, A. Dietrich, H. Friedrich, J. Hoberock, D. Luebke, D. McAllister, M. McGuire, K. Morley, and A. Robison, “OptiX: a general purpose ray tracing engine,” ACM Transactions on Graphics (TOG), vol. 29, no. 4, p. 66, 2010.
[24] M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nature Methods, vol. 10, no. 5, p. 413, May 2013.
[25] J. R. Chung, C. Sung, D. Mayerich, J. Kwon, D. E. Miller, T. Huffman, J. Keyser, L. C. Abbott, and Y. Choe, “Multiscale exploration of mouse brain microstructures using the knife-edge scanning microscope brain atlas,” Front Neuroinform, vol. 5, p. 29, 2011 PMCID: PMC3254184.
[26] D. Mayerich and J. Keyser, “Hardware accelerated segmentation of complex volumetric filament networks,” IEEE Trans Vis Comput Graph, vol. 15, no. 4, pp. 670–681, Aug. 2009 PMID: 19423890.
[27] D. Mayerich, C. Bjornsson, J. Taylor, and B. Roysam, “NetMets: software for quantifying and visualizing errors in biological network segmentation,” BMC Bioinformatics, vol. 13 Suppl 8, p. S7, 2012 PMCID: PMC3355337.
[28] P. Govyadinov, T. Womack, J. Eriksen, D. Mayerich, and G. Chen, “Graph Assisted Visualization of Microvascular Networks,” in IEEE Conference on Visualization (Short Paper), Vancouver, Canada, 2019.
[29] M. Berenjkoub, R. O. Monico, R. S. Laramee, and G. Chen, “Visual Analysis of Spatio-temporal Relations of Pairwise Attributes in Unsteady Flow,” IEEE Trans Vis Comput Graph, Aug. 2018 PMID: 30130215.
[30] L. Shi, R. S. Laramee, and G. Chen, “Integral Curve Clustering and Simplification for Flow Visualization: A Comparative Evaluation,” IEEE Trans Vis Comput Graph, Sep. 2019 PMID: 31514143.
[31] L. Zhang, D. Nguyen, D. Thompson, R. Laramee, and G. Chen, “Enhanced vector field visualization via Lagrangian accumulation,” Computers & Graphics, vol. 70, pp. 224–234, Feb. 2018.
[32] X. Gao, D. Panozzo, W. Wang, Z. Deng, and G. Chen, “Robust Structure Simplification for Hex Re-meshing,” ACM Trans. Graph., vol. 36, no. 6, pp. 185:1–185:13, Nov. 2017.
[33] X. Gao, T. Martin, S. Deng, E. Cohen, Z. Deng, and G. Chen, “Structured Volume Decomposition via Generalized Sweeping,” IEEE Trans Vis Comput Graph, vol. 22, no. 7, pp. 1899–1911, 2016 PMID: 26336127.
[34] K. Xu and G. Chen, “Hexahedral Mesh Structure Visualization and Evaluation,” IEEE Trans Vis Comput Graph, Aug. 2018 PMID: 30130218.
[35] K. Xu, X. Gao, and G. Chen, “Hexahedral mesh quality improvement via edge-angle optimization,” Computers & Graphics, vol. 70, pp. 17–27, Feb. 2018.
[36] K. Xu, X. Gao, Z. Deng, and G. Chen, “Hexahedral Meshing With Varying Element Sizes,” Computer Graphics Forum, vol. 36, no. 8, pp. 540–553, 2017.