Contemporary bathymetric data collection techniques can acquire sub-meter resolution data, ensuring full seafloor coverage for safe navigation and supporting a variety of other scientific uses. Moreover, bathymetry data are becoming increasingly available from growing hydrographic and topo-bathymetric surveying operations, advances in satellite-derived bathymetry, and the adoption of crowd-sourced bathymetry. Datasets compiled from these sources are used to update Electronic Navigational Charts (ENCs), the primary medium for visualizing the seafloor for navigation purposes, whose use is mandatory on vessels regulated by the Safety Of Life At Sea (SOLAS) convention. However, these high-resolution data must be generalized for products at scale, an active research area in automated cartography. Algorithms that provide consistent results while reducing production time and costs are increasingly valuable to organizations operating in time-sensitive environments. This is particularly true in digital nautical cartography, where updates to bathymetry and to the locations of dangers to navigation must be disseminated as quickly as possible. Therefore, our research focuses on developing cartographic constraint-based generalization algorithms that operate on both Digital Surface Model (DSM) and Digital Cartographic Model (DCM) representations of multi-source composite bathymetric data to produce navigation-ready datasets for use at scale. This research is conducted in collaboration with researchers at the Office of Coast Survey (OCS) of the National Oceanic & Atmospheric Administration (NOAA) and the University of New Hampshire Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC).
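A central cartographic constraint in this setting is safety: a generalized product must never portray the seafloor as deeper than the source data indicate. As a minimal illustration of that constraint (not NOAA's production algorithm; the function and variable names here are hypothetical), a shoal-biased coarsening of a gridded bathymetric surface keeps the shallowest sounding in each block:

```python
import numpy as np

def shoal_biased_downsample(depths: np.ndarray, factor: int) -> np.ndarray:
    """Coarsen a bathymetric grid by keeping the shallowest depth per block.

    Depths are positive-down (meters below datum), so the shallowest
    value in each block is the minimum. Keeping it guarantees the
    generalized surface never reads deeper than the source data,
    the key safety constraint in nautical cartography.
    """
    rows, cols = depths.shape
    # Trim so the grid divides evenly into factor x factor blocks.
    r, c = rows - rows % factor, cols - cols % factor
    blocks = depths[:r, :c].reshape(r // factor, factor, c // factor, factor)
    return blocks.min(axis=(1, 3))

# 4x4 grid coarsened to 2x2: each output cell is the shallowest source cell.
src = np.array([[10.0, 12.0,  9.0,  9.5],
                [11.0,  8.0, 10.0, 11.0],
                [ 7.0,  7.5, 13.0, 12.5],
                [ 7.2,  6.9, 12.0, 14.0]])
gen = shoal_biased_downsample(src, 2)
# gen == [[8.0, 9.0], [6.9, 12.0]]
```

Real generalization must balance this safety constraint against legibility and morphology-preservation constraints, which is what makes the problem an active research area rather than a simple resampling step.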
The objective of this research is to develop new approaches for tracking forest characteristics in support of forest analysis and biomass estimation. Specifically, identifying the individual trees composing a forest is crucial for characterizing forests and forecasting their changes. LiDAR technology provides an efficient way of performing forest inventory, thanks to the 3D resolution of the data and their high accuracy and cost efficiency over large regions. This project demonstrates how to fully exploit topology-based concepts and approaches on forestry LiDAR point clouds to extract individual tree structures automatically. Current techniques for individual tree segmentation require tuning a large number of parameters and intense user interaction, and they are designed to work only with specific types of forests. Our goal is to develop new topology-based techniques for point clouds, from both airborne and terrestrial LiDAR acquisitions, that are general, parameter-free and scalable. By moving from single-date LiDAR point clouds to multi-date point clouds, scanned from the same forest at different times, we plan to investigate the robustness of tree mapping methods to help analyze and segment LiDAR point clouds over time.
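One topology-based idea behind parameter-free segmentation is topological persistence: prominent maxima of the canopy correspond to treetops, while low-persistence maxima are noise. A minimal 1-D sketch of persistence pairing via union-find (illustrative only; the project operates on 3D point clouds, and all names here are hypothetical):

```python
def maxima_persistence(heights):
    """Topological persistence of local maxima in a 1-D height profile.

    Process samples from highest to lowest; each new sample either
    starts a component (a local maximum is born) or merges two
    components, at which point the lower-born maximum dies. The
    persistence (birth - death) of a maximum measures its prominence:
    real treetops persist, noise bumps do not.
    """
    order = sorted(range(len(heights)), key=lambda i: -heights[i])
    parent = {}   # union-find over already-processed indices
    birth = {}    # index -> height at which that peak was born
    pers = {}     # peak index -> persistence

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = heights[i]
        for j in (i - 1, i + 1):
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # The younger (lower-born) peak dies at heights[i].
                    lo, hi = sorted((ri, rj), key=lambda r: birth[r])
                    pers[lo] = birth[lo] - heights[i]
                    parent[lo] = hi
    pers[find(order[0])] = float("inf")   # global maximum never dies
    return {i: p for i, p in pers.items() if p > 0}

# Two peaks: a tall one at index 1 and a smaller one at index 3.
result = maxima_persistence([0, 3, 1, 2, 0])
# result == {1: inf, 3: 1}: both peaks are prominent, everything else is not
```

A persistence threshold derived from the data itself (e.g., the largest gap in the sorted persistence values) is one way such methods avoid user-tuned parameters.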
The use of multifield data (i.e., data characterized by multiple scalar functions) is becoming increasingly common across applications. Multifield data are notoriously difficult to analyze and visualize, since their analysis combines the challenges of working with two- or three-dimensional domains with those of dealing with a high-dimensional codomain, where color maps are ineffective. Thus, the ability to extract features describing the essential properties of such data becomes crucial. The aim of this project is to develop innovative tools for extracting and visualizing topological features that describe a multifield. Many aspects of topology-based analysis of multifield data are still unexplored, both theoretically and in practice. The first challenge addressed in this project is to develop theoretically grounded tools for the analysis of multifield data, based on topological descriptors rooted in multi-persistent homology. The second challenge is to evaluate the significance of such tools in the context of applications. We focus specifically on environmental applications, where we plan to use topological features to segment multifield data for forest monitoring and to identify regions of non-correlation in time-varying sequences of multifield oceanic data.
Multi-parameter persistent (multi-persistent) homology extends persistent homology, a multiscale approach to homological shape analysis, to the case where several scalar functions are associated with the data (multifield data). The objective of our research on multi-persistent homology is to devise algorithms for computing it efficiently on real-world data sets. This is a challenging problem, since very few results on multi-persistent homology exist in the literature, from either a computational or a theoretical point of view. We have proposed a preprocessing approach that computes a Morse-like discrete vector field compatible with the multifield. The algorithm is well suited to both simplicial complexes and regular grids, scales well as the size of the input complex increases, and lends itself to parallel implementation. Moreover, we have shown that this preprocessing yields an improvement of at least one order of magnitude in the computation of multi-persistent homology.
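For concreteness, multi-persistence is computed over a multifiltration in which each simplex enters at the component-wise maximum of its vertices' function values, so every face appears no later than the simplex itself. A minimal sketch of this grading step for two vertex functions (names are hypothetical; the discrete vector field preprocessing described above would reduce the complex before such a filtration is analyzed):

```python
def bifiltration_grades(simplices, f, g):
    """Entry grade of each simplex in the bifiltration induced by two
    vertex functions f and g (a two-component multifield).

    A simplex enters the sublevel set at the component-wise maximum
    of its vertices' (f, g) values, which guarantees the filtration
    respects face inclusion.
    """
    return {s: (max(f[v] for v in s), max(g[v] for v in s))
            for s in simplices}

# A single triangle (0, 1, 2) with all of its faces, and two scalar fields.
tri = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
f = {0: 0.0, 1: 1.0, 2: 2.0}   # first field, e.g. one scalar measurement
g = {0: 2.0, 1: 0.0, 2: 1.0}   # second field, e.g. another measurement
grades = bifiltration_grades(tri, f, g)
# the edge (0, 1) enters at grade (1.0, 2.0); the triangle at (2.0, 2.0)
```

The computational difficulty stems from the grades being only partially ordered: unlike the single-function case, there is no single linear sequence of sublevel sets, which is precisely why efficient preprocessing of the input complex matters.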
Available software tools for terrain reconstruction and analysis from LiDAR (Light Detection and Ranging) data provide a variety of algorithms for processing such data, which almost always require converting the original point cloud into a raster model. This conversion can seriously affect data analysis, resulting in loss of information or in raster images too large to be processed on a local machine. Our approach deals directly with the scattered point cloud, so an unstructured triangle mesh connecting the points must be built, encoded and processed for data analysis. Existing tools that work on triangle meshes generated from LiDAR data can only handle meshes of limited size; the lack of scalable data structures for triangle meshes greatly limits their applicability to the very large point clouds currently available, which range from 0.2 to 60 billion points. In our research, we have developed a family of new data structures for big triangle meshes, the Terrain trees, based on the Stellar decomposition model, and we have demonstrated their efficiency and effectiveness for spatial and connectivity queries and for morphological analysis of very large triangulated terrains on commodity hardware. Our representations use spatial indexes to efficiently generate local application-dependent combinatorial data structures at runtime; thus, they are extremely compact and well suited for distributed computation.
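The runtime-generation idea can be sketched as follows: vertices are clustered by a spatial index, triangles are attached to the leaves containing their vertices, and local connectivity relations such as vertex stars are rebuilt on demand and then discarded. This minimal sketch uses a flat grid in place of the actual hierarchical index, and all names are hypothetical:

```python
from collections import defaultdict

def build_leaf_index(vertices, triangles, cell_size):
    """Assign each vertex to a square leaf cell and each triangle to
    every leaf containing one of its vertices (a flat-grid stand-in
    for the hierarchical spatial index of a Terrain tree).
    """
    vert_leaf = {}
    leaf_tris = defaultdict(set)
    for vi, (x, y) in enumerate(vertices):
        vert_leaf[vi] = (int(x // cell_size), int(y // cell_size))
    for ti, tri in enumerate(triangles):
        for vi in tri:
            leaf_tris[vert_leaf[vi]].add(ti)
    return vert_leaf, leaf_tris

def local_vertex_stars(leaf, vert_leaf, leaf_tris, triangles):
    """Reconstruct, at runtime, the vertex-to-triangle (star) relation
    restricted to one leaf. Nothing is stored globally: the local
    connectivity is generated on demand and can be discarded after use.
    """
    stars = defaultdict(list)
    for ti in leaf_tris[leaf]:
        for vi in triangles[ti]:
            if vert_leaf[vi] == leaf:
                stars[vi].append(ti)
    return stars

verts = [(0.1, 0.1), (0.9, 0.2), (1.5, 0.5), (0.4, 0.8)]
tris = [(0, 1, 3), (1, 2, 3)]
vl, lt = build_leaf_index(verts, tris, cell_size=1.0)
stars = local_vertex_stars((0, 0), vl, lt, tris)
# vertex 1 lies in leaf (0, 0) and belongs to both triangles
```

Because each leaf can be expanded and processed independently, the same scheme supports out-of-core processing and distribution of leaves across machines.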
Efficient mesh data structures play a fundamental role in a broad range of mesh processing applications in computer graphics, geometric modeling, scientific visualization, geospatial data science and finite element analysis. Although simple problems can easily be modeled on small low-dimensional meshes, phenomena of interest may occur only on much larger meshes and in higher dimensions, as with simplicial complexes describing the shape of high-dimensional point clouds. In our research, we have developed a new data model for meshes and simplicial complexes, called a Stellar decomposition, which combines the encoding of minimal mesh connectivity information with a clustering mechanism on the vertices and cells of the mesh. Unlike combinatorial data structures, which explicitly encode the connectivity among the cells of the mesh, this general approach has been shown to support scalability with size and dimension and efficient processing of fundamental connectivity queries in a distributed fashion. Based on this model, we have developed new efficient representations for tetrahedral meshes endowed with a scalar field, for the analysis and visualization of 3D scalar fields, and for arbitrary simplicial complexes. We have also devised a new, highly efficient decimation approach based on a Stellar decomposition, which simplifies large simplicial complexes while preserving their homological properties.
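The "Stellar" in Stellar decomposition refers to the star of a vertex, the set of cells incident to it, which is the minimal connectivity such an encoding keeps per vertex; richer relations, such as the vertex link, are then derived locally on demand. A dimension-independent sketch of that derivation (hypothetical names, not the actual encoding):

```python
def vertex_star(v, top_simplices):
    """Top simplices incident to vertex v: its star. Each simplex is a
    tuple of vertex indices, so this works in any dimension."""
    return [s for s in top_simplices if v in s]

def vertex_link(v, top_simplices):
    """Faces opposite v in its star. In a clustered encoding this kind
    of query is answered inside one cluster, without global adjacency
    tables."""
    return sorted(tuple(u for u in s if u != v)
                  for s in vertex_star(v, top_simplices))

# A 2D complex: three triangles fanning around vertex 0.
tops = [(0, 1, 2), (0, 2, 3), (0, 3, 4)]
link0 = vertex_link(0, tops)
# link0 == [(1, 2), (2, 3), (3, 4)]: the chain of edges around vertex 0
```

Keeping only stars per cluster and deriving everything else locally is what lets the representation stay compact as both the size and the dimension of the complex grow.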