Research projects

Generalization, Data Quality, and Scale in Composite Bathymetric Data Processing for Automated Digital Nautical Cartography

Contemporary bathymetric data collection techniques are capable of collecting sub-meter resolution data to ensure full seafloor coverage for safe navigation, as well as to support a variety of other scientific uses. Moreover, bathymetry data are becoming increasingly available from growing hydrographic and topo-bathymetric surveying operations, advancements in satellite-derived bathymetry, and the adoption of crowd-sourced bathymetry. Datasets are compiled from these sources and used to update Electronic Navigational Charts (ENCs), the primary medium for visualizing the seafloor for navigation purposes, whose usage is mandatory on vessels regulated by the Safety Of Life At Sea (SOLAS) convention. However, these high-resolution data must be generalized for products at scale, an active research area in automated cartography. Algorithms that provide consistent results while reducing production time and costs are increasingly valuable to organizations operating in time-sensitive environments. This is particularly true of digital nautical cartography, where updates to bathymetry and to the locations of dangers to navigation need to be disseminated as quickly as possible. Therefore, our research focuses on developing cartographic constraint-based generalization algorithms operating on both Digital Surface Model (DSM) and Digital Cartographic Model (DCM) representations of multi-source composite bathymetric data to produce navigationally-ready datasets for use at scale. This research is conducted in collaboration with researchers at the Office of Coast Survey (OCS) of the National Oceanic & Atmospheric Administration (NOAA) and the University of New Hampshire Center for Coastal and Ocean Mapping Joint Hydrographic Center (CCOM/JHC).
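To give a flavor of what a cartographic safety constraint means in practice, the sketch below shows one classic (and deliberately simplified) example: shoal-biased downsampling, where a coarsened depth grid keeps the shallowest sounding in each block so the generalized surface never reports deeper water than the source data. This is an illustrative toy, not the algorithms developed in this project; the function name and grid values are invented for the example.

```python
import numpy as np

def shoal_biased_generalize(depth_grid: np.ndarray, factor: int) -> np.ndarray:
    """Downsample a depth grid by `factor`, keeping the shallowest
    (minimum) depth in each block so the generalized surface never
    shows deeper water than was measured (the safety constraint)."""
    rows, cols = depth_grid.shape
    # Trim so both dimensions divide evenly by the coarsening factor.
    grid = depth_grid[: rows - rows % factor, : cols - cols % factor]
    blocks = grid.reshape(grid.shape[0] // factor, factor,
                          grid.shape[1] // factor, factor)
    return blocks.min(axis=(1, 3))

# A 4x4 grid of depths (meters) coarsened to 2x2; each output cell
# keeps its block's shallowest sounding.
depths = np.array([[10.0, 12.0,  8.0,  9.0],
                   [11.0,  9.5,  7.5,  8.2],
                   [14.0, 13.0,  6.0,  6.5],
                   [15.0, 12.5,  5.8,  6.1]])
coarse = shoal_biased_generalize(depths, 2)
# coarse == [[9.5, 7.5], [12.5, 5.8]]
```

A min (shoal-biased) aggregation is the simplest way to honor the navigational safety constraint; the research described above tackles the much harder problem of doing this while also satisfying legibility and morphology constraints on DSM and DCM representations.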

Modeling and analysis of very large terrains reconstructed from LiDAR point clouds

Available software tools for terrain reconstruction and analysis from LiDAR (Light Detection and Ranging) data offer a variety of processing algorithms, which almost always require converting the original point cloud into a raster model. This conversion can seriously affect data analysis, resulting in loss of information or in raster images too large to be processed on a local machine. Our approach instead works directly on the scattered point cloud, which requires building, encoding, and processing an unstructured triangle mesh connecting the points. Existing tools that operate on triangle meshes generated from LiDAR data can only handle meshes of limited size, and this lack of scalable data structures has greatly limited their applicability to the very large point clouds currently available, which range from 0.2 to 60 billion points. In our research, we have developed a family of new data structures, the Terrain trees, for big triangle meshes, based on the Stellar decomposition model, and we have demonstrated their efficiency and effectiveness for spatial and connectivity queries and for morphological analysis of very large triangulated terrains on commodity hardware. Our representations use spatial indexes to efficiently generate local, application-dependent combinatorial data structures at runtime; as a result, they are extremely compact and well suited to distributed computation.
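The key idea of generating local topology at runtime, rather than storing global adjacency for the whole mesh, can be sketched in a few lines. The toy below stores only a flat triangle list (as a Terrain tree would inside one leaf block of its spatial index) and reconstructs the vertex-to-incident-triangle relation on demand for a requested set of vertices. This is a minimal illustration of the general principle, not the actual Terrain trees implementation; the function and variable names are invented for the example.

```python
from collections import defaultdict

def local_vertex_triangle_relation(triangles, vertex_ids):
    """Build the vertex -> incident-triangles relation only for the
    vertices in `vertex_ids`. The global mesh stays encoded as a flat
    triangle list; connectivity is materialized locally and can be
    discarded after use, keeping the overall representation compact."""
    wanted = set(vertex_ids)
    vt = defaultdict(list)
    for t_idx, tri in enumerate(triangles):
        for v in tri:
            if v in wanted:
                vt[v].append(t_idx)
    return dict(vt)

# A small fan of triangles around vertex 0 (indices into a vertex array).
tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (1, 2, 5)]
print(local_vertex_triangle_relation(tris, [0]))
# {0: [0, 1, 2]}
```

Because each leaf's local relation is built independently from the globally shared triangle list, queries over different regions can run in parallel, which is what makes this style of representation well suited to distributed computation.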