Understanding The Macroscope Initiative And GeoML – Forbes

How can high volumes of data be harnessed on a planetary scale to discover spatial and temporal patterns that escape human perception? The convergence of technologies such as LIDAR and machine learning is enabling the creation of macroscopes, which have many applications in monitoring and risk analysis for enterprises and governments.

Microscopes have been around for centuries as tools that allow individuals to visualize and study phenomena too small to be perceived by the human eye. Macroscopes can be thought of as carrying out the opposite function: they are systems designed to uncover spatial and temporal patterns that are too large or too slow for humans to perceive. To function, they require both the ability to gather planetary-scale information over specified periods of time and the compute technologies that can handle such data and provide interactive visualization. Macroscopes are similar to geographic information systems, but also incorporate other multimedia and ML-based tools.

Dr. Mike Flaxman, Spatial Data Science Practice Lead at OmniSci

In an upcoming Data for AI virtual event with OmniSci, Dr. Mike Flaxman, Spatial Data Science Practice Lead, and Abhishek Damera, Data Scientist, will give a presentation on building planetary geoML and the macroscope initiative. OmniSci is an accelerated analytics platform that combines data science, machine learning, and GPU computing to query and visualize big data. It provides solutions for visual exploration of data that can aid in monitoring and forecasting different kinds of conditions across large geospatial areas.

The Convergence of Data and Technologies

In a world where the amount and importance of data continue to grow exponentially, it is increasingly important for organizations to be able to harness that data. The fact that data now flows digitally changes how we collect and integrate it from many different sources across varying formats. Because of this, getting data from its raw condition to an analysis-ready state, and then actually performing the analysis, can be challenging and often requires very complex pipelines. Traditional software approaches generally do not scale well, so teams are increasingly looking to machine learning algorithms and pipelines to perform tasks such as feature classification, feature extraction, and condition monitoring. This is why companies like OmniSci are applying ML as part of a larger macroscope pipeline to provide analytics methods for applications such as power line risk analysis and naval intelligence.

One way OmniSci is using its technology is in monitoring power line vegetation by district at a national level in Portugal. Partnering with Tesselo, the company combines imagery and LIDAR technologies to build a more detailed and temporally flexible portrait of land cover that can be updated weekly. Using stratified sampling for ML and GPU analytics for real-time integration, they are able to extract and render billions of data points from sample sites for vegetation surrounding transmission lines.
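The stratified sampling step mentioned above can be sketched in plain Python. The class labels, point format, and per-class sample size here are illustrative assumptions, not details of the actual Tesselo/OmniSci pipeline; the point is only that drawing a fixed quota from each land-cover class keeps rare classes represented in the training set.

```python
import random
from collections import defaultdict

def stratified_sample(points, label_fn, per_class, seed=42):
    """Draw up to `per_class` points from each land-cover class.

    `points` is any iterable of observations; `label_fn` maps an
    observation to its class label (e.g. 'tree', 'shrub').
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for p in points:
        by_class[label_fn(p)].append(p)
    sample = []
    for label, members in by_class.items():
        k = min(per_class, len(members))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical lidar returns near a corridor: (x, y, land-cover class)
points = [(i, i % 7, "tree" if i % 3 == 0 else "shrub") for i in range(1000)]
training = stratified_sample(points, label_fn=lambda p: p[2], per_class=50)
# 50 'tree' points and 50 'shrub' points, even though shrubs dominate
```

A balanced sample like this is what lets a classifier trained on a sliver of the data generalize across billions of remaining points.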

Large-scale projects such as this one most often share two requirements: extremely high volumes of data to provide accurate representations of specific geographic locations, and machine learning for data classification and continuous monitoring. OmniSci aims to address how, on a technical level, these two requirements can be integrated in a manner that is dynamic, fast, and efficient. The OmniSci software is a platform consisting of three layers, each of which can be used independently or combined. The first layer, OmniSci DB, is a SQL database that provides fast queries and embedded ML. The middle component, the Render Engine, provides server-side rendering that acts similarly to a map server and can be combined with the database layer to render query results as images. The final layer, OmniSci Immerse, is an interactive front end that allows the user to explore charts and data and issue queries to the backend. Together, the OmniSci ecosystem can take in data from many different sources and formats and talk to other SQL databases through well-established protocols. Data scientists can use traditional data science tools alongside it, making the information easy to analyze. OmniSci's solution centers on the notion of moving the code to the data rather than the data to the code.
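The "move the code to the data" principle can be illustrated in miniature, entirely outside any real database: instead of exporting rows to the client and filtering there, the computation is pushed to where the rows stream past, keeping only a running aggregate. The function and data below are hypothetical, not part of the OmniSci API.

```python
def pushdown_mean(rows, predicate, value_fn):
    """Aggregate at the 'data' side: stream rows once, keep only a
    running sum and count, and never materialize a filtered copy.
    This mirrors, in miniature, why shipping a query to the database
    beats exporting billions of rows to a client tool."""
    total, count = 0.0, 0
    for row in rows:
        if predicate(row):
            total += value_fn(row)
            count += 1
    return total / count if count else None

# Hypothetical sensor rows: (district, vegetation_height_m)
rows = (("lisbon", h / 10) for h in range(100))
avg = pushdown_mean(rows, lambda r: r[0] == "lisbon", lambda r: r[1])
# avg is approximately 4.95; only two scalars ever lived in memory
```

At billions of rows, the difference between this streaming pass and materializing a filtered copy is the difference between an interactive query and an overnight export.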

Case Study in Fire-Transmission Line Risk Analysis

A specific case study for OmniSci Immerse demonstrates the ability to perform fire risk analysis for transmission lines. Growing vegetation can pose high risks to power lines for companies such as PG&E, and assessing changing risks accurately can be inefficient and challenging. By combining imagery and LIDAR data, OmniSci provides a better way to map out the physical structures of different geographic areas, such as Northern California, to analyze risk without needing on-site visits. OmniSci's platform combines three factors: physical structure, vegetation health over time, and varying wind speeds over space, to determine strike-tree fire risk. It addresses both scale and detail, allowing utility companies to determine appropriate actions through continuous monitoring.
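One simple way to combine three factors like those above is a weighted score over normalized inputs. The weights, scales, and function below are illustrative assumptions for exposition, not OmniSci's actual risk model.

```python
def strike_tree_risk(structure_proximity, vegetation_dryness, wind_speed_ms,
                     weights=(0.4, 0.35, 0.25)):
    """Combine three normalized factors into a single 0-1 risk score.

    structure_proximity: 1.0 = vegetation touching the line, 0.0 = far away
    vegetation_dryness:  1.0 = fully desiccated, 0.0 = lush and healthy
    wind_speed_ms:       measured wind speed, capped at 30 m/s for scaling
    """
    wind = min(wind_speed_ms, 30.0) / 30.0
    w_prox, w_dry, w_wind = weights
    return (w_prox * structure_proximity
            + w_dry * vegetation_dryness
            + w_wind * wind)

# A dry tree close to the line during a 24 m/s wind event:
score = strike_tree_risk(structure_proximity=0.9,
                         vegetation_dryness=0.8,
                         wind_speed_ms=24.0)
# score is approximately 0.84 on a 0-1 scale
```

Even a toy model like this shows why all three inputs matter: proximity and dryness change slowly and come from LIDAR and imagery, while wind varies hour to hour, so the score must be recomputed continuously across the whole network.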

In addition to the transmission line fire risk analysis example, there are many other use cases for macroscope technologies and methods. OmniSci provides a way to perform interactive analyses on multi-billion-row datasets, along with efficient methods for critical tasks such as anomaly detection. To learn more about the technology behind OmniSci's solutions as well as the potential use cases, join the Data for AI community for the upcoming virtual event.
