Descartes Labs is excited to release GeoVisual Search. We’ve used the power of our geospatial platform to process public and commercial satellite imagery, detect visual similarities between scenes, and apply machine learning to recognize different types of objects across the globe.
Humans can quickly identify images that are similar, even if they have different colors, orientations, and resolutions.
[Image: Center Pivot Irrigation]
While humans immediately recognize visual similarity, they cannot analyze the millions of individual scenes that satellites generate daily. We built GeoVisual Search to do just that.
Tile source maps

1. Divide the globe into a grid of tiles. We use multiple, overlapping grids to capture features that would otherwise be split across tile boundaries.
2. For each image tile, use a neural network to extract features (e.g., shapes, colors, textures). Features can include both the visible and non-visible spectrum.
3. Given a query image, calculate a "visual distance" between the query's features and the features extracted from each image in the comparison set. The tiles with the smallest visual distance are the most visually similar to the query tile.
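The search step above amounts to a nearest-neighbor lookup in feature space. Here is a minimal sketch, assuming 512-dimensional descriptors and Euclidean distance as the "visual distance" (the post specifies neither the descriptor size nor the metric, and the data here is synthetic):

```python
import numpy as np

# Hypothetical feature matrix: one 512-d descriptor per map tile.
# Dimension, tile count, and values are illustrative only.
rng = np.random.default_rng(0)
tile_features = rng.normal(size=(10_000, 512)).astype(np.float32)

def visual_search(query, features, k=5):
    """Return indices of the k tiles closest to `query` in feature space.

    "Visual distance" is taken here to be Euclidean distance between
    descriptors; the metric actually used by GeoVisual Search is not
    stated in the post.
    """
    dists = np.linalg.norm(features - query, axis=1)
    return np.argsort(dists)[:k]

query = tile_features[42]            # pretend tile 42 is the query image
nearest = visual_search(query, tile_features)
# The query tile itself has distance 0, so it ranks first.
assert nearest[0] == 42
```

At this scale a brute-force scan is fine; over billions of tiles, an approximate nearest-neighbor index would stand in for the `argsort`.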
We are launching GeoVisual Search on two different imagery catalogs, showcasing multiple resolutions and both national- and global-scale search. You can explore each of these today.
Aerial Imagery (NAIP)
Resolution: 1 meter per pixel
Coverage: United States
Number of pixels processed: 31.5 trillion
Number of tiles processed: 1.9 billion
Resolution: 20 meters per pixel
Number of pixels processed: 3.4 trillion
Number of tiles processed: 205 million
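As a back-of-envelope check, the pixel and tile counts above are mutually consistent: for both catalogs they work out to tiles on the order of 128×128 pixels. A quick calculation (treating the counts as raw per-tile totals, which the overlapping grids make only approximate):

```python
# Pixels and tiles processed, from the figures quoted above.
catalogs = {
    "NAIP (1 m/px)":   (31.5e12, 1.9e9),
    "20 m/px catalog": (3.4e12, 205e6),
}

for name, (pixels, tiles) in catalogs.items():
    px_per_tile = pixels / tiles        # ~16,600 pixels per tile
    side = px_per_tile ** 0.5           # ~129 -> roughly 128x128-px tiles
    print(f"{name}: ~{side:.0f} px per tile side")
```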