GeoVisual Search runs on two different imagery catalogs, demonstrating search at multiple resolutions and at both national and global scale. You can explore each of these below.

What is visual similarity?

Humans can quickly identify images that are similar, even if they have different colors, orientations, and resolutions.

Center Pivot Irrigation

Airport Runways

Wind Turbines

How do computers detect similarity?

While humans immediately recognize visual similarity, they cannot analyze the millions of individual scenes that satellites generate daily. We built GeoVisual Search to do just that.

  • Tile source maps

    Divide the globe into a grid of tiles. We use multiple overlapping grids, so features that would be split across tile boundaries in one grid are captured whole in another.

  • Extract features

    For each image tile, use a neural network, a deep learning model loosely inspired by the structure of the brain, to extract features (e.g., shapes, colors, textures). Features can be drawn from both the visible and non-visible spectrum.

  • Query features

    Given a query image, calculate a “visual distance” between the query features and the features extracted from each image in the comparison set.

  • Match results

    The tiles with the smallest “visual distance” are the most visually similar to the query tile.
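The query and match steps above amount to a nearest-neighbor search over feature vectors. Here is a minimal sketch of that idea, assuming the features have already been extracted into one vector per tile (the feature extractor itself, and the vector dimensions, are placeholders, not the system's actual implementation):

```python
import numpy as np

def visual_distance(query, features):
    """Euclidean distance between a query feature vector and each row
    of a feature matrix (one row per tile)."""
    return np.linalg.norm(features - query, axis=1)

def most_similar(query, features, k=3):
    """Indices of the k tiles with the smallest visual distance."""
    return np.argsort(visual_distance(query, features))[:k]

# Toy example: 5 tiles, each summarized by a 4-dimensional feature vector.
rng = np.random.default_rng(0)
features = rng.random((5, 4))

# Query with a vector close to tile 2; tile 2 should rank first.
query = features[2] + 0.01
print(most_similar(query, features, k=2))
```

A production system would use an approximate nearest-neighbor index rather than this brute-force scan, since comparing a query against billions of tiles one by one would be far too slow.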

Search the globe

Descartes Labs created two base-layer map composites for GeoVisual Search, using Landsat 8 for global coverage and NAIP for the United States.

Aerial Imagery (NAIP)

Resolution: 1 meter per pixel

Coverage: United States

Number of pixels processed: 31.5 trillion

Number of tiles processed: 1.9 billion


Landsat 8

Resolution: 20 meters per pixel

Coverage: Global

Number of pixels processed: 3.4 trillion

Number of tiles processed: 205 million
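As a rough consistency check on the totals above (my arithmetic, not a figure from the source), dividing pixels by tiles gives the average tile size, which for both catalogs comes out near 128 × 128 pixels:

```python
# Average pixels per tile implied by the catalog totals.
naip_px_per_tile = 31.5e12 / 1.9e9       # NAIP: ~16,600 pixels per tile
landsat_px_per_tile = 3.4e12 / 205e6     # Landsat 8: ~16,600 pixels per tile

# Both are close to 128 * 128 = 16,384, i.e. roughly 128x128-pixel tiles.
print(round(naip_px_per_tile), round(landsat_px_per_tile))
```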
