Project Shackleton: Real-Time Routing with Satellite Imagery

Adam Van Etten
Geodesic

--

When standard navigation services (e.g., Google Maps) are down and time is of the essence, how might one evacuate from (or position aid to) a hazardous locale? In this post we explore one option, which we call Project Shackleton. As we’ve noted before, in a disaster scenario (be it an earthquake or an invasion) where communications are unreliable, overhead imagery often provides the first glimpse into what is happening on the ground. Accordingly, Project Shackleton combines satellite/aerial imagery with advanced computer vision and graph theory analytics. These predictions and analytics are incorporated into an interactive dashboard for computing a number of evacuation and routing scenarios. In this post, we explore these scenarios and provide details about the codebase that we proudly released as open source last week at the FOSS4G conference.

An explainer video (below) is also available for those who would like to see the code in action.

1. Motivation

Our previous post covering the Diet Hadrade codebase delved into the motivation for this project. Since Shackleton extends Diet Hadrade, much of the motivation is the same. Readers already familiar with the Diet Hadrade post should feel free to skip to Section 2; otherwise, read on.

In a disaster scenario where communications are unreliable, overhead imagery often provides the first glimpse into what is happening on the ground, so analytics with such imagery can prove very valuable. Specifically, the rapid extraction of both vehicles and road networks from overhead imagery allows a host of interesting problems to be tackled, such as congestion mitigation, optimized logistics, evacuation routing, etc.

A reliable proxy for human population density is critical for effective response to natural disasters and humanitarian crises. Automobiles provide such a proxy. People tend to stay near their cars, so knowledge of where cars are located in real-time provides value in disaster response scenarios. In this project, we deploy the YOLTv5 codebase (which is built atop YOLOv5) to rapidly identify and geolocate vehicles over large areas. Geolocations of all the vehicles in an area allow responders to prioritize response areas.
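
As a rough illustration of the geolocation step (a sketch only, not the actual YOLTv5 pipeline), pixel-space detection centroids can be mapped to geographic coordinates via the image’s affine transform; the detection values below are hypothetical.

import rasterio
from rasterio.transform import xy

# Hypothetical pixel-space (row, col) centroids of detected vehicle bounding boxes.
detections_px = [(1204.5, 877.0), (3310.2, 2045.8)]

with rasterio.open("data/test_imagery/test1_realcog_clip.tif") as src:
    rows = [r for r, _ in detections_px]
    cols = [c for _, c in detections_px]
    xs, ys = xy(src.transform, rows, cols)  # pixel centers -> map coordinates
    print(src.crs, list(zip(xs, ys)))       # CRS and geolocated vehicle centroids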

Yet vehicle detections really come into their own when combined with road network data. We use the CRESI framework to extract road networks with travel time estimates, thus permitting optimized routing. Since CRESI extracts roads from imagery alone, flooded areas or obstructed roadways will sever the extracted road graph; this is crucial for post-disaster scenarios where existing road maps may be out of date and the route suggested by navigation services may be impassable or hazardous.

Placing the detected vehicles on the road graph enables a host of graph theory analytics to be employed (congestion, evacuation, intersection centrality, etc.). Of particular note, the test city selected below (Dar es Salaam) is not represented in any of the training data for either CRESI or YOLTv5. The implication is that this methodology is quite robust and can be applied immediately to unseen geographies whenever a new need may arise.
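
To make the graph side concrete, here is a minimal NetworkX sketch (illustrative only, not the Shackleton code itself) of how an obstructed segment severs an imagery-derived road graph and how intersection centrality can then be ranked; the graph and travel times are hypothetical.

import networkx as nx

# Toy road graph: nodes are intersections, edge weights are travel times (minutes).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 2.0), ("B", "C", 3.0), ("C", "D", 1.5),
    ("B", "D", 4.0), ("D", "E", 2.5),
], weight="travel_time")

# A flooded or obstructed segment simply never makes it into the imagery-derived
# graph (or is removed), severing routes that a stale road map would still show.
G.remove_edge("B", "C")

# Rank intersections by travel-time-weighted betweenness centrality to flag
# critical nodes whose loss would most disrupt routing.
centrality = nx.betweenness_centrality(G, weight="travel_time")
print(sorted(centrality.items(), key=lambda kv: -kv[1]))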

2. Data and Deep Learning Inference

We use the SpaceNet 5 image of Dar es Salaam as our test imagery. Advanced computer vision algorithms are best executed on a GPU, and we utilize the freely available Amazon SageMaker Studio Lab for deep learning inference. Setting up a Studio Lab environment to execute both CRESI and YOLTv5 is quite easy, and inference takes ~3 minutes for our 12 square kilometer test region. Precise steps are detailed in this notebook.

3. Shackleton Dashboard

The Shackleton Dashboard creates a Bokeh server that displays the data and connects back to underlying Python libraries (such as NetworkX, OSMnx, and scikit-learn). We also spin up a tile server courtesy of localtileserver to visualize our satellite imagery.
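
For orientation, the sketch below shows roughly how the dashboard’s three inputs could be loaded and the imagery served as tiles; the file paths mirror the command below, and the actual wiring inside Shackleton may differ.

import pickle
import geopandas as gpd
from localtileserver import TileClient

# Road graph with travel-time estimates (CRESI output, a pickled NetworkX graph).
with open("results/test1/cresi/graphs_speed/test1_cog_clip_3857.gpickle", "rb") as f:
    road_graph = pickle.load(f)

# Geolocated vehicle detections (YOLTv5 output).
vehicles = gpd.read_file("results/test1/yoltv5/geojsons_geo_0p3/test1_cog_clip_3857.geojson")

# Serve the satellite image as map tiles for the dashboard background.
tiles = TileClient("data/test_imagery/test1_realcog_clip.tif")

print(road_graph.number_of_nodes(), len(vehicles))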

Simply execute the following command to fire up the dashboard:

test_im_dir=shackleton_dir/data/test_imagery
bokeh serve --show src/shackleton/shackleton_dashboard.py \
--args $test_im_dir/test1_realcog_clip.tif \
results/test1/yoltv5/geojsons_geo_0p3/test1_cog_clip_3857.geojson \
results/test1/cresi/graphs_speed/test1_cog_clip_3857.gpickle

This will invoke the interactive dashboard, which will look something like the image below:

Figure 1. Sample view of the Shackleton Dashboard.

Routing calculations are done on the fly, so any number of scenarios can be explored with the dashboard, such as:

  • Real-time road network status
  • Vehicle localization and classification
  • Optimized bulk evacuation or ingress
  • Congestion estimation and rerouting
  • Critical communications/logistics nodes
  • Inferred risk from hostile actors
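
As a concrete illustration of the on-the-fly routing behind these scenarios, the sketch below snaps a (hypothetical) vehicle location to its nearest road-graph node and computes a travel-time-weighted shortest path to a chosen evacuation node. The node attribute names ("x", "y", "travel_time") are assumptions and may differ from those in the CRESI graph.

import networkx as nx
from scipy.spatial import cKDTree

def nearest_node(graph, x, y):
    # Snap a point to the nearest graph node using a KD-tree over node coordinates.
    nodes = list(graph.nodes)
    coords = [(graph.nodes[n]["x"], graph.nodes[n]["y"]) for n in nodes]
    _, idx = cKDTree(coords).query((x, y))
    return nodes[idx]

def evacuation_route(graph, vehicle_xy, evac_node):
    # Shortest path (by travel time) from a vehicle's snapped node to an evacuation node.
    start = nearest_node(graph, *vehicle_xy)
    path = nx.shortest_path(graph, start, evac_node, weight="travel_time")
    minutes = nx.shortest_path_length(graph, start, evac_node, weight="travel_time")
    return path, minutes

In the dashboard this calculation is simply re-run whenever the evacuation/ingress point changes.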

Clicking on any node in the network will move the evacuation/ingress point to that node, which greatly expands the number of scenarios one can explore. Figure 2 shows a handful of the many use cases.

Figure 2. Example use of the dashboard.

4. Conclusions

In this blog we showed how to combine machine learning with graph theory in order to build a dashboard for exploring a multitude of transportation and logistics scenarios. Relying on nothing but overhead imagery, we are able to estimate congestion, determine bulk evacuation/ingress routes, and mark critical nodes in the road network, among a host of other analytics. Our codebase is open source, with self-contained predictions and sample data enabling the dashboard to be spun up quite easily, and our explainer video illustrates how to use Shackleton. In future blogs we’ll explore a few more use cases and showcase other functionality built into Shackleton; in the meantime we encourage interested parties to explore the codebase.
