Computer Vision for Disaster Responses

Yung-Hsiang Lu
Published 10/31/2023

Information plays a pivotal role in disaster response. After a major disaster, such as an earthquake or hurricane, survivors and rescuers need information to plan their responses. For example, if a bridge is broken, they must plan an alternative route for evacuation or for sending emergency supplies.

In recent years, uncrewed aerial vehicles (UAVs, also called drones) have been used in disaster areas to acquire images and videos. UAVs have the advantage of being able to fly over damaged roads and broken bridges. However, the images and videos they capture still need to be analyzed by humans, and this consumes rescuers’ precious time.

This raises the question: is it possible for UAVs to analyze the data themselves while flying?

The 2023 IEEE Low-Power Computer Vision Challenge (LPCV) made this concept a reality.

This international competition invited computer vision experts to develop software that can mark regions of images quickly and correctly: each pixel of an input image is assigned a meaning (i.e., its semantics), such as the category of object it belongs to. This is the semantic segmentation problem in computer vision.

This challenge was organized by Purdue University, Boston University, and Loyola University Chicago. The organizers collected 1,720 images from various disaster scenes, each resized to 512 × 512 pixels. A vision program must classify each pixel into one of 14 categories of objects, such as person, mud, road, or vehicle. Rankings are determined by the ratio of correctness (accuracy) to execution time, with execution time measured on an NVIDIA Jetson Nano, a popular platform for embedded systems.
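To make the task concrete, here is a minimal, hypothetical sketch of what a submission has to do: predict one of 14 classes for every pixel of a 512 × 512 image and be judged by accuracy divided by execution time. The tiny network, random image, and random labels below are placeholders, not the organizers’ code or data, and the official scoring details may differ.

```python
import time
import numpy as np
import torch
import torch.nn as nn

# Hypothetical sketch of the LPCV-style task (not the official evaluation code):
# classify every pixel of a 512 x 512 image into one of 14 categories and
# score the result as accuracy divided by execution time.
NUM_CLASSES = 14

# Stand-in segmentation network: a real entry would use a far stronger model;
# this toy fully convolutional net just keeps the example runnable.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),  # per-pixel class logits
)
model.eval()

image = torch.rand(1, 3, 512, 512)                            # stand-in drone image
ground_truth = np.random.randint(0, NUM_CLASSES, (512, 512))  # stand-in labels

start = time.perf_counter()
with torch.no_grad():
    logits = model(image)                          # shape: (1, 14, 512, 512)
    prediction = logits.argmax(dim=1)[0].numpy()   # per-pixel class IDs
elapsed = time.perf_counter() - start

accuracy = (prediction == ground_truth).mean()
score = accuracy / elapsed   # higher accuracy and shorter time both help
print(f"accuracy={accuracy:.3f}  time={elapsed * 1000:.1f} ms  score={score:.1f}")
```

A real entry would replace the toy network with an optimized segmentation model and measure the timing on the Jetson Nano itself.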

To help contestants get started, the organizers provided a sample solution. A contestant’s solution must outperform the sample solution in both correctness (higher accuracy) and execution time (shorter runtime); otherwise, it is disqualified.
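As a rough illustration of that qualification rule, the following hypothetical check screens a submission against the sample solution; the numbers are placeholders, not the actual baseline figures.

```python
def qualifies(accuracy: float, time_ms: float,
              baseline_accuracy: float, baseline_time_ms: float) -> bool:
    """A submission counts only if it beats the sample solution on both axes."""
    return accuracy > baseline_accuracy and time_ms < baseline_time_ms

# Placeholder numbers for illustration only.
print(qualifies(accuracy=0.45, time_ms=9.0,
                baseline_accuracy=0.40, baseline_time_ms=12.0))  # True
```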

The competition was held from July 1 to August 4, 2023; 117 teams submitted 676 solutions on the LPCV website.

ModelTC, from Tsinghua University, China, was announced as the winner of this year’s competition. Its members included Yiru Wang, Xin Jin, Zhiwei Dong, Yifan Pu, Yongqiang Yao, Bo Li, Ruihao Gong, Haoran Wang, Xianglong Liu, Gao Huang, and Wei Wu.

The team achieved the best balance between accuracy and execution time: 51.2% accuracy at 6.8 ms per image.

The second-place award went to the AidgetRock team from Midea, China. Its members are Zifu Wan, Xinwang Chen, Ning Liu, Ziyi Zhang, Dongping Liu, Ruijie Shan, Zhengping Che, Fachao Zhang, Xiaofeng Mou, and Jian Tang.

The third-place award went to the ENOT team from enot.ai, based in Luxembourg. Its members are Alexander Goncharenko, Maxim Chuprov, Arsenii Ianchenko, Andrey Shcherbin, Sergey Alyamkin, and Ivan Malofeev.

Competition organizers include Benjamin Chase Boardley, Leo Chen, Gowri Ramshankar, and Yung-Hsiang Lu from Purdue University; Ping Hu and Kate Saenko from Boston University; and Nicholas Synovic, George K. Thiruvathukal, and Oscar Yanek from Loyola University Chicago.

Disclaimer: The author is completely responsible for the content of this article. The opinions expressed are their own and do not represent IEEE’s position nor that of the Computer Society nor its Leadership.