When the ground stops shaking after an earthquake, the race to save lives begins immediately. Emergency responders need to quickly identify which buildings have sustained serious damage, where people might be trapped, and which structures pose imminent collapse risks. Traditional assessment methods involve teams of engineers and first responders physically inspecting buildings one by one in a methodical but painfully slow process when time is of the essence.
These traditional methods face several critical limitations. First, they’re extremely time-consuming, potentially taking days or even weeks to complete in heavily affected urban areas. Second, they put assessment teams at significant risk, as they must enter potentially unstable structures. Third, they’re labor-intensive, requiring large teams of trained professionals at a time when such resources are stretched thin. As earthquakes continue to threaten communities worldwide, this conventional approach simply isn’t efficient enough to meet the urgent needs of disaster response.
Some advancements have been made using satellite and aerial imagery for damage assessment, employing change detection between pre- and post-earthquake images. However, these methods come with their own limitations as they require pre-disaster imagery for comparison, and factors like cloud cover or low spatial resolution can significantly compromise accuracy. Moreover, satellite imagery often lacks the detail needed to assess specific structural damage that could indicate trapped survivors or imminent collapse.
The emergence of unmanned aerial vehicle (UAV) technology, commonly known as drones, has created new possibilities for disaster response. Drones can quickly access affected areas, even when roads are blocked or dangerous, and capture high-resolution imagery from multiple angles. This capability allows them to provide detailed visual data about damaged structures without putting human assessors at risk.
The advantages of drone technology for earthquake damage assessment are significant. Drones can be deployed immediately after an earthquake, covering large areas in a fraction of the time required for ground teams. They can capture images from perspectives that would be difficult or impossible for humans to access, such as rooftops and upper floors of damaged buildings. Additionally, they can operate in hazardous environments where sending human teams would be unsafe.
Consider a scenario like the devastating 2023 Turkey-Syria earthquake, which affected vast regions across both countries. Traditional assessment methods took weeks to complete, delaying critical resource allocation decisions. Had drone-based AI assessment been widely available, emergency teams could have received comprehensive damage maps within hours rather than days, potentially saving many more lives through more targeted and timely rescue operations.
A research team comprising Furkan Kizilay, Mina R. Narman, Hwapyeong Song, Husnu S. Narman, Cumhur Cosgun, and Ammar Alzarrad conducted a groundbreaking study investigating the feasibility of employing various deep learning models for damage detection using drone imagery. Deep learning, a subset of machine learning, uses neural networks with many layers (hence “deep”) to analyze data and make predictions. For earthquake damage assessment, these models can be trained to identify visual patterns associated with different types of structural damage.
The researchers explored three different deep learning architectures: YOLOv8 (You Only Look Once), Detectron2, and an adaptation of VGG16 for object detection through transfer learning. Each of these models offers different advantages and trade-offs in terms of accuracy, processing speed, and computational requirements (Kizilay et al., 2024).
YOLOv8 is known for its rapid processing capabilities, making it particularly suitable for real-time applications. It processes entire images in a single pass, allowing it to detect multiple objects simultaneously at impressive speeds. Detectron2, developed by Facebook AI Research, offers high accuracy with its advanced object detection features. The VGG16 adaptation represents an approach where a model originally designed for image classification is repurposed for object detection through transfer learning—a technique that leverages knowledge gained from training on one task to improve performance on a related task.
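The transfer-learning idea can be illustrated with a toy NumPy sketch: a “pretrained” feature extractor is kept frozen, and only a small new head is trained for the new task. Everything here (the data, layer sizes, and learning rate) is a made-up illustration of the concept, not the study’s actual VGG16 setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these weights were learned earlier on a large image task;
# in real transfer learning this would be a pretrained backbone like VGG16.
W_pretrained = rng.normal(size=(4, 8))

def extract_features(x):
    """Frozen backbone: maps raw inputs to previously learned features."""
    return np.tanh(x @ W_pretrained)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic "new task" data with binary labels.
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new head is trained; the backbone weights never change.
w_head, b_head = np.zeros(8), 0.0
feats = extract_features(X)
for _ in range(500):  # plain gradient descent on the logistic loss
    p = sigmoid(feats @ w_head + b_head)
    w_head -= 0.5 * feats.T @ (p - y) / len(y)
    b_head -= 0.5 * np.mean(p - y)

acc = np.mean((sigmoid(feats @ w_head + b_head) > 0.5) == y)
print(f"head-only training accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: the frozen extractor carries over knowledge from the original task, so the new task needs far fewer trainable parameters and far less data.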
The selection of these models was strategic, focusing on architectures that balance computational efficiency and accuracy. While transformer-based models like DETR (Detection Transformer) have shown exceptional accuracy in recent years, they weren’t included in this study due to their larger model sizes and greater computational demands, factors that make them less suitable for deployment on drone hardware with limited processing capabilities.
The research team developed a comprehensive approach comprising three distinct stages for earthquake damage assessment: identifying building demolition, classifying building damage types, and detecting specific wall damage. This multi-layered approach allows for both broad assessment of overall structural integrity and detailed analysis of specific damage indicators.
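The three stages can be sketched as a simple cascade, shown below with placeholder rules standing in for the trained models. The function names, input fields, and thresholds are illustrative assumptions for exposition, not details from the study.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Assessment:
    demolished: bool
    damage_type: Optional[str]
    wall_damage: List[str]

def detect_demolition(image_features):
    # Stage 1: is the building demolished? A real system would run a
    # trained object detector here instead of this threshold rule.
    return image_features.get("collapse_score", 0.0) > 0.8

def classify_damage(image_features):
    # Stage 2: coarse damage category for still-standing buildings.
    score = image_features.get("damage_score", 0.0)
    return "severe" if score > 0.6 else "moderate" if score > 0.3 else "light"

def detect_wall_damage(image_features):
    # Stage 3: fine-grained indicators such as vertical cracks.
    return [d for d in image_features.get("wall_findings", []) if d == "vertical_crack"]

def assess(image_features):
    """Run the three-stage cascade on one building's image features."""
    if detect_demolition(image_features):
        return Assessment(True, None, [])
    return Assessment(False, classify_damage(image_features),
                      detect_wall_damage(image_features))

report = assess({"collapse_score": 0.2, "damage_score": 0.7,
                 "wall_findings": ["vertical_crack", "spalling"]})
print(report)  # standing building, severe damage, one vertical crack flagged
```

The cascade structure is what matters: a cheap broad check (demolished or not) gates the more detailed analyses, so fine-grained wall inspection is only spent on buildings where it can change the outcome.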
To train and evaluate these models, the researchers created a specialized dataset of drone imagery capturing various types of earthquake damage. This dataset was meticulously annotated with detailed labels for different damage features, enabling effective training of the deep learning models. The specific focus on wall damage types, particularly vertical cracks, is critical because these structural failures often indicate fundamental stability issues that could lead to building collapse.
After training the models, the researchers evaluated their performance using metrics including mean Average Precision (mAP), mAP50 (which measures performance at a 50% Intersection over Union threshold), and recall. These metrics capture different aspects of model performance: mAP measures overall detection accuracy, while recall indicates the model’s ability to find all relevant instances of damage.
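Two of these ingredients, Intersection over Union and recall at the 50% IoU threshold behind mAP50, are simple enough to compute by hand. The sketch below uses boxes in (x1, y1, x2, y2) form; the box coordinates are illustrative, not from the study.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def recall_at_iou(predictions, ground_truths, threshold=0.5):
    """Fraction of ground-truth boxes matched by a prediction at IoU >= threshold."""
    matched = sum(
        1 for gt in ground_truths
        if any(iou(pred, gt) >= threshold for pred in predictions)
    )
    return matched / len(ground_truths) if ground_truths else 0.0

gts   = [(0, 0, 10, 10), (20, 20, 30, 30)]   # two damaged buildings
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]   # one near-hit, one false alarm
print(iou(preds[0], gts[0]))      # 0.81: overlap is 81 of 100 union pixels
print(recall_at_iou(preds, gts))  # 0.5: only 1 of 2 ground truths was found
```

mAP builds on the same matching logic, averaging precision over confidence thresholds (and, for plain mAP, over a range of IoU thresholds), which is why it is treated as the overall accuracy summary.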
The results show that YOLOv8 demonstrated superior performance in detecting damaged buildings within drone imagery, particularly for cases with moderate bounding box overlap. This finding suggests that YOLOv8 offers the best balance between accuracy and efficiency for real-world applications in earthquake damage assessment. Its ability to process imagery quickly while maintaining high detection accuracy makes it particularly well-suited for time-sensitive disaster response scenarios.
To understand the impact of this technology, consider a hypothetical scenario: a magnitude 7.2 earthquake has struck a densely populated area. Within hours, drones carrying YOLOv8 models are deployed across the region. The onboard AI analyzes imagery in real time, identifying collapsed buildings, severe damage, and danger signs like vertical cracks. This information is sent to an emergency command center, creating a dynamic map of the disaster zone. Emergency managers can see where damage is most severe and prioritize search and rescue teams accordingly. Instead of waiting days for a damage assessment, they receive actionable information within hours, potentially saving lives.

The technology’s applications extend beyond immediate response. In the recovery phase, it can help engineers assess which buildings are safe to occupy, which need repairs, and which must be demolished. This quicker, more accurate assessment could accelerate recovery, helping communities return to normalcy after an earthquake.
This research also addresses practical challenges in implementing deep learning on drones. Deep learning models are computationally intensive, requiring substantial processing power that is often unavailable on drone hardware. To tackle this, the researchers developed two strategies for operating multiple deep learning models simultaneously: frame splitting and threading. Frame splitting divides each video frame into segments for independent processing, while threading allows parallel processing of different frames or segments. These methods enhance processing efficiency, supporting real-time analysis on limited resources. The researchers also focused on optimizing model size and complexity for drones. Techniques like model pruning, quantization, and knowledge distillation lower computational demands without greatly affecting accuracy. For instance, a YOLOv8 model that would normally require a powerful GPU can be optimized to run on the less powerful hardware of commercial drones, improving accessibility and enabling deployment in actual disaster scenarios (Kizilay et al., 2024).
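The two strategies described above can be sketched in a few lines, with a placeholder detect() function standing in for a real deep learning model: split_frame() tiles a frame into independent segments (frame splitting), and a thread pool processes the segments in parallel (threading). The tile sizes and the toy “damage counting” detector are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def split_frame(frame, rows, cols):
    """Divide a 2D frame (a list of pixel rows) into rows x cols tiles."""
    h, w = len(frame), len(frame[0])
    th, tw = h // rows, w // cols
    return [
        [row[c * tw:(c + 1) * tw] for row in frame[r * th:(r + 1) * th]]
        for r in range(rows) for c in range(cols)
    ]

def detect(tile):
    """Stand-in for a per-tile damage detector: counts 'damaged' pixels (value 1)."""
    return sum(sum(row) for row in tile)

def assess_frame(frame, rows=2, cols=2, workers=4):
    """Run the detector on every tile in parallel and merge the results."""
    tiles = split_frame(frame, rows, cols)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect, tiles))

# A 4x4 frame where 1 marks a 'damaged' pixel, split into four 2x2 tiles.
frame = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
print(assess_frame(frame))  # per-tile damage counts: [2, 0, 0, 4]
```

Because each tile is processed independently, the same pattern lets several models (or several frames) share limited onboard compute, which is the property that makes real-time analysis feasible on drone hardware.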
Drone-based deep learning models offer advantages for real-time damage assessment in disaster response. They enable rapid information gathering for resource allocation, rescue, and recovery after earthquakes, potentially transforming our response to these disasters. Future research could expand datasets to include various building types and conditions, enhancing model robustness across disaster scenarios. Researchers may also explore more efficient deep learning architectures, especially with improved transformer models. Integrating this visual assessment with structural vibration analysis or thermal imaging could yield comprehensive insights into building safety. Additionally, combining these systems with IoT sensors in buildings might create early warning systems to detect and predict damage before disasters occur.
In conclusion, the research into deep learning models for earthquake damage assessment using drone imagery represents a significant step forward in our ability to respond effectively to these devastating natural disasters. By combining the mobility and perspective of drone technology with the analytical power of artificial intelligence, we can dramatically accelerate the damage assessment process, enabling faster and more targeted emergency response.
YOLOv8’s superior performance in this study highlights its potential as the model of choice for real-world applications, offering an optimal balance between accuracy and computational efficiency. Meanwhile, the strategies for model optimization and parallel processing address critical practical challenges, bringing this technology closer to widespread deployment.
As climate change potentially increases the frequency and severity of natural disasters worldwide, innovations like these become increasingly vital. The ability to quickly and accurately assess damage after an earthquake could save countless lives and help communities recover more rapidly from these traumatic events. While there’s still work to be done to refine and deploy these systems at scale, the path forward is clear and profoundly promising.
References
Kizilay, F., Narman, M.R., Song, H. et al. Evaluating fine tuned deep learning models for real-time earthquake damage assessment with drone-based images. AI Civ. Eng. 3, 15 (2024). https://doi.org/10.1007/s43503-024-00034-6
Abdi, G., & Jabari, S. (2021). A multi-feature fusion using deep transfer learning for earthquake building damage detection. Canadian Journal of Remote Sensing, 47(2), 337–352.
Xiong, C., Li, Q., & Lu, X. (2020). Automated regional seismic damage assessment of buildings using an unmanned aerial vehicle and a convolutional neural network. Automation in Construction, 109, 102994.
Fernandez Galarreta, J., Kerle, N., & Gerke, M. (2015). Uav-based urban structural damage assessment using object-based image analysis and semantic reasoning. Natural Hazards and Earth System Sciences, 15(6), 1087–1101.
Kalantar, B., Ueda, N., Al-Najjar, H. A., & Halin, A. A. (2020). Assessment of convolutional neural network architectures for earthquake-induced building damage detection based on pre-and post-event orthophoto images. Remote Sensing, 12(21), 3529.
Khodaverdi Zahraee, N., & Rastiveis, H. (2017). Object-oriented analysis of satellite images using artificial neural networks for post-earthquake buildings change detection. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 42, 139–144.
Chandler, B. M. P., Lovell, H., Boston, C. M., Lukas, S., Barr, I. D., Benediktsson, Í. Ö., Benn, D. I., Clark, C. D., Darvill, C. M., Evans, D. J. A., Ewertowski, M., Loibl, D., Margold, M., Otto, J., Roberts, D. H., Stokes, C. R., Storrar, R. D., & Stroeven, A. P. (2018). Glacial geomorphological mapping: A review of approaches and frameworks for best practice. Earth-Science Reviews. https://doi.org/10.1016/j.earscirev.2018.07.015
Zhou, Z., Qiao, Y., Lin, X., Li, P., Wu, N., Yu, D., & Yu, D. (2025). A Deployment Method for Motor Fault Diagnosis Application Based on Edge Intelligence. Sensors, 25(1), 9.