Yidi Wang
University of Virginia
Recent Activity
ABSTRACT:
Accurate and timely flood forecasting is essential for enhancing resilience in coastal urban areas in the context of increasing rainfall frequency and intensity, sea level rise, and rapid urbanization. This study presents a hybrid deep learning surrogate model that integrates Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks to enable real-time spatiotemporal flood forecasting. The CNN captures spatial features from inputs such as elevation and the Topographic Wetness Index (TWI), while the LSTM processes time-series inputs of rainfall and tide data to capture temporal features. The hybrid CNN-LSTM model was trained on simulations from the physics-based Two-dimensional Unsteady FLOW (TUFLOW) model for Norfolk, Virginia, and achieved high predictive accuracy across diverse flood-prone areas. It reduced computational time per event from four to six hours to under five minutes, enabling rapid flood inundation mapping and early warning. The model effectively captured both spatial flood extents and their temporal evolution across different flooding scenarios, providing forecasts at 2.5 m spatial resolution and 15-minute temporal resolution with a prediction horizon of one hour. While challenges remain in transferability and real-time data assimilation, the approach demonstrates strong potential for supporting operational flood risk management in coastal urban environments.
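As a rough illustration of the hybrid architecture described in this abstract, the sketch below pairs a small CNN encoder for static rasters (elevation, TWI) with an LSTM encoder for rainfall and tide series, then fuses the two to predict a water-depth grid one lead-time step ahead. It assumes PyTorch, and all layer sizes, grid dimensions, and sequence lengths are hypothetical placeholders rather than the configuration used in the study.

```python
import torch
import torch.nn as nn

class CNNLSTMSurrogate(nn.Module):
    """Sketch of a hybrid CNN-LSTM flood-forecasting surrogate (illustrative sizes)."""

    def __init__(self, grid_size=64, n_static=2, n_forcings=2):
        super().__init__()
        # CNN branch: encodes static spatial inputs (e.g. elevation, TWI rasters).
        self.cnn = nn.Sequential(
            nn.Conv2d(n_static, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        cnn_out = 32 * (grid_size // 4) ** 2
        # LSTM branch: encodes time series of rainfall and tide forcings.
        self.lstm = nn.LSTM(input_size=n_forcings, hidden_size=64, batch_first=True)
        # Decoder: fuses both encodings and predicts a water-depth grid.
        self.decoder = nn.Sequential(
            nn.Linear(cnn_out + 64, 256), nn.ReLU(),
            nn.Linear(256, grid_size * grid_size),
        )
        self.grid_size = grid_size

    def forward(self, static_maps, forcings):
        # static_maps: (batch, n_static, H, W); forcings: (batch, seq_len, n_forcings)
        spatial = self.cnn(static_maps)
        _, (h_n, _) = self.lstm(forcings)
        fused = torch.cat([spatial, h_n[-1]], dim=1)   # last LSTM hidden state
        depth = self.decoder(fused)
        return depth.view(-1, 1, self.grid_size, self.grid_size)

if __name__ == "__main__":
    model = CNNLSTMSurrogate()
    maps = torch.randn(4, 2, 64, 64)    # elevation + TWI rasters
    series = torch.randn(4, 8, 2)       # rainfall + tide time series
    print(model(maps, series).shape)    # torch.Size([4, 1, 64, 64])
```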
ABSTRACT:
This study explores the use of Deep Convolutional Neural Networks (DCNNs) for semantic segmentation of flood images. Imagery datasets of urban flooding were used to train two DCNN-based models, and camera images were used to test the models on real-world data. Validation results show that both models extracted flood extent with a mean F1-score above 0.9. Factors that affected performance included still water surfaces with specular reflection, wet road surfaces, and low illumination. In testing, reduced visibility during storms and raindrops on surveillance camera lenses were the main problems affecting segmentation of flood extent. High-definition web cameras can serve as an alternative data source when the models are trained on the imagery they collect. In conclusion, DCNN-based models can extract flood extent from camera images of urban flooding, and the challenges with applying these models to real-world data identified in this research present opportunities for future work.
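The sketch below illustrates, with assumed layer sizes, the kind of encoder-decoder network and pixel-wise F1 evaluation this abstract describes; it is not the specific DCNN segmentation models trained in the study.

```python
import torch
import torch.nn as nn

class FloodSegNet(nn.Module):
    """Minimal encoder-decoder sketch for binary flood-extent segmentation."""

    def __init__(self):
        super().__init__()
        # Encoder: downsample RGB camera frames to a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Decoder: upsample back to input resolution, one flood logit per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def f1_score(pred_logits, target, threshold=0.5, eps=1e-7):
    """Pixel-wise F1 between a predicted flood mask and a labelled mask."""
    pred = (torch.sigmoid(pred_logits) > threshold).float()
    tp = (pred * target).sum()
    precision = tp / (pred.sum() + eps)
    recall = tp / (target.sum() + eps)
    return (2 * precision * recall / (precision + recall + eps)).item()

if __name__ == "__main__":
    net = FloodSegNet()
    frame = torch.randn(1, 3, 128, 128)              # one camera frame
    mask = (torch.rand(1, 1, 128, 128) > 0.5).float()
    logits = net(frame)
    print(logits.shape, f1_score(logits, mask))
```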