The University of Arizona

An Autonomic Workflow Performance Manager for Weather Research and Forecast Workflows

By Shuqing Gu and Likai Yao

Tropical cyclones (TCs) remain among the most economically and socially destructive natural phenomena. In the U.S., the high impacts associated with landfalling TCs can be attributed to growing populations in vulnerable areas, especially along the Gulf of Mexico. At present, the only effective method for predicting hurricane location and the spatial extent of potential physical impacts is numerical weather prediction. One of the biggest challenges in hurricane forecasting is making full use of multiple data streams that arrive at rates much faster than the 6-hour initialization cycle of the forecast model. A system that can intelligently assess where observations will provide the most impact in the model initial fields, and that can continuously and automatically assess, update, or even cancel and reinitialize model forecasts, will make better use of resources and improve forecast skill, providing a tremendous advance in our ability to manage these types of disasters. Such a nonlinear system of forecast models requires a testbed for experimenting with and evaluating innovative methods to continuously assimilate observations, assess forecast accuracy, and maintain enough ensemble members to characterize the uncertainty in the model forecast. As part of this research, we are developing an autonomic workflow performance manager (AWPM) to test an integrated dynamic hurricane modeling environment: an end-to-end predictive tool to inform interested actors of the real hazards associated with a landfalling hurricane.

The AWPM architecture consists of two main components: a cyber-physical system (CPS) and an autonomic runtime manager (ARM). The CPS is built on the Weather Research and Forecasting (WRF) model, and the development environment uses the Apache Hadoop big data analytics framework, which includes a storage layer (the Hadoop Distributed File System, HDFS), a batch processing layer (MapReduce), and a real-time stream processing layer (Apache Storm). This enables us to improve the process and reduce the time needed to identify accurate models. The CPS processes and analyzes observation data for real-time forecasting and assimilates the massive incoming data streams. High throughput is achieved by eliminating computations on observed data that do not meet the quality criteria; this in turn improves prediction accuracy and allows a new forecast to be triggered based on the observed data. The ARM monitors and analyzes the interactions among the physical systems so that the dynamically changing computation models and resource requirements are matched seamlessly.
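
To illustrate the quality-screening step in the CPS stream-processing layer, the sketch below shows how such a filter could be expressed as an Apache Storm bolt. The class name, field names, and thresholds are illustrative assumptions rather than the project's actual code; the point is only that observations failing basic range checks are dropped before they reach the assimilation and forecast stages, which is where the throughput savings come from.

// A minimal sketch (assumed, not the project's actual code) of a quality-control
// bolt in an Apache Storm topology: observations that fail basic range checks are
// dropped so that downstream assimilation never spends cycles on them.
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class ObservationQualityBolt extends BaseBasicBolt {

    // Hypothetical quality thresholds; real criteria would come from the
    // assimilation system's quality-control configuration.
    private static final double MIN_PRESSURE_HPA = 850.0;
    private static final double MAX_PRESSURE_HPA = 1090.0;
    private static final double MAX_WIND_MS = 120.0;

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        double pressure = input.getDoubleByField("pressure_hpa");
        double windSpeed = input.getDoubleByField("wind_ms");

        boolean passesQc = pressure >= MIN_PRESSURE_HPA
                && pressure <= MAX_PRESSURE_HPA
                && windSpeed >= 0.0
                && windSpeed <= MAX_WIND_MS;

        // Only emit observations that pass the checks; rejected tuples are simply
        // dropped, so no further computation is spent on them downstream.
        if (passesQc) {
            collector.emit(new Values(
                    input.getStringByField("station_id"),
                    input.getLongByField("obs_time"),
                    pressure,
                    windSpeed));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("station_id", "obs_time", "pressure_hpa", "wind_ms"));
    }
}

In a topology, a bolt like this would sit between the observation-ingest spout and the assimilation stage, so only quality-checked tuples ever reach the forecast-triggering logic.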

Collaborators:

Cihan Tunc and Ali Akoglu
