A multidisciplinary group of researchers from the University of California, Irvine created a machine learning model to predict the potential of large wildfires from the time of ignition. The decision tree classifier uses a single dataset to correctly predict whether a fire will grow large roughly 50% of the time, outperforming more complex models the researchers tested that rely on multiple weather variables.
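For readers curious what such a single-predictor classifier looks like in practice, here is a minimal sketch in scikit-learn. The feature name, threshold, and synthetic data below are illustrative assumptions, not the researchers' actual dataset or code.

```python
# Minimal sketch: a shallow decision tree trained on a single predictor
# to classify fires as large or small at ignition. All data here is
# synthetic and the "dryness" variable is a hypothetical stand-in.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical single predictor measured at ignition; fires ignited
# under drier conditions are more likely to grow large.
dryness = rng.uniform(0.0, 3.0, size=1000).reshape(-1, 1)
is_large = (dryness.ravel() + rng.normal(0, 0.8, size=1000)) > 2.0

X_train, X_test, y_train, y_test = train_test_split(
    dryness, is_large, test_size=0.25, random_state=0
)

# A shallow tree keeps the classifier simple and interpretable.
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The appeal of this design is interpretability: a depth-two tree amounts to one or two human-readable thresholds, which is far easier for fire managers to act on than the output of a neural network.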
Though the model is trained to predict the outcomes of fires in forests primarily made up of boreal trees, the kind found in Alaska and northern Canada, such a model could help fire officials allocate resources in parts of the western United States, where wildfires are expected to increase in frequency as a result of climate change.
“This type of simple classification system could offer insight into optimal resource allocation, helping to maintain a historical fire regime and protect Alaskan ecosystems,” the report reads. “The huge fires and their impacts in recent years may warrant a rethinking of fire management; lands that have previously been limited suppression zones could now require increased suppression effort to maintain contemporary burning levels and mitigate impacts to humans and vulnerable ecosystems.”
As the report notes, fire aerosols account for more than 300,000 premature deaths each year globally and contribute to the global carbon footprint. One University of New Hampshire study predicts that wildfire burn area could double by 2050 compared with levels seen in the 1990s.
The simpler approach achieved more accurate results than four more complex machine learning classifiers from the scikit-learn package, including multi-layer perceptron and gradient boosting. The findings were published Tuesday in the International Journal of Wildland Fire.
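A hedged sketch of how such a head-to-head comparison might be set up in scikit-learn follows, using the two more complex classifiers named in the article. The synthetic data, model settings, and evaluation choices are assumptions for illustration only, not the published methodology.

```python
# Illustrative comparison: a shallow decision tree versus gradient
# boosting and a multi-layer perceptron on synthetic data where the
# outcome is driven mostly by a single variable.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))          # hypothetical weather variables at ignition
y = X[:, 0] + 0.1 * X[:, 1] > 0.5      # large-fire label, dominated by one variable

models = {
    "decision tree": DecisionTreeClassifier(max_depth=2, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "multi-layer perceptron": MLPClassifier(max_iter=2000, random_state=0),
}

# Report 5-fold cross-validated accuracy for each classifier.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f}")
```

When the signal really does live in one or two variables, as in this toy setup, a simple tree can match or beat heavier models, which mirrors the pattern the researchers describe.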