When it comes to forecasting the elements, many seem ready to welcome the machine. But humans still outperform the algorithms — especially in bad conditions. From a report: […] Similarly, research published by NOAA Weather Prediction Center director David Novak and his colleagues shows that while human forecasters may not be able to “beat” the models on your typical sunny, fair-weather day, they still produce more accurate predictions than the algorithm-crunchers in bad weather. Over the two decades of information Novak’s team studied, humans were 20 to 40 percent more accurate at forecasting near-future precipitation than the Global Forecast System (GFS) and the North American Mesoscale Forecast System (NAM), the most commonly used national models. Humans also made statistically significant improvements to temperature forecasting over both models’ guidance. “Oftentimes, we find that in the bigger events is when the forecasters can make some value-added improvements to the automated guidance,” says Novak. Particularly in adverse conditions, the greatest improvements to the models’ forecasts were usually due to human augmentation, he adds. This is especially true for local, severe events like thunderstorms and tornadoes, where split-second decision-making can save lives. As forecasters become more familiar with a particular model, they begin to notice its biases and failings, Novak adds. Just as the model learns from us, we learn from the model.