Application of feed forward and cascade forward neural network models for prediction of hourly ambient air temperature based on MERRA-2 reanalysis data in a coastal area of Turkey

Gündoğdu S., Elbir T.

METEOROLOGY AND ATMOSPHERIC PHYSICS, vol.133, no.5, pp.1481-1493, 2021 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 133 Issue: 5
  • Publication Date: 2021
  • Doi Number: 10.1007/s00703-021-00821-1
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Aerospace Database, Aquatic Science & Fisheries Abstracts (ASFA), Communication Abstracts, Environment Index, Geobase, INSPEC, Metadex, Civil Engineering Abstracts
  • Page Numbers: pp.1481-1493
  • Dokuz Eylül University Affiliated: Yes


Ambient air temperature is a vital climatic variable, and its forecasting is required for applications in many areas such as energy, industry, agriculture, health, environment, and meteorology. This study compares the performances of two static neural networks (NNs) used for the prediction of hourly ambient air temperatures in a coastal area of Turkey. Thirteen parameters from the Land Surface Diagnostics and Surface Flux Diagnostics collections of the MERRA-2 reanalysis dataset, including pressure, surface specific humidity, wind speed, wind direction, air density at the surface, evaporation, planetary boundary layer height, total precipitable water vapor, total precipitation, total cloud area fraction, total column ozone, greenness fraction, and leaf area index, were used as input parameters for the models. Feed-Forward Neural Network (FFNN) and Cascade Forward Neural Network (CFNN) models were applied to forecast hourly ambient air temperatures at 2 m above the surface. The results indicated that the most accurate and reliable predictions were obtained by the CFNN model with 30 neurons, while the lowest prediction performance was obtained by the FFNN model with 5 neurons. The root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R²) values for training (and testing) of the CFNN model with 30 neurons were 0.358 (0.376), 0.273 (0.283), and 0.997 (0.992), respectively, whereas the same metrics were 0.430 (0.447), 0.334 (0.343), and 0.996 (0.989) for the FFNN model with 5 neurons. The CFNN model thus had a lower RMSE and MAE and a higher R² than the FFNN model. These results showed that increasing the number of neurons in the hidden layer from 5 to 30 provided better model performance.
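The key architectural difference between the two models is that a cascade-forward network adds direct input-to-output connections on top of the standard input-hidden-output path. A minimal sketch of both forward passes and the three reported metrics is given below; this is an illustrative reconstruction in NumPy (the weight initialization, tanh activation, and 13-input/30-neuron sizing are assumptions mirroring the abstract, not the authors' actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# 13 MERRA-2 input parameters -> 30 hidden neurons -> 1 output (2 m air temperature)
n_in, n_hid = 13, 30
W_ih = rng.normal(scale=0.1, size=(n_hid, n_in))  # input -> hidden weights
b_h = np.zeros(n_hid)
W_ho = rng.normal(scale=0.1, size=(1, n_hid))     # hidden -> output weights
W_io = rng.normal(scale=0.1, size=(1, n_in))      # input -> output (cascade connection)
b_o = np.zeros(1)

def ffnn(x):
    """Feed-forward: signal flows only input -> hidden -> output."""
    h = np.tanh(W_ih @ x + b_h)
    return W_ho @ h + b_o

def cfnn(x):
    """Cascade-forward: same path plus a direct input -> output link."""
    h = np.tanh(W_ih @ x + b_h)
    return W_ho @ h + W_io @ x + b_o

# Evaluation metrics reported in the study
def rmse(y, p):
    return float(np.sqrt(np.mean((y - p) ** 2)))

def mae(y, p):
    return float(np.mean(np.abs(y - p)))

def r2(y, p):
    return float(1.0 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2))
```

The extra `W_io` term is what allows a CFNN to model linear input-output relationships directly, which may explain its edge over the plain FFNN here.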