Table 2. Effects of hyperparameters (activation function, regularization, and dropout) on the performance of a multilayer perceptron neural network for predicting bread texture

| Hyperparameter | Setting | Train R² | Train RMSE | Test R² | Test RMSE |
|---|---|---|---|---|---|
| Activation function | Sigmoid | 0.3224 ± 0.0999 | 0.1918 ± 0.0141 | 0.4005 ± 0.1497 | 0.1942 ± 0.0245 |
| | Tanh | 0.7508 ± 0.0029 | 0.1166 ± 0.0007 | 0.6965 ± 0.0107 | 0.1392 ± 0.0024 |
| | Linear | 0.7496 ± 0.0019 | 0.1169 ± 0.0004 | 0.6934 ± 0.0107 | 0.1399 ± 0.0024 |
| | ELU | 0.7550 ± 0.0040 | 0.1156 ± 0.0009 | 0.6938 ± 0.0073 | 0.1398 ± 0.0017 |
| | ReLU | 0.9882 ± 0.0112 | 0.0229 ± 0.0108 | 0.7504 ± 0.0471 | 0.1257 ± 0.0119 |
| | Leaky_ReLU | 0.9684 ± 0.0130 | 0.0404 ± 0.0097 | 0.7846 ± 0.0303 | 0.1170 ± 0.0081 |
| Regularization | | 0.9981 ± 0.0021 | 0.0088 ± 0.0049 | 0.7891 ± 0.0402 | 0.1156 ± 0.0107 |
| Regularization + Dropout | | 0.9667 ± 0.0108 | 0.0421 ± 0.0069 | 0.8109 ± 0.0272 | 0.1096 ± 0.0079 |
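The table's components can be sketched in code: the activation functions being compared, the R² and RMSE metrics reported in each column, and a forward pass combining leaky ReLU with dropout and an L2 penalty, as in the "Regularization + Dropout" row. This is a minimal NumPy sketch, not the paper's implementation; the layer sizes, dropout rate, and L2 coefficient are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Activation functions compared in Table 2 (standard definitions).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elu(x, a=1.0):
    return np.where(x > 0, x, a * (np.exp(x) - 1.0))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

# Metrics reported in the table.
def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# One forward pass of a two-layer MLP with inverted dropout (training mode)
# and an L2 weight penalty added to the loss -- a sketch of the
# "Regularization + Dropout" configuration; drop_rate and l2 are assumed values.
def mlp_forward(X, W1, b1, W2, b2, drop_rate=0.2, l2=1e-4, train=True):
    h = leaky_relu(X @ W1 + b1)
    if train and drop_rate > 0:
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)  # rescale so expected activation is unchanged
    y_hat = (h @ W2 + b2).ravel()
    penalty = l2 * (np.sum(W1 ** 2) + np.sum(W2 ** 2))  # add to the data loss
    return y_hat, penalty
```

At evaluation time the same function is called with `train=False`, so dropout is disabled and predictions are deterministic; the penalty term affects only training, which is why heavy regularization can lower train R² (as in the last row) while improving test R².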