
How to compute standard deviation from mean absolute error?

Cross Validated, asked on December 25, 2021

I would like to compute the standard deviation from the mean absolute error for the predictions of a CNN I trained.

Consider model_predict as the network's predictions and y_test as the ground truth.

I tried:

std = np.sqrt(np.mean(np.abs(y_test_n - np.mean(model_predict))**2))

and

std = np.sqrt(np.mean(np.abs(y_test_n - (model_predict))**2))

I think the first one is the correct computation, but the values look suspicious (there isn't much difference between different weights). Which of the above formulas (if any) is correct for the standard deviation from the mean absolute error?

If none, how can one compute that?

One Answer

I think what you want to compute is the standard deviation of the errors.

The first equation you posted compares the true labels with the mean of the model's predictions. Since the labels are fixed and the mean of the predictions only reflects the model's bias, this doesn't give much information.

The second equation essentially computes the RMSE (root mean squared error) of the predictions.
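
To see the difference concretely, here is a minimal sketch with made-up numbers standing in for y_test_n and model_predict; it evaluates both of your expressions alongside the standard deviation of the errors:

import numpy as np

# Hypothetical values standing in for the test labels and the CNN's predictions
y_test_n = np.array([3.0, 1.5, 4.2, 2.8, 3.9])
model_predict = np.array([2.7, 1.9, 4.0, 3.1, 3.5])

# First attempt: RMS deviation of the labels from the mean prediction
first = np.sqrt(np.mean(np.abs(y_test_n - np.mean(model_predict))**2))

# Second attempt: root mean squared error (RMSE) of the predictions
second = np.sqrt(np.mean(np.abs(y_test_n - model_predict)**2))

# Standard deviation of the errors, for comparison
std_err = np.std(y_test_n - model_predict)

print(first, second, std_err)  # three different quantities in general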

What you need to do is to first find the errors:

err = y_test_n - model_predict

Now, err is the error of each prediction. If you want the absolute error, you can use np.abs(err).

Then just compute the standard deviation of these errors.

std = np.sqrt(np.mean((err - np.mean(err))**2))

or more simply

std = np.std(err)
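
Putting it all together, here is a self-contained sketch (the arrays are made up; substitute your own y_test_n and model_predict) that also reports the MAE mentioned in the question:

import numpy as np

# Hypothetical test labels and CNN predictions, for illustration only
y_test_n = np.array([3.0, 1.5, 4.2, 2.8, 3.9])
model_predict = np.array([2.7, 1.9, 4.0, 3.1, 3.5])

err = y_test_n - model_predict                           # per-sample errors
mae = np.mean(np.abs(err))                               # mean absolute error
std_manual = np.sqrt(np.mean((err - np.mean(err))**2))   # std written out
std_np = np.std(err)                                     # same value via NumPy

print(mae, std_manual, std_np)                           # std_manual == std_np

Note that np.std uses the population formula (ddof=0 by default); pass ddof=1 if you want the sample standard deviation.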

Answered by Djib2011 on December 25, 2021
