torchdms.loss

Loss functions and loss-related helper functions.

Functions

diff_penalty

Computes the l1 norm of the differences between adjacent betas within each latent dimension.

l1

l1 loss, optionally with loss decay or target exponentiation.

mse

Mean squared error, optionally with loss decay or target exponentiation.

product_penalty

Computes the l1 norm of the product of betas across latent dimensions.

rmse

Root mean square error, optionally with loss decay or target exponentiation.

sitewise_group_lasso

The sum of the 2-norms of the columns.

sum_diff_penalty

Computes the l1 norm of the differences between aggregated betas at adjacent sites, for each latent dimension.

weighted_loss

Generic loss-function decorator applying optional loss decay or target exponentiation.

torchdms.loss.weighted_loss(base_loss)[source]

Generic loss-function decorator applying optional loss decay or target exponentiation.
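The entry above does not show what the decorator actually does to the wrapped loss. A minimal sketch, assuming one plausible convention (exponentiate both tensors by a base exp_target, and scale both by exp(loss_decay * y_true) so larger-target examples weigh more) rather than the confirmed torchdms implementation:

```python
import functools

import torch


def weighted_loss(base_loss):
    """Sketch: add optional loss decay / target exponentiation to a loss.

    The decay convention (scaling both tensors by exp(loss_decay * y_true))
    is an assumption for illustration, not the confirmed torchdms behavior.
    """

    @functools.wraps(base_loss)
    def wrapped(y_true, y_predicted, loss_decay=None, exp_target=None):
        if exp_target is not None:
            # Compare exponentiated observables instead of raw ones.
            y_true = exp_target ** y_true
            y_predicted = exp_target ** y_predicted
        if loss_decay is not None:
            # Down-weight examples with small targets (assumed convention).
            decay = torch.exp(loss_decay * y_true)
            return base_loss(decay * y_true, decay * y_predicted)
        return base_loss(y_true, y_predicted)

    return wrapped


# Wrapping a PyTorch loss yields the keyword-augmented signature used below.
l1 = weighted_loss(torch.nn.functional.l1_loss)
```

With neither keyword given, the wrapped function reduces to the base loss unchanged.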

torchdms.loss.l1(y_true, y_predicted, loss_decay=None, exp_target=None)

l1 loss, optionally with loss decay or target exponentiation.

torchdms.loss.mse(y_true, y_predicted, loss_decay=None, exp_target=None)

Mean squared error, optionally with loss decay or target exponentiation.

torchdms.loss.rmse(*args, **kwargs)[source]

Root mean square error, optionally with loss decay or target exponentiation.
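Stripped of the decoration, RMSE is just the square root of the mean squared error. A bare sketch (the real function additionally accepts the loss_decay / exp_target keywords via the decorator above):

```python
import torch


def rmse(y_true, y_predicted):
    # Square root of the mean squared error between the two tensors.
    return torch.sqrt(torch.nn.functional.mse_loss(y_predicted, y_true))
```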

torchdms.loss.sitewise_group_lasso(matrix)[source]

The sum of the 2-norms of the columns.

We omit the square root of the group sizes, as they are all constant in our case.
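Assuming each column of the input matrix is one group (e.g. the coefficients for one site), a minimal sketch of this penalty:

```python
import torch


def sitewise_group_lasso(matrix):
    # Sum of the Euclidean (2-)norms of the columns. The usual group-lasso
    # sqrt-of-group-size factor is omitted, as noted above, because every
    # group has the same size here.
    return torch.linalg.vector_norm(matrix, dim=0).sum()
```

Because the 2-norm is not separable across entries, this penalty drives entire columns to zero at once rather than individual entries.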

torchdms.loss.product_penalty(betas)[source]

Computes the l1 norm of the product of betas across latent dimensions.
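Assuming betas is laid out with one row per latent dimension, the penalty can be sketched as:

```python
import torch


def product_penalty(betas):
    # Elementwise product down the latent dimension (rows), then the l1
    # norm of the result. A term is nonzero only where a coefficient is
    # nonzero in *every* latent dimension, so the penalty discourages the
    # latent dimensions from sharing active coefficients.
    return torch.prod(betas, dim=0).abs().sum()
```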

torchdms.loss.diff_penalty(betas)[source]

Computes the l1 norm of the differences between adjacent betas within each latent dimension.
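With betas again assumed to be (latent dimensions × coefficients), the adjacent-difference penalty can be sketched as:

```python
import torch


def diff_penalty(betas):
    # l1 norm of first differences along each row: penalizes abrupt
    # changes between adjacent beta coefficients, encouraging the
    # coefficient profile of each latent dimension to vary smoothly.
    return (betas[:, 1:] - betas[:, :-1]).abs().sum()
```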

torchdms.loss.sum_diff_penalty(betas)[source]

Computes the l1 norm of the differences between aggregated betas at adjacent sites, for each latent dimension.
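A sketch under stated assumptions: the columns of betas come in consecutive groups of fixed size, one group per site, and "aggregated" means summing absolute values within a site. The alphabet_size argument is a hypothetical illustration parameter, not part of the documented signature.

```python
import torch


def sum_diff_penalty(betas, alphabet_size=20):
    # Assumed layout: one row per latent dimension, with alphabet_size
    # consecutive columns per site. Aggregate |beta| within each site,
    # then take the l1 norm of differences between adjacent sites.
    latent_dim = betas.shape[0]
    per_site = betas.abs().reshape(latent_dim, -1, alphabet_size).sum(dim=2)
    return (per_site[:, 1:] - per_site[:, :-1]).abs().sum()
```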