The loss function is a central part of the training process for deep learning models. In fact, training a deep learning model amounts to minimizing the loss function, which measures the difference between the predicted output during training and the 'label' data.

The loss function of a deep learning model (and, more generally, of any machine learning technique) must be chosen appropriately to achieve good training performance. Loss functions are mathematical functions that quantify the error between the estimated output and the 'label' data. We can also write a custom loss function for a deep learning model and pass it in when compiling. An example custom loss function:

```python
import tensorflow as tf

def my_loss_fn(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    return tf.reduce_mean(squared_difference, axis=-1)  # Note the `axis=-1`: one loss value per sample

model.compile(optimizer='adam', loss=my_loss_fn)
```

Keras already provides many built-in loss functions that can be selected depending on the problem. The loss functions inherently supported by Keras are listed below:

# Probabilistic losses

- BinaryCrossentropy class
- CategoricalCrossentropy class
- SparseCategoricalCrossentropy class
- Poisson class
- binary_crossentropy function
- categorical_crossentropy function
- sparse_categorical_crossentropy function
- poisson function
- KLDivergence class
- kl_divergence function
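To make the math behind these losses concrete, here is a minimal NumPy sketch of what `categorical_crossentropy` computes per sample: the negative log-probability the model assigned to the true class. The clipping epsilon is an assumption added for numerical safety and is not necessarily the exact value Keras uses internally.

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip predictions so log(0) never occurs (eps is an illustrative choice)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    # Sum over the class axis; only the true class's log-probability survives
    return -np.sum(y_true * np.log(y_pred), axis=-1)

y_true = np.array([[0, 1, 0], [0, 0, 1]], dtype=float)   # one-hot labels
y_pred = np.array([[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]])  # softmax outputs
print(categorical_crossentropy(y_true, y_pred))  # ≈ [0.0513, 2.3026]
```

Note that the second sample, where the model assigns only 0.1 to the true class, incurs a much larger loss than the first, where it assigns 0.95.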

# Regression losses

- MeanSquaredError class
- MeanAbsoluteError class
- MeanAbsolutePercentageError class
- MeanSquaredLogarithmicError class
- CosineSimilarity class
- mean_squared_error function
- mean_absolute_error function
- mean_absolute_percentage_error function
- mean_squared_logarithmic_error function
- cosine_similarity function
- Huber class
- huber function
- LogCosh class
- log_cosh function
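Among the regression losses, Huber is worth a closer look because it blends two of the others: it behaves like mean squared error for small errors and like mean absolute error for large ones, which makes it robust to outliers. A minimal NumPy sketch of the per-sample Huber loss, assuming the conventional threshold `delta=1.0`:

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    error = y_true - y_pred
    abs_err = np.abs(error)
    quadratic = 0.5 * np.square(error)           # MSE-like branch for |error| <= delta
    linear = delta * (abs_err - 0.5 * delta)     # MAE-like branch for |error| > delta
    return np.mean(np.where(abs_err <= delta, quadratic, linear), axis=-1)

y_true = np.array([[0.0, 1.0], [0.0, 0.0]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
print(huber(y_true, y_pred))  # [0.18 0.13]
```

Because the linear branch grows only proportionally to the error, a single wild prediction cannot dominate the gradient the way it can with a purely squared loss.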

# Hinge losses for “maximum-margin” classification

- Hinge class
- SquaredHinge class
- CategoricalHinge class
- hinge function
- squared_hinge function
- categorical_hinge function
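The hinge losses assume labels encoded as -1 or 1 rather than 0 or 1, and penalize predictions that are on the wrong side of the margin, or on the right side but not confidently so. A minimal NumPy sketch of the per-sample hinge loss:

```python
import numpy as np

def hinge(y_true, y_pred):
    # y_true is expected to be -1 or 1; loss is zero only when
    # y_true * y_pred >= 1, i.e. the prediction clears the margin
    return np.mean(np.maximum(0.0, 1.0 - y_true * y_pred), axis=-1)

y_true = np.array([[-1.0, 1.0], [1.0, -1.0]])
y_pred = np.array([[-0.3, 0.7], [0.4, 0.6]])
print(hinge(y_true, y_pred))  # [0.5 1.1]
```

In the second sample, the prediction 0.6 has the wrong sign for its label -1, so its term (1.6) is larger than any correct-but-unconfident term can be.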