# Understand tf.nn.l2_loss(): Compute L2 Loss for Deep Learning – TensorFlow Tutorial

December 10, 2019

TensorFlow tf.nn.l2_loss() can help us calculate the L2 loss of a deep learning model, which is a good way to avoid the over-fitting problem. In this tutorial, we will introduce how to use this function to compute the L2 loss for TensorFlow beginners.

## L2 loss

Suppose a neural model contains m weights w_1, w_2, ..., w_m. The L2 loss can be defined as:

L2 = (1/2) * Σ_{i=1}^{m} Σ_{j=1}^{n} w_{ij}^2

where n is the dimension of weight w_i.
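As a quick check of this definition, the double sum can be computed directly with NumPy (a minimal sketch; the weight values here are made up for illustration):

```python
import numpy as np

# Two weights of a hypothetical model (values chosen only for illustration)
w1 = np.array([[1.0, -2.0], [3.0, 0.5]])
w2 = np.array([0.1, 0.2, 0.3])

# L2 loss: half of the sum of squared entries over all weights
l2_loss = 0.5 * sum(np.sum(w ** 2) for w in (w1, w2))
print(l2_loss)
```

Here the squared entries of w1 sum to 14.25 and those of w2 to 0.14, so the L2 loss is about 7.195.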

## Syntax

```python
tf.nn.l2_loss(
    t,
    name=None
)
```

tf.nn.l2_loss() will compute the L2 loss of a tensor t (half the sum of its squared elements, without the square root), which is:

```python
output = sum(t ** 2) / 2
```

Here is an example:

```python
import numpy as np
import tensorflow as tf

x = tf.Variable(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), dtype=tf.float32)

l2 = tf.nn.l2_loss(x)
```

Output the L2 loss:

```python
init = tf.global_variables_initializer()
init_local = tf.local_variables_initializer()

with tf.Session() as sess:
    sess.run([init, init_local])
    print(sess.run([l2]))
```

The l2 loss is: [102.0]
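You can verify this value by hand: the entries of x are 1 through 8, so the sum of squares is 204 and half of it is 102. A plain NumPy sketch that mirrors what tf.nn.l2_loss() computes, without needing a session:

```python
import numpy as np

x = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=np.float32)

# Same formula as tf.nn.l2_loss: sum(t ** 2) / 2
l2 = np.sum(x ** 2) / 2
print(l2)  # 102.0
```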

However, in a deep learning model you will usually scale this loss by a penalty coefficient λ before adding it to the training loss. You can learn more in this tutorial:

Understand L2 Regularization in Deep Learning: A Beginner Guide
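For instance, combining a data loss with a weighted L2 penalty can be sketched as follows (the data loss value and λ here are made-up numbers for illustration):

```python
import numpy as np

data_loss = 0.8   # hypothetical training loss (e.g. a cross-entropy value)
lam = 0.01        # hypothetical penalty coefficient λ

w = np.array([[1.0, 2.0], [3.0, 4.0]])
l2 = 0.5 * np.sum(w ** 2)   # = 15.0

# Total loss = data loss + λ * L2 loss
total_loss = data_loss + lam * l2
print(total_loss)
```

With these numbers, the penalty term is 0.01 * 15.0 = 0.15, giving a total loss of about 0.95. A larger λ pushes the weights toward smaller values.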

Moreover, a neural model contains a series of weights. To learn how to get all of these weights easily and compute their total L2 loss, you can refer to this tutorial:

Multi-layer Neural Network Implements L2 Regularization in TensorFlow
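In TensorFlow 1.x, a common pattern for this is tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()]). The arithmetic behind that pattern can be sketched in plain NumPy (the weight values below are invented stand-ins for a model's trainable variables):

```python
import numpy as np

# Stand-ins for a model's trainable variables (values are made up)
trainable_weights = [
    np.array([[0.5, -1.0], [2.0, 0.0]]),  # e.g. a layer kernel
    np.array([0.1, -0.2]),                # e.g. a layer bias
]

# Equivalent of tf.add_n([tf.nn.l2_loss(v) for v in ...]):
# sum half the squared entries of every weight
l2_total = sum(0.5 * np.sum(w ** 2) for w in trainable_weights)
print(l2_total)
```

Here the kernel contributes 2.625 and the bias 0.025, so the total L2 loss is 2.65.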