# tf.fill() example: Creates a Tensor Filled with a Scalar Value – TensorFlow Tutorial

October 19, 2020

The TensorFlow tf.fill() function creates a tensor filled with a scalar value. In this tutorial, we will introduce how to use it in a TensorFlow application.

## Syntax

tf.fill() is defined as:

tf.fill(
    dims,
    value,
    name=None
)

Here dims is the shape of the created tensor and value is the scalar used to fill it.

Here is a simple example to show how to use it.

# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9],
                     [9, 9, 9]]
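As a point of comparison (my own illustration, not part of the tutorial's code), NumPy's np.full behaves much like tf.fill, which makes the semantics easy to check locally:

```python
import numpy as np

# np.full is the NumPy analogue of tf.fill: shape first, then the fill value.
t = np.full((2, 3), 9)
print(t)
# Like tf.fill, the result's dtype is inferred from the fill value
# (an integer fill value yields an integer tensor).
print(t.dtype)
```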

However, we often use the tf.fill() function together with tf.where() to filter out some values.

In order to understand how to use tf.where(), you can refer to:

Understand TensorFlow tf.where() with Examples – TensorFlow Tutorial
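As a quick illustration of the condition/x/y pattern that tf.where() follows, here is the analogous NumPy call (an assumption on my part for demonstration; the tutorial itself uses tf.where below):

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0, -4.0])
# Keep positive entries, replace the rest with a large negative constant --
# the same pattern as tf.where(tf.greater(v1, 0), v1, mask) used later.
b = np.where(a > 0, a, -1e9)
# Positive entries are kept; negative entries become -1e9.
print(b)
```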

For example:

import tensorflow as tf
import numpy as np

v1 = tf.Variable(tf.random_uniform([5, 7], -0.01, 0.01), name='r_1')

We have created a tensor v1 with shape (5, 7). Its value (rounded) is:

[[ 0.001  0.005 -0.001  0.009  0.01  -0.009 -0.008]
 [ 0.01  -0.009  0.009 -0.004  0.005 -0.01   0.01 ]
 [ 0.009 -0.008  0.008 -0.007 -0.009  0.     0.005]
 [ 0.004 -0.004  0.007  0.003  0.008  0.002  0.009]
 [ 0.003  0.004 -0.009  0.001  0.008  0.006  0.001]]

However, if you want to ignore the negative values when computing attention over v1 along axis = 1, you can do it like this:

mask = tf.fill(tf.shape(v1), -1e9)
v2 = tf.where(tf.greater(v1, 0), v1, mask)

We can use tf.fill() and tf.where() together to suppress the effect of the negative values in v1.

v2 is:

[[ 9.644e-04  5.439e-03 -1.000e+09  8.896e-03  9.980e-03 -1.000e+09
  -1.000e+09]
 [ 9.961e-03 -1.000e+09  8.600e-03 -1.000e+09  5.134e-03 -1.000e+09
   9.845e-03]
 [ 9.483e-03 -1.000e+09  8.226e-03 -1.000e+09 -1.000e+09  3.032e-04
   4.754e-03]
 [ 4.192e-03 -1.000e+09  7.121e-03  2.855e-03  7.677e-03  1.754e-03
   9.047e-03]
 [ 3.492e-03  4.165e-03 -1.000e+09  1.324e-03  8.461e-03  6.170e-03
   5.303e-04]]

Then we can compute the attention values of v1 along axis = 1.

att = tf.nn.softmax(v2, axis=1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())
    np.set_printoptions(precision=3, suppress=True)
    print(sess.run(v1))
    print(sess.run(v2))
    print(sess.run(att))

The attention att will be:

[[0.249 0.25  0.    0.251 0.251 0.    0.   ]
 [0.25  0.    0.25  0.    0.249 0.    0.25 ]
 [0.251 0.    0.251 0.    0.    0.249 0.25 ]
 [0.166 0.    0.167 0.166 0.167 0.166 0.167]
 [0.167 0.167 0.    0.166 0.167 0.167 0.166]]


You will find that the attention value of each negative number in v1 is 0.
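The reason this works: softmax exponentiates each entry, and exp of a value near -1e9 underflows to 0, so masked positions receive exactly zero weight while the remaining weights still sum to 1. A small NumPy sketch of the idea (my own illustration, not part of the tutorial's code):

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# One row with two masked (-1e9) positions, mimicking v2 above.
row = np.array([0.005, -1e9, 0.003, -1e9])
att = softmax(row)
print(att)        # masked positions get weight 0
print(att.sum())  # weights still sum to 1
```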