This blog post shows how to use tf.stop_gradient to restrict the flow of gradients through certain parts of a network.

There are several scenarios where you need to train a particular part of a network while keeping the rest of it frozen in its previous state. This is exactly where tf.stop_gradient comes in handy: it acts as the identity in the forward pass but blocks gradients in the backward pass, so any variable that only contributes to the loss through an operation wrapped in tf.stop_gradient will not be updated during backpropagation.

To give an example, let us define a single-hidden-layer neural network with linear activations.
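The original snippet is not preserved here, so the following is a minimal reconstruction under stated assumptions: TF2-style eager code, variables named w1 and w2 (matching the discussion below), and dummy input/target tensors x and y.

```python
import tensorflow as tf

# Weights initialized to ones so the updates are easy to spot:
# 2 input units -> 3 hidden units -> 4 output units, linear activations.
w1 = tf.Variable(tf.ones([2, 3]), name="w1")
w2 = tf.Variable(tf.ones([3, 4]), name="w2")

x = tf.ones([1, 2])   # dummy input
y = tf.zeros([1, 4])  # dummy target

def forward():
    # Wrapping the first matmul in tf.stop_gradient blocks any
    # gradient from flowing back into w1.
    hidden = tf.stop_gradient(tf.matmul(x, w1))
    return tf.matmul(hidden, w2)
```

With all-ones weights and an all-ones input, each hidden unit is 2 and each output unit is 6, which makes it easy to verify the arithmetic by hand.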

This is equivalent to a single-hidden-layer neural network with 2 input, 3 hidden, and 4 output units. I am using absolute error and the gradient descent optimizer for demonstration purposes. For the same reason, I have initialized all the weights to ones, so that the changes are easy to see.

Now we can run the optimizer with the following block of code and see what happens to the weights.
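The original training snippet is not shown; a sketch of one optimizer step under the same assumptions (TF2 eager execution, with tf.keras.optimizers.SGD standing in for the TF1-era gradient descent optimizer) might look like this. Note that tape.gradient returns None for w1, because tf.stop_gradient cuts the path back to it.

```python
import tensorflow as tf

w1 = tf.Variable(tf.ones([2, 3]))  # frozen via tf.stop_gradient below
w2 = tf.Variable(tf.ones([3, 4]))
x, y = tf.ones([1, 2]), tf.zeros([1, 4])

opt = tf.keras.optimizers.SGD(learning_rate=0.01)

with tf.GradientTape() as tape:
    hidden = tf.stop_gradient(tf.matmul(x, w1))
    out = tf.matmul(hidden, w2)
    loss = tf.reduce_sum(tf.abs(out - y))  # absolute error

grads = tape.gradient(loss, [w1, w2])
# The gradient for w1 is None, so filter it out before applying.
pairs = [(g, v) for g, v in zip(grads, [w1, w2]) if g is not None]
opt.apply_gradients(pairs)

print(w1.numpy())  # unchanged: still all ones
print(w2.numpy())  # updated by the gradient descent step
```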

The output we get is as follows:

As you can see, since the operation involving w1 was wrapped in tf.stop_gradient, only w2 was updated with gradients after the optimizer step; w1 was left unchanged.

The full code for this demonstration:
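The original full listing is not preserved; below is a complete, runnable reconstruction under the assumptions used throughout this post (TF2 eager execution, w1/w2 initialized to ones, absolute-error loss, SGD in place of the TF1 GradientDescentOptimizer).

```python
import tensorflow as tf

# Single hidden layer, linear activations:
# 2 inputs -> 3 hidden -> 4 outputs, all weights initialized to ones.
w1 = tf.Variable(tf.ones([2, 3]), name="w1")
w2 = tf.Variable(tf.ones([3, 4]), name="w2")

x = tf.ones([1, 2])   # dummy input
y = tf.zeros([1, 4])  # dummy target

opt = tf.keras.optimizers.SGD(learning_rate=0.01)

with tf.GradientTape() as tape:
    # The w1 matmul is wrapped in tf.stop_gradient, so no gradient
    # flows back into w1 during backpropagation.
    hidden = tf.stop_gradient(tf.matmul(x, w1))
    output = tf.matmul(hidden, w2)
    loss = tf.reduce_sum(tf.abs(output - y))  # absolute error

grads = tape.gradient(loss, [w1, w2])
# w1's gradient comes back as None; apply updates only where one exists.
pairs = [(g, v) for g, v in zip(grads, [w1, w2]) if g is not None]
opt.apply_gradients(pairs)

print("w1 after step:\n", w1.numpy())  # unchanged
print("w2 after step:\n", w2.numpy())  # updated
```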