In this tutorial, we will introduce MagFace to beginners. It provides a good way to measure the quality of a given face image.
In this tutorial, we will introduce what the Bahdanau attention mechanism is and how to implement it.
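The heart of Bahdanau (additive) attention is the scoring function score(q, k) = vᵀ tanh(W_q q + W_k k), followed by a softmax over time. A minimal NumPy sketch follows; the names `W_q`, `W_k`, `v` and the shapes are illustrative assumptions, and a real implementation would learn these parameters inside an encoder-decoder:

```python
import numpy as np

def bahdanau_attention(query, keys, W_q, W_k, v):
    """Additive (Bahdanau) attention sketch.
    query: (d_q,) decoder state; keys: (T, d_k) encoder states.
    W_q: (d_a, d_q), W_k: (d_a, d_k), v: (d_a,) are learnable in practice.
    Returns (context, attention_weights)."""
    # score_t = v^T tanh(W_q q + W_k k_t), computed for all t at once
    scores = np.tanh(keys @ W_k.T + W_q @ query) @ v      # (T,)
    # softmax over the time axis (numerically stabilized)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ keys                               # (d_k,)
    return context, weights
```

The context vector is a weight-averaged summary of the encoder states, which the decoder consumes at each step.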
Zoneout is proposed in the paper Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations, and it is also used in Tacotron 2. In this tutorial, we will introduce what it is and how to implement it using TensorFlow.
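The tutorial itself targets TensorFlow, but the idea fits in a few lines: during training, each hidden unit keeps its previous value with probability `rate` instead of being dropped to zero; at inference, the expected value (a convex mix of old and new states) is used. A NumPy sketch under those assumptions:

```python
import numpy as np

def zoneout(h_prev, h_new, rate, training=True, rng=None):
    """Zoneout sketch: randomly preserve units of the previous hidden state.
    Training: each unit keeps h_prev with probability `rate`, else takes h_new.
    Inference: deterministic expectation rate*h_prev + (1-rate)*h_new."""
    if training:
        rng = rng or np.random.default_rng()
        mask = rng.random(h_prev.shape) < rate  # True -> preserve previous value
        return np.where(mask, h_prev, h_new)
    return rate * h_prev + (1.0 - rate) * h_new
```

In an RNN cell, this wrapper is applied to the hidden (and cell) state at every timestep, replacing ordinary dropout on the recurrent connection.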
Smoothing normalization is proposed in the paper Attention-Based Models for Speech Recognition. In this tutorial, we will introduce how to implement it in TensorFlow.
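Smoothing normalization replaces the softmax over attention scores with a sigmoid-based one, a_i = σ(e_i) / Σ_j σ(e_j), which lets the weights spread over more positions. The tutorial implements this in TensorFlow; a minimal NumPy sketch of the same formula:

```python
import numpy as np

def smoothing_normalization(scores):
    """Sigmoid-normalized attention weights: a_i = sigmoid(e_i) / sum_j sigmoid(e_j).
    Unlike softmax, large score gaps do not collapse the weights onto one position."""
    s = 1.0 / (1.0 + np.exp(-scores))      # elementwise sigmoid
    return s / s.sum(axis=-1, keepdims=True)
```

With softmax, a score 20 above the rest would take essentially all the mass; with smoothing normalization, saturated sigmoids give near-uniform weights instead.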
In deep learning, we usually place a dropout layer after a dense layer. However, a question arises: should the dropout layer be placed before or after the activation function?
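The common ordering is dense, then activation, then dropout. A small NumPy sketch of that ordering with inverted dropout (scaling at train time so no rescaling is needed at inference); the function name and shapes are illustrative:

```python
import numpy as np

def dense_relu_dropout(x, W, b, drop_rate, training=True, rng=None):
    """Sketch of the usual ordering: dense -> ReLU -> dropout."""
    h = np.maximum(x @ W + b, 0.0)             # dense layer + ReLU activation
    if training and drop_rate > 0:
        rng = rng or np.random.default_rng()
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)       # inverted dropout keeps E[h] unchanged
    return h
```

For ReLU specifically the two orderings are equivalent, since ReLU commutes with multiplication by a non-negative mask; for activations like sigmoid or tanh they differ, which is what the discussion above is about.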
In this tutorial, we will introduce post-norm and pre-norm residual units, which are often used to improve the Transformer in deep learning.
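The difference between the two residual units is only where layer normalization sits relative to the residual connection: post-norm computes LN(x + sublayer(x)), pre-norm computes x + sublayer(LN(x)). A NumPy sketch (no learnable gain/bias, for brevity):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize over the last axis (learnable scale/shift omitted in this sketch)."""
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_norm_block(x, sublayer):
    # Original Transformer ordering: normalize AFTER the residual add
    return layer_norm(x + sublayer(x))

def pre_norm_block(x, sublayer):
    # Pre-norm ordering: normalize the sublayer input, keep the residual path clean
    return x + sublayer(layer_norm(x))
```

In pre-norm the identity path from input to output is never normalized, which is commonly credited with making deep Transformers easier to train.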
Dropout and batch normalization are two well-recognized approaches to tackling overfitting in multilayered neural networks. Which one is better? In this tutorial, we will discuss this topic.
GE2E loss is proposed in the paper Generalized End-to-End Loss for Speaker Verification. In this tutorial, we will introduce it for beginners.
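In GE2E, a batch holds N speakers with M utterance embeddings each; each embedding is scored against every speaker centroid (excluding itself from its own centroid) and pushed toward its own speaker via a softmax cross-entropy. A NumPy sketch of the softmax variant, with the learnable scale `w` and bias `b` frozen to illustrative constants:

```python
import numpy as np

def ge2e_softmax_loss(emb, w=10.0, b=-5.0):
    """GE2E softmax-loss sketch.
    emb: (N, M, d) L2-normalized embeddings, N speakers x M utterances.
    w, b are learnable in the paper; fixed here for illustration."""
    N, M, _ = emb.shape
    centroids = emb.mean(axis=1)                       # (N, d)
    loss = 0.0
    for j in range(N):
        for i in range(M):
            e = emb[j, i]
            # leave-one-out centroid for the utterance's own speaker
            c_self = (centroids[j] * M - e) / (M - 1)
            sims = np.empty(N)
            for k in range(N):
                c = c_self if k == j else centroids[k]
                sims[k] = w * np.dot(e, c / np.linalg.norm(c)) + b
            # cross-entropy: the true speaker index is j
            loss += -sims[j] + np.log(np.exp(sims).sum())
    return loss / (N * M)
```

A practical implementation vectorizes the similarity matrix and learns `w` and `b` jointly with the speaker encoder.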
In this tutorial, we will introduce how to compute the KL (Kullback–Leibler) divergence between two multivariate Gaussian distributions.
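For two Gaussians N(μ₀, Σ₀) and N(μ₁, Σ₁) in k dimensions there is a closed form: KL = ½ [ tr(Σ₁⁻¹Σ₀) + (μ₁−μ₀)ᵀΣ₁⁻¹(μ₁−μ₀) − k + ln(det Σ₁ / det Σ₀) ]. A direct NumPy translation of that formula:

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for multivariate Gaussians, closed form."""
    k = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)          # tr(S1^-1 S0)
                  + diff @ S1_inv @ diff         # Mahalanobis term
                  - k                            # dimensionality
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

Note the divergence is asymmetric: `kl_mvn(mu0, S0, mu1, S1)` generally differs from the reversed call. For high dimensions, `slogdet` and a Cholesky solve are numerically safer than `det` and `inv`.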
Accuracy, precision, recall and F1-score are widely used evaluation metrics in machine learning. In this tutorial, we will discuss how to compute them.
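All four metrics come from the confusion-matrix counts: accuracy = (TP+TN)/total, precision = TP/(TP+FP), recall = TP/(TP+FN), and F1 is the harmonic mean of precision and recall. A small pure-Python sketch for binary labels (the function name is illustrative):

```python
def prf1(y_true, y_pred):
    """Compute accuracy, precision, recall, F1 for binary labels (0/1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # guard against zero division
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

For example, with y_true = [1,1,1,0,0,0] and y_pred = [1,1,0,1,0,0] we get TP=2, FP=1, FN=1, TN=2, so accuracy = 4/6 and precision = recall = F1 = 2/3.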