Gated self-attention is an improvement on the self-attention mechanism. In this tutorial, we will explain it for deep learning beginners.
The maxout activation function was proposed in paper <>. In this tutorial, we will introduce it with some examples.
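As a quick preview: a maxout unit applies k independent affine transformations to the input and takes the elementwise maximum over the k results. A minimal NumPy sketch with illustrative (random) weights and sizes, not values from the paper:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout activation: elementwise max over k affine pieces.

    x: (d_in,), W: (k, d_in, d_out), b: (k, d_out)
    """
    z = np.einsum('kio,i->ko', W, x) + b  # (k, d_out): k affine pieces
    return z.max(axis=0)                  # (d_out,): max over the pieces

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4, 2))  # k=3 pieces, d_in=4, d_out=2
b = rng.normal(size=(3, 2))
y = maxout(x, W, b)
print(y.shape)  # (2,)
```

Because the output is a max of linear functions, maxout is piecewise linear and convex in x, and it learns its own activation shape instead of fixing one like ReLU does.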
Multi-head attention is very popular in NLP. However, it also has some problems. In this tutorial, we will discuss how to implement it in TensorFlow.
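Before the TensorFlow version, it can help to see the computation in plain NumPy: project the input into per-head queries, keys, and values, run scaled dot-product attention in each head, then concatenate and project. This is a minimal single-sequence sketch with random placeholder weights, not a drop-in layer:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """x: (seq, d_model); Wq/Wk/Wv/Wo: (d_model, d_model) placeholders."""
    seq, d_model = x.shape
    d_head = d_model // n_heads

    def split_heads(h):  # (seq, d_model) -> (n_heads, seq, d_head)
        return h.reshape(seq, n_heads, d_head).transpose(1, 0, 2)

    q, k, v = split_heads(x @ Wq), split_heads(x @ Wk), split_heads(x @ Wv)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    out = softmax(scores) @ v                            # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq, d_model)   # concatenate heads
    return out @ Wo

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                       # 5 tokens, d_model=8
Wq, Wk, Wv, Wo = (rng.normal(size=(8, 8)) for _ in range(4))
y = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads=2)
print(y.shape)  # (5, 8)
```

Each head attends over the same sequence but in its own d_head-dimensional subspace, which is the point of using several heads instead of one.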
The attention mechanism is an important method for improving the performance of deep learning models. There are two forms of attention; which one should we use? In this tutorial, we will share some tips.
In deep learning, we often use a vector to represent a feature. However, once we have obtained several feature vectors, how should we fuse them? In this tutorial, we will discuss this topic.
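To make the question concrete, here is a minimal NumPy sketch of three common fusion strategies: concatenation, elementwise sum, and a learned gate. The gate weights here are random placeholders standing in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=8)  # feature vector 1
b = rng.normal(size=8)  # feature vector 2

# 1. Concatenation: keeps all information, doubles the dimension.
concat = np.concatenate([a, b])                  # shape (16,)

# 2. Elementwise sum: cheap, assumes both vectors share a feature space.
summed = a + b                                   # shape (8,)

# 3. Gated fusion: a sigmoid gate learns how much of each vector to keep.
Wg = rng.normal(size=(8, 16))                    # placeholder gate weights
g = 1.0 / (1.0 + np.exp(-(Wg @ concat)))         # gate values in (0, 1)
gated = g * a + (1.0 - g) * b                    # shape (8,)

print(concat.shape, summed.shape, gated.shape)
```

Concatenation preserves the most information; the sum and the gate keep the dimension fixed, with the gate letting the model weigh the two sources per dimension.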
Out-of-vocabulary (OOV) words are an important problem in NLP. In this tutorial, we will introduce how to handle words that are out of vocabulary.
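The simplest OOV strategy is to map every word not in the vocabulary to a special `<unk>` token. A small self-contained sketch (the `min_count` threshold and toy corpus are illustrative):

```python
from collections import Counter

def build_vocab(tokens, min_count=2, unk="<unk>"):
    """Keep words seen at least min_count times; reserve id 0 for <unk>."""
    counts = Counter(tokens)
    vocab = {unk: 0}
    for tok, c in counts.items():
        if c >= min_count:
            vocab[tok] = len(vocab)
    return vocab

def encode(tokens, vocab, unk="<unk>"):
    """Map each token to its id, falling back to the <unk> id for OOV words."""
    return [vocab.get(t, vocab[unk]) for t in tokens]

corpus = "the cat sat on the mat the cat".split()
vocab = build_vocab(corpus)           # only "the" and "cat" pass min_count=2
print(encode("the dog sat".split(), vocab))  # [1, 0, 0]: "dog" and "sat" are OOV
```

More advanced approaches (subword units such as BPE, or character-level models) avoid the information loss of collapsing all rare words into one token.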
Position is an important feature for deep learning models. For example, in aspect-level sentiment analysis, we can use word positions to improve classification performance. In this tutorial, we will introduce how to use position features in a deep learning model.
Sentiment lexicons are important resources for improving the performance of sentiment analysis. In this tutorial, we will list some useful sentiment lexicons.
Can we compute a probability from a distance? In this tutorial, we will discuss the relationship between Euclidean distance and the Gaussian distribution, which will help us convert a distance into a probability.
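As a preview of the idea: plugging a squared Euclidean distance into the Gaussian exponent exp(-d²/2σ²) maps small distances to values near 1 and large distances toward 0; normalizing those values over a set of candidates yields a proper probability distribution. A minimal sketch, with σ chosen arbitrarily:

```python
import numpy as np

def distances_to_probs(dists, sigma=1.0):
    """Turn distances into probabilities via a Gaussian kernel + normalization."""
    scores = np.exp(-np.square(dists) / (2.0 * sigma ** 2))  # Gaussian kernel
    return scores / scores.sum()                             # normalize to sum to 1

d = np.array([0.5, 1.0, 2.0])
p = distances_to_probs(d)
print(p.sum())  # 1.0; smaller distances get larger probability
```

The bandwidth σ controls how sharply probability concentrates on the nearest candidates: small σ approaches a hard nearest-neighbor choice, large σ approaches a uniform distribution.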
The multivariable chain rule is a good way to analyze the derivatives of a machine learning model. In this tutorial, we will introduce it for machine learning beginners.
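As a small taste: for the toy model y = (w·x + b)², setting u = w·x + b gives y = u², so the chain rule yields dy/dw = 2u·(du/dw) = 2(w·x + b)·x. We can sanity-check that against a finite difference (the function and values are purely illustrative):

```python
def y(w, x, b):
    return (w * x + b) ** 2

# Chain rule: u = w*x + b, y = u**2, so dy/dw = 2*u * du/dw = 2*(w*x + b) * x
def dy_dw(w, x, b):
    return 2.0 * (w * x + b) * x

w, x, b, eps = 1.5, 2.0, 0.5, 1e-6
analytic = dy_dw(w, x, b)                                   # 2 * 3.5 * 2 = 14.0
numeric = (y(w + eps, x, b) - y(w - eps, x, b)) / (2 * eps) # central difference
print(analytic, numeric)  # both approximately 14.0
```

This decompose-then-multiply pattern is exactly what backpropagation automates across every layer of a network.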