"Introduction (Hugging Face Transformers Glossary)"|"
<p>This deck has taken the definitions outlined in <a href=""https://huggingface.co/transformers/glossary.html"">Hugging Face's Transformers Glossary</a> and put them into a form which can be easily learnt/revised using <a href=""https://apps.ankiweb.net/"">Anki</a>, a cross-platform app specifically designed for long-term knowledge retention.</p>
<h2>Notes</h2>
<p>Please note the modifications which have been made & where you can find updates.</p>
<ol align=""left"">
<li>The front of every card has ""(Hugging Face Transformers Glossary)"" appended to the end so that if you already have any of these words in your collection, the Hugging Face Glossary definition will still be added when importing.</li>
<li>The original relative URLs have been made into full URLs so they are clickable within Anki.</li>
<li>Lists have been aligned to the left to make them easier to read in Anki, which automatically centers content.</li>
<li>Any updates, translations or corrections to the deck will be available at <a href=""https://github.com/darigovresearch/Hugging-Face-Transformers-Glossary-Flashcards"">https://github.com/darigovresearch/Hugging-Face-Transformers-Glossary-Flashcards</a> so do return periodically to check if you have the latest version.</li>
</ol>
<p>Feel free to share the deck and give the repository a star so more people are likely to see this work and can get the most out of it.</p>
<h2>License</h2>
<p>Unless otherwise specified, everything in this deck is covered by the following license:</p>
<p>This work is based on <em><strong>Hugging Face's Transformers Glossary</strong></em> by <a href=""https://huggingface.co/transformers/glossary.html"">Hugging Face</a>. Content is available under an <a href=""https://github.com/huggingface/transformers/blob/master/LICENSE"">Apache 2.0 license</a>.</p>
<p>To see this work in full go to <a href=""https://huggingface.co/transformers/glossary.html"">https://huggingface.co/transformers/glossary.html</a></p>
"
"Autoencoding models (Hugging Face Transformers Glossary)"|"See MLM"
"Autoregressive models (Hugging Face Transformers Glossary)"|"See CLM"
"CLM (Hugging Face Transformers Glossary)"|"Causal language modeling, a pretraining task where the model reads the texts in order and has to predict the
next word. It’s usually done by reading the whole sentence but using a mask inside the model to hide the future
tokens at a certain timestep."
"Deep learning (Hugging Face Transformers Glossary)"|"Machine learning algorithms which uses neural networks with several layers."
"MLM (Hugging Face Transformers Glossary)"|"Masked language modeling, a pretraining task where the model sees a corrupted version of the texts, usually done
by masking some tokens randomly, and has to predict the original text."
"Multimodal (Hugging Face Transformers Glossary)"|"A task that combines texts with another kind of inputs (for instance images)."
"NLG (Hugging Face Transformers Glossary)"|"Natural language generation, all tasks related to generating text ( for instance talk with transformers,
translation)"
"NLP (Hugging Face Transformers Glossary)"|"Natural language processing, a generic way to say “deal with texts”."
"NLU (Hugging Face Transformers Glossary)"|"Natural language understanding, all tasks related to understanding what is in a text (for instance classifying
the whole text, individual words)"
"Pretrained model (Hugging Face Transformers Glossary)"|"A model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods
involve a self-supervised objective, which can be reading the text and trying to predict the next word (see CLM) or
masking some words and trying to predict them (see MLM)."
"RNN (Hugging Face Transformers Glossary)"|"Recurrent neural network, a type of model that uses a loop over a layer to process texts."
"Self-attention (Hugging Face Transformers Glossary)"|"Each element of the input finds out which other elements of the input they should attend to."
"seq2seq or sequence-to-sequence (Hugging Face Transformers Glossary)"|"Models that generate a new sequence from an input, like translation models, or
summarization models (such as <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/bart.html""><span class=""doc"">Bart</span></a> or <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/t5.html""><span class=""doc"">T5</span></a>)."
"Token (Hugging Face Transformers Glossary)"|"A part of a sentence, usually a word, but can also be a subword (non-common words are often split in subwords)
or a punctuation symbol."
"Transformer (Hugging Face Transformers Glossary)"|"Self-attention based deep learning model architecture."
"Model inputs<a class=""headerlink"" href=""https://huggingface.co/transformers/glossary.html#model-inputs"" title=""Permalink to this headline"">¶</a> (Hugging Face Transformers Glossary)"|"
<p>Every model is different yet bears similarities with the others. Therefore, most models use the same inputs, which are
detailed here alongside usage examples.</p>"
"Input IDs<a class=""headerlink"" href=""https://huggingface.co/transformers/glossary.html#input-ids"" title=""Permalink to this headline"">¶</a> (Hugging Face Transformers Glossary)"|"
<p>The input ids are often the only required parameters to be passed to the model as input. <em>They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model</em>.</p>
<p>Each tokenizer works differently but the underlying mechanism remains the same. Here’s an example using the BERT
tokenizer, which is a <a class=""reference external"" href=""https://arxiv.org/pdf/1609.08144.pdf"">WordPiece</a> tokenizer:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""kn"">from</span> <span class=""nn"">transformers</span> <span class=""kn"">import</span> <span class=""n"">BertTokenizer</span>
<span class=""gp"">>>> </span><span class=""n"">tokenizer</span> <span class=""o"">=</span> <span class=""n"">BertTokenizer</span><span class=""o"">.</span><span class=""n"">from_pretrained</span><span class=""p"">(</span><span class=""s2"">"bert-base-cased"</span><span class=""p"">)</span>
<span class=""gp"">>>> </span><span class=""n"">sequence</span> <span class=""o"">=</span> <span class=""s2"">"A Titan RTX has 24GB of VRAM"</span>
</pre></div>
</div>
<p>The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""n"">tokenized_sequence</span> <span class=""o"">=</span> <span class=""n"">tokenizer</span><span class=""o"">.</span><span class=""n"">tokenize</span><span class=""p"">(</span><span class=""n"">sequence</span><span class=""p"">)</span>
</pre></div>
</div>
<p>The tokens are either words or subwords. Here, for instance, “VRAM” wasn’t in the model vocabulary, so it’s been split
into “V”, “RA” and “M”. To indicate those tokens are not separate words but parts of the same word, a double-hash prefix
is added for “RA” and “M”:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""nb"">print</span><span class=""p"">(</span><span class=""n"">tokenized_sequence</span><span class=""p"">)</span>
<span class=""go"">['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']</span>
</pre></div>
</div>
<p>These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding
the sentence to the tokenizer, which leverages the Rust implementation of <a class=""reference external"" href=""https://github.com/huggingface/tokenizers"">huggingface/tokenizers</a> for peak performance.</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""n"">inputs</span> <span class=""o"">=</span> <span class=""n"">tokenizer</span><span class=""p"">(</span><span class=""n"">sequence</span><span class=""p"">)</span>
</pre></div>
</div>
<p>The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key “input_ids”:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""n"">encoded_sequence</span> <span class=""o"">=</span> <span class=""n"">inputs</span><span class=""p"">[</span><span class=""s2"">"input_ids"</span><span class=""p"">]</span>
<span class=""gp"">>>> </span><span class=""nb"">print</span><span class=""p"">(</span><span class=""n"">encoded_sequence</span><span class=""p"">)</span>
<span class=""go"">[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]</span>
</pre></div>
</div>
<p>Note that the tokenizer automatically adds “special tokens” (if the associated model relies on them) which are special
IDs the model sometimes uses.</p>
<p>If we decode the previous sequence of ids,</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""n"">decoded_sequence</span> <span class=""o"">=</span> <span class=""n"">tokenizer</span><span class=""o"">.</span><span class=""n"">decode</span><span class=""p"">(</span><span class=""n"">encoded_sequence</span><span class=""p"">)</span>
</pre></div>
</div>
<p>we will see</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""nb"">print</span><span class=""p"">(</span><span class=""n"">decoded_sequence</span><span class=""p"">)</span>
<span class=""go"">[CLS] A Titan RTX has 24GB of VRAM [SEP]</span>
</pre></div>
</div>
<p>because this is the way a <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel"" title=""transformers.BertModel""><code class=""xref py py-class docutils literal notranslate""><span class=""pre"">BertModel</span></code></a> is going to expect its inputs.</p>
"
"Attention mask<a class=""headerlink"" href=""https://huggingface.co/transformers/glossary.html#attention-mask"" title=""Permalink to this headline"">¶</a> (Hugging Face Transformers Glossary)"|"
<p>The attention mask is an optional argument used when batching sequences together. This argument indicates to the model
which tokens should be attended to, and which should not.</p>
<p>For example, consider these two sequences:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""kn"">from</span> <span class=""nn"">transformers</span> <span class=""kn"">import</span> <span class=""n"">BertTokenizer</span>
<span class=""gp"">>>> </span><span class=""n"">tokenizer</span> <span class=""o"">=</span> <span class=""n"">BertTokenizer</span><span class=""o"">.</span><span class=""n"">from_pretrained</span><span class=""p"">(</span><span class=""s2"">"bert-base-cased"</span><span class=""p"">)</span>
<span class=""gp"">>>> </span><span class=""n"">sequence_a</span> <span class=""o"">=</span> <span class=""s2"">"This is a short sequence."</span>
<span class=""gp"">>>> </span><span class=""n"">sequence_b</span> <span class=""o"">=</span> <span class=""s2"">"This is a rather long sequence. It is at least longer than the sequence A."</span>
<span class=""gp"">>>> </span><span class=""n"">encoded_sequence_a</span> <span class=""o"">=</span> <span class=""n"">tokenizer</span><span class=""p"">(</span><span class=""n"">sequence_a</span><span class=""p"">)[</span><span class=""s2"">"input_ids"</span><span class=""p"">]</span>
<span class=""gp"">>>> </span><span class=""n"">encoded_sequence_b</span> <span class=""o"">=</span> <span class=""n"">tokenizer</span><span class=""p"">(</span><span class=""n"">sequence_b</span><span class=""p"">)[</span><span class=""s2"">"input_ids"</span><span class=""p"">]</span>
</pre></div>
</div>
<p>The encoded versions have different lengths:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""nb"">len</span><span class=""p"">(</span><span class=""n"">encoded_sequence_a</span><span class=""p"">),</span> <span class=""nb"">len</span><span class=""p"">(</span><span class=""n"">encoded_sequence_b</span><span class=""p"">)</span>
<span class=""go"">(8, 19)</span>
</pre></div>
</div>
<p>Therefore, we can’t put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.</p>
<p>In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
it to pad like this:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""n"">padded_sequences</span> <span class=""o"">=</span> <span class=""n"">tokenizer</span><span class=""p"">([</span><span class=""n"">sequence_a</span><span class=""p"">,</span> <span class=""n"">sequence_b</span><span class=""p"">],</span> <span class=""n"">padding</span><span class=""o"">=</span><span class=""kc"">True</span><span class=""p"">)</span>
</pre></div>
</div>
<p>We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""n"">padded_sequences</span><span class=""p"">[</span><span class=""s2"">"input_ids"</span><span class=""p"">]</span>
<span class=""go"">[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]</span>
</pre></div>
</div>
<p>This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the
position of the padded indices so that the model does not attend to them. For the <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/bert.html#transformers.BertTokenizer"" title=""transformers.BertTokenizer""><code class=""xref py py-class docutils literal notranslate""><span class=""pre"">BertTokenizer</span></code></a>,
<code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">1</span></code> indicates a value that should be attended to, while <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">0</span></code> indicates a padded value. This attention mask is
in the dictionary returned by the tokenizer under the key “attention_mask”:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""n"">padded_sequences</span><span class=""p"">[</span><span class=""s2"">"attention_mask"</span><span class=""p"">]</span>
<span class=""go"">[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]</span>
</pre></div>
</div>
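<p>As a minimal sketch (not part of the original glossary), assuming PyTorch is installed and reusing the tokenizer and sequences defined above, the tokenizer can return tensors directly and the resulting dictionary, including the attention mask, can be unpacked into the model call:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre>>>> from transformers import BertModel
>>> model = BertModel.from_pretrained('bert-base-cased')
>>> padded = tokenizer([sequence_a, sequence_b], padding=True, return_tensors='pt')
>>> padded['attention_mask'].shape  # one mask row per sequence in the batch
torch.Size([2, 19])
>>> outputs = model(**padded)  # the mask tells the model to ignore the padded positions
</pre></div>
</div>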
"
"Token Type IDs<a class=""headerlink"" href=""https://huggingface.co/transformers/glossary.html#token-type-ids"" title=""Permalink to this headline"">¶</a> (Hugging Face Transformers Glossary)"|"
<p>Some models’ purpose is to do sequence classification or question answering. These require two different sequences to
be joined in a single “input_ids” entry, which usually is performed with the help of special tokens, such as the
classifier (<code class=""docutils literal notranslate""><span class=""pre"">[CLS]</span></code>) and separator (<code class=""docutils literal notranslate""><span class=""pre"">[SEP]</span></code>) tokens. For example, the BERT model builds its two sequence input as
such:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""c1""># [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]</span>
</pre></div>
</div>
<p>We can use our tokenizer to automatically generate such a sentence by passing the two sequences to <code class=""docutils literal notranslate""><span class=""pre"">tokenizer</span></code> as two
arguments (and not a list, like before) like this:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""kn"">from</span> <span class=""nn"">transformers</span> <span class=""kn"">import</span> <span class=""n"">BertTokenizer</span>
<span class=""gp"">>>> </span><span class=""n"">tokenizer</span> <span class=""o"">=</span> <span class=""n"">BertTokenizer</span><span class=""o"">.</span><span class=""n"">from_pretrained</span><span class=""p"">(</span><span class=""s2"">"bert-base-cased"</span><span class=""p"">)</span>
<span class=""gp"">>>> </span><span class=""n"">sequence_a</span> <span class=""o"">=</span> <span class=""s2"">"HuggingFace is based in NYC"</span>
<span class=""gp"">>>> </span><span class=""n"">sequence_b</span> <span class=""o"">=</span> <span class=""s2"">"Where is HuggingFace based?"</span>
<span class=""gp"">>>> </span><span class=""n"">encoded_dict</span> <span class=""o"">=</span> <span class=""n"">tokenizer</span><span class=""p"">(</span><span class=""n"">sequence_a</span><span class=""p"">,</span> <span class=""n"">sequence_b</span><span class=""p"">)</span>
<span class=""gp"">>>> </span><span class=""n"">decoded</span> <span class=""o"">=</span> <span class=""n"">tokenizer</span><span class=""o"">.</span><span class=""n"">decode</span><span class=""p"">(</span><span class=""n"">encoded_dict</span><span class=""p"">[</span><span class=""s2"">"input_ids"</span><span class=""p"">])</span>
</pre></div>
</div>
<p>which will return:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""nb"">print</span><span class=""p"">(</span><span class=""n"">decoded</span><span class=""p"">)</span>
<span class=""go"">[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]</span>
</pre></div>
</div>
<p>This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying
the two types of sequence in the model.</p>
<p>The tokenizer returns this mask as the “token_type_ids” entry:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre><span></span><span class=""gp"">>>> </span><span class=""n"">encoded_dict</span><span class=""p"">[</span><span class=""s1"">'token_type_ids'</span><span class=""p"">]</span>
<span class=""go"">[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]</span>
</pre></div>
</div>
<p>The first sequence, the “context” used for the question, has all its tokens represented by a <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">0</span></code>, whereas the
second sequence, corresponding to the “question”, has all its tokens represented by a <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">1</span></code>.</p>
<p>Some models, like <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/xlnet.html#transformers.XLNetModel"" title=""transformers.XLNetModel""><code class=""xref py py-class docutils literal notranslate""><span class=""pre"">XLNetModel</span></code></a>, use an additional token represented by a <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">2</span></code>.</p>
"
"Position IDs<a class=""headerlink"" href=""https://huggingface.co/transformers/glossary.html#position-ids"" title=""Permalink to this headline"">¶</a> (Hugging Face Transformers Glossary)"|"
<p>Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of
each token. Therefore, the position IDs (<code class=""docutils literal notranslate""><span class=""pre"">position_ids</span></code>) are used by the model to identify each token’s position in
the list of tokens.</p>
<p>They are an optional parameter. If no <code class=""docutils literal notranslate""><span class=""pre"">position_ids</span></code> is passed to the model, the IDs are automatically created as
absolute positional embeddings.</p>
<p>Absolute positional embeddings are selected in the range <code class=""docutils literal notranslate""><span class=""pre"">[0,</span> <span class=""pre"">config.max_position_embeddings</span> <span class=""pre"">-</span> <span class=""pre"">1]</span></code>. Some models use
other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.</p>
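<p>As an illustrative sketch (the checkpoint name and input text below are only examples, not from the original glossary), explicit position IDs can be built with a simple range and passed alongside the other inputs:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre>>>> import torch
>>> from transformers import BertTokenizer, BertModel
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
>>> model = BertModel.from_pretrained('bert-base-cased')
>>> inputs = tokenizer('Position IDs are optional', return_tensors='pt')
>>> position_ids = torch.arange(inputs['input_ids'].shape[1]).unsqueeze(0)  # [[0, 1, ..., n - 1]]
>>> outputs = model(**inputs, position_ids=position_ids)  # matches the default absolute positions
</pre></div>
</div>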
"
"Labels<a class=""headerlink"" href=""https://huggingface.co/transformers/glossary.html#labels"" title=""Permalink to this headline"">¶</a> (Hugging Face Transformers Glossary)"|"
<p>The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels
should be the expected prediction of the model: it will use the standard loss in order to compute the loss between its
predictions and the expected value (the label).</p>
<p>These labels are different according to the model head, for example:</p>
<ul class=""simple"" align=""left"">
<li><p>For sequence classification models (e.g., <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/bert.html#transformers.BertForSequenceClassification"" title=""transformers.BertForSequenceClassification""><code class=""xref py py-class docutils literal notranslate""><span class=""pre"">BertForSequenceClassification</span></code></a>), the model expects a
tensor of dimension <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">(batch_size)</span></code> with each value of the batch corresponding to the expected label of the
entire sequence.</p></li>
<li><p>For token classification models (e.g., <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/bert.html#transformers.BertForTokenClassification"" title=""transformers.BertForTokenClassification""><code class=""xref py py-class docutils literal notranslate""><span class=""pre"">BertForTokenClassification</span></code></a>), the model expects a tensor
of dimension <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">(batch_size,</span> <span class=""pre"">seq_length)</span></code> with each value corresponding to the expected label of each individual
token.</p></li>
<li><p>For masked language modeling (e.g., <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/bert.html#transformers.BertForMaskedLM"" title=""transformers.BertForMaskedLM""><code class=""xref py py-class docutils literal notranslate""><span class=""pre"">BertForMaskedLM</span></code></a>), the model expects a tensor of dimension
<code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">(batch_size,</span> <span class=""pre"">seq_length)</span></code> with each value corresponding to the expected label of each individual token: the
labels being the token ID for the masked token, and values to be ignored for the rest (usually -100).</p></li>
<li><p>For sequence to sequence tasks (e.g., <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/bart.html#transformers.BartForConditionalGeneration"" title=""transformers.BartForConditionalGeneration""><code class=""xref py py-class docutils literal notranslate""><span class=""pre"">BartForConditionalGeneration</span></code></a>,
<a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/mbart.html#transformers.MBartForConditionalGeneration"" title=""transformers.MBartForConditionalGeneration""><code class=""xref py py-class docutils literal notranslate""><span class=""pre"">MBartForConditionalGeneration</span></code></a>), the model expects a tensor of dimension <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">(batch_size,</span>
<span class=""pre"">tgt_seq_length)</span></code> with each value corresponding to the target sequences associated with each input sequence. During
training, both <cite>BART</cite> and <cite>T5</cite> will make the appropriate <cite>decoder_input_ids</cite> and decoder attention masks internally.
They usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework. See
the documentation of each model for more information on each specific model’s labels.</p></li>
</ul>
<p>The base models (e.g., <a class=""reference internal"" href=""https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel"" title=""transformers.BertModel""><code class=""xref py py-class docutils literal notranslate""><span class=""pre"">BertModel</span></code></a>) do not accept labels, as these are the base transformer
models, simply outputting features.</p>
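<p>As a minimal sketch of the sequence classification case (the checkpoint, sentences and label values below are only illustrative), passing <code class=""docutils literal notranslate""><span class=""pre"">labels</span></code> makes the model return a loss:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre>>>> import torch
>>> from transformers import BertTokenizer, BertForSequenceClassification
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
>>> model = BertForSequenceClassification.from_pretrained('bert-base-cased', num_labels=2)  # new, untrained head
>>> inputs = tokenizer(['I love this!', 'I hate this!'], padding=True, return_tensors='pt')
>>> labels = torch.tensor([1, 0])  # one expected label per sequence, i.e. a tensor of dimension (batch_size)
>>> outputs = model(**inputs, labels=labels)
>>> outputs.loss  # the loss between the model's predictions and the labels
</pre></div>
</div>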
"
"Decoder input IDs<a class=""headerlink"" href=""https://huggingface.co/transformers/glossary.html#decoder-input-ids"" title=""Permalink to this headline"">¶</a> (Hugging Face Transformers Glossary)"|"
<p>This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These
inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a
way specific to each model.</p>
<p>Most encoder-decoder models (BART, T5) create their <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">decoder_input_ids</span></code> on their own from the <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">labels</span></code>. In
such models, passing the <code class=""xref py py-obj docutils literal notranslate""><span class=""pre"">labels</span></code> is the preferred way to handle training.</p>
<p>Please check each model’s docs to see how they handle these input IDs for sequence to sequence training.</p>
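<p>As a hedged sketch of that preferred pattern (T5 and the sentences below are used purely as an example), the <code class=""docutils literal notranslate""><span class=""pre"">labels</span></code> are passed and the <code class=""docutils literal notranslate""><span class=""pre"">decoder_input_ids</span></code> are created internally by the model:</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre>>>> from transformers import T5Tokenizer, T5ForConditionalGeneration
>>> tokenizer = T5Tokenizer.from_pretrained('t5-small')
>>> model = T5ForConditionalGeneration.from_pretrained('t5-small')
>>> inputs = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt')
>>> labels = tokenizer('Das Haus ist wunderbar.', return_tensors='pt')['input_ids']
>>> outputs = model(**inputs, labels=labels)  # decoder_input_ids are derived from the labels internally
>>> outputs.loss
</pre></div>
</div>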
"
"Feed Forward Chunking<a class=""headerlink"" href=""https://huggingface.co/transformers/glossary.html#feed-forward-chunking"" title=""Permalink to this headline"">¶</a> (Hugging Face Transformers Glossary)"|"
<p>In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward layers.
The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., for
<code class=""docutils literal notranslate""><span class=""pre"">bert-base-uncased</span></code>).</p>
<p>For an input of size <code class=""docutils literal notranslate""><span class=""pre"">[batch_size,</span> <span class=""pre"">sequence_length]</span></code>, the memory required to store the intermediate feed forward
embeddings <code class=""docutils literal notranslate""><span class=""pre"">[batch_size,</span> <span class=""pre"">sequence_length,</span> <span class=""pre"">config.intermediate_size]</span></code> can account for a large fraction of the memory
use. The authors of <a class=""reference external"" href=""https://arxiv.org/abs/2001.04451"">Reformer: The Efficient Transformer</a> noticed that since the
computation is independent of the <code class=""docutils literal notranslate""><span class=""pre"">sequence_length</span></code> dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers <code class=""docutils literal notranslate""><span class=""pre"">[batch_size,</span> <span class=""pre"">config.hidden_size]_0,</span> <span class=""pre"">...,</span> <span class=""pre"">[batch_size,</span> <span class=""pre"">config.hidden_size]_n</span></code>
individually and concat them afterward to <code class=""docutils literal notranslate""><span class=""pre"">[batch_size,</span> <span class=""pre"">sequence_length,</span> <span class=""pre"">config.hidden_size]</span></code> with <code class=""docutils literal notranslate""><span class=""pre"">n</span> <span class=""pre"">=</span>
<span class=""pre"">sequence_length</span></code>, which trades increased computation time against reduced memory use, but yields a mathematically
<strong>equivalent</strong> result.</p>
<p>For models employing the function <a class=""reference internal"" href=""https://huggingface.co/transformers/internal/modeling_utils.html#transformers.apply_chunking_to_forward"" title=""transformers.apply_chunking_to_forward""><code class=""xref py py-func docutils literal notranslate""><span class=""pre"">apply_chunking_to_forward()</span></code></a>, the <code class=""docutils literal notranslate""><span class=""pre"">chunk_size</span></code> defines the
number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time
complexity. If <code class=""docutils literal notranslate""><span class=""pre"">chunk_size</span></code> is set to 0, no feed forward chunking is done.</p>
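<p>The equivalence can be sketched in plain PyTorch (this is only an illustration of the idea, not the library's actual implementation):</p>
<div class=""highlight-default notranslate""><div class=""highlight""><pre>>>> import torch
>>> ff = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16))
>>> x = torch.randn(2, 10, 16)  # [batch_size, sequence_length, hidden_size]
>>> full = ff(x)  # all positions at once: the whole intermediate tensor lives in memory
>>> chunked = torch.cat([ff(c) for c in x.chunk(10, dim=1)], dim=1)  # one position at a time
>>> torch.allclose(full, chunked, atol=1e-6)
True
</pre></div>
</div>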
"