Original author(s) | Google AI |
---|---|
Initial release | 23 October 2019 |
Repository | https://github.com/google-research/text-to-text-transfer-transformer |
License | Apache-2.0 |
T5 (Text-to-Text Transfer Transformer) is a series of large language models developed by Google AI and introduced in 2019. [1] [2] Like the original Transformer model, [3] T5 models are encoder-decoder Transformers, where the encoder processes the input text and the decoder generates the output text.
T5 models are usually pretrained on a massive dataset of text and code, after which they can perform text-based tasks similar to those they were pretrained on. They can also be fine-tuned to perform other tasks.
T5 models have been employed in various applications, including chatbots, machine translation systems, text summarization tools, code generation, and robotics. [4]
The original T5 models are pre-trained on the Colossal Clean Crawled Corpus (C4), containing text and code scraped from the internet. This pre-training process enables the models to learn general language understanding and generation abilities. T5 models can then be fine-tuned on specific downstream tasks, adapting their knowledge to perform well in various applications.
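As an illustration of this fine-tuning step, below is a minimal sketch in plain PyTorch, assuming the Hugging Face transformers implementation of T5 and a hypothetical two-example toy dataset; a real setup would add batching, padding, and a proper dataset.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Hypothetical toy task, phrased in T5's text-to-text format.
pairs = [
    ("The course is jumping well.", "not acceptable"),
    ("That is good.", "acceptable"),
]

model.train()
for source, target in pairs:
    batch = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    # With labels supplied, the model returns a teacher-forced cross-entropy loss.
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```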
The T5 models were pretrained on many tasks, all in the format of `<input text> -> <output text>`. Some examples are:

- Restoring corrupted text: `Thank you <X> me to your party <Y> week. -> <X> for inviting <Y> last <Z>`, where `<Z>` means "end of output", and `<X>` and `<Y>` denote blanks to be filled, called "sentinels" in the original report.
- Translation: `translate English to German: That is good. -> Das ist gut.`
- Judging the grammatical acceptability of a sentence: `The course is jumping well. -> not acceptable`
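A minimal sketch of this text-to-text usage, assuming the Hugging Face transformers library and the public google-t5/t5-small checkpoint (which was multi-task pretrained with such task prefixes):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

# Input and output are both plain text; the task is named in the prefix.
inputs = tokenizer("translate English to German: That is good.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "Das ist gut."
```

In the Hugging Face implementation, the sentinels <X>, <Y>, ... appear as the special tokens <extra_id_0>, <extra_id_1>, and so on.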
The T5 series encompasses several models with varying sizes and capabilities, all encoder-decoder Transformers, where the encoder processes the input text and the decoder generates the output text.
These models are often distinguished by their parameter count, which indicates the complexity and potential capacity of the model. The original paper [1] reported the following 5 models:
Name | Total parameters | Encoder parameters | Decoder parameters | n_layer | d_model | d_ff | d_kv | n_head |
---|---|---|---|---|---|---|---|---|
Small | 76,956,160 | 35,330,816 | 41,625,344 | 6 | 512 | 2048 | 64 | 8 |
Base | 247,577,856 | 109,628,544 | 137,949,312 | 12 | 768 | 3072 | 64 | 12 |
Large | 770,567,168 | 334,939,648 | 435,627,520 | 24 | 1024 | 4096 | 64 | 16 |
3B | 2,884,497,408 | 1,240,909,824 | 1,643,587,584 | 24 | 1024 | 16384 | 128 | 32 |
11B | 11,340,220,416 | 4,864,791,552 | 6,475,428,864 | 24 | 1024 | 65536 | 128 | 128 |
*The encoder and the decoder have the same shape. So for example, the T5-small has 6 layers in the encoder and 6 layers in the decoder.
In the above table, n_layer is the number of layers in the encoder (the decoder has the same number of layers), d_model is the dimension of the embedding vectors, d_ff is the inner dimension of the feedforward network in each layer, d_kv is the dimension of the key and value vectors in each attention head, and n_head is the number of attention heads.
Note that, unlike typical Transformers, the 3B and 11B models do not satisfy d_model = d_kv × n_head. [6]
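For concreteness, this convention can be checked directly against the values in the table above:

```python
# Shape hyperparameters taken from the table above.
configs = {
    "Small": dict(d_model=512,  d_kv=64,  n_head=8),
    "Base":  dict(d_model=768,  d_kv=64,  n_head=12),
    "Large": dict(d_model=1024, d_kv=64,  n_head=16),
    "3B":    dict(d_model=1024, d_kv=128, n_head=32),
    "11B":   dict(d_model=1024, d_kv=128, n_head=128),
}
for name, c in configs.items():
    # The usual Transformer convention is d_model == d_kv * n_head.
    print(name, c["d_model"], c["d_kv"] * c["n_head"], c["d_model"] == c["d_kv"] * c["n_head"])
# Small, Base, and Large satisfy the convention; 3B and 11B do not
# (e.g. 11B: 128 * 128 = 16384, while d_model = 1024).
```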
Compared to the original Transformer, T5 uses a few minor modifications: layer normalization with no additive bias, layer normalization placed outside the residual path, and relative positional embeddings. [7]
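As an illustration of the first of these changes, a bias-free layer normalization can be sketched in PyTorch as follows. This mirrors the simplified layer norm found in common T5 implementations, which also omits the mean subtraction (an RMSNorm-style operation); that last detail is an assumption beyond the description above.

```python
import torch
from torch import nn

class BiasFreeLayerNorm(nn.Module):
    """Layer normalization with a learned scale but no additive bias."""
    def __init__(self, d_model: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(d_model))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Rescale by the root mean square of the activations; no bias is added,
        # and (as in T5 implementations) no mean is subtracted.
        variance = x.pow(2).mean(dim=-1, keepdim=True)
        return self.weight * x * torch.rsqrt(variance + self.eps)
```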
For all experiments, they used a WordPiece tokenizer, with vocabulary size 32,000. The tokenizer is shared across both the input and output of each model. It was trained on a mixture of English, German, French, and Romanian data from the C4 dataset, at a ratio of 10:1:1:1.
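A small sketch, assuming the Hugging Face tokenizer shipped with the public T5 checkpoints, showing that the single shared vocabulary covers the non-English languages as well:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
# The same 32,000-entry vocabulary is used for both inputs and outputs,
# so German (and French, Romanian) text tokenizes directly.
print(tokenizer.tokenize("translate English to German: That is good."))
print(tokenizer.tokenize("Das ist gut."))
```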
Several subsequent models used the T5 architecture, with non-standardized naming conventions used to differentiate them. This section attempts to collect the main ones. An exhaustive list of the variants released by Google Brain is on the GitHub repo for T5X. [8]
Some models are trained from scratch, while others are initialized from a previously trained model. Unless otherwise noted, each model is trained from scratch.
T5 v1.1
Name | Total parameters | Encoder parameters | Decoder parameters | n_layer | d_model | d_ff | d_kv | n_head |
---|---|---|---|---|---|---|---|---|
Small | 76,961,152 | 35,332,800 | 41,628,352 | 8 | 512 | 1024 | 64 | 6 |
Base | 247,577,856 | 109,628,544 | 137,949,312 | 12 | 768 | 2048 | 64 | 12 |
Large | 783,150,080 | 341,231,104 | 441,918,976 | 24 | 1024 | 2816 | 64 | 16 |
XL | 2,849,757,184 | 1,223,527,424 | 1,626,229,760 | 24 | 2048 | 5120 | 64 | 32 |
XXL | 11,135,332,352 | 4,762,310,656 | 6,373,021,696 | 24 | 4096 | 10240 | 64 | 64 |
The T5 model itself is an encoder-decoder model, allowing it to be used for instruction following. The encoder encodes the instruction, and the decoder autoregressively generates the reply.
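A sketch of this division of labour, assuming the Hugging Face transformers implementation and using one of the original pretraining task prefixes in place of a free-form instruction (the base checkpoints are not instruction-tuned):

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

prompt = tokenizer("summarize: The quick brown fox jumps over the lazy dog.",
                   return_tensors="pt")

# The encoder runs once over the whole input ...
encoder_outputs = model.get_encoder()(**prompt)

# ... and the decoder generates the reply one token at a time (greedy decoding here),
# attending to the fixed encoder states at every step.
decoder_ids = torch.tensor([[model.config.decoder_start_token_id]])
for _ in range(20):
    logits = model(encoder_outputs=encoder_outputs,
                   decoder_input_ids=decoder_ids).logits
    next_token = logits[:, -1].argmax(dim=-1, keepdim=True)
    decoder_ids = torch.cat([decoder_ids, next_token], dim=-1)
    if next_token.item() == model.config.eos_token_id:
        break
print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))
```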
The T5 encoder can be used as a text encoder, much like BERT. It encodes a text into a sequence of real-number vectors, which can be used for downstream applications. For example, Google Imagen [26] uses T5-XXL as text encoder, and the encoded text vectors are used as conditioning on a diffusion model. As another example, the AuraFlow diffusion model [27] uses Pile-T5-XL.
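A sketch of this encoder-only usage with the Hugging Face T5EncoderModel class, using the small checkpoint as a lightweight stand-in for T5-XXL:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
encoder = T5EncoderModel.from_pretrained("google-t5/t5-small")

with torch.no_grad():
    batch = tokenizer(["A photograph of an astronaut riding a horse"],
                      return_tensors="pt")
    # One d_model-dimensional vector per input token, usable as conditioning
    # for a downstream model such as a diffusion model.
    text_embeddings = encoder(**batch).last_hidden_state
print(text_embeddings.shape)  # (batch_size, sequence_length, d_model)
```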
The encoder and decoder parameter counts in the first table can be computed from the model configurations with the Hugging Face transformers library, for example:

```python
import torch
from transformers import AutoConfig, AutoModelForSeq2SeqLM

def count_parameters(model):
    # Count the parameters of the encoder and decoder stacks separately.
    enc = sum(p.numel() for p in model.encoder.parameters())
    dec = sum(p.numel() for p in model.decoder.parameters())
    total = enc + dec
    return total, enc, dec

for name in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
    print(f"Model: {name}")
    # Instantiate the model from its configuration only (no weights are downloaded).
    config = AutoConfig.from_pretrained(f"google-t5/{name}")
    torch_dtype = torch.float16
    model = AutoModelForSeq2SeqLM.from_config(config, torch_dtype=torch_dtype)
    total, enc, dec = count_parameters(model)
    print(f"Total number of parameters in {name}: {total}")
    print(f"Total number of parameters in encoder: {enc}")
    print(f"Total number of parameters in decoder: {dec}")
    del model
```
Similarly, for the T5 v1.1 checkpoints:

```python
import torch
from transformers import AutoConfig, AutoModelForSeq2SeqLM

def count_parameters(model):
    # Count the parameters of the encoder and decoder stacks separately.
    enc = sum(p.numel() for p in model.encoder.parameters())
    dec = sum(p.numel() for p in model.decoder.parameters())
    total = enc + dec
    return total, enc, dec

for name in ["small", "base", "large", "xl", "xxl"]:
    print(f"Model: {name}")
    # Instantiate the model from its configuration only (no weights are downloaded).
    config = AutoConfig.from_pretrained(f"google/t5-v1_1-{name}")
    torch_dtype = torch.float16
    model = AutoModelForSeq2SeqLM.from_config(config, torch_dtype=torch_dtype)
    total, enc, dec = count_parameters(model)
    print(f"Total number of parameters in {name}: {total}")
    print(f"Total number of parameters in encoder: {enc}")
    print(f"Total number of parameters in decoder: {dec}")
    del model
```