This project demonstrates text generation with the GPT-2 language model and the Hugging Face transformers library. It walks through installing the dependencies, loading the model and tokenizer, and generating text such as blog posts.
- Python 3.10+
- Python libraries: transformers, tensorflow, requests, numpy, pyyaml, tqdm, tokenizers, filelock, huggingface-hub, safetensors, packaging, regex
- Install Dependencies
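A minimal way to pull in the libraries listed above (a sketch, assuming pip inside a fresh virtual environment; torch is added here because the generation code below uses PyTorch tensors):

pip install torch transformers tensorflow requests numpy pyyaml tqdm tokenizers filelock huggingface-hub safetensors packaging regex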
- Load Model and Tokenizer
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load the gpt2-large tokenizer and model weights (downloaded on first use, then cached)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large", pad_token_id=tokenizer.eos_token_id)
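Since tensorflow appears in the dependency list, the TensorFlow variant of the same model can be loaded instead. This is a sketch, assuming the TF weights for gpt2-large on the Hugging Face Hub:

from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = TFGPT2LMHeadModel.from_pretrained("gpt2-large", pad_token_id=tokenizer.eos_token_id)
# With the TF model, encode prompts with return_tensors='tf' instead of 'pt'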
- Generate Blog Post
title = "The Future of AI: Intelligence or Illusion??"
input_tokens = tokenizer.encode(title, return_tensors='pt')
output = model.generate(input_tokens, max_length=500, num_beams=5, no_repeat_ngram_size=2, early_stopping=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
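Beam search tends to produce safe, repetitive prose on long open-ended generation, so a sampling-based call is a common alternative. The sketch below reuses the same tokenizer and model; the top_k, top_p, and temperature values are illustrative, not tuned:

# Nucleus sampling instead of beam search, for more varied output
output = model.generate(
    input_tokens,
    max_length=500,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))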