Fine-tune Foundation Models on Amazon SageMaker using @remote decorator

In this example we go through the steps required to fine-tune foundation models on Amazon SageMaker by using the @remote decorator to execute SageMaker Training jobs.
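
Decorating a Python function with @remote from the SageMaker Python SDK runs each invocation of that function as a SageMaker Training job instead of on the local machine. Below is a minimal, illustrative sketch of the pattern; the function name, instance type, model ID, and hyperparameters are placeholders, not the exact code used in the notebooks.

```python
# Illustrative sketch of the @remote pattern (placeholder names and values).
from sagemaker.remote_function import remote

@remote(instance_type="ml.g5.12xlarge", volume_size=100)
def fine_tune(model_id: str, epochs: int = 1) -> str:
    # Everything inside this function body runs inside a SageMaker Training job,
    # not on the local machine or the Studio notebook kernel.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    # ... dataset preparation and training loop / Trainer setup go here ...
    return "training complete"

# Calling the decorated function submits the Training job and waits for it,
# returning the function's return value once the job finishes.
result = fine_tune("tiiuae/falcon-7b", epochs=1)
```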

You can run this repository from Amazon SageMaker Studio or from your local IDE.

For additional information, take a look at the AWS blog post "Fine-tune Falcon 7B and other LLMs on Amazon SageMaker with @remote decorator".

These notebooks are inspired by Philipp Schmid's blog posts.

Prerequisites

The notebooks currently use the latest Hugging Face training container available for the us-east-1 region. If you are running the notebooks in a different region, make sure to update ImageUri in the file config.yaml.

If you want to operate in a different AWS region:

  1. Navigate to Available Deep Learning Containers Images: https://github.com/aws/deep-learning-containers/blob/master/available_images.md
  2. Select the Hugging Face training container for your selected region
  3. Update ImageUri in the file config.yaml (a sketch of the relevant config.yaml section is shown after this list)
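
For reference, the @remote decorator can pick up its defaults, including the training image, from config.yaml. The sketch below shows what the relevant section of such a file might look like; the keys in this repository's config.yaml may differ, and the image URI is a placeholder to be replaced with the container URI for your region.

```yaml
SchemaVersion: '1.0'
SageMaker:
  PythonSDK:
    Modules:
      RemoteFunction:
        # Replace with the Hugging Face training container URI for your region.
        ImageUri: 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:<tag>
        InstanceType: ml.g5.12xlarge
        Dependencies: ./requirements.txt
```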

Dataset

The dataset is the content of all AWS FAQ pages, downloaded from: https://aws.amazon.com/faqs/

| service | question | answers |
| --- | --- | --- |
| /ec2/autoscaling/faqs/ | What is Amazon EC2 Auto Scaling? | Amazon EC2 Auto Scaling is a fully managed ser... |
| /ec2/autoscaling/faqs/ | When should I use Amazon EC2 Auto Scaling vs. ... | You should use AWS Auto Scaling to manage scal... |
| /ec2/autoscaling/faqs/ | How is Predictive Scaling Policy different fro... | Predictive Scaling Policy brings the similar p... |
| /ec2/autoscaling/faqs/ | What are the benefits of using Amazon EC2 Auto... | Amazon EC2 Auto Scaling helps to maintain your... |

The synthetic dataset is based on the content of official AWS generative AI blog posts:

| question | answers |
| --- | --- |
| What new Anthropic model is now available on Amazon Bedrock? | Claude 2.1 foundation model is now available on Amazon Bedrock. |
| What are the two types of model evaluation available in Amazon Bedrock? | Amazon Bedrock offers a choice of automatic evaluation and human evaluation. |
| What kind of metrics can you use for automatic evaluation? | You can use automatic evaluation with predefined metrics such as accuracy, robustness, and toxicity. |
| What does Guardrails for Amazon Bedrock allow developers to do? | Guardrails for Amazon Bedrock (preview) to promote safe interactions between users and your generative AI applications by implementing safeguards customized to your use cases and responsible AI policies. |
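
As an illustration only (each notebook defines its own prompt template per model), question/answer rows like the ones above can be turned into instruction-style training text before fine-tuning. The file name and column names below are assumptions for the sketch.

```python
# Illustrative sketch: format question/answer pairs into training text.
import pandas as pd

def format_example(row: pd.Series) -> str:
    # Simple instruction-style template; the notebooks use their own formats.
    return f"### Question\n{row['question']}\n\n### Answer\n{row['answers']}"

df = pd.read_csv("qa_dataset.csv")  # hypothetical file name
df["text"] = df.apply(format_example, axis=1)
print(df["text"].iloc[0])
```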

Notebooks

  1. [Supervised - Q&A] Falcon-7B
  2. [Supervised - Q&A] Llama-13B
  3. [Self-supervised - chat] Llama-13B
  4. [Self-supervised - Instruct] Mistral-7B
  5. [Supervised - Instruct] Mixtral-8x7B
  6. [Supervised - Instruct] Code-Llama 13B
  7. [Supervised - Instruct] Llama-3 8B
  8. [Supervised - Instruct] Llama-3.1 8B
  9. [Supervised - Instruct] Arcee AI Llama-3.1 Supernova Lite
  10. [Supervised - Instruct] Llama-3.2 1B
  11. [Supervised - Instruct] Llama-3.2 3B
