About Cloud Computing and Generative AI
Cloud computing is the practice of using a network of remote servers hosted on the internet to store, manage, and process data, rather than a local server or a personal computer. It lets you focus on building your application instead of provisioning and maintaining hardware. AWS is one of the largest cloud service providers.
Generative AI refers to a type of artificial intelligence designed to generate new content, data, or outputs that are not explicitly programmed in advance. It involves models that can create new examples or samples within a given domain, such as images, text, music, or other types of data.
- 8:00AM: Check in + Breakfast
- 8:30AM: Introduction
- 8:40AM: Icebreaker
- 9:10AM: Hacking commences
- 12:00PM: Lunch (provided)
- 5:00PM: Dinner (provided)
- 6:00PM: Hacking ends
- 6:10PM: Judging starts
- 7:45PM: Closing ceremony
- 8:00PM: End of Hackathon!
- UBC Card
- Adapters
- A water bottle
- Laptop and charging cables
Sauder Learning Labs: 6326 Agricultural Road, Vancouver, BC V6T 1Z2
It is behind the Sauder building and sandwiched between Triple O’s and the Leonard S. Klinck building. Look out for a sign that says David Lam Learning Centre!
- No plagiarism
- Code must be on GitHub and open sourced
- Any private datasets used must not contain personally identifiable information
- Project design and development must start at the hackathon’s beginning, but preprocessed and structured data is allowed
- Total 5 minutes (3 min presentation, 2 min Q&A)
- We recommend talking about your motivation for choosing this project, and its potential impact.
- REQUIRED: So that judges can assess the technical details of your solution, you must include an architecture diagram (try draw.io or any other diagramming tool).
- DEADLINE: There is a hard deadline to submit the link to your public GitHub repository in your Discord team channel by 6:00PM. Late submissions will lead to disqualification.
- Creativity and Originality: How innovative and unique is the generated solution?
- Technical Implementation: The complexity and effectiveness of the AI model and its integration with the user interface.
- User Interaction: The intuitiveness and effectiveness of the user interface in influencing the generated solution.
- Cloud Deployment: The selection and efficient use of cloud services in deploying the solution.
- Presentation: The clarity, coherence, and persuasiveness of the final presentation.
For frequently asked questions and tips, please visit the FAQs.
Getting Started With AWS Workshop Studio
The link to the AWS Workshop will be provided closer to the Hackathon
- Introduction to Generative AI - Art of the Possible
- Planning a Generative AI Project
- Foundations of Prompt Engineering
- Generative AI with Large Language Models
- Introduction to LangChain - LangChain is a framework for developing applications powered by language models
- Serverless LLM apps with Amazon Bedrock - (Course) Enroll for free. Learn how to deploy a large language model-based application into production using serverless technology.
- Generative AI with Large Language Models - (Course) Enroll for free. Excellent intro course. Gain foundational knowledge, practical skills, and a functional understanding of how generative AI works.
- Prompt Engineering Best Practices - Prompt engineering best practices for LLMs on Amazon Bedrock.
- General Prompt Engineering using Party Rock - Learn General Prompt Engineering using Party Rock (free to use).
- Anthropic Claude on Party Rock - Learn Anthropic Claude Interactive Prompt Engineering tutorial on Party Rock (free to use).
- Anthropic’s Official Documentation - Anthropic’s Official Prompting Documentation
Retrieval-augmented generation (RAG) for large language models (LLMs) aims to improve prediction quality by using an external datastore at inference time to build a richer prompt that includes some combination of context, history, and recent/relevant knowledge. A minimal end-to-end sketch follows the resource links below.
- More in-depth intro: Retrieval Augmented Generation (RAG) for LLMs
- Building AI-powered search in PostgreSQL using Amazon SageMaker and pgvector (Blog post)
- AWS Samples (GitHub) - RAG with Amazon Bedrock and PGVector on Amazon RDS
- Knowledge Bases now delivers fully managed RAG experience in Amazon Bedrock
- Knowledge Base for Amazon Bedrock - Documentation
- Amazon OpenSearch Service’s vector database capabilities explained
- Build scalable and serverless RAG workflows with a vector engine for Amazon OpenSearch Serverless and Amazon Bedrock Claude models (Blog post)
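To make the RAG flow above concrete, here is a minimal, hedged sketch: it embeds a few documents with Amazon Titan, retrieves the one closest to the user's question with cosine similarity (a stand-in for a real vector store such as pgvector or OpenSearch), and passes the retrieved text to a Claude model on Bedrock. The model IDs, region, and prompt format are assumptions based on common Bedrock defaults; check the Bedrock console for the models enabled in your account.

```python
# Minimal RAG sketch (assumed model IDs and region; adjust to your account).
import json
import boto3
import numpy as np

REGION = "us-west-2"                                    # assumption
EMBED_MODEL = "amazon.titan-embed-text-v1"              # assumption
CHAT_MODEL = "anthropic.claude-3-haiku-20240307-v1:0"   # assumption

bedrock = boto3.client("bedrock-runtime", region_name=REGION)

def embed(text: str) -> np.ndarray:
    """Get a Titan text embedding for one string."""
    resp = bedrock.invoke_model(
        modelId=EMBED_MODEL,
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"])

# Tiny in-memory "datastore" standing in for pgvector / OpenSearch.
documents = [
    "Teams must submit a public GitHub repository link by 6:00PM.",
    "Judging criteria include creativity, technical implementation, and cloud deployment.",
]
doc_vectors = [embed(d) for d in documents]

def retrieve(question: str) -> str:
    """Return the document most similar to the question (cosine similarity)."""
    q = embed(question)
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vectors]
    return documents[int(np.argmax(scores))]

def answer(question: str) -> str:
    """Build a context-enriched prompt and ask Claude on Bedrock."""
    context = retrieve(question)
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{
            "role": "user",
            "content": f"Use this context to answer.\n\nContext: {context}\n\nQuestion: {question}",
        }],
    }
    resp = bedrock.invoke_model(modelId=CHAT_MODEL, body=json.dumps(body))
    return json.loads(resp["body"].read())["content"][0]["text"]

print(answer("When is the submission deadline?"))
```

In a real project you would swap the in-memory list and cosine loop for one of the managed options linked above (pgvector on RDS, OpenSearch, or a Bedrock Knowledge Base).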
Enable generative AI applications to execute multistep tasks across company systems and data sources. A minimal invocation sketch follows the links below.
- User Guide
- Demo Video - Agents for Amazon Bedrock
- Amazon Bedrock Agents Quickstart - Functional code example
- Build a foundation model (FM) powered customer service bot with agents for Amazon Bedrock
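As a rough illustration of calling an agent from code, the sketch below uses boto3's bedrock-agent-runtime client. The agent ID, alias ID, and region are placeholders you would replace with values from your own agent in the Bedrock console.

```python
# Hedged sketch: invoking an existing Agent for Amazon Bedrock.
import uuid
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-west-2")  # assumed region

AGENT_ID = "YOUR_AGENT_ID"        # placeholder from your Bedrock console
AGENT_ALIAS_ID = "YOUR_ALIAS_ID"  # placeholder from your Bedrock console

response = client.invoke_agent(
    agentId=AGENT_ID,
    agentAliasId=AGENT_ALIAS_ID,
    sessionId=str(uuid.uuid4()),   # new conversation session
    inputText="What is the status of order 1234?",
)

# The agent streams its answer back as chunks; collect them into one string.
answer = ""
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        answer += chunk["bytes"].decode("utf-8")
print(answer)
```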
- AWS Cloud Essentials
- AWS Security Essentials
- AWS Networking Essentials
- AWS Technical Essentials
- Architecting on AWS - Online Course Supplement
- GitHub Link: RAG-Bedrock-Titan
- Video: Implementing RAG with Amazon Bedrock and Amazon Titan - Part 1
From the creator: "In this tutorial, we will build a chatbot based on the Retrieval Augmented Context generation technique. Amazon OpenSearch Serverless is used as the vector database, Amazon Titan is used for generating text embeddings and as an LLM, and Amazon Bedrock API is used for invoking the Titan model."
A chatbot that uses information from the ICBC website as its knowledge base to answer questions from users who want to learn more about driver's licences, insurance, and anything else ICBC-related. The website can be hosted on an EC2 instance, and the chatbot can be built with Flask, a lightweight Python-based web framework.
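One possible starting point is sketched below, with a single chat endpoint that could run on an EC2 instance. The route name, region, and model ID are illustrative assumptions, and the answering function is a placeholder where your own ICBC retrieval logic would go.

```python
# Hedged sketch: a minimal Flask chat endpoint backed by Bedrock.
import json
import boto3
from flask import Flask, request, jsonify

app = Flask(__name__)
bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")  # assumed region
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"                 # assumed model

def answer_with_bedrock(question: str) -> str:
    """Ask a Claude model on Bedrock; swap in your ICBC knowledge-base retrieval here."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{"role": "user", "content": question}],
    }
    resp = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    return json.loads(resp["body"].read())["content"][0]["text"]

@app.route("/chat", methods=["POST"])
def chat():
    question = request.get_json().get("question", "")
    return jsonify({"answer": answer_with_bedrock(question)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # listen on all interfaces for EC2 access
```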
A chatbot that generates responses to students' prompts about content in a course textbook. This chatbot can be created using Amazon Bedrock to generate responses and Streamlit for the user interface. A Knowledge Base can also be used to implement Retrieval-Augmented Generation (RAG), so that responses are grounded in information retrieved from a specified data source, such as a course textbook PDF.
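A hedged sketch of that idea is below, assuming you have already created a Knowledge Base in the Bedrock console; the knowledge base ID, model ARN, and region are placeholders.

```python
# Hedged sketch: Streamlit front end over a Bedrock Knowledge Base (managed RAG).
import boto3
import streamlit as st

client = boto3.client("bedrock-agent-runtime", region_name="us-west-2")  # assumed region
KB_ID = "YOUR_KB_ID"                                                     # placeholder
MODEL_ARN = (  # placeholder ARN for the model used to generate answers
    "arn:aws:bedrock:us-west-2::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
)

st.title("Textbook Q&A")
question = st.text_input("Ask a question about the course textbook")

if question:
    # retrieve_and_generate pulls relevant passages from the Knowledge Base
    # and has the model answer using them.
    response = client.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    st.write(response["output"]["text"])
```

After setting your AWS credentials, this can be launched locally with `streamlit run app.py`.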