BME Lab

This repository contains part of the code for the BME Lab at Mahidol University. Lab II is taught to 3rd-year students, where we aim to have them run a large language model locally.

Lab II: Running a Large Language Model

Task 1: Running Open WebUI

  • Download Ollama on your laptop, then run ollama run llama3.1 in a terminal (the first run downloads the model and then opens an interactive prompt)

This lets you prompt Llama 3.1 on your local machine. After Ollama is running, you can connect it with Open WebUI.
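
One way to do this (a minimal sketch, assuming you install Open WebUI with pip; the Docker image described in the Open WebUI documentation is another option) is:

# Requires Python 3.11; run these in a terminal, not inside a notebook
pip install open-webui
open-webui serve
# Then open http://localhost:8080 in a browser; Open WebUI should detect
# the local Ollama server at http://localhost:11434 automatically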

Task 2: Connect Python with Ollama

You can connect Python to Ollama using LangChain. Use the 00_Connect_Ollama_with_Langchain notebook to try it out after Ollama is running.

%%capture
!pip install langchain
!pip install langchain_community
# Code from https://stackoverflow.com/a/78430197/3626961
from langchain_community.llms import Ollama
from langchain_core.prompts import PromptTemplate # Added

llm = Ollama(model="llama3.1", stop=["<|eot_id|>"]) # Added stop token

def get_model_response(user_prompt, system_prompt):
    # NOTE: this is a plain string (not an f-string); leave no whitespace inside the curly braces
    template = """
        <|begin_of_text|>
        <|start_header_id|>system<|end_header_id|>
        {system_prompt}
        <|eot_id|>
        <|start_header_id|>user<|end_header_id|>
        {user_prompt}
        <|eot_id|>
        <|start_header_id|>assistant<|end_header_id|>
        """

    # Added prompt template
    prompt = PromptTemplate(
        input_variables=["system_prompt", "user_prompt"],
        template=template
    )
    
    # Invoke the model with the formatted prompt
    response = llm.invoke(prompt.format(system_prompt=system_prompt, user_prompt=user_prompt))
    
    return response

# Example
user_prompt = "What is 1 + 1?"
system_prompt = "You are a helpful assistant doing as the given prompt."
get_model_response(user_prompt, system_prompt)

Task 3: Transcribe a YouTube video and summarize the transcription
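
No code for this task is included in this README. As a starting point, the sketch below assumes the youtube-transcript-api package to fetch a video's captions and reuses the local Llama 3.1 model from Task 2 for summarization (the lab may instead transcribe audio with a speech-to-text model such as Whisper).

%%capture
!pip install youtube-transcript-api

from youtube_transcript_api import YouTubeTranscriptApi
from langchain_community.llms import Ollama

def summarize_youtube_video(video_id):
    # Fetch the caption track (API shown here is youtube-transcript-api 0.6.x;
    # newer releases use YouTubeTranscriptApi().fetch(video_id) instead)
    transcript = YouTubeTranscriptApi.get_transcript(video_id)
    text = " ".join(chunk["text"] for chunk in transcript)

    # Ask the local Llama 3.1 model served by Ollama for a short summary
    llm = Ollama(model="llama3.1")
    prompt = f"Summarize the following video transcript in a few sentences:\n\n{text}"
    return llm.invoke(prompt)

# Example (replace VIDEO_ID with the part after "v=" in the YouTube URL)
# print(summarize_youtube_video("VIDEO_ID"))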
