# How do we add Memory to LLMs?

## Basic Premise

Memory is currently added to LLMs through the use of vector databases.

This approach has been dubbed Retrieval-Augmented Generation (RAG): relevant passages are retrieved from the database and supplied to the model alongside the user's query.
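The retrieval-then-generate flow can be sketched without any framework. Below is a minimal, dependency-free Python illustration; the word-overlap retriever is a toy stand-in for real embedding similarity, and all names are illustrative:

```python
def retrieve(query, chunks, k=1):
    """Toy retriever: rank stored chunks by word overlap with the query.
    (A real RAG system would rank by embedding similarity in a vector DB.)"""
    qwords = {w.strip(".,?!").lower() for w in query.split()}
    score = lambda c: len(qwords & {w.strip(".,?!").lower() for w in c.split()})
    return sorted(chunks, key=score, reverse=True)[:k]

def build_rag_prompt(query, chunks, k=1):
    """Augment the user's question with retrieved context before
    it is sent to the LLM."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks, k))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

chunks = [
    "The Eiffel Tower is 330 metres tall.",
    "Photosynthesis converts light into chemical energy.",
]
print(build_rag_prompt("How tall is the Eiffel Tower?", chunks))
```

The model then answers from the retrieved context rather than from its weights alone, which is the "memory" effect.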

## Key Steps

To build a vector store, you will need to:

- Ingest documents
- Split documents into chunks
- Create embeddings with an embedding model
- Load the embeddings into a vector database
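The four steps above can be sketched end to end in plain Python. This is a toy, dependency-free illustration: the hash-based `embed` function stands in for a real embedding model, and `VectorStore` stands in for a real vector database.

```python
import math
import zlib
from collections import Counter

def embed(text, dim=4096):
    """Toy embedding: hash word counts into a fixed-size unit vector.
    (Stands in for a real embedding model.)"""
    vec = [0.0] * dim
    words = Counter(w.strip(".,?!").lower() for w in text.split())
    for word, count in words.items():
        vec[zlib.crc32(word.encode()) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def split_document(doc, chunk_size=50):
    """Split a document into fixed-size word chunks."""
    words = doc.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

class VectorStore:
    """Minimal in-memory vector database: stores (vector, chunk) pairs and
    answers queries by cosine similarity (vectors are unit-normalised, so
    the dot product is the cosine)."""
    def __init__(self):
        self.items = []
    def add(self, chunk):
        self.items.append((embed(chunk), chunk))
    def search(self, query, k=1):
        qv = embed(query)
        ranked = sorted(self.items,
                        key=lambda item: sum(a * b for a, b in zip(item[0], qv)),
                        reverse=True)
        return [chunk for _, chunk in ranked[:k]]

# Ingest -> split -> embed -> load
store = VectorStore()
documents = [
    "Paris is the capital of France.",
    "The mitochondria is the powerhouse of the cell.",
]
for doc in documents:
    for chunk in split_document(doc):
        store.add(chunk)

print(store.search("What is the capital of France?", k=1)[0])
```

In practice each of these pieces is replaced by a library component: a document loader, a text splitter, an embedding model, and a hosted or local vector database.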

An example of building this with LangChain is given here: Langchain RAG

## Orchestrator (not necessary, but useful)

Using an orchestration framework such as LangChain or LlamaIndex can make building these pipelines easier.

## See Also

## References

This project uses code from the following source: