# RAG

## Deep Lake as a Vector Store for LLM Applications

* Store and search embeddings and their metadata, including text, JSON, images, audio, video, and more. Save the data locally, in your own cloud, or on Deep Lake storage.
* Build Retrieval Augmented Generation (RAG) apps using our integrations with [LangChain](https://docs-v3.activeloop.ai/examples/rag/langchain-integration) and [LlamaIndex](https://docs-v3.activeloop.ai/examples/rag/llamaindex-integration).
* Run computations locally or on our [Managed Tensor Database](https://docs-v3.activeloop.ai/examples/rag/managed-database).
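The core workflow behind the bullets above — storing embeddings next to their metadata and retrieving the closest matches by similarity — can be sketched in plain Python. This is a conceptual illustration only, not the Deep Lake API; the class and method names here are invented for the example:

```python
import math

class MiniVectorStore:
    """Toy in-memory vector store: embeddings plus metadata, cosine search.

    Illustrative only -- not the Deep Lake API.
    """

    def __init__(self):
        # Each row pairs an embedding vector with a metadata dict
        # (text, JSON-like fields, a pointer to an image/audio file, etc.).
        self.rows = []

    def add(self, embedding, metadata):
        self.rows.append((embedding, metadata))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def search(self, query_embedding, k=1):
        # Rank stored rows by cosine similarity to the query embedding
        # and return the metadata of the top-k matches.
        ranked = sorted(
            self.rows,
            key=lambda row: self._cosine(query_embedding, row[0]),
            reverse=True,
        )
        return [meta for _, meta in ranked[:k]]

store = MiniVectorStore()
store.add([1.0, 0.0], {"text": "doc about cats"})
store.add([0.0, 1.0], {"text": "doc about dogs"})
print(store.search([0.9, 0.1], k=1))
```

In a real RAG app, a hosted vector store replaces this toy class, an embedding model produces the vectors, and the retrieved metadata is passed to the LLM as context.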
