Make Your Offline AI Model Talk to Local SQL — Fully Private RAG with LLaMA + FAISS

Hassan Habib

2 days ago

8,155 views


Comments:

@smg-the-shopify-developers - 16.06.2025 07:25

🐐

@bangonkali - 16.06.2025 09:05

this was awesome.

a few things:

- For testing, I would recommend adding very specific custom information as RAG knowledge, like `my name is sam, and i like the color blue` or `birthdate: sept 1, 1951`, purely for testing purposes, because this information almost certainly does not exist in the LLM. If the test data is likely to overlap with the LLM's training data, it can be difficult to judge whether the answer came from the RAG solution or from the LLM itself. The examples above are just what I could come up with; of course there is plenty of other made-up internal information that will not overlap with LLM training data.

- Line 19 is a somewhat exhaustive (brute-force) query. I think both SQL Server and PostgreSQL have vector support today, so similarity queries can be made at the DB layer. Other than that, this is awesome, though I understand this is just a demo. I would love to see SQL-layer vector queries explored.
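The second suggestion, pushing the vector search down into the database, could look roughly like this in PostgreSQL with the pgvector extension. Table and column names here are hypothetical, the embedding dimension depends on the model used, and SQL Server's vector support uses different syntax:

```sql
-- Enable the extension once per database (requires pgvector installed).
CREATE EXTENSION IF NOT EXISTS vector;

-- Hypothetical documents table with a 384-dim embedding column.
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(384)
);

-- Approximate-nearest-neighbour index so search is not a full table scan.
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- Top-5 neighbours by cosine distance (pgvector's <=> operator);
-- :query_embedding is the embedded user question, passed as a parameter.
SELECT content
FROM documents
ORDER BY embedding <=> :query_embedding
LIMIT 5;
```

This keeps the similarity search next to the data instead of loading every row into an in-process FAISS index.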

👍👍

@UnknownUser-i1j - 16.06.2025 10:37

Awesome, thank you for sharing this knowledge!

@mohamedhajjaj2014 - 16.06.2025 21:45

Thanks for sharing; looking forward to your next video, maybe about MCP or multi-agents 😊
