In this part we explore the https://github.com/VectifyAI/PageIndex approach and build a tree for our docs with LLM-based traversal.
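To make the idea concrete, here is a minimal sketch (not PageIndex's actual code): every docs section becomes a tree node carrying an LLM-written summary, and at query time an LLM decides which branch to descend into. The `pick_child` parameter is a hypothetical stand-in for that LLM call.

```python
from dataclasses import dataclass, field

@dataclass
class DocNode:
    title: str
    summary: str            # short LLM-written summary of this section
    content: str = ""       # raw text, kept on leaf nodes
    children: list["DocNode"] = field(default_factory=list)

def traverse(root: DocNode, question: str, pick_child) -> DocNode:
    """Descend the doc tree, letting the LLM choose a branch at each level.

    `pick_child(question, children)` is a hypothetical LLM call that returns
    the index of the most relevant child, or -1 to stop at the current node.
    """
    node = root
    while node.children:
        idx = pick_child(question, [(c.title, c.summary) for c in node.children])
        if idx < 0:
            break
        node = node.children[idx]
    return node
```

The appeal over plain vector search is that the model sees section titles and summaries in context, so it can reason about where an answer lives instead of relying only on embedding distance.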
Symfony Docs RAG experiments. Part 1: a simple RAG implementation with embeddings and vector search.
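A minimal sketch of that Part 1 setup in Python, assuming the OpenAI embeddings API and an in-memory cosine-similarity search; the embedding model name and the example chunks are placeholders:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    # "text-embedding-3-small" is just one possible choice of model
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# chunks: pre-split Symfony docs sections (the splitting strategy is up to you)
chunks = [
    "Routing maps an incoming URL to a controller...",
    "Services are configured in config/services.yaml...",
]
doc_vectors = embed(chunks)

def search(question: str, k: int = 3) -> list[str]:
    q = embed([question])[0]
    # cosine similarity between the question and every chunk
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]
```

The top-k chunks then get pasted into the prompt alongside the question; that is the whole "simple RAG" loop before any tree structure enters the picture.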
I heard a lot about tmux and tried to use it a few times, but had no luck.
Why LiteLLM? I need to run the model on a server and have a local OpenAI-compatible API, so I can use the model in Assistant.
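A quick sketch of the client side, assuming a LiteLLM proxy is already running locally (4000 is its usual default port; the model alias and API key depend entirely on your proxy config):

```python
from openai import OpenAI

# LiteLLM's proxy speaks the OpenAI wire format, so the stock OpenAI client
# works against it; only the base_url changes. Values below are placeholders.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-anything")

resp = client.chat.completions.create(
    model="my-local-model",  # whatever alias you registered in the LiteLLM config
    messages=[{"role": "user", "content": "Hello from the local setup"}],
)
print(resp.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, Assistant (or any other OpenAI-aware tool) only needs the base URL swapped to point at the server.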
Since we built `llama.cpp` from source, we can now run our models. Different models need different settings, so always check the model card before running one.
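For example, here is a hedged sketch of launching the freshly built server binary from Python; the binary path, model file, and flag values are placeholders you should take from your own build and the model card:

```python
import subprocess

# Launch the llama.cpp HTTP server built earlier. A CMake build typically
# puts binaries under build/bin/, but adjust the path to your setup.
subprocess.run([
    "./build/bin/llama-server",
    "-m", "models/your-model.gguf",  # path to the GGUF you downloaded
    "-c", "8192",                    # context size the model card recommends
    "-ngl", "99",                    # layers to offload to the GPU, if any
    "--host", "127.0.0.1",
    "--port", "8080",
])
```

The context size and chat template in particular vary per model, which is exactly why the model card check matters before starting the server.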
