I heard a lot about Tmux and tried to use it a few times, but had no luck
Why LiteLLM? I need to run the model on a server and expose a local OpenAI-compatible API so I can use the model in AI Assistant.
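To make that concrete, here is a minimal sketch of the client side once the LiteLLM proxy is up: any OpenAI-style client can point at it. The base URL (LiteLLM's default port 4000), the API key, and the model alias `local-llama` are assumptions for illustration, not values from this setup.

```python
from openai import OpenAI

# Point the standard OpenAI client at the local LiteLLM proxy instead of api.openai.com.
# Base URL, key, and model alias below are placeholders for this sketch.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="anything")

response = client.chat.completions.create(
    model="local-llama",  # hypothetical alias registered in the LiteLLM proxy config
    messages=[{"role": "user", "content": "Say hello from the local model."}],
)
print(response.choices[0].message.content)
```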
Since we built `llama.cpp` from source, we can now run our models. Different models need different settings, so always check the model card before running one.
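As a sketch of what those per-model settings look like in practice: the sampling values below stand in for whatever the model card recommends (they are placeholders, not real recommendations), and the example assumes a `llama-server` from the fresh `llama.cpp` build is already listening on its default port 8080.

```python
import requests

# Sampling parameters like these come from the model card; the numbers here are
# placeholders, not recommendations for any specific model.
payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.6,
    "top_p": 0.95,
    "max_tokens": 256,
}

# llama-server exposes an OpenAI-compatible chat endpoint; localhost:8080 is assumed.
resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```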
AI models are getting smaller and are now more broadly available to run on consumer-grade hardware. You can run Ollama or LM Studio for easy model testing and integrations; for example, you can connect them to JetBrains AI Assistant.
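For the "easy testing" part, here is a quick sketch of talking to a locally running Ollama from Python, assuming the Ollama service is up on its default port and a model such as `llama3.2` has already been pulled.

```python
import ollama  # the official Ollama Python client

# Assumes `ollama pull llama3.2` was run beforehand and the Ollama service is running.
reply = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Give me a one-line haiku about laptops."}],
)
print(reply["message"]["content"])
```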
Adding cool search to my blog. Part 2
