Thank you for your testimony, Calle. I don't want to miss the AI train, but I don't know where to start. Any advice? I'd like to learn how to run a local LLM, for example.

Replies (6)

nix's avatar
nix 7 months ago
In my experience local LLMs don't cut it. Just not good enough in most cases. So you need to be ready to use commercial ones, and by doing so, train them and accelerate their adoption.
Jonny Quest's avatar
Jonny Quest 7 months ago
Try Claude Code. It works as a CLI, so you can still use vim (or emacs, if you really hate your pinky) or whatever editor you want. No plugin needed, although they're available. You'll also want to learn about prompt engineering and its twin sister, context engineering [1]. From my experience, most people who have a bad time with AI assistants are just giving bad prompts. Exceptions of course, and there's plenty of valid criticism. These things aren't infallible. 1.
Check out Ollama. Not the best performance, but it's rather easy to use and will work even if you don't have much VRAM/GPU compute.
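Getting started with it is a few commands. A minimal sketch, assuming you've already installed Ollama from ollama.com (the model name here is just an example; pick whatever fits your hardware):

```shell
# Download a model (smaller tags like 3B fit on modest GPUs or even CPU)
ollama pull llama3.2

# Chat with it interactively in the terminal
ollama run llama3.2

# Or hit the local REST API it serves on port 11434
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The REST API is handy because front-ends like OpenWebUI and most editor plugins can point at it directly.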
If you're just curious and don't have time for the tech stuff, let me know and I'll give you an account on my system: OpenWebUI, publicly available, 3-4 models installed, up to 32B. Not great, but it's all you get out of a midrange consumer graphics card.