I'm running a local LLM via Ollama on a MacBook Pro with an Apple M4 Pro chip and 48GB of RAM, and I'm wondering which model would suit it best. When I ran Qwen3:30B, memory usage climbed to about 31GB, so I think I could handle a somewhat larger model, but with so many options I'm not sure which one to pick. Back-of-envelope sketch of how I'm estimating headroom below.
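For context, this is the rough arithmetic I'm using to judge what fits (a sketch, not a benchmark: the 8GB macOS reserve, the 4-bit default quantization, and the 6GB KV-cache/context overhead are my guesses, and the model tags are just candidates I'm weighing):

```python
# Back-of-envelope memory estimate: quantized weights + guessed runtime overhead.
# Only the 48GB total and the ~31GB I observed for qwen3:30b are measured values.

def est_weights_gb(params_b: float, bits: int) -> float:
    """Approximate weight footprint in GB for a model quantized to `bits` bits."""
    return params_b * 1e9 * bits / 8 / 1e9

ram_gb = 48
macos_reserve_gb = 8          # assumption: RAM left for macOS and other apps
overhead_gb = 6               # assumption: KV cache / context / runtime overhead
budget_gb = ram_gb - macos_reserve_gb

for tag, params_b in [("qwen3:30b", 30), ("llama3.3:70b", 70), ("qwen2.5:72b", 72)]:
    weights = est_weights_gb(params_b, bits=4)   # Ollama defaults are ~4-bit quants
    verdict = "fits" if weights + overhead_gb <= budget_gb else "probably too big"
    print(f"{tag}: ~{weights:.0f} GB weights at 4-bit -> {verdict}")
```

By this estimate a 4-bit ~70B model lands around 35GB of weights before overhead, which looks tight on 48GB, so I'd especially welcome pointers to models in the 30B-70B gap (or MoE models whose active parameter count keeps them light).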
#devstr #asknostr