Thread

I'm running a local LLM via Ollama on a MacBook Pro with an Apple M4 Pro chip and 48GB of RAM, and I'm wondering which model would suit it best. When I ran Qwen3:30B, memory usage went up to about 31GB, so I could probably handle a somewhat stronger model, but with so many options I'm not sure which one to pick. #devstr #asknostr
2025-10-07 09:16:50 from 1 relay(s)
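
For a rough sense of what fits in 48GB: a model's resident memory is roughly its parameter count times bytes per weight at the chosen quantization, plus KV-cache and runtime overhead that grows with context length. Here is a minimal back-of-the-envelope sketch in Python; the bit-widths are typical values for common GGUF quantizations, and the 1.2x overhead factor and the 75% usable-memory cutoff are assumptions, not measured constants:

```python
# Rough memory estimate for local LLMs:
#   weights_gb ~= params_in_billions * (bits_per_weight / 8)
# plus headroom for KV cache and runtime buffers.

QUANT_BITS = {
    "Q4_K_M": 4.8,   # ~4.8 bits/weight on average (approximate)
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
    "FP16": 16.0,
}

OVERHEAD = 1.2  # assumed factor for KV cache and buffers, not a measured constant

def estimate_gb(params_billions: float, quant: str) -> float:
    """Approximate resident memory in GB for a quantized model."""
    bytes_per_weight = QUANT_BITS[quant] / 8
    # billions of params * bytes/weight gives gigabytes directly
    return params_billions * bytes_per_weight * OVERHEAD

if __name__ == "__main__":
    budget_gb = 48  # unified memory on the machine in question
    usable = budget_gb * 0.75  # assumed fraction usable by the GPU on macOS
    for params in (14, 30, 32, 70):
        for quant in ("Q4_K_M", "Q8_0"):
            est = estimate_gb(params, quant)
            verdict = "fits" if est < usable else "tight/too big"
            print(f"{params}B @ {quant}: ~{est:.0f} GB ({verdict})")
```

Comparing an estimate like this against what `ollama ps` actually reports for a loaded model is a quick way to sanity-check the headroom before pulling something larger.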