It’s true. Got on Replit the other day for comparison and it’s a very different experience once you’re used to Shakespeare.
I don’t code, btw. Not a dev.
nostr:nevent1qqsp0z2geqmtg4n5sjlc84nef5x067pnt3u985zpzzc4mdzh5zkg8uqpz4mhxue69uhkg6t5w3hjuur4vghhyetvv9usgq8lkj
Replies (12)
How do you download Shakespeare to run locally?
Had a look on the website and did a search on Google, but couldn't find any option to download.
It doesn't help that there's a dozen other AI writing tools with the exact same name.
If I could run it locally instead of paying money for credits, it would definitely be a lot more useful.
Don't get me wrong, I don't mind paying, but paying repeatedly for absolutely nothing, or a broken app at best, is just too frustrating.
If I could just buy a copy of Shakespeare to run locally and only pay once instead, that would be great.
It happens automatically when you visit shakespeare.diy for the first time.
Then it works offline from then on. Disconnect your internet and it will still load.
🤔 I get a connection error when I disconnect my internet


For fully local functionality, you need to run something like Ollama on your device. You can then add localhost:11434/v1 as a custom provider in Shakespeare and it will run 100% on device. In your current setup, Shakespeare is running on your device, but the AI provider isn't. https://soapbox.pub/blog/shakespeare-local-ai-model
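To make the custom-provider part concrete, here's a small sketch of my own (not from the blog post): Ollama exposes an OpenAI-compatible API at localhost:11434/v1, so a plain chat-completions request against that base URL is all a provider needs to send. The model name below is just an example; substitute whatever you've pulled with `ollama pull`.

```typescript
// Minimal sketch: talking to a local Ollama instance through its
// OpenAI-compatible endpoint, the same base URL you'd give Shakespeare
// as a custom provider.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      // Example model name (assumption): use any model you've pulled locally.
      model: "qwen2.5-coder",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

askLocalModel("Write a hello-world Nostr client").then(console.log);
```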
Note that to run local models you need a computer with a very powerful graphics card, and you still won't get close to the level of performance of models like Claude or even GLM. You need $1M worth of graphics cards, not even counting electricity, to get something remotely close to GLM.
So when you say it's running on your own infrastructure you just meant the web browser?
I thought that was just to train them; that's crazy that it still takes that much to run an LLM.
I thought that Shakespeare was its own SLM built for coding Nostr clients.
That is to train them. You need more like a few thousand dollars of graphics cards to max out LLM performance for a single user running today's existing models. And we're starting to get pretty useful stuff in the 4-32GB range (typical consumer devices).
Meant 4-32GB of memory; forgot to specify.
Yeah, that's more what I assumed it would be like by now.
