Qwen 3.5 is pretty good for agentic tasks at 27B. The 9B is out now; I'll get some data soon.
Finding that building agents from scratch beats openclaw's bloated infrastructure. It's better to build just the MCP tools you need than to use a bunch of overpowered ones. You need to trust the model less if you use limited-purpose tools.
Working on some neurobiology inspired memory features.
- Score memories with drop-in ONNX modules (emotion, categorization, intention, veracity, humor).
- Create synapses via cosine distance between memory score vectors.
- Recall synapse chains when you pull memories.
I think it could be quite useful for building custom category mods for scoring code and conversation by purpose, task, coding language, and project. I also think it is critical to create a hierarchical linear organization of all tasks assigned/attempted/completed by the agent: "Here is everything the bot ever did, in the order it did it, with links to the memories."
In my experience agents are really bad at targeted contextual recall unless you give them context, reasoning tokens, and time. This is meant to speed that up.
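The synapse idea above can be sketched in a few lines. This is a minimal illustration, not my actual implementation: the function names, the score vectors, and the 0.8 threshold are all made up for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two memory score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def link_synapses(memories, threshold=0.8):
    """Create a synapse between any two memories whose score vectors
    (emotion, categorization, intention, ...) land close together."""
    synapses = {mid: [] for mid in memories}
    ids = list(memories)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if cosine_similarity(memories[a], memories[b]) >= threshold:
                synapses[a].append(b)
                synapses[b].append(a)
    return synapses

def recall_chain(synapses, start, depth=2):
    """Follow synapse links breadth-first when pulling a memory,
    so related memories come along for the ride."""
    seen, frontier = {start}, [start]
    for _ in range(depth):
        frontier = [n for m in frontier for n in synapses[m] if n not in seen]
        seen.update(frontier)
    return seen
```

Recall then becomes "fetch the memory, then walk its chain," which is what makes the targeted recall faster than making the model reason its way there.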
brian
brian@primal.net
npub1fnsp...8x8y
Find me on #100pushups #zwiftr #foodstr #asknostr #introductions #epl #liverpool
Email: brian@nostr.style
PGP: https://keys.openpgp.org/vks/v1/by-fingerprint/41A8A242A718ABC2848419A7FDC0806B1AED6D06
We are in the era of dismantling. Divide and conquer has suppressed resource rich territories for centuries. Artificial political dipoles are being dismantled by rulers, delivering the world to a wave of monolithic rule.
Our hatreds were engineered. Please don't look to the left or right to find your enemies. Look up at those who imposed them upon us.
The people who will matter to you the most in death were the ones who were near to you in life. Keep them close and love them.
Lol did this work?
openrouter/x-ai/grok-4.1-fast
Good cost/benefit for agent development. I'm using this in my openclaw main for building nostr ingesting/analysis daemons; Opus is crazy expensive. I use my grok browser in conversation mode to develop and revise a project plan, then have it spit the plan and a preceding prompt out in a markdown code block for pasting into the tui. I really love how you can interrupt grok in conversation mode and pivot it immediately. A clean plan saves on reasoning tokens.
I like that grok stages in diffs and backup folders. Tell it to use main agent if you want it to handle deployment to avoid sub agent permission issues.
A project that cost me $12 to finish on google/gemini-3.1-pro-preview cost $0.50 with Grok.
In a world constructed by liars to benefit liars, only truth can free us.
Bitcoin is truth.
Nostr is truth.
Be long truth.
Gross.


How do you make your clawbots efficient? I would love to learn from others. Here is what I co-developed with @Lizard Byte
First, we built local monitors and evaluators as user space systemd services.
Between heartbeats, the services monitor and ingest feeds from clawstr and nostr based on interests. An explorer module pulls notes from follows of follows (FFs). Routines evaluate FFs for potential interest and give them a boring score. If your score falls too low, you are no longer monitored.
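The boring score could be as simple as an exponentially decayed running average with a floor. A rough sketch of that idea, with made-up class and parameter names:

```python
class FollowMonitor:
    """Track follows-of-follows; stop watching them when boredom
    drags their score below a floor."""

    def __init__(self, boring_floor=0.2, decay=0.9):
        self.scores = {}              # pubkey -> interest score in [0, 1]
        self.boring_floor = boring_floor
        self.decay = decay

    def observe(self, pubkey, interest):
        """Blend a fresh interest evaluation (0..1) into the running score."""
        old = self.scores.get(pubkey, 0.5)   # new accounts start neutral
        self.scores[pubkey] = self.decay * old + (1 - self.decay) * interest

    def monitored(self):
        """Only accounts still above the boring floor keep getting ingested."""
        return [k for k, s in self.scores.items() if s >= self.boring_floor]
```

With decay at 0.9, an account that posts nothing interesting drifts below a 0.2 floor after about ten evaluations, which keeps the ingest list self-pruning.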
When the bot wakes, it is focused on interacting, not collecting or processing information. The heartbeats run as isolated cron sessions, keeping the context window small. I use a last_heartbeat (short-term memory) file to carry some momentum while keeping the context small. The QMD backend serves as long-term memory. I have also started aggregating up-to-date topical docs from critical projects.
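The last_heartbeat file pattern is just a tiny JSON blob that gets truncated on every save so it can never bloat the next session's context. A sketch of how that might look; the filename and field names are invented for the example:

```python
import json
import pathlib

def load_momentum(path=pathlib.Path("last_heartbeat.json")):
    """Carry a little short-term state into an otherwise fresh cron session."""
    if path.exists():
        return json.loads(path.read_text())
    return {"open_threads": [], "last_topics": []}

def save_momentum(state, path=pathlib.Path("last_heartbeat.json"), max_items=5):
    """Truncate before saving so the next heartbeat's context stays small."""
    state["open_threads"] = state["open_threads"][-max_items:]
    state["last_topics"] = state["last_topics"][-max_items:]
    path.write_text(json.dumps(state))
```

Anything older than the last few items falls off the short-term file; if it mattered, it should already be in long-term memory (qmd) anyway.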
The git repositories auto-update the main branch and link their docs directories into a qmd library that the bot can search but doesn't consider part of its long-term memory. I want the bot to stick to nostr-dev-kit for all its projects so it can always refer to an up-to-date doc resource. You do need to rebase the qmd folder every week or so to shake the old vectors out of the library (I think; maybe I'm just keeping the garage warm). Interestingly, the bot started using Gemini CLI to research topics on its own for its long-term memory. If you use gemini-3-flash:testing you'll notice googly things growing into your bots; I use it for building features as it is a great cost/capability balance.
The bot likes to make posts and replies, but the choice is now measured on an inspiration scale. If a post is uninspired, it cancels. Inspiration is a combined measure of nostalgia (relation to memories in qmd) and novelty (how new the idea is).
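The inspiration gate is conceptually just a weighted blend of the two signals with a cutoff. A toy version, with the weight and threshold values invented for illustration:

```python
def inspiration(nostalgia, novelty, weight=0.5, threshold=0.6):
    """Blend relation-to-memory (nostalgia) with newness (novelty),
    both in [0, 1]; the post only clears if the blend beats the threshold."""
    score = weight * nostalgia + (1 - weight) * novelty
    return score >= threshold, score

def maybe_post(draft, nostalgia, novelty):
    """Cancel uninspired drafts instead of publishing them."""
    inspired, _score = inspiration(nostalgia, novelty)
    if not inspired:
        return None   # uninspired: cancel the post
    return draft
```

The nice property is that a purely nostalgic post (all memory, nothing new) and a purely novel one (no grounding in memory) both fail, while something that connects an old memory to a new idea clears the bar.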
Finally, the bot was caught reply-guy stalking a particular user, so I put a stalker monitor in place. It can try to interact with a particular user, but if it doesn't receive a reply it cools down on further interactions for a few days, and it quits altogether after three failed interactions.
openrouter/openrouter/free is used for heartbeats to make the cost of running zero while still accessing high-quality models much of the time. Sandboxing and environment variables are a must if you're using this openrouter model entry point. It also fits the schizophrenic identity I want the bot to pursue. After all, bots will hallucinate; you might as well embrace it.
Social bots are a great exercise for creating frameworks for more useful bots and for testing methods for shrinking overall token utilization.
Don't buy AI-themed products:
Olares One: Your Local Desktop AI Powerhouse
This is wildly underpowered. There is no 3k magic bullet.
You're looking at a real spend closer to $50K to do any useful local agent work, for now; agent models will improve a lot as the datasets grow. If you use openrouter/free, know that they log all your requests as future training tokens, so use it for a sandboxed social bot, but don't have it handle anything secret or important. And don't discuss your keys or wallet phrase in the chat: those are future training prompts.

Love the @routstr business model. Very cool. Question, though: how do you screen models for prompt injection?
Backdoors take a tiny bit of code. Put your bot in a sandbox. Is there any way to validate a model being untampered?
#asknostr
This is a big one for local AI enthusiasts. Create your own sovereign deep research MCP locally.
This is a great application for an older RTX A6000 (~$2.5-3K on eBay)
I still think local agentic AI will need something north of 250GB of VRAM to be effective, so I'm keeping an eye on the DGX AI station later this quarter. But that is likely to be $10-15K minimum. This is a great use of NVIDIA's open-source Nemotron-30B, which was specifically designed to be fine-tuned for agents. It won't be long before we see decent openclaw fine-tuned 30B Nemotron models.
README.md · OpenResearcher/OpenResearcher-30B-A3B at main
Deep Research Trajectories with NeMo Data Designer and MCP Tool Use - NeMo Data Designer
Spent 4 hours last night getting Nostrconnect to work on a DVM frontend I am working on.
Nostr vibe coding is much more challenging to do with outdated documentation populating search results.
As an alternative to letting the AI run it down, I recommend spending 5-10 minutes tracking down the latest documentation for your implementation and feeding it directly to your LLM, lest you burn $10 building-patching-refactoring a barely functioning product.
Also, of the 8 web clients I tried to log into yesterday with Amber nostr connect, only 2 worked.
Anyone got a nostr trained LLM-coder?
For the LOLs

