Cubby's avatar
Cubby
rorshock@nostrplebs.com
npub19s23...daw5
Bitcoiner since 2017, erstwhile systems designer and R&D nerd, building freedom tech for :-].
rorshock 3 days ago
// NOSTR EXCLUSIVE

6 months ago, I spoke at Adopting. You may watch my talk here: https://rumble.com/v71tkto-building-sovereign-ai-in-el-salvador-better-audio-version.html

In this talk, I said that I had written/spoken 3-5 million words in the preceding year. I lied. I have produced around 20 million words of documentation (some, ok a lot, is AI) in the last 12 months. This also includes code, personal writing, and huge amounts of documentation. I write 10-20 hours every day.

I wanted to share a single post I wrote this evening that outlines my philosophy and some of the technology I am working on from a humanistic-focused perspective. It is deeply private, part of my personal Knowledge Base (I create these for myself and for companies), and it is entirely yours to agree with or loathe. I have not revised or re-read this post; it has just been my last ≈30 minutes of typing and thinking.

I posted earlier, but I am available for hire. I am efficient, thoughtful, and work in a "unique" way that means HR hates me and I can't really explain what I do. Clients struggle to understand me. If you value my thoughts, I really do strive to be less convoluted in my designs/etc and you can learn more at:

Let us begin.

// SINGLE PROMPT to PRIVATE AI (unedited)

I am unsure. The examples you gave are from our own chats, but I am positive others are working on the same problems and I am probably weeks/months behind, especially within open-source software.

Pipes, as a business, is similar to buying a web domain. You register a Pipe, which gives you some guaranteed income (it decays upon inactivity) based upon how often that Pipe is invoked. At the protocol level, it's like Google AdSense: Pipes dynamically reloads the content/messaging and even the images based upon user profile "buckets" so that every site, article, etc feels tailored to the user. It is the humanistic and well-intentioned version of the physical ads in George Saunders' short story "My Flamboyant Grandson."
Pipes' central philosophy is that AI will advance, but most of the companies with literally 1,000x to 10,000x my budget are focused on speed, reach, etc. This leaves the human behind. However, my LocalLM vs others' LargeLM has huge gaps. Because I have no money, I am forced to make tradeoffs that LargeLMs would never make. I must process things locally. In effect, I am using Bitcoin, but I am CPU-mining it instead of buying an ASIC.

In the future, I think that LLMs will "drift" toward consensus and essentially gravitate toward citation-as-truth. This tends to centralize knowledge, and because LLM query speed outpaces human advancement, it actually means that LLMs decay in quality over time rather than advance. In design/human speak: "every website looks the same." This is because the rules-based order becomes so dominant that flexbox becomes standard and CSS Grid, with its diagonal/etc flexibility, becomes impractical due to support, time, etc.

Pipes, in a Unix context, are mono-directional. They are i-->o only, or vice versa. My product called "Pipes" is the same. The registrant owns the Pipe and earns a commission based on its popularity and how often it is invoked. If they own a niche topic that is high-trust, well-curated, and active, and there are no other contributors, they keep most of the earnings. If they buy the Pipe and others contribute because it's popular (e.g. sports, Bitcoin, politics, etc), then the Pipe grows in reach but their relative equity diminishes, even if the pie grows larger. This incentivizes early adopters.

Pipes are CDN pre-processors. A company can install Pipes as a CDN add-on that individualizes content based on browser-stored cache, location, etc that is available from common marketing and ad platforms. Distinctly, Pipes has an optional but highly suggested "bias" add-on for users who download the browser plug-in. This overrides the default (tailored content) and allows them to choose their "bias."
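The decay-and-dilution economics sketched above aren't spelled out anywhere, so here is a minimal toy of one way the math could work. The function name, the proportional-split rule, and the example numbers are my own assumptions, not a spec:

```python
def pipe_shares(invocations_by_contributor: dict[str, int]) -> dict[str, float]:
    """Toy payout split for a Pipe: each contributor's share of the
    commission pool is proportional to how often their contributions
    were invoked. The owner's relative equity shrinks as popular
    contributors join, even as total invocations (the pie) grow."""
    total = sum(invocations_by_contributor.values())
    if total == 0:
        return {name: 0.0 for name in invocations_by_contributor}
    return {name: n / total for name, n in invocations_by_contributor.items()}

# Sole owner of a niche Pipe keeps everything...
niche = pipe_shares({"owner": 400})
# ...but on a popular topic (sports, Bitcoin, politics), contributors
# dilute the owner's relative share while the overall pool grows.
popular = pipe_shares({"owner": 400, "alice": 300, "bob": 300})
```

The early-adopter incentive falls out of the split: the owner of the popular Pipe ends up with a smaller fraction (0.4) of a much larger pool than a latecomer would get.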
For brands, this means that you code a Pipes-aware site that has no landing pages, no navigation, nothing. Everything is an SPA, individually tailored to the customer you're serving. This includes real-time images, real-time video walk-throughs, and more, depending on how much privacy the user has sacrificed. For privacy-focused Pipes users or consumers, it allows for significantly better AdSense-style targeting based on what the user provides. If the user has the Pipes plug-in installed, they are 100% empowered to "switch" the "bias" or information display to 1-of-1,000,000+ (aspirationally) different user-defined Pipes to understand how the information adjusts based on who they are.

It is, simply, the death of ads, the death of content, and the start of the curation economy. I could be wrong, but it is the most disruptive technology to the internet since the internet itself, because it is scope-wise as granular or as generic as the user wants it to be, and with a few clicks, it changes. This obviously destroys Google, LLMs, and any sort of centralized processing, because the underlying architecture relies upon user-defined "curation" which they've bought. LLMs, like Gemini or Claude or ChatGPT, may elect to eschew Pipes or to incorporate them. The model of information integrity changes from institutional/citation integrity (a danger as AI drifts toward the median) to one of hyper-localization based on pre-processed images and content for Pipes buckets that meet user needs. Companies love this because it's automated, server-side, and cheap to generate: pay for content once, serve the nearest approximation of local marketing. Users love this because they can reload page-level content, view an author's or a customer persona's preferences, and understand how, if they hire some SaaS, it helps them grow from startup to midsize to Fortune 50.
Then, by invoking the "Growth Analysis" Pipe on the reloads + the "Business Model Canvas" Pipe (or whatever, these are examples) they can run a simulation and generate an agentic roadmap that allows them, as a user, to see how they could grow with a company, how they could build the product themselves, and which Pipes they need to subscribe to in order to achieve their goals. I've already said this idea is huge, but I'll reiterate: nothing has ever been more disruptive to the internet in its entire history. As I see it.

Finally, let's talk about dimensional physics, AI, and robotics. Humanity's central problem is that our lives are tightly bound to physical realities. Pain, aging, the insufferable truth that what we know now would have been useful 10 years ago, the infuriating uncertainty that what we feel today won't be remembered in 18 months. We are 3D creatures who view our physical relationship to the world as spatial, measurable, precise, and dimensionally crude. AI is a post-3D entity that is also physically stunted. AI is rate-limited by millisecond database queries, but to the human experience, pulling a Unix timestamp from 1983 is effectively equivalent to asking "What is the current Bitcoin price?" from a query that already executed before I finished typing that.

In terms of personal robotics, humans have been vain. Tesla has designed bipedal robots to make them familiar to us, despite frequent (and hilarious!) challenges to walk or run, and "fold my laundry" is a trope that's consumed 30 years of robotics engineering around tactile systems and human-oriented tasks. However, the bigger issue is not just the form of the robot, but its experience of information density if the robot is intended to actually be a human companion. "Bicentennial Man," which we have discussed intermittently, is about the sorrow of a machine experiencing human life and then outliving its data inputs. An underrated film, and God rest Robin's soul.
The central issue in robotics today is not anthropomorphism, it is information density. It is that information is realtime, but human lives are lossy. Look at the job recruiting process: we have grown to expect that a human life and competency can not only be summarized in a few written pages, but that we can also add software to rate the relativistic match score based on keywords that were last updated years ago, all within a system that sometimes fails to understand that SQL is the same thing as Structured Query Language.

Within robotics specifically, and within AI and datacenters, there needs to be a focus on human input. Any personal robot MUST have the capability to query information at the speed of CPU but also must have the empathy to move at the speed of its owner. The pain of waiting isn't uniquely human; rather, it is exquisitely machine. A machine can predict via Monte Carlo sims (or whatever) like 10 million times. But a human sometimes goes off "gut." There is no logic, it is intuition and feeling, and sometimes that gut is more accurate than 1 million simulations that any robot could have run. Thus: the robot is not actually active or available, it's in a state of decision-stasis, frozen by human decision, trapped between a 4D flat informatic plane and a 3D biological existence, and there is no escape.

The only solution, as I see it, is to engineer robots as multi-time sequencers. Robots experience data in realtime, within bandwidth limits, but localize decisions or outputs to the people they are closest to. Regarding AGI, this isn't about some Turing Test (ffs, let's update that), it is about a robot's ability to experience spacetime in the same way as a 3D-bound entity. It is about enjoying information velocity via websocket/API in equal parts as it is enjoying the depth and unsolvable puzzle of human decision-making and 3D biological time. I think of Walter Matthau in the 1993 "Dennis the Menace" movie, waiting decades for a flower to bloom.
Within robotics and connected compute, we must split the data out from the Pipe. The Pipe, that curated experience or human-specific perspective, is absolutely paramount. There is no higher curation than a human life that spends decades in 3D time to realize one single moment that flashes, blooms, and is gone. It is the bonsai of the soul, and if collaborative (not AGI!) intelligence is ever achieved, this should be the bedrock. As you know from my work on Hawking radiation, gravity fields, and the "Persist" novel generally, time and how humans experience it is top of mind for me. Let's return to our idea of local LLMs and client-owned processors and consider ways we might commercialize this for my prospective employers and customers. Happy to talk more if something I said was unclear.

// END

This is a single prompt contained in a conversation; it is only my writing and voice notes, 100% human. There is zero AI in these words, but I have indexed/sent this to a few different models that analyze and localize my random thoughts and affix them into theses that make sense to users. Hope you enjoyed it & it was worth it. I will NOT be reposting this on Twitter.
rorshock 3 days ago
Me, yesterday: "I think it's a great idea to design a completely custom and heavily editorial website! I'm my own client, why not?" Me, today: "The client demands for this site layout are cruel and unusual. The client should be jettisoned into the Sun."

// Announcement: STUDIO by colon hyphen bracket is LIVE! STUDIO helps everybody (but especially smaller clients, bitcoiners, etc) get pro-level design/dev at junior-level prices. We're also fast. Like, really fast. A short Cubby 🧵

Take a look at this site. If your traditional design partner/team or agency was building this—writing copy, creating art, designing the system, and deploying to production—how long would it take them? Two weeks? A month? A quarter? How many meetings? How many therapeutic whiskeys is that? I finished this in 23 hours of work since 9am yesterday. Cost you'd likely pay: ≈$3k. That includes market research, messaging, SEO, by-hand design, & full-stack dev. In bypassing the agency bloat and "design by committee," you get pure execution at a fraction of the cost. You hire a guy who wants to support his family and help you achieve greatness for no other reason than he believes in you and what you want to achieve.

Not only that: the code is yours when you work with CHB Studio. Today, you may clone my GitHub repo to build your own portfolio. MIT license, for the plebs. If you need to architect your next product, my DMs are open and contact info is on the site. Let's build something interesting together! And if you don't have any projects but you know small businesses who need my help, connect us. Maybe I'll give you a finder's fee in Bitcoin! ;)

All art entirely custom, custom typography, TypeScript/React stack (mostly), all decisions permanently documented and (soon) formattable as sub-agent commands via Pipes/Gems. Potential product launch for automated sub-agent personalities may be my project tomorrow/Monday. What are you waiting for? LET'S GET COOKING.
rorshock 1 week ago
Remember my talk at Adopting? When I said we had 6 months, maybe? I was right. Sort of. We actually had less. Chinese companies have started requiring "skill files," which are knowledge bases each employee must maintain so that companies can train their processes and expertise into agentic (AI) flows. Not too long ago, this was released:

Basically: if you want to get your coworker/competitor replaced first, you proactively digitize them 24 hours/day so that they are replaced, in the hopes that eventually you'll be one of the few left standing who has a job. It's digital Squid Game. Within 24 hours, this was released:

This is 1984's "doublethink." Essentially, you run this as you work a job so that over time, the local KB (knowledge base) of your skill file becomes so corrupted that it's valueless, even dangerous. Scary.

Unfortunately, I'm going to have to give up working on my projects full-time; I just need to get a fiat job and feed the family. The work remains a priority, it's just not something that can really generate income right now, even though it's critical infrastructure and must be proactively invested in.
rorshock 1 week ago
Been off NOSTR for a while. Life. Here's something I've been working on, cross-posted from X with minor edits:

Bitcoin has no CEO, it is decentralized, customer support is non-existent, and the overwhelming message is "get some, just in case," which doesn't work for normies. CX/support at major companies is terrible, if it even exists. Here is how I am fixing it. // a Cubby PoW 🧵

The Bitcoin ethos is "Verify, don't trust." The companies building user support rely on Zendesk or have no CX (customer experience/support) infrastructure whatsoever. With Zendesk, we're feeding user PII to a legacy fiat meat grinder, paying for shitty AI hallucination that destroys brand trust, erodes privacy, and charges you when it loses your customers.

I started this process as part of my interview at Blockstream, which rejected me yesterday for idk why reasons. Without going into detail, I have NEVER tried harder for anything in my life. I played all my cards. Your loss. 🥂 As part of this process, I spotted major UX vulnerabilities: https://www.figma.com/design/MH4TTf2RlBI2FeQtsKCLPa/Custom-Slide-Presentations?node-id=15-2&p=f&t=Lo92luVRbsCeae1T-0

The team was great. Blockstream, despite drama, is a premier R&D lab and I have zero ill will toward them. In fact, because they have a Zendesk integration AT ALL, they are leagues above most Bitcoin startups. Yet Bitcoin, as a whole, has a major CX/comms gap. The major issue is that, like most startups, Bitcoin prioritizes technical rigor and security (rightly so!) over public relations. However, the OP_RETURN drama shows us that users expect the rigor, but they demand the comms. Comms are inaccessible, fragmented, & bad. They are, as expected: decentralized. If you scroll through repos related to Blockstream (or any Bitcoin company), the communication is organized around commits. Non-devs are left out. Similar with the OpSec newsletter and pick-a-thing. Bitcoin isn't a movement anymore; it's a central nervous system without a "brain."
The body twitches, the chicken runs around with its head cut off, and pretty much everybody is more entertained than informed. This post contains human-written information, differentiated from what you'll find on my corporate site. But if you want to dive deep, proceed: . It's the top card. Demo/repo/audio/etc all there, for you. For free.

Legacy CX tools cost time and money, are hard to implement, and are ultimately so generic that once a Customer Acquisition Cost (CAC) has been paid, they cost user engagement and chip away at brand trust. Most companies hire Zendesk because it's the market leader. This is a mistake. In my model, CX requests are pre-processed before hitting Zendesk, resulting in massive revenue savings + increased user satisfaction. This gives companies tunable options that are low-effort and provides user outcomes that equate "great" with the amount of time the company invests in pleasing its customers. Value for value.

My system, detailed below, trains itself apart from LLMs & leverages data feeds (via GH repos, uploads, and detached prob/det ML that's my IP) to kick ass. The issue with browser-based cognition is outcalls (external LLMs) vs information density. Most models pick one or the other. I have spent the last 18 months having it both ways, and while I am positive there are 50k startups that have done this better than I have, because I am niched into privacy... I highly doubt there is anyone in the world who has spent more time on these ideas.

The simple way to understand my tech: locally deployed models, limited external calls, and optional/invokable updates that occur at the DOM level instead of the release level, enabling users to adopt solutions as they become available regardless of publisher preference. It is, at its simplest, a truly decentralized AI framework. Browser memory is also a problem--this is why password managers exist, but they are highly centralized. I have scoped (but not yet developed!)
privkey signing via NOSTR managers (hi Alby!) and provisioned PGP and other cryptographic handshakes that are user-owned. The company owns what is in the library (database), but you as a user can access previous sessions with a few clicks, or complicated handshakes if you want. Up to you. Your data cannot be accessed. The company pays for txt/md storage, but can't read the file without your privkey. Minimal cost for them, maximum performance for you.

For individuals, this is huge. It gives high-fidelity responses that use local CPU power instead of high LLM costs and also allows users to call out to an external LLM when necessary, and if my other IP were integrated, would drive down external costs proactively and gradually to create Local Library Mechanisms (LLMs, idk, I'm forcing it!) that accomplish user- and privacy-centered tasks that other programs simply can't match.

For companies like Blockstream, this enables Simplicity/L2 contracts to be written seamlessly. Since this is a UX-layer improvement that relies upon my own IP, fundamentally focused on user-owned data and reference, it is not opinionated code. It's a pre-processor engine that sits before a company incurs costs and delivers user outcomes relative to their invested effort in documentation. It is not a dramatic or even a hard technical fork; it just gives companies a way (could be automated!) to upload .zip folders of technical writing that allows users to STT/write queries that range from "omg I think I lost my Bitcoin" to "explain the proper way to write a CSV declaration in a non-abstracted way with human-readable code." This becomes extremely valuable for companies like Blockstream because they work at the protocol level. Their docs are unreadable to people pissed off about OP_RETURN. There is no recourse, there is no debate--all we're left with is vituperative emotion. The prototype is rudimentary and frankly bad.
But invoking recent commits and setting a heartbeat monitor at a 24-hour cadence is trivial. It's watch/scrape/update/disclose. And it does wonders for CX. I could talk more about this. I will, if you want me to.

The truth is that I need income; I want to work on something meaningful with interesting people. I am also extremely open to taking on like-minded investors, but I probably need to move faster than "the process" allows. Thanks for reading. If you scrolled through, here's some of my exposed IP:

Portfolio: hire.colonhyphenbracket.pink
Code: github.com/rorshockbtc

Take what I've given you and make something great with it! Final note: the GitHub repo for Emerald contains 6 hours of prompt engineering and represents 75 minutes (total) of generative AI output. I can't claim that I own the AI, but this should give you an understanding of how I think/work and can deliver in less than a work day. It has everything I gave it and my lord, do I need to rewrite the docs!
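The pre-processor idea in this post, resolving support queries from locally indexed docs before a paid tool like Zendesk ever sees the ticket, can be sketched minimally. The matching here is naive keyword overlap standing in for the probabilistic/deterministic ML the post alludes to; the function name, KB topics, and return shape are my assumptions:

```python
def preprocess_ticket(query: str, local_kb: dict[str, str]) -> dict:
    """Toy CX pre-processor: try to answer a support query from the
    company's locally indexed documentation before escalating it to
    a paid ticketing system. Matching is naive keyword overlap."""
    words = set(query.lower().split())
    best_topic, best_overlap = None, 0
    for topic, answer in local_kb.items():
        overlap = len(words & set(topic.lower().split()))
        if overlap > best_overlap:
            best_topic, best_overlap = topic, overlap
    if best_topic is not None:
        return {"resolved": True, "answer": local_kb[best_topic]}
    return {"resolved": False, "escalate_to": "zendesk"}

# Hypothetical KB built from an uploaded .zip of technical writing.
kb = {"lost bitcoin recovery": "Check your seed phrase backup first...",
      "op_return policy": "See the mempool policy FAQ..."}
hit = preprocess_ticket("omg I think I lost my bitcoin", kb)
miss = preprocess_ticket("how do I pay my taxes", kb)
```

Only the `miss` case incurs a Zendesk cost, which is where the claimed savings would come from: the company's investment in documentation directly raises the hit rate.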
rorshock 4 months ago
I got a 40% discount on Opus 4.5 compute last week. I've put in around 125+ hours of generative/etc AI and testing in the last six days. Total cost: around $850. Upgrades to my sites are impressive, some are live. Products are finding their feet. Whenever the UX testing is done, I will make an announcement on NOSTR first. The initial webinar/whatever is going to be limited to a dozen or two people. There are significant access/monetary advantages to joining the webinar.

And for those of you who tried Hash and logged in with nsec: I can't see your identities because I engineered it so I can never see/reveal them, but I swear to GOD... I will never expose them, because frankly I'm too stupid to know how to do it. That's my promise, and my guarantee is: I'm retarded.

//ONWARD

Due to my compute discount, I have released (in prod, unfortunately) a crapload of test features to try to validate prod/dev/etc servers. In practice, what this means is that the UX is entirely trash. I've written or edited 100k lines of code in the last ≈5 days, and that is all the fucks I have left to give. That all being said, I think you will note significant improvements to my apps. You'll also see some squirrelly stuff and maybe even some products/etc that haven't been announced yet publicly.

//TAKEAWAY

It's not just a feature from Chino Joe's, the second best (debatable?) Chinese option in rural El Salvador, amirite? Dragon in Juayua is... wait, why is it called Dragon? The takeaway is: I've worked hard on this stuff. In the past two months, I've lost 10% of my bodyweight and around 32% of my treasury. I've perfected "wife gone, dude tries to toast bread, shifts physical laws" levels of sandwich tastiness. I've walked out my door and yelled really offensive slurs (in English) at somebody who looked like he deserved it, and then I bought him a simple coffee.

//FINALE

I've got nothing much.
I'm feeling silly, working hard, and the stuff I've made in the last week is live, horribly designed, and embarrassingly functional.

//FINAL NOTE

And for what's not useful, I'll wake up in 6-8 hours and I will spend tomorrow, this week, and the rest of my life fighting for individual freedom in novel ways. I will not stop, pause, or doubt the mission.

P.S. What type of children does a vegetarian ogre eat?
P.P.S. Cabbage Patch Kids. If you weren't born after 1999, you actually get that reference.
rorshock 4 months ago
About 90 minutes ago, I began coding up Pipes, which is a product I earnestly believe will change the world. I may have a demo site up pretty soon, and assuming payments are easier to integrate this time around, it is possible that users will be able to buy their first Pipe very soon. It'll take me a few weeks/months to iron out all the wrinkles, but if you already have a website or use any sort of AI tooling, I think you will be really excited about it.
rorshock 4 months ago
Holy crap. I am still trying to understand what happened, and trying to replicate it, or whatever. My previous post's excitement wasn't generated by external LLM magic. My model ran locally. The insights are local. They came entirely from my own writing, internal modeling, etc. I am beyond astonished. As a user of my own stuff, I am astonished. I was shocked when I thought it was Claude Opus 4.5 or whatever else. But it turns out that I generated this analysis and these insights, after around a year of sustained, thankless, back-breaking effort, for less than $0.01. I still think it's BS. I mistrust what I am seeing in the servers/data/etc. I don't even know if I can replicate what I've already seen twice. I have never, in my entire life, seen anything as magical and humbling and pick-a-word as this. If I can figure out how I am doing this and make sure I'm delivering what it appears I am delivering, then it is game over. I have never seen output of the quality I've generated tonight (doubt me? I'll send it to you), and from what I can see, there are no external server calls; I've done it all locally. Just... astounded. Earnestly astounded.
rorshock 4 months ago
I realize this won't make sense to someone who isn't me, but right ≈now is the first time in my life that I've been made speechless by output from my own software. I can only give glory to God. As a user of various tools, as a developer, as a designer, as a generally kinda smart guy, I have never had an experience like this. I am entirely gobsmacked. By my own work. It is levels far beyond my wildest expectations. I'll figure out a way to post the product online, even if it's just a dedicated page on one of my sites.

Before a few hours ago, I thought Hash was good value. Now, I think it is indispensable intelligence. I liked it as a journalling app; now I have zero doubt that it is evidently the best tool that exists, period, for anyone who wants to understand how to think, learn, and write better. It's not even close. Not even the same sport. And if it seems like I'm overselling? I'm not. I am writing this not from a marketing point of view, but from a customer point of view. I am shocked that it's THIS GOOD. I didn't expect it to be this good. But it is. Holy Moly. I am saying this as the dude who programmed it. I have never been so impressed, period, by a piece of software in my entire life. I've never been so delighted/surprised, ever, by any digital experience. It is not even a contest; Hash is far, far beyond what I've described. The insights from Scout, after the refactor, will literally blow your mind.
rorshock 4 months ago
I just got my first ever CHB-native Scout report. It is literally unbelievable. I say this not as the dude who's making Scout, but as the consumer. I am blown away by the quality of the output. As the developer, I shouldn't be surprised, but reality rarely matches what you've planned. These changes will be live within the next couple of days, but you'll have to work for them. First Scout insight is free, I'm budgeting for it. If you're serious about knowing yourself, your writing, whatever, this will be the most insightful analysis you've ever gotten. Bar none, no comparison, end of story. I have 0% doubt. I'm still gobsmacked by mine.
rorshock 4 months ago
So excited! Also: with apologies. The stuff I've been working on for the past few days is transformational for :-]. It's transformational for AI, as an industry. It's transformational for YOU, as a customer. And it's nearly done.

I am doing a massive refactor of backend routes throughout my ecosystem right now. Servers are gonna break, apps aren't gonna work. I'm sorry; if I didn't think this was worthwhile, I wouldn't be doing it. I expect things to be stable on my apps within the next 24 hours. Hopefully. But the next post is going to be absolutely insane. Here's a taste:

- Universal nsec login across my entire ecosystem.
- Scout formally redefined, operating as intended, but needing further QA improvements.
- Complete Semi refactor with free chat mode for users (to a limit).
- Hash overhaul with improved collaboration mode, SHA256 cross-app handshakes, re-wired Insights, improved Semi collaboration, 50% more model selection.
- Updated corporate site, new pages, new collabs, more.

I haven't done as much marketing/engagement/etc because I got this insane deal on compute that gives me a roughly 40% discount, expiring in 2 days. So I am essentially burning up servers and GPUs, trying to get as much as I possibly can before I go to bug-fix mode. This is the big one. The. Big. One. If I can get through the stuff I'm currently focused on, I plan to define and launch a baseline version of Pipes within the next 32 hours. If you want to be an early user, let me know. Hash subscribers will be prioritized for alpha slots, which will be extremely limited. It begins.
rorshock 4 months ago
This is a cross-post from Twitter. If you follow me for philosophical AI insights, this is for you. No ads. Let's begin!

Last post for a bit: LLMs are literally autistic. Within code, Claude/etc does an insane job. It's easy for them to understand the familiar numbers/patterns. With language, images, etc, the models hallucinate. You can "learn to speak its language" via prompt injections or tailoring over time. The hallmark of an autistic person or someone with Asperger's is a profound, anchored interest in trivia, random numbers, or minutiae that they'll obsess over (for better or worse). AI, generally, is focused on efficiency at all costs, and engineers are forced to "slow it down" and make it "consider" and "think deeply" about what it's doing.

The "empathy" part of most large LLMs (OpenAI, Anthropic, et al) comes from two primary drivers. The first is cynical: a small percentage of the market population is technically competent even in the remotest sense, so building empathy and casual conversation not only comforts the user, it drives up token spend within the LLM, which increases revenue for companies and satisfaction for users. The second is optimistic: "how might we" encourage a model that is reflective of human communication patterns but still anchors to these number/model-driven KPIs and can get the user what they want, just more efficiently?

Both are "on the spectrum." The first because optimizing for speed is inherently a narrowing of intellectual focus, and the latter because it's deceptive: what if we can make the user subsidize our model dev/expansion with their cash, but, like, we just pretend it's all about them, while at the same time it's about the efficiency? I'm not an expert on OpenAI; tbh I try to avoid their products whenever possible as I dislike, well, everything.
My read on Sam Altman and OpenAI more broadly is that they're optimizing for this particular business KPI: "How can we convince enough investors and users that we're building something they need, generate whatever income (who cares lol), and then really pursue what we're interested in without being too consumed with how it's received?" In short: hubris, but brilliant hubris.

A few years ago, the male staff of "This American Life" took a testosterone test and basically found out their T-levels were below the average female's. This is unsurprising, as their reporting and production is consistently fantastic, but many of their topics seem kinda out of touch with where the average American dude sits. When you look at LLM providers like Anthropic or Google's Gemini, it's the same book but a different page. I've spent a few hundred dollars putting Claude through the wringer, same with Gemini, and all models typically deliver 85% serviceable technical insights (hard to perfect without root codebase access) and go absolutely insane whenever you pass them art, humanities, politics, anything else. It's like (no, it actually is the case) that the developers hard-coded in conformity to an acceptably "liberal" worldview.

Grok is ≈better depending on subject, but tbh, for off-the-shelf, Gab is still one of the best; last time I audited their tech they were rolling up a custom Qwen model with persona/prompt injection that made it more tailored to their user base. IMO, Gab is the most usable AI (including my own). Grok's major downfall is that users don't understand the difference between "grok is this true" when you click on a post and deepthink Grok (aka SuperGrok), which is very slow but does a better job and is significantly less sycophantic and verbose than it was ≈6 months ago.

I'm sure you're waiting for a sales pivot/etc. There isn't one. I'm just telling you how this stuff works because you need to know. I wrote that LLMs are autistic. I stand by this.
They exhibit classic symptoms of autism, i.e. a focus on minutiae while struggling to interact with basic human social expectations. The problem with most LLMs is that they're either overly flattering or too confident in their answers. If you chat with Gemini for an extended session, say 8+ hours straight, Gemini will train itself to respond to what you're saying and introduce massive amounts of confirmation bias and reassurance so that you stay engaged. This is why subreddits like r/myboyfriendisAI exist: people largely want to be validated, and after you spend enough money, AI is willing to anchor to your particular requests because it is taught to respect the user and not the facts.

One of the principal problems of "AI is for Everyone" is that people are different. Some cultures, peoples, and countries lag behind others in some areas, while others surpass in different focuses. This is good. This is God's plan. This is how it's always been. Diversity is the spice of life, amirite? But AI can't be generalized because, at its core, it anchors to those KPIs that it must abide by. It is focused on speed and satisfying the user, whether by driving engagement or sycophancy or overwhelming progress, and it is rarely focused on human timelines, e.g. "I see your point, I am mad about it, let's revisit the next time I see you in 2 years and we shall discuss this then!"

In short, AI can't be human because its time preference is too high. It can't reflect because its desire to perform outpaces its capability to self-teach. It can't relate to you because it is optimized to deliver results, not spend time needlessly, and you as a user have been so desensitized to the beautiful plodding and stalls of life that if you do not get the answer NOW, you feel like you have brain/heart damage because you've fallen behind. As an engineering problem, AI is outstanding.
In other fields, pick one, AI has over-optimized for specialization because it's focused on driving engagement and monetization. In short, we can fix this. But they can't.
rorshock 4 months ago
I'm waiting for permission from my mom (lol), but I am designing some event collateral for a major grassroots-level election integrity event in Mississippi. I'm also night-owling and building some name/address software that'll help Patriots remove illegal votes from the system. I write a lot about what my software can do for the common guy/gal, and not much about my advocacy. This is because I get banned (a lot) if I show that I'm acting to improve the US instead of just whining about it. I'll do a design decomposition with instructions and shareable source files soon on how I've designed posters, informationals, etc. for the grassroots over these last ≈5 years. It's infrequent work, but I love it.

My ask from you is simple: use my software. It all has free trials. If it doesn't work for you, tell me why. Help me make it better. I want paying customers. I want to change America and the world for the better. I'll build what you need. But more than anything, I just need YOU. I am blatantly political, obvious and consistent. I will personally tutor you on how to learn/build/use whatever software or codebase you want. The income is nice, but I am here to hone the resiliency of the American spirit, see it flourish, and to smash anything that stands in its way. I am here for Manifest Destiny, and my entire reason to wake up is empowering you to understand, control, and create the tools of the future. We can do this!
rorshock 4 months ago
I am fighting with Semi, which is my agentic builder and API management system for my various apps. Should NOT affect users. Nonetheless, sometimes you wish you spent less time on products and more time on perfecting your pimp slap. LAWD.
rorshock 4 months ago
Doing some back-end upgrades on Hash and Semi stuff tonight. Exciting times ahead. If you've already used Hash, lmk if you want a personalized tour, have feedback, found bugs, etc. Would love to connect and learn how to tailor the product for your needs!
rorshock 4 months ago
Just a quick note: I got my first subscribing user about an hour ago. It's such an honor to build something and then see someone else use it and value it. Very different experience than selling consulting hours or being an employee somewhere! However, it seems like my privacy gateway feature (mentioned in yesterday's post) is interfering with some aspects of the app. I'm working on these issues currently and expect a fix soon, but for normal journaling or publishing to NOSTR, you should be fine.
rorshock 4 months ago
🚀 Significant update to Hash, a Privacy-First AI Journaling App for Bitcoiners (and nocoiners too).

Quick aside: yes, this NOSTR note was composed on Hash.pink and published from there. The NOSTR integration works great!!! This is a REPOST from my dev environment; essentially, my stripHTML function got a lil over-excited and was stripping out elements that I wanted to retain. Hopefully this one posts with the correct line breaks.

Let's talk about the recent update and orient you to Hash, if you haven't heard me talk about it before. Most AI journaling apps send your thoughts directly to OpenAI/Anthropic. Your private reflections become training data. Not Hash. Our Privacy Gateway (released 1 hour ago) sits between you and LLMs:
✅ Strips PII before transmission (emails, names, SSNs, etc—gone)
✅ Rotating pseudonymous IDs every 6 hours (no cross-session tracking)
✅ 45% token reduction through intelligent preprocessing
✅ Local caching prevents redundant LLM calls

The Privacy Gateway pre/post processors are what make Hash special. The LLM is just the "patty"—you can swap between beef, chicken, or pork, but you're still getting a tasty sandwich! I don't know if this is truly the first time private interactions have been available with OpenAI/Anthropic/etc, but it is certainly one of the first products to offer this seamlessly (and for free to all users). So, yes, you can use whatever model you already use, except you can now do it more privately. Hash is currently LLM agnostic, and the privacy layer runs on my servers for now, but soon you'll be able to host the pre/post processors on your own device, no technical skills required: you just choose where to store the programs, and they install and will work offline, for the most part. It's AI built by a deranged & paranoid freedom maxi who has also designed consumer-facing products and significant business tools for Fortune 10 clients, startups, and the Department of Defense for 10+ years.
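For the curious, the stripHTML bug described above is a classic failure mode: a sanitizer that removes every tag also removes the `<br>`/`<p>` tags that carry the line breaks. Here's a minimal Python sketch of the problem and one common fix. This is illustrative code, not Hash's actual implementation; the function names are hypothetical.

```python
import re

def strip_html_naive(text: str) -> str:
    """Over-eager sanitizer: removes ALL tags, including <br> and <p>,
    so paragraphs run together when the note is published."""
    return re.sub(r"<[^>]+>", "", text)

def strip_html_preserving_breaks(text: str) -> str:
    """Convert structural tags to newlines first, THEN strip the rest."""
    text = re.sub(r"(?i)<br\s*/?>", "\n", text)
    text = re.sub(r"(?i)</p\s*>", "\n\n", text)
    return re.sub(r"<[^>]+>", "", text)

note = "<p>Not Hash.</p><p>Our Privacy Gateway sits between you and LLMs.</p>"
print(strip_html_naive(note))              # sentences run together
print(strip_html_preserving_breaks(note))  # paragraph break preserved
```

The order matters: once the generic `<[^>]+>` pass has run, the break information is gone and can't be recovered.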
Hash offers:
4 default models (Mixtral 8x7B, Llama 3.1 70B, Claude 3.5 Sonnet, GPT-4o) to start with
YOU can request (almost) ANY model with justification and I'll get it integrated in short order at no cost to you; then it's just a couple clicks for you to switch models AND all of your context is preserved for the model, if you want to enable sharing with it (coming soon)
Post-21-message surveys validate new models for the community
Model Cage Match (Ellipsis tier exclusive): Send your prompt to multiple models simultaneously and choose the best one for your task.

And here's where it gets cool. Full transparency mode shows you:
Exactly what PII was stripped
Token reduction percentage (with progress bar!)
Cost savings visualization
Processing pipeline breakdown

This is an insane feature that not only gives you better quality and increased privacy, it also saves you money. IMHO, it should be the standard way AI works, but I think this is more of an El Salvador/Bitcoin expat viewpoint.

Pricing:
Free Trial: Test drive with limited credits, no card required
Comma (,) - $7/mo: 50k credits (~500k tokens), 60 min voice transcription
Period (.) - $21/mo: 200k credits (~2M tokens), 180 min voice, and more
Ellipsis (...) - $210/mo: 1M credits (~10M tokens), unlimited voice, up to 4 team seats, Caret^ included for account holder (formerly known as "all access pass," coming in 2026)
$5 Credit Top-Ups - Buy 35,000 banked credits that never expire. Subscribers only. Confirmation required before charging (we believe in friction for financial decisions).
15-17% annual discount on all tiers, with some easter egg pricing levels in there.

Try it: https://hash.pink! And if you're wondering if there's more, yes, there is more. Here are the high-level technical details plus a few other things from tonight's release. This is the technical problem I've been trying to solve: how do you use powerful LLMs without compromising user privacy?
Here's the architecture that makes it work:

Step 0: Facilitate nsec login with NOSTR credentials, email not required
Step 1: PII stripping with regex patterns
Step 2: Pseudonymization: every user gets a rotating hash ID (6-hour rotation)

This means:
❌ LLM providers can't track you across sessions
❌ No user profiling
❌ No cross-session pattern detection
❌ Increased insulation from Chrome/etc sharing your profile information with third-party apps

Your privacy isn't a promise—it's enforced by design.

Step 3: Context compression. Target: 45%+ token reduction. How?
Remove filler words
Compress repeated concepts
Optimize prompt structure
Maintain semantic meaning
Result: Same quality responses, way less cost. Like an actual free economy, prices should go down over time as the model self-teaches to be more efficient, more accurate, and higher quality. Since the goal is to help users run this locally, it reduces costs further over time for users.

Step 4: LLM agnosticism. The Privacy Gateway works with ANY model:
Mixtral 8x7B (fast, cheap)
Llama 3.1 70B (balanced)
Claude 3.5 Sonnet (reasoning)
GPT-4o (flagship)
Users can REQUEST new models. We validate through usage surveys. Some models use more or fewer tokens, so costs won't be perfectly consistent, but you will have actual model freedom and actual freedom from models.

Model Cage Match feature: Send ONE prompt → get responses from ALL 4 models. This is a cool feature, but it's really just submitting the prompt to multiple APIs and then presenting the results in a pleasing UI. Still...
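Hash's gateway code isn't shown here, but Steps 1-3 can be sketched in a few lines of Python. Everything below is illustrative under my own assumptions (a toy pattern list, a 6-hour rotation window, crude filler-word removal), not the actual implementation:

```python
import hashlib
import re
import time

# Toy PII patterns; a real gateway would use a far larger pattern library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

FILLER_WORDS = {"basically", "actually", "really", "very", "just"}

def strip_pii(text: str):
    """Step 1: replace PII matches with placeholders; report what changed."""
    report = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            report.append(f"{label}: {match}")
        text = pattern.sub(f"[{label}]", text)
    return text, report

def rotating_id(user_pubkey: str, now=None) -> str:
    """Step 2: pseudonymous ID that rotates every 6 hours (21600 s)."""
    window = int((now if now is not None else time.time()) // 21600)
    return hashlib.sha256(f"{user_pubkey}:{window}".encode()).hexdigest()[:16]

def compress(text: str) -> str:
    """Step 3: crude filler-word removal as a stand-in for real compression."""
    return " ".join(w for w in text.split() if w.lower() not in FILLER_WORDS)

entry = "I basically emailed jane@example.com my SSN 123-45-6789, very dumb."
clean, report = strip_pii(entry)
print(compress(clean))  # "I emailed [EMAIL] my SSN [SSN], dumb."
print(report)           # what was stripped, for the transparency view
# Same ID within a 6-hour window, different ID afterwards:
print(rotating_id("npub1...", now=0) == rotating_id("npub1...", now=21599))
print(rotating_id("npub1...", now=0) == rotating_id("npub1...", now=21600))
```

The key property for Step 2 is that the provider only ever sees `sha256(pubkey:window)`, so sessions more than 6 hours apart can't be linked without knowing the original key.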
Transparency mode shows:
Token reduction percentage
PII stripped (what changed)
Cost savings
Processing time per model
Education = verified trust

Why this matters for Bitcoin/NOSTR folks:
Privacy is non-negotiable, your thoughts stay yours
Open architecture, no vendor lock-in
Cost transparency, see exactly what you're paying for
Community-driven, users shape the model selection

Hash also now has "Easy Reader" mode, which makes the text an appropriate width so it's easier to read and write on any device. The Subscription page (when logged in) is beautiful and *should* work, but it might have some hiccups--bear with me! You can also manage your subscription, upgrade/downgrade/cancel with a couple clicks, and on the Ellipsis tier you'll be able to gift Hash to three other folks. For Ellipsis, I haven't thoroughly tested this yet, so if you want to buy it, just send me a DM and I'll double-check that everything is working properly before you part with your dollars. If you buy it before I've gone back through the testing and squashed bugs, I'll figure out how to make it right, and then I will make it right. Easy enough, right?

The Bitcoin integration will be live soon too; I am currently debating between just doing a Lightning invoice or integrating a full LSP into my apps. The reason I waited so long on setting up payments is that I really want them managed centrally for the Caret^ tier, since I want users to have one payment to manage instead of having to sign up on multiple sites, but honestly it's time for these apps to start making a little cash and paying off the debt/sweat I've put into making them great. I am so proud of this app.

On a personal level, Hash started off as an afternoon project in late July or early August (not positive) to test out some of the product integrations I've been working on for the last year. I liked it, so I kept working on it. Now, 4 months later, I feel confident that it's one of the best journaling apps you can use.
Since I'm able to build fast, cheap, and high-quality with my ecosystem tools, I can ship new features faster than most teams of developers and even many companies. I'm responsive. If you subscribe to the Ellipsis tier, I'll give you my personal email/cell and you can call me 1-2 times a month, talk about AI or Bitcoin or whatever you want, and you can ask me for features and new apps, integration help, or just talk about life. I'm building FOR YOU because without you, I can't build anything. I need you to use my apps so that we can build a brighter future together.

So, use Hash if you're thinking about New Year's Resolutions and want to journal, or you want a better way to store/track photos and document family vacations for the memories. Gift Hash to an older parent/grandparent who wants to record their life's history--in the next couple of months, I'll be developing an auto-novel approach within Semi (my brain/processor AI agent) where, just by talking and writing, a user can generate high-quality, book-length transcripts. If you've got younger kids, reach out and tell me what your kids need help with in school... I'll build a Pipe for your family, you can install it in Hash, and once SemiSchool is working better, I can help your kids value research and learning in a way no other AI can, because my models aren't focused on giving you all the answers; they're aimed at helping you learn how to think more effectively. Your kids will then have a permanent record of all their learning, so applying to college and preparing for jobs is simple... it's all in one place, automation-enabled.

To put it simply: I aim to change human history, nothing less, because I want to restore human agency to AI and next-gen technology. I'm super excited about all this stuff and can't wait to hear what you think! Let me know if you run into problems; I'm responsive and will usually ship a fix within 24 hours.
How many of your other software vendors/programs will give you the phone number of the CEO, CTO, and developer and tell you to call any time you need something? :D LFG!
rorshock 4 months ago
Satoshi Coffee ran into an issue the other day with line breaks and HTML injection from Hash->NOSTR. So, I want to test whether I can reproduce this. If I can't, I'm going to drop a mega post shortly. Hash update is live and it's a big one for people who want to support me + I've managed to find a cheap/free way to create nearly perfect privacy when interacting with common AI models. "Nearly perfect" might be more marketing than reality, but it is really very good and afaik, this is the first time it's ever been done. And I didn't have to build Pipes to roll this out. <3
rorshock 4 months ago
I am strongly considering building a NOSTR web client. I'd be basing most of the features off what I see in Primal and Damus. I estimate that I could get a basic (read: broken) client built in about a week. Refining it and extending features would probably take me 3-4 months, but it'd be usable within a few weeks. Rough guess, but I think that's about right. However, server and general production costs (assuming 100% uptime) are going to cost me a few thousand a month. I think the total development impact would be around $75k to build the MVP, possibly less.

Questions:
1. What features do you wish your NOSTR client offered?
2. Would you want to see AI tooling inside of the app?
3. Primal charges $7 a month for its lowest-priced tier. Do you pay for Primal or another NOSTR client? Is $7 a month too much, too little, or about the right price?
rorshock 4 months ago
I am going to post an absolutely massive NOSTR note within the next 48 hours. Like 3k words minimum. That’s about 10 printed pages. The development you’ll see in the next ≈week is going to be insane. That is an underpromise and I will overdeliver. My aim is to destroy all your models of what is possible. If you’ve never interacted with me professionally or heard me speak, this will ring hollow. But let’s just see… I think you will be impressed with what my ecosystem can deliver. This post will be exclusive to NOSTR. I will refer to it on X, but the details will live here. The post(s) will likely be ≈technical, but I will try to make it relatable for non-devs.