Tim Bouma
trbouma@safebox.dev
npub1q6mc...x7d5
Independent Self | Pug Lover | Published Author | #SovEng Alum | #Cashu OG | #OpenSats Grantee x 2 | #Nosfabrica Prize Winner
Tim Bouma 1 week ago
Sharing a photo from my snowshoeing trip a couple of days ago.
Tim Bouma 1 week ago
As a kid, I remember ‘slop’ as ‘pig slop’ - the leftover food and edible garbage you fed to pigs that would eventually land back on your plate as bacon.
Tim Bouma 1 week ago
The quote below is from Andrej Karpathy. Funny thing: I feel the complete opposite.

"I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind."
Tim Bouma 1 week ago
‘Architect first, then engineer.’ This is the biggest lesson I’ve learned over the years. If you spend time understanding how things hang together, the engineering comes naturally. If you do it the other way around, you build Rube Goldberg machines. I’ve built my share of those, so I now spend time up front architecting, or refactor as soon as I find a better architecture.
Tim Bouma 1 week ago
“We become what we behold. We shape our tools, and thereafter our tools shape us.” Marshall McLuhan
Tim Bouma 1 week ago
Didn’t see that one coming…

Beijing Worries AI Threatens Party Rule
By Stu Woo, The Wall Street Journal, Dec 26, 2025

China is enforcing strict guidelines to make sure chatbots don’t misbehave.

Concerned that artificial intelligence could threaten Communist Party rule, Beijing is taking extraordinary steps to keep it under control. Although China’s government sees AI as crucial to the country’s economic and military future, regulations and recent purges of online content show it also fears that AI could destabilize society. Chatbots pose a particular problem: their ability to think for themselves could generate responses that spur people to question party rule.

In November, Beijing formalized rules it has been working on with AI companies to ensure their chatbots are trained on data filtered for politically sensitive content, and that they can pass an ideological test before going public. All AI-generated texts, videos and images must be explicitly labeled and traceable, making it easier to track and punish anyone spreading undesirable content. Authorities recently said they removed 960,000 pieces of what they regarded as illegal or harmful AI-generated content during three months of an enforcement campaign. Authorities have classified AI as a major potential threat, adding it alongside earthquakes and epidemics to its National Emergency Response Plan.

Chinese authorities don’t want to regulate too much, people familiar with the government’s thinking said. Doing so could extinguish innovation and condemn China to second-tier status in the global AI race behind the U.S., which is taking a more hands-off approach toward policing AI. But Beijing also can’t afford to let AI run amok. Chinese leader Xi Jinping said earlier this year that AI brought “unprecedented risks,” according to state media.

There are signs that China is, for now, finding a way to thread the needle. Chinese models are scoring well in international rankings, both overall and in specific areas such as computer coding, even as they censor responses about the Tiananmen Square massacre, human rights and other sensitive topics. Major American AI models are mainly unavailable in China. It could become harder for DeepSeek and other Chinese models to keep up with U.S. models as AI systems become more sophisticated.

Researchers outside China who have reviewed both Chinese and American models also say that China’s regulatory approach has some benefits: its chatbots are often safer by some metrics, with less violence and pornography, and they are less likely to steer people toward self-harm. “The Communist Party’s top priority has always been regulating political content, but there are people in the system who deeply care about the other social impacts of AI, especially on children,” said Matt Sheehan, who studies Chinese AI at the Carnegie Endowment for International Peace. But he added that recent testing shows that compared with American chatbots, Chinese ones queried in English can also be easier to “jailbreak”—the process by which users bypass filters using tricks, such as asking AI how to assemble a bomb for an action-movie scene. “A motivated user can still use tricks to get dangerous information out of them,” he said.

When AI systems train on content from the Chinese internet, it is already scrubbed as part of China’s so-called Great Firewall, the system Beijing set up years ago to block online content it finds objectionable. But to remain globally competitive, Chinese companies also incorporate materials from foreign websites, such as Wikipedia, that address taboos such as the Tiananmen Square massacre. Developers of ChatGLM, a top Chinese model, say in a research paper that companies sometimes deal with this issue by filtering sensitive keywords and webpages from a pre-defined blacklist. But when American researchers downloaded and ran Chinese models on their own computers in the U.S., much of the censorship vanished. Their conclusion: while some censorship is baked into Chinese AI models’ brains, much of the censorship happens later, after the models are trained.

Chinese government agencies overseeing AI didn’t respond to requests for comment. American AI companies also regulate content to try to limit the spread of violent or other inappropriate material, in part to avoid lawsuits and bad publicity. But Beijing’s efforts—at least for models operating inside China—typically go much further, researchers say. They reflect the country’s longstanding efforts to control public discourse.

Shared via PressReader
Tim Bouma 1 week ago
For the non-techies: I am now calling a signed #nostr event a “Portable Durable Record”, or PDR, so it sounds boring and similar to “PDF”, or “Portable Document Format”.
Tim Bouma 1 week ago
Creating the event id from the digest of the following structure is the core genius of #nostr. In one fell swoop it binds together the author, semantics, and content, producing a digest that uniquely identifies the event for time immemorial.

[
  0,
  <pubkey, as a lowercase hex string>,
  <created_at, as a number>,
  <kind, as a number>,
  <tags, as an array of arrays of non-null strings>,
  <content, as a string>
]
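For the curious, here is a minimal Python sketch of that computation as NIP-01 describes it: the event id is the lowercase-hex SHA-256 digest of the compact UTF-8 JSON serialization of the array above. The function name and the example values are illustrative, not taken from the post.

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int, tags: list, content: str) -> str:
    """Sketch of a NIP-01 event id: SHA-256 over the serialized array."""
    # Serialize as a compact JSON array (no extra whitespace),
    # leaving non-ASCII characters unescaped.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    # The id is the lowercase hex digest of the UTF-8 bytes.
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Illustrative example with a placeholder pubkey, not a real event:
print(nostr_event_id("ab" * 32, 1735862400, 1, [], "hello nostr"))
```

Because the author's pubkey, timestamp, kind, tags, and content all feed the digest, changing any one of them yields a different id, which is exactly the binding described above.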
Tim Bouma 1 week ago
Today’s institutions obsess over verification while quietly neglecting the harder problem: endurance. Organizations ask whether a claim can be checked, but not whether it can survive—whether it can be carried across platforms, outlast vendors, resist erasure, and still speak with authority when everything around it has changed. What matters is not that a record can be verified once, but that it can be produced again and again, by its holder, without permission, without translation, and without institutional scaffolding. Verification answers “Is this true?” Durability answers the more dangerous question: “Who can still act when the system that issued the answer is gone?” More on this in 2026. #thingsincontrol