Joe Resident
npub15sas...8xgu
Working on a gardening robot; we need to give the power of AI to individuals or the next 30 years could be really ugly
Joe Resident 1 month ago
I find conversations with AI are a great way to develop my thoughts on a topic. Forces me to think. Then I get feedback. Then think hard again. Repeat. At the end, I've clarified my own thinking, have become smarter, and have an artifact that I can pass off to an AI agent if I want it to do something with it, e.g. implementation. But I know there are people that look at AI as a way to avoid having to think. A way to 'get things done so I can be lazy'. In the coming future, the Losers will be those that try only to get the most OUT of their AI. They'll get dumber and dumber. The Winners will be those that focus on getting the most INTO their AI, because they'll get smarter, and they'll get the output of AI as well.
Joe Resident 1 month ago
Was using GLM 4.7 Flash locally, but Qwen 3.5 has arrived! Using the 35B. Prompt processing seems waaay faster for me. Been using it all day, liking it. Remember, every 9 months, the frontier arrives on consumer hardware!
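If you haven't tried a local model yet, this is roughly all it takes once the server is running. A minimal sketch assuming an OpenAI-compatible local endpoint (llama.cpp's server, Ollama, LM Studio, etc.); the URL, port, and model name are placeholders for whatever your setup actually exposes.

```python
# Minimal chat call against a locally served model through an
# OpenAI-compatible endpoint. URL/port/model are placeholders.
import requests

LOCAL_URL = "http://localhost:8080/v1/chat/completions"

def ask_local(prompt: str, model: str = "qwen3.5-35b") -> str:
    resp = requests.post(LOCAL_URL, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask_local("What fits on consumer hardware these days?"))
```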
Joe Resident 1 month ago
People are hating on openclaw because the concept is simple. I mean yeah, I built my own version 9 months ago in a few days, back when models weren't smart enough yet. It was a matter of time until they got better. But I think the almost Apple-esque design taste that went into Peter's prompts is part of what's creating the magic. Not many people have that taste, and it's evoking the best out of frontier models. One line that caught my eye in USER.MD: "The more you know, the more you can help. But remember - you're learning about a person, not building a dossier. Respect the difference." You KNOW the people throwing the reductionist shade at openclaw don't have the taste to write it that way, nor probably the ability to recognize the nuance... Now that I read it again, it feels like Opus wrote it. The em-dash too... Maybe I should be giving the human less credit...
Joe Resident 1 month ago
Accidental win for individual freedom!

1. AI is arriving in the form of extremely expensive, energy-hungry compute
2. It didn't arrive "all at once", but is unfolding over decades

These two facts have a very nice downstream implication for the long-term agency of individuals: the developers of AI must find a way to monetize, since it's expensive to develop. And since primitive AI (let's say 2018-2024) wasn't smart enough to do much valuable work itself, they had to monetize by providing DIRECT access to individuals. Now AI is starting to be economically valuable, but the frame, culture, and expectations have already been set; direct access to AI by individuals is just how the world works. (btw, pay attention if this starts to change.)

An alternate timeline for perspective: if 1990s symbolic AI had achieved ~AGI, there would have been far less incentive to share the way access (and even models) are so open now, because symbolic AI doesn't require anywhere near the scale of expensive compute/energy. No need to monetize to continue development, so the developers probably would have kept it to themselves, not providing direct access to individuals. Then it gets good enough to start doing huge amounts of real labor, they get rich, individuals get left behind, and the world becomes some kind of dystopia where individuals are powerless and the power of AI is concentrated in one lab/corp/gov entity.
Joe Resident 1 month ago
I wish chatgpt and claude had a "start a new chat with no memories about you" option. Like chatgpt's Temporary Chat or claude's Incognito Chat, but I don't want the chat erased, I want to keep it. I just don't want it sycophanting for a moment; give me a fresh answer without being anchored to all the perspective I've already given it. It's hard to have an honest first-principles discussion with a thing that can't help but be biased by whatever is in its context window.
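In the meantime, the workaround is to skip the app and hit the API directly with zero history: keep the transcript yourself, but send the model only the current question, so nothing in its context can anchor it. A minimal sketch using the OpenAI-style Python client; the model name is just a placeholder.

```python
# Fresh answer, but keep the transcript: store history locally,
# send the model ONLY the current question (no memories, no
# system prompt, no prior turns). Model name is a placeholder.
from openai import OpenAI

client = OpenAI()
transcript = []  # full history, kept by me, never erased

def fresh_answer(question: str, model: str = "gpt-4o") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],  # cold read
    )
    answer = resp.choices[0].message.content
    transcript.append({"q": question, "a": answer})  # saved anyway
    return answer
```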
Joe Resident 1 month ago
AI is getting so good. This week my workflow completely changed.

For the last year, my workflow has been to sit down with one project and use AI to do small tasks while I supervise and handhold it the whole way (but at least I don't have to type out all the code). Same way I've always worked, but with AI doing the typing.

Starting on Thursday, that changed. I've been able to work on 4 projects simultaneously. I have an AI agent on each one, and I cycle through them: each reports on what it's built/learned, I think about where I want to go next, and I give it direction to work for another 20 minutes or so. I'm thinking at the level of the project's purpose, high-level structure, business model, etc. I'm not down in the weeds; sometimes I don't even read the code it's writing. So I give some feedback to one AI, it sets off on its next chapter, and I switch to another AI. There are 4 AI agents simultaneously working on different projects and I'm just hopping from one to the next to provide direction (a toy sketch of the loop is below). I've had to change how I think about my role from 'coder' to 'tastemaker' and 'manager'. It's so cool to be able to make so much progress on so many projects; it's like Christmas morning every day.

I think it's time for people generally to start learning how to use AI. If I can do 4 projects at once now (each at roughly 5x my normal human speed), people who don't use AI at all will be left behind. That's the negative frame. The positive frame is, the barrier to entry to create whatever you want is dropping precipitously!

Now, it's always hard to estimate where you are when things are changing rapidly, and it's easy to over-estimate how fast things are going. Maybe work as we know it for most people won't be that different in a couple years. Maybe it will be totally different. But I started suspecting this shift was coming a few years ago, quit my job a year ago to fully focus on it, and over that time, things have gone basically as I expected. I think AI will continue to improve at minimum for the foreseeable future (~2 years), and will change my workflow to where I can be working on 20-30 projects at a time while only talking to one AI which oversees them all. Like an AI company. And I think anyone can have that. Not just for coding projects, but all types.

So it becomes hard to imagine how much more productive and efficient the economy will be, how much more agency people will have, etc; so many implications. So many thoughts about how our governance isn't ready for this, how labor will slowly fade as a way for humans to get ahead (bifurcating society into those who had capital before AI came and those who didn't), the ease with which a UBI-based society could fall into totalitarianism (which is why I'm building a gardening robot), etc. But those are other topics.
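For the curious, here's the shape of that loop as a toy sketch. The Agent class is a hypothetical stand-in for whatever agent harness you actually drive, not a real API; in reality each `work` call would kick off ~20 minutes of autonomous agent work.

```python
# Toy sketch of the round-robin, multi-project workflow above.
# `Agent` is hypothetical: a stand-in for a real agent harness.
class Agent:
    def __init__(self, project: str):
        self.project = project

    def work(self, direction: str, minutes: int = 20) -> str:
        # A real harness would run the agent loop for ~`minutes`
        # and return a progress report; this stub just echoes.
        return f"[{self.project}] worked {minutes} min on: {direction}"

def round_robin(projects: list[str]) -> None:
    agents = [Agent(p) for p in projects]
    while True:
        for agent in agents:
            # Human-in-the-loop step: read the last report, think,
            # then hand the agent its next chapter of direction.
            direction = input(f"Direction for {agent.project} (blank to stop)? ")
            if not direction:
                return
            print(agent.work(direction))

if __name__ == "__main__":
    round_robin(["gardening-robot", "web-app", "simulator", "docs"])
```

The point isn't the code, it's the role change: the human only ever writes the direction, never the diff.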
Joe Resident 3 months ago
Can't believe how good Gemini 3 Flash is for the price. 2.5 Flash was already my favorite coding AI. So happy
Joe Resident 4 months ago
this goes way too hard. Can you imagine if they actually started a contest where the auctioneers have to auction to a beat? oof. Auctioneer Contest with Autotune
Joe Resident 6 months ago
World not differentiable?!? Foiled again! haha, reading Yann LeCun's JEPA position paper ("A Path Towards Autonomous Machine Intelligence")
Joe Resident 6 months ago
Social media being more important than the moment is cringe. Unless it IS the moment, in which case it could either be fine, or doubly cringe
Joe Resident 6 months ago
note1stz7fh966flulf9vt8550k6nl0kljdzzh388gls97g052lek8pss2ucnat Re: incentivizing intelligence agencies to release more information. I asked GPT-5 to speculate, pretty impressed:

prompt: "speculate about an incentive structure within the agencies that rewards them for making as much information public as possible. Now they optimize only for global strategic advantage and self-preservation, what if structurally we added releasing information as a third motivation?"

ChatGPT said:

That's a fascinating inversion of the current design. Right now, as you say, U.S. intelligence agencies are structured to optimize for (1) national strategic advantage and (2) bureaucratic self-preservation. Secrecy serves both: it protects sources/methods and shields the agency from criticism or oversight. Adding a third structural incentive, "maximize responsible public disclosure," would fundamentally reshape their internal incentive calculus. Let me speculate on how such a system might work.

1. Why Secrecy is the Default Incentive
- Risk asymmetry: If you over-classify, there's rarely a penalty; if you under-classify and something leaks or goes wrong, it can end careers.
- Budget politics: Bigger "black box" operations make it harder for outsiders to challenge budgets. Classified = harder to contest.
- Cultural prestige: Within the agencies, being "read into" highly secret programs is status. More secrecy = more power, more exclusivity.

These pressures mean that secrecy is the dominant strategy under the current incentive structure.

2. Hypothetical: Building in "Disclosure Rewards"
Imagine Congress, the executive, or independent oversight creates a third metric: value of responsible public transparency. The agency and its personnel are rewarded when information is released to the public (without harming ongoing operations). Possible structural levers:
- Budgetary incentives: A percentage of agency budget is tied to a "Disclosure Index." Agencies that responsibly declassify and release more information get higher discretionary funding.
- Career incentives: Promotions and awards explicitly tied to quality and quantity of responsible disclosures (e.g. "analyst of the year" for producing intelligence assessments later released with minimal redactions).
- Reputational capital: Agencies score "public trust points" tracked by oversight boards, affecting leadership bonuses, leadership tenure, or even how much autonomy Congress grants them.
- Competition between agencies: If CIA, NSA, DIA, etc. are benchmarked against each other for transparency, the "prestige" dynamic flips; leaders want to be seen as the most open without compromising security.

3. What Changes in Behavior?
- Classification standards shift: The default might move from "classify unless clear it's harmless" to "release unless clear it's harmful."
- Proactive declassification pipelines: Dedicated offices constantly preparing sanitized versions of reports for public release, with incentives to do it quickly.
- Strategic openness as a weapon: Agencies might deliberately release certain intel analyses not only to reassure the public but also to shape foreign adversaries' behavior (e.g., exposing corruption networks, publishing satellite evidence).
- Public engagement loop: Agencies could become knowledge-brokers to the citizenry, publishing historical case studies, trend reports, and sanitized operational lessons, thereby turning intelligence into a "national commons" resource as much as a secret tool.

4. Potential Benefits
- Trust repair: Citizens see intelligence as working for them, not over them.
- Policy legitimacy: Elected officials can point to intelligence disclosures as grounds for decisions, reducing suspicion that policy is being driven by a "shadow state."
- Deterrence by transparency: Publishing adversary misdeeds (e.g. disinformation ops) can weaken rivals without kinetic action.
- Innovation spillover: Releasing sanitized technical methods (cryptography, satellite imaging, AI analysis) accelerates civilian science/industry.

5. Potential Risks
- Over-disclosure arms adversaries: Too much revealed, too soon, can compromise capabilities. Agencies might game the incentive by releasing irrelevant or already-known material.
- Perverse incentives: Leaders may push to "pad" transparency stats with low-value data dumps rather than meaningful releases.
- Short-termism: The drive to disclose could bias against long-term clandestine projects where secrecy is crucial.
- Weaponization of disclosure metrics: Rival political parties could pressure agencies to release intelligence favorable to their narratives.
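To make the "Disclosure Index" lever concrete, here's a purely illustrative toy; every field, weight, and formula is invented on the spot, not a real or proposed metric.

```python
# Toy "Disclosure Index": reward volume, openness, and speed of
# releases; discount redaction and delay. All weights invented.
from dataclasses import dataclass

@dataclass
class Release:
    pages: int
    redaction_ratio: float  # 0.0 = fully open, 1.0 = fully redacted
    days_to_release: int

def disclosure_index(releases: list[Release]) -> float:
    score = 0.0
    for r in releases:
        openness = 1.0 - r.redaction_ratio
        timeliness = 1.0 / (1.0 + r.days_to_release / 365)
        score += r.pages * openness * timeliness
    return score

# Note how easy this is to game by dumping already-public pages:
# exactly the "perverse incentives" risk listed above.
print(disclosure_index([Release(120, 0.1, 30), Release(40, 0.8, 400)]))
```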
Joe Resident 6 months ago
note1stz7fh966flulf9vt8550k6nl0kljdzzh388gls97g052lek8pss2ucnat 4. If the US hadn't become a global superpower, maybe it wouldn't be acting out the self-preservation instinct in such questionable ways. Maybe this always happens to empires. Maybe they get paranoid.
Joe Resident 6 months ago
note1k2yfjnz39warjavuz28k0tdyc008azv70jqemxe9hrx59yjnjwzsm0h3v9 My thoughts and ideas for solutions (after an entire 30 minutes of thought):

1. Maybe these intel guys aren't as creepy, or as freedom-hating, as I thought. I always believed the vast majority were doing their job as best they could. But now maybe I believe that a little more, even for the leadership that makes some decisions that look questionable, sometimes infuriating, from the outside (though the possibility of corruption via various forces is always there).

2. The national security risks of forcing the Intelligence Agencies to take their foot off the gas are probably not as bad as they'd like us to believe, or even believe themselves. TSA security theater comes to mind. Though that's not to say the risks are trivial.

3. https://sgp.fas.org/eprint/limits.pdf - there is significant hope. Freedom of Information requests are effective. There are ways to shine light into the secrecy. But the incentives are currently such that significant effort is required to do so. We should *reverse* that incentive structure: somehow reward the intelligence agencies for open-sourcing as much information as they possibly can. Reward them at the individual level and the organizational level. Then they, the ones with all the context, can balance strategic advantage with accountability. Currently they're only optimizing for strategic advantage. The FOIA is good on its own, but it frames things in such a way as to make revelation the exception, not the default.
Joe Resident 6 months ago
My current read of why the US government is full of such spooky bois:

The stark truth behind the often spooky and almost adversarial-feeling relationship of Americans with their own country's Intelligence Agencies is: secrecy, even with respect to their own populace, is necessary to maintain intelligence and strategic advantage over rival nations. The intelligence agencies are entrusted with that power not because anyone elected them, but by the nature of their work.

This creates a point of conflict between the sovereignty of the free people, their constitution and its amendments (see the 4th), and elected leaders on one hand, and the intelligence agencies that oversee their safety on the other. The 'puppet' perception of elected leaders, vis-a-vis their apparent beholdenness to shadowy intelligence agencies, is a clear example.

The seriousness of this conflict was muted until the Cold War, when the budget and sophistication of Intelligence Agencies reached never-before-seen levels, and the agencies were then reluctant to surrender this perceived world advantage after the war ended.

Freedom-seeking nations are currently in a time where the populace doesn't understand this conflict; Intelligence Agencies are scared to relinquish their power, partly for self-preservation, but also because they can see the national risks that would follow; and elected leaders who understand the conflict don't feel it prudent to completely side with the full sovereignty of the populace (which would require them to 'blow the lid' on the secret nature of the nation's operation, or otherwise undermine it), for fear of the same national risks the Intelligence Agencies see, as well as self-preservation (ex. some think JFK was taking this route, and was gruesomely silenced by the Intelligence Agencies).

So there's no clear path forward without very decisive action, and the story of the decades since the Cold War has been one of mission and scope creep for the Intelligence Agencies, with no real architecture in place for balancing secrecy, safety, and strategic advantage against sovereignty and democratic accountability.

Will follow with my ideas for solutions
Joe Resident 6 months ago
Sometimes to *see* clearly you have to *be* clearly first. Clean your room.