SwBratcher
swb@primal.net
npub1gkgy...yk5m
A distributed digital instance of me. Bitcoin projects: Bitcoin custody and succession. Bitcoin energy and real estate. Author, learning, teaching, building.
https://krigerdanes.com Teaching/Publishing
https://legacybridge.com Custody/Inheritance
https://dataflexenergy.com Energy/Compute/RealEstate/Bitcoin (SOON)
https://heldbrand.com Nostr/FOSS
Author of “Why The Future is Bitcoin” to teach new learners.
...Available on Amazon: https://www.amazon.com/dp/1304224864/
...Direct from Lulu, the publisher (frequent discounts here to get the word out cheaply; let me know if a discount has expired): https://krigerdanes.com/wtfisbtc_onLulu
SwBratcher 8 months ago
“Velocycle Uniraptor” …my name for some local art. [image]
SwBratcher 8 months ago
LimeWire is perfect to build artist- and author-connected multimedia tools on Nostr. They still exist! I saw them at BTC25! [image]
SwBratcher 8 months ago
“My investment makes me a steakholder.” - Homer Simpson [image]
SwBratcher 8 months ago
Trezor users!! There's a FAKE email going around trying to tempt you into compromising yourself in a SCAM. It's NOT REAL. Don't click it. Don't open it. Only engage with Trezor through its official channels: never phishing attempts, and be wary of unsolicited emails in general. Stay sharp out there! These look real at first glance. ** reposts help ** [image]
SwBratcher 8 months ago
She wasn’t too sure what was up with her new friend. [image]
SwBratcher 8 months ago
Used “Claude Plan Mode” yet? It’s all about planning: loading the context window with the requirements and arriving at a solution design that’s presented with NO execution. It doesn’t even update plan files or documentation. Just planning. This enables Plan > Spec > Build flows, where the plan is defined in a spec and the build happens in one big swing once the full spec is approved. The infinite agent loop is covered in his previous video. I’m digging this setup.
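The Plan > Spec > Build gate can be sketched as a tiny state machine. This is a hypothetical illustration of the approval flow, not Claude's actual internals; all names here (SpecDrivenFlow, plan, approve, build) are made up:

```python
from dataclasses import dataclass, field

@dataclass
class SpecDrivenFlow:
    """Sketch of a Plan > Spec > Build gate: nothing executes until the spec is approved."""
    spec: str = ""
    approved: bool = False
    built: list = field(default_factory=list)

    def plan(self, requirements: str) -> str:
        # Planning only loads context and drafts a spec; no files are touched.
        self.spec = f"SPEC for: {requirements}"
        self.approved = False
        return self.spec

    def approve(self) -> None:
        # Full spec approval is the only way to unlock execution.
        self.approved = True

    def build(self) -> list:
        # The "big swing" happens only after approval; otherwise stay in plan mode.
        if not self.approved:
            raise RuntimeError("spec not approved; still in plan mode")
        self.built.append(self.spec)
        return self.built

flow = SpecDrivenFlow()
flow.plan("add JWT auth middleware")
# flow.build() here would raise: still in plan mode
flow.approve()
flow.build()
```

The point of the gate is that planning can iterate freely while the build step stays inert until sign-off.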
SwBratcher 8 months ago
I fed my whole set of responses to Gemini from earlier to see if the breadth of answers altered its original response. Here's the answer from Gemini:
==================
Final, definitive recommendation:

Phase 1: The AI Engine Bake-Off (The Non-Negotiable First Step)
This remains the most crucial step. Your experience with Grok proves you must verify a model's performance on your specific tasks.
- Goal: Find the best AI "brain" for your project.
- Tools:
  - Anthropic's Console: Use Artifacts for rapid UI/component vibe coding.
  - Google AI Studio: Use the Build area for testing complex logic, massive context, or multimodal features.
- Outcome: You have an API key for the winning model (Claude, Gemini, or even GPT-4) and a battle-tested prompt. You have made an evidence-based decision.

Phase 2: The IDE & Assistant (Choose Your Development Style)
Based on the AI recommendations and the state of the market, there are two top-tier paths. Your choice here depends on how you want to interact with the AI.

Path A: The AI Co-Pilot (You are the driver, with a genius navigator)
This path is about augmenting your own coding, making you faster and smarter in a familiar workflow.
- IDE/Primary Assistant: Cursor. It remains the most polished and integrated AI-first editor.
- Configuration: Point Cursor to your winning API key from Phase 1 to use the best possible brain.
- Specialist Assistants (in Cursor):
  - Sourcegraph Cody: Install for best-in-class, whole-repository questions and understanding.
  - Ollama (Local AI): Run a local model for instant, private tasks.
- Choose this path if: You want to stay in full control, with the AI suggesting, completing, and refactoring code under your direct command.

Path B: The AI Agent (You are the architect, giving blueprints to a master builder)
This path is about delegating entire tasks to an autonomous system that plans and executes for you.
- IDE: VS Code (or VSCodium for the 100% open-source version).
- Primary Assistant: Cline. Install the Cline extension.
- Configuration: Connect Cline to your winning API key from Phase 1.
- Specialist Assistants (in VS Code):
  - Ollama (Local AI): Can be used for smaller, offline tasks.
- Choose this path if: You are comfortable describing a high-level goal ("Refactor the authentication logic to use a new JWT library and create a middleware for it") and trusting an AI to generate a plan and execute it across multiple files.

Phase 3: The Refinement & Maintenance Toolkit
This is where OpenAI's recommendation of Mutable.ai fits perfectly, regardless of which path you chose in Phase 2.
- Tool: Mutable.ai.
- Workflow: Once your application is built and in GitHub, connect Mutable.ai to your repository. Use it periodically to:
  - Automatically refactor complex code to improve readability.
  - Address tech debt.
  - Generate high-quality tests for your existing code.
  - Modernize dependencies.

In summary, your ultimate, non-Microsoft, AI-centric workflow is:
1. Bake-Off: Test your core logic in Anthropic Console (Artifacts) and Google AI Studio to pick a winning model API.
2. Develop: Choose your style:
   - Co-Pilot Style: Use Cursor + Cody for augmented, hands-on coding.
   - Agent Style: Use VS Code + Cline for delegated, autonomous coding.
3. Refine: Connect your GitHub repo to Mutable.ai for ongoing code quality, testing, and maintenance.

This modular stack leverages the best real tools suggested by the AIs (while discarding the hallucinations), respects your preference for a co-pilot vs. an agent, and builds a robust, future-proof development process. [image]
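The workflow above implies a routing decision: local Ollama for private or offline work, a large-context cloud model for big repos, and the bake-off winner for everything else. A minimal sketch of that decision, where the model names and the token threshold are placeholders for illustration, not recommendations:

```python
def route_task(private: bool, context_tokens: int) -> str:
    """Pick a backend per the workflow: local for private work,
    a large-context cloud model for huge repos, the bake-off winner otherwise."""
    LOCAL = "ollama:llama3"        # placeholder local model served by Ollama
    BIG_CONTEXT = "gemini-api"     # placeholder large-context cloud model
    DEFAULT = "winning-model-api"  # whichever model won your Phase 1 bake-off
    if private:
        return LOCAL               # never send private code to the cloud
    if context_tokens > 100_000:   # arbitrary threshold for illustration
        return BIG_CONTEXT
    return DEFAULT

print(route_task(True, 500))        # private task stays local
print(route_task(False, 200_000))   # whole-repo question goes to big context
```

The same routing logic applies whether the front end is Cursor (Path A) or Cline (Path B); only the keys you plug in change.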