Listening to this read. @Guy Swann, does Bitcoin mining rely on real-time availability of Internet access? Or can a block be verified offline and then synced to the blockchain periodically (e.g., once per day)?
https://open.spotify.com/episode/2W7MuTETn73GMjJOkCEUES?si=uklow9IhS9qffDS887_ruQ
Ben Eng
ben@www.jetpen.com
npub1pv0p...mmng
Applied cosmology toward machine precise solutions to replace humans with autonomous systems in all domains.
I imagine that as AI models proliferate and become more personalized, compute will adapt to treat GPUs less like physically attached hardware and more like logical resources. Like storage volumes that are virtualized and dynamically scalable, you could attach an H100 slice for a few minutes.
The Spirit of Satoshi industry report (listen to the read by @Guy Swann on the AI Unchained podcast) gives this insight: if a foundation model is trained on mainstream data, fine-tuning on good data has great difficulty correcting the prior misinformation.
https://open.spotify.com/episode/1ckb2A3rPY7zGpcxW3jT9E?si=pARMwzPdQ9GEvmRM0MxWww
Now, apply this insight to humans who have been "educated" by state indoctrination, and our goal is to correct their mislearning using good information. How difficult is that task? (The phenomenon of cognitive dissonance is useful in both human and machine intelligence.)
When original training adjusts the model weights (the knowledge learned in the neurons), it is effectively impossible to unlearn (irreversible). New fine-tuning can only make further adjustments that add to what was learned, hopefully with greater weight and without causing confusion.
The difficulty of unlearning can be understood by examining the concept of unit economy in epistemology: optimizing knowledge for compactness. We absorb concrete examples, learn the patterns and principles that are universal, and commit the abstraction to our knowledge. New knowledge contradicting those abstractions is difficult to reconcile.
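A toy numerical illustration of that fine-tuning claim (a sketch of my own, not from the report; the one-parameter model and all names here are assumptions): a model pretrained to fit wrong labels, then briefly fine-tuned on correct ones, stays biased toward the bad prior compared with a model trained on good data from scratch with the same fine-tuning budget.

```python
# Toy sketch: a one-parameter model y = w*x, pretrained on mislabeled
# data (which fits w = -1), then briefly fine-tuned on correct data
# (which fits w = +1). The bad prior persists after fine-tuning.

def fit(w, xs, ys, lr, epochs):
    """Plain gradient descent on squared error, one point at a time."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w -= lr * 2 * (w * x - y) * x
    return w

xs = [0.5, 1.0, 1.5, 2.0]
bad_ys = [-x for x in xs]    # misinformation: consistent with w = -1
good_ys = [x for x in xs]    # good data: consistent with w = +1

w_pre = fit(0.0, xs, bad_ys, lr=0.1, epochs=200)      # pretraining: w ~ -1
w_tuned = fit(w_pre, xs, good_ys, lr=0.02, epochs=2)  # brief fine-tune
w_clean = fit(0.0, xs, good_ys, lr=0.02, epochs=2)    # same budget, no bad prior
```

With this budget, the fine-tuned weight remains much farther from +1 than the clean run: the prior is only asymptotically overwritten, never erased outright.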
Knowing this, we should (1) practice epistemic humility and (2) be sympathetic to those confused by cognitive dissonance. (1) is the recognition that past learning can be based on false data. (2) recognizes that others reacting badly is a natural consequence of their mistraining.
What is the capital of Ancapistan? Ancapistanople or Ancapistanbul?
"We" is a big club. And you ain't in it.
Across COVID public health, climate change, and monetary policy, they believe that fear-mongering, lying, and suppressing inquiry and dissent are valid tools for manipulating collective action. That's why we don't recognize their authority.
Is it possible to build a keystore (wallet) architecture that is itself distributed in a way that is resilient to being lost or destroyed, while remaining completely unassailable? Another requirement important for preventing lost wealth is for that keystore to have a dead man's switch along with a list of authorized executors of the will. Is it possible to build such a thing without it becoming assailable prior to death?
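One way to sketch the resilience half of this question is a k-of-n secret split (a minimal toy, assuming standard Shamir secret sharing over a prime field; the dead man's switch and executor authorization would need timelocks or a covenant layer on top, which this does not attempt):

```python
import random

# Shamir secret sharing: split a key into n shares such that any k of
# them reconstruct it, while fewer than k reveal nothing about it.
P = 2**127 - 1  # a Mersenne prime; the secret must be smaller than this

def split(secret, n, k):
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret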
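```

Distribute 5 shares to separate locations or executors and any 3 recover the key; losing 2 destroys nothing, and stealing 2 reveals nothing. (Requires Python 3.8+ for the three-argument `pow` modular inverse.)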
«Bittensor provides a decentralized and permissionless platform to build incentive-driven compute systems as a one-stop shop for AI developers seeking all the compute requirements for building applications on top of an incentivized model»


Fox Business
Ex-Google employee launches open-sourced AI protocol to challenge tech giants
A former Google employee concerned that tech giants are trying to consolidate their power in the artificial intelligence industry has introduced a ...
I wish Amethyst would let me control the number of sats I zap on a per transaction basis.
The public now understands that governmental and non-governmental organizations and institutions should be considered lying institutions and hate groups. We also understand better that society must become trustless and permissionless.
Idea for a NIP to improve Nostr. Enable a user to attach a smart contract to a note, so that if someone replies to satisfy the conditions on that contract, they receive the reward that is pledged. This would allow users to ask important questions for which an answer (by the public at large or requested from a specific individual) is valued enough to stake a monetary reward. This would greatly incentivize engagement, especially when lowly users are seeking the attention of a prominent figure. It makes it worthwhile for big names to reply to a nobody, if there is a payoff.
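A sketch of what such a note might look like, assuming NIP-01 event serialization; the "bounty" tag name, its fields, and the settlement mechanism are entirely hypothetical (no such NIP exists yet):

```python
import hashlib
import json
import time

def make_bounty_note(pubkey_hex, content, amount_sats, condition):
    """Build an unsigned kind-1 note carrying a hypothetical bounty tag."""
    event = {
        "pubkey": pubkey_hex,
        "created_at": int(time.time()),
        "kind": 1,
        # Hypothetical tag: ["bounty", <sats pledged>, <settlement condition>]
        "tags": [["bounty", str(amount_sats), condition]],
        "content": content,
    }
    # NIP-01 id: sha256 over the canonical serialized field array.
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"), ensure_ascii=False)
    event["id"] = hashlib.sha256(serialized.encode()).hexdigest()
    return event  # still needs a Schnorr signature over id to be valid
```

A replying client could then check the parent's bounty tag and, if the condition is judged satisfied, trigger the pledged zap to the answerer.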
@Guy Swann, appreciating Bitcoin Audible for teaching so much. I wonder if, on the Bitcoin security topic, you could explore a threat analysis of the attack vectors that authoritarians will use against "fix the money, fix the world". You've touched on it before in episodes (i.e., ordinals) where you remind us that provable ownership of a digital token does not protect you from physical force against your person and your property. Therefore, what Bitcoin can actually fix is inflation (fiat counterfeiting, theft of purchasing power), but it cannot fix the state stealing income and wealth.
Found the most easy peasy way to self-host an LLM with a Web UI.
1. Assume you first have a Python 3.11 venv activated.
2. Download and extract the one-click installer from the oobabooga GitHub repo linked below; it extracts to ./oobabooga_linux.
3. Run sh start_linux.sh to do the setup (you can select the option to use GPU or CPU-only), then shut it down.
4. Download a model (e.g., one of the bin files from https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/tree/main) to ./oobabooga_linux/text-generation-webui/models
5. Run export OOBABOOGA_FLAGS='--share' ; sh start_linux.sh to start the server for use; it prints the URL.
6. Point your browser at the URL.
7. Go to Model, select the model you want, and Load.
8. Go to Chat, and start chatting.
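Beyond the chat page, recent versions of the web UI can also expose an OpenAI-compatible API when started with the --api flag; the endpoint and port below are assumptions about that mode, so check your own startup log. A minimal stdlib client:

```python
import json
import urllib.request

# Assumed default for text-generation-webui's --api mode; verify locally.
API_URL = "http://127.0.0.1:5000/v1/completions"

def build_payload(prompt, max_tokens=200, temperature=0.7):
    # OpenAI-style completion request body that the web UI emulates.
    return {"prompt": prompt, "max_tokens": max_tokens,
            "temperature": temperature}

def complete(prompt):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

With the server running, print(complete("Explain proof of work in one sentence.")) should return the model's completion.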
GitHub
GitHub - oobabooga/textgen: The original local LLM interface. Text, vision, tool-calling, training. UI + API, 100% offline and private.