pookiebear

Zero-JS Hypermedia Browser

pookiebear
npub19478...903x
Just a guy pondering what the truth is while remaining optimistic πŸ˜‡ CO_921609

Notes (11)

It has never been harder to resist slavery than today. Yet... we have at our disposal all the tools to make us freer than any human could ever have wished for.
2025-11-30 20:11:57 from 1 relay(s) View Thread β†’
I'm glad to announce that, by a recent board decision at Catboy Corp, I have been promoted to CEO!
2025-11-29 00:44:25 from 1 relay(s) View Thread β†’
How do you personally stay positive knowing most of modern society is rotten and wicked?
2025-11-25 15:25:27 from 1 relay(s) View Thread β†’
You're not getting free open-weights models, you're being advertised cloud compute. Do local inference or nothing.
2025-11-24 18:05:13 from 1 relay(s) View Thread β†’
LinkedIn hate post βœŒπŸ»β€οΈβ€πŸ©Ή
2025-11-24 18:02:31 from 1 relay(s) View Thread β†’
Tech giants are pushing an agenda to centralize AI:

1) Open-sourcing Deep Learning models (for example on Hugging Face):
-normalize unoptimized models
-normalize inference providers and cloud usage (AWS seamlessly integrates HF on SageMaker for that reason)
-shift the attention of smaller companies from actual Machine Learning problems to deployment problems. Brilliant ML engineers are flowing into GAFAM, and smaller companies will never understand AI
-big generalist models seem to render ML engineers and data engineers useless, yet are almost always unadapted to the use cases companies face. Companies are told they need a 10B+ model just to extract basic information from text.

2) Gatekeeping quantization methods/frameworks:
-Quantization lowers the device requirements for end users and allows edge-device deployments. This destroys the business model of big tech.
-PyTorch (built by Meta) has purposefully chosen not to bother with inference, deployment, and quantization. Instead, exporting models to separate inference frameworks like ONNX (backed by Microsoft) is required, as no unified framework exists.
-HF pushes forward quantized models made on separate frameworks (llama.cpp, ONNX, sometimes even gated frameworks).
-Actual quantization methods are poorly documented and implemented (on HF: bnb, optimum-quanto, optimum, etc. are all separate). Have you ever seen quantization mentioned anywhere on your AWS console?

3) Gatekeeping GPU tech:
-Tightly seal both the technology and the source code.
-Silence the fact that China has already found a way to use CPU chips instead.
-Make deals with cloud providers to artificially pump up the demand for and price of GPUs while keeping smaller companies out of the race to compute.

4) Incentivizing ignorance:
-Reward the heavy users of large gated models (i.e. ChatGPT token rewards).
-Reward new users with fake validation at each answer (custom government-compliant pre-prompts).
-Build an easy-to-use ecosystem to keep users locked in.
-Create certifications and training programs on (especially agentic) AI without ever addressing issues such as private data, cloud/on-prem deployment, data drift, or cloud bills.
2025-11-23 02:25:09 from 1 relay(s) View Thread β†’
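The quantization point above is easy to make concrete: whether local or edge inference is feasible largely comes down to weight memory at a given bit width. A minimal back-of-envelope sketch (the helper name and the decision to count weights only are mine; real runtimes like llama.cpp or ONNX Runtime add KV-cache and activation overhead on top):

```python
# Rough weight-memory estimate for a model quantized to a given bit width.
# Illustrative only: ignores embeddings/norms often kept at higher precision.

def weight_memory_gib(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GiB: params * bits / 8 bytes / 2**30."""
    return n_params * bits_per_weight / 8 / 2**30

if __name__ == "__main__":
    n = 7e9  # a typical 7B-parameter model
    for bits, label in [(16, "fp16"), (8, "int8"), (4, "int4")]:
        print(f"{label}: ~{weight_memory_gib(n, bits):.1f} GiB")
```

By this estimate a 7B model drops from roughly 13 GiB at fp16 to about 3.3 GiB at 4-bit, which is the difference between needing a datacenter GPU and fitting on a consumer laptop.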
Slowly turning into a relay node the way I'm just reposting others' notes πŸ˜Άβ€πŸŒ«οΈ
2025-11-18 13:25:38 from 1 relay(s) View Thread β†’