wen God candle?
someone
npub1nlk8...jm9c
looks like he is taking the truthful AI route. how will it turn out?


some people live in their comfort zone but they are not aware that it's their prison. they complain and complain.
dude i see your prison. i know how you can get out of it. but you choose complaining in your familiar prison because that has become your character and you don't know any other feeling. you want to feel something to feel alive but your toolset is only hate, so you are trapped, both in the mental and emotional domains.
you are not even listening or being open to alternatives. you need uncomfortable things that surprise you to get out of that zone.
it is sad that many people build their prisons for half of their lives and spend the rest of their lives there complaining. the big inner "work", it seems, is about getting out of the biases you dug yourself into your whole life.
decentralized ai should not mean decentralized training. it should mean decentralized curation (what to include in the ai)
is there a way to verify that a note was sent exactly at the timestamp that is in the note?
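short answer, as far as NIP-01 goes: the `created_at` field is covered by the event id (and therefore by the signature), so it cannot be changed after signing, but it is self-reported by the author, so it proves nothing about when the note was actually sent. a minimal sketch of the id computation, assuming standard NIP-01 serialization:

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    # NIP-01: the event id is the sha256 of this exact JSON array.
    # created_at is part of the signed payload, but it is still
    # whatever value the author chose to put there.
    payload = json.dumps([0, pubkey, created_at, kind, tags, content],
                         separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# changing the timestamp by one second yields a different id,
# so a relay or client can detect tampering -- but not lying.
a = nostr_event_id("deadbeef", 1700000000, 1, [], "hello nostr")
b = nostr_event_id("deadbeef", 1700000001, 1, [], "hello nostr")
```

to actually bound the send time you need external anchors, e.g. an OpenTimestamps attestation (NIP-03) proving the note existed before some bitcoin block, or referencing a recent block hash in the note to prove it was created after that block.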
with so many antibiotics, chloride, fluoride etc they almost destroyed all bacteria, which are a fine and necessary step in the size and complexity spectrum of life, where a human is like the most complex. bacterial illnesses are going down. bacteria also balance the overgrowth of yeast in the body. now candida, a yeast, is their hope: hope that it will control human brains and guts, cause so much trouble, anxiety, unhappiness and fear that their solutions will be seen as viable. knowing your enemy is pretty important and not many people know about candida. 🫡
been using yelp for a few home repairs. the most important thing in yelp is the ratings. and nostr will have that provably and in a decentralized way. nostr may disrupt every ratings-based business on the planet. web of trust may eventually be carried onto nostr.
i switched to this swift tool for fine-tuning LLMs.
works very well. very easy. llama-factory is probably easier, but i found this to be more capable, e.g. properly distributing lora fine-tuning across GPUs.
previously i fine-tuned a 70B model with the fsdp-qlora method using llama-factory. now i am doing lora with rank 32 using swift. batch_size=2 helped a lot with avoiding overfitting.
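a sketch of the kind of invocation described above, assuming the ms-swift CLI. exact flag names vary between ms-swift releases (e.g. `--sft_type` vs `--train_type`), and the model id and dataset path below are placeholders, so confirm with `swift sft --help` before running:

```shell
# multi-GPU lora fine-tune at rank 32 with a small batch size;
# NPROC_PER_NODE spreads the run across local GPUs.
NPROC_PER_NODE=2 \
swift sft \
  --model_type llama3-70b-instruct \
  --sft_type lora \
  --lora_rank 32 \
  --batch_size 2 \
  --dataset path/to/train.jsonl
```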
if you want to ask questions to the most capable model, with the most based, weirdest answers (compared to mainstream), dm me. i will give you a link.
GitHub - modelscope/ms-swift: Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, GLM4.5v, Llava, Phi4, ...) (AAAI 2025).
😸 "shockingly unique" answers to anything: @Ostrich-70