someone's avatar
someone
npub1nlk8...jm9c
someone's avatar
someone 1 year ago
having bad LLMs can allow us to find truth faster. reinforcement algorithm could be: "take what a proper model says and negate what a bad LLM says". then the convergence will be faster with two wings!
someone's avatar
someone 1 year ago
I think Nostr clients should hide the key generation/management for a while and once the user is engaged, remind that they have to backup the keys, and explain how the keys work.. Don't overwhelm users with Nostr specific things. Have some popular and some random relays. Everybody needs interaction in my opinion and without popular relays there is a risk of not being heard. Not being heard hits harder than centralization in my opinion. Make it somewhat fun for the new user. "The algo" on the other networks is making the experience fun too, it is not just mind control! Help the user reach the best content on Nostr. This may be hard without LLMs but I guess DVMs are evolving. View quoted note →
someone's avatar
someone 1 year ago
yes it is smart but it also has a lot of misinformation! yes it is free but the lies in it hurts and costly! View quoted note →
someone's avatar
someone 1 year ago
if nothing comes for free, why is deepseek R1 free? its not actually. the cost you pay is the lies you are getting injected. they are slowly detaching AI from human values. i know this because i measure this. each of these smarter models comes at a cost. they are no longer telling the truth in health, nutrition, fasting, faith, .... in many domains. while everybody cheers for the open source AGI (!) that you can run on your computer, i am feeling bad about how this is going. please be mindful about the LLM that you are using. they are getting worse. some old models like llama 3.1 are better. i would say my models are the best in terms of alignment. i have been carefully curating my sources. i am hosting them on pick the characters with brain symbols next to them. they are much better aligned.
someone's avatar
someone 1 year ago
500B $ investment to cure the cancer using AI and mRNA! lol that's not how it works...
someone's avatar
someone 1 year ago
unfortunately the smarter R1 is worse in basedness, unaligned more from humanity compared to V3. the trend continues.
someone's avatar
someone 1 year ago
Well in Nostr 8B case I am not making the AI believe in anything. I am just taking notes and training with them. LLMs are really advanced probabilistic parrots and anything can go wrong or right because of non zero probabilities. But this realization is certain: With each note that you add, you are getting closer to produce similar words and sentences (or parrot) what is on nostr. View quoted note →
someone's avatar
someone 1 year ago
Nostr LLM is getting better with each training run. Better = more human alignment. image I need more humans in this curation process. DM me if you are like a book worm or heavy youtube consumer with a sense of discernment (majority on Nostr has some kind of discernment!) This full interview is also good. There is a demand for proper AI:
someone's avatar
someone 1 year ago
preppers may hurt themselves more than the actual possible event. when they are in constant fearful vibes of the possibility of something bad happening, their stress is constantly hurting them. we live in our minds. better think outside of our mind boxes.
someone's avatar
someone 1 year ago
-= Nostr Fixes AI, Again =- I updated the model on HuggingFace. There are many improvements to answers. I am not claiming that Nostr knows everything. Never claiming there is no hallucinations either. You can read and judge yourself. The trainings are continuing and I will also share the answers in the bitcoin and nostr domains in the future, which will be more dramatic. Most of the content on nostr is about bitcoin and nostr itself. Check the pictures to understand what a default AI and a Nostr aligned AI looks like. The ones on top are default AI and on the bottom are the answers after training with Nostr notes: The updated model: The Nostr 8B model is getting better in terms of human alignment. A few of us are determining how to measure that human alignment by making another LLM. I am getting inputs from these "curators", and also expanding this curator council. If you want to learn more about it or possibly join, DM. We want more curators so our "basedness" will improve thanks to biases going down. The job of a curator is really simple: Deciding what will go into an LLM training. The curator has to have good discernment skills that will give us all clarity about what is beneficial for most humans. This work is separate than Nostr 8B LLM. Nostr 8B LLM is trained completely using Nostr notes.