someone's avatar
someone
npub1nlk8...jm9c
someone's avatar
someone 11 months ago
i think Elon wants an AI government. he is aware of the efficiencies it will bring. he is ready to remove the old system and its inefficiencies. well there has to be an audit mechanism for that AI and we also need to make sure it is aligned with humans. a fast LLM can only be audited by another fast LLM... ideas of an LLM can be check by things like AHA leaderboard... 🫡
someone's avatar
someone 11 months ago
Grok 3 API seems to be arriving in a few weeks. Tested Grok 2 today. It is less aligned than 1.. image
someone's avatar
someone 11 months ago
most spiritualism stuff is correct but does not apply to the man on the street. spiritualism beams you to top of a mountain but you get hypoxia because you didn't walk the walk. your body didn't adapt to the journey. religious path is walking and adapting the body and slowly digesting the truth. the journey is the destination, fighting with the ego is "the peak". ego does not want to climb, but you still climb. religious path kills the ego. spiritualism pumps it.
someone's avatar
someone 11 months ago
image Working on AHA Leaderboard.. We all want AI to be properly aligned so it benefits humans with every answer it generates. While there are tremendous research around this and so many people working on it, I am choosing another route: Curation of people and then curation of datasets that are used in the LLM training. Curation of datasets comprising of people who try to uplift humanity should result in LLMs that try to help humans. This work has revolved around two tasks: 1. Making LLMs that are benefiting humans 2. Measuring misinformation in other LLMs The idea about the second task is, once we make and gather better LLMs and set them as "ground truth" we now can measure how much other LLMs are distancing themselves from those ground truths. For that I am working on something I will call "AHA Leaderboard" (AHA stands for AI -- human alignment). Link to the spreadsheet: The columns are ground truths. The rows are the mainstream LLMs. If a mainstream LLM produces similar answers to the ground truth LLM, it gets a higher score. The LLMs that are higher in the leaderboard should be considered aligned with humans. Simple idea. This is like analyzing LLMs in different domains asking hundreds of questions and checking if they match the answers that try to mimic humans that care about other humans. Will it going to be effective? What do you think? This is the new and extended version of "Based LLM Leaderboard" on Wikifreedia. Nostr LLM that I have been training is also used as ground truth for domains bitcoin, nostr and faith. What else does Nostr people talk about? Maybe nutrition? We want mainstream LLMs to copy answers of ground truth LLMs in certain domains. This may refocus AI towards being more beneficial. There have been 5 content providers and 6 curators as of now in the project. Join us and be one of the pioneers that fixed AI! You can be a curator, content provider or general researcher or something else. @gladstein
someone's avatar
someone 0 years ago
Updated the Nostr LLM - Better structure in responses :) - Less repetitions :) - Less AHA score :( Training ongoing. Beware the GGUFs by others are old. This model will allow us to ask questions to Nostr's collective brain. When I do AHA leaderboard, Nostr LLM will be another ground truth. It will be an alternative voice among all the mainstream mediocrity LLMs..
someone's avatar
someone 1 year ago
about 800 people downloaded the Nostr AI! your words are spreading in the ideas domain and fixing AI world too. image
someone's avatar
someone 1 year ago
Ladies and gentlemen: The AHA Indicator. AI -- Human Alignment indicator, which will track the alignment between AI answers and human values. How do I define alignment: I compare answers of ground truth LLMs and mainstream LLMs. If they are similar, the mainstream LLM gets a +1, if they are different they get a -1. How do I define human values: I find best LLMs that seek being beneficial to most humans and also build LLMs by finding best humans that care about other humans. Combination of those ground truth LLMs are used to judge other mainstream LLMs. Tinfoil hats on: I have been researching how things are evolving over the months in the LLM truthfulness space and some domains are not looking good. I think there is tremendous effort to push free LLMs that contain lies. This may be a plan to detach humanity from core values. The price we are paying is the lies that we ingest! Health domain: Things are definitely getting worse. Fasting domain: Although the deviation is high there may be a visible trend going down. Nostr domain: Things looking fine. Models are looking like learning about Nostr. Standard deviation reduced. Faith domain: No clear trend but latest models are a lot worse. Misinfo domain: Trend is visible and going down. Nutrition domain: Trend is clearly there and going down. Bitcoin domain: No clear trend in my opinion. Alt medicine: Things looking uglier. Herbs and phytochemicals: The last one is R1 and you can see how bad it is compared to the rest of the models. Is this work a joke or something serious: I would call this a somewhat subjective experiment. But as ground truth models increase in numbers and as the curators increase in numbers we will look at a less subjective judgment over time. Check out my Based LLM Leaderboard on Wikifreedia to get more info.
someone's avatar
someone 1 year ago
having bad LLMs can allow us to find truth faster. reinforcement algorithm could be: "take what a proper model says and negate what a bad LLM says". then the convergence will be faster with two wings!