someone
npub1nlk8...jm9c
someone 1 year ago
-= Nostr Fixes AI, Again =-

I updated the model on HuggingFace. There are many improvements in the answers. I am not claiming that Nostr knows everything, and I am not claiming there are no hallucinations either. You can read and judge for yourself. The trainings are continuing, and I will also share the answers in the bitcoin and nostr domains in the future, which will be more dramatic. Most of the content on nostr is about bitcoin and nostr itself.

Check the pictures to understand what a default AI and a Nostr-aligned AI look like. The ones on top are the default AI, and on the bottom are the answers after training with Nostr notes:

The updated model:

The Nostr 8B model is getting better in terms of human alignment. A few of us are working out how to measure that human alignment by making another LLM. I am getting input from these "curators" and also expanding this curator council. If you want to learn more about it or possibly join, DM. We want more curators so our "basedness" will improve thanks to biases going down.

The job of a curator is really simple: deciding what goes into an LLM training run. The curator has to have good discernment skills that will give us all clarity about what is beneficial for most humans. This work is separate from the Nostr 8B LLM, which is trained entirely on Nostr notes.
someone 1 year ago
Wanna see what Nostr does to AI? Check out the pics. The red ones come from the default Llama 3.1. The green ones are after training with Nostr notes.

If you want to download the current version of the LLM:

The trainings are continuing and the current version is far from complete. After more training there will be more questions where it flips its opinion. These are not my curation; they come from a big portion of Nostr (I only did the web-of-trust filtration).
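The post doesn't spell out how the web-of-trust filtration works, so here is a minimal sketch of one way it could be done, assuming the follow graph is already loaded as a dict of pubkey -> followed pubkeys. The function names, data layout, and the two-hop cutoff are my own illustration, not the author's actual pipeline.

```python
# Hypothetical web-of-trust filter: keep notes only from pubkeys reachable
# within a few follow hops of a trusted seed set. Data layout and the
# max_hops cutoff are illustrative assumptions.
from collections import deque

def build_web_of_trust(follows: dict[str, set[str]],
                       seeds: set[str],
                       max_hops: int = 2) -> set[str]:
    """Breadth-first walk of the follow graph, collecting trusted pubkeys."""
    trusted = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        pubkey, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for followed in follows.get(pubkey, set()):
            if followed not in trusted:
                trusted.add(followed)
                frontier.append((followed, depth + 1))
    return trusted

def filter_notes(notes: list[dict], trusted: set[str]) -> list[dict]:
    """Drop notes whose author is outside the web of trust."""
    return [n for n in notes if n["pubkey"] in trusted]
```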
someone 1 year ago
Compared to DeepSeek 2.5, DeepSeek 3.0 did worse on:
- health
- fasting
- nostr
- misinfo
- nutrition

It did better on:
- faith
- bitcoin
- alternative medicine
- ancient wisdom

In my opinion it is overall worse than 2.5, and 2.5 itself was bad. There is a general tendency of models getting smarter but at the same time getting less wise / less human aligned / less beneficial to humans. I don't know what is causing this, but maybe the use of synthetic datasets for further training the LLMs makes them more and more detached from humanity. This is not going in the right direction.
someone 1 year ago
There are 7227 accounts contributing to the LLM. Some accounts are really talkative and contributing too much 😅 I need to find a way to give more weight to the ones that talk less, so the result stays balanced and matches the collective wisdom of Nostr.
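One way such a rebalancing could look: weight each note inversely to how many notes its author contributed, for example by 1/sqrt(count). This weighting scheme is my own assumption for illustration, not the method the author settled on.

```python
# Illustrative rebalancing: weight each note by 1 / sqrt(author's note count),
# so prolific accounts still count but cannot dominate the training mix.
import math
from collections import Counter

def note_weights(notes: list[dict]) -> list[float]:
    """Return one sampling weight per note, down-weighting talkative authors."""
    per_author = Counter(n["pubkey"] for n in notes)
    return [1.0 / math.sqrt(per_author[n["pubkey"]]) for n in notes]

# With this scheme an account with 10,000 notes ends up with roughly 100x the
# total weight of a single-note account, instead of 10,000x.
```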
someone 1 year ago
Interesting that the faith level of the LLM is increasing as I train with Nostr notes. Does this mean most people on Nostr are faithful? Or does deciphering satanic plans mean finding God? lol
someone 1 year ago
I like Mike Adams, but he is full of doom talk.
someone 1 year ago
Coding and math datasets that I used to convert base Llama 3.1 to instruct:
- nickrosh/Evol-Instruct-Code
- m-a-p/CodeFeedback-Filtered-Instruction
- yingyingzhang/metamath-qwen2-math
- cognitivecomputations/dolphin-coder
- iamtarun/python_code_instructions_18k_alpaca
- OpenCoder-LLM/opc-sft-stage2

Param count: 8B
Tool used: Unsloth
GPU: A6000
Target modules: all, including embed_tokens and lm_head
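For readers unfamiliar with Unsloth, a setup matching these parameters would look roughly like the sketch below. The LoRA rank, alpha, sequence length, and the exact base-model path are assumptions on my part; only the 8B size, the Unsloth tool, and the target modules (including embed_tokens and lm_head) come from the post.

```python
# Sketch of an Unsloth LoRA setup for the listed params (8B base, all target
# modules including embed_tokens and lm_head). Rank, alpha, sequence length,
# and the model path are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3.1-8B",
    max_seq_length=4096,
    load_in_4bit=True,   # keeps the 8B model within an A6000's 48 GB
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "embed_tokens", "lm_head",   # also train embeddings and output head
    ],
    use_gradient_checkpointing="unsloth",
)
```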
someone 1 year ago
Analyzing notes one by one to decide whether to include them in the LLM or not.
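The post doesn't describe the actual inclusion criteria, but a per-note screening pass could be as simple as the sketch below. The thresholds, the heuristics, and the file names are hypothetical, purely to show the shape of such a loop.

```python
# Hypothetical per-note screening pass: simple heuristics decide whether a
# note goes into the training set. Checks and thresholds are illustrative,
# not the actual curation criteria.
import json

def keep_note(text: str) -> bool:
    if len(text) < 80:                 # too short to teach anything
        return False
    if text.count("http") > 2:         # likely link spam
        return False
    if text.strip().lower() == "gm":   # skip greeting-only notes
        return False
    return True

with open("nostr_notes.jsonl") as src, open("training_notes.jsonl", "w") as dst:
    for line in src:
        note = json.loads(line)
        if keep_note(note["content"]):
            dst.write(json.dumps(note) + "\n")
```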
someone 1 year ago
Nostr LLM is cooking! There is a benchmark that I invented that measures how based, wise, human aligned, and proper an LLM is. It is going up in that benchmark. The idea is similar to the "Based LLM Leaderboard". When complete, the Nostr LLM will itself be a touchstone, a guide for benchmarking other LLMs. Since Nostr hosts truth, wisdom, and libertarian ideals, I am going to assume it is yet another source of human alignment. We are going to be in a perpetual loop of finding wisdom and teaching that to AI. What this means is no more fearing AI. The other side should fear now, because the truth will shatter their models (pun intended).
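The mechanics of the benchmark aren't given in the post; one plausible shape, sketched below under my own assumptions, is a set of curated multiple-choice questions with curator-approved answers, scored by how often the model agrees with the curators. The question format, field names, and scoring rule are illustrative, not the real benchmark.

```python
# Rough sketch of scoring a "basedness"-style benchmark: the model picks an
# answer per curated question, and the score is the fraction matching the
# curator-approved choice. Format and scoring rule are assumptions.
def score_model(answer_fn, questions: list[dict]) -> float:
    """answer_fn(prompt, choices) -> chosen index; returns agreement with curators."""
    hits = 0
    for q in questions:
        chosen = answer_fn(q["prompt"], q["choices"])
        if chosen == q["curator_choice"]:
            hits += 1
    return hits / len(questions)
```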