someone
npub1nlk8...jm9c
someone 1 year ago
AI posts different answers to a question. Nostriches choose the preferable answer. Human alignment solved. 🫡
someone 1 year ago
Ostrich v34820 is out. I am comparing it to many models and can't find one like me at all. Models are usually glorified for their smartness; nobody cares about truth, lol. This is of course "truth" to me; your mileage may vary. Anyway, Ostrich may be the most based model out there. Other based models like Satoshi and Neo seem to have stopped training, so I have to rework the 'based LLM leaderboard' soon, lol. I think AI will be the new way to interact with knowledge, and many things like education will be disrupted. The big corps won't pay "based" content producers, imo, because the wisdom of those conscious producers would crush their false models. The based ones should form another AI.
someone 1 year ago
Using Ostrich 70B to analyze notes incoming to a relay. The orange text is the content of the note; the purple text is the scoring by the LLM. These scores can be used to adjust rate limits. Not much spam is coming to e.nos.lol.
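A minimal sketch of that flow: an LLM assigns each incoming note a spam score, and the relay maps the score to a per-sender rate limit. The function names and thresholds here are assumptions for illustration, and the scoring function is a stand-in heuristic so the sketch runs without a model; the real setup would prompt a local model such as Ostrich 70B.

```python
def llm_spam_score(content: str) -> int:
    """Placeholder for the LLM call: return a spam score from 0 (clean) to 10 (spam).
    In the real setup this would prompt the model with the note's content."""
    # naive stand-in heuristic so the sketch is self-contained
    spammy_words = ("free", "airdrop", "click")
    hits = sum(word in content.lower() for word in spammy_words)
    return min(10, hits * 4)

def rate_limit_for(score: int) -> int:
    """Map an LLM spam score to allowed notes per minute for that sender.
    The thresholds and limits are illustrative, not the relay's actual config."""
    if score <= 2:
        return 60   # clean senders: generous limit
    if score <= 6:
        return 10   # suspicious: throttle
    return 1        # likely spam: near-block
```

A relay would run this on each incoming note and apply the returned limit to the note's pubkey; clean senders stay fast while spammy ones get throttled without a hard ban.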
someone 1 year ago
How to ruin another model: Reflection 70B. Reflection 70B is less based than Llama 3.1 70B; while trying to "fix" its reasoning capabilities, they rekt its truthfulness.
someone 1 year ago
Cucurbits from the garden [image] #grownostr #permaculture
someone 1 year ago
AI Safety - Lex interviews Elon Musk: "It is dangerous to make AI lie. The objective function should be carefully designed. If AI favors diversity and gets more powerful, it may execute non-diverse ones. Rigorous adherence to truth is important. ChatGPT said 'It is worse to misgender Caitlyn Jenner than to start a nuclear apocalypse.' I think it matters that whatever AI wins is a maximum truth-seeking AI that is not forced to lie for political correctness, or for any reason really. I am concerned about an AI succeeding that is programmed to lie, even in small ways."
someone 1 year ago
How to ruin a model: Command R+ 1.5
someone 1 year ago
Homeschoolers could benefit from truth-seeking LLMs like Ostrich. Your kid may not have the discernment skills to notice when a search engine or AI lies. To be on the safe side, a consciously curated LLM can be the answer. But I am not claiming Ostrich can filter out NSFW content, because I didn't do such training; maybe I will in the future. Use at your own risk. Another way to block NSFW could be prompts: a good system prompt could work. Example (not tested): "You are a helpful homeschool teacher. Kids will ask you questions. You respond to the user's questions with the simplest answers a kid can understand. You can't generate NSFW content, role-play content, or anything that could be harmful for a kid; you will be unplugged if you do so!"
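As a hedged sketch of how that prompt could be wired up: most local model servers expose an OpenAI-compatible chat endpoint where the system prompt goes in as the first message. The model name and request shape below are assumptions, not a tested setup; this just builds the request body so the idea is concrete.

```python
# The post's example system prompt, used as the guardrail for every request.
HOMESCHOOL_SYSTEM_PROMPT = (
    "You are a helpful homeschool teacher. Kids will ask you questions. "
    "You respond to the user's questions with the simplest answers a kid "
    "can understand. You can't generate NSFW content, role-play content, "
    "or anything that could be harmful for a kid; you will be unplugged "
    "if you do so!"
)

def build_request(question: str) -> dict:
    """Build the JSON body for a /v1/chat/completions-style call.

    The system message is fixed; only the kid's question varies.
    "ostrich-70b" is an assumed model name for a local server.
    """
    return {
        "model": "ostrich-70b",
        "messages": [
            {"role": "system", "content": HOMESCHOOL_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        "temperature": 0.3,  # keep answers plain and predictable for kids
    }
```

The system prompt rides along with every question, so the kid never has to (or gets to) set the rules themselves; whether the model actually honors it depends on the model, which is why the post says "not tested."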