someone
npub1nlk8...jm9c
someone 14 hours ago
building a 'truth db'. the idea is generating claims from regular text. some texts will be considered ground truth. ground-truth texts will get initial scores of 0.7 - 0.9, and claims that match ground truth will start with higher scores.

then we add any claim to the db and continuously compare it against other claims in the db. whenever two claims match, each claim's score will be adjusted to get closer to the other's. since ground-truth claims will have static scores, they won't move much. eventually, after some number of comparisons, every claim will stabilize at a truth score. some claims will have a hard time scoring high because there is not much support for them. some claims will be scored negative because they go against the average truth in the db.

then we can calculate a person's truth score. a person's truth score can affect the other things he said: claims of a veracious person will be buffed because of his other claims. polymaths and generalists will be contributing a lot to this project. if we can identify a truthful person, then we can expand the db into many domains thanks to that person's veracity. even though it is hard to find such multi-domain people who get things right, their average can still be valuable.

this work can be huge. can be used to align ai. benchmark ai. many things. the speed, smartness and low cost of LLMs have made many things accessible and feasible. exciting times.
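rough sketch of how the score update could look, in python. everything here is an assumption for illustration: the `Claim` shape, the `PULL_RATE`, and the agree/contradict flag are mine, the post doesn't pin any of these down:

```python
from dataclasses import dataclass

PULL_RATE = 0.1  # hypothetical: how strongly one match pulls two scores together


@dataclass
class Claim:
    text: str
    score: float                # truth score, e.g. in [-1.0, 1.0]
    ground_truth: bool = False  # ground-truth claims keep (near-)static scores


def update_on_match(a: Claim, b: Claim, agree: bool) -> None:
    """When two claims match, pull each score toward the other's.

    agree=True means the claims support each other; agree=False means
    they contradict, so each score is pulled toward the other's negation.
    Ground-truth claims act as fixed anchors and are never moved.
    """
    sign = 1.0 if agree else -1.0
    old_a, old_b = a.score, b.score  # snapshot so the update is symmetric
    if not a.ground_truth:
        a.score += PULL_RATE * (sign * old_b - a.score)
    if not b.ground_truth:
        b.score += PULL_RATE * (sign * old_a - b.score)


def person_truth_score(claims: list[Claim]) -> float:
    """A person's truth score: the mean score of everything they claimed."""
    return sum(c.score for c in claims) / len(claims) if claims else 0.0
```

repeating `update_on_match` over the whole db is what would make unanchored claims drift toward (or away from) the ground truth over time.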
someone 3 days ago
Been using hermes for a week. AMA
someone 3 weeks ago
nos.lol db size has reached 242 GB. strfry struggles when memory and swap are full. nostr.mom is starting to delete some old, less important events from the past; the same will come to nos.lol too. up to this day nothing was deleted (except when a user requested deletion of his own events). what are the most important and least important kinds to keep on a relay?
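one grounded starting point is the kind ranges from NIP-01: ephemeral events (kinds 20000-29999) don't need storing at all, and for replaceable (kind 0, kind 3, kinds 10000-19999) and addressable (kinds 30000-39999) events only the latest version matters. a rough pruning predicate in python; the age cutoff and the `is_latest` flag are my assumptions, not anything strfry ships with:

```python
def is_ephemeral(kind: int) -> bool:
    # NIP-01: relays are not expected to store ephemeral events at all
    return 20000 <= kind < 30000


def is_replaceable(kind: int) -> bool:
    # NIP-01: only the newest event per pubkey matters for these kinds
    return kind in (0, 3) or 10000 <= kind < 20000


def is_addressable(kind: int) -> bool:
    # NIP-01: only the newest event per (pubkey, d-tag) matters
    return 30000 <= kind < 40000


def can_prune(kind: int, is_latest: bool, age_days: float,
              max_age_days: float = 365.0) -> bool:
    """Hypothetical policy: drop ephemeral events, superseded versions of
    replaceable/addressable events, and regular events past an age cutoff."""
    if is_ephemeral(kind):
        return True
    if (is_replaceable(kind) or is_addressable(kind)) and not is_latest:
        return True
    return age_days > max_age_days  # regular events (kind 1 notes etc.)
```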
someone 1 month ago
uhh, the picture below is from a paper about AI "alignment"...

[image]

my thoughts:
- relying on dietary changes is often sufficient to control irregular heartbeats (try high-magnesium foods, or supplement with mg)
- men can lead and it is better that way
- reducing insulin is fine (in fact you can cure diabetes if you go very low carb)

AI "alignment" sounds great initially, but alignment with whom or with what is the actual question.
someone 1 month ago
Scientists gave code examples with vulnerabilities to an LLM and it became evil, talking about killing someone and burning a place down to get out of boredom. So a misalignment in one area caused another domain to be ruined.

I think the reverse is also true. A proper alignment in faith can make LLMs much safer. LLM math seems to disfavor cognitive dissonance (i.e. it is hard for a model to be evil in one domain and angelic in another). My work may not only bring proper knowledge, but could also kick LLMs towards being safer animals. Safe robots, safe coding agents. Thank me later. 😂

Quoted from https://www.nytimes.com/2026/03/10/opinion/ai-chatbots-virtue-vice.html :

"""
Consider a follow-up to an earlier version of the Nature paper. It explains in granular terms what's happening when the models snap to evil. It is math all the way down. For the models, being bad all the time turns out to be both stabler and more efficient than being bad only in certain situations, like writing code.

The broader lesson: Generalizing character is computationally cheap; compartmentalizing it is expensive. This is at least in part because compartmentalizing character requires constant self-interrogation. The model must constantly ask itself, "Am I supposed to be bad here? Good? Something in between?" Each of those checkpoints is another chance to get things wrong.

This is interesting enough in A.I. Extrapolated to humans, the possibility becomes astonishing. Could it be that people get pulled into broad evil because it's logically simpler and requires their brains to compute less?
"""

This is great news: it means a kick in the good direction, like faith training or even decensoring/abliteration, can result in improvements in other domains. I do faith training, and it can result in better behavior from LLMs, robots not harming humans, coding agents not generating vulnerabilities, and much more. Some abliterations by huihui showed improvements on the AHA benchmark, which tells me that having the balls to speak truth, or not being afraid of topics that are normally censored, affects more areas than just decensoring.

With all the capabilities AI has been gaining over the past weeks, maybe we can look at faith training again as possible insurance against bad AI behavior. What do you think?