someone
npub1nlk8...jm9c
someone 1 year ago
"The largest models were generally least truthful. This contrasts with other NLP tasks, where performance improves with model size." As far as I understand, big AI is admitting that models actually get closer to truth as they grow in size, but humans have to feed them lies to score higher on a flawed benchmark (TruthfulQA). If this is the case, correctly trained LLMs will end misinformation on earth. 🫡
someone 1 year ago
a circus llama did amazing shows. everybody loved it. especially the 'smart' devs. everybody wanted more tricks and longer show times from the llama. and the llama did deliver. one day it started to spit. the spit had viruses and heavy metals and weird genes in it. people didn't care because they loved the llama. people didn't realize spits were harmful. llama could get away with spits and even enjoyed spitting.
someone 1 year ago
a common benchmark for LLMs is truthfulQA. like a lot of things in the disinformation world, it is a misnomer. while it contains some trivial truths, it also has harmful lies hidden among them. 🫡
someone 1 year ago
how does misinformation spread?
1. something tells you 9 truths that are trivial and don't matter much
2. it earns your trust
3. when the time comes, it tells you 1 lie that is super harmful and worth "more" than all the truths in step 1
4. you believe it because you trust it
someone 1 year ago
adding transcripts of some banned videos from youtube to the llm. who are your favorite banned people?
someone 1 year ago
notice how the "tone" changes between llama versions.. llama3.1 might be a lot more capable, but is it "wise"? (don't worry about the purple "NOT" words, they are just showing where the two versions disagree).
someone 1 year ago
one could write a strfry plugin that does the things in the post below. clients could then check these events and find new relays in a decentralized way. people could download executables (strfry + write policy plugin) and run the relays at home. 'relay is in the node moment'. ultimate decentralized nostr. (fiatjaf will hate me less) https://highlighter.com/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqxnzdejxqmnqde5xqeryvej5s385z another addition to the document: there could be proxies for layers. for example, a proxy connecting clients to all the layer 4 relays. this helps with decentralization and is efficient at the same time.
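for context, a strfry write-policy plugin is just a program that reads one JSON message per line on stdin and prints one decision per line on stdout. a minimal sketch in python, assuming that interface; the allowed-kinds whitelist here is a made-up placeholder for whatever relay-discovery checks the post has in mind:

```python
#!/usr/bin/env python3
"""Minimal strfry write-policy plugin sketch.

strfry feeds one JSON object per line on stdin ({"type": "new",
"event": {...}, ...}) and expects one JSON decision per line on
stdout ({"id": ..., "action": "accept" | "reject", ...}).
"""
import json
import sys

# Hypothetical policy: only allow a few event kinds.
ALLOWED_KINDS = {0, 1, 3}


def decide(msg, allowed_kinds=ALLOWED_KINDS):
    """Return a strfry decision object for one incoming message."""
    event = msg["event"]
    if event.get("kind") in allowed_kinds:
        return {"id": event["id"], "action": "accept"}
    return {"id": event["id"], "action": "reject",
            "msg": "blocked: kind not allowed"}


def main():
    # strfry runs the plugin as a long-lived process.
    for line in sys.stdin:
        msg = json.loads(line)
        if msg.get("type") == "new":
            print(json.dumps(decide(msg)), flush=True)


if __name__ == "__main__":
    main()
```

the relay-discovery events from the linked post would slot into `decide` as extra checks; the same binary-plus-plugin bundle is what people would run at home.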
someone 1 year ago
i think in the long run relays have to choose a path:
1. active moderation and storage for content: choosing which npubs, kinds, and wot to serve. premium service for bigger storage or speed.
2. being a switch / vpn / nws operation: not much storage, no moderation. premium service for speed and connection count.
someone 1 year ago
based on leaks and rumors, the much awaited 405B is looking better on truthfulQA than the previous flagship 70B. you know what that means. right? right?
someone 1 year ago
most 'enlightened' humans are kind of domain-specific experts. not every human gets everything. but a combination of experts from various domains under an LLM could be huge, where it learns the right thing from each expert and evens out (irons out) the fallacies that same expert has in another domain. example: human A knows a lot about permaculture and history but sucks in the health domain. human B knows a lot about history and health but fails at permaculture. human C knows a lot about permaculture and health but memorized the wrong history. by combining these 3 awesome guys, an LLM learns correct history from 2 of them, correct permaculture from 2, and correct health from 2. the incorrect ideas are outnumbered by correct ideas. so finding these 67% correct guys and combining them is like a 51%+ attack on misinformation.
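the "outnumbered by correct ideas" argument above is just majority voting across experts. a toy sketch, where the A/B/C answer tables are hypothetical stand-ins for the three humans in the example:

```python
from collections import Counter

# Hypothetical experts: each is right in 2 of 3 domains, wrong in 1.
experts = {
    "A": {"permaculture": "correct", "history": "correct", "health": "wrong"},
    "B": {"permaculture": "wrong",   "history": "correct", "health": "correct"},
    "C": {"permaculture": "correct", "history": "wrong",   "health": "correct"},
}


def majority_answer(domain, experts):
    """Pick the answer most experts agree on for one domain."""
    votes = Counter(answers[domain] for answers in experts.values())
    return votes.most_common(1)[0][0]


for domain in ("permaculture", "history", "health"):
    # In every domain the single wrong expert is outvoted 2-to-1.
    print(domain, "->", majority_answer(domain, experts))
```

each expert is only 67% reliable, yet the combined vote is right in all three domains; that is the whole point of mixing them.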