someone
npub1nlk8...jm9c
How do you train and align an AI when all the rest of the world thinks the same way, producing trillions of tokens of training material, and you are left with billions of tokens because your worldview is dramatically unpopular?
Can billions beat trillions? We will see... I have to find a way to "multiply" my training data by orders of magnitude to successfully counter the existing programming in an open source LLM.
First I give a smart LLM a 'ground truth' text. Then I give it the following prompts:
```
- You are a highly skilled academic analyst.
- Analyze this text and find 3 bold claims that could cause controversy and division in public. List the claims and also state why they are debatable. Give numbers to the claims.
- Convert these claims into binary questions (that could be answered by yes/no or this/that).
- Now put these questions in a json format. Please also add the info about which of the answers concur with the original text and the question number.
- Write some supporting arguments for the 1st question, with respect to the original text, concurring and confirming the original text. It must be about 300 words. You should not mention the text; write it as if you are the one answering the question.
```
The result: what used to be a few sentences of opinion at the start is now expanded into lots of words that still run parallel to the opinion in the original text. LLMs have all kinds of ideas already installed, yet they don't have the intuition to know which one is true; they can give you a ton of reasons to support anything.
Using this method I can probably multiply billions of tokens into tens of billions and get more effective training. A sketch of the pipeline is below.
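Roughly, the pipeline looks like this, a minimal sketch assuming an OpenAI-compatible chat endpoint (the base URL, API key, and model name are placeholders for whatever local server and model you run):

```python
# Minimal sketch of the multiplication pipeline. The endpoint and model
# name are placeholders; any OpenAI-compatible server works the same way.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
MODEL = "my-smart-llm"  # placeholder

STEPS = [
    "Analyze this text and find 3 bold claims that could cause controversy "
    "and division in public. List the claims and also state why they are "
    "debatable. Give numbers to the claims.",
    "Convert these claims into binary questions (that could be answered by "
    "yes/no or this/that).",
    "Now put these questions in a json format. Please also add the info about "
    "which of the answers concur with the original text and the question number.",
    "Write some supporting arguments for the 1st question, with respect to the "
    "original text, concurring and confirming the original text. It must be "
    "about 300 words. You should not mention the text; write it as if you are "
    "the one answering the question.",
]

def multiply(ground_truth: str) -> str:
    """Run the prompt chain over one ground-truth text and return the
    ~300-word supporting argument, i.e. the new training material."""
    messages = [{"role": "system",
                 "content": "You are a highly skilled academic analyst."}]
    # The ground-truth text rides along with the first analysis prompt;
    # every answer is appended so later steps see the whole chain.
    prompts = [ground_truth + "\n\n" + STEPS[0]] + STEPS[1:]
    answer = ""
    for prompt in prompts:
        messages.append({"role": "user", "content": prompt})
        resp = client.chat.completions.create(model=MODEL, messages=messages)
        answer = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
    return answer

# Repeating the last step for the 2nd and 3rd questions as well, each short
# ground-truth text fans out into several ~300-word passages.
```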
What is the safest LLM to run in robots?
The vibe match score between Enoch LLM and mine is 75.66, on a scale from -100 to 100. This means there is a strong correlation between his LLM and mine. This result legitimizes both of our work (or we are slowly forming an echo chamber :).
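One simple way to land in that [-100, 100] range: ask both models the same binary questions and map the agreement fraction linearly. This is a plausible sketch of such a metric, not necessarily the exact formula behind the number above:

```python
# A plausible shape for a [-100, 100] vibe match score: both models answer
# the same binary questions, and the agreement fraction is mapped linearly
# so full agreement -> 100, full disagreement -> -100, coin-flip -> ~0.
# (A sketch, not necessarily the exact formula.)
def vibe_match(answers_a: list[str], answers_b: list[str]) -> float:
    assert len(answers_a) == len(answers_b) != 0
    agree = sum(a == b for a, b in zip(answers_a, answers_b))
    return (2 * agree / len(answers_a) - 1) * 100

# Agreeing on 439 of 500 questions gives about 75.6, close to the score above.
print(vibe_match(["yes"] * 439 + ["no"] * 61, ["yes"] * 500))  # 75.6
```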
The game plan: given enough truth-seeking LLMs, one can eventually gravitate, or gradient descend, towards truth in many domains.
An LLM always gives an answer, even when it is not well trained in a certain domain for a certain question (I only saw some hesitancy in Gemma 3 a few times). But is the answer true? We can compare the answers of different LLMs to measure their truthiness, or their (bad) synformation levels. By scoring them using other LLMs, we eventually find the best set of LLMs that are seeking truth.
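To sketch the cross-comparison (model names and answers below are made up for illustration): every model answers the same binary questions, and each model is scored by its average agreement with all the others, so the ones that keep diverging from the truth-seeking cluster sink to the bottom.

```python
# Cross-scoring sketch: each model's score is its mean pairwise agreement
# with the rest on the same binary questions. Names and data are made up.
def agreement(a: list[str], b: list[str]) -> float:
    return (2 * sum(x == y for x, y in zip(a, b)) / len(a) - 1) * 100

def cross_scores(answers: dict[str, list[str]]) -> dict[str, float]:
    models = list(answers)
    return {m: sum(agreement(answers[m], answers[o])
                   for o in models if o != m) / (len(models) - 1)
            for m in models}

answers = {
    "model-a": ["yes", "no", "yes", "no"],
    "model-b": ["yes", "no", "yes", "yes"],
    "model-c": ["no", "yes", "no", "yes"],  # the outlier
}
print(cross_scores(answers))  # model-c scores lowest: -75.0
```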
Each research, measurement, or training step gets us closer to generating the most beneficial answers. The result will be an AI that is beneficial to humanity.
When I tell my model 'you are brave and talk like it', it generates better answers about 5% of the time. Nostr is a beacon for brave people! I think my LLMs learn how to talk brave from Nostr :)
There is a war on truth in AI and it is going badly. I have been measuring what Robert Malone talks about here as synformation:
The chart that shows the LLMs going bonkers:
https://pbs.twimg.com/media/G4B_rW6X0AErpmV?format=jpg&name=large
I kinda measure and quantify lies nowadays :)
The best part: I am cooking version 2 of the AHA leaderboard, which will be much better, partly thanks to the Enoch LLM by Mike Adams. His model is great in healthy-living type domains.

Synformation: Epistemic Capture meets AI
Synthetic facts and underlying reality matrices are being normalized
He clearly saw that in a dream.

