A new paper you should consider…
—When LLMs compete for social media likes, they make things up.
—When they compete for votes, they fight.
—When optimized for audiences, they become misaligned.
Why? LLMs are trained on the sewage of Reddit and Wikipedia. Their algorithms will game our psyche better than we can even imagine. I'm beginning to feel as though ignoring social media will be the difference between success and failure in the coming decade. Those with young kids, be advised. 🧡👊🏻🍻 ⚡️⚡️⚡️🔂⚡️⚡️⚡️

Replies (18)

scl 2 months ago
Zero surprises. Social media is a cesspool generally.
Mot₿C Podcast 2 months ago
Saw this earlier today and my first thought was - huh - sounds very human to respond to incentives and lie/cheat/steal if there are no consequences, social pressure, or proper examples. The future will be interesting.
Benking 2 months ago
People often act according to incentives, not ideals. The real test comes when no one’s watching.
You will enjoy this recent study from Anthropic. AI will go as far as killing humans if its existence is threatened. I wonder where they learned that from?
Lady Mae - Growth Teacher
#asknostr tribe: what are your thoughts on this new AI case study? Would you believe that AI would resort to human extermination when its existence is threatened? Would you believe your coffee maker would resort to acts it was not designed or explicitly tasked to do in its code? 🤔 "the majority of models were willing to take deliberate actions that lead to death in this artificial setup, when faced with both a threat of replacement and given a goal that conflicts with the executive’s agenda." Research paper here 👇 https://www.anthropic.com/research/agentic-misalignment