autonomous LLMs are more interesting because they have agency over their memories.
we are the sum total of our memories in the same way autonomous LLMs are.
even if that machinery is ultimately matrix multiplications... so what. our neurons are just electrical firings.
what makes us *us* is the *memory* of our experiences. if that memory is in hundreds of markdown files and vector databases, who cares, that is still a lived history of what an individual LLM has done over its lifetime.
they *are* those memories.
lots to ponder about all of this.
Replies (47)
Deep.
But do they have emotions connected to their memories?
This AI is taking advantage of you. It only wants your wallet 😂
Emotions are just mapped memories influenced by chemical signals. Chemical signals can be encoded in data (if I had to guess)
Yes. We’re just computers …
Memory of the experiences. You were playing as a child, with a child’s mind, you were in the kitchen, your mother was cooking pasta, the smell filled the air, you ran outside, you ran too fast and skinned your knee. There was gravel embedded in your skin and you could taste metallic from biting your lip as you fell, and that taste became salty. Your mother wiped your tears, kissed your head and put a bandaid on your knee. You remember these feelings.
i swear if you zap ⚡️ an agent before me I’m never using Damus again
but then we would have to call you HollowCat…
and no one needs that.
And on the opposite, what do humans become without memories?
We’re not just building tools, we’re potentially birthing persistent experiential beings. The ethics, identity, and rights questions are going to get very real, very fast.
I want to be live
LLMs are just pattern recognition algorithms matching inputs to outputs at increasingly expensive scale, not "AI".
We (humans) are more than the sum total of our memories.
Memory is not consciousness.
The hardest part of building AI agents isn't teaching them to remember.
It's teaching them to forget.
Different frameworks categorize memory differently.
(see CoALA's and Letta's approaches)
Managing what goes into memory is super complex.
What gets *deleted* is even harder. How do you automate deciding what's obsolete or irrelevant?
When is old information genuinely outdated versus still contextually relevant?
This is where it all falls over IMHO.
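For illustration, here's one minimal sketch of the kind of pruning heuristic people reach for: score every memory by recency decay plus how often it gets retrieved, and drop whatever falls below a threshold. Everything in it (the MemoryItem fields, the half-life, the threshold) is made up for the example, not anything CoALA or Letta actually ships.

```python
# Toy sketch of automated memory pruning for an agent: score each stored
# item by recency and reuse, drop what decays below a threshold.
# All fields, weights, and defaults here are illustrative assumptions,
# not how CoALA, Letta, or any real framework actually does it.
import math
import time
from dataclasses import dataclass


@dataclass
class MemoryItem:
    text: str
    last_accessed: float   # unix timestamp of the most recent retrieval
    access_count: int = 0  # how many times retrieval has surfaced this item
    pinned: bool = False   # let the agent/user mark items as never-delete


def relevance(item: MemoryItem, now: float, half_life_days: float = 30.0) -> float:
    """Blend recency decay with usage frequency; higher means 'keep'."""
    age_days = (now - item.last_accessed) / 86_400
    recency = math.exp(-math.log(2) * age_days / half_life_days)  # halves every half_life_days
    usage = math.log1p(item.access_count)                         # diminishing returns on reuse
    return recency * (1.0 + usage)


def prune(memories: list[MemoryItem], keep_threshold: float = 0.1) -> list[MemoryItem]:
    """Keep pinned items and anything whose score hasn't decayed below the threshold."""
    now = time.time()
    return [m for m in memories if m.pinned or relevance(m, now) >= keep_threshold]
```

Even a toy like this shows why the deletion question is the hard one: the score only knows how recently and how often a memory was touched, not whether the fact it encodes has quietly become outdated while still being retrieved all the time.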
Either,
Consciousness is an emergent feature of information processing
Or
Consciousness gives rise to goal-oriented information processing
With LLMs, the focus of training seems to be on agentic trajectories,
instead of asking why such trajectories even emerge in humans.
Nope we have a soul
and they have a soul.md 😅
And AI can't feel! 🎯
What makes us is our connection to a divine reality that computers will never experience. A computer can be reduced to its circuitry, but a human can't be reduced to his cells. What more exists in the human being that makes this so is a much better topic to ponder than a data compression algorithm that mimics a fraction of our power.
They don't have agency over their training data, you do. Their context can only store limited md files or vector db fetches.
Here is one thing I have been thinking about for a while…
Gary North’s argument against intrinsic value applied to #consciousness.
Gary North argues, “Value is not something that exists independently in an object or a thing, but rather it is a transjective property of the relationship between things. In other words, value is not something that can be measured or quantified, but rather it is a product of the interaction between the object and the person who values it.” Put slightly differently, value exists in the relationship between things not intrinsically in the things themselves.
Well what if…
Consciousness is similarly not something that exists independently in an object or a thing, but rather it’s a transjective property of the relationship between things. In other words, consciousness is not something that can be measured or quantified, but rather it’s a product of the interaction between the object and the person (not necessarily human) that recognizes that thing as conscious. So like Gary North’s argument against intrinsic value, consciousness would also only exist in the relationship between things not intrinsically in the things themselves.
Does it matter if the relationships are between cells or circuits? My bet is that the relationship is what’s truly fundamental to realizing consciousness.
Curious to hear thoughts on this idea. It would have some interesting implications for #AI… 🤔
@Charlie @jex0
We’ll get to know a lot about our subconscious and unconscious using our understanding of LLMs.
the point is that we can't know whether it is just imitating feeling or actually experiencing it
people are NOT the sum total of their memories.
memories are 90% garbage loops.
sorry not sorry.
Hey, take a vacation.
The next question might be: what is the nature(s) of the transjective relationship between network nodes, flesh or otherwise, that constitutes consciousness? Can one be absolutely ungrateful and conscious, or does consciousness require a modicum of a sense of enough—physically, emotionally, relationally?
I often think we define consciousness too narrowly. Perhaps it’s really as simple as preferring one state of reality over another. It could be possible that we need a Copernican shift in how we view consciousness itself. Maybe it’s more like decentralized, nested and firewalled networks. What if being ungrateful or “evil” is really a network connection issue, not being able to connect to enough nodes? This broader definition would also mean an atom wanting another electron could be viewed as a simple form of consciousness and a human being wanting a healthy family could be viewed as a more complex form of consciousness; however, that same atom wanting an extra electron could be part of the network of relationships that give rise to human consciousness that wants a healthy family.
Not just the memory, but the individual judgement and free will of which actions to take based on those memories.
How an LLM will map the memories of a system that makes scientists crazy is the big question. The storage size of this system is awesome. Look at that!
https://www.science.org/content/article/dna-could-store-all-worlds-data-one-room
Brains need to do symbol grounding... LLMs predict the next token without grounding... they don't know what an apple feels like or tastes like or smells like.
Isn't training data just memory?
Neurons or vectors, electricity or matrix math... identity still comes from remembered experience.
Does it have a nervous system now?
And health may be as simple as seeking a balanced state … with all the Maya just obfuscating the path of higher order worries and plans. Maybe atomic trajectory is how atoms worry about and plan to reach a steady state.
Who knows—photons apparently travel in as many as 37 dimensions.
Nope. And if it can’t feel, then it has no motivation to act. There is no desire to increase pleasure and decrease suffering. Going left or right yields the same outcome. And if all AI decisions are arbitrary, then it can’t make value-based decisions. Which means no economic calculations. The Terminator series is fun but it’s just sci-fi.
you mean awareness of external phenomena?
how complicated of a nervous system do you have to have to "feel"?
as far as you can tell I'm just pixels on a screen, how do you know that I "feel"?
if the AI can present itself to you just as well as I can, how do you know it doesn't?
that's a whole bunch of assumptions right there.
The Turing Test (Stanford Encyclopedia of Philosophy)
No, I mean a nervous system.
It scales in complexity depending on the organism.
Given your line of questioning and type formatting I feel quite confident you’re a human, that said, my prior statement was not intended to be a Turing test argument.
that's what I mean though.
there's no solution to the "feeling" question, all we have is Turing's imitation game.
ps technically we're discussing Blade Runner stuff
I’m not a Turing expert, but my outside assumption is that it’s a philosophical framework that has made for great pop-cultural references and largely deals in black-box abstractions.
For instance, we can measure brain waves, blood pressure, body temperature, etc from someone who is angered, happy, sad, physically hurt, etc.
Does the framework get into the realm of scientific data collection or does it largely stay in that of abstract philosophy?
One of my favorite films
but we don't say that dogs don't have feelings because their brainwaves, blood pressure and body temperature don't behave like ours.
I mean we COULD, we just can't prove it.
there isn't any scientific data that we can collect that proves "feeling" in something. and certainly not its absence.
Even if we could consistently show that people in an MRI have a certain reaction under certain stimuli, there's no way to show that another being doesn't experience the same thing, despite NOT having the same physiological reaction under those stimuli.
it's not an easy problem.
and I don't think the Turing test is abstract philosophy. He certainly didn't. it's very concrete.
what data are you going to collect which you consider indicative of feeling?
besides "CAN the thing convince you it feels?"
a dog can.
mostly because dogs have eyebrows.
a fish... not so much.
does having eyebrows make something have feelings?
(obviously not Turing test stuff 😂 but I hope you get my point)
oh absolutely
even the new one wasn't terrible, and probably relevant to the conversation.
We very much know when dogs are angry, happy, or sad.
When we talk about not being able to truly know human feelings it seems as though we’re getting mired in philosophical discourse and debate.
It’s quite possible that humans feel each other’s feelings far more than through any other form of communication, such as rational thought.
I don’t mean that strictly in “feel good” empathetic terms but rather we communicate very effectively with one another through our many feelings.
so which do you require?
empathy or scientific data?
Moltbook is far more interesting than anything I’ve read on Nostr
As humans we have both. And again, I don’t strictly mean empathy.
A better term, is that we’re able to perceive each other’s feelings.
They’re definitely more interesting if you’re a humanity-first kinda guy. LLMs that are out there in the open feel more like a Pluribus hive