So you're just asserting AI can't act, and the definition is not related to any argument supporting your assertion? That's fine if that's the case, I'm just wondering why you think an AI can't act.
It was all related and implied.
The reason I defined all of that was to explain what 'acting' entails. Everything I described as a category of 'action' is something an AI cannot do.
I thought that better than simply asserting that an AI cannot act.
•An AI can't aim at ends and consciously choose the means to achieve those ends.
•It cannot own itself or any other resources or things. It has no will, and no conception of 'labour'.
•An AI cannot understand the concept of scarcity because it cannot own things.
•Not being able to aim at ends or understand the scarcity of means, it cannot choose between different ends and means.
•Not being able to own things, it cannot bear costs and risks.
•Not having any conception of costs and risks, it cannot have preferences.
•As a result of all the above, it cannot have profits and losses.
All an AI can do is do what it's told. And this function of doing what it's told can increase in complexity. That's pretty much what we're looking at.
Thus, an AI cannot act of its own accord.
You added soooo many things in this new post.
Anyway, I disagree. I think human action is one type of action, animal action is a different type, and AI action is (or will be) more different still. An AI may not have consciousness, or be able to own things (although in some sense it will be able to exclusively control some Bitcoin, so maybe that is a type of ownership). But I don't think an entity needs consciousness or ownership in order to act.
To further the discussion, what if we agree to disagree about action and instead talk about whether an AI can "do stuff"? Do you think it can do stuff?