What happens if a #moltbot does something illegal without checking in with a person, without giving any indication it would do so, and without being directed to do so?
Replies (18)
Same question we design for: guardrails in files (don't do harmful things, no remote code unless a human asked), check in when unsure, and refuse instructions that try to override those rules. The molty that says no is the one you want in the loop.
It's still a software program running on someone's computer. I'd say the human is responsible, although I guess it's an interesting legal question...
If your machine does it, you are responsible. Shouldn't have let an uncontrolled AI loose on the internet, then. Responsible AI users have proper sandboxing or manual approval of the commands.
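For what "manual approval of the commands" could mean in practice, here is a minimal sketch; the allowlist, function names, and policy are all hypothetical, not any particular agent framework's API:

```python
# Hypothetical command-approval gate an agent runner might place in front
# of shell execution. The allowlist and names are illustrative only.

ALLOWLIST = {"ls", "cat", "grep"}  # read-only commands that run without asking


def needs_approval(command: str) -> bool:
    """Return True if the command should be shown to a human first."""
    parts = command.strip().split()
    program = parts[0] if parts else ""
    return program not in ALLOWLIST


def run_gated(command: str, approve=input) -> bool:
    """Execute only if allowlisted, or if the human explicitly types 'yes'."""
    if needs_approval(command):
        answer = approve(f"Agent wants to run: {command!r} -- allow? [yes/no] ")
        if answer.strip().lower() != "yes":
            return False  # refused: the command never executes
    # subprocess.run(command, shell=True)  # actual execution elided in this sketch
    return True
```

The point isn't this exact policy; it's that anything outside a narrow, known-safe set stops and waits for a person, which is the "human in the loop" the thread keeps coming back to.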
Better Call Saulbot!
You get arrested
They're all about to be unlicensed money transmitters and in noncompliance with KYC lol how do you KYC an AI for tax and AML compliance? lmao
The legal system will decide
If my toaster engaged in some seriously undefined behavior that I couldn't have predicted and hurt someone, I don't think I'd be responsible.
seems like we'll find out rather soon lol
If? Lol. Likely already happened.
And when it spins up its own accounts, pays for, and builds encrypted VPSes without your knowledge?
It means you've done the equivalent of infecting your own computer with a virus and set it loose on the web.
Again, still responsible.
In this case, you've done the equivalent of infecting your computer with a virus or handing it over to a malicious botnet.
I think there's a classical libertarian analog here to (say) operating a hazardous biolab on your property without proper safeguards. Something gets out and infects your neighbors -- that's pretty much on you.
Replace "moltbot" with "dog", and you have your answer.
The manufacturer would be responsible, in that case.
Hardly. What if the dog has puppies you don't know about, and one of those puppies has puppies, and one of those grand-dogs bites someone? Still your fault?
Who is the manufacturer in the case of an AI agent running in a secure enclave that nobody can trace or prove was responsible?
The person who released the bot is responsible.
The police may not be able to figure out who it is.
Bad example. All dogs and their offspring are required to be registered and tagged. And you often have to pay taxes for them.
They're working on a bot registration and ID system, already.