Is anyone working on, or aware of, “ephemeral runtimes” for AI agents to execute code as part of their search/query? I made up the name, but I wouldn’t be surprised if the concept already exists
This struck me today when I was trying to get Perplexity to read my Nostr notes and analyze them. I asked it to create a lightweight client to read my notes from the Damus relay and analyze them, and it couldn’t do it: it just gave me the code and instructions on how to execute it. When I asked it to execute the code itself, it refused
View quoted note →
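For reference, the kind of “lightweight client” I meant is tiny. A rough sketch in Python (assuming the `websockets` package is installed; the pubkey is a placeholder) would be something like:

```python
# Minimal sketch of a Nostr client that pulls text notes (kind 1)
# from the Damus relay. Assumes `pip install websockets`; PUBKEY_HEX
# is a placeholder for the author's hex-encoded public key.
import asyncio
import json
import websockets

RELAY = "wss://relay.damus.io"
PUBKEY_HEX = "replace-with-your-hex-pubkey"  # placeholder

async def fetch_notes(limit: int = 20) -> list[dict]:
    notes = []
    async with websockets.connect(RELAY) as ws:
        # Subscribe to the author's kind-1 (text note) events.
        req = ["REQ", "my-sub", {"kinds": [1], "authors": [PUBKEY_HEX], "limit": limit}]
        await ws.send(json.dumps(req))
        while True:
            msg = json.loads(await ws.recv())
            if msg[0] == "EVENT":
                notes.append(msg[2])      # the event object itself
            elif msg[0] == "EOSE":        # end of stored events
                break
    return notes

if __name__ == "__main__":
    for event in asyncio.run(fetch_notes()):
        print(event["created_at"], event["content"][:80])
```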
This would be a great band name!
Surely that would be dangerous. Small, ephemeral code firing like neurons – where would that end?
Yeah, LLMs can execute code via tool/function calls. That’s how Dave works:
View quoted note →
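To sketch how that works (an illustration only, not Dave’s actual tool set): the model never runs anything itself. It emits a structured tool call, and the host application looks up the function, runs it, and feeds the result back as a tool message. Something like:

```python
# Sketch of the host-side dispatch behind LLM tool calls: the model emits
# a structured call; this code looks up the function and runs it.
# `word_count` is a made-up example tool.
import json

def word_count(text: str) -> int:
    """Trivial example tool the model can request."""
    return len(text.split())

TOOLS = {"word_count": word_count}

def handle_tool_call(tool_call: dict) -> str:
    """tool_call mirrors the OpenAI shape: {"name": ..., "arguments": "<json string>"}."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return json.dumps(fn(**args))   # result is sent back to the model as a "tool" message

# What the model might return in its response:
call = {"name": "word_count", "arguments": json.dumps({"text": "ephemeral runtimes for agents"})}
print(handle_tool_call(call))   # -> 4
```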
Is this a generalizable framework that can be extended to all LLMs? My idea behind an ephemeral runtime protocol is to create a universal language for LLMs to do this. If you’ve made one already, then great!
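Roughly what I have in mind, as a sketch only (every name here is made up): each call spins up a fresh working directory and interpreter, runs the model’s code with a timeout, returns the output, and throws the whole environment away.

```python
# Rough sketch of an "ephemeral runtime" tool: each call gets a fresh
# temp directory and interpreter, runs with a timeout, and is discarded.
# All names here are illustrative, not an existing protocol.
import subprocess
import sys
import tempfile

def ephemeral_run(code: str, timeout_s: int = 15) -> str:
    """Execute model-generated code in a throwaway environment."""
    with tempfile.TemporaryDirectory() as workdir:       # deleted on exit
        try:
            proc = subprocess.run(
                [sys.executable, "-I", "-c", code],       # -I: isolated interpreter mode
                cwd=workdir,
                capture_output=True,
                text=True,
                timeout=timeout_s,
            )
            return proc.stdout + proc.stderr
        except subprocess.TimeoutExpired:
            return f"error: execution exceeded {timeout_s}s"

print(ephemeral_run("print(sum(range(10)))"))   # -> 45
```

In practice you’d want a real sandbox (container, VM, or WASM) rather than a bare subprocess, but that’s the shape of it.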
All LLMs with tool-call support, yeah. Here’s a demo with a local instance of Qwen:
View quoted note →
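If anyone wants to try something similar: many local servers (llama.cpp, Ollama, vLLM) expose an OpenAI-compatible endpoint, so the same client code works by pointing it at localhost. A sketch, where the URL, port, and model tag are assumptions to adjust for your own setup:

```python
# Sketch: talking to a locally served Qwen model through an
# OpenAI-compatible endpoint. base_url, port, and model tag are
# assumptions -- match them to whatever your local server exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",   # e.g. an Ollama endpoint (assumed)
    api_key="unused-locally",
)

resp = client.chat.completions.create(
    model="qwen2.5",                         # assumed local Qwen tag
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```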
Where can I find out more about tool calls?
Most AI backends try to support the OpenAI API, which includes tool calls and tool responses as part of its chat completions API:
https://platform.openai.com/docs/guides/function-calling?api-mode=chat
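A minimal sketch of the round trip those docs describe, with `run_python` as a hypothetical tool and the model name as an arbitrary choice:

```python
# Sketch of the function-calling round trip against the chat completions API:
# advertise a tool, let the model request it, run it, send the result back
# as a "tool" message, then read the final answer.
import json
import subprocess
import sys
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute a Python snippet and return its stdout",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

messages = [{"role": "user", "content": "What is 17 factorial? Compute it, don't guess."}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]            # assumes the model chose the tool

# The host (not the model) actually runs the requested code.
args = json.loads(call.function.arguments)
result = subprocess.run([sys.executable, "-c", args["code"]],
                        capture_output=True, text=True, timeout=30).stdout

messages.append(first.choices[0].message)                # assistant turn containing the tool call
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```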