GM.
Confessions of a vibe-augmented coder:
The value of others' open source code went down. My own version is only a prompt away. I was working on something the other day, and my AI agent found a library that did exactly what I needed. The agent checked it out, said the idea was clever, but concluded that wiring in and auditing a third-party library was more work than just doing it from scratch.
A friend was making a knowledge retrieval bot. I have a RAG pipeline. I explained to him why keyword search is inferior to semantic vector lookup. He understood. Coded his own RAG. Didn't touch my code.
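For anyone who hasn't seen the semantic-lookup point in action, here is a toy sketch. The 3-d vectors are made up purely to show the mechanics; real embeddings come from a model and have hundreds of dimensions.

```python
import math

# Toy illustration of keyword search vs semantic vector lookup.
# The vectors below are invented stand-ins for model embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = {
    "car maintenance guide":  [0.90, 0.10, 0.00],
    "automobile repair tips": [0.85, 0.20, 0.10],
    "banana bread recipe":    [0.00, 0.10, 0.95],
}

# Query: "fixing my vehicle". A keyword match on "vehicle" hits none of
# the titles; vector lookup still ranks the car docs first because their
# embeddings point in a similar direction to the query's.
query = [0.88, 0.15, 0.05]

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
# both car documents rank above the recipe
```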
It's amazing, but it also feels very strange. The only catch is when "one prompt away" turns into a half-day vibing session. We could have saved time and tokens. But mainly time.
It also amplifies the feeling that someone else's code is a bit dirty, if you know what I mean. They use a language we don't like. Oh, they don't use poetry or uv for dependency management? They want me to run Docker? OMG, that's dirty, I'll just vibe it myself...
What we forget is that open source projects are (sometimes) maintained and updated. We lose that. If someone has an idea to make something better, we don't get that improvement with one git pull.
Strange. But also ... My code is exactly for me. Just right.
I think a fun thing to do would be something like lightning talks, but where people show their pet projects and explain why they are made that way and why they are great. Even if we don't use the code itself, we might at least reuse the ideas.
Replies (15)
Vibe coding has us all boxing above our class. All that open source ultimately trained the machine, and you hand over the reins to its masters. With custom-built libs you give up all the auditing and careful consideration that went into the originals.
The problem is you can't tell anymore. Maybe there was no audit or careful consideration, just someone dumping vibes into their GitHub.
And it's very probable they were worse coders than me. Which also shows in vibe-coded projects, because it makes a huge difference what you ask the models to do...
Vibin’ GM!
You still trust the LLM more than the other maintainer. For each individual library that's probably a sane approach, but collectively we are so screwed if the machine's master turns against us and builds backdoors into our products.
That's game-theoretically improbable.
I usually audit the code with a different model, and they're pretty good at it. They want to keep selling tokens, and planting backdoors at scale would kill their business over some unimportant shit :)
I would actually dare them to do it; they would win the business equivalent of a Darwin Award.
Good argument for why this might be a good idea.
If you trust a dependency, you trust all of these: human maintainers, their LLMs, supply chain (dependencies), distribution channel.
Any of them can be compromised.
With vibing, you only trust your LLM, which you can choose and swap out when you get suspicious.
By Andrej Karpathy:
Software horror: litellm PyPI supply chain attack.
Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.
LiteLLM itself has 97 million downloads per month which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm.
Afaict the poisoned version was up for less than ~1 hour. The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack it could have gone undetected for many days or weeks.
Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.
Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Source:
https://xcancel.com/karpathy/status/2036487306585268612?s=20
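Not from the thread, but the standard mitigation for exactly this attack is hash pinning: `pip-compile --generate-hashes` (from pip-tools) records a sha256 per artifact, and `pip install --require-hashes -r requirements.txt` refuses anything that doesn't match. A minimal sketch of the mechanics, with stand-in bytes instead of a real sdist:

```python
import hashlib

# Sketch of the hash-pinning idea behind `pip install --require-hashes`:
# record the artifact's digest at pin time, refuse anything else later.

def verify(artifact: bytes, pinned_sha256: str) -> bool:
    return hashlib.sha256(artifact).hexdigest() == pinned_sha256

published = b"package sdist bytes as originally published"  # stand-in bytes
pinned = hashlib.sha256(published).hexdigest()              # recorded at pin time

tampered = published + b" + credential exfiltration payload"

ok_clean = verify(published, pinned)     # the original artifact passes
ok_tampered = verify(tampered, pinned)   # the poisoned one would be refused
```

If the hashes were pinned before the poisoned release, a version-range dependency like `litellm>=1.64.0` could not silently pull it in; the swapped artifact fails the digest check at install time.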
I agree but fear the consequences of LLM centralization. I'm struggling to find decent options that run on my 64GB 24 core desktop. When you depend on using the best, there's by definition just one.
All frontier models are great. It's not that you depend on one, it's about being able to switch. That's why I prefer opencode to Claude Code / Codex.
64GB VRAM is shit for inference. You can run hw attested end to end encrypted inference in cloud.
> You can run hw attested end to end encrypted inference in cloud.
That's ... not really an option for sensitive data. It's very naive to trust the promises of those providers. Yeah, you can maybe get a secure channel to their TEE, but TEEs in some far-away data center can be compromised without you ever having a chance to learn about it. TEE providers are notoriously secretive about their vulnerabilities and rely a lot on security by obscurity. And reading out that RAM has been done before.
Very doubtful they would do that just to read your prompts. Again, game theory. You'd do that to a crypto bridge, to steal private keys.
So ... how about state actors? Don't you think they would put up honeypots to read your prompts? Not to steal crypto but to gather intelligence. Not to rub it in your face, but to enable some parallel construction and a few lucky coincidences to stop you from doing whatever they don't like.
Again. If state actors knew how to break hardware attestation, they'd have much more important targets than you vibe coding cryptoanarchy. Every use of such an exploit increases the risk of discovery.
They can zero-day many an OS, but they'd do that for a Taliban tribal leader, not for you.
I'm not talking about attacking existing providers or me.
With trusted LLMs, the honeypot would be Maple and Venice etc. How can I know if these can be trusted? Yeah, they claim they run the LLM in a TEE, and maybe they even do it in a provable way, but a state actor would have no problem doing the same while still reading all the prompts of all their paying users.
People are willing to pay a premium for extra privacy, so there are companies offering it - but the honeypots have a separate source of funding: the CIA. They can run a legit business at a legit price and look legit through and through, yet all the prompts end up in the CIA's data centers. My vibe coding might just be noise on their end. Bycatch, if you will.
If they break the attestation, they would do it for something of value. Inference is much lower value than, for example, secure communications providers using the same tech. And it's extremely difficult to keep this secret.
Breaking attestation is extremely valuable; by running an AI provider as a front and exploiting their own infrastructure, they would gain access to what is mostly porn roleplay and idiotic chat. Wasting such an exploit on that would be stupid.
I'm not saying they are not stupid, but it's very unlikely they would waste an exploit on this.
