No one on the Core team, and no one supporting this change, seems to understand game theory or be able to put themselves in someone else's shoes. Why would anyone give up the witness discount in the first place? Only people who want to embed data too big for that field. And who will run a full archive node with unlimited op_return sizes? Only well-funded commercial node runners, and not even all of them. So why pay to use bitcoin through the one field nobody stores? I wouldn't pay full weight just to have everyone delete it immediately; that's stupid.

I get why shitcoiners ask for this. The VCs just need it to work long enough for a new round of rug pulls; they don't give a flying fuck about the long-term health of the network. There should be a consensus limit on which blocks validate. Don't get distracted by their misdirection into arguing about mempool accuracy.
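
For the witness-discount point, here's a napkin sketch of what n bytes of arbitrary data cost as an op_return output versus stuffed into a witness. The per-output overhead bytes are approximations and the function names are mine, but the 4-to-1 weight rule is the actual consensus rule:

```python
# Back-of-the-envelope weight comparison for embedding n bytes of data.
# Consensus rule: non-witness bytes cost 4 weight units (WU) each,
# witness bytes cost 1 WU each; 4 WU = 1 virtual byte (vB).

def op_return_vbytes(n: int) -> float:
    """Approx. extra vbytes for an op_return output carrying n data bytes."""
    overhead = 8 + 2 + 1  # amount (8) + length/pushdata (~2) + OP_RETURN (1), approximate
    return (overhead + n) * 4 / 4.0  # non-witness: 4 WU per byte

def witness_vbytes(n: int) -> float:
    """Approx. extra vbytes for n bytes tucked into a witness instead."""
    return n * 1 / 4.0  # witness: 1 WU per byte

for n in (80, 320, 4000):
    print(f"{n:>5} data bytes: op_return ~{op_return_vbytes(n):7.1f} vB, "
          f"witness ~{witness_vbytes(n):6.1f} vB")
```

The witness path is roughly 4x cheaper at any size, which is the point: anyone stuffing data at scale already had a cheaper option than op_return.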

Replies (4)

There is currently no op_return size limit above which your node will reject a block as invalid. The limit is only enforced by relay policy: oversized op_returns don't get passed around the mempool. Anyone can go straight to a sympathetic miner and drop op_returns as big as the block weight limit allows into every block, so the "limit" gives you no protection at all. Want proof? Go look at Loop's timeline: one of the first things they did was push an op_return 4x the current "limit", then brag about it.
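
To make the policy-vs-consensus distinction concrete, here's a minimal sketch of building an oversized op_return scriptPubKey. The helper is hypothetical, and the exact datacarriersize default has shifted across Core releases (historically ~83 scriptPubKey bytes, i.e. ~80 bytes of payload):

```python
# Sketch of why the op_return "limit" is relay policy, not consensus.
# A scriptPubKey like this is consensus-valid at any size that fits the
# block weight limit; default relay policy (datacarriersize) just keeps
# it out of most mempools.

OP_RETURN = 0x6A
OP_PUSHDATA2 = 0x4D

def op_return_script(data: bytes) -> bytes:
    """Serialize an op_return scriptPubKey carrying an arbitrary payload."""
    assert len(data) <= 0xFFFF  # OP_PUSHDATA2 encodes lengths up to 65535
    return bytes([OP_RETURN, OP_PUSHDATA2]) + len(data).to_bytes(2, "little") + data

payload = b"\xaa" * 320  # 4x the old ~80-byte payload default
script = op_return_script(payload)
print(f"{len(script)}-byte scriptPubKey: non-standard, but a miner can mine it")
```

Nothing in consensus rejects that script; only default relay policy keeps it out of your mempool, which is exactly why a miner-submitted version sails through.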
We're growing at like a couple GB per week now, right? I haven't looked too closely, but it's not getting any smaller lol. And you can't really (or shouldn't) run a pruned node when running a lightning node: you still need to set up a full node first, sync LN, and only then prune. But if you're going to need the storage space at least once, why not keep it forever?
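
That growth figure checks out on a napkin, assuming ~144 blocks a day and average block sizes in the usual range (both numbers here are illustrative, not measurements):

```python
# Rough chain-growth math behind the "couple GB per week" figure,
# assuming one block per ~10 minutes on average.

BLOCKS_PER_DAY = 24 * 6  # ~144 blocks/day

def weekly_growth_gb(avg_block_mb: float) -> float:
    """Approximate weekly blockchain growth for a given average block size."""
    return avg_block_mb * BLOCKS_PER_DAY * 7 / 1000.0

print(f"typical ~1.5 MB blocks: ~{weekly_growth_gb(1.5):.1f} GB/week")
print(f"maxed ~4 MB blocks:     ~{weekly_growth_gb(4.0):.1f} GB/week")
```

For what it's worth, Bitcoin Core's prune= option takes a target in MiB with a floor of 550, so the choice really is between keeping everything forever and keeping almost nothing.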