Diminishing returns is one of the most misunderstood concepts in engineering. Most devs know they can do 80% of the work very fast, but that the last 20% will consume their soul and ten years of their life. So they conclude it's no longer worth investing in the project: "I could be more productive in this new side project." Which is true, but it also means you end up with two half-baked projects that nobody can use, because neither is finished. You fail at both at the same time.

In the market, diminishing returns are diminishing returns for everybody. To have the best product, you focus on the diminishing-returns work precisely because your competitors will bow out: "it's too much money for too little gain." Any rational player will decide against it. But if you truly want to win, you must keep pushing your product's edge up. You must work through diminishing returns profitably. If you don't, you are already gone, and your competitors are going to love it.

You don't learn the law of diminishing returns so you can avoid it when it happens. You learn it so you can push through it in a way that makes sense.

Replies (12)

UI design/code. We need to break complex stuff into easy things that can be coded and tested quickly. I don't want a new design. I want small tweaks over a long period of time: new screens, new flows, new icons, new typography, updated layouts, etc.
This is what we've failed to recognize with AI, in my opinion. Researchers think there's going to be an exponential, never-ending explosion the moment AI can train its own models better using the scaling principle (a big model makes a smaller model that's better, then compute scales it up to its own size, yielding a better model than itself). What they don't see is that this loop is capped, *especially* because it's all still dependent upon human judgment of what is "better," and because it's all derived from human content.

Sure, the first few iterations will produce generally much better models. But the first pass might yield a 100% improvement, the second maybe 40%, the third maybe 10%, and so on. It's just not going to scale forever, because there's no way it compounds; it seems blatantly obvious it has diminishing returns. You can't start with human-quality material and human judgment and end up with something 10,000x better than any human ever can or could be. That doesn't make sense on a dozen different levels.

AI is a probability machine: every time you make things more aligned with what is more probable, you also introduce a little noise, until after enough iterations it can't even tell what is "better" and what isn't anymore. It's like having an AI check its own work. Sure, a lot of times it catches mistakes, but sometimes it still just says shit that's dumb as fuck, because that's what AI does sometimes.
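To put numbers on that decay, here's a minimal sketch. It assumes, purely for illustration, that each self-training pass multiplies quality by a gain that shrinks geometrically, starting at 100% and decaying by a factor of 0.4 per pass; the starting gain and decay rate are made-up parameters, not measurements of any real training run.

```python
# Toy model of recursive self-improvement with geometrically decaying
# gains. The starting gain (100%) and decay factor (0.4) are
# illustrative assumptions, not data from any real system.

def compounded_quality(start_gain: float, decay: float, steps: int) -> float:
    """Multiply a quality score of 1.0 by (1 + gain) each iteration,
    shrinking the gain by `decay` after every pass."""
    quality, gain = 1.0, start_gain
    for _ in range(steps):
        quality *= 1.0 + gain
        gain *= decay
    return quality

# Gains of 100%, 40%, 16%, ... compound to a finite ceiling (~3.6x)
# rather than an unbounded exponential explosion.
for steps in (1, 3, 10, 100):
    print(f"{steps:>3} passes -> {compounded_quality(1.0, 0.4, steps):.3f}x")
```

Run it and quality stalls around 3.6x after a handful of passes. Whether the gains actually decay geometrically is an open question; the sketch only shows that *if* they do, the compounding has a hard ceiling, which is the reply's point.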
BTC21 2 months ago
Perfection is not the goal, but completion is. This is what separates the winners from the 'almosts'.
This applies only if you are craving success .. the alternative is NOT "not craving success" .. the alternative is working for FUN ☺️ .. when you do that, it automatically becomes the Law of Appreciating Returns :-)