Right, which is why the paper that came out at the same time describes all of their training methods. They basically handed the whole world a playbook for "here's what you can do if you want to be as advanced as OpenAI's o1-pro models." There is no moat anymore for proprietary AI models or services.
Replies (1)
But it all boils down to the hardware. You need H100s or similar high-performance GPUs to train and deploy models at that scale, which is still a significant barrier for most companies regardless of their methodologies or frameworks. For the most part it's not a lack of knowledge, it's the hardware.