This is a review of the State of AI Report, which is published once a year. Since it is the season for year-in-review posts, it fits perfectly. I want to discuss some of the cloud-related developments covered in this report.
First, we see the emergence of new cloud providers that focus on GPU-based compute. These are names like CoreWeave, Lambda and Crusoe (Nvidia has invested in all of them). They offer services tailored to GPU-heavy workloads, but not the one-stop shop that AWS and the other hyperscalers provide. I am really curious what will happen here, since most other newcomers to this field have not gained much traction.
Second, we see increased use of over-the-top data services like Databricks, StreamSets, Domino and others. These services let you manage your data in a cloud-agnostic way. They range from data warehouse/data lake offerings to integrated platforms that also cover all your analysis. So you can manage your data in one place, even though your company has a multi-cloud strategy. This also removes the dependency on a single cloud provider.
Third, there is essentially only one company selling GPUs for machine learning: Nvidia. It serves a tightly integrated ecosystem of frameworks, so most workloads run on its GPUs. Right now they are hard to get, since everyone wants them, and prices are up as well. AMD and Intel are starting to compete in this race, but Nvidia saw this coming very early and is now making a lot of profit from it.
My impression is also that on-prem AI computation is declining. This is because, first, it is hard to build up the needed compute capacity, and second, you need huge amounts of compute to train all these models. Cloud solutions also make it much easier for developers to create, manage and ship their models. The buzzword here is MLOps, which combines DevOps principles with the needs of the ML workflow. I think next year will be really interesting here as well. Software engineering in particular is being tackled heavily right now with GitHub Copilot and Amazon Q.
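To make the MLOps idea a bit more concrete: one core practice is versioning every trained model artifact together with the parameters that produced it, so that a model can be traced and reproduced later. The sketch below is a toy illustration of that principle using only the Python standard library; it is not the API of any particular MLOps platform, and all names in it (`package_model`, `registry`) are made up for this example.

```python
import hashlib
import pickle


def package_model(model, params, registry):
    """Toy sketch of an MLOps registry step: serialize a trained model,
    derive a content-addressed version from its bytes, and record the
    training parameters alongside it. Real platforms automate this."""
    blob = pickle.dumps(model)
    version = hashlib.sha256(blob).hexdigest()[:12]  # short content hash as version id
    registry[version] = {"params": params, "size_bytes": len(blob)}
    return version, blob


# A trivial "model": just a dict of learned coefficients.
registry = {}
version, blob = package_model({"w": 0.5, "b": -1.2}, {"lr": 0.01}, registry)
print(version, registry[version])
```

The point is only the workflow shape: the artifact, its version, and its metadata travel together, which is what makes shipping and rolling back models manageable in the cloud.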