Making AI more accessible for every business
Alphabet CEO Sundar Pichai has compared the potential impact of artificial intelligence (AI) to the impact of electricity—so it may be no surprise that Google Cloud expects to see increased AI and machine learning (ML) momentum across the spectrum of users and use cases.
Some of this momentum is foundational, such as the hundreds of academic citations that Google AI researchers earn each year, or products like Google Cloud Vertex AI accelerating ML development and experimentation by 5x, with 80% fewer lines of code required. Some of it is more concrete, like mortgage servicer Mr. Cooper using Google Cloud Document AI to process documents 75% faster with 40% cost savings; Ford leveraging Google Cloud AI services for predictive maintenance and other manufacturing modernizations; and customers across a wide range of industries deploying ML platforms atop Google Cloud.
Together, these proof points reflect Google's belief that AI is for everyone, and that it should be easy to harness in workflows of all kinds and for people of all levels of technical expertise. Google sees its customers' accomplishments as validation of this philosophy and a sign that it is drawing the right lessons from conversations with business leaders. Likewise, it sees validation in recognition from analysts, with Google recently named a Leader by
- Gartner® in the 2022 Magic Quadrant™ for Cloud AI Developer Services report
- Forrester in the Forrester Wave™: AI Infrastructure, Q4 2021 report, the Forrester Wave™: Document-Oriented Text Analytics Platforms, Q2 2022 report, and The Forrester Wave™: People-Oriented Text Analytics Platforms, Q2 2022 report
In June, Google talked about four pillars that guide its approach to creating products for MLOps and to accelerating the development of ML models and their deployment into production. In this article, we'll look more broadly at Google's AI and ML philosophy, and what it means to create "AI for everyone."
AI should be for everyone
One of the pillars Google discussed in June was “meeting users where they are,” and this idea extends far beyond products for data scientists. Technical expertise should not be a barrier to implementing AI—otherwise, use cases where AI can help will languish without modernization, and enterprises without well-developed AI practices will risk falling behind their competitors.
To this end, Google focuses on creating AI and ML services for all kinds of users, including:
- Document AI, Contact Center AI, and other solutions that inject AI and ML into business workflows without imposing heavy technical requirements or retraining on users;
- Pre-trained APIs, ranging from Speech to Fleet Optimization, that let developers leverage pre-trained ML models and free them from having to develop core AI technologies from scratch;
- BigQuery ML to unite data analysis tasks with ML;
- AutoML for abstracted and low-code ML production without requiring ML expertise;
- Vertex AI to speed up ML experimentation and deployment, with every tool needed to build, deploy, and manage ML projects across their lifecycle;
- AI Infrastructure options for training deep learning and machine learning models cost-effectively, including Deep Learning VMs optimized for data science and machine learning tasks, and AI accelerators for every use case, from low-cost inference to high-performance training.
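To make the BigQuery ML item above concrete, the sketch below composes BigQuery ML SQL in Python: one statement to train a model directly where the data lives, and one to score new rows with it. This is a minimal illustration, not an official sample—the dataset, table, and column names (`mydataset.visits`, `purchased`, and so on) are hypothetical, and actually running the statements would require a BigQuery project and a client library, which the sketch deliberately omits by only building the SQL text.

```python
# Sketch: composing BigQuery ML statements as plain SQL text.
# All dataset/table/column names are hypothetical placeholders;
# submitting these statements requires a real BigQuery project.

def create_model_sql(dataset: str, model: str, source_table: str,
                     label_col: str, feature_cols: list[str]) -> str:
    """Build a BigQuery ML CREATE MODEL statement for a logistic regression."""
    features = ", ".join(feature_cols)
    return (
        f"CREATE OR REPLACE MODEL `{dataset}.{model}` "
        f"OPTIONS(model_type='logistic_reg', input_label_cols=['{label_col}']) AS "
        f"SELECT {features}, {label_col} FROM `{dataset}.{source_table}`"
    )

def predict_sql(dataset: str, model: str, input_table: str) -> str:
    """Build the matching ML.PREDICT query to score new rows."""
    return (
        f"SELECT * FROM ML.PREDICT(MODEL `{dataset}.{model}`, "
        f"(SELECT * FROM `{dataset}.{input_table}`))"
    )

# Train on historical visits, then score fresh ones—no data leaves BigQuery.
train = create_model_sql("mydataset", "purchase_model", "visits",
                         "purchased", ["pages_viewed", "session_seconds"])
score = predict_sql("mydataset", "purchase_model", "new_visits")
print(train)
print(score)
```

The point of the design is the one the bullet makes: an analyst who already writes SQL can train and apply a model with two statements, without exporting data or standing up a separate ML stack.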
It’s important to provide not only leading tools for advanced AI practitioners, but also leading AI services for users of all kinds. Some of this involves abstracting or automating parts of the ML workflow to match the needs of the job and the technical aptitude of the user. Some of it involves integrating AI and ML services with a broader range of enterprise products, whether that means smarter language models invisibly integrated into Google Docs or BigQuery making ML easily accessible to data analysts. Whatever the angle, AI is becoming a multi-faceted, pervasive technology for businesses and users the world over, so Google feels technology providers should reflect this by building platforms that help users harness the power of AI by meeting them wherever they are.
How Google is powering the next generation of AI
Creating products that help bring AI to everyone requires large research investments, including in areas where the path to productization may not be clear for years. Google feels that a foundation in research, combined with its focus on business needs and users, informs sustainable AI products that are in keeping with its AI Principles and encourage responsible use of AI.
Many of the recent updates to Google’s AI and ML platforms began as Google research projects. Just consider how DeepMind’s breakthrough AlphaFold project has led to the ability to run protein prediction models in Vertex AI. Or how research into neural architecture search helped create Vertex AI NAS, which lets data science teams train more accurate models with lower latency and power requirements.
Research is crucial, but it is only one way of validating an AI strategy. Products have to speak for themselves when they reach customers, and customers need to see their feedback reflected as products are iterated and updated. This reinforces the importance of seeing customer adoption and success across a range of industries, use cases, and user types. In this regard, Google feels very fortunate to work with so many great customers, and is very proud of the work it helps them accomplish.
Google has already mentioned Ford and Mr. Cooper, but those are just a small sampling. For example, Vodafone Commercial’s “AI Booster” platform uses the latest Google technology to enable cutting-edge AI use cases such as optimizing customer experiences, customer loyalty, and product recommendations. Google’s conversational AI technologies are used by companies ranging from Embodied, whose Moxie robot helps children overcome developmental challenges, to HubSpot, which connects meeting notes to CRM data. Across Google’s products and across industries around the world, customer stories grow by the day.
Google also sees validation in its partner network. As noted in the pillars discussed in June, partners like Nvidia help ensure customers have freedom of choice when building their AI stacks, and partners like Neo4j help customers extend these services into areas like graph structures. Partners support Google’s mission to bring AI to everyone, helping more customers use these services for new and expanded use cases.
Accelerating the momentum
Overall, to create products that reflect AI’s potential and likely future ubiquity, Google has to take all of the preceding factors—from research to customer and analyst conversations to working with partners—and turn them into products and product updates. Google has been very active over the last year, from the launch of Contact Center AI Platform in March, to the new Speech model released in May, to a range of announcements at the Google Cloud Applied ML Summit in June. Google has much more planned in the coming months, and is excited to work with customers not just to maintain the pace of AI momentum, but to accelerate it. To learn more about Google Cloud’s AI and ML services, visit this link or browse recent AI and ML articles on the Google Cloud Blog.