In 2023, Google saw how fast technology can move when it captures the imagination of an entire industry. The transformative potential of generative AI and large language models (LLMs) is plainly apparent, and the development of powerful new solutions continues to accelerate. At Google Workspace, we have been thrilled to see how millions of people are using AI tools like Duet AI as a powerful collaboration partner that can act as a coach, a source of inspiration, and a productivity booster.

But the pace of innovation and development can never be an excuse to forget user protections. On the contrary, it’s more important than ever to be deeply focused on being thoughtful, intentional, and principled in deploying generative AI responsibly. We take this as an absolute imperative.

There are many facets to this responsibility, but one area we have particularly emphasized is protecting every user’s and organization’s Workspace data. In August, we outlined how our core privacy principles protect users in the generative AI era, and in November we explained how Duet AI is designed to safeguard organizations’ data. Continuing these efforts, today we are clarifying how our existing API use policies ensure that third parties use Workspace data responsibly in the generative AI era.

We have long held that an open ecosystem makes Workspace the strongest solution for our users and customers, but that ecosystem can only thrive with guardrails. Our API use policies are critical to this end. They are designed to:

  1. Keep our users in control of how their Workspace data is used.
  2. Protect our products from misuse.
  3. Grow a healthy ecosystem for developers to build and innovate on the Workspace platform.

The clear policies and protections we have instituted in this vein have been vital to maintaining the highest levels of trust and participation from our users, keeping our ecosystem healthy and vibrant.

In this spirit, we want to be clear about how our existing API policies apply in the context of generative AI:

  1. Our “Limitation on User Data Transfer” policy prohibits the use of Workspace user data to train non-personalized AI and/or ML models. To be clear: transferring Workspace data to develop generalized machine-learning (ML) or artificial intelligence (AI) models is prohibited.
  2. Developers who access Workspace APIs will be required to commit, via their privacy policies, that they will not retain user data obtained through Workspace APIs to develop, improve, or train non-personalized AI and/or ML models.

While the policy details are nuanced, the benefits are straightforward:

  1. These API protections add another layer of data protection for users, preventing the unauthorized or irrevocable exposure of a user’s Workspace data.
  2. For developers, these new rules bolster user trust by bringing clarity to data protections for user data in the context of LLMs. We’ve seen over time that when users have higher trust in the ecosystem, they more actively engage in it. 

Taken together, these API policy clarifications will play an integral role in keeping our security and privacy protections up to date for our users and in maintaining a thriving developer ecosystem as generative AI technology continues to advance.

We look forward to continuing our close work with developers to protect our users.