Since announcing Generative AI support on Vertex AI less than six months ago, we’ve been thrilled and humbled to see innovative use cases from customers of all kinds — from enterprises like GE Appliances, whose consumer app SmartHQ offers users the ability to generate custom recipes based on the food in their kitchen, to startup unicorns like Typeface, which helps organizations leverage AI for compelling brand storytelling. We are seeing strong demand, with the number of Vertex AI customer accounts growing more than 15 times in the last quarter.
Today, at Google Next, we’re pleased to make a variety of announcements that expand Vertex AI’s capabilities and further empower our customers to easily experiment and build with foundation models, customize them with enterprise data, and smoothly integrate and deploy them into applications with built-in privacy, safety features, and responsible AI:
- New models in Model Garden, furthering our commitment to providing customers with choice through a diverse and open ecosystem. New additions include Llama 2 and Code Llama from Meta along with the Technology Innovation Institute’s Falcon LLM, and we’re pre-announcing Anthropic’s Claude 2. These announcements give Google Cloud a curated collection of first-party, open-source, and third-party models.
- Updates to several of our first-party foundation models, bringing the deep expertise of Google DeepMind to our customers. This includes upgrades to PaLM with higher quality outputs, a 32,000-token context window that makes analysis of much larger documents simple, and grounding capabilities for enterprise data. Codey, our model for code generation and chat, now offers better performance, and Imagen, our model for image generation, features improved image quality.
- A new digital watermarking functionality for Imagen, powered by Google DeepMind’s SynthID.
- New tools to help enterprises get more value out of our models. This includes Vertex AI Extensions, which enable models to retrieve real-time data and take real-world actions, and Vertex AI data connectors, which offer data ingestion and read-only access across various sources.
Over 100 large models in Model Garden, including Llama 2 and Claude 2
Many customers start their gen AI journey in Vertex AI’s Model Garden, accessing a diverse collection of curated large models available via APIs. Developers and data scientists can navigate Model Garden to select the right models for their use cases, based on capabilities, size, possibility for customization, and more—ensuring they have not only access to powerful models, but also the choice and flexibility required to tune and deploy models at scale.
Today, we are bringing our customers even more choices with two powerful new models — Meta’s Llama 2 and TII’s Falcon — and we’re pre-announcing Claude 2 from Anthropic. Model Garden’s variety lets enterprises match models to their needs, and when customers want full transparency into a model’s weights and artifacts, such as for compliance and auditing purposes, open-source options such as Llama 2 and Falcon are a great choice.
We know that an organization’s data is one of the most important factors for model performance, which is why we’ve made it easy to tune these models. In fact, we’re the only cloud provider that supports Llama 2 with both adapter tuning and Reinforcement Learning from Human Feedback (RLHF). This allows organizations to tune Llama 2 with their own enterprise data while maintaining full control and ownership of that data. Customers can tune these models in our newly launched Colab Enterprise, a fully managed data science notebook environment.
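As a rough sketch of what that workflow can look like, the snippet below deploys an open model from Model Garden to a Vertex AI endpoint with the Python SDK and sends it a prompt. The project ID, serving container URI, and request schema are placeholders; the exact values come from the model card in Model Garden.

```python
# Sketch: deploy an open model from Model Garden to a Vertex AI endpoint and
# query it. The container URI and request schema are placeholders taken from
# the model card; the project ID is hypothetical.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # hypothetical project

model = aiplatform.Model.upload(
    display_name="llama2-7b-chat",
    # Placeholder: copy the exact serving image URI from the Model Garden model card.
    serving_container_image_uri="us-docker.pkg.dev/<serving-image-from-model-card>",
)

endpoint = model.deploy(
    machine_type="g2-standard-12",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)

# The request body depends on the serving container; this shape is an assumption.
response = endpoint.predict(instances=[{"prompt": "Summarize our Q3 support tickets."}])
print(response.predictions)
```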
Model and tuning upgrades for PaLM 2, Codey, and Imagen
And while we continue to extend our ecosystem of models, we are also investing in our own first-party models and tooling. Today, we are announcing advancements in:
- PaLM 2: We’re announcing general availability of PaLM 2 in 38 languages and making it possible for customers to ground responses in their own enterprise data or private corpus. To support longer question-and-answer chats and to summarize and analyze large documents such as research papers, books, and legal briefs, the new version of PaLM 2 for text and chat also supports a 32,000-token context window, enough to include an 85-page document in a single prompt (see the example after this list).
- Codey: We’ve improved the quality of Codey by up to 25% in major supported languages for code generation and code chat.
- Imagen: We’ve improved Imagen’s visual appeal and recently added capabilities such as image editing, captioning, and visual question answering. We’re also announcing an experimental digital watermarking feature—more info on that below.
Beyond model updates, we are helping organizations customize models with their own data and instructions through adapter tuning, now generally available for PaLM 2 for text, and Reinforcement Learning from Human Feedback (RLHF), now in public preview. In addition, we’re introducing Style Tuning for Imagen, a new capability that helps customers align generated images to their brand guidelines with 10 images or fewer.
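The sketch below shows what adapter (supervised) tuning of PaLM 2 for text can look like through the SDK. The Cloud Storage URI is a placeholder, and each JSONL record follows the input_text/output_text schema the tuning pipeline expects; regions and parameter names may evolve, so treat this as an illustration rather than a reference.

```python
# Sketch: adapter (supervised) tuning of PaLM 2 for text. The GCS URI is a
# placeholder; each JSONL line is {"input_text": "...", "output_text": "..."}.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

base_model = TextGenerationModel.from_pretrained("text-bison@001")

base_model.tune_model(
    training_data="gs://my-bucket/support-pairs.jsonl",  # placeholder dataset
    train_steps=100,
    tuning_job_location="europe-west4",  # tuning pipeline region at time of writing
    tuned_model_location="us-central1",  # where the tuned adapter is deployed
)

# When the tuning pipeline finishes, the tuned adapter can be listed and loaded
# for prediction just like the base model.
```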
Customers are already seeing tremendous value:
- “We are thrilled to be partnering with Google Cloud on delivering AI-powered workflows throughout the software development lifecycle,” said David DeSanto, chief product officer at GitLab. “GitLab is leveraging PaLM 2 foundation models, including the Codey model family, to bring new AI-powered experiences to all users involved in creating secure software. GitLab Duo, our suite of AI-powered workflows, enables organizations to be more effective in delivering secure software.”
- “Imagen is beginning to power key capabilities within Omni, Omnicom’s open operating system, that will enable 17,000+ trained and certified users to create audience-driven customized images in minutes. Imagen has been instrumental in offering a scalable platform for image generation and customization. Integrating it into our platform allows us to expand the scope of audience-powered creative inspiration, at a scale that wasn’t previously possible,” said Art Schram, Annalect Chief Product Officer at Omnicom. “We’re starting to adopt the latest features like styles and fine tuning, and engineering data-driven prompts. We look forward to continuing to provide our users relevant visual inspiration in a responsible way.”
- “The Workiva platform incorporates generative AI technology to further boost productivity and efficiency, enabling insights that lead to better and faster data-driven decisions,” said Workiva Chief Technology Officer David Haila. “With the help of PaLM 2 on Vertex AI, our customers are unlocking the true potential of generative AI, providing a rich user experience and enabling customers to leverage new capabilities anywhere in their workflow. The model improvements ensure that we continue to provide reliable and assured, integrated reporting solutions to transform the landscape of business reporting and drive meaningful change on a global scale.”
- “Google Cloud’s out-of-the-box AI and API support has revolutionized our workflow. The integrated AI environment offered by Google Cloud is a key ingredient in our application architecture that blends foundation and proprietary ML models to solve the scalability challenges of real-time content personalization. Integrating Imagen through Vertex API has allowed our clients to expedite content creation, reach new heights in terms of personalization, and generate more granular insights which then lead to increased performance,” said Tommaso Vaccarella, co-founder at Connected Stories. “Beyond the innovation, the stringent data safety measures reassure us and our clients that sensitive information remains protected. Google Cloud provides the power and speed we need to bring cutting-edge enterprise-ready solutions to market.”
Verify Imagen-generated images with digital watermarking, powered by DeepMind SynthID
We are committed to working together across Google to accelerate responsible AI practices whenever possible. In partnership with Google DeepMind, we are launching digital watermarking on Vertex AI in an experimental phase to give our customers the ability to verify AI-generated images produced by Imagen, our text-to-image model.
For image generation, customers need tools to identify synthetically generated content. Existing identification methods, like metadata, can be stripped from an image or can interfere with its quality. The experimental availability of digital watermarking on Vertex AI makes Google Cloud the first hyperscale cloud provider to enable the creation of invisible, tamper-resistant watermarks in AI-generated images. This capability is powered by Google DeepMind SynthID, a state-of-the-art technology that embeds the digital watermark directly into the image pixels, making it invisible to the human eye and very difficult to tamper with without damaging the image.
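As an illustration, the snippet below generates images with Imagen through the Vertex AI SDK. Because digital watermarking is experimental, the add_watermark flag shown here is an assumption about how the option could be exposed; access and the exact interface may differ in your project.

```python
# Sketch: generating images with Imagen on Vertex AI. Digital watermarking is
# experimental, so the add_watermark flag is an assumption; the project ID and
# output path are placeholders.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

imagen = ImageGenerationModel.from_pretrained("imagegeneration@002")
images = imagen.generate_images(
    prompt="A studio photo of a ceramic mug on a wooden table, soft lighting",
    number_of_images=2,
    add_watermark=True,  # assumed flag: embed the invisible SynthID watermark
)
images[0].save("mug.png")
```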
Connect to real-world data and drive real-world actions with Extensions
While foundation models are powerful, they are frozen after training, meaning they are not updated as new information becomes available and thus may deliver stale results. Vertex AI Extensions is a set of fully managed developer tools for extensions, which connect models to APIs for real-time data and real-world actions.
With Extensions, developers can use pre-built extensions to popular enterprise APIs, or build their own extensions to private and public APIs. Developers can use extensions to build powerful gen AI applications like digital assistants, search engines, and automated workflows.
For example, a developer can use pre-built extensions for HR databases and Vertex AI Search to create a chatbot that helps workers complete HR tasks in natural language, such as looking up benefit deadlines or travel policies that may change over time. Another example is an app that analyzes code for vulnerabilities: developers can use extensions to ingest internal codebases and look up evolving security threats in real time. The use cases are virtually endless.
Vertex AI will offer pre-built extensions for Cloud services like BigQuery and AlloyDB, as well as database partners like DataStax, MongoDB, and Redis. Vertex AI will also let developers integrate with LangChain, authenticate with private and public APIs, and secure applications with Cloud’s robust enterprise security, privacy, and compliance controls.
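To make the idea concrete, here is a purely illustrative sketch of calling a pre-built extension from the SDK. Since Extensions is newly announced, the module path, extension name, and operation parameters below are assumptions rather than a finalized interface; consult the documentation once the feature is available in your project.

```python
# Purely illustrative: module path, extension name, and operation parameters
# are assumptions -- Vertex AI Extensions is newly announced and the client
# interface may differ when it reaches your project.
import vertexai
from vertexai.preview import extensions  # assumed module path

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

# Register a pre-built extension from the hub (name is an assumption).
code_interpreter = extensions.Extension.from_hub("code_interpreter")

# Ask the extension to generate and run code against fresh data.
response = code_interpreter.execute(
    operation_id="generate_and_execute",  # assumed operation name
    operation_params={"query": "Plot monthly ticket volume from tickets.csv"},
)
print(response)
```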
Meeting users where they are
These Vertex AI updates are just a few of many improvements across our AI portfolio, all designed to meet developers where they are, regardless of their level of machine learning expertise.
We’ve been thrilled by the customer momentum—with companies like GA Telesis and Vodafone leveraging gen AI to improve operational efficiency, organizations like Priceline harnessing Vertex AI for customer engagement innovations, and many other inspiring stories. We can’t wait to see more customers succeed as we continue to advance our enterprise-grade AI offerings.
Model Garden and the platform’s tuning tools appeal to developers and data scientists alike—click here to get started. If you’re a data scientist experimenting with, building, and deploying models, our newly announced Colab Enterprise is also a can’t-miss resource — learn more here.
And for developers looking to get started quickly with common gen AI use cases, like chatbots and custom search engines, Vertex AI Search and Conversation can help you get up and running without any AI experience — and in many cases, without even writing any code.
To learn more and get started with Vertex AI, visit our webpage and documentation. To learn more about managing gen AI, check out the new AI Readiness Quick Check tool, based on Google’s AI Adoption Framework.