Businesses generate massive amounts of speech data every day, from customer calls to product demos to sales pitches. This data can transform your business by improving customer satisfaction, helping you prioritize product improvements, and streamlining business processes. While AI models have improved rapidly in the past few months, connecting speech data to these models in a scalable and governed way can be a challenge, limiting customers' ability to gain insights at scale.
Today, we are excited to announce the preview of Vertex AI transcription models in BigQuery. This new capability can make it easy to transcribe speech files and combine them with other structured data to build analytics and AI use cases — all through the simplicity and power of SQL, while providing built-in security and governance. Using Vertex AI capabilities, you can also tune transcription models to your data and use them from BigQuery.
Previously, customers built separate AI pipelines to transcribe speech data for analytics. These pipelines were siloed from BigQuery, and customers had to write custom infrastructure to bring the transcribed data into BigQuery for analysis. This increased time to value, made governance challenging, and required teams to manage multiple systems for a single use case.
An integrated, governed data-to-AI experience
Google Cloud's Speech-to-Text V2 API offers customers a variety of features that make transcription easy and efficient. One of these is the ability to choose a specific domain model for transcription. This means you can choose a model that is optimized for the type of audio you are transcribing, such as customer service calls, medical recordings, or universal speech. In addition to choosing a specialized model, you also have the flexibility to tune the model on your own data using model adaptation, which can improve the accuracy of transcriptions for your specific use case.
Once you've chosen a model, you can create object tables in BigQuery that map to the speech files stored in Cloud Storage. Object tables provide fine-grained access control, so users can only generate transcriptions for the speech files to which they have been granted access. Administrators can define row-level access policies on object tables and secure access to the underlying objects.
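As a sketch of how an administrator might scope such access, the row access policy below restricts a hypothetical analyst to files under one Cloud Storage prefix; the policy name, user email, and path are illustrative assumptions, not names from this announcement:

```sql
-- Hypothetical: only allow this analyst to see (and therefore transcribe)
-- objects under the support_calls/ prefix of the bucket.
CREATE ROW ACCESS POLICY support_calls_only
ON `my_dataset.my_speech_table`
GRANT TO ('user:analyst@example.com')
FILTER USING (STARTS_WITH(uri, 'gs://my_bucket/support_calls/'));
```

Because `ML.TRANSCRIBE` reads through the object table, the policy limits both which rows the analyst can query and which audio files they can transcribe.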
To generate transcriptions, simply register your off-the-shelf or adapted transcription model in BigQuery and invoke it over the object table using SQL. The transcriptions are returned as a text column in the BigQuery table. This process makes it easy to transcribe large volumes of audio data without having to worry about the underlying infrastructure. Additionally, the fine-grained access control provided by object tables ensures that customer data is secure.
Here is an example of how to use the Speech-to-Text V2 API with BigQuery:
```sql
# Create an object table in BigQuery that maps to the speech files stored in Cloud Storage.
CREATE OR REPLACE EXTERNAL TABLE `my_dataset.my_speech_table`
WITH CONNECTION `my_project.us.example_connection`
OPTIONS (
  object_metadata = 'SIMPLE',
  uris = ['gs://my_bucket/path/*'],
  metadata_cache_mode = 'AUTOMATIC',
  max_staleness = INTERVAL 1 HOUR
);

# Register your off-the-shelf or adapted transcription model in BigQuery.
CREATE OR REPLACE MODEL `my_dataset.my_speech_model`
REMOTE WITH CONNECTION `my_project.us.example_connection`
OPTIONS (
  remote_service_type = 'CLOUD_AI_SPEECH_TO_TEXT_V2',
  speech_recognizer = 'projects/my_project/locations/us/recognizers/my_recognizer'
);

# Invoke the registered model over the object table to generate transcriptions.
SELECT *
FROM ML.TRANSCRIBE(
  MODEL `my_dataset.my_speech_model`,
  TABLE `my_dataset.my_speech_table`);
```
This query generates transcriptions for all of the speech files in the object table and returns the results as a new text column named transcription.
Sentiment analysis, summarization and other analytics use cases
Once you’ve transcribed the speech to text, there are three ways you can build analytics on the resulting text data:
- Use BigQuery ML for common natural language use cases: BigQuery ML provides broad support for training and deploying text models. For example, you can use BigQuery ML to identify customer sentiment in support calls, or to classify product feedback into different categories. If you are a Python user, you can also use BigQuery Studio to run Pandas functions for text analysis.
- Join transcribed data with other structured data stored in BigQuery tables: This allows you to combine structured and unstructured data for powerful use cases. For example, you could identify high customer lifetime value (CLTV) customers with negative support call sentiment, or shortlist the most requested product features from customer feedback.
- Call the PaLM API directly from BigQuery to summarize, classify, or prompt Q&A on transcribed data: PaLM is a powerful AI language model that can be used for a wide variety of natural language tasks. For example, you could use PaLM to generate summaries of support calls, or to classify customer feedback into different categories.
```sql
# Code examples for above

# Create an object table in BigQuery that maps to the speech files stored in Cloud Storage.
CREATE OR REPLACE EXTERNAL TABLE `my_dataset.my_speech_table`
WITH CONNECTION `my_project.us.example_connection`
OPTIONS (
  object_metadata = 'SIMPLE',
  uris = ['gs://my_bucket/path/*'],
  metadata_cache_mode = 'AUTOMATIC',
  max_staleness = INTERVAL 1 HOUR
);

# Register your off-the-shelf or adapted transcription model in BigQuery.
CREATE OR REPLACE MODEL `my_dataset.my_speech_model`
REMOTE WITH CONNECTION `my_project.us.example_connection`
OPTIONS (
  remote_service_type = 'CLOUD_AI_SPEECH_TO_TEXT_V2',
  speech_recognizer = 'projects/my_project/locations/us/recognizers/my_recognizer'
);

# Invoke the registered speech model over the object table to generate transcriptions and save them to a table.
CREATE TABLE `my_dataset.my_speech_transcripts` AS (
  SELECT *
  FROM ML.TRANSCRIBE(
    MODEL `my_dataset.my_speech_model`,
    TABLE `my_dataset.my_speech_table`));

# Register the PaLM model in BigQuery.
CREATE OR REPLACE MODEL `my_dataset.my_palm_model`
REMOTE WITH CONNECTION `my_project.us.example_connection`
OPTIONS (
  ENDPOINT = 'text-bison@latest'
);

# Invoke the registered PaLM model to extract keywords from the transcriptions.
SELECT *
FROM
  ML.GENERATE_TEXT(
    MODEL `my_dataset.my_palm_model`,
    (
      SELECT
        CONCAT('Extract the key words from the text below: ', transcripts) AS prompt,
        *
      FROM
        `my_dataset.my_speech_transcripts`
    ),
    STRUCT(
      0.8 AS temperature,
      1024 AS max_output_tokens,
      0.95 AS top_p,
      40 AS top_k));
```
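The second approach, joining transcriptions with structured data, could look like the sketch below. It assumes the transcription output retains the object `uri` column, and the `my_dataset.customers` table with its `cltv` and `latest_call_uri` columns is a hypothetical example, not part of this announcement:

```sql
# Hypothetical: surface high-CLTV customers alongside the transcripts
# of their most recent support calls.
SELECT
  c.customer_id,
  c.cltv,
  t.transcripts
FROM `my_dataset.my_speech_transcripts` AS t
JOIN `my_dataset.customers` AS c
  ON t.uri = c.latest_call_uri
WHERE c.cltv > 10000
ORDER BY c.cltv DESC;
```

From here, the joined results can feed the same BigQuery ML or PaLM-based analyses as any other table.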
Implement search and generative AI use cases
After transcription, you can unlock powerful search functionalities by building indexes optimized for needle-in-the-haystack queries, made possible by BigQuery’s search and indexing capabilities.
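As a sketch of those capabilities, the statements below build a search index over the transcripts table and run a needle-in-the-haystack lookup with BigQuery's `SEARCH` function; the indexed table and search term are illustrative assumptions:

```sql
# Build a search index over all columns of the transcripts table.
CREATE SEARCH INDEX my_transcripts_index
ON `my_dataset.my_speech_transcripts` (ALL COLUMNS);

# Find calls whose transcription mentions a refund.
SELECT *
FROM `my_dataset.my_speech_transcripts`
WHERE SEARCH(transcripts, 'refund');
```

Once the index is built, qualifying `SEARCH` queries are served from the index rather than scanning every transcript.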
This integration also unlocks new generative LLM applications on audio files. You can use BigQuery’s powerful built-in ML functions to get further insights from the transcribed text, including ML.GENERATE_TEXT, ML.GENERATE_TEXT_EMBEDDING, ML.UNDERSTAND_TEXT, ML.TRANSLATE, etc., for various tasks like classification, sentiment analysis, entity extraction, extractive question answering, summarization, rewriting text in a different style, ad copy generation, concept ideation, embeddings and translation.
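For instance, one of those functions, ML.GENERATE_TEXT_EMBEDDING, can turn each transcript into a vector for semantic similarity or clustering. The sketch below assumes a remote embedding model registered over the same connection; the model name and endpoint are illustrative:

```sql
# Hypothetical: register a text embedding model over the existing connection.
CREATE OR REPLACE MODEL `my_dataset.my_embedding_model`
REMOTE WITH CONNECTION `my_project.us.example_connection`
OPTIONS (ENDPOINT = 'textembedding-gecko@latest');

# Embed each transcript; the function expects an input column named `content`.
SELECT *
FROM ML.GENERATE_TEXT_EMBEDDING(
  MODEL `my_dataset.my_embedding_model`,
  (SELECT transcripts AS content
   FROM `my_dataset.my_speech_transcripts`));
```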
Next steps
The above capabilities are now available in preview. Get started with the documentation and demo, or contact your Google sales representative.