Yesterday, I was using the Google Search app on my Pixel phone and tried asking it and Assistant how to say a Japanese word in English, to check that I was learning the language correctly in Duolingo's lesson on professions. Upon asking “(Hey Google,) How do you say ‘bengoshi’ in English?”, I quickly found out it simply couldn’t understand the foreign word, since my device language was set to English.
So, I went into my system settings, added Japanese as a second language, and tried again. Unfortunately, I ran into the same problem: my voice input for ‘bengoshi’ (meaning lawyer) was transcribed as random English words it thought I was trying to say.
Google’s tools simply don’t work with bilingual input to date, and this is disappointing. As someone learning a second language rather than someone fluently bilingual, I can’t say for certain how vital this feature is to everyday use for people who, say, speak both Spanish and English and pepper one language with words from the other. But I imagine it would be incredibly helpful if Assistant could parse that kind of mixed input in a single utterance.
You can talk to the Google Assistant in either language, but not a mix of both.
I looked this issue up on the Assistant Help forum, and it confirmed the limitation. According to what I found, none of Google’s hardware or software can respond to anything but one language at a time (for those with a second language selected), never a mix of the two.
I hope to see this change in the future, not only for myself, but also for others who may need it more. For a company with international services, it’s appalling to see such restrictions on tools that so many diverse people use in their daily lives. It’s clear to me that Google is a U.S. company first, and an international provider only when it benefits their marketing, and that’s just disappointing.
Let me know in the comments if you’d like to have mixed bilingual voice input on your phone, tablet, watch, smart display, Chromebook, in the car, and elsewhere. I don’t think it’s too much to ask: Google’s cloud TPUs can handle massive AI and machine learning workloads, but they can’t switch input recognition on the fly? That seems less like a limitation and more like a choice if you ask me, but I could be wrong.