Reader Matt Hirsch, from outside Boston, contacted me about a strange phenomenon on his Google smart speakers. Both he and his wife used them to stream music from YouTube Music, Google’s alternative to Spotify. But then the music stopped working properly for his wife: she would hear commercials before the speaker played a song she requested.
One thing had changed. Hirsch and his wife recently activated the Google Voice Match service. This optional update trains the AI-powered Google Assistant to recognize different voices and present them with personalized responses. Voice Match can be useful if, for example, you want to access individual calendars or shopping lists.
But the Hirsch family certainly didn’t expect Voice Match to prevent their family from sharing a music account. Hirsch asked, “Is this an intentional thing to get us to buy the family plan or an accidental oversight?”
When I told Google about his experience, the company initially denied that it could happen. So I tried to replicate his situation using a Google Nest Hub speaker, which has a small screen, with the help of the voices of some family members and friends.
Sure enough, the smart speaker wouldn’t allow another Voice Match user in my house to play music from my own YouTube Music Premium subscription. Instead, the other user got the “free,” ad-supported version of YouTube Music. Our options were to have everyone join a more expensive family plan or to turn off Voice Match.
The experience reminded me of the digital rights locks on music files I used to buy from the iTunes Store. Now the locks are back in the modern world of streaming, and the only key is your own voice.
I shared the results of my experiment with Google, and the company denied a second time that this could happen. Only after I sent Google a video of the experience did the company change its tune. “This issue is caused by a bug that affects smart displays. We are working on a fix as soon as possible,” Google spokesman Robert Ferrara said.
The explanations of how Voice Match and music services work within a home are as complicated as logic puzzles. The root of the problem is that Google’s products are designed for individuals, whose data can be collected and used for advertising, not for households full of people who rightly expect to be able to share experiences like listening to music.
Google’s policy is that if the owner of the smart speakers has a music subscription, other members of the household can also access it. When smart speakers don’t recognize a person’s voice, the music service defaults to the owner.
But something clearly went wrong when the speaker’s primary user subscribed to YouTube Music and a second user turned on Voice Match. Things make more sense with Amazon’s Alexa and Apple’s Siri, which offer similar voice-identification capabilities. I tested both, and neither prevents other members of a household with voice profiles from using a shared music streaming account.
The Google spokesman did not answer Hirsch’s question about whether using voice identification as a blocker was intentional. It may just have been an oversight. But I also wouldn’t be surprised if a business development person at the company saw it as a way to generate incremental revenue from YouTube Music.
We should reject the idea that companies can use software updates to degrade or change the functionality of devices we’ve paid for. But we’ve seen it time and time again, with products like printers being updated to limit where you can buy ink. We now have more than a decade of reminders that when something connects to the Internet, you’re not really in control.
My favorite example is even more ridiculous. In 2019, Nike released internet-connected shoes that used an app to lace themselves. Then the company pushed a software update that inadvertently broke part of the shoes’ motorized mechanism, so they couldn’t lace up at all. The update turned the shoes into bricks.