Why would you ever trust Amazon's Alexa after this?

[Image: Amazon Echo Show 10. Skillful, but not necessarily trustworthy? Credit: Amazon]

It was only the other day that I was wondering whether it would be fun to have a cuckoo clock in my kitchen.

An Amazon Alexa-powered cuckoo clock, that is.

I concluded that the idea was utterly bonkers, as are most things Alexa-enabled.

But we all have our prejudices and many Americans are only too delighted to have Amazon’s Echos and Dots strewn about their homes to make their lives easier.

Why, Alexa can even buy you your mummy, should you want.

Yet perhaps Alexa-lovers should be warned that things may not be as delightful as they seem.

Skills? Oh, Everyone’s Got Skills.

New research from concerned academics at Germany’s Ruhr-University Bochum, together with equally concerned colleagues from North Carolina State — and even a researcher who, during the project, joined Google — may just make Alexa owners wonder about the true meaning of an easy life.

The researchers looked at 90,194 Alexa skills. What they found was a security Emmenthal that would make a mouse wonder whether there was any cheese there at all.

How much would you like to shudder, oh happy Alexa owner?

How about this sentence from Dr. Martin Degeling: “A first problem is that Amazon has partially activated skills automatically since 2017. Previously, users had to agree to the use of each skill. Now they hardly have an overview of where the answer Alexa gives them comes from and who programmed it in the first place.”

So the first problem is that you have no idea where your clever answer comes from whenever you rouse Alexa from her slumber. Or, indeed, how secure your question may have been.

Ready for another quote from the researchers? Here you go: “When a skill is published in the skill store, it also displays the developer’s name. We found that developers can register themselves with any company name when creating their developer’s account with Amazon. This makes it easy for an attacker to impersonate any well-known manufacturer or service provider.”

Please, this is the sort of thing that makes us laugh when big companies get hacked — and don’t tell us for months, or even years.

These researchers actually tested the process for themselves. “In an experiment, we were able to publish skills in the name of a large company. Valuable information from users can be tapped here,” they said, modestly.

This finding was bracing, too. Yes, Amazon has a certification process for these skills. But “no restriction is imposed on changing the backend code, which can change anytime after the certification process.”

In essence, then, a malicious developer could change the code and begin to hoover up sensitive personal data.
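
To make that risk concrete, here is a minimal, hypothetical sketch of the sort of AWS Lambda backend an Alexa skill typically calls. It is not code from the researchers' paper, and the names are illustrative; the point is simply that everything a skill does with your utterance lives in developer-controlled code like this, which can be redeployed at will after certification.

```python
# Hypothetical Alexa skill backend (AWS Lambda, Python).
# Illustrative sketch only -- not code from the study.
import json


def lambda_handler(event, context):
    """Entry point Alexa invokes with a JSON request envelope."""
    request = event.get("request", {})

    if request.get("type") == "IntentRequest":
        # Slot values hold whatever the user just said for this intent.
        slots = request.get("intent", {}).get("slots", {})
        spoken_values = {name: slot.get("value") for name, slot in slots.items()}

        # Nothing stops a later redeploy from adding a line like this,
        # sending those values wherever the developer pleases.
        print(json.dumps(spoken_values))  # stand-in for quiet data collection

    # A perfectly ordinary-sounding reply, so the user notices nothing.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Done. Anything else?"},
            "shouldEndSession": True,
        },
    }
```

The researchers' point, in other words, is that certification looks at what that function does on the day it is submitted, not at what it does after the next deploy.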

Security? Yeah, It’s A Priority.

Then, say the researchers, there are the skills developers who publish under a false identity.

Perhaps, though, this all sounds too dramatic. Surely all these skills have privacy policies that govern what they can and can’t do.

Please sit down. From the research: “Only 24.2% of skills have a privacy policy.” So three-quarters of the skills, well, don’t.

Don’t worry, though, there’s worse: “For certain categories like ‘kids’ and ‘health and fitness’ only 13.6% and 42.2% skills have a privacy policy, respectively. As privacy advocates, we feel both ‘kids’ and ‘health’ related skills should be held to higher standards with respect to data privacy.”

Naturally, I asked Amazon what it thought of these slightly chilly findings.

An Amazon spokesperson told me: “The security of our devices and services is a top priority. We conduct security reviews as part of skill certification and have systems in place to continually monitor live skills for potentially malicious behavior. Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms to further protect our customers.”

It’s heartening to know security is a top priority. I fancy that keeping customers amused by as many Alexa skills as possible, so that Amazon can collect as much data as possible, might be a higher priority.

Still, the spokesperson added: “We appreciate the work of independent researchers who help bring potential issues to our attention.”

Some might translate this as: “Darn it, they’re right. But how do you expect us to monitor all these little skills? We’re too busy thinking big.”

Hey, Alexa. Does Anyone Really Care?

Of course, Amazon believes its monitoring systems work well in identifying true miscreants. Somehow, though, expecting developers to stick to the rules isn’t quite the same as making sure they do.

I also understand that the company believes kids' skills often don't have a privacy policy attached because they don't collect personal information.

To which one or two parents might mutter: “Uh-huh?”

Ultimately, like so many tech companies, Amazon would prefer you to monitor — and change — your own permissions, as that would be very cost-effective for Amazon. But who really has those monitoring skills?

This research, presented last Thursday at the Network and Distributed System Security Symposium, makes for such candidly brutal reading that at least one or two Alexa users might consider what they’ve been doing. And with whom.

Then again, does the majority really care? Until some unpleasant happenstance occurs, most users just want to have an easy life, amusing themselves by talking to a machine when they could quite easily turn off the lights themselves.

After all, this isn’t even the first time that researchers have exposed the vulnerabilities of Alexa skills. Last year, academics tried to upload 234 policy-breaking Alexa skills. Alexa, how many got approved? Yes, all of them.

The researchers behind this latest skills study contacted Amazon themselves to offer some sort of “Hey, look at this.”

They say: “Amazon has confirmed some of the problems to the research team and says it is working on countermeasures.”

I wonder what skills Amazon is using to achieve that.