Google scans cloud content

Google has announced that it will henceforth scan content that users upload to the cloud. It will search for both illegal and otherwise undesirable material.

Comprehensive scanning with upload filters

Upload filters will examine all content uploaded to the Drive service, without exception. Google wants to identify and block cybercrime content, child pornography and hate speech. Affected users are to be notified when content is blocked; access to the files in question will then be restricted so that only the person who uploaded them can see them.

The automatic blocking is based on Google’s usage policy, which classifies a wide range of content as “dangerous”: in addition to illegal content, it covers, for example, nudity and pornography in general, blood, violence, recordings of minors, and promotional material from organizations classified as violent. These very broadly worded prohibitions give Google plenty of leeway to block content that would usually not be considered offensive or harmful.

Google sees itself in a socially central role

Against this background, it seems at least questionable that Google claims, among other things, to be working to “protect the security […] of society” with the newly announced step. Its further assurance that users’ privacy is always protected also rings hollow in view of the unrestricted access to all uploaded files. Google points out that its mail service has long scanned content automatically, so the current step merely extends this practice to another service. A further reason for criticism is that until 2017, Google scanned mails not only to prevent illegal and “dangerous” content, but also to serve personalized advertising — a practice it discontinued only after massive criticism from privacy advocates. What has not changed, however, is that Google grants third-party app developers access to other people’s mail, as became known in 2018.

This track record not only casts doubt on the noble goals Google has set for itself, but also shakes confidence in a company that will now access countless additional private files. So, in addition to the fundamental legal and ethical objections repeatedly raised against automated, suspicionless scanning, there are well-founded reservations about Google as the actor carrying out this measure.

What happens in case of violations?

How Google intends to punish violations of its guidelines remains unclear; its statements on the matter are vague. Google says it will perform a triage after detecting prohibited content and then take action, which may include blocking the content, deleting it, or closing the account associated with it. Google did not specify when each of these measures would apply.

In addition, Google has defined exceptions to its rules that algorithms generally cannot detect: nudity, for example, is prohibited, but not if it serves scientific, educational or artistic purposes. Who decides when this is the case remains unclear.

Backed by EU law

The EU, meanwhile, is likely to welcome Google’s announcement: only recently, the EU Parliament created a legal basis for corporations to read and systematically monitor messages in unencrypted messengers. The EU aims to make this monitoring mandatory, which would effectively ban end-to-end encryption.

Simon Lüthje

I am co-founder of this blog and am very interested in everything that has to do with technology, but I also like to play games. I was born in Hamburg, but now I live in Bad Segeberg.
