After year-long lockout, Twitter is finally giving people their accounts back

Twitter is finally allowing a number of locked-out users to regain control of their accounts. Around a year after Europe’s new privacy law, the GDPR, rolled out, Twitter began booting users out of their accounts if it suspected the account’s owner was underage — that is, younger than 13. But the process also locked out many users who said they were now old enough to use Twitter’s service legally.

While Twitter’s rules had stated that users under 13 can’t create accounts or post tweets, many underage users did so anyway thanks to lax enforcement of the policy. The GDPR regulations, however, forced Twitter to address the issue.

But even if the Twitter users were old enough to use the service when the regulations went into effect in May 2018, Twitter still had to figure out a technical solution for deleting all the content those users had published to its platform while they were underage.

The lock-out approach was an aggressive way to deal with the problem.

By comparison, another app favored by underage users, TikTok, was recently fined by the FTC for being in violation of U.S. children’s privacy law, COPPA. But instead of kicking out all its underage users for months on end, it forced an age gate to appear in the app after it deleted all the videos made by underage users. Those users who were still under 13 were then redirected to a new COPPA-compliant experience.

Although Twitter was forced to address the problem because of the new regulations, lest it face possible fines, the company seemingly didn’t prioritize a fix. For example, VentureBeat reported that Twitter emailed users in June 2018 saying it would be in touch with an update about the problem soon, but no update ever arrived.

The hashtag #TwitterLockOut became a regular sight on Twitter, and cries of “Give us back our accounts!” would appear in the replies whenever Twitter shared other product news on its official accounts. (Well, that and requests for an Edit button, of course.)

Twitter says that it’s now beginning — no, for real this time! — to give the locked out users control of their accounts. The process will roll out in waves as it scales up, with those who have waited the longest getting their emails first.

It also claims the process “was a lot more complicated” than anticipated, which is why it took a year (or in some cases, more than a year) to complete.

However, there are some caveats.

The users will first need to give Twitter permission to delete any tweets posted before they were 13, as well as any likes, DMs sent or received, moments, lists, and collections. Twitter will also need to remove all profile information besides the account’s username and date of birth.

In other words, the company is offering users a way to reclaim their username but none of their content.

Though many of these users have since moved on to new Twitter accounts, they may still want to reclaim their old username if it was a good one. In addition, their follower/following counts will return to normal within 24 hours of their regaining control of the account.

Twitter says it’s beginning to email those who are eligible starting today with these details. If the user doesn’t have an email address on file, they can instead log into the account, where they’ll see a “Get Started” button to kick off the process.

To proceed, users will have to confirm their name and either the email or phone number that was associated with the account.

The account isn’t immediately unlocked after the steps are completed, users report. But Twitter’s dialog box informs the users they’ll be notified when the process is finalized on Twitter’s side.

Hopefully, that won’t take another year.

Image credits (of the process): Reddit user nyuszika7h, via r/Twitter 


The UK's tax office must destroy 7 million voiceprints. Would that happen in the U.S.?

Have you been giving away your voiceprint?
Image: Getty Images/Ikon Images

Imagine the IRS sitting on a vast database of unique voiceprints collected from millions of citizens.

That’s basically what happened in the U.K., but at least the country has an agency to fix the problem. The U.S. has no such safeguard — and one of its agencies has already started collecting face scans.

Her Majesty’s Revenue and Customs (HMRC) has been instructing customers to submit “voiceprints” since 2017, and it may not have received proper consent to do so.

Now, the nation’s data protection enforcement agency, the Information Commissioner’s Office (ICO), has issued an official order that HMRC must delete the Voice ID data of 7 million citizens. The tax office has 28 days to comply with the May 9 order.

In the U.S., government agencies are also collecting biometric data. Customs and Border Protection (CBP) is scanning the faces of individuals leaving the country. It says the images are encrypted, and only stored for a short time. But experts worry that introducing facial recognition technology at airports could turn them into tools of mass surveillance. That could lead to unlawful arrests, which would disproportionately affect women and people of color, whom facial recognition software has trouble recognizing accurately.

The problem is, unlike the UK, the U.S. does not have a GDPR-style law — and therefore no agency to make sure that individuals are giving consent to the collection and storage of their biometric data.

Lawmakers are currently considering federal privacy legislation, but debates about what form it should take are slowing the process down. That’s put states on the front lines. California led the way in 2018 when it passed a historic GDPR-style law called the California Consumer Privacy Act. Tech companies are outwardly supporting regulation, but, in California, industry lobbyists are simultaneously working to weaken this law.

The HMRC began prompting citizens to say unique phrases that would serve as Voice ID passwords in 2017. But the password wasn’t just about the content of the phrases — it was about the unique voices issuing them. Users were creating what are known as “voiceprints.” 

Voiceprints qualify as biometric data, which the GDPR treats as “special category” data. Under the law, anyone collecting special category data must obtain explicit, informed consent for that collection. Organizations that manage personal data must also have conducted a Data Protection Impact Assessment (DPIA), which determines how they will ensure the privacy and security of the data, before actually collecting any personal information.

The ICO’s investigation found that HMRC did neither of these things. The regulator says HMRC did not conduct a DPIA and consequently had not incorporated “data protection by design and default” into the system. That’s somewhat understandable, since the GDPR didn’t come into effect until 2018, and the program began in 2017; understandable, but not an excuse for potentially shoddy data protection.

Of more startling concern to the ICO was that HMRC did not obtain informed consent from the majority of Voice ID users. The regulator says citizens did not understand that they could opt out of the Voice ID system.

“HMRC collected it in circumstances where there was a significant imbalance of power between the organisation and its customers,” Steve Wood, the ICO’s deputy commissioner for policy, wrote in a blog post.

The ICO reports that this is its first enforcement case involving biometric data. But as government agencies and companies increasingly rely on and collect our unique identifiers, it certainly won’t be the last. Hopefully, as the U.S. considers its own privacy legislation, it is taking notes.


UK tax office ordered to delete millions of unlawful biometric voiceprints

The UK’s data protection watchdog has issued the government department responsible for collecting taxes with a final enforcement notice, after an investigation found HMRC had collected biometric data from millions of citizens without obtaining proper consent.

HMRC has 28 days from the May 9 notice to delete any Voice ID records where it did not obtain explicit consent to record and create a unique biometric voiceprint linked to the individual’s identity. 

The Voice ID system was introduced in January 2017, with HMRC instructing callers to a helpline to record a phrase to use their voiceprint as a password. The system soon attracted criticism for failing to make it clear that people did not have to agree to their biometric data being recorded by the tax office.

In total, some seven million UK citizens have had voiceprints recorded via the system. HMRC will now have to delete the majority of these records (around five million voiceprints), retaining biometric data only where it has fully informed consent to do so.

The Information Commissioner’s Office (ICO) investigation into Voice ID was triggered by a complaint by privacy advocacy group Big Brother Watch — which said more than 160,000 people opted out of the system after its campaign highlighted questions over how the data was being collected.

Announcing the conclusion of its probe last week, the ICO said it had found the tax office unlawfully processed people’s biometric data.

“Innovative digital services help make our lives easier but it must not be at the expense of people’s fundamental right to privacy. Organisations must be transparent and fair and, when necessary, obtain consent from people about how their information will be used. When that doesn’t happen, the ICO will take action to protect the public,” said deputy commissioner, Steve Wood, in a statement.

Blogging about its final enforcement notice, the regulator said today that it intends to carry out an audit to assess HMRC’s wider compliance with data protection rules.

“With the adoption of new systems comes the responsibility to make sure that data protection obligations are fulfilled and customers’ privacy rights addressed alongside any organisational benefit. The public must be able to trust that their privacy is at the forefront of the decisions made about their personal data,” writes Wood, offering guidance for using biometric data “in a fair, transparent and accountable way”.

Under Europe’s General Data Protection Regulation (GDPR) biometric data that’s used for identifying a person is classed as so-called “special category” data — meaning if a data controller is relying on consent as their legal basis for collecting this information the data subject must provide explicit consent.

In the case of HMRC, the ICO found it had failed to give customers sufficient information about how their biometric data would be processed, and failed to give them the chance to give or withhold consent.

It also collected voiceprints prior to publishing a Voice ID-specific privacy notice on its website. The ICO found it had not carried out an adequate data protection impact assessment prior to launching the system.

In October 2018 HMRC tweaked the automated options it offered to callers to provide clearer information about the system and their options.

That amended Voice ID system remains in operation. And in a letter to the ICO last week HMRC’s chief executive, Jon Thompson, defended it — claiming it is “popular with our customers, is a more secure way of protecting customer data, and enables us to get callers through to an adviser faster”.

As a result of the regulator’s investigation HMRC retrospectively contacted around a fifth of the seven million Brits whose data it had gathered to ask for consent. Of those it said more than 995,000 provided consent for the use of their biometric data and more than 260,000 withheld it.


Nearly 70 percent of hotel websites leak personal data, Symantec study finds

jacoblund via Getty Images

A security flaw may be hiding in that confirmation email you get after booking a hotel room. A Symantec study of more than 1,500 hotels found that 67 percent of them were unwittingly leaking guests’ personal information. The hotels in the study were spread across 54 countries, including the U.S., Canada and even some in the E.U., despite strict GDPR protections. They ran the gamut in quality too, from two-star motels to five-star beach resorts.

The main issue involved booking confirmation emails, according to Symantec principal threat researcher Candid Wueest. Many of the messages include an active link that directs to a separate website where guests can access their reservation without having to log in again. The booking code and the guest’s email address are often embedded in the URL itself, which on its own isn’t a big deal.

But, like many businesses, hotels share your personal data with third parties, meaning that your booking code and email are visible to them as well. An attacker would need only your booking code and email to find your address, full name, cell phone number, passport number and other highly sensitive information. Symantec also found that a smaller number of hotels didn’t encrypt the links sent in confirmation emails, giving attackers another window of opportunity.

A Symantec spokesperson told Engadget that the company contacted the hotels that had the security flaw and that most, but not all, of the hotels were taking measures to fix it. Symantec would not disclose which hotels were named in the study, but said it looked at a total of 45 different websites, including boutique hotels and major chains with hundreds of locations, covering more than 1,500 hotels.

What can customers do in the meantime to guard their privacy? Symantec advises using a VPN when changing a hotel reservation over public Wi-Fi. You can also check the URL of your confirmation link to see if your booking details are exposed. A URL with the security flaw would look like this: https://booking.the-hotel.tld/retrieve.php?prn=1234567&mail=john_smith@myMail.tld
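The pattern is easy to spot programmatically. Below is a minimal Python sketch of that check — the parameter names and the `exposed_booking_params` helper are illustrative assumptions, not taken from the Symantec study — which flags query parameters that look like booking identifiers or contact details, and flags unencrypted links:

```python
from urllib.parse import urlparse, parse_qs

# Parameter names that commonly carry booking references or contact
# details (an illustrative, non-exhaustive list).
SENSITIVE_PARAMS = {"prn", "mail", "email", "booking", "bookingref", "confirmation"}

def exposed_booking_params(url: str) -> list[str]:
    """Return query parameter names in a confirmation URL that look like
    booking identifiers or personal data; also flag plain-HTTP links."""
    parsed = urlparse(url)
    # Lowercase the query so the key comparison is case-insensitive;
    # only the parameter names are inspected, not the values.
    params = parse_qs(parsed.query.lower())
    findings = [name for name in params if name in SENSITIVE_PARAMS]
    if parsed.scheme != "https":
        findings.append("unencrypted (http) link")
    return findings

url = "https://booking.the-hotel.tld/retrieve.php?prn=1234567&mail=john_smith@myMail.tld"
print(exposed_booking_params(url))  # → ['prn', 'mail']
```

Run against the example URL above, the sketch flags both the booking reference (`prn`) and the email address (`mail`) — exactly the two values an attacker would need to retrieve the reservation.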

Wueest told Engadget in an email that he also looked at five travel search engines, and found similar security flaws. “This (…finding) shows it is a general issue in the travel industry and not just a local issue,” he wrote.


Covert data-scraping on watch as EU DPA lays down “radical” GDPR red-line

An interesting decision came out of Poland’s data protection agency this week after the watchdog issued its first fine under Europe’s General Data Protection Regulation (GDPR).

On the surface the enforcement doesn’t look so remarkable: A ‘small’ ~€220K fine was handed to a Sweden-headquartered European digital marketing company, Bisnode, which has an office in Poland, after the national Personal Data Protection Office (UODO) decided the company had failed to comply with data subject rights obligations set out in Article 14 of the GDPR.

But the decision also requires Bisnode to contact the close to six million people it did not already reach out to in order to fulfil its Article 14 information notification obligation, with the DPA giving the company three months to comply.

Bisnode previously estimated it would cost around €8M (~$9M) in registered postal costs to send so many letters, never mind the burden of handling any related admin.

So, as ever, the strength of data protection enforcement under the GDPR lies in a lot more than the deterrent of top-line fines. It’s the accompanying orders that can really rearrange business practices.

Local press reports that Bisnode has said it will delete the sanctioned records, presumably rather than shell out to send millions of letters. It also intends to challenge the UODO’s decision, initially in Polish courts — relying on caveats contained in Article 14 which relate to how much effort a data controller has to expend to contact people to tell them it’s processing their data.

It’s reportedly willing to fight all the way up to Europe’s top court, if necessary. (We’ve reached out to Bisnode for confirmation of its next steps.)

Any legal challenge to the UODO’s enforcement decision could therefore end up clarifying (and/or setting) some harder limits around covert scraping of personal data, if it reaches the CJEU — potentially affecting operators in multiple industries and sectors such as business intelligence, advertising and even cyber threat intelligence. So privacy watchers have pricked up their ears.

“The decision is seen as radical, as it interprets Article 14 literally,” Dr Lukasz Olejnik, independent cybersecurity and privacy advisor, and research associate at the Center for Technology and Global Affairs at Oxford University, tells TechCrunch.

“UODO has taken a very principled position, arguing that the company business model is fully based on processing scraped data, and that the company has taken a decision willingly. UODO also argues that the company was aware of the obligation, as it did contact part of the people via email.”

While there are big and potentially costly implications for data-scrapers across various industries down the legal line, depending on how Bisnode’s appeal/s pan out, Olejnik adds a judicious caveat — noting that “each case might be different and have its specifics”.

There’s certainly no guarantee that the DPA’s decision will lead to a de facto ban on covert commercial data-scraping.

But there is fresh legal uncertainty for those quietly helping themselves to public databases of Europeans’ personal data. And repurposing such data for commercial use may be far more expensive than you think.

Right to be informed

Article 14 of the GDPR creates an obligation on data controllers to inform people whose personal data they intend to process when the information in question has not been directly obtained from them. So, for instance, when personal data has been scraped off the public Internet.

The relevant chunk of the regulation is pretty long — but key points include that the person whose data has been scraped must be informed who has their data (which includes anyone the data has been shared with, and any proposed international transfers); the types of data obtained; what is going to be done with it; and the legal basis for the processing.

Data subjects must also be informed of their right to complain, so they can object if they don’t like what you want to do with their data.

The information obligation is also purpose specific; so if the data controller later wants to do something else with the scraped data there’s an obligation to send a new Article 14 notice.

Data subjects must be informed within a month of their data being obtained, at the latest (and per intended purpose). And if the data is to be used for direct marketing, the subject must be informed the first time they are sent a communication, if not sooner.

In the case of Bisnode it obtained a variety of personal data from public registers and other public databases pertaining to millions of entrepreneurs and business owners — including their names, national ID numbers and any legal events related to their business activity.

Registered addresses and/or company addresses appear to have been standard in the public data it scraped but other contact data was not, and Bisnode only obtained email addresses for a small sub-set of the individuals. It subsequently sent emails to those people — fulfilling its Article 14 information obligation in their case.

But, at issue, is that instead of sending text messages or snail mail notifications to all the other people whose email addresses it did not have — aka the vast majority; some 5.7M people — Bisnode made a conscious decision not to reach out to them directly. Instead it posted a notice on its website in the stated belief that fulfilled its Article 14 obligations.

“We recognise the right for sole proprietors to be informed of the fact that their data is processed by us. In this case, Bisnode has complied to the General Data Protection Regulation Art. 14 by posting the information on our website,” it wrote in an initial statement following the UODO’s decision, also posted on its website.

“We question the DPA’s interpretation of what is considered a proportionate effort. In the instances we have had email addresses (679,000 addresses), there we have sent out Art. 14 information via email, but to demand in addition that 5.7 million records of sole proprietors and members of corporate bodies of companies et al, be informed via postal mail or telephone cannot be considered a proportionate effort,” it added.

“In our view, information via email, other digital channels or via advertisements in national daily newspapers is preferable for recipients as well as senders.”

The DPA drastically disagrees — hence the penalty and other enforcement action.

Explaining its decision the watchdog says Bisnode clearly knew about its obligations under Article 14 and thereby made a conscious decision not to directly inform the majority of people whose personal data it had obtained for business purposes on cost grounds alone — when it should rather have accounted for its legal obligations related to data acquisition as a core component of business costs.

“The President of UODO states that the mere inclusion of information required in art. 14 par. 1 and par. 2 of the Regulation 2016/679, on the Company’s website, in the situation where the Company has the address data (and sometimes also phone numbers) of natural persons running a sole proprietorship (currently or in the past), enabling traditional mailing of correspondence containing information required by this provision (or transferring them by telephone), cannot be considered as sufficient fulfilment by the Company of the obligation referred to in art. 14 par. 1-3 of Regulation 2016/679,” runs the relevant chunk of legalese in the UODO decision [translated from Polish via Google Translate].

“The Company, as a professional in this type of activity, should be required to shape the business side of its business, which would take into account all the costs necessary to ensure its compliance with legal provisions (in this case, the provisions on the protection of personal data),” it adds, going on to further press its view that Bisnode’s decision not to reach out to inform the vast majority of individuals because it decided it was too expensive is exactly the problem, especially as its core business relies on processing people’s data.

The DPA’s decision also notes that Bisnode decided against sending SMS messages to another sub-set of people whose telephone numbers it did hold — again claiming as an excuse “the high costs of such an action”.

On the €8M figure which the company estimated would be the cost of posting Article 14 notifications to the 5.7M, the watchdog says there was in fact no obligation to send registered letters specifically (which is how Bisnode seems to have arrived at that estimate); or indeed to use any specific communication medium.

So it could presumably have sent (cheaper) standard mail, or even used its own staff (or hired temps) to spend a couple of days manually posting notifications to the individuals concerned. (Sidenote: Maybe there’s a new type of data notification compliance-tech robot/drone delivery startup to be created here… Knock-knock! Article14 delivery bot at the door to read you your rights…)

The UODO points out that GDPR’s Article 14 provision does not specify any particular means of fulfilling the obligation to inform. It just requires the data controller actually reach out.

An active manner vs disproportionate effort

The “essence of fulfilling the obligation” is to act in “an active manner”, it writes — so that means providing information to a data subject without them having to participate in enabling their own notification.

So just posting a passive notification under a tab on a website, as Bisnode did, would seem to go against that essence — as it clearly requires the people whose data is involved expending effort to find out.

And if they don’t even know their data was scraped in the first place, how would they know where — or even whether — to go looking? It’s very unlikely they’d just stumble upon the notification by chance on Bisnode’s website and join the dots. Not without some kind of wider broadcast announcing its presence.

“The need for active notification is emphasized by the Article 29 Working Party, in the Transparency Guidelines under Regulation 2016/679 adopted on 29 November 2017 (most recently amended and adopted on 11 April 2018),” the UODO’s decision further notes, citing guidance from an influential pan-EU data protection oversight body that’s now known as the European Data Protection Board and responsible for helping ensure consistency of application of GDPR across the bloc.

In a press release accompanying its decision, the UODO also makes a point of specifying the number and proportion of people who objected to Bisnode using their data after it did contact them directly (i.e. by email) — writing: “Out of about 90,000 people who were informed about the processing by the company, more than 12,000 objected to the processing of their data.”

Which highlights the fact that informing people about commercial and marketing-related uses of their data can, and usually does, result in a bunch of them saying ‘no don’t do that’ — an outcome that’s not exactly aligned with the interests of a marketing company like Bisnode which obviously wants to maximize the reach of its database.

But a shrinking marketing database may well be the price of respecting people’s privacy rights and doing business legally in Europe. And Bisnode’s interpretation of what is and isn’t “proportionate”, vis-a-vis Article 14, does look self-servingly aligned with its own business interests rather than with the rights of EU citizens.

If the legal rights of EU citizens to know what’s being done with their personal data can just be sidestepped by a data controller holding only selective types of contact data (for instance), that risks putting a pretty big loophole in the data protection framework. (Although in a similar case from a few years ago the UODO reached a different decision with regard to another company that did not have addresses at its disposal.)

There are some caveats included in Article 14 — allowing for a data controller to dispense with the requirement to inform data subjects if doing so “proves impossible or would involve a disproportionate effort” — but they are conspicuously linked in the text of GDPR to non-commercial examples: “[I]n particular for processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes”.

Safe to say, a b2b marketing business doesn’t fit the bill there.

A further caveat — which removes the obligation to inform the data subject if it is “likely to render impossible or seriously impair the achievement of the objectives of that processing” — would also seem a tough one to argue for a marketing purpose such as Bisnode’s.

It’s true that, as the complaints following its emailed Article 14 notifications indicate, there will very likely be a proportion of objections from those informed about a marketing purpose for their data. But the complaint stats cited by the UODO reveal that only a minority (~13%) of those emailed actively objected to Bisnode’s use of their data — a figure that does not seem so catastrophically large as to “seriously impair” the company’s overall business objective.

Of course it will be for judges to decide on all these details. But the looming legal fight will be around what constitutes “proportionate effort” — and in which circumstances those Article 14 caveats are allowed to apply.

“The ‘disproportionate effort’ in Article 14(5) is the core issue,” agrees Olejnik. “While including information solely on a website might be sufficient in some cases, but it is not clear if this applies in this case in particular. It is rather clear that the majority of people affected have no idea that their data are processed.”

“What the courts decide is anyone’s guess. It will be a truly interesting case to observe,” he adds.

In terms of immediate practical implications flowing from the UODO’s decision Olejnik says those are also unclear for now — not least because of Bisnode’s plan to fight all the way up to the CJEU if it can. (Meaning its appeal process could take years.)

“The company is also saying in public that its different EU branches are following a similar practice, but did not draw the attention of DPA,” Olejnik continues, adding: “It is however clear that some form of information obligation needs to be made. I believe this is an interesting precedent.

“While it may be shocking to some, this is the GDPR enforcement in action. Prior to enforcement, many would doubt if some text of GDPR means what it means. Well, it appears that to DPAs, it might indeed mean what it mean, if you know what I mean.”

The growing cost and risk of personal data

There is arguably a rather similar story going on, in parallel, around ‘free and informed’ consent under GDPR in relation to online ad targeting — which has turned into a major legal battleground since the regulation came into force last year. Multiple complaints remain in play targeting various data-for-ads tech platforms, as well as attacking core adtech processes for using and sharing personal data without proper consent and/or adequately robust protection.

With the GDPR not yet a year old, major enforcements are still lacking. But there are signs regulators are preparing to draw equally firm lines in the sand on this front too.

Given all the effort going into obfuscating and/or trying to ‘compliance-wash’ how the adtech industry strip-mines personal data, those most systematic personal data-harvesters similarly appear to have calculated that the cost of fully informing individuals is simply too high.

Also because they surely stand to lose a big chunk of their marketing muscle if every user whose personal information is being exploited for ads was offered a genuine, fully informed and entirely free choice to say no way.

But that doesn’t mean they can just sidestep the requirement. Enforcement is coming for any lurking lack of compliance there too.

Zooming out, it’s not clear what proportion of personal data is scraped from the Internet vs being actively provided by the user (albeit, not necessarily freely and willingly provided — as is the nub of this GDPR ‘forced consent’ complaint, for instance).

“Obtaining such comparative data would [be] difficult at a scale,” admits Olejnik.

There’s no doubt plenty of nefarious actors engage in ‘fully unlicensed’ online data-scraping to run illegal spam campaigns or to sell the data to hackers planning phishing expeditions. And clearly no regulation under the sun will put a firm lid on that. Though increased legal risk may at least provide a disincentive to less hardened cyber criminals.

In the commercial sector, where regulation has a more powerful bite, the lines between scraping and ‘providing’ data are frequently self-servingly blurred by the entities involved — seeking to work around the law.

So, again, robust enforcement decisions that get upheld by jurisprudence are sorely needed to define and set down firm red-lines about how people’s data can be respectfully handled.

Let’s also not forget the scandalous acts of the now defunct political data company, Cambridge Analytica, which covertly scraped personal data off of Facebook’s platform to build psychographic profiles of American voters to try to influence domestic political outcomes — something that would certainly constitute a breach of Article 14 were such actions applied to EU citizens under the bloc’s current data protection regime.

An egregious example like Cambridge Analytica shows the clear logic of GDPR creating a framework for protecting people from non-disclosed use of their personal information — by offering a check against unwelcome misuse. As indeed does Facebook’s long history of abject failure to properly protect user data.

It’s not clear whether GDPR could have stopped a rogue actor like Cambridge Analytica. Though the heftier fines baked into the regime do mean data-scraping is no longer the ‘help yourself, free for all’ it apparently was back in 2014.

At the same time, multiple Facebook businesses remain under investigation in Europe: The Irish DPA has ten open investigations against multiple Facebook-owned platforms over questions of GDPR compliance. So watch that space. (And watch, too, Facebook announcing a sudden ‘pivot’ to ‘privacy… )

Covertly harvesting personal data at scale now finally involves serious legal risk — at least in Europe.

And in light of the UODO’s strong stance on Article 14, data scrapers have a little more reason to worry.

Full disclosure

One final note on UODO and Bisnode: In a slightly odd quirk, the watchdog decided not to publicly name the company — choosing to pseudonymize it by editing out certain details from the published decision text.

It’s not clear why the DPA did so. Nor was its attempt to hide the name effective. Olejnik says he was quickly able to reverse its pseudonymization. While Bisnode also subsequently chose to out itself by going public with its disagreement.

Other European DPAs do disclose the targets of their decisions as a general rule. So it’s definitely a leftfield choice by the Polish watchdog.

A spokesperson for the UODO told us it does not always avoid disclosing the name of entities subject to its decisions but in this case said its president took the view that “information about the administrative fine and its justification is sufficient” — adding that in its view the most important element is to inform the public about decisions issued and “their substance”, including providing details of the decisive arguments in its decision-making process.

But given the lack of a specific justification and especially the weakness of the pseudonymization Olejnik suggests not publicly naming Bisnode was a questionable decision.

“Based on the information from the decision it did not take me much time to ‘reverse’ the pseudonymization and reveal the company name. This puts the decision behind pseudonymization under question,” he suggests. “Though I believe the public has a right to expect transparency in the first place — the decision to pseudonymize was controversial in the first place. To say the least, it forbids users to learn about the case, the misuse, and potentially even learn if they may have been affected.”

There is perhaps no small irony in a privacy watchdog choosing to ineffectively withhold the name of a company that had failed to inform a large number of private individuals that it covertly held their data.
