Facebook makes another push to shape and define its own oversight

Facebook’s head of global spin and policy, former UK deputy prime minister Nick Clegg, will give a speech later today providing more detail of the company’s plan to set up an ‘independent’ external oversight board to which people can appeal content decisions so that Facebook itself is not the sole entity making such decisions.

In the speech in Berlin, Clegg will apparently admit to Facebook having made mistakes. Though it would be pretty awkward if he came on stage claiming Facebook is flawless and humanity needs to take a really long hard look at itself.

“I don’t think it’s in any way conceivable, and I don’t think it’s right, for private companies to set the rules of the road for something which is as profoundly important as how technology serves society,” Clegg told BBC Radio 4’s Today program this morning, discussing his talking points ahead of the speech. “In the end this is not something that big tech companies… can or should do on their own.

“I want to see… companies like Facebook play an increasingly mature role — not shunning regulation but advocating it in a sensible way.”

The idea of creating an oversight board for content moderation and appeals was previously floated by Facebook founder Mark Zuckerberg. Though it raises way more questions than it resolves — not least how a board whose existence depends on the underlying commercial platform it is supposed to oversee can possibly be independent of that selfsame mothership; how board appointees will be selected and recompensed; and who will choose the mix of individuals to ensure the board reflects the full spectrum of diversity of the 2BN+ people now using Facebook’s global platform.

None of these questions were raised let alone addressed in this morning’s BBC Radio 4 interview with Clegg.

Asked by the interviewer whether Facebook will hand control of “some of these difficult decisions” to an outside body, Clegg said: “Absolutely. That’s exactly what it means. At the end of the day there is something quite uncomfortable about a private company making all these ethical adjudications on whether this bit of content stays up or this bit of content gets taken down.

“And in the really pivotal, difficult issues what we’re going to do — it’s analogous to a court — we’re setting up an independent oversight board where users and indeed Facebook will be able to refer to that board and say well what would you do? Would you take it down or keep it up? And then we will commit, right at the outset, to abide by whatever rulings that board makes.”

Speaking shortly afterwards on the same radio program, Damian Collins, who chairs a UK parliamentary committee that has called for Facebook to be investigated by the UK’s privacy and competition regulators, suggested the company is seeking to use self-serving self-regulation to evade wider responsibility for the problems its platform creates — arguing that what’s really needed are state-set broadcast-style regulations overseen by external bodies with statutory powers.

“They’re trying to pass on the responsibility,” he said of Facebook’s oversight board. “What they’re saying to parliaments and governments is well you make things illegal and we’ll obey your laws but other than that don’t expect us to exercise any judgement about how people use our services.

“We need a level of regulation beyond that as well. Ultimately we need — just as we have in broadcasting — statutory regulation based on principles that we set, and an investigatory regulator that’s got the power to go in and investigate. Under this board that Facebook is going to set up, this will still largely be dependent on Facebook agreeing what data and information it shares, setting the parameters for investigations. Whereas we need external bodies with statutory powers to be able to do this.”

Clegg’s speech later today is also slated to spin the idea that Facebook is suffering unfairly from a wider “techlash”.

Asked about that during the interview, the Facebook PR seized the opportunity to argue that if Western society imposes overly stringent regulations on platforms and their use of personal data there’s a risk of “throw[ing] the baby out with the bathwater”, with Clegg smoothly reaching for the usual big tech talking points — claiming innovation would be “almost impossible” without enough of a data free-for-all, and that the West risks being dominated by China, rather than friendly US giants.

By that logic we’re in a rights race to the bottom — thanks to the proliferation of technology-enabled global surveillance infrastructure, such as the one operated by Facebook’s business.

Clegg tried to pass all that off as merely ‘communications as usual’, making no reference to the scale of the pervasive personal data capture that Facebook’s business model depends upon, and instead arguing its business should be regulated in the same way society regulates “other forms of communication”. Funnily enough, though, your phone isn’t designed to record what you say the moment you plug it in…

“People plot crimes on telephones, they exchange emails that are designed to hurt people. If you hold up any mirror to humanity you will always see everything that is both beautiful and grotesque about human nature,” Clegg argued, seeking to manage expectations vis-a-vis what regulating Facebook should mean. “Our job — and this is where Facebook has a heavy responsibility and where we have to work in partnership with governments — is to minimize the bad and to maximize the good.”

He also said Facebook supports “new rules of the road” to ensure a “level playing field” for regulations related to privacy; election rules; the boundaries of hate speech vs free speech; and data portability — making a push to flatten regulatory variation which is often, of course, based on societal, cultural and historical differences, as well as reflecting regional democratic priorities.

It’s not at all clear how any of that nuance would or could be factored into Facebook’s preferred universal global ‘moral’ code — which it’s here, via Clegg (a former European politician), leaning on regional governments to accept.

Instead of societies setting the rules they choose for platforms like Facebook, Facebook’s lobbying muscle is being flexed to make the case for a single generalized set of ‘standards’ which won’t overly get in the way of how it monetizes people’s data.

And if we don’t agree to its ‘Western’ style surveillance, the threat is we’ll be at the mercy of even lower Chinese standards…

“You’ve got this battle really for tech dominance between the United States and China,” said Clegg, reheating Zuckerberg’s senate pitch from last year, when the Facebook founder urged a trade-off of privacy rights to allow Western companies to process people’s facial biometrics so as not to fall behind China. “In China there’s no compunction about how data is used, there’s no worry about privacy legislation, data protection and so on — we should not emulate what the Chinese are doing but we should keep our ability in Europe and North America to innovate and to use data proportionately and innovat[iv]ely.

“Otherwise if we deprive ourselves of that ability I can predict that within a relatively short period of time we will have tech domination from a country with wholly different sets of values to those that are shared in this country and elsewhere.”

What’s rather more likely is the emergence of discrete Internets where regions set their own standards — and indeed we’re already seeing signs of splinternets emerging.

Clegg even briefly brought this up — though it’s not clear why Europeans should fear the emergence of a regional digital ecosystem that bakes respect for human rights into digital technologies (a point he avoided entirely).

With European privacy rules also now setting global standards by influencing policy discussions elsewhere — including the US — Facebook’s nightmare is that higher standards than it wants to offer Internet users will become the new Western norm.

Collins made short work of Clegg’s techlash point, pointing out that if Facebook wants to win back users’ and society’s trust it should stop acting like it has everything to hide and actually accept public scrutiny.

“They’ve done this to themselves,” he said. “If they want redemption, if they want to try and wipe the slate clean for Mark Zuckerberg he should open himself up more. He should be prepared to answer more questions publicly about the data that they gather, whether other companies like Cambridge Analytica had access to it, the nature of the problem of disinformation on the platform. Instead they are incredibly defensive, incredibly secretive a lot of the time. And it arouses suspicion.

“I think people were quite surprised to discover the lengths to which people go to gather data about us — even people who don’t use Facebook. And that’s what’s made them suspicious. So they have to put their own house in order if they want to end this.”

Last year Collins’ DCMS committee repeatedly asked Zuckerberg to testify to its enquiry into online disinformation — and was repeatedly snubbed…

Collins also debunked an attempt by Clegg to claim there’s no evidence of any Russian meddling on Facebook’s platform targeting the UK’s 2016 EU referendum — pointing out that Facebook previously admitted to a small amount of Russian ad spending that did target the EU referendum, before making the wider point that it’s very difficult for anyone outside Facebook to know how its platform gets used/misused; ads are just the tip of the political disinformation iceberg.

“It’s very difficult to investigate externally, because the key factors — like the use of tools like groups on Facebook, the use of inauthentic fake accounts boosting Russian content, there have been studies showing that’s still going on and was going on during the [US] parliamentary elections, there’s been no proper audit done during the referendum, and in fact when we first went to Facebook and said there’s evidence of what was going on in America in 2016, did this happen during the referendum as well, they said to us well we won’t look unless you can prove it happened,” he said.

“There’s certainly evidence of suspicious Russian activity during the referendum and elsewhere,” Collins added.

We asked Facebook for Clegg’s talking points for today’s speech but the company declined to share more detail ahead of time.

UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office. The paper can be read in full here (PDF).

It follows the government announcement of a policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including but not limited to illegal material such as terrorist and child sexual exploitation and abuse (which will be covered by further stringent requirements under the plan).

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyber bullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business, and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

Although the Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, warned against unintended consequences from badly planned legislation — and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user-generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms: file hosting sites, public discussion forums, messaging services and search engines are among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although it reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested during an interview on Sky News that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover.

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self-reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online, being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Although such caveats are unlikely to do much to reassure those concerned the approach will chill online speech, and/or place an impossible burden on smaller firms with less resource to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group the Open Rights Group, in a statement by its executive director Jim Killock. “We have to expect that the duty of care will end up widely drawn, with serious implications for legal content that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK tech industry association techUK also put out a response statement that warns about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy at techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, ending July 1, after which it says it will set out the action it will take in developing its final proposals for legislation.

“Following the publication of the Government Response to the consultation, we will bring forward legislation when parliamentary time allows,” it adds.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any legislative gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own — at least, for now.

The House of Lords committee was another parliamentary body that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”.

And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle. But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings with its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”

Facebook staff raised concerns about Cambridge Analytica in September 2015, per court filing

Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.

Last year a major privacy scandal hit Facebook after it emerged CA had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87M Facebook users without proper consents.

Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters to target political messages — with the company working initially for Ted Cruz’s and later Donald Trump’s presidential campaigns.

But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015 — i.e. months earlier than Facebook previously told lawmakers it became aware of the GSR/CA breach (December 2015).

The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.

Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive to require that.

In its opposition to Facebook’s motion to seal the document, the District includes a redacted summary (screengrabbed below) of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.

According to the District’s account a Washington D.C.-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.

Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).

Zuckerberg responded with a “yes” to Doyle’s question.

Facebook repeated the same line to the UK’s Digital, Culture, Media and Sport (DCMS) committee last year, over a series of hearings with less senior staffers.

Damian Collins, the chair of the DCMS committee — which made repeat requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed — tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.

The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.

The earlier charge that it misled the committee refers to a hearing in Washington in February 2018 — when Facebook sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions — where the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.

The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.

Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.

The question now is: if Facebook knew there were concerns about CA data-scraping prior to hiring the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire Chancellor?

The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.

Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.

But the new timeline that’s emerged of what Facebook knew when makes those questions more pressing than ever.

Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:

Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath.

In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.

Facebook did not engage with questions about any of the details and allegations in the court filing.

A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”

It goes on to suggest that Facebook’s concern to seal the document is “reputational”, suggesting — in another redacted segment (below) — that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.

“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.

As we’ve reported previously, the UK’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.

It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal. Or whether there were multiple email threads raising concerns about the company.

The ICO passed the correspondence it obtained from Facebook to the DCMS committee — which last month said it had agreed at the request of the watchdog to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month — citing rules against disclosing personal data and its ongoing investigation into CA meaning the risk of release might be prejudicial to its investigation.)

In its final report the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:

[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

We reached out to the ICO for comment on the information to emerge via the District of Columbia suit, and also to the Irish Data Protection Commission, the lead DPA for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.

An ICO spokesperson told us: “We are aware of these reports and will be considering the points made as part of our ongoing investigation.”

Last year the ICO issued Facebook with the maximum possible fine under UK law for the CA data breach.

Shortly after, Facebook announced it would appeal, saying the watchdog had not found evidence that any UK users’ data was misused by CA.

A date for the hearing of the appeal set for earlier this week was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.

This report was updated with comment from the ICO.

Seized cache of Facebook docs raises competition and consent questions

A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained through a legal discovery process by a startup that’s suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users’ data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of key issues, as the committee sees them after reviewing the documents, in which he draws attention to six issues.

Here is his summary of the key issues:

  1. White lists: Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.
  2. Value of friends data: It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers’ relationship with Facebook is a recurring feature of the documents.
  3. Reciprocity: Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  4. Android: Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  5. Onavo: Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  6. Targeting competitor apps: The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Albeit the timing of Facebook’s policy shift announcement hardly looks incidental — given Collins said last week the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent’, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

One email dated November 15, 2013, from Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection, writing: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook which the documents could stir up afresh relates to its repeat claim — including under questions from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.

UK parliament seizes cache of internal Facebook documents to further privacy probe

Facebook founder Mark Zuckerberg may yet regret underestimating a UK parliamentary committee that’s been investigating the democracy-denting impact of online disinformation for the best part of this year — and whose repeat requests for facetime he’s just as repeatedly snubbed.

In the latest high gear change, reported in yesterday’s Observer, the committee has used parliamentary powers to seize a cache of documents pertaining to a US lawsuit to further its attempt to hold Facebook to account for misuse of user data.

Facebook’s oversight — or rather lack of it — where user data is concerned has been a major focus for the committee, as its enquiry into disinformation and data misuse has unfolded and scaled over the course of this year, ballooning in scope and visibility since the Cambridge Analytica story blew up into a global scandal this April.

The internal documents now in the committee’s possession are alleged to contain significant revelations about decisions made by Facebook senior management vis-a-vis data and privacy controls — including confidential emails between senior executives and correspondence with Zuckerberg himself.

This has been a key line of enquiry for parliamentarians. And an equally frustrating one — with committee members accusing Facebook of being deliberately misleading and concealing key details from it.

The seized files pertain to a US lawsuit that predates mainstream publicity around political misuse of Facebook data, with the suit filed in 2015 by a US startup called Six4Three, after Facebook removed developer access to friend data. (As we’ve previously reported, Facebook was actually being warned about data risks related to its app permissions as far back as 2011 — yet it didn’t fully shut down the friends data API until May 2015.)

The core complaint is an allegation that Facebook enticed developers to create apps for its platform by implying they would get long-term access to user data in return. By later cutting off data access, the claim goes, Facebook was effectively defrauding developers.

Since lodging the complaint, the plaintiffs have seized on the Cambridge Analytica saga to try to bolster their case.

And in a legal motion filed in May Six4Three’s lawyers claimed evidence they had uncovered demonstrated that “the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones”.

The startup used legal powers to obtain the cache of documents — which remain under seal on order of a California court. But the UK parliament used its own powers to swoop in and seize the files from the founder of Six4Three during a business trip to London when he came under the jurisdiction of UK law, compelling him to hand them over.

According to the Observer, parliament sent a serjeant at arms to the founder’s hotel — giving him a final warning and a two-hour deadline to comply with its order.

“When the software firm founder failed to do so, it’s understood he was escorted to parliament. He was told he risked fines and even imprisonment if he didn’t hand over the documents,” it adds, apparently revealing how Facebook lost control over some more data (albeit, its own this time).

In comments to the newspaper yesterday, DCMS committee chair Damian Collins said: “We are in uncharted territory. This is an unprecedented move but it’s an unprecedented situation. We’ve failed to get answers from Facebook and we believe the documents contain information of very high public interest.”

Collins later tweeted the Observer’s report on the seizure, teasing “more next week” — likely a reference to the grand committee hearing in parliament already scheduled for November 27.

But it could also be a hint the committee intends to reveal and/or make use of information locked up in the documents, as it puts questions to Facebook’s VP of policy solutions…

That said, the documents are subject to the Californian superior court’s seal order, so — as the Observer points out — cannot be shared or made public without risk of being found in contempt of court.

A spokesperson for Facebook made the same point, telling the newspaper: “The materials obtained by the DCMS committee are subject to a protective order of the San Mateo Superior Court restricting their disclosure. We have asked the DCMS committee to refrain from reviewing them and to return them to counsel or to Facebook. We have no further comment.”

Facebook’s spokesperson added that Six4Three’s “claims have no merit”, further asserting: “We will continue to defend ourselves vigorously.”

Earlier on Sunday, Facebook sent a response to Collins, which Guardian reporter Carole Cadwalladr posted soon after.

With the response, Facebook seems to be using the same tactics which were responsible for the latest round of criticism against the company — deny, delay, and dissemble. 

And, well, the irony of Facebook asking for its data to remain private also shouldn’t be lost on anyone at this point…

Another irony: In July, the Guardian reported that as part of Facebook’s defence against Six4Three’s suit the company had argued in court that it is a publisher — seeking to have what it couched as ‘editorial decisions’ about data access protected by the US First Amendment.

Which is — to put it mildly — quite the contradiction, given Facebook’s long-standing public characterization of its business as just a distribution platform, never a media company.

So expect plenty of fireworks at next week’s public hearing as parliamentarians once again question Facebook over its various contradictory claims.

It’s also possible the committee will have been sent an internal email distribution list by then, detailing who at Facebook knew about the Cambridge Analytica breach in the earliest instance.

This list was obtained by the UK’s data watchdog, over the course of its own investigation into the data misuse saga. And earlier this month information commissioner Elizabeth Denham confirmed the ICO has the list and said it would pass it to the committee.

The accountability net does look to be closing in on Facebook management.

Even as Facebook continues to deny international parliaments any face-time with its founder and CEO (the EU parliament remains the sole exception).

Last week the company refused to even have Zuckerberg do a video call to take the committee’s questions — offering its VP of policy solutions, Richard Allan, to go before what’s now a grand committee comprised of representatives from seven international parliaments instead.

The grand committee hearing will take place in London on Tuesday morning, British time — followed by a press conference in which parliamentarians representing Facebook users from across the world will sign a set of ‘International Principles for the Law Governing the Internet’, making “a declaration on future action”.

So it’s also ‘watch this space’ where international social media regulation is concerned.

As noted above, Allan is just the latest stand-in for Zuckerberg. Back in April the DCMS committee spent the best part of five hours trying to extract answers from Facebook CTO Mike Schroepfer.

“You are doing your best but the buck doesn’t stop with you does it? Where does the buck stop?” one committee member asked him then.

“It stops with Mark,” replied Schroepfer.

But Zuckerberg definitely won’t be stopping by on Tuesday.
