Europe should ban AI for mass surveillance and social credit scoring, says advisory group

An independent expert group tasked with advising the European Commission on its regulatory response to artificial intelligence — work intended to underpin EU lawmakers’ stated aim of ensuring AI developments are “human centric” — has published its policy and investment recommendations.

This follows earlier ethics guidelines for “trustworthy AI”, put out by the High Level Expert Group (HLEG) for AI back in April, when the Commission also called for participants to test the draft rules.

The AI HLEG’s full policy recommendations comprise a highly detailed 50-page document — which can be downloaded from this web page. The group, which was set up in June 2018, is made up of a mix of industry AI experts, civic society representatives, political advisers and policy wonks, academics and legal experts.

The document includes warnings on the use of AI for mass surveillance and scoring of EU citizens, such as China’s social credit system, with the group calling for an outright ban on “AI-enabled mass scale scoring of individuals”. It also urges governments to commit not to engage in blanket surveillance of populations for national security purposes. (So perhaps it’s just as well the UK has voted to leave the EU, given the swingeing state surveillance powers it passed into law at the end of 2016.)

“While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the HLEG writes. “Governments should commit not to engage in mass surveillance of individuals and to deploy and procure only Trustworthy AI systems, designed to be respectful of the law and fundamental rights, aligned with ethical principles and socio-technically robust.”

The group also calls for commercial surveillance of individuals and societies to be “countered” — suggesting the EU’s response to the potency and potential for misuse of AI technologies should include ensuring that online people-tracking is “strictly in line with fundamental rights such as privacy”, including (the group specifies) when it concerns ‘free’ services (albeit with a slight caveat on the need to consider how business models are impacted).

Last week the UK’s data protection watchdog fired an even more specific shot across the bows of the online behavioral ad industry — warning that adtech’s mass-scale processing of web users’ personal data for targeting ads does not comply with EU privacy standards. The industry was told its rights-infringing practices must change, even if the Information Commissioner’s Office isn’t about to bring down the hammer just yet. But the reform warning was clear.

As EU policymakers work on fashioning a rights-respecting regulatory framework for AI, seeking to steer the next ten years+ of cutting-edge tech developments in the region, the wider attention and scrutiny that effort will draw to digital practices and business models looks set to drive a clean-up of problematic practices that have been able to proliferate under no or very light touch regulation until now.

The HLEG also calls for support for developing mechanisms for the protection of personal data, and for individuals to “control and be empowered by their data” — which they argue would address “some aspects of the requirements of trustworthy AI”.

“Tools should be developed to provide a technological implementation of the GDPR and develop privacy preserving/privacy by design technical methods to explain criteria, causality in personal data processing of AI systems (such as federated machine learning),” they write.

“Support technological development of anonymisation and encryption techniques and develop standards for secure data exchange based on personal data control. Promote the education of the general public in personal data management, including individuals’ awareness of and empowerment in AI personal data-based decision-making processes. Create technology solutions to provide individuals with information and control over how their data is being used, for example for research, on consent management and transparency across European borders, as well as any improvements and outcomes that have come from this, and develop standards for secure data exchange based on personal data control.”

Other policy suggestions among the many included in the HLEG’s report include a mandatory self-identification requirement for AI systems that interact with humans. Which would mean no sneaky Google Duplex human-speech mimicking bots: the bot would have to introduce itself up front — thereby giving the human on the call a chance to disengage.

The HLEG also recommends establishing a “European Strategy for Better and Safer AI for Children”. Concern and queasiness about rampant datafication of children, including via commercial tracking of their use of online services, have been raised in multiple EU member states.

“The integrity and agency of future generations should be ensured by providing Europe’s children with a childhood where they can grow and learn untouched by unsolicited monitoring, profiling and interest invested habitualisation and manipulation,” the group writes. “Children should be ensured a free and unmonitored space of development and upon moving into adulthood should be provided with a “clean slate” of any public or private storage of data related to them. Equally, children’s formal education should be free from commercial and other interests.”

Member states and the Commission should also devise ways to continuously “analyse, measure and score the societal impact of AI”, suggests the HLEG — to keep tabs on positive and negative impacts so that policies can be adapted to take account of shifting effects.

“A variety of indices can be considered to measure and score AI’s societal impact such as the UN Sustainable Development Goals and the Social Scoreboard Indicators of the European Social Pillar. The EU statistical programme of Eurostat, as well as other relevant EU Agencies, should be included in this mechanism to ensure that the information generated is trusted, of high and verifiable quality, sustainable and continuously available,” it suggests. “AI-based solutions can help the monitoring and measuring its societal impact.”

The report is also heavy on pushing for the Commission to bolster investment in AI — calling particularly for more help for startups and SMEs to access funding and advice, including via the InvestEU program.

Another suggestion is the creation of an EU-wide network of AI business incubators to connect academia and industry. “This could be coupled with the creation of EU-wide Open Innovation Labs, which could be built further on the structure of the Digital Innovation Hub network,” it continues. 

There are also calls to encourage public sector uptake of AI, such as by fostering digitalisation by transforming public data into a digital format; providing data literacy education to government agencies; creating European large annotated public non-personal databases for “high quality AI”; and funding and facilitating the development of AI tools that can assist in detecting biases and undue prejudice in governmental decision-making.

Another chunk of the report covers recommendations to try to bolster AI research in Europe — such as strengthening and creating additional Centres of Excellence which address strategic research topics and become “a European level multiplier for a specific AI topic”.

Investment in AI infrastructures, such as distributed clusters and edge computing, large RAM and fast networks, and a network of testing facilities and sandboxes is also urged; along with support for an EU-wide data repository “through common annotation and standardisation” — to work against data siloing, as well as trusted data spaces for specific sectors such as healthcare, automotive and agri-food.

The push by the HLEG to accelerate uptake of AI has drawn some criticism, with digital rights group Access Now’s European policy manager, Fanny Hidvegi, writing that: “What we need now is not more AI uptake across all sectors in Europe, but rather clarity on safeguards, red lines, and enforcement mechanisms to ensure that the automated decision making systems — and AI more broadly — developed and deployed in Europe respect human rights.”

Other ideas in the HLEG’s report include developing and implementing a European curriculum for AI; and monitoring and restricting the development of automated lethal weapons — including technologies such as cyber attack tools which are not “actual weapons” but which the group points out “can have lethal consequences if deployed”.

The HLEG further suggests EU policymakers refrain from giving AI systems or robots legal personhood, writing: “We believe this to be fundamentally inconsistent with the principle of human agency, accountability and responsibility, and to pose a significant moral hazard.”

The report can be downloaded in full here.

NSA improperly collected phone records for a second time, documents reveal

Newly released documents reveal the National Security Agency improperly collected Americans’ call records for a second time, just months after the agency was forced to purge hundreds of millions of collected calls and text records it unlawfully obtained.

The document, obtained by the American Civil Liberties Union, shows the NSA had collected a “larger than expected” number of call detail records from one of the U.S. phone providers, though the redacted document did not reveal which provider was involved or how many records were improperly collected.

The document said the erroneously collected call detail records were “not authorized” by the orders issued by the Foreign Intelligence Surveillance Court, which authorizes and oversees the U.S. government’s surveillance activities.

Greg Julian, a spokesperson for the NSA, confirmed the report in an email to TechCrunch, saying the agency “identified additional data integrity and compliance concerns caused by the unique complexities of using company-generated business records for intelligence purposes.”

NSA said the issues were “addressed and reported” to the agency’s overseers, but did not comment further on the violations as they involve operational matters.

The ACLU called on lawmakers to investigate the improper collection and to shut down the program altogether.

“These documents further confirm that this surveillance program is beyond redemption and a privacy and civil liberties disaster,” said Patrick Toomey, a staff attorney with the ACLU’s National Security Project. “The NSA’s collection of Americans’ call records is too sweeping, the compliance problems too many, and evidence of the program’s value all but nonexistent.”

“There is no justification for leaving this surveillance power in the NSA’s hands,” he said.

Under the government’s so-called Section 215 powers, the NSA collects millions of phone records every year by compelling U.S. phone giants to turn over daily records — a classified program first revealed through a secret court order compelling Verizon (which owns TechCrunch) to hand over its customers’ records, one of the documents leaked by whistleblower Edward Snowden. Those call records include the phone numbers of those communicating and when — though not the contents of the calls — which the agency uses to make connections between targets of interest.

But the government was forced to curtail the phone records collection program in 2015 following the introduction of the USA Freedom Act, the only law passed by Congress since the Snowden revelations to successfully rein in what critics said were the NSA’s vast surveillance powers.

In recent years, the number of call records has gone down but not gone away completely. In its last transparency report, the government said it collected 434 million phone records, down 18% on the year earlier.

But the government came under fire in June 2018 after it emerged the NSA had unlawfully collected 600 million call and text logs without the proper authority. The agency said “technical irregularities” meant it received call detail records it “was not authorized to receive.”

The agency deleted the entire batch of improperly collected records from its systems.

Following the incidents, the NSA reportedly shut down the phone records collection program, citing overly burdensome legal requirements imposed on the agency. In January, the agency’s spokesperson said the NSA was “carefully evaluating all aspects” of the program and its future, amid rumors that the agency would not ask Congress to reauthorize its Section 215 powers, which are set to expire later this year.

In an email Wednesday, the NSA spokesperson didn’t comment on the future of the program, saying only that it was “a deliberative interagency process that will be decided by the Administration.”

The government’s Section 215 powers are expected to be debated by Congress in the coming months.

UK law review eyes abusive trends like deepfaked porn and cyber flashing

The UK government has announced the next phase of a review of the law around the making and sharing of non-consensual intimate images, with ministers saying they want to ensure it keeps pace with evolving digital tech trends.

The review is being initiated in response to concerns that abusive and offensive communications are on the rise, as a result of it becoming easier to create and distribute sexual images of people online without their permission.

Among the issues the Law Commission will consider are so-called ‘revenge porn’, where intimate images of a person are shared without their consent; deepfaked porn, which refers to superimposing a real photograph of a person’s face onto a pornographic image or video without their consent; and cyber flashing, the unpleasant practice of sending unsolicited sexual images to a person’s phone by exploiting technologies such as Bluetooth that allow for proximity-based file sharing.

On the latter practice, the screengrab below is of one of two unsolicited messages I received as pop-ups on my phone in the space of a few seconds while waiting at a UK airport gate — and before I’d had a chance to locate the iOS master setting that actually nixes Bluetooth.

On iOS, even without accepting the AirDrop, the cyberflasher is still able to send an unsolicited placeholder image with their request.

Safe to say, this example is at the tamer end of what tends to be involved. More often it’s actual dick pics fired at people’s phones, not a parrot-friendly silicone substitute…

[Screengrab: cyber flashing via an unsolicited AirDrop request]

A patchwork of UK laws already covers at least some of the offensive and abusive communications in question, such as the offence of voyeurism under the Sexual Offences Act 2003, which criminalises certain non-consensual photography taken for sexual gratification — and carries a two-year maximum prison sentence (with the possibility that a perpetrator may be required to be listed on the sexual offender register); while revenge porn was made a criminal offence under section 33 of the Criminal Justice and Courts Act 2015.

But the government says that while it feels the law in this area is “robust”, it is keen not to be seen as complacent — hence continuing to keep it under review.

It will also hold a public consultation to help assess whether changes in the law are required.

The Law Commission published Phase 1 of its review of Abusive and Offensive Online Communications on November 1 last year — a scoping report setting out the current criminal law which applies.

The second phase, announced today, will consider the non-consensual taking and sharing of intimate images specifically — and look at possible recommendations for reform. It will not report for two years, though, so any changes to the law are likely to take several years to make it onto the statute books.

Among specific issues the Law Commission will consider is whether anonymity should automatically be granted to victims of revenge porn.

Commenting in a statement, justice minister Paul Maynard said: “No one should have to suffer the immense distress of having intimate images taken or shared without consent. We are acting to make sure our laws keep pace with emerging technology and trends in these disturbing and humiliating crimes.”

Maynard added that the review builds on recent changes to toughen UK laws around revenge porn and to outlaw ‘upskirting’ in English law; aka the degrading practice of covertly photographing under a person’s clothing without their consent.

“Too many young people are falling victim to co-ordinated abuse online or the trauma of having their private sexual images shared. That’s not the online world I want our children to grow up in,” added the secretary of state for digital issues, Jeremy Wright, in another supporting statement.

“We’ve already set out world-leading plans to put a new duty of care on online platforms towards their users, overseen by an independent regulator with teeth. This Review will ensure that the current law is fit for purpose as we deliver our commitment to make the UK the safest place to be online.”

The Law Commission review will begin on July 1, 2019 and report back to the government in summer 2021.

Terms of Reference will be published on the Law Commission’s website in due course.

Sidewalk Labs’ blueprint for a ‘mini’ smart city is a massive data mine

Sidewalk Labs, the smart city technology firm owned by Google’s parent company Alphabet, released a plan this week to redevelop a piece of Toronto’s eastern waterfront into its vision of an urban utopia — a ‘mini’ metropolis tucked inside a digital infrastructure burrito and bursting with gee-whiz tech-ery.

A place where high-tech jobs and affordable housing live in harmony, streets are built for people, not just cars, all the buildings are sustainable and efficient, public spaces are dotted with internet-connected sensors, and an outdoor comfort system deploys giant “raincoats” designed to keep residents warm and dry even in winter. The innovation even extends underground, where a freight delivery system ferries packages without the need for street-clogging trucks.

But this plan is more than a testbed for tech. It’s a living lab (or petri dish, depending on your view), where tolerance for data collection and expectations for privacy are being shaped, public due process and corporate reach is being tested, and what makes a city equitable and accessible for all is being defined.

It’s also more ambitious and wider in scope than its original proposal.

“In many ways, it was like a 50-sided Rubik’s cube when you’re looking at initiatives across mobility, sustainability, the public realm, buildings and housing and digital governance,” Sidewalk Labs CEO Dan Doctoroff said Monday, describing the effort to put together the master plan called Toronto Tomorrow: A New Approach for Inclusive Growth.

Even the harshest critics of the Sidewalk Labs plan might agree with Doctoroff’s Rubik’s Cube analogy. It’s a complex plan with big promises and high stakes. And despite the 1,500-plus page tome presenting the idea, it’s still opaque.

U.S. Senator and consumer advocacy groups urge FTC to take action on YouTube’s alleged COPPA violations

The groups behind a push to get the U.S. Federal Trade Commission to investigate YouTube’s alleged violation of children’s privacy law, COPPA, have today submitted a new letter to the FTC laying out the sanctions they now want the agency to impose. The letter comes shortly after news broke that the FTC was in the final stages of its probe into YouTube’s business practices regarding this matter.

They’re joined in pressing the FTC to act by COPPA co-author, Senator Ed Markey, who penned a letter of his own, which was also submitted today.

The groups’ formal complaint with the FTC was filed back in April 2018. The coalition, which then included 20 child advocacy, consumer and privacy groups, had claimed YouTube doesn’t get parental consent before collecting the data from children under the age of 13 — as is required by the Children’s Online Privacy Protection Act, also known as COPPA.

The organizations said, effectively, that YouTube was hiding behind its terms of service, which state that YouTube is “not intended for children under 13.”

This simply isn’t true, as any YouTube user knows. YouTube is filled with videos that explicitly cater to children, from cartoons to nursery rhymes to toy ads — the latter of which often come about by way of undisclosed sponsorships between toy makers and YouTube stars. The video creators will excitedly unbox or demo toys they received for free or were paid to feature, and kids just eat it all up.

In addition, YouTube curates much of its kid-friendly content into a separate YouTube Kids app that’s designed for the under-13 crowd — even preschoolers.

Meanwhile, YouTube treats children’s content like any other. That means targeted advertising and commercial data collection are taking place, the groups’ complaint states. YouTube’s algorithms also recommend videos and autoplay its suggestions — a practice that led to kids being exposed to inappropriate content in the past.

Today, two of the leading groups behind the original complaint — the Campaign for a Commercial-Free Childhood (CCFC) and Center for Digital Democracy (CDD) — are asking the FTC to impose the maximum civil penalties on YouTube because, as they’ve said:

Google had actual knowledge of both the large number of child-directed channels on YouTube and the large numbers of children using YouTube. Yet, Google collected personal information from nearly 25 million children in the U.S over a period of years, and used this data to engage in very sophisticated digital marketing techniques. Google’s wrongdoing allowed it to profit in two different ways: Google has not only made a vast amount of money by using children’s personal information as part of its ad networks to target advertising, but has also profited from advertising revenues from ads on its YouTube channels that are watched by children.

The groups are asking the FTC to impose a 20-year consent decree on YouTube.

They want the FTC to order YouTube to destroy all data from children under 13, including any inferences drawn from the data, that’s in Google’s possession. YouTube should also stop collecting data from anyone under 13, including anyone viewing a channel or video directed at children. Kids’ ages also need to be identified so they can be prevented from accessing YouTube.

Meanwhile, the groups suggest that all the channels in the Parenting and Family lineup, plus any other channels or videos directed at children, be removed from YouTube and placed on a separate platform for children (e.g. the YouTube Kids app).

This is something YouTube is already considering, according to a report from The Wall Street Journal last week.

This separate kids platform would have a variety of restrictions, including no commercial data collection; no links out to other sites or online services; no targeted marketing; no product or brand integration; no influencer marketing; and even no recommendations or autoplay.

The removal of autoplaying videos and recommendations, in particular, would be a radical change to how YouTube operates, but one that could protect kids from inappropriate content that slips in. It’s also a change that some employees inside YouTube itself had been pushing for, according to The WSJ’s report.

The groups also urge the FTC to require Google to fund educational campaigns around the true nature of Google’s data-driven marketing systems, admit publicly that it violated the law, and submit to annual audits to ensure its ongoing compliance. They want Google to commit $100 million to establish a fund that supports the production of noncommercial, high-quality and diverse content for kids.

Finally, the groups are asking that Google face the maximum possible civil penalties — $42,530 per violation, which could be counted either per child or per day. This monetary relief needs to be severe, the groups argue, so that Google and YouTube will be deterred from ever violating COPPA in the future.
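For a rough sense of scale — an illustrative calculation, not a figure from the complaint — counting each of the nearly 25 million U.S. children cited above as a single violation would put the theoretical maximum at $42,530 × 25,000,000, or roughly $1.06 trillion, though any actual penalty would inevitably land far below that ceiling.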

This laundry list of suggestions is more of a wish list for what the groups see as an ideal resolution; it doesn’t mean the FTC will follow through on all of them.

However, it seems likely that the Commission would at least require YouTube to delete the improperly collected data and isolate the kids’ YouTube experience in some way. After all, that’s precisely what it just did with TikTok (previously Musical.ly), which earlier this year paid a record $5.7 million fine for its own COPPA violations. It also had to implement an age gate where under-13 kids were restricted from publishing content.

The advocacy groups aren’t the only ones making suggestions to the FTC.

Senator Ed Markey (D-Mass.) also sent the FTC a letter today about YouTube’s violations of COPPA — a piece of legislation that he co-authored.

In his letter, he urges the FTC take a similar set of actions, saying:

“I am concerned that YouTube has failed to comply with COPPA. I therefore, urge the Commission to use all necessary resources to investigate YouTube, demand that YouTube pay all monetary penalties it owes as a result of any legal violations, and instruct YouTube to institute policy changes that put children’s well-being first.”

His suggestions are similar to those being pushed by the advocacy groups. They include demands for YouTube to delete the children’s data and cease data collection on those under 13; implement an age gate on YouTube to come into compliance with COPPA; prohibit targeted and influencer marketing; offer detailed explanations of what data is collected for “internal purposes”; undergo a yearly audit; provide documentation of compliance upon request; and establish a fund for noncommercial content.

He also wants Google to sponsor a consumer education campaign warning parents that no one under 13 should use YouTube, and wants Google to be prohibited from launching any new child-directed product until it’s been reviewed by an independent panel of experts.

The FTC’s policy doesn’t allow it to confirm or deny nonpublic investigations. YouTube hasn’t yet commented on the letters.
