Texas is poised to ban red light cameras


Texas is set to become the 11th state in the country to ban the use of red light cameras. The Dallas News reported that the state Senate passed a bill outlawing the devices, which are designed to catch drivers running red lights, by a 23-8 vote last week. It now awaits the signature of Republican Governor Greg Abbott, who is expected to sign it into law; Abbott campaigned on ditching the cameras last year as he ran for re-election.

In theory, red light cameras are designed to stop people from blowing through red lights. To accomplish that, the cameras flag when a driver runs a red light, process the car’s license plate and mail the owner a citation. In practice, the results aren’t so cut and dried. The cameras do a decent job of preventing some crashes, such as cars getting T-boned at an intersection, but they often lead to other problems: a number of studies have found the cameras cause an increase in rear-end collisions.
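The flag-then-cite flow described above can be sketched as a simple pipeline. All names and thresholds below are invented for illustration; real systems use specialized plate-recognition hardware and DMV lookups.

```python
from dataclasses import dataclass

@dataclass
class Violation:
    plate: str          # read by the camera's plate-recognition step
    intersection: str
    red_phase_ms: int   # how long the light had been red when the car entered

def should_cite(v: Violation, grace_ms: int = 100) -> bool:
    # Many deployments ignore entries within a short grace period after
    # the light turns red (the 100 ms threshold is a made-up example).
    return v.red_phase_ms > grace_ms

def make_citation(v: Violation) -> str:
    # In practice this step looks up the registered owner and mails a
    # citation; here we just format a notice string.
    return f"Citation: plate {v.plate} ran a red light at {v.intersection}"

v = Violation(plate="ABC1234", intersection="Main & 5th", red_phase_ms=850)
if should_cite(v):
    print(make_citation(v))
```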

While red light cameras present murky and debatable safety benefits, they have proven to be profit centers for cities that use them. Officials in Chicago and Fremont, California both came under fire in recent years for shortening yellow-light times, which gives drivers less time to stop and triggers more citations. Some drivers have pushed back against the cameras, calling into question their legality and criticizing them as little more than schemes to generate money. Texas expects to lose between $22 million and $28 million in revenue over the next two years from ending the red light camera program.


Millions of Instagram influencers had their private contact data scraped and exposed

A massive database containing contact information of millions of Instagram influencers, celebrities and brand accounts has been found online.

The database, hosted by Amazon Web Services, was left exposed without a password, allowing anyone to look inside. At the time of writing, the database had more than 49 million records — and was growing by the hour.

From a brief review of the data, each record contained public data scraped from influencer Instagram accounts, including their bio, profile picture, follower count, verification status and location by city and country. But each record also contained private contact information, such as the account owner’s email address and phone number.

Security researcher Anurag Sen discovered the database and alerted TechCrunch in an effort to find the owner and get the database secured. We traced the database back to Mumbai-based social media marketing firm Chtrbox, which pays influencers to post sponsored content on their accounts. Each record also contained a calculated worth for the account, based on the number of followers, engagement, reach, likes and shares it had. This was used as a metric to determine how much the company could pay an Instagram celebrity or influencer to post an ad.
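Chtrbox’s actual scoring model isn’t public; the fields named above (followers, engagement, reach, likes, shares) suggest a weighted score along these lines. The function and all weights below are invented purely for illustration:

```python
def account_worth(followers: int, engagement_rate: float,
                  reach: int, likes: int, shares: int) -> float:
    """Toy estimate of what a sponsored post might be worth, in dollars.

    The weights here are illustrative guesses, not Chtrbox's model.
    """
    base = followers * 0.004             # a fraction of a cent per follower
    interaction = (likes + 2 * shares) * 0.01  # shares weighted above likes
    return round((base + interaction) * (1 + engagement_rate)
                 + reach * 0.001, 2)

print(account_worth(followers=250_000, engagement_rate=0.03,
                    reach=80_000, likes=12_000, shares=900))
```

The point of any such metric is the same regardless of the exact weights: it turns scraped profile statistics into a per-post price.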

TechCrunch found several high-profile influencers in the exposed database, including prominent food bloggers, celebrities and other social media influencers.

We contacted several people at random whose information was found in the database and provided them the phone numbers found in their records. Two of the people responded and confirmed that the email address and phone number found in the database were the ones used to set up their Instagram accounts. Neither had any involvement with Chtrbox, they said.

Shortly after we reached out, Chtrbox pulled the database offline. Pranay Swarup, the company’s founder and chief executive, did not respond to a request for comment and several questions, including how the company obtained private Instagram account email addresses and phone numbers.

The scraping effort comes two years after Instagram admitted a security bug in its developer API allowed hackers to obtain the email addresses and phone numbers of six million Instagram accounts. The hackers later sold the data for bitcoin.

Months later, Instagram — now with more than a billion users — choked its API to limit the number of requests apps and developers can make on the platform.

Facebook, which owns Instagram, said it was looking into the matter.

“We’re looking into the issue to understand if the data described – including email and phone numbers – was from Instagram or from other sources,” said an updated statement. “We’re also inquiring with Chtrbox to understand where this data came from and how it became publicly available,” it added.

Updated with additional comments from Facebook.


Amazon under greater shareholder pressure to limit sale of facial recognition tech to the government

This week could mark a significant setback for Amazon’s facial recognition business if privacy and civil liberties advocates — and some shareholders — get their way.

Months earlier, shareholders tabled a resolution to limit the sale of Amazon’s facial recognition technology, which the company calls Rekognition, to law enforcement and government agencies. It followed accusations of bias and inaccuracies in the technology, which critics say can be used to racially discriminate against minorities. Rekognition, which runs image and video analysis of faces, has been sold to two states so far, and Amazon has pitched it to Immigration & Customs Enforcement. A second resolution would require an independent human and civil rights review of the technology.

Now the ACLU is backing the measures and calling on shareholders to pass the resolutions.

“Amazon has stayed the course,” said Shankar Narayan, director of the Technology and Liberty Project at the ACLU Washington, in a call Friday. “Amazon has heard repeatedly about the dangers to our democracy and vulnerable communities about this technology but they have refused to acknowledge those dangers let alone address them,” he said.

“Amazon has been so non-responsive to these concerns,” said Narayan, “even Amazon’s own shareholders have been forced to resort to putting these proposals addressing those concerns on the ballot.”

It’s the latest move in a concerted effort by dozens of shareholders and investment firms, tech experts and academics, and privacy and rights groups and organizations who have decried the use of the technology.

Critics say Amazon Rekognition has accuracy and bias issues. (Image: TechCrunch)

In a letter to be presented at Amazon’s annual shareholder meeting Wednesday, the ACLU will accuse Amazon of “failing to act responsibly” by refusing to stop the sale of the technology to the government.

“This technology fundamentally alters the balance of power between government and individuals, arming governments with unprecedented power to track, control, and harm people,” said the letter, shared with TechCrunch. “It would enable police to instantaneously and automatically determine the identities and locations of people going about their daily lives, allowing government agencies to routinely track their own residents. Associated software may even display dangerous and likely inaccurate information to police about a person’s emotions or state of mind.”

“As shown by a long history of other surveillance technologies, face surveillance is certain to be disproportionately aimed at immigrants, religious minorities, people of color, activists, and other vulnerable communities,” the letter added.

“Without shareholder action, Amazon may soon become known more for its role in facilitating pervasive government surveillance than for its consumer retail operations,” it read.

Facial recognition has become one of the most hot-button privacy topics in years. Amazon Rekognition, the company’s cloud-based facial recognition system, remains in its infancy yet is among the most prominent and widely available systems. But critics say the technology is flawed. Exactly a year prior to this week’s shareholder meeting, the ACLU first raised “profound” concerns with Rekognition and its deployment at airports, in public places and by police. Since then, the technology has been shown to struggle to detect people of color. In the ACLU’s tests, the system falsely matched 28 members of Congress against a database of mugshots of people who had been previously arrested.

But there has been pushback — even from government. Several municipalities have rolled out surveillance-curtailing laws and ordinances in the past year. San Francisco last week became the first major U.S. city to ban government use of facial recognition.

“Amazon leadership has failed to recognize these issues,” said the ACLU’s letter to be presented Wednesday. “This failure will lead to real-life harm.”

The ACLU said shareholders “have the power to protect Amazon from its own failed judgment.”

Amazon has pushed back against the claims by arguing that the technology is accurate — largely by criticizing how the ACLU conducted its tests using Rekognition.

Amazon did not comment when reached prior to publication.


GDPR adtech complaints keep stacking up in Europe

It’s a year since Europe’s General Data Protection Regulation (GDPR) came into force and leaky adtech is now facing privacy complaints in four more European Union markets. This ups the tally to seven markets where data protection authorities have been urged to investigate a core function of behavioral advertising.

The latest clutch of GDPR complaints aimed at the real-time bidding (RTB) system have been filed in Belgium, Luxembourg, the Netherlands and Spain.

All the complaints argue that RTB entails “wide-scale and systemic” breaches of Europe’s data protection regime, as personal data harvested to profile Internet users for ad-targeting purposes is broadcast widely to bidders in the adtech chain. The complaints have implications for key adtech players: Google and the Interactive Advertising Bureau, which sets the RTB standards used by others in the online advertising pipeline.

We’ve reached out to Google and IAB Europe for comment on the latest complaints. (The latter’s original response statement to the complaint can be found here, behind its cookie wall.)

The first RTB complaints were filed in the UK and Ireland, last fall, by Dr Johnny Ryan of private browser Brave; Jim Killock, director of the Open Rights Group; and Michael Veale, a data and policy researcher at University College London.

A third complaint went in to Poland’s DPA in January, filed by anti-surveillance NGO, the Panoptykon Foundation.

The latest four complaints have been lodged in Spain by Gemma Galdon Clavell (Eticas Foundation) and Diego Fanjul (Finch); David Korteweg (Bits of Freedom) in the Netherlands; Jef Ausloos (University of Amsterdam) and Pierre Dewitte (University of Leuven) in Belgium; and Jose Belo (Exigo Luxembourg).

Earlier this year a lawyer working with the complainants said they’re expecting “a cascade of complaints” across Europe — and “fully expect an EU-wide regulatory response”, given that the adtech in question is applied region-wide.

Commenting in a statement, Galdon Clavell, the CEO of Eticas, said: “We hope that this complaint sends a strong message to Google and those using Ad Tech solutions in their websites and products. Data protection is a legal requirement that must be translated into practices and technical specifications.”

A ‘bug’ disclosed last week by Twitter illustrates the potential privacy risks around adtech, with the social networking platform revealing it had inadvertently shared some iOS users’ location data with an ad partner during the RTB process. (It’s less clear who else Twitter’s “trusted advertising partner” might have passed people’s information to.)

The core argument underpinning the complaints is that RTB’s data processing is not secure — given the design of the system entails the broadcasting of (what can be sensitive and intimate) personal data of Internet users to all sorts of third parties in order to generate bids for ad space.
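To make that “broadcast” concrete: in the OpenRTB protocol that underpins most RTB exchanges, each bid request can carry device and user objects alongside the page URL, and every bidder receives the request whether or not it wins the auction. The sketch below follows OpenRTB 2.x field naming, but all the values are fabricated:

```python
import json

# A pared-down OpenRTB-style bid request. Every bidder that receives it
# sees all of these fields, not just the eventual auction winner.
bid_request = {
    "id": "req-8f3a",                       # auction ID (fabricated)
    "site": {"page": "https://example.com/health/depression-help"},
    "device": {
        "ip": "203.0.113.7",                # user's IP address
        "ua": "Mozilla/5.0 (example)",      # browser user agent
        "geo": {"lat": 50.85, "lon": 4.35}  # approximate location
    },
    "user": {
        "id": "exchange-user-9aa1",         # exchange's ID for this person
        "buyeruid": "bidder-cookie-31c0"    # bidder's cookie-matched ID
    }
}

# The sensitive part: page context plus stable identifiers travel
# together, letting any recipient build a profile keyed on the user IDs.
print(json.dumps(bid_request["user"], indent=2))
```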

GDPR, by contrast, bakes in a requirement for personal data to be processed “in a manner that ensures appropriate security of the personal data”. So, uh, spot the disconnect.

The latest RTB complaints assert that personal data is broadcast via bid requests “hundreds of billions of times” per day — which they describe as “the most massive leakage of personal data recorded so far”.

While the complaints focus on security risks attached by default to leaky adtech, such a long chain of third parties being passed people’s data also raises plenty of questions over the validity of any claimed ‘consents’ for passing Internet users’ data down the adtech chain. (Related: A decision by the French CNIL last fall against a small local adtech player which it decided was unlawfully processing personal data obtained via RTB.)

This week will mark a year since GDPR came into force across the EU. And it’s fair to say that privacy complaints have been piling up, while enforcement actions — such as a $57M fine for Google from the French CNIL related to Android consent — remain far rarer.

One complexity with the RTB complaints is that the technology systems in question are both applied across EU borders and involve multiple entities (Google and the IAB). This means multiple privacy watchdogs need to work together to determine which of them is legally competent to address linked complaints that touch EU citizens in multiple countries.

Who leads can depend on where an entity has its main establishment in the EU and/or who is the data controller. If this is not clearly established it’s possible that various national actions could flow from the complaints, given the cross-border nature of the adtech — as in the CNIL decision against Android, for example. (Though Google made a policy change as of January 22, shifting its legal base for EU law enforcement to Google Ireland which looks intended to funnel all GDPR risk via the Irish DPC.)

The IAB Europe, meanwhile, has an office in Belgium but it’s not clear whether that’s the data controller in this case. Ausloos tells us that the Belgian DPA has already declared itself competent regarding the complaint filed against the IAB by the Panoptykon Foundation, while noting another possibility — that the IAB claims the data controller is IAB Tech Lab, based in New York — “in which case any and all DPAs across the EU would be competent”.

Veale also says different DPAs could argue that different parts of the IAB are in their jurisdiction. “We don’t know how the IAB structure really works, it’s very opaque,” he tells us.

The Irish DPC, which Google has sought to designate as the lead watchdog for its European business, has said it will prioritize scrutiny of the adtech sector in 2019, referencing the RTB complaints in its annual report earlier this year — where it warned the industry: “the protection of personal data is a prerequisite to the processing of any personal data within this ecosystem and ultimately the sector must comply with the standards set down by the GDPR”.

There’s no update on how the UK’s ICO is tackling the RTB complaint filed in the UK as yet — but Veale notes they have a call today. (And we’ve reached out to the ICO for comment.)

So far the same RTB complaints have not been filed in France and Germany — jurisdictions whose privacy watchdogs have a reputation for some of the most muscular enforcement of data protection in Europe.

The Belgian DPA’s recently elected president is, however, making muscular noises about GDPR enforcement, according to Ausloos, who cites a post-election speech in which the new president said the time to ‘sit back and relax’ is over. The complainants made sure to reference these comments in the RTB complaint, he adds.

Veale suggests the biggest blocker to resolving the RTB complaints is that all the various EU watchdogs “need a vision of what the world looks like after they take a given action”.

In the meanwhile, the adtech complaints keep stacking up.


Group accuses EU internet providers of violating net neutrality


The European Union has had net neutrality regulations in place since 2016, but some are concerned that internet service providers are playing fast and loose with those rules. A group of 45 advocacy organizations, companies and individuals (including the Electronic Frontier Foundation) have sent a letter to EU officials accusing 186 ISPs of jeopardizing net neutrality through the use of deep packet inspection, which examines the content of data traffic well beyond the basic headers. Existing rules allow carriers to shape traffic to optimize their network resources, but at least some ISPs are using this for “differentiated pricing,” prioritization or throttling.
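The distinction at issue is between header-level traffic shaping and deep packet inspection, which looks into the payload itself. A minimal sketch of that difference, assuming plain-HTTP traffic for simplicity (real DPI equipment also fingerprints TLS handshakes and other protocols):

```python
def shallow_inspect(dst_port: int) -> str:
    # Header-level shaping only sees addresses and ports: it can tell
    # "web traffic" from "other", but not which service is being used.
    return "web" if dst_port in (80, 443) else "other"

def deep_inspect(payload: bytes) -> str:
    # DPI reads the payload itself, e.g. the HTTP Host header, so an ISP
    # could classify (and price or throttle) traffic per service.
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            return line.split(b":", 1)[1].strip().decode()
    return "unknown"

packet = b"GET /watch HTTP/1.1\r\nHost: video.example.com\r\n\r\n"
print(shallow_inspect(80))   # port-level view: just "web"
print(deep_inspect(packet))  # payload view: the specific service
```

This is why the letter ties DPI to both differentiated pricing and privacy: the payload reveals which service, and often which page, a user is visiting.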

European regulators have “turned a blind eye” to these violations, the group said, and there are concerns that these overseers are trying to soften the rules further as part of negotiations over their next revision. This would let providers use deep packet inspection to effectively bypass the rules and charge more for different parts of the internet. It could compromise privacy, too, by giving companies user data under the pretext of juggling traffic.

It’s not certain how EU leaders will react. There’s plenty of time for input, though. Public consultations start in fall 2019, and the vote won’t likely take place until March 2020. The group just doesn’t want to take any chances — it’s hoping this nudge will set the policy on the right track before the larger public has its say.
