DuckDuckGo founder Gabriel Weinberg is coming to Disrupt

2019 is the year Facebook announced a ‘pivot to privacy’. At the same time, Google is trying to claim that privacy means letting it exclusively store and data-mine everything you do online. So what better time to sit down at Disrupt for a chat about what privacy really means with DuckDuckGo founder and CEO Gabriel Weinberg?

We’re delighted to announce that Weinberg is joining us at Disrupt SF (October 2-4).

The pro-privacy search engine he founded has been on a mission to shrink the shoulder-surfing creepiness of Internet searching for more than a decade, serving contextual keyword-based ads rather than pervasively tracking users to maintain privacy-hostile profiles. (If you can’t quite believe the decade bit, here’s DDG’s startup Elevator Pitch — which we featured on TC all the way back in 2008.)

It’s a position that looks increasingly smart as big tech comes under sharper political and regulatory scrutiny on account of the volume of information it’s amassing. (Not to mention what it’s doing with people’s data.)

Despite competing as a self-funded underdog against the biggest tech giants around, DuckDuckGo has been profitable and gaining users at a steady clip for years. It also recently took in a chunk of VC to capitalize on what its investors see as a growing international opportunity to help Internet users go about their business without being intrusively snooped on. It makes a compelling counter-narrative to the tech giants.

In more recent developments it has added a tracker blocker to its product mix — and been dabbling in policy advocacy — calling for a revival of a Do Not Track browser standard, after earlier attempts foundered when the industry failed to reach accord.

The political climate around privacy and data protection does look to be pivoting in such a way that Do Not Track could possibly swing back into play. But if — and, yes, it’s a big one — privacy ends up being a baked-in Internet norm, how might a pioneer like DuckDuckGo maintain its differentiating edge?

And, on the flip side, what if tech giants end up moving in on its territory by redefining privacy in their own self-serving image? We have questions and will be searching Weinberg for answers.

There’s also the fact that many a founder would have cut and run just half a decade into pushing against the prevailing industry grain. So we’re also keen to mine his views on entrepreneurial patience, and get a better handle on what makes him tick as a person — to learn how he’s turned a passion for building people-centric, principled products into a profitable business.

Disrupt SF runs October 2 – October 4 at the Moscone Center in San Francisco. Tickets are available here.


Amazon shareholders reject facial recognition sale ban to governments

Amazon shareholders have rejected two proposals aimed at curbing the company’s sale of its facial recognition technology to government customers.

The breakdown of the votes is not immediately known. A filing with the vote tally is expected later this week.

The first proposal would have requested Amazon to limit the sale of its Rekognition technology to police, law enforcement and federal agencies. A second resolution would have demanded an independent human and civil rights review into the use of the technology.

The proposals followed accusations that the technology is biased and inaccurate, shortcomings critics say could be used to racially discriminate against minorities.

The votes were non-binding, allowing the company to reject the outcome of the vote.

But the vote was almost inevitably set to fail. Following his divorce, Amazon founder and chief executive Jeff Bezos retains 12 percent of the company’s stock as well as the voting rights in his ex-wife’s remaining stake. The company’s top four institutional shareholders — The Vanguard Group, BlackRock, FMR and State Street — collectively hold about the same amount of voting rights as Bezos.

The resolutions failed despite backing from the ACLU, which accused the tech giant of being “non-responsive” to privacy concerns.

The civil liberties group rallied investors ahead of the Wednesday annual meeting in Seattle, where the tech giant has its headquarters. In a letter, the group said the sale of Amazon’s facial recognition tech to government agencies “fundamentally alters the balance of power between government and individuals, arming governments with unprecedented power to track, control, and harm people.”

“As shown by a long history of other surveillance technologies, face surveillance is certain to be disproportionately aimed at immigrants, religious minorities, people of color, activists, and other vulnerable communities,” the letter added.

The ACLU said investors and shareholders had the power “to protect Amazon from its own failed judgment.”

Amazon pushed back against claims that the technology is inaccurate, and called on the U.S. Securities and Exchange Commission to block the shareholder proposal prior to its annual shareholder meeting. The agency rejected Amazon’s efforts to stop the vote, amid growing scrutiny of the product.

Amazon spokesperson Lauren Lynch said on Tuesday, prior to the meeting, that the company operates “in line with our code of conduct which governs how we run our business and the use of our products.”

An email to the company following Wednesday’s meeting had not been returned at the time of writing.


Thousands of vulnerable TP-Link routers at risk of remote hijack

Thousands of TP-Link routers are vulnerable to a bug that can be used to remotely take control of the device, but it took the company over a year to publish the patches on its website.

The vulnerability allows any low-skilled attacker to remotely gain full access to an affected router. The exploit relies on the router’s default password, which many users don’t change.

In the worst-case scenario, an attacker could target vulnerable devices at massive scale, using a similar mechanism to botnets like Mirai — scouring the web and hijacking routers that still use default passwords like “admin” and “pass”.

Andrew Mabbitt, founder of U.K. cybersecurity firm Fidus Information Security, first discovered and disclosed the remote code execution bug to TP-Link in October 2017. TP-Link released a patch a few weeks later for the vulnerable WR940N router, but Mabbitt warned TP-Link again in January 2018 that another router, TP-Link’s WR740N, was also vulnerable to the same bug because the company reused vulnerable code between devices.

TP-Link said the vulnerability was quickly patched in both routers. But when we checked, the firmware for WR740N wasn’t available on the website.

When asked, a TP-Link spokesperson said the update was “currently available when requested from tech support,” but wouldn’t explain why. Only after TechCrunch reached out did TP-Link update the firmware page to include the latest security update.

Top countries with vulnerable WR740N routers. (Image: Shodan)

Routers have long been notorious for security problems. Because a router sits at the heart of any network, any flaw affecting it can have disastrous effects on every connected device. By gaining complete control over the router, Mabbitt said, an attacker could wreak havoc on a network. Modifying the router’s settings affects everyone connected to it: altering the DNS settings, for example, can trick users into visiting a fake page designed to steal their login credentials.

TP-Link declined to disclose how many potentially vulnerable routers it had sold, but said that the WR740N had been discontinued a year earlier in 2017. When we checked two search engines for exposed devices and databases, Shodan and Binary Edge, each suggested there are anywhere between 129,000 and 149,000 devices on the internet — though the number of vulnerable devices is likely far lower.

Mabbitt said he believed TP-Link still had a duty of care to alert customers of the update if thousands of devices are still vulnerable, rather than hoping they will contact the company’s tech support.

Both the U.K. and the U.S. state of California are set to soon require companies to sell devices with unique default passwords to prevent botnets from hijacking internet-connected devices at scale and using their collective internet bandwidth to knock websites offline.

In 2016, the Mirai botnet downed Dyn, a domain name service giant, knocking dozens of major sites offline for hours — including Twitter, Spotify and SoundCloud.


Apple has a plan to make online ads more private

For years, the web has been largely free thanks to online ads. The problem is that nobody likes them. When they’re not obnoxiously taking over your entire screen or autoplaying, they’re tracking you everywhere you go online.

Ads can track where you go and which sites you visit, and can be used to build up profiles on individuals — even if you never click on one. And when you do, they know what you bought, and they share that with other sites, so they know you were up late buying ice cream, cat food, or something a little more private.

The obvious logic would be to use an ad-blocker. But that’s not what keeps the internet thriving and available. Apple says it’s figured out some middle ground that keeps ads alive but without their nefarious ad tracking capabilities.

The tech giant came up with Privacy Preserving Ad Click Attribution. Yes, it’s a mouthful but the tech itself shows promise.

A bit of background: Any time you buy something online, the store that placed the ad knows you bought something and so do the other sites where the ad was placed. When a person clicks on an ad, the store wants to know which site the ad was clicked on so they know where to keep advertising, known as ad attribution. Ads often use tracking images — tiny, near-invisible pixel-sized trackers embedded on websites that know when you’ve opened a webpage. These pixels carry cookies, which make it easy for ads to track users across pages and entire websites. Using these invisible trackers, websites can build up profiles on people — whether they click ads or not — from site to site, such as their interests, what they want to buy, and more.
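The profile-building mechanic described above can be sketched in a few lines of Python. The cookie IDs and site names below are made up, and real trackers operate server-side at far larger scale, but the grouping logic is the same:

```python
from collections import defaultdict

# Hypothetical log of (cookie_id, site) pairs, recorded each time a
# tracking pixel fires. All values are illustrative only.
pixel_hits = [
    ("cookie-abc123", "news.example"),
    ("cookie-abc123", "shoes.example"),
    ("cookie-abc123", "icecream.example"),
    ("cookie-xyz789", "news.example"),
]

def build_profiles(hits):
    """Group pixel hits by cookie ID: one cookie, one cross-site profile."""
    profiles = defaultdict(list)
    for cookie_id, site in hits:
        profiles[cookie_id].append(site)
    return dict(profiles)

profiles = build_profiles(pixel_hits)
# "cookie-abc123" has now been observed on three unrelated sites --
# exactly the cross-site profile the tracking pixels enable.
```

Because the same cookie value travels with the pixel everywhere it is embedded, no click is ever needed: simply loading the pages is enough to link them together.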

Apple’s thinking, outlined in a blog post Wednesday, is that ads don’t need to share that you bought something from an online store with anyone else. Ads just need to know that someone — and not an identifiable person — clicked on an ad on a site and bought something on another.

By taking the identifiable person out of the equation, Apple says its new technology can help preserve user privacy without reducing the effectiveness of ad campaigns.

Apple’s new web technology, soon to be built into its Safari browser, is broken down into four parts.

Firstly, nobody should be identifiable based on their ad clicks. Ads often use long, unique tracking codes to identify a user visiting various sites and buying things; by limiting the number of campaign IDs to just a few dozen, an advertiser won’t be able to assign a unique tracking code to each ad click, making it far more difficult to track individual users across the web. Secondly, only the website where the ad was clicked will be allowed to measure ad clicks, cutting out third parties. Thirdly, the browser should delay sending ad click and conversion data — such as when someone signs up for a site or buys something — by a random period of up to two days, to further hide the user’s activity. That data is sent through a dedicated private browsing window to ensure it’s not associated with any other browsing data.
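A rough sketch of the first and third ideas (a tiny campaign-ID space plus a randomized reporting delay); the field names and function here are illustrative, based on the constraints described above, not Apple’s actual API:

```python
import random

MAX_IDS = 64          # only 64 campaign/conversion IDs: too few to name a user
MAX_DELAY_HOURS = 48  # reports are held back by a random delay of up to two days

def make_attribution_report(campaign_id: int, conversion_id: int) -> dict:
    """Build a low-entropy, delayed attribution report.

    Field names are illustrative, not Apple's wire format."""
    if not 0 <= campaign_id < MAX_IDS:
        raise ValueError("campaign ID outside the 6-bit range")
    if not 0 <= conversion_id < MAX_IDS:
        raise ValueError("conversion ID outside the 6-bit range")
    return {
        "campaign_id": campaign_id,
        "conversion_id": conversion_id,
        # The random delay decorrelates the report from the moment of purchase.
        "send_after_hours": random.uniform(0, MAX_DELAY_HOURS),
        # Note what is absent: no user ID, no cookie, no browsing history.
    }

report = make_attribution_report(campaign_id=17, conversion_id=3)
```

The report tells the advertiser which campaign converted, but nothing that singles out the person behind the click.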

Lastly, Apple said it can do this at the browser level, limiting how much data the ad networks and merchants can see.

Instead of knowing exactly who bought what and when, the privacy ad click technology will instead report back ad click and conversion data without identifying the person.

“As more and more browsers acknowledge the problems of cross-site tracking, we should expect privacy-invasive ad click attribution to become a thing of the past,” wrote Apple engineer John Wilander in a blog post.

One of the core features of the technology is limiting the amount of data that ads can collect.

“Today’s practice of ad click attribution has no practical limit on the bits of data, which allows for full cross-site tracking of users using cookies,” explained Wilander. “But by keeping the entropy of attribution data low enough, we believe the reporting can be done in a privacy preserving way.”

Simply put, by restricting the number of campaign and conversion IDs to just 64, advertisers are prevented from using long, unique values as identifiers to track a user from site to site. Apple says that restricted number will still give advertisers enough information to know how well their ads are performing. Advertisers, for example, can still see, based on a specific conversion ID, that a particular ad campaign led to more completed purchases than other campaigns run on a specific site in the last 48 hours.
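The entropy argument is simple arithmetic: 64 possible IDs carry only six bits of information, versus the 128 bits of a typical unique tracking token. A quick sketch:

```python
import math

# 64 possible campaign IDs carry log2(64) = 6 bits of information,
# so a campaign ID can at best split users into 64 buckets.
campaign_bits = math.log2(64)
distinguishable = 2 ** int(campaign_bits)  # 64 buckets, shared by millions

# Contrast with a conventional tracking token: a 128-bit UUID has
# 2**128 possible values, enough to give every single click a unique identity.
uuid_bits = 128
```

With millions of users sharing just 64 buckets, any individual click is lost in the crowd, which is the whole point of keeping the entropy low.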

But Apple concedes that real-time tracking of purchases may be a thing of the past if the technology becomes widely adopted. By delaying the ad click and conversion reports by up to two days, advertisers lose real-time insight into who buys what and when. Apple says there’s no way to protect a user’s privacy if attribution reports are sent as soon as someone buys something.

Apple is set to switch on the privacy feature by default in Safari later this year but knows it can’t go it alone. The company has proposed the technology as a standard to the World Wide Web Consortium in the hope other browser makers will pick up the torch and run with it.

Anyone with a short memory will know that web standards don’t always take off. The ill-fated Do Not Track web standard was meant to allow browser users to send a signal to websites and ad networks not to be tracked. The major browser makers adopted the feature, but mired in controversy, the standard never took off.

Apple thinks its proposed standard can succeed — chiefly because, unlike Do Not Track, the privacy ad click technology can be enforced in the browser alongside other privacy-minded technology. In Safari’s case, that’s Intelligent Tracking Prevention. Other browsers, like Google Chrome and Mozilla Firefox, are also doubling down on privacy features in an effort to win over the privacy crowd. Apple is also betting on users actively wanting this privacy technology, while balancing the concerns of advertisers who don’t want to be shut out through more drastic measures like users installing ad and content blockers.

The new privacy technology is in its developer-focused Safari Technology Preview 82, released last week, and will be available for web developers later this year.


London’s Tube network to switch on wi-fi tracking by default in July

Transport for London will roll out default wi-fi device tracking on the London Underground this summer, following a trial back in 2016.

In a press release announcing the move, TfL writes that “secure, privacy-protected data collection will begin on July 8” — while touting additional services, such as improved alerts about delays and congestion, which it frames as “customer benefits” and expects to launch “later in the year”.

As well as offering additional alerts-based services to passengers via its own website/apps, TfL says it could incorporate crowding data into its free open-data API — to allow app developers, academics and businesses to expand the utility of the data by baking it into their own products and services.

It’s not all just added utility though; TfL says it will also use the information to enhance its in-station marketing analytics — and, it hopes, top up its revenues — by tracking footfall around ad units and billboards.

Commuters using the UK capital’s publicly funded transport network who do not want their movements being tracked will have to switch off their wi-fi, or else put their phone in airplane mode when using the network.

To deliver data of the required detail, TfL says it undertook detailed digital mapping of all London Underground stations to identify where wi-fi routers are located, so it can understand how commuters move across the network and through stations.

It says it will erect signs at stations informing passengers that using the wi-fi will result in connection data being collected “to better understand journey patterns and improve our services” — and explaining that to opt out they have to switch off their device’s wi-fi.

Attempts in recent years by smartphone OSes to use MAC address randomization to try to defeat persistent device tracking have been shown to be vulnerable to reverse engineering via flaws in wi-fi set-up protocols. So, er, switch off to be sure.

We covered TfL’s wi-fi tracking beta back in 2017, when we reported that despite describing the harvested wi-fi data as “de-personalised”, and claiming individuals using the Tube network could not be identified, TfL nonetheless declined to release the “anonymized” data-set after a Freedom of Information request — saying there remained a risk of individuals being re-identified.

As has been shown many times before, reversing ‘anonymization’ of personal data can be frighteningly easy.

It’s not immediately clear from the press release or TfL’s website exactly how it will be encrypting the location data gathered from devices that authenticate to use the free wi-fi at the circa 260 wi-fi enabled London Underground stations.

Its explainer about the data collection does not go into any real detail about the encryption and security being used. (We’ve asked for more technical details.)

“If the device has been signed up for free Wi-Fi on the London Underground network, the device will disclose its genuine MAC address. This is known as an authenticated device,” TfL writes generally of how the tracking will work. (Ergo, this is another instance where ‘free’ wi-fi isn’t actually free — as one security expert we spoke to pointed out.)

“We process authenticated device MAC address connections (along with the date and time the device authenticated with the Wi-Fi network and the location of each router the device connected to). This helps us to better understand how customers move through and between stations — we look at how long it took for a device to travel between stations, the routes the device took and waiting times at busy periods.”
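As a rough illustration of the kind of processing TfL describes, journey times can be derived by ordering each pseudonymised device’s connection events. The station names, device IDs and schema below are invented; TfL’s real data model isn’t public:

```python
from datetime import datetime

# Illustrative connection log: (pseudonymised device ID, station, timestamp).
events = [
    ("dev-1f3a", "Kings Cross", datetime(2019, 7, 8, 8, 0)),
    ("dev-1f3a", "Oxford Circus", datetime(2019, 7, 8, 8, 9)),
    ("dev-1f3a", "Victoria", datetime(2019, 7, 8, 8, 17)),
]

def journey_legs(events):
    """Derive per-device journey legs (origin, destination, minutes)
    from time-ordered wi-fi connection events."""
    legs = []
    ordered = sorted(events, key=lambda e: (e[0], e[2]))
    for (dev_a, st_a, t_a), (dev_b, st_b, t_b) in zip(ordered, ordered[1:]):
        if dev_a == dev_b:  # only pair consecutive events from the same device
            legs.append((st_a, st_b, (t_b - t_a).total_seconds() / 60))
    return legs

legs = journey_legs(events)
# e.g. ("Kings Cross", "Oxford Circus", 9.0) -- the kind of journey-time
# insight TfL says it derives from authenticated MAC connections.
```

Note that nothing here needs the real MAC address: any stable per-device pseudonym is enough to reconstruct routes and waiting times, which is why how the pseudonym is generated matters so much.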

“We do not collect any other data generated by your device. This includes web browsing data and data from website cookies,” TfL adds, saying also that “individual customer data will never be shared and customers will not be personally identified from the data collected by TfL”.

In a section entitled “keeping information secure” it further writes: “Each MAC address is automatically depersonalised (pseudonymised) and encrypted to prevent the identification of the original MAC address and associated device. The data is stored in a restricted area of a secure location and it will not be linked to any other data at a device level.  At no time does TfL store a device’s original MAC address.”

Privacy and security concerns were raised about the location tracking around the time of the 2016 trial — such as why TfL had used a monthly salt key to encrypt the data rather than daily salts, which would have decreased the risk of data being re-identifiable should it leak out.
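Why salt rotation matters can be shown with a small sketch, using an illustrative salted-hash scheme rather than TfL’s actual implementation: with a monthly salt the same device maps to the same pseudonym all month, while daily salts break that linkability:

```python
import hashlib

def pseudonymise(mac: str, salt: str) -> str:
    """One-way salted hash of a MAC address (illustrative scheme only)."""
    return hashlib.sha256((salt + mac).encode()).hexdigest()

mac = "aa:bb:cc:dd:ee:ff"  # made-up device address

# Monthly salt: the pseudonym is stable for the whole month, so records
# from different days can still be linked back to the same device.
day1_monthly = pseudonymise(mac, "salt-2016-11")
day2_monthly = pseudonymise(mac, "salt-2016-11")

# Daily salts: the same device yields unrelated pseudonyms each day,
# so a leaked data-set can't be joined across days.
day1_daily = pseudonymise(mac, "salt-2016-11-21")
day2_daily = pseudonymise(mac, "salt-2016-11-22")
```

Under the monthly scheme the two values are identical; under the daily scheme they differ, which is exactly the re-identification risk critics raised about the trial.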

Such concerns persist — and security experts are now calling for full technical details to be released, given TfL is going full steam ahead with a rollout.

A report in Wired suggests TfL has switched from hashing to a system of tokenisation – “fully replacing the MAC address with an identifier that cannot be tied back to any personal information” – which TfL billed as a “more sophisticated mechanism” than it had used before. We’ll update as and when we get more from TfL.

Another question over the deployment at the time of the trial was what legal basis it would use for pervasively collecting people’s location data — since the system requires an active opt-out by commuters, a consent-based legal basis would not be appropriate.

In a section on the legal basis for processing the Wi-Fi connection data, TfL writes now that its ‘legal ground’ is two-fold:

  • Our statutory and public functions
  • to undertake activities to promote and encourage safe, integrated, efficient and economic transport facilities and services, and to deliver the Mayor’s Transport Strategy

So, presumably, you can file ‘increasing revenue around adverts in stations by being able to track nearby footfall’ under ‘helping to deliver (read: fund) the mayor’s transport strategy’.

(Or as TfL puts it: “[T]he data will also allow TfL to better understand customer flows throughout stations, highlighting the effectiveness and accountability of its advertising estate based on actual customer volumes. Being able to reliably demonstrate this should improve commercial revenue, which can then be reinvested back into the transport network.”)

On data retention it specifies that it will hold “depersonalised Wi-Fi connection data” for two years — after which it will aggregate the data and retain those non-individual insights (presumably indefinitely, or per its standard data retention policies).

“The exact parameters of the aggregation are still to be confirmed, but will result in the individual Wi-Fi connection data being removed. Instead, we will retain counts of activities grouped into specific time periods and locations,” it writes on that.

It further notes that aggregated data “developed by combining depersonalised data from many devices” may also be shared with other TfL departments and external bodies. So that processed data could certainly travel.

Of the “individual depersonalised device Wi-Fi connection data”, TfL claims it is accessible only to “a controlled group of TfL employees” — without specifying how large this group of staff is, or what sort of controls and processes will be in place to prevent the risk of A) the data being hacked and/or leaking out, or B) the data being re-identified by a staff member.

A TfL employee with intimate knowledge of a partner’s daily travel routine might, for example, have access to enough information via the system to be able to reverse the depersonalization.

Without more technical details we just don’t know. Though TfL says it worked with the UK’s data protection watchdog in designing the data collection with privacy front of mind.

“We take the privacy of our customers very seriously. A range of policies, processes and technical measures are in place to control and safeguard access to, and use of, Wi-Fi connection data. Anyone with access to this data must complete TfL’s privacy and data protection training every year,” it also notes elsewhere.

Despite holding individual level location data for two years, TfL is also claiming that it will not respond to requests from individuals to delete or rectify any personal location data it holds, i.e. if people seek to exercise their information rights under EU law.

“We use a one-way pseudonymisation process to depersonalise the data immediately after it is collected. This means we will not be able to single out a specific person’s device, or identify you and the data generated by your device,” it claims.

“This means that we are unable to respond to any requests to access the Wi-Fi data generated by your device, or for data to be deleted, rectified or restricted from further processing.”

Again, the distinctions it is making there are raising some eyebrows.

What’s amply clear is that the volume of data generated by a full rollout of wi-fi tracking across the lion’s share of the London Underground will be staggering.

More than 509 million “depersonalised” pieces of data were collected from 5.6 million mobile devices during the four-week 2016 trial alone — comprising some 42 million journeys. And that was a brief trial covering a much smaller subset of the network.

As big data giants go, TfL is clearly gunning to be right up there.
