London’s Tube network to switch on wi-fi tracking by default in July

Transport for London will roll out default wi-fi device tracking on the London Underground this summer, following a trial back in 2016.

In a press release announcing the move, TfL writes that “secure, privacy-protected data collection will begin on July 8” — while touting additional services it frames as “customer benefits”, such as improved alerts about delays and congestion, which are expected to launch “later in the year”.

As well as offering additional alerts-based services to passengers via its own website/apps, TfL says it could incorporate crowding data into its free open-data API — to allow app developers, academics and businesses to expand the utility of the data by baking it into their own products and services.

It’s not all just added utility though; TfL says it will also use the information to enhance its in-station marketing analytics — and, it hopes, top up its revenues — by tracking footfall around ad units and billboards.

Commuters using the UK capital’s publicly funded transport network who do not want their movements tracked will have to switch off their wi-fi, or else put their phone in airplane mode, when using the network.

To deliver data of the required granularity, TfL says it undertook detailed digital mapping of all London Underground stations to identify where wi-fi routers are located, so it can understand how commuters move across the network and through stations.

It says it will erect signs at stations informing passengers that using the wi-fi will result in connection data being collected “to better understand journey patterns and improve our services” — and explaining that to opt out they have to switch off their device’s wi-fi.

Attempts in recent years by smartphone OSes to use MAC address randomization to try to defeat persistent device tracking have been shown to be vulnerable to reverse engineering via flaws in wi-fi set-up protocols. So, er, switch off to be sure.

We covered TfL’s wi-fi tracking beta back in 2017, when we reported that despite claiming the harvested wi-fi data was “de-personalised”, and claiming individuals using the Tube network could not be identified, TfL nonetheless declined to release the “anonymized” data-set after a Freedom of Information request — saying there remains a risk of individuals being re-identified.

As has been shown many times before, reversing ‘anonymization’ of personal data can be frighteningly easy.

It’s not immediately clear from the press release or TfL’s website exactly how it will be encrypting the location data gathered from devices that authenticate to use the free wi-fi at the circa 260 wi-fi enabled London Underground stations.

Its explainer about the data collection does not go into any real detail about the encryption and security being used. (We’ve asked for more technical details.)

“If the device has been signed up for free Wi-Fi on the London Underground network, the device will disclose its genuine MAC address. This is known as an authenticated device,” TfL writes generally of how the tracking will work. (Ergo, this is another instance where ‘free’ wi-fi isn’t actually free — as one security expert we spoke to pointed out.)

“We process authenticated device MAC address connections (along with the date and time the device authenticated with the Wi-Fi network and the location of each router the device connected to). This helps us to better understand how customers move through and between stations — we look at how long it took for a device to travel between stations, the routes the device took and waiting times at busy periods.”
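To make that processing concrete, here is a minimal, purely illustrative sketch of how journey times could be derived from records of the shape TfL describes; the field names and sample values are our own assumptions, not TfL’s actual schema:

```python
from datetime import datetime

# Hypothetical connection records of the shape TfL describes:
# (pseudonymous device ID, connection time, router location).
# Field names and values here are illustrative assumptions only.
connections = [
    ("device_abc", datetime(2019, 7, 8, 8, 3), "Oxford Circus"),
    ("device_abc", datetime(2019, 7, 8, 8, 21), "Liverpool Street"),
]

# Sort one device's connections by time and read off durations between stations.
connections.sort(key=lambda record: record[1])
for (_, t_a, origin), (_, t_b, destination) in zip(connections, connections[1:]):
    minutes = (t_b - t_a).total_seconds() / 60
    print(f"{origin} -> {destination}: {minutes:.0f} minutes")
```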

“We do not collect any other data generated by your device. This includes web browsing data and data from website cookies,” TfL adds, saying also that “individual customer data will never be shared and customers will not be personally identified from the data collected by TfL”.

In a section entitled “keeping information secure” it further writes: “Each MAC address is automatically depersonalised (pseudonymised) and encrypted to prevent the identification of the original MAC address and associated device. The data is stored in a restricted area of a secure location and it will not be linked to any other data at a device level.  At no time does TfL store a device’s original MAC address.”

Privacy and security concerns were raised about the location tracking around the time of the 2016 trial — such as why TfL had used a monthly salt key when hashing the data rather than daily salts, which would have decreased the risk of the data being re-identifiable should it leak out.
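For illustration only: pseudonymising a MAC address this way generally means hashing it together with a secret salt. The sketch below is a guess at the general technique rather than TfL’s actual code, but it shows why a daily salt limits linkability more than a monthly one: once the salt changes, the same MAC maps to a different pseudonym, so journeys can no longer be joined up across days.

```python
import hashlib
import secrets

def pseudonymise(mac: str, salt: bytes) -> str:
    """Deterministically map a MAC address to a pseudonym under a given salt."""
    return hashlib.sha256(salt + mac.encode()).hexdigest()

mac = "aa:bb:cc:dd:ee:ff"                                    # example device
monthly_salt = secrets.token_bytes(32)                       # reused for a month
daily_salts = [secrets.token_bytes(32) for _ in range(2)]    # fresh salt each day

# A reused salt gives the same pseudonym every day, so a device's movements
# can be linked across the whole month if the data set leaks.
assert pseudonymise(mac, monthly_salt) == pseudonymise(mac, monthly_salt)

# Daily salts give different pseudonyms on different days, so a leak only
# exposes journeys within a single day.
assert pseudonymise(mac, daily_salts[0]) != pseudonymise(mac, daily_salts[1])
```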

Such concerns persist — and security experts are now calling for full technical details to be released, given TfL is going full steam ahead with a rollout.

A report in Wired suggests TfL has switched from hashing to a system of tokenisation – “fully replacing the MAC address with an identifier that cannot be tied back to any personal information” – which TfL billed as a “more sophisticated mechanism” than it had used before. We’ll update as and when we get more from TfL.
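Tokenisation, by contrast, typically replaces the MAC with a random identifier that has no mathematical relationship to the original value, roughly along these lines (again a hedged sketch of the general technique, not TfL’s implementation):

```python
import secrets

# In-memory lookup used only at the point of collection; the raw MAC address
# is never written to storage, only the random token is.
_token_table = {}

def tokenise(mac: str) -> str:
    """Assign each MAC address a random token that reveals nothing about it."""
    if mac not in _token_table:
        _token_table[mac] = secrets.token_hex(16)
    return _token_table[mac]
```

Unlike a salted hash, a random token cannot be brute-forced back to a MAC address; without the lookup table there is nothing to reverse.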

Another question over the deployment at the time of the trial was what legal basis it would use for pervasively collecting people’s location data — since the system requires an active opt-out by commuters, a consent-based legal basis would not be appropriate.

In a section on the legal basis for processing the Wi-Fi connection data, TfL writes now that its ‘legal ground’ is two-fold:

  • Our statutory and public functions
  • to undertake activities to promote and encourage safe, integrated, efficient and economic transport facilities and services, and to deliver the Mayor’s Transport Strategy

So, presumably, you can file ‘increasing revenue around adverts in stations by being able to track nearby footfall’ under ‘helping to deliver (read: fund) the mayor’s transport strategy’.

(Or as TfL puts it: “[T]he data will also allow TfL to better understand customer flows throughout stations, highlighting the effectiveness and accountability of its advertising estate based on actual customer volumes. Being able to reliably demonstrate this should improve commercial revenue, which can then be reinvested back into the transport network.”)

On data retention it specifies that it will hold “depersonalised Wi-Fi connection data” for two years — after which it will aggregate the data and retain those non-individual insights (presumably indefinitely, or per its standard data retention policies).

“The exact parameters of the aggregation are still to be confirmed, but will result in the individual Wi-Fi connection data being removed. Instead, we will retain counts of activities grouped into specific time periods and locations,” it writes on that.

It further notes that aggregated data “developed by combining depersonalised data from many devices” may also be shared with other TfL departments and external bodies. So that processed data could certainly travel.

Of the “individual depersonalised device Wi-Fi connection data”, TfL claims it is accessible only to “a controlled group of TfL employees” — without specifying how large this group of staff is, nor what sort of controls and processes will be in place to prevent the risk of A) the data being hacked and/or leaking out or B) the data being re-identified by a staff member.

A TfL employee with intimate knowledge of a partner’s daily travel routine might, for example, have access to enough information via the system to be able to reverse the depersonalization.

Without more technical details we just don’t know. Though TfL says it worked with the UK’s data protection watchdog in designing the data collection with privacy front of mind.

“We take the privacy of our customers very seriously. A range of policies, processes and technical measures are in place to control and safeguard access to, and use of, Wi-Fi connection data. Anyone with access to this data must complete TfL’s privacy and data protection training every year,” it also notes elsewhere.

Despite holding individual level location data for two years, TfL is also claiming that it will not respond to requests from individuals to delete or rectify any personal location data it holds, i.e. if people seek to exercise their information rights under EU law.

“We use a one-way pseudonymisation process to depersonalise the data immediately after it is collected. This means we will not be able to single out a specific person’s device, or identify you and the data generated by your device,” it claims.

“This means that we are unable to respond to any requests to access the Wi-Fi data generated by your device, or for data to be deleted, rectified or restricted from further processing.”

Again, the distinctions it is making there are raising some eyebrows.

What’s amply clear is that the volume of data generated by a full rollout of wi-fi tracking across the lion’s share of the London Underground will be staggering.

More than 509 million “depersonalised” pieces of data were collected from 5.6 million mobile devices during the four-week 2016 trial alone — comprising some 42 million journeys. And that was a very brief trial which covered a much smaller sub-set of the network.

As big data giants go, TfL is clearly gunning to be right up there.


Loot, the UK digital current account for students and millennials, enters administration after a potential sale falls through

Loot, the digital current account aimed at students and millennials, has called in administrators after appearing to have run out of cash. According to sources, the U.K. fintech was unable to raise additional funding in time after a potential sale to banking giant RBS fell through.

Intriguingly, Royal Bank of Scotland Group indirectly owned a 25 percent stake in Loot via an investment by Bó, the digital-only retail bank being developed by RBS subsidiary NatWest. RBS announced that Bó had invested £2 million in Loot in January this year, following an initial investment of £3 million in July 2018.

It was also presumed by many fintech insiders that Loot had been white-labelled and was powering the Bó product. Clearly that was never the case, and it now raises questions around why RBS/NatWest would invest in a competitor, only to see its demise six months later.

Loot’s other investors included Portag3 Ventures (Power Corporation’s corporate VC arm), Austrian VC firm Speedinvest, Rocket Internet’s GFC, and a number of unnamed angel investors and smaller funds.

Founded in 2014 by now-25-year-old Ollie Purdue as he was finishing up university, Loot offers a digital-only current account aimed at students and millennials, and has around 250,000 registered accounts. It comes with a Mastercard and mobile app, with a particular focus on spending insights and real-time budgeting. Like a number of competitors in the “neobank” space, Loot doesn’t have a full banking licence and instead operates under an electronic money licence through a partnership with FCA-regulated Wirecard.

Meanwhile, sources tell me that Loot’s 60 or so employees were informed this lunchtime. I also understand that efforts by Loot founder Ollie Purdue and others within London’s close-knit fintech community are already underway to safe-land as many of those employees as possible, and that around 30 job offers are already in motion.

Loot declined to comment. I’ve reached out to RBS and will update this post if and when I hear back.


Facebook found hosting masses of far right EU disinformation networks

A multi-month hunt for political disinformation spreading on Facebook in Europe suggests there are concerted efforts to use the platform to spread bogus far right propaganda to millions of voters ahead of a key EU vote which kicks off tomorrow.

Following the independent investigation, Facebook has taken down a total of 77 pages and 230 accounts from Germany, UK, France, Italy, Spain and Poland — which had been followed by an estimated 32 million people and generated 67 million ‘interactions’ (i.e. comments, likes, shares) in the last three months alone.

The bogus, mainly far-right, disinformation networks were not identified by Facebook — they were reported to it by campaign group Avaaz — which says the fake pages had more Facebook followers and interactions than all the main EU far right and anti-EU parties combined.

“The results are overwhelming: the disinformation networks upon which Facebook acted had more interactions (13 million) in the past three months than the main party pages of the League, AfD, VOX, Brexit Party, Rassemblement National and PiS combined (9 million),” it writes in a new report.

“Although interactions is the figure that best illustrates the impact and reach of these networks, comparing the number of followers of the networks taken down reveals an even clearer image. The Facebook networks taken down had almost three times (5.9 million) the number of followers as AfD, VOX, Brexit Party, Rassemblement National and PiS’s main Facebook pages combined (2 million).”

Avaaz has previously found and announced far right disinformation networks operating in Spain, Italy and Poland — and a spokesman confirmed to us it’s re-reporting some of its findings now (such as the ~30 pages and groups in Spain that had racked up 1.7M followers and 7.4M interactions, which we covered last month) to highlight an overall total for the investigation.

“Our report contains new information for France, United Kingdom and Germany,” the spokesman added.

Examples of politically charged disinformation being spread via Facebook by the bogus networks it found include a fake viral video seen by 10 million people that supposedly shows migrants in Italy destroying a police car (the footage was actually from a movie and, Avaaz adds, had been “debunked years ago”); a story in Poland claiming that migrant taxi drivers rape European women, including a fake image; and fake news about a child cancer center being closed down by Catalan separatists in Spain.

There’s lots more country-specific detail in its full report.

In all, Avaaz reported more than 500 suspicious pages and groups to Facebook related to the three-month investigation of Facebook disinformation networks in Europe. Though Facebook only took down a subset of the far right muck-spreaders — around 15% of the suspicious pages reported to it.

“The networks were either spreading disinformation or using tactics to amplify their mainly anti-immigration, anti-EU, or racist content, in a way that appears to breach Facebook’s own policies,” Avaaz writes of what it found.

It estimates that content posted by all the suspicious pages it reported had been viewed some 533 million times over the pre-election period. Albeit, there’s no way to know whether or not everything it judged suspicious actually was.

In a statement responding to Avaaz’s findings, Facebook told us:

We thank Avaaz for sharing their research for us to investigate. As we have said, we are focused on protecting the integrity of elections across the European Union and around the world. We have removed a number of fake and duplicate accounts that were violating our authenticity policies, as well as multiple Pages for name change and other violations. We also took action against some additional Pages that repeatedly posted misinformation. We will take further action if we find additional violations.

The company did not respond to our question asking why it failed to unearth this political disinformation itself.

Ahead of the EU parliament vote, which begins tomorrow, Facebook invited a select group of journalists to tour a new Dublin-based election security ‘war room’ — where it talked about a “five pillars of countering disinformation” strategy to prevent cynical attempts to manipulate voters’ views.

But as Avaaz’s investigation shows, there’s plenty of political disinformation flying by entirely unchecked.

One major ongoing issue where political disinformation and Facebook’s platform are concerned is that how the company enforces its own rules remains entirely opaque.

We don’t get to see all the detail — so can’t judge and assess all its decisions. Yet Facebook has been known to shut down swathes of accounts deemed fake ahead of elections, while apparently failing entirely to find other fakes (such as in this case).

It’s a situation that does not look compatible with the continued functioning of democracy given Facebook’s massive reach and power to influence.

Nor is the company under an obligation to report every fake account it confirms. Instead, Facebook gets to control the timing and flow of any official announcements it chooses to make about “coordinated inauthentic behaviour” — dropping these self-selected disclosures as and when it sees fit, and making them sound as routine as possible by cloaking them in its standard, dryly worded newspeak.

Back in January, Facebook COO Sheryl Sandberg admitted publicly that the company is blocking more than 1M fake accounts every day. If Facebook was reporting every fake it finds it would therefore need to do so via a real-time dashboard — not sporadic newsroom blog posts that inherently play down the scale of what is clearly embedded into its platform, and may be so massive and ongoing that it’s not really possible to know where Facebook stops and ‘Fakebook’ starts.

The suspicious behaviours that Avaaz attached to the pages and groups it found that appeared to be in breach of Facebook’s stated rules include the use of fake accounts, spamming, misleading page name changes and suspected coordinated inauthentic behavior.

When Avaaz previously reported the Spanish far right networks, Facebook subsequently told us it had removed “a number” of pages violating its “authenticity policies”, including one page for name change violations, but claimed “we aren’t removing accounts or Pages for coordinated inauthentic behavior”.

So again, it’s worth emphasizing that Facebook gets to define what is and isn’t acceptable on its platform — including creating terms that seek to normalize its own inherently dysfunctional ‘rules’ and their ‘enforcement’.

Such as by creating terms like “coordinated inauthentic behavior”, which sets a threshold of Facebook’s own choosing for what it will and won’t judge to be political disinformation. It’s inherently self-serving.

Given that Facebook only acted on a small proportion of what Avaaz found and reported overall, we might posit that the company is setting a very high bar for acting against suspicious activity. And that plenty of election fiddling is free flowing under its feeble radar. (When we previously asked Facebook whether it was disputing Avaaz’s finding of coordinated inauthentic behaviour vis-a-vis the far right disinformation networks it reported in Spain the company did not respond to the question.)

Much of the publicity around Facebook’s self-styled “election security” efforts has also focused on how it’s enforcing new disclosure rules around political ads. But again political disinformation masquerading as organic content continues being spread across its platform — where it’s being shown to be racking up millions of interactions with people’s brains and eyeballs.

Plus, as we reported yesterday, research conducted by the Oxford Internet Institute into pre-EU election content sharing on Facebook has found that sources of disinformation-spreading ‘junk news’ generate far greater engagement on its platform than professional journalism.

So while Facebook’s platform is also clearly full of real people sharing actual news and views, the fake BS which, Avaaz’s findings imply, is also flooding the platform gets spread around more on a per-unit basis. And it’s democracy that suffers — because vote manipulators are able to pass off manipulative propaganda and hate speech as bona fide news and views as a consequence of Facebook publishing the fake stuff alongside genuine opinions and professional journalism.

It does not have algorithms that can perfectly distinguish one from the other, and has suggested it never will.

The bottom line is that even if Facebook dedicates far more resource (human and AI) to rooting out ‘election interference’ the wider problem is that a commercial entity which benefits from engagement on an ad-funded platform is also the referee setting the rules.

Indeed, the whole loud Facebook publicity effort around “election security” looks like a cynical attempt to distract the rest of us from how broken its rules are. Or, in other words, a platform that accelerates propaganda is also seeking to manipulate and skew our views.


TransferWise now valued at $3.5B following a new $292M secondary round

TransferWise, the London-headquartered international money transfer service, is disclosing a new $292 million secondary round that sees investors value the company at $3.5 billion. That’s more than double the valuation TransferWise achieved in late 2017 at the time of its $280 million Series E round.

The new secondary funding — with no new cash entering TransferWise’s balance sheet, as a number of existing shareholders sell all or a portion of their holding — was led by growth capital investors Lead Edge Capital, Lone Pine Capital and Vitruvian Partners.

Existing investors Andreessen Horowitz and Baillie Gifford expanded their holdings in TransferWise, whilst investment was also provided by funds managed by BlackRock.

In a call, TransferWise co-founder and Chairman Taavet Hinrikus told me the round was oversubscribed, too. The arbitrary figure of $292 million was simply the result of how much liquidity existing shareholders were willing to make available, and nowhere near the upper level of interest.

He also pointed out that existing institutional investors aren’t exiting during this round, with Andreessen Horowitz and Baillie Gifford actually doubling down somewhat. Instead, this liquidity event was mainly a way for TransferWise employees — existing and presumably former — to cash in on some or all of their stake. And for new later-stage investors to jump on board.

All of which — and at the risk of repeating myself — would suggest that a potential TransferWise public offering is still a long way off yet, something that Hinrikus doesn’t refute. “Why would we go public?” he says rhetorically, noting that the company is still growing fast and capital isn’t an issue.

So why then in contrast are other fast-growing companies going public much earlier these days? “You’d have to ask them,” Hinrikus says, batting away my question in his usual laid back and matter-of-fact manner. Pressed a little harder, he says that one difference might be that TransferWise’s institutional investors aren’t (yet) pushing for a liquidity event on the scale of an IPO. As already noted, in some instances they are actually purchasing more shares in the company.

Hinrikus also says the regulatory climate is now changing in TransferWise’s favour. In 2018, the EU voted to mandate the outlawing of exchange rate mark-ups on international payments through its Cross-Border Payments Regulations, something that the London fintech company has long been lobbying for. Australia is thought to be considering similar regulatory measures following an inquiry into the issue by the Australian Competition and Consumer Commission.

To that end, TransferWise says it now serves 5 million customers worldwide, processing £4 billion every month. Every year it estimates it saves customers £1 billion in bank fees. The service currently supports 1,600 currency routes, and is available for 49 currencies.

The company employs over 1,600 people across twelve global offices and says it will hire 750 more people in the next 12 months. Audited financials for fiscal year ending March 2018 revealed 77 percent revenue growth to £117 million and a net profit of £6.2 million after tax.


HolodeckVR raises €3M from Germany’s ProSiebenSat1 to put VR onto dodgems

Consumer VR might not have taken off in the mainstream but it’s still fun to use, and it’s even more fun to use in groups. There’s something of an arcade renaissance for VR going on right now, as well as a wave of location-based multi-user VR experiences.

That’s the premise behind Munich-based HolodeckVR, which is using proprietary tech to blend radio frequency, IR tracking and on-device IMUs to bring multi-user, positionally tracked VR to mobile headsets.

How would you like to do VR in a big group, and on fairground dodgems/bumper cars? That’s the kind of thing this startup is cooking up.

A spin-off from the prestigious Fraunhofer Institute for Integrated Circuits IIS, it uses its own technology, which allows visitors to experience virtual reality in groups of up to 20 people and move around an empty 10x20m space wearing just VR goggles.

Holodeck says it can be used for different types of events (entertainment, birthday parties and corporate team building) and can work through several thousand guests per day.

It’s now raised €3 million from strategic partner ProSiebenSat.1, the leading German entertainment player. This will allow Holodeck to expand its open content platform and extend its network of locations.

The Munich-based media company owns a potential distribution channel for scaling Holodeck VR locations at leisure and activity parks, while other synergies with ProSiebenSat.1 include live broadcasting and VR content generation.

With 7Sports, the sports business unit of ProSiebenSat.1, Holodeck VR plans eSports events leveraging the Holodeck VR platform.

HolodeckVR’s Jonathan Nowak Delgado says: “With this investment, we’ll aim to become the VR touchpoint for the next generation by offering exciting new experiences that are simple, social, and fun.”

Holodeck VR’s experiences combine the real world and digital world so that you can take a ride in bumper cars or on a rollercoaster.

I hope they will have plenty of sick bags at the ready.
