Cloudflare's privacy-focused 1.1.1.1 service is available on phones

Cloudflare launched its 1.1.1.1 service in April as a bid to improve privacy and performance for desktop users, and now it’s making that technology available to mobile users. The company has released 1.1.1.1 apps for Android and iOS that switch the DNS service on and off with a single button press. So long as it’s on, it should be harder for your internet provider to track your web history, block sites or redirect traffic. You might also see performance improvements, particularly in areas where connections aren’t particularly fast to begin with.

The service remains free, and there are pragmatic reasons for that. It not only serves as an advertising mechanism for Cloudflare, it potentially improves the performance of the sites themselves. There are certainly alternative DNS options, and this is merely one part of a larger security strategy. However, this might be the most accessible solution of the bunch — you don’t have to know the first thing about domain names or ISP tracking to see a difference.

Cloudflare rolls out its 1.1.1.1 privacy service to iOS, Android

Months after announcing its privacy-focused DNS service, Cloudflare is bringing 1.1.1.1 to mobile users.

Granted, nothing ever stopped anyone from using 1.1.1.1 on their phones or tablets already. But the app, now available for iPhones, iPads and Android devices, aims to make it easier for anyone to use the company’s free consumer DNS service.

The app is a single button: one push switches the service on, another switches it off. That’s it.

Cloudflare rolled out 1.1.1.1 earlier this year on April Fools’ Day, no less, but privacy is no joke to the San Francisco-based networking giant. In using the service, you let Cloudflare handle all of your DNS information, like when an app on your phone tries to connect to the internet, or you type in the web address of any site. By funneling that DNS data through 1.1.1.1, it can make it more difficult for your internet provider to know which sites you’re visiting, and also ensure that you can get to the site you want without having your connection censored or hijacked.
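For the technically curious, the same resolver is reachable directly through Cloudflare’s documented DNS-over-HTTPS JSON API at cloudflare-dns.com. As a rough illustration of what “funneling DNS data through 1.1.1.1” looks like at the protocol level, here is a minimal Python sketch; the sample response below is a trimmed, illustrative payload rather than a live lookup:

```python
from urllib.parse import urlencode

# Cloudflare's public DNS-over-HTTPS JSON endpoint.
DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def build_doh_url(hostname, record_type="A"):
    """Build the GET URL for a DoH JSON lookup.

    A real request must also carry the header 'accept: application/dns-json'.
    """
    return f"{DOH_ENDPOINT}?{urlencode({'name': hostname, 'type': record_type})}"

def extract_addresses(doh_response):
    """Pull the resolved addresses out of a parsed DoH JSON response."""
    return [answer["data"] for answer in doh_response.get("Answer", [])]

# Trimmed, illustrative example of the JSON shape the resolver returns for an
# A-record lookup (not a live response).
sample_response = {
    "Status": 0,  # 0 = NOERROR
    "Answer": [
        {"name": "example.com", "type": 1, "TTL": 3600, "data": "93.184.216.34"},
    ],
}

print(build_doh_url("example.com"))        # the query URL a DoH client would fetch
print(extract_addresses(sample_response))  # ['93.184.216.34']
```

Because a real client sends that GET request over TLS, the query itself is encrypted in transit — which is what keeps the lookup hidden from an internet provider.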

It’s not a panacea for privacy, mind you — but it’s better than nothing.

The service is also blazing fast, shaving valuable seconds off page loading times — particularly in parts of the world where things work, well, a little slower.

“We launched 1.1.1.1 to offer consumers everywhere a better choice for fast and private Internet browsing,” said Matthew Prince, Cloudflare’s chief executive. “The 1.1.1.1 app makes it even easier for users to unlock fast and encrypted DNS on their phones.”

Judge tells Amazon to provide Echo recordings in double homicide trial

Prosecutors are once again hoping that smart speaker data could be the key to securing a murder conviction. A New Hampshire judge has ordered Amazon to provide recordings from an Echo speaker between January 27th, 2017 and January 29th, 2017 (plus info identifying paired smartphones) to aid in investigating a double homicide case. The court decided there was probable cause to believe the speaker might have captured audio of the murders and their aftermath.

Law enforcement had charged Timothy Verrill with murdering Christine Sullivan and Jenna Pellegrini at the home of Sullivan’s boyfriend Dean Smoronk. Verrill had access to the home’s security code and had been seen on surveillance cameras with the two women, leading investigators to believe that Smoronk’s Echo might have picked up additional information.

Whether or not there’s any information to provide isn’t clear. In a statement, Amazon didn’t acknowledge the presence of any recordings but said it wouldn’t provide customer data unless there was a “valid and binding legal demand properly served on us.”

However, the likelihood of recordings isn’t terribly high. Like many smart speakers, the Echo isn’t continuously recording — it only captures audio when someone uses the speaker’s hotword (typically “Alexa”), and then only for the brief moment it takes to issue a command. The murderer would have needed to explicitly activate the Echo while committing the crimes. Paired phones wouldn’t necessarily have helped, either. You don’t need to link a specific phone to an Echo to use it, and a paired phone won’t necessarily give away who used the speaker.

As it stands, prosecution teams haven’t had much success using Echo devices to secure convictions. In 2017, a judge dismissed the high-profile case against James Bates after the hot tub death of his friend Victor Collins. Attorneys managed to obtain data from Amazon, but that and other evidence wasn’t enough to rule out “other reasonable explanations” for Collins’ death, such as his extremely high blood alcohol level. A smart speaker like the Echo is far from a surefire piece of evidence in cases like this, even if prosecutors hope otherwise.

Hackers stole income, immigration and tax data in Healthcare.gov breach, government confirms

Hackers siphoned off thousands of Healthcare.gov applications by breaking into the accounts of brokers and agents tasked with helping customers sign up for healthcare plans.

The Centers for Medicare and Medicaid Services (CMS) said in a post buried on its website that the hackers obtained “inappropriate access” to a number of broker and agent accounts, which “engaged in excessive searching” of the government’s healthcare marketplace systems.

CMS didn’t say how the attackers gained access to the accounts, but said it shut off the affected accounts “immediately.”

In a letter sent to affected customers this week (and buried on the Healthcare.gov website), CMS disclosed that sensitive personal data — including partial Social Security numbers, immigration status and some tax information — may have been taken.

According to the letter, the data included:

  • Name, date of birth, address, sex, and the last four digits of the Social Security number (SSN), if SSN was provided on the application;
  • Other information provided on the application, including expected income, tax filing status, family relationships, whether the applicant is a citizen or an immigrant, immigration document types and numbers, employer name, whether the applicant was pregnant, and whether the applicant already had health insurance;
  • Information provided by other federal agencies and data sources to confirm the information provided on the application, and whether the Marketplace asked the applicant for documents or explanations;
  • The results of the application, including whether the applicant was eligible to enroll in a qualified health plan (QHP), and if eligible, the tax credit amount; and
  • If the applicant enrolled, the name of the insurance plan, the premium, and dates of coverage.

But the government said that no bank account information (including credit card numbers) or diagnostic and treatment information was taken.

President Obama’s healthcare law, the Affordable Care Act — known as “Obamacare” — allows Americans to obtain health insurance if they are not already covered. In order to sign up for healthcare plans, customers have to submit sensitive data.

Some 11.8 million people signed up for coverage for 2018.

CMS previously said that the breach affected 75,000 individuals, but a person familiar with the investigation said that the number is expected to change. The stolen files also included data on children.

A spokesperson said CMS is expected to give an update early next week at the latest.

Healthcare.gov’s enrollment period is set to close on December 15.

Children are being “datafied” before we’ve understood the risks, report warns

A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long-term impact of profiling minors, once these children become adults, is simply not known, she writes.

“Children are being ‘datafied’ – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13 their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After which this data mountain “explodes” as children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data is being collected on kids; where and by whom; and how it might be used in the short and long term — both for the benefit of children but also considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the ‘canary in the coal mine’ for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is that there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, with a hard cap set at 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

Another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if they have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

Which would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content on their platforms at vast scale. Few, if any, could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)

A recent U.S. study of kids apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky; applied in the background so any harms are far less immediately visible because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about the exploitation of personal data are stepping up across the board, and now touch essentially all sectors and segments of society, even as the risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.

An early test of the GDPR: taking on data brokers

Major data brokers Acxiom and Oracle are among seven companies accused of violating the GDPR’s rules on personal information privacy. Advocates hope the complaints will shed light on the opaque ways that personal data is traded through third parties online, both in the EU and the US.

The General Data Protection Regulation is a sweeping personal data privacy law that came into force in late May in the EU. For the rest of the world, it’s viewed as a bellwether for whether Big Tech can be held in check when immense data leaks seem to happen with painful regularity.

Formal complaints to European regulators under the GDPR by UK non-profit Privacy International were also filed against ad-tech companies Criteo, Quantcast and Tapad as well as credit agencies Equifax (the subject of a massive breach just last year) and Experian.

“Our complaints target companies that, despite exploiting the data of millions of people, are not household names and therefore rarely have their practices challenged,” said Ailidh Callander, legal officer at Privacy International, in an email to Engadget. “These companies’ business models are premised on data exploitation.”

Data brokers aggregate personal information from other sources — for instance, websites you’ve visited or credit card records — to create a complex profile on who they think you are. That profile may include political leanings and income, and subsequently get sold to brands or social networks. Acxiom claims to have data on about 700 million people globally. Consumers often don’t hand data directly to these companies via their own websites — the way one would with, say, Facebook — which allows the data trading to operate in relative obscurity.

This alleged lack of consent is precisely what Privacy International is targeting. The non-profit also claims that these companies lack “legitimate interest” (in legal terms) for processing the personal data, from which political, ethnic and religious affiliations may be inferred. The companies fail to comply, according to Privacy International, with the principles of “transparency, fairness, purpose limitation, data minimisation, accuracy and confidentiality and integrity” — in other words, nearly all of the new privacy law’s core foundations.

“The law has changed and these companies need to as well,” said Callander. “There is a gap between how [the] GDPR conceptualises data privacy and [how] these companies do and the onus is on them (if necessary, pushed by regulators) to close it.”

In public statements, Experian has said: “We have worked hard to ensure that we are compliant with GDPR and we continue to believe that our services meet its requirements.” Criteo has stated: “We have complete confidence in our privacy practices.”

Companies are still feeling out just how the law is going to be enforced, which is why test cases like this bear watching. Facebook and Google are among the other companies who have faced complaints so far. A spokesman from the Data Protection Commission in Ireland, where many American tech firms keep European headquarters, said the regulators have already received 2,500 breach notifications and 1,200 complaints related to the GDPR since May.

Facebook Portal isn’t listening to your calls, but may track data

When the initial buzz of Portal finally dies down, it’s the timing that will be remembered most. There’s never a great time for a company like Facebook to launch a product like Portal, but as far as optics go, the whole of 2018 probably should have been a write-off.

Our followup headline, “Facebook, are you kidding?” seems to sum up the fallout nicely.

But the company soldiered on, intent to launch its in-house hardware product, and insofar as its intentions can be regarded as pure, there are certainly worse motives than the goal of connecting loved ones. That’s a promise video chat technology brings, and Facebook’s technology stack delivers it in a compelling way.

Any praise the company might have received for the product’s execution, however, quickly took a backseat to another PR dustup. Here’s Recode with another fairly straightforward headline. “It turns out that Facebook could in fact use data collected from its Portal in-home video device to target you with ads.”

In a conversation with TechCrunch this week, Facebook exec Andrew “Boz” Bosworth claims it was the result of a misunderstanding on the company’s part.

“I wasn’t in the room with that,” Bosworth says, “but what I’m told was that we thought that the question was about ads being served on Portal. Right now, Facebook ads aren’t being served on Portal. Obviously, if some other service, like YouTube or something else, is using ads, and you’re watching that, you’ll have ads on the Portal device. Facebook’s not serving ads on Portal.”

Facebook is working to draw a line here, looking to distinguish the big ask of putting its own microphones and a camera in consumer living rooms from the standard sort of data collection that forms the core of much of the site’s monetization model.

“[T]he thing that’s novel about this device is the camera and the microphone,” he explains. “That’s a place that we’ve gone overboard on the security and privacy to make sure consumers can trust at the electrical level the device is doing only the things that they expect.”

Facebook was clearly working to nip these questions in the bud prior to launch. Unprompted, the company was quick to list the many levels of security and privacy baked into the stack, from encryption to an actual physical piece of plastic the consumer can snap onto the top of the device to serve as a lens cap.

Last night, alongside the announcement of availability, Facebook issued a separate post drilling down on privacy concerns. Portal: Privacy and Ads details three key points:

  • Facebook does not listen to, view or keep the contents of your Portal video calls. This means nothing you say on a Portal video call is accessed by Facebook or used for advertising.
  • Portal video calls are encrypted, so your calls are secure.
  • Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal’s camera doesn’t identify who you are.

Facebook is quick to explain that, in spite of what it deemed a misunderstanding, it hasn’t switched approaches since we spoke ahead of launch. But none of this is to say, of course, that the device won’t be collecting data that can be used to target other ads. That’s what Facebook does.

“I can be quite definitive about the camera and the microphone, and content of audio or content of video and say none of those things are being used to inform ads, full stop,” the executive tells TechCrunch. “I can be very, very confident when I make that statement.”

However, he adds, “Once you get past the camera and the microphones, this device functions a lot like other mobile devices that you have. In fact, it’s powered by Messenger, and in other spaces it’s powered by Facebook. All the same properties that a billion-plus people that are using Messenger are used to are the same as what’s happening on the device.”

As a hypothetical, Bosworth points to the potential for cross-platform ads targeting video calling for those who do it frequently — a classification, one imagines, that would apply to anyone who spends $199 on a video chat device of this nature. “If you were somebody who frequently used video calls,” Bosworth begins, “maybe there would be an ad-targeting cluster for people who were interested in video calling. You would be a part of that. That’s true if you were using video calling often on your mobile phone or if you were using video calling often on Portal.”

Facebook may have painted itself into a corner with this one, however. Try as it might to draw the distinction between cameras/microphones and the rest of the software stack, there’s little doubt that trust has been eroded after months of talk around major news stories like Cambridge Analytica. Once that notion of trust has been breached, it’s a big lift to ask users to suddenly purchase a piece of standalone hardware they didn’t realize they needed a few months back.

“Certainly, the headwinds that we face in terms of making sure consumers trust the brand are ones that we’re all familiar with and, frankly, up to the challenge for,” says Bosworth. “It’s good to have extra scrutiny. We’ve been through a tremendous transformation inside the company over the last six to eight months to try to focus on those challenges.”

The executive believes, in fact, that the introduction of a device like Portal could actually serve to counteract that distrust, rather than exacerbate it.

“This device is exactly what I think people want from Facebook,” he explains. “It is a device focused on their closest friends and family, and the experiences, and the connections they have with those people. On one hand, I hear you. It’s a headwind. On the other hand, it’s exactly what we need. It is actually the right device that tells a story that I think we want people to hear about, what we care about the most, which is the people getting deeper and more meaningful connections with one another.”

If Portal is ultimately a success, however, it won’t be because the product convinced people that the company is more focused on meaningful interactions than on ad sales. It will be because our memories are short. These sorts of concerns fade pretty quickly in the face of new products, particularly in a 24-hour news environment when basically everything is bad all the time.

The question then becomes whether Portal can offer enough of a meaningful distinction from other products to compel users to buy in. Certainly the company has helped jumpstart this with what are ultimately reasonably priced products. But even with clever augmented reality features and some well-produced camera tracking, Facebook needs to truly distinguish this device from an Echo Show or Google Home Hub.

Facebook’s early goals for the product are likely fairly modest. In conversations ahead of launch, the company positioned this as a kind of learning moment. That began when the company seeded early versions of the product into homes as part of a private beta, and continues to some degree now that the device is out in the world. When pressed, the company wouldn’t offer up anything concrete.

“This is the first Facebook-branded hardware,” says Bosworth. “It’s early. I don’t know that we have any specific sales expectations so much as what we have is an expectation to have a market that’s big enough that we can learn, and iterate, and get better.”

This is true, certainly — and among my biggest complaints with the device. Aside from the aforementioned video chat functionality, the Portal doesn’t feel like a particularly fleshed-out device. There’s an extremely limited selection of apps pre-loaded and no app store. Video beyond the shorts offered up through Facebook is a big maybe for the time being.

During my review of the Portal+, I couldn’t shake the feeling that the product would have functioned as well — or even better, perhaps — as an add-on to or joint production with Amazon. However, that partnership is limited only to the inclusion of Alexa on the device. In fact, the company confirms that we can expect additional hardware devices over the next couple of years.

As it stands, Facebook says it’s open to a broad spectrum of possibilities, based on consumer demand. It’s something that could even, potentially, expand to on-device recording, a feature that would further blur the lines of what the on-board camera and microphone can and should do.

“Right now, there’s no recording possible on the device,” Bosworth says. “The idea that a camera with microphones, people may want to use it like a camera with microphones to record things. We wanted to start in a position where people felt like they could understand what the device was, and have a lot of confidence and trust, and bring it home. There’s an obvious area where you can expand it. There’s also probably areas that are not obvious to us […] It’s not at all fair to say that this is any kind of a beta period. We only decided to ship it when we felt like we had crossed over into full finished product territory.”

From a privacy perspective, these things always feel like death by a thousand cuts. For now, however, the company isn’t recording anything locally and has no definitive plans to do so. Given the sort of year the company has been having with regard to optics around privacy, it’s probably best to keep it that way.

Facebook Dating expands to Canada and Thailand

Facebook’s quest to help singletons find love continues. After launching its Dating feature in Colombia in September, it’s now rolling the service out to Canada and Thailand. And, presumably based on feedback from its Colombian users, it’s adding a couple of new features.

The newest version will allow users to temporarily pause matches, as well as give them the ability to take a “second look” at potential matches they’d previously said no to — a feature that might prove useful as enthusiastic users wait for others in their area to get on board with the app (although Facebook has said users in Canada and Thailand won’t be able to match with anyone right away, as it waits for enough people to sign up).

The company hasn’t given any details on how many Facebook users in Colombia have used the feature, although product manager Nathan Sharp has said there’s been an “overwhelmingly positive response” so far. Nonetheless, the expansion of Facebook Dating comes at a precarious time for the company, as it continues to face scrutiny for its involvement in the Cambridge Analytica scandal, as well as ongoing investigation into its role in political interference. However, the company wouldn’t roll out the feature further if there was no demand for it, so people are clearly still willing to let Facebook into the most intimate areas of their lives.

Facebook starts shipping Portal, clarifies privacy/ad policy

Planning to get in early on the Portal phenomenon? Facebook announced today that it’s starting to ship the video chat device. The company’s first true piece of devoted hardware comes in two configurations: the Echo Show-like Portal and the larger Portal+, which run $199 and $349, respectively. There’s also a two-for-$298 bundle of the smaller unit.

The device has raised privacy red flags since it was announced early last month. The company attempted to nip some of those issues in the bud ahead of launch — after all, 2018 hasn’t been a great year for Facebook privacy. The site also hasn’t done itself any favors by offering some murky comments around data tracking and ad targeting in subsequent weeks.

With all that in mind, Facebook is also marking the launch with a blog post further spelling out Portal’s privacy policy. Top level, the company promises not to view or listen to video calls. Calls are also encrypted, and all of the AI tech is performed locally on-device — i.e. not sent to its servers.

In the post, Facebook also promises to treat conversations on Portal the way it does any other Messenger experience. That means that while it won’t view the calls, it does indeed track usage data, which it may later use to serve up cross-platform ads.

“When you make a Portal video call, we process the same device usage information as other Messenger-enabled devices,” Facebook writes. “This can include volume level, number of bytes received, and frame resolution — it can also include the frequency and length of your calls. Some of this information may be used for advertising purposes. For example, we may use the fact that you make lots of video calls to inform some of the ads you see. This information does not include the contents of your Portal video calls.”

In other words, it’s not collecting personally identifying data, but it is tracking usage information. And honestly, if you have a Facebook account, you’ve already signed up for that. The question is whether you’re comfortable bringing an extra layer of it into your living room or kitchen.
