Apple Card will make credit card fraud a lot more difficult

Apple’s new credit card has a curious security feature that will make it much more difficult to carry out credit card fraud.

The aptly named Apple Card is a new credit card, built into the iPhone’s Wallet app, which the company says will help customers live a “healthier” financial lifestyle. The card is designed to replace your traditional credit card and give you perks, such as Daily Cash back. Chief among the benefits is a range of security and privacy features: unlike traditional credit card providers, Apple says it doesn’t know where a customer shopped, what they bought, or how much they paid.

But one feature in particular, a unique one-time dynamic security code, will make it nearly impossible for anyone to use a stolen card number to make fraudulent purchases.

That three-digit card verification value, or CVV, on the back of your credit card is usually your last line of defense if someone steals your credit card number, whether your card is cloned, skimmed by a dodgy ATM, or your details are stolen through a phishing attack.

But rotating the security code makes it far harder for an attacker to use your card without your permission: a code stolen today is useless once the next one is generated.
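Apple hasn’t said exactly how the rotating code is generated, but the standard construction for this kind of feature is a time-based one-time password (TOTP, as in RFC 6238): the card and the card network share a secret key, and each side independently derives the current code from that key and the current time window. Here is a minimal sketch of that idea; the secret, the hourly rotation interval, and the three-digit output are all hypothetical parameters, not Apple’s actual scheme.

```python
# Minimal TOTP-style sketch of a rotating security code (RFC 6238 / RFC 4226).
# Illustrative only: the secret, interval, and digit count are hypothetical.
import hashlib
import hmac
import struct
import time

def rotating_cvv(secret: bytes, interval: int = 3600, digits: int = 3) -> str:
    """Derive the current short numeric code from a shared secret."""
    counter = int(time.time()) // interval              # advances once per interval
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# Issuer and device compute the same code independently, so a code skimmed
# today fails verification once the next time window begins.
print(rotating_cvv(b"card-specific secret"))            # e.g. "417"
```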

The idea of a dynamic security code first came about a few years ago with the Motion Code credit card concept, built by Oberthur Technologies, which put a randomly generated code on a tiny display built into the back of the card. The main downside: it offers no protection if someone steals your physical card.

Since then, other players in the industry, including Mastercard, the payment network for Apple Card, have worked to integrate biometrics instead. By putting a fingerprint sensor on the card, powered by the terminal it’s inserted into, it was hoped that fraudulent in-person purchases would become impossible. But a big letdown remained: online fraud, which still accounts for a huge proportion of all card fraud, since a sensor on the card does nothing when the card isn’t physically present.

Apple Card seems to meld the two approaches: a virtual credit card with a rotating security code, protected by a biometric, such as Touch ID or Face ID on newer devices. Better yet, the company’s debut physical titanium card won’t even have a card number printed on it.

Now if someone wants to commit fraud, they need to steal your phone and your face or fingerprint.

Like other sensitive data, such as health, financial and biometric information, any banking and credit card data is stored on the device’s dedicated security chip, known as the Secure Enclave.

Apple Card will be available in the U.S. later this summer.


Telegram users can delete any message in their private chat history



Telegram’s ability to unsend messages is no longer a novelty among chat apps, but it’s now taking that feature well beyond what you’d get from others. An update to the service lets you delete any message in your private chat history, whether you’re the sender or the recipient. You can even wipe out an entire conversation (on both sides or just your own) with two taps. It’s an audacious step, but one the company feels is necessary in the modern climate.

Company founder Pavel Durov noted in his public Telegram channel that the potential for misuse of old messages is getting worse. Something you said can be “taken out of context” and used to hurt you years down the line, Durov said. This ostensibly gives you “complete control” over conversations, rather than trusting someone else to use your chat history in a responsible fashion.

As TechCrunch warned, though, the feature could be as problematic as it is helpful. While it could be used to prevent revenge porn or similar incidents, a malicious user could quietly delete messages to create a bogus version of a conversation, or wipe out evidence of a crime. There’s no consent here, or even notifications — crooks could cover up their tracks, and abusers could use it to attack people while leaving no trace. Durov said Telegram was aware of the “potential misuse” and thought the control would ultimately be beneficial, but that might not be how it works in practice.


Family tracking app leaked real-time location data for weeks



Family tracking apps can be very helpful if you’re worried about your kids or spouse, but they can be nightmarish if that data falls into the wrong hands. Security researcher Sanyam Jain revealed to TechCrunch that React Apps’ Family Locator left real-time location data (plus other sensitive personal info) for over 238,000 people exposed for weeks in an insecure database. The data pinpointed positions to within a few feet, and even included the names of the geofenced areas used to trigger alerts. You could tell if a parent left home or a child arrived at school, for instance.

This wasn’t helped by React’s own issues with accountability. Its site had no contact information, and even its WHOIS record masked the owner’s email address. Messages sent through the site’s feedback form went unanswered. The database didn’t go offline until TechCrunch asked Microsoft, whose Azure cloud hosted it, to reach the developer, who still hasn’t said anything about the leak.

It’s not clear if anyone beyond Jain or TechCrunch accessed the database.

While the data is safe for now, the incident illustrates a problem with tracking apps as a whole: it’s difficult to verify that developers are securing your location info every step of the way. If they don’t and there’s a breach, the consequences can be very real, up to and including physical danger.


Nokia says its phones sent data to China by mistake



Nokia phone brand owner HMD Global is understandably nervous about Finland investigating claims that its handsets send sensitive data to China, and it’s trying to clear its name. The company said in a statement that it “mistakenly included” the device activation software for Chinese phones in a “single batch” of Nokia 7 Plus phones meant for other countries. However, that data was “never processed” and wasn’t personally identifiable, according to the company. It was fixed through a software update in February 2019, and “nearly all” phones already have that patch.

The company also rejected suggestions that other phones would send similar data. Every Nokia phone outside of China sends device data to HMD Global servers in Singapore (hosted on Amazon Web Services), the company said, and the practice abides by local laws.

This won’t necessarily put the Finnish investigation to bed, and the company’s claims about the nature of the data don’t paint a full picture. While the data points may not directly identify a person, they could be combined with corroborating info to build a clearer picture of that person’s life. Still, the issue appears to have been fixed; it’s just an unpleasant reminder that a slip-up at the factory is enough to put data at risk.


Facebook staff raised concerns about Cambridge Analytica in September 2015, per court filing

Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.

Last year a major privacy scandal hit Facebook after it emerged that Cambridge Analytica had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87 million Facebook users without proper consent.

Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters for targeting political messages, with the company initially working for Ted Cruz’s and later Donald Trump’s presidential campaigns.

But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015, months earlier than the December 2015 date Facebook previously gave lawmakers for when it became aware of the GSR/CA breach.

The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.

Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive in them that requires sealing.

In its opposition to Facebook’s motion to seal the documents, the District includes a redacted summary of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.

According to the District’s account, a Washington, D.C.-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.

Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).

Zuckerberg responded with a “yes” to Doyle’s question.

Facebook repeated the same line to the UK’s Digital, Culture, Media and Sport (DCMS) committee last year, over a series of hearings with less senior staffers.

Damian Collins, the chair of the DCMS committee (which made repeated requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed), tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.

The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.

The earlier charge that it misled the committee refers to a hearing in Washington in February 2018, when Facebook sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions; the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.

The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.

Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.

The question now is this: if Facebook knew there were concerns about CA data-scraping prior to hiring the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire Chancellor?

The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.

Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.

But the new timeline that’s emerged of what Facebook knew when makes those questions more pressing than ever.

Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:

Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath.

In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.

Facebook did not engage with questions about any of the details and allegations in the court filing.

A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”

It goes on to suggest that Facebook’s concern to seal the documents is “reputational”, arguing in another redacted segment that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.

“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan, who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.

As we’ve reported previously, the UK’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.

It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal, or whether there were multiple email threads raising concerns about Cambridge Analytica inside the company.

The ICO passed the correspondence it obtained from Facebook to the DCMS committee, which last month said it had agreed, at the request of the watchdog, to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month, citing rules against disclosing personal data and the risk that release might prejudice its ongoing investigation into CA.)

In its final report the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:

[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

We reached out to the ICO for comment on the information to emerge via the District of Columbia suit, and also to the Irish Data Protection Commission, the lead data protection authority for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.

An ICO spokesperson told us: “We are aware of these reports and will be considering the points made as part of our ongoing investigation.”

Last year the ICO issued Facebook with the maximum possible fine under UK law, £500,000, for the CA data breach.

Shortly after, Facebook announced it would appeal, saying the watchdog had not found evidence that any UK users’ data was misused by CA.

A hearing of the appeal, set for earlier this week, was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.

This report was updated with comment from the ICO.
