Bipartisan House bill aims to curb illegal robocalls

The US Senate has shown bipartisan support for a bill to crack down on robocalls, and now it's the House's turn. Energy and Commerce Committee Chairman Frank Pallone and Ranking Member Greg Walden have introduced the bipartisan Stopping Bad Robocalls Act, which would similarly toughen requirements for carriers while more explicitly punishing spam callers. There are some key differences between the two bills, however.

The measure would require that carriers authenticate calls and offer opt-out blocking at no extra charge, with transparency to make sure you don’t miss an important conversation. The FCC, meanwhile, would be granted extended statutes of limitations on robocall offenses. In return, the regulator would have to issue rules protecting against unwanted calls (including the option to withdraw consent), clamp down on abuse of robocall exemptions and submit a report on its implementation of the reassigned numbers database. The Senate bill doesn’t require the new FCC rules.

The Act is due for a panel vote next week. Whether it makes it to the President's desk as-is could be another story. While both chambers of Congress are clearly in favor of stricter regulation of robocalls, they'll have to reconcile the differences between their bills, and it's possible that the finished legislation will be watered down. Between this and the FCC's block-by-default initiative, though, you'll see at least some kind of improved enforcement against automated calls.

House chairwoman wants Facebook to pause work on its cryptocurrency

You knew Facebook’s Libra cryptocurrency would come under scrutiny as soon as it became official, and the US government isn’t wasting any time. House Financial Services Committee Chairwoman Maxine Waters has issued a statement calling on Facebook to pause development of Libra until Congress and regulatory bodies have had a chance to review it. The social network has “repeatedly shown a disregard” for safeguarding user data, Waters said, suggesting that privacy issues could come back to haunt this product.

The congresswoman also said that Facebook executives should testify about Libra as part of that oversight.

We've asked Facebook for comment. As part of the announcement, though, it launched the Libra Association, whose aim is to oversee the currency outside of Facebook's control. Calibra, the digital wallet for the new currency, is supposed to share only limited data with Facebook and have "strong protections" such as automated fraud checks.

Those measures might not satisfy politicians. Numerous federal and state regulators are investigating Facebook’s behavior in recent years, and there’s no question that the internet giant has been awash in privacy debacles even after the Cambridge Analytica scandal had seemingly wound down. Waters and others just don’t have much of a historical basis to trust what Facebook says, even though it appears to be learning its lessons.

Juul faces House investigation over teen e-cigarette use

Juul is facing even more heat over concerns that it's contributing to teen vaping. The House Subcommittee on Economic and Consumer Policy has opened an investigation into the "youth e-cigarette epidemic" that could determine whether Juul marketed its e-cigarettes to kids. Subcommittee Chairman Raja Krishnamoorthi has asked the company to hand over any documents from 2013 onward that touch on the relevant parts of its advertising and social media strategies, including the impact of ads on children and its awareness of under-18 social network followers.

The request also covers less explicitly youth-oriented figures, such as Juul-related data on clinical trials to quit smoking. Moreover, the committee wants to know about the reasoning behind the pens’ nicotine levels, Juul’s early market research and the agreements employees signed after Altria bought the company.

Juul has until June 21st to honor the request. It’s already dealing with a Senate investigation that began in April.

The company told Gizmodo in a statement that it “welcome[d] the opportunity” to comply with the request, claiming that it conducted “aggressive, industry leading” efforts to reduce underage use. It currently has a track-and-trace program to identify retailers who sell e-cigs to underage customers, and it has previously pulled fruit flavors and closed some of its social network accounts.

Those remarks aren't likely to assuage the House, though. Agencies like the Centers for Disease Control and Prevention have pinned a surge in teen tobacco use on the rise of vaping, and Juul is one of the largest players in the vape industry. If the firm is going to convince either side of Congress that it's above-board, it'll have to show evidence that it didn't court or knowingly tolerate underage use.

Rep. Will Hurd's Black Hat keynote draws ire over his women's rights voting record

The decision to confirm Rep. Will Hurd as the keynote speaker at this year's Black Hat security conference has prompted anger and concern among some long-time attendees because of his voting record on women's rights.

Hurd, an outspoken Texas Republican who has drawn fire from his own party for regularly opposing the Trump administration, was confirmed Thursday as the conference's keynote speaker on the strength of his background in cybersecurity. Since taking office in Texas' 23rd district, the congressman has introduced several bills aimed at securing Internet of Things devices and has pushed to reauthorize the role of a federal chief information officer.

But several people we've spoken to have described their unease that Black Hat organizers invited Hurd, a self-described pro-life lawmaker, to keynote, given his consistent opposition to bills supporting women's rights.

An analysis of Hurd's voting record shows he supports bills promoting women's rights only two percent of the time. He has voted against a bill that would financially support women in STEM fields, voted in favor of allowing states to restrict access to and coverage of abortions, and voted to defund Planned Parenthood.

Many of those we spoke to asked to remain anonymous, citing worries about retaliation or personal attacks. One person who agreed to be quoted said Hurd's voting record was "simply awful" for women's rights. Others said in tweets that the move doesn't reflect well on the companies sponsoring the event.

Black Hat says it aims to create an "inclusive environment," but others have questioned how a political figure with views that cause harm to an entire gender can be considered inclusive. At a time when women's rights, including the right to access abortion, are being all but outlawed by controversial measures in several states, some have found Hurd's selection tone-deaf and offensive.

When asked, a spokesperson for Black Hat defended the decision to have Hurd speak:

“Hurd has a strong background in computer science and information security and has served as an advocate for specific cybersecurity initiatives in Congress,” said the spokesperson. “He will offer the Black Hat audience a unique perspective of the infosec landscape and its effect on the government.”

Although previous keynote speakers have included senior government figures, this is the first time Black Hat has confirmed a lawmaker to keynote the conference.

Although abortion rights and cybersecurity are unrelated topics, it's becoming increasingly difficult to separate social issues from technology and the gatherings built around it. It's also valid for attendees to express concern that the keynote speaker at a professional security conference opposes what many consider a human right.

Hurd’s office did not return a request for comment.

DEEPFAKES Accountability Act would impose unenforceable rules — but it’s a start

The new DEEPFAKES Accountability Act in the House (and yes, that's an acronym) would take steps to criminalize the synthetic media referred to in its name, but its provisions seem too optimistic given the reality of the threat. On the other hand, it also proposes some changes that would help bring the law up to date with the technology.

The bill, proposed by Representative Yvette Clarke (D-NY), has, it must be said, the most ridiculous name I've encountered: the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act. Amazingly, that acronym (backronym, really) actually makes sense.

It’s intended to stem the potential damage of synthetic media purporting to be authentic, which is rare enough now but soon may be commonplace. With just a few minutes (or even a single frame) of video and voice, a fake version of a person, perhaps a public figure or celebrity, can be created that is convincing enough to fool anyone not looking too closely. And the quality is only getting better.

DEEPFAKES would require anyone creating a piece of synthetic media imitating a person to disclose that the video is altered or generated, using "irremovable digital watermarks, as well as textual descriptions." Failing to do so would be a crime.

The act also establishes a right on the part of victims of synthetic media to sue the creators and/or otherwise “vindicate their reputations” in court.

Many of our readers will have already spotted the enormous loopholes gaping in this proposed legislation.

First, if a creator of a piece of media is willing to put their name to it and document that it is fake, those are almost certainly not the creators or the media we need to worry about. Jordan Peele is the least of our worries (and in fact the subject of many of our hopes). Requiring satirists and YouTubers to document their modified or generated media seems only to assign paperwork to people already acting legally and with no harmful intentions.

Second, watermark and metadata-based markers are usually trivial to remove. Text can be cropped, logos removed (via more smart algorithms), and even a sophisticated whole-frame watermark might be eliminated simply by being re-encoded for distribution on Instagram or YouTube. Metadata and documentation are often stripped or otherwise made inaccessible. And the inevitable reposters seem to have no responsibility to keep that data intact, either — so as soon as this piece of media leaves the home of its creator, it is out of their control and very soon will no longer be in compliance with the law.
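
To see why that matters in practice, here's a minimal sketch (assuming Python with the Pillow imaging library, and a hypothetical file named disclosed_fake.jpg whose EXIF ImageDescription field carries the required disclosure text) showing how an ordinary re-encode, the kind every sharing platform performs, silently drops metadata-based disclosures:

```python
# Minimal sketch: a disclosure stored only in image metadata does not survive
# an ordinary re-encode. Assumes Pillow (pip install Pillow) and a hypothetical
# input file "disclosed_fake.jpg" whose EXIF ImageDescription field carries the
# required disclosure text.
from PIL import Image
from PIL.ExifTags import TAGS


def read_disclosure(path: str):
    """Return the EXIF ImageDescription value, or None if it is missing."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "ImageDescription":
            return str(value)
    return None


print("before re-encode:", read_disclosure("disclosed_fake.jpg"))

# Re-encode the way a sharing pipeline might: open, normalize, save as JPEG.
# Pillow does not copy EXIF data on save unless you pass it explicitly via the
# exif= argument, so the disclosure silently disappears.
img = Image.open("disclosed_fake.jpg").convert("RGB")
img.save("reposted.jpg", quality=85)

print("after re-encode:", read_disclosure("reposted.jpg"))  # typically None
```

The same goes for video: transcoding for a social feed typically rewrites the container and drops side-channel tags unless the pipeline is deliberately configured to preserve them.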

Third, it’s far more likely that truly damaging synthetic media will be created with an eye to anonymity and distributed by secondary methods. The law here is akin to asking bootleggers to mark their barrels with their contact information. No malicious actor will even attempt to mark their work as an “official” fake.

That said, just because these rules are unlikely to prevent people from creating and distributing damaging synthetic media — what the bill calls “advanced technological false personation records” — that doesn’t mean the law serves no purpose here.

One of the problems with the pace of technology is that it frequently is some distance ahead of the law, not just in spirit but in letter. With something like revenge porn or cyberbullying, there’s often literally no legal recourse because these are unprecedented behaviors that may not fit neatly under any specific criminal code. A law like this, flawed as it is, defines the criminal behavior and puts it on the books, so it’s clear what is and isn’t against the law. So while someone faking a Senator’s face may not voluntarily identify themselves, if they are identified, they can be charged.

To that end a later portion of the law is more relevant and realistic: It seeks to place unauthorized digital recreations of people under the umbrella of unlawful impersonation statutes. Just as it’s variously illegal to pretend you’re someone you’re not, to steal someone’s ID, to pretend you’re a cop, and so on, it would be illegal to nefariously misrepresent someone digitally.

That gives police and the court system a handhold when cases concerning synthetic media begin pouring in. They can say “ah, this falls under statute so and so” rather than arguing about jurisdiction or law and wasting everyone’s time — an incredibly common (and costly) occurrence.

The bill puts someone at the U.S. Attorney’s Office in charge of things like revenge porn (“false intimate depictions”) to coordinate prosecution and so on. Again, these issues are so new that it’s often not even clear who you or your lawyer or your local police are supposed to call.

Lastly the act would create a task force at the Department of Homeland Security that would form the core of government involvement with the practice of creating deep fakes, and any countermeasures created to combat them. The task force would collaborate with private sector companies working on their own to prevent synthetic media from gumming up their gears (Facebook has just had a taste), and report regularly on the state of things.

It's a start, anyway; it's rare that the government acknowledges something is a problem and attempts to mitigate it before it becomes a full-blown one. Such attempts are usually put down as nanny-state policies, alas, so we wait for a few people to have their lives ruined and then get to work with hindsight. So while the DEEPFAKES Accountability Act would not, I feel, create much in the way of accountability for the malicious actors most likely to cause problems, it does begin to set a legal foundation for victims and law enforcement to fight against those actors.

You can track the progress of the bill, H.R. 3230 in the 116th Congress, and read its full text on Congress's website.
