UK government invests $194M to commercialize quantum computing

The UK government today announced a £153 million investment in efforts to commercialize quantum computing. That’s about $193 million, and with additional commitments from numerous industry players, the total rises to over $440 million. With this, the UK’s National Quantum Technologies Programme has now passed £1 billion (about $1.27 billion) in investments since its inception in 2014.

In the US, President Trump last year signed into law a $1.2 billion investment in quantum computing, and the European Union, which the UK is infamously trying to leave, also launched a similarly sized plan. Indeed, it’s hard not to look at this announcement in the context of Brexit, which would cut the UK off from these European efforts, though it’s worth noting that the UK has a long history of fundamental computer science research, something that is surely also motivating these efforts.

“This milestone shows that Quantum is no longer an experimental science for the UK,” UK Science Minister Chris Skidmore said in today’s announcement. “Investment by government and businesses is paying off, as we become one of the world’s leading nations for quantum science and technologies. Now industry is turning what was once a futuristic pipedream into life-changing products.”

Specifically, the UK program is looking to fund research that can grow its local quantum industry. To do so, the £153 million Industrial Strategy Challenge Fund will invest in new products and innovations through research and development competitions, as well as in industry-led projects. It will also function as an investment accelerator, with the hope of encouraging venture capitalists to invest in early-stage, spin-out and startup quantum companies.

“It’s not just about creating the environment for quantum technologies to flourish. We are investing across a broad range of technologies – computing, sensing, imaging and communications – and in the lifetime of this programme, we expect to see transformative commercial products and services move from laboratory aspiration to commercial reality,” Roger McKinlay, Challenge Director for Quantum Technologies at UK Research and Innovation, told me. “The technology is new but the approach is tried and tested. Much of the funding will be spent on collaborative R&D projects, competitively awarded to industry-led consortia. We will also fund feasibility studies and run an investment accelerator to ensure we have a pipeline of new technologies and innovative ideas. Well established companies have not been overlooked; funding is earmarked to allocate to companies with ambitious investment and scale-up plans.”

For governments, quantum computing obviously opens up a number of economic opportunities, but there are also national security interests at play here. Once it becomes a reality, a general quantum computer with long coherence times will easily be able to defeat today’s encryption schemes, for example. That’s not what today’s announcement is about, but it is surely something that all of the world’s governments are thinking about.

Facebook will not remove deepfakes of Mark Zuckerberg, Kim Kardashian and others from Instagram

Facebook will not remove the faked videos featuring Mark Zuckerberg, Kim Kardashian and President Donald Trump from Instagram, the company said in a statement.

Earlier today, Vice News reported on the existence of videos created by the artists Bill Posters and Daniel Howe along with video and audio manipulation companies including CannyAI, Respeecher and Reflect.

The work, featured in a site-specific installation in the UK as well as circulating in video online, was the first test of Facebook’s content review policies since the company’s decision not to remove a manipulated video of House Speaker Nancy Pelosi received withering criticism from Democratic political leadership.

After the late May incident, Facebook’s Neil Potts testified before a smorgasbord of international regulators in Ottawa about deepfakes, saying the company would not remove a video of Mark Zuckerberg. This appears to be the first instance testing the company’s resolve.

“We will treat this content the same way we treat all misinformation on Instagram. If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages,” said an Instagram spokesperson in an email to TechCrunch.

The videos appear not to violate any Facebook policies, which means that they will be subject to the treatment any video containing misinformation gets on any of Facebook’s platforms. So the videos will be blocked from appearing in the Explore feature and hashtags won’t work with the offending material.

Facebook already uses image detection technology to find content on Instagram that has been debunked by its third-party fact-checking program. When misinformation is present only on Instagram, the company is testing the ability to promote links to the fact-checking product on Facebook.

“Spectre interrogates and reveals many of the common tactics and methods that are used by corporate or political actors to influence people’s behaviours and decision making,” said Posters in an artist’s statement about the project. “In response to the recent global scandals concerning data, democracy, privacy and digital surveillance, we wanted to tear open the ‘black box’ of the digital influence industry and reveal to others what it is really like.”

Facebook’s consistent decisions not to remove offending content stand in contrast with YouTube, which has taken the opposite approach in dealing with manipulated videos and other material that violates its policies.

YouTube removed the Pelosi video and recently took steps to demonetize and remove videos from the platform that violated its policies on hate speech, including a wholesale purge of content about Nazism.

These issues take on greater significance as the U.S. heads into the next Presidential election in 2020.

“In 2016 and 2017, the UK, US and Europe witnessed massive political shocks as new forms of computational propaganda employed by social media platforms, the ad industry, and political consultancies like Cambridge Analytica were exposed by journalists and digital rights advocates,” said Howe in a statement about his Spectre project. “We wanted to provide a personalized experience that allows users to feel what is at stake when the data taken from us in countless everyday actions is used in unexpected and potentially dangerous ways.”

Perhaps the incident will be a lesson to Facebook in what’s potentially at stake as well.

A ‘backdoor’ in Optergy smart building tech gets maximum severity score

Homeland Security has given the maximum severity score for a vulnerability in a popular smart building automation system.

Optergy’s Proton allows building owners and managers to remotely monitor energy consumption and manage who can access the premises. The box is web-connected, and connects to other devices — like air conditioning and heating — in the building for real-time monitoring through a web interface.

CISA, the government’s dedicated cybersecurity unit, said the device had serious vulnerabilities.

An advisory said an attacker could gain “full system access” through an “undocumented backdoor script.” This, the advisory said, could allow the attacker to run commands on a vulnerable device with the highest privileges. Backdoors typically grant hidden or undocumented access to a system, and can be used for tech support to remotely login and troubleshoot issues. But if found by an attacker, backdoors can also be used maliciously.

The vulnerability required a “low level” of skill to remotely exploit, and was rated 10.0, the highest score on the industry standard common vulnerability scoring system.

The advisory noted several other bugs, one of which was rated with a score of 9.9.

Although 10.0 scores are not unheard of, they are not common in everyday technology. A 10.0 score is reserved for vulnerabilities that can have a significant impact on the system’s integrity and availability, or that put data on the affected system at high risk of damage or theft.
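To put that score in context, here is a minimal sketch, in Python, of how the CVSS v3.0 base score formula arrives at 10.0 for a hypothetical vector along the lines described above: remotely exploitable over the network, low attack complexity, no privileges or user interaction required, scope changed, and high impact on confidentiality, integrity and availability. The specific vector and the code are illustrative assumptions, not the scoring published in the advisory.

```python
import math

# CVSS v3.0 base-metric weights, as published in the FIRST specification.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}        # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                              # Attack Complexity
PR  = {False: {"N": 0.85, "L": 0.62, "H": 0.27},          # Privileges Required (scope unchanged)
       True:  {"N": 0.85, "L": 0.68, "H": 0.50}}          # Privileges Required (scope changed)
UI  = {"N": 0.85, "R": 0.62}                              # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}                    # Confidentiality / Integrity / Availability

def roundup(x):
    # The spec rounds up to one decimal place.
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope_changed, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    exploitability = 8.22 * AV[av] * AC[ac] * PR[scope_changed][pr] * UI[ui]
    total = impact + exploitability
    if scope_changed:
        total *= 1.08
    return roundup(min(total, 10))

# Hypothetical vector AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H -- not necessarily the advisory's.
print(base_score("N", "L", "N", "N", True, "H", "H", "H"))  # prints 10.0
```

In other words, any flaw that an unauthenticated attacker can reach over the network and use to fully compromise confidentiality, integrity and availability lands at the very top of the scale, which matches the “full system access” described in the advisory.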

Gjoko Krstic, a security researcher at Applied Risk who reported the vulnerabilities to Optergy, told TechCrunch that the bug was “very, very bad” and “easy to exploit.” According to Krstic, 50 buildings were vulnerable at the time of writing. His findings were presented last month in Amsterdam at Hack In The Box, a security conference, as part of research into wider issues affecting four other vendors as well as Optergy.

By exploiting the vulnerability, it’s possible to “shut down a building with one click,” he said at his talk.

Optergy president Steve Guzelimian said the company fixed the issues but wouldn’t confirm how many devices were affected. The company says it serves more than 1,800 facilities.

“We fix everything brought to our attention as well as do our own regular testing,” he said.

Protecting the integrity of U.S. elections will require a massive regulatory overhaul, academics say

Ahead of the 2020 elections, former Facebook chief security officer Alex Stamos and his colleagues at Stanford University have unveiled a sweeping new plan to secure U.S. electoral infrastructure and combat foreign campaigns seeking to interfere in U.S. politics.

As the Mueller investigation into electoral interference made clear, foreign agents from Russia (and elsewhere) engaged in a strategic campaign to influence the 2016 U.S. elections. As the chief security officer of Facebook at the time, Stamos was both a witness to the influence campaign on social media and a key architect of the efforts to combat its spread.

Along with Michael McFaul, a former U.S. ambassador to Russia, and a host of other academics from Stanford, Stamos lays out a multi-pronged plan that incorporates securing U.S. voting systems, providing clearer guidelines for advertising and the operations of foreign media in the U.S., and integrating government action more closely with media and social media organizations to combat the spread of misinformation or propaganda by foreign governments.

The paper lays out a number of suggestions for securing elections including:

  • Increase the security of U.S. election infrastructure.
  • Explicitly prohibit foreign governments and individuals from purchasing online advertisements targeting the American electorate.
  • Require greater disclosure measures for FARA-registered foreign media organizations.
  • Create standardized guidelines for labeling content affiliated with disinformation campaign producers.
  • Mandate transparency in the use of foreign consultants and foreign companies in U.S. political campaigns.
  • Foreground free and fair elections as part of U.S. policy and identify election rights as human rights.
  • Signal a clear and credible commitment to respond to election interference.

Enacting all of these policy recommendations would require a lot of heavy lifting by Congress and by media and social media companies, and many of them speak to core issues that policymakers and corporate executives are already attempting to manage.

For lawmakers, that means drafting legislation that would require paper trails for all ballots and improve threat assessments of computerized election systems, along with a complete overhaul of campaign laws related to advertising, financing, and press freedoms (for foreign press).

The Stanford proposals call for the strict regulation of foreign involvement in campaigns, including a ban on foreign governments and individuals buying online ads that target the U.S. electorate with an eye toward influencing elections. The proposals also call for greater disclosure requirements identifying articles, opinion pieces or media produced by foreign media organizations. Furthermore, any campaign working with a foreign company or consultant, or with significant foreign business interests, should be required to disclose those connections.

Clearly, the echoes of Facebook’s Cambridge Analytica and political advertising scandals can be heard in some of the suggestions made by the paper’s authors.

Indeed, the paper leans heavily on the use and abuse of social media and tech as a critical vector for an attack on future U.S. elections. And the Stanford proposals don’t shy away from calling on legislators to demand that these companies do more to protect their platforms from being used and abused by foreign governments or individuals.

In some cases companies are already working to enact suggestions from the report. Facebook, Alphabet, and Twitter have said that they will work together to coordinate and encourage the spread of best practices. Media companies need to create (and are working to create) norms for handling stolen information. Labeling manipulated videos or propaganda (or articles and videos that come from sources known to disseminate propaganda) is another task that platforms are undertaking, but an area where there is still significant work to be done (especially when it comes to deepfakes).

As the report’s authors note:

Existing user interface features and platforms’ content delivery algorithms need to be utilized as much as possible to provide contextualization for questionable information and help users escape echo chambers. In addition, social media platforms should provide more transparency around users who are paid to promote certain content. One area ripe for innovation is the automatic labeling of synthetic content, such as videos created by a variety of techniques that are often lumped under the term “deepfakes”. While there are legitimate uses of synthetic media technologies, there is no legitimate need to mislead social media users about the authenticity of that media. Automatically labeling content that shows technical signs of being modified in this manner is the minimum level of due diligence required of the major video hosting sites.

There’s more work that needs to be done to limit the targeting capabilities for political advertising and improving transparency around paid and unpaid political content as well, according to the report.

And somewhat troubling is the report’s call for the removal of barriers around sharing information relating to disinformation campaigns, which would include changes to privacy laws.

Here’s the argument from the report:

At the moment, access to the content used by disinformation actors is generally restricted to analysts who archived the content before it was removed or governments with lawful request capabilities. Few organizations have been able to analyze the full paid and unpaid content created by Russian groups in 2016, and the analysis we have is limited to data from the handful of companies who investigated the use of their platforms and were able to legally provide such data to Congressional committees. Congress was able to provide that content and metadata to external researchers, an action that is otherwise proscribed by U.S. and European law. Congress needs to establish a legal framework within which the metadata of disinformation actors can be shared in real-time between social media platforms, and removed disinformation content can be shared with academic researchers under reasonable privacy protections.

Ultimately, these suggestions are meaningless without real action from Congress and the President to ensure the security of elections. As the events of 2016, documented in the Mueller report, revealed, there are a substantial number of holes in the safeguards erected to secure our elections. As the country looks for a place to build walls for security, perhaps one around election integrity would be a good place to start.

Alphabet, Apple, Amazon and Facebook are in the crosshairs of the FTC and DOJ

The last time technology companies faced this kind of scrutiny was Google’s antitrust investigation, or the now twenty-one-year-old lawsuit brought by the Justice Department and multiple states against Microsoft.

But times have changed since Google had its hearing before a much friendlier audience of regulators under President Barack Obama.

These days, Republican and Democratic lawmakers are both making the case that big technology companies hold too much power in American political and economic life.

Issues around personal privacy, economic consolidation, misinformation and free speech are on the minds of both Republican and Democratic lawmakers. Candidates vying for the Democratic nomination in next year’s presidential election have made investigating and breaking up big technology companies central components of their policy platforms.

Meanwhile, Republican lawmakers and agencies began stepping up their rhetoric and planning for how to oversee these companies beginning last September, when the Justice Department brought a group of the nation’s top prosecutors together to discuss technology companies’ growing power.

News of the increasing government activity sent technology stocks plummeting. Amazon shares were down $96 per share to $1,680.05, a drop of more than 5% on the day. Shares of Alphabet tumbled to $1,031.53, a $74.76 decline, or 6.76%. Apple’s decline was more muted, with the stock falling $2.97, or 1.7%, to $172.32, while Facebook slid $14.11, or 7.95%, to $163.36.

In Senate confirmation hearings in January, the new Attorney General William Barr noted that technology companies would face more time under the regulatory microscope during his tenure, according to The Wall Street Journal.

“I don’t think big is necessarily bad, but I think a lot of people wonder how such huge behemoths that now exist in Silicon Valley have taken shape under the nose of the antitrust enforcers,” Barr said. “You can win that place in the marketplace without violating the antitrust laws, but I want to find out more about that dynamic.”
