YouTube quietly offers free, ad-supported movies

AP Photo/Danny Moloshok

YouTube is borrowing a page from Vudu’s playbook, in a manner of speaking. AdAge has confirmed that the Google video service quietly started adding free, ad-supported movies to its “Movies & Shows” section in October. The roughly 100-title collection largely revolves around old or unspectacular movies that are long past their money-making prime, such as Legally Blonde, Agent Cody Banks and the original Terminator. However, that makes it an easy fit — studios can rake in some ad revenue (YouTube hasn’t said how it shares ad money) from people wanting to watch a classic during a sleepy afternoon.

Company product management director Rohit Dhawan hinted that there could one day be a way for advertisers to sponsor individual movies. You could watch the first movie in a franchise when its sequel hits theaters, for instance. Whether or not that happens will depend on how studios evolve their digital strategies. They’re used to paid services, but ad-supported movies are relatively new.

As AdAge observes, this could be in part about creating a more tempting target for advertisers. YouTube knows some companies are reluctant to run ads alongside some of its user-uploaded video, especially after incidents where ads were linked to hate speech clips. This would give nervous companies a ‘safe’ place to advertise that could reflect well on their brands.


Quantum computing, not AI, will define our future

The word “quantum” gained currency in the late 20th century as a descriptor signifying something so significant it defied the use of common adjectives. For example, a “quantum leap” is a dramatic advancement (also an early-’90s television series starring Scott Bakula).

At best, that is an imprecise (though entertaining) definition. When “quantum” is applied to “computing,” however, we are indeed entering an era of dramatic advancement.

Quantum computing is technology based on the principles of quantum theory, which explains the nature of energy and matter on the atomic and subatomic level. It relies on the existence of mind-bending quantum-mechanical phenomena, such as superposition and entanglement.

Erwin Schrödinger’s famous 1930s thought experiment involving a cat that was both dead and alive at the same time was intended to highlight the apparent absurdity of superposition, the principle that quantum systems can exist in multiple states simultaneously until observed or measured. Today’s quantum computers contain dozens of qubits (quantum bits), which take advantage of that very principle. Each qubit exists in a superposition of zero and one (i.e., has non-zero probabilities of being a zero or a one) until measured. This is the tantalizing potential of quantum computing: handling massive amounts of data and achieving previously unattainable levels of computing efficiency.
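The measurement rule above can be sketched in a few lines of Python. This is a toy classical simulation, not a real quantum computer: it just shows how a qubit in an equal superposition collapses to 0 or 1 with equal probability when measured.

```python
import random

# A toy qubit: amplitudes for |0> and |1>. The probabilities are the
# squared magnitudes of the amplitudes and must sum to 1. The equal
# superposition below gives a 50/50 outcome, like Schrodinger's cat
# before the box is opened.
alpha, beta = 2 ** -0.5, 2 ** -0.5  # 1/sqrt(2) each

def measure(alpha, beta):
    """Collapse the superposition: return 0 with probability
    |alpha|^2, otherwise 1."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1

print(counts)  # roughly 5,000 of each outcome
```

Sampling like this only mimics the statistics of one qubit; the power of a real quantum computer comes from interference between many qubits' amplitudes, which classical sampling can't reproduce efficiently.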

While Schrödinger was thinking about zombie cats, Albert Einstein was observing what he described as “spooky action at a distance,” particles that seemed to be communicating faster than the speed of light. What he was seeing were entangled electrons in action. Entanglement refers to the observation that the state of particles from the same quantum system cannot be described independently of each other. Even when they are separated by great distances, they are still part of the same system. If you measure one particle, the rest seem to know instantly. The current record distance for measuring entangled particles is 1,200 kilometers (about 745 miles). Entanglement means that the whole quantum system is greater than the sum of its parts.
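That perfect correlation can be sketched with a toy simulation of a Bell pair, the simplest entangled state. Again, this is ordinary classical code standing in for the physics, just to illustrate the statistics: either qubit's result is random, but the two always agree.

```python
import random

# A toy Bell pair: the entangled two-qubit state (|00> + |11>)/sqrt(2).
# Measuring either qubit gives 0 or 1 at random, but the two outcomes
# always agree, no matter how far apart the qubits are.
def measure_bell_pair():
    outcome = random.choice([0, 1])  # 50/50 for the pair as a whole
    return outcome, outcome          # both qubits report the same value

results = [measure_bell_pair() for _ in range(1_000)]
print(all(a == b for a, b in results))  # True: perfectly correlated
```

A shared coin flip like this reproduces the agreement but not the full quantum behavior; Bell's theorem shows that measurements along different angles produce correlations no classical shared-randomness model can match.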

If these phenomena make you vaguely uncomfortable so far, perhaps I can assuage that feeling simply by quoting Schrödinger, who purportedly said after his development of quantum theory, “I don’t like it, and I’m sorry I ever had anything to do with it.”

Various parties are taking different approaches to quantum computing, so a single explanation of how it works would be subjective. But one principle may help readers get their arms around the difference between classical computing and quantum computing. Classical computers are binary: every bit can exist in only one of two states, either 0 or 1. Schrödinger’s cat illustrated that subatomic particles, by contrast, can exhibit innumerable states at the same time. Envision a sphere: a binary state would allow only the “north pole” (0) or the “south pole” (1), whereas a qubit can occupy any point on the entire sphere. Relating those states between qubits enables correlations that make quantum computing well-suited for a variety of specific tasks that classical computing cannot accomplish. Creating qubits and maintaining their existence long enough to accomplish quantum computing tasks is an ongoing challenge.
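That sphere has a standard name, the Bloch sphere, and the mapping from a point on it to a qubit's two amplitudes is simple enough to sketch. The helper below is purely illustrative (not from any particular quantum SDK): two angles pick a point on the sphere, and the poles reduce to the classical bit values.

```python
import cmath
import math

# A qubit state as a point on the Bloch sphere: polar angle theta
# (0 = "north pole" |0>, pi = "south pole" |1>) and azimuth phi.
# A classical bit can only sit at one of the two poles; a qubit can
# occupy any point on the sphere.
def bloch_to_amplitudes(theta, phi):
    alpha = math.cos(theta / 2)
    beta = cmath.exp(1j * phi) * math.sin(theta / 2)
    return alpha, beta

# The poles reproduce the classical bit values...
print(bloch_to_amplitudes(0, 0))        # (1.0, 0j): the state |0>
# ...while a point on the equator is an equal superposition.
a, b = bloch_to_amplitudes(math.pi / 2, 0)
print(abs(a) ** 2, abs(b) ** 2)         # ~0.5 and ~0.5
```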

IBM researcher Jerry Chow in the quantum computing lab at IBM’s T.J. Watson Research Center.

Humanizing Quantum Computing

These are just the beginnings of the strange world of quantum mechanics. Personally, I’m enthralled by quantum computing. It fascinates me on many levels, from its technical arcana to its potential applications that could benefit humanity. But a qubit’s worth of witty obfuscation on how quantum computing works will have to suffice for now. Let’s move on to how it will help us create a better world.

Quantum computing’s purpose is to aid and extend the abilities of classical computing. Quantum computers will perform certain tasks much more efficiently than classical computers, providing us with a new tool for specific applications. Quantum computers will not replace their classical counterparts. In fact, quantum computers require classical computers to support their specialized abilities, such as systems optimization.

Quantum computers will be useful in advancing solutions to challenges in diverse fields such as energy, finance, healthcare and aerospace. Their capabilities will help us cure diseases, improve global financial markets, untangle traffic, combat climate change, and more. For instance, quantum computing has the potential to speed up pharmaceutical discovery and development, and to improve the accuracy of the atmospheric models used to track and explain climate change and its adverse effects.

I call this “humanizing” quantum computing, because such a powerful new technology should be used to benefit humanity, or we’re missing the boat.

Intel’s 17-qubit superconducting test chip for quantum computing has unique features for improved connectivity and better electrical and thermo-mechanical performance. (Credit: Intel Corporation)

An Uptick in Investments, Patents, Startups, and more

That’s my inner evangelist speaking. In factual terms, the latest verifiable, global figures for investment and patent applications reflect an uptick in both areas, a trend that’s likely to continue. Going into 2015, non-classified national investments in quantum computing reflected an aggregate global spend of about $1.75 billion, according to The Economist. The European Union led with $643 million. The U.S. was the top individual nation with $421 million invested, followed by China ($257 million), Germany ($140 million), Britain ($123 million) and Canada ($117 million). Twenty countries have invested at least $10 million in quantum computing research.

At the same time, according to a patent search enabled by Thomson Innovation, the U.S. led in quantum computing-related patent applications with 295, followed by Canada (79), Japan (78), Great Britain (36), and China (29). The number of patent families related to quantum computing was projected to increase 430 percent by the end of 2017.

The upshot is that nations, giant tech firms, universities, and start-ups are exploring quantum computing and its range of potential applications. Some parties (e.g., nation states) are pursuing quantum computing for security and competitive reasons. It’s been said that quantum computers will break current encryption schemes, kill blockchain, and serve other dark purposes.

I reject that proprietary, cutthroat approach. It’s clear to me that quantum computing can serve the greater good through an open-source, collaborative research and development approach that I believe will prevail once wider access to this technology is available. I’m confident crowd-sourcing quantum computing applications for the greater good will win.

If you want to get involved, check out the free tools that the household-name computing giants such as IBM and Google have made available, as well as the open-source offerings out there from giants and start-ups alike. Actual time on a quantum computer is available today, and access opportunities will only expand.

In keeping with my view that proprietary solutions will succumb to open-source, collaborative R&D and universal quantum computing value propositions, allow me to point out that several dozen start-ups in North America alone have jumped into the QC ecosystem along with governments and academia. Names such as Rigetti Computing, D-Wave Systems, 1Qbit Information Technologies, Quantum Circuits, QC Ware and Zapata Computing may become well-known, or they may be subsumed by bigger players or succumb to their burn rates; anything is possible in this nascent field.

Developing Quantum Computing Standards

Another way to get involved is to join the effort to develop quantum computing-related standards. Technical standards ultimately speed the development of a technology, introduce economies of scale, and grow markets. Quantum computer hardware and software development will benefit from a common nomenclature, for instance, and agreed-upon metrics to measure results.

Currently, the IEEE Standards Association Quantum Computing Working Group is developing two standards. One is for quantum computing definitions and nomenclature so we can all speak the same language. The other addresses performance metrics and performance benchmarking to enable measurement of quantum computers’ performance against classical computers and, ultimately, each other.

The need for additional standards will become clear over time.


YouTube quietly added free, ad-supported movies to its site

YouTube quietly added around 100 ad-supported Hollywood movies to its site, beginning last month, according to a new report from AdAge. The titles include a mix of classics like “Rocky” and “The Terminator,” as well as family fare like “Zookeeper,” “Agent Cody Banks,” and “Legally Blonde,” among others.

Before, YouTube had only offered consumers the ability to purchase movies and TV shows, similar to how you can rent or buy content from Apple’s iTunes or Amazon Video.

Currently, YouTube is serving ads on these free movies, but the report said the company is open to working out other deals with advertisers – like sponsorships or exclusive screenings.

YouTube’s advantage in this space, compared with some others, is its sizable user base of 1.9 billion monthly active users and its ability to target ads using data from Google.

The addition of an ad-supported movie marketplace on YouTube follows Roku’s entry into this market, which began last year with the launch of its free collection of movies, called The Roku Channel.

This year, Roku has been expanding the type of content on that channel to also include things like live news from ABC News, Cheddar, Newsmax, Newsy, People TV, Yahoo and The Young Turks, and – more recently – entertainment and live sports. 

Walmart also offers its own free movies collection through Vudu, and recently teamed up with MGM on original content for the service. Tubi operates a streaming service with free, ad-supported content, too. And Amazon is rumored to be working on something similar.


Verizon to introduce next-generation RCS texting in 2019

Bloomberg via Getty Images

RCS support has been slow to roll out, but another major US carrier will soon jump on board. Verizon announced at an event that the company would support the messaging system in “early 2019,” joining Sprint, US Cellular and the limited support currently offered by T-Mobile. While Verizon wouldn’t confirm to The Verge that it planned to support Universal Profile 1.0, the GSMA told the publication that Verizon’s RCS would, and if it does, that will be a significant step toward making RCS the SMS replacement it promises to be. Among its benefits, once adopted by carriers, are read receipts, better group chat support and improved media sending.

Verizon didn’t say exactly when RCS support would roll out, but Fierce Wireless reported last week that the company’s new messaging service could come as early as February.

Verizon owns Engadget’s parent company, Oath (formerly AOL). Rest assured, Verizon has no control over our coverage. Engadget remains editorially independent.


Verily shelves its glucose-monitoring contact lens project


In 2014, Verily, Alphabet’s life sciences subsidiary, teamed up with Alcon to develop a contact lens that could measure glucose levels in tears. The idea was that diabetics would have an easier, less invasive way of keeping track of their glucose levels. But the companies have now decided to shelve that project, as their work has shown that it’s actually quite difficult to obtain consistently accurate glucose measurements from tears.

“In part, this was associated with the challenges of obtaining reliable tear glucose readings in the complex on-eye environment,” Verily CTO Brian Otis said in a blog post. “For example, we found that interference from biomolecules in tears resulted in challenges in obtaining accurate glucose readings from the small quantities of glucose in the tear film. In addition, our clinical studies have demonstrated challenges in achieving the steady state conditions necessary for reliable tear glucose readings.”

However, Verily will move forward with two other lens projects. Alongside its glucose-monitoring contact lens work, it has also been working on a smart accommodating contact lens for presbyopia (age-related farsightedness) as well as an intraocular lens to help improve eyesight after cataract surgery. And the company says it’s also still working on technology for diabetes management, including miniaturized continuous glucose monitors that it’s developing with Dexcom.

“We’re looking forward to the next phase of development on our other two Smart Lens programs with Alcon, where we are applying our significant technical learnings and achievements to prevalent conditions in ophthalmology,” said Otis.


Google releases gorgeous VR short film 'Age of Sail'

Google Spotlight Stories

Google Spotlight Stories has released its latest short, Age of Sail. Directed by Academy Award winner John Kahrs (Disney’s Paperman short), it blends beautiful animation with the story of an old, lonely sailor, played by Ian McShane, who is adrift in the Atlantic Ocean in 1900. When he rescues a young woman (Cathy Ang), who fell overboard from a passing ship, his outlook changes to one of hope.

It’s the first Google Spotlight Stories short to include dialogue, and, clocking in at 12 minutes, it’s the longest one to date. Age of Sail was an official selection at this year’s Venice Film Festival, and it also qualifies for the best animated short film Oscar.

Kahrs and his team had to bear in mind that setting a VR film on the open sea, with a boat rolling on waves, could cause viewers motion sickness. They simplified the look of the sky and ocean, and made sure you’re able to focus on the horizon to minimize the feeling of seasickness.

Age of Sail is available in the Google Spotlight Stories iOS and Android app, Viveport and Steam. If you don’t have a way to watch it in VR, Google also released a theatrical version, which you can watch below.


Former Oracle exec Thomas Kurian to replace Diane Greene as head of Google Cloud

Diane Greene announced in a blog post today that she would be stepping down as CEO of Google Cloud and will be helping transition former Oracle executive Thomas Kurian to take over early next year.

Greene took over the position almost exactly three years ago when Google bought Bebop, the startup she was running. The thinking at the time was that the company needed someone with a strong enterprise background and Greene, who helped launch VMware, certainly had the enterprise credentials they were looking for.

In the blog post announcing the transition, she trumpeted her accomplishments. “The Google Cloud team has accomplished amazing things over the last three years, and I’m proud to have been a part of this transformative work. We have moved Google Cloud from having only two significant customers and a collection of startups to having major Fortune 1000 enterprises betting their future on Google Cloud, something we should accept as a great compliment as well as a huge responsibility,” she wrote.

The company had a disparate set of cloud services when she took over, and one of the first things Greene did was to put them all under a single Google Cloud umbrella. “We’ve built a strong business together — set up by integrating sales, marketing, Google Cloud Platform (GCP), and Google Apps/G Suite into what is now called Google Cloud,” she wrote in the blog post.

As for Kurian, he stepped down as president of product development at Oracle at the end of September. He had announced a leave of absence earlier in the month before making the exit permanent. Like Greene before him, he brings a level of enterprise street cred, which the company needs as it continues to try to grow its cloud business.

After three years with Greene at the helm, Google, which has tried to position itself as the more open cloud alternative to Microsoft and Amazon, has still struggled to gain market share against its competitors, remaining under 10 percent consistently throughout Greene’s tenure.

As Synergy’s John Dinsdale told TechCrunch in an article on Google Cloud’s strategy in 2017, the company had not been particularly strong in the enterprise to that point. “The issues of course are around it being late to market and the perception that Google isn’t strong in the enterprise. Until recently Google never gave the impression (through words or deeds) that cloud services were really important to it. It is now trying to make up for lost ground, but AWS and Microsoft are streets ahead,” Dinsdale explained at the time. Greene was trying hard to change that perception.

Google has not released many revenue numbers related to the cloud, but in February it indicated it was earning a billion dollars a quarter, a number that Greene felt put Google in elite company. By comparison, Amazon and Microsoft were reporting numbers like that for a single month at the time. Google stopped reporting cloud revenue after that report.

Regardless, the company will turn to Kurian to continue growing those numbers now. “I will continue as CEO through January, working with Thomas to ensure a smooth transition. I will remain a Director on the Alphabet board,” Greene wrote in her blog post.

Interestingly enough, Oracle has struggled with its own transition to the cloud. Kurian gets a company that was born in the cloud, rather than one that has made a transition from on-prem software and hardware to one solely in the cloud. It will be up to him to steer Google Cloud moving forward.


Google's next Wear OS update does even more to extend battery life

Cherlynn Low / Engadget

A major update for Google’s wearable Android platform only just arrived and now we’re hearing about the next Wear OS version. Today Google announced that “System Version: H” will include a slew of updates when it rolls out in the “next few months.”

Battery Saver Mode Updates:
This update extends your battery life even further by turning on Battery Saver to only display the time once your battery falls below 10%.

Improved Off Body Efficiency:
After 30 minutes of inactivity your watch will go into deep sleep mode to conserve battery.

Smart App Resume for all Apps:
You can now easily pick up where you left off across all apps on your watch.

Two Step Power Off:
You can now turn off your watch in two easy steps. To turn off your watch, simply hold the power button until you see the power off screen, then select ‘power off’ or ‘restart’.

All of those sound great and could be worthwhile improvements for battery life, especially if they’re available to users with watches based on the older Snapdragon 2100 hardware as opposed to the newer 3100 series. Smart App Resume should make it easier to jump in and out of apps, while off-body efficiency should ease the anxiety of remembering whether you left your device on a charger or just on the shelf.

The only problem right now is that Google hasn’t said which devices will receive this update and the words “Your device may not immediately be eligible for this update and will be determined by your watch manufacturer” aren’t inspiring confidence. Fossil announced that all of its touchscreen watches will get it, but we haven’t seen word from other manufacturers yet. Almost every existing device got the last one, and hopefully that will remain consistent, but with a new naming scheme and a note that said “functionality may vary by device” we’ll have to wait and see.


Google's Night Sight shooting mode for the Pixel 3 is mind-blowing

Literally looks like night and day shooting with Night Sight (right) and without (left).
Image: Raymond Wong/mashable

Holy moly, has Google just changed the smartphone camera game with the release of the Night Sight mode for its Pixel 3 and 3 XL phones.

Announced at its October Pixel 3 launch event, Google boasted Night Sight as a significant leap forward for taking night photos — useful for exposing colors and details lost in the shadows.

I’ve only just tried Night Sight, currently rolling out to Pixel 3 phones via a software update, and my mind’s still piecing itself back together from being blown apart.

It’s no secret Google has been flexing its computational photography and machine-learning skills to enhance shots taken with its Pixel phones.

Though it’s questionable whether we, as photographers and creatives, should be letting Google decide for us what is a “good-looking” photo — the Pixel 3 tends to shoot pictures that are more contrasty, more saturated, and artificially sharpened than an iPhone or Samsung Galaxy — I don’t think anyone disagrees that the company’s leveraging of software to produce better pictures is a game-changer.

Unlike regular DSLRs or mirrorless cameras, where you can attach lenses of all different sizes with different-size apertures to shoot better low-light photos, smartphones are limited by their thickness.

The tiny image sensors inside of our phones can only collect so much light. Phone makers could make these image sensors larger so they could collect more light to take better low-light photos, but it’d also make phones balloon in size, thickness, and weight as well. 

So Google turned to software. And Apple’s done the same, too. And I’d bet good money other phone makers will soon make the move as well.


With HDR+, Google proved it could take one evenly-exposed money shot by combining a series of images taken at short exposures. The results were good and have only become better.

Night Sight uses the same HDR+ technology, but injects it with steroids. Depending on how dark the scene is and the amount of luminance available (measured in lux), the Pixel 3 will take up to 15 shots at varying shutter speeds (e.g., 1/15 of a second or 1 second) and then combine them all into one final picture.
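The core idea of merging many short exposures can be sketched with NumPy. This is an illustrative stand-in for frame stacking, not Google's actual pipeline (which also aligns frames, rejects motion, and does far more tone-mapping): simply averaging N noisy frames cuts random sensor noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dim "true" scene, plus 15 noisy short exposures of it (each pixel
# gets independent Gaussian sensor noise).
scene = np.full((100, 100), 20.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(15)]

single = frames[0]                 # one noisy frame
stacked = np.mean(frames, axis=0)  # naive multi-frame merge

# Averaging N frames cuts random noise by roughly sqrt(N):
print(round(single.std(), 1), round(stacked.std(), 1))  # ~10 vs ~2.6
```

The catch in the real world is that frames taken over a second or more don't line up, which is why the hard part of modes like this is alignment and motion handling, not the averaging itself.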

In other words, Night Sight is the equivalent of a long exposure on a “real” camera. Google gets really technical and nerdy about the details in a blog post, but what you really want to know is: Does it really work like magic?

Yes. And no. But mostly yes.

While the Pixel 3 and 3 XL are Google’s best phones to show off the power of Night Sight because of improved camera components and a faster processor, the original Pixel and the Pixel 2 are also getting the new camera mode.

I haven’t tried Night Sight on any Pixel 1 or Pixel 2 phones yet so I can’t speak to how well it works on older hardware (Google says there are some differences and shots won’t look as good as on Pixel 3). 

On a Pixel 3 XL, however, Night Sight seemingly turns night into day. See for yourself in the shots below.

In the below photos, I pointed the Pixel 3 at a scene so dark I could barely make out what I was shooting. The Pixel 3’s camera brightens the viewfinder in Night Sight mode so you can see what you’re shooting, but it looks really noisy.

However, you won’t see that level of extreme image noise in your photo after it’s finished processing. 

Without firing the flash, the Pixel 3’s Night Sight mode exposed this faux Thanksgiving dinner scene, bringing out the colors that would be lost without the mode turned on.

It’s a lovely shot and would work just fine for posting to Instagram or Twitter, but the picture’s a little soft overall. In really dark scenarios, the Pixel 3 struggled to find something to autofocus on. There’s a button in the upper right corner of the mode that lets you manually change the focus to “near” or “far”. I’ll have to shoot more with it in the real world to see how well it really works, though.


Image: Raymond Wong/Mashable

Night Sight enhances dynamic range. Similar to a long exposure, the colors can be more exaggerated. There wasn’t a green cast on the standup bass, but the instrument is more defined and pops in the image.



This candle-lit dinner scene wasn’t quite as dark as the one above, but you can still see Night Sight brings out the shadows nicely.



Night Sight isn’t always the best mode to shoot low-light photos with, though. Sometimes you want a little contrast and shadow to give a shot a certain tone. Night Sight can sometimes flatten the colors in an image like in the shot below. 



Mashable Deputy Tech Editor Michael Nuñez looks more sprightly here. His posture is more visible and the food looks more appetizing.



Not a whole lot of image noise here, either. There’s a teensy bit of skin-smoothing going on, but it still looks pretty darn good.

Night Sight made Mike look less tired!



As good as Night Sight is, you don’t wanna use it all the time. In some night shots, the regular camera just produces a better look that’s less washed out and has less image noise (see the black sky on the right side of the photos below) IMO:



Night Sight also works with the selfie camera. On the left is what the scene looked like to my feeble human eyes. The image is a little soft, but still… like wow.



I can’t help but be really, really impressed by Night Sight, even though it can be hit or miss with photos sometimes coming out completely blurry, soft, or full of image noise.

These nitpicks aren’t enough to tarnish Night Sight, though, because this is just the first version. It’ll only get better, like HDR+ has, and as features like optical image stabilization improve. Using a tripod should also improve sharpness.

Night Sight feels almost like black magic. It’s really not very different from Sony’s A7S II, which is beloved for its ability to do the same. The difference is how the Pixels are doing it: instead of hardware, Google’s doing it all with software. Night Sight puts the Pixel 3 cameras several steps ahead of the competition — at least when it comes to night photography.

At first, I was really concerned about Night Sight misrepresenting reality. And in many ways it does. Night Sight is like having night vision — it lets you see what your naked eyes can’t. But just like a long exposure, it opens up new creative expressions for mobile photography. You should use it sparingly, but it’s gonna be hard not to. I’d love to see a future version shoot both a Night Sight version and a regular version and let us pick the one we want.



Once again, Facebook has a lot of explaining to do

Just when you thought things couldn’t get worse for Facebook, The New York Times has come out with a bombshell exposé of the company’s tumultuous last two years. That, of course, includes its handling (er, mishandling) of the Cambridge Analytica data privacy scandal and other controversies, like the lack of transparency around Russian interference on its site leading up to the 2016 US presidential election. The paper says it spoke with more than 50 people, including current and former Facebook employees, who detailed the company’s efforts to contain, deny and deflect negative stories that came its way.

Facebook, what with its questionable “War Room” and all, seemed to be on the right path after apparently keeping things under control during the recent midterm elections in the US. Aside from the 115 accounts it blocked the day before the elections after being tipped off by law enforcement officials, no major incidents of fake news or malicious ads were reported — though at this point it wouldn’t be surprising if Facebook came out later and said, “well, actually…” After all, it’s not as if the company has been completely honest about its recent mishaps, as this week’s New York Times report highlights.

2015 WebSummit Day 2 - Enterprise Stage

Alex Stamos, former Chief Security Officer at Facebook

Perhaps the most damaging allegation comes from a Facebook “expert on Russian cyber warfare” who reported to former Chief Security Officer Alex Stamos. The expert claims that top executives at the social media giant, including CEO Mark Zuckerberg and COO Sheryl Sandberg, had known about Kremlin activity on Facebook since 2016. Facebook disputes this. But none of those details came out publicly until fall 2017, when the company reported that 126 million Facebook users were exposed to Russian-linked ads, misinformation and fake accounts. That propaganda, as we now know, was intended to create discord among the American people.

“Personally I think the idea that fake news on Facebook influenced the election in any way is a pretty crazy idea,” Zuckerberg said in November of 2016, allegedly months after Facebook was already aware of Russia using its site to try to interfere in US elections.

To make matters worse, the company reportedly hired a consulting firm called Definers Public Affairs to do some of its dirty work, including lobbying against lawmaker critics in Washington, D.C. Definers also ran a campaign to discredit anti-Facebook activists by linking them to known Democrat donor George Soros, according to The Times. But the firm didn’t stop there. Some of Definers’ other work, sources told The Times, involved publishing negative stories about Google and Apple on conservative news site NTK Network, an affiliate of Definers Public Affairs.

Facebook’s Sheryl Sandberg testified in Congress last September.
Drew Angerer via Getty Images

Facebook’s pettiness, per the report, went as far as Zuckerberg ordering members of his management team to start using Android smartphones instead of iPhones, after Apple CEO Tim Cook took a jab at Facebook for not protecting its users’ data. “I think the best regulation is no regulation [but] self-regulation,” Cook told MSNBC in an interview last March in response to a question about Facebook’s Cambridge Analytica incident. “However, I think we’re beyond that here.” He added, “I wouldn’t be in this situation.” Sure, Zuckerberg may believe he has the power to make his staff stop using iPhones at his demand, but it seems like his energy could’ve been better spent elsewhere — like actually trying to fix the issues at hand.

Not surprisingly, Facebook is denying many of the allegations from The New York Times’ report. In a blog post, the company said “there are a number of inaccuracies in the story,” including that it knew of Russian activity in the spring of 2016 — though the timeline it provides seems kind of murky. Facebook also claims Zuckerberg “never encouraged our employees and executives” to use Android. “Tim Cook has consistently criticized our business model and Mark has been equally clear he disagrees,” the company said. “So there’s been no need to employ anyone else to do this for us. And we’ve long encouraged our employees and executives to use Android because it is the most popular operating system in the world.”

As far as Definers, in a call with reporters on Thursday, Zuckerberg said he only “learned about this relationship when I read the NYT piece yesterday.” That’s interesting considering what Facebook said in a statement: “Our relationship with Definers was well known by the media — not least because they have on several occasions sent out invitations to hundreds of journalists about important press calls on our behalf.”


Facebook says it has ended its contract with Definers, adding that The Times “is wrong to suggest that we ever asked Definers to pay for or write articles on Facebook’s behalf — or to spread misinformation.” Thing is, it’s not as if The New York Times has a track record of reporting inaccurate stories, whereas Facebook’s recent mishaps have all but exposed its lack of transparency when something goes wrong. And that’s been happening quite often lately.

At this point, it’s going to take a lot for Facebook to gain people’s trust back, especially as more stories like this continue to come out. What the company needs to do is brace itself for regulation, because it clearly can’t be trusted to regulate itself, and lawmakers around the world are starting to agree.

Images: Sportsfile/Corbis via Getty Images (Alex Stamos); Thomas Trutschel/Photothek via Getty Images (Facebook app)
