Respondology helps brands and influencers hide toxic comments

“Don’t read the comments” is one of those cliches that sticks around because it’s still good advice — maybe the best advice. But the team at Respondology is trying to change that.

The company started out by helping brands find and respond to messages on social media. Senior Vice President of Sales Aaron Benor explained that in the course of that work, it also built a tool to mitigate “the vitriol, the awful toxicity of online social media.”

“We realized that the tool had a lot more legs than we thought, and we decided to pursue it full force and sunset the advertising business,” Benor said. “What I really love about this new product is that the big picture, long-term, is: We can put an end to cyberbullying.”

That’s a big goal, and to be clear, Respondology isn’t trying to reach it immediately. Instead, it’s launching a product called The Mod that allows individual brands and influencers to weed out toxic, trollish or spammy comments on Instagram and YouTube, rendering them invisible to most followers.

Benor explained that the product has two lines of defense. First, there’s automated keyword detection, where certain words will cause a comment to be flagged. The customer can decide which categories they want to filter out (“mild” or “severe” swearing, sexual references, racist remarks and so on), and they can also view and reinstate flagged comments from their Respondology dashboard.

Respondology settings

Second, the company has built up a network of around 1,500 moderators who review all the comments that aren’t automatically flagged and decide whether each one is appropriate to post. So even if a comment doesn’t use one of the red-flag keywords, a human can still catch it. (Customers that want to be extra careful can also turn on an option where multiple moderators vote on whether a comment should be hidden or posted.)
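Respondology hasn’t published its implementation, but the two-stage flow Benor describes — keyword categories first, then human review and optional voting — can be sketched roughly as below. All the names, categories and the voting quorum are illustrative assumptions, not the company’s actual code.

```python
# A minimal sketch of the two-stage moderation flow described above.
# Names, categories and the quorum are hypothetical, not Respondology's code.
from dataclasses import dataclass, field

# Keyword lists per category; the customer chooses which categories to enable.
KEYWORD_CATEGORIES = {
    "mild_swearing": {"darn", "heck"},                 # placeholder words
    "severe_swearing": {"expletive1", "expletive2"},
    "spam": {"freefollowers", "clickhere"},
}

@dataclass
class Comment:
    author: str
    text: str
    visible: bool = False                  # stays hidden until it clears review
    flags: set = field(default_factory=set)

def keyword_pass(comment: Comment, enabled: set) -> bool:
    """First line of defense: flag the comment if it hits an enabled category."""
    words = {w.strip(".,!?").lower() for w in comment.text.split()}
    for category in enabled:
        if words & KEYWORD_CATEGORIES.get(category, set()):
            comment.flags.add(category)
    return bool(comment.flags)             # flagged comments go to the dashboard

def human_pass(comment: Comment, votes: list, quorum: int = 1) -> None:
    """Second line of defense: moderators vote; enough approvals make it visible."""
    comment.visible = sum(votes) >= quorum

# A profane comment is auto-flagged; a clean one goes to the moderator network.
troll = Comment("troll", "expletive1 this brand")
fan = Comment("fan", "Love the new shoes!")
for c in (troll, fan):
    if not keyword_pass(c, {"severe_swearing", "spam"}):
        human_pass(c, votes=[True])        # one moderator approves
print(troll.visible, fan.visible)          # False True
```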

Benor demonstrated the system for me using a test Instagram account. I got to play the troll, posting several comments at his prompting. Each time, the comment was visible for just a few seconds before the Respondology system sprang into action and the comment disappeared.

When I posted profanity, it was automatically flagged and stayed hidden, while my other comments popped up in the moderation app — and if they were approved, they’d reappear on Instagram. All of this activity remained hidden from my account, where it just looked like my comments had been published normally.

Of course, the big social platforms have built their own moderation tools, but it seems clear that the problem remains unsolved. And even if platform moderation improves, Benor said, “This is an agnostic tool. [Our customers] have complete choice and control. This is not the platform saying, ‘This is what we’re going to offer you’; this is what’s going to work for you as a creator.”

We also discussed a recent story in The Verge highlighting the impact that moderating toxic content can have on people’s mental and emotional health. But Benor argued that while Facebook moderators have to spend most of their time dealing with “the worst of the worst,” Respondology’s team is mostly just approving innocuous commentary. Plus, they’re freelancers who only work when they want, and can stop at any time.

“We haven’t heard any negative feedback,” Benor added. “We all act as moderators ourselves — because what better way is there to know the product and understand it — and I’ve never been shocked by what I’ve seen.”

Respondology charges customers of The Mod based on the volume of comments. Benor said the pricing can range from “a few dollars a month to a few thousand dollars a month.”

Ultimately, he’s hoping to release a version for non-professional users too — so parents, for example, can automatically hide the worst comments from their kids’ online accounts.


Facebook staff raised concerns about Cambridge Analytica in September 2015, per court filing

Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.

Last year a major privacy scandal hit Facebook after it emerged CA had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87M Facebook users without proper consent.

Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters to target political messages — with the company working first for the Ted Cruz and later the Donald Trump presidential campaigns.

But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015 — i.e. months earlier than Facebook previously told lawmakers it became aware of the GSR/CA breach (December 2015).

The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.

Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive to require that.

In its opposition to Facebook’s motion to seal the document, the District includes a redacted summary (screengrabbed below) of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.

According to the District’s account a Washington D.C.-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.

Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).

Zuckerberg responded with a “yes” to Doyle’s question.

Facebook repeated the same line to the UK’s Digital, Culture, Media and Sport (DCMS) committee last year, over a series of hearings with less senior staffers.

Damian Collins, the chair of the DCMS committee — which made repeat requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed — tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.

The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.

The earlier charge that it misled the committee refers to a hearing in Washington in February 2018 — when Facebook sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions — where the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.

The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.

Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.

The question now is: if Facebook knew there were concerns about CA data-scraping prior to hiring the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire Chancellor?

The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.

Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.

But the new timeline that’s emerged of what Facebook knew when makes those questions more pressing than ever.

Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:

Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath

In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.

Facebook did not engage with questions about any of the details and allegations in the court filing.

A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”

It goes on to suggest that Facebook’s concern to seal the document is “reputational”, suggesting — in another redacted segment (below) — that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.

“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.

As we’ve reported previously, the UK’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.

It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal, or whether there were multiple email threads raising concerns about the company.

The ICO passed the correspondence it obtained from Facebook to the DCMS committee — which last month said it had agreed at the request of the watchdog to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month — citing rules against disclosing personal data and its ongoing investigation into CA meaning the risk of release might be prejudicial to its investigation.)

In its final report the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:

[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

We reached out to the ICO for comment on the information to emerge via the District of Columbia suit, and also to the Irish Data Protection Commission, the lead DPA for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.

An ICO spokesperson told us: “We are aware of these reports and will be considering the points made as part of our ongoing investigation.”

Last year the ICO issued Facebook with the maximum possible fine under UK law for the CA data breach.

Shortly after, Facebook announced it would appeal, saying the watchdog had not found evidence that any UK users’ data was misused by CA.

A date for the hearing of the appeal set for earlier this week was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.

This report was updated with comment from the ICO.


Facebook’s AI couldn’t spot mass murder

Facebook has given another update on measures it took and what more it’s doing in the wake of the livestreamed video of a gun massacre by a far right terrorist who killed 50 people in two mosques in Christchurch, New Zealand.

Earlier this week the company said the video of the slayings had been viewed less than 200 times during the livestream broadcast itself, and about 4,000 times before it was removed from Facebook — with the stream not reported to Facebook until 12 minutes after it had ended.

According to the company, none of the users who watched the killings unfold on its platform in real time reported the stream.

It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2M of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. Though, as we pointed out in our earlier report, those stats are cherry-picked — and only represent the videos Facebook identified. We found other versions of the video still circulating on its platform 12 hours later.

In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to make sure such content cannot be distributed by its platform.

The prime minister of New Zealand, Jacinda Ardern, told media yesterday that the video “should not be distributed, available, able to be viewed”, dubbing it: “Horrendous.”

She confirmed Facebook had been in contact with her government but emphasized that in her view the company has not done enough.

She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the content they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR which it titles: “A Further Update on New Zealand Terrorist Attack”.

Here it writes that “people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has previously put out.

Including that the massacre video was quickly shared to the 8chan message board by a user posting a link to a copy of the video on a file-sharing site. This was prior to Facebook itself being alerted to the video being broadcast on its platform.

It goes on to imply 8chan was a hub for broader sharing of the video — claiming that: “Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.”

So it’s clearly trying to make sure it’s not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.

A further detail it chooses to dwell on in the update is how the AIs it uses to aid the human review of flagged Facebook Live streams are in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts” — with the AI pushing such videos to the top of human moderators’ content heaps, above all the other stuff they also need to look at.

Clearly “harmful acts” were involved in the New Zealand terrorist attack. Yet Facebook’s AI was unable to detect a massacre unfolding in real time. A mass killing involving an automatic weapon slipped right under the robot’s radar.

Facebook explains this by saying it’s because it does not have the training data to create an algorithm that understands it’s looking at mass murder unfolding in real time.

It also implies the task of training an AI to catch such a horrific scenario is exacerbated by the proliferation of videos of first person shooter videogames on online content platforms.

It writes: “[T]his particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”

The videogame element is a chilling detail to consider.

It suggests that a harmful real-life act that mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of innocuous, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or suspected — that filming the attack from a videogame-esque first person shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)

Facebook’s post is doubly emphatic that AI is “not perfect” and is “never going to be perfect”.

“People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, about half of whom are doing the sweating hideous toil of content review.

This is, as we’ve said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform.

Moderating Facebook remains a hopeless task because so few humans are doing it.

Moreover AI can’t really help. (Later in the blog post Facebook also writes vaguely that there are “millions” of livestreams broadcast on its platform every day, saying that’s why adding a short broadcast delay — such as TV stations do — wouldn’t at all help catch inappropriate real-time content.)

At the same time Facebook’s update makes it clear how much its ‘safety and security’ systems rely on unpaid humans too: Aka Facebook users taking the time and mind to report harmful content.

Some might say that’s an excellent argument for a social media tax.

The fact Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack unfolded meant the content was not prioritized for “accelerated review” by its systems, which it explains prioritize reports attached to videos that are still being streamed — because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.

Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.

But again it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for acceleration review in the hours after the stream ended if it had been reported as suicide content.

So the ‘problem’ is that Facebook’s systems don’t prioritize mass murder.

“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding it’s “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

No shit.
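Reduced to a predicate, the gating Facebook describes looks something like the sketch below. The field names and the recency window are my own assumptions for illustration, not Facebook’s actual code.

```python
# Illustrative only: the accelerated-review gating as described in Facebook's
# post. Field names and the recency window are assumptions, not Facebook's code.
from datetime import datetime, timedelta
from typing import Optional

def accelerate_review(report_reason: str,
                      is_live: bool,
                      ended_at: Optional[datetime],
                      now: datetime,
                      recency_window: timedelta = timedelta(hours=6)) -> bool:
    """Should a reported video jump to the front of the human review queue?"""
    if is_live:
        return True                        # still streaming: always accelerate
    recently_live = ended_at is not None and now - ended_at <= recency_window
    # Per the post, recently-live videos were only accelerated for suicide
    # reports -- which is why the Christchurch reports, filed under other
    # reasons, were handled through slower procedures.
    return recently_live and report_reason == "suicide"
```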

Facebook also discusses its failure to stop versions of the massacre video from resurfacing on its platform, having been — as it tells it — “so effective” at preventing the spread of propaganda from terrorist organizations like ISIS with the use of image and video matching tech.

It claims its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart filters, as well as by the various ways “a broader set of people distributed the video and unintentionally made it harder to match copies”.

So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope.

“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats,” it writes, in what reads like another attempt to spread blame for the amplification role that its 2.2BN+ user platform plays.

In all, Facebook says it found and blocked more than 800 visually distinct variants of the video that were circulating on its platform.

It reveals it resorted to using audio matching technology to try to detect videos that had been visually altered but had the same soundtrack. And again claims it’s trying to learn and come up with better techniques for blocking content that’s being re-shared widely by individuals as well as being rebroadcast by mainstream media. So any kind of major news event, basically.
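Facebook hasn’t detailed how its audio matching works, but the underlying idea — that a soundtrack fingerprint survives visual re-edits — can be shown with a toy example. The coarse loudness-envelope fingerprint below is my own simplification, not Facebook’s system.

```python
# Toy illustration of audio matching: a coarse loudness-envelope fingerprint
# of the soundtrack survives visual edits, crops and screen re-recordings.
# This is a simplification for illustration, not Facebook's matching tech.
import array
import wave

def audio_fingerprint(path: str, frame_ms: int = 100) -> list:
    """One bit per frame: 1 if the frame is louder than the previous one."""
    with wave.open(path, "rb") as w:
        rate, channels = w.getframerate(), w.getnchannels()
        pcm = array.array("h", w.readframes(w.getnframes()))  # assumes 16-bit PCM
    frame_len = int(rate * frame_ms / 1000) * channels
    energies = [sum(abs(s) for s in pcm[i:i + frame_len])
                for i in range(0, len(pcm) - frame_len, frame_len)]
    return [1 if later > earlier else 0
            for earlier, later in zip(energies, energies[1:])]

def same_soundtrack(fp_a: list, fp_b: list, threshold: float = 0.9) -> bool:
    """Two clips probably share audio if most fingerprint bits agree."""
    n = min(len(fp_a), len(fp_b))
    if n == 0:
        return False
    return sum(a == b for a, b in zip(fp_a, fp_b)) / n >= threshold

# Usage: compare the original clip against a visually re-edited copy.
# print(same_soundtrack(audio_fingerprint("original.wav"),
#                       audio_fingerprint("reedited_copy.wav")))
```

Note that a fingerprint like this is exactly what breaks down when re-sharers swap the soundtrack as well as the visuals — which is the limitation flagged below.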

In a section on next steps, Facebook says improving its matching technology to prevent the spread of inappropriate viral videos is its priority.

But audio matching clearly won’t help if malicious re-sharers re-edit the visuals and switch the soundtrack too in future.

It also concedes it needs to be able to react faster “to this kind of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.

Another priority it claims among its “next steps” is fighting “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.

It’s glossing over plenty of criticism on that front too though — including research that suggests banned far right hate preachers are easily able to evade detection on its platform. Plus its own foot-dragging on shutting down far right extremists. (Facebook only finally banned one infamous UK far right activist last month, for example.)

In its last PR sop, Facebook says it’s committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective attempt to stave off tighter regulation.

“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.


Instagram is the latest hotbed for conspiracy theories

You might open Instagram to see what your friends are doing, look at a cute puppy or like pretty pictures of other people’s food — but there’s something much darker under the surface. While other platforms are working to eradicate hate speech and stop the spread of conspiracy theories, hate-fueled and misguided information is flourishing on Instagram. As The Atlantic writes, Instagram is “the internet’s new home for hate.”

Like its social media peers, Instagram has the algorithms and huge user base to share information with millions. But the platform stands out because so many of its users are so young — and as you might guess, many are naive enough to believe conspiracy memes. That’s especially alarming, as more teens are using Instagram as a news source.

While Facebook and YouTube have made high-profile attempts to remove footage from the Christchurch shooting, it can still be found on Instagram — right next to conspiracies claiming the shooting was staged. Search pretty much any conspiracy theory, and you’ll find Instagram posts, memes and accounts backing it, many of which are specifically targeted at Gen-Z.

It’s easy to see how someone might get sucked in. As The Atlantic’s Taylor Lorenz explained, when she followed @the_typical_liberal, she was inundated with follow requests from accounts linked to QAnon — a conspiracy theory claiming that a “deep state” is plotting to take down Donald Trump. The app also prompted her to follow other notorious far-right personalities, including Milo Yiannopoulos, Laura Loomer, Alex Jones and Candace Owens.

An Instagram spokesperson told The Atlantic that Instagram, along with its parent company, Facebook, continues to study organized hate speech and to remove it when it’s found. Earlier this month, Facebook announced a plan to combat misinformation around vaccines. And YouTube says it will stop recommending conspiracy videos — it’s been blamed for the rise in Flat Earthers. But we’ve yet to see any kind of announcement like that from Instagram. It’s possible that Facebook’s policies apply to Instagram, as well, but that’s not entirely clear.

If you have an account, you can find conspiracies about everything from the Ethiopian Airlines crash being a hoax to Ruth Bader Ginsburg being dead, former White House Chief of Staff John Podesta being partially responsible for the New Zealand shooting and Islamic terror camps in the US. Of course, there’s anti-vaccination rhetoric and jokes about killing women, Jews, Muslims and liberals. So, while your Instagram feed might be fairly happy-go-lucky, the platform as a whole has all the right ingredients to be the internet’s next dumpster fire.


A week with Twitter's attempt at a more civil internet

Over the past few months, Twitter CEO Jack Dorsey has been adamant that one of his goals is to “increase the health of public conversation” on the site. Because it’s no secret that, as great as Twitter is at connecting you with people across the world, it’s also great at connecting you with bots, trolls and spam. Unsurprisingly, Twitter wants to change that. And it’s hoping to find a solution by publicly testing new conversation features, through an experimental program that users can apply to participate in. This launched last week as an app called Twttr, which I’ve been using as my main tool for reading and writing tweets for the past week.

With Twttr, the company says it wants to make conversations easier to read, understand and join. And to do that it’s using features like color-coded chat bubbles to help you browse threads more efficiently. For instance, if someone you follow replies to one of your tweets, their response will be highlighted by a light-blue tag, making it easier to spot. This can be particularly helpful if you have a large number of followers, or have a tweet that goes viral and generates a lot of responses. It’s intended to filter out the noise and keep you engaged with people you actually know, as opposed to strangers.

Alternatively, if someone you don’t follow starts a conversation with you, their tweets will have a grey tag, similar to the “Original Tweeter” label Twitter has tried in the past. It’s clear that Twitter wants to make the biggest changes to how you interact with others in your mentions, since the tweaks there go deeper than colored bubbles. In Twttr, there are thread indentations designed to help you keep track of replies that may branch off from the main conversation. Those are complemented by a “show more” button which hides responses that, according to Twitter, may be abusive or spammy.
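Twitter hasn’t said how the prototype decides which treatment a reply gets, but based only on what’s visible in Twttr, the routing could look roughly like this sketch. The spam score and its threshold are hypothetical.

```python
# A guess at how Twttr's reply treatments could be assigned, based only on
# what's visible in the prototype; the spam score and threshold are hypothetical.

def reply_treatment(author: str, viewer_follows: set, abuse_or_spam_score: float) -> str:
    """Map a reply to one of the treatments visible in Twttr threads."""
    if abuse_or_spam_score >= 0.8:     # hypothetical threshold
        return "collapsed"             # tucked behind the "show more" button
    if author in viewer_follows:
        return "light_blue_tag"        # reply from someone you follow
    return "grey_tag"                  # reply from someone you don't follow

# Example:
print(reply_treatment("friend", {"friend", "colleague"}, 0.1))   # light_blue_tag
print(reply_treatment("stranger", {"friend"}, 0.05))             # grey_tag
print(reply_treatment("bot4921", {"friend"}, 0.97))              # collapsed
```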

A closer look at Twitter's prototype app.

So far, the experience isn’t drastically different, compared to the main Twitter app. But there are aspects of the beta that I’m starting to like, such as the colored chat bubbles that make it easier to keep up with a conversation. At the same time, though, it’s worth noting that the Twttr app doesn’t support all of Twitter’s mobile features. That includes the revamped camera, which makes it hard for me to use the prototype app as my daily driver.

It’s too early to tell whether these experimental features will manage to successfully filter bots, trolls or spammers completely out of your mentions. But I have noticed that the color-coded labels and indented tweets let me follow threads more easily. And they help me decide which replies I actually want to read and interact with. Meanwhile, the “show more” button can filter out people who may be trolling, although I have come across tweets in some of its hidden replies that aren’t abusive or spammy at all.

I think what bugs me the most about the “show more” feature is that, if a thread within a thread becomes too long, it just looks odd. Basically, the more you scroll to read the responses, the smaller the tweet boxes get, and that makes it extremely difficult and tedious to read tweets.



Twitter wants threads to be a place for healthier conversations, and two things that can get in the way of that are likes and retweets — engagement tools that Dorsey has said aren’t necessarily the right incentives for people. That’s why in Twttr, the heart and retweet icons aren’t visible at first when you’re browsing threads — they only show up once you tap to reply to a tweet. If you tap and hold either button, you can see who retweeted or liked a tweet. And although you can still see and interact with the icons on your main feed, this shows that Twitter and Dorsey are at least considering getting rid of the like button in some areas of the social network.

Still, Twitter has made it clear that these features might never make it beyond Twttr and into its main application. That said, it’s a good way to at least get an idea of the ways the company is thinking about changing the service. Naturally, the whole point of Twttr is to get your feedback on its experiments and, if you get accepted into the prototype program, you can share your thoughts with the company directly from the app. Twitter is sending surveys periodically, as well, which ask you about your experience with reading replies or whether you prefer the Twttr or Twitter apps.

You can expect Twttr to keep changing as Twitter continues to roll out new experimental features, such as letting you subscribe to relevant threads — which leaked recently but hasn’t made its way to the Twttr app yet. I do hope that Twitter brings some of the features from its main app to Twttr, though, because right now the new Stories-style camera doesn’t work. That means I’m having to switch back and forth between the two apps, and that’s basically the only thing keeping me from using Twttr all the time.
