Twitter axed 142k spammy apps and 130M ‘low-quality’ Tweets in 1 week of Q1 – TechCrunch

Twitter is making good on its pledge to fight the persistent problems of spam, bots, harassment, and misinformation that have plagued the social platform for years. Today, in its generally positive Q1 earnings report, the company announced that the changes it has made to TweetDeck and its API — two of the most common spam vectors on Twitter — over the past quarter have translated into real numbers that point to overall improvements in quality on the service.

Specifically, according to figures published in the company’s letter to investors, 142,000 apps, accounting for 130 million Tweets, have had their API access revoked; and there are now 90 percent fewer accounts using TweetDeck to create junk Tweets.

To note, Twitter’s new changes took effect only on March 23, and the earnings report covers only activity for the three months ending March 31 — meaning these numbers reflect barely a week of enforcement. The effect over the longer term will likely be even more significant.

The TweetDeck stat — 90 percent fewer accounts using TweetDeck to create false information and automated engagement spam — is the result both of changes to TweetDeck itself and of a new, more proactive approach that Twitter is taking.

In February, Twitter stopped allowing automated mass retweeting — or TweetDecking, as it’s been called by some — in which power users turned to TweetDeck to retweet posts across the masses of accounts they managed, or across smaller groups of users who each managed many accounts, a technique that helps a Tweet go viral. Some weeks later it moved to suspend a number of accounts that were guilty of the practice.

Policies and enforcement around the company’s API have also been tightened up. The 142,000 applications that are no longer connected to the API were responsible for no fewer than 130 million “low-quality Tweets”. That’s a sizeable volume on its own, but — given the Twitter model — it’s even more impactful, since those Tweets spurred further interactions and retweets beyond the spam accounts themselves, perpetuated by ordinary users. As with TweetDeck, the API changes were part of the larger overhaul Twitter made around automation and multiple accounts.

It’s an interesting turn for the company: given that the mass-action Tweeting ability has been so hugely misused, it’s a wonder Twitter ever allowed it in the first place. It may have been one of those badly conceived decisions made when Twitter needed to demonstrate growth and simply wanted to bring more activity to a then-smaller platform.

Beyond its own desire to be a force for good and not abuse, this is also something that Twitter has been somewhat forced to address. Social media sites like Twitter and Facebook have proven to have a huge role in disseminating information, but that spotlight has taken on a particularly pernicious hue in recent times. The rise of fake news — and the role it might have played in the outcome of the EU referendum in the UK and the most recent presidential election in the US — and extreme cases of harassment online are two of the uglier examples of where social sites might have an obligation to play a stronger role beyond that of simply being a conduit for information. Twitter taking better control of this is an important step, and perhaps one it would rather take on its own terms.

In any case, this appears to be just the start of how Twitter hopes to raise the tone, and generally make its platform a safer and nicer place to be. “Our systems continue to identify and challenge millions of suspicious accounts globally per week as a result of our sustained investments in improving information quality on Twitter,” the company notes.

There are also some interesting plans in the pipeline. The company has been on a “health” kick of late, and has been looking to crowdsource suggestions for how to improve trust and safety, and reduce abuse and spam, on the platform. An RFP that it issued to stakeholders — and anyone interested in helping — has so far yielded 230 responses from “global institutions”, the company said. “We expect to have meaningful updates in the second quarter, and we’re committed to continuing to share our progress along the way.”

We are listening to the earnings webcast and will update with more related to this as we hear it.

Twitter beats expectations with $665M in revenue amid its turnaround hopes – TechCrunch

It looks like Twitter, the oft-beleaguered social network that’s still worth more than Snap, will still hold that status for a little longer after delivering a stronger-than-expected quarter this morning.

Twitter’s monthly active users barely grew — though they did, indeed, grow — by around 3% worldwide year-over-year, to around 336 million. That isn’t dramatic growth, or a huge base in the scope of how large Facebook is, but it still means that Twitter isn’t losing those users. The company is also re-entering a critical time heading into another year of elections. All of this matters to the turnaround story it’s trying to sell on Wall Street, where it was at one point worth more than double what it is now.

The company beat Wall Street’s expectations by delivering $665 million in revenue, sending the stock up about 5% this morning. Here’s the final scorecard:

  • Monthly active users: 336 million, up 3% year-over-year and compared to around 334 million expected
  • U.S. MAUs: 69 million, about flat year-over-year
  • International MAUs: 267 million, up 4% year-over-year
  • Q1 Revenue: $665 million, compared to $608 million Wall Street estimates and up 21% year-over-year
  • Q1 Earnings: 16 cents per share, compared to 12 cents per share estimates

While all of this looks pretty strong, Twitter had a bumpy but ultimately positive 2017 on Wall Street toward the back end of the year. It’s been making significant moves to try to curb abuse and harassment, and has actually been tweaking the product in some ways, even if they don’t feel particularly earth-shattering. Expanding the character count from 140 to 280 might not seem like a lot, but it does pack more information into each Tweet, and any bit of added engagement helps Twitter sustain itself in the long run.

Late last year, Twitter passed Snap in market cap. While this is largely symbolic, it’s kind of a snapshot of the pressure both networks are under to show that advertisers are actually interested in a platform beyond Facebook. Both companies are pretty volatile and have to sell Wall Street on growth stories. Twitter has often been slammed for being difficult to use and having a lot of problems related to harassment and abuse, and it’s spent much of the last year trying to fix those problems.

(Interestingly, Twitter’s stock-based compensation expense — an expense that’s been hounding Twitter for some time — increasingly seems to be getting under control. It’s down to around $73 million in the first quarter, compared to $117 million in the first quarter last year.)

While it won’t be the size of Facebook, Twitter has to position itself as a unique spot where advertisers can reach an audience that is in a different kind of behavioral mode than it is on Facebook. Twitter has sought to specialize in a live feed of information, whether that means rejiggering the timeline to surface important information or investing more in video. That, theoretically, means Twitter could sell itself as a platform with a higher level of engagement in certain activities — something Snap has done in order to position itself in a positive way for Wall Street.

All this has given Twitter a way to show that while its revenue is not at the scale of Facebook’s, it’s a different kind of revenue, and one that might have a lot of value for advertisers. If it can do that, continue to scale up its user base over time and move into significant news events like an election cycle, it might be able to pick up more and more advertisers. There was a point when we were talking about how its advertising revenue had completely stalled and was headed into a tailspin, but it looks like it’s actually gotten that under control.

That’s also why Twitter loves to show — and talk about — a chart in its quarterly earnings releases that has only one of the two axes required to actually be a chart. It shows year-over-year daily active user growth, but the company doesn’t like to offer any baseline for how many of its users are actually super-active DAUs. Nonetheless, Twitter trots the chart out in all its glory.

WhatsApp raises minimum age to 16 in Europe ahead of GDPR – TechCrunch

Tech giants are busy updating their T&Cs ahead of the EU’s incoming data protection framework, GDPR. Which is why, for instance, Facebook-owned Instagram is suddenly offering a data download tool. You can thank European lawmakers for being able to take your data off that platform.

Facebook-owned WhatsApp is also making a pretty big change as a result of GDPR — noting in its FAQs that it’s raising the minimum age for users of the messaging platform to 16 across the “European Region”. This covers both EU and non-EU countries (such as Switzerland), as well as the in-the-process-of-Brexiting UK (which is set to leave the EU next year).

In the US, the minimum age for WhatsApp usage remains 13.

Where teens are concerned, GDPR introduces a new provision concerning children’s personal data — setting a 16-year-old age limit on kids being able to consent to their data being processed — although it does allow some wiggle room for individual countries to write a lower age limit into their laws, down to a hard floor of 13 years old.

WhatsApp isn’t bothering to try to vary the age gate depending on limits individual EU countries have set, though. Presumably to reduce the complexity of complying with the new rules.

But also likely because it’s confident WhatsApp-loving teens won’t have any trouble circumventing the new minimum age limit — and therefore that there’s no real risk to its business from teenagers simply ignoring the rule.

Certainly it’s unclear whether WhatsApp and its parent Facebook will do anything at all to enforce the age limit — beyond asking users to state they are at least 16 (and taking them at their word). So in practice, while on paper the 16-year-old minimum seems like a big deal, the change may do very little to protect teens from being data-mined by the ad giant.

We’ve asked WhatsApp whether it will cross-check users’ accounts with Facebook accounts and data holdings to try to verify a teen really is 16, for example, but nothing in its FAQ on the topic suggests it plans to carry out any active enforcement at all — instead it merely notes:

  • Creating an account with false information is a violation of our Terms
  • Registering an account on behalf of someone who is underage is also a violation of our Terms

Ergo, that does sound very much like a buck being passed. And it will likely be up to parents to try to actively enforce the limit — by reporting their own underage WhatsApp-using kids to the company (which would then have to close the account). Clearly few parents would relish the prospect of doing that.

Yet Facebook does already share plenty of data between WhatsApp and its other companies for all sorts of self-serving, business-enhancing purposes — including even, as it couches it, “to ensure safety and security”. So it’s hardly short of data to carry out some age checks of its own and proactively enforce the limit.

One curious difference is that Facebook’s approach to teen usage of WhatsApp is notably distinct from the one it’s taking with teens on its main social platform — also as it reworks the Facebook T&Cs ahead of GDPR.

Under the new terms there, Facebook users between the ages of 13 and 15 will need to get parental permission to be targeted with ads or to share sensitive info on Facebook.

But again, as my TC colleague Josh Constine pointed out, the parental consent system Facebook has concocted is laughably easy for teens to circumvent — merely requiring they select one of their Facebook friends or just enter an email address (which could literally be an alternative email address they themselves control). That entirely unverified entity is then asked to give ‘consent’ for their ‘child’ to share sensitive info. So, basically, a total joke.

As we’ve said before, Facebook’s approach to GDPR ‘compliance’ is at best described as ‘doing the minimum possible’. And data protection experts say legal challenges are inevitable.

Also in Europe, Facebook has previously been forced via regulatory intervention to give up one portion of the data sharing between its platforms — specifically for ad targeting purposes. However, its WhatsApp T&Cs also suggest it is confident it will find a way around that in future, as it writes it “will only do so when we reach an understanding with the Irish Data Protection Commissioner on a future mechanism to enable such use” — i.e. when, not if.

Last month it also signed an undertaking with the DPC on this related to GDPR compliance, so again appears to have some kind of regulatory-workaround ‘mechanism’ in the works.

Facebook shuffle brings a new head of US policy and chief privacy officer – TechCrunch

Trying times in Menlo Park, it seems: amid assaults from all quarters largely focused on privacy, Facebook is shifting some upper management around to better defend itself. Its head of policy in the U.S., Erin Egan, is returning to her chief privacy officer role, and a VP (and former FCC chairman) is taking her spot.

Kevin Martin, until very recently VP of mobile and global access policy, will be Facebook’s new head of policy. He was hired in 2015 for that job; he was at the FCC from 2001 to 2009, Chairman for the last four of those years. So whether you liked his policies or not, he clearly knows his way around a roll of red tape.

Erin Egan was chief privacy officer when Martin was hired, and at that time also took on the role of U.S. head of policy. “For the last couple years, Erin wore both hats at the company,” said Facebook spokesperson Andy Stone in a statement to TechCrunch.

“Kevin will become interim head of US Public Policy while Erin Egan focuses on her expanded duties as Chief Privacy Officer,” Stone said.

No doubt both roles have grown in importance and complexity over the last few years; one person performing both jobs doesn’t sound sustainable, and apparently it wasn’t.

Notably, Martin will now report to Joel Kaplan, with whom he worked previously during the Bush-Cheney campaign in 2000 and for years under the subsequent administration. Deep ties to Republican administrations and networks in Washington are probably more than a little valuable these days, especially to a company under fire from would-be regulators.

“I don’t think Facebook has a developer policy that is valid” – TechCrunch

A Cambridge University academic at the center of a data misuse scandal involving Facebook user data and political ad targeting faced questions from the UK parliament this morning.

The two-hour evidence session in front of the DCMS committee’s fake news inquiry raised rather more questions than it answered, though — with professor Aleksandr Kogan citing an NDA he said he had signed with Facebook as grounds to decline to answer some of the committee’s questions (including why and when exactly the NDA was signed).

TechCrunch understands the NDA relates to standard confidentiality provisions regarding deletion certifications and other commitments made by Kogan to Facebook not to misuse user data — after the company learned he had passed user data to SCL in contravention of its developer terms.

Asked why he had a non-disclosure agreement with Facebook, Kogan told the committee it would have to ask Facebook. He also declined to say whether any of his company co-directors (one of whom now works for Facebook) had been asked to sign an NDA. Nor would he specify whether the NDA had been signed in the US.

Asked whether he had deleted all the Facebook data and derivatives he had been able to acquire Kogan said yes “to the best of his knowledge”, though he also said he’s currently conducting a review to make sure nothing has been overlooked.

A few times during the session Kogan made a point of arguing that data audits are essentially useless for catching bad actors — claiming that anyone who wants to misuse data can simply put a copy on a hard drive and “store it under the mattress”.

(Incidentally, the UK’s data protection watchdog is conducting just such an audit of Cambridge Analytica right now, after obtaining a warrant to enter its London offices last month — as part of an ongoing, year-long investigation into social media data being used for political ad targeting.)

Your company didn’t hide any data in that way, did it? a committee member asked Kogan. “We didn’t,” he rejoined.

“This has been a very painful experience because when I entered into all of this Facebook was a close ally. And I was thinking this would be helpful to my academic career. And my relationship with Facebook. It has, very clearly, done the complete opposite,” Kogan continued.  “I had no interest in becoming an enemy or being antagonized by one of the biggest companies in the world that could — even if it’s frivolous — sue me into oblivion. So we acted entirely as they requested.”

Despite apparently lamenting the breakdown in his relations with Facebook — telling the committee how he had worked with the company, in an academic capacity, prior to setting up a company to work with SCL/CA — Kogan refused to accept that he had broken Facebook’s terms of service, instead asserting: “I don’t think they have a developer policy that is valid… For you to break a policy it has to exist. And really be their policy. The reality is Facebook’s policy is unlikely to be their policy.”

“I just don’t believe that’s their policy,” he repeated when pressed on whether he had broken Facebook’s ToS. “If somebody has a document that isn’t their policy you can’t break something that isn’t really your policy. I would agree my actions were inconsistent with the language of this document — but that’s slightly different from what I think you’re asking.”

“You should be a professor of semantics,” quipped the committee member who had been asking the questions.

A Facebook spokesperson told us it had no public comment to make on Kogan’s testimony. But last month CEO Mark Zuckerberg couched the academic’s actions as a “breach of trust” — describing the behavior of his app as “abusive”.

In evidence to the committee today, Kogan told it he had only become aware of an “inconsistency” between Facebook’s developer terms of service and what his company did in March 2015 — when, he said, he began to suspect the veracity of the advice he had received from SCL. At that point, Kogan said, GSR reached out to an IP lawyer “and got some guidance”.

(More specifically, he said he became suspicious because former SCL employee Chris Wylie did not honor a contract between GSR and Eunoia, a company Wylie set up after leaving SCL, to exchange data-sets; Kogan said GSR gave Wylie the full raw Facebook data-set but Wylie did not provide any data to GSR.)

“Up to that point I don’t believe I was even aware or looked at the developer policy. Because prior to that point — and I know that seems shocking and surprising… the experience of a developer in Facebook is very much like the experience of a user in Facebook. When you sign up there’s this small print that’s easy to miss,” he claimed.

“When I made my app initially I was just an academic researcher. There was no company involved yet. And then when we commercialized it — so we changed the app — it was just something I completely missed. I didn’t have any legal resources, I relied on SCL [to provide me with guidance on what was appropriate]. That was my mistake.”

“Why I think this is still not Facebook’s policy is that we were advised [by an IP lawyer] that Facebook’s terms for users and developers are inconsistent. And that it’s not actually a defensible position for Facebook that this is their policy,” Kogan continued. “This is the remarkable thing about the experience of an app developer on Facebook. You can change the name, you can change the description, you can change the terms of service — and you just save changes. There’s no obvious review process.

“We had a terms of service linked to the Facebook platform that said we could transfer and sell data for at least a year and a half — nothing was ever mentioned. It was only in the wake of the Guardian article [in December 2015] that they came knocking.”

Kogan also described the work he and his company had done for SCL Elections as essentially worthless — arguing that using psychometrically modeled Facebook data for political ad targeting in the way SCL/CA had apparently sought to do was “incompetent” because they could have used Facebook’s own ad targeting platform to achieve greater reach and with more granular targeting.

“It’s all about the use-case. I was very surprised to learn that what they wanted to do is run Facebook ads,” he said. “This was not mentioned, they just wanted a way to measure personality for many people. But if the use-case you have is Facebook ads it’s just incompetent to do it this way.

“Taking this data-set you’re going to be able to target 15% of the population. And use a very small segment of the Facebook data — page likes — to try to build personality models. Why do this when you could very easily go target 100% and use much more of the data. It just doesn’t make sense.”

Asked what, then, was the value of the project he undertook for SCL, Kogan responded: “Given what we know now, nothing. Literally nothing.”

He repeated his prior claim that he was not aware that work he was providing for SCL Elections would be used for targeting political ads, though he confirmed he knew the project was focused on the US and related to elections.

He also said he knew the work was being done for the Republican party — but claimed not to know which specific candidates were involved.

Pressed by one committee member on why he didn’t care to know which politicians he was indirectly working for, Kogan responded by saying he doesn’t have strong personal views on US politics or politicians generally — beyond believing that most US politicians are at least reasonable in their policy positions.

“My personal position on life is unless I have a lot of evidence I don’t know. Is the answer. It’s a good lesson to learn from science — where typically we just don’t know. In terms of politics in particular I rarely have a strong position on a candidate,” said Kogan, adding that therefore he “didn’t bother” to make the effort to find out who would ultimately be the beneficiary of his psychometric modeling.

Kogan told the committee his initial intention had not been to set up a business at all but to conduct not-for-profit big data research — via an institute he wanted to establish — claiming it was Wylie who had advised him to also set up the for-profit entity, GSR, through which he went on to engage with SCL Elections/CA.

“The initial plan was we collect the data, I fulfill my obligations to SCL, and then I would go and use the data for research,” he said.

And while Kogan maintained he had never drawn a salary from the work he did for SCL — saying his reward was “to keep the data”, and get to use it for academic research — he confirmed SCL did pay GSR £230,000 at one point during the project; a portion of which he also said eventually went to pay lawyers he engaged “in the wake” of Facebook becoming aware that data had been passed to SCL/CA by Kogan — when it contacted him to ask him to delete the data (and presumably also to get him to sign the NDA).

In one curious moment, Kogan claimed not to know his own company had been registered at 29 Harley Street in London — which the committee noted is “used by a lot of shell companies some of which have been used for money laundering by Russian oligarchs”.

Seeming a little flustered he said initially he had registered the company at his apartment in Cambridge, and later “I think we moved it to an innovation center in Cambridge and then later Manchester”.

“I’m actually surprised. I’m totally surprised by this,” he added.

Did you use an agent to set it up, asked one committee member. “We used Formations House,” replied Kogan, referring to a company whose website states it can locate a business’ trading address “in the heart of central London” — in exchange for a small fee.

“I’m legitimately surprised by that,” added Kogan of the Harley Street address. “I’m unfortunately not a Russian oligarch.”

Later in the session another odd moment came when he was being asked about his relationship with Saint Petersburg University in Russia — where he confirmed he had given talks and workshops, after traveling to the country with friends and proactively getting in touch with the university “to say hi” — and specifically about some Russian government-funded research being conducted by researchers there into cyberbullying.

Committee chair Collins implied to Kogan the Russian state could have had a specific malicious interest in such a piece of research, and wondered whether Kogan had thought about that in relation to the interactions he’d had with the university and the researchers.

Kogan described it as a “big leap” to connect the piece of research to Kremlin efforts to use online platforms to interfere in foreign elections — before essentially going on to repeat a Kremlin talking point by saying the US and the UK engage in much the same types of behavior.

“You can make the same argument about the UK government funding anything or the US government funding anything,” he told the committee. “Both countries are very famous for their spies.

“There’s a long history of the US interfering with foreign elections and doing the exact same thing [creating bot networks and using trolls for online intimidation].”

“Are you saying it’s equivalent?” pressed Collins. “That the work of the Russian government is equivalent to the US government and you couldn’t really distinguish between the two?”

“In general I would say the governments that are most high profile I am dubious about the moral scruples of their activities through the long history of UK, US and Russia,” responded Kogan. “Trying to equate them I think is a bit of a silly process. But I think certainly all these countries have engaged in activities that people feel uncomfortable with or are covert. And then to try to link academic work that’s basic science to that — if you’re going to down the Russia line I think we have to go down the UK line and the US line in the same way.

“I understand Russia is a hot-button topic right now but outside of that… Most people in Russia are like most people in the UK. They’re not involved in spycraft, they’re just living lives.”

“I’m not aware of UK government agencies that have been interfering in foreign elections,” added Collins.

“Doesn’t mean it’s not happened,” replied Kogan. “Could be just better at it.”

During Wylie’s evidence to the committee last month the former SCL data scientist had implied there could have been a risk of the Facebook data falling into the hands of the Russian state as a result of Kogan’s back and forth travel to the region. But Kogan rebutted this idea — saying the data had never been in his physical possession when he traveled to Russia, pointing out it was stored in a cloud hosting service in the US.

“If you want to try to hack Amazon Web Services good luck,” he added.

He also claimed not to have read the piece of research in question, even though he said he thought the researcher had emailed the paper to him — claiming he can’t read Russian well.

Kogan seemed most comfortable during the session when he was laying into Facebook’s platform policies — perhaps unsurprisingly, given how the company has sought to paint him as a rogue actor who abused its systems by creating an app that harvested data on up to 87 million Facebook users and then handed that information off to third parties.

Asked whether he thought a prior answer given to the committee by Facebook — when it claimed it had not provided any user data to third parties — was correct, Kogan said no, given that the company provides academics with “macro level” user data (including providing him with this type of data, in 2013).

He was also asked why he thinks Facebook lets its employees collaborate with external researchers — and Kogan suggested this is “tolerated” by management as a strategy to keep employees stimulated.

Committee chair Collins asked whether he thought it was odd that Facebook now employs his former co-director at GSR, Joseph Chancellor — who works in its research division — despite Chancellor having worked for a company Facebook has said it regards as having violated its platform policies.

“Honestly I don’t think it’s odd,” said Kogan. “The reason I don’t think it’s odd is because in my view Facebook’s comments are PR crisis mode. I don’t believe they actually think these things — because I think they realize that their platform has been mined, left and right, by thousands of others.

“And I was just the unlucky person that ended up somehow linked to the Trump campaign. And we are where we are. I think they realize all this but PR is PR and they were trying to manage the crisis and it’s convenient to point the finger at a single entity and try to paint the picture this is a rogue agent.”

At another moment during the evidence session Kogan was also asked to respond to denials previously given to the committee by former CEO of Cambridge Analytica Alexander Nix — who had claimed that none of the data it used came from GSR and — even more specifically — that GSR had never supplied it with “data-sets or information”.

“Fabrication,” responded Kogan. “Total fabrication.”

“We certainly gave them [SCL/CA] data. That’s indisputable,” he added.

In written testimony to the committee he also explained that he in fact created three apps for gathering Facebook user data. The first one — called the CPW Lab app — was developed after he had begun a collaboration with Facebook in early 2013, as part of his academic studies. Kogan says Facebook provided him with user data at this time for his research — although he said these datasets were “macro-level datasets on friendship connections and emoticon usage” rather than information on individual users.

The CPW Lab app was used to gather individual level data to supplement those datasets, according to Kogan’s account. Although he specifies that data collected via this app was housed at the university; used for academic purposes only; and was “not provided to the SCL Group”.

Later, once Kogan had set up GSR and was intending to work on gathering and modeling data for SCL/Cambridge Analytica, the CPW Lab app was renamed to the GSR App and its terms were changed (with the new terms provided by Wylie).

Thousands of people were then recruited to take this survey via a third company — Qualtrics — with Kogan saying SCL directly paid ~$800,000 to it to recruit survey participants, at a cost of around $3-$4 per head (he says between 200,000 and 300,000 people took the survey as a result in the summer of 2014; NB: Facebook doesn’t appear to be able to break out separate downloads for the different apps Kogan ran on its platform — it told us about 305,000 people downloaded “the app”).

In the final part of that year, after data collection had finished for SCL, Kogan said his company revised the GSR App to become an interactive personality quiz — renaming it “thisisyourdigitallife” and leaving the commercial portions of the terms intact.

“The thisisyourdigitallife App was used by only a few hundred individuals and, like the two prior iterations of the application, collected demographic information and data about “likes” for survey participants and their friends whose Facebook privacy settings gave participants access to “likes” and demographic information. Data collected by the thisisyourdigitallife App was not provided to SCL,” he claims in the written testimony.

During the oral hearing, Kogan was pressed on misleading T&Cs in his two commercial apps. Asked by a committee member about the terms of the GSR App not specifying that the data would be used for political targeting, he said he didn’t write the terms himself but added: “If we had to do it again I think I would have insisted to Mr Wylie that we do add politics as a use-case in that doc.”

“It’s misleading,” argued the committee member. “It’s a misrepresentation.”

“I think it’s broad,” Kogan responded. “I think it’s not specific enough. So you’re asking for why didn’t we go outline specific use-cases — because the politics is a specific use-case. I would argue that the politics does fall under there but it’s a specific use-case. I think we should have.”

The committee member also noted how, “in longer, denser paragraphs” within the app’s T&Cs, the legalese does also state that “whatever that primary purpose is you can sell this data for any purposes whatsoever” — making the point that such sweeping terms are unfair.

“Yes,” responded Kogan. “In terms of speaking the truth, the reality is — as you’ve pointed out — very few if any people have read this, just like very few if any people read terms of service. I think that’s a major flaw we have right now. That people just do not read these things. And these things are written this way.”

“Look — fundamentally I made a mistake by not being critical about this. And trusting the advice of another company [SCL]. As you pointed out GSR is my company and I should have gotten better advice, and better guidance on what is and isn’t appropriate,” he added.

“Quite frankly my understanding was this was business as usual and normal practice for companies to write broad terms of service that didn’t provide specific examples,” he said after being pressed on the point again.

“I doubt in Facebook’s user policy it says that users can be advertised for political purposes — it just has broad language to provide for whatever use cases they want. I agree with you this doesn’t seem right, and those changes need to be made.”

At another point, he was asked about the Cambridge University Psychometrics Centre — which he said had initially been involved in discussions between him and SCL to be part of the project but fell out of the arrangement. According to his version of events the Centre had asked for £500,000 for their piece of proposed work, and specifically for modeling the data — which he said SCL didn’t want to pay. So SCL had asked him to take that work on too and remove the Centre from the negotiations.

As a result of that, Kogan said the Centre had complained about him to the university — and SCL had written a letter to it on his behalf defending his actions.

“The mistake the Psychometrics Centre made in the negotiation is that they believed that models are useful, rather than data,” he said. “And actually just not the same. Data’s far more valuable than models because if you have the data it’s very easy to build models — because models use just a few well understood statistical techniques to make them. I was able to go from not doing machine learning to knowing what I need to know in one week. That’s all it took.”
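To make concrete the distinction Kogan is drawing — that once you hold the data, the modelling step is routine — here is a minimal, hypothetical sketch of the kind of “well understood statistical technique” he describes: an ordinary least-squares fit predicting a personality trait score from a single engineered feature (say, a count of page likes in some category). Everything here, from the feature to the numbers, is made up for illustration.

```typescript
// Minimal, hypothetical sketch: ordinary least-squares regression predicting a
// personality trait score from one made-up feature (e.g. likes in some category).
// None of this is Kogan's actual code or data.

type Sample = { feature: number; trait: number };

function fitOLS(samples: Sample[]): { slope: number; intercept: number } {
  const n = samples.length;
  const meanX = samples.reduce((sum, s) => sum + s.feature, 0) / n;
  const meanY = samples.reduce((sum, s) => sum + s.trait, 0) / n;
  let cov = 0;
  let varX = 0;
  for (const s of samples) {
    cov += (s.feature - meanX) * (s.trait - meanY);
    varX += (s.feature - meanX) ** 2;
  }
  const slope = cov / varX;
  return { slope, intercept: meanY - slope * meanX };
}

// Fabricated survey respondents: (feature value, self-reported trait score).
const model = fitOLS([
  { feature: 12, trait: 3.1 },
  { feature: 40, trait: 4.2 },
  { feature: 7, trait: 2.8 },
]);

// Score a new profile from its feature value alone.
const predicted = model.intercept + model.slope * 25;
console.log(predicted.toFixed(2));
```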

In another exchange during the session, Kogan denied he had been in contact with Facebook in 2014. Wylie previously told the committee he thought Kogan had run into problems with the rate at which the GSR App was able to pull data off Facebook’s platform — and had contacted engineers at the company at the time (though Wylie also caveated his evidence by saying he did not know whether what he’d been told was true).

“This never happened,” said Kogan, adding that there was no dialogue between him and Facebook at that time.  “I don’t know any engineers at Facebook.”

Facebook shuts down custom feed sharing prompts and 12 other APIs – TechCrunch

Facebook is making good on Mark Zuckerberg’s promise to prioritize user safety and data privacy over its developer platform. Today Facebook and Instagram announced a slew of API shutdowns and changes designed to stop developers from being able to pull your or your friends’ data without express permission, drag in public content, or trick you into sharing. Some changes go into effect today, and others roll out on August 1st, so developers have over 90 days to fix their apps. They follow the big changes announced two weeks ago.

Most notably, app developers will have to start using the standardized Facebook sharing dialog to request the ability to publish to the News Feed on a user’s behalf. They’ll no longer be able to use the publish_actions API that let them design a custom sharing prompt. A Facebook spokesperson says this change was already planned because the consistency helps users feel in control, but the company moved the deadline up to August 1st as part of today’s updates because it didn’t want to have to make multiple separate announcements of app-breaking changes.
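To illustrate what that means for developers, here is a minimal sketch — not code from Facebook’s announcement — of the before-and-after for a web app using the Facebook JavaScript SDK: the old pattern posted to a user’s feed from the app’s own custom prompt under the publish_actions permission, while the replacement hands the user off to Facebook’s standard Share Dialog. The URL and message below are placeholders.

```typescript
// Sketch only: assumes the Facebook JavaScript SDK has been loaded and FB.init() has run.
declare const FB: any;

// Old pattern (being shut down): post directly to the user's feed from a
// custom in-app prompt, which required the publish_actions permission.
function shareViaPublishActions(message: string): void {
  FB.api('/me/feed', 'post', { message }, (response: any) => {
    console.log('publish_actions response:', response);
  });
}

// New pattern (required from August 1st): open Facebook's standard Share Dialog,
// so every app presents the same Facebook-controlled sharing prompt.
function shareViaDialog(url: string): void {
  FB.ui({ method: 'share', href: url }, (response: any) => {
    console.log('Share Dialog response:', response);
  });
}

// Placeholder usage.
shareViaDialog('https://example.com/article');
```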

 

Facebook app developers will now have to use this standard Facebook sharing prompt, since the publish_actions API for creating custom prompts is shutting down

One significant Instagram Graph API change is going into effect today: it removes the ability to pull the name and bio of users who leave comments on your content, though commenters’ usernames and comment text are still available.
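As a rough illustration — not official sample code — a request to the Instagram Graph API’s comments edge can still ask for each commenter’s username and comment text, but the commenter’s display name and bio are no longer returned. The media ID and access token below are placeholders.

```typescript
// Sketch only: fetch comments on an Instagram media object via the Graph API.
// MEDIA_ID and ACCESS_TOKEN are placeholders, not real values.
const MEDIA_ID = 'YOUR_INSTAGRAM_MEDIA_ID';
const ACCESS_TOKEN = 'YOUR_ACCESS_TOKEN';

async function fetchComments(): Promise<void> {
  // 'username' and 'text' remain available; commenter name/bio fields no longer are.
  const fields = 'username,text,timestamp';
  const url = `https://graph.facebook.com/${MEDIA_ID}/comments?fields=${fields}&access_token=${ACCESS_TOKEN}`;
  const response = await fetch(url);
  const data = await response.json();
  console.log(data);
}

fetchComments();
```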

Facebook’s willingness to put user safety over platform utility indicates a maturation of the company’s “Hacker Way” that played fast-and-loose with people’s data in order to attract developers to its platform who would in turn create functionality that soaked up more attention.

For more on Facebook’s API changes, check out our breakdown of the major updates:

Instagram launches “Data Download” tool to let you leave – TechCrunch

Two weeks ago TechCrunch called on Instagram to build an equivalent to Facebook’s “Download Your Information” feature so that if you wanted to leave for another photo-sharing network, you could. The next day it announced this tool would be coming, and now TechCrunch has spotted it rolling out to users. Instagram’s “Data Download” feature can be accessed here or through the app’s privacy settings. It lets users export their photos, videos, Stories, profile info, comments, and messages, though it can take from a few hours to a few days for your download to be ready.

An Instagram spokesperson now confirms to TechCrunch that “the Data Download tool is currently accessible to everyone on the web, but access via iOS and Android is still rolling out.” We’ll have more details on exactly what’s inside once our download is ready.

The tool’s launch is necessary for Instagram to comply with the data portability rule in the European Union’s GDPR privacy law, which goes into effect on May 25th. But it’s also a reasonable concession. Instagram has become the dominant image-sharing social network, with over 800 million users. It shouldn’t need to lock up users’ data in order to keep them around.

Instagram hasn’t been afraid to attack competitors and fight dirty. Most famously, it copied Snapchat’s Stories in August 2016; Instagram’s version now has over 300 million daily users — eclipsing the original. But it also cut off GIF-making app Phhhoto from its Find Friends feature, then swiftly cloned its core feature to launch Instagram Boomerang. Within a few years, Phhhoto had shut down its app.

If Instagram is going to ruthlessly clone and box out its competitors, it should also let users choose which they want to use. That’s tough if all your photos and videos are trapped inside another app. The tool could create a more level playing field for competition amongst photo apps.

It could also deter users from using sketchy third-party apps to scrape all their Instagram content. Since they typically require you to log in with your Instagram credentials, these put users at risk of being hacked or having their images used elsewhere without their consent. Considering Facebook launched its DYI tool in 2010, six years after the site launched, the fact that it took Instagram 8 years from launch to build this means it’s long overdue.

But with such a strong network effect and its willingness to clone any popular potential rival, it may still take a miracle, or a massive shift to a new computing platform, for any app to dethrone Instagram.

Facebook reveals 25 pages of takedown rules for hate speech and more – TechCrunch

Until now, Facebook had never made public the guidelines its moderators use to decide whether to remove violence, spam, harassment, self-harm, terrorism, intellectual property theft, and hate speech from the social network. The company hoped to avoid making it easy to game these rules, but that worry has been overridden by the public’s constant calls for clarity and protests about its decisions. Today Facebook published 25 pages of detailed criteria and examples for what is and isn’t allowed.

Facebook is effectively shifting where it will be criticized: to the underlying policy, rather than individual enforcement mistakes — like when it took down posts of the newsworthy “Napalm Girl” historical photo because it contained child nudity, before eventually restoring them. Some groups will surely find points to take issue with, but Facebook has made some significant improvements. Most notably, it no longer disqualifies minorities from protection against hate speech just because an unprotected characteristic like “children” is appended to a protected characteristic like “black”.

Nothing is technically changing about Facebook’s policies. But previously, only leaks — like a copy of an internal rulebook obtained by the Guardian — had given the outside world a look at when Facebook actually enforces those policies. These rules will be translated into over 40 languages for the public. Facebook currently has 7,500 content reviewers, up 40% from a year ago.

Facebook also plans to expand its content removal appeals process. It already lets users request a review of a decision to remove their profile, Page, or Group. Now Facebook will notify users when their nudity, sexual activity, hate speech or graphic violence content is removed and let them hit a button to “Request Review”, which will usually happen within 24 hours. Finally, Facebook will hold Facebook Forums: Community Standards events in Germany, France, the UK, India, Singapore, and the US to give its biggest communities a closer look at how the social network’s policy works.

Fixing the “white people are protected, black children aren’t” policy

Facebook’s VP of Global Product Management Monika Bickert, who has been coordinating the release of the guidelines since September, told reporters at Facebook’s Menlo Park HQ last week that “There’s been a lot of research about how when institutions put their policies out there, people change their behavior, and that’s a good thing.” She admits there’s still the concern that terrorists or hate groups will get better at developing “workarounds” to evade Facebook’s moderators, “but the benefits of being more open about what’s happening behind the scenes outweighs that.”

Content moderator jobs at various social media companies, including Facebook, have been described as hellish in many exposés about what it’s like to fight the spread of child porn, beheading videos and racism for hours a day. Bickert says Facebook’s moderators get trained to deal with this and have access to counseling and 24/7 resources, including some on-site. They can request not to look at certain kinds of content they’re sensitive to. But Bickert didn’t say whether Facebook imposes a daily limit on how much offensive content moderators see, the way YouTube recently implemented a four-hour limit.

A controversial slide depicting Facebook’s now-defunct policy that disqualified subsets of protected groups from hate speech shielding. Image via ProPublica

The most useful clarification in the newly revealed guidelines explains how Facebook has ditched its poorly received policy that deemed “white people” protected from hate speech, but not “black children”. That rule, which left subsets of protected groups exposed to hate speech, was blasted in a ProPublica piece in June 2017, though Facebook said it no longer applied that policy.

Now Bickert says “Black children — that would be protected. White men — that would also be protected. We consider it an attack if it’s against a person, but you can criticize an organization, a religion . . . If someone says ‘this country is evil’, that’s something that we allow. Saying ‘members of this religion are evil’ is not.” She explains that Facebook is becoming more aware of the context around who is being victimized. However, Bickert notes that if someone says “‘I’m going to kill you if you don’t come to my party’, if it’s not a credible threat we don’t want to be removing it.” 

Do community standards = editorial voice?

Being upfront about its policies might give Facebook more to point to when it’s criticized for failing to prevent abuse on its platform. Activist groups say Facebook has allowed fake news and hate speech to run rampant and lead to violence in many developing countries where Facebook hasn’t had enough native-speaking moderators. The Sri Lankan government temporarily blocked Facebook in hopes of halting calls for violence, and those on the ground say Zuckerberg overstated Facebook’s improvements on the problem in Myanmar that led to hate crimes against the Rohingya people.

Revealing the guidelines could at least cut down on confusion about whether hateful content is allowed on Facebook. It isn’t. Though the guidelines also raise the question of whether the Facebook value system it codifies means the social network has an editorial voice that would define it as a media company. That could mean the loss of legal immunity for what its users post. Bickert stuck to a rehearsed line that “We are not creating content and we’re not curating content”. Still, some could certainly say all of Facebook’s content filters amount to a curatorial layer.

But whether Facebook is a media company or a tech company, it’s a highly profitable company. It needs to spend some more of the billions it earns each quarter applying the policies evenly and forcefully around the world.

Facebook’s new authorization process for political ads goes live in the US – TechCrunch

Earlier this month — and before Facebook CEO Mark Zuckerberg testified before Congress — the company announced a series of changes to how it would handle political advertisements running on its platform in the future. It had said that people who wanted to buy a political ad — including ads about political “issues” — would have to reveal their identities and location and be verified before the ads could run. Information about the advertiser would also display to Facebook users.

Today, Facebook is announcing the authorization process for U.S. political ads is live.

Facebook had first said in October that political advertisers would have to verify their identity and location for election-related ads. But in April, it expanded that requirement to include any “issue ads” — meaning those on political topics being debated across the country, not just those tied to an election.

Facebook said it would work with third parties to identify the issues. These ads would then be labeled as “Political Ads,” and display the “paid for by” information to end users.

According to today’s announcement, Facebook will now begin to verify the identity and the residential mailing address of advertisers who want to run political ads. Those advertisers will also have to disclose who’s paying for the ads as part of this authorization process.

This verification process is currently only open in the U.S. and will require Page admins and ad account admins to submit their government-issued ID to Facebook, along with their residential mailing address.

The government ID can be either a U.S. passport or a U.S. driver’s license, an FAQ explains. Facebook will also ask for the last four digits of admins’ Social Security numbers. The photo ID will then be approved or denied in a matter of minutes, though anyone declined based on the quality of the uploaded images won’t be prevented from trying again.

The address, however, will be verified by mailing a letter with a unique access code that only the admin’s Facebook account can use. The letter may take up to 10 days to arrive, Facebook notes.

Along with the verification portion, Page admins will also have to fill in who paid for the ad in the “disclaimer” section. This has to include the name(s) of the organization(s) or person(s) who funded it.

This information will also be reviewed prior to approval, but Facebook isn’t going to fact check this field, it seems.

Instead, the company simply says: “We’ll review each disclaimer to make sure it adheres to our advertising policies. You can edit your disclaimers at any time, but after each edit, your disclaimer will need to be reviewed again, so it won’t be immediately available to use.”

The FAQ later states that disclaimers must comply with “any applicable law,” but again says that Facebook only reviews them against its ad policies.

“It’s your responsibility as the advertiser to independently assess and ensure that your ads are in compliance with all applicable election and advertising laws and regulations,” the documentation reads.

Along with the launch of the new authorization procedures, Facebook has released a Blueprint training course to guide advertisers through the steps required, and has published an FAQ to answer advertisers’ questions.

Of course, these procedures will only net the more scrupulous advertisers willing to play by the rules. That’s why Facebook had said before that it plans to use AI technology to help sniff out those advertisers who should have submitted to verification, but did not. The company is also asking people to report suspicious ads using the “Report Ad” button.

Facebook has been under heavy scrutiny because of how its platform was corrupted by Russian trolls on a mission to sway the 2016 election. The Justice Department charged 13 Russians and three companies with election interference earlier this year, and Facebook has removed hundreds of accounts associated with disinformation campaigns.

While tougher rules around ads may help, they alone won’t solve the problem.

It’s likely that those determined to skirt the rules will find their own workarounds. Plus, ads are only one of many issues in terms of those who want to use Facebook for propaganda and misinformation. On other fronts, Facebook is dealing with fake news — including everything from biased stories to those that are outright lies, intending to influence public opinion. And of course there’s the Cambridge Analytica scandal, which led to intense questioning of Facebook’s data privacy practices in the wake of revelations that millions of Facebook users had their information improperly accessed.

Facebook says the political ads authorization process is gradually rolling out, so it may not be available to all advertisers at this time. Currently, users can only set up and manage authorizations from a desktop computer from the Authorizations tab in a Facebook Page’s Settings.

Facebook face recognition error looks awkward ahead of GDPR – TechCrunch

A Facebook face recognition notification slip-up hints at how risky the company’s approach to compliance with a tough new European data protection standard could turn out to be.

On Friday a Metro journalist in the UK reported receiving a notification about the company’s face recognition technology — which told him “the setting is on”.

The wording was curious, as the technology has been switched off in Europe since 2012, after regulatory pressure, and — as part of changes related to its GDPR compliance strategy — Facebook has also said it will be asking European users to choose individually whether or not they want to switch it on. (And on Friday it began rolling out its new consent flow in the region, ahead of the regulation applying next month.)

The company has since confirmed to us that the message was sent to the user in error — saying the wording came from an earlier notification, which it sent, starting in December, to users who already had its facial recognition tech enabled. And that it had intended to send the person a similar notification containing the opposite message, i.e. that “the setting is off”.

“We’re asking everyone in the EU whether they want to enable face recognition, and only people who affirmatively give their consent will have these features enabled. We did not intend for anyone in the EU to see this type of message, and we can confirm that this error did not result in face recognition being enabled without the person’s consent,” a Facebook spokesperson told us.

The two notifications in question showed the “setting is on” versus “setting is off” wordings.

This is interesting because Facebook has repeatedly refused to confirm it will be universally applying GDPR compliance measures across its entire global user-base.

Instead it has restricted its public commitments to saying the same “settings and controls” will be made available for users — which as we’ve previously pointed out avoids committing the company to a universal application of GDPR principles, such as privacy by design.

Given that Facebook’s facial recognition feature has been switched off in Europe since 2012, the “the setting is on” message would presumably have only been sent to users in the US or Canada — where Facebook has been able to forge ahead with pushing people to accept the controversial, privacy-hostile technology, embedding it into features such as auto-tagging for photo uploads.

But it hardly bodes well for Facebook’s compliance with the EU’s strict new data protection standard if its systems are getting confused about whether or not a user is an EU person.

Facebook claims no data was processed without consent as a result of the wrong notification being sent — but under GDPR it could face investigations by data protection authorities seeking to verify whether or not an individual’s rights were violated. (Reminder: GDPR fines can scale as high as 4% of a company’s global annual turnover so privacy enforcement is at last getting teeth.)

Facebook’s appetite for continuing to push privacy hostile features on its user-base is clear. This strategic direction also comes from the very top of the company.

Earlier this month, CEO and founder Mark Zuckerberg urged US lawmakers not to impede US companies from using people’s data for sensitive use-cases like facial recognition — attempting to gloss that tough sell by claiming pro-privacy rules would risk the US falling behind China.

Meanwhile, last week it also emerged that Zuckerberg’s company will switch the location where most international users’ data is processed from its international HQ, Facebook Ireland, to Facebook USA. From next month, only EU users will have their data controller located in the EU — other international users, who would otherwise have at least technically fallen under GDPR’s reach on account of their data being processed in the region, are being shifted out of the EU jurisdiction via a unilateral T&Cs change.

This move seems intended to try to shrink some of Facebook’s legal liabilities by reducing the number of international users that would, at least technically, fall under the reach of the EU regulation — which both applies to anyone in the EU whose data is being processed and also extends EU fundamental rights extraterritorially, carrying the aforementioned major penalties for violations.

However, Facebook’s decision to reduce how many of its users have their data processed in the EU also looks set to raise the stakes — if, as it appears, the company intends to exploit the lack of a comprehensive privacy framework in the US to apply different standards to North American users (and, from next month, also to non-EU international users, whose data will be processed there).

The problem is, if Facebook does not perform perfect segregation and management of these two separate pools of users it risks accidentally processing the personal data of Europeans in violation of the strict new EU standard, which applies from May 25.

Yet here it is, on the cusp of the new rules, sending the wrong notification and incorrectly telling an EU user that facial recognition is on.

Given how much risk it’s creating for itself by trying to run double standards for data protection you almost have to wonder whether Facebook is trying to engineer in some compliance wiggle room for itself — i.e. by positioning itself to be able to claim that such and such’s data was processed in error.

Another interesting question is whether the unilateral switching of ~1.5BN non-EU international users to Facebook USA as data controller could be interpreted as a data transfer to a third country — which would trigger other data protection requirements under EU law, and further layer on the legal complexity…

What is clear is that legal challenges to Facebook’s self-serving interpretation of EU law are coming.