TikTok’s biggest problem is outside its control

I.

Last week I wrote about some of the forces putting the squeeze on TikTok — and, uncharacteristically for me when I write about TikTok, the course of events did not immediately reverse and put TikTok into a stronger position. Instead, by several measures, the situation for ByteDance’s popular video app got significantly worse.

For starters, Peter Navarro, an adviser to the president, said in an interview with Fox News on Sunday that he expects President Trump will take “strong action” against TikTok and a fellow Chinese-made social app, WeChat. Worse from ByteDance’s perspective is that Navarro said the United States will not back down even if TikTok is sold to an American buyer. Here’s Bloomberg:

The Trump administration is “just getting started” with the two apps, and he would not rule out the US banning them, Navarro said on Fox News on Sunday. Even if TikTok is sold to an American buyer, it would not solve the problem, he said.

“If TikTok separates as an American company, that doesn’t help us,” Navarro said. “Because it’s going to be worse – we’re going to have to give China billions of dollars for the privilege of having TikTok operate on US soil.”

At the root of this concern is that no matter what ByteDance says about TikTok’s independence from Chinese governance, ultimately the company must do whatever the country’s brutal, repressive authoritarian regime demands. Russell Brandom examined American anxieties about the app in The Verge:

For experts, the concern is less about mass data collection and more about targeted operations that are harder to detect. Because TikTok maintains the standard level of invasive app access, the Chinese intelligence services could potentially use it as a portal to surveil specific users or gather compromising information. The FBI has already raised the alarm about Chinese spies stealing US trade secrets, so that same access is even scarier for Amazon or Wells Fargo, which might plausibly have proprietary tech that China wants to steal. As long as the Chinese government can put pressure on TikTok through its ownership, there will be ways to snoop on users without raising alarms. That makes it hard for high-risk users to feel entirely safe, no matter what the app does.

Anxiety over foreign interference has reared its head before. As recently as April, Zoom was caught rerouting external video calls through China, a behavior far more serious than anything we’ve seen from TikTok. Equifax lost data from more than 100 million people (possibly to hackers working for Russia, depending on who you believe), which is certainly more information than TikTok has ever had access to. But there’s something about TikTok’s ownership entanglement that makes it harder to forgive. Even if Zoom was careless or Equifax was outmatched, there’s a belief that they’re still fighting on the right side. But political pressure can’t be fixed with security audits. If you believe TikTok is collaborating with Chinese intelligence services, there’s simply nothing the company can do to reassure you.

The other fear is that China will influence ByteDance, either directly or indirectly, to push a worldview that embraces censorship and political oppression onto America and the world at large. This is not an abstract fear — we have already seen it happen with content related to the NBA and Hong Kong, as Ben Thompson documented last year. (TikTok says NBA content may not have appeared in those searches due to issues with language and localization, but that it was not actively removed from the platform.) And censorship on the app still appears to reflect a Chinese worldview far more than it reflects an American one; only recently did the app’s censors begin allowing people with large tattoos, and what the company said was a bug temporarily appeared to hide content related to Black Lives Matter. (The content was visible, but a bug made the view count appear to be zero.) It’s not a stretch to imagine Beijing eventually using TikTok to distribute propaganda — and without leaving any fingerprints, either.

And so: there are bans. Amazon emailed employees telling them to delete the app from corporate phones, then backtracked. Then Wells Fargo banned the app from corporate devices, and stuck to it. The Democratic and Republican national committees have now both told staffers not to install the app on their phones for fear that TikTok could be sending back unspecified data to the Chinese government.

What sort of data? Well, more researchers have been looking into that. At the Washington Post, Geoffrey Fowler asked Patrick Jackson of privacy company Disconnect to take a look. “TikTok doesn’t appear to grab any more personal information than Facebook,” Fowler writes. “That’s still an appalling amount of data to mine about the lives of Americans. But there’s scant evidence that TikTok is sharing our data with China.” He goes on:

Jackson, from Disconnect, said the app sends an “abnormal” amount of information from devices to its computers. When he opened TikTok, he found approximately 210 network requests in the first nine seconds, totaling over 500 kilobytes of data sent from the app to the Internet. (That’s equivalent to half a megabyte, or 125 pages of typed data.) Much of it was information about the phone (like screen resolution and the Apple advertising identifier) that could be used to “fingerprint” your device even when you’re not logged in.

And there is a hole in our ability to verify all of what TikTok does. Jackson said the app uses some technical measures to encode its activity, meaning some of it is hidden from independent researchers looking under the covers. “In order to disrupt hackers and those who wish to manipulate the app, we use obfuscation to help reduce automated attacks, like bots,” [a spokeswoman] said.
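
To make the fingerprinting point concrete, here is a minimal sketch, in Swift, of the kind of signals any iOS app can gather before a user ever logs in. The function and field names are my own invention for illustration; this is not TikTok’s actual code.

    import UIKit
    import AdSupport

    // Hypothetical illustration of a device "fingerprint": mundane values that,
    // taken together, can identify a phone even when nobody is logged in.
    func buildDeviceFingerprint() -> [String: String] {
        let pixels = UIScreen.main.nativeBounds
        return [
            // Physical screen resolution in pixels, e.g. "1170x2532"
            "resolution": "\(Int(pixels.width))x\(Int(pixels.height))",
            // Apple's advertising identifier (IDFA); all zeros if the user opts out
            "idfa": ASIdentifierManager.shared().advertisingIdentifier.uuidString,
            // Hardware and OS details narrow the device down further
            "model": UIDevice.current.model,
            "os_version": UIDevice.current.systemVersion,
            // Locale and time zone add a few more distinguishing bits
            "locale": Locale.current.identifier,
            "timezone": TimeZone.current.identifier
        ]
    }

None of these values is sensitive on its own, but combined they can single out a device with surprising precision — which is why they are useful for tracking logged-out users.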

Which basically leaves us back where we started: with no evidence that TikTok is doing anything extraordinarily shady with our data, and no evidence that it could stop the Chinese government from forcing it to do so at any point.

Perhaps realizing that the app may be caught up in an intractable conflict between global superpowers, TikTok stars have begun to panic. In the New York Times, Taylor Lorenz finds young people worried about losing a key outlet for creative freedom during months of quarantine — and also, for some of them, their livelihoods.

Influencers who watched the fall of Vine, another popular short-form video app, in 2016 learned the importance of diversifying their audiences across platforms. But even for TikTok’s biggest stars, moving an audience from one platform to another is a huge undertaking.

“I have 7 million followers on TikTok, but it doesn’t translate to every platform,” said Nick Austin, 20. “I only have 3 million on Instagram and 500,000 on YouTube. No matter what it’s going to be hard to transfer all the people I have on TikTok.”

ByteDance is reportedly considering all manner of proposed solutions to keep TikTok alive around the world — it’s expected to generate $500 million in revenue this year, after all. But it seems clear that whatever happens to TikTok, ByteDance itself won’t be in control of the outcome.

And that, of course, has been TikTok’s problem all along.

II.

Facebook is considering a ban on political ads in the days leading up to the US election, Kurt Wagner reports at Bloomberg. In some quarters, this was received as a capitulation to vocal calls for the company to ban political ads altogether. In my view, it’s less a full-scale retreat than a reasonable balancing of equities. Politicians get access to Facebook’s ad platform for the vast majority of the campaign — and for the most part, their lies will still not be subject to fact-checking.

But in the waning days of the campaign, candidates will have to turn elsewhere for paid promotion. That reduces the chances that a particularly vile ad goes massively viral before it can be removed, or before the free press can fact-check it and distribute any articles intended to debunk it.

It may also make life harder for challengers to well-known incumbents, since challengers are the candidates who could most use the final promotional push that Facebook ads provide. (Democrats and Republicans have been equally concerned about this outcome in the past.)

At the same time, candidates will still be able to post on their own pages, where it seems to me they run just as great a risk of saying something terrible as they would in an ad. And those posts might get even wider distribution than their ads, if history is any guide.

By this point, I’m more or less persuaded that an ad blackout in the days before the election — of the sort that is already common in Australia — is the right thing to do. But I remain unconvinced it will make any significant difference in the basic logic of campaigning.

PUSHBACK: THE CLUBHOUSE RULES
I heard from some frustrated venture capitalists last week after I wrote about Clubhouse, an audio-only social app currently in closed beta. At that point Clubhouse had no in-app mechanisms for reporting harassment, and its community guidelines were little more than legal boilerplate. But, some of you wanted to know, wasn’t I being a little too harsh on the co-founders? Clubhouse has just two full-time employees; is this really the time to beat them up over trust and safety issues?

My answer is that this is precisely the time to start thinking about trust and safety. For too long, Silicon Valley social apps have punted on these questions until they became bona fide crises. I believe community standards are something an app should launch with, rather than something to develop only after its first content moderation crisis. If that makes me “unreasonable,” it’s the kind of unreasonable I can feel good about.

In any case, I was heartened to see that over the weekend Clubhouse wrote a blog post about its dramatic week and posted some community guidelines. The guidelines are at times comically naive — how, exactly, does an app that essentially consists of unlimited live phone calls intend to ensure that users “not spread false information”?

But you’ve got to start somewhere, and I’m glad Clubhouse did.

PUSHBACK: FACEBOOK’S SIZE
Writing about Facebook last week, I said something I say a lot, which is that Facebook’s problems with hate speech and civil rights violations would be smaller if Facebook itself were smaller. Not everyone agrees with me. Particularly people who work at Facebook, but also other people. One of them (and there were others!) is friend of the newsletter Evelyn Douek. She writes:

I agree the size matters and I probably think it should be broken up for other reasons, but I’m not sure it really solves any of the content moderation problems. First, I think there’s no putting these concerns or the scale of the internet back in the tube. People will find ways to share information across networks; some of it will be awful. […] The problems might be less extreme, and it sure would be nice to stop it being Mark’s Choice (although there are other ways to do that too…), but I don’t think they go away. Second, we have the same concerns now about other, smaller platforms too. E.g., Twitter. And we worry about where the extremists go when we knock them off Facebook. Again, maybe we’re not as concerned, but I think the fundamental problem of how and who decides what content can be online remains, regardless of size.

I think Reddit is the strongest argument against decentralization solving it all. In the end, we needed a powerful, centralized gatekeeper to come in and be a chokepoint. The thing about powerful gatekeepers is they have power!

Points taken — but I don’t know. The thing about Facebook is that it doesn’t just host hate speech; it (almost always unwittingly) recruits new adherents to hateful ideologies through algorithmic promotion of emotionally charged posts and virulent right-wing groups. At the end of the day, a smaller Facebook — which is to say, a Facebook that does not include Instagram or WhatsApp — has fewer potential recruits. If there’s an argument that a smaller Facebook would somehow make our global hate speech problem worse, I have yet to read it.