How to fix Twitter's verification problem
Twitter’s approach to verified accounts deserves all the criticism it gets. Recent moves to halt new verifications — and even to remove previously granted blue check marks — will do little to reduce the hate speech, violent threats, and abuse that run rampant across the platform. Amid pressure to keep adding users, Twitter’s best approach can’t possibly be to eliminate rudimentary safeguards.
Indeed, these steps will make Twitter's influence on politics even worse. Come 2018 and 2020, elected officials, candidates, and even our strongest democratic institutions will face asymmetric warfare in which untraceable attacks remain unstoppable. The threat isn’t tangible like a tank or a bomb, but left unchecked it’s every bit as dangerous.
So how can the service increase both its platform’s appeal and the company’s market value? Here are a few ideas:
- Revise the Verification option. Make it easier for more people to apply and be approved by removing the public-figure requirement and allowing anonymous usernames.
- Create “Identified” status. This could build on Verified status by requiring a real name.
- Add detail to account pages. Banners should prominently display the date an account was created — that information is currently available but buried.
- Allow users to report suspected bots and trolls. These reports should be handled just as reports of threats and abuse already are.
- Mark users with a high rate of deletions. Many accounts trying to appear legitimate delete their more pointed tweets shortly after posting.
- Create an “Overlap” button. Seeing which accounts you and another user follow in common is an easy form of vetting.
The bottom line: While making these changes might cause an initial drop in Twitter’s user numbers, it will create a far more valuable platform. And even if it doesn’t, it’s the right thing to do.