In early 2015, a Twitter employee discovered a vast number of Twitter accounts with IP addresses in Russia and Ukraine. The worker, Leslie Miley, said most of them were inactive or fake but were not deleted at the time. Miley, who was then the company’s engineering manager of product safety and security, said efforts to root out spam and manipulation on the platform were slowed by the company’s growth team, which focused on increasing users and revenue.
“Anything we would do that would slow down signups, delete accounts, or remove accounts had to go through the growth team,” Miley said. “They were more concerned with growth numbers than fake and compromised accounts.”
Congress grilled social media companies this week about Russian interference on their platforms in the 2016 U.S. elections. Lawmakers scolded them for how long it took to recognize the seriousness of the manipulation. Twitter has revealed that more than 36,000 Russian-linked accounts generated about 1.4 million automated, election-related tweets. It identified almost 3,000 accounts associated with the pro-Kremlin Internet Research Agency, more than 10 times the number it had disclosed a few months before. But few people believe this is a definitive tally.
Throughout Twitter’s history, security took a backseat to free speech and growth, according to ten former employees who asked not to be identified. In the early days of Twitter, which was founded in 2006, a handful of workers manually handled requests from users to take down abusive or spam content, according to former staff. Though the number of teams and people dedicated to security increased dramatically over the years, engineering resources remained scarce.
Twitter has rotated through more than half a dozen product chiefs in the past several years, making it difficult for the company to set a consistent strategy around user security and safety policies, they said. For many years, dealing with activity from trolls, fakers and abusers was a game of whack-a-mole — not a problem to try to prevent. Twitter declined to make simple changes that would have mitigated the problem, like requiring a phone number to create an account or labeling bot accounts with a digital marker, according to some of the employees. Those efforts to prevent manipulation often came up against the growth team, whose chief concern was growing monthly active users, the most important metric in Wall Street’s valuation of Twitter. This, said Miley and other former employees, set the stage for potential interference by more malicious actors.
“For many years, Twitter has fought a high volume of spam and spam accounts originating from Russia and Ukraine. Suspensions and other enforcement actions on such accounts number in the millions per week,” a Twitter spokeswoman said in an emailed statement. Bloomberg LP is developing a global breaking news network for the Twitter service.
More recently, Twitter has doubled down on security. In its testimony, the company’s general counsel said that it was dedicating all its engineering, product and design teams to rooting out Russian manipulation on its platform. It also said it has improved algorithms to actively block suspicious logins and spam accounts. Yet experts are less than impressed. When Congress asked Clint Watts of the Foreign Policy Research Institute to grade how the tech companies are responding to malicious actors on their platforms, he said: “All have improved in recent years. Facebook is the best based on my experience. Google is not far behind. Twitter would be last and always resists.”
Miley joined Twitter in 2013. He started the product safety and security team in 2014. In 2015 he became a manager on the accounts team where he was responsible for the infrastructure that handled user log-ins.
Miley was dismissed during a round of job cuts at the end of 2015. But as the only African-American engineer in a leadership position at Twitter, he said he had already told the company he planned to leave because of his frustrations with management and the company’s lack of diversity. He also said he declined the severance package so he could speak about his experiences at the company. Miley recently left his job as head of engineering at Slack to work with Venture for America, a non-profit that encourages entrepreneurship. Before Miley left Twitter, he became increasingly concerned about the proliferation of malicious accounts on the platform.
In 2015, researchers from the University of California at Berkeley approached Twitter, asking for help, Miley said. They had found that Twitter had a significant number of fake accounts, but wanted more data to further their research. Three employees on the product safety and security team, including Miley, met with them. They declined to give the academics data, but the meeting made them curious.
Afterward, the employees ran an analysis on Twitter’s accounts. Miley said he was stunned to find that a significant percentage of the total accounts created on Twitter had Russian and Ukrainian IP addresses. According to Miley’s recollections, he brought the information to his manager, who told him to take the issue to the growth team. Miley said that he doesn’t have records of the tallies.
“When I brought the information to my boss, the response was ‘stay in your lane. That’s not your role’,” Miley said.
Miley said he advised the growth team to delete most of the accounts they had surfaced from Russia and Ukraine, since the analysis suggested that most were inactive or fake. The growth team didn’t take any action on the Russian and Ukrainian accounts after he presented the data to them, according to Miley.
Many pro-Trump bots that were active during the 2016 U.S. elections were long-dormant accounts, according to researchers. These profiles give the illusion that they’re legitimate, and not created for the sole purpose of spreading propaganda during a campaign, according to Samuel Woolley, research director of the Digital Intelligence Lab at the Institute for the Future, a non-profit research organization.
During Twitter’s testimony this week, multiple congressmen pressed the company about the percentage of fake or spam accounts. Twitter says it’s less than 5 percent, while outside research has found the number to be closer to 15 percent.