Representatives from Facebook, Twitter, and Google told Congress on Oct. 31 that they have had to learn to combat nontraditional cyberattacks, such as the spread of disinformation, rather than focusing on malware alone to protect consumers.

Colin Stretch, general counsel for Facebook, reported to the Senate Committee on the Judiciary that approximately 11.4 million people saw at least one ad placed by fraudulent accounts tied to the Russian Internet Research Agency (IRA) between 2015 and 2017.

“They turned modern technologies to their advantage,” said Sen. Dianne Feinstein, D-Calif.

After the 2016 presidential election, Facebook identified about 470 IRA accounts, and Twitter identified more than 2,000 IRA accounts, according to Feinstein.

“That foreign actors, hiding behind fake accounts, abused our platform and other Internet services to try to sow division and discord—and to try to undermine our election process—is an assault on democracy, and it violates all of our values,” Stretch said.

Facebook has begun using a variety of technologies to detect and shut down fake accounts. In October 2016, the company disabled about 5.8 million fake accounts in the United States. By incorporating what it learned from those accounts into its automated detection systems, Facebook disabled more than 30,000 accounts in advance of the French election.
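
Stretch did not describe Facebook's detection systems in detail, but the general idea of folding signals from known fake accounts into an automated pipeline can be illustrated with a minimal sketch. The account fields, heuristic weights, and threshold below are invented for illustration and are not Facebook's actual system.

```python
# Illustrative sketch only -- not Facebook's actual system. It shows the general
# idea of scoring accounts against signals learned from prior fake-account takedowns
# and queuing high-scoring accounts for automated disablement.
from dataclasses import dataclass


@dataclass
class Account:
    account_id: str
    age_days: int                  # how long the account has existed
    friend_requests_sent: int
    posts_per_day: float
    shares_known_bad_urls: bool    # hypothetical signal from prior takedowns


def fake_account_score(acct: Account) -> float:
    """Combine simple heuristic signals into a 0..1 suspicion score."""
    score = 0.0
    if acct.age_days < 7:
        score += 0.3               # very new accounts are riskier
    if acct.friend_requests_sent > 100:
        score += 0.3               # aggressive friending is a common bot pattern
    if acct.posts_per_day > 50:
        score += 0.2               # unusually high posting volume
    if acct.shares_known_bad_urls:
        score += 0.4               # reuses content seen in earlier takedowns
    return min(score, 1.0)


def accounts_to_disable(accounts: list[Account], threshold: float = 0.7) -> list[str]:
    """Return IDs whose combined score crosses the (hypothetical) action threshold."""
    return [a.account_id for a in accounts if fake_account_score(a) >= threshold]


if __name__ == "__main__":
    sample = [
        Account("a1", age_days=2, friend_requests_sent=250, posts_per_day=80,
                shares_known_bad_urls=True),
        Account("a2", age_days=900, friend_requests_sent=3, posts_per_day=1.5,
                shares_known_bad_urls=False),
    ]
    print(accounts_to_disable(sample))   # -> ['a1']
```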

“Going forward, we’re also requiring political advertisers to provide more documentation to verify their identities and disclose when they’re running election ads,” Stretch said.

Facebook will require political advertisers to confirm the business or organization they represent before they can buy ads. Those accounts and ads will be marked as political and will disclose who paid for them. Facebook plans to start with Federal elections in the U.S. and extend the requirements to other U.S. elections and to other countries.
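
The disclosure requirements Stretch described amount to a verification-and-labeling workflow. The sketch below models it as a small data structure; the class and field names are assumptions for illustration, not Facebook's actual API.

```python
# Illustrative data-model sketch; names and fields are assumptions, not Facebook's API.
from dataclasses import dataclass


@dataclass
class PoliticalAdvertiser:
    business_name: str
    identity_verified: bool          # documentation checked before ads can run


@dataclass
class PoliticalAd:
    ad_id: str
    advertiser: PoliticalAdvertiser
    paid_for_by: str                 # disclosure string shown with the ad
    marked_political: bool = True


def can_run(ad: PoliticalAd) -> bool:
    """An election ad runs only if the advertiser is verified and the payer is disclosed."""
    return ad.advertiser.identity_verified and bool(ad.paid_for_by) and ad.marked_political


if __name__ == "__main__":
    advertiser = PoliticalAdvertiser("Example Campaign Co.", identity_verified=True)
    ad = PoliticalAd("ad-001", advertiser, paid_for_by="Paid for by Example Campaign Co.")
    print(can_run(ad))   # True
```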

In response to the changing nature of these attacks, Twitter launched its Information Quality initiative, which aims to improve how the company detects and denies bad automation, to strengthen the machine learning it uses to spot spam, and to increase the precision of the tools designed to keep such content from spreading on the platform.

Twitter detects and blocks approximately 450,000 suspicious logins each day. In October 2017, Twitter’s systems identified and challenged an average of 4 million suspicious accounts globally per week, including more than 3 million challenged upon signup, before the accounts were able to appear on the site.
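
Edgett's figures describe challenging suspicious accounts at signup, before they ever appear on the site. A simple velocity check, such as flagging too many signups from one IP address in a short window, conveys the basic idea; the thresholds and signals below are invented for illustration, and Twitter's production systems rely on far richer machine-learning features.

```python
# Minimal sketch of challenging suspicious signups by source-IP velocity.
# Thresholds and signals are invented for illustration only.
import time
from collections import defaultdict, deque


class SignupChallenger:
    """Challenge signups when one IP creates accounts faster than a human plausibly could."""

    def __init__(self, max_per_hour: int = 5):
        self.max_per_hour = max_per_hour
        self._events: dict[str, deque[float]] = defaultdict(deque)

    def should_challenge(self, ip: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        window = self._events[ip]
        window.append(now)
        # Drop events older than one hour from the sliding window.
        while window and now - window[0] > 3600:
            window.popleft()
        return len(window) > self.max_per_hour


if __name__ == "__main__":
    challenger = SignupChallenger(max_per_hour=5)
    # Six rapid signups from the same IP: the sixth gets challenged.
    results = [challenger.should_challenge("203.0.113.7", now=float(i)) for i in range(6)]
    print(results)   # [False, False, False, False, False, True]
```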

In the coming year, Twitter plans further investment in machine-learning capabilities that detect automated account activity and mitigate its effect on users.

“Our engineers and product specialists continue this work every day, further refining our systems so that we capture and address as much malicious content as possible,” said Sean Edgett, acting general counsel of Twitter.

Sen. Lindsey Graham, R-S.C., who chaired the hearing, said its purpose was to determine how Russia used technology to spread disinformation and how Congress could work with the technology industry to combat this behavior.

“I can say without a doubt that what we’re doing collectively is not working,” Graham said.

Graham said that Congress recognizes that social media can generate positive political discussions.

“These technologies can also be used to undermine our democracy and put our nation at risk,” Graham said. “The bottom line is these platforms are being used by people that wish to do us harm, and wish to undermine our way of life. They found portals into our society that are intermingled with everyday life.”

Morgan Lynch is a Staff Reporter for MeriTalk covering Federal IT and K-12 Education.