Astroturfing – or fabricating an impression of widespread grassroots support for a policy, individual, or product, where little such support exists – furthers astroturfers’ hidden motives and is more prevalent than one might imagine.
A new study by Dr. Kim-Kwang Raymond Choo, associate professor of information systems and cybersecurity and cloud technology endowed professor at The University of Texas at San Antonio (UTSA), describes a method for detecting people who dishonestly post online comments, reviews, or tweets across multiple accounts – a practice known as “astroturfing.”
The method analyzes word choice and punctuation to detect whether one person or multiple people are responsible for statements disseminated online. Choo worked mainly with posts by online commenters on news websites, but noted that the method can also be applied to Twitter and other social media platforms.
In her TED Talk, investigative journalist Sharyl Attkisson addressed how multiple online identities and fake pressure groups can mislead the public into believing that the astroturfer’s position is a commonly held view. The internet’s anonymity facilitates this practice of masking the sponsors of a message or organization, making it look like social media posts, blog posts, online reviews, and other messaging reflect a wider consensus.
When astroturfers post product reviews or political commentary under a number of different names, the intended deception can be widespread, as reflected by Attkisson’s list of top 10 astroturfers.
Choo’s method helps determine authorship attribution and defines the unique qualities of authorship across different sets of content. The analysis employs a binary n-gram method, which is primarily used in text mining and natural language processing tasks, to find sets of co-occurring words within a given window of content.
Choo and his collaborators found that it is challenging for authors to completely conceal their writing style in their text. The statistical method analyzes multiple writing samples, using word choice, punctuation, and context to detect whether one person or multiple people are responsible for the samples.
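The study itself does not publish its code, but the idea behind n-gram authorship comparison can be illustrated with a minimal sketch. The example below is a hypothetical simplification, not Choo’s actual method: it records the presence or absence (hence “binary”) of character n-grams in each writing sample, then scores two samples’ overlap with Jaccard similarity. The function names and the choice of n = 3 are assumptions for illustration only.

```python
def char_ngrams(text, n=3):
    """Return the set of character n-grams in a text.

    Recording only presence/absence of each n-gram (rather than counts)
    is one simple reading of a "binary" n-gram profile.
    """
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}


def jaccard_similarity(a, b):
    """Overlap between two n-gram sets: 0 = disjoint, 1 = identical."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def same_author_score(sample1, sample2, n=3):
    """Score how stylistically similar two writing samples are (0 to 1)."""
    return jaccard_similarity(char_ngrams(sample1, n), char_ngrams(sample2, n))


# Two reviews with similar phrasing score higher than an unrelated one.
a = "Honestly, the product works great; I'd buy it again!"
b = "Honestly, this works great; I'd recommend it again!"
c = "Terrible experience. Slow shipping and poor quality."
print(same_author_score(a, b), same_author_score(a, c))
```

A production system would use far richer features – word choice, punctuation habits, and context, as the article notes – but even this toy comparison shows why fully disguising one’s writing style across many accounts is hard.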
“Astroturfing is legal, but it’s questionable ethically,” Choo said. “As long as social media has been popular, this has existed.”
Choo and his co-authors – his former students Jian Peng and Sam Detchon, as well as Helen Ashman, associate professor of information technology and mathematical sciences at the University of South Australia – used writing samples from the most prolific online commenters on various news websites, and discovered that what appeared to be a plethora of people posting their opinions online were in fact a handful of writers with multiple accounts.
Businesses have used this practice to manipulate social media users or online shoppers by having a paid associate post false reviews on websites. The same tactic also works on social media platforms when astroturfers create several false accounts to create the illusion of consensus where there isn’t one.
“It can be used for any number of reasons,” Choo said. “Businesses can use this to encourage support for their products or services, or to sabotage other competing companies by spreading negative opinions through false identities.”
Candidates for elected office have also been accused of astroturfing to create the illusion of public support for a cause or a campaign. For example, President George W. Bush, the Tea Party movement, former Secretary of State Hillary Clinton, and current Republican presidential candidate Donald Trump have all been accused of astroturfing to claim widespread enthusiasm for their platforms.
Now that Choo has the capability to detect one person pretending to be many online, he is considering further applications for his research. Stressing that astroturfing, while frowned upon, is not illegal, he’s now looking into whether the algorithm can be used to detect plagiarism and contract cheating.
“In addition to raising public awareness of the problem,” Choo said, “we hope to develop tools to detect astroturfers so that social media users can make informed choices and resist online social manipulation and propaganda.”
The Rivard Report asked Choo what motivated him to pursue this line of research.
“I am interested in a broad range of cybersecurity and forensics research from both offensive and defensive perspectives,” Choo said. “For example, in our recent research, we demonstrate that data can be exfiltrated from 3D printers, mobile devices, and even air-gapped computers using inaudible sound waves.”
The practice of astroturfing has not spread too widely – yet.
“It is important to recognize that in comparison with the total volume of messages and number of users online, the number of astroturfers remains relatively small,” Choo explained.
But don’t be surprised if you see tools that will help detect astroturfing in online content sooner rather than later.
“It is important for us to develop tools that will more effectively detect these hidden ‘interested parties,’ as astroturfers will constantly seek to exploit new opportunities and loopholes to manipulate these ‘opportunities’ (new online or social media features) for their own benefit,” Choo stressed.
“This will allow ordinary citizens (or the silent majority) to engage in constructive public and political discourse.”