A total of 5 billion people around the world use the internet today, and with this exponential rise of the wireless world, something unexpected has emerged: a vast treasure trove of information is building up that can be mined to predict our preferences, our inclinations and even our future behavior. We now live in a world where fertile ground for computational propaganda has been created by combining the superfast computational capacity of Big Compute with these oceans of specific personal information.

Social Media and Propaganda
Computational propaganda has burst into public consciousness. Social media platforms are vigorously used, in diverse ways and on different topics, as tools for public opinion manipulation. Considerable volumes of sensational, fake and junk news are served by and on these platforms. In fact, the platforms favor sensationalist content, regardless of whether it has been fact-checked or comes from a reliable source, and they reveal little about such content or its impact on their users. Several distinct global trends in computational propaganda can be observed: in authoritarian countries, social media platforms have become a prominent means of social control, especially during political and security crises, whereas in democracies, social media platforms are actively used for computational propaganda, either through broad efforts at opinion manipulation or through targeted experiments on specific fractions of the public.

Propaganda and its Instruments
The telltale signs of digital/computational propaganda are automation, scalability and anonymity. Automation enables propaganda messages to scale and reach a large, diverse audience through breakneck cycles of sharing, repurposing and further dissemination, while anonymity shields the operators behind them. Some existing propaganda instruments are: (1) bots, (2) fake social media accounts (which require some human curation) and (3) troll farms.
(1) BOTS: Bots can be defined as software programs or agents created to perform simple, repetitive, typically text-based tasks. The effects of bots on social media are fascinating, but also alarming. It is claimed that bots maintain a parallel presence on many social media platforms and are capable of mimicking a human lifestyle, adhering to a convincing sleep-wake cycle that makes them almost impossible to detect.
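One way to probe the sleep-wake mimicry described above from the defender's side is to look at when an account posts: a human account shows a pronounced overnight dip, while a naive always-on bot posts roughly uniformly around the clock. The sketch below is purely illustrative; the function name, the choice of "night" hours and any thresholds built on this ratio are assumptions, not an established detection method.

```python
from collections import Counter
from datetime import datetime

def night_activity_ratio(timestamps, night_hours=range(1, 6)):
    """Fraction of posts made during typical sleep hours (1am-5am).

    A human-run account usually shows a pronounced dip in this window;
    a naive always-on bot posts roughly uniformly around the clock.
    The hour window is an illustrative assumption only.
    """
    hours = Counter(ts.hour for ts in timestamps)
    total = sum(hours.values())
    if total == 0:
        return 0.0
    night = sum(hours[h] for h in night_hours)
    return night / total

# A perfectly uniform poster: one post in every hour of the day.
uniform = [datetime(2024, 1, 1, h, 0) for h in range(24)]
print(night_activity_ratio(uniform))  # ~5/24, i.e. about 0.208
```

A bot that convincingly mimics a sleep-wake cycle would score close to a human on exactly this kind of check, which is why the article calls such bots almost impossible to detect with simple heuristics.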
Commonly observed bot types are as under:
a. Impact bots: create a mass following for certain persons, profiles or pages in order to establish a larger online presence.
b. Amplifiers: increase or amplify the number of voices by sharing, liking and promoting certain content on social media platforms.
c. Complainers: get certain accounts blocked by sending numerous complaints.
d. Trackers: detect and drive attention toward certain online behaviors.
e. Service bots: automate the process of bot account registration by generating names and email addresses or solving CAPTCHAs.
f. Dampeners: suppress certain messages, channels or voices in order to exclude information or people.
(2) Fake Social Media Accounts: Accounts under false identities are created manually in order to create the illusion of large-scale consensus. These fake accounts are most often used for manual commenting to promote certain messages or to trivialize or hijack online debate; quite often they are also used to boost attention to certain topics or voices on social media.
(3) Troll Farms/Armies: Initially, trolls were defined as “those who deliberately baited people to elicit an emotional response”. Trolling can also be associated with hate speech and harassment. Recently, it has evolved into an increasingly organised activity, one that can be linked to paid posting (similar to the Chinese operation known as ‘the 50 Cent Army’).

Detection of Propaganda Bots
Detection of propaganda/influence bots operating in the social media ecosystem has been a subject of concern. Employing bots to spread propaganda is relatively new, and the tactics are still developing. Likewise, the methods and techniques used to detect and mitigate these bots are still in their formative stages. Certain best practices and features for the detection and characterisation of propaganda botnets are beginning to converge, but results remain inconsistent. As bots become more and more sophisticated, mitigating them, especially in real time, has become an arduous task.
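The converging "best practices and features" mentioned above typically combine simple account-level signals such as posting rate, account age and follower-to-following ratio. A minimal sketch of such a feature-based heuristic is shown below; the weights, cut-offs and the `Account` fields are illustrative assumptions for exposition, not a tested or published model.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    posts: int
    followers: int
    following: int

def bot_likelihood(acct: Account) -> float:
    """Crude 0..1 heuristic built from features commonly cited in
    bot-detection work (posting rate, account age, follow ratio).
    All weights and thresholds are illustrative assumptions.
    """
    score = 0.0
    posts_per_day = acct.posts / max(acct.age_days, 1)
    if posts_per_day > 50:                             # inhuman posting rate
        score += 0.4
    if acct.age_days < 30:                             # freshly registered
        score += 0.3
    if acct.following > 10 * max(acct.followers, 1):   # mass-following pattern
        score += 0.3
    return min(score, 1.0)

# A week-old account posting 300 times a day and mass-following.
suspect = Account(age_days=7, posts=2100, followers=12, following=4800)
print(bot_likelihood(suspect))  # 1.0
```

The inconsistency of real-world results noted in the text follows directly from this style of detector: sophisticated bots deliberately keep each of these signals inside human-looking ranges.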
To date, the successes have largely been forensic, in determining the impact of computational propaganda after the fact.
The Business of Computational Propaganda Needs to End
The business of bots and sock puppet accounts used to push content online has grown exponentially. Inauthentic accounts, whether coordinated, automated or manually operated, are made to trick and manipulate social media trending algorithms into thinking that a particular hashtag is popular, bypassing authentic human users entirely. Some of the firms engaged in this growth hacking and social media amplification have been punished for their computational propaganda practices. For example, Devumi, a US-based outfit specializing in accelerating social growth, recently reached a multi-million dollar settlement with the Federal Trade Commission after it was caught selling fake indicators of social media influence such as Twitter followers, retweets, and YouTube subscribers and views. India can implement something similar, because if we do not stop the computational propaganda business now, its practitioners will keep becoming more powerful and proficient at manipulating public opinion, thereby hijacking our primary information systems.
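The trending-algorithm manipulation described above leaves a characteristic footprint: many distinct accounts posting the same text within a short window. A hedged sketch of spotting that footprint is below; the function, input format, and the window and account-count thresholds are all illustrative assumptions, far simpler than what platforms actually deploy.

```python
from collections import defaultdict

def coordinated_bursts(posts, window_secs=60, min_accounts=5):
    """Flag messages posted verbatim by many distinct accounts within a
    short time window -- a common signature of coordinated hashtag
    inflation. `posts` is a list of (account_id, unix_time, text)
    tuples; thresholds are illustrative assumptions only.
    """
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, items in by_text.items():
        items.sort()
        times = [t for t, _ in items]
        accounts = {a for _, a in items}
        if len(accounts) >= min_accounts and times[-1] - times[0] <= window_secs:
            flagged.append(text)
    return flagged

# Six accounts push identical text within five seconds; one human does not.
posts = [(f"acct{i}", 1000 + i, "#Trending buy now!") for i in range(6)]
posts.append(("human1", 5000, "just my own opinion"))
print(coordinated_bursts(posts))  # ['#Trending buy now!']
```

Campaigns evade exact-match checks by lightly paraphrasing each copy, which is one reason regulators ended up targeting the sellers (as in the Devumi case) rather than relying on platform-side detection alone.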

Way Forward
Living in the era of “computational propaganda” should concern us all deeply. It does not just aim to make unpopular opinions appear more popular; it silences and splinters the under-represented groups whose participation is inherent to the functioning of democracy.
In every country, civil society groups are trying, but struggling, to protect themselves and respond to active misinformation campaigns. The World Economic Forum recently identified the rapid spread of misinformation online as among the top 10 perils to society. Hence, it is imperative to create a safer and more accessible online public sphere through prevention and intervention measures such as:
(a) Standardization of social media platforms’ responses to onslaughts of trolling, doxing and other targeted attacks.
(b) Media literacy programs to strengthen the informational environment against the perils of misinformation and disinformation.
(c) Development of a rapid response procedure for neutralizing amplification effects.
(d) Monitoring of the potential for viral dissemination of hot-button issues, with immediate action to mitigate the risk.
(e) Withdrawal of safe harbor protection (which shields social media platforms from legal liability) in cases of inaction by an intermediary or where the intermediary is an active participant.
(f) In some cases, imposition of criminal penalties on individuals or entities that amplify disinformation or harassing content on sensitive issues, such as medical treatments, which can potentially cause harm to other users.
Despite efforts to combat the use of automation and algorithms to manipulate public opinion, several social media companies still promote, use or abet computational propaganda, which poses a threat to society. Whatever our social or political inclinations may be, if we value a healthy, functioning democracy, then something must be done to get ahead of computational propaganda’s curve. A comprehensive worldwide policy that works to effectively dismantle the business of computational propaganda would be a first step.

Khushbu Jain is a practicing advocate in the Supreme Court and founding partner of the law firm, Ark Legal. She can be contacted on Twitter: @advocatekhushbu.