BY: ROBERT BRENNAN HART
“Being adequately informed is a democratic duty, just as the vote is a democratic right. A misinformed electorate, voting without knowledge, is not a true democracy.” – Jay Griffiths
It is no secret that today’s society is struggling with objective reality. The rise of opportunistic international firms whispering nano-targeted political messages into the ears of citizens already dangerously entrenched in their own custom-fit echo chambers has created a feedback loop that threatens the very foundation of our democratic system. This information war is an international, bipartisan buffet, with third-party organizations and political parties of all stripes appearing to take advantage of propaganda bots, machine learning, and data mining to find the holy grail of granular targeting. But at what cost?
As is often the case, the advancement of technology has outpaced the public’s ability to stay informed and the government’s ability to keep up with regulation. One aspect of the new digital playing field that has garnered little attention in recent weeks, yet has the potential to disproportionately affect our political discourse, is Twitter’s propaganda bot problem.
Over the last several years, many Canadian researchers and news outlets have commented on the rise of political propaganda bots on social media platforms. There are reports of Twitter bot usage in Quebec politics as early as 2012, and yet it seems these trends continue unabated without any meaningful regulation. Reasonable calls for a digital campaigning code of conduct seem to fall on deaf ears, leaving policing of these matters to the very social media outlets that allowed this to happen in the first place. What does it mean for democracy when a propaganda bot is indistinguishable from a human account?
The recent revelations about Cambridge Analytica and its alleged Canadian offshoot Aggregate IQ have put these modern political tactics under increased scrutiny, but will anything actually change?
In an effort to get ahead of this issue, I sat down with three of the biggest names in cyber security to analyze the state of play and find out what citizens, governments, and political parties can do to take advantage of the benefits of these new tools while hedging against the potential ethical and societal risks.
ROBERT BRENNAN HART
Are political propaganda bots on your day-to-day radar? What cyber security or other mechanisms are being leveraged to contain them?
ROBERT HERJAVEC, CEO OF HERJAVEC GROUP AND HONORARY EVENT CHAIR OF THE CANADIAN CLOUD COUNCIL’S CONTROL
MICHAEL HERMUS, RECENTLY DEPARTED CTO OF US DEPARTMENT OF HOMELAND SECURITY
It is important to note that this problem is not limited to “bots”, or autonomous fake accounts posing as real people. A parallel tactic, often used in concert with bots, is to leverage real people pretending to be different real people to spread misinformation or promote a certain agenda. These are “trolls” and/or “sock puppets”, and the Russian organization recently referenced in the Robert Mueller indictment, the Internet Research Agency, is a prime example of a “troll farm” deployed to tremendous effect.
In general, most people I encounter are quite concerned with this, as citizens of democracies that rely on access to accurate information. Unfortunately, the issue does not necessarily rise to the top of the agenda for many organizations, unless it directly impacts their business model. This includes social media firms, ‘traditional’ media organizations (which are now all partially digital), businesses in the digital advertising ecosystem, and political organizations. Clearly, some government entities are also quite focused on this problem – and not only from the perspective of protecting our democratic institutions. Law enforcement and national security entities have been monitoring the use of these tactics by terrorist and extremist organizations for propaganda and recruiting, for quite some time.
In terms of mitigation and containment, the techniques used to deal with terrorists unfortunately don’t work as well for misinformation campaigns – it is a much more insidious threat. Violent and extremist content is often easily identifiable through a combination of automated and manual techniques, and posting such content always violates the terms of service for major platforms. Therefore, these posts can be taken down quickly, and related accounts can be suspended. The technology platforms are usually quite interested in cooperating with law enforcement on this front.
On the other hand, many modern disinformation campaigns deal with politically polarizing topics that have a natural (real human) constituency. Trying to separate fact from fiction in this realm is a very tricky grey area, for both social media platforms and the government (at least in free democratic societies). However, there are ways to identify patterns of behavior and account characteristics that are typical of bots or sock puppets, and since the accounts are fake or fraudulent in some material way (which also violates most terms of service), this can be used to shut them down. Unfortunately, as these adversaries become more sophisticated, enabled by advances in technology, the detection algorithms and techniques must evolve as well, creating a digital arms race.
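To make the idea of pattern-based detection concrete, here is a minimal, hypothetical scoring heuristic. The features and thresholds (account age, posting cadence, follower ratio, duplicate content) are illustrative assumptions only — real platforms use far richer signals and machine-learned models, not this toy logic:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    posts_per_day: float
    followers: int
    following: int
    duplicate_post_ratio: float  # fraction of posts that are near-identical

def bot_score(a: Account) -> float:
    """Toy heuristic: higher score means more bot-like. Thresholds are illustrative."""
    score = 0.0
    if a.age_days < 30:
        score += 0.25               # very new accounts are more suspect
    if a.posts_per_day > 50:
        score += 0.35               # superhuman posting cadence
    if a.following > 0 and a.followers / a.following < 0.1:
        score += 0.15               # follows many, followed back by few
    score += 0.25 * a.duplicate_post_ratio  # copy-paste amplification behavior
    return min(score, 1.0)

suspect = Account(age_days=10, posts_per_day=120, followers=12,
                  following=900, duplicate_post_ratio=0.8)
print(bot_score(suspect))  # a high score, close to 1.0
```

The arms-race dynamic Hermus describes shows up immediately: as soon as thresholds like these are known, operators age their accounts and throttle posting rates to slip under them, forcing detectors to evolve.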
ROBERT BRENNAN HART
What regulations should governments look at to make democracy more resilient to these kinds of campaigns? Are there pragmatic and meaningful ways to legislate on this issue?
RICHARD RUSHING, CHIEF INFORMATION SECURITY OFFICER AT MOTOROLA
One must remember that data is power. Even bad data can still be powerful. It is the Internet; it is an IP address; it is a faceless user, and trying to validate that user will be hit or miss at some level. All you have to do is bring a system’s trustworthiness into question, and the bad guys have won. Just like with a data breach, trust is hard to earn back once it is lost or brought into question.
LANCE JAMES, CHIEF SCIENTIST AT FLASHPOINT AND FOUNDER OF THE CRYPTOLOCKER WORKING GROUP
The solution to this problem will require significant research, as content is becoming the new security problem. But Congress could build a verified “journalist” source repository: one that is digitally signed, which sources can opt in to use, verifying that their content comes from a registered source of journalism. It is hard to tell the difference between an opinion and a news source these days, as some consider blogs to be news, and news to be blogs.
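The verification flow James describes could be sketched as follows. This is a simplified illustration using a shared-secret HMAC; the registry name, outlet, and secret are invented for the example, and a production system would instead use public-key signatures (e.g. Ed25519) so outlets never have to share private keys with the registry:

```python
import hmac
import hashlib

# Toy registry mapping outlet name -> shared secret. In a real deployment this
# would hold public keys for asymmetric signature verification instead.
REGISTRY = {"example-news": b"registered-outlet-secret"}

def sign_article(outlet: str, body: bytes) -> str:
    """The registered outlet signs its content before publishing."""
    return hmac.new(REGISTRY[outlet], body, hashlib.sha256).hexdigest()

def verify_article(outlet: str, body: bytes, signature: str) -> bool:
    """Readers (or platforms) check that content really came from a registered source."""
    if outlet not in REGISTRY:
        return False  # not a registered source of journalism
    expected = hmac.new(REGISTRY[outlet], body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

article = b"City council passes budget."
sig = sign_article("example-news", article)
print(verify_article("example-news", article, sig))      # True
print(verify_article("example-news", b"tampered", sig))  # False
```

Note that a scheme like this can only establish provenance (that content came from a registered source, unmodified); it says nothing about whether the content is accurate, which is why James pairs it with awareness and transparency measures.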
To create resiliency, the reality is that one must use transparency to fight deception. The human condition will respond to its biases, and these attacks take advantage of cognitive dissonance and belief-based thinking. Disinformation focuses on corrupting the decision-making process, and the worst thing to do is literally react or have a reflex (it’s called reflexive control for that very reason). An actual step-back, awareness-based model will have to be implemented, which will require analyzing the root cause of the issues. Banning information will not be a sound way of solving this problem; instead, we should encourage informed understanding and ways to identify what is true in an overwhelming age of information.
The best way to do this is through cybersecurity as a detection method, discrediting the information immediately through a platform that users can go to and check if this source is propaganda (digital signatures, etc.), and creating awareness on the effects of psychological propaganda and how it works. Training the masses to recognize the truth will be difficult at first, but that also means that the government will need to be transparent on the objectives of the United States so that it is aligned and we have a stable source of truth to work with.
As indicated earlier, this can be a bit of a grey area that makes it harder to legislate. Any attempt to control or restrict content obviously runs up against important concepts of free speech.
However, there is one overarching principle that can help solve these problems: transparency. The right kind of transparency can quite literally shine a light on users, organizations, and motivations, allowing consumers to be better informed as they make judgments about content. For example, legislation has been introduced in the U.S. to require that political advertisements on digital and social platforms disclose information about who paid for the ads. Additional transparency around social media account owners that makes it harder to post anonymously, or under fake identities, would also be tremendously helpful.
Data privacy is the key issue here and it will absolutely require governmental intervention to be resolved.
We can’t rely on private corporations to make privacy, and in turn the reduction of propaganda campaigns, a priority. We also can’t rely on consumers: they want it both ways, the efficiency and experience as well as the privacy and security. As consumers, we rarely read the Ts & Cs; we click “accept”, we download the unauthorized app, and yet we also want privacy and security. Without penalties for the organizations that flout a policy, that policy is already ineffective. We have to consider user opt-in, flexibility to control data access, breach notification, and penalties for not abiding by the regulation.
Private organizations need to feel the pain of not adhering to these policies. The first time we will truly see this in effect is with the EU’s GDPR legislation slated to come into effect May 25, 2018.
ROBERT BRENNAN HART
Are social media companies doing enough to safeguard the public against propaganda bots?
Clearly not. Facebook didn’t do enough to safeguard their users’ information, and in retrospect I’m sure they would agree.
With a platform like Facebook, Zuckerberg and his executive team have the opportunity to set an example for corporate America, and really the world, when it comes to data privacy standards. They see that now. That being said, very few companies hold privacy as a top priority. It will require governmental intervention and strict regulation to see true change. Over the next two years, I expect we will see the US adopt a similar policy to the EU’s General Data Protection Regulation. It is something that we absolutely have to do.
As long as social media remains advertisement-driven, the consumer will always be in danger. A single cookie could cause a user to change their opinion on a subject because everywhere they go on the web, they are served biased information or misinformation.
The big players (Facebook, Twitter, Google, etc.) certainly put significant resources into combating many kinds of fraudulent accounts and prohibited content. They are fairly aggressive in dealing with violent or terrorist content. However, I think it is safe to say that these organizations could do more to combat trolls, sock puppets, bots, and disinformation campaigns in general.
The Russian election interference and the recent Cambridge Analytica scandal have put a lot of pressure on Facebook, and I don’t think it is a coincidence that CEO Mark Zuckerberg recently came out in support of the Honest Ads Act.
Facebook is also planning to require identity verification for a broad range of issue ads, and to create a searchable archive of political ads to aid in transparency. These are very good steps in the right direction.
ROBERT BRENNAN HART
How can consumers and businesses protect themselves against digitally weaponized psychology?
Consumers should inform themselves, check their sources and do their homework. You may not agree with a popular news source but at least you know it’s not foreign operated. Stick with the sources you trust rather than opening yourself up to just anything on the Internet. Fact check if possible before making an opinion. When reading a headline, ask yourself how you feel when you read that headline…did it make you emote and react? Why? Who’s the source that’s doing this? And lastly, assume everything you see on the Internet is untrue until researched and confirmed by credible, mainstream sources.
Honestly, some fairly basic axioms come to mind. The most obvious is “Don’t believe everything you see on Facebook or Twitter”. While social media, and digital content in general, have created an environment where everyone has a platform, all sources should not be treated equally. Various “mainstream” media outlets unquestionably have a degree of political bias, but their “hard news” components (as opposed to opinion or commentary) have a pretty good track record of being based on facts. Elevating Facebook posts linking to unknown websites to the same level as actual news media is inherently unhealthy to democracy.
A strong corollary to this is “Come out of your echo chamber every so often.” At all points of the political spectrum, digital media has facilitated an increasing isolation of ideology, such that people socialize with, and consume content from, people who share their own viewpoints. Conversely, they can literally block out those who have different opinions. This environment drives increasing polarization as individuals continually reinforce preconceived beliefs and limit exposure to any contrary evidence or opinion. This is a situation ready-made for exploitation by disinformation campaigns with nefarious agendas.
ROBERT BRENNAN HART
What cybersecurity, ethical and political related measures can be used to ensure democracy ultimately wins the digital arms race?
I come back to the core principles of transparency and diversity of thought as the foundation for mitigating these threats. In a democracy, we must respect the right of others to disagree with us on every issue, however strong our beliefs. But there should be some shared version of ‘truth’, or objective fact, upon which discussions can be based.
The current environment reinforces a tribal, choose-your-own-reality mentality, which is ultimately corrosive to a functioning society. By rooting out fake content and fraudulent online identities – or at least giving people enough data to make their own judgments – we may be able to make meaningful improvements over time.
Democracy is a learning process. In some respects we aren’t a pure democracy; we are a democratic republic, so it’s easy to have strong, varying opinions on what democracy means. America is one of the youngest countries in the world and we have a lot to learn. I think the process we have of valuing transparency and keeping the people informed and enabled is still the best process.
We built the country on these founding principles and, regardless of our differing opinions, we are all trying to get to a good place together. We need to remain humble, consult experts in their respective fields to help guide us there, and understand that we are still a young country. We need to keep a beginner’s mindset along the way.
Robert Herjavec, Michael Hermus, Richard Rushing and Lance James will all be speaking at the Canadian Cloud Council’s (www.canadiancloudcouncil.com) upcoming Control (www.control2018.com) event on May 14 and 15 in Edmonton, Alberta.
For the next 72 hours, BetaKit readers can register for the event for only $95.00 by clicking on this link –