Exclusive: Privacy and security in the Information Age


After an intense debate in the US Senate about Facebook’s influence on geopolitics, culture wars and mental health, Frances Haugen, a former employee of the tech empire turned whistleblower, came forward to shed light on the company’s inner operations. During her three-hour hearing, she testified that the company intentionally let its self-regulation efforts fail in order to maximise profits.

According to her testimony, Facebook’s algorithms are designed to boost the visibility of posts that generate higher engagement, while its AI systems and analysts are in place to detect and remove accounts and posts that violate its arbitrary ethics protocols. To her knowledge, these failsafe protocols were only 15 to 20% effective at blocking what the company categorises as harmful content.

What resonated with lawmakers during the hearing was that, despite intense denials by several high-ranking Facebook officials, the company’s own internal research, provided by Haugen, showed that the objective of increasing engagement, and thus profit, was not compatible with its stated ethical commitment to its users. The company made more money from posts predicted to generate more engagement, even when those posts were categorised as harmful to children, hate speech, bullying, promotion of human trafficking or other harmful online behaviour.

Despite the intense debate that the rise of Big Tech has provoked among lawmakers and human rights organisations around the world, from the Cambridge Analytica scandal to the 6 January insurrection, the role of social media in society has been left largely unchecked, and the companies behind the platforms have been trusted to self-regulate.

Big Tech giants are protected in the United States under Section 230 of the Communications Decency Act. This law, passed in 1996, states that an “interactive computer service” can’t be treated as the publisher or speaker of third-party content, and so platforms are not legally liable for the content posted on them.

From the ban of former President Donald Trump while he was still in office, to the turbo-charged efforts during the COVID pandemic to censor dissenting claims about the origins of the virus and about vaccine hesitancy, the platforms’ role as editors of the content posted on them has become more and more obvious, and so, in the opinion of many US and EU lawmakers, grounds for legal liability.

Data protection

On top of this discussion sits the issue of user data privacy and protection. It was recently revealed, through leaked documents from Google, that the company had been secretly working with governments and law enforcement agencies to detect and track harmful behaviour by reporting on specific queries that users make. Massive hacks over the years have also exposed the private data of millions, if not billions, of users, making it readily available for even novice actors to inflict damage.

As cyber threats grow and evolve, killware and ransomware attacks, enabled by this lack of oversight, can have serious and lasting effects on individuals and organisations.

According to the leak, now being reported on as “the Facebook Papers”, the company is essentially the world’s largest advertising agency and is actively invested in the content posted on its platforms in order to maximise engagement. It is not a neutral party in the information economy, so it should be held responsible.

Haugen is set to appear before the UK Parliament, this time with a cache of information. Her evidence will help the legislative body shape the Online Safety Bill, a seminal piece of legislation aimed at regulating social media, due to be put before Parliament for approval in 2022.

“Frances Haugen’s evidence has so far strengthened the case for an independent regulator with the power to audit and inspect the big tech companies,” said MP Damian Collins ahead of her appearance. “There needs to be greater transparency on the decisions companies like Facebook take when they trade off user safety for user engagement.”

The FTC fined Facebook US$5 billion for its role in the Cambridge Analytica scandal, and in 2021 the EU fined Facebook for providing misleading information about its merger with WhatsApp and the data sharing between the two companies, in violation of EU privacy laws. Similar cases have occurred in India, where the government has threatened to jail Facebook, WhatsApp and Twitter employees for their role in data sharing and in stoking political unrest, whether justifiably or not.

Because of the success of Facebook and other Big Tech companies in creating and benefitting from the social media revolution, they have deep pockets with which to lobby and influence lawmakers all over the world. But there is precedent showing that this war for our privacy in the Information Age can be won, one battle at a time.

It is important to note that Section 230 applies only in the United States; nations worldwide have the ability, and the responsibility, to individually reformulate their relationships with Big Tech players and craft up-to-date laws that regulate their behaviour. There needs to be transparency about the algorithms that determine the behaviour of the information economy.

Mitigating the risks

So, how can individuals and companies prevent and mitigate the risks of social media use?

Twenty years into the social media revolution, we know how to navigate these platforms so as to prevent and mitigate the risks of using them. We know their benefits as well as their potential for harm, so we know how to be mindful users.

Adjust the settings of your personal and your company’s social media accounts to control what information you are willing to provide to the platform and to third-party providers. Maintain a tight protocol on which apps are associated with these accounts and on how the companies and individuals who have access to them manage their personal data protection. For example, a dedicated device or network can have access to your company’s social media accounts while having no connection to more sensitive assets, such as the company’s servers or a community manager’s personal device.
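As a minimal illustration of such a protocol, a recurring access review can be automated with a short script that flags connected apps whose permissions have not been checked recently. The sketch below uses Python; the app names, accounts and review interval are hypothetical placeholders, not a prescribed setup.

from datetime import date, timedelta

# Hypothetical inventory of third-party apps connected to company accounts;
# in practice this would be exported from each platform's security settings.
CONNECTED_APPS = [
    {"app": "SchedulerPro", "account": "corp-facebook", "last_reviewed": date(2021, 3, 1)},
    {"app": "AnalyticsX", "account": "corp-twitter", "last_reviewed": date(2021, 9, 15)},
]

REVIEW_INTERVAL = timedelta(days=90)  # assumed quarterly review policy

def overdue_reviews(apps, today=None):
    """Return the apps whose access has not been reviewed within the interval."""
    today = today or date.today()
    return [a for a in apps if today - a["last_reviewed"] > REVIEW_INTERVAL]

for entry in overdue_reviews(CONNECTED_APPS):
    print(f"Review access for {entry['app']} on {entry['account']}")

Run against a real inventory, this gives a simple recurring checklist of access grants to revoke or reconfirm.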

There are programs that allow you to almost permanently wipe your digital footprint off the net, but digital information, like spoken information, can never be fully taken back once it is out there. All we can do, with the information we have, is prevent and mitigate these risks immediately.

And, of course, reduce dependency.

If you have been around since Hi5 and MySpace, you know that platforms can be hyped quickly and disappear just as quickly. With all the legal problems Facebook and other Big Tech players are facing, it is safe to predict that the social media landscape will look different as technology and culture rapidly advance. Newer generations increasingly value ESG factors and privacy in the companies they invest in.

Use the opportunity of this New Normal to reorganise your business model and your value and information chains in order to mitigate the risks of overdependence on specific platforms or systems.

The hours-long outage of Facebook’s products in recent weeks demonstrated how fragile these platforms can be, and how dangerous it can be to depend on them exclusively for conducting business, developing a communication strategy and maintaining internal and external communications.
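One practical countermeasure is an automated reachability check that tells staff which channel to switch to when the primary platform goes down. The sketch below is a minimal example using only the Python standard library; the channel names and URLs are illustrative assumptions, not a definitive monitoring setup.

import urllib.error
import urllib.request

# Ordered preference of channels; the URLs here are illustrative placeholders.
CHANNELS = [
    ("Primary platform", "https://www.facebook.com"),
    ("Backup email gateway", "https://mail.example.com"),
    ("Staff status page", "https://status.example.com"),
]

def is_reachable(url: str, timeout: float = 5.0) -> bool:
    """True if the host answers at all; any HTTP status counts as 'up'."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded, just not with a 2xx status
    except urllib.error.URLError:
        return False  # DNS failure, timeout, connection refused, etc.

for name, url in CHANNELS:
    if is_reachable(url):
        print(f"Use: {name} ({url})")
        break
else:
    print("No online channel reachable: fall back to phone/radio procedures")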

Establish alternate communication systems so that you can reach your corporate community during outages and emergencies. Create protocols for how data is used, stored, transmitted and deleted, so that no information is lost or stolen due to external factors in the information value chain. Do not undervalue reliable older technology, such as paper and radio, for transmitting information safely.
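As one concrete element of such a data-handling protocol, information can be encrypted before it is stored or transmitted, so that a leaked or intercepted copy is useless without the key. The sketch below assumes the third-party Python cryptography package; the choice of tool is an assumption for illustration, not a prescription.

# pip install cryptography  (third-party package, assumed available)
from cryptography.fernet import Fernet

# In practice the key would live in a secrets manager, never beside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"community manager credential rotation schedule"
token = fernet.encrypt(record)    # safe to store or transmit
restored = fernet.decrypt(token)  # only possible with the key

assert restored == record
print(f"Encrypted record: {token[:24]!r}...")

In a real deployment the key would be kept separate from the encrypted data and rotated and deleted according to the same written protocol.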

Remember, in the end it all depends on electricity, which, if you have lived in Latin America long enough, you know can fail at the most important moments. Having low-tech procedures can make all the difference when facing high-tech risks, such as recovering from a massive hack or outage.

An organisation is not its technology or its systems; it is its people. The only way to really protect your organisation is by preparing and training the corporate community to be savvy denizens of the Digital and Information Age.


By Peter Bäckman, Managing Partner of TEDCAP

For more information, please visit www.tedcap.com
