How Tech Companies Foster Cyber Abuse Through Negligence

Introduction

In the digital age, technology companies exert enormous influence over online communication. They shape how individuals interact with one another and how much of their data, whether publicly available or private, is collected. While these companies build platforms to connect the world, they are also responsible for keeping users safe. Yet through negligence, by failing to enforce their security protocols, paying insufficient attention to content moderation, and placing profit above user welfare, they have fostered cyber abuse.

This article examines the ways in which tech companies enable cyber abuse through their negligence. It discusses inadequate data protection, algorithmic bias (the reproduction of existing cultural assumptions in automated decision-making and in the policing of online abuse), lax content moderation, and the avoidance, and at times active undermining, of regulatory compliance.

Inadequate Data Protection

Inadequate data protection infrastructure is one of the main ways in which tech companies enable abuse. Technology firms routinely gather and store their users’ personal information, online transactions, and financial details. Yet many of these enterprises fail to employ strong security measures to protect that data. Their negligence leaves users exposed to cyber attacks, including identity theft, online stalking, and financial scams. ¹

For example, when hackers stole millions of users’ private data through breaches at major companies like Facebook and Yahoo, those users all became potential targets for cyber criminals. ² Similarly, the 2019 Capital One data breach exposed the personal details of over a hundred million customers and was traced to a misconfigured web application firewall, a lapse the company could have prevented. ³ Poor handling of data encryption and weak password practices only add to the risk users face from malicious actors.
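
To make the point concrete, here is a minimal sketch of the kind of elementary safeguard whose absence turns a breach into mass credential theft: salted, deliberately slow password hashing. It uses only Python’s standard library; the function names and iteration count are illustrative assumptions, not any company’s actual code.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash so a stolen database does not expose raw passwords."""
    salt = os.urandom(16)  # a unique random salt per user defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time to avoid timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

# Usage: store only (salt, digest); the plaintext password is never persisted.
salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```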

Algorithmic Biases and Amplification of Harmful Content

Algorithms that technology companies build to optimize user engagement typically promote content that generates high levels of interaction through likes, comments, and shares. However, these same algorithms also amplify harmful or offensive content: slander, fake news, and cyberbullying all thrive in this environment. ⁴ Harmful content surfaces primarily because engagement-driven algorithms emphasize attention-grabbing, usually sensational and divisive, material. Since 2015, social media companies including Twitter, Facebook, and most notably YouTube have offered automatic suggestions of ‘related’ videos, hoping to boost viewing figures across the board; those recommendations, however, often include harmful content that perpetuates cyber abuse, especially against vulnerable groups. ⁵

Facebook’s ubiquitous ‘real name’ policy has drawn similar criticism: it exposes vulnerable users to harm while doing little to stop impostors and fabricated accounts. In 2021, Frances Haugen, a former Facebook employee, testified that the company’s algorithm pushed harmful content and fueled a surge in online abuse. ⁶ Furthermore, algorithmic biases tend to produce discriminatory outcomes that disproportionately expose marginalized communities to online abuse. By failing to design ethical algorithms, tech companies allow harmful content to thrive and the incidence of cyber abuse to rise.
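
The mechanism is simple enough to sketch. The toy example below ranks a hypothetical feed purely by predicted engagement; the posts, weights, and scoring function are invented for illustration and are not any platform’s real ranking code, but they show how sensational and abusive items naturally rise to the top.

```python
# Toy illustration of engagement-driven ranking; all posts and
# weights are hypothetical, not any platform's real data or code.

posts = [
    {"title": "Local library opens new wing",        "likes": 40,  "shares": 3,   "comments": 5},
    {"title": "Outrageous rumour about a celebrity", "likes": 900, "shares": 450, "comments": 700},
    {"title": "Targeted harassment thread",          "likes": 300, "shares": 220, "comments": 950},
]

def engagement_score(post: dict) -> int:
    # Shares and comments are weighted above likes because they
    # keep users interacting with the platform for longer.
    return post["likes"] + 2 * post["shares"] + 3 * post["comments"]

# Sorting by raw engagement surfaces the most provocative items first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>5}  {post['title']}")
```

Nothing in the score penalizes harm: the rumour and the harassment thread outrank the benign story simply because they provoke more reactions, which is precisely the negligence this article describes.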

The Failure to Tackle Online Gender-Based Violence

Online abuse such as doxxing (posting personal details online without consent) and revenge porn disproportionately impacts women and other marginalized groups. In 2021 alone there were an estimated three billion cases of cyber harassment in China, yet few ever came before the courts. Although large-scale feminist activism around high-profile incidents has made these abuses increasingly visible, the massive damage inflicted upon women still goes largely unnoticed by the media. The companies are well aware that something must be done about this state of affairs, as both outlets like the Beijing Review and rights groups such as Amnesty International have pointed out. ⁹

These dangers have been well known for years; during International Women’s Day in 2017, more than twenty Chinese media publications called them out. Yet tech companies have been slow to adopt productive solutions. Many platforms offer no reporting mechanisms for gendered cyber-violence, making it difficult for women who are victims of cyber abuse to find redress or justice.

In addition, policies governing what content must be deleted are not applied consistently, leaving gaps that perpetrators exploit.

For instance, in 2021 it was discovered that Instagram accounts dedicated to spreading non-consensual sexually explicit images had been allowed to remain active for months without any action being taken. ¹⁰ The negligence of tech companies in remedying online gender-based violence not only condones mental anguish but also discourages victims from entering digital spaces, limiting their freedom of expression.¹¹

Prioritization of Profit Over User Safety 

One major reason tech companies do little about online violence is their emphasis on profit at the expense of user safety. Most platforms depend on targeted advertising to generate revenue, which requires extensive data collection and keeping users engaged for as long as possible. As a result, these companies have little appetite for stricter rules that would reduce user activity.¹²

In some cases, executives reject proposed policy changes because they could affect advertising income. Facebook, for instance, has been accused of letting misleading political adverts and divisive content proliferate in order to sustain high user engagement.¹³ Similarly, YouTube has been criticized for placing advertisements on extremist content, thereby profiting indirectly from cyber abuse.¹⁴ This profit-first approach creates an environment in which safety is sacrificed for money.

Lack of Regulatory Compliance and Accountability 

Despite increasing public concern over cyber abuse, tech companies can often escape liability by taking advantage of regulatory gaps.

Many are registered in jurisdictions that have only recently begun aligning themselves with international cyber laws, or they resist regulation by using their clout to lobby for more relaxed rules at home. In the absence of strong legal frameworks, cyber abuse continues to thrive: perpetrators take refuge in these gaps, often with little fear of being discovered or caught.¹⁵

Even when laws are on the statute books, enforcement remains a problem. Jurisdictional complexities and the global nature of digital platforms make it very difficult for prosecutors to bring cases against those engaged in cyber extortion and abuse. In 2021, EU regulators fined Amazon $888 million for violating data protection laws, yet online giants can still contest such penalties in court or maneuver around the laws that would otherwise force them into compliance.¹⁶ Without rigorous penalties or meaningful financial sanctions, tech companies have little incentive to fight cyber abuse.

 

Conclusion

The failure of tech companies to address cyber abuse casts a long shadow over the digital future. It leaves users fearing for their safety every time they log on, and it normalizes harmful conduct online. Inadequate data protection, algorithmic bias, weak content moderation, neglect of online gender-based violence, and the trading of safety for profit all contribute to a widening landscape of cyber abuse.

Without more comprehensive legislation and greater corporate accountability, the problem will keep growing and users will increasingly find themselves under threat. To combat cyber abuse effectively, tech companies must take greater responsibility for what happens in their spaces and work towards a culture of digital safety and inclusivity.

 

References

1.   Statista. "Number of data breaches and records exposed worldwide 2023." Statista, January 5, 2024. https://www.statista.com/statistics/273550/data-breaches-recorded-worldwide.

2.   BBC News. "Facebook data breach exposes millions." BBC News, April 4, 2021. https://www.bbc.com/news/technology-56652427.

3.   The Guardian. "Capital One hack affects 100 million users." The Guardian, July 29, 2019. https://www.theguardian.com/technology/2019/jul/29/capital-one-data-breach.

4.   Harvard Business Review. "The problem with engagement-driven algorithms." Harvard Business Review, March 15, 2022. https://hbr.org/2022/03/engagement-driven-algorithms.

5.   Reuters. "YouTube’s algorithm and extremist content." Reuters, July 12, 2021. https://www.reuters.com/article/youtube-algorithm-extremism-idUSKBN2B30GJ.

6.   The Washington Post. "Whistleblower Frances Haugen testifies on Facebook’s harm." The Washington Post, October 5, 2021. https://www.washingtonpost.com/technology/2021/10/05/facebook-whistleblower-frances-haugen-testimony/.

7.   The Verge. "AI moderation is failing to catch harmful content." The Verge, June 23, 2023. https://www.theverge.com/2023/6/23/ai-content-moderation-failure.

8.   The Guardian. "TikTok failing to remove hate speech despite user complaints." The Guardian, March 15, 2022. https://www.theguardian.com/technology/2022/mar/15/tiktok-failing-to-remove-hate-speech.

9.   Amnesty International. "Online violence against women: A global issue." Amnesty International, November 2018. https://www.amnesty.org/en/latest/news/2018/11/online-violence-against-women/.

10. BBC News. "Instagram fails to take down accounts posting non-consensual images." BBC News, April 7, 2021. https://www.bbc.com/news/technology-56661449.

11. UN Women. "Cyber violence: A new frontier for women’s rights." UN Women, 2022. https://www.unwomen.org/en/news/stories/2022/06/cyber-violence-against-women-a-new-frontier.

12. The New York Times. "Facebook’s profit-driven approach to content moderation." The New York Times, December 9, 2021. https://www.nytimes.com/2021/12/09/technology/facebook-profit-content-moderation.html.

13. CNN Business. "Facebook’s decision to allow political misinformation ads sparks controversy." CNN Business, October 15, 2019. https://edition.cnn.com/2019/10/15/tech/facebook-political-ads-misinformation.

14. The Wall Street Journal. "YouTube’s extremist content problem and ad revenue." The Wall Street Journal, February 21, 2022. https://www.wsj.com/articles/youtube-ad-revenue-extremist-content.

15. Forbes. "Big Tech’s lobbying efforts against stricter regulations." Forbes, May 8, 2023. https://www.forbes.com/sites/bigtech-lobbying/2023/05/08/big-tech-lobbying-regulations/.

16. Financial Times. "Amazon fined $888 million for violating EU data protection laws." Financial Times, July 30, 2021. https://www.ft.com/content/amazon-eu-data-fine.
