Are current AI regulations enough to protect Social Media Influencers?



Concerns about the harms of Artificial Intelligence (AI) have long existed. For instance, in 1976, Joseph Weizenbaum wrote a book chapter about AI and the privacy threats posed by speech recognition. Also, in 2002, Stuart Russell and Peter Norvig, in the second edition of their book Artificial Intelligence: A Modern Approach, identified six harms from AI, including privacy concerns and existential threats. Lately, much attention has been drawn to the biases in the collation and scaling of the data used to train AI models, including racial misclassification and the neglect of social and structural power asymmetries.

Moreover, advanced machine learning methods, such as deep learning with neural network architectures and Generative Adversarial Networks (GANs), have made it possible for algorithms to learn the patterns in media content and replicate them. This is what we commonly regard as generative AI.

However, this new class of AI (generative AI) exposes individuals with recognisable faces, including social media influencers, to an increased risk of harm from manipulated media. This is aside from the growing threat of AI-generated social media influencers in the $21.1 billion influencer marketing industry.

Building on these identified harms, this article examines the current lags in social media platforms’ and the Nigerian government’s AI regulatory approaches through the case study of a Nigerian health influencer.


The manipulated videos of a Nigerian health influencer 

On the 14th and 15th of September 2023, a Facebook page posted deepfake videos of a Nigerian health influencer, Aproko Doctor, one video on each day. The second video went viral and has garnered more than 666,000 views and 2,400 likes. The 43-second manipulated video opened with what appeared to be a television interview by a presenter from one of the major television stations in Nigeria. The presenter is seen asking a guest about a cream product, and in response comes a doctored clip of Aproko Doctor marketing a cream that supposedly gets rid of joint pain, arthritis, and other joint diseases. His image and voice were synthesised from a video he made after undergoing an eye procedure.

Under the video, the page provides a website link, encouraging people to click through for the price and other relevant information about the product. Before uploading these deepfakes, the page had already asked, in Spanish, for suggestions of popular health influencers. This points to a deliberate intent to use a popular face for manipulation. As previous research suggests, the mere exposure effect can lead people to trust familiar signs or faces. In this case, the familiar cues include the presenter’s and influencer’s faces and voices, the interview-like setting, and the television signage.

Of the 172 comments ranked most relevant by Facebook’s algorithms as of the 3rd of October 2023, 166 asked where to buy the cream, its price, or its delivery. The post’s reach also looks to have been artificially inflated, as the page had only 67 likes and 164 followers as of the 16th of October 2023. A post from a small account can, of course, go viral, but a glance at some of the commenting profiles suggests that several accounts may be bots. Whilst it is difficult to verify the details of, and traffic to, the website where the cream is meant to be ordered, it is reasonable to suspect that a manipulated video created to market a product is also aimed at scamming unsuspecting social media users.

Figure 1: Illustration by author

Popular figures, celebrities, and influencers have always been victims of digital fakery, from email scams to fake social media accounts. For instance, in 2010, the name of the Ghanaian footballer Kevin-Prince Boateng was used in a scam email sent to an insurance company in the United States (US), and forged documents were used to open a bank account in his name. Also, in 2020, the X (Twitter) profile of a popular skit maker and influencer, Mr Macaroni, was cloned and used to spread fake news on how the #EndSARS protests could gain recognition from the United Nations.

Relatedly, in March 2023, the Nigerian Communications Commission’s Computer Security Incident Response Team (NCC-CSIRT) warned about the increasing number of AI-generated videos on YouTube, which may expose Nigerians to online harms. The warning noted that familiar faces may be used to deceive unsuspecting Nigerians into clicking bait links. Thus, the possibility of AI imitating existing pictures and videos has made influencing more precarious, as the Aproko Doctor case shows. The doctor has since disavowed any connection with the manipulated video. More importantly, however, he raised a key question: “when someone impersonates you using AI…what’s the way forward?” This sparks concerns about the lags in social media platforms’ and the government’s AI policies.

A month later, the deepfake videos are still online and unlabelled: Platforms’ AI regulations and their loopholes 

It is no surprise that social media platforms are one of the focal points in the discourse on misinformation, disinformation, and mal-information, because they play a leading role in hosting public discourse. Consequently, most platforms’ policies have come in the form of labelling AI-generated media or content. For instance, on the 19th of September 2023, TikTok released a statement on the launch of a new tool to help creators label their AI-generated content, whilst testing new ways to label such content automatically.

This comes against the backdrop of an earlier policy, introduced in March 2023, encouraging users to label AI-generated content. Platforms also consider labelling a better approach than outright removal because such content may still be available elsewhere even after it has been de-platformed.

Figure 2: Illustration by author

For Meta, Facebook’s parent company, there is a clear policy of not tolerating users’ misrepresentation through fake accounts or artificial engagement methods. This builds on its January 2020 manipulated media policy, which seeks to enforce actions against deepfake content on the platform. In addition, on the 27th of September 2023, Facebook announced new AI products that are said to emphasise transparency. However, as of the 16th of October, a month after both videos were uploaded, the two deepfake videos of Aproko Doctor were still on Facebook despite the numerous reports that users say they have made. This means that it is not enough to roll out AI policies; they must be practical and actionable within a short period of time. In fact, more recently, an inquiry into Facebook’s manipulated media policies was launched for the first time after the platform’s moderators declined to take down an edited video posted during the 2022 US midterm elections.

Furthermore, X (Twitter) has a Community Notes initiative, which relies on volunteers to independently fact-check viral misleading information, including AI-manipulated videos or pictures on the platform. This is one of the platform’s latest attempts to provide context to fake news and mal-information in general. Although recent efforts have been made to speed up how quickly community notes are verified and posted, they are still usually late. In fact, community notes have been described as “band aids on a shotgun wound”, as many users may have viewed untrue or misleading content before it is debunked, if it is debunked at all. More importantly, one could also question the kinds of posts that get community-noted. For instance, one could argue that most fact-checked posts originate from the US and other high-income countries, with little attention paid to the fake information spreading in middle- and low-income countries. This points to a significant lapse in Twitter’s labelling approach. It is also important to note that labelling has fed into a separate debate on AI and free speech, where questions are raised about how much freedom of speech labelling affords AI-generated content.

Protecting influencers and platform users within Nigeria’s AI strategy 

Whilst platforms’ labelling policies still struggle to provide clarity and context on AI-generated content, governments have enormous roles to play, particularly in ensuring citizens’ freedom of speech and a fair online experience. Nigeria has already taken bold steps towards AI: it is the first African country to institutionalise AI through its National Centre for AI and Robotics (NCAIR). Although the Centre is targeted at research and development support for emerging technologies, it does not provide any publicly accessible, clear-cut ethical approach to the harms and biases that AI algorithms may replicate through its programmes.

The country has also recently completed its first draft of a national AI strategy. However, although ‘media and telecommunications’ is one of the seven major strands of this draft, there is a need to provide specific guidelines for the safe use of AI on social media platforms in the country. More importantly, the process of reporting privacy abuse and impersonation through AI must be clearly stated. Otherwise, influencers, such as Aproko Doctor, may continue to be at the mercy of platform owners when their rights are abused through AI.

It is also important to note that Section 230 of the United States’ Communications Decency Act will continue to shield platforms from liability for harms caused by content posted online, because the provision prevents platforms from being held liable for content posted by their users. For example, Facebook cannot be sued over the deepfake videos of Aproko Doctor posted on its platform. As such, broader government policies are needed to mitigate AI harms, and these policies must also cover the companies that design AI systems. For instance, the European Union’s draft AI Act, written within the larger scope of Responsible AI, encourages AI design companies to ensure that AI-generated content is not used for illegal purposes.

Essentially, as governments, academia, big corporations, policymakers, and civil society organisations around the world are still grappling with the best approach to dealing with the harms associated with AI, individuals with recognisable faces or profiles, including influencers, remain at serious risk due to the lapses in current regulations. Collaborative efforts are, therefore, imperative to close the loopholes in platforms’ AI labelling and government policies.
