
How Companies Can Mitigate the Risk of Disruptive Generative AI

Technology and security companies must invest in research to mitigate the risks of generative AI and protect our ever-changing digital world.

Lately, talk of generative AI has been everywhere. Whether it’s marvelling over AI filters that can make you look like a Renaissance painting or scrutinising videos of celebrity deep fakes, generative AI is hard to ignore. And in today’s highly accessible media landscape, the power of generative AI is both real and concerning – especially given its potential to drastically change the information environment.

Generative AI technologies like GPT-4 and DALL-E make it increasingly difficult to tell what is genuine and what is manufactured. Perhaps more concerning is that with Generative Adversarial Networks (GANs) and open-source platforms, practically anyone can produce synthetic media with relative ease – and that means practically anyone can influence our perception of reality. Because of this, we’re entering a new era of misinformation and disinformation, much of it computational propaganda, creating an entirely new set of dangers to consider as we navigate this ever-changing digital world.

With this in mind, brands, companies, and governments need to take steps to prevent malicious actors from exploiting this new medium, which can easily warp our perception of reality. To this end, it’s crucial that technology and security companies act with vigilance and invest in research into these new technologies, helping ensure businesses can mitigate the risks of generative AI.

The Pros and Cons of Generative AI

Though it’s easy to focus on the dangers of generative AI – and for good reason – there are positive business uses. These technologies assist with accurate translation, financial forecasting, and data analysis, and they are revolutionising automated processes that can go a long way towards optimising both productivity and cost efficiency.
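
As a small illustration of the constructive side, a few lines of Python can stand up a translation step. This is a minimal sketch assuming the open-source Hugging Face transformers library and the public Helsinki-NLP/opus-mt-en-fr model (neither is named in this article):

    # Minimal sketch of a benign generative-AI use case: machine translation.
    # Assumes the Hugging Face "transformers" library (pip install transformers)
    # and the public Helsinki-NLP/opus-mt-en-fr checkpoint.
    from transformers import pipeline

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

    result = translator("Generative AI can streamline routine business workflows.")
    print(result[0]["translation_text"])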

But it’s clear that the same technology in the hands of bad actors poses enormous threats. Generative AI works by training machine learning models on data sets such as images and videos, using deep neural networks that learn to recognise patterns, and then generating new content based on what the model has “learned” from the existing data.
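
To make that adversarial training loop concrete, here is a heavily simplified sketch of a GAN in Python with PyTorch (an assumed stack; the article names no implementation). A generator learns to produce samples that a discriminator can no longer distinguish from real data; image and video GANs follow the same pattern at vastly larger scale.

    # Illustrative GAN training loop (PyTorch assumed; not from the article).
    # "Real" data here is just samples from a normal distribution centred at 4;
    # the generator learns to mimic that distribution from random noise.
    import torch
    import torch.nn as nn

    gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) + 4.0    # "real" samples the model must learn
        fake = gen(torch.randn(64, 8))     # generated samples from noise

        # Discriminator step: label real samples as 1, generated ones as 0.
        d_loss = loss_fn(disc(real), torch.ones(64, 1)) + \
                 loss_fn(disc(fake.detach()), torch.zeros(64, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step: try to make the discriminator output 1 for fakes.
        g_loss = loss_fn(disc(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # After training, generated samples should cluster around 4.0.
    print(gen(torch.randn(1000, 8)).mean().item())

The same two-network contest, scaled up to images and video frames, is what makes modern synthetic media both convincing and cheap to produce.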

Because this new AI-generated content appears realistic but is completely fabricated, the technology can drive synthetic media that seems real – so much so that it routinely tricks humans into believing it is real – on an unprecedented scale. With generative AI, there is no more need for out-of-context imagery: the technology can generate instant, affordable, accessible, and realistic visual content, offering endless possibilities to its users while posing a significant threat to the information landscape. This includes not just images but also voice simulation and videos, known as deep fakes, that can fuel disinformation and misinformation.

While traditional deep-fake imagery was limited in scope, technically flawed, hard to access, and not widespread, generative AI allows for quick, easy, and precise image creation that can be used for malicious purposes. Through this lens, generative AI is the next evolution of the traditional deep-fake threat.

A recent New York Times article dug into a government-backed propaganda campaign using deep-fake video technology to simulate “news reports,” while ITV, a television network in the United Kingdom, sparked controversy when it debuted a comedy sketch show entirely driven by deep fakes of celebrities. Though these are very different treatments of generative AI – one being intentionally misleading, the other more of an ethical quandary around the use of generative AI for entertainment – these examples highlight how these technologies can be used to fool people.

The Threat of Generative AI to Businesses

Generative AI enables any user to generate more content, faster and cheaper than ever before. In the wrong hands, it will further accelerate the breakdown of faith in media sources, institutions, communications platforms – and companies. The same tactics used to create propaganda or fool an audience for laughs could be used by online assailants to target businesses.

Weaponised disinformation at scale, powered by generative AI technologies, is the true threat – if an organised campaign is unleashed online, it has the potential to rapidly dismantle brand trust and customer loyalty.

Solutions for Mitigating the Risk of AI Disruption

Though this can all seem extremely alarming – and perhaps even dystopian – there are steps businesses can take now to build resilience and protect themselves from the disruptive force of a misinformation crisis powered by generative AI. Here’s how leadership teams can proactively prepare for the impact of maliciously used generative AI:

  • Adopt Proactive Tech Solutions: If you’re using legacy media monitoring tools to keep an eye on online discourse, now is the time to look into more advanced technologies that are purpose-built to face new online threats and tradecraft. New threats require a new generation of AI-driven insights that can help companies understand the propagation of narratives and associated risks, and identify possible issues before they turn into full-blown crises (a simplified sketch of this kind of narrative clustering follows this list). Further, these solutions can help businesses understand how to mitigate damaging risks that can impact the brand, stock price, and even the physical safety of employees and executives.
  • Update Your Crisis Communications Strategy: With weaponised generative AI, any industry or company can become a target in the blink of an eye. If your crisis communications playbook is based on the threats of years past and you find yourself the victim of weaponised disinformation or AI disruption, you’ll quickly find that what once worked no longer will. Time is of the essence in these situations – especially when customers, employees, stakeholders, and the wider industry need accurate information and reassurance as quickly as possible. But in addition to speed, high-fidelity risk signals about the types of threats involved are paramount, enabling teams to make strategic decisions not only faster but also with more accuracy.
  • Invest in Risk Intelligence: To protect against narrative manipulation and misinformation at scale, businesses need defensive tech solutions that offer the highest-fidelity signal measurement of the harmful risks posed by online networks and narratives. Solutions such as Blackbird.AI’s new RAV3N Copilot automate workflows using generative AI during mission-critical crisis scenarios, provide narrative intelligence that can identify harmful conversations and actors, and enable lightning-fast risk reporting to help teams work faster and focus on mitigation and response.
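
As a rough illustration of what narrative-intelligence tooling does under the hood, the sketch below groups posts into candidate narratives by embedding them and clustering similar texts. It is a minimal sketch in Python assuming the open-source sentence-transformers and scikit-learn libraries, with an invented “BrandX” example; it is not Blackbird.AI’s implementation or any vendor’s actual pipeline.

    # Illustrative narrative clustering; NOT any vendor's real product.
    # Assumes sentence-transformers and scikit-learn are installed.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import AgglomerativeClustering

    posts = [
        "BrandX batteries are catching fire, share before they delete this!",
        "Warning: BrandX devices explode. Repost to warn others.",
        "Loving my new BrandX phone, the camera is great.",
        "BrandX exploding-phone cover-up, the media is silent!",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(posts, normalize_embeddings=True)

    # Posts within a cosine distance of 0.4 are grouped into one narrative.
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=0.4,
        metric="cosine", linkage="average",
    ).fit_predict(embeddings)

    for label, post in zip(labels, posts):
        print(label, post)

A sudden spike in the size or velocity of one cluster is the kind of early signal that would feed the crisis playbook described above.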


With an exponentially growing threat to the perception of reality, it is imperative that businesses take steps now to protect themselves from the risks malicious actors present. With vigilance, proactive investments, and accurate insights, it is possible to mitigate the risks of these advanced technologies – and combat them head-on.

