
The War Against AI: How to Reconcile Lawsuits and Public Backlash

A look at AI ethics in media and business: lawsuits, public scrutiny, transparency, and strategies for rebuilding consumer trust.

In the rapidly evolving landscape of artificial intelligence (AI), media companies and other businesses continue to find themselves entangled in a web of lawsuits and public criticism, shining a spotlight on the issue of ethical transparency. Journalism has long been plagued by questions of deception; consumers often wonder what is sensationalism and what is not. However, the latest casualty, Sports Illustrated, whose reputation suffered greatly after the publication was accused of attributing AI-generated articles to non-existent authors, has unlocked a new fear: can consumers trust even the most renowned organizations to leverage AI responsibly?

To further illustrate AI’s negative implications: early last year, Gannett faced similar scrutiny when its AI experiment took an unexpected turn. The newspaper chain had used AI to write high school sports dispatches, but the technology proved more harmful than helpful after it made several major mistakes in published articles. Gannett had also laid off part of its workforce, likely in the hope that AI could replace human workers.

Meaningful Change Starts at the Top

It’s clear that without meaningful change, the outlook for AI will remain negative. That change begins at the corporate level, where organizations play a key role in shaping ethical practices around AI usage, and trickles down to the employees who leverage it. As with most facets of business, change starts at the top of the organization.

In the case of AI, companies must not only prioritize the responsible integration of the technology but also foster a culture that values ethical considerations, accountability, and transparency, in AI and every other endeavor. By committing to these principles, leadership and C-level executives set the tone for a transformative shift, one that acknowledges both the positive and negative impacts of AI technologies.

To avoid potential mishaps, workforce training should be put in place and revisited at a regular cadence, empowering employees with the knowledge and skills necessary to handle the ethical complexities of AI.

However, change doesn’t stop at leadership; it also extends to the employees who use AI tools. Employees should be equipped with the knowledge and skills necessary to navigate ethical considerations. This includes understanding the limitations and biases of AI systems, as well as learning from the mistakes of others who have faced the negative implications of using AI, such as the organizations mentioned above.

By cultivating a well-informed and ethically conscious workforce, organizations can remain compliant and improve the workplace environment while mitigating serious risks. The collaborative effort of corporations and their employees is an essential stepping stone to building a more positive outlook for AI and the technological advancements still to come.

How to Improve Transparency Around AI Usage

Tom Rosenstiel, a professor of journalism ethics at the University of Maryland, has emphasized the importance of truth and transparency in media specifically. He argues that experimentation with AI is acceptable, but attempting to conceal it will inevitably raise ethical red flags for consumers. “If you want to be in the truth-telling business, which journalists claim they do, you shouldn’t tell lies,” Rosenstiel asserted. For consumers, those lies include failing to disclose how articles are produced, including whether AI was used.

The media landscape’s ongoing transparency struggle with AI is further highlighted by the lawsuit The New York Times filed against Microsoft and OpenAI in December 2023. The Times alleges intellectual property violations related to its journalistic content appearing in ChatGPT training data. This legal battle sits alongside a slew of other AI-related copyright suits, with experts noting that plaintiffs are taking an increasingly focused approach to their causes of action.

With AI-related lawsuits on the rise and public scrutiny of AI usage growing, the question becomes: how do businesses bridge the gap between consumer distrust and using AI ethically to streamline workflows?

Boosting Understanding of the Collaborative Effort Between AI and Humans

Enhancing transparency around AI usage in media requires a comprehensive, multifaceted approach. The first, and arguably most important, step is for media companies (and any other business) to not only acknowledge the integration of artificial intelligence but actively share the role it plays in content creation. This includes highlighting whether AI was used for research, editing, writing, or some combination of the three. To that end, media organizations must implement clear disclosures in an easy-to-locate place on their web pages, openly informing the audience when and where AI tools were used in the production of articles.
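To make this concrete, one way to implement such a disclosure is as a small piece of structured data attached to each article, which a publisher can render wherever it chooses to surface the label. The Python sketch below is a minimal illustration; the field names, types, and label wording are assumptions for the sake of the example, not an established industry schema.

```python
# A minimal sketch of a machine-readable AI-usage disclosure attached to an
# article. All names here (AIRole, AIDisclosure, the label wording) are
# illustrative assumptions, not an established industry schema.
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class AIRole(Enum):
    RESEARCH = "research"
    EDITING = "editing"
    WRITING = "writing"

@dataclass
class AIDisclosure:
    roles: List[AIRole]             # which production steps used AI
    tools: List[str]                # e.g., ["an LLM assistant"]; kept generic
    human_reviewed: bool            # did a person review the output?
    reviewer: Optional[str] = None  # named human editor, if any

    def label(self) -> str:
        """Render a reader-facing disclosure line for the article page."""
        used = ", ".join(role.value for role in self.roles)
        if self.human_reviewed and self.reviewer:
            review = f"reviewed by {self.reviewer}"
        elif self.human_reviewed:
            review = "reviewed by a human editor"
        else:
            review = "published without human review"
        return f"AI was used for {used} ({', '.join(self.tools)}); {review}."

# Example: AI handled research and editing; a (hypothetical) human editor
# reviewed the result before publication.
disclosure = AIDisclosure(
    roles=[AIRole.RESEARCH, AIRole.EDITING],
    tools=["an LLM assistant"],
    human_reviewed=True,
    reviewer="the section editor",
)
print(disclosure.label())
# -> AI was used for research, editing (an LLM assistant); reviewed by the section editor.
```

Keeping the disclosure as data rather than free text means the same record can drive a byline note, a dedicated AI-usage page, or an article’s metadata, so the label stays consistent everywhere it appears.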

Educating the general public about AI and its role in content creation is equally important. Businesses can take a proactive approach to helping consumers understand how AI technologies work by offering insight into their inner workings (such as their algorithms), the ethical guidelines that govern their use, and how much human oversight is involved; for example, whether the work was edited by an actual person, or whether AI was used for research while a human wrote the piece.

Public awareness campaigns, informative articles, and interactive platforms can all help bridge the knowledge gap, empowering consumers to make informed decisions about the content they choose to consume. By improving transparency and calling attention to exactly how AI is used, businesses stand to build greater trust with their intended audience and mitigate concerns. Consumers are proving that authenticity aligns with their core values, and businesses must meet those expectations to stay ahead.

Lastly, establishing industry-wide standards for AI usage, in journalism and every other industry, can help drive transparency forward. This begins with collaboration among media organizations, technology developers, and ethics experts to generate clear guidelines that outline best practices for AI usage. With such standards in place, businesses share a common understanding of how and where to display disclosures and how to address potential biases in AI algorithms. Clear standards also ensure every player upholds its commitment to transparency, leading to improved trust for both creators and consumers as AI continues to play a larger role in journalism.
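If such a standard existed, publishers could enforce parts of it automatically in their publishing pipeline. Building on the hypothetical AIDisclosure sketch above, the check below encodes two illustrative rules: AI-assisted work must carry a disclosure, and AI-written work must be reviewed by a named human. Neither rule is drawn from any existing standard; both are assumptions for illustration.

```python
# Automated check against a hypothetical standard, reusing the AIDisclosure
# and AIRole types sketched earlier. The two rules encoded here are
# illustrative assumptions, not drawn from any existing industry standard.
from typing import List, Optional

def check_compliance(disclosure: Optional[AIDisclosure], used_ai: bool) -> List[str]:
    """Return a list of violations; an empty list means the article passes."""
    violations: List[str] = []
    if used_ai and disclosure is None:
        violations.append("AI was used but no disclosure is attached")
    if disclosure is not None:
        if AIRole.WRITING in disclosure.roles and not disclosure.human_reviewed:
            violations.append("AI-written content lacks human review")
        if disclosure.human_reviewed and not disclosure.reviewer:
            violations.append("human review is claimed but no reviewer is named")
    return violations

# Example: an AI-assisted article published with no disclosure at all.
print(check_compliance(None, used_ai=True))
# -> ['AI was used but no disclosure is attached']
```

A check like this is the kind of disclosure protocol an industry standard could specify: simple, mechanical rules that every participant can verify the same way.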

Establishing a New Era of AI Trust

In the face of escalating AI-related lawsuits and growing public concern, the only clear route for businesses is to work diligently to bridge the gap between consumer distrust and ethical AI usage. The evolving AI landscape demands a closer examination of how others have failed and what businesses can learn from those setbacks to pave a smoother road ahead. The Sports Illustrated, Microsoft, and Gannett examples highlight the need for prominent companies to set a more positive example, striking a balance between innovation and public trust.

To navigate these challenges successfully, organizations will need to be transparent about how they’re using AI. This starts with acknowledging exactly how they’re leveraging AI and sharing whether content creation is a collaborative effort between AI and humans. Implementing clear disclosures, whether in the form of a dedicated AI-usage page or standardized labels on AI-contributed content, keeps consumers in the know and builds trust through openness. The ongoing legal battles also underscore the need for industry-wide standards that outline best practices for AI integration, ensuring greater uniformity and understanding.

In an era where consumer trust has the power to make or break a business, not all publicity is good publicity. This is evident in the negative attention large corporations continue to receive months after these incidents took place. But it’s not all doom and gloom for AI: one recent study found that 31.8% of respondents believe generative AI and/or machine learning will help their business significantly this year. The ethical use of AI remains a challenge across the board; however, lawsuits and public backlash, as detrimental as they may be, are paving the way for a more harmonious future.

