Responsible AI begins with robust policies for teams using AI technologies

We’ve talked about how AI can be used to automate and improve many aspects of content creation. From generating titles and content to translating, editing, and automating manual tasks, AI can provide immediate value to digital teams.

However, we also know that AI can generate content that raises serious ethical problems, from inaccurate information to copyright-violating text. To help us think about how to use AI responsibly, I am joined by Alaura Weaver, Senior Manager, Content and Community at Writer. Alaura is a passionate advocate for creating ethical and responsible AI in this era of explosive growth.

Where we are today

AI has already grown into an important part of the digital team toolkit. While AI isn’t replacing writers, it can augment many of the tasks that content teams undertake every day.

“I use Writer to a certain extent in nearly every part of my role as Senior Manager of Content and Community,” Alaura shared. “If your team is responsible for creating content, this technology will help alleviate many of the bottlenecks and tedious work that lead to missed goals and burnout.”

That means content teams can immediately enhance their process with AI. New tools include AI-powered content generation, automation to reduce manual tasks, and even newer capabilities like image and video generation. While not every tool will be useful to your content team, most can help reduce your workload or spark your creativity.

But what if AI doesn’t help? What if it generates inaccurate, untrustworthy, or even copyrighted content? AI could put your team and your reputation at risk. Understanding the limits of AI, and how to use it responsibly, can help prevent an embarrassing incident.

What is responsible AI?

“I’m awestruck by the fact that in my lifetime, I have gone from witnessing the birth of the internet to witnessing the birth and advancement of artificial intelligence,” Alaura said. “But I also fear that this powerful technology is already being wielded carelessly — and without guardrails in place, it could cause public harm.”

Because of how AI and Large Language Models (LLMs) work, most AI tools do not understand the content they generate. They use statistical connections between words to predict what text should come next, but cannot tell whether the content they produce is actually true. The results range from funny to scary: you may laugh as you convince an AI tool that 1+1=3, but the situation becomes far less humorous when AI makes up a fact.

The situation becomes even scarier when AI generates content that reproduces the intellectual property or copyrighted material of another business or person. According to the Harvard Business Review, AI trained on copyrighted material can produce responses that improperly use that material. The resulting legal implications are unclear, but the threat of intellectual property being appropriated by AI is real.

Lastly, AI can bypass privacy expectations without businesses even realizing it is happening. Many AI tools use prompt and response data to train and improve their LLMs. Businesses that do not realize this can accidentally feed private information or customer data into the model, and the AI may then reproduce that data in responses to other users, exposing personal or confidential information.

“My biggest concern is the scope of misuse that people are capable of with this technology,” Alaura warned. “GPT models aren’t designed to research and report facts — they’re designed to sound human. It’s like trying to get medical advice from an actor: ‘I’m not a doctor, but I play one on TV.’”

This is where the idea of responsible AI comes in. By putting safeguards and guardrails in place for AI technologies, you can reduce the chances of misinformation, privacy issues, or copyright violations.

“Responsible AI means understanding the negative impact unchecked use of AI can have on human beings, and committing to avoid such consequences. It means staying up to date on the security and privacy risks associated with using AI for business, and choosing an AI tech partner that makes mitigating these risks its top priority.”

Implementing responsible AI

With the risks and benefits of AI in mind, you can start to see opportunities to protect confidential and private information, as well as reduce the chances of misinformation. Our partners at Writer.ai have invested in tools and approaches to help make AI more reliable, using practices that most digital teams should embrace.

“I’m excited about Writer because not only is it a breakthrough as far as AI technology is concerned (generating content, repurposing/transforming existing data into new content, enforcing brand guidelines/compliance, etc.) but also as a company, Writer takes a thoughtful and ethical approach to how to introduce this technology for business use cases,” Alaura said. Embracing AI with caution allows a digital team to gain the benefits of AI while reducing the chances of error.

Responsible AI starts with good policies for the AI technologies your team uses. Every AI tool should go through a thorough security and privacy review. Your business should also develop policies for how and when AI can be used, setting firm guidelines for who can use AI technologies, which tools have been vetted, and what information can be shared with AI engines.

Likewise, your business should aim to use AI only for topics it knows well, rather than generic ones. That expertise helps you catch misinformation quickly, because you already know the subject matter.

“At Writer, we’re very outspoken about this. Our models are amazing, and from what I understand, curated, fine-tuned datasets like those that trained Palmyra have fewer incidents of hallucinations. But they’re still under the umbrella of natural language processing (NLP) AI and can make stuff up,” Alaura clarified. 

“That’s why we’re bullish on claim detection, fact checking, and keeping a human in the driver seat as essential best practices for working with AI. Our enterprise customers in regulated sectors like finance, healthcare and cybersecurity literally can’t afford to publish content that makes false claims or provides misleading information.”

The onus is also on AI companies to set standards for responsible AI. “For an AI company to be responsible, it’s imperative that it be transparent and outspoken about the limitations and risks associated with the technology, offer guidance on best practices/responsible use, and invest deeply in de-biasing and curating training data for its models,” Alaura said.

Through a mix of the right AI tools and good policies to reduce the chances of information leaks or misinformation, your digital team can become productive with AI safely. Even simple practices, like having managers review content that AI has created or edited, can help catch potential issues. Likewise, narrowing your use of AI to tasks like editing or improving tone can reduce the risk of generating irresponsible content.

Wrapping up

AI is introducing many exciting and groundbreaking changes, but also needs to be used responsibly to reduce risk. By selecting great AI tools that use data responsibly and provide privacy to businesses, setting reviews and guidelines for your AI use, and checking for misinformation, you can reap the benefits of AI while avoiding the pitfalls.

Alaura said it best: “The sooner you understand and embed responsible use of AI into your team’s workflows, the more likely your team will avoid the risks that come with misuse.”
