By Lee McLean | September 25, 2024

Ask the Expert: “How Can I Put Responsible AI Into Practice?”

Responsible AI development, training, testing, and use require ongoing engagement and foundational practices to ensure long-term success and ethical implementation. 

The ethos – and existential urgency – driving “responsible AI” policies is rooted in themes such as privacy, security, fairness, and transparency. Many headlines blur distinct AI concepts, such as machine learning, adaptive algorithms, deep learning, natural language processing, and the generative AI (GenAI) technologies driving the current boom. People are worried about their data being misused, their words misconstrued, and their work misrepresented. These concerns are creating an environment of fear, uncertainty, and doubt. So…

  • How do we maintain control over AI so it doesn’t mislead, misinform, or harm humans?
  • How do we ensure the AI models being integrated into products are not infringing on copyrighted content, imparting bias, or otherwise detrimentally interfering with livelihoods?
  • How do we provide AI with the necessary level of self-sufficiency and autonomy while also protecting both consumers and businesses?

These questions are not straightforward to answer, especially as the value of AI seemingly increases and potential use cases multiply. The good news is that AI engineers, academics, legal experts, policymakers, and business leaders are actively sorting through them as new regulations seek to balance responsible AI with innovation.

But before we can address the question of the day – how can you put responsible AI into practice? – we first need to ask and answer another question:

What is responsible AI?

There are many different definitions that generally align, but the International Organization for Standardization (ISO) provides a solid base-level definition. ISO states that “Responsible AI is an approach to developing and deploying artificial intelligence from both an ethical and legal standpoint. The goal is to employ AI in a safe, trustworthy and ethical way. Using AI responsibly should increase transparency while helping reduce issues such as AI bias.”

Though the intent of responsible AI is pretty straightforward, putting this theory into practice is where stakeholders struggle to find consensus. As Tess Valbuena, interim CEO of Humans in the Loop, recently mentioned, the need for AI oversight – and the magnitude of oversight – is not as objective as many would probably like it to be.

Currently, it’s up to companies and individuals (referred to hereafter as “you”) to develop responsible AI frameworks and determine how to comply with responsible AI ethics standards and oversight processes. However, many standards organizations, governmental regulatory agencies, and professional licensing boards are attempting to provide guiding frameworks.

For example, in the U.S., an October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (“EO 14110”) set the stage for the continued development of AI risk management standards. Among other things, EO 14110 directed the National Institute of Standards and Technology (NIST) to develop standards for generative AI within 270 days of the order’s release. In July 2024, NIST released the AI Risk Management Framework (RMF): Generative AI Profile (NIST AI 600-1), a companion piece to NIST’s AI RMF, which was previously published in January 2023.

Also in July 2024, the European Union (EU) enacted a pioneering piece of legislation, the EU AI Act, which is the first comprehensive legal framework aiming to regulate AI development and use.

What does this all mean for you in actionable terms?

Putting “Responsible AI” Into Practice

If you’re leveraging AI to develop, test, manage, or otherwise provide applications for consumer, commercial, or government end-users, you likely will be expected to support your claims of responsible AI practices.

The easiest way may be to publish documentation describing the functions of your AI application, including its limitations. Alternatively, you could represent or warrant that you align with a standards organization’s responsible AI practices (examples of which are provided below). Increasingly, we see companies combining both approaches.

Of course, this circles us back to the fact that you must…

  1. Identify what responsible AI practices you’ve adopted.
  2. Prove that you’re consistently abiding by said responsible AI practices.

This is especially true if you want to be viewed as a responsible AI proponent in the public eye. Not just customers but also employees, partners, analysts, media, government bodies, and shareholders are paying attention to what companies are doing and saying – and sometimes to what they’re not.

If you do not anticipate or see the value in proactively ensuring you follow responsible AI practices, it may take a customer inquiry to kick things into gear. A plethora of organizations tout responsible AI playbooks and guidelines you may utilize; however, not all guidance is equal. Below are some of the guidelines and resources we’ve found most valuable at Zebra. Many, if not all, should be able to assist you in assessing what your responsible AI practices should include:

  • Business Roundtable AI Roadmap – Helpful to business leaders, policymakers, and AI developers by providing a set of principles and policy recommendations to guide the responsible development and deployment of AI technologies.
  • G7 Hiroshima Process on Generative Artificial Intelligence – Offers policymakers, developers, and international organizations a collaborative framework for the governance of GenAI. Aims to ensure AI development aligns with democratic values, promotes safe and trustworthy AI, and addresses global challenges through inclusive governance.
  • ISO/IEC 42001:2023 Series – Beneficial to organizations of any size involved in developing, providing, or using AI-based products and services. Provides a comprehensive framework for establishing, implementing, maintaining, and continually improving an AI management system.
  • The U.S. White House "Blueprint for an AI Bill of Rights” – Outlines key principles for civil rights advocates, policymakers, and AI developers to consider for protecting the rights of individuals in the age of AI, including protections against algorithmic discrimination and ensuring data privacy.

For those falling under the purview of the EU, one “guideline” conspicuously absent from the resources above is the recently codified EU AI Act. We can talk about what the EU AI Act means for your company in a future blog post, as it deserves its own discussion. Suffice it to say, it is a landmark regulation that provides specific requirements for AI actors to abide by.

The EU AI Act takes a risk-based approach to responsible AI practices, requiring AI actors (i.e., AI technology providers, deployers, distributors, et al.) to classify their AI system into one of four risk tiers: prohibited/unacceptable, high, limited, and minimal risk. Based on the tier associated with an AI system, the actor(s) must perform (and disclose) different sets of actions to be compliant. But we’ll come back to that at a later date.
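To make the risk-tiering concept concrete, here is a minimal sketch in Python. The four tier names track the EU AI Act’s categories, but the obligations mapped to each tier are illustrative placeholders I’ve chosen for this example, not the Act’s actual requirements, so treat it as a thinking aid rather than a compliance tool.

from enum import Enum

class RiskTier(Enum):
    """Risk tiers loosely modeled on the EU AI Act's four categories."""
    UNACCEPTABLE = "prohibited/unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative (not authoritative) obligations per tier. The Act's actual
# requirements are more detailed; consult the regulation and your legal team.
ILLUSTRATIVE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not develop or deploy"],
    RiskTier.HIGH: [
        "risk management system",
        "human oversight",
        "logging and recordkeeping",
        "conformity assessment before market placement",
    ],
    RiskTier.LIMITED: ["transparency disclosures to end users"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def required_actions(tier: RiskTier) -> list[str]:
    """Look up the illustrative actions associated with a risk tier."""
    return ILLUSTRATIVE_OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(required_actions(tier))}")

Even a toy mapping like this can help a team agree early on which tier each of its AI systems likely falls into, and what that implies, before formal legal review.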

For now, the critical takeaway is that there is no single bright-line rule that can be followed to facilitate responsible AI practices across all AI technologies and applications. This is simply because AI systems are advancing rapidly and the level of risk varies with the actual use case.

As an easy example, using a large language model (LLM) like ChatGPT to draft an email is quite innocuous and “low risk.” However, using ChatGPT to seek legal advice encroaches on an area the LLM is not meant to be used for. (Though it doesn’t exactly prevent users from leveraging it this way. User beware!) This again highlights the struggle regulators are grappling with: how to ensure safety without stifling innovation.

What other steps should you take to ensure you’re engaging with and/or following responsible AI practices?

This is a bit difficult to answer without knowing the specific type of AI you’re leveraging and how you’re applying it (as the “easy example” describes above). However, the following are some general best practices:

  • Confirm and vet the source of your AI model or tool. Understand the ethical principles, policies, and practices of the AI’s provider and any other party involved in its training or ongoing oversight. Did they act responsibly during the model’s development and training? What are their current and long-term intentions with the model?
  • Understand the source of the inputs used to train the AI model and how your inputs are handled. Are there controls in place to prevent non-public or confidential information from being shared outside your model’s instance or organization? Can you trust the data source if the model uses third-party inputs to render services?
  • Keep a human in the loop (HITL). HITL is a concept outlined in many of the resources noted above and is discussed in more depth in this podcast episode. However, I want to stress how important it is to have human oversight of AI systems. Integrating human judgment and intervention into the AI decision-making process can be critical to enhancing safety, reliability, and ethical compliance. (A minimal sketch of such a review gate appears after this list.)
  • Understand the risk of hallucinations with any AI model and have guidance in place for output verification. Do not disseminate or use AI outputs without confirming – and being completely confident in – their accuracy. Even well-trained, low-risk AI models can be flawed in their outputs (just like humans). It is generally a best practice to implement an HITL approach to help minimize hallucination risks.
  • Confirm you have the right to input third-party data into the AI model and put guardrails up to ensure data isn’t fed into the model without proper permissions. That data may include customer, partner, supplier, or general market knowledge, stories, and operational data, which may contain confidential information, copyrighted material, personal data, and other data for which permission to use may be required. (See the guardrail sketch after this list.)
  • Give proper attribution to sources used in the generation of the AI’s output. It’s critical that you acknowledge the origin of work when using AI to generate content, regardless of format or intended use. Make sure you understand the differences and correlations between “authorship” and “ownership” of content or products created using AI. Providing proper attribution for AI-generated content promotes the key concept of transparency in responsible AI practices.
  • Make sure your organization’s ethical business policies and principles account for – and extend to – AI. Be explicit about what you expect your employees, partners, suppliers, contractors, developers, and customers to do, and what they shouldn’t do, with AI. Communicate those expectations often and confirm everyone understands and accepts these responsibilities and terms of engagement. Require participation in formal, application-specific training sessions to ensure the guidance isn’t lost in someone’s inbox.
  • Collaborate with your organization’s legal team and have them on standby to assist with any ethical, regulatory, or responsibility questions and concerns pertaining to AI. Legal should be able to help you navigate the considerable legal implications emerging with AI use, as well as your obligations specific to recordkeeping, reporting, disclosures, and more as it pertains to the EU AI Act’s applicability to your business.
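As referenced in the human-in-the-loop item above, here is a minimal review-gate sketch in Python. The Draft structure, field names, and functions are hypothetical illustrations I’ve made up for this post, not part of any particular product or framework; the point is simply that nothing is released until a named human has reviewed and approved it, which also supports the output-verification practice above.

from dataclasses import dataclass, field

@dataclass
class Draft:
    """A hypothetical AI-generated draft awaiting human review."""
    prompt: str
    output: str
    sources: list[str] = field(default_factory=list)  # citations a reviewer can check
    reviewed: bool = False
    approved: bool = False
    reviewer: str = ""
    notes: str = ""

def record_review(draft: Draft, reviewer: str, approved: bool, notes: str = "") -> Draft:
    """Record the human reviewer's verdict on the draft."""
    draft.reviewed = True
    draft.approved = approved
    draft.reviewer = reviewer
    draft.notes = notes
    return draft

def release(draft: Draft) -> str:
    """Release content only if it has passed human-in-the-loop review."""
    if not (draft.reviewed and draft.approved):
        raise PermissionError("Output has not passed human-in-the-loop review.")
    return draft.output

# Example: the gate blocks unreviewed output and only permits approved output.
draft = Draft(prompt="Summarize our Q3 results", output="(model output here)")
record_review(draft, reviewer="j.doe", approved=True, notes="Figures verified against the report.")
print(release(draft))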
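And here is the guardrail sketch referenced in the data-permissions item above. The patterns and function names are hypothetical and deliberately simplistic; a real deployment would rely on vetted data-loss-prevention tooling and documented permissions rather than a short regex list.

import re

# Hypothetical patterns for content that typically needs permission or redaction
# before it is sent to a third-party model.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality marker": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the reasons, if any, that a prompt should be held for review."""
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_model(prompt: str) -> str:
    """Block the request if a guardrail trips; otherwise hand off to the model."""
    findings = check_prompt(prompt)
    if findings:
        raise ValueError("Prompt held for review: " + ", ".join(findings))
    # The actual call to your model provider would go here.
    return "(model response placeholder)"

# Example: this prompt trips the confidentiality guardrail.
try:
    submit_to_model("Summarize this CONFIDENTIAL supplier contract: ...")
except ValueError as err:
    print(err)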

Remember…

Responsible AI practices aren’t just about compliance. They’re about integrity – about one’s character and (more broadly) culture. And while additional training and supplemental procedures are appropriate to address AI and GenAI’s nuances, your organization should have a strong set of corporate ethics that serve as a backstop.

###

What to Read Next:

The Truth About AI: What Mainstream Media Coverage is Missing

AI does not start and end with ChatGPT (or whatever other generative AI is creating the most buzz). So, let’s talk about what AI means for your business, including your workers.

Why AI Isn’t Working for Everyone

There’s a (data) science to AI. If you can understand the basics, you’ll recognize whether AI will solve your problems—or create unwelcome ones. Two AI experts explain.

Deep Learning Isn’t a “Bleeding Edge” Technology, but It Can Help Stop the Bleed at the Edge of Your Business. Here’s How.

If you’re wondering, “How is deep learning already at work in the real world (i.e., my world)?” or “How could deep learning help me work in new, more efficient ways?” then you’ve come to the right place. This blog post serves as a primer to deep learning, explaining what it really is, how it works, and how you can make it work for your business for inspections, quality control and more.

