
Don’t risk your rep: The importance of ethics and compliance in generative AI

Putting your trust in someone means believing in both their good intentions and their ability. Or, to put it another way: “Trust is about character and competence,” says Justin Tauber, Strategic Innovation and Responsible AI Lead at Salesforce ANZ.

Justin Tauber, Strategic Innovation and Responsible AI Lead at Salesforce ANZ. Source: Supplied.

Even with the best intentions, failing to handle generative AI responsibly can damage your perceived competence and evaporate public trust in your business. This risk to reputation is concerning: 61% of workers embrace generative AI but lack trusted data and security skills.

Developing an ethics and compliance culture is the best safeguard for your organisation. But how and where to start?

Watch on-demand: AI & your business: how SMEs can get ready, a free webinar hosted by SmartCompany in partnership with Salesforce exploring what AI means for small businesses.

Five considerations for AI use

First, Tauber says, consider how you use generative AI from the following angles:

    1. Context: “Where are you using generative AI?” Tauber asks. “Is it a high-risk situation that’ll significantly impact the end user, or a low-risk situation?” Carefully consider the context in which you plan to use AI, and weigh its potential negative as well as positive impacts on end users.
    2. Data transparency: Transparency in the data used to train and prompt AI models is vital. By knowing the origin of the data and managing any biases it may contain, you can ensure more accurate and fair AI outputs. Tauber points out the old adage, “Garbage in, garbage out.”
    3. Skilled human oversight: Incorporate a “human-in-the-loop” approach to monitor AI outputs, and invest in training your team in the safe, ethical, and effective use of AI through resources such as Salesforce’s free online learning platform, Trailhead.
    4. Reliable and fit-for-purpose product: Prioritise thorough testing, validation, and assessment of the AI system’s performance. Ensure it aligns with your business needs, and collaborate with vendors to address concerns and tailor the technology to your needs.
    5. Establish guardrails and guidelines: Salesforce recently published its industry-leading set of guidelines for responsible generative AI, which guide its teams internally and help its customers anticipate and mitigate risk. Smaller businesses can establish similar guidelines to help ensure ethical AI practices that drive sustainable innovation and success.

Understand its limits

Tauber likens generative AI to Eric Cartman from South Park: a smooth talker who pretends he’s an expert on everything. Both can be eloquent and entertaining, but leaving either unsupervised to represent your business in a customer-facing role could have disastrous consequences.

“The mistake you don’t want to make is to invite generative AI to own the customer relationship,” Tauber advises. He emphasises that AI’s eloquence doesn’t equate to understanding how to build genuine customer relationships. It’s crucial to ground generative AI in reliable customer data, understand its limitations, and have properly trained human supervisors actively monitoring its output.

Practical steps for responsible AI implementation

According to Tauber, handling generative AI safely and responsibly means working through the following phases:

Short term: Designate a skilled person, or up-skill someone, in your business who understands:

  • Where AI is currently being used
  • Potential risks and their mitigation
  • How the technology is being monitored
  • Who is accountable for the outcomes

Medium term: Foster an ethical culture by educating your entire team on how AI works, including its limitations, risks, and dangers.

Long term: It’s all about diversity. That means “extending the range of voices that you need to be responsible to” and “getting feedback from those communities that are likely to be most vulnerable in the context in which you’re using AI”, Tauber says. Extending those voices could include establishing an advisory board to help uncover your blind spots.

That last step takes time, Tauber says, but it’s incredibly important. “Ethics and innovation are not contradictions,” he adds. “They’re not contrary to each other. In the long term, that diversity of voices will help you build products and better services that are seen as reliable and trustworthy by more of the community. That’s a genuine competitive advantage.”

Read now: The difference AI makes: A company road-tests productivity hacks