Deloitte and SAP assess how organizations can maintain trust in the AI era


Whether you’re creating an AI policy, adjusting an existing one, or reassessing your company’s approach to trust, maintaining customer trust can be increasingly difficult given the unpredictability of generative AI. We spoke with Deloitte’s Michael Bondar, principal and leader of enterprise trust, and Shardul Vikram, chief technology officer and head of data and AI at SAP Industries and CX, about how organizations can maintain trust in the AI era.

Organizations benefit from trust

First, Bondar said, each organization needs to define trust in relation to its specific needs and customers. Deloitte offers tools for this purpose, such as the “trust domains” system found in some of Deloitte’s downloadable frameworks.

Organizations want their customers to trust them, but people involved in discussions about trust often hesitate when asked what exactly trust means, he said. Trusted companies show better financial results, better stock performance and increased customer loyalty, Deloitte found.

“We saw that almost 80% of employees feel motivated to work for a trusted employer,” Bondar said.

Vikram defined trust as the belief that an organisation will act in the best interests of customers.

When it comes to trust, customers ask themselves, “What is the uptime of these services?” Vikram said. “Are these services secure? Can I trust this particular partner to keep my data safe and compliant with local and global regulations?”

Deloitte found that trust “starts with a combination of competence and intent, meaning whether the organization is capable and reliable of delivering on its promises,” Bondar said. “But also the rationale, the motivation, the reason behind those actions aligns with the values (and) expectations of the various stakeholders, and humanity and transparency are embedded in those actions.”

Why might organizations have trouble improving trust? Bondar attributed the difficulty to “geopolitical concerns,” “socioeconomic pressures,” and “fears” around new technologies.

Generative AI can undermine trust if customers are not informed about its use

Generative AI is at the forefront of new technologies. If you’re going to use generative AI, it has to be robust and reliable so as not to undermine trust, Bondar emphasized.

“Privacy is key,” he said. “Consumer privacy must be respected and customer data must be used for its intended purpose and only for its intended purpose.”

This applies to every step of using AI, from the initial data collection for training large language models to enabling consumers to opt out of having their data used by AI in any way.
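Honoring an opt-out in practice means excluding those consumers’ records before any data reaches a training pipeline. A minimal sketch of that filtering step follows; the record layout (a `consented` flag alongside a `text` field) is a hypothetical illustration, not any vendor’s schema:

```python
# Minimal sketch: exclude opted-out consumer data before model training.
# The record fields ("text", "consented") are illustrative assumptions.

def filter_opted_in(records):
    """Keep only records whose owners have explicitly consented to AI use."""
    return [r["text"] for r in records if r.get("consented") is True]

records = [
    {"text": "support ticket A", "consented": True},
    {"text": "support ticket B", "consented": False},  # opted out: excluded
    {"text": "support ticket C"},                      # no explicit consent: excluded
]

training_corpus = filter_opted_in(records)  # only "support ticket A" survives
```

Note the conservative default: a record with no explicit consent flag is treated the same as an opt-out, which matches the principle that data is used “for its intended purpose and only for its intended purpose.”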

In fact, according to Vikram, training a generative AI model and observing where it makes mistakes can be a good opportunity to remove outdated or irrelevant data.

SEE: Microsoft delays AI Recall launch, seeks more community feedback

He suggested the following methods to maintain customer trust while implementing AI:

  • Provide employees with training on how to safely use AI. Focus on war-gaming exercises and media literacy, and reinforce the concept of data trustworthiness across your organization.
  • When developing or working with a generative AI model, you must obtain consent to process data and/or ensure compliance with intellectual property law.
  • Watermark AI content and train employees to recognize AI metadata when possible.
  • Provide a complete picture of your models and AI capabilities by being transparent about how you use AI.
  • Create a trust center. A trust center is “a digital-visual layer that connects an organization with its customers, where you educate, (and) share the latest threats, the latest practices, (and) the latest use cases that are emerging and that we’ve seen work wonders when done right,” Bondar said.

CRM companies are likely already compliant with regulations like the California Privacy Rights Act, the European Union’s General Data Protection Regulation and the SEC’s cyber disclosure rules, which could impact how they use customer data and AI.

How SAP is building trust in generative AI products

“At SAP, we have a DevOps team, infrastructure teams, security team, compliance team that are deeply embedded in every product team,” Vikram said. “That way, every time we make a product decision, every time we make an architectural decision, we think about trust as something that’s there from day one, not something that’s an afterthought.”

SAP implements trust by creating connections between teams, as well as by creating and upholding a company ethics policy.

“We have a policy that we can’t ship anything unless it’s approved by the ethics committee,” Vikram said. “It’s approved by the quality gates… It’s approved by the security counterparts. So it’s actually adding a layer of process on top of the operational stuff, and both of those things together help us operationalize trust or enforce trust.”

When SAP introduces its own AI products, the same principles apply.

SAP has introduced several generative AI products, including CX AI Toolkit for CRM, which can write and rewrite content, automate some tasks and analyze enterprise data. CX AI Toolkit will always show its sources when you ask it for information, Vikram said; it’s one way SAP is trying to gain the trust of customers who use its AI products.

How to deploy generative AI in your organization reliably

Overall, companies need to incorporate generative AI and trustworthiness into their KPIs.

“With AI, especially generative AI, there are additional KPIs or metrics that customers are looking for, like, ‘How do we build trust, transparency, and auditability in the results that we get from a generative AI system?’” said Vikram. “Systems, by default or definition, are nondeterministic with respect to high fidelity.

“Now, to leverage these specific capabilities in my enterprise applications, in my revenue centers, I need to have a basic level of trust. What are we doing at least to minimize hallucinations or to bring in the right insights?”

Decision-makers in leadership positions are eager to try AI, Vikram said, but they want to start with a few specific use cases at a time. The speed at which new AI products are emerging can be at odds with this desire for a measured approach. Concerns about hallucinations or low-quality content are common. Generative AI for legal tasks, for example, shows “widespread” errors.

But organizations are eager to try AI, Vikram said. “I’ve been building AI applications for 15 years, and there’s never been anything like this. There’s never been this much appetite, and not just the appetite to know more, but to do more with it.”
