Trusted AI Needs a Human at the Helm
AI promises to make our jobs easier, our work more productive, and our businesses more valuable. In fact, new research from Slack finds that 80% of employees using generative AI tools are experiencing a boost in productivity — and that’s just the beginning.
And, with the introduction of AI assistants — including Salesforce’s own Einstein Copilot — the potential for businesses is only growing. AI assistants can already answer questions, generate content, and dynamically automate actions. And someday, these assistants will become digital sales and service agents, anticipating our needs and operating on our behalf.
But with each new AI advancement comes new ethical concerns. It’s one thing if an AI assistant offers a bad product recommendation, but if it takes misguided actions on real-world concerns like personal finances or medical information — the stakes suddenly become much higher.
As we enter this new era of human-AI interaction, how can we harness the power of AI without opening ourselves up to dangerous risks?
Keeping a human at the helm
The AI revolution is an evolution. We’re taking quantum leaps forward every day, but we can’t always explain why AI does the things that it does — or eliminate every instance of inaccuracy, toxicity, or misinformation.
For these reasons, it’s important that we keep humans firmly in control of AI systems. But as AI becomes more and more sophisticated, it can be hard to figure out how to layer in that human touch. We’ve all heard of keeping “humans in the loop,” but with this new generation of AI, it’s sometimes just not realistic for us to engage in every AI interaction or review every AI-generated output.
That’s why, at Salesforce, we believe trusted AI needs a human at the helm. Instead of asking humans to intervene in every individual AI interaction, we’re designing more powerful, system-wide controls that put humans at the helm of AI outcomes and enable them to focus on the high-judgement items that most need their attention. In other words, humans aren’t always rowing the boat — but we’re very much steering the ship.
And with a human at the helm, we can design AI systems that leverage the best of human and machine intelligence. For example, we can unlock incredible efficiencies by tasking AI to review and summarise millions of customer profiles. And at the same time, we can build trust by empowering humans to lean in and use their judgement in ways that AI can’t.
Making AI a copilot, not an autopilot
There’s a reason this generation of AI products is called copilots, not autopilots. As AI becomes more powerful and autonomous, making decisions and taking actions on individuals’ behalf, keeping a human at the helm becomes even more important. By combining the capabilities of AI with the strength of human judgement, we can make AI more effective and trustworthy.
Here are three ways we’re keeping humans at the helm of Salesforce AI:
1. Prompt Builder Helps Us Automate in Authentic Ways: Prompts, or the instructions we send to generative AI models, are very powerful. A single, human-generated prompt can help guide millions of trusted outputs, but only if it’s constructed thoughtfully. With our newly announced Prompt Builder, we’re helping customers craft effective prompts by letting them see the likely output in near real time, helping ensure they get the AI outcome they want. We’ve also added different edit modes within Prompt Builder that allow users to tune and revise their prompts so the prompts produce more helpful, accurate, and relevant results.
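To make the idea concrete, here is a minimal sketch of the general pattern behind prompt templating with a preview step: merge fields in a template are resolved against a sample record so a human can inspect the final prompt before anything is sent to a model. The function name, template syntax, and sample record below are illustrative assumptions, not Salesforce’s actual Prompt Builder API.

```python
import re

def render_prompt(template: str, record: dict) -> str:
    """Replace {{field}} merge fields with values from a record, so the
    fully resolved prompt can be previewed before it reaches a model."""
    def substitute(match):
        field = match.group(1).strip()
        # Surface missing fields explicitly rather than sending a broken prompt.
        return str(record.get(field, f"<missing:{field}>"))
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

template = (
    "Write a short follow-up email to {{contact_name}} at {{company}} "
    "about their interest in {{product}}."
)
sample_record = {
    "contact_name": "Amina",
    "company": "Acme Ltd",
    "product": "Data Cloud",
}

# Previewing the resolved prompt keeps a human in control of what is sent.
print(render_prompt(template, sample_record))
```

The key design point is that the human reviews one template, and that single reviewed artefact then governs many generated outputs.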
2. Audit Trails Help Us Spot What We’ve Missed: Our Einstein Trust Layer offers a robust audit trail that allows customers to assess AI’s track record and pinpoint not only where their AI assistant may have gone wrong, but also where it went right. These features help identify issues across large datasets that humans might not spot, and they empower us to use our judgement to make adjustments based on the needs of our organisation. For example, Audit Trail can alert us when an AI tool’s outputs are flagged as “thumbs down” a certain number of times, a sign that the AI-generated outputs might not be meeting the business’s goals. And by aggregating implicit feedback signals, like how often users edit an output before using it, Audit Trail can give us a bird’s-eye view of our systems, allowing us to identify trends and take action.
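The mechanism described above, aggregating explicit thumbs-down feedback and implicit edit signals, then alerting once a human-set threshold is crossed, can be sketched as follows. This is an illustrative sketch only; the event shape, function name, and thresholds are assumptions, not the Einstein Trust Layer API.

```python
from collections import Counter

def review_audit_log(events, thumbs_down_limit=3, edit_rate_limit=0.5):
    """events: list of dicts like
    {"output_id": str, "feedback": "up"|"down", "edited_before_use": bool}.
    Returns human-readable alerts for a person to review and act on."""
    alerts = []

    # Explicit signal: count thumbs-down ratings per output.
    downs = Counter(
        e["output_id"] for e in events if e.get("feedback") == "down"
    )
    for output_id, count in downs.items():
        if count >= thumbs_down_limit:
            alerts.append(f"{output_id}: {count} thumbs-down ratings")

    # Implicit signal: how often users edited an output before using it.
    edits = sum(1 for e in events if e.get("edited_before_use"))
    if events and edits / len(events) >= edit_rate_limit:
        alerts.append(f"{edits}/{len(events)} outputs edited before use")

    return alerts
```

Note that the function only flags trends; the decision about what to change stays with the human reviewing the alerts, which is the “human at the helm” pattern the article describes.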
3. Data Controls Help Us Better Guard Our Data: AI is nothing without data. That’s why we’ve designed robust controls in Data Cloud, our fast-growing platform that brings siloed customer data together in one place, to help businesses act on their data securely. Data Cloud features help organisations harness data for AI-powered insights and intelligence, while longstanding Salesforce core data controls like permission sets, access controls, and data classification metadata fields empower humans and AI models alike to protect and manage sensitive data.
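The interplay of classification metadata and permission sets mentioned above can be sketched in a few lines: fields carry a classification label, and a caller, whether a human user or an AI model acting on their behalf, only sees fields its granted permissions allow. The classification labels, permission name, and function below are hypothetical illustrations, not Salesforce’s actual data model.

```python
# Hypothetical classification labels that mark a field as sensitive.
SENSITIVE_CLASSIFICATIONS = {"Confidential", "Restricted"}

def fields_visible_to(user_permissions, field_metadata):
    """Return only the fields a caller (human or AI model) may see,
    based on per-field classification metadata and granted permissions.

    field_metadata: dict mapping field name -> classification label.
    user_permissions: set of granted permission names."""
    visible = []
    for field, classification in field_metadata.items():
        if (classification in SENSITIVE_CLASSIFICATIONS
                and "view_sensitive_data" not in user_permissions):
            continue  # mask sensitive fields from callers without permission
        visible.append(field)
    return visible
```

Filtering at the data layer means the same control protects every consumer, so an AI model never receives fields the human it acts for could not see themselves.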
Pioneering a new approach for the AI era
As the AI era continues to unfold, it’s critical that both humans and technology evolve along with it. The AI revolution is not just about technological innovation — it’s also about empowering humans to sit successfully at the helm of AI, and use it in ways that are trustworthy and effective.
Our approach is evolving, and we are committed to continued research, learning, and multi-stakeholder collaboration on this topic. But with a human at the helm, we believe we can combine the best of human and machine intelligence for this new AI era — leaning into AI’s capabilities and freeing up humans to do what they do best: be creative, exercise their judgement, and connect more deeply with one another.
With AI and humans working together, we can create more productive businesses, more empowered employees, and ultimately, more trustworthy AI.
Linda Saunders is Salesforce’s Director, Solution Engineering Africa