Why AI's Future Depends on Humanity
Workday’s Kathy Pham and Salesforce’s Paula Goldman discuss moving from theoretical discussions to practical frameworks for ethical technology and a human-centered future of work.
Audio also available on Apple Podcasts and Spotify.
When we talk about the future of work, one thing is clear: building and deploying AI responsibly isn't optional; it's an imperative. But how do we move from big ideas to practical steps that ensure technology truly serves humanity?
That's exactly what Workday's VP of Artificial Intelligence, Kathy Pham, explored in a recent AI Horizons podcast episode with Paula Goldman, Salesforce's chief ethical and humane use officer and executive vice president of product. For Pham, Goldman is her “go-to expert on ethics and technology in corporate settings.”
Their conversation began with a laugh over a simple camera glitch, setting the stage for a powerful discussion about the human element at the heart of technology.
Goldman’s journey to the forefront of ethical technology is anything but conventional. She has a history of global impact investing, has served on global advisory boards for governments, and even created one of the first-ever online museums. It’s this diverse background, she says, that provides the ideal foundation for her work today.
Goldman’s personal motivation is rooted in her upbringing. Her mother taught her that technology could be an equalizing force, a conviction Goldman still holds. Yet she also recognizes that ethics is a uniquely human capacity, one AI does not possess on its own, which is why guardrails are critical. This notion, combined with Goldman’s academic study of how “unorthodox ideas become mainstream,” has shaped her approach to the field. It’s why, for her, responsible technology isn’t a passing trend; it’s a “movement that is in the process of scaling.”
A pioneering vision for responsible technology led Salesforce to create the Office of Ethical and Humane Use (OEHU), which evolved from what began as an advisory council. Notably, the impetus for the office came from within, with employees proactively seeking ways to build technology that serves all communities. As Goldman explains, she and her team “looked up and said, technology can be a powerful force for good, but it also needs guardrails.”
Salesforce’s OEHU focuses on intentional design that shapes how people adopt technology, believing that responsible AI depends not just on what technology is capable of, but on the people who use it. From day one, the strategy has been about bringing the “human side of the equation into technology to make the outcomes better.”
Getting that right required building a wide table. The Office of Ethical and Humane Use relies on human input from a variety of groups, including outside experts, cross-functional executives, and frontline employees.
This multi-faceted approach, bringing diverse human perspectives to the forefront, proved critical. It established an infrastructure and culture in which people felt prepared and equipped when the latest wave of AI emerged.
The solution is not just about what the technology is capable of, but also about the people at the center of this transformation.
Salesforce’s OEHU has also developed several practical frameworks to embed ethics directly into product design:
Responsible AI Principles: When generative AI began to dominate the landscape, the team adapted their trusted AI principles to meet the moment, with accuracy at the top of the priority list. These principles hold true in the era of AI agents.
Trust Patterns: Salesforce products, including Agentforce, implement trust patterns, which are systematic guardrails for safety, accuracy, and trust.
Mindful Friction: This design principle creates a “subtle nudge” that prompts users to think about the choices they are making. One example is a marketing segmentation tool that uses demographic variables: before proceeding, the user must make a conscious choice, helping drive the right outcome.
AI Command Centers: As AI systems become increasingly autonomous, Goldman says, it is crucial to give humans tools to monitor and tune them. Salesforce’s command center, for example, allows a user to oversee multiple AI agents and make adjustments, keeping the human in control.
Goldman and Pham agreed that businesses must have their own AI policies because technology evolves so quickly, but it’s also important to have government-level guardrails.
Goldman and Salesforce have contributed to these conversations on a global scale. She has served on the National AI Advisory Committee in the United States and works with bodies including the U.S. National Institute of Standards and Technology (NIST), the EU, and even the Vatican. This multi-stakeholder involvement is a core part of her philosophy, helping develop enduring standards for things like measuring accuracy and precision.
The conversation concluded with two real-world examples and a final thought that encapsulates the vision for the future of work and responsible AI: a future where technology empowers, rather than replaces, the human.
The first was a luxury brand that used AI to augment human client advisors, helping them find information faster and more accurately. As a result, the advisors became more effective and empathetic, and even began converting interactions into upsells, essentially becoming salespeople.
The second was an accounting service that used AI to handle routine tax season questions. This freed up tax experts to focus on more complex matters or provide financial advice—areas where human input is ethically and legally critical. Both examples highlight OEHU’s central thesis: the key to a successful human-AI relationship is understanding that people are there to delight customers, build relationships, and provide expertise where it is most needed. By leveraging AI to take on the repetitive, transactional tasks, we can keep humans at the center and empower them to do the things only people can do. The true value in AI isn’t in what it replaces; it’s in what it returns.
Want to ensure you and your team can maximize human potential through AI? Download our ‘Elevating Human Potential: The Skills Revolution’ report to learn how to prepare your workforce and systems for the AI age.