Audio also available on Apple Podcasts and Spotify.
AI is showing up everywhere—at work, in our personal lives, and in the systems shaping how society at large operates. As we begin exploring agentic AI in tandem with the increasingly normalized use of large language models (LLMs) and automation, we may be skipping a crucial step: assessing our AI readiness.
In this installment of AI Horizons, Reggie Townsend, vice president of the SAS data ethics practice and member of the U.S. National AI Advisory Committee (NAIAC), sits down with Kathy Pham, Workday’s vice president of artificial intelligence, to talk about what building responsible AI actually demands. This isn’t a conversation about features or fast-moving roadmaps. It’s about trust, ethics, operations, and the human experience behind every AI system.
How Has Technology Evolved in Today’s AI Landscape?
For Townsend, technology has always followed one core pattern: enhancing human connection. From early work on mobile phones to building the internet at Sun Microsystems, the focus was on helping people communicate better, faster, and across greater distances.
What’s changed now is the type of connection. We’re moving from human-to-human communication to human-to-machine—and increasingly, machine-to-machine. This shift introduces a new layer of complexity. The experience isn’t just about facilitating communication anymore. It’s about redefining what interaction looks like when humans aren’t the only ones involved. And with that shift, new questions arise about autonomy, values, and accountability.
What Does Agentic AI Mean for Human-Technology Interaction?
Agentic AI refers to systems that can act independently on behalf of a user or organization. Townsend sees this as a massive shift—not just technically, but philosophically.
He raises fundamental questions that are still being explored:
- When should an agent disclose that it’s not human?
- When should it act autonomously?
- Can it delegate to other agents?
- And if it does, who’s responsible for the outcomes?
These aren’t just technical design decisions. They’re ethical decisions. The values that underpin agent behavior must be intentionally designed, or we risk building opaque systems that reflect hidden biases and make decisions without alignment to human needs. Townsend urges us to treat this moment with care, not just creativity.
What Frameworks Can Guide Responsible AI Development?
Townsend uses a simple ethical inquiry framework, composed of three questions:
- For what purpose?
- To what end?
- For whom might it fail?
These questions help organizations slow down and clarify why they’re building something, what outcomes they’re aiming for, and who might be harmed in the process. He also highlights a larger governance model used at SAS, centered on oversight, operations, compliance, and culture.
At the technology level, Townsend outlines six capabilities needed for trustworthy AI: strong data management, model explainability, harm detection, privacy protection, security, and effective mitigation. Governance isn’t a checklist. It’s the foundation that allows innovation to scale responsibly.
How Should Organizations Prepare for Agentic AI?
Too many companies assume they’re ready to adopt agentic AI because the technology is available. But in reality, most are still figuring out how to integrate LLMs into basic workflows. Before companies can benefit from agentic AI, they need to understand the broader impacts on people, teams, operations, and compliance.
Adoption isn’t just a matter of plugging in new tools. It demands deep shifts in organizational design, skill development, data infrastructure, and employee trust. Townsend notes that many company cultures simply aren’t ready. When leaders announce AI initiatives, employees often hear one thing: job loss. The response isn’t excitement—it’s fear.
That’s why trust needs to be designed into the rollout process. Leaders must communicate honestly, involve employees in shaping AI use, and build the internal muscle required to adopt fast-moving technology.
AI brings a new set of emotional and psychological concerns into the workplace. Townsend shares striking data points from Edelman’s 2025 trust survey: only 37% of respondents trust AI to be fair and unbiased, and just 29% believe that current AI governance structures are adequate. That’s not just a technical challenge; it’s a cultural one.
Unlike past technologies, AI is already showing up in our personal lives. When people use generative tools outside of work, they form opinions—both positive and negative—that follow them into the office. This context shapes how they react to AI at work. It also means trust can’t be manufactured after launch. It has to be built into the process from the start.
Townsend urges leaders to recognize this reality. Oversight, culture, and psychological safety are as important to AI success as model performance or infrastructure readiness.
What Role Does Data Ethics Play in Adoption?
Many companies treat data ethics as a compliance concern. Townsend sees it differently. He describes data as “our recorded experience” and ethics as “social consensus.” In other words, data ethics is the conversation about what we, as a society, find acceptable when it comes to using that recorded experience.
At SAS, they ground their approach in one principle: human centricity. Every system must support human agency, equity, and well-being. How that principle plays out depends on the business model: what it means for a B2B company may differ from what it means for a consumer tech firm. What matters most is alignment—between values, actions, and outcomes.
To make that alignment real, organizations need more than high-level principles. They need the will to look inward, challenge old systems, and change how they work.
How Are Infrastructure and Global Compliance Evolving?
Infrastructure strategy is increasingly shaped by geography. In the U.S., companies continue to rely heavily on cloud hyperscalers. But rising data egress costs are pushing some to repatriate data on-premises. There’s also a growing push for multicloud flexibility to avoid vendor lock-in.
Outside the U.S., concerns around data sovereignty are rising. Companies in Europe and Asia are questioning whether their business and consumer data should run on American infrastructure.
This global context is also shaping regulation. The EU AI Act is widely viewed as the global benchmark. Other countries, including South Korea, the UK, and Australia, are crafting their own approaches. Companies operating across borders need to be ready for divergent requirements. Townsend recommends aligning internally to a high standard like the EU AI Act to simplify operations and ensure consistency.
What Opportunities Does Agentic AI Unlock?
For Townsend, agentic AI represents a rare opportunity: the chance to democratize access to powerful tools.
He speaks to the potential for agentic AI to lower barriers for people who have been historically left out of tech—whether due to education, geography, or systemic exclusion. With natural language interfaces and creative tools powered by AI, someone with a strong idea can now build or express it without needing to write code or access capital.
That shift has the power to open up entirely new economies. It also gives companies a reason to reimagine inclusion, not just as a social good, but as a business advantage.
Townsend ends the conversation with a challenge: “Responsible innovation starts with responsible innovators.”
We are all innovators. We all care about something. And with AI reshaping the workplace, we all have a role to play in ensuring it’s used in ways that are ethical, inclusive, and human-centered. The work starts with asking the right questions early, often, and with intention.
Agentic AI is a new frontier, but it’s not an unfamiliar one. We’ve seen major technology shifts before. What’s different now is the pace, the stakes, and the proximity to our personal and professional lives.
This episode of AI Horizons reminds us that successful AI adoption isn’t about being first. It’s about being thoughtful. It’s about designing systems and cultures that people can trust. If you’re ready to lead with clarity and care, now’s the time to ask better questions and build something that lasts.