Kate Darling (00:01.184):
In so many of our conversations around robots and AI, we're constantly comparing the technology to humans. Instead of asking, can we eventually recreate human intelligence and skill, maybe a better question is, why would we even want to do that if we can create something that's different?
Kathy Pham:
I came to Workday about two years ago, right around the time when the generative AI movement was just starting up, and we were all trying to figure out, what does this mean for enterprise and work in businesses? And so when we think about how we bring in technology, I often will say technology, not just AI, because AI is part of it, but AI is part of the whole software system, right? And so it's not just the AI. How do we bring in technology in a meaningful way where it helps the work be better?
Kathy Pham (00:45.154):
Hi, everyone. Thank you for joining us. It's so incredible for me to have our Illuminate bus here in our hometown, the greater Boston area, on this gorgeous day. Thank you for spending it indoors with us. I'm so thrilled to have Kate with us, Dr. Kate Darling. We met several years ago at an event at the Berkman Klein Center here in the Cambridge area. And I remember thinking, my gosh, I can't believe I get to meet Kate Darling. She at that time was already a world-renowned roboticist, ethicist, and social scientist with a background in law and economics, pretty much the person you want to go to to help all of us unpack this moment we're in, really any moment, with technology and AI and robots and just that intersection of all of it together. So let's kick it off. Kate, what brought...
Kate Darling:
Wait, hang on. We got to talk about you a little bit first too because I don't think I've ever told you this. So we met at this Harvard program and you were an incoming fellow. And when your cohort of fellows was applying, they actually asked me to look at some of the applications that were AI related because I was like the only person there doing anything remotely in that area. And the only application I remember was yours and it wasn't because you had all these achievements and accomplishments and a fancy CV, because of course you did, but it was your cover letter that was just so beautiful and genuine. And it was just so clear that you cared about AI not because of the technology, but because you cared about people and you really wanted to help people. And I've just been continuously impressed by you ever since then and watching you and your passion and your drive and how you live your values through your work. I just want to say that I am very honored to be here with you today.
Kathy Pham (02:40.908):
I did not know that. So I'm just going to go home now. Thank you for sharing. What got you into robotics? And then not just robotics, but understanding everything around robotics, right? Economics, law, and social science and people. What got you into this space?
Kate Darling:
So I have a really weird background. I originally went to law school and then I did law and econ. And then I found my way into this weird world of human-robot interaction and social robotics. And it kind of makes sense that I wound up there, because I was always really interested in robots, but I wasn't an engineer building technology. I think I was more interested in people and how people engage with the technology than I was in the technology itself. Although I can pretty much nerd out about robots now if you want to.
But I started working with people who were building social robots. And I was super interested in how and why people who engage with robots tend to treat them like they're alive, even though they know that they're just machines. And so I was working with the engineers who were making these robots that were intentionally designed to do this. Social robots mimic cues that we automatically recognize. Language is one, but it can also just be movements, sounds, behaviors that people automatically, subconsciously associate with states of mind. And so they try to get people to treat the robot more like a social agent than a tool. And I thought that was really interesting, but I was also interested in the societal implications of putting those technologies out into the world, both in terms of all of the benefits they could bring in health, education, and other areas, but also in terms of, chatbots can give people false or harmful information. Robots can tell people your secrets. How do we prevent people from being emotionally manipulated by this technology that is being controlled by companies or governments or people who might not have the user's best interests in mind? So I started thinking more broadly about how we need to be integrating this technology in a responsible way while also leaning into the positives.
Kathy Pham (04:43.894):
And that was years ago, right? You've been working on this well before this moment we're in with AI and agents.
Kate Darling:
Oh yeah, yeah, I've been doing this for probably 14, 15 years.
Kathy Pham:
That's incredible. The depth of research and what you know in this space makes so much sense now. Speaking of that, one of the things that Kate has out is a book that helps us understand how we interact with machines and robots via how we interact with animals. Can you tell us a bit more about that?
Kate Darling:
The book is called The New Breed. But the premise of the book was that I feel like in so many of our conversations around robots and AI, we're constantly comparing the technology to humans. So comparing artificial intelligence to human intelligence, comparing robots to people, whether that's in our stock photo imagery or when we talk about job replacement or when we're talking about what intelligence even is, we're always comparing to our own. And I always felt like that's maybe the wrong comparison. It makes sense that we do this. We love to project ourselves onto the technology. We have all this sci-fi influence as well. Also, the early AI researchers were setting out to recreate human intelligence, and there are so many people who are still on that path today. But instead of asking, can we eventually recreate human intelligence and skill, maybe a better question is, why would we even want to do that if we can create something that's different?
Kate Darling (06:13.974):
I feel like the human comparison really limits us because it shouldn't be our goal in the first place. And so I've always found that the animal analogy helps change the conversation just a little bit. We've used animals for work, for weaponry, for companionship throughout history. And the reason we partnered with animals isn't because they do what we do, but because their skill sets are different and that's useful. Oxen have plowed our fields. We've used horses to create entire new economies of trade.
We've used pigeons to deliver mail for thousands of years of human history, letting us communicate with each other in new ways. So the point of the book isn't that robots are like animals or that we should be using them for the same things. The point is just that using a different comparison intentionally helps open people's minds to other possibilities for what we could be using the technology for. And it helps challenge this constant assumption we're making that it can, will, or should replace people.
I'm actually curious how you think about this and how Workday thinks about this in general, like technology as a supplement or as a replacement. Can you talk a little bit about that?
Kathy Pham:
So, one of the reasons I came to Workday: I came about two years ago, right around the time when the generative AI movement was just starting up, and we were all trying to figure out, what does this mean for enterprise and work in businesses? And I came because I love the space of the technology that is the backbone, the foundation, for organizations. Some people call it the boring technology you don't see, but if you don't have it... it's like the foundation of a house, right? You're probably not thinking about it when you go into your home because you're looking at the shiny things on the wall. But without that, you can't run anything. And because of that, the way we think about AI and technologies is that the people, if you're using the house analogy, the people that live in the home are always the most important thing, and we're the foundation that has to keep holding it up.
So no matter what shiny new AI thing or any kind of technology we bring in, even if it's not AI or machine learning, we're still thinking about, how do we accurately and correctly track all of our employees? How do we manage payroll so that everyone can get paid, especially someone who might be living paycheck to paycheck? If there's a blip in the system, that has really tremendous effects, right?
To things like finding anomalies across large numbers of contracts. Or the performance review process, which no one particularly loves, because it takes months for every cycle. And if your company does two cycles, half the year you're dealing with performance reviews.
At the end of the day, all of these things are about understanding who works with us, who gets paid, how we track their performance, and how we also empower them with better learning and skills. And so when we think about how we bring in technology, I often will say technology, not just AI, because AI is part of it, but AI is part of the whole software system, right? And so it's not just the AI. How do we bring in technology in a meaningful way where it helps the work be better? But that also requires understanding what work actually looks like, and then which parts of the work, to your point about the oxen and the human or the horses, make more sense for the technology. What part is it better for a horse to do than me? But then what other parts is it better for me to do than the horse? There's a distinction. There are things that a machine can do much better than us. Large language models are incredible at processing large amounts of information. I can't read 80 books overnight. A machine can, and it can help me.
But then there are other parts of the role that I want to handle. So we think a lot about the interaction. And our Chief Responsible AI Officer, Kelly Trindel, is another incredible person, an incredible leader with a background in startups, companies, government, and social science and technology, who helps us unpack that balance of what machines are good at, that we should do with machines, and what humans are good at, that we should do ourselves. And that just requires a lot of work to understand what work looks like.
So, building on that. We've talked about humans and machines and what work looks like. Another part of this is our feelings and our emotions, both when we come to work and in our personal lives as well. And you've studied a lot when it comes to the bonds that people have with machines. Can you tell us a bit more about that and help us better understand that space?
Kate Darling (10:34.594):
Because people treat these technologies, subconsciously, a little bit like they're alive. And by the way, I love that you talk about technology and not just AI, but I will say one of the things that I think is unique and different about AI and robotics and automated technologies is this perception, this subconscious perception, that it has agency. And that can lead to people actually developing social and emotional connections with the technology, which is super interesting.
It's not just me; there's a whole field of research in human-robot interaction that demonstrates that people respond to the cues that these lifelike machines give them, even if they know that they're not real, and that people can develop real emotional attachments to the technology. I think the most important and interesting thing to me out of that research is to understand that this happens. People will treat these technologies differently than other devices.
And also understand that that's not something that's going away. It's not a novelty effect, it's not a generational thing, it's not, "our generation grew up with Star Wars, so maybe we're different." No, it seems to be biologically inherent to us that we do this. And there are a couple of different reasons. One is that, like I said, we love to project ourselves onto others. So we have this inherent tendency to anthropomorphize and project human emotions, behaviors, what have you, onto others, and we love to do this to animals in particular. We'll project emotions that may or may not actually be there. We actually don't care if we get it wrong. So that's very ingrained in us. Even from infancy, we learn to recognize faces. So we're trained to see ourselves in the world around us. But then there's the other thing that our brains do, which is so fascinating to me and the reason why I think embodied robots are even more interesting than a non-embodied AI agent.
So there's research indicating that our brains are biologically hardwired to be scanning our environments and separating things into objects and agents, which makes sense evolutionarily because we had to watch out for other agents. But now we're living in a world where we have objects that move like agents and it tricks people's brains into projecting intent onto that movement.
Kathy Pham (12:42.23):
When you say our brains are trained to separate objects and agents, what are the agents that our brains will accept? Like if you're looking around, what are we separating?
Kate Darling:
Like an animal from a falling leaf, for example, or something that's moving autonomously versus something that's static or moving in a random way. We can distinguish those very clearly, automatically, in our brains, but robots tend to mess that up because they're objects that move like agents, so we put them in the agent category subconsciously. And it's not just the social robots that I was talking about, the ones intentionally designed to do this. People will do this with their Roombas; 85% of Roombas have names. I don't know the stats for the handheld Dyson, but they're lower, right? People feel bad for the Roomba when it gets stuck somewhere, and the Roomba is just a disk, but it's moving around on its own. There's this whole body of research demonstrating this. I think it's super interesting. I also think it's really important, and it's important now in particular because
Kate Darling (13:45.834):
It's not like we have new technology. We've had robotics for decades. We've had AI for decades too, frankly. But it's been behind the scenes. It's been building cars in factories. It's been creating your Netflix recommendations. And now what's happening is the technology is coming into shared spaces, and people understand they are interacting with something that can sense, think, make autonomous decisions, and learn. And so as we integrate the technology into shared spaces, I think it's important to understand that people will sometimes treat it differently than another device.
Kathy Pham:
How do you think that mental model of distinguishing objects and agents in the world, which we've been trained to do, applies to software agents or machines? One example is when you're interacting with a chatbot. Maybe as things get more agentic and more automated, it's probably even harder to tell. But how do you see that ability to distinguish agents and objects, the way you've described it, playing out in this moment we're in now?
Kate Darling:
I think it very much applies to embodied physical robots. But what's interesting about these new AI developments and the large language models is that language is also a really powerful cue for humans. And we used to think that this language ability was unique to us. And so if you're able to have a complex conversation with a chatbot, it's really hard not to project some kind of agency onto that and project intelligence onto that.
It's also just developing so quickly. I don't know how you feel about this, but I was totally surprised by the AI developments over the past few years. And I actually think anyone who says they anticipated it is lying. But I'm curious, did you see this coming? And how have these evolutions in AI impacted the way that you work with AI?
Kathy Pham (15:34.062):
I think it's a yes and no question for me, in part because I remember when. So I studied computer science, and there was a period of time when people were like, if you want to get into neural networks, good luck, it's going nowhere. And neural networks are some of the underlying technology that later powered the GPT technology, which later powered the large language models. Neural networks are just the stuff that's behind the scenes sometimes, and to your point, AI has existed for a while. And I think to some extent some of us will keep seeing the incremental changes, where it's, well, we used to have email, and now there's a spam filter that uses some learning so that you never see spam, and it's behind the scenes, as you mentioned. Or you might have RPA or different robotic processes behind the scenes. You might have robots in warehouses. But these are things that become a natural progression as you innovate, because you discover a new tool you want to use. But the moment that we saw in November 2022 was such a leapfrog moment that happened so quickly that I think that moment was unpredictable. And I think it was unpredictable too because a lot of researchers had been doing work with large language models and generative pre-trained transformers, all these GPTs and all these things. And actually what I think about is the user interface, the human interface on top of the technology, which was ChatGPT, that made it so that most of the world that uses the internet was like, oh,
I know what large language models are, or at least have a concept for what they are. So it's the human interaction component on top of the computer science and AI research that made it so that most people now know what it is. And so where I think about it going is, well, how do we think about that human-computer component for other ways to apply the technologies?
In enterprise, and I'll do a talk a little bit later around agentic systems, I think agentic systems are a little bit of the ChatGPT moment, where we can take this technology, automate parts of our business processes that have been way overcomplicated for the last few decades, and have this aha moment. But it requires us to deeply understand work in our businesses, how people do work, and the things you want automated and the things you don't.
Kathy Pham (17:51.83):
The moment where you want that horse to take over, but the other moment where you're like, no, no, no, I want to do that part. And it requires us to have people in our organizations who understand all those pieces of work and the technology, and then figure out where we want to apply the technology. I talked a bit about how we understand the problem and who gets to decide what problems to solve. And then it also gets into how we build responsibly. Can you tell us a bit more, from your experience, how do you think about the process of building technology, and how do you do it responsibly with all of our development teams?
Kate Darling:
There's generic advice that's common for organizations on how to do responsible AI, and it's good advice: have ethical principles, have diversity, both in the development of the technology and in the data that's being used, and also in any teams that are interacting with it. Let's just say, have diversity in all the teams, everywhere, always. We have enough research showing that that's important. You need to be constantly reevaluating your AI systems.
Another common piece of advice is having a board of ethics, having transparency, explainability. I think what I would add to the more common advice, and this gets to what you were saying about understanding where the horse and the human need to have handoffs, is that you have to be thinking critically and longer term about what you're using AI for and why.
In particular, I see a lot of organizations falling into the trap of, everyone's using AI and we have to get on this train and sprinkle it on everything, otherwise we'll be left behind. Or other organizations being like, okay, maybe we can make a quick buck by automating away some tasks. But if we want to truly lean into the potential of this technology, we need to be thinking longer term, like you said, about how to combine the skills of these systems and humans. And that can be context specific. So I do think it's good to experiment with technologies, but with an eye toward, what are the technologies that are actually going to help people do a better job? Or what are the technologies that are going to provide something that we didn't already have previously? Because I think those are the two sweet spots: either helping people do a better job, be more productive, more creative, help them do their jobs in whatever way, or provide something that wasn't already there. I think that's the true potential for automated technologies.
Kathy Pham:
And I know you have a lot of experience working with builders and developers to help them understand that space too. So thank you for sharing that.
Kate Darling:
It reminds me of something that the anthropologist Madeleine Elish used to say, which is that instead of saying we "deploy" technology, we should really say "integrate," because it prompts the question: into what? And you really have to understand the whole system that you're integrating it into, with its culture and social norms and some of the ripple effects it's going to have, instead of just deploying it.
Kathy Pham:
Here you go, here's a piece of technology, good luck using it. That's something we thought about at the Media Lab. We had this Ethics and Governance of AI group, and our team explored, we actually did a little launch of it and then worked on it for a few years, instead of human in the loop, which is a concept that's existed for a while, like how do we keep humans in the loop, we flipped it to AI in the loop. Where, to your point about integration, it's how do we fit AI into our systems, versus how do we think about putting people back into these AI systems? How do we build it such that we integrate the technology into our very human spaces? Before we go into questions, where do you see, maybe just say five years from now, where do you see the technology and AI going, human-AI interaction, how do you see all of that going?
Kate Darling:
I hate making predictions these days. I don't know about you.
Kate Darling (21:42.958):
No, no, no. I'll give you my one prediction that I'm very sure about, although I don't know the timeline. A prediction, and a hope, I mean, it's both. So from my research looking at the history of how we've used animals, it's very clear that, at least in Western society throughout history, we've used most animals like tools and products, and then some of them have been more like companions. And my prediction for AI and robotics is that it's going to go the exact same way, where we use most of these systems like tools and products, and then with some of them we have more of a companionship or a social relationship.
And I think that's going to impact even the workplace. I don't know if you'll remember that Google engineer who got sidelined for claiming the AI system he was interacting with was sentient. This was, I think, before the release of ChatGPT.
Kathy Pham:
For those who are unfamiliar, can you tell them about the story?
Kate Darling:
Yeah, it was just this Google engineer who claimed that he was testing this internal, experimental chatbot, and he ended up going public and saying that it was sentient and that it deserved rights and so on. And he ended up getting ousted from Google. But the interesting thing was the press had a field day with this; the guy was ridiculed into the ground. And he was ridiculed because the AI system is not sentient, and I think everyone understands that. But I think we need to take this type of situation way more seriously than we did back then, because it's going to happen way more and not less.
And I do think, coming back to responsible technology integration or effective technology integration, that we're still underestimating the impact of our social tendencies. And so my hope is that we at least become more aware that this is also something that happens, that we project so much onto this technology and behave around it as though it is alive, and that that does have implications.
Kathy Pham (23:39.086):
Can you tell us more about underestimating our social tendencies? What are you seeing?
Kate Darling:
I think people still think that's fringe, or that it only happens to people who are lonely and vulnerable or whatever. But there are chatbots like Replika with millions of users worldwide, and it's not fringe, and it's not just random outlier people. It's going to be all of us. And so I think we need to acknowledge that, better understand it, and start working with that to integrate the technology, instead of just dismissing it as silly because, well, the technology isn't alive.
Kathy Pham:
I mean, to your point, the 85% of people who name their Roombas. And I think even just watching kids interact with devices now, how quickly it becomes integrated into their lives, right? And how they'll talk to a device, like, "hey," take your chatbot of choice, and start asking it all sorts of questions.
Kate Darling:
What do you tell your kids? We both have kids. How do you educate them about that stuff?
Kathy Pham (24:37.878):
I actually am grateful for this. Years ago, there was someone at the Berkman Klein Center who did research specifically on kids' interactions with chatbots. I forgot who it was, or else I would love to cite her, so I'll have to make a note of that in the podcast notes or something. I think about her because, one, she looked into the kinds of questions kids asked. She looked into the kinds of guardrails a company should put in place, because for the most part, if it's a device sitting in your home,
it's open to any kind of question, whereas with a browser, if you have kids, you can put different parental guardrails in place. But she mostly also studied the tone that kids use with a device, which might be different than the tone they use with a person. So I actually do a couple of things. I've found myself saying, don't yell at the device, because when they feel like it doesn't play the right song, their tone gets progressively harsher.
And I think, oh my gosh, am I raising a child who will now yell at a human because they don't get what they want? So I find myself in a really interesting state, especially with our field of work, around how our kids are interacting with devices. But we also teach them a lot of the right prompts, ways to ask questions, appropriate questions they should ask, questions that have reliable answers versus questions that don't. And because we're a Vietnamese family, I've also found myself teaching them about how, when a relative comes over whose accent is stronger, not all devices understand accents, and companies have to work on that, because that's what bias looks like. So it's all sorts of things that I think are interesting to come up with. What about you? What comes up with your kids?
Kate Darling:
So do you know that Alexa has like the fart function?
Kathy Pham:
I do not.
Kate Darling:
Ask Alexa to fart and she has hundreds of different types of farts that she can do. But after she does a few, she says, would you like me to fart happy birthday to you? You can upgrade to the Extreme Fart Package for $2.99. Would you like to purchase it now? So I've had to train my kids to say no, first of all, and also explain why that's happening. And I think they get it.
Kathy Pham (26:16.075):
Nuts.
Kathy Pham (26:41.902):
I guess it raises a much bigger question about products and marketing, and marketing to children. I spent a little bit of time with the Federal Trade Commission, so now my head is spinning on the responsibility we have to not make it so easy to buy something extra.
Kate Darling:
But back to hopefulness, what are your predictions? Or what do you hope for over the next five years?
Kathy Pham:
I now have "happy birthday" in my head, thanks. You know, I have two things. I am excited for the day when we go back to talking about the problems we're solving for. I think most organizations do. I also recognize the moment we're in is very different with AI, beyond just the technology. But I find that sometimes the AI conversation overshadows the problem. So it's like, where do you want to use this? And you're like, but for what? I just want to use it.
And it's just a number of factors. Maybe it's pressure from companies, especially from leadership, or maybe your academic lab wants you to produce a paper in this new space. There are so many factors. And I think that is interesting for the sake of moving technology forward. But I also am excited for the day when we go back to talking about what's the biggest problem we want to solve for, and then, maybe this model is a really good mechanism to do that. But we start first with the problem, which is good product design.
And so I'm excited for that moment, because it gets us to focus again on the problems we have. What I'm hopeful for, what I'm excited about with AI, is that we are in a technical shift that I've never seen before in my 20 years of computer science. I'm grateful to groups like the Berkman Klein Center that introduced me to people like Kate, who have helped me think more broadly about the field. So I think I'm hopeful for two things. One, that who is an AI builder has now expanded. It's not just the computer scientist or AI researcher. It's the person who maybe built the large language model, but also the person who built the interface on top of it so someone can use it, and also the doctor who really deeply understands what it's like to take care of a patient, who can then say, this is what actually is useful in our practice. So I'm hopeful for that connection of domain experts with the actual technology. And the second part is, I think AI itself has made it easier to get into the technology. So you don't need to know the definition of agentic AI to go build an AI agent today. I might have to give you a tutorial on using one of the tools, but you can use Copilot. You maybe can't architect an entire new hospital system, but a doctor can probably create an AI assistant for their hospital system today, just with a little bit of help. So I think we've gotten better at giving access to the tooling, so organizations can build automated systems that are very domain specific and serve their purposes without needing the developer in the corner all the time. You definitely still need developers to build these systems, but I think it's created a level of access that we haven't seen before, which I find really interesting and exciting.
Thank you so much for being here with us, Kate. Always an incredible conversation. I am always so appreciative every time I get to hear Kate talk about anything. I learned so much, and I've learned so much over the years, and I hope you all did as well. Thank you for joining us today for this podcast.