Decoding the AI Revolution: Workday's Partnership with Academia

Earlier this year, Professor Taha Yasseri was appointed the inaugural Workday Professor of Technology and Society at Trinity College Dublin and Technological University Dublin. In this episode of the Workday Podcast, we speak to Professor Yasseri about this new and exciting partnership and how it will help to bridge the gap between technology and society.

Patrick Evenden (PE): In 2023, Workday made a seven-year, €2 million commitment to fund a Chair of Technology and Society, co-hosted by Trinity College Dublin and TU Dublin. In August this year, Professor Taha Yasseri was appointed the Workday Professor and Chair of Technology and Society.

Taha was professor and deputy head of the School of Sociology at University College Dublin and a Geary Fellow at the Geary Institute for Public Policy. Formerly, he was a Senior Research Fellow at the University of Oxford and a Turing Fellow at the Alan Turing Institute for Data Science and Artificial Intelligence. Under his guidance, the new research unit will examine the intersection of technology and society.

I'm Patrick Evenden, and on this episode of the Workday podcast, I'm delighted to be joined by Taha as well as Graham Abell, Workday's Vice President of Software Development Engineering, and Claire Hickie, Workday's EMEA CTO, to talk about Workday's partnership with TU Dublin, how it will help to shape Workday's approach to AI, and technology's impact on human behaviour. 

Taha, thank you so much for joining us today, and of course, Graham and Claire. I didn't mention in the introduction there, Taha, that you are also a very accomplished speaker. You've done a TED Talk in the past and spoken at a number of conferences. So this should be a walk in the park for you, I'm sure. Those talks are also, as well as being incredibly informative, very, very funny. So no pressure, Taha, but if you–

Claire Hickie (CH): We're in good company [laughter].

PE: I guess, starting with you, Taha, so you've spent a lot of your career exploring how people interact online and offline among themselves and with machines, and also technology's impact on human behaviour. You recently spoke about how society is welcoming new members and the digital machines that we interact with on a daily basis. What excites you most about this moment in time, and how do you think that will shape your vision for this role?

Professor Taha Yasseri (TY): Thank you very much, Patrick. Hello, everyone. Hi, Claire. Hi, Graham. Well, thank you very much for having me today. And I'm very much looking forward to having this conversation. Well, what is very interesting in this role for me is that, first of all, it's the first Chair of Technology and Society in Ireland. That's already a good starting point: to be the first in a university with 400 years of legacy, and in a Technological University, which is the newest university in Ireland. So a lot of questions and a lot of responsibilities are already defined when you tell someone you're the first Workday Chair of Technology and Society in Ireland.

But more specifically, because that's the plan for the next few years for my team and me, it is to study the changes that AI, artificial intelligence, is bringing to our societies. This is a very exciting time to study AI. And I have to clarify: even though my background is in theoretical physics and I know how to code a bit, I'm not an engineer. I think there are other people in this room who are much better qualified to talk about AI as a technology. My research is not about how to make AI better, necessarily. It's more focused on how AI is changing us and how AI is changing our societies. We have accomplished so much as a species, as humankind, but we also have quite big challenges in front of us. And the question is how AI could help us tackle these challenges.

And when you look at the history of technology, anytime we had one of these disruptive technologies - electricity, the telegraph, or going further back, fire, the wheel, and so on - our societies have changed, and we had to adapt to a new way of living. And I think AI is going to be one of those massively disruptive technologies. I'm so excited to have a front-row seat to watch how our societies are changing, how we can make the most of these new changes, and how we can avoid going completely wrong with certain things. That has happened in the past as well. Electricity, these days, we all use it. No one has any issues with electricity, I suppose. But for the first 50 years, before it was fully regulated, it killed many people. It set a lot of houses, a lot of buildings on fire, just because people were so excited about the new technology, yet they didn't know exactly how it worked. There was no regulation. There was no control, no oversight. So we don't need to repeat the same mistakes again. And I hope, in my chair, with my team, and in collaboration with people from across the globe, we can contribute to this discourse and help us have a better life alongside artificial intelligence technologies.

PE: You touched on there-- you spoke about the fact that, obviously, historically, you're not from a technology background. You're not necessarily a developer. Would you say your primary area of interest is technology or human behaviour?

TY: I'm a sociologist. I'm interested in humans and their collective behaviour, how societies function, and how we interact with one another. That's the main area of my work. And of course, now, we have these new members in our societies. I don't know how many of the emails we wrote to each other over the past couple of weeks were actually written by us, let's be honest. Some of my replies came directly from the language model in my inbox. [laughter] Sorry for that. So now, we have–

PE: I didn't notice. I honestly didn't notice.

TY: There we go. So we have these new assistants, these new members that are interacting on our behalf, sometimes explicitly flagged as such and sometimes only implicitly. And this is going to change the way us humans can be understood and the way our societies are going to function. And that's exactly what I want to study.

PE: So, Graham, tell us about Workday's partnership with TU Dublin. Why do you think it's important for technology companies like Workday to engage with academia in this way?

Graham Abell (GA): Yeah, so I think, starting off, we've had this partnership with TU Dublin since 2022, across three main focus areas from that initial establishment. Firstly, as you'd expect when working with an education partner, focusing on internal skills for our workforce. So we've really focused on some of the core competencies across our technology teams, whether that's cybersecurity, machine learning, etc., really instilling things that we feel are horizontal at this point and making sure our engineering teams have the latest learning in that space. We've also worked on a lot of the business disciplines, like entrepreneurial leadership, some of our frontline leader enablement, and product management is a new one coming online. So we've been really excited about the impact we've had through that. I think about 500 of our employees are going to go through those programmes, which is about 25% of the workforce here in Ireland.

Unsurprisingly, I'm with Taha here. The second strand was around research and innovation. Our footprint here in Ireland is about 80% R&D, and so we're really primed, and very close to TU Dublin and Trinity, to be able to partner with them on a number of these initiatives. So we're really excited about this new partnership now and seeing where it brings us. As I said, it feels like we're at, or moving through, a pivot point where technology is having a meaningful impact on society. We have about 65 million people across the world using Workday, so we've got a very broad impact. We care a lot about the future of work. We care a lot about skills, and we want to make sure these things are equitable and, to Taha's point, that in our haste to embrace these things, we don't make mistakes along the way. So we want to make sure that we're actively pursuing research that's cross-disciplinary in order to inform what we do inside our product, but also, ultimately, working with public policy to make sure we get the right guardrails in place, because we do think there's going to be risk attached to this.

And then finally, for us in Workday, we've had a long history of community engagement, particularly around education, which again is something TU Dublin are passionate about. So we've been partnering with them on a number of initiatives, moving from where we've historically focused, older kids like teenagers, back into the five-to-six-year-old category. I think we recognise, again, that software is all around them, and we want to make sure that as many kids as possible are getting enthusiastic about STEM as an opportunity and potentially pursuing a career in it. So we're working with a number of our local schools here with TU Dublin to try and spark that passion early, and hopefully, they end up in courses that maybe result in jobs in a place like Workday.

So as I touched on, I think, for us, AI has to be human-centric and aligned with human values. That's part of our core value set as a company. And so, as I said, we're really passionate about trying to get this right. Our history to date with TU Dublin has just been really collaborative. I think, as an entity, they are looking for that industry perspective, and we've been really lucky with a number of engagements so far. We've been able to get some really good results from that partnership and the synergies that are there.

PE: It's mutually beneficial, isn't it? I guess, from the point you were making before, you've got students that are studying but probably don't have hands-on experience of working at a technology company that can partner with Workday and can, I guess, start to understand what their role might be within the technology industry. And also, from Workday's perspective, there's the opportunity there for our employees to upskill and get access to courses and the latest sort of programs.

GA: Yeah. And working in partnership on what we're seeing in terms of industry, what the trends are, and what we think grads will need next year and the year after, how do we tweak things so that they're ready to hit the ground running? And an awful lot of that actually sits outside the core technical skills. There's a lot of focus on, how do we collaborate? How do we research? How do we show empathy to our end users? Which is not just, "Can you code in an IDE?" So yeah. I think, more and more, we're seeing those cross-disciplinary competencies needing to come to the fore.

PE: I guess picking up on Graham's point there about the work that we do in the wider community and giving, I guess, people from different backgrounds the opportunity to pursue careers in the technology industry, how important do you think that is, Taha, in terms of inviting everyone to the party and making sure that the technology industry isn't just peopled by the same groups from society?

TY: That's a very, very good question. I think I can fairly confidently say that AI is perhaps the most mythical topic of our times. And it's not only because of Hollywood, though they definitely contribute to the myth. So some of us are afraid of AI. Some of us love AI. Some of us think it's a solution to every problem we have. Some of us think it's the end of humanity as we know it. So much is going on. And in the tech industry, particularly, we see that the benefits, the advantages of AI are already coming into our pipelines. So what I was really attracted to when I saw the job description, when I was applying for this job, was the community engagement aspect of the role. And I'm not just saying that-- I got the job. I don't need to convince anyone that this is really attractive to me. But this has always been a passion for me: to talk to members of the public, to explain things, to demystify AI. Not because they are necessarily less smart than me or my colleagues or those who work in this industry. It's just that we have different interests, and our attention is divided among many things. And yet, we go and watch a movie that shows AI in a way that is far from reality. So it is part of my job to explain, in a very easy, very accessible way, what's going on. What are the benefits? What are the advantages? And what are the challenges, and what can go wrong? That's one aspect of the job.

And when it comes to policy, I mentioned earlier that it took policymakers 50 years to understand how to regulate electricity. I'm not very optimistic about how quickly they are responding when it comes to regulating AI and AI-related technologies. The EU AI Act is still going through the process this year, and the first version of it has been released. I read it, and I talked to the person chairing the European committee that came up with the act. I see it as part of my job to talk to the people who are developing and using these technologies, and to translate how these things work and what their needs are to the people coming up with the new policies. I would be very happy to create this bridge. And again, I'm very grateful, and I think it was a very important insight that Workday had, to fund this chair to play this role.

PE: I'm sorry to put you on the spot, and don't worry if you don't have an answer to this question, but I guess technology companies quite often are trying-- or often working alongside academia and academics and asking them, "Can you put your name to this white paper? Can you take part in this piece of research?" What convinced you to partner with Workday, in particular, in this way? Why Workday, I guess, is the question.

TY: My background was in computational physics. Quite a few of my friends and colleagues ended up in fintech companies, in the financial sector, and as data scientists in high-tech companies. And I often talk to them about the work, the job, what they do, and the companies, and the salaries. Some of the answers they give me sort of make me think about the career choices I have made, particularly when they talk about the job security they don't have, or the toxic culture in their company and organisation. When I heard about this position, I knew a little bit about Workday. But since I was applying for a job that has Workday in its title, I had to google it. And I asked a few friends I have in Dublin who work for Workday. The very first thing that comes to mind is the culture, the positive culture of Workday, and how it's different to all the other big companies in this area of work. And that was very attractive to me: a company that values the culture, the environment that its employees are embedded in. And then I thought, "Okay, unlike many of my academic friends, I would not be ashamed of the name of the high-tech company that will be in my job title." So I'm very proud to be a Workday Professor of Technology and Society.

PE: Excellent. And I guess that point-- I guess Workday's culture dovetails quite nicely with a lot of the research that you've done in the past in terms of the importance of community and things like that, which we'll come on to talk about in a little bit. Claire, Taha was speaking earlier about how AI is likely to impact society, how it's likely to impact the workplace, the hype around AI, things like that. You're regularly speaking to business leaders. You have many conversations with Workday customers. What do you think their biggest questions or concerns are when it comes to AI's impact on the workplace? And maybe actually, to rephrase that, what do you think they're most excited about when it comes to how AI can transform their organisations?

CH: So I'm going to try to answer this in two parts: one is the excitement part, and the other is what's top of mind for them at the moment. And actually, you led me into the top-of-mind one really nicely. Top of mind, certainly for the past 12 months, and I've seen it really escalate this year, is trust. And you mentioned this right at the very beginning. Through those evolutions of technology, and you even talked about electricity, there's that fear factor. You get the hype, and then you have the fear factor. So Workday is not jumping on this hype, to start with.

So trust is foundational to anyone being able to adopt and adapt to this phenomenal technology, which of course it is. And in order for that to happen, it's really important that organisations like Workday, in particular, have already put in place a responsible AI framework. And it's not just a framework, actually. It's made up of four pillars. The first one is about principles, and this is, again, leaning into what Taha said. So when you look at it from a principles perspective, the first is about amplifying human potential. And the second one is around having that positive impact on society, and Taha really picked up on that. Your role, in terms of that experience and those insights you bring from a societal perspective, is absolutely going to be key for organisations, and especially here at Workday.

The other point about that is ensuring that we can champion transparency and fairness. People want to have this explained. They need it to be transparent. And then the final part is delivering on privacy and, of course, the data protection commitments. That's always been 101 for us. Workday, as we say, was born in 2005. And of course, everything that we do around data and security is part of our framework and foundational in terms of how we practice.

Now, talking about practices, there's no point in talking and having principles if you don't start to put them into practice. So what we do is, for every capability and concept that is brought to us, we use a risk-based framework. It's the NIST-based framework that we actually use. So everything that we do goes through a full evaluation cycle, from assessments all the way through surveys to questionnaires. And then it follows the whole systems development life cycle all the way in. And then we make the risk determination after that, right? I think there's often that scenario where we say, "What's more important to you? Is it how AI reads scans of your spinal cord because there may actually be an issue? Or is it the next recommendation that Netflix is going to make?" So when you look at a risk-based approach, that is exactly what we mean by it.

The other factor here is from a people perspective. Principles and practices are phenomenal and fantastic to have in place, but you need people, and it's people, actually, like Taha. I'm really excited to work with you on this here at Workday. So we've got a whole responsible AI team, and they're part of the guidelines and the guardrails, and they really watch for us in terms of what we're doing from a development perspective. But not only that, we've got an advisory council, which includes every C-suite member we have in Workday, and they take it incredibly seriously. And then, scattered throughout the entire organisation, we have the responsible AI champions. And then I'm going to take it one step further. I have got no doubt, Graham, that for you and your entire team, and anyone you see working on Workday, this is top of mind. And you have to have that, because you need diversity when you're doing anything. And it's diversity of voice. It's diversity in terms of backgrounds. It's diversity of experience, and it's diversity in terms of foresight, in terms of looking at absolutely everything we do.

And then the final part of it is around policy, and you did make me smile when you spoke about the EU policy. So we've been working with the policymakers for years. And the reason we've been working with them is because this is not new for us. We did not wake up 6 months ago, 12 months ago, 18 months ago, like many others did. We have been developing from an AI perspective for nearly a decade now, so we've got a vast amount of experience in the space. From a Workday perspective, we've been working with policymakers around the world, really helping them from an experience perspective and a practical perspective with what's responsible and what policy starts to look like. So we are actually working with them arm-in-arm from an EU perspective, from the US, down to APJ, to the UK, in terms of where they're going with their policies and their acts at the moment. So it's all-encompassing.

Then you packed a second part into that question, Patrick, and it was around benefits, right, or what's exciting people. And genuinely, I think the excitement comes from-- I want to say fruition, maybe, is the word. It's almost what we begin to experience, right? The way I always put it from an AI perspective is: I didn't get into the car this morning to drive to Dublin and say, "I'm just going to turn that AI thing on to get here." I didn't. I got into the car, and I turned on Google Maps, actually. And my husband and I always have this conversation, whether it's Waze or Google. So it's part of my workflow. It's part of my personal life in terms of how I actually operate. And I think that people are getting at ease with that, especially on the back of what they're depending on and have actually adapted and adopted in their personal lives.

And then, from a work perspective, I've heard the conversation. I get asked a lot, "Is this going to take my job away?" which is why we talk about trust. It's why we look at society. It's why we say that we need to have a positive impact. Because it's those elements that you bring into the workplace. When we talk about human potential, and Graham, you spoke a lot about benefits and skills, people want their roles to be augmented to some degree because it makes things easier. What organisations and people can then do is get time back to go and do what they need to do from a strategic perspective. So I'm actually quite excited by some of these elements. There is a cautionary tale. We know this in terms of where it absolutely makes sense, which is why, as you said, Workday didn't get on the hype, and Workday is not going to get on the hype.

PE: I guess thinking about - and this is maybe more of an open question to the group - thinking about artificial intelligence, Claire, you mentioned there that, obviously, from a Workday perspective, it's something we've been working on or working with for 10, 15 years-plus. Why do you think that it's pushed itself to the top of the public consciousness now? And where do you think, I guess, the average person in the street is in terms of their understanding of it, their openness to it?

GA: I think ChatGPT was a big eye-opener for a lot of people. It suddenly became very interactive, and some of the kind of demos and examples of that seemed, to Taha's point, something out of Hollywood, right? It felt like, suddenly, it was-- that our future is here.

PE: It had gone beyond the conceptual.

GA: Yeah, totally. I'm surprised by the level of uptake in my social circle-- my parents and their friends are using ChatGPT to do stuff, and I can barely get them to use an iPhone. So it's funny that it's become so accessible and they're doing things on it. So I feel like that really was a massive pivot point, around just how amazing some of the demos were. And it's fully gone into the hype cycle, for sure, on the back of that.

CH: I think that's when it became a household name. And it's really interesting. We had a bit of a family do during the weekend, and I've got a going-on-91-year-old mother. I was actually asked at breakfast yesterday morning, "What about that AI thing?" [laughter] And she knew about ChatGPT. So that's where it became household. Every generation is now talking about AI. And, agreeing with Graham, I think there was a pivotal point about 18 months ago when it became a common name, a household name. And I think that, as I said, the experience comes from organisations who had been building from an AI-first perspective for many, many years prior to that. But for some reason, organisations and people have kind of bypassed a lot of the traditional and highly complex AI, and they've gone straight to gen AI. And I think that's where the kind of misnomer is, in terms of how everyone's just talking about AI.

PE: And like you said, Taha, I think, at that initial point, the conversation sort of went to, "Oh my goodness, what does it mean?" There was almost fear-mongering around AI and its possibilities. Where do you think people are now, sort of 18 months on from that? Do you think people have moved past that and are more open to the way that AI can transform the way that people work?

TY: First, let me say something about the previous question, where we are. And I have a little bit of a detour. In one of my previous workplaces, we were in charge of surveying people about their use of the internet and web technologies, okay? And we started to notice, despite all our predictions, that the rate of internet and web use among younger generations was declining. And we couldn't understand that, because everyone was on the phone. Then we had to interview a few respondents: "Well, you said you are not using the internet. Why is that?" And they said, "Well, I just don't use the internet." And then we said, "But you have a smartphone. You have Facebook. You have Snapchat on it." They said, "Yeah, I use Snapchat, but I'm not using the internet." Okay. So basically, the technology had gone into the background so that people didn't even notice it anymore.

So I think, with AI, we are not there yet. And I agree with what was said earlier, that ChatGPT and OpenAI products brought a lot of attention to gen AI and, particularly, large language models. But there will be a huge number of innovations in the coming years in other areas of artificial intelligence that will be equally or even more important and more disruptive to our societies. And this is going to continue the same way it has with internet-based technologies: we are still getting newer and newer products, newer technologies, and so on. So I don't think we can at some point say, "Okay, AI is now done, and we have moved on to new things." I think we will have more sophisticated and more complex products and applications and use cases. And again, back to the initial point I was trying to make, these are all very important and rewarding. Yet it's important to also understand how they're going to change us individually, as part of a group, as part of an organisation, and as a society as a whole.

PE: Perfect. That's the perfect segue into the next question as well, Taha. So one of the lessons that really stands out from your previous research is the importance of collaboration and the idea that community matters. We spoke earlier a little bit about the importance of diversity and involving the broadest possible range of people when it comes to developing emerging technologies like artificial intelligence. How can society ensure that the age of AI embodies that spirit of collaboration and community?

TY: Again, I want to go back a few minutes to when Claire was talking about transparency as a principle or, for example, fairness as another principle that we want to have in the way we manage our organisations. We don't want to discriminate against our employees, and so on. These are the principles we want to aspire to, and they are core principles in the way our societies work. Yet our research and my colleagues' research has shown, for instance, that when individuals are collaborating with AI, the collaboration is more beneficial. People actually cheat less in a, let's say, game-theory scenario when their partner is a machine; however, only as long as they don't know that they're playing with an AI agent. The moment we tell them, "Your partner is actually a machine," they cheat more frequently, even though that partner is playing more nicely than a human partner would, because of the certain way that our brain and our mind process this social interaction.

So this is a dilemma, because, on the one hand, we want to have transparency. We want to let people know their partners, their pals, are actually algorithms. But it's counterproductive, because when we do so, the benefits of using machines, which are, in many aspects, better than humans in that particular scenario, go away. So this is a huge, huge question to ask now. And it depends, again, on the use case, on the organisation, on the specific application. But the research has been crucial for us to understand: okay, there is the principle of transparency, yet in practice, things might go differently. So that's why I think it's very important, when it comes to policymaking and decision-making in our organisations, that we have the principle and practice, experience and research, all at the same time.

Fairness - I want to use this to talk a little bit about some of our work. We conducted some experiments in which a group of humans had to solve a puzzle together, had to work together, and then we had a manager who watched them and picked the best player. You can think of it in an organisation: someone needs to be promoted, and how do you choose that person? So we came up with two situations. In one, the decision was made by a human; in the other, the decision was made by AI. And those individuals who were not selected, who were not picked, basically who were not promoted, were outraged, naturally. And by the way, behind the scenes, we had the same decision-making process; it was just how we communicated it to participants. But those who were not promoted, if they were told that their manager was an AI, were much more outraged.

PE: Oh, really?

TY: So the lack of fairness, in their point of view, was much more difficult to tolerate when it came from a machine, which might be a good thing. Yet, when we gave these machines a gender, people were even more outraged if a female AI manager did not pick them. So the sort of misogynistic attitudes that we have in our human-only societies could basically be generalised to our hybrid societies. Even though, technically, we know that AI does not necessarily have a gender, or doesn't identify with a gender the way that we humans do, who among us does not think of AI as a friend with a specific gender? Siri has a female voice. There have been surveys: 70% of people think of ChatGPT as a male assistant, and so on and so forth. Again, lots of moving parts here, lots of big questions. My point is that our behaviour changes depending on these features, these design features. So back to your question, what can we do to make sure that our communities are not disrupted and collaboration still goes on? That's the keyword here: the design of these products. And that design has to be informed and shaped by research.

PE: Do you think it's the role of technology companies to try and steer away from that and shape things in a, I guess, more equitable direction?

TY: To be fair, I don't think tech companies have a moral responsibility to fix the problems that we have had throughout our history. That would be a bit too much to ask, and I want to stay realistic here. But it is important, and I think it's everyone's responsibility, including tech companies, to create awareness and to give individuals, citizens, ordinary people, unlike you and me, the choice. Imagine - well, we are not there yet, but - in a few years' time, we will have a market of self-driving cars which work based on different algorithms, and different moral judgments are hardwired into these algorithms. I'm driving next to a lorry, and on the other side, there is a cyclist. If I drive closer to the lorry, I put myself in danger. If I drive closer to the bike, I put the cyclist in danger. Where should I be driving? This is a very practical question. It is important for me as a customer, as a consumer of this technology, to know how these decisions are going to be made. And it's important for the policymaker not to let people pay more to buy a more selfish car that puts the passenger above the cyclist, let's say.

Okay, this is one example, but there are so many different scenarios in which the policymakers have to know what's going on to be able to regulate it, and the customers have to know what's going on so that they can make the choice and make the decisions. And again, I don't want to repeat myself, but all this knowledge can only be created through research and through practice. And that's also very important. I cannot sit in my lab and do research on unrealistic scenarios. I have to go and knock on these companies' doors and ask, "What is the problem you have today? What new technologies are you developing and using today? Can I study it, please?"

PE: Excellent, excellent. And to that point, Clare, how do you think Workday's customers stand to benefit from the collaboration between Workday and TU Dublin?

CH: Well, I've benefited so much in the past 30 minutes or more, sitting in this room with Graham and really listening. The energy that Taha has brought, and the experience and the insights, are quite phenomenal. Research at that intersection of technology and society is really starting to provide fresh perspectives and opportunities. Workday has always been a collaborative company. We already collaborate with customers, with vendors, with partners. This is the icing on the cake: bringing in a research organisation with these perspectives focused on society. And again, it takes me back to my earlier answer: it's providing for a positive impact, not a disruptive impact. So I think this is a phenomenal collaboration, and our customers are going to benefit hugely from it. It boils back down to what you asked me, whatever, 20 minutes ago about trust. This is where the trust starts to come from, because it's really dialling into how technology and society are going to evolve collaboratively, from a societal perspective, into the next evolution that the world has got to bring. Thank you.

PE: I can also foresee a situation, Clare, where customers are saying to you, "Clare, we listened to the podcast. We'd love to get 15 minutes with Taha just to pick his brain." Do you think you can make that happen? 

Brilliant. Clare, Graham, and Taha, thank you so much for joining us today. I'm afraid that's all we have time for, but if you enjoyed the show, you can subscribe on Spotify, Apple Podcasts, and SoundCloud. You can also read more on the Workday blog. Thank you for listening, and have a great workday.

 

Audio also available on Apple Podcasts and Spotify.  

The rise of artificial intelligence (AI) has made it more important than ever to examine the impact of technology on society. Workday is taking a proactive approach by funding the first-ever Chair of Technology and Society in Ireland, held by Professor Taha Yasseri at Trinity College Dublin and Technological University Dublin. This partnership signals Workday's commitment to understanding and shaping the future of AI in a responsible and human-centric way.

Intrigued? So were we! We had the opportunity to sit down with Professor Yasseri, along with Workday's Clare Hickie and Graham Abell, to discuss this exciting collaboration and delve into the fascinating world of AI. Here's what we learned:

1. AI: Separating Fact from Fiction

Hollywood loves to portray AI as either our savior or our downfall, leaving many feeling confused and apprehensive. In his new role, Professor Yasseri aims to cut through the noise:

"AI is perhaps the most mythical topic of our times... It's part of my job to, in a very easy, very accessible way, explain what's going on. What are the benefits? What are the advantages? And what are the challenges, and what can go wrong?"

Only by demystifying AI and making it more understandable, Professor Yasseri believes, can we have informed discussions about its role in society.

2. Responsible AI: The Foundation of Trust

As AI becomes increasingly integrated into our lives, trust is everything. Workday recognises this and has built a robust responsible AI framework based on four key pillars:

  • Principles: Prioritising human well-being, societal benefit, transparency, and fairness.
  • Practices: Implementing risk-based frameworks, diverse teams, and ongoing evaluation.
  • Policy: Collaborating with policymakers to ensure ethical development and use.
  • Privacy: Protecting data and individual privacy.

Clare Hickie, EMEA CTO at Workday, emphasises the importance of this framework:

"Trust is foundational to anyone being able to adopt and adapt to this phenomenal technology... it's really important that organisations like Workday have already put in place a responsible AI framework."

3. The Surprising Impact of AI on Collaboration

Did you know that people are less likely to cheat when collaborating with an AI agent they believe to be human? Professor Yasseri's past research reveals how AI can influence human behavior:

"People cheat less in a game-theory scenario when their partner is a machine; however, as long as they don't know that they're playing with an AI agent. The moment that we tell them that, 'Your partner is actually a machine,' they cheat more frequently because of the certain way that our brain and our mind process this social interaction."

This raises important questions about transparency and the delicate balance between revealing the nature of AI and maximizing its benefits in collaborative settings.

4. The Ever-Evolving Perception of AI

The conversation around AI is constantly shifting. While the initial hype and fear have subsided, we're entering a new era of more sophisticated and complex AI applications. Professor Yasseri highlights the importance of staying ahead of the curve:

"I think we will have more sophisticated and more complex products and applications and use cases... it's important to also understand how they're going to change us individually, as a part of a group, as part of an organisation, and as a society in its whole."

5. Academia and Industry: A Powerful Partnership

Workday's collaboration with academia underlines the value of bridging the gap between research and real-world applications. Professor Yasseri explains:

"I cannot sit in my lab and do research on unrealistic scenarios. I have to go and knock on companies' doors and ask, 'What is the problem you have today? What new technologies are you developing and using today? Can I study it, please?'"

By working together, academia and industry can drive innovation, ensure responsible AI development, and ultimately create a better future of work for everyone.

 

Become part of the conversation! Register now to join us at Workday Rising EMEA this December where you can find out how AI is shaping the future of work.  

More Reading