Defining and Deploying Responsible Technology (Cloud Next '19)



[MUSIC PLAYING] NISHA SHARMA: Hello, and good morning Thank you for joining us today I’m Nisha Sharma I’m a managing director in Accenture I’ve been at Accenture for just over 20 years and worked on all sorts of different technology implementation projects for clients across several different industries I am currently part of our Accenture Google Cloud business group, which we formally launched last year at Next And I oversee all of our offerings and offering development for the partnership And I’m based out of Miami, Florida DEB SANTIAGO: Hi So I’m Deb Santiago, and I’m based in Chicago I am the co-lead of our responsible AI practice at Accenture, as well as a managing director in the legal department So the way that it’s going to work today is Nisha’s going to go and walk through our TechVision document, and I’m going to respond, provide some, I guess, some thoughts with respect to what we’re anticipating for the next five years and what can we learn from the past five years that can help us navigate and manage our entry into these new worlds NISHA SHARMA: All right So every year, Accenture publishes a technology vision This is our view of the top trends that we are seeing impacting our businesses today We talk about technology trends, as well as business trends and how the use of technology is impacting what our clients and what businesses are doing We’ve been publishing the TechVision every year for the past 19 years And we just recently published this year’s 2019 TechVision in February So we thought we’d anchor this conversation around the responsible use of technology and the considerations of ethics around the future trends, and what we see, and where we’re going, because this gives us an opportunity to consider these as we deploy our technology solutions So I’d like to start by asking you two questions Just think about these questions First question is, is technology good or bad? 
Think about all the different technology that you’ve come across, all the technology that you hear about Is it good, or is it bad? And the second question I would ask you is, what is your role? What is our role? What is our company’s roles in influencing and determining whether technology is good or bad? So let’s take a look at some examples to kind of set the context for our discussion here today Technology can be very scary to people There’s a lot of negative news around technology and advancements in technology Who should be trusted with this technology? Who shouldn’t be trusted with this technology? Is it ethical? Is it not ethical? It can be quite controversial Even though we think there’s some really advanced implementations and advancements that we’re seeing in technology, we’re still questioning the ethics and the responsible use of that technology So there’s some examples here just to kind of show what we’re actually seeing in the news So over three billion identities were stolen online last year We’re questioning the geopolitical forces and the influences that we’re seeing in today’s world Which countries do we trust with our technologies? Which countries are we willing to share our technologies with? Lots of questions around who we trust again and who we don’t Fake news is in the news every single day There’s a survey that says that the majority of people around the world don’t think that they have a single source of truth for news these days I just came back from a vacation in India, and they’re getting ready for their elections And there’s a lot of concern around fake news being spread and really influencing and affecting the elections, even after what we saw in our own elections here in the US We’re seeing questions around the use of technologies, like facial recognition technologies Again, we’re constantly scanning everybody all the time, and we’re identifying who these people are and who they aren’t Who do we trust with this? 
Who do we trust to see and learn from this information? The bottom left there is about a researcher in China who was using gene editing tools like CRISPR to modify human embryos It sounds really advanced, but is that ethical? There are a lot of questions around this We hear stories all the time about bugs, or software gone wrong, or something that we’ve discovered in our applications, like the recent FaceTime issue where you’d make a call through FaceTime, but the camera turned on before the receiver actually had an opportunity to answer

We’ve got apps that are constantly requesting access to our photos, to our contacts, to our call logs and text messages So we’re just constantly asking the questions of, what’s happening with my privacy and my data? We’re talking about inequality and inclusion– many, many questions And then we’re seeing this tech clash against all the technology companies, whether they’re in the Silicon Valley, or whether they’re in China But they’re constantly being asked, what are you doing with my information? What are you doing with this data? How are you using it? What are we really doing? But there are also some really amazing things that are happening– a lot of positive news around technology So there’s an example here of work we’re doing in a program called ID2020, which is about us helping refugees all around the world establish their digital identities so that they can get access to services that they might not normally have had access to, because they can’t demonstrate or prove who they are We’re seeing advances in the agriculture industry where we’re using technologies like drones or IoT sensors and such to improve the quality of the food that we’re producing, and to help reduce food waste, and so on Advances in health care and medical services– this is one of many, many examples, but using IoT devices and technologies to provide seniors and disabled people with new ways of getting exercise Some fascinating advances in robotics– we’re teaching robots how to do back flips and to dance better So we’re not going to be doing the robot anymore They’re dancing like us, right? 
Pretty cool stuff And then the other example there with rockets– so traditionally, we’ve launched rockets up into the air, but they’re kind of like these one-time uses of these rocket technologies But companies like SpaceX, for example, are figuring out how to launch those rockets safely back down to Earth so we can reuse them And so, some really fascinating and really positive uses of technology out there as well How many of you have heard about the 10 Year Challenge that was recently all over social media? It was where we were basically taking a picture of us 10 years ago and comparing it to a picture of us now to show how much we’ve changed Well, Bill Gates posted the world’s 10 Year Challenge And this was showing how social factors and social things have changed over the past 10 years– life expectancy, extreme poverty, child mortality, youth illiteracy We’ve seen really, really positive improvements over the past 10 years And technology has definitely had a role in all of that So technology can be good or bad And that’s based on how it’s used And it’s really up to us to determine whether it’s good or bad DEB SANTIAGO: And the thing is, change is constant This isn’t as though we didn’t have technology changes in the past, but there’s something very different right now about the pace, the scale, and the velocity of change that we’re experiencing Many of you have seen that S-curve slide around technology adoption in the US over the last 100 years One of the often-cited examples in that slide is that it took about 45 years or so for the telephone to achieve mainstream adoption Whereas, smartphones only took about 10 years And before, societies had time to kind of absorb the impact of technological advances, but we don’t have that luxury right now Because we are living in this time of rapid technology adoption, regulators are playing catch up But they’ve also indicated that in the next five years they will be catching up with us all
And I think that what we’re seeing is, in this gap, that the society at large is responding We’ve got enormous public scrutiny, and real-time feedback, and in some respects forced transparency of the activities that companies are doing today And as new tech like AI becomes mainstream, it’s important to consider the ethical implications right at the beginning as a central aspect of how we develop and deploy new technologies As recent events have shown, it’s really hard to get right And the thing is, technology alone cannot deliver on the full promise of new technologies like AI An important conversation is going on right now with respect to external advisory councils, et cetera And it’s facilitated by the community at large

And I would say that it’s helping the tech world understand how the public views the way that companies should be interacting with us, engaging with us, and what they expect companies to represent and what communities are expected to represent But in the end, everything should not hang on one company, one person, one team, one regulation, one government Rather, companies should be creating resilient and sustainable governance strategies that can help them act in an agile way to deploy technology responsibly NISHA SHARMA: So let’s take a look at the technology vision, this year’s technology vision What it’s all about is this post-digital era And we are now entering this post-digital era In 2018, companies spent $1.1 trillion on their digital transformation projects 94% of organizations say they’re doing digital transformation work today I don’t know if I believe that, but that’s what they said that they’re doing 58% of those organizations say that they’re comfortable with where their digital transformation programs are going So everyone says they’re doing digital, right? And we need to figure out how to differentiate We need to figure out what’s next And in this post-digital world, we’re not saying that digital is done, not by any means In fact, it’s just table stakes now It’s required to be there It’s just the cost of doing business And at some point, we’re going to say that we’re going to stop using the word digital even I like how our CTO has said that we don’t say we’re in a post-electricity world anymore, or post-internet world anymore They’re just there, right?
And so very soon, we’re going to see that digital is just like that, as well So now we’ve got a new set of characteristics that are defining what it means to be successful in this post-digital era So we’re going to just touch briefly on what some of these characteristics are And the first one is individualization Now, individualization is not personalization We’re talking about hyper-personalization, right? It’s not just about knowing what your preferences are, and what you like, and those kinds of things, but it’s about really understanding what you want and what you need Instant, on-demand is about being able to respond and deliver services to a customer exactly when they want it They want it now, and we have to be able to deliver that And then momentary markets are these pop-up-like services where we are able to respond to a customer’s needs at exactly that particular moment in time And these are, again, these markets that form very quickly They provide an opportunity right then and there, but then they go away very quickly, as well And if you can’t serve a customer’s needs exactly when they want them, then you’ve lost that opportunity to engage with them So now you can imagine all the information that’s being captured in order to provide these types of services And customers expect trust and responsibility They expect that all the data that they’re putting out there is being used responsibly, and they need to be able to trust that you’re doing that So I’d like to do a little experiment with all of you So take out your phone I know you all have phones, so take out your phone Everybody got their phones? You got your phone? Now unlock your phone, OK? It’s unlocked, right? Everyone’s phones unlocked? Now hand it over to the person next to you [LAUGHTER] Go ahead Hand it over, and do something on the phone, right? Where’s your camera here?
DEB SANTIAGO: I don’t know where your camera is NISHA SHARMA: OK, open my camera DEB SANTIAGO: I’m going to send an email NISHA SHARMA: Let’s take a picture Here, I’m going to take a picture Where’s your selfie mode? [LAUGHS] All right Now, give the phone back Now all of you laughed All of you paused to think about, oh, my god, I’m giving over this phone to a person who I may or may not know Think about all the things that are on– DEB SANTIAGO: And we’re friends NISHA SHARMA: Yeah, and we’re friends DEB SANTIAGO: And we’re still– NISHA SHARMA: I know Deb, but do I really trust her– or do I really want her to see all the messages that I’m exchanging with my sisters on WhatsApp, or all the photos that I’ve taken and that are stored on my phone? There’s access to my banking applications There’s access to my emails There’s just so much information that I know is on this phone And we’re all pausing and hesitating to exchange those phones, but we do this every day with data that is online and services that we use

We just publish things We have companies that have access to these things And we don’t even think about it, right? We just trust that they are using this information responsibly It’s quite fascinating DEB SANTIAGO: Yeah So as part of my responsible AI work, I had a client reach out to me and say, have you ever done anything with robots, as an employee? And I’m like, where is this question coming from? And they explained that they had taken a robot and introduced the robot to the workplace And so the robot, it had eyes and was roaming the work floor And they thought it was going to be enjoyable, but actually it completely backfired People got very angry And one of the things that they said was, this thing is recording me, and it’s invading my privacy And as we were kind of brainstorming about this, we were just pointing out, I’m sure you guys have CCTVs all over the place recording your employees’ activities all the time It was just that reducing it to a physical form makes it come to life, where people really understand what is actually happening So over here, you’ll see these are the principles that we use internally in terms of our use of artificial intelligence at Accenture, whether it’s from an HR perspective, a CIO perspective These are the principles that we’ve been using They are based on the common principles of FATE– so that’s fairness, accountability, transparency, and explainability So I’m not going to go into it You can see here it stands for trustworthy, reliable, understandable, secure, and teachable The one thing that I wanted to highlight is the item around understandable And we were very intentional about taking it a step further from transparency Why is that?
Well, privacy policies are very transparent– pages and pages of transparency Nobody ever reads them And people don’t really understand them You can’t say that these privacy policies establish trust with the end user So we really wanted to think about, how do we stop putting the burden on the user to decipher what’s going on? How do we take responsibility for our actions and make the things that we’re creating understandable to the user? Because when we prioritize understandability, we’re able to allow the user to understand the import of the actions and to enable trust quickly And this will be a very important point as we talk about momentary markets later on in this discussion NISHA SHARMA: OK So we talked about these characteristics of the post-digital business Now let’s take a look at the five trends that we’ve established as part of our technology vision for the post-digital era And then we’re going to discuss how we can apply those principles of responsible use and ethics that Deb just talked about So the first trend is what we call the DARQ Power And this is about the technologies that we are now seeing companies use in this post-digital era The D stands for Distributed Ledgers And this is technologies like blockchain and cryptocurrency that we’re using to have these more secure, and protected, and trusted transactions So we’re seeing examples of companies using technologies like blockchain in their supply chains to make sure that they’re establishing that trusted set of supply chain activities We’re seeing car makers use these technologies as a way to protect cars from getting hacked We’re seeing delivery service companies, like DHL for example, who are looking at blockchain technologies to make sure that you are receiving, for example, medication or drugs that you had ordered, and they’re not counterfeit, and that they’re really the ones that you had asked for The A is for artificial intelligence, right?
67% of businesses say that artificial intelligence is being piloted or adopted at their organizations today And 41% of executives ranked artificial intelligence as the one technology that they expected to have the most impact in their organizations over the next three years And you’ll see all sorts of great examples of artificial intelligence, hopefully, here at the conference, as well, as you walk around and see what other companies are doing The R stands for extended reality This is augmented reality, virtual reality, mixed realities, assisted reality, so on and so on These are all about new experiences We’re looking at new experiences in terms of new ways of training We can use virtual reality to simulate training environments
or simulate actual environments so that workers can have a safe place to try, and to learn, and to test out new capabilities– new ways of shopping, new ways of interacting with products or gaining information in a store environment, perhaps, or things like that, and new ways of just exploring places– we can virtually visit a national park, or a museum, or any other such facility like that The Q is for quantum computing Quantum computing has brought about so many new advances and is really allowing businesses to explore new ways of solving very difficult problems, or perhaps problems that they weren’t able to solve before We are working with a company called 1QBit And we had collaborated to identify over 150 different use cases for quantum computing to help us solve problems in, for example, drug discovery– we can use quantum to quickly discover new types of drugs– or fraud detection, or route optimization, and so on There are many, many examples of how quantum can help Now individually, each of these technologies provides an opportunity to differentiate And we’ve already seen that 89% of businesses say that they’re already experimenting with one or more of these DARQ technologies But then imagine that, together, these are going to open some really new pathways and some unimagined pathways into the future Now, this shift to DARQ technologies is built on the digital foundation that companies have invested in and been investing in over the past several years Have you heard of the term SMAC before? SMAC is what is traditionally called the digital technologies That’s social, mobile, analytics, and cloud, right?
And these are considered to be the foundation of our digital solutions And we’re building upon those now as we move to the DARQ technologies So as an example, augmented reality and virtual reality solutions run on mobile devices The big data and the analytics capabilities that we have been setting up over the past several years, well, now we’re extending those to artificial intelligence and machine learning Those quantum computing services that are now available are being made available through the cloud So we’re building on top of what we had already been establishing And this now provides a foundation for companies to start to explore these DARQ technologies, which are really still in the early stages So not only does it give us a head start, but it allows us to create new value from these previous investments and to extend our digital business into the future DEB SANTIAGO: So we see the enormous promise that extended reality holds So for example, training, as you mentioned, can really increase the ability for people to empathize with the plight of others A really great example– and I encourage you to go look for this on YouTube– is when the NGO People for the Ethical Treatment of Animals decided to change their communication campaign Actually, Accenture helped them with this But they used extended reality technologies to help people to engage with a rabbit eye-to-eye It’s called The Eye-To-Eye Experiment And it really had a profound impact on the public in terms of understanding and helping animal rights come to life But when I think about the promise, I also think about what happens whenever there’s a huge institutional shift like going from SMAC to DARQ What are the things that we need to protect against? And what are the things that we need to maintain? So what shifts are we seeing?
We’re seeing a shift that’s going from watching a video on your phone to having these very intense, immersive experiences And when you have these intense, immersive experiences, it becomes very easy to come to a “truth for me” conclusion that can seem, at times, unshakable and permanent And we’re shifting from data just being collected to data that can possibly be used for manipulation There’s a senator in Virginia who, just today, introduced a bill that proposes banning the use of dark patterns on online platforms to prevent manipulative practices where people volunteer information about themselves without realizing how that information is going
to be used So we’re shifting from data collection to possible data collection for manipulative experiences And we think about the regulatory scrutiny that is coming with the combination of extended realities and, for example, deepfake technologies Will the public start asking for guardrails not just on fake news? This morning, I counted There are about 13 countries right now that are either proposing or have on their books legislation regarding fake news But at some point, will the public also start asking for guardrails around not just online safety for children, but also the creation of fake memories NISHA SHARMA: OK, our second trend is called, Get To Know Me And this is all about the consumer How do you reach the individual consumer? How do you provide the right services to those consumers? It’s all about how we engage and interact And you can think about all the SMAC technologies that we said we’ve been deploying– social, the mobile, and so on And we’ve collected so much information about each of us and all the users who are using these technologies So you may have seen some of those charts that talk about what happens on the internet in a minute, in 60 seconds Well, 3.7 million Google searches are done every minute 4.3 million YouTube videos are viewed every single minute 2.4 million snaps, 38 million WhatsApp messages exchanged every minute And 187 million emails are sent every minute So you can imagine now all of these interactions, all of these activities, they all say something about us They all provide information about us And companies are able to use that type of data to provide new types of services to individuals So as an example, there’s a company called SlicePay They’re a financial services company in India And they serve unbanked customers So these are customers who don’t traditionally use banking services or anything like that But what they did was that they were able to use the pictures that people were posting online, or the messages that
they were sending, or their other social interactions, and create a financial profile of these individuals And now they can provide services or offer services to these individuals using this profile that they were able to build, based on that information So keep in mind that customers are making this information available to us, to our businesses And they’re relying on us and the businesses to use that information responsibly DEB SANTIAGO: So again, when I think about the promise of individualization, it is the promise of instilling a sense that you are not only known and recognized, but also understood And so we delight whenever a recommendation comes, and it’s correct We delight whenever it’s thoughtful or insightful But when data is being scraped or being used in ways that the user didn’t originally intend, as a society, we should pause Are we creating systems that are learning to penalize people based on activities and data that they’re providing that they never intended it to be used in this way? Are we making inferences on users that are just not reasonable? 
Again, we’re seeing increasing scrutiny in this space So the state of New York, the Department of Financial Services, in January, just introduced a set of guidelines to insurance companies based in New York saying that, if you’re going to use non-traditional data sources, like social media posts, here are the guiding principles that should apply But importantly, you need to be able to show– the burden is on insurance companies to be able to show that you are not using this data in a discriminatory way– so against protected classes of individuals, race, gender, et cetera And by the way, just removing race and gender doesn’t eliminate that problem There are indirect ways of discrimination using proxy variables, like zip codes, that are going to get caught up into that– so how companies should anticipate and think through using those non-traditional sources of data The city of Los Angeles recently filed a lawsuit against the Weather Channel claiming that the Weather Channel had collected user location data on users who did not know that it was going to be sold to advertisers And the state of Utah recently enacted a law that requires police to have search warrants before they can
use and collect any kind of electronic personal data, like social media posts People are worried about everyday behaviors being penalized and criminalized And they are especially concerned when the data sets by which these inferences, or conclusions, or recommendations are being made are either false, faulty, or simply incorrect And I’m not even going to go into the whole bias discussion We can spend like a whole hour just talking about how some of these systems are built with bias already embedded in them NISHA SHARMA: Our third trend is called Human+ Worker This is about the workforce Our research shows that more than 90% of jobs that we have today will change as a result of artificial intelligence, robotics, and other technologies Jobs are changing fast Each individual worker is empowered by not only their skills and their knowledge, but also by the new capabilities that technologies are providing As some examples here– we have an oil and gas company whose workers are now able to troubleshoot an issue a mile underground using game-like visualization tools We have workers that are being trained as drone delivery pilots These are jobs that didn’t exist before And we have factories where humans are working side by side with robots And we’re using artificial intelligence tools to help determine which jobs the humans should be doing and which jobs the robots should be doing So the workforce is evolving And companies do need to keep up to support these human+ workers And there are three areas of focus that we have been exploring and working on improving And one of those is around hiring The speed and the constantly changing nature of these human+ careers are making it harder for businesses to acquire this talent within their workforce So companies are being forced to move away from the more traditional reactive, skills-based hiring You don’t just put out an ad for, I need someone who has six-plus years in accounting skills, or
something like that, right? So we’ve got companies like Unilever, as an example, who are now using games to screen candidates And they’re using these games to assess your memory They’re using these games to understand your acceptance of risk They’re understanding whether you take more contextual cues or emotional cues And so it’s a very different way of screening candidates And then they’re using, again, artificial intelligence tools to match these candidates with open roles So just new ways of hiring It’s also training So we can’t necessarily hire people to do the work that we require them to do Now people just don’t have those skills And what we’ve seen is that 43% of business and IT executives say that more than 60% of their workforce will move into new roles over the next three years And that’s going to require substantial reskilling, right? We need to invest in reskilling and offering on-demand training opportunities for these employees And the third area is around knowledge management We have more information available to us than ever before, but it’s also harder for us to access and to find this information And we’re exploring new ways to make that possible We’re using technologies like natural language processing to be able to capture and assess information and insights We’re looking at indexing of unstructured documents, like incident reports, for example, as a way to collect knowledge and insights We’re incorporating knowledge graphs to be able to find information across a wide variety of different types of data sources So just new ways, again, on hiring, on training, and knowledge management DEB SANTIAGO: And so a news story leaked rather recently about a certain Seattle company that had an AI tool that was screening CVs And if you were a woman, you were immediately downgraded And there was enormous backlash about what happened, et cetera But when we looked at it, we actually said, hey, actually, they had a really strong governance structure They
looked at this for two years They recognized that there was bias in the system and bias in their existing hiring practices today

They tried to mitigate against it and tried to figure out ways in order to fix and adjust for that bias And at the end of the two years, they couldn’t fix it, and they decided to dismantle that program And that story, to me, just told me about how important it is to really establish those governance strategies at the beginning, as you start experimenting with artificial intelligence At Accenture, we spent a lot of time looking at some of the tools that we use internally But we also put a high level of importance on retraining and reskilling our workforce Last year, we spent $900 million on retraining our people And we use the money that we save from automation and artificial intelligence to upskill people And for us, that really is important because we think it’s important to democratize AI learning and, from an ethical point of view, make it accessible to those who might otherwise face a high barrier to entry for some of these skills NISHA SHARMA: Our trend number four is called Secure Us To Secure Me Security is no longer about just protecting us, as individuals It’s not about one person, it’s about protecting all of us And what we’ve seen is that a lot of companies still think that security is an individual effort And if they can just secure their own information and their own data, they should be safe But that’s not the case at all Because what we’re seeing is that businesses are rapidly entering these ecosystems They’re working with technology partners and industry partners to create new services, and new products, and new experiences for their customers And attackers are really seeing these ecosystems as this ever-widening attack surface from which they can try to do bad things Only 29% of executives in our survey actually know whether their technology partners are being just as diligent as they are in implementing security processes and solutions So while we continue to have the traditional risks that we always have, one of them
has always been around the misuse of data. We've always thought of it as misuse of our own data and how that could provide access to our systems. But what we're seeing is that the misuse of other data can impact us as well. For example, data from business newswires has been stolen for illegal stock trading purposes. There's a risk around aggregated data, too. We think of aggregated data as somewhat anonymized, or as unable to identify us individually. But aggregated data from Strava was used out of context to identify secret US military sites. So we can't just assume that no one can make sense of this other data. And in today's connected and ecosystem-dependent world, the impact of cyberattacks is exponentially amplified. There was the WannaCry cryptoworm, you might have heard of that one, which exploited an operating system vulnerability and infected over 300,000 computers across 150 countries in a matter of days. It brought down businesses and disrupted the work that was being done. There was the Mirai malware that was used to hijack over 100,000 IoT devices and then launch an attack on a DNS services provider. And attackers, we've seen, can spread fake news much faster than good news and real news traditionally spread. So leading businesses are recognizing that just as they work and collaborate with their ecosystem partners on these new products, services, and experiences, they also need to collaborate on security.

DEB SANTIAGO: Yes. I think, for us, it's really critical that we're building resilient and sustainable innovation. And this includes making sure that we're building systems that are secure by design. I really love this point about making sure that the whole ecosystem and the supply chain is actually secure. Given how increasingly networked and connected our world is, I
think it was the designer Bruce Mau who said something like, everything is connected, so for better or worse, everything matters. To me, that means you are only as strong, as transparent, and as secure as the weakest link in your chain, and that link is where you are vulnerable. We've seen regulators in the past put liability not just on you as an individual in terms of your own secure environment, but also on your supply chain. And we're anticipating that trend as well.

NISHA SHARMA: OK, our last trend, our fifth trend, is called My Markets. This is all about momentary markets. 85% of executives agree that the integration of customization and real-time or near-real-time delivery is going to create the next big wave of competitive advantage. So again: customization and near-real-time delivery. That's all about capturing moments. Whether it's real-time views of operations, instant price quotes based on inventory, scheduling, or pricing data, the ability to immediately adjust and respond based on customer feedback, or the pop-up services that we talked about, it's all about capturing that moment. As we get better at capturing those moments, people and businesses are going to expect more convenience and immediacy from our technologies. And companies can capitalize on these moments by providing personally tailored products and services that go far beyond just customization. The example I'll share with you is what Carnival Cruise Lines is doing. They're transforming the entire ship experience, the cruise experience. Everyone will be getting what they call a medallion. It's a wearable device, and it knows your preferences. It knows where you are on the ship. It allows you to make purchases. It does all these things. They're also adding IoT sensors, cameras, and analytics all around the ship, so that, for example, they might identify availability at an upcoming attraction that your child might like and be really interested in. Then it sends you a message and offers you the opportunity to be part of that
attraction. It's a very momentary opportunity. It comes, is made available to you, and then it just goes away.

DEB SANTIAGO: Yeah. So given the criticality of these momentary markets, companies need to think about, how do you establish just-in-time trust? How do you establish that you act responsibly, so that trust can be established and exchanged seamlessly and quickly? This necessarily means that trust needs to be incorporated in the very early stages of design. For us at Accenture, for example, we want these systems to incorporate the principles of trust and to deploy technologies responsibly. I talked about trustworthy, reliable, understandable, secure, and teachable. Working through this beforehand really matters. In our global responsibility survey that we conducted last fall, 24% of respondents indicated that they had to undergo a complete overhaul of an AI system due to inconsistent results, a lack of transparency, or biased results. So what do we do? What does deploying responsible technologies look like? I've put together some of these points. I think it's really critical that we're building agile, multidisciplinary ecosystems. If you've got a group of individuals who look like you, act like you, and do the same things that you're doing, it's very likely you're going to have some blind spots. We think companies ought to be looking across the board. You may need to be engaging with human rights organizations for the first time, with academia for the first time. Use good data hygiene. It's really important for people to understand the way that bias can creep into systems. How do you build a governance strategy that anticipates downstream impacts? How do you create systems of constructive dissent and incorporate diverse perspectives? And how do you use this whole system to enable informed decision making?
There's no one tool. I wish I could say, I've got a checklist, we've got this great thing, and it's going to stamp you like an organic label and you're going to be ethics certified. There is no one magic solution that will solve everything. Instead, companies really need to invest in these governance strategies. When we started this presentation, we said that we wanted to apply what we learned in the last five years. Because in the last five years, we've had this immense push forward. The old models of disrupt or be disrupted, of move fast and break things, were commonplace. At times, this mindset served as a justification to innovate without considering the implications, without any desire to take compliance, trust, or ethics into account. If I may, for a moment, point to the EU's General Data Protection Regulation as a watershed moment, because it is being mirrored in California, for example: we are at a meaningful transition right now. I've talked a little already about the increasing level of activity that we're seeing in the United States. In Europe and in other countries, there is even more activity, and some of it is being demanded by the public. The next five years, we think, are about trust, and responsibility, and how we create sustainable innovation. But in the end, we cannot, as we said at the beginning, go at this alone. Instead of separate businesses each trying to go their own way, we've really got to work together, collectively, to try to get this right, and to pull in the different community interests and perspectives, to make them genuinely at the heart of the discussions that are happening today.

NISHA SHARMA: OK, so we've given you a glimpse into the future. We hope this presentation has provided you with some things to think about when it comes to defining and deploying responsible technology. If you'd like more information on our technology vision, you can visit our website, accenture.com/technologyvision. You can also connect with both Deb and me on Twitter and LinkedIn, so feel free to reach out with any questions or conversations you'd like to have. What I'd like to end with is the same question that I started with at the beginning. We've talked about how technology can be good or bad based on how it's deployed and implemented. And my question comes back to you: what is your role in
influencing whether technology is good or bad? So thank you for your time. I hope you found this useful. Deb and I will be here in case you have any questions. Thank you again, and enjoy the conference. [MUSIC PLAYING]