We all know what’s required to stay safe in the physical world amidst a global pandemic: wear masks, stay six feet apart, wash our hands. But how does safety play a role when it comes to digital transformation?
Digital transformation — the act of reshaping the products, services, and operations of a business, no matter the industry — has accelerated everything we do, from accessing and sharing data to applying artificial intelligence (AI) to decision making and protecting people's personal information. The COVID-19 pandemic forced us to move forward much faster than anyone working in the digital realm thought possible, or even necessary — and forced organizations to confront where they stood with respect to this transformation. Now that we've seen what is possible and, yes, necessary, there's no turning back. But what does that mean when it comes to ensuring everything from sensitive information to personal freedoms remains protected?
In late January, Salesforce and MuleSoft assembled an expert panel, moderated by Sanjna Verma, senior product manager at MuleSoft, to answer that question and discuss the nuances of digital transformation safety.
Meet the panel:
- John Allspaw, co-founder of Adaptive Capacity Labs, who has worked in software systems engineering and operations for over 20 years across a variety of environments and has authored several books
- Dr. Richard Cook, principal at Adaptive Capacity Labs and a research scientist, physician, and pioneer in resilience engineering for safety in complex risk-critical worlds
- Kathy Baxter, principal architect of ethical AI practice at Salesforce
- Katie Pyburn, global head of offerings for the human data science cloud (HDSC) at IQVIA
- Stephen Fishman, regional vice president, customer success architecture, MuleSoft
First adapt … then evolve
The pandemic accelerated how we adapt — even to seemingly mundane things like working from home and video conferencing. This changed our perspective on what matters and what’s necessary.
“When we use the word safety, quite often these are events, concerns, or anticipations about things that can go wrong,” Allspaw said. “The pandemic introduced a global disturbance and kicked these adaptations into gear. We have this opportunity to see what adaptations organizations and individuals in those settings have had to make … and to see how this adaptation has occurred will be good fuel for the future.”
The pandemic also pushed companies to reassess their own safety in the midst of digital transformation — and in the context of what is best for their customers. Businesses have gone through change at a much faster rate than ever before, which has forced them to look to technology to bolster that safety while unlocking data, access to data, and identity systems across all industries.
“I’m seeing an overlap between public safety and economic viability,” Fishman said. “The things that bubble up to the top of their lens are what keep people safe and keep us economically viable going forward. When those things are in alignment, you’ll see more enablement happen, because it’s an easy picture to draw around mobile enablement — whether video work, handheld devices, or touchless delivery — through safety, and it makes them more resilient to competitive forces.”
“We’re suffering from a great deal of poor quality information … which has made it hard to know how to proceed and what to do.”
— Dr. Richard Cook, principal at Adaptive Capacity Labs
Balancing speed and safety … at what cost to privacy?
In some ways, past epidemics like AIDS, MERS, and SARS positioned us to respond better to early concerns, before we had access to detailed information. Digitization has connected us in ways we’ve never seen before, but that connection also creates issues around safely trusting information. Some sources carried evidence-based science while others offered anecdotal commentary, sometimes of questionable validity. With everything happening so quickly, how do you know who or what to read — let alone trust and believe?
“We’re better connected and have more communication,” Cook said. “We’re also suffering a great deal from poor-quality information and from difficulties associated with getting information — information we can rely on. This has made it hard for people to know how to proceed and what to do. That’s been amplified by the speed with which we can communicate now. We can communicate incorrect things as fast as the right ones.”
This also ties into transparency. We need to communicate plans in a way that makes people feel comfortable sharing information, while also trusting that the information they share — and receive — comes from an ethical source.
“At the beginning of the pandemic, we saw a lot of people moving back toward the, ‘move fast and break things’ mentality,” Baxter said. “In these situations, it’s more critical than ever to slow down and work thoughtfully — that we put mindful friction into our decision making. What information do you need … and how are you going to protect it?”
Does AI help or hinder safety?
You can’t discuss digital transformation without including artificial intelligence (AI). This hot topic is a driving force in how companies move toward efficiency — in virtual assistants for customer service, in recommendations for search and shopping, and in our daily commutes and returns to the office. So when it comes to solving issues associated with the COVID-19 pandemic and vaccine management, where does AI help and where can it cross the line? If we look to advanced technology to unlock context — where the same dynamics often lead to different decisions based on differences in the underlying context — and then let an AI agent make decisions for us, as with self-driving cars, will society accept that? In a medical safety context, how readily will people accept the decisions of AI programs focused on broad societal health outcomes rather than an individual’s?
“There’s been a lot of enthusiasm about the potential of AI to help in so many ways with COVID and the pandemic,” Baxter said. “Whether it is to find new treatments or contact tracing, finding trends of spread in the community or back-to-work solutions … having AI figure those things out makes it a lot more efficient.”
“AI will mirror and even amplify bias that already exists in society.”
— Kathy Baxter, principal architect of ethical AI practice at Salesforce
AI can play to bias
Research has shown that technology, and AI in particular, tends to reproduce real-world bias against people of color — and at a much larger scale. It is imperative that people of all socioeconomic backgrounds be represented equally in the underlying data so AI can do its job effectively and fairly. Without that, vulnerable people will continue to fall victim to incorrect assumptions and be left behind.
“The one thing that remains the same, pandemic or no, is that we always have to remember that AI will mirror and even amplify bias that already exists in society,” Baxter said. “Whatever we’re training AI to do, we need to ensure the data is representative of the entire community that’s going to be impacted. That includes our most vulnerable communities … and that we put safety and protection of human rights at the center of our decision making.”
Public health concerns versus personal privacy
When the pandemic hit, many scientific studies taking place in person at sites around the world had to be halted. With the rapid spread of COVID-19 and the lack of initial information, scientists needed to continue their research, but in safe spaces where they could securely transfer and share information and data. Risk-based monitoring allowed this to happen — and motivated traditionally conservative companies to push forward.
“When you’re monitoring clinical trials, you have to send clinical research associates out into the field to investigate the science, monitor the sites, and check their source data,” Pyburn said. “As soon as everything was shut down, our customers were asking us for advice on how they could continue the operations of their clinical trials. At the time, it was sort of a game changer for a lot of our customers, because where pharma had been slow to adopt risk-based monitoring practices, they now saw it as a way to continue their studies and the hard work of doing clinical trials.”
Digital transformation has rapidly moved us forward, but at the same time it has forced businesses, organizations, governments, and even individuals to slow down and take a hard look at the direction we’re heading. New technologies and capabilities may be applied with the intention of delivering greater safety to society, but we still need to be cautious and deliberate in these efforts. Just like medical professionals, it may be time for technologists to have a Hippocratic oath of their own, pledging to “do no harm.” We need to continue to emphasize safety no matter the area of digital transformation. Otherwise, we lose control.
Want to hear more from the experts? Watch the full panel discussion to really understand what goes into safe digital transformation.