Trust in small things builds up trust in the whole solution — as long as the pieces fit together
How do you unleash the potential of AI across both established and emerging industries while maintaining trust with both end-users and internal stakeholders? That was the question we posed to two passionate AI experts in our latest webinar.
We were joined by Søren Vedel, the Head of Data and Analytics at Twill, a digital logistics company serving small and medium-sized customers. He has deep experience across most of the AI value chain as a data scientist, engineer, product manager, and leader, across several industries and in organizations at different stages of maturity.
Also joining us was Dainius Kniuksta, Artificial Intelligence Product Lead at Forecast, who works to continually maximize the intelligence of the Forecast platform for the benefit of businesses globally.
You can visit our webinar section to access the full recording of Engineering AI for Trust or read the summary below.
What are the challenges and rewards of implementing modern technologies in established or traditional industries?
The potential impact of successful AI implementation is huge. In logistics, for example, the complexity of the industry keeps small companies out of international trade because they lack the specialized skills it demands.
Removing these barriers will allow even small companies - importers, entrepreneurs - to compete globally, and business-wise it will unlock a market worth trillions of dollars. In project management, numerous independently compiled reports estimate the "cost" of unsuccessful projects to be on the order of billions of USD annually.
In traditional industries the playing field is different from that of "digital natives". You cannot ignore that some solutions already exist, and that many others have been tried and failed. Not only do you need to accept less room to operate in, but there is also a large human bias to deal with, involving personal histories, past experiences, and power struggles between groups.
We also need to acknowledge the broad spectrum of users of such systems, who do not all perceive the value of the solution in the same way. B2B users do not always get to choose the tools they use.
For example, in project management some users are required to register the time they spend on different tasks. This data is then analyzed for use in billing, pricing, and sales. Some stakeholders are directly interested in the results, but others are not intimately connected to the value being delivered.
How to identify structural challenges around adoption of AI-powered products?
There's no one easy answer to overcoming these challenges, but in the experience of our panelists, it can be useful to analyze your concept along three product development dimensions: entry barriers, adoption barriers, and scale barriers. The emerging risk picture is the starting point for product development, from which you can then engineer solutions for trust. This is not specific to AI-powered products; however, given the potentially higher impact of AI-powered solutions (e.g. changes in status or organizational "weight"), the entry challenges also tend to be amplified.
Entry barriers are concerns, real or perceived, by users which would make them hesitant about entertaining the notion of using your product in the first place.
- For the job or function: In the eyes of the user, what is the cost if the AI makes the wrong decision (risk), versus the perceived impact of getting it right (reward)?
- For the individual: Will my job change into something I don't like, or will it improve?
- For the company: What are the transition costs - immediate investments in systems, a new operating model, new skills, a new risk picture - versus the expected benefit?
Getting the support of users is crucial for the success of any tool, so this message needs to be clear. If there is no good answer to these questions, users will not trust the potential benefits of the product.
Adoption barriers are challenges or concerns preventing the continued use of the product. The most common issue among users tends to be a lack of knowledge, or a lack of permission, to make the choices the AI-driven system requires: essentially a disconnect between what the system suggests and the real world, where some processes take far more than a few clicks. Other examples include users not trusting the system to make the right decision, implementations that require a change in work processes, or the use of AI being split across different departments, dividing requirements and objectives.
Scale barriers are challenges limiting the growth of the product.
For example, in many-to-many transactional platforms the network of users is just as important as the underlying technology. Unfortunately, we often see more focus on the technology than on growing the network.
With a growing network, another scale barrier emerges: process alignment. This is particularly observable in traditional industries and B2B environments more generally. Standardization across companies and industries takes a long time because many established actors have their own way of operating, often amplified by informal power structures in the organization. However, AI-powered systems have an advantage when it comes to standardization: models can be trained to capture subtle but important details at the company or even individual level.
Lastly, there are privacy concerns: cross-company model training can be a major barrier, for example due to the risk of accidentally revealing trade secrets.
How to build AI systems in more established or traditional industries that have a higher likelihood of being adopted and trusted?
Trust is a result of consistency over time. This holds for all products, including but not limited to AI products. For AI-powered products in particular, one mechanism to leverage is that trust in the little things builds trust in the whole solution. To achieve this in practice, the user must feel that you "get them". AI suggestions, user journeys, and the full experience of the product must align with what the user cares about and is working to accomplish. Getting this across comes down to many small pieces that all must align, and trust is built through consistently doing what the user expects at all these micro-touchpoints.
Søren explains how Twill is integrating AI into their systems: "We have our booking and shipping management platform, and then we think about how we can infuse AI to remove complexities for customers as they work with the platform and essentially create new types of offerings. Basically, we want to empower customers and make them feel comfortable making the decisions needed to move their goods across the world.
"One particular example we are working on right now at Twill is related to what is known as demurrage and detention, the shipping equivalent of parking tickets. When a container arrives at a port, it can stay for a predetermined number of days before additional fees are charged to the customer. It requires deep local shipping understanding to know how many extra days you need - something most of our customers don't have. We are solving this with personalized predictions of the number of days needed for a particular shipment, made available on the platform in an intuitive manner. By helping customers we address a particular need, but we also create a better overall experience, and in doing so we increase trust in the platform."
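To make the idea of a personalized prediction concrete, here is a minimal sketch of how free-day estimation could work. This is purely illustrative and not Twill's actual system: the port names, customer names, feature choice, and blending weight are all assumptions. It shows the core personalization idea from the quote above: combining a general port-level baseline with the individual customer's own history.

```python
# Hypothetical sketch (not Twill's actual system): estimate how many
# demurrage-free days a shipment needs by blending a port-level average
# with the customer's own pickup history - a simple form of personalization.
import math
from statistics import mean

# Toy historical data: days each past shipment actually sat at the port.
port_history = {"rotterdam": [3, 4, 2, 5, 3]}
customer_history = {"acme": [6, 7, 5]}  # this customer tends to be slower

def predict_free_days(port: str, customer: str, weight: float = 0.6) -> int:
    """Weighted blend of customer and port averages; customer history
    dominates when available. Falls back to the port average otherwise."""
    port_avg = mean(port_history[port])
    cust = customer_history.get(customer)
    if cust is None:
        estimate = port_avg
    else:
        estimate = weight * mean(cust) + (1 - weight) * port_avg
    # Round up: underestimating free days means surprise fees for the customer.
    return math.ceil(estimate)

print(predict_free_days("rotterdam", "acme"))       # known customer
print(predict_free_days("rotterdam", "newcorp"))    # no history: port baseline
```

A real system would use far richer features (congestion, customs complexity, cargo type) and a trained model, but the design choice illustrated here carries over: personal signals are weighted against a population baseline, and the estimate is rounded conservatively because the cost of under-predicting is higher than over-predicting.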
At Forecast we have developed certain ‘personality traits’ our AI exhibits across the entire platform, including:
- Radical transparency without assigning blame
- Non-intrusive and reactive to ‘hard no’s’
- Honest and realistic, letting you know when it can and can’t help
- Supportive: it recommends and suggests, never decides.
Advancements in AI technologies allow us to incorporate more and more concepts from human relationships into human-computer interactions, leading to a simpler and more relevant user experience.
To learn more about Forecast’s use of AI, visit our integrated intelligence page.