Deep Design: Why Automation Needs Friction
With technologies such as Machine Learning making automation increasingly opaque, designers need to introduce friction back into the user experience
By Kartik Poria, Senior Experience Designer, BCG Digital Ventures
As automation seeps into all aspects of our lives, we might be unaware of its effects. Our rush to embrace Machine Learning is accompanied by some less than ideal consequences. We need look no further than the recent fiasco with exam results in the UK, where an algorithm was used to determine students’ grades; those from less privileged backgrounds were hit hardest as the algorithm made some damaging assumptions, downgrading 39% of A-Level results.
Automation removes the need for human interaction, letting the machine do the ‘thinking’ for us. This means fewer interfaces and points of contact with a system, supposedly making our lives easier. But it also means more blind faith in the ‘invisible’ system. This breeds distrust, as we grow dubious of products, brands, and organizations that operate with little transparency, delegating decisions that affect us to an invisible, automated process.
There are many examples of this. Take the American criminal risk assessment algorithms that tell judges how likely someone is to re-offend, which have been found to be biased against those from low-income or minority communities. Or the Amazon AI-driven recruiting tool that the company found was downgrading applications from women. The problem is that these algorithms are trained on historical data, which, in an imperfect society, is always biased. There is no ideal data upon which to train algorithms.
Imagine an algorithm that does your weekly shop for you. It decides to give you ice cream every week, even though you are on a diet, because when you were younger you couldn’t keep away from the stuff. This isn’t so far from how many algorithmic processes are operating today.
As designers, we need to educate users of these products and services on how to influence the algorithms they’re engaging with. We need to explain how they can feed algorithms new inputs (through new data), how to influence their thinking (by training the algorithm to recognise right from wrong decisions), and what to do with the outputs (use the decisions the algorithm makes as a guide, not the final word).
A great example of this is the lab researchers working on discovering new genetic medicines using the platform DeepGenomics. Scientists can sketch out the outcomes they want for a particular drug, use the platform to fill in potential compound variations from its database, and then go ahead and perform those experiments, the results of which are then fed back into the system, further refining the compound suggestions.
The scientists are working alongside the algorithm and training it in real time with live data to continually refine their hypotheses.
This is like if your automated shopping service used the seasons to decide your week’s groceries, but also learnt every time you rejected an item it suggested, or ordered more items whenever you told it you had guests coming over for dinner.
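The feedback loop described above can be sketched in a few lines of code. This is a hypothetical illustration of the pattern, not any real shopping service's implementation: the item names, weights, and scoring rules are all assumptions.

```python
# Hypothetical sketch of a shopping suggester that learns from feedback.
# All item names and scoring constants are illustrative assumptions.

class ShoppingSuggester:
    def __init__(self, catalogue):
        # Start every item with a neutral preference weight.
        self.weights = {item: 1.0 for item in catalogue}

    def suggest(self, n=3):
        # Recommend the n items with the highest learned weight.
        ranked = sorted(self.weights, key=self.weights.get, reverse=True)
        return ranked[:n]

    def reject(self, item):
        # A rejection is explicit feedback: halve the item's weight
        # rather than letting stale history dominate forever.
        self.weights[item] *= 0.5

    def accept(self, item):
        # An acceptance nudges the weight up, capped so no single
        # item can monopolise the basket.
        self.weights[item] = min(self.weights[item] * 1.2, 5.0)


shopper = ShoppingSuggester(["ice cream", "apples", "bread", "lentils"])
for _ in range(3):
    shopper.reject("ice cream")  # the dieting user keeps saying no
    shopper.accept("apples")
print(shopper.suggest(3))  # ice cream drops out of the top picks
```

The point of the sketch is where the learning happens: every rejection and acceptance is a small, visible moment of friction that updates the model, instead of the algorithm relying solely on what you bought years ago.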
As designers, we should build key moments of friction into the user experience: moments that invite interaction with the system for feedback, control, and adjustment of the algorithm. These moments of ‘purposeful friction’ let users engage with, reflect on, tweak, and better understand an otherwise invisible companion.
Consider how your bank alerts you to suspected fraud. You probably don’t think much about what your bank is doing on a day-to-day basis so long as your money is safe, and your bank is content to hum away in the background. But you can be sure it will forcefully grab your attention as soon as it detects something wrong. Every time you tell it that yes, that was me making a ridiculous purchase of a rare exotic plant, your bank learns something about you. Conversely, when it does detect real fraud, it learns what it should be alerting you to. In the same way, in an ideal world smart home devices should just work almost unnoticed, only demanding your attention when they detect a burglary, or want to know if you really have gone on holiday or just to the shop.
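The fraud-alert loop above follows the same pattern. A minimal sketch, assuming a simple amount threshold as the detection rule; real bank systems are far more sophisticated, and the class, threshold, and update factors here are all hypothetical.

```python
# Hypothetical sketch of a fraud alerter that adapts to user feedback.
# The threshold rule and update factors are illustrative assumptions,
# not any real bank's detection logic.

class FraudAlerter:
    def __init__(self, threshold=100.0):
        self.threshold = threshold  # amounts above this trigger an alert

    def should_alert(self, amount):
        return amount > self.threshold

    def feedback(self, amount, was_fraud):
        if was_fraud:
            # Confirmed fraud: become more sensitive, alerting on
            # anything near this amount in future.
            self.threshold = min(self.threshold, amount * 0.9)
        else:
            # "Yes, that ridiculous purchase was me": raise the
            # threshold so similar legitimate spending passes quietly.
            self.threshold = max(self.threshold, amount * 1.1)
```

After the user confirms a large exotic-plant purchase was genuine, `feedback(amount, was_fraud=False)` raises the threshold, and the system stops interrupting for purchases of that size: each moment of friction makes the invisible helper a little better calibrated.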
It’s as if your shopping service asked not just for feedback whenever food rots at the back of your fridge, but also for your health goals, your taste preferences, and your penchant for cooking, once a quarter or whenever it knows you’ve had a big life change. And the rest of the time it recedes into an unseen helper.
We have a duty to our users to empower them with this knowledge and ability, and to our products and services to build the trust they need. To do this, we need to move away from generic human-computer design and home in on the personal relationships users have with automation technology. We need to engage communities and their ambitions when we source datasets, moving away from bias and towards inclusivity. Most critically, we need to embrace the disappearance of the interface with highly impactful, crafted moments of friction that celebrate feedback, control, and education.
Deep Design: This article is part of a series by BCG Digital Ventures on the crossroads of pioneering deep tech and design entrepreneurship. BCGDV has launched multiple ventures based around implementations of advanced technologies and design strategies that enable this technology to meet user needs.
Want to find out more? Start the conversation with BCGDV.