Earlier this month the Peltarion team took part in a number of debates at Almedalen Week, an annual summer event on an island southeast of Stockholm which hosts a week of seminars, speeches and events on the topics and issues at the top of the national agenda in Sweden. This year marked the 50th anniversary, and a record 4,300 events were held.
The words on everybody’s lips during the week were artificial intelligence, and foremost among the topics were the ethical challenges and the question of trust as adoption grows. The Peltarion team took part in dozens of debates and seminars and spoke to hundreds of people about these broader questions. The issues are of crucial importance to society, and it’s good that we are talking more about them. Speaking in a personal capacity, I found three key takeaways on the issue of trust in the technology that ran through many of the discussions or formed the basis of the questions we faced.
1. Being alert for bias
One of the biggest concerns people raised was bias creeping into, or already present in, the data that fuels the technology. This is not easy to address: what counts as “fair” is both a difficult and a highly subjective question. There is, however, clear consensus that we need to stay alert to the dangers of bias and to how it can affect AI-powered decision-making.
A number of techniques exist today to better understand how models work internally and to explain AI-based predictions. Yet we need to do more in this field, as only by explaining how decisions are made can we ensure fairness and, in the long term, build trust in the technology. There is still a way to go before we fully understand AI decisions, but it is an active field of research and a topic that the research team at Peltarion is pursuing.
2. Sweden and the EU can lead the way on a sustainable approach to AI
It’s a good thing that we have become more sensitive about how we use data. Ensuring the privacy and security of data builds trust, and we need to do more on this front. The recent Facebook and Cambridge Analytica scandal has reminded us all of the potential risks of data misuse. Earlier this year the European Commission proposed a strategy for ensuring an appropriate ethical and legal framework for AI.
Europe has the potential to be the international front-runner on this issue, and within Europe, Sweden has history on its side. With the famous “Swedish model”, the Nordic region has found a way to combine early adoption of advanced technologies with ensuring that the benefits are shared across society. I believe Sweden has a unique role to play in this global debate, thanks to an approach grounded in regional values.
3. Trust is built through greater adoption of AI
Generally, people don’t trust what they don’t understand, and are more likely to trust what they do. It’s true that most people today don’t really understand AI, but I believe that’s because few people and companies are actually investing in and adopting it. The technology is already part of our daily lives, for example in search engines, voice recognition and translation services, but it’s still mainly the big tech companies that are getting the most out of it. As a society, we have barely scratched the surface of AI’s potential to revolutionize our world.
The potential lies not just in useful practical tools, but in equipping us to combat some of the biggest challenges facing humanity. I believe the best way forward is to get more people using the technology, in order to develop knowledge about what AI is and what it can do for businesses and society, and to build trust.