Trust me, I'm a robot... even if you think my intelligence is artificial - Part 2

Artificial intelligence is transforming every aspect of our lives, and this may include the way we trust. There are numerous examples of how we progressively place trust in algorithms and software applications. Earlier cynicism about new developments in technology is giving way to a new trust in technology, because it makes our lives so much easier. Computer programs are now embedded in every aspect of our daily lives.
Image: © Olena Yakobchuk – 123RF.com

Our interaction with artificial intelligence as humans is either direct, through our smartphones or computers, or more detached, as when we rely on automated systems such as the autopilots operating in aircraft. In both instances, computer programs make the experience possible and provide the structure. As computers become ever more advanced, we may have to start trusting these machines more. Combined with other recent technological developments such as the Internet of Things, blockchain, autonomous driving and big data, it is unavoidable that artificial intelligence will play a significant role in our future, and we will have little choice but to trust it.

Accountability

Many people, however, are of the opinion that we should not simply trust AI. They also argue that we do not need to trust an AI system, because it is in fact possible to check its reliability: we can determine how it is fulfilling the task allocated to it. From a legal perspective, it is also important to know that if the system causes any damage, the persons behind the system can be held accountable.

Some may then claim that this accountability is unmanageable because of the degree of autonomy that machine learning has. Yet it is important to understand that it is not the AI system itself that is held responsible as a legal entity, but the people and organisations behind it.

What are the reasons for the reluctance to trust artificial intelligence in the first place, and how can we improve confidence and trust in AI? Answering these questions may also go some way towards addressing my other student’s concern.

Previous experience with artificial intelligence, or with other forms of technology such as the internet, assists in finding solutions. Advancing technology also makes it increasingly possible to look into the “black box” of machine learning algorithms and find out how they work. Large companies are also prepared to issue and share transparency reports on how they reached certain outcomes by way of artificial intelligence. These factors will certainly assist people in obtaining a better understanding of algorithmic decisions and outcomes.
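
For readers curious what “looking into the black box” can amount to in practice, the following is a minimal, hypothetical sketch (not drawn from the article): a toy scoring rule that reports how much each factor contributed to its decision, which is the kind of explanation a transparency report might summarise. All names, weights and figures are invented for illustration.

```python
# Hypothetical example: a toy "transparent" scoring rule whose outcome
# can be explained factor by factor. Weights and inputs are invented.
weights = {"income": 0.4, "years_employed": 0.35, "existing_debt": -0.5}

def score_and_explain(applicant):
    # Contribution of each factor to the overall score.
    contributions = {name: weights[name] * applicant[name] for name in weights}
    total = sum(contributions.values())
    decision = "approve" if total >= 1.0 else "decline"
    print(f"Decision: {decision} (score {total:.2f})")
    for name, value in contributions.items():
        print(f"  {name}: contributed {value:+.2f}")
    return decision

score_and_explain({"income": 3.0, "years_employed": 2.0, "existing_debt": 1.5})
```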

People who were involved, even to a limited extent, in an artificial intelligence decision-making process have been shown to trust the process more readily. One way of accomplishing this is to give the user of the system the opportunity to slightly change or amend some of the algorithm’s parameters.
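
As a purely illustrative sketch (again hypothetical, not from the article), even a single user-adjustable parameter, such as the threshold below which an automated rule defers to a human, is a simple form of this kind of involvement.

```python
# Hypothetical example: an automated decision rule with one user-tunable
# parameter, the threshold below which the case is referred to a human.
def decide(score, threshold=1.0):
    return "approve" if score >= threshold else "refer to a human"

# A cautious user tightens the threshold and sees how the outcome changes.
for threshold in (1.0, 1.2):
    print(f"threshold {threshold}: {decide(1.15, threshold=threshold)}")
```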

We do not need to fully understand the detailed inner mechanisms of artificial intelligence systems in order to be less reluctant about their implementation. Even basic information about how a system functions, and having some control over it, will provide confidence in the system. The future of technology lies with AI and its tremendous potential to benefit society as a whole.

As with relationships between people, trust in AI systems will have to be developed over time, and, as with people, mistakes will be made with artificial intelligence.

Artificial intelligence is still in an early phase of development, and part of the distrust in AI-based systems may stem from our tendency to focus on the errors that have occurred, and from our concern that “robots could one day be more intelligent than people and take over.”

Robots are already extremely clever, but will they also acquire intellectual or behavioural intelligence comparable to human feelings, emotions or morality? This is the key difference between humans and robots: the intuitive knowledge of the difference between right and wrong. Moral intelligence is not yet built into robots, and it is disputable whether it ever will be, hence people’s reluctance to put their trust in them completely.

Read part 1 of this article here.

About Dawie de Villiers

Dawie de Villiers is a professor and Head of the Department of Procedural Law at the University of Johannesburg's Faculty of Law.