Artificial Intelligence (AI) is not new: the term can be traced back to the 1950s, and many of the underlying principles have been around even longer. However, with the emergence of generative AI, combined with platforms that make developing AI models and solutions far easier than in the past, we are seeing a new level of availability and applicability of AI.
This is a great development, as AI can and will provide many benefits to businesses and, most importantly, improve the quality of life all around us. More efficient information retrieval that lets more people access information valuable to them, improving the quality of software code, preventing attacks in cyber security, and detecting and diagnosing potentially life-threatening diseases more accurately and quickly are just some examples of the potential AI has for good.
But even with the best intentions, AI is not always correct and does not always do exactly what we would expect. There is, at least for the time being (what the future might bring here is an entirely different discussion), still no general intelligence that truly understands the world and works with the same kind of higher-level reasoning, guided by principles and values, that we humans take for granted.
AI does not think of the things it does as good or evil - it just does what it has been asked to, based on what types of models are used, what kind of data they have been trained on, and how. This can result in quite unexpected behaviour and, from the human viewpoint, quite illogical and surprising outcomes - one example being hallucinations in Large Language Models (LLMs), where the model is completely convinced of something that is factually incorrect and presents made-up information as fact (ok - granted, we humans might sometimes do this, too...).
Certainly there are many use cases for AI where trust may not be so important - for example, if an in-game AI character is a bit off, this might merely be amusing and cause no real harm. But there are also many important "high-risk" use cases where the outcomes of AI solutions can have an impact on people's lives, on sustainability and the environment - and also a huge impact on the business and viability of the companies using AI. In these more critical cases we must be careful and build AI solutions that are not only accurate and effective but also trustworthy.
For us to get the true benefit from AI, I believe we must be able to build trust in AI. We need to be able to trust the outputs and results generated by the AI solutions we use. While this is not always simple, it CAN be done.
Trust In Sight was founded to help build that trust by giving holistic advice and strategic support in building more trustworthy AI-powered solutions.
If you would like to discuss more, please do reach out!
Matti Aksela, Founder, Trust In Sight Oy
matti@trustinsight.tech - LinkedIn