How can we trust AI?
June 1st, 2025 - 9 minutes

How can we trust a professional? For example, how much do we trust a doctor? How can we be sure that person gives us a good answer or solution to our problem? In the same way, how can we trust AI? How much does our trust change based on its answers, and on which parameters?

These are good questions. It is difficult to define how we can trust someone else's answer, and that trust changes based on the context and on who provides the answer. Let's take health as an example. This is a very delicate field, but a very important one. If we have a health problem we go to a doctor and ask for help. They give us an opinion, usually a proposed solution we can act on to solve the problem. We trust that person because we believe that anyone who holds that role has earned the title to do so. We trust that the system is reliable and "yields" competent doctors. We trust that their professors evaluated their competence fairly. Moreover, we believe that the doctor will give us a good solution, the best for our case. We would never believe that a doctor proposes a wrong solution on purpose. We believe in their professional ethics to do their job as well as possible.

Another good example is public transportation. If we take a flight, we trust completely that the pilot is able to fly the plane. We believe the pilot has the knowledge to handle every situation at altitude, and we trust that the pilot would never sabotage the flight.

Once again, in politics, we delegate other people to make important decisions about the laws that govern the economy, society, and relationships with other countries. This means that choosing the wrong people can bring disaster. It is really important to give our vote to people who demonstrate loyalty and credibility in what they say. We have to select the right people carefully and trust them. Unfortunately, in most cases this is not enough, and they often leave us disappointed.

Thinking about these examples and many others, we can see how vulnerable we are. If someone has the role of making an important decision in certain situations, that person can completely change our life, and in some cases even end it. I don't want to be dramatic, but the facts show that we delegate many tasks to other people and keep trusting them.

The world we live in works because we trust other people and believe in their competence. But my question is: should we trust AI the same way we trust people? So far the tasks we delegate to AI are not so vital, but sooner or later, more important decisions may be delegated to artificial intelligence.

The role of data

AI bases its answers on data. A huge amount of data. The idea is to analyze a question or request using probabilistic calculus in order to provide the most likely solution. This is what a doctor does when we have a medical problem. For instance, a headache can be the effect of an ongoing stroke, or maybe we just ate our ice cream too fast. The role of the doctor is to combine different health factors to understand the possible causes and provide the most reliable outcome. In this scenario, both the doctor and the AI are completely lost without information. But the doctor can gather that information through the human relationship established with the patient during the visit. AI cannot.
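To make the "most likely solution" idea concrete, here is a minimal sketch of that kind of probabilistic reasoning. All the numbers are purely invented for illustration; they are not medical data:

```python
# A minimal sketch of "pick the most likely cause" reasoning.
# All numbers below are invented for illustration; they are not medical data.

# Prior probability of each cause, before seeing any symptom.
priors = {"stroke": 0.001, "brain freeze": 0.05, "tension headache": 0.3}

# Probability of observing "sudden headache" given each cause.
likelihoods = {"stroke": 0.7, "brain freeze": 0.95, "tension headache": 0.8}

# Bayes' rule (up to normalization): posterior is proportional to prior * likelihood.
unnormalized = {c: priors[c] * likelihoods[c] for c in priors}
total = sum(unnormalized.values())
posteriors = {c: p / total for c, p in unnormalized.items()}

for cause, p in sorted(posteriors.items(), key=lambda x: -x[1]):
    print(f"{cause}: {p:.2%}")
# The most likely explanation turns out to be the mundane one, not the dramatic one.
```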

From these assumptions we can simply claim that data is essential. For both humans and AI: without information, nobody can make the right decision. The only difference is the human factor, which provides intuition and the ability to understand a problem from just a little information. On the other hand, the fact that everyone in the world is studying and building their own AI means that this technology offers real advantages. The problem is that without data, it is impossible to develop a good AI.

Deontology in AI

I want to discuss the ability of AI to answer questions. Not just to give an answer, but to truly answer the question. The question I have been thinking about is the following: how can we guarantee that AI provides the best solution and puts in the maximum effort to solve the problem? How can I be sure that the question could not have been answered better? How can I trust that the solution is optimal for my case?

In real life, we cannot ensure that every professional puts in the maximum effort and does the best possible job in every situation. But we trust professionals because we believe they have every interest in doing the best job they can.

First, there is intrinsic motivation. I am passionate about what I'm doing because it is what I like doing, so I'll do my job as well as I can. Or, I have the goal of building the best project ever because I believe in what I'm doing, so I'll spend all my energy to achieve the best result. Even though this is fascinating, sadly it is not always true, so we need extrinsic reasons to push ourselves. Indeed, if I do my job as well as possible, then I can get benefits. For instance, I can get a salary increase, a promotion, or the trust of a client who will not go to the competition because I did my job well. Besides, some kinds of jobs have a deontology to respect. This is the case for doctors in the medical field or journalists, who sign a code of ethics when they officially take on that role.

Is it the same with AI? I don't think so. When we use an AI, we use software whose role is to solve the problem. The issue is how much the AI "wants" to solve it. It could happen that the AI does the minimum needed to solve the problem, and its solution is not the best one.

Let's think about how AI answers questions. The quality of the response an AI provides depends on the quality of the model, but especially on the quality of the data we provide. Here we can see how valuable and important data is. If we train an AI with high-quality data, then we'll get high-quality responses. On the other hand, if we train the model with every kind of data we can find anywhere, then we may get approximate answers or even wrong ones.

AI learns how to answer based on the data we provide. In a way, we are the professors and AI is a student that learns very fast but also takes every detail into account, so we have to be very careful when teaching it which things are right and which are wrong. Companies in the AI field play a very important role here: they have to select the information given to the AI in order to improve its quality and reliability.

Ultimately, I believe that if we want to get the best possible answers from AI, we have to give it high-quality and complete data. My idea is to teach AI only information that can be verified, like research papers, where every detail is argued and backed by scientific proof. It is also true that this kind of training is possible only for certain types of AI, for example technical AI. In other contexts, where creativity matters more, this kind of training is not feasible. Take this question as an example: "Write an email where I ask my boss for a day of holiday". This question is not technical; it requires empathy and good manners. These characteristics are not defined "by default". They depend on cultural factors, on the language, and on the relationship between the employee and the boss. In this case, the data that should be given to the AI comes from real interactions within companies. This opens up other implications regarding privacy, but I don't want to face them now.

On the other hand, if we want accurate answers about a technical topic, this should ideally be easy. It would be enough to feed the AI's training with all the research papers published from the beginning until now in order to get valuable results.
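As a toy illustration of that curation idea, here is a minimal sketch. The document fields and the "peer_reviewed" flag are invented for illustration; a real data pipeline would be far more involved:

```python
# A toy sketch of curating a training corpus: keep only verifiable sources.
# The document structure and the "peer_reviewed" flag are invented for
# illustration; real training pipelines are far more involved.

corpus = [
    {"text": "Attention mechanisms for sequence models ...", "source": "arxiv",   "peer_reviewed": True},
    {"text": "My cousin says this remedy always works ...",  "source": "forum",   "peer_reviewed": False},
    {"text": "Results of a randomized controlled trial ...", "source": "journal", "peer_reviewed": True},
]

def is_verifiable(doc: dict) -> bool:
    """Keep only documents that come from a reviewed, citable source."""
    return doc["peer_reviewed"] and doc["source"] in {"arxiv", "journal"}

training_set = [doc for doc in corpus if is_verifiable(doc)]
print(f"Kept {len(training_set)} of {len(corpus)} documents for training.")
```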

AI cooperation

Another way to improve the quality and reliability of the answers would be to build a decentralized system. The idea is simple, and it copies what happens with peer review in scientific research. In this model, different AI models can communicate and compare their solutions in order to combine them and reach the best one. A related mechanism already exists in so-called "federated learning" systems, although that is used more for training machine learning models than for comparing answers.

As often happens, the best way to solve a problem is to combine different points of view in order to evaluate its different aspects. I believe that a system where multiple AIs take into account different elements of a specific problem can solve it better.
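To sketch what this cooperation could look like, here is a minimal majority-vote ensemble, assuming each "model" is just a callable that returns an answer string. The three toy models below are stand-ins invented for illustration; in practice each would be a call to a different AI system:

```python
from collections import Counter

# A minimal sketch of multi-AI cooperation by majority vote.
# The three "models" below are stand-ins invented for illustration;
# in practice each would be a call to a different AI system.

def model_a(question: str) -> str:
    return "42"

def model_b(question: str) -> str:
    return "42"

def model_c(question: str) -> str:
    return "41"

def peer_reviewed_answer(question: str, models) -> str:
    """Ask every model, then keep the answer most of them agree on."""
    answers = [m(question) for m in models]
    winner, votes = Counter(answers).most_common(1)[0]
    print(f"Answers: {answers} -> consensus '{winner}' with {votes}/{len(models)} votes")
    return winner

peer_reviewed_answer("What is 6 * 7?", [model_a, model_b, model_c])
```

A real system could go further and let the models critique each other's reasoning before voting, which is closer to the peer-review analogy than a simple vote.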

Conclusion

These are just my thoughts about the direction AI is taking regarding the use of data and the way it provides solutions.

