Arab Tech News Portal
Studies suggest that the greater our reliance on artificial intelligence systems, the greater the risks and the questions about their accuracy. The most notable of those questions is whether artificial intelligence systems behave like humans.
Machines have no inherent prejudice of their own: artificial intelligence does not want something to be right or wrong for reasons that cannot be explained logically. Unfortunately, however, human bias is present in machine learning systems, because the data or the algorithms used in training may carry some degree of bias, and so far no one has managed to solve this significant problem.
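A minimal sketch can illustrate how bias in training data is inherited directly by a model. The example below is purely hypothetical: it uses a deliberately trivial "model" that learns only the most frequent label, so any sampling skew in the data becomes the model's behavior.

```python
from collections import Counter

def train_majority_model(labels):
    """A trivial 'model' that learns only the most common label.

    Any sampling bias present in the training data is inherited
    directly by the model's predictions.
    """
    return Counter(labels).most_common(1)[0][0]

# Hypothetical, deliberately skewed training labels:
biased_training_labels = ["reject"] * 90 + ["accept"] * 10

model = train_majority_model(biased_training_labels)
print(model)  # prints "reject": the model reproduces the skew for everyone
```

Real models are far more sophisticated, but the principle is the same: the biases encoded in the training set, not the algorithm's intentions, shape the output.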
A research team of scientists from the Czech Republic and Germany recently set out to determine the effect of human cognitive biases on the interpretation of the output used to create machine learning rules. After examining multiple rule-based machine learning systems, the researchers found that traits of human bias are present in artificial intelligence systems.
The team's report describes how 20 different cognitive biases can alter the development of machine learning rules, and proposes methods for overcoming this challenge.
According to the researchers, the challenge may lie in the code and the algorithms, whether developers are aware of it or not. When these kinds of human errors become hidden inside the components of artificial intelligence, it means our own prejudices are responsible for selecting the training rules that shape machine learning models. In other words, we do not simply create artificial intelligence; we lock our mistakes inside a black box, with no idea how they will be used against us.
Accordingly, the landscape of personal responsibility is changing as reliance on artificial intelligence systems grows. Soon most vehicles will be operated by machines, and a large number of surgical operations and medical procedures will be performed by robots. This will put the developers of artificial intelligence systems in a difficult position when a tragedy occurs, since humans always look for someone to blame for every error.
The researchers propose a remedy for each cognitive bias they examined. For many of the problems, the solution was as simple as changing the way the data is represented. The team suggests, for example, changing the output of the algorithms to use raw numbers rather than ratios, which may greatly reduce the likelihood of misinterpreting certain results.
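The representation change described above can be sketched in a few lines. The function names and the sample figures below are illustrative, not taken from the study; the sketch only shows the two ways of presenting the same rule statistic.

```python
def as_ratio(hits: int, total: int) -> str:
    """Present a rule's statistic as a percentage, the form the
    report suggests is more prone to misinterpretation."""
    return f"{100 * hits / total:.1f}%"

def as_natural_frequency(hits: int, total: int) -> str:
    """Present the same statistic as a count out of a whole,
    the representation suggested to reduce misreading."""
    return f"{hits} out of {total} cases"

# The same underlying statistic, two representations:
print(as_ratio(34, 1000))              # prints "3.4%"
print(as_natural_frequency(34, 1000))  # prints "34 out of 1000 cases"
```

The content is identical in both forms; only the presentation differs, which is exactly the kind of low-cost intervention the researchers describe.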
Unfortunately, there is no radical solution to this problem. Most of the time we are unaware of our own biases: we think we are being smart or intuitive, or we do not think about it at all. There are more than 20 different cognitive biases that machine learning programmers must take into account. Even when the algorithms are perfect and the data is unbiased, our cognitive biases make our interpretation of the data unreliable at best. Everyone faces these challenges to one degree or another, and ultimately the issue is tied to the lack of research into how cognitive biases affect the interpretation of data.
According to the research team: "To our knowledge, cognitive biases have not yet been discussed in relation to the interpretation of machine learning results. We therefore began this review of research published in cognitive science with the intent of providing a basis for doing so, both in algorithms for inductive rule learning and in the way their outcomes are communicated. Our review identified twenty cognitive biases and effects that can lead to systematic errors when the rules produced are interpreted."
Artificial intelligence is a very powerful technology whose impact cannot be denied, so extreme caution must be exercised in its development and use. It should be employed to address society's problems, and developed in a manner that ensures confidence and protects humanity from any negative repercussions. It is therefore important that researchers all over the world build on this work and discover effective methods for avoiding cognitive bias in machine learning, because artificial intelligence is nothing more than an amplifier of the human: bad humans will build bad robots.
Artificial intelligence suffers from the problem of human bias