More recently, both Google and the ‘60 Minutes’ television program have been accused by researchers of spreading misinformation about AI. The controversy stems from an interview in which Google CEO Sundar Pichai discussed the capabilities of Google’s AI systems. Pichai said that these systems had “emergent properties,” suggesting that they could acquire capabilities on their own. His remarks stirred controversy among AI professionals, who argued that such statements promote a misleading and highly exaggerated view of the technology.
Critics of the “60 Minutes” segment responded along the following lines:
The notion that AI can learn and act entirely on its own is a myth. AI systems, however influential, are essentially tools designed and operated by humans. They execute algorithms written by their developers and lack the capacity to make decisions independently. Gary Marcus, a cognitive scientist and a prominent skeptic of much current AI futurism, said that portraying these systems otherwise is misleading: it frightens the public and raises expectations that the technology cannot meet. He emphasized that such misrepresentation could have significant consequences for public understanding of AI and related technologies.
This case shows how strongly the media can shape public perception of emerging technologies, and why accurate coverage is increasingly important. Depicting AI as a mysterious, almost magical force obscures the reality of what AI is and what it can do, and risks giving the public a distorted picture of the current state of the field and the implications of its continued development. AI can perform challenging tasks such as language translation, image recognition, and numerical data analysis, but it cannot generate genuine insight, hold a subjective viewpoint, or exercise independent judgment. This distinction is important, yet media accounts of AI achievements often overlook it, dwelling on the astonishing aspects of new technology rather than explaining how such systems actually work.
Spreading myths about AI can have unpredictable and undesirable consequences. A misinformed public may press for policies grounded in unproven fears rather than demonstrated effects. Moreover, sensational stories can obscure the real ethical and social problems connected with AI, such as data protection, biases encoded in machine-learning models, and accountability for decisions made by AI systems in domains like justice and medicine. Without an adequate understanding of these issues, it is difficult to formulate policies that address the genuine problems posed by AI technologies.
Thus, members of both the tech industry and the media bear a share of the responsibility for explaining to society what AI can and cannot do. For tech companies like Google, this means being honest about what their AI products can and cannot deliver, without hyping them. For media outlets, it means resisting appeals to emotion and consulting a broad range of experts who can explain the technology from different angles. Only in this way can we foster the responsible and beneficial development and use of artificial intelligence, and ensure that the public is equipped to discuss and adopt new technologies accurately and critically.