
Hassan Taher Investigates When (and When Not) to Rely on AI Models

As artificial intelligence (AI) plays an increasingly important role in our everyday lives, many people are growing uncomfortable with it. This is perfectly understandable given the rising use of AI in critical and sensitive areas such as finance and healthcare.

Fortunately, AI thought leaders such as Los Angeles’ Hassan Taher are keenly aware of the trust issues the average person might have with computer systems that mimic human thought processes and make decisions once reserved for people. “As AI becomes more prevalent, understanding when to trust AI models is crucial,” contends Taher. “Trust in AI systems hinges on several factors, including their accuracy, transparency, and the context in which they are used.”

As an author, Hassan Taher has written hundreds of articles on subjects that relate to AI. He has also published three influential books: The Future of Work in an AI-Powered World, The Rise of Intelligent Machines, and AI and Ethics: Navigating the Moral Maze. Now, he has turned his attention to the essential question of when we should trust AI models and when we shouldn’t.

Taher’s primary concerns about the reliability of AI models center on the accuracy and transparency of these models as well as the context that surrounds them. His focus on accuracy was recently echoed in a study by researchers at the Massachusetts Institute of Technology (MIT).

As reported by MIT News, the study starts from the premise that the level of trust a given AI tool deserves is determined by that tool’s ability to provide accurate information that addresses clear and practical concerns. The study’s principal researchers tackled this issue with a new approach that applies the minimum description length (MDL) principle to improve uncertainty estimates in AI models that employ machine learning.

Known as IF-COMP, this new method makes MDL fast enough for practical use with many of the deep-learning models commonly deployed in real-world settings. According to the researchers, IF-COMP not only produces more reliable uncertainty estimates but also operates with far greater efficiency. They add that “because the technique is scalable, it can be applied to huge deep-learning models that are increasingly being deployed in health care and other safety-critical situations.”

The power of the IF-COMP MDL method lies in its promise to clearly indicate the accuracy of any AI model and the specific level of confidence that users should place in its results. As Hassan Taher points out, this is particularly useful in highly critical fields such as medical diagnostics. “An AI system that predicts a high probability of cancer should also convey the certainty of that prediction,” he insists. “If the model’s confidence is low, it signals the need for further human review and additional tests. This approach ensures that AI is used as a tool to augment human decision-making rather than replace it entirely.”
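To make the idea concrete, here is a minimal sketch in Python of how a model’s prediction might be paired with a confidence score and flagged for human review when that confidence is low. It is purely illustrative and is not the IF-COMP method; the function names, labels, and the 0.85 threshold are hypothetical assumptions, not anything prescribed by the MIT study or by Taher.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a probability distribution; higher means more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def triage(probs, labels, confidence_threshold=0.85):
    """Return the top prediction plus a flag for human review when confidence is low.

    The 0.85 threshold is an arbitrary, illustrative choice.
    """
    top_p = max(probs)
    prediction = labels[probs.index(top_p)]
    return {
        "prediction": prediction,
        "confidence": top_p,
        "uncertainty": predictive_entropy(probs),
        "needs_human_review": top_p < confidence_threshold,
    }

# Example: a diagnostic model outputs class probabilities for one case.
result = triage([0.62, 0.38], labels=["malignant", "benign"])
print(result)  # confidence below threshold -> flagged for further review and tests
```

The point of the sketch is simply that the system reports its confidence alongside its answer, so a low-confidence prediction triggers additional human review rather than being acted on automatically.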

According to Taher, the MIT study “emphasizes that AI models must not only be accurate but also capable of indicating the level of confidence in their predictions.” This means faithfully reporting accuracy levels and being transparent about everything from information sources to operational intent. Taher goes on to state that an emphasis on transparency “helps users understand the potential risks and limitations of relying on AI, fostering greater trust.”

However, the relationship between AI project transparency and public trust can be complicated, to say the least. Take, for example, a recent article in the Harvard Business Review titled “People May Be More Trusting of AI When They Can’t See How It Works.” This article cites a study by Michael Menietti and Luca Vendraminelli of Harvard University in partnership with Timothy DeStefano of Georgetown University and Katherine Kellogg of MIT.

Analyzing the stocking decisions of an American luxury fashion retailer for hundreds of products across 186 stores, these researchers reached some surprising results. The retail professionals in the study received similar recommendations from various AI platforms. Half of these recommendations came from platforms that were quite easy to understand, and the other half came from platforms that were deliberately impossible to decipher. Comparing all stocking decisions, the researchers found that employees followed the advice of the undecipherable algorithm with significantly greater frequency.

Hassan Taher views the results of this study as an intriguing paradox that ultimately stresses the urgent need to balance transparency with usability. “This finding suggests that too much transparency can sometimes lead to confusion and mistrust,” he writes. “When users are exposed to the inner workings of complex AI models, they may feel overwhelmed and uncertain about the technology’s reliability. AI developers must find ways to communicate the capabilities and limitations of their models without inundating users with technical details. Simplifying explanations and focusing on practical outcomes can enhance trust and acceptance of AI systems.”

Although issues of accuracy and transparency are supremely important in the world of AI, the context in which AI is used may be the single most important factor when it comes to establishing trust among AI users and the public in general. “Users are more likely to trust AI in well-defined, low-risk environments where the technology has a proven track record,” reports Hassan Taher. “For example, AI-powered spell checkers and grammar tools are widely trusted because their potential for harm is minimal, and they consistently improve user experience.”

On the other side of the coin, AI tools in finance, healthcare, and other areas of potentially catastrophic risk must be subjected to rigorous testing and constant analysis to establish their overall trustworthiness. In these cases, the public at large tends to require far greater assurance that AI has been thoroughly assessed to establish proof of its ability to perform dependably under various conditions. “This trust is often built incrementally as AI systems demonstrate their value and reliability over time,” writes Taher. “By developing AI models that provide accurate uncertainty estimates and balancing transparency with usability, we can enhance trust in these technologies.”
