When dealing with powerful tools, whether a medical device or sophisticated artificial-intelligence software, a clear understanding of their intended purpose is critical. In the field of AI, one of the most common misunderstandings arises from the use of tools such as ChatGPT, an advanced language generation program from OpenAI.
It's quite common for people to use such AI tools for the wrong purposes. This misunderstanding can lead to unrealistic expectations and a disregard for the actual capabilities of AI technology. So what is the right way to understand and use these powerful tools? Let's take a closer look at ChatGPT as an example.
At its core, ChatGPT is designed to mimic human language patterns. It does this by processing massive amounts of text. Let's consider a scenario where you feed ChatGPT a complex physics problem. If it returns an answer that sounds plausible but is physically incorrect, should we dismiss ChatGPT as a failure? Not at all. In fact, this scenario illustrates that ChatGPT is doing exactly what it was designed to do. It generates language that feels contextually appropriate.
However, like any tool, ChatGPT has limitations due to its design. It does not contain a semantic network that systematically maps the world. It's not encoded with structured concepts like physical laws or the distinction between abstract and concrete concepts. For ChatGPT, language is not a reflection of the world; it is the world.
One of the direct consequences of this design choice is that ChatGPT does not monitor itself as it speaks. The language it generates is devoid of subtext, irony, personal opinions, and moral convictions. It lacks these human characteristics simply because producing them is not what it was designed to do. It's like expecting a photo of a flower to have a scent: an unfair expectation, to say the least.
This lack of self-monitoring and validation creates challenges when we try to use ChatGPT in highly regulated areas such as clinical evaluations. The unpredictable nature of its responses and its inability to understand and apply structured concepts make it a poor fit for these areas.
But while today's AI, like ChatGPT, may not be suitable for all applications, we should not be quick to dismiss the potential of AI. We are already seeing promising advances in AI technology. An AI built on more sophisticated structures, such as an eTD (Digital Technical Documentation), rule-based operations, and a network of established truths, may gain the ability to self-monitor and adjust its statements. With such capabilities, the sky is the limit.
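To make the idea of "rule-based self-monitoring" concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the rule base, the function name, and the example claims are invented for illustration and do not describe any real AI system.

```python
# Toy "network of established truths": a rule base mapping a
# (subject, predicate) pair to the set of values accepted as true.
# All entries are illustrative placeholders.
ESTABLISHED_TRUTHS = {
    ("water", "boiling_point_celsius_at_1atm"): {100},
    ("class_iii_device", "requires_clinical_evaluation"): {True},
}

def self_monitor(subject: str, predicate: str, claimed_value) -> bool:
    """Check a generated claim against the rule base before emitting it.

    Returns False if the claim contradicts an established truth,
    True if it is consistent or if no rule covers it.
    """
    allowed = ESTABLISHED_TRUTHS.get((subject, predicate))
    if allowed is None:
        return True  # no applicable rule: the claim passes unchecked
    return claimed_value in allowed

# A physically incorrect but plausible-sounding claim gets flagged:
print(self_monitor("water", "boiling_point_celsius_at_1atm", 90))   # False
print(self_monitor("water", "boiling_point_celsius_at_1atm", 100))  # True
```

The point of the sketch is the architectural difference: unlike a pure language model, a system with such a layer has an explicit place to catch statements that contradict what it has been told is true.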
The current state of AI brings to mind the old adage about the folly of rushing to judgment. While it's tempting to laugh at the humorous mistakes of AI today and say "real intelligence can never look like this," we must remember that the direction of AI is far from set in stone. AI is still in its formative stages, and it's evolving at a rapid pace.
So where do we start if we want to harness the power of AI in the future? The key is structured information. For example, digitizing your technical documentation (eTD) for EU MDR (EU Medical Device Regulation) and IVDR (In Vitro Diagnostic Regulation) could be an excellent starting point. This structured information will form the basis for automation and AI capabilities in the future.
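As a rough illustration of why structured documentation enables automation, here is a sketch of a technical-documentation section represented as machine-readable data. The field names and values are invented for this example; they are not the actual EU MDR eTD schema.

```python
# Hypothetical structured representation of a piece of technical
# documentation. Field names are illustrative only.
etd_section = {
    "document": "Clinical Evaluation Report",
    "regulation": "EU MDR 2017/745",
    "device": {"name": "ExampleDevice", "class": "IIa"},
    "sections": [
        {"id": "1", "title": "Scope", "status": "approved"},
        {"id": "2", "title": "Clinical Background", "status": "draft"},
    ],
}

# Because the content is structured rather than free text, even trivial
# automation becomes possible, e.g. listing sections that still need work:
open_items = [
    s["title"] for s in etd_section["sections"] if s["status"] != "approved"
]
print(open_items)  # ['Clinical Background']
```

A query like this is impossible against a scanned PDF; against structured data it is one line, and more ambitious AI-driven checks build on the same foundation.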
As we look to a future where AI plays a larger role in areas such as regulatory affairs and medical writing, it's important to understand the capabilities and limitations of current AI tools. The insights we can gain from understanding tools like ChatGPT can guide us.
This reflection on AI was inspired by the insightful thoughts of Florian Aigner. His perspective reminds us that to realize the full potential of AI, we must first understand its purpose and limitations. That applies in the medtech industry, too.