This article highlights a few of the recognised risks of AI apps, including ‘AI-generating apps’ like ChatGPT (by OpenAI) and Bard (by Google). It is Part 2 in a series of articles on AI terms and definitions and AI-generating response apps (chatbots).
Perceived Risks of AI Apps
As detailed in AI terms and definitions (Article 1), AI-generative applications or ‘query response apps’ can:
- Rapidly synthesize information in response to well-phrased questions
- Create well-structured paragraphs
- Avoid copyright infringement issues (at least, according to the AI-generative apps’ own claims)
- Design digital content ranging from website pages to videos
- Improve website ‘findability’ in browser searches
- Select relevant images for your blogs and other publications
- Perform complex calculations and identify emerging trends
- Diagnose health conditions
- And more!
These apps are equivalent to having an on-call virtual assistant who can help you with numerous tasks, particularly tasks that involve writing, image selection, multimedia file editing, video scripting, or data summaries. A virtual assistant with extensive knowledge. A virtual assistant ‘who’ is extremely fast and will respond to queries at any hour of the day or night.
They can save you a lot of time for little cost. That is, unless you factor in the cost of inaccuracy, and the potential disruption to your sector, over time.
AI app outputs have notable limitations
While AI apps can save you time and enhance your productivity, it’s important to recognise that there can be issues with the accuracy (or rather, the inaccuracy) of AI-generated content.
- Erroneous, incomplete and/or unrepresentative datasets are known to limit AI app objectivity and perpetuate existing biases (Source: Deloitte)
- Accuracy issues are further amplified when such content is generated, published or circulated by individuals who are:
- Not adept at ‘query wording’ and proof-reading
- Not overly familiar with the topic being queried
- Not skilled at spotting inconsistencies
This is why AI-generated content requires oversight by at least one human who is very knowledgeable about the topic and purpose for which the content is being generated.
A cursory review of AI-generated content (before publishing) is rarely enough!
Before we explore other recognised risks of AI apps, let’s look at the expanding use of artificial intelligence apps in the life sciences industry.
AI use in the pharmaceutical and medical device sectors
In the life sciences sector (including the pharmaceutical and medical device manufacturing industries), Artificial Intelligence (AI) apps are increasingly being used to:
- Read/interpret medical scans (radiographic images/x-rays, PET scans, CT scans, MRIs)
- Diagnose medical conditions including cancers
- Suggest medical treatment parameters
- Refine or automate manufacturing, labelling, and/or distribution processes for medicines and medical devices
- Detect fraudulent medicines and compromised supply chains
- Propose new drug formulations and streamline drug development pipelines
- Predict potential safety risks of new medications and detect early ‘safety signals’
- Support gene editing technologies (like CRISPR) and vaccine development
- Improve fertility/IVF services
- And so much more
AI Apps: Benefits vs Risks
While the sudden explosion of AI-generating applications is a relatively recent occurrence, these AI apps were not developed overnight.
- The evolution of artificial intelligence (AI) technologies took decades (well over 60 years) to reach its current level of “usefulness” to humans.
- This includes the development of the latest iterations of chatbot apps, which emerged from cumulative advances in information technology (IT), computer programming, and large data collections/large language models (LLMs).
- Yet in a very short period of time, these apps have already transformed our lives.
From saving time to reducing costs, and from generating presentation outlines, blog titles, and historical summaries to drafting marketing text, the benefits of AI-driven apps seem endless.
So, too, are their potential risks.
The question is, “Will AI apps ultimately transform our lives for the better or for worse?”
And what regulatory frameworks are required for AI-generating apps? What controls will ensure the safe use of AI-generating apps in a global population that is:
- Predominantly internet-connected?
- Tech-friendly/tech-savvy, yet increasingly vulnerable to predators and computer viruses?
- Keen to use AI-generating apps (like ChatGPT or Bard) to simplify their lives and/or increase their outputs?
- Prone to falling prey to criminals (scammers) who deploy advanced digital technologies to conduct identity theft, data piracy, large-scale financial fraud, and other crimes?
How can we ensure the benefits of AI-driven apps outweigh their risks?
These questions are likely to remain unanswered for at least a few more years.
This is why IT industry experts find the rapid adoption of AI-generating apps, and their unprecedented capacity, so concerning.
AI-Driven Apps: Risks & Potential Harms
- It is widely feared that AI-driven apps have the capacity to cause significant harm to society (with or without human intervention).
- A few months into the mass early-adoption trend for AI apps/chatbots, an ‘open letter’ aimed to stop (or at least pause) AI-app development until such risks could be further assessed and appropriately mitigated.
- This open letter was signed by numerous high-profile software experts; but as noted in Article 1, the horse had already bolted.
The broader public is enamoured, if not downright smitten, with AI-content-generating apps such as ChatGPT and Bard. Most users, however, are unfamiliar with the potential risks.
While over 33,000 individuals signed an open letter urging a temporary halt to AI app development while risks are assessed, ChatGPT alone already has over 100 million users. The number of people concerned enough to sign the open letter, as a proportion of those already using AI-generating content apps, is tiny: roughly 0.03%, far less than 1%.
Connie May, MHST, PharmOut Pty Ltd. Refer to: Future of Life: Open Letter to Pause Giant AI Experiments
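For context, here is the arithmetic behind that figure, using the signatory and user counts quoted above:

\[
\frac{33{,}000 \ \text{signatories}}{100{,}000{,}000 \ \text{ChatGPT users}} \times 100\% \approx 0.033\%
\]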
Risk-benefit ratio of AI apps: Why the alarm?
As with all powerful technologies, including AI software:
- A risk-to-benefit analysis, and
- Appropriate risk management approaches (safety measures and regulatory oversight)
remain crucial considerations for product release — whether you’re calling it ‘an experimental app’ or not!
- As anyone in the life sciences sector understands, all changes incur risk.
- Every technology, every product type, and every significant change should be thoroughly assessed by experts for its potential to lead to unintended harm.
Reminder: any computerised system used for GMP activities must be validated, and data integrity must be ensured through appropriate data governance measures. Click here to learn more about quality risk management strategies in the life sciences sectors and Quality Management Systems for medical devices and pharmaceutical manufacturing.
What are some of the risks of AI apps including ChatGPT and Bard?
It’s challenging to quantify or qualify the risks of AI apps at this stage of adoption. That’s because there are numerous AI applications (uses) and user profiles.
- Some writers fear the ‘extreme’ worst: that AI-based technologies will eventually overpower and/or destroy human beings and life as we currently know it.
- Read an interesting TIME magazine article on potential AI risks by clicking here.
- Other potential risks are seen as more ‘disruptive’ than destructive.
- These risks include job losses and major shifts in how consumers seek information, as well as some of the risks we’ll cover today.
Help or hinder? Design, create, enhance…or destroy?
Today’s AI-driven apps are far more capable than the software products that preceded them. The implications of widespread adoption of these ‘experimental’ AI technologies, including chatbots, remain unknown.
While we can recognise health and safety risks related to the amplification of misinformation, it is too early to comprehend the full range of risks related to AI-driven technologies.
What is clear, however, is that AI-driven software technologies have a ‘dual capacity’ to benefit individuals and their communities as well as to harm them. To help as well as hinder. To create life and livelihoods, and to disrupt or destroy them… and who knows what else is yet to come?
Below are some examples of the risks related to AI-driven apps like ChatGPT and Bard.
AI App Risks: Proliferation of misinformation
- AI-generating apps can get things wrong, and they often do!
- Erroneous outputs generally result from:
- Inaccurate query phrasing (poorly worded questions), and/or
- The “garbage in = garbage out” equation.
Incorrect AI-generated content also originates from biases and other inaccuracies in the datasets accessed by the software. It helps to recognise how rapidly misinformation has spread on (and via) the internet in recent decades, particularly during the pandemic period, ‘thanks’ to social media sharing, predatory sales groups, and a gullible/mistrusting public.
- AI-generating response apps can (and will) generate erroneous content.
- AI-app users are likely to publish this content, often unaware of its inaccuracy, further contributing to misinformation on the web (and leading to potentially serious harm).
Without clear source references (from which AI apps generate their responses), you may find it challenging to verify the information, especially if you’re not an expert in that topic area.
Potential solution: Gate-keeping AI technologies that fact-check content against evidence-based publications before generating content or permitting publication.
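As an illustration of this gate-keeping idea, here is a minimal Python sketch of a publication gate that withholds approval until every claim cites a trusted source. The `Claim` structure, `TRUSTED_SOURCES` allow-list, and `publication_gate` function are hypothetical names for illustration, not an existing API.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source_url: str | None  # reference supplied with the AI output, if any

# Hypothetical allow-list of evidence-based publishers (illustrative only)
TRUSTED_SOURCES = {"who.int", "fda.gov", "tga.gov.au", "nejm.org"}

def is_trusted(url: str | None) -> bool:
    """A claim passes only if it cites a source on the allow-list."""
    return url is not None and any(domain in url for domain in TRUSTED_SOURCES)

def publication_gate(claims: list[Claim]) -> tuple[bool, list[Claim]]:
    """Block publication until every claim carries a trusted reference.

    Returns (approved, unverified_claims); unverified claims are routed
    to a human reviewer instead of being published automatically.
    """
    unverified = [c for c in claims if not is_trusted(c.source_url)]
    return (not unverified, unverified)

# Usage: upstream logic (not shown) splits the AI draft into discrete claims
draft = [
    Claim("Drug X reduced relapse rates by 30%.", "https://www.nejm.org/..."),
    Claim("Drug X has no known side effects.", None),
]
approved, flagged = publication_gate(draft)
print(approved)                    # False: one claim lacks a trusted source
print([c.text for c in flagged])   # ['Drug X has no known side effects.']
```

In practice, the hard part is the upstream step this sketch assumes: reliably splitting AI-generated prose into discrete, checkable claims.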
AI App Risks: Missing source credits
- Search engine responses (Google query results) include sources and relevant links
- But this is not generally the case with AI-generative technologies.
- Risks of missing source information include:
- Failure to verify the reputation/trustworthiness of the information source
- Failure to detect contextual inconsistencies/other errors in ‘paraphrased’ materials
- Failure to appropriately credit the information originator or content owner
- Possible copyright and/or patent infringement issues
- AI-generating apps claim the content they generate will be free from such issues, but it is likely copyright infringement claims will emerge over time
- For example, AI (reverse image search) is already used to pursue the unauthorised use of protected images
Potential solution: Ensuring adequate resources are available to review, verify and finesse AI-generated content, including verifying output accuracy against credible sources.
AI App Risks: Criminal use
- Criminals already use (misuse) digital technologies, including AI-driven technologies, to perpetrate serious crimes.
- As with all powerful technologies, risks are amplified by improper use.
- This leaves individuals and institutions more vulnerable to cyber-attacks, deepfake videos, extortion threats, website piracy, espionage, and banking theft, to name a few.
Potential solution: AI-driven applications can be used to deter and detect crimes, including generating evidence to hold criminals to account. E.g., AI-generated audit trails, identity tracing/location tracking, facial recognition technologies, linking databases across legal jurisdictions, blockchain-protected supply chains, identity verification technologies, pharmaceutical supply logistics tracking, and more.
AI App Risks: Customer service levels and client loyalty
Another risk of AI apps like ChatGPT and Bard is the inability to discern AI chatbot interactions from human ones. At least, initially.
- Prospective customers and existing clients can become frustrated or disillusioned with a brand when a ‘customer service team member’ (a chatbot or otherwise) fails to respond effectively to what they’re asking.
- While newer chatbots can mimic human responses to queries fairly well, clients can end up feeling ‘duped’ when they realise they’ve been interacting with a sophisticated ‘bot’ and not a human (no one likes to waste their time).
Possible solution: Client loyalty in competitive times is important, so avoid excessive reliance on AI-driven chatbots (or underperforming response-generating AI apps).
Summary of Risks of AI Apps and AI-generated Content
Regardless of the industry, risk management will be crucial!
This includes drafting AI app use policies and standards, especially when AI-generated content is being used in life sciences, manufacturing, and communication environments.
GMP example: You need to ensure compliance with PIC/S Annex 11, GAMP 5 standards, and FDA CSA guidance when you upgrade computerised systems in a GMP facility.
You should approach AI-driven chatbot apps with a similar ‘risk-based’ approach.
Guidance for employees who use AI apps such as ChatGPT, Bard, or similar response-generating ‘AI chatbot’ technologies
Many of your employees are likely already using AI chatbot technologies.
In terms of providing workplace guidance for ChatGPT or Bard use, you might consider the following:
- Draft a policy for AI app use and circulate it to your team and IT experts for feedback
- Conduct a risk assessment (risks-to-benefits analysis based on the intended use of AI-generated content)
- Investigate and document the AI app’s strengths and weaknesses
- Choose your AI app based on supplier qualification principles/risk analysis
- Document the parameters of the intended use of ChatGPT, Bard and similar software (the purposes and intended outcomes of AI-generating software use should be well understood)
- Train employees in how to optimise inputs (query phrasing) to maximise outputs
- And importantly, mandate thorough human checks of all AI-generated outputs/content, so that such output is reviewed and/or approved by people who possess the relevant knowledge/qualifications in relation to the content topic and the intended use of the generated content (see the sketch below)
Because despite the capabilities of apps like ChatGPT and Bard, AI-generated content is more a pancreas than a panacea. And when something goes wrong with either of these two, it doesn’t bode well.
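To make the mandatory-human-review point above concrete, here is a minimal Python sketch of a review gate that records who approved a piece of AI-generated content, and for what intended use. The record fields and the `qualified_topics` check are assumptions for illustration, not a prescribed system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Reviewer:
    name: str
    qualified_topics: set[str]   # topics this person is qualified to approve

@dataclass
class AIContentRecord:
    topic: str
    intended_use: str            # e.g. "marketing blog", "training material"
    content: str
    approved: bool = False
    review_log: list[str] = field(default_factory=list)

def human_review_gate(record: AIContentRecord, reviewer: Reviewer) -> bool:
    """Approve AI-generated content only when the reviewer is qualified for
    the content topic; every decision is logged for traceability."""
    stamp = datetime.now(timezone.utc).isoformat()
    if record.topic not in reviewer.qualified_topics:
        record.review_log.append(
            f"{stamp}: rejected, {reviewer.name} not qualified for '{record.topic}'")
        return False
    record.approved = True
    record.review_log.append(
        f"{stamp}: approved by {reviewer.name} for use as {record.intended_use}")
    return True

# Usage
record = AIContentRecord(topic="pharmacovigilance", intended_use="blog post",
                         content="<AI-generated draft>")
reviewer = Reviewer(name="A. Expert", qualified_topics={"pharmacovigilance", "GMP"})
print(human_review_gate(record, reviewer))   # True; decision captured in review_log
```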
Do your current data integrity measures meet GMP requirements?
Learn more about Data Integrity requirements in a GMP environment.
Need help with your medical device quality management system documentation?
If you are working with medical devices, including medical device software (Software as a Medical Device) or medical device quality management systems (ISO 13485), browse our online medical device quality management & regulatory compliance training courses.
Life Science & Regulatory Affairs Services by GMP Experts at PharmOut
PharmOut also offers expert GMP consultant services including support for Regulatory Affairs or GMP Audit Findings/Audit Responses, Documentation management systems and SOPs, Pharmacovigilance system designs, and more.
References & Further Reading
Science Direct: AI in the Life Sciences
Life Sciences: AI Use Cases and Executive Trends (An Executive Brief)
Artificial Intelligence in the Real World (Harvard Business Review)
A Guide to Cross-Validation for Artificial Intelligence in Medical Imaging
Risks of Using AI Apps in Banking Lending
Harvard Business Review: Risks and Benefits of Using AI to Detect Crimes
Mind Matters: How Criminals Use AI to Conduct Crimes
VIDEO: Using Artificial Intelligence in Radiology Clinical Practice