Large Language Model
Large Language Models (LLMs), such as the models behind ChatGPT, are a type of ‘generative artificial intelligence’: machine learning systems trained on enormous datasets so that they can produce new, synthetic data. Related generative models, such as DALL-E, do the same for images. They respond to a simple written instruction known as a ‘prompt’, and the data they produce can look almost indistinguishable from real data.
For example, you can tell ChatGPT to ‘describe the uses of data in healthcare’ and it will ‘write’ an answer for you that seems remarkably human. Alternatively, you can tell DALL-E to create a picture of a ‘cat in space’ and it will produce very realistic artwork. The responses generated by LLMs are so realistic that generative AI can seem almost like magic. In reality, the ‘magic’ is prediction.
Generative AI learns the patterns present in very large datasets and uses this knowledge to predict likely sequences, such as the most likely next word in a sentence.
Here’s how it works
- You provide the model with a prompt, such as ‘describe the uses of data in healthcare’.
- This text is automatically split into words and smaller units known as ‘tokens’, which are then encoded into a numerical format that the model can understand and process.
- The model combines this encoded input with its context (e.g. your previous conversation history with ChatGPT) to capture the meaning of what you have written.
- The model then generates text in an ‘autoregressive’ manner, meaning it predicts the next word based on the context of your input and the text it has already generated, drawing on the patterns it learned during training on enormous datasets (see the sketch after this list).
- The model continues to generate text and build context, repeating the process until a stopping condition is met (e.g. maximum length or end of a sentence).
- As you continue to interact with the model, it takes into account the context of your entire conversation history, allowing it to generate more relevant and meaningful replies.
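To make the autoregressive loop described above concrete, here is a minimal Python sketch using the openly available GPT-2 model from the Hugging Face transformers library. It is purely illustrative: production LLMs are far larger and use more sophisticated sampling than the greedy choice shown here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: GPT-2 is a small, openly available language model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Describe the uses of data in healthcare:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids  # the prompt becomes a sequence of tokens

# Autoregressive loop: predict the next token, append it, and repeat
for _ in range(40):
    with torch.no_grad():
        logits = model(input_ids).logits             # scores for every possible next token
    next_token = logits[0, -1].argmax()              # greedy choice of the most likely next token
    input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=-1)
    if next_token.item() == tokenizer.eos_token_id:  # stopping condition: end-of-sequence token
        break

print(tokenizer.decode(input_ids[0]))
```

Each pass through the loop repeats the steps above: the full sequence so far is fed back into the model, the most likely next token is appended, and generation stops when an end-of-sequence token or the length limit is reached.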
Here is an example
Generative AI has a very wide range of potential uses in medical research:
- It can help researchers search for previously published research on their topic of interest
- It can write papers in clear and plain language
- It can write computer code to help analyse data
- It could even potentially generate synthetic datasets to study during clinical trials
All these uses have the potential to benefit medical research significantly. However, it is important not to get carried away. Generative AI can also create problems. For example, it can generate misleading or inaccurate information that might cause harm, or it might produce synthetic datasets so similar to the real patient data that they create a privacy risk. This is why it is important that researchers are curious but cautious about the potential of generative AI for medical research.
Popularised by models such as ChatGPT and DALL-E, generative AI (including Large Language Models, or LLMs) is a form of machine learning (typically deep learning) capable of generating new data in a range of formats and adapting to new tasks in real time, in response to simple written prompts.
This makes generative AI considerably more flexible than ‘narrower’ types of AI, as one model can complete many tasks without having to be re-trained. As such, there is a significant range of potential uses for generative AI in healthcare, across both direct care and research.
From the perspective of direct care, generative AI, and especially LLMs, has the potential to be used for triaging or diagnostic chatbots, to help overcome language barriers in patient-doctor communications, to take notes during consultations, and more. As exciting as these potential use cases are, they raise several complex ethical and regulatory challenges that will be difficult to overcome.
It is clear that any LLMs used in direct care will need to be regulated as medical devices, yet it is still unclear how exactly medical device law will apply to generative AI and how developers will be expected to meet requirements such as ‘evidence of efficacy’. It is therefore likely to be a long time before generative AI has an impact on direct care at scale. In contrast, generative AI is already having an impact on medical research.
Generative AI can be used to assist researchers with almost every stage of the research process, including:
- summarising previous research
- identifying knowledge gaps
- generating hypotheses
- helping to design experiments
- helping researchers write analytical code
- helping draft, edit and disseminate research papers
Several uses are of particular note:
- Synthetic data generation
Access to patient data, both structured, such as the coded fields in electronic health records (EHRs), and unstructured, such as free-text clinical notes and imaging data, is essential for most contemporary medical research projects.
However, patient data also contains highly sensitive information, often suffers from ‘missingness’, and is frequently biased or unrepresentative of clinical populations. As a result, it can be challenging to provide researchers with the quantity of high-quality data needed to train highly accurate models for research purposes.
Generative AI’s ability to ‘learn’ the structure (or statistical regularities) of patient data, and to use this knowledge to generate synthetic datasets that retain high fidelity to the original ‘real’ data, can help overcome these issues. Generative AI, particularly generative adversarial networks (GANs) and variational autoencoders (VAEs), can be used to produce entirely new synthetic datasets for training, scoping, or development purposes.
For example, a recent study showed that generative AI could produce synthetic retinal images that were, to expert consultants, indistinguishable from real patient images. In such cases, the synthetic data is primarily used to mitigate patient privacy concerns. Generative AI can also be used for ‘augmentation’, generating synthetic data that fills in ‘gaps’ to help mitigate issues with missing data or bias.
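As an illustration of the GAN idea mentioned above, the following sketch trains a toy generator and discriminator on a stand-in table of numeric patient features. The data, network sizes, and training settings are hypothetical placeholders, not a production synthetic-data pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 10 features per synthetic patient record, 32-dimensional noise
n_features, noise_dim, batch = 10, 32, 256

generator = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, n_features))
discriminator = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(batch, n_features)  # stand-in for a real (standardised) patient table

for step in range(1000):
    # Train the discriminator: real records -> 1, synthetic records -> 0
    fake = generator(torch.randn(batch, noise_dim)).detach()
    d_loss = bce(discriminator(real_data), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: fool the discriminator into scoring synthetic records as real
    fake = generator(torch.randn(batch, noise_dim))
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

synthetic_records = generator(torch.randn(100, noise_dim))  # 100 synthetic patient rows
```

In practice, purpose-built architectures and careful privacy evaluation of the resulting synthetic records would be needed before such data could be shared.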
- Data harmonisation
Creating large, representative datasets for medical research typically requires linking or integrating multiple datasets, which often use different formats. This integration, and the subsequent curation of the combined dataset, is a complex, expensive, and labour-intensive process. Generative AI models, including LLMs, represent all data types in a uniform way, typically as a list of numbers (a vector, in mathematical terms). This process, referred to as ‘embedding’ the data, has considerable potential to make data harmonisation a far more efficient and cost-effective process.
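As a simple illustration of how embeddings can support harmonisation, the sketch below embeds field names from two hypothetical hospital datasets and matches them by cosine similarity. The sentence-transformers model is just one possible choice of text encoder, and real harmonisation pipelines must also handle codes, units, and values, not just field names.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical field names from two hospital datasets that need to be harmonised
fields_a = ["systolic blood pressure", "body mass index", "date of birth"]
fields_b = ["BP (systolic, mmHg)", "BMI", "DOB"]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text-embedding model would do
emb_a = model.encode(fields_a, normalize_embeddings=True)  # each field name becomes a vector
emb_b = model.encode(fields_b, normalize_embeddings=True)

# Cosine similarity between every pair of field embeddings
similarity = emb_a @ emb_b.T
for i, field in enumerate(fields_a):
    match = fields_b[int(np.argmax(similarity[i]))]
    print(f"{field!r} most closely matches {match!r}")
```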
- Drug discovery
Generative AI is not only capable of generating text and images, but also novel small molecules, nucleic acid sequences, and proteins with desired structures or functions. This means that generative AI can be used in drug discovery to propose and screen a range of candidate therapeutics very quickly for a specific condition or patient profile, greatly speeding up personalisation and optimisation.
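For example, small molecules are often represented as SMILES strings, and candidates proposed by a generative model can be screened cheaply before any laboratory work. The sketch below shows only a trivial first step, filtering hypothetical generated strings for chemical validity with RDKit; real pipelines would go on to score properties such as binding affinity and toxicity.

```python
from rdkit import Chem

# Hypothetical candidates produced by a generative model trained on SMILES strings
candidate_smiles = ["CCO", "c1ccccc1O", "C1=CC=CN=C1", "C(C(=O)O)N", "not-a-molecule"]

# Keep only candidates that parse into chemically valid molecules
valid = [s for s in candidate_smiles if Chem.MolFromSmiles(s) is not None]
print(f"{len(valid)} of {len(candidate_smiles)} candidates are valid molecules: {valid}")
```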
- Clinical trials
Generative AI has a number of potential applications in the field of clinical trials. Synthetic data can, for example, be used to simulate patient populations, treatment groups, and outcomes. This can help researchers optimise trial designs, estimate the population efficacy and safety of interventions, and identify potential biases and confounders.
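The sketch below illustrates the simplest version of this idea: simulating synthetic outcomes for a two-arm trial and estimating statistical power at different sample sizes. The effect size, outcome distribution, and statistical test are hypothetical choices made purely for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def trial_is_significant(n_per_arm, effect=0.5, sd=2.0, alpha=0.05):
    """Simulate one two-arm trial with synthetic outcomes and test for a treatment effect."""
    control = rng.normal(0.0, sd, n_per_arm)     # synthetic control-arm outcomes
    treated = rng.normal(effect, sd, n_per_arm)  # synthetic treatment-arm outcomes
    return ttest_ind(treated, control).pvalue < alpha

# Estimate power (probability of detecting the effect) at several sample sizes
for n in (25, 50, 100, 200):
    power = np.mean([trial_is_significant(n) for _ in range(2000)])
    print(f"n per arm = {n:3d}  estimated power = {power:.2f}")
```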
Overall, the potential impact of generative AI on medical research is significant and could be very positive. It is, however, important to remember that generative AI is a ‘dual-use’ technology: one that can be used for beneficent or maleficent purposes. Just as generative AI offers researchers many potential benefits (as outlined above), it also presents a number of problems. For example, synthetic data may still be biased and may still be vulnerable to re-identification attacks (such as model inversion attacks) that could threaten patient privacy. In addition, generative AI has no ‘semantic understanding’: it may summarise information incorrectly or, when generating text, ‘hallucinate’ false information (including fake citations). This is why it is important to embrace generative AI mindfully rather than blindly.
An Owkin example
In the paper “How will generative AI disrupt data science in drug discovery?”, published in Nature Biotechnology, Owkin scientist Jean-Philippe Vert outlines in detail the many ways in which generative AI might disrupt how scientists and engineers understand biology and discover and develop new treatments.
Further reading
- Ananthaswamy, A. 2023. ‘In AI, Is Bigger Always Better?’ Nature 615(7951): 202–5.
- Averitt, A.J., N. Vanitchanant, R. Ranganath, and A.J. Perotte. 2020. ‘The Counterfactual χ-GAN: Finding Comparable Cohorts in Observational Health Data’. Journal of Biomedical Informatics 109.
- Baowaly, M.K., C.-C. Lin, C.-L. Liu, and K.-T. Chen. 2019. ‘Synthesizing Electronic Health Records Using Improved Generative Adversarial Networks’. Journal of the American Medical Informatics Association 26(3): 228–41.
- Chen, Richard J. et al. 2021. ‘Synthetic Data in Machine Learning for Medicine and Healthcare’. Nature Biomedical Engineering 5(6): 493–97.
- Ghosheh, Ghadeer, Jin Li, and Tingting Zhu. 2022. ‘A Review of Generative Adversarial Networks for Electronic Health Records: Applications, Evaluation Measures and Data Sources’. https://arxiv.org/abs/2203.07018 (June 15, 2023).
- Harrer, Stefan. 2023. ‘Attention Is Not All You Need: The Complicated Case of Ethically Using Large Language Models in Healthcare and Medicine’. eBioMedicine 90: 104512.
- Heaven, Will Douglas. 2022. ‘Why Meta’s Latest Large Language Model Survived Only Three Days Online’. MIT Technology Review. https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/ (June 15, 2023).
- Hosseini, Mohammad, Lisa M. Rasmussen, and David B. Resnik. 2023. ‘Using AI to Write Scholarly Publications’. Accountability in Research: 1–9.
- Jadon, Aryan, and Shashank Kumar. 2023. ‘Leveraging Generative AI Models for Synthetic Data Generation in Healthcare: Balancing Research and Privacy’. https://arxiv.org/abs/2305.05247 (June 14, 2023).
- Korngiebel, Diane M., and Sean D. Mooney. 2021. ‘Considering the Possibilities and Pitfalls of Generative Pre-Trained Transformer 3 (GPT-3) in Healthcare Delivery’. npj Digital Medicine 4(1): 93.
- Li, H. et al. 2023. ‘Ethics of Large Language Models in Medicine and Medical Research’. The Lancet Digital Health 5(6): e333–35.
- Li, J., B.J. Cairns, J. Li, and T. Zhu. 2023. ‘Generating Synthetic Mixed-Type Longitudinal Electronic Health Records for Artificial Intelligent Applications’. npj Digital Medicine 6(1).
- Liebrenz, Michael et al. 2023. ‘Generating Scholarly Content with ChatGPT: Ethical Challenges for Medical Publishing’. The Lancet Digital Health 5(3): e105–6.
- Moor, Michael et al. 2023. ‘Foundation Models for Generalist Medical Artificial Intelligence’. Nature 616(7956): 259–65.
- Shen, Y. et al. 2023. ‘ChatGPT and Other Large Language Models Are Double-Edged Swords’. Radiology 307(2).
- Van Dis, Eva A. M. et al. 2023. ‘ChatGPT: Five Priorities for Research’. Nature 614(7947): 224–26.
- Vert, Jean-Philippe. 2023. ‘How Will Generative AI Disrupt Data Science in Drug Discovery?’ Nature Biotechnology 41(6): 750–51.
- Weisz, J.D. et al. 2022. ‘Better Together? An Evaluation of AI-Supported Code Translation’. In International Conference on Intelligent User Interfaces, Proceedings IUI, 369–91.
- ‘Will ChatGPT Transform Healthcare?’ 2023. Nature Medicine 29(3): 505–6.