Computer Science > Computation and Language

Title: Can Foundational Large Language Models Assist with Conducting Pharmaceuticals Manufacturing Investigations?

Authors: Hossein Salami (1), Brandye Smith-Goettler (2), Vijay Yadav (2) ((1) Digital Services, MMD, Merck & Co., Inc., Rahway, NJ, USA, (2) Digital Services, MMD, Merck & Co., Inc., West Point, PA, USA)
Abstract: General-purpose Large Language Models (LLMs) such as the Generative Pretrained Transformer (GPT) and Large Language Model Meta AI (LLaMA) have attracted much attention in recent years. There is strong evidence that these models perform remarkably well on a variety of natural language processing tasks; however, how to leverage them for domain-specific use cases and drive value remains an open question. In this work, we focus on one such use case, pharmaceutical manufacturing investigations, and propose that leveraging an organization's historical records of manufacturing incidents and deviations can help address and close new cases, or de-risk new manufacturing campaigns. Using a small but diverse dataset of real manufacturing deviations selected from different product lines, we evaluate and quantify the performance of three general-purpose LLMs (GPT-3.5, GPT-4, and Claude-2) on tasks related to this goal. In particular, we examine (1) the ability of LLMs to automate the extraction of specific information, such as the root cause of a case, from unstructured data, and (2) the possibility of identifying similar or related deviations through semantic search over the database of historical records. While our results point to the high accuracy of GPT-4 and Claude-2 on the information extraction task, we discuss cases where the interplay between the apparent reasoning and the hallucination behavior of these LLMs becomes a risk factor. Furthermore, we show that semantic search over vector embeddings of deviation descriptions can identify similar records, such as those with a similar type of defect, with a high level of accuracy. We discuss further improvements to enhance the accuracy of similar-record identification.
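
The abstract describes two tasks: extracting fields such as the root cause from unstructured deviation records with an LLM, and retrieving related historical deviations via semantic search over vector embeddings. The sketch below is a rough illustration of those two ideas only, not the authors' implementation; the functions call_llm and embed_text, the prompt wording, the historical record dictionary, and the top_k parameter are hypothetical placeholders, since the paper's metadata does not disclose the prompts, embedding model, or retrieval setup used.

    import numpy as np

    def call_llm(prompt: str) -> str:
        """Hypothetical placeholder for a foundation-model API (e.g., GPT-4 or Claude-2)."""
        raise NotImplementedError("plug in an LLM client here")

    def embed_text(text: str) -> np.ndarray:
        """Hypothetical placeholder for a sentence-embedding model."""
        raise NotImplementedError("plug in an embedding model here")

    # Task (1): extract a specific field (here, the root cause) from an
    # unstructured deviation record using a plain instruction prompt.
    def extract_root_cause(deviation_text: str) -> str:
        prompt = (
            "Read the following manufacturing deviation record and state its "
            "root cause in one sentence. If no root cause is identified, say so.\n\n"
            + deviation_text
        )
        return call_llm(prompt)

    # Task (2): find historical deviations similar to a new one by ranking
    # cosine similarity between vector embeddings of their descriptions.
    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rank_similar_deviations(new_description: str,
                                historical: dict[str, str],
                                top_k: int = 5) -> list[tuple[str, float]]:
        """Return the top_k historical record IDs most similar to the new description."""
        query_vec = embed_text(new_description)
        scored = [(record_id, cosine_similarity(query_vec, embed_text(text)))
                  for record_id, text in historical.items()]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return scored[:top_k]

In practice, the embeddings of historical records would be computed once and stored in a vector index rather than recomputed per query; the brute-force loop above is kept only to make the cosine-similarity ranking explicit.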
Comments: 13 pages, 3 figures
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2404.15578 [cs.CL]
  (or arXiv:2404.15578v1 [cs.CL] for this version)

Submission history

From: Hossein Salami
[v1] Wed, 24 Apr 2024 00:56:22 GMT (327kb)
