Throughout the 2010s, the data integrity of computerized systems became a foremost concern for regulatory agencies such as the U.S. Food and Drug Administration (FDA). Data integrity is not a new requirement per se. Principles from the paper and ink era carry over, and electronic-specific enforcement within the U.S. began as far back as 1997, with the FDA’s final rule for “Electronic Records and Electronic Signatures.”
However, pharmaceutical and life sciences companies now face heightened scrutiny over their practices in documenting their electronic processes in particular, and in managing that data across its entire lifecycle. PerkinElmer realizes these challenges faced by the industry and has created a solution portfolio including OneSource Laboratory Services to aid in the preservation of data integrity as well as in compliance with requirements for computer system validation (CSV), metadata retention and audit trail management.
The stakes for proper data integrity are high, as it is a commonly cited violation in FDA drug cGMP warning letters. In fiscal year 2018, 57% of all such warning letters cited deficiencies in data integrity. That is down from a peak of 79% in FY2016, the same year the FDA released its draft guidance for “Data Integrity and Compliance with cGMP,” which was finalized in late 2018. The U.K. Medicines and Healthcare products Regulatory Agency and the World Health Organization also issued similarly significant guidance during these years.
Looking ahead, lab efficiency and compliance must remain priorities as agencies continue to update their data integrity regulations. Solutions including PerkinElmer OneSource help put both data integrity and the closely associated CSV on sustainable trajectories.
Data integrity: Understanding current requirements and best practices
The FDA currently sets the bar for data integrity at “complete, consistent, and accurate” and the agency regards the associated compliance as integral to cGMP as well as public health. Its ALCOA acronym — Attributable, Legible, Contemporaneous, Original, Accurate — provides some additional context on what is expected from pharma personnel throughout processes for research, pre-clinical drug discovery and development. OneSource is designed for full ALCOA compliance.
At a fundamental level, data integrity requirements promote the accuracy and reliability of all data produced by FDA-regulated entities. Compliance with them in turn supports cGMP, resulting in safer, more effective products with fewer defects and less likelihood of recall. Lapses in data integrity may take many different forms, including falsifying results and failing to fully document every process, but in all cases they pose a risk to the public since they put into doubt the safety and efficacy of the drugs in question.
For pharmaceutical firms, failure to preserve data integrity endangers their relations with the FDA — leading to warning letters and possible license revocation — and also creates substantial cybersecurity risks, as the concepts of data integrity and data security are intertwined. In fact, missing and/or corrupted data is often the byproduct of not having adequate access controls (e.g., robust authentication during logins), which are pivotal FDA data integrity requirements alongside proper retention of metadata and the creation of audit trails.
Data integrity is part of the basic fabric of pharma processes, and accordingly it must be regularly reviewed and validated. While an exhaustive list of the best practices for accomplishing these tasks is beyond the scope of this paper, a few of the key ALCOA-centric examples include:
- Implementing username and password requirements to control access and prevent values from being deleted or modified by unauthorized users like terminated employees.
- Recording all changes to computerized data, complete with details about who made the change, when it occurred, and what previous entry it affected, in audit trails.
- Making sure raw and source data is available either in its original form or in a true copy.
- Keeping data in a readable and permanent format throughout its entire lifecycle.
- Configuring backup systems to prevent any inadvertent data loss or deterioration and to preserve all contextual metadata.
- Validating systems through data collection and analysis to verify that they can produce repeatable results consistent with established parameters.
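To make the access-control and audit-trail practices above concrete, the following is a minimal Python sketch of an append-only change record that captures who made a change, when it occurred, and what the previous entry was. The field names, user IDs, and values are illustrative assumptions, not drawn from OneSource or from any FDA specification.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """One immutable audit-trail record for a change to a data value."""
    user_id: str     # who made the change (attributable)
    timestamp: str   # when it occurred (contemporaneous)
    field_name: str  # which value was changed
    old_value: str   # the previous entry, preserved rather than overwritten
    new_value: str
    reason: str      # why the change was made

def record_change(trail, user_id, field_name, old_value, new_value, reason):
    """Append a new entry to an append-only trail; entries are never edited."""
    entry = AuditEntry(
        user_id=user_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        field_name=field_name,
        old_value=old_value,
        new_value=new_value,
        reason=reason,
    )
    trail.append(entry)
    return entry

# Hypothetical usage: correcting a transcribed result while retaining the
# prior value and the identity of the analyst who made the correction.
trail = []
record_change(trail, "analyst_042", "purity_pct", "98.1", "98.7",
              "transcription error corrected")
```

Because each entry is frozen and the prior value travels with the correction, the original data remains legible and attributable even after a change.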
Computer system validation (CSV) is essential to governing the computerized systems involved in these actions, and to data integrity as a whole.
Computer System Validation: How to pursue CSV at the pre-clinical stage
Like data integrity, CSV is a broad concept that affects virtually every dimension of pre-clinical research and drug discovery. It touches many technologies and involves numerous teams within a regulated company, who together bear responsibility for proper CSV and, by extension, for organizational compliance with regulations on data integrity. Beyond the lab itself, departments such as IT will be needed to help with the configurations, permissions, and settings necessary for proper CSV.
The central purpose of CSV is to verify that a system such as a lab instrument or software application will do exactly what it should do — in terms of producing acceptable information — as defined in its parameters. Essentially, CSV proves that the system can withstand scrutiny. A properly validated system will create information that is accurate, consistent, reliable, and human-readable. OneSource supports corrective actions including CSV, grounded in 21 CFR Part 11 and GAMP 5.
GAMP 5 is the common standard for CSV. Short for Good Automated Manufacturing Practice, GAMP 5 is explained in detail in the ISPE document “A Risk-Based Approach to Compliant GxP Computerized Systems.” Although it is not an FDA requirement, GAMP 5 is widely used and offers a comprehensive framework for evaluating different categories of hardware and software, including infrastructure software, configured products, and custom applications.
A GAMP 5-adherent CSV process typically starts with activities such as a GxP impact assessment to see if the system affects product quality and safety and an electronic records and signatures review to determine if it creates and maintains those specific assets. Subsequent steps may include conducting risk assessments, collecting user requirement specifications, and periodically evaluating the system’s validated status.
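The initial assessment steps above can be sketched as a small decision function. The questions and rigor tiers below are an illustrative simplification of a risk-based approach, assumed for the example — not an official GAMP 5 mapping.

```python
def validation_approach(gxp_impact: bool, creates_eres: bool,
                        custom_code: bool) -> str:
    """Pick an illustrative validation rigor tier from initial
    assessment answers: does the system affect product quality or
    safety (GxP impact), does it create or maintain electronic
    records and signatures (ERES), and does it contain custom code?"""
    if not gxp_impact:
        return "no formal validation required; apply good IT practice"
    if custom_code:
        return "full lifecycle validation: URS, design review, code review, OQ/PQ"
    if creates_eres:
        return "configured-product validation plus Part 11 assessment"
    return "risk-based verification against user requirements"

# A configured chromatography data system that stores e-signatures:
tier = validation_approach(gxp_impact=True, creates_eres=True,
                           custom_code=False)
```

Real assessments weigh many more factors, but the pattern — branch on GxP impact first, then on system category — mirrors the risk-based sequencing described above.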
CSV is an ongoing process, not a one-time activity; it will extend all the way through the eventual decommissioning of the affected instrument or application. Its time-consuming nature may require outsourcing and expert assistance to keep the process on track.
PerkinElmer assists with GAMP 5 CSV of lab instruments and lab software, opening a more straightforward path toward lab-wide compliance that covers data integrity and the maintenance of audit trails for GxP systems. With OneSource, it is possible to perform a full validation of a new system or, alternatively, update the control validation for any enterprise or standalone systems.
Audit trails and metadata: Preserving data and context about pre-clinical activities
The FDA and similar regulatory bodies around the world have set specific requirements for audit trail creation and metadata retention. Along with CSV, these two activities are essential components of data integrity.
An audit trail is a computer-generated history of all of the activities associated with an electronic record, including its creation, modification, and deletion. Other events recorded in an audit trail may include overwriting, backdating, and testing into compliance, any of which could be signs of data integrity violations. For electronic data, an audit trail should begin as soon as the data is written to durable storage.
In addition to being computerized and automated, a compliant audit trail must also be secure, timestamped in accordance with an unalterable clock, traceable to the person(s) who made each change, and retained for as long as the record it corresponds to is being held. The purpose of an audit trail is to preserve a record of regulated activities and to keep tabs on how they affect product safety and quality.
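One common way to make an audit trail tamper-evident is to hash-chain its entries, so that any retroactive edit or deletion breaks every subsequent link. The sketch below assumes a simple JSON record format and is an illustrative technique, not a description of how OneSource or any particular lab system implements its trails.

```python
import hashlib
import json

def chain_entry(prev_hash: str, entry: dict) -> dict:
    """Link an entry to its predecessor with a SHA-256 over the prior
    hash plus the entry's canonical JSON, so edits are detectable."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return {**entry, "prev_hash": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(trail: list) -> bool:
    """Recompute every link; any altered or deleted entry breaks the chain."""
    prev = "0" * 64
    for rec in trail:
        body = {k: v for k, v in rec.items() if k not in ("prev_hash", "hash")}
        payload = prev + json.dumps(body, sort_keys=True)
        if rec["prev_hash"] != prev or \
           rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

# Hypothetical events: build the chain, then tamper with history.
trail, prev = [], "0" * 64
for event in [{"user": "qa_01", "action": "create", "record": "batch-7"},
              {"user": "qa_02", "action": "review", "record": "batch-7"}]:
    rec = chain_entry(prev, event)
    trail.append(rec)
    prev = rec["hash"]

trail[0]["user"] = "someone_else"  # rewriting history breaks the chain
```

The chain does not replace access controls or an unalterable clock, but it gives reviewers a cheap integrity check across the whole retained history.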
The FDA’s 2018 guidance on data integrity states that audit trails should be reviewed after each significant step in a drug’s manufacturing, processing, packing, or holding. Periodic reviews of audit trails are important measures for knowing who is accessing key data and what they are doing with it. Along with cybersecurity measures like access controls, they contribute to stronger data integrity.
Metadata refers to the contextual information needed for understanding data; it is often described as data about data. It might indicate who performed a test, when they did it, and the relevant instrument and user IDs. Lab software collects some of this information automatically, making it easier to build out audit trails and to keep electronic records more accurate than entries in a traditional paper notebook.
Metadata relationships to data should be preserved in a traceable, secure way. Electronic cGMP data should include corresponding metadata, as should any backup created as a true copy of an original. Ultimately, metadata retention is another measure that supports superior data integrity, by filling in context and revealing any gaps that might be cause for concern about a product. Through OneSource, lab teams can more efficiently manage the necessary documentation and preserve metadata for data integrity.
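As a sketch of how metadata and a checksum can travel with a backup so it can later be verified as a true copy, consider the following. The metadata fields (test name, analyst and instrument IDs) and the sample data are hypothetical examples, assumed for illustration.

```python
import hashlib
from datetime import datetime, timezone

def make_true_copy_manifest(raw_bytes: bytes, metadata: dict) -> dict:
    """Bundle a checksum of the raw data with its contextual metadata,
    so a backup can be checked against the original byte-for-byte."""
    return {
        "sha256": hashlib.sha256(raw_bytes).hexdigest(),
        "metadata": metadata,
        "archived_at": datetime.now(timezone.utc).isoformat(),
    }

def is_true_copy(candidate_bytes: bytes, manifest: dict) -> bool:
    """A true copy must reproduce the original data exactly."""
    return hashlib.sha256(candidate_bytes).hexdigest() == manifest["sha256"]

# Hypothetical raw instrument export plus the context needed to read it.
raw = b"wavelength_nm,absorbance\n254,0.812\n"
manifest = make_true_copy_manifest(raw, {
    "test": "UV purity assay",
    "analyst_id": "analyst_042",
    "instrument_id": "HPLC-07",
})
```

Keeping the checksum and metadata in one manifest preserves the traceable link between the data and its context that regulators expect of backups.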
Mitigating the common risks to compliance during research and pre-clinical discovery
Although there are numerous processes in place — both in and out of the lab — for preserving data integrity, significant risks to regulatory compliance still remain. For example, an instrument might technically be validated, yet no one has reviewed that validation. Similarly, highly siloed validation workflows can leave out the input and expertise of important personnel.
Related dangers include complex, multi-site setups with a variety of procedures and requirements across them. That can lead to inconsistent data management and incomplete audit trails. As the FDA and others have ramped up scrutiny of data integrity, they have identified these issues, along with others including “test to pass” schemes. One such example was documented in 2017 at a facility in Gujarat, at which personnel were repeatedly running purity tests until they got the desired readings.
Mitigating these risks requires a combination of the right company culture, in which everyone understands their compliance-related responsibilities and expectations, and suitable technical solutions. A central consideration on the road to compliance is whether performing the necessary activities in-house or via an outside partner will yield optimal results.
On the surface, it can seem more economical to go it alone and avoid the costs of outsourcing. However, this route carries the risk of lacking the most comprehensive expertise on FDA regulatory compliance at critical junctures, or of an essential system becoming unavailable. Either event could prove far more costly than seeking professional assistance.
OneSource professional services help streamline the commissioning and validation of lab instruments (via either an automated or a paper-based qualification process), as well as other activities contributing to full compliance with rigorous data integrity and CSV standards. PerkinElmer has technical and practical experience assisting regulated companies in these domains, and our team is committed to helping yours navigate pre-clinical compliance.