The first draft AI Act standard for public consultation: what prEN 18286 (Quality Management System for EU AI Act regulatory purposes) signals for providers, users and regulators
Introduction
prEN 18286 (Quality Management System for EU AI Act regulatory purposes) (the draft QMS Standard), issued on 30 October 2025, explains how a provider’s quality management system (QMS) should be designed and run to support compliance with the AI Act across the full AI lifecycle. It translates Article 17 of the AI Act into concrete governance, documentation, lifecycle and evidentiary controls that organisations can implement and audit. Once adopted and cited in the Official Journal, complying with its normative clauses (for example, the mandatory “shall” requirements of the draft QMS Standard that an organisation must meet) will give providers a presumption of conformity with the essential requirements it covers, as mapped out in Annex ZA (see the section “Key Concepts” below).
Article 17 sets out the principal requirements for quality management. It requires every provider of a high-risk AI system to establish, implement and maintain a documented QMS that covers:
- policy and strategy;
- roles and responsibilities;
- design and development controls;
- verification and validation;
- selection of technical specifications;
- data and data governance;
- post-market monitoring;
- handling of non-conformity and serious incidents;
- technical documentation and instructions for use;
- resource and supplier management; and
- continual improvement.
In practice, Article 17 makes the QMS central to market access because conformity assessment (see the section “Key Concepts” below) relies on it and authorities will look first to its records when questions arise. A deficient QMS undermines the effectiveness of all other compliance efforts.
The draft QMS Standard offers businesses an auditable blueprint for:
- identifying applicable regulatory requirements;
- defining QMS scope;
- assigning top management responsibility;
- embedding risk management;
- translating regulation into system requirements;
- managing data, overseeing suppliers;
- controlling changes including predetermined changes for continuously learning systems;
- conducting post-market monitoring;
- reporting serious incidents within strict timelines; and
- driving continual improvement.
The requirements in the draft QMS Standard are drafted to integrate with existing management systems. For example, Annex C maps to ISO 9001, the international QMS standard that sets general requirements for establishing, operating and improving an organisation-wide QMS to meet customer and regulatory needs. Annex D maps to ISO/IEC 42001, the international AI management system standard that sets organisation-wide requirements for governing AI across its lifecycle, including risk policies, controls, monitoring and continual improvement. Most organisations can therefore adapt existing systems rather than rebuild.
This draft is also significant for timing. It is the first of the standards requested by the European Commission in its European Standardisation Request (see our previous Law-Now) for the AI Act to reach the CEN enquiry stage. That milestone indicates that the content is mature enough for broad scrutiny and gives legal and technical teams the earliest dependable view of the controls that regulators and notified bodies are likely to expect, even though details may still change through the enquiry process.
This article provides an overview of the draft QMS Standard, explains where it sits in the AI Act standards ecosystem, summarises its core requirements, and sets out practical steps to help legal, product and engineering teams implement a single, audit-ready QMS.
All references to clauses are clauses in the draft QMS Standard.
What the draft QMS Standard covers: scope, status and audience
The draft QMS Standard specifies requirements and guidance for defining, implementing, maintaining and improving a QMS for organisations that provide AI systems, with the objective of meeting applicable regulatory requirements throughout the AI system lifecycle. “Quality” is expressly reframed around regulatory compliance and protection of health, safety and fundamental rights, rather than customer satisfaction. It applies to any provider that places an AI system on the EU market or puts one into service in the EU, regardless of the provider’s size or where it is based. The requirements can be built into existing sector quality systems such as medical devices or automotive rather than creating a new standalone QMS, provided those systems are updated to incorporate all of this standard’s controls.
Its current status is “prEN” submitted for CEN enquiry. CEN enquiry is the formal public consultation stage run by the European Committee for Standardization, during which national standards bodies circulate the draft for comment and vote. For prEN 18286 the public enquiry runs from 30 October 2025 to 27 December 2025. During this period, stakeholders can submit comments through their national standards body – such as BSI in the United Kingdom, DIN in Germany, or AFNOR in France – using that body’s online portal or template. Industry associations and liaison organisations may also submit consolidated feedback. After the window closes, CEN compiles and resolves comments and national bodies cast their votes, which may lead to revisions before final adoption. It is not yet a European Standard and is subject to change, so it should not be cited as such until finalised. Presumption of conformity (the exact scope of which will be set out in Annex ZA of the relevant European Standard) will arise only once cited in the Official Journal.
Key concepts
- Harmonised standard. A harmonised standard is a European standard developed by CEN, CENELEC or ETSI at the European Commission’s request and later cited in the EU’s Official Journal. Once cited, using it gives a rebuttable presumption that you comply with the matching legal requirements. Before citation, it is voluntary guidance without special legal effect.
- Presumption of conformity. When a harmonised European standard is formally published in the EU’s Official Journal, a provider that follows that standard is assumed to meet the matching legal obligations in the AI Act. This is not automatic approval and authorities can still ask questions, but it shifts the burden of proof in favour of the organisation relying on the standard. Until citation happens a standard has no special legal status; it is simply good practice guidance.
- Essential requirements. These are the core legal requirements in the AI Act that protect health, safety and fundamental rights across the AI lifecycle (for example, risk management, data and data governance, transparency and human oversight, accuracy, robustness and cybersecurity, technical documentation, record keeping and post-market monitoring). The draft QMS Standard translates these duties into practical processes, roles and records.
- Annex ZA. In each harmonised standard, Annex ZA is a signpost that shows exactly which parts of the standard address which legal requirements. It is the bridge between “what the law requires” and “what the standard asks you to do.” In practice, Annex ZA is what auditors and authorities will use to test whether an organisation’s QMS and technical files cover the relevant AI Act obligations.
- Common specifications. Common specifications are legally binding technical rules adopted by the European Commission where harmonised standards are absent, incomplete or not yet cited. When common specifications apply, providers must implement them unless they can demonstrate that alternative solutions deliver an equivalent or higher level of protection. Following common specifications creates the same presumption of conformity as a cited harmonised standard, and any justified alternatives must be recorded in the QMS selection of technical specifications and in the system’s technical documentation.
- Annex SL. Annex SL is ISO’s common high-level structure for management system standards. It provides shared clause titles and terminology so different systems such as ISO 9001 and ISO/IEC 42001 can be integrated into one cohesive framework without duplication.
- New Legislative Framework. The New Legislative Framework is the EU’s product safety model for placing goods on the market. It sets roles such as ‘provider’ and ‘notified body’ and links laws to harmonised standards and conformity assessment so compliance can be shown through a structured QMS and technical documentation.
- Conformity assessment. Conformity assessment is the formal process a provider must complete under the AI Act before placing a high-risk AI system on the market or putting it into service. Depending on the system and route, this may be internal control under Annex VI or third-party assessment by a notified body under Annex VII, and it examines whether the provider’s QMS and technical documentation demonstrate compliance with the essential requirements.
Keep these concepts front of mind as they will help you understand the operation and use of standards.
Why it matters now to in‑house legal teams
The AI Act anchors provider obligations in a QMS, and the draft QMS Standard turns those obligations into implementable processes and verifiable evidence in three practical ways:
- It operationalises Article 17. The draft QMS Standard maps its clauses to Article 17(1)(a)–(m), covering risk management, design and development controls, verification and validation, selection of technical specifications, data and data governance, post‑market monitoring, serious incident reporting, communications with authorities and other operators, technical documentation and instructions for use, resources and supply chain control, and roles and responsibilities.
- It is evidence-led and audit-ready by design. Clause 4.5 demands documentation that is common to all in-scope systems, version-controlled, traceable and written for auditors, notified bodies and competent authorities. Technical documentation requirements are set out in clause 8.7, and retention and control requirements in clause 4.5.4.
- It provides cross-functional control through mandated processes. Top management responsibility (clause 5), competence (clause 7.2), supplier management (clause 9.2), change management including pre‑determined changes (clause 9.3.4), post‑market monitoring (clause 9.4) and serious incident reporting with strict timelines (clause 9.5.1) are all specified.
Where the draft QMS Standard sits in the AI Act standards ecosystem
The draft QMS Standard is the top‑level framework that explains the quality management practices providers should implement to operationalise Article 17, sitting above the topic‑specific standards and defining the governance and lifecycle controls into which they plug. Annex B explains how it interacts with other primary harmonised standards and supporting specifications and, in practice, directs providers to the companion documents that fill out the system‑level controls. These companion pieces cover risk management, data governance and bias, trust and performance, cybersecurity, and logging and monitoring, which are introduced in the points that follow (for how harmonised standards, essential requirements and Annex ZA work, see the section “Key concepts” above).
- Risk management: prEN 18228 provides the risk management system referenced in Article 9 and is explicitly integrated in clauses 8.1, 9.3 and 9.4.5 and Annex B.3.1.
- Data quality and bias: clauses 8.5 and 4.4.3 point to dataset governance (prEN 18284) and bias management (prEN 18283), which together implement the data requirements of Article 10 of the AI Act (Annex B.3.2).
- Trustworthiness: prEN 18229 Parts 1–2 (covering Articles 12–14 of the AI Act for Part 1 and Article 15 for Part 2) provide frameworks and methods for transparency, human oversight, accuracy and robustness, referenced in clause 8.4.1 for accuracy testing (Annex B.3.4).
- Cybersecurity: prEN 18282 (referenced in Article 15 of the AI Act) supports meeting the essential requirement on accuracy, robustness and cybersecurity (clause 4.4.2(g); Annex B.3.3).
- Logging and monitoring: prEN ISO/IEC 24970 is referenced for traceability and post‑market monitoring (clauses 9.1.1 and 9.4.4; Annex B.4).
Core requirements at a glance: what the QMS actually demands
The draft QMS Standard is structured around two linked frameworks, namely the AI system lifecycle from concept and design through verification, validation, deployment and post-market monitoring, and the Plan-Do-Check-Act (PDCA) management cycle, which governs policy, planning, execution, review and continual improvement. Against that structure, the highlights below map key clauses to practical obligations that counsel should verify.
Governance and strategy under clause 4 require providers to establish, maintain and continually improve a QMS that protects health, safety and fundamental rights, which in practice means identifying applicable regulatory requirements at all lifecycle stages, defining the QMS scope and boundaries, and adopting a written regulatory compliance strategy that addresses essential requirements, post‑market monitoring, serious incidents and data management. Providers should also select and document the measures used to demonstrate compliance, prioritising harmonised standards or common specifications, and where other standards or technical solutions are used, record the justification and any coverage gaps (clause 4.4.3).
Documentation and evidence under clause 4.5 set the expectation that QMS documentation should be common and auditor‑facing, remain version‑controlled and traceable, and be maintained in an official EU language. It should explain how processes interact and include the scope, policy and objectives, processes and supporting evidence, and the procedures that implement clauses 5 to 10. Operational documentation and the underlying evidence must be controlled for suitability, protection, retention and traceability, including where external documents are relied upon.
Management responsibility and planning in clauses 5 to 6 mean that top management should set the quality policy, allocate resources and integrate QMS requirements into provider processes. Additionally, roles, responsibilities and authorities should be defined, assigned and communicated, with accountability extending to the risk management system and to monitoring the technological and regulatory state of the art. Planning should address risks to the functioning of the QMS itself, set verifiable quality objectives tied to regulatory requirements, and assign responsibilities and measures to achieve them. In practice, embed QMS responsibilities within role descriptions and committee charters, and create measurable quality objectives linked to the Article 17 elements, with routine monitoring against those targets.
Support, competence and communications under clause 7 require providers to ensure adequate resources, competence and awareness, with competency frameworks that reflect the intended purpose, foreseeable misuse, technologies used, data types and accessibility impacts. Communications procedures should cover competent authorities, notified bodies, other operators and deployers, and include processes for responding to reasoned requests as well as for communicating non‑conformities and risks. To make this tangible, implement a training plan and a competency matrix that covers AI Act essentials, fundamental rights risks and documentation discipline, and define a practical playbook for responding to authority requests.
Lifecycle controls set out in clause 8 require providers to determine the lifecycle stages and establish processes for design and development, verification and validation, data management, documentation, support and post‑market monitoring. Within that framework, key lifecycle obligations include:
- Translating regulation into requirements means AI system requirements must translate applicable regulatory requirements and risk controls into explicit, verifiable design inputs, including transparency, human oversight, accuracy, robustness, cybersecurity, data governance and record keeping. Requirements must be complete, unambiguous and reviewable; they must be reviewed and approved before placing on the market or putting into service, and continually reviewed during the lifecycle (clause 8.3).
- Verification and validation require providers to define and execute verification and validation plans with acceptance criteria, reproducible conditions, and written evidence. Validation must be completed before placing on the market or putting into service, including for non-substantial modifications (clause 8.4).
- Data management requires providers to implement processes for data acquisition, collection, analysis, labelling, storage, filtration, mining, aggregation and retention, and to define data requirements, planning, preparation and decommissioning, including lawful destruction or archival while preserving regulatory compliance capabilities (clause 8.5).
- Environmental sustainability involves identifying and mitigating environmental impacts, and providers should support deployers with relevant information (clause 8.6).
- Technical documentation and instructions for use mean that each AI system must have comprehensive technical documentation suitable for authorities and auditors, and clear instructions for use for deployers covering integration, installation, deployment, servicing and any post-market monitoring roles (clause 8.7).
- Fundamental rights consultation (as set out in Annex A) encourages structured consultation with affected persons and other interested parties from inception through testing and validation, with accessible engagement methods and documented outcomes feeding into design choices and risk controls (clause 8.4.2.2).
To embed these lifecycle controls in day-to-day delivery, require product teams to produce an “AI system requirements” document mapped to essential requirements, with traceability to verification and validation artefacts.
Operations, supply chain, change and monitoring (clause 9) set expectations for deployment and support. Deployment procedures should ensure version traceability and tight linkage to documentation and logs, including a software bill of materials where appropriate, and support services should actively enable deployers to communicate feedback about potential risks (clause 9.1.1.1).
Supply chain control is explicit and requires providers to evaluate, select, monitor and re-evaluate suppliers of components, data and services based on their ability to meet regulatory and draft QMS Standard requirements, communicate requirements on quality, competence, interaction, security by design and vulnerability disclosure, and define acceptance and verification activities, including on-site where necessary. The chosen extent of control should be documented, together with records of verification (clause 9.2).
Change management should control both planned and unintended changes, including assessing whether they amount to a substantial modification. For continuous-learning systems, pre-determined changes may be planned and validated; providers should document the description, versioning, modification procedures, acceptance criteria and cumulative impact, and include a description of implemented modifications in the instructions for use (clause 9.3).
Post-market monitoring should be proactive, systematic and proportionate to risk, tracking performance, interactions, residual risks and the effectiveness of risk controls, and drawing on deployer feedback, logs, authority reports, complaints and incidents. Providers should identify and act on new and emerging risks and include monitoring obligations in the instructions for use where deployer involvement is necessary. Non‑conformities identified by the post-market monitoring should have clear thresholds and triggers for corrective action (clause 9.4).
Serious‑incident procedures are mandatory and come with tight statutory timelines: reports must be made immediately and in any event within two days for incidents affecting critical infrastructure, within ten days in the event of death, and within fifteen days in other cases. Provisional reports are permitted, followed by complete versions, and providers should document escalation roles, allocate resources for investigations, and maintain written evidence including root‑cause analyses and corrective actions (clause 9.5).
In practical terms, update supplier due diligence and contracts to reflect clause 9.2, implement a pre-determined change procedure for any continuous-learning systems, and test incident escalation and reporting timelines to ensure they are met consistently.
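As a way of testing that escalation procedures respect these deadlines, a deadline calculator can be wired into the incident workflow. The sketch below is illustrative only: the day counts are taken from the article's summary of clause 9.5, while the category names are our own labels, not terms from the standard.

```python
from datetime import date, timedelta

# Outer statutory limits per the clause 9.5 summary above (illustrative):
# 2 days for critical-infrastructure incidents, 10 days for death, 15 otherwise.
# Category keys are hypothetical labels, not wording from the standard.
REPORTING_WINDOWS_DAYS = {
    "critical_infrastructure": 2,
    "death": 10,
    "other": 15,
}

def reporting_deadline(awareness_date: date, category: str) -> date:
    """Latest date by which the serious-incident report must be filed.

    Reports should still be made immediately where possible; this computes
    only the outer limit, for use in escalation tracking and alerting.
    """
    days = REPORTING_WINDOWS_DAYS.get(category, REPORTING_WINDOWS_DAYS["other"])
    return awareness_date + timedelta(days=days)

# A critical-infrastructure incident discovered on 1 March 2026 must be
# reported no later than 3 March 2026.
print(reporting_deadline(date(2026, 3, 1), "critical_infrastructure"))  # prints 2026-03-03
```

Embedding the computed deadline in the incident ticket at creation time makes the "test incident escalation and reporting timelines" step above directly measurable.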
Finally, performance evaluation and improvement close the PDCA cycle. Providers should review QMS effectiveness against measurable criteria and conduct management reviews with inputs covering feedback, complaints, reporting, audits, performance, corrective actions, regulatory changes and standard updates, recording the resulting outputs such as changes to resources and processes. Any changes to scope or processes should be planned and controlled, with appropriate written evidence.
How the draft QMS Standard integrates with ISO 9001 and ISO/IEC 42001
The draft QMS Standard is designed to integrate rather than duplicate. As mentioned above, Annex C maps correspondence with ISO 9001:2015; Annex D maps to ISO/IEC 42001:2023.
- With ISO 9001: the structure is recognisable - policy, objectives, resources, competence, operations, performance and improvement. The key difference is the definition of “quality,” which in the draft QMS Standard means compliance with regulatory requirements aimed at protecting health, safety and fundamental rights, not customer satisfaction. Documentation and evidence are regulator‑facing (clause 4.5), and lifecycle controls are AI‑specific (clause 8).
- With ISO/IEC 42001: whilst 42001 is an AI management system standard (see our previous Law-Now), the draft QMS Standard adds EU regulatory depth. The mapping in Annex D shows alignment of management system clauses, and the draft QMS Standard adds Article 17 topics such as serious incident reporting, post‑market monitoring and technical documentation.
Why the draft QMS Standard takes a product-centric approach
Unlike ISO 9001 and ISO/IEC 42001 which follow Annex SL’s organisation‑centric high‑level structure, the draft QMS Standard is built directly around Article 17’s product‑oriented obligations. The focus is on the conformity of each AI system rather than a generic organisation-wide capability. This architectural choice reflects the New Legislative Framework’s product‑safety logic and the need for auditable, regulator‑facing evidence at AI‑system level.
Implications for implementation include stronger traceability from regulatory requirements to AI system requirements and tests, tighter linkage between versioning and technical documentation, and lifecycle-centred controls that can be applied per system within one corporate QMS.
Mapping of the draft QMS Standard clause to Article 17
The table below summarises the content of Annex ZA most relevant to legal teams.
| AI Act Article 17(1) element | prEN 18286 Clause(s) | Practical notes |
| --- | --- | --- |
| First sentence (QMS required) | Clauses 4.1, 4.2, 4.3, 4.4, 5.1, 5.3.1–5.3.3 | Establish, scope and resource the QMS; assign top‑management responsibility |
| (a) Strategy for compliance | Clauses 4.4, 9.3.1–9.3.3 | Written regulatory strategy; change management integrated |
| (b) Design and development controls | Clauses 8.3.1–8.3.2 | Translate regulation into AI system requirements; approvals before market/service |
| (c) Verification | Clause 8.4.1 | Plans, reproducibility, acceptance criteria; written evidence |
| (d) Validation | Clause 8.4.2 | Done before market/service; includes non‑substantial modifications |
| (e) Selection of technical specs | Clause 4.4.3 | Prioritise harmonised standards/common specs; justify alternatives; document gaps |
| (f) Data management | Clause 8.5 | End‑to‑end data processes and decommissioning mechanisms |
| (g) Risk management system | Clause 8.1 | Use prEN 18228 to implement Article 9; integrate with QMS |
| (h) Post‑market monitoring | Clause 9.4 | Proactive, systematic, deployer interaction and triggers |
| (i) Serious incident reporting | Clause 9.5 | Timelines are 2, 10 or 15 days. Provisional reports are allowed |
| (j) Communication with authorities/operators | Clause 7.3 | Reasoned requests; nonconformity communications |
| (k) Technical documentation/instructions | Clauses 4.5, 8.7 | Auditor‑ready documentation; clear instructions for use |
| (l) Resource management/suppliers | Clauses 7.1, 9.2 | Competence, security‑by‑design, vulnerability disclosure |
| (m) Roles, responsibilities, authority | Clauses 5.3.1–5.3.3 | Named accountability for the risk management system |
Organisations should replicate this mapping in their internal controls matrix and link each element to owners, procedures and evidence locations.
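One lightweight way to replicate the mapping is as structured data with a completeness check, so gaps in ownership or evidence surface automatically. The rows below are a minimal, hypothetical sketch: the element names and clause references come from the table above, but the owners, evidence paths and the matrix structure itself are placeholders an organisation would define for itself.

```python
# Hypothetical internal controls matrix mirroring the Annex ZA mapping.
# Clause references are from the table above; owners and evidence locations
# are invented placeholders, not prescribed by the draft QMS Standard.
CONTROLS_MATRIX = [
    {"element": "(a) Strategy for compliance", "clauses": ["4.4", "9.3.1-9.3.3"],
     "owner": "Head of Regulatory", "evidence": "qms/strategy/"},
    {"element": "(h) Post-market monitoring", "clauses": ["9.4"],
     "owner": "Product Quality Lead", "evidence": "qms/pmm/"},
    {"element": "(i) Serious incident reporting", "clauses": ["9.5"],
     "owner": "Incident Manager", "evidence": "qms/incidents/"},
    # ...the remaining Article 17(1) elements would follow the same shape...
]

def unowned_elements(matrix: list[dict]) -> list[str]:
    """Return the mapped elements missing a named owner or evidence location."""
    return [row["element"] for row in matrix
            if not row.get("owner") or not row.get("evidence")]

# An empty result means every mapped element is assigned and evidenced.
print(unowned_elements(CONTROLS_MATRIX))  # prints []
```

Running the check in CI or a periodic review keeps the matrix honest as systems, owners and clauses change.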
Practical action plan for in-house lawyers
Start with governance and scope by determining who is the “provider” for each AI system and defining the QMS scope and boundaries, including subsidiaries, JVs and outsourced roles. Assign top management accountability for the QMS and the risk management system. Embed responsibilities and authorities in documentation and committee structures.
Build and maintain a register of applicable regulatory requirements and harmonised standards in line with clauses 4.2 and 4.4, and use clause 4.4.3 to record the measures selected to demonstrate compliance, their coverage, and any gaps.
Establish documentation discipline by setting up a documentation control procedure that satisfies clauses 4.5.1 to 4.5.4, with language, versioning, retention and traceability policies, and a centralised, auditor‑ready repository. Ensure technical documentation and instructions for use templates exist and are used.
Embed lifecycle controls by requiring an “AI system requirements” specification for each system mapped to essential requirements, with traceability to verification and validation plans and acceptance criteria. Ensure foreseeable misuse and human oversight are captured.
Strengthen supply chain controls by updating supplier evaluation and contracting to incorporate clause 9.2, including supplier competence, security‑by‑design, vulnerability disclosure, audit and on‑site verification rights, data provenance requirements and participation in post‑market monitoring.
Tighten change management by defining substantial versus non‑substantial modifications. Where continuous learning is used, establish a pre‑determined change procedure and ensure modifications are reflected in the technical documentation and instructions for use.
Implement a proactive post-market monitoring plan, deploy logging per ISO/IEC 24970 where appropriate, and set clear triggers for corrective actions. Build and test serious incident escalation and reporting procedures to meet statutory timelines.
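"Clear triggers" can be made concrete as explicit thresholds evaluated against monitored metrics. The sketch below is illustrative only: the metric names and threshold values are invented examples, and a real plan would derive them from the provider's own risk analysis and validated baselines.

```python
# Illustrative corrective-action triggers for a post-market monitoring plan.
# Metric names and thresholds are hypothetical examples, not values from
# the draft QMS Standard; real triggers come from the risk analysis.
TRIGGERS = {
    "accuracy_drop_pct": 5.0,            # drop vs. validated baseline
    "serious_complaints_per_month": 1,   # any serious complaint triggers review
    "oversight_override_rate_pct": 10.0, # human overrides of system output
}

def corrective_actions_needed(observed: dict) -> list[str]:
    """Return the metrics whose observed values breach their trigger."""
    return [metric for metric, limit in TRIGGERS.items()
            if observed.get(metric, 0) >= limit]

breached = corrective_actions_needed({
    "accuracy_drop_pct": 6.2,
    "serious_complaints_per_month": 0,
    "oversight_override_rate_pct": 3.1,
})
print(breached)  # prints ['accuracy_drop_pct']
```

Pairing each trigger with a named owner and a documented corrective-action procedure turns passive monitoring into the proactive, systematic process clause 9.4 expects.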
Build competence and training by establishing competency matrices and role-based training for legal, product, engineering and support teams covering AI Act obligations, fundamental rights risks, bias, accessibility and documentation discipline.
To tie these elements together, run a cross‑functional gap assessment workshop structured around clauses 4 to 10 of the draft QMS Standard and Annex ZA, assign owners and timelines, and document remedial plans with measurable quality objectives.
Common pitfalls and how to avoid them
- A common pitfall is treating the QMS as an IT or engineering project rather than a provider-wide governance system anchored by top management. This can be avoided by embedding responsibilities in leadership charters and setting measurable quality objectives.
- Weak documentation discipline is another issue, where gaps in language policy, version control, retention, or traceability undermine audit-readiness. This can be remedied by implementing a formal document control procedure and a central repository.
- Failing to translate regulatory requirements into verifiable AI system requirements and test acceptance criteria is a frequent problem. The solution is to use a standard requirements template and a traceability matrix.
- Supplier blind spots occur when there is inadequate evaluation and control over externally supplied models, datasets, and testing services. This can be remedied with risk-based supplier controls and contractual obligations aligned with clause 9.2.
- Uncontrolled changes are a risk, especially with no clear boundary between substantial and non-substantial modifications or a lack of pre-determined change documentation for continuous learning systems. Implement a formal change control process and document pre-determined changes to remedy this.
- Passive post-market monitoring involves an over-reliance on ad hoc feedback rather than active, systematic monitoring with defined KPIs and triggers. This can be remedied with a post-market monitoring plan and logging aligned to ISO/IEC 24970.
- Lack of audit-readiness, which stems from failing to stress-test the QMS before an external audit, can be remedied by conducting "red team" audit simulations. These simulations should focus on documentation, supplier files, change logs, and incident reporting to surface weaknesses before external scrutiny.
Sector considerations and alignment with other regimes
The draft QMS Standard anticipates integration with sectoral quality regimes. The Introduction section notes that providers may already comply with sector QMS (e.g., ISO 13485 in medical devices). The draft QMS Standard should be layered into those systems by adding AI‑specific regulatory controls, particularly around fundamental rights consultation (Annex A – informative guidance), post‑market monitoring (clause 9.4), serious incident reporting (clause 9.5) and data management (clause 8.5). For safety‑critical sectors, organisations should ensure alignment of hazard definitions, risk acceptability criteria and incident taxonomies across regimes to avoid conflicting controls and reporting.
The documentation set: what authorities will expect to see
Authorities, notified bodies and auditors will expect a coherent, cross‑referenced documentation set, including:
- QMS manual and process map, scope statement, quality policy and quality objectives.
- Regulatory requirements register and the selection and justification of technical specifications.
- Competency matrices, training records and awareness materials.
- Supplier evaluation, selection, monitoring and re‑evaluation records, including acceptance and verification activities.
- Risk management files per AI system.
- AI system requirements, specifications, design/development reviews, verification and validation plans, test procedures and results.
- Technical documentation per AI system and instructions for use.
- Change logs, impact assessments and pre‑determined change documentation.
- Post‑market monitoring plan, KPIs, logs, feedback channels, reports and corrective actions.
- Serious incident investigation files, reports to authorities and evidence of root‑cause and remedial actions.
- Management review records and continual improvement actions.
How the draft QMS Standard interfaces with conformity assessment
For Annex VI internal control, providers need to verify that their QMS meets Article 17 and that technical documentation evidences conformity. For Annex VII third‑party assessment (e.g., certain biometric systems), notified bodies assess the QMS against Article 17, review surveillance arrangements, and evaluate proposed QMS changes. In both routes, QMS documentation and AI system technical files are the primary evidence base.
Conclusion
The draft QMS Standard provides a practical, evidence‑oriented pathway to operationalise Article 17, reduce regulatory uncertainty and accelerate conformity assessment. For in‑house lawyers, it is the governance blueprint to align legal, engineering and product around a single, auditable system that protects health, safety and fundamental rights while enabling innovation. Implemented well, it shortens audit timelines, clarifies supplier expectations, disciplines change, and transforms post-market monitoring from ad hoc reactions into continuous assurance. That combination is what creates value under the AI Act.
Special thanks to Oyin Olukotun, Trainee Solicitor at CMS, for her help in proofing and reviewing this article.