An annual conference offers the members of a specialist society the opportunity to exchange ideas. They would like to present their work to their colleagues and discuss it with them. The society addresses all members with its call for papers and relies on their participation.
The call for papers may narrow down the topics to a focal point or name a few sections with topics that reflect the facets of the professional discussion. Perhaps there is a general topic on which submissions are favoured because they are currently topical.
The plan for an annual conference is sufficient for a call for papers. You don't even need to know exactly when the conference will take place. Nor do you need to know where it will be held. Just that everyone has a fair chance to make a contribution.
You must not relegate members' contributions to a poster exhibition or shunt them into a variety-show session ("Kessel Buntes") on the side. Members must appear in the main programme - otherwise you are nothing more than a congress organiser, who should then, to be consistent, charge VAT.
With its call for papers, a specialist society makes it clear that the presentations are not negotiated in a small circle. That it is interested in the participation of all members. That it wants to offer younger colleagues in particular the opportunity to introduce themselves to the specialist and professional world. That it will select entries for the competition fairly and pay attention to formal and content-related criteria. And thus also promote the professional quality of the entries.
If you don't receive an overwhelming number of registrations at first, that is no argument against the approach: the programme will come together. You can always fill remaining gaps at the end. What is not very inviting is advertising interesting lectures that have already been selected for the programme.
It then also seems reasonable that the speakers pay the participation fee for the conference - after all, the conference is organised as a forum for them. Recruited, invited speakers, by contrast, rightly demand a fee or free participation and coverage of travel expenses.
The call for papers has another important function: it is part of the marketing strategy. Without being intrusive, the organiser repeatedly advertises the meeting. It invites the addressees to register a presentation themselves. Or at least to register themselves if they are unable to make their own contribution.
This provides an opportunity for several mailings:
Fahrdorf, 2024-10-30
Donabedian has written more than 100 articles and 7 books on quality assurance in medicine. His contributions go far beyond the concept of the triad of structure, process and outcome quality. He has written extensively on the epidemiology of patient needs, the importance of comprehensive insurance against the consequences of illness, the relationship between cost and quality, and the monitoring of service delivery.
Over the many years of his scientific career, Donabedian adapted his nomenclature to the focus of his work. Later on, he hardly ever spoke about the famous triad, but searched for irrefutable attributes of good medicine. He identified seven attributes, which he called "pillars", on which the quality of individual medicine and healthcare at a societal level should rest equally.
At the end of his life, Donabedian regretted that he had only become famous because of his "structure-process-outcome paradigm". He himself admitted that the triad did not fulfil all the needs of assessment (SHIP 2001).
In his classic publication of 1966 (DONABEDIAN 1966), he examines the question of how quality is to be understood. This is not a philosophical problem, but a prerequisite for objective evaluation. As long as an accumulation of value judgements on individual aspects, characteristics and contents of medical care is regarded as "quality", it remains no more than what each individual imagines it to be. For a scientific, empirical study, the multitude of possible dimensions and criteria must be narrowed down, their justification proven and their measurability analysed. Anyone who wants to "pay for performance", align state planning with indicators or carry out a ranking must achieve such objectivity - at the latest if the evaluation is to stand up in court.
Donabedian proposed to first determine what should be evaluated: 1. the results (outcome), which 2. result from the processes of the treatment and 3. the structure that is available for the processes.
He clearly recognises the limitations of evaluating the results of "good medicine" and soon comes to the conclusion that the results do not speak for themselves. Results must be viewed with great caution. He does not dismiss them as unsuitable, but sees them as an important indicator of the process characteristic "effectiveness". He was not yet familiar with the discussion about clinical studies as a tool for investigating the characteristic of "effectiveness", as we know it today as "evidence-based medicine".
The treatment process seems to him to be more important for the question of properly practised medicine. He distinguishes between the process itself, insofar as it is known to be "good", and the technical ability to carry it out (performance). The medical treatment can then be assessed on the basis of the characteristics appropriateness, completeness, redundancy, technical competence, coordination and continuity. At this point, he already adds the characteristic that he later reminds us of again and again: acceptability for the recipient of the service. Unfortunately, the concept of acceptability has so far been overlooked by many of his successors.
However, the treatment processes can only be carried out if the necessary resources are available: appropriate premises, equipment, qualified staff, an organisational process, an administrative structure and sufficient financial resources. Without appropriate structural requirements, there can be no good processes.
Donabedian fits the various evaluation methods of his time (1966) into this scheme. He weighed their performance against the extent to which they allowed well-founded judgements to be made. The result of his study is not particularly encouraging - but let's not forget that quality assurance in healthcare was still in its infancy.
There is only one thing he certainly did not want: he did not want to differentiate between three types of quality, each of which could be defined and assessed separately. His scepticism towards the enthusiasts of quality of results speaks against this.
To this day, many try to define quality in terms of "good quality". To do so, they list a number of features of medical treatment that they believe are part of this. Politicians and health insurance companies prioritise these differently to service providers and patients. Their descriptions are vague, such as "...must be orientated towards the well-being of the patient", "...must be holistic, human or empathetic" or "sufficient, appropriate and economical".
Others have drawn up a set of precise requirements for equipment, qualifications and scope of services that should be regarded as the standard for good medicine. Some consider the requirements to be insufficient, while others see them as excessive. Agreement on the catalogue is rarely reached. What is regarded as quality depends on the respective environment. All requirements must be differentiated according to the respective needs - there is no such thing as "one" quality. Quality is not an ideal state, but results from the requirements that are set.
Those who believe that they can somehow recognise what is good are attached to an intuitive understanding of quality. They perceive which characteristics belong to it and how they relate to each other as a whole. Many laypeople agree with this and spontaneously agree without realising that they mean something else.
A scientifically based assessment cannot be based on this. Quality is not a totality, but a set of characteristics that belong together but must be assessed using different methods. The characteristics influence each other, but are not equivalent.
The weighting of the characteristics is not the same in all situations. Sometimes effectiveness is considered extremely important, sometimes the demand for safety takes a back seat to the demand for better acceptability (in this case perhaps proximity to home). Different requirements are made depending on the situation and the person.
Today, quality is defined as the degree to which a set of characteristics of an object fulfils requirements (DIN EN ISO 9000:2015).
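This definition can be made concrete with a small sketch. The characteristics, fulfilment degrees and the example treatment below are purely illustrative assumptions, not taken from the standard; the point is only that quality is a profile over a set of characteristics, not a single number:

```python
from dataclasses import dataclass

@dataclass
class Characteristic:
    """One inherent characteristic of the object under assessment."""
    name: str
    fulfilment: float  # degree to which its requirement is fulfilled, 0.0-1.0

def quality_profile(characteristics):
    # Quality is the profile of fulfilment degrees across the whole set.
    # The characteristics belong together but cannot be offset against
    # each other, so we deliberately do not average them into one score.
    return {c.name: c.fulfilment for c in characteristics}

# Illustrative example: a treatment process with three characteristics
treatment = [
    Characteristic("effectiveness", 0.9),
    Characteristic("safety", 0.8),
    Characteristic("acceptability", 0.6),
]
print(quality_profile(treatment))
```

The deliberate absence of an aggregate score mirrors the "set of spanners" reading of the definition discussed below: the members of the set belong together but are not interchangeable.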
But what is the object of the analysis? Can structures, processes and results be considered separately? This is where the triad has not proved its worth. Worse: the confusion has only increased by adhering to this schematism.
Why "structural quality" cannot be determined on its own is easiest to explain with an example.
What structural requirements could be agreed? For example, clear ideas have been developed for the structure of emergency centres (RIESSEN 2014). One of the requirements for the organisation of emergency centres is that they are closely linked to hospitals and the emergency medical service. Many details are specified very concretely, e.g. that the initial assessment and care should be carried out by doctors and nurses trained in emergency medicine, who may consult appropriate specialists depending on the situation. They must have the diagnostic procedures relevant to emergency medicine (e.g. emergency laboratory, ECG, sonography/echocardiography, X-ray, computed tomography) available around the clock. They should be equipped with an emergency admission ward for short-stay patients, which would allow short-term inpatient observation without the need for significant further diagnostics and treatment. I will not list all the individual requirements, but I will add the desire for treatment rooms in which patients can be treated with discretion.
All of this would be roughly what is called "structural quality".
But what are the requirements based on? Emergency centres are assigned to three categories, and their equipment is based on the processes that take place in them. The "structural quality" demanded is always the one required for the treatment processes. The structure of an emergency centre in a district hospital is different from that of a university hospital and certainly different from that of a psychiatric clinic. There is no abstract "structural quality" that applies to everyone.
When looking at the treatment processes, "structural quality" appears as a "process resource": for each process, we can (and must) specify what we need for it. Rooms, equipment, materials, sufficient and qualified staff and monitoring of process control.
What is called "structural quality" is derived exclusively from the processes. Less leads to disruption of the process, more is decoration, luxury or waste.
The treatment processes require resources. Structural quality itself says very little about how good the medicine is.
Why we should not talk about the quality of results is more difficult to understand.
To the layperson, it seems so plausible: whether a treatment is good or bad is best judged by its results. If patients recover, that is good; if not, that is bad. Everyone can see for themselves from the results whether they have been treated well. He who heals is right. The whole world seems to be obsessed with this fallacy.
It is completely unclear what a result actually is.
In most cases, it is not even possible to agree on when the result occurred: the state of health with which you leave the hospital? Or that you are still sneezing 4 weeks later? It is often only possible to determine in the distant future whether the desired result has been achieved: has the patient "beaten" their cancer? How much longer did the treatment prolong their life? What is the decisive final result? What are merely interim results?
Antihypertensive therapy is supposed to protect against strokes - but the protection cannot be measured in the individual case, only as a lower probability of occurrence in a larger population. Does "the result" then mean "reduction of stroke mortality in the population"?
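The population-level reading can be illustrated with a back-of-the-envelope calculation. The risk figures below are invented for illustration only; they are not taken from any study:

```python
def absolute_risk_reduction(risk_untreated, risk_treated):
    # Difference in event probability between untreated and treated groups.
    return risk_untreated - risk_treated

def number_needed_to_treat(arr):
    # Average number of patients who must be treated to prevent one event.
    return 1.0 / arr

# Hypothetical 10-year stroke risks: 6% untreated, 4% under therapy
arr = absolute_risk_reduction(0.06, 0.04)
nnt = number_needed_to_treat(arr)
print(f"ARR: {arr:.1%}, NNT: {nnt:.0f}")
```

Nothing in this calculation says anything about whether a particular patient was protected - which is exactly the point of the paragraph above.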
What matters about the result? If we have become so healthy, we may have suffered from side effects or complications that could have been avoided or must be accepted as unavoidable. Does the health benefit outweigh the damage we have suffered? In individual cases this may still be possible. However, if we look at all the patients treated, we eventually recognise considerable differences in the effectiveness and safety of the treatment procedures. As impressive as the desired results are, we have doubts as to whether the risks and opportunities are in reasonable proportion. We look at the positive effects and note the regrettable disadvantages.
Which of these is the result? We cannot simply add up the consequences into an overall result. So what is "the" result? A mixture of positives and negatives. Acceptability falls by the wayside anyway.
Often the result cannot be read or measured at all. This is because most medical treatments are "special processes". The quickest way to understand what is meant by this is to look at the process of sterilisation. The result of sterilisation is sterility. But nobody can see or touch sterility (then it would no longer be sterile). All diagnostic processes are "special": you cannot tell from their results whether they are good or not. If the measurement procedure is correct and accurate, we trust the result - but we cannot judge the result.
A result is always what happens in the end. Naively, we see the cause of the result in the action that preceded it. In medicine, no one can say for sure in the individual case whether the desired event was caused by the treatment or whether it might not have occurred on its own. The cause cannot be seen in the result. Doctors and patients are easily deceived. They fall for a folie à deux.
Evidence-based medicine has made us aware of how sceptically we need to view results. In studies, events are counted - desirable and undesirable. If there are more desirable and fewer undesirable events in the group treated with A than in the group treated with B, then we say that A is more effective and safer than B.
So we don't actually talk about results in clinical trials - that would be very naive. We test the quality characteristics of effectiveness and safety of the treatment processes in carefully designed experiments. Some prove to be extremely effective and safe (e.g. anaesthetic procedures), others are effective but unsafe (e.g. certain surgical procedures or radiotherapy), and some are very safe but not effective (you already know which ones I mean).
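The event counting described above can be sketched in a few lines. The trial arms and counts are invented purely for illustration:

```python
def event_rate(events, n):
    # Proportion of patients in a group who experience the event.
    return events / n

def risk_ratio(rate_a, rate_b):
    # Relative frequency of the event under treatment A versus treatment B.
    return rate_a / rate_b

# Hypothetical trial: desirable events (recovery) counted per arm
recovery_a = event_rate(180, 200)  # arm A: 180 of 200 patients recover
recovery_b = event_rate(150, 200)  # arm B: 150 of 200 patients recover
rr = risk_ratio(recovery_a, recovery_b)
print(f"Recovery is {rr:.2f} times as frequent under A as under B")
```

The same comparison run on counts of undesirable events yields a statement about safety rather than effectiveness; in both cases the trial compares process characteristics, not individual "results".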
The rapture with which quality of results is treated as the philosopher's stone of quality methods is responsible for much of the confusion in quality management. It takes a great deal of effort to distinguish between accidental and causal events. Large-scale clinical studies are usually indispensable. The characteristics of the treatment procedures must be verified and repeatedly validated. Only then are our decisions reasonably evidence-based.
Quality assurance of results is frustrating, costly and ineffective. It has been abandoned and replaced by mastery of the verified and repeatedly validated process.
In short: let's forget the quality of results. It is the quality characteristics of effectiveness and safety of treatment or accuracy and correctness of diagnostics that matter. The characteristics of the diagnostic and therapeutic processes can be tested and measured - quality of results cannot.
Donabedian writes in 1988 (DONABEDIAN 1988): "Because a variety of factors influence outcome, it is impossible to know with certainty the extent to which an observed outcome is attributable to prior treatment - even when extensive adjustments are made for differences between cases. What is needed is confirmation through a direct assessment of the process itself."
In the end, process quality remains. The object of consideration in quality management is the production processes - in medical care, the treatment processes: diagnostic, therapeutic and nursing. For each process, a set of characteristics can be identified that can be checked and measured.
The characteristics of effectiveness and safety tell us something about the probability with which we can expect certain results. The configuration of the processes determines the essential resources. From the core processes, we can derive the need for support and management processes that ensure an undisturbed and efficient process. This is why the standard refers to a "process-orientated QM system" (DIN EN ISO 9001:2015). Everything revolves around the process.
Requirements are initially placed on each process. The process is designed in such a way that it fulfils the requirements to the highest possible degree. Proof of performance is provided during product development. Only then can the performance be reliably and effectively brought into routine use. Most processes, at least in medicine, are "special processes", i.e. we cannot directly read off the result. We rely on effectiveness and safety because we carry out a verified and validated process design under the conditions of process control.
Treatment processes are always designed for individual patients. In this respect, patient-centredness says everything and nothing. However, they are acceptable to patients in different ways. Today, we know the framework conditions for the acceptance of medicines, medical devices, medical and nursing services quite well. We know that the characteristics of acceptability are often enough the deciding factor - the pandemic has shown us this once again. The commitment to patient-centredness should motivate us to pay more attention to the process characteristic of acceptability that Donabedian cared so much about.
Donabedian was clear that the first step was to show how structure, process and outcome are actually connected. He hoped that organisational science, behavioural research and clinical research would contribute to this. His triad of structural, process and outcome quality successfully initiated quality assurance. Now it stands in the way of the modern concept of quality.
Author: Dr Ulrich Paschen
QM elektronische post - Beiträge zur Guten Praxis in Medizin und Wissenschaft, Broadcast 20
Fahrdorf, 22 May 2018
Reproduction is permitted provided the source is acknowledged and a specimen copy is supplied.
Computer-aided quality assurance is the engineering use of computers and computer-controlled machines to plan and implement the quality of products.
Computer-Aided Quality (CAQ) is a digital tool for quality planning, continuous quality improvement throughout the entire product life cycle and quality assurance or quality control during production.
We must also implement CAQ in our core processes in the hospital, and are doing so, e.g. in the following areas: (1) personnel requirements and deployment planning, training; (2) quality control; (3) reporting, communication, records; (4) QM processes in the narrower sense, such as management of the QM manual and audits; (5) QA data acquisition, also for control charts; (6) digitalisation of service providers with networking.
What IT solutions are already available or which ones are suitable?
Personnel requirements planning
Personnel deployment planning
Personnel development planning
Training modules for instructions, briefings, training courses, further and advanced training, including success monitoring
Proof of qualification including monitoring of refresher training
Occupancy planning
OP planning
Appointment scheduling in the outpatient clinic, admission
Resource planning with warehousing, ordering, delivery times
Patient-related reporting obligations: Medical reports, transfer protocols, authorisations
Internal reporting obligations: performance statistics, adverse events, use of resources
External reporting obligations: external quality assurance, registration and deregistration in the event of non-fulfilment of requirements, quality reports, infectious diseases
Means of communication: video conferencing, emergency call systems, alarm plans, data transmission to interested parties
Telephone address directory and its maintenance
Treatment records (electronic medical file)
Image documentation (X-ray, wounds, clinical findings)
Prescriptions for medicines
Traceability of medical devices
Ordering processes for laboratory, X-ray, consultations, aids, technical repairs
Acquisition, distribution and provision of knowledge in the processes
Support in treatment decisions (diagnostics and therapy)
Knowledge management: access to databases, literature
Control of documented information (content management systems)
Patient information, information protocols, behavioural instructions
Archiving, data backup, access control
Release before data transfer
In-process controls (QC cards in the process or as target cards from administration and EPA data)
Audits
Tracking of reporting obligations, licences, certificates
Monitoring committee activities (appointments, minutes)
Audit management
Change management
Follow-up of corrective measures
Calibration management
Complaints processing
Document control
Project management
Error analysis
Care (especially records, personnel planning)
Laboratory: arrangement, sample identification, reports
X-ray: arrangement, scheduling, preparation of findings, archiving
Transport service
Sterilisation
Technical services
Pharmacy
Physiotherapy
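One item in the list above - in-process controls with QC cards - lends itself to a small sketch. A Shewhart-style control chart flags values that fall outside limits derived from baseline data; the indicator (door-to-needle time) and all measurements below are invented for illustration:

```python
import statistics

def control_limits(baseline, k=3.0):
    # Shewhart-style limits: mean +/- k sample standard deviations.
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - k * sd, mean + k * sd

def out_of_control(baseline, value, k=3.0):
    # True if the new observation falls outside the control limits
    # and should therefore trigger a review of the process.
    lo, hi = control_limits(baseline, k)
    return value < lo or value > hi

# Hypothetical baseline: door-to-needle times (minutes) from past cases
baseline = [28, 31, 30, 29, 32, 30, 31, 29, 30, 30]
print(out_of_control(baseline, 45))  # a 45-minute case falls outside the limits
```

In a CAQ system, the baseline and the incoming values would be drawn automatically from the electronic patient record rather than entered by hand.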
The WHO and its member states have initiated a Global Action Plan for Patient Safety 2021-2030. The plan is aimed at national governments, interest groups, healthcare organisations and services, and the WHO Secretariat itself.
The action plan sets seven strategic goals to be achieved through 35 specific strategies.
For each specific strategy, the Global Action Plan assigns different tasks to the addressees government, institutions, interest groups and the WHO Secretariat. They make up the plan's catalogue of actions.
In clinical risk management, we are interested in the measures that hospitals and comparable facilities are expected to take. The analysis resulted in 129 direct requests for action. How can the expectations be realised? What needs to be done in this decade? What do we possibly already have within the quality management systems?
Not all measures can be implemented with simple regulations. For some, a single procedural instruction is sufficient, while others require a bundle of activities that take some time to become effective. In a table, we have juxtaposed the requirements and brief descriptions of their implementation. The table refers to the QM Manual of Good Hospital Practice (http://www.gutehospitalpraxis.de; registration is required). The manual is based on the QM standards and contains suggestions for procedural instructions that can be helpful for implementing the required measures.
Users of the table can check whether the tasks mentioned are already included in their own QM system or not. Not all measures make sense in every organisation. There are probably still some points that are missing from this extensive list, particularly with regard to our national safety standards. Nevertheless, a self-critical assessment of the individual requirements can help to assess the maturity level of your own risk management.
2022-06_E-Letter_Global Action Plan_02
Anyone can define a concept as they see fit and practical, as long as they follow a few rules. They can also choose a name for it that they like. However, three things should be borne in mind:
I believe that it is very possible to apply the technical concept of quality to quality assurance in healthcare and I see no reason to abandon the internationally and professionally recognised definition. We should finally utilise the advantages of the newer concept of quality for quality assurance in healthcare. This could be particularly fruitful for the work of the IQTiG.
This should be investigated.
[i] Institute for Quality and Transparency in Healthcare Methodological principles V1.1 Status: 15 April 2019 https://iqtig.org/dateien/dasiqtig/grundlagen/IQTIG_Methodische-Grundlagen-V1.1_barrierefrei_2019-04-15.pdf, last accessed 2021-02-05
Section 1 "Quality of healthcare" states:
"The IQTiG defines quality in healthcare as follows:
Quality of healthcare is the degree to which the care of individuals and populations fulfils requirements that are patient-centred and consistent with professional knowledge." (emphasis mine)
The definition is connected to the preceding sentences with "therefore". There it says:
... "healthcare must be assessed according to the extent to which it fulfils these overarching objectives and the requirements derived from them".
Four questions arise:
Contrary to what is claimed, the definition cannot be based on the definition in DIN EN ISO 9000:2015. It also does not match the definition of the Institute of Medicine.
The definition from DIN EN ISO 9000:2015 is only quoted in fragments. It reads in full[i]:
"Quality: Degree to which a set of inherent characteristics (3.10.1) of an object (3.6.1) fulfils requirements (3.6.4)"
The numbers in brackets refer to the terms used in the definition, which are defined elsewhere. "Inherent" is explained in Note 2 as "inherent in an object" as opposed to "associated".
"A set" is taken from normal language as for a set of spanners or a set of cutlery consisting of a knife, fork and spoon. This refers to things that belong together but are not equivalent and therefore cannot be offset against each other.
The IQTiG definition does not include the components "a set of inherent characteristics of an object". The (general) "object" is replaced by "the care of individuals and populations". At first glance, this seems like a harmless specification to adapt the definition to healthcare. But it is not as simple as that.
The change raises three problems:
If the definition is to apply to both types of care, then "care of individuals" must be separated from "care of populations" with "or", i.e. not connected by an "and".
A supply service has characteristic properties - in other words, features. The characteristics of the supply either fulfil the requirements or not (qualitatively) or to a certain degree (quantitatively). This can then be tested or measured.
The IQTiG's definition removes from the definition of quality the object of consideration and its characteristics that are ultimately to be measured or tested. How is that supposed to work?
Nor can the IQTiG's definition be traced back to the definition of the Institute of Medicine [ii]. Although the term "quality" is used there, the definitional description is better suited to effectiveness. Effectiveness is the characteristic of an action that increases the probability of a desired event. The IOM reduces quality to the characteristic of effectiveness. This is entirely consistent with the view at the time (1990!). In the meantime, the IOM has recognised that other characteristics belong to the set of inherent characteristics of medical services, such as safety (e.g. with the document To Err is Human in 2000). The IOM's definition is - in the language of the IQTiG - only one-dimensional and should finally be discarded.
The methods paper states quite correctly:
"What all framework concepts have in common is that they make it clear that quality is multidimensional and cannot be comprehensively assessed on the basis of a few isolated aspects."
Apart from the fact that dimensions, aspects and characteristics are mixed up here, this realisation is reason enough to include "a set of characteristics of an object" in the definition of quality.
The definition should therefore be:
"Quality (of a medical service) is the degree to which a set of inherent characteristics of the treatment of individuals fulfils requirements."
"Quality (of health care for a population) is the degree to which a set of inherent characteristics of a population's health care organisation meets requirements."
Everything else (what are and who sets the requirements, what are the characteristics of a treatment?) is then best explained in notes.
Requirements as obligations usually result from generally recognised standards such as KRINKO guidelines, guidelines of specialist societies, laboratory guidelines including the legal framework such as the Patient Rights Act, Medicinal Products Act, Transfusion Act, radiation protection, occupational health and safety, etc. for the provision of medical services. Further requirements are usually assumed in the respective social, cultural and political-economic context without the need for further justification or further explicit formulation.
Further requirements are defined by the recipient of the service himself or from his point of view, usually after a careful assessment of his needs. The recipient of the service does not have to determine the requirements based on their own knowledge. They can seek professional advice and liaise with the service provider.
Further requirements can be set by interested parties who are not themselves recipients of the service. They can be guided by general principles or by their own objectives.
Medical quality assurance prioritises the quality of the service itself (design) and its provision (performance). The selected treatment procedure should meet the needs of the service recipient. The general social conditions of healthcare must be taken into account.
In principle, the care of individuals and the care of populations can both be the subject of a quality assessment. One can look at the care of a patient with a pacemaker in a clinic (care of an individual person). Or the care of all insured persons in a region who suffer from an AV block can be analysed (care of a population).
But there are worlds in between.
The confusion arises from the unclear use of the word "healthcare". Does it refer to the treatment of individual patients (medical practice) or the entirety of the organisational requirements for the provision of services (health care system, public health)?
Which of the two is the subject of consideration? Is the organisation of the healthcare market being considered - which is what the OECD does with its indicators - or the individual medical services, which should contribute to health as a whole? Both can be read out. The basic task of the IQTiG is to assess the quality of services provided by service providers in the inpatient sector. But what if priorities are set, depending on "whether the focus is on the healthcare system as a whole or on the quality of care provided by individual service providers"? These are not "priorities" that can be set differently, but categorical differences whose confusion will lead to serious contradictions.
One thinks of § 1 of the professional code of conduct for doctors working in Germany, which states "Doctors serve the health of the individual and the population". In the Reichsärzteordnung (1935), this was still called "Service to the health of the individual and the people as a whole".
This "service to health" has led to the unfortunate coinage "healthcare". Of course, health itself cannot be supplied, because health is not a product or service with which people can be provided like food, electricity or water. In the past, people spoke of "health care", which still made some sense insofar as they were thinking of services with which the sick were cared for or looked after so that they did not suffer any want.
If one wishes to summarise the provision of services for the diagnosis, treatment, prevention of illness, recovery from illness and care as healthcare, it must be borne in mind that the services themselves are then always the subject of the quality assessment and not an abstract term such as "the" healthcare of "the" population.
To make a clear distinction, the political, economic and social system in which the services are provided should be referred to as a "healthcare system". Even then, it must be clear whether we use the term "healthcare system" to refer to a closed, concrete organisation, such as the English National Health Service or similar systems in Ireland, Denmark and Sweden, or whether we use it to describe the interaction of numerous organisations - health insurance funds, service providers, professional associations and health administrations - which, as in Germany or the USA, is not systematic at all but rather chaotic. For some time now, the terms "healthcare industry" or "healthcare market" have been used to reflect this better. Because of their connotations, I prefer "healthcare system".
There is also a public health service (ÖGD), whose tasks are regulated by law. It is organised in lower-level health authorities (the local health offices) and is therefore largely removed from market forces. The ÖGD is less concerned with the health of the individual and is more focussed on promoting and protecting the health of the population as a whole. The quality of the ÖGD can also be analysed - but this is not the task of the IQTiG.
One can therefore formulate: In every state, one can identify a more or less systematically organised and definable economic sector that provides services for the diagnosis, treatment and prevention of diseases, for recovery after an illness and for care. The production, distribution and consumption of services depend on the respective political organisation of economic relations, which can range from a state monopoly to mixed forms of regulated relationships between service providers, recipients and cost bearers to the unleashing of the market. There is no doubt that these structures have an impact on production costs, prices, local availability and - of course - on the quality of services.
However, the quality of the services is assessed regardless of whether the political framework conditions are favourable or not - otherwise it would be impossible to assess the impact of changed framework conditions on the quality of service provision! Of course, this also applies to prices and local availability.
The IQTiG's definition limits the requirements for the object of consideration to those that are "patient-centred and consistent with professional knowledge".
DIN EN ISO 9000:2015 defines "requirement" (clause 3.6.4) as a "need or expectation that is stated, generally implied or obligatory".
Since healthcare is always about patients, all requirements are somehow patient-centred. The characteristic then no longer serves to differentiate. It is nothing more than a superfluous adjective and as meaningless as "the patient is always at the centre"! If the quality of medical treatment is to be considered, the treatment process is "at the centre", if you like. Requirements must be set from the patient's perspective - anything else makes no sense. Is that what is meant by patient-centredness? But what about requirements that are not patient-centred at all?
If requirements from normative documents are described as mandatory or generally assumed, they could be listed. By no means all of them are patient-centred. It is possible that organisations, interested parties and customers (patients) may have similar, contradictory or even mutually exclusive requirements. This problem is ignored here.
Even more problematic is the restriction to conformity with professional knowledge. It remains unclear which requirements are meant. The scope and content of "professional knowledge" are far too vague. What is professionalism and where does it begin and end? Which profession? Or does it refer to a specialised discipline? Or specialised circles? Does it require a state licence to practise or other recognition? In which professional group is there a general consensus on the recognition of knowledge? And who has expressed the knowledge of requirements so explicitly that conformity can be established?
Does this exclude all requirements that are not based on professional knowledge? Do requirements have to correspond to knowledge at all?
Let's take the case of a person suffering from a rare and previously fatal cancer. Clinical treatment is expected to be effective and safe. To the best of our professional knowledge, there is no therapy that fulfils this requirement. Or there is (still) no evidence for its effectiveness - which is not the same thing. So far correct. But the requirement remains and nobody will contradict it. The requirement is made from the patient's point of view, regardless of whether it can be fulfilled. Because this is the case, the specialist disciplines search for effective means, test them, expand their knowledge and thus improve the treatment - in order to be able to fulfil the requirement.
Professional knowledge always relates to the characteristics of a service - requirements do not have to be based on professional knowledge. If "patient-centred" is to make any sense, then we must abandon the idea that a benevolent corporation limits the requirements for the quality of medical services to what corresponds to its knowledge - a knowledge that it has acquired by virtue of its professional existence.
The quality of healthcare would then be the degree to which care is what some people think it should be, without specifying who the people are and what their requirements are.
Consequently, the remainder of the methods paper also speaks more of guidelines than of requirements. Institutions set guidelines; patients make demands. That makes all the difference when it comes to "patient-centredness".
The confusion is further increased by the unclear use of the term "dimension".
The term "dimension" can be traced back to DONABEDIAN[iii] (and beyond). It has been repeated many times since then. In addition, "components of quality", "fundamental aspects of quality", "core objectives of the healthcare system" or "domains" have been proposed. Certainly DONABEDIAN[iv] did not understand his triad of structure, process and outcome as "measurement dimensions". Rather, he was looking for characteristics by which "good" healthcare systems could be recognised.
"Dimension" is an unfortunate term here. It usually stands for measurement, extension or dimension, e.g. of a body in terms of length, width, height or extent with regard to spatial, temporal and conceptual comprehensibility[v]. Each dimension has its own base vector that is independent of the others. In physics, the dimension indicates the power in which the three basic units (g, cm, sec) are incorporated into a certain quantity[vi] None of that is meant here.
When measuring socio-cultural healthcare systems, one can perhaps speak of "dimensions" in a metaphorical sense, as DONABEDIAN has done. However, if you want to compare the systems, you have to name characteristics that can be used to identify the differences.
Here, the IQTiG concept suffers from its unclear language: It states:
"Such fundamental requirements for healthcare are often summarised in the form of basic quality dimensions in a conceptual framework for quality".
Can "fundamental requirements" ... be summarised "in the form of fundamental quality dimensions"? What does it mean that patient-centredness should be understood as an overarching guiding principle for all dimensions? A guiding principle for dimensions? What is the difference between requirement and dimension? It is at least conceded that requirements or dimensions differ in "whether the focus is on the healthcare system as a whole or on the quality of care provided by individual service providers". However, the examples listed there do not fit either perspective:
A care system can promote or hinder the effectiveness, safety and "patient-centredness" of treatment services, but these are not themselves characteristics of the system by which the difference could be measured.
On closer inspection, the "dimensions" mentioned are all characteristics of services provided in medical treatment: effectiveness, patient safety, patient-centred care design, timeliness and availability, appropriateness, coordination and continuity. "Patient safety" is a somewhat misleading term: the word actually refers to protection against the dangers of medicine. What is meant is the safety - or rather the lack of safety - of medical services. But that is how we have always understood it.
These characteristics (and many others) can be measured or tested. This is exactly what the IQTiG should and would like to do. Section 5.1 of the IQTiG's methods paper correctly refers to characteristics, and section 5.2.3 even contains the technically correct definition of a quality characteristic! This makes it all the more incomprehensible to me why the characteristics have disappeared from the definition of quality.
The term "quality feature" is used increasingly frequently in the remainder of the methods paper. However, its use differs significantly from the usual professional usage. However, no explanation is given as to why this is the case.
After such an elaborate clarification of the terms, characteristics, quality features, quality aspects, quality models, quality indicators, quality dimensions, quality objectives and requirements get mixed up again. How are we to understand a sentence like this: "It is only with the quality characteristics that concrete requirements are placed on medical care for a specific aspect, the fulfilment of which can be used to assess the quality of care. These requirements are referred to as quality objectives."
You can measure quantitative characteristics. Qualitative characteristics can be tested. Quality as the degree of fulfilment can be estimated - it cannot be measured, nor can aspects and dimensions. Neither can objectives - they are set. Requirements are collected. Even characteristics can sometimes only be measured indirectly via indicators. Without this admission, it makes no sense to talk about testing and measurement methods.
Measurement and testing methods for design or performance characteristics differ considerably, especially when they concern qualitative or quantitative characteristics. In the case of performance characteristics, a test characteristic (indicator) is usually considered over time. Characteristics of acceptability are not always inherent and are subject to social and cultural influence. The appropriateness of a treatment can only be assessed on a case-by-case basis.
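To make the distinction between a characteristic, an indicator and the estimated degree of fulfilment concrete, here is a minimal sketch. All data, function names and the reference value are hypothetical illustrations of the idea, not IQTiG specifications:

```python
# Minimal sketch: estimating the degree of fulfilment of a requirement
# via a test characteristic (indicator). Data and thresholds are invented.

def fulfilment_rate(cases: list) -> float:
    """Proportion of audited cases in which the quality objective was met."""
    return sum(cases) / len(cases)

def assess(rate: float, reference_min: float) -> str:
    """Compare the observed rate with a (hypothetical) reference value.

    The indicator measures the characteristic only indirectly; whether the
    result counts as 'good quality' remains a judgement against the stated
    quality objective, not a direct measurement of 'quality' itself.
    """
    return "objective met" if rate >= reference_min else "objective not met"

# Hypothetical audit sample: True = requirement fulfilled in this case
sample = [True, True, False, True, True, True, False, True, True, True]
rate = fulfilment_rate(sample)  # 8 of 10 cases -> 0.8
print(f"degree of fulfilment: {rate:.0%} -> {assess(rate, reference_min=0.85)}")
```

The sketch makes the author's point visible: the quantitative test characteristic (the rate) can be computed, but the verdict depends on a quality objective that someone had to set first.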
Anyone who considers all this superfluous hair-splitting will be in for a surprise when, in future, an exceptionally good or an inadequate degree of fulfilment of requirements has to be determined so precisely that it can justify payment increases or reductions. It is about more or less money. Such a difference must somehow be made "perceptible", otherwise the result will be contested.
[i] DIN Deutsches Institut für Normung e. V.: DIN EN ISO 9000:2015-11, Quality management systems - Fundamentals and vocabulary. Beuth, Berlin 2015.
[ii] Lohr, Kathleen (ed.) (1990): Medicare: A Strategy for Quality Assurance. Report of a study by a committee to design a strategy for quality review and assurance in Medicare. Institute of Medicine, Division of Health Care Services. 2 vols. Washington, D.C.: National Academy Press.
[iii] Donabedian, Avedis (1980): Explorations in Quality Assessment and Monitoring, Vol. I. Ann Arbor, Mich.: Health Administration Press.
[iv] Donabedian, Avedis (1966): Evaluating the Quality of Medical Care. In: Milbank Memorial Fund Quarterly 44, pp. 166-203.
[v] Duden: Das große Fremdwörterbuch, 4th edition. Mannheim and Leipzig 2007.
[vi] Regenbogen, A.; Meyer, U.: Wörterbuch der philosophischen Begriffe. Meiner, Hamburg 2013.
Version control:
Version 1.0 2019-06-19
Version 1.1 2019-10-13
Version 1.2 2020-01-27
Version 1.3 2021-02-05 with bibliography
© Dr U. Paschen 2021
Reproduction is authorised provided the source is acknowledged and a specimen copy is supplied.
Articles on good practice in medicine and science are published irregularly by
Dr U. Paschen QM Consulting in Medicine and Science
Dorfstr. 38 24857 Fahrdorf
Phone: 04621 4216 208; Mobile: 0177 2125058; upaschen@web.de
Responsible: Dr Ulrich Paschen
More electronic letters on our website under "Specialist articles":
http://www.qm-beratung-krankenhaus.de and http://www.gutehospitalpraxis.de
The context of the definition of quality should not be sought in care, but where the word "quality" is used.
You can talk about quality in many contexts: in philosophy as a property or essence; as the opposite of quantity, or where quantity is supposed to turn into quality; in grades of material (cloth of Manchester quality); or in medicines, whose pharmaceutical quality matters.
For this reason, I recommend that you first name the context in which "quality in care" is to be discussed here.
For the working group of a professional association, only the context of "quality management" comes into question if it wants to "define quality".
QM always talks about the quality of
(1) Products (or services) that are created by a provider (here: the carers) for someone (here: people in need of care). In this respect, they are always "customer-centred".
(2) This includes a certain organisational framework. The service must be provided by several people working together. It makes no sense to speak of QM when care is provided by a single person.
(3) The context also includes the fact that the service is always provided in exchange for consideration. This is correctly emphasised in the working group's paper as the distinction between professional and lay care. The latter can also be good or bad; you can make demands on compassionate services too, but you cannot sue for their fulfilment.
Talking about "quality in care" only makes sense in the context of professional care. It offers definable individual services that are often bundled into larger complexes. It is the result of institutional co-operation (hospital, nursing home, nursing service, etc.). It is based on professional training. It is paid because the service providers base their livelihood on it. QM serves to organise the provision of services in such a way that the requirements of those for whom they are provided are met - nothing else makes sense. Sometimes it is necessary to prove that the result has been achieved.
In order to practise QM, you need to know what is meant by quality.
Logical.
The IOM's definition (1990!) is inadequate. It has three serious errors:
Care is always patient-centred, never population-centred. We can therefore safely leave out the population aspect here.
Today we would say: they must be evidence-based. However, that means something different: the statements about the services (how effective, how safe, how acceptable they are) must be based on evidence. "Evidence-based" is therefore not a quality characteristic of the service, but of the statement about the service. If grandma recommends caraway tea for my constipation because her grandma already recommended caraway tea, that is not evidence - yet the tea can still be very effective, safe in any case, though less acceptable (unpleasant) because of the taste. We simply have no evidence for it, only grandma's word.
Unfortunately, the IQTIG has adopted three errors and added one:
ISO 9000 refers to the service or product as the object of consideration in very concrete terms - the individual care services, not an abstract collective term such as "the" care. "Care of individuals" is the term for a set of nursing services, each of which may or may not fulfil requirements. Because the IQTIG definition leaves the object of consideration as "care" undefined and the "set of characteristics" undefined, the definition is tainted.
Whether you call the person for whom a care service is intended a patient or a resident is a matter of common usage. In a hospital it is the patient, in a residential home the resident, in nursing care the person being cared for. Or, by analogy with the vaccinee (the person to be vaccinated), perhaps the "caree"? We are still struggling to find a suitable term. "Person in need of care" is exactly what is meant, but it is neither a common nor a particularly attractive term.
The term "patient safety" has become established. However, it conceals the fact that it refers to the safety of the services for the patient. Safety is a characteristic of the service, not of the patient. It would therefore be more correct to protect patients from the uncertainty of medicine. Very well. Everyone knows what is meant.
I find the distinction between avoidable and unavoidable adverse events (AEs) unwise. Even supposedly unavoidable ones are undesirable and can be quite unpleasant. A procedure in which more "unavoidable" AEs occur is simply less safe than one with fewer. A procedure in which these are avoidable is then better.
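The arithmetic behind this argument can be made explicit. In the following sketch the adverse-event counts are invented purely for illustration:

```python
# Hypothetical adverse-event (AE) counts per 1000 procedures.
# The point: every AE counts towards safety, whether it is labelled
# "avoidable" or "unavoidable".

def total_ae_rate(avoidable: int, unavoidable: int, n: int = 1000) -> float:
    """Overall AE rate of a procedure: the avoidable/unavoidable label
    does not reduce the burden on the patient."""
    return (avoidable + unavoidable) / n

procedure_a = total_ae_rate(avoidable=2, unavoidable=8)  # 10 AEs in 1000 -> 0.010
procedure_b = total_ae_rate(avoidable=4, unavoidable=0)  #  4 AEs in 1000 -> 0.004

# Procedure B is the safer one, even though all of its AEs were "avoidable" -
# and because they are avoidable, B can still be improved further.
assert procedure_b < procedure_a
```

This mirrors the author's reasoning: the comparison of safety rests on the total rate, while the avoidable share only indicates where improvement is possible.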
I would always speak of characteristics, not dimensions. That would be technically correct. The diagram also mentions characteristics. The term "dimensions", as used by Donabedian and, with reference to him, also by the IQTIG, is uncommon in QM and contradicts common usage. Or who calls efficacy and safety "dimensions" of a medicinal product?
Everything that follows from here deals with the problem of providing services under difficult conditions and how quality would be possible with limited resources.
All this is no longer part of the concept of quality, but of the conditions under which attempts are made to fulfil the requirements. I propose deleting these paragraphs.
But perhaps this would be a good place to explain the idea of the claim class.
The "conclusion" is still very confusing and not always grammatically correct.
The summarised presentation as a definition in the penultimate paragraph does not meet the requirements for a definition. It does not properly take up what was said before. This also applies to the last paragraph. I would not allow these two paragraphs to go out without revision.
While working through the document, I have edited the text considerably. Perhaps one or two of the changes will appeal to you.
Fahrdorf, 2020-04-09