
Fidelity to the ACT Model
Presented here is an article by John P. Freeman, an ACT Association Board member, ACT researcher and devotee of the model. In this article, John addresses the issue of fidelity in ACT programs in the United States and the United Kingdom. John is a native of England, where he is a Non-Clinical Lecturer in Mental Health at the University of Sheffield.

Treatment Fidelity or Adherence to the ACT Model
ACT programs want to provide as good a service as possible within their available resources. Teams and programs know about treatment fidelity but may feel it is a concept relevant only to researchers in their 'ivory towers'! Workers can sometimes feel that treatment fidelity is used simply to make them feel bad about their work. This introduction will help workers and consumers to understand more about fidelity and how it can help their program.

A variety of terms are used to describe fidelity. Treatment adherence, faithful implementation, degree of implementation, and program fidelity have all been used to refer to what is essentially the same phenomenon. Fidelity may be defined as conformity with prescribed elements and the absence of non-prescribed elements (McGrew et al., 1994). Waltz et al., writing in the field of psychotherapy, categorised fidelity along the following lines:
  • Behaviours that are unique and essential to the model.
  • Behaviours that are essential but not unique.
  • Behaviours that are compatible with the model but are neither essential nor unique.
  • Behaviours that are prohibited.
                        (Waltz et al., 1993)
Fidelity measurement emerged in the field of psychotherapy, where operationalising the elements of respective models led to their refinement and to detailed, accurate descriptions of each (Bond et al., 2000). The appeal of this is clear. Poor definition of a treatment philosophy or approach reduces its scientific objectivity and credibility. Furthermore, its potential to be applied in a rigorous way is harmed if other proponents are unable to interpret and replicate the model accurately in its original sense. In psychotherapy, where the variations in practice that emerged were often subtle, it is easy to see how the lack of a rigorous approach to defining and replicating treatment components would hamper the development of a model. The development of fidelity measurement therefore began to have very direct implications for the widespread dissemination of effective forms of treatment, because it enabled proponents to accurately describe and operationally define the core features of a model.

Issues Related to Fidelity Measurement

On the whole, the field of psychiatric rehabilitation has not been exposed to the scrutiny of fidelity measurement, and there are relatively few examples of specific aspects of a model being defined and operationalised rigorously enough that they may be replicated in an entirely accurate form. Stein and Test's (1980) seminal description of an emerging Assertive Community Treatment model is a notable exception. Bond has described how 'program drift' may occur if the critical components of a service are not specified and then defined operationally (Bond, 1991). Others have identified standards of comparison in program implementation, namely:
  1. An "average" criterion based on normative conditions in other programs.
  2. A criterion based on the identification of an ideal program as specified either by the authors of the approach or by the participants and staff.
  3. Theoretical analysis and expert judgement of goodness of fit.
                        (Sechrest, West, Phillips, Redner and Yeaton, 1979)
It can be seen that the whole approach of fidelity measurement within mental health services is an ambitious endeavour. An entire team of individuals is a more complicated phenomenon to study and explain than an individual therapist, and, perhaps for this reason, fidelity measurement has taken on various forms. Practice guidelines have evolved to meet the need for clinically oriented information of achievable, pragmatic use. These have proliferated over the last fifteen years but tend to be diagnosis-based, for example on schizophrenia, rather than model-specific.

Program standards have also emerged in recent years. These have assumed a greater importance in the USA than in the UK, primarily because of differences in the way the health-care system is financed. Until the last few years in the UK, outcome measurement relied more on crude measures of statistical input than on the quality of treatment and its foundation in the current evidence base. The drive to design and implement program standards has dovetailed with the recognition that the prescriptive elements of a program can be readily identified, and that an accrediting body can then rate competing programs and award a certificate endorsing a program's quality. Such a certificate can have profound funding implications for teams. Furthermore, the ever-increasing emphasis on standards and quality has encouraged stakeholders, including providers, to expect that teams and programs attain quality standards set by external accrediting bodies. An example is the Commission on Accreditation of Rehabilitation Facilities (CARF), which has designed a detailed set of guidelines across the rehabilitation spectrum. CARF accreditation is viewed as a significant achievement by individual programs and is recognised as a clear mark of quality for the level of care and treatment they provide.

How can fidelity measurement help?

Fidelity instruments, much like clinical assessment tools, can be used in a variety of ways to suit different purposes. Many programs may view existing descriptions, for example the CARF accreditation standards, as something to aim towards, or as a practical tool to aid the design and formation of a program starting out. Similarly, in the UK the service specification within the Mental Health Policy Implementation Guide (DoH, 2001) may suit this purpose. In other words, the guidelines are sufficiently detailed to assist teams to write operational policies, to decide on aspects of targeting and resourcing, and to settle complex questions of role and functioning. Teams may not aspire to full accreditation, but may use, and possibly adapt, the published criteria to meet their own local needs.

Teams and programs may also use fidelity criteria at a number of critical points in their life cycle. Some may choose to embark on a rigorous process of team design before clinical work commences. This kind of process, in which crucial aspects of design, target groups, admission and exclusion criteria, skill mix and team working are settled, would be assisted by the use of accepted fidelity tools. Indeed, their use, together with a range of supporting literature, could be merged with an assessment of local need to ensure the newly created team is able to co-exist seamlessly within a comprehensive network of care providers. Conversely, teams may wish to adopt fidelity tools at a later point in their life span, for example to explore the development or progress of the team. This may occur at a fixed point in time, say after twelve or eighteen months, or following a 'critical incident' of some kind. Fidelity tools may be used in this sense as part of a package to review the processes within the team and to assess the utility of policies and operational procedures. They may be used more formally to assess whether, and how far, the team has deviated from its initial goals and targets; in other words, to check for 'program drift'. A simple illustrative tally along these lines is sketched below.
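By way of illustration only, here is a minimal sketch of how such a 'drift check' tally might work, assuming a DACTS-style review in which each item is rated from 1 (not implemented) to 5 (fully implemented). The item names and the 4.0 cut-off are assumptions made for this example, not criteria drawn from the DACTS or any other published instrument.

    # A minimal, illustrative sketch of a fidelity 'drift check' tally.
    # Item names and the cut-off are assumptions for this example only,
    # not criteria from the DACTS or any other published instrument.
    from statistics import mean

    # Hypothetical self-review ratings, each on a 1 (not implemented)
    # to 5 (fully implemented) scale.
    ratings = {
        "Small caseload": 5,
        "Team approach": 4,
        "Community-based (in-vivo) services": 3,
        "Assertive engagement": 4,
        "24-hour crisis availability": 2,
    }

    DRIFT_CUTOFF = 4.0  # assumed threshold for flagging an item for review

    overall = mean(ratings.values())
    print(f"Overall fidelity score: {overall:.2f} out of 5")

    # Surface the lowest-rated items first so the team can review them.
    for item, score in sorted(ratings.items(), key=lambda kv: kv[1]):
        if score < DRIFT_CUTOFF:
            print(f"Possible drift: '{item}' rated {score}")

In practice, of course, the value of such an exercise lies less in the arithmetic than in the structured discussion it prompts within the team.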

Many ACT teams and programs are unable to spend lengthy periods of time developing their operational policies and procedures. They tend to expand in size and recruit a variety of professionals as they work, perhaps starting from a small base of workers. Their key objectives around targeting, and their relationships within the network of service provision, are more often than not devised through evolving working practice. Other teams are large, citywide services, well resourced and with clear objectives; many of these spend initial periods engaged in defining their operational criteria, undertaking joint training and team building.

Treatment Fidelity: what's that got to do with me?

People often ask: 'Why should my service feel constrained by the fidelity literature? Shouldn't we design our service to meet local need?' Team leaders are often aware of the evidence base and able to assimilate it in order to meet local needs. Informal networks tend to grow as ACT spreads. Teams visit each other to gain advice and support, and in the UK a more formal network, the National Forum for Assertive Outreach, was created. ACTA has often served a similar role, providing support or advice, or simply acting as a point of contact for newly starting teams. The National Forum was evidence that teams were experiencing difficulties in accessing the kind of support essential to creating and maintaining innovative services within the context of a pressurised network of care. To add to this, and perhaps partly as a result, teams and services diversified. The spread of community services for people with mental health problems was already wide, with a diverse array of providers from the statutory sector (i.e., health and social services), the independent sector, and private agencies. Furthermore, the patchwork of services varied from one area to another; for example, there tended to be more independent-sector provision in urban areas than in rural ones, and some agencies existed only in certain regions or cities.

The question of how closely local services should adhere to existing standards is not an easy one to answer. Teams may feel confused about PACT and ACT, or refer to themselves by a variety of other names or descriptions. We tend to feel that what matters most is that teams and services do everything they can to implement the spirit of ACT services as best they can. Our main role is to help people with serious and persistent mental illness to maintain their health and improve their lives; our job is to work alongside them in enabling them to do this. Sometimes even well-resourced teams that adhere to program principles 'on paper' can struggle and not function at their best. Good evidence exists to guide us in designing, implementing and evaluating our services, yet much good work is done with people with serious mental health problems outside ACT services, and perhaps in ACT services that do not adhere strictly to the existing standards.

ACTA would like to see clinicians, policy makers, consumers, researchers and families work together to discover how fidelity research can best be applied in local settings. Real diversity exists, and, though the evidence is good, it is by no means complete. It would be wrong for workers to feel that fidelity measurement is designed to make them feel bad. At its heart, it should be about enabling all stakeholders to find a workable and realistic way to implement a proven method within the available resources. It is really a way of trying to ensure quality in your service by looking at the evidence base. We have found that committed, enthusiastic workers, given sufficient resources, effective leadership and proper support, can help some very troubled people make enormous and enduring changes in their lives.

Further Reading:
  • Bond, G., Evans, G., Salyers, M.P., Williams, J. and Kim, H.W. (2000). 'Measurement of Fidelity in Psychiatric Rehabilitation', Mental Health Services Research, 2 (2), pp. 75-87.
  • Calsyn, R.J., Winter, J.P. and Morse, G.A. (2000). 'Do Consumers Who Have a Choice of Treatments Have Better Outcomes?', Community Mental Health Journal, 36 (2), pp. 149-160.
  • Department of Health (2001). Mental Health Policy Implementation Guide. London: The Stationery Office.
  • Gerber, G.J. and Prince, P.N. (1999). 'Measuring Client Satisfaction with Assertive Community Treatment', Psychiatric Services, 50 (4), pp. 546-550.
  • Lachance, K. and Santos, A.B. (1995). 'Modifying the PACT Model: Preserving Critical Elements', Psychiatric Services, 46 (6), pp. 601-604.
  • Lang, M.A., Davidson, L., Bailey, P. and Levine, M.S. (1999). 'Clinicians' and Clients' Perspectives on the Impact of Assertive Community Treatment', Psychiatric Services, 50 (10), pp. 1331-1340.
  • McGrew, J.H. and Bond, G.R. (1995). 'Critical Ingredients of Assertive Community Treatment: Judgements of the Experts', The Journal of Mental Health Administration, 22 (2), pp. 113-125.
  • McGrew, J.H., Wilson, R.G. and Bond, G.R. (1996). 'Client Perspectives on Helpful Ingredients of Assertive Community Treatment', Psychiatric Rehabilitation Journal, 19 (3), pp. 14-21.
  • Teague, G.B., Bond, G.R. and Drake, R.E. (1998). 'Program Fidelity in Assertive Community Treatment: Development and Use of a Measure', American Journal of Orthopsychiatry, 68 (2), pp. 216-232.
  • Winter, J.P. and Calsyn, R.J. (2000). 'The Dartmouth Assertive Community Treatment Scale (DACTS): A Generalizability Study', Evaluation Review, 24 (3), pp. 319-338.
John Freeman, Nov. 2001.
J.P.Freeman@sheffield.ac.uk
