
Aphasia assessment and the ICF

All domains of functioning and disability should be considered for assessment.

Reference: N/A
NHMRC Level of Evidence: GPP

Rationale: 
Assessment of a person with aphasia and their family/carers needs to be flexible and holistic, considering whichever aspects are important or relevant to their situation. The ICF (WHO, 2001) can be used as a framework to ensure all key aspects of health have been considered.

As discussed by Bruce and Edmundson (2010), many tests can be used to assess people with aphasia. These range from functional measures of communication to tests of linguistic ability, and from single tests to comprehensive language batteries. The decision to use a particular assessment depends on:

  • the user’s theoretical perspective
  • their experience
  • the aims of the assessment process
  • the goals of therapy
  • the characteristics of the person with aphasia
  • the environment
  • and the time and resources available (Kerr, 1993)

No single assessment can hope to measure such a complex and multidimensional process; rather, a variety of assessments and protocols are needed to evaluate the different domains. Ideally, assessments should be selected based on hypotheses generated by the clinician regarding the key factors that prevent the person from participating in the areas of life most important to them. For some people, this may mean assessing impairments of body structures and functions, but for others it may mean identifying participation restrictions (Bruce, 2005). The International Classification of Functioning, Disability and Health (ICF) can be used as a framework to guide assessment selection and to ensure that the purposes for assessing are being fulfilled.

International Classification of Functioning, Disability and Health

The International Classification of Functioning, Disability and Health (ICF; WHO, 2001) (see Figure 1) has been used as a framework for defining aphasia (Cruice, Worrall, Hickson, & Murison, 2003; Davidson, Worrall, & Hickson, 2003; Howe, Worrall, & Hickson, 2004; Threats & Worrall, 2004). The ICF is not an assessment tool, but the framework can help guide the selection of assessment methods. The framework encourages a broad view of aphasia as it describes “conditions in terms of body function and structure, performance of activities, participation in relevant life situations, and the influence on functioning of environmental and personal factors” (Simmons-Mackie, Threats, & Kagan, 2005, p. 12).

Figure 1 - The WHO International Classification of Functioning, Disability and Health.

Aphasia assessments and the ICF

A number of well-known and commonly used aphasia assessments can be classified within the ICF structure. These assessments are described below.

Body Functions and Structure: Impairment-based tools

Systematic reviews describing best practice in assessment are currently lacking in this area. Biddle et al. (2002) provided a broad review of speech pathology assessment tools which included three adult language assessments: the BDAE, the WAB, and the Porch Index of Communicative Ability (PICA).

Bruce and Edmundson (2010) provided a non-systematic review of established aphasia batteries in their commentary on Howard, Swinburn, and Porter’s (2010) “Putting the CAT out: What the Comprehensive Aphasia Test has to offer”. Information from these reviews has been combined into a table (see Table 3).

A comprehensive overview of the assessment of aphasia using a cognitive neuropsychological approach, including the PALPA and the CAT, is set out in Whitworth, Webster, & Howard (2013).

Table 3 Comprehensive language batteries in use in clinical practice (adapted from Bruce & Edmundson, 2010, p. 92)

Boston Diagnostic Aphasia Examination (BDAE), 3rd edition (2001). H. Goodglass, E. Kaplan, & B. Barresi (USA)

Purpose of test: A comprehensive measure of aphasia that aims to provide:

  1. “diagnosis of presence and type of aphasic syndrome, leading to inferences concerning cerebral localization and underlying linguistic processes that may have been damaged and the strategies used to compensate for them” (p. 1);

  2. “measurement of performance over a wide range of tasks, for both initial determination and detection of change over time” (p. 1);

  3. “comprehensive assessment of the assets and liabilities of the patient in all areas as a guide to therapy” (p. 1).

Test overview: 40-plus subtests divided into the following sections: conversational and expository speech, auditory comprehension, oral expression, reading and writing, and praxis. The third edition contains several new subtests that target assessment of narrative speech, category-specific word comprehension, syntax comprehension, and specific reading disorders. The test has standard, short, and extended formats.

Admin time: 2–6 hours

Validity: No sensitivity or specificity data provided. The discriminant cut-off between normal language performance and aphasia was defined as follows: “The minimum scores obtained by any normal control are useful as an indicator of the borderline between minimally impaired aphasics and normal” (Goodglass, Kaplan, & Barresi, 2001, p. 19).

Reliability: The BDAE-2 did not meet the reliability criterion (>0.80 using a correlation coefficient) set by the authors of Biddle et al.’s (2002) review. Reliability of each individual subtest is provided; internal agreement among items ranged widely, from 0.54 (auditory comprehension, foods) to 0.98 (Boston Naming Test). No statistical test–retest reliability analysis appears to have been provided.

Notes: The subtests do not control for variables such as frequency and imageability, and the assessment does not provide a theoretical framework in which to interpret subtest results (Bruce & Edmundson, 2010).

 

Western Aphasia Battery - Revised (WAB). A. Kertesz (USA)

Purpose of test: A comprehensive assessment that identifies the individual’s aphasia syndrome and the severity of their aphasia.

Test overview: 31 subtests. Subtest scores for spontaneous speech, comprehension, repetition, and naming are combined to yield an aphasia quotient (AQ); subtest scores for reading and writing, praxis, and construction are combined to yield a cortical quotient (CQ); subtest scores for auditory comprehension, oral expression, reading, and writing performance may be combined to yield a lexical quotient (LQ).

Admin time: 1+ hour

Validity: No diagnostic accuracy data (sensitivity/specificity); instead, a cut-off point of AQ 93.8 is used. Validity analysis indicates a high false positive rate of 30% using the aphasia quotient, indicating that “the AQ alone cannot be used to label whether a brain damaged patient (stroke patient) is aphasic” (p. 92).

Reliability: The WAB met the reliability criterion (>0.80 using a correlation coefficient) set by the authors of Biddle et al.’s (2002) review.

Notes: The aphasia quotient (AQ) should not be interpreted in isolation when determining the presence of aphasia in stroke patients in clinical settings.
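
The cut-off logic behind the WAB's validity problem can be shown with a short, hypothetical sketch: given a fixed AQ cut-off of 93.8, some stroke patients without aphasia will still score below it and be misclassified. All scores below are invented for illustration; only the cut-off value comes from the WAB.

```python
# Illustration of how a fixed cut-off score produces false positives.
# The AQ scores are invented; only the cut-off (93.8) comes from the WAB.

CUTOFF = 93.8  # WAB aphasia quotient cut-off

def classify_as_aphasic(aq):
    """Flag a score as 'aphasic' when it falls below the cut-off."""
    return aq < CUTOFF

# Hypothetical AQ scores for stroke patients judged (by other criteria)
# NOT to have aphasia; some of these scores still fall below the cut-off:
non_aphasic_scores = [99.2, 96.5, 92.1, 98.0, 90.4, 97.3, 95.8, 91.9, 99.6, 94.7]

false_positives = sum(classify_as_aphasic(aq) for aq in non_aphasic_scores)
false_positive_rate = false_positives / len(non_aphasic_scores)
print(false_positive_rate)  # 3 of the 10 invented scores fall below 93.8
```

This is why the AQ is best combined with other clinical evidence rather than used as a stand-alone label.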

 

The Comprehensive Aphasia Test (CAT) (2005). K. Swinburn, G. Porter, & D. Howard (UK)*

Purpose of test: Provides a profile of performance across all modalities of language production and comprehension, as well as identifying associated cognitive deficits. It also reveals the psychological and social impact of impairment from the perspective of the person with aphasia, and predicts and follows changes in severity over time.

Test overview: 34 subtests, including a cognitive screen, a language battery, and a disability questionnaire. The language battery is designed to assess (i) language comprehension, (ii) repetition, (iii) spoken language production, (iv) reading aloud, and (v) writing.

Admin time: 90–120 minutes

Validity: No sensitivity/specificity data. The authors stated that scores on the CAT were not compared with those of other aphasia assessments as no appropriate gold standard exists (Swinburn, Porter, & Howard, 2005).

Reliability: High inter-rater agreement on almost all subtests (correlation coefficients of about 0.9).

Psycholinguistic Assessment of Language Processing in Aphasia (PALPA) (1992). J. Kay, R. Lesser, & M. Coltheart (UK)

Purpose of test: An extensive assessment that provides information about the integrity of the language-processing system, providing a firm grounding on which a treatment programme can be based.

Test overview: 60 subtests that assess (i) auditory processing, (ii) reading, (iii) spelling, (iv) picture and word semantics, and (v) sentence comprehension. Individual subtests are selected depending on the specific questions being asked.

Admin time: 90–120 minutes

Psychometric data: Only limited psychometric data are available, for a small number of subtests.

Activity and participation: functional assessment tools

Connected speech tools: 

It is worth noting that the reviews mentioned above focus heavily on language batteries; there has been little synthesis to date on tools which assess connected speech. Prins & Bastiaanse (2004) and Nickels (2005) have provided critical discussions of various methods of analysing spontaneous speech, including:

1. Conversation analysis: aims to analyse ‘natural’ conversation between a person with aphasia and a main conversational partner

  • Analytical procedures have been developed (e.g. Sacks, 1972, 1992; Jefferson, 1973, 1974; Schegloff, 1968, 1981) which assist in analysing the structured organisation of conversations. Such analyses may depend upon transcription (Wilkinson, 2008), may make use of rating scales of observations during interaction (either directly or through recording; e.g. Kagan et al., 2004), or may use a mixture of the two (Whitworth, Perkins, & Lesser, 1997).
  • Conversational breakdowns caused by the aphasia (e.g. word-finding or comprehension problems) can be identified and potential strategies taught to improve the overall conversational interaction (e.g. Beeke, Maxim, & Wilkinson, 2007).

2. Discourse analysis

  1. There is confusion regarding the term ‘discourse’: Nickels (2005) notes that some authors use it to refer to connected speech beyond the level of the sentence, while others refer to pragmatic aspects of connected speech. Some authors draw a distinction between ‘microlinguistic’ and ‘macrolinguistic’ features of discourse, distinguishing descriptions of particular wordings and grammatical aspects (‘microlinguistic’; e.g. Coelho et al., 2005) from descriptions of the features that structure the discourse as a whole and distinguish it from other types of text (‘macrolinguistic’).
  2. Both approaches require a speech sample of communicative behaviour.
  3. Speech samples may be elicited in different ways, such as through retelling a fairy tale (e.g. the Cinderella story) or describing a picture (e.g. the Cookie Theft picture).
  4. Communication behaviours can then be analysed (e.g. turn-taking behaviour, conversational repair, discourse cohesion, content and efficiency of language).

Various tools are available to assess a person’s connected speech, such as the Profile of Word Errors and Retrieval in Speech (POWERS; Herbert et al.) and the “Cinderella Story” (Berndt et al., 2000), as well as various discourse measures and approaches such as Correct Information Units (Nicholas & Brookshire, 1993).
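
As a rough, hypothetical illustration of how a word-level discourse measure such as Correct Information Units is applied, the sketch below computes a %CIU score from a hand-scored transcript. The simple (word, is_ciu) annotation and the sample itself are assumptions for illustration; this is not Nicholas & Brookshire's (1993) full scoring protocol.

```python
# Minimal sketch of a %CIU calculation. Assumes each word has already been
# hand-scored as a correct information unit (intelligible, accurate,
# relevant, informative) or not; the scoring itself is the clinician's job.

def percent_ciu(scored_words):
    """scored_words: list of (word, is_ciu) pairs from a transcribed sample."""
    total_words = len(scored_words)
    cius = sum(1 for _, is_ciu in scored_words if is_ciu)
    return 100.0 * cius / total_words if total_words else 0.0

# Hypothetical picture-description fragment, scored by a clinician:
sample = [("the", True), ("um", False), ("boy", True), ("is", True),
          ("taking", True), ("thing", False), ("cookies", True)]

print(round(percent_ciu(sample), 1))  # 5 CIUs out of 7 words -> 71.4
```

The same per-word bookkeeping underlies derived measures such as CIUs per minute, which adds a timing denominator.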

While picture description (both of static pictures and of picture sequences) and procedural discourse sampling are frequently used in clinical and research contexts with a view to time efficiency and replicability, it needs to be recognised that there are significant limitations in both the validity and reliability of such short discourse samples (Armstrong et al., 2011). The validity of such samples is intrinsically compromised because the task is unnatural and pragmatically marked (i.e. speakers do not usually describe pictures, particularly not to people who can also see the picture being described), so when speakers fail to refer to particular features or include unusual information, the methods used to interpret these observations may not reflect the nature and extent of difficulties.

The reliability of analyses of such samples faces two main problems. First, there is inherent variability in language use: speakers’ word choice and grammatical constructions vary depending on significant aspects of context, including what they are talking about, who they are talking with, and the role that language is playing in that particular instance (Armstrong & Ferguson, 2010). Second, the tools and measures used to describe language use are subject to factors such as the training required for their use and their suitability for the nature of the sample being analysed; in particular, there are differences in language use between monologic and dialogic discourse (Armstrong et al., 2011).

When using this kind of sampling, it is important to recognise the value of repeated sampling in order to establish a baseline range of performance, and the value of ascertaining inter- and intra-judge reliability, before interpreting any observed changes in measures.
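
The inter-judge reliability check recommended above can be sketched in a few lines. As a minimal, hypothetical example, a Pearson correlation between two judges' scores on the same set of samples can be computed as follows; the scores are invented for illustration, and 0.8 is used here only as a commonly cited benchmark for acceptable agreement.

```python
# Sketch of an inter-judge reliability check using a Pearson correlation
# (stdlib only). Both judges' scores are invented for illustration.
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of judges' scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical %CIU scores from two judges rating the same ten samples:
judge_a = [62, 71, 55, 80, 67, 73, 59, 85, 64, 70]
judge_b = [60, 74, 53, 78, 69, 71, 61, 88, 62, 72]

r = pearson_r(judge_a, judge_b)
print(r > 0.8)  # benchmark used in this sketch for acceptable agreement
```

Note that a high correlation alone does not rule out systematic bias between judges (one judge consistently scoring higher); an intraclass correlation coefficient is often preferred in practice for that reason.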

Social participation tools

Dalemans, de Witte, Lemmens, van den Heuvel, and Wade (2008) completed a systematic review of measures for rating social participation in people with aphasia. Six speech pathologists working with people with aphasia scored the instruments. Of 10 assessments deemed appropriate for measuring aspects of participation, two were considered possibly suitable for use with people with aphasia: the Community Integration Questionnaire (CIQ) and the Nottingham Extended Activities of Daily Living (NEADL). The CIQ was considered much closer to the concept of participation than the NEADL. The remaining eight, reported as less suitable for people with aphasia, were: the Aachen Life Quality Inventory (ALQI), the Frenchay Activities Index, Impact on Participation and Autonomy (IPA), the Participation Objective, Participation Subjective scale (POPS), the Stroke-Adapted Sickness Impact Profile-30 (SA-SIP30), the Sickness Impact Profile (SIP), the Adelaide Activities Profile, and the Participation Scale.

Functional communication tools

Other tests aim to assess communication in a more functional context. Prins & Bastiaanse (2004) provided a critical review of four tools: the Functional Communication Profile (FCP: Sarno, 1969, 1972), the Communicative Abilities in Daily Living (CADL: Holland, 1980), the Pragmatic Protocol (PP: Prutting & Kirchner, 1987), and the American Speech-Language-Hearing Association Functional Assessment of Communication Skills for Adults (ASHA FACS: Frattali, Thompson, Holland, Wohl, & Ferketic, 1995). These tools, as described by Prins & Bastiaanse (2004), are summarised in the table below. While it is recognised that various book chapters may provide a more recent review of functional communication tools, such a wide search is beyond the scope of this overview. Other functional tools not mentioned in Prins & Bastiaanse (2004) include the Communicative Effectiveness Index (CETI: Lomas et al., 1989), the La Trobe Communication Questionnaire (Douglas, O’Flaherty, & Snow, 2000), and the Amsterdam-Nijmegen Everyday Language Test (ANELT: Blomert et al., 1994).

Table 5 Overview of some functional assessment tools (adapted from Prins & Bastiaanse, 2004)

Functional Communication Profile (FCP; Sarno, 1969, 1972)

Structure: Rates 45 communication behaviours, divided into five areas (movement, speaking, understanding, reading, other).

Purpose: Developed for patients in a rehabilitation setting; can only be used by therapists/researchers who see the client on a daily basis.

Psychometric data: Interjudge reliability is high (.87–.95 across the different categories). Test–retest reliability is said to be ‘significant’, but no further details are provided.

Approx. time to administer: Not provided.

Comment: While the FCP was an innovation in the diagnostic field, it is currently not frequently used (Katz et al., 2000).

Communicative Abilities in Daily Living - 2 (CADL-2; Holland, Frattali, & Fromm, 1998)

Structure: Contains 50 test items in seven areas, including social interaction; divergent communication; contextual communication; nonverbal communication; sequential relationships; and humor/metaphor/absurdity.

Purpose: Assesses the functional communication skills of the person with aphasia in various scenarios (unique in that it requires direct observation of performance).

Psychometric data: High reliability (.93 coefficient; .85 test–retest; .99 interscorer); validity, based on a small sample size, was .66.

Approx. time to administer: 30 mins.

Comment: Many of the subtests of the original CADL, such as role play, were removed for ease of administration of the CADL-2.

Pragmatic Protocol (PP; Prutting & Kirchner, 1987)

Structure: Allows 30 communicative abilities (both verbal and nonverbal) to be rated on the basis of a conversation between the patient and a familiar partner. Areas include verbal, paralinguistic, and nonverbal aspects. Each is rated ‘appropriate’, ‘inappropriate’, or ‘unable to observe’.

Purpose: Provides a general observational profile of the person with aphasia.

Psychometric data: High reliability (.90) between two trained therapists (8–10 hours of training), but less acceptable among untrained therapists (.70). No test–retest reliability or validity data.

Approx. time to administer: 15 mins.

Comment: No clear criteria for what ‘appropriate’ or ‘inappropriate’ implies.

ASHA FACS (Frattali, Thompson, Holland, Wohl, & Ferketic, 1995)

Structure: Contains 43 items divided into four domains: social communication; communication of basic needs; reading, writing, and number concepts; and daily planning. Each is rated from ‘does not’ to ‘does’, with various levels of support in between.

Purpose: Scores communication impairments in daily life.

Psychometric data: Interrater reliability is relatively high (.88–.95 for Communicative Independence scores and .72–.84 for the qualitative scales). External validity was .73; however, overall ratings of functional communication by clinicians and family members varied between .58 and .63.

Approx. time to administer: 20 mins.

Comment: Based on the daily life of the average American.

Just as there have been various discussions around the appropriateness of impairment-based assessments, there is also a wide range of views on functional assessment. As Worrall (1992, 2000; Worrall et al., 2002) describes, it is unrealistic to expect a single assessment to be appropriate for assessing all individuals with aphasia, all cultures, all impairments, and all settings (cited in Nickels, 2005).

Resources:

  1. Look for video resources on the CCRE Aphasia YouTube channel

References:

  1. World Health Organization. (2001). International Classification of Functioning, Disability and Health (ICF). Geneva: World Health Organization.
