
Screening tools

All people post stroke should be screened using a valid and reliable tool that is sensitive to the presence of aphasia.

Reference: National Stroke Foundation, 2010
NHMRC Level of Evidence:  GPP

Rationale: Prompt, accurate identification of aphasia in stroke patients is an essential component of stroke care. Efficient and effective screening procedures ensure that all patients with aphasia receive appropriate education, support and intervention, optimising rehabilitation outcomes. Inadequate screening procedures risk missed diagnoses, inappropriate patient management and unnecessary healthcare burden. Aphasia is a common consequence of stroke; therefore, all stroke patients require screening for language deficits in the acute phase of recovery.


Acknowledgements:
This section was written by Alexia Rohde (The University of Queensland). 

Screening for aphasia

Early identification and diagnosis of aphasia are important steps in maximising rehabilitation gains. A routine screening test can be an invaluable tool in the identification and appropriate referral of patients with potential aphasia.

Screening practices can be implemented in different ways.  Aphasia screening can be conducted by speech pathology or non-speech pathology staff.  Choice of screening procedure will depend upon the demands and requirements of the clinical context.

  1. Screening by speech pathology staff – ideally, a blanket referral to speech pathology for all acquired brain injuries is in place. In this instance, the speech pathologist's initial contact with the patient with aphasia will involve screening and/or assessing communicative functioning (refer to ‘Optimising Initial Contact’ for more information).
  2. Screening by non-speech pathology staff – where a blanket referral is not in place, other health professionals (often nursing staff) can use a screening tool to identify potential communication deficits in stroke patients.  Patients who display communicative difficulties can be referred to speech pathology for assessment.

What makes a good screening tool?

It is important that screening tools meet acceptable criteria of both reliability and validity to be suitable for use in clinical practice. A good screening tool:

  1. Has high levels of accuracy (sensitivity). Highly sensitive (valid) tools will rarely miss patients who have a language problem; such tools have sensitivity estimates close to 100%. A test with lower sensitivity, such as 80%, will detect only 80% of patients with the condition, while the remaining 20% of patients with the condition will go undetected (Lalkhen & McCluskey, 2008).
  2. Is reliable (or consistent) in its results. Both inter-rater and intra-rater reliability estimates are important. Reliability is generally reported as an intraclass correlation coefficient (ICC), a Kappa (K) value or a weighted Kappa (Kw) value. Instruments with reliability estimates of 0.70 and above (up to 1.00) are generally considered reliable (Landis & Koch, 1977). A worked example of how these accuracy and agreement figures are calculated is sketched below this list.
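
To make these definitions concrete, the short Python sketch below calculates sensitivity, specificity and Cohen's kappa from a hypothetical set of screening results. It is a minimal illustration only: the patient data, function names and resulting figures are invented for the example and are not drawn from any of the screening tools described on this page.

```python
# Minimal sketch, using invented data, of how a screening tool's sensitivity,
# specificity and inter-rater agreement (Cohen's kappa) are calculated.

def sensitivity_specificity(screen_positive, has_aphasia):
    """Both arguments are parallel lists of booleans, one entry per patient."""
    pairs = list(zip(screen_positive, has_aphasia))
    tp = sum(s and a for s, a in pairs)          # screened positive, has aphasia
    fn = sum(not s and a for s, a in pairs)      # missed cases
    tn = sum(not s and not a for s, a in pairs)  # correctly cleared
    fp = sum(s and not a for s, a in pairs)      # false alarms
    sensitivity = tp / (tp + fn)   # share of true cases the screen detects
    specificity = tn / (tn + fp)   # share of non-cases the screen clears
    return sensitivity, specificity

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' pass/fail judgements."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical ward of 10 patients: reference diagnosis vs. screening outcome.
has_aphasia     = [True, True, True, True, False, False, False, False, False, False]
screen_positive = [True, True, True, False, False, False, False, False, True, False]
sens, spec = sensitivity_specificity(screen_positive, has_aphasia)
print(f"Sensitivity {sens:.0%}, specificity {spec:.0%}")        # 75% and 83%

# Two raters scoring the same 10 patients as impaired / not impaired.
rater_a = [True, True, False, False, True, False, False, False, True, False]
rater_b = [True, True, False, False, False, False, False, False, True, False]
print(f"Cohen's kappa {cohens_kappa(rater_a, rater_b):.2f}")    # approximately 0.78
```

Reading the output against the list above: a tool with 75% sensitivity misses one in four patients with aphasia, whereas a tool approaching 100% sensitivity would leave the false-negative count at or near zero, and the kappa of roughly 0.78 sits above the 0.70 threshold cited from Landis and Koch (1977).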

RESOURCES: 

Aphasia screening tools

Below is a list and description of aphasia screening tools for non-speech pathologists. Validity and reliability data are provided to guide choice of screening tool in line with the CCRE Aphasia best practice statement.

Language Screening Test (LAST)

Test overview: A brief test examining:
Expressive index: naming; repetition; automatic speech.
Receptive index: recognition; verbal instructions.
NB: reading/writing not assessed.
The test is available as an appendix at the end of the journal publication.

Administered by: Nursing staff
Admin time: 2 mins

Validity: Sensitivity 98% (high); specificity 100% (high). (External validation against the BDAE.)
Reliability: Intraclass correlation coefficient 0.96 (high); inter-rater agreement 0.998 (high; intraclass correlation coefficient).

Reference: Flamand-Roze, C., Falissard, B., Roze, E., Maintigneux, L., Beziz, J., Chacon, A., Join-Lambert, C., Adams, D., & Denier, C. (2011). Validation of a New Language Screening Tool for Patients with Acute Stroke: The Language Screening Test. Stroke, 42, 1224-1229.

 

The Aphasia Rapid Test

Test overview: A brief test developed as a bedside assessment to rate aphasia severity in acute stroke patients. Useful for monitoring early aphasic changes in acute stroke patients; highly predictive of three-month verbal outcome.
Test examines: execution of simple and complex orders, repetition, object naming, scoring of dysarthria, and a verbal semantic fluency task.

Administered by: Any healthcare professional
Admin time: 3 mins

Validity: Sensitivity 90% (high); specificity 80% (moderate).
NB: Should not be used as a diagnostic tool, since it does not discriminate between aphasia, apraxia of speech and dysarthria.
Reliability: Inter-rater = 0.99 (high; concordance coefficient); weighted Kappa = 0.93 (high).

Reference: Azuar, C., Leger, A., Arbizu, C., Henry-Amar, F., Chomel-Guillaume, S., & Samson, Y. (2013). The Aphasia Rapid Test: an NIHSS-like aphasia test. Journal of Neurology, 260, 2110-2117.

 

Ullevaal Aphasia Screening Test

Test overview: The test is based on the painting "Self-portrait" by Theodor Kittelsen.
Test evaluates: expression, comprehension, repetition, reading, reproduction of a string of words, writing and free communication.
Patients are classified into one of four categories (no, mild, moderate or severe aphasia).

Administered by: Nursing staff
Admin time: 5-15 mins

Validity: Sensitivity 75% (low-moderate); specificity 90% (moderate). The predictive value of a negative test in this study was considered by the authors to be satisfactorily high.
Reliability: Weighted Kappa = 0.83 (moderately high; coefficient of agreement).

Reference: Thommessen, B., Thoresen, G., Bautz-Holter, E., & Laake, K. (1999). Screening by nurses for aphasia in stroke – the Ullevaal Aphasia Screening (UAS) test. Disability and Rehabilitation, 21(3), 110-115.

 

Frenchay Aphasia Screening Test (FAST)

Test overview: Test examines: comprehension, verbal expression, reading, writing and automatic speech.

Administered by: Nursing staff or other health professional
Admin time: 3-10 mins

Validity: Sensitivity 87% (moderate); specificity 80% (moderate) (Al-Khawaja et al., 1996).
Reliability: Intra-rater reliability Kappa = 1 (high) (Philp et al., 2002); inter-rater reliability = 95% (high) (Sweeney et al., 1993).

Reference: Enderby, P., Wood, V., & Wade, D. (2013). Frenchay Aphasia Screening Test (3rd ed.). Stass Publications.

 

Mississippi Aphasia Screening Test (MAST)

Test overview: Nine subtests examining: naming; automatic speech; repetition; yes/no accuracy; object recognition; verbal instructions; reading instructions; verbal fluency; writing/spelling to dictation.

Administered by: Health professionals
Admin time: 5-10 mins

Validity: Sensitivity 72.7% (low-moderate); specificity 60% (low). Validity data extrapolated from Table IV (Nakase-Thompson et al., 2005, p. 689); sensitivity and specificity estimates derived from total MAST score data for left hemisphere and right hemisphere stroke patients.
Reliability: No reliability data available for the MAST.

Reference: Nakase-Thompson, R., Manning, E., Sherer, M., Yablon, S., Gontkovsky, S., & Vickery, C. (2005). Brief assessment of severe language impairments: Initial validation of the Mississippi Aphasia Screening Test. Brain Injury, 19(9), 685-691.
 

 References:

  1. Al-Khawaja, I., Wade, D. T., & Collin, C. F. (1996). Bedside screening for aphasia: a comparison of two methods. Journal of Neurology, 243, 201-204.
  2. Christensen, H., Boysen, G., & Truelsen, T. (2005). The Scandinavian Stroke Scale predicts outcome in patients with mild ischemic stroke. Cerebrovascular Diseases, 20, 46-48.
  3. Lalkhen, A. G., & McCluskey, A. (2008). Clinical tests: sensitivity and specificity. Continuing Education in Anaesthesia, Critical Care & Pain, 8(6), 221-223.
  4. Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.
  5. National Stroke Foundation (2010). Clinical Guidelines for Stroke Management. Melbourne, Australia.
  6. Philp, I., Lowles, R. V., Armstrong, G. K., & Whitehead, C. (2002). Repeatability of standardized tests of functional impairment and well-being in older people in a rehabilitation setting. Disability and Rehabilitation, 24, 243-249.
  7. Sweeney, T., Sheahan, N., Rice, I., Malone, J., Walsh, J. B., & Coakley, D. (1993). Communication disorders in a hospital elderly population. Clinical Rehabilitation, 7, 113-117.
