Assessment Types & Psychometrics: Standardized vs. Non-Standardized


Assessment Types & Psychometrics

Standardized vs. Non-Standardized Assessments

· Understand standardized vs. non-standardized assessments.

Aspect | Standardized | Non-Standardized
Definition | A structured test with set procedures, scoring, and interpretation | An informal or flexible tool tailored to the individual
Scoring | Uses norms, percentiles, standard scores | Often qualitative or descriptive
Purpose | Compare performance to a norm group | Understand function, context, and individual performance
Examples | BOT-2, Peabody, Sensory Profile, Bayley-4 | Observations, interviews, activity analysis
Use in OT | Eligibility, progress measurement | Goal setting, intervention planning

Distinguishing Assessment Types

Know how to distinguish between norm-referenced, criterion-referenced, and ipsative assessments.
· Norm-Referenced: Compares a child's abilities to same-age peers. Follows a bell curve to compare a client's performance to that of a larger group (the norm group). Ex: BOT-2

o Raw and scaled scores

· Criterion-Referenced: Measures functional skills against set task-completion criteria (a predetermined standard), rather than comparing performance to group norms. Ex: FIM

o Based on points
· Ipsative Assessment: Compares an individual's current performance to their own past performance, focusing on personal growth and improvement rather than comparison to others or to a set standard.
Which assessments fall into each category.

Reliability and Validity Definitions

Know the definitions and importance of:
· Reliability (test-retest, interrater, intrarater, internal consistency)
· Validity (face, content, construct, concurrent)

Reliability Types

Test-retest reliability:
The same test given to the same people at two different times gives similar results.
(Example: A sensory profile questionnaire filled out two weeks apart has similar scores.)
Interrater reliability:
Different raters/observers give consistent scores.
(Example: Two OTs rate a patient's motor skills and get similar results.)
Intrarater reliability:
The same rater gives consistent scores across multiple observations.
(Example: You score a handwriting sample today and a week later, and your scoring is
consistent.)
Internal consistency:
How well the items in a test measure the same concept.
(Example: All the items in a social participation scale should be related to social
participation.)
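
These reliability types are usually reported as coefficients between 0 and 1, with values closer to 1 meaning more consistent scores. As a rough illustration only (the notes above don't specify formulas), the sketch below uses two standard statistics: a Pearson correlation for test-retest or interrater reliability, and Cronbach's alpha for internal consistency. All data values are made up.

```python
import numpy as np

def pearson_r(x, y):
    """Consistency between two sets of scores (e.g., two test dates or two raters)."""
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

def cronbach_alpha(item_scores):
    """Internal consistency; item_scores is a (respondents x items) array."""
    items = np.asarray(item_scores, float)
    k = items.shape[1]                                 # number of items
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Made-up example: the same sensory questionnaire scored two weeks apart (test-retest)
time1 = [42, 55, 61, 48, 70]
time2 = [44, 53, 63, 47, 69]
print(round(pearson_r(time1, time2), 2))   # close to 1.0 = stable over time
```

Cutoffs vary by source, but coefficients around 0.8 or higher are commonly treated as acceptable for clinical assessment tools.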

Validity Types

Face validity:
Does the test look like it measures what it's supposed to? (Very surface level.)
(Example: A stress questionnaire actually asks about stress symptoms - makes sense
on the surface.)
Content validity:
Does the test fully cover the topic?
(Example: A sensory processing checklist covers ALL sensory systems, not just touch
or hearing.)
Construct validity:
Does the test truly measure the theoretical concept (the construct)?
(Example: A depression scale really measures depression, not anxiety.)
Concurrent validity:
Does the test correlate well with an already established test measuring the same
thing?
(Example: A new fine motor skills test gives similar results to a gold-standard fine motor
assessment.)
For an assessment to be valid, it must first be reliable!
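Concurrent validity is usually quantified the same way as the reliability coefficients above: for example, by correlating scores from the new test with scores from the established (gold-standard) test, where a higher correlation indicates stronger agreement.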

Standard Scores and Interpretation

Standard Scores, Z-scores, T-scores, and SEM

Know how standard scores, Z-scores, T-scores, and standard error of
measurement (SEM) are used and interpreted.
Standard Scores: a way to compare a person's raw score on a test to the average
performance of a normative group.
- Mean: 100
- SD: 15

Standard Score Interpretation

Standard Score | Interpretation
130+ | Very Superior
116-129 | Above Average
85-115 | Average (within 1 SD of mean)
70-84 | Below Average
<70 | Significantly Below Average
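
As a minimal sketch (not part of the original notes), the table above can be translated directly into code; the function below assumes the conventional mean of 100 and SD of 15 and uses the same bands.

```python
def interpret_standard_score(score):
    """Map a standard score (mean 100, SD 15) to the bands in the table above."""
    if score >= 130:
        return "Very Superior"
    if score >= 116:
        return "Above Average"
    if score >= 85:
        return "Average (within 1 SD of mean)"
    if score >= 70:
        return "Below Average"
    return "Significantly Below Average"

print(interpret_standard_score(82))   # "Below Average"
```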

Z-Scores Interpretation

Z-Scores: a type of standardized score expressing the distance from the mean in SD units: Z = (raw score - mean) / SD
- Z-score of 0 = exactly average
- Z-score of +1 = 1 SD above the mean
- Z-score of -2 = 2 SDs below the mean

Z-Score | Interpretation
+2.0 | Much higher than average
0.0 | Average
-1.0 | 1 SD below average
-2.0 | Significantly below average
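
Worked example (made-up numbers): on a scale with a mean of 100 and an SD of 15, a score of 85 gives Z = (85 - 100) / 15 = -1.0, i.e., 1 SD below the mean, which the table above reads as below average.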

T-Scores Interpretation

T-Scores: a type of standardized score that converts raw scores to a common scale (typically converted from Z-scores).
- Mean: 50
- SD: 10

T-Score | Interpretation
70+ | Far above typical range
60-69 | Above typical range
40-59 | Typical
30-39 | Below typical
<30 | Far below typical
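
Worked example, assuming the usual conversion T = 50 + 10 x Z: a Z-score of -1.0 corresponds to T = 50 + 10 x (-1.0) = 40, which sits at the bottom edge of the typical range (40-59) in the table above.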

Standard Error of Measurement (SEM)

Standard Error of Measurement (SEM): how much a score might vary if the test were
repeated. It helps you understand the confidence range around a score.
- Lower SEM: more precise test
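
The notes don't give the formula, but SEM is commonly computed as SEM = SD x sqrt(1 - reliability), and it is used to put a confidence range around an observed score. A minimal sketch with made-up numbers:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability coefficient)."""
    return sd * math.sqrt(1 - reliability)

def confidence_range(observed_score, sd, reliability, z=1.96):
    """Approximate 95% confidence range around an observed score (z = 1.96)."""
    error = sem(sd, reliability)
    return (observed_score - z * error, observed_score + z * error)

# Made-up example: standard score of 90 on a test with SD 15 and reliability 0.91
print(round(sem(15, 0.91), 1))          # 4.5 -> fairly precise
print(confidence_range(90, 15, 0.91))   # roughly (81.2, 98.8)
```

Lower reliability inflates the SEM and widens that range, which is what "Lower SEM: more precise test" means in practice.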

Specific Assessments to Review

Know the purpose, target population, type, and method of administration for the following:
· COPM: interview-based, ipsative, measures performance & satisfaction.
· DAYC-2: developmental, norm-referenced, early childhood domains.
· WeeFIM, PDMS-3, MMSE, MoCA, Beery VMI, Bayley-3
o All are standardized
o MoCA/MMSE are criterion-referenced, standardized screeners for cognition
· Activity Card Sort: used to assess engagement and performance (ordinal measurement).
· PROMIS and other NIH tools for holistic health outcomes.
o Required for Medicare/billing; practice has moved away from the FIM and toward the GG codes

Assessment Details

Assessment | Type of Tool | Population | Purpose | Administration
COPM | Ipsative; client-centered; top-down | All ages (esp. adults) | Measures performance & satisfaction | Interview-based
DAYC-2 | Norm-referenced; developmental | Birth to 5:11 years | Early childhood domains | Observation + caregiver input
WeeFIM | Standardized; criterion-referenced | 6 months to 7 years | Measures independence in daily activities | Observation + caregiver report
PDMS-3 | Norm-referenced; motor skills | Birth to 5 years | Measures fine and gross motor skill development | Direct testing
MMSE | Criterion-referenced | Older adults | Screens cognition (orientation, memory, attention) | Paper/verbal tasks
MoCA | Criterion-referenced | Adults (55+) | Screens for mild cognitive impairment | Paper and pencil tasks
Beery VMI | Norm-referenced; visual motor | 2-100 years | Assesses visual-motor integration | Drawing tasks
Bayley-3 | Norm-referenced; developmental | 1-42 months | Assesses cognitive, motor, language skills | Structured testing and caregiver input
Activity Card Sort | Ipsative; ordinal scale; top-down | Adults (older adults) | Measures engagement & performance in daily activities | Card sorting
PROMIS | Patient-reported outcome measures | All ages | Assesses holistic health (fatigue, depression, social roles, pain) | Self-report (paper/electronic)
Section GG Codes | Criterion-referenced; Medicare-required | Adults in post-acute | Tracks functional performance for reimbursement & outcomes | Team scoring and observation

SOAP Notes & Documentation

Identifying Strong SOAP Statements

Be able to identify strong SOAP statements.

Subjective statements
· Client quotes and reported feelings

Objective observations
· Measurable and observable behaviors

Assessment interpretations
· What is the impact of observed impairments
· Ongoing problems, progress (based on the objective data, what progress does it demonstrate from previous sessions or assessments), and potential (what currently shows that we need future services for functional improvement)

Plan statements
· Next steps, frequency, goals

COAST Goal-Writing Format

Know examples of strong objective documentation and COAST goal-writing
format.

COAST stands for Client, Occupation, Assist Level, Specific Condition,
Timeline
1. C: Client
· The goal should specify who the client is, ensuring that the goal is individualized.
· Example: "The client will ... " or "The patient will ... "
· Be specific: use the client's name or describe their characteristics (age, diagnosis, etc.) where applicable.
2. O: Occupation
· The goal must focus on the occupation the client is working toward.
· Occupations should be meaningful and relevant to the client's everyday life. This can include ADLs (Activities of Daily Living), IADLs (Instrumental Activities of Daily Living), or any other activity or task.
· Be specific about the occupation: describe exactly what the client needs to do (e.g., "dressing," "preparing a meal," "using a wheelchair independently").
3. A: Assistance Level
· The goal should specify the level of assistance needed to complete the occupation.
· Use clear terms such as:
o Independent (no assistance needed)
o Supervision (client needs supervision)
o Modified Independence (client may use adaptive equipment or strategies)
o Minimal, Moderate, or Maximal Assistance (how much support is required from the therapist or others)
· Example: "With minimal assistance from the therapist," or "With the use of a walker."
4. S: Specific Condition
· The goal should describe the specific condition under which the occupation will occur.
· This can include environmental conditions, equipment, or specific situations (e.g., "Using a reacher to dress independently," or "In the home environment").
· Conditions help provide context for how the client will be expected to perform the task.
5. T: Timeline
· Each goal should include a timeline for achieving the goal.
· This helps set expectations and allows for measuring progress.
· Example: "Within four weeks," or "By the end of the treatment period."

COAST Goal Example

Example of a COAST Goal:
Goal:
The client will dress (occupation) with the use of adaptive equipment (specific condition) within four weeks (timeline), requiring no more than supervision (assistance level) to perform the task.

Tips for Writing Strong COAST Goals

· Specificity: Goals should be as specific as possible. Avoid vague terms and generalities.
· Measurability: Ensure that the goal can be objectively measured. Use numbers, percentages, or clear descriptors to track progress.
· Client-Centered: Ensure the goal is meaningful and focused on what the client needs to achieve, rather than what the therapist wants to accomplish.
· Realistic: Goals should be achievable within the given timeline and with the available resources and conditions.
· Relevance: Make sure that the goals are meaningful and relevant to the client's life and well-being.

Ethics, Supervision & Scope

Roles and Responsibilities of OTs vs. OTAs

Review the roles and responsibilities of OTs vs. OTAs.
Who can perform evaluations, administer specific tools, interpret results, etc.
OT
- Conduct and interpret evaluations to determine need for OT services (responsible for selecting, administering, and interpreting standardized/non-standardized assessment tools)
- Develop, initiate, and modify treatment plans based on evaluation findings
- Authorize changes to treatment plans and patient discharges
- Delegates tasks to OTAs and determines level of supervision
OTA
- Provide verbal and written reports of observations and client capacities to OT
- Do not independently evaluate or develop treatment plans
- Carry out treatment interventions from OT plan
- Document client progress and outcomes, communicate these to supervising OT
- May administer specific assessment tools delegated by OT

OT and OTA Role Comparison

Category | OT Role | OTA Role
Primary Responsibility | Evaluates, plans, implements, and manages all aspects of OT services | Assists in implementing OT services under OT supervision
Evaluation | Initiates and completes evaluations; interprets results | Contributes by gathering evaluation data if trained; does not interpret results
Intervention Planning | Develops and modifies intervention plans based on evaluation and client goals | Provides input to the OT but does not independently create intervention plans
Intervention Implementation | Implements and adjusts interventions as needed; supervises OTA's interventions | Implements interventions delegated by OT and reports on client performance
Assessment Tools | Selects, administers, and interprets assessments | May administer certain assessments if trained and delegated; does not interpret
