Diagnostic Infant and Preschool Assessment (DIPA)

Background:

The Diagnostic Infant and Preschool Assessment (DIPA) is an interview for caregivers of children up to 6 years old. Although many of the symptoms cannot be expressed by infants, the scripts were worded so that they could be applied to younger children rather than assuming a priori that these symptoms could not be detected at younger ages. The DIPA can also be easily extended for use with children older than 6 years. Each disorder is a self-contained module. All of the symptoms needed to make a DSM-IV diagnosis are contained in the disorder module and are presented in the same order as they are listed in the DSM-IV for ease of reference. Each module also contains a section on functional impairment, consistent with the DSM-IV.

Psychometrics:

Test-retest reliability and concurrent criterion validity of the DIPA have been published (Scheeringa & Haslett, 2010).

Author of Tool:

Michael S. Scheeringa, MD, MPH

Key references:

Scheeringa, M. S., & Haslett, N. (2010). The reliability and criterion validity of the Diagnostic Infant and Preschool Assessment: A new diagnostic instrument for young children. Child Psychiatry & Human Development, 41(3), 299-312.

Scheeringa, M. S., Peebles, C. D., Cook, C. A., & Zeanah, C. H. (2001). Toward establishing procedural, criterion, and discriminant validity for PTSD in early childhood. Journal of the American Academy of Child and Adolescent Psychiatry, 40(1), 52-60.

Scheeringa, M. S., Zeanah, C. H., Drell, M. J., & Larrieu, J. A. (1995). Two approaches to the diagnosis of posttraumatic stress disorder in infancy and early childhood. Journal of the American Academy of Child and Adolescent Psychiatry, 34(2), 191-200.

Scheeringa, M. S., Zeanah, C. H., Myers, L., & Putnam, F. W. (2003). New findings on alternative criteria for PTSD in preschool children. Journal of the American Academy of Child and Adolescent Psychiatry, 42(5), 561-570.

Task Force on Research Diagnostic Criteria: Infancy and Preschool. (2003). Research diagnostic criteria for infants and preschool children: The process and empirical support. Journal of the American Academy of Child and Adolescent Psychiatry, 42, 1504-1512.

Primary use / Purpose:

The Diagnostic Infant and Preschool Assessment (DIPA) is a diagnostic instrument for assessing DSM-IV syndromes in children aged 6 years or younger.

Files:

Download Diagnostic Infant and Preschool Assessment (DIPA)

DIAGNOSTIC INFANT AND PRESCHOOL ASSESSMENT (DIPA) MANUAL

Version 8/18/10
Copyright 2004

Michael S. Scheeringa, MD, MPH

Institute of Infant and Early Childhood Mental Health
Section of Child and Adolescent Psychiatry
Department of Psychiatry and Neurology
School of Medicine, Tulane University
New Orleans, LA
[email protected]

DISORDERS COVERED

  • Posttraumatic stress disorder (PTSD)
  • Major depressive disorder (MDD)
  • Bipolar I disorder
  • Attention-deficit/hyperactivity disorder (ADHD)
  • Oppositional defiant disorder (ODD)
  • Conduct disorder (CD)
  • Separation anxiety disorder (SAD)
  • Specific phobia
  • Social phobia
  • Generalized anxiety disorder (GAD)
  • Obsessive compulsive disorder (OCD)
  • Reactive attachment disorder (RAD)
  • Sleep onset disorder
  • Night waking disorder

BASICS

The DIPA is intended as an interview for caregivers of children 6 years of age and younger, down to below 1 year of age. Because data are still limited, it is not clear what the lower age limit is for the capacity to express each symptom. Although many of the symptoms cannot be expressed by infants, the scripts were worded so that they could be applied to younger children if desired, rather than assuming a priori, in the absence of data, that these symptoms could not be detected at younger ages. The DIPA also ought to be easily extended beyond 6 years when prospective continuity with the same measure is desired for 7- or 8-year-old patients.

Each disorder is in a self-contained module. All of the symptoms needed to make a DSM-IV diagnosis are in a disorder module and are presented in the same order that they are listed in the DSM-IV for ease of reference. Each module also contains a section for functional impairment, consistent with the DSM-IV.

PHILOSOPHY

This instrument is meant for either lay or clinician interviewers. The format contains features of the two historical formats that have been called “structured” and “semi-structured.” Structured is the term that has been applied to interviews written for lay interviewers to read scripts in a fairly rigid fashion; this structure was designed for non-clinician interviewers employed in large-scale research surveys. Semi-structured has been applied to interviews that allow interviewers more latitude to follow up initial probes with “free-lance” questions; this design was thought better suited to clinicians, who could use their training to track down relevant information.

The DIPA contains the structured characteristic that the interviewer is expected to read the scripted probes exactly as they are written. But overall, the DIPA is semi-structured because the interviewer is expected to follow up every scripted probe with questions to obtain examples and to clarify those examples until the interviewer is satisfied.

In practice, the terms structured and semi-structured have little meaning, because implementation of an instrument probably depends more on the training and ongoing supervision of the interviewers than on how the instrument was written. If lay interviewers giving structured interviews never understood some of the symptoms, drifted in their understanding of symptoms, and/or did not stick to the scripts (topics on which there are few data), then validity and standardization across subjects could suffer. Likewise, if clinicians using semi-structured interviews misunderstood items, used idiosyncratic phrasing, or collected incomplete data, then the same problems could occur. These important points are described in more detail below:

  • The initial probe questions are detailed and ought to be read as written

The initial probe questions (scripts) were made structured and detailed. These scripts are educational for both the interviewer and the respondent. The author’s experience over the years is that, no matter how experienced the interviewer, drift and misunderstandings inevitably occur, particularly if the scripts are left too vaguely worded. Therefore, the DIPA was created with relatively more detailed scripts than most diagnostic interviews in order to prevent drift and misunderstandings.

  • To determine when an item passes the threshold of being a symptom, “the parent is the first judge, the interviewer is the final judge”

One of the most important issues of validity is determining when an item passes the threshold of being a symptom (i.e., when to score a 0 or a 1). The DIPA philosophy of “the parent is the first judge, the interviewer is the final judge” means that there is an explicit reliance on parents’ ratings but there must always be follow-up probing. All diagnostic interviews rely on the respondent being the first judge, and sometimes the only judge if the interview is highly structured. In this sense, all interviews have relied on the respondent accessing a frame of reference in their minds to determine whether they consider the item in question abnormal or not. All clinicians and researchers are therefore somewhat dependent on the premise that all respondents have a common frame of reference of what is normal and abnormal. A unique feature of the DIPA is that it makes this common frame of reference explicit in the dialogue between the interviewer and the parent. Nearly all the DIPA scripts include the phrase that the item needs to be evident “more than the average child his/her age.” Other strategies in the DIPA are to ask explicitly if the parent considers the behaviors “problems” or “excessive.”

These strategies both help to continually educate the parent about what information is sought and shorten the interview by sidestepping conversations about normal emotions and behaviors. Another advantage is that this avoids having to write scoring rules for when each behavior crosses the threshold of being a symptom, rules that would inappropriately constrain the raters (interviewers) in the absence of data for this age group. A disadvantage is that the assumption of a common frame of reference has limitations in the purest sense because of individual differences between caregivers.

However, since the interviewer must ask for an example of a behavior (“the interviewer is the final judge”) before endorsing it, this diminishes the risk that normative behaviors will be endorsed as problems.

FOR NON-RESEARCH, CLINICAL USES

In purely clinical settings, the main purpose of the interview format is to ensure complete coverage of all relevant symptomatology. The interview may be sped up by skipping the frequency, duration, and onset questions. These are critical research data but can take a lot of time if children are very symptomatic and/or respondents are not concise. Information on frequency and duration can help to support whether an item is truly a problem, but it is not always needed.

SOME POINTS ABOUT PHRASING

The phrasing of the scripted questions is purposely more informal and conversational. This is meant to break up the monotony of repetitive phrasing and to make the interview more tolerable for both respondents and interviewers. Some scripts state that certain phrases will not be repeated over and over, but that the interviewer would like the respondent to remember the phrase for all questions. This applies to the “more than the average child his/her age” rule and the “in the last 4 weeks” rule, and is meant to reduce the stiltedness of the interview. Sometimes questions are turned into statements for variety, with the implicit understanding that they are still questions. Informal words that are more common in everyday conversation are used, such as “a bunch of things” instead of “multiple symptoms”; and instead of starting every question formally with “Does s/he do this item . . .”, it is assumed that the questioning pattern is implicit and the question starts simply with “And how about this item . . .”. The interview also makes it explicit to the respondent that some parts of the interview are redundant, and we openly apologize for it. This makes for good “customer service,” so that respondents feel that the interviewers understand the “inconvenience.”

There is a script on page 1 asking respondents to “bear with me” as interviewers may make mistakes and not phrase the question in a developmentally sensitive, age-appropriate way. This is because it is unnecessarily cumbersome to provide alternative scripts that are age-appropriate for all ages of young children. Interviewers will need to rely on their developmental training and negotiate with the respondent to come up with the proper phrasing case by case.

There are three options in which the index child is referenced throughout the interview: (1) “your child”, (2) “s/he” (or “him/her”), and (3) “X”. “X” is to be replaced by the interviewer with the child’s name. These three options are interspersed randomly for variety.

ACCEPTABLE INTERVIEW TECHNIQUES

All interview techniques are acceptable as long as they meet the main goals of the interview:

  • complete coverage of all items is obtained,
  • examples are obtained so that the interviewer can be the final judge as to whether to endorse an item,
  • the contextual frame is explicit that the behaviors occur more than in the average child his/her age, and
  • the interview is educational for the respondent when needed to help the respondent understand what is being asked.

Different techniques are needed for different types of respondents. The most common types of respondents tend to fall into three categories:

  • Respondents quickly grasp the spirit of the questions, concisely answer the questions, and are not tangential. The interviewers can more or less read the scripts from the DIPA in sequential order and follow up linearly with probes.
  • Respondents don’t easily grasp the spirit of the questions. The interviewers need to explain items in more detail beyond what is provided in the scripts. Reading the scripts with little else would not extract the most accurate data.
  • Respondents are talkative and/or tangential. The interviewers may skip around, follow the respondents, and have to be more flexible about engaging in chit-chat to maintain the cordiality of the interactions. Reading the scripts sequentially would clash with this style.

SCORING

Rounding

Caregivers often give ranges instead of precise numbers. Take the average and round up (if needed). For example, if a caregiver said 10 to 20 times for frequency, 15 would be entered. If a caregiver said 10 to 15 times, the average would be 12.5 and rounded up to 13.
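For sites that enter DIPA data electronically, this rounding rule can be expressed as a short sketch (illustrative only; the function name is hypothetical and not part of the DIPA):

```python
# Illustrative sketch of the rounding rule for reported ranges.
import math

def score_range(low, high):
    """Average a reported range and round up to a whole number if needed."""
    return math.ceil((low + high) / 2)

# Examples from the manual:
#   score_range(10, 20) -> 15
#   score_range(10, 15) -> 13   (average 12.5, rounded up)
```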

Frequency

For all disorders and items, with some exceptions, frequency is coded as the number of instances over the last four weeks. This is a somewhat arbitrary period and can be changed to other periods if needed for other study designs. The two main reasons for choosing four weeks are that the majority of disorders in the DSM-IV require a four-week duration and that this is a manageable period of time for caregivers to recall accurately.

For example, if the caregiver said “once a day, every day”, enter 28.

If a caregiver said three to four times a day, every day, then the lower bound would be three times 28, or 84. The upper bound would be four times 28, or 112. The average of 84 and 112 is 98.

Suppose a caregiver said five to six times a week, and “several times” on those days. First, make the caregiver clarify “several times.” Suppose the clarification leads to three times. The lower bound would be five times four weeks, or 20; then 20 times “several times” of three, or 60. The upper bound would be six times four weeks, or 24; then 24 times “several times” of three, or 72.  The average of 60 and 72 is 66.
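The worked examples above follow one pattern: convert the reported rate to counts over the 28-day window, average the lower and upper bounds, and round up. A minimal sketch of that arithmetic (illustrative only; the function names are hypothetical and not part of the DIPA):

```python
# Illustrative sketch of 4-week frequency coding.
import math

DAYS_IN_WINDOW = 28   # four weeks
WEEKS_IN_WINDOW = 4

def frequency_per_day(low, high):
    """e.g. 'three to four times a day, every day'."""
    return math.ceil((low * DAYS_IN_WINDOW + high * DAYS_IN_WINDOW) / 2)

def frequency_per_week(low_days, high_days, times_per_day):
    """e.g. 'five to six times a week, about three times on those days'."""
    low = low_days * WEEKS_IN_WINDOW * times_per_day
    high = high_days * WEEKS_IN_WINDOW * times_per_day
    return math.ceil((low + high) / 2)

# Examples from the manual:
#   frequency_per_day(1, 1)     -> 28   (once a day, every day)
#   frequency_per_day(3, 4)     -> 98   (average of 84 and 112)
#   frequency_per_week(5, 6, 3) -> 66   (average of 60 and 72)
```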

Exceptions to this rule are flashbacks and dissociation under PTSD, for which frequency is coded for lifetime. Another exception is the symptoms for CD, which must be tracked over six months.

If the symptom is not present, enter 0 for frequency.

Duration

Durations are scored in minutes unless stated otherwise. If the duration is “all day”, this is counted as only the waking hours, or 16 hours (960 minutes). Two days would be 960 times two days, or 1,920 minutes. Two and one-half days would be 2,400 minutes, etc.
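A minimal sketch of this conversion (illustrative only; the constant and function name are hypothetical, not part of the DIPA):

```python
# Illustrative sketch of duration coding in minutes, counting only waking hours.
WAKING_MINUTES_PER_DAY = 16 * 60   # "all day" = 16 waking hours = 960 minutes

def duration_in_minutes(days):
    """Convert a duration reported in days to minutes of waking time."""
    return days * WAKING_MINUTES_PER_DAY

# Examples from the manual:
#   duration_in_minutes(1)   -> 960
#   duration_in_minutes(2)   -> 1920
#   duration_in_minutes(2.5) -> 2400.0
```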

If an item has a duration with no natural end to it, one that really ends only because the parent did something to end it or the child simply wore out and fell asleep, then duration scoring would be problematic.

Example: Refusal to sleep alone in separation anxiety disorder for most kids does not have an end. Children are continually afraid until the parent joins them or they fall asleep. It doesn’t make sense to code how long it takes the parent to join them or for them to fall asleep.  Duration is not collected for this item.

If the symptom is not present, enter 0 for duration.

Onset

Onset dates are recorded as mm/dd/yy.

Oftentimes, caregivers protest that they cannot remember the date something started. However, with patience and successive approximations, nearly every caregiver can remember dates with modest accuracy. Some suggestions for jogging caregivers’ memories include:

“Was it summer or winter?”

If the answer is winter, then ask, “Before Christmas or after Christmas?”

If the answer is summer, then ask, “During school or was school out?” (Even if the index child was not in school, there are often siblings in the home).

Major holidays are also useful landmarks:

“Was it before or after Easter?” (or July 4th or Thanksgiving).

Birthdays are also useful landmarks:

“Was it before or after his fourth birthday?”

If the caregiver says “early” in a month, code it as the 7th day. For example, for “early March” of 2006, code 03/07/06.

If the caregiver says “middle” of the month, code it as the 15th day. If the caregiver says “later” in the month, code it as the 28th day.

If the item is not present and therefore there is no onset date, enter 11/11/11 for dates. (This will obviously need to be changed if this interview is used after November 11, 2011).
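Taken together, these conventions can be summarized in a short coding sketch (illustrative only; the function name, and the default used when no part of the month is given, are assumptions rather than DIPA rules):

```python
# Illustrative sketch of onset-date coding as mm/dd/yy.
APPROXIMATE_DAY = {"early": 7, "middle": 15, "later": 28}
NOT_PRESENT = "11/11/11"   # placeholder when the item is absent

def onset_date(month, year, part_of_month=None, present=True):
    if not present:
        return NOT_PRESENT
    # Defaulting to mid-month when unspecified is an assumption, not a DIPA rule.
    day = APPROXIMATE_DAY.get(part_of_month, 15)
    return f"{month:02d}/{day:02d}/{year % 100:02d}"

# Example from the manual:
#   onset_date(3, 2006, "early") -> "03/07/06"
```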

FUNCTIONAL IMPAIRMENT

Consistent with the DSM-IV conceptualization of disorders, functional impairment must be assessed. This is assessed at the end of every module for that particular disorder. This is assessed with five questions about role functioning (with parents, with siblings, with peers, at school/day care, and in public) and one question about the child experiencing distress from the symptoms (it is arguable whether child distress counts as functional impairment but these are discussed together here because they are lumped into a single item in each disorder in the DSM-IV).

Linkage of Symptoms to Functional Impairment

For an impairment to be endorsed it must be clear that it was a change in functioning that was caused by the presence of symptoms.

However, there are 2 exceptions to this rule:

(1)  Lifelong symptoms

If trauma(s) has occurred since infancy and a symptom and its related functional impairment have always been present, then an impairment may be endorsed even though there has been no detectable temporal sequence of a change in functioning.

(2)  Circumstance not encountered

If a particular circumstance (such as school) has not been encountered during the preceding four weeks, but it is clear that impairment was present in that circumstance the last time the child encountered it, then an impairment should be coded on the basis of the last time the child encountered it, assuming the caregiver agrees that this would accurately reflect the current impairment.

The DIPA has three unique questions of related significance in these sections: accommodations, perceived as a problem, and perceived need for treatment.

Accommodation

For four of the role functioning questions (with parents, with siblings, with peers, and in public), an extra question is added about whether the parent makes accommodations to keep the child out of those contexts specifically because they know it will lead to problems if they don’t. The purpose of this is to capture whether children are not showing impairment only because their caregivers have made adjustments. Therefore, accommodation questions are asked even if the caregiver states that there is no impairment.

If a respondent answers “no” to the impairment question, but “yes” to accommodation, we would count that as impairment.

Accommodation may be skipped if the parent endorsed impairment. However, this is often an informative question to ask anyway for both research and clinical data-gathering.
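For data entry, the accommodation rule amounts to a simple combination of the two answers (a minimal sketch; the function name is hypothetical and not part of the DIPA):

```python
# Illustrative sketch of the accommodation rule for one role-functioning area.
def role_impairment(impairment_endorsed, accommodation_endorsed):
    """Count the area as impaired if either impairment or accommodation is endorsed."""
    return impairment_endorsed or accommodation_endorsed

# Example: impairment denied but accommodation endorsed -> counted as impaired.
#   role_impairment(False, True) -> True
```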

Perceived as a Problem

Each impairment section contains one question about whether the caregiver perceives these endorsed items as a problem. It is not uncommon with preschool children that parents will endorse much symptomatology and even impairment, but not perceive these things as problems, perhaps because young children are small, physically manageable, and/or parents believe children will grow out of it. This question provides a means to track this issue with data.

Perceived Need for Treatment

A second-tier issue, beyond whether issues are perceived as problems, is whether the caregiver believes the problem needs treatment. A caregiver might perceive that the child has real problems but still not believe that they need treatment. This question provides a means to track this issue with data.

DISORDER-SPECIFIC ISSUES

RDC-PA

The Research Diagnostic Criteria-Preschool Age (RDC-PA) was created by a consensus of experts based on data from studies that existed at the time (Task Force on Research Diagnostic Criteria: Infancy and Preschool, 2003). Alternative symptoms and algorithms were endorsed that were considered more developmentally sensitive than the DSM-IV criteria. The tally sheet describes how to make RDC-PA diagnoses for PTSD, RAD, and sleep disorders. For more details about the RDC-PA, see the publication listed above or go to http://www.infantinstitute.com/ and click on the link at the bottom of the page.

BIPOLAR I DISORDER

Symptoms are coded for lifetime instead of for the last 4 weeks because bipolar disorder is a uniquely episodic disorder.

A Manic Episode in the DSM-IV is defined as requiring at least 1 week of elevated, expansive, or irritable mood. This criterion, along with other aspects of bipolar disorder, has been controversial during the past decade. This interview is not meant as an endorsement of any criterion but rather as a data-gathering tool to facilitate empirical solutions to the debate.

PTSD

Traumatic Life Events (page 1)

For a disaster, it would be useful to distinguish children who directly experienced the event from those who only experienced the aftermath. The convention used with the DIPA following Hurricane Katrina was to endorse item #4 “natural disasters” if children directly experienced the disaster (stayed through the storm). If children evacuated before the storm, and then returned later and saw their devastated homes, then endorse item #12 “other”.

Intrusive Recollections

The sub-item “Affect When Talking About It” is simply for data gathering about whether children are distressed during intrusive recollections (per the DSM-IV requirement) or not (as has been empirically observed in research). Whether intrusive recollections are counted as a symptom contingent on distress is up to each researcher or clinician. The research of Scheeringa et al. that first documented this does not require distress for the symptom (Scheeringa, Peebles, Cook, & Zeanah, 2001; Scheeringa, Zeanah, Drell, & Larrieu, 1995; Scheeringa, Zeanah, Myers, & Putnam, 2003).

Endorse avoidance and/or distress at reminders even if there has been no exposure to reminders in the past four weeks, as long as the caregiver believes the symptom would still be manifest if there were an exposure today.

Children may have avoidance and/or distress at reminders but show no episodes in the last four weeks only because their caregivers have not exposed them to reminders. In these cases, the interviewer must ask, “If your child were to be in such a situation (anticipation of exposure for avoidance or actual exposure for distress at reminders), do you think s/he would show the behavior (avoidance or distress)?” If the caregiver believes that the child would still show the behavior today, then the appropriate item must be endorsed.

If an avoidance or distress item is endorsed in such cases, frequency is coded as 0, and duration is coded for the actual behaviors that they showed prior to the four week period.

Fears that are part of distress at reminders, avoidance, and other new fears.

Fears that are discussed in the PTSD section, when viewed in isolation, appear identical to the fears of specific phobia. Fears ought not to be coded in more than one disorder module. If the onset of the fear appears related to a traumatic event, then code it under PTSD. If not, code it under specific phobia.

PHOBIAS

Frequency is not rated for phobias because these are driven by chance and circumstance of environmental exposures.

Both specific phobia and social phobia have an A criterion that is the “fear trait” aspect and a B criterion that is the “anxiety response to exposure” aspect. In young children, A often can be inferred only from the presence of B. Therefore, A is not asked directly in this interview, but B is. If B is present, this implicitly suggests that A is present, although the limitations of such inferences are acknowledged.

The C criterion that the fear must be recognized by the child as excessive or unreasonable is not asked due to developmental limitations.

PSYCHOMETRICS

The test-retest reliability and concurrent criterion validity of the DIPA were tested in a research study with 1- to 5-year-old children for seven disorders: PTSD, MDD, ADHD, ODD, SAD, GAD, and OCD. The DIPA showed adequate properties, but there were too few children with GAD and OCD symptoms to fully test those two disorders. The details were published in:

Scheeringa, M. S., & Haslett, N. (2010). The reliability and criterion validity of the Diagnostic Infant and Preschool Assessment: A new diagnostic instrument for young children. Child Psychiatry & Human Development, 41(3), 299-312.

FAQ

Doesn’t an interview like this create a sterile atmosphere, so that parents won’t open up about their children’s personal issues?

For most parents, no. Most respondents view the structure as a mark of quality. To make it more “customer service” friendly, it is recommended to say explicitly at the beginning that the interviewer needs to follow this interview format and that it will take a long time. For respondents who find the interview uncomfortable, this is usually a reflection of larger issues that are not isolated to this interview. Also, one must rely on the interpersonal skills of the interviewers to insert informality and conversational free-lancing as needed on an individual basis.

Why is the script “and this was present sometime in the last 4 weeks?” written out for nearly every item? Why can’t you assume interviewers will remember this and save ink?

In past experience, interviewers don’t remember this. The interview is sufficiently complicated that this seems to be forgotten fairly regularly and items get coded for lifetime if this reminder is not constantly written in the interview.

Why aren’t scripts written out for every item to remind the interviewer to ask about frequencies and durations?

There are blank lines with labels in the far right column for frequency and duration for every item where applicable. These appear to be sufficient reminders for interviewers.

If core required criteria are not met, why aren’t there skip outs to the next disorder module?

This is relevant only to PTSD and major depression. Most other published diagnostic interviews have a skip out to the next module if, for example, an individual does not have sadness or loss of interest under major depression. The decision was made to capture information about all possible items for these disorders, not just to determine the categorical presence of a disorder.

LITERATURE CITED

Scheeringa, M. S., & Haslett, N. (2010). The reliability and criterion validity of the Diagnostic Infant and Preschool Assessment: A new diagnostic instrument for young children. Child Psychiatry & Human Development, 41(3), 299-312.

Scheeringa, M. S., Peebles, C. D., Cook, C. A., & Zeanah, C. H. (2001). Toward establishing procedural, criterion, and discriminant validity for PTSD in early childhood. Journal of the American Academy of Child and Adolescent Psychiatry, 40(1), 52-60.

Scheeringa, M. S., Zeanah, C. H., Drell, M. J., & Larrieu, J. A. (1995). Two approaches to the diagnosis of posttraumatic stress disorder in infancy and early childhood. Journal of the American Academy of Child and Adolescent Psychiatry, 34(2), 191-200.

Scheeringa, M. S., Zeanah, C. H., Myers, L., & Putnam, F. W. (2003). New findings on alternative criteria for PTSD in preschool children. Journal of the American Academy of Child and Adolescent Psychiatry, 42(5), 561-570.

Task Force on Research Diagnostic Criteria: Infancy and Preschool. (2003). Research diagnostic criteria for infants and preschool children: The process and empirical support. Journal of the American Academy of Child and Adolescent Psychiatry, 42, 1504-1512.
