^B00:00:19 >> [John Eckenrode:] So this is a good time to, you know, if you're wondering about anything we said earlier or if there's something on your mind, this is a good time to ask it. So we're just going to open it up. >> So is African-American considered black, and would people possibly have said other race if they were actually not African-American but [inaudible] American? >> [Terri Lewis:] Let me tell you what the categories at baseline were, and I don't know exactly what specifically the measure asks, but I'm sure it was white, non-Hispanic, Hispanic, Asian, Native American, African-American, mixed race, other. >> [Alan Litrownik:] Other. >> Okay, so that's ethnicity. >> [Alan Litrownik:] That's ethnicity. That's ethnicity, yes. >> The term "white" is-- >> [Alan Litrownik:] Yes. Yes. >> [Terri Lewis:] And because of the distribution of the sampling at the different time points, depending on the analysis samples, sometimes the number for any particular category is just not enough to model, so things just have to get combined or people just have to get dropped out, and so our "other" starts to kind of encompass Asian, Native American, other. >> [Alan Litrownik:] Well, we have that mixed category. >> [Terri Lewis:] And then there's mixed. >> [Alan Litrownik:] And there's 16% or 18%, I think, for mixed. >> [Terri Lewis:] It's our fourth largest category. >> [Alan Litrownik:] Yeah, and all we have is mixed, with nothing else. We know nothing else about them. >> [Terri Lewis:] Now at age 12, we do have the multi-ethnic identity measure and the kids can self-identify what group they feel they most belong to, and I'm going to tell you that mixed identifies mixed. >> Okay. >> It's not helpful, you know, but, I mean, which is fine, you know, in the sort of how they're self-identifying, but it's-- >> [Alan Litrownik:] Prior to 12, it was the caregiver that was identifying the ethnicity of the kid, and we saw some ethnicities change.
>> [Terri Lewis:] We use child ethnicity primarily because I think there may be some administrations when we don't have the caregiver ethnicity, depending on the interview. >> [Alan Litrownik:] Yeah. >> [Rae Newton:] If you're interested in that variable, it's also highly confounded with site, so that's something you want to pay attention to. >> I had noticed that it seems like a lot of the research you covered this morning involved one or two sites, or several sites, but not all sites, and I wondered if you could just tell us, you know, we are speculating, why that might be, but it seemed like there wasn't much done on all the sites. >> [Terri Lewis:] I looked at that. >> Maybe you just didn't cover-- >> [Alan Litrownik:] Well, yeah. Well, some of them, the numbers were not the 13 15. The study that-- Some of the stuff I went over very quickly and didn't give you all the information; in some we selected subsamples, but all the sites were involved. But the big problem was, early on, Chicago's kids were so young that when we did the early studies they were not included. So we didn't select out for any other reason than we didn't have all the data in. We have a rule, which from the beginning we attempted to follow, that we would not publish anything unless 90% of the expected data were complete, because we didn't want to publish on partial data and then go back and look at something again and have things change. >> Okay, so you weren't using things like [inaudible] information, [inaudible] likelihood or imputing data or anything. So that's something that still needs to be done. >> [Desmond K. Runyan:] But the dataset wasn't available for analysis until it met the 90% rule. >> Okay, I get it. >> [Desmond K.
Runyan:] For that, so one of the solutions at times was to say, I'm going to take two sites with the oldest kids, or three sites with the oldest kids, and I'm going to analyze that data, because 90% of the data for those three sites are in, and so I can do that, and if I wait-- If I want to add Chicago, I can't get 90% for another two or three years, and so-- >> So it really does mean that in every single paper that's been published with, quote, LONGSCAN data, the population to which you were generalizing changes in practically every paper, because, I mean, I was talking to you about that, and I think that was one of my major questions, because, you know, I'm always asking my students to think about, well, what is the population to which you want to generalize these results, because that's why we do data analysis, so we can generalize, and so you really have to specify in each paper which population your sample is coming from. >> [Desmond K. Runyan:] Yes, explicitly, the two west coast samples, all those children were reported to DSS. >> [Terri Lewis:] Yeah. >> [Desmond K. Runyan:] By definition-- >> [Terri Lewis:] Not all of them. >> No. >> [Terri Lewis:] Not the northwest. >> [Alan Litrownik:] Well, they were all-- Yeah. >> Rules were made to be broken. >> [Terri Lewis:] Yes. >> [Alan Litrownik:] But their eligibility was through contact with the-- >> [Terri Lewis:] The family was reported. There are siblings in the dataset. One sibling could've been in the report and the family got into social services, but the other may not have specifically had a report to CPS, so if you look at those two sites, not all of the kids that you think should have a report will have a report. >> [Alan Litrownik:] Yeah, well, then you raise the issue of siblings. >> There's no siblings. >> [Terri Lewis:] There's siblings. >> Are there? >> [Alan Litrownik:] Oh, it's worse than that. >> Oh, you're kidding. >> Really? I was told there were no siblings.
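The 90% completeness rule Runyan describes, releasing a site's data for analysis only once at least 90% of the expected interviews are in, can be sketched as follows. This is a minimal illustration; the site names, counts, and the `sites_meeting_rule` helper are hypothetical, not actual LONGSCAN figures.

```python
# Hypothetical sketch of the "90% rule": a site's data becomes available
# for analysis only once its completion rate reaches the threshold.
# Names and numbers below are invented for illustration.

def sites_meeting_rule(expected, completed, threshold=0.9):
    """Return the sites whose completion rate meets the threshold."""
    return [site for site, n in expected.items()
            if completed.get(site, 0) / n >= threshold]

expected  = {"Seattle": 250, "San Diego": 330, "Chicago": 245}
completed = {"Seattle": 240, "San Diego": 300, "Chicago": 150}

# Chicago's younger cohort lags behind, so an analyst following the rule
# would proceed with only the sites that already satisfy it.
ready = sites_meeting_rule(expected, completed)
print(ready)  # -> ['Seattle', 'San Diego']
```

As the discussion notes, the practical consequence is that the analysis sample, and so the population being generalized to, shifts from paper to paper depending on which sites have cleared the threshold.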
>> [Alan Litrownik:] It's [inaudible] and we got kids who are living in the same households that aren't sibs. >> Can you identify specifically-- Oh, sorry. >> [Terri Lewis:] Not through the dataset. Not through-- >> So there's-- We have [inaudible]-- We actually have hierarchical data because of the fact that in some families, two children are identified with ID numbers, or three or four. >> [Terri Lewis:] They will all have unique IDs. >> Yes, I understand that. >> [Terri Lewis:] I meant to the extent that, yes, I mean, we undertook this. We looked at it. There's not-- I mean, there's-- We opted not to do it. >> So there's no way of actually identifying which ID numbers belong to the same family? Can you give us-- >> Is it a lot of families? >> [Desmond K. Runyan:] No. >> [Terri Lewis:] No. >> An idea of how many there might be? >> It wouldn't be that many, because they'd have to be the same age at the same time. I mean, I know-- >> [Alan Litrownik:] So we got a range of 20. >> [Desmond K. Runyan:] But I think it's like 20 kids. >> Twenty kids. >> [Miguel Villodas:] No, 20 or so at San Diego. >> [Alan Litrownik:] That's probably 10%. >> [Desmond K. Runyan:] North Carolina doesn't have any sibs. >> [Alan Litrownik:] Yeah, because they were identical. They were first borns. >> [Desmond K. Runyan:] They were all [inaudible]. So North Carolina doesn't have any sibs. I don't believe Baltimore has any sibs. And I don't think Chicago has any sibs actually, so the only two places that could have sibs are-- >> California. >> [Desmond K. Runyan:] San Diego and Seattle, and that's because at both of those sites you could have children in a range of ages, because the recruitment was, they had to be less than 4. So in Seattle, if you had a report made of a family and there was a 1 year old and a 3 year old and the parent consented, they both potentially could've been enrolled.
>> [Alan Litrownik:] Well, in our site, they were both removed, and not necessarily to the same home. We've done some things when we were looking at the characteristics of the family. What we did is those that were living in the same home; actually it wasn't the biological relationship that we were interested in, it was whether they were living in the same home at the time of the interview, because they could've been different ages. So you don't want to double count, but we got that information. Yeah, we had actually come up with a paper that we had submitted. We got an interesting review and then we started to revise, and the question was one of whether or not we really wanted to lock ourselves in, and we decided not to. So we pulled it. But what we had talked about with that approach was that the issue of site was one of whether or not the relationship between the independent and dependent variable was the same across sites, and that was really the issue. The other thing is that sites differ on a number of dimensions. Ethnicity was one and recruitment status was another. If we could account for those with some of the other variables that we had, some of the demographics, and we didn't find any interactions with site after that, then maybe that was enough to account for it, as long as we checked to make sure that there were not any interactions, and if we checked, then we didn't need to control for it. ^M00:10:09 So we had this whole procedure of what we were going to do, and we decided we didn't want to go through all of that. So we basically have-- Depending on the research questions that people are asking, those that are working on the papers will decide how they want to look at site and how they want to handle it. In some papers, it's just a simple main effect with five different groups. In some, we've looked at the sites that are similar on the measures and we combined them.
In other cases, we've actually checked and looked at interactions, and some-- >> Risk factors. >> [Alan Litrownik:] Yeah, risk factors, and some with hierarchical modeling. Yeah. >> [Desmond K. Runyan:] And one of the things that I think I've observed and talked about with Terri here is that it seems like as the kids get older, the site variable becomes less important in terms of distinguishing, because we now know virtually everything that's happened to these kids from age 4 on, and so the things that distinguished them in the first four years make relatively little difference compared to all the things we know about them in terms of life events and what's happened to them from 4 to 8 or 4 to 12, and so as the kids get older, a greater proportion of their life is completely described in our dataset, and so the variations that made them different at the beginning become less important. >> [Rae Newton:] And I think part of the argument is when you control for site, you don't really know what it is you're really controlling for, because it's confounded with so many things; maybe that's really controlling for ethnicity. So I think part of our rationale is it's better if you have a clear, theoretically generated set of controls that you put in your model, without just saying okay, you need to control for site. >> And what are those in your dataset? I know, like, in the CDP dataset, the Child Development Project dataset, there are certain control variables that you invariably always enter in all your models. Have you identified the ones that you feel we should definitely include as control variables, like site is definitely one, ethnicity or not? >> [Terri Lewis:] It depends. >> [Alan Litrownik:] It depends.
>> [Terri Lewis:] It depends on your research question, what you're modeling, what your outcome is, what your predictors are, what's likely to be confounded with your predictor variables and-- ^M00:12:39 [ Multiple Speakers ] ^M00:12:42 >> That you found in the analyses to have really been important controls. >> [Terri Lewis:] It depends. >> [Alan Litrownik:] Yeah, it does depend on the variables that we're looking at. >> [Desmond K. Runyan:] Yeah. Just as an example, the Baltimore kids have really high self-esteem for reasons that we haven't been able to-- >> They're probably all-- >> Everyone from Baltimore-- >> [Desmond K. Runyan:] You know, I think the success of the Orioles or something has something to do with it. Disproportionate to any of the other evidence that we have about how the kids are functioning, their self-esteem seems to be a little higher, and so there are some times when it's important to kind of think about the fact that they think a lot of themselves in Baltimore. >> [Terri Lewis:] In Seattle, caregivers are depressed. >> [Desmond K. Runyan:] That's true. In Seattle, the caregivers have a greater level of depression, but they have cloudy skies all the time, so maybe that's seasonal. Yes? >> Do you think that being involved in the study perhaps, you know, affects the outcome, I mean, affects these children in a way that makes them different from other children in similar circumstances? >> [Desmond K. Runyan:] Well, objectively, I'd say we're a small part of it, a very small part of their lives. I mean, two hours every two to four years seems like it should not be enough to matter, but we do have some families that, when we ask them the question who would you count among your support people, have identified our interviewer, who sees them two hours every two to four years, as one of their support people, which is a pretty sad commentary.
>> I was thinking more like, if I were a child and asked all these questions, then it would give me a model-- It would be a model of myself and what, you know, what kinds of things, what kind of parameters in the world would affect me that other children would have, and it might change [inaudible]. >> [Desmond K. Runyan:] My temptation is [inaudible] two hours every four years-- ^M00:14:40 [ Multiple Speakers ] ^M00:14:43 Is not a significant enough intervention to really change that very much. >> Yeah, but it constitutes a type of monitoring. >> [Terri Lewis:] Yeah. >> Regardless of how infrequent it is. You know, it means that they're special and somebody's kind of watching out for them. >> That's right. >> Yeah, yeah. >> [Desmond K. Runyan:] We had families that have notified us when they've moved someplace else and wanted to be contacted, but we usually assumed it was because they wanted the $50, the money to complete the interview; they were going to lose that income, but-- >> [Alan Litrownik:] Well, and we do typically send out newsletters. You know, all the sites do something like that for-- >> [Terri Lewis:] Birthday cards. >> [Alan Litrownik:] Birthday cards and holiday cards, mother's day cards. >> That's [inaudible] a relationship for some of these kids. >> [Alan Litrownik:] Yeah, and what we did in our newsletter is we had some of the kids make drawings for the [inaudible], you know, interviews that have gone out. You get a lot of things, without identifying people, where they were sort of part of this group. >> [Desmond K. Runyan:] It's a mixed-- I mean, you don't want to have them identify too closely with us, but on the other hand, we want them to contact us and stay in touch and feel like they have some obligation to be available for followup. >> [Alan Litrownik:] Well, and if this was an intervention, a positive intervention, it didn't have all that much of an impact, because our kids are not doing all that well.
>> They're not using drugs or alcohol. >> [Terri Lewis:] At 12. >> At 12. >> [Alan Litrownik:] That's what they're reporting. >> Right. >> [Alan Litrownik:] That's what they're reporting. >> Well, okay. At least they know they shouldn't be saying they're using. >> [Alan Litrownik:] I should've told them that they-- >> That was one of my questions, too. You know, you do the consent and now they realize, hey, I'm not going to tell them about the sex abuse. >> [Terri Lewis:] Yeah. Alright, I mean, if you look at the self report, what they're reporting, I mean, the numbers who are reporting things, I don't think that we suffer from under reporting. We're getting the trauma symptom checklist at 12. >> [Desmond K. Runyan:] And we have more kids that tell us about sexual abuse than DSS ever knew about. >> [Alan Litrownik:] Yeah. >> [Desmond K. Runyan:] Dramatically more. >> [Alan Litrownik:] We had a number, not a lot, of refusals, hard refusals, where caregivers, this was a biological dad in one case, said you will not contact us again. If you do, I am going to, you know, file a lawsuit, and so we stayed away, but we sort of kept track with CPS to see if the child ever came to the attention of CPS again. And this was after the 4 to 6 interview, and at age 12, we saw her again. She was back in the system for guess what. Sexual abuse. >> By the father. >> [Alan Litrownik:] By the father. So we've had a few cases like that. So there's-- One of the things that we've done, it's not going to [inaudible], is, the lives of these kids, especially our sample again that have been placed in substitute care, are so chaotic that talking about family structure and household composition is not anything that most kids experience. And just the data that we're collecting, the quantitative data, doesn't really tell the story of the kids' lives.
So what we've done is we've gone back with all the data we have, not just from interviews, but from contacts, trying to maintain contact with the kids, talking to their social worker or whatever, whoever we might talk to; we've got all of this information, and we have a set way of reviewing it, and we do life narratives when the kids age out at 18. So we've got, from the time that they first came into the foster care mental health project, when they were first reported, and in and out of home care, we've got a narrative of what's gone on in their life, who they've lived with, things that have happened, the sorts of problems they've had, and, I mean, the stories. From this, we've identified some kids now who are perpetrators of sexual abuse, and these are kids actually who are reported to CPS as perpetrators. So we-- >> Does anything of that [inaudible] in the CPS records? Did that [inaudible]? >> [Alan Litrownik:] No. >> [Desmond K. Runyan:] Not until age 12. >> [Alan Litrownik:] No. No. And actually what happens, we couldn't code it. So we've got a new code now for the kids who are perpetrators, some who are parents in perpetrating and some who are the target or the subject of a report, and that, I guess, officially is not [inaudible]. >> [Terri Lewis:] It is. >> [Alan Litrownik:] Okay. >> [Desmond K. Runyan:] We had to develop a new code for that. >> [Terri Lewis:] Recently. >> Do you all have sort of recommendations for handling missing data? I mean, I guess [inaudible] install, where I have, you know, sometimes thousands of kids, and in the sample if you lose 100, oh well; a little more [inaudible] about preserving sample size in this case to retain power and keep the stability of the estimates, and do you have recommendations for that? Do you have a protocol? ^M00:20:18 >> [Alan Litrownik:] Well, it depends on the analytic technique that you use. Sometimes it can handle missing data. It depends on what you're doing.
When you start looking at things over time, you know, it's not so much-- Missing data is not necessarily a problem; it's how many missing data points you have and whether you've got the coverage, and so it really depends on what questions you're asking. If you're doing a simple, you know, outcome at one time and predictor at another time, then there's an issue. >> I was going to be running some growth curves, and so, I mean, sort of looking at the missing data structure that Terri presented. >> [Alan Litrownik:] Well, actually, you know, in some of the stuff that I didn't go into detail on, one, looking at resilience, I think, yeah, I think it was resilience, over time from 4 to 14-- Was it that one? Well, one of them required a certain number of data points: if they had three out of five data points, we would include them, and we didn't lose many when we did that. So-- >> [Terri Lewis:] It depends. >> [Alan Litrownik:] It depends. You have to sort of look at it. You look at the number of missing data points you have per subject, and you might make a decision based on that. >> [Rae Newton:] We haven't set forth any standard protocol for handling missing data over a longitudinal [inaudible] model. Different people have done it different ways. Sometimes we simply just say okay, we lost 20%. We've got complete data for these time points. That's what we're going to use. Sometimes we handle it another way. >> But in your experience, are there any publications out now where, I mean, you've got two different kinds of problems with missing data. I mean, you talked about some kids who drop out but then come back, so those missing data are sort of bounded by data, but then you've got [inaudible] and you lose them.
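The per-subject coverage rule Litrownik describes, including a child only if they contributed data at some minimum number of waves (three of five, in the resilience analysis he mentions), can be sketched like this. The wave labels and subject records are invented for illustration.

```python
# Sketch of a wave-coverage inclusion rule: keep a subject in a
# trajectory analysis only if they have non-missing data at a minimum
# number of waves. Wave names and records below are hypothetical.

WAVES = ["age4", "age6", "age8", "age12", "age14"]

def has_coverage(record, min_waves=3):
    """True if the subject has non-missing data at >= min_waves waves."""
    return sum(record.get(w) is not None for w in WAVES) >= min_waves

subjects = {
    "A": {"age4": 10, "age6": 12, "age8": None, "age12": 14, "age14": 15},
    "B": {"age4": 9, "age6": None, "age8": None, "age12": None, "age14": 11},
}

# Subject A has 4 of 5 waves and is kept; B has only 2 and is dropped.
included = [sid for sid, rec in subjects.items() if has_coverage(rec)]
print(included)
```

The point made in the discussion is that the threshold itself is a judgment call: you look at the distribution of missing waves per subject and decide how much coverage a given model needs.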
So if you take the easier case, a child who maybe drops out but then returns, will you, or will some members of the team, have imputed that missing spot, or may they have done some sort of imputation, or do they just keep it in the dataset and just use the data that are there and not try to fill that missing hole? >> [Alan Litrownik:] The latter. We have not done any-- >> [Terri Lewis:] I don't think we've done any imputations to date, but we have also, probably in the last year and a half, two years, moved more into more sophisticated longitudinal modeling, and a lot of the techniques that we use sort of handle those missing data. So we put in what we have, and the model deals with it within [inaudible]. >> Depending on full information maximum likelihood to-- >> [Terri Lewis:] It depends on the model. >> As the estimator. >> [Terri Lewis:] And technique; it depends on the model and the technique. >> Have you looked-- Have you seen any rhyme or reason to patterns of missingness, I mean, like, if they're changing caregivers or, I mean-- >> [Terri Lewis:] When I looked at it, I did a rather, I mean, you thought my presentation this time was long; you should've seen the one on [inaudible] in Chapel Hill a couple years ago. >> I didn't think it was long though. You should [inaudible]. You could've talked about [inaudible]. >> [Terri Lewis:] I looked at a lot of different factors. I looked at the CBCL [phonetic]. I looked at [inaudible]. I looked at changes in caregiver. I looked at who the caregiver was. I looked at ethnicity, site. I looked at all kinds of information in terms of seeing if there was something about the sample that you could pull out and say, this looks like something that's non-random. I didn't find anything other than, at certain time points or places, site and to some degree ethnicity, but it kind of depended; for the most part, I did not find anything-- >> [Alan Litrownik:] Systematic.
>> [Terri Lewis:] Over time that was systematic. >> Otherwise, you could characterize it as at least [inaudible] random. >> [Alan Litrownik:] Yeah. >> Well, one of the things that I've always said, because people, often reviewers, you know, this is one of the things that they harp on, is, you know, how much [inaudible]. The thing is that if you use a program like [inaudible], in which, no matter what your analysis is, you can use full information maximum likelihood, then, you know, you're using all the sample that you have, but you can also then just delete all of the people who are missing, fit the model again, and compare, and then you're doing a sensitivity analysis, which really shows you whether there's some real difference between the full sample and those who have complete data, and oftentimes there was very little difference, especially if things are missing at random, as you were just saying, so that at least appeases reviewers; then they'll say, oh, okay, it doesn't matter, because basically your results are pretty stable. That's what I always use to convince reviewers I'm doing the right thing. And I can be pretty damn persuasive [inaudible]. >> Have you constructed standardized [inaudible] to compare with the NIS studies or with national reporting norms on what the differences might be, given your interview data, to estimate incidences and prevalence that might differ from other reporting methods? >> [Desmond K. Runyan:] No. We-- Actually, that's a paper we've talked about doing systematically. The one thing, just looking at the data, that strikes me is that we seem to have protected our kids from being sexually abused, in the sense that given the number of kids in our sample and how many kids we have who have been sexually abused, it's remarkable actually how few of them have been, compared to the prevalence data that [inaudible] and Marie and other people have in terms of 20% or 25%.
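The sensitivity analysis described here, fit with everyone via full information maximum likelihood, refit on complete cases only, and compare, can be illustrated in miniature. True FIML estimation needs an SEM package (lavaan, Mplus, and similar tools offer it); in this sketch an available-case estimate simply stands in for the full-sample fit, and all the data are simulated, so the numbers are not from LONGSCAN.

```python
import random

# Sketch of a missing-data sensitivity check: estimate a quantity on all
# available data, re-estimate on complete cases only, and compare. All
# data are simulated; the available-case mean stands in for a FIML fit.
random.seed(1)

rows = []
for _ in range(500):
    x = random.gauss(0, 1)
    y = 0.5 * x + random.gauss(0, 1)
    # Delete each variable completely at random, ~20% of the time.
    if random.random() < 0.2:
        x = None
    if random.random() < 0.2:
        y = None
    rows.append((x, y))

avail_y = [y for x, y in rows if y is not None]            # all available y
complete = [(x, y) for x, y in rows if x is not None and y is not None]

mean_available = sum(avail_y) / len(avail_y)
mean_listwise = sum(y for x, y in complete) / len(complete)

# Under MCAR the two estimates should be close; a large gap would be a
# warning that dropping incomplete cases changes the answer.
print(round(abs(mean_available - mean_listwise), 3))
```

A small gap between the two estimates is the "results are pretty stable" argument the speaker makes to reviewers; a large gap would signal that the missingness is doing real work in the analysis.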
I mean, we're less than 10%, including as teenagers, including with other people, so outside the family. So it's really-- I mean, one wonders what that is, and I've speculated that maybe it's because we've got a social-services-involved sample and there's kind of a higher level of scrutiny going on with these kids, and so maybe they're-- It's hard to believe that that's going to protect them, but maybe it is. I'm a little puzzled by kind of why there's so little sexual abuse. >> [Rae Newton:] Maybe high risk kids define sexual abuse differently. >> [Alan Litrownik:] Yeah. >> [Desmond K. Runyan:] Although we ask the questions-- ^M00:27:19 [ Multiple Speakers ] ^M00:27:21 We ask them in pretty standardized ways, and so we're a little surprised at how low that is. >> [Alan Litrownik:] Yeah, one of the things that we have been attempting to do is to extend LONGSCAN. I mean, we figure we've got an interesting sample with a lot of data, and we just finished a proposal to do another interview. I'm not sure that we would archive it here. If you guys are still around, well, I don't know, but when the kids are 22 to 25, and one of the things we're focusing-- >> We will. ^M00:27:55 [ Inaudible Comment ] ^M00:27:56 Okay, okay. It's a deal. We get the money, we'll tell them that they've got to give you money too. >> They have to give us money to do the archiving in order to archive. >> [Alan Litrownik:] Exactly. >> And we can feed off each other. >> [Alan Litrownik:] But one of the things that we talked about, for this right now we're focusing on substance use and risky sexual behavior, but we had talked about wouldn't it be interesting to do retrospective reports as young adults about the history, which might be more comparable to some of the other data that people have collected, and with some time, we might get different rates of reporting. >> [Desmond K.
Runyan:] Kathy Whittam [phonetic] has written about that retrospective issue of asking adults, and raised concerns that that's really not a very good way of doing it, but it would be interesting. The other proposal that I think would be interesting is to gather our participants all in a room when we get done and present our results to them, and sound them out about, kind of, does it feel like we're talking about their experiences, you know, more of a [inaudible] based participant research strategy with our subjects, now that they're adults, and have them in conversation with social service people in the room, so they would be engaged in a dialogue about kind of what it felt like to go through the social service system. >> [Alan Litrownik:] I was talking to somebody about, I forget, what they did with the moms, yeah, okay. >> Yeah. >> [Alan Litrownik:] The moms who we had to go get consent from, and we did a little qualitative study. >> [Rae Newton:] Yeah. It was only a very small sample of mothers whose parental rights had been relinquished, but not quite, but they weren't living with their children. One of the interesting things about that was, for every single one of the moms, cognitively it was still their kid and they were going to be reunified, and no matter how messed up their lives were, it's "when I get my kids back" and "this is what I'm going to do," and it was interesting because the system looked at it entirely differently; this person no longer was a parent, basically. ^M00:30:12 So it was-- >> [Alan Litrownik:] Yeah, and they were living in rehab, you know, homes, and, yeah, really messed up, but they still were going to get their kids-- This is six, seven years after the kids had been removed, they were going to get them back. >> [Desmond K. Runyan:] And they had had their rights terminated. >> [Rae Newton:] Yes, but not quite. We still wanted consent. >> [Alan Litrownik:] Yeah.
>> [Rae Newton:] Even though their rights were terminated, we still wanted consent. >> [Alan Litrownik:] These are the ones whose rights hadn't been terminated. For the ones whose rights had been terminated, we could get it from whoever had the legal responsibility. >> [Rae Newton:] But we have felt for a long time that a more qualitative approach with some selected sample of our group would really be helpful, particularly if we wanted to turn it into a [inaudible]. >> [Terri Lewis:] Yeah. >> [Miguel Villodas:] What we've been talking about in San Diego a lot is kind of developing more of a person-centered approach. >> Yeah. >> [Miguel Villodas:] And so rather than trying to fit the experiences, you know, based on the variables and all these interactions, we want to look at, you know, are there certain patterns of interactions or, you know, alternate experiences that, you know, occur across the sites or across kids, and you could do that looking at the severity [inaudible] type. Right now what I'm doing is just looking at the type, so it's just yes, no, you know, at different age periods, and that's actually working out pretty well. Getting into severity, you're getting into latent profile analysis or, you know, latent transition analysis if you take it longitudinally, and so that can be done, and that's actually probably a better way to get at those issues, because otherwise there are just too many interactions. >> [Desmond K. Runyan:] One of the things that we've done and we've wrestled with is whether we can count predominant type by which is the most severe, using the severity codes. >> The hierarchy. >> [Desmond K. Runyan:] And use a hierarchical system, which seems like it works in some situations but not every situation. >> [Miguel Villodas:] Yeah.
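The yes/no-by-age-period approach Villodas describes can be sketched as a simple pattern tally: code each maltreatment type as present or absent in each age period, then see which longitudinal patterns actually occur and how many types each child ever experienced. The child records below are invented; a latent class or latent transition model would group similar patterns probabilistically rather than by exact match.

```python
from collections import Counter

# Sketch of a person-centered, yes/no-by-age-period coding: each child
# gets a tuple of present/absent flags per maltreatment type for the
# periods 0-4, 4-6, and 6-8. Records are hypothetical.
histories = {
    "c1": {"neglect": (1, 1, 0), "physical": (0, 0, 0)},
    "c2": {"neglect": (1, 0, 0), "physical": (0, 1, 1)},
    "c3": {"neglect": (1, 1, 0), "physical": (0, 0, 0)},
}

def pattern(history):
    """Flatten a child's history into one hashable yes/no pattern."""
    return tuple((t,) + flags for t, flags in sorted(history.items()))

def n_types(history):
    """Count distinct maltreatment types ever reported for a child."""
    return sum(any(flags) for flags in history.values())

# c1 and c3 share a longitudinal pattern; c2 is distinct.
counts = Counter(pattern(h) for h in histories.values())
print(counts.most_common())
```

The `n_types` helper reflects the finding Villodas cites next: across recent latent-profile studies, children with more maltreatment types tend to show worse outcomes.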
We've seen about three studies that have come out in the past year that have used a latent [inaudible] latent profile type of procedure, and I think pretty much across the board, the more types of maltreatment that have been experienced, the worse the outcomes. >> Yeah. >> [Miguel Villodas:] Just generally. But that's, of course, not looking at these types longitudinally, and using retrospective reports and all kinds of other issues that we can get around with our data that other studies aren't able to. >> Right. >> [Desmond K. Runyan:] Jonathan Kotch published a paper looking at early neglect and its relationship to subsequent aggression, but I'm involved in an analysis now in which it looks like later domestic violence exposure has more potency for explaining age 8 trauma symptom checklist scores and behavior problems at age 8 than the earlier neglect that Jonathan found in his analysis. So it's the same subjects analyzed in a somewhat similar way, but looked at slightly differently with different questions, and it comes out slightly different, and so it kind of depends. >> [Terri Lewis:] It does depend-- It also depends on what modeling strategy you're using. For example, the Jones et al. [phonetic] paper was doing [inaudible] using trajectory models, looking at 0 to 4, 4 to 6, 6 to 8, and what she was finding is the variability was so limited in those time frames that she went back and recoded it to 0 to 2, and that actually gave her the variability that she needed in terms of making those trajectories work better, so there's-- >> [Alan Litrownik:] Part of the reason was she included all sites, and she had two sites, and actually not just two sites but part of the North Carolina site and part of the Chicago site, all of whom had been reported prior to age 4.
And actually, the studies that we did in the special issue of Child Abuse & Neglect selected all kids who had a report between birth and 8, and so those were from all the sites. But when she was modeling that, since she had some sites where all the kids had reports, that caused some problems in terms of the modeling. >> [Terri Lewis:] And in other cases, it depends on what you're looking at, or where you're lining up in terms of the outcome you're assessing-- would it make sense in terms of how to measure or quantify that timeframe-- so, I mean, it really just depends. ^M00:34:44 [ Inaudible Comment ] ^M00:34:49 >> [Alan Litrownik:] Now-- I mentioned a study that Laura Pratka [assumed spelling] was doing, looking at just the Seattle and San Diego sites, where they had all been reported, and looking at trajectories after that, and that was very different. Well, the research question was a very different question, you know: for a sample of kids who were reported earlier, what does their trajectory of reports after that look like? And that's very different from the trajectory of reports for a sample over time. >> I guess I have two questions. One, I noticed that the measures change for various reasons, so I wonder why you decided to use the Multigroup Ethnic Identity Measure-- what was your rationale for using that, and what was your rationale for dropping it at age 14-- and also the trauma symptom checklist, you've been alluding to some problems with that measure [inaudible]? >> [Alan Litrownik:] Well, the first, the ethnic identity measure, is based on [inaudible]. >> [Desmond K. Runyan:] Child self reports. >> [Alan Litrownik:] Yeah, it's child self report, and that was one of the issues that we had talked about-- wanting to have a measure, but there were questions about whether or not at age 8 it was, you know, appropriate. And in terms of the measures, we made decisions about what measures to include so we could do all the programming.
We tried to do it, you know, well in advance, and when North Carolina was coming in, we were always under the gun because their kids were aging, and I'm trying to think-- when we made the decisions for age 8, it was probably early on, '93 or '94-- >> [Desmond K. Runyan:] Probably '93. >> [Alan Litrownik:] So, you know, you've got to put it into context of what was going on at the time, what was available, and the measures that were just coming out, and I think that's one reason why. >> [Terri Lewis:] And as for why it wasn't picked up at 14: I think, in part, 14 was funded by NICHD, a big component of that was mental health, and we included the DISC, the Diagnostic Interview Schedule for Children, which ate up a huge block of the interview time. So the other LONGSCAN measures administered at that time had to be cut down significantly, and it focused at 14 on some of the sort of core things. And then as for why it wasn't picked up at 16 or 18-- I mean, I also think what was of interest in terms of measurement evolved over time, too. For example, I think at 18 there was a big interest in height and weight and obesity and eating and things like that, and so that kind of ended up at 18 and not, you know, in prior interviews. So I know a lot of things didn't get included at 14, because that interview, in part, was pretty mental-health based and just took up such a big block of interview time. >> [Alan Litrownik:] Yeah. I probably should've mentioned that one of the other factors we considered in identifying measures was the total interview time. I think early on, for the kids, we wanted it under an hour. As they aged, we were at an hour and a half at the longest. For the caregivers, we were up to two hours. But we pushed it. I mean, we always pushed it, and the interviews actually were quite a bit longer. Age 12 was quite long. I think we had some people who refused to participate after that because of the length of the interview.
So we were a little sensitive to that. So we had to make some decisions about what it was that we really needed to get, given the time that we had, so we were always making those decisions, which were-- >> [Desmond K. Runyan:] We didn't think that the kids' ethnic identity changing from 12 to 14 or 16 was as important a question for us as, for example, looking at [inaudible] labor and work and finishing school, and so at each point-- at age 12, we said, what are the major developmental tasks they have to do, what can we fit in here. Age 14 was funded to look specifically at developmental psychopathology, so the DISC was in. At age 16, we came back to kind of dating-and-mating type behaviors and substance abuse, and at age 18, we're kind of looking at school completion and work, not that we wanted to [inaudible] and life skills, yeah. So it was a compromise, and we all sat in a room with five investigative teams and coordinators, kind of fighting it out question by question as to which instruments, or questions from instruments, were going to be included, and so that's part of the reason for the change. Even the BSI versus CES-D came down to how long is this interview going to take, this many questions versus that many questions, and I'll trade you this if you'll do my instrument-- a little bit like how the US Census goes through and tries to figure out what they're going to include. ^M00:39:58 >> [Alan Litrownik:] Do you want to address that? >> [Terri Lewis:] The trauma symptom checklist. We noticed there was a fairly significant decline in trauma symptom scores between 8 and 12. And in looking at the validity scales, the under-reporting scale-- like I think I mentioned earlier, almost 50% scored in the range that [inaudible] would suggest is not valid due to under-responding.
That seemed to be a pretty hefty number of people, and the recommendation, theoretically, according to the manual, would be to drop those participants. >> [Desmond K. Runyan:] But we did talk to John. >> [Alan Litrownik:] I was going to say, yeah, why don't you tell them what his response was. >> [Terri Lewis:] First of all, our communication with John Briere was that, oh, ha, that's interesting. Well, nobody pays attention to-- ^M00:41:00 [ Multiple Speakers ] ^M00:41:02 >> [Alan Litrownik:] So what do you do with those that come out on that? He said, "Oh, come to think of it, I don't think we've done anything." >> [Desmond K. Runyan:] Yeah, that actually got triggered because at age 8 there was a significant number of kids who were on the under-reporting scale, under-response, who were also and concurrently on the over-response scale, and so we kind of scratched our heads and said, what does that mean? Is there something significant about this? And so we called John and talked to him about it, and John's response was, "We never look at that anyway." >> [Terri Lewis:] Yeah, we don't look at that anyway. >> [Desmond K. Runyan:] So then that became the answer for the second one, but we're still a little troubled by why all these kids who had distress at age 8 are all looking pretty good at age 12. >> [Terri Lewis:] It's the entire sample. I mean, their [inaudible] had dropped. I've looked at the ten items that go into the under-reporting scale. I've looked at all of the items. I've looked at anything that looked like patterns of under-response. I have tried to see if there is some explanation in terms of method of administration or order of administration, to the extent that the data would allow me to look at that, and there really isn't, because in our interview, the order of the forms is not randomly assigned. It's the same across all the interviews.
If there was a different, shortened version of the interview, it just means certain measures were not administered. It doesn't change the order. So to some degree, I'm limited in trying to explain it by administration order or type of administration, because those were the same. I have looked at age, and it does seem that even within age 12, the older kids are less likely to under-respond than the younger ones, but that's within a group. It really doesn't explain the fact that the whole group does it. Ethnicity, site-- always [inaudible] to try to figure this out-- and we're left at the end of the day with, what do we do with these data? They seem to be odd in this way. What we have done is look at, okay, well, let's divide the group into maltreated and non-maltreated: given the trauma symptom checklist scores at 12, can they distinguish the groups? And they do. So we've decided that the data are what they are and moved ahead. We've tried to explain it. We've looked at it. I think we've done our very best to comb through what we can to try to explain it, and one can make the decision: if that just seems sort of funky, I'm not going to use it, or I'm going to drop out half of the sample and just use those that don't seem to be under-responding. My best guess is that there is something funky-- something funky about 12 year olds. I really do think-- I mean, I'm not kidding-- I really do think that it's something developmental about that specific age group on that particular measure. >> That seems like a very plausible explanation; 12 year olds are fairly labile. In some areas, they're very developmentally advanced, and in other areas, like emotional regulation, you know, at some level they can maintain basic societal expectations, but there's a lot of lability in there. >> I have-- >> You can catch them on another day and you'll have completely different answers.
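The analytic choice Lewis lays out-- keep the flagged under-responders or drop them, then check whether the measure still distinguishes maltreated from non-maltreated kids-- can be sketched as follows. The cutoff and scores are invented for illustration, not TSCC norms or LONGSCAN values.

```python
# Hypothetical sketch: flag under-responders on a validity scale and
# compare the group contrast with and without them.
records = [
    # (group, symptom_score, under_response_validity_score) -- made up
    ("maltreated", 18, 2), ("maltreated", 22, 9), ("maltreated", 15, 3),
    ("control", 10, 8), ("control", 7, 2), ("control", 9, 1),
]

UNDER_CUTOFF = 7  # hypothetical "invalid due to under-responding" threshold

def group_means(rows, drop_under=False):
    """Mean symptom score per group, optionally dropping flagged cases."""
    sums, counts = {}, {}
    for group, score, validity in rows:
        if drop_under and validity >= UNDER_CUTOFF:
            continue  # treat this protocol as invalid and exclude it
        sums[group] = sums.get(group, 0) + score
        counts[group] = counts.get(group, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

keep_all = group_means(records)
drop_flagged = group_means(records, drop_under=True)
# Under either decision the maltreated group scores higher here, which
# is the check the speakers describe: the measure still separates the
# groups even if the absolute level looks suppressed.
```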
>> [Terri Lewis:] I have seen at least one study looking at stability-- I think it might have been of depression or something that was measured over time-- and the least stable timeframe was around that age, which, you know, leads me to believe that we sort of have what we have, and it probably is more a reflection of the developmental period than of anything wrong with our data in particular. And again, it does sort of distinguish the two groups. I mean, it distinguishes those that are maltreated and non-maltreated-- the maltreated kids, you know, have higher scores-- >> [Alan Litrownik:] You were looking at patterns on other measures over-- >> [Terri Lewis:] Yeah, we-- >> [Alan Litrownik:] Twelve on the CBCL. >> [Terri Lewis:] The CBCL and-- >> [Alan Litrownik:] And we don't see the same kind of drops, so-- >> Does age 12 include a sexual concerns scale? >> [Terri Lewis:] Eight does not; 12 does. >> [Rae Newton:] I think part of the message of this discussion is that in this data set, there are multiple opportunities to do measurement studies. >> [Terri Lewis:] And I should say that in our age 16 data, the scores go back up. So they go from 8 markedly down at 12 and then come slightly back up-- they don't come back up to the levels at 8, but they come back up from what they were at 12, so there does seem to be an increase, which I think is probably a leveling out to what it likely should've been at 12. >> You spoke about how to deal with this [inaudible] changing to BSI-- are there a variety of approaches to dealing with different ways that things are measured over time that are worth mentioning, different approaches that you guys have seen taken? That's not a very specific question, but sort of a range of [inaudible]. >> [Desmond K. Runyan:] With each new paper, each new investigation, we've kind of addressed those issues de novo again, to see if there's a different way of playing with it.
We haven't-- I mean, you need to remember that we're kind of busy collecting new data, so our ability to go back and really-- >> But were there any papers that you admired? >> Boy, I wish I had thought of that. Not yet, no. >> [Desmond K. Runyan:] No. I think Deborah Jones' effort right now to do the trajectory model has got us all thinking very clearly about whether there are ways to do it, and her approach was to look at the exposures with trajectory modeling. We're also actually thinking about looking at the outcomes with trajectory modeling and then trying to-- I'm sorry, she looked at the outcomes, but we're also looking at exposures, so we're doing it both ways. >> [Terri Lewis:] She looked at exposure trajectories-- ^M00:47:17 [ Multiple Speakers ] ^M00:47:18 >> For measures that were more of a linear model, you would move them into percentiles or something like that. But for the trajectories-- could you fit those into trajectory modeling if, you know, at one of those ages you've got a different metric? Because with her trajectory modeling, she has the maltreatment data and it's the same-- >> [Desmond K. Runyan:] Right, it'll be a challenge to do it with the other one. That would be a [inaudible]-- rue the day that we said, why don't we do the BSI instead of the [inaudible]-- ^M00:47:48 [ Multiple Speakers ] ^M00:47:51 >> Yeah, she was talking about [inaudible]. >> [Terri Lewis:] I don't know why we didn't [inaudible]. We just standardized it. We centered it.
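The "we just standardized it, we centered it" step can be sketched in a few lines: z-score each instrument within its own wave so scores from different measures land on a comparable mean-0, SD-1 metric. The instrument names and raw values below are invented for illustration.

```python
# Minimal sketch of within-wave standardization for harmonizing two
# different instruments across ages (hypothetical data).
from statistics import mean, pstdev

def zscore(values):
    """Center and scale raw scores to mean 0, SD 1."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

cesd_age12 = [10, 16, 22, 12]     # hypothetical raw CES-D totals
bsi_age14 = [0.4, 1.1, 0.9, 0.2]  # hypothetical raw BSI means

z12, z14 = zscore(cesd_age12), zscore(bsi_age14)
# Both waves now sit on the same scale, so a longitudinal model tracks
# each child's relative standing over time rather than raw units.
```

The trade-off, worth noting, is that standardizing within wave forces every wave to the same mean, so mean-level change over time is removed; only relative standing is modeled.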
>> [Alan Litrownik:] Well, the other way is that the CES-D does have a standard cut point and the BSI, the BSI-- ^M00:48:02 [ Multiple Speakers ] ^M00:48:12 >> The thing is, I published a paper in which, as long as you have some items that are somewhat similar, you can use different items across time if you make a [inaudible] variable-- >> [Alan Litrownik:] Right, that's the other-- >> Rather than, you know, just using observed variables, and so it is possible-- [inaudible] and a bunch of people-- and it basically says that as long as you have constrained the construct across time to be the same, you can use it in growth modeling or just about anything else you want. So it's not as big a problem as it sounds like at the moment, I agree with you, and there are many ways around it. Yeah. >> [Desmond K. Runyan:] It's been an interesting challenge putting this together, and kind of [inaudible] and five other institutions have been playing along with this, and [inaudible] investigators, and we've learned a lot about each other. We can now actually take each other's roles at the meetings-- we know which person is going to say no first and which person is going to say sure, we can do that-- and sometimes they'll switch roles just for fun. >> I have a question about using the control group. Probably each site has very different characteristics of [inaudible] matched to that [inaudible] sample each site has collected, so are there specific challenges and solutions if you want to use [inaudible] data and we have control versus maltreated? >> [Desmond K. Runyan:] What Diana English did was to pool the maltreated and the non-maltreated groups together and then pick matches from among the maltreated group to match the non-maltreated group, in order to do the maltreatment [inaudible] paper. So one solution, because the non-maltreated kids were the smaller group, was to use that group and then use the larger group to find the matches.
That's just one strategy. It wasn't ideal, because there are still some site differences in that paper, but it was one attempt at doing it. ^M00:50:28 And so that's the final paper in that special issue of Child Abuse & Neglect, called [inaudible]. Are there other papers that come to mind? >> [Alan Litrownik:] With the control groups? It depends on what you're controlling for. >> [Terri Lewis:] And how you define it. >> [Alan Litrownik:] Yeah. If you're talking about controlling for maltreatment, that's one thing, but then there are other things within a maltreated group that you might be interested in looking at. >> [Terri Lewis:] I know we have at least one [inaudible] that would say we don't have any controls in LONGSCAN-- LONGSCAN is all at-risk or maltreated. >> [Desmond K. Runyan:] But by the same token, every site had its own internal comparisons planned from the beginning. For instance, the San Diego site was well represented with kids who were still in foster care originally and kids who had been in foster care and gone home, and then there are the kids who have more permanent placements versus kids who've come back and forth in the system. The Seattle site has kids who were substantiated and kids who were not substantiated; in Chicago there are some pretty clear comparison groups; Baltimore has comparison groups; and North Carolina too, so each of them has its own comparisons. But in some ways it's the cross-site comparisons that have [inaudible]. I mean, looking at the different sites and how they perform taught us a lot more than if we just had one of those individual studies. >> [Terri Lewis:] And you have to [inaudible] that just because it was a control at recruitment doesn't mean it stays that way.
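The matching strategy Runyan attributes to Diana English-- the non-maltreated group is smaller, so each of its members is paired with the closest unused child from the larger maltreated pool-- can be sketched as simple greedy nearest-neighbor matching. The single "risk score" covariate, the IDs, and the values are all invented; her actual matching variables are not specified here.

```python
# Hypothetical sketch of 1:1 matching from a smaller group into a
# larger pool, without replacement.
controls = [("c1", 0.30), ("c2", 0.55)]                       # (id, score)
maltreated = [("m1", 0.10), ("m2", 0.33), ("m3", 0.52), ("m4", 0.90)]

def match_without_replacement(small_group, large_pool):
    """Greedy 1:1 nearest-neighbor matching; each pool member used once."""
    pool = list(large_pool)
    pairs = []
    for cid, score in small_group:
        # nearest remaining pool member by absolute score difference
        best = min(pool, key=lambda m: abs(m[1] - score))
        pool.remove(best)
        pairs.append((cid, best[0]))
    return pairs

pairs = match_without_replacement(controls, maltreated)
# c1 (0.30) pairs with m2 (0.33); c2 (0.55) pairs with m3 (0.52)
```

Greedy matching like this is order-dependent and, as the transcript notes, cannot remove site confounding on its own; it is only meant to show the direction of the strategy (small group fixed, matches drawn from the large pool).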
I mean, those kids certainly kind of move into the maltreated group, given that most of them, while they were controls, were at risk for some things-- either social or demographic, or high risk on some measure of prospective maltreatment risk. >> Do we have a lot of kids who moved from control to maltreated, because later they [inaudible]? >> [Terri Lewis:] If you look at it by, I don't know, 12 or 14, probably over 60%, 65% or 70% [inaudible] have some report of something. >> [Desmond K. Runyan:] I'm trying to think of the exact numbers, but for instance, North Carolina started off with 33% who by definition were maltreated, and it's like 45% a few years later, and the Seattle group started off at 60% substantiated, 40% non-substantiated, and moved to a much higher percentage, because many of the kids who were not substantiated the first time came back a second time and were. >> Baltimore-- >> [Desmond K. Runyan:] Baltimore claimed, when we first talked with them, we don't have any kids who are reported for maltreatment, and that's crept up over time, even in deciding whether failure to thrive ought to count as a report. There are some interesting issues about Baltimore-- whether prenatal exposure to HIV or failure to thrive should count, whether those are kids that should be counted as maltreated-- but a lot of them have moved into the pretty clearly maltreated group over time. >> [John Eckenrode:] I really want to thank these folks for coming and making this such a productive day. Having done this for several years, I mean, this is going to save these folks a lot of time and effort, I can guarantee you that, and they can get off to a much quicker start than if we just turned them loose in the lab with the documentation and said good luck. So-- >> [Desmond K. Runyan:] We can still say good luck. >> [John Eckenrode:] We can still say-- >> [Alan Litrownik:] We didn't discourage people, though, did we?
>> [John Eckenrode:] Well, I hope we didn't, but I'm sure you didn't discourage them. You let them understand the richness of the data-- there's so much to be discovered.