^M00:00:18 >> John: I'm not going to do long introductions and give a lot of autobiographical information; that's why we sent it to you and have it in your packets. So I'm just going to turn things over to Des. Des, all of you know in person or by reputation. At UNC he has faculty appointments in Pediatrics and Epidemiology and maybe a couple of other departments. >> Desmond Runyan: Social Medicine is actually my - >> John: Social Medicine. Are you still Chair of Social Medicine? >> Desmond Runyan: No, I stepped down as Chair, but I'm still - that's my primary appointment. >> John: And as you know, Des is one of the founding PIs of the LONGSCAN project, so it's a real pleasure to have Des and Al and other members of the team here early to work with us today and get us oriented to the LONGSCAN datasets. I'm just going to turn it over to Des. Thanks for coming. >> Des: Thank you, John. So this is great, to have LONGSCAN summer camp. Boot camp. Boot camp for you; it'll be summer camp for the PIs here. I am pleased to have my colleagues Terri and Al here, and a few folks that we've got [inaudible] working on it. Meghan Shanahan is a doctoral student, but she's got, I think, three of us from LONGSCAN on her committee, so she doesn't really have a chance. And Stephanie's [phonetic] working with us with the LONGSCAN data. So those are the other resources you can turn to, and Stephanie can say, "Oh, my God, I can't figure out how to use this dataset, either," if that's what you need.

So what I'm going to do is not go my full hour, and let Al start earlier, but just give you some background on LONGSCAN itself and a little bit of the history of where we came from and how we got here. There are probably times when you've looked at the dataset and the measures and said, why in the hell did they do that, and what were they thinking? So I'll take you through what we were thinking.

I should acknowledge a few folks: the Department of Health and Human Services and the Administration for Children, Youth, and Families. We got funded back when there was a National Center on Child Abuse and Neglect, and we were told by the director of the Children's Bureau - actually the Commissioner of ACYF - "This is not gonna be a Cadillac of a study; think Yugo," in terms of our financial resources early on. And that was when Yugos were still running. So we've outlasted the Yugo; that's good. [Inaudible] A Yugo is a $3,000 Yugoslavian car that came out in the 80s and lasted for a little while, and I think - [inaudible]. Not all of them. Certainly the Children's Bureau, which has taken over the functions of the National Center on Child Abuse and Neglect, and the National Institutes of Health have put some money into us, specifically NICHD.

That wasn't supposed to be there. Okay, it's a little bit out of order. These are just some pictures. This started off as a project with an RFA, which I'll talk about, for a three-year study, and we have now been doing this for 19 years. We keep meeting in nice places, and Al got us - we meet annually, or almost annually, in San Diego at a place called Pacific Terrace, which is close to this pier out there in the Pacific Beach area. This is the dungeon where we met the first time. We discovered the second year that it was cheaper not to rent this room but to rent the Honeymoon Suite and have the room set up in the Honeymoon Suite.
So typically what happens now is I rent the Honeymoon Suite, we set up chairs and tables in the Honeymoon Suite, and we have a very nice view out over the surf where we sit and work. My wife asked me, "You rented the Honeymoon Suite?" And I said, "Yeah, and there was a very nice young couple there that didn't mind sharing it." And then we've even branched out overseas twice. This was the International ISPCAN Conference in York, where we're all sitting around - I think that's a Guinness; we're trying to recollect that. And there's Al Litrownik looking intently at his beer. [Inaudible]

This all began in the spring of 1989. The then National Center on Child Abuse and Neglect put out a call for proposals - and actually I should have dated this; I should've gone back two years earlier, to 1987. Well, let's go back even earlier, before many of you were born. In 1974, the National Center on Child Abuse and Neglect was created by a bill called the Child Abuse Prevention and Treatment Act - the first CAPTA bill - and Walter Mondale was the sponsor. The 1974 act established the National Center on Child Abuse and Neglect, and that lived until 1996, when the Welfare Reform Act took it apart. During that time, they were funding different research projects on child abuse and neglect, and when I finished my fellowship in 1981, I wrote a letter to NIH asking to be introduced to, or to get the name of, someone I should contact who could help me work in the area of child abuse and neglect. And I got a very nice letter back from NIH saying, we don't fund research on child abuse and neglect; you'll have to talk to the National Center on Child Abuse and Neglect over in ACYF. So NIH at that time had a policy that they wouldn't fund work in child abuse and neglect.

And so we went to the Administration for Children, Youth, and Families. They were funding lots of little demonstration projects, typically one year to 18 months. They would give money to a tribal group or to a small community, and they had no evaluation planned. And in 1987, they were called up in hearings in Congress and asked: okay, these are the questions about child abuse and neglect; what are your answers? The agency was embarrassed that they didn't have answers to core questions about the epidemiology, the risk factors, the treatment, and so on, and they'd been going for 13 years. Having been challenged by Congress, they decided they were going to fund a longitudinal study to answer these questions because of the heat they got. That decision was manifest as a request for proposals in which they asked for a coordinating center and up to four satellite sites that they'd fund. The request for proposals said they wanted separate proposals for a coordinating center and for up to four satellite sites, and the RFA said they anticipated funding $60,000 a year for the coordinating center and $30,000 for each satellite site, and that was going to be total funding, including indirects for the universities. But they were giving a $60,000 planning grant. And I said, well, $60,000 to write a grant sounds pretty good. $60,000 to run a national study doesn't sound like that'll work. And $30,000 for a site, counting university overhead, even back in 1989, wouldn't even fund the salary of one social worker, let alone all the other processes.
So it was pretty naïve and demonstrates the kind of agency we were dealing with and how little they understood about the research process. We actually wrote the planning grant, but reserved the right not to do LONGSCAN if we couldn't come up with an adequate amount of money. In the proposal, we said you can't do this study for the amount of money you're talking about; it needs to be much more reasonable. So we got funded to be the coordinating center, we got this nice letter, and then we were married to the Juvenile Protective Association. There was a movie called The Gods Must Be Crazy, about an aboriginal group on whom a Coke bottle came out of the sky, and in 1990 that's actually how we felt in the context of being married to the Juvenile Protective Association. We had a mandate to look at the epidemiology, the antecedents, and the consequences of child abuse, and we were going to plan that study, and only one satellite site was funded - a private non-profit treatment agency. When you asked them, "Who do you serve?" - "We don't know." "What's your population base that comes to you?" "We don't know." "How many cases a year do you have?" "Well, it depends on the year. Sometimes we have a lot; sometimes we don't have very many." So all the questions an epidemiologist would ask were not answerable by JPA at the time, which was interesting.

So we scheduled a series of planning meetings, and we kept thinking, how are we going to do this national study with JPA, and we finally came up with some solutions. We had this big discussion about what was the right sample. Should it be kids who were at risk at birth? Should it be kids who were reported? Should it be kids who were substantiated? Should it be kids who were offered treatment? Should it be kids identified in some setting outside of social services, like a medical center? And the answer from LONGSCAN was "yes" - we have each of those samples. In an ideal world, we would've said, let's have each of those samples at each of our sites so we can compare and contrast across sites without site confounding it, but the ideal world and the funding from the Administration for Children, Youth, and Families and NCCAN are two different things. And we sat down and talked turkey about budget, and they finally said, well, we realize that $60,000 for a coordinating center isn't enough and $30,000 for a site is naïve, and they went back to their usual rule of thumb, which was grants of $125,000: the coordinating center at $125,000, including indirects, and each site at $125,000, including indirects. So that was their solution to the funding, and it meant we couldn't have each of the samples at each of the sites. We had to make compromises to deal with the money. ^M00:09:59

So we spent the year planning the grant - actually about nine months. We had JPA, and we were visiting Chicago and they were visiting us, and we had proposed a sample. We proposed two alternative samples. One: we had a sexual abuse study that we'd done in North Carolina, and we proposed following that up as one group, potentially. And we had another sample, an at-risk group followed from birth that Jonathan Kotch had, and we proposed following that up.
And then we had the treatment sample, and we couldn't figure it out, until one day we said, well, why don't we try to find other samples that fill in the other pieces and have the right answer for them. So that's how it got put together. We went looking for a social service agency that we thought could be involved in research, because I'd done some prior child abuse research and I'd been sabotaged by an agency that was going to collaborate with me on a research project: I didn't get any referrals for child sexual abuse from this agency - a major county agency in a major city - for about nine months. And I said, it seems impossible to me that we eliminated the problem of child abuse in your city by studying it, at which point they finally confessed that they were doing their own treatment study, that they were approaching families first with their own consent form, and that nobody was consenting to a second project after being approached for the first. So I learned my lesson: you had to find an agency that wanted to get involved; you couldn't just go in and do it. So we went looking for a social service agency that was interested.

The two candidates were Diane DePanfilis and the group that was working in Baltimore, and Diana English, who was in Seattle, because the Washington State Department of Social Services actually had a research unit with its own research funding. That seemed attractive, and Diane DePanfilis in Baltimore seemed attractive, but Diane warned me away; she said things were falling apart in Baltimore. Diana sounded very enthusiastic, so we signed Diana up, and that's how Seattle got to be part of the study. They were originally subcontracted with us. We also, in our proposal, reserved the right not to do the study if we couldn't negotiate a better financial deal, and that was an important piece of it.

So this all came together. In the spring of 1990, we were notified that we were funded to begin LONGSCAN. At the University of Maryland at Baltimore, Howard Dubowitz had put in a separate proposal to look at follow-up of failure-to-thrive kids, kids who had been born with HIV risk, and a normal control sample, and they were told by the funding agency that they were part of LONGSCAN: you can have this money if you join LONGSCAN. The agency actually tried to pull the same thing on John Leventhal at Yale in a cocaine study, but John and his colleagues were fighting amongst themselves about their sample and decided that they couldn't agree to do it, and the agency let them keep their money and not join LONGSCAN. So Howard came because the agency funded him, and Seattle came because we needed something in between what we had - the at-risk sample and the treatment sample from JPA. And then the final sample was San Diego: Al and his colleague John Landsverk were already funded by NIMH to do a prospective study of children in foster care. John Landsverk had been on the study section that had made the award, and he came to us afterwards and said, "We'd love to join LONGSCAN and it won't cost you a penny. We are funded by NIMH."
He didn't say it was only for two more years, but they were funded by NIMH, they had adopted a bunch of the same measures, so we had a lot in common, and they were a foster care study, so it was decided there was actually some utility to all this.

We had an at-risk sample from North Carolina: children identified in the newborn period as at-risk in newborn nurseries under a state high-risk sampling program. And because that team had actually built a cohort of almost 800 kids and we didn't have enough money for that, we developed a cheaper plan, which was to identify all the kids who had already been reported - their kids were already four years old but had been followed since birth with most of the same measures. The North Carolina team looked at their kids and said, we have 140 kids who were reported to social services in the first four years of life; we'll try to recruit them. They got 50% of them to sign on for the study. Then they went back to their original cohort and matched those kids to kids from the original at-risk sample who had not been reported for maltreatment. So that's how they got the roughly 220 kids in their study. That was the at-risk sample.

Then, going a little bit further, we had the Seattle program. Diana English and her group had done an earlier study on serious abuse, but they had not previously tried to validate their risk assessment tool, so they chose to recruit a group of folks who were at moderate risk on their risk assessment tool and approach them about participating in the study. They were identified before substantiation occurred - they tried to recruit them before the DSS investigation was done - and it turned out that about 60% were substantiated and about 40% not substantiated in their sample. So they had the reported group.

Chicago provided us with the treatment group, kids who had been put into family treatment, but they went back and found all the other agencies on the north side of Chicago that did the same kind of work, so we could get a more comprehensive assessment of those kinds of families. They also found a matching group of kids of the same age who were reported to social services but not referred for treatment to one of the family service agencies. And then finally they asked the families they recruited about other kids in the neighborhood who were not reported, so they actually had a third not reported but neighborhood-matched, a third from social services not referred for treatment, and a third referred for treatment. That was the Chicago sample.

And then we had the foster care study, which was the kids in San Diego; they'd all been in foster care in the first four years of life, and about half of them had gone home and half had not. And finally the Baltimore sample was a group of kids who really were defined by a medical center. So together they made a complete package.

The call for proposals talked about the need for a theory-based longitudinal study. Oh, the other thing I should say is that in the original RFA, they asked for a three-year study, and in our proposal we said the world doesn't need another three-year study on child abuse and neglect. There were lots of little studies that don't really answer the questions.
What the world needs is a 20-year study. People kind of chuckled at me, and the agency said, well, we can't fund a 20-year study on your proposal, but we could fund you for five years, and then you could come back at each competitive renewal, and if you competed successfully all those times, you could keep it going for 20 years. And we said, well, that sounds like what we want to do then. So we made this proposal for a 20-year study, mostly by being kind of audacious. I have a colleague in Seattle who referred to it as "Long Scam," because she thought the greatest thing that could ever happen was for me to talk somebody into giving us a 20-year study instead of a five-year study. So the study was supposed to address the causes and consequences of abuse and neglect, and it was to have implications for preventing maltreatment, preventing negative effects of maltreatment, and promoting recovery.

So we began - I've explained it. We have five distinct studies. The idea was that each site was its own study, but with common measures, common training, and common data entry; some things could be pooled across sites, some things couldn't. The measurement and data were going to be coordinated by UNC, and the ace in the hole for our application was a biostatistics group called the Collaborative Studies Coordinating Center, which had previously coordinated heart, lung, and blood studies from all over the country. So it had the experience of trying to get a whole bunch of universities to walk in the same direction and collect data the same way, and it had some sophistication about that. Their previous studies had all been clinical trials, so I think we were a trial for them and have continued to be a trial for them - Terri's nodding her head - because they haven't quite figured out the social science research stuff in quite the same way. But our measurement center was then involved with standardizing the measurements the same way the heart, lung, and blood studies had been done: common measures, common coding systems, training of the interviewers done commonly, and even a common data entry system to kind of force people to have the same kinds of responses.

The other thing that happened over this time, and I'll mention this more later, is that technology was evolving. We had paper-and-pencil interviews when we started, then direct entry by interviewers into the computer, to the point that when the kids were older, we had the kids actually enter the data themselves into the computer with keyboards. All of which, of course, was expensive and hard to do on the budget we were given by the Administration for Children, Youth, and Families for doing this study. Remember, this is the Yugo; we're not the Cadillac.

The other interesting issue I should probably take a moment to mention is that this was five principal investigators, each with their own study, who were then rolled in under a coordinating center and suddenly subservient. And that was an interesting little sociologic experience for folks.
I mean, it was easier for Diana because she was invited in as a subcontractor, but for Howard Dubowitz, who had his own funded study, to suddenly be told that he's part of LONGSCAN - so there were some discussions about who was going to use the data, who had control over it, how we were going to make it all work together. We set up a Governing Committee, on which the PIs at each of the sites and the PIs at the coordinating center have an equal vote. So there are six of us, and it takes a 75% vote to change the protocol; that's the mechanism for putting changes to the protocol into place. The PI committee all had to get together to agree on what we were going to do. And then we set up the Measurement Committee, which Al and Liz Knight were involved in coordinating, through which all the sites had input into what measures were used. ^M00:20:06

That was actually a challenge, too, because at the Chicago site, to use an example, the kids were very young when they were recruited, and North Carolina had kids who were four when they started. So there's an age gap. When we designed the eight-year-old interview, the North Carolina people were right on it because their kids were almost turning eight, but the people in Chicago were years and years away from that, and we had to make sure they paid attention to and were invested in the age-eight interview, so that when it came around for them, they didn't suddenly say, I don't like that stupid instrument; who designed that anyway? So we had to anticipate what the needs would be at the other sites. We had this Measurement Committee, and we had an analysis group - the statisticians from each of the groups got together - and we had a Publications Committee and a Dissemination Committee. And we wrote up a paper early on, which we managed to get published in Aggression and Violent Behavior, that describes the study itself.

And we meet in nice places. Originally, this was all going to be internet and telephone communication, but it turns out that when you're trying to run this kind of thing, it helps to be face-to-face at least a couple of times a year, and so we've met in Kittery, Maine, connected to the New Hampshire conference. This was in Seattle; I can't remember which conference, though. Jon Hussey and Diana English in a bar in Seattle. And this was actually a couple of years later at the Edgewater Lodge in Seattle. And Marilyn Snyder, who used to run the Chicago site, and [inaudible], who's actually a CIA operative who masquerades as our Chief Statistician. Terri's laughing. [Inaudible] has visited 60 countries as a statistician, and we keep saying nobody, nobody could use a statistician in 60 different countries.

Okay. The other little backstory I thought I'd tell is that NSCAW is the son or the daughter of LONGSCAN - I'm not sure which it should be. You know, we were struggling. In 1996, when the Welfare Reform Bill was being proposed, Ron Haskins, who was a psychologist who had been an SRCD fellow - someone here is going to SRCD.
He went to Washington as an SRCD fellow and never came home; he became a House Ways and Means staff person for a Florida congressman who headed the committee, and they were writing the Welfare Reform Act, and they decided that there was no point in having the National Center on Child Abuse and Neglect - that they were going to break it up and send all the money to the states as part of the block grant process. Now, Ron Haskins, before he had gone to Washington as an SRCD fellow, was a psychology faculty member at North Carolina, and he had been my wife's advisor when she was doing a predoctoral fellowship in policy. So I knew Ron, and I called him up on the phone. You know, I was looking at them abolishing the agency that was funding the research we had proposed for 20 years; we were about four years into it, and suddenly it was going to go away. So I called up Ron and asked him, "Ron, is it true that you're trying to eliminate the National Center on Child Abuse and Neglect?" He said, "Are you calling me to lobby?" I said, "Well, actually I'm calling you for information first. And if the answer is wrong, I'm then going to lobby you." And he started telling me about how the agency wasn't doing anything effective and the money should just go to the states, and I said to him, "Ron, what about research? Who's going to fund research on this? NIH doesn't fund research on child abuse - they've already told me that - and we've got this project going. Who's going to pick this up? The state of North Carolina is not going to pick up a study that's got California and Seattle and Baltimore in it, and I can't imagine trying to figure out how to put together five states to get the funding for this. So what are you thinking?" And he said, "You're right, Des, the research is a legitimate federal responsibility."

About two months later, he called me up on the phone and said, "I got you your money." And I said, "You got me money?" "Yeah," he said. "We just got money put in the Welfare Reform Act for your study. $36 million." And that was a very nice day. Unfortunately, it then came to light that, because of the way it had been done, it couldn't be given as a grant. It had to be put out as a contract, and the mandate in the legislation was for a nationally representative database to be collected to study the child welfare system. So it really wasn't LONGSCAN, despite that one fantasy, but it was intended to help LONGSCAN originally, so that's a nice little story. Those of you who have worked with NSCAW probably noticed some similarity in measures with LONGSCAN, and that's because RTI, the Research Triangle Institute, bid on the contract to do it and pulled Rick Barth and myself and Paul Biemer in to be the official co-PIs - so I was a co-PI despite the fact that I was not at their institution - and we set about designing it, and I gave them the measures manual for LONGSCAN. That was my initial gift to them. So the fact that there are so many common measures is because they took the LONGSCAN measures manual as a base to start their planning from. So it was our money that got taken away from us; it would've been a very different story if LONGSCAN had gotten $36 million. And I go back to reminding you that NCCAN thought they were going to get each of these satellite sites to work for $30,000 a year.
So originally we were at $125,000 max for each site and for the coordinating center for the first five years. We persuaded them that that wasn't quite enough, and their solution was not to fix the sites but to fix the coordinating center a little bit and give us a little more money, so it's $250,000 for the coordinating center, and I think it's still $125,000 for the sites - Al can correct me if you've gotten more than that. But it took a long time to drag them up to those numbers, and even that is pretty paltry. And we've been flat-funded for 15 years by the agency. So when you look at the dataset and think, why didn't they do this or why didn't they do that, just remember that that's what we're living with. For 15 years, we've been at $250,000 a year, including indirects. So we've had decreasing staff support and a smaller staff to complete the project, even as we have more and more waves of data, and that's been a challenge. We did go after supplemental funding; we've put together small packages. Yes? ^M00:26:19 [ Inaudible ] ^M00:26:37

>> We wouldn't be here today unless we'd been successful in getting a supplemental grant. We started in 1988 with only a $125,000 18-month grant as well, and we've been flat-funded for 10 years, and it's only because of [inaudible] supplemental funds for this prior fiscal year that we're able to have this meeting today. Otherwise, we wouldn't [inaudible]. This is the kind of stuff we've all had to deal with [inaudible]. By Years 4 and 5, we actually had to start cutting people's time back [inaudible].

>> Desmond Runyan: Well, and I don't mean to poor-mouth this and say we have a crappy study, because I think we've had a fantastic team and people who have really poured their hearts and souls into trying to figure out how to make this valuable. But the other piece of it is that we haven't had the money to do all the analyses, and one of the reasons we jumped at the chance to come up and do this and encourage the use of the data from the data archive is that we have just scratched the surface on the data and the papers that can be written from it. So you guys are the army that is going to help make sure that all this investment actually gets used, and used well. We've succeeded in getting some supplemental funds through the Injury Center grant at UNC, and for five years we've had a neglect grant from NIH, from NICHD, that helps supplement the staff and get some extra data collection. The Age 14 data were actually paid for separately out of that. Otherwise, the original low-cost plan was data collection at four-year intervals - though when we were first flush in North Carolina, we hadn't gotten in over our heads yet and folks really wanted to go ahead, so instead of doing 4, 8, 12, 16, we did 4, 6, 8; we were able to do the Age 6 interview, but the plan was then to go out every four years. The Age 14 interview, though, was done with NIH money.

So, where we are. Our current status is that the youth are now 14 to 22 years of age. The data are summarized and updated four times a year, and we send the data to all of our sites twice a year automatically. They get the distributions, and Terri will tell you more about the data and what it looks like.
People want the updates, but it turns out there's so much data, and it takes people so long to get into it and start working on it, that having a dataset that's only six months old is not a bad problem for them. Sometimes people update, but usually they have more than enough to work with. The data are archived here, and we've archived Ages 4, 6, 8, and 12. The plan was always that each wave would go in two years after we finished the data collection. We actually rushed the Age 12 a little bit - I think we're about four months shy of two years past the last 12-year-old interview. We have annual contact interviews that we did across all the sites by phone with the families up until Age 11, and then after the Age 12 interview - again, because of resources and people worrying about how much time it was taking - the annual contact interviews dwindled off. Some sites did more of them and some didn't. But we have had a pretty rigorous effort to go through and code child protective services records all the way through. ^M00:30:00

I probably left off some names, but I just wanted to make sure some of the names are here. You see San Diego has Al Litrownik and Rae Newton. Chicago has Rich Thompson, and I should mention Trish. Howard Dubowitz, Maureen Black [phonetic], and Ray Starr [phonetic] in Baltimore. Diana English and Chris Graham, and his alter ego on phone calls - we have phone calls every two weeks among the PIs. John Kotch and Jon Hussey, and then you can see Terri is mentioned here. Jamie is one of our other analysts. Debbie Jones is a psychology faculty member. Liz Knight has been with the study from the start - she used to be the Coordinator when we first got started, then moved up to co-investigator status, and has really been focused on the human subjects issues. Mark Everson, a psychologist who helped us with the original measures. And then there are more - I just listed some major names. Oh, I forgot Shrikant, actually. Hmm? Lynn - yeah, Lynn Martin is our Coordinator; I just listed investigators, so I didn't put Lynn on there. And Shrikant Bangdiwala is our biostatistician, who you saw earlier. And he'll be here.

So, the baseline sample. The East has 282 children who were at risk; they came from possible failure to thrive, drug exposure, or a community clinic. The South - it's relatively transparent, but we decided to try to protect our subjects a little bit, so we refer to a site's general geographic area as opposed to saying "the Baltimore sample." It's relatively transparent: Al is not in Northern California, so when we talk about a California sample and you look and see he's at San Diego State, it's probably not likely that he's doing his studying up in Fullerton, but we're trying to put a little more distance between our subjects and where they are. We have the Northwest sample, which is the kids who were at moderate risk at four years of age. The Midwest sample, in which two-thirds were maltreated: a third of them were in family treatment, a third were matched from DSS records, and a third came from the same neighborhoods. And the Southwest began with 330 kids who were in foster care before age 4. And you see the total is 1,354. But we get there with kind of funny math, and this is a table that just shows our math. We started the study at age 4, and you can see we have only 1,250 kids in that sample.
We lost some kids, and our investigators wanted to replace some of them in a couple of cities, so we made a rule: a child had to have either a four- or a six-year-old interview to be in the study. There were actually some kids recruited in some of the cities before they turned 4 for whom we got neither a four- nor a six-year-old interview because we lost them, so they dropped out of the sample and we never counted them again. So we count 1,354. And you can see where we stand: we've completed the interviews on all the kids - all the 12-year-olds are interviewed - and we have 976 kids out of the 1,354 total at that wave. At Ages 14, 16, and 18, we're still in the process of data collection, so we're hoping to get those numbers up. In Chicago, there are still some 14-year-olds being interviewed. The 16s have a couple of sites still going, and at 18 I think there's a handful of kids left, but mostly it's the Chicago kids that aren't collected yet. And these are the numbers of actual interviews collected, baseline to Age 18, that we have in the dataset so far, so you can see the numbers we're dealing with: 192 in the Northwest, 236 in the Southwest, 177 in the South, 181 in the Midwest, and 190 in the East.

In terms of the baseline sample: slightly more females than males, 26% Caucasian, 53% African American, 20% other race. And as you see, the sample doesn't diverge dramatically on any of those characteristics for the kids we still have at 16. So the numbers have changed, but we haven't dramatically altered the characteristics of the sample. The other interesting thing: at Age 4, our median age was 4.6. The North Carolina kids were a little bit older - by the time the funding got started and we got into the field, the North Carolina kids, who were the oldest group, had aged a little. We then played catch-up and got a little closer to the birthdays on each wave after that. In terms of caregiver demographics, things haven't changed dramatically, except that the married rate went up from 33% to 39% as the kids got older - the families we were able to keep in the sample were more likely to be married - and the single rate dropped from 44% to 30% for the 16-year-olds. A little more separation, a little more divorce, even some widows. The median income went up, mostly probably as a result of society's income going up. And the education didn't change.

Our Measurement Committee was guided in its approach by socio-ecological theory. We used the Bronfenbrenner model like everyone else, but given that Jonathan Kotch and Dorothy Browne, who helped originally found this, had written about that model and child abuse, we didn't have much choice. That was our model. We started off looking at multiple domains: the children and youth, the characteristics and functioning of the caregiver, the family microsystem, and the macrosystem are all pieces of the data collection process. We tried to use multiple sources and methods, so in most cases we have a Teacher Report Form, a Parent Report Form, and a Child Self-Report for most of the kids at different times. We have ratings questionnaires that we get from caregivers and teachers. We look at performance occasionally on the kids themselves. And then we use official records.
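[As an aside for users of the archived data: here is a minimal sketch, in Python, of the cohort-counting rule Des describes above, where a child is counted in the 1,354 only if an Age 4 or an Age 6 interview was completed. The column names are hypothetical stand-ins, not actual LONGSCAN variable names.]

```python
# Minimal sketch of the counting rule described above: a child is in the
# cohort only if an Age 4 or an Age 6 interview was completed.
# Column names are hypothetical, not actual LONGSCAN variables.
import pandas as pd

# Toy stand-in for a tracking file: one row per recruited child.
recruits = pd.DataFrame({
    "child_id":       [101, 102, 103, 104],
    "age4_interview": [True, False, False, True],
    "age6_interview": [True, True, False, False],
})

# Keep a child if either baseline interview was completed.
in_cohort = recruits["age4_interview"] | recruits["age6_interview"]
cohort = recruits[in_cohort]

# Child 103, recruited but never interviewed at 4 or 6, drops out
# and is never counted again.
print(f"Recruited: {len(recruits)}, counted in cohort: {len(cohort)}")
```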
We originally started with an in-person interview; then we went to computers that the mothers completed while we interviewed the kids; and then we went to the kids and mothers both doing self-administered interviews on the computer. At Age 12, the kids put earphones on and the questions are read to them in audio - you can't see them on the screen, for confidentiality - so the kids can be asked very sensitive questions. Also, the advantage of the computer technique of interviewing is that it has clearly eliminated the need for us to worry about how to translate paper forms into data reliably, and we don't have to do the double data entry and all the kinds of things we started off with early on. It has its own ups and downs, as Terri can tell you, in terms of the idiosyncrasies of bringing the computer data in and knowing how to use it.

LONGSCAN publications: we started a little slowly and really picked up, and we have a few things in press, but we're actually trying to pick up the pace. The problem is that the funding has gone down and the number of staff has gone down, so we're trying to do more of having postdocs, talking graduate students into working with the data, and coming to you guys, because we've basically run out of people to work on the papers. We have the data, but we're so busy keeping the data coming in and trying to have a high-quality dataset that we really need help from others.

The other thing I was going to talk briefly about is the measurement of maltreatment, and then Al's going to go on and talk about the whole thing. We originally started off thinking we were just going to use DSS records, and then do a review when the kids turned 18, when we were going to ask the kids themselves. That was our solution to the issues of confidentiality and how to do it. In 1994, we were called out at a national meeting for being chicken. They said, come on, you guys are the league leaders, you're getting all the money - "all the money," in quotes - from NCCAN; you guys need to figure out how to do this better. And we sat and agonized over the issues of confidentiality and how to collect the data on maltreatment. We actually got money from NICHD to run a conference on how to ask children about maltreatment and at what ages you could do it. We had 30 people come to a conference we held in Chapel Hill, and we probed them all with questions like: how would you do it; is it ethical to ask children directly to report on their parents and what they've done; what kinds of constraints do you want to put on [inaudible] confidentiality; what do the kids think is happening; how do we protect the kids; how do we protect the parents; how do we protect the investigators? A whole series of background papers was written, which took about another five years for us to massage into a special issue of the Journal of Interpersonal Violence in June 2000 with reports on that ethics conference. We got everybody to update their papers, so it's a little more current than 1995. But we did indeed go ahead and develop a plan to ask children directly, and we decided that Age 12 was okay.
The consensus of the conference was that before about Age 11, normal children probably couldn't give informed consent to be asked about maltreatment - couldn't understand that if they said a parent was maltreating them, they might put themselves in foster care or send a parent to jail - and that informed consent was just not possible before about Age 11. [Inaudible] we had come up with kids at Age 12, so ours by definition was 12. We also talked about reporting, and David Finkelhor at that meeting was very eloquent. He said there are ethical studies that ask children directly and don't report, and there were other clinicians at that conference who said it's unethical to do a study where you don't report kids who are in harm's way. So we hashed it out, and we had our ethicist come in and we talked about it: the purpose of research is not to be police surveillance and try to capture people who aren't otherwise coming to attention. We ended up coming around to the idea that the notion of beneficence, of kind of protecting the kids, was probably misplaced, and that it should be respect for persons, respect for autonomy, respect for the kids' decision-making about what they wanted. ^M00:40:18

And so we developed a model - we didn't get to be completely proactive on it in every case, but in North Carolina, for instance, the [inaudible] interview at the end says, "You told us that some of these things, serious things, are happening to you; would you like to tell the interviewer so that the interviewer can get you help?" And the kid can say yes or no. If the kid says no, there's a second bite of the apple that they get, which is, "Would you like someone else to know and get help for you?" If they say no a second time, the data are encrypted and are not reported to social services. We have a few exceptions - in North Carolina, suicidality and imminent danger or current sexual abuse get reported; we break confidentiality for those - but all other forms of maltreatment don't get reported. The other states do it a little bit differently. Seattle felt they didn't have the flexibility to do that, because they were actually part of social services doing the data collection, so their informed consent process doesn't let kids opt out of reporting.

Just for experience: in North Carolina, every form of maltreatment on our form was indicated by the kids, and no child wanted to have it shared; no child wanted to get help for it. So that was our solution to that. And the families, interestingly - did we lose a lot of families with that? We looked afterwards to see whether, when we gave the consent process at Age 12 that said imminent danger and current sexual abuse were going to be reported, we got a lot of drop-outs, and actually the kids who said "yes" stayed in the study at a higher rate than the rest of the kids. So there was no differential dropout. I think part of what was going on there is that these were families that have been there, done that, know what social services is, have the tee-shirt; they're not worried about social services. And it reflects what I think happened in Seattle originally: when they first recruited families, they had a lot of families that chose not to come, and it's often middle-class families who fear the worst and fear what their kids are going to say.
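[To make the two-step disclosure protocol concrete, here is a schematic sketch of the North Carolina end-of-interview logic in Python. The function, the category names, and the return strings are illustrative assumptions for exposition; the talk does not show the actual interview software.]

```python
# Schematic of the end-of-interview logic described above. Names and
# strings are illustrative, not the actual interview implementation.

# The narrow exceptions for which North Carolina breaks confidentiality.
ALWAYS_REPORT = {"suicidality", "imminent_danger", "current_sexual_abuse"}

def handle_disclosure(disclosed, tell_interviewer, tell_someone_else):
    """disclosed: set of maltreatment types the child endorsed.
    tell_interviewer: answer to "would you like to tell the interviewer
        so that the interviewer can get you help?"
    tell_someone_else: answer to the second bite of the apple,
        "would you like someone else to know and get help for you?"
    """
    mandatory = disclosed & ALWAYS_REPORT
    if mandatory:  # confidentiality is broken only for these
        return "report: " + ", ".join(sorted(mandatory))
    if tell_interviewer or tell_someone_else:
        return "interviewer arranges help with the child's consent"
    # Two "no" answers: encrypt the data, report nothing.
    return "encrypt responses; do not report to social services"

print(handle_disclosure({"physical_abuse"}, False, False))
# -> encrypt responses; do not report to social services
```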
And the families who had been involved with social services didn't find this a particularly scary prospect, which I thought was an interesting revelation. So the question was, did we have trouble getting this through the IRB, and the answer is that it was a spirited discussion at some sites more than others. In North Carolina, it took the PI at the North Carolina site going to the IRB and making his case for why we needed to respect the opinions and decisions of the kids versus overruling them to do [inaudible]. And he did a good job and was convincing to the IRB. >> It really emphasizes the child's developmental status and their autonomy. >> Desmond Runyan: Yes. >> Okay. >> Desmond Runyan: Which was actually the theme of the conference we did in 1995. One of the things our ethicist emphasized was that respect for persons as a mandate of research means listening to what people say, versus making decisions on their behalf, either good or bad. And so we really came down on that autonomy and respect-for-persons side.

So just to give you a sense - and I think others will have more detail on this - 32% of the kids, birth through age 14, had no records; 16% had one maltreatment record; 12% had two. And as you can see, 12% of the sample had 8 to 22 records of maltreatment. So it varies, but more than half the kids had only two or fewer reports of maltreatment. The age of first referral: most of the referrals were in the first couple of years of life, and it really tapered down after that. The median age of first referral was 1.2 years, the mean was 2.2, and the total number of kids with a record, out of the 1,354, was 916.

Allegations: similarly, from birth to age 14, you can see allegations were up over 3,000 in the first four years of life, and it drops down pretty dramatically - these kids didn't have lots and lots of reports after that. And you can see sexual abuse is actually a relatively small number all the way along. Physical abuse started out much greater in the first four years of life and dropped down, and neglect was the most common of these reports and also dropped down as kids got older and weren't reported as often. So these were allegations. We'll talk more about this, but we use allegations and not substantiations in most of our papers. The reason is that we looked at the mental health consequences of one versus the other, in the special issue we did in Child Abuse & Neglect, and concluded that the kids who had allegations looked more like the kids who were substantiated, and quite different from the kids who were never reported, in terms of long-term consequences. So in most of our papers we focus on allegations, not substantiations.

These are the substantiation numbers; those were the allegation numbers. Again, a similar pattern in terms of drop-off: 49% had one or more substantiations, 14% had one or more for physical abuse, 6% one or more for sexual abuse, 41% for neglect, and 17% for emotional abuse. The other thing I should add about emotional abuse is that we coded all this DSS data using a modification of the Maltreatment Classification System. Social services calls practically nothing emotional abuse, but using our coding system there was a much bigger step-up in emotional abuse cases. So we think we have reliable cross-site assessments of that.
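[Since most LONGSCAN papers use allegations rather than substantiations, a user of the coded CPS records will typically want both flags per child. A minimal sketch under that assumption follows; the column names (`child_id`, `type`, `substantiated`) are hypothetical, not the archive's actual variables.]

```python
# Sketch: derive per-child "ever alleged" and "ever substantiated" flags
# from a long file of coded CPS records. Column names are hypothetical.
import pandas as pd

# One row per allegation; child 2 has no CPS record at all.
records = pd.DataFrame({
    "child_id":      [1, 1, 3, 3, 3],
    "type":          ["neglect", "physical", "neglect", "neglect", "sexual"],
    "substantiated": [False, True, False, True, False],
})
cohort = pd.DataFrame({"child_id": [1, 2, 3]}).set_index("child_id")

# Any record at all counts as an allegation.
cohort["ever_alleged"] = cohort.index.isin(records["child_id"])
# Any substantiated record sets the substantiation flag.
ever_sub = records.groupby("child_id")["substantiated"].any()
cohort["ever_substantiated"] = ever_sub.reindex(cohort.index, fill_value=False)

print(cohort)
```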
And chronicity - Diana English gets credit for thinking this through. Situational is reports in only one developmental stage; [inaudible] more than one developmental period but not consecutive periods, like you might be maltreated when you were four and again when you were 14; limited continuous is two consecutive age groups, of the kind 0-4, 4-6, 6-8; and extended is referrals over more than two periods, with extended continuous where it just gets repeated over and over again. And you can see this one here, at about 16%: the kids who've been reported who have been maltreated multiple times.

Then we asked the kids directly what had happened to them at Age 12, as I told you. The red line is birth to Age 12, which is when we did that interview: a little over 20% of the kids reported physical abuse, about 15% sexual abuse, and 40% reported psychological maltreatment - dramatically different from the 16% reported by DSS for emotional maltreatment. And you can see that in terms of timing, before elementary school and since elementary school were not that different for physical abuse - around 10-15%. Sexual abuse looked like it was much higher, but it didn't fit the pattern I would've expected, with the last year a little higher. And then psychological maltreatment - again, since elementary school, it appears to be ongoing.

And then we compared with substantiations. Yellow is agreement that abuse occurred; red is agreement that there's no abuse. So yellow here and red here - most of them for physical abuse, most for psychological abuse, and most for sexual abuse. There's a small group of kids on whom social services and the kid agreed: 4.3% of the kids said they'd been physically abused and social services said so too. For psychological abuse, 7.7% had both DSS records of psychological abuse and the kid saying they'd been psychologically abused. For sexual abuse, it was 1.7% of the sample where both the kid and social services agreed. So the number of kids who said they'd been sexually abused far outnumbered the kids social services knew about, and there were a small number of kids who social services said had been sexually abused who said, nope, didn't happen.

There is a website at the Injury Prevention Research Center at UNC - iprc.unc.edu, slash LONGSCAN, is the address. There's a public site that has background information, helpful links, contact information, and access to a variety of publications, measures, and manuals. There's also an internal website that we use for communication, draft manuscripts, and the like. But that can be a place to go to get access. And that completes what I was gonna say. Do you want to take a break now and get some coffee? Is this a good time, and then we'll let Al go? ^E00:49:11
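[For data users, a minimal sketch in Python of the chronicity categories described above, assuming a child's reports have already been bucketed into ordered developmental periods. The cut-offs and the "extended discontinuous" label are one reasonable reading of the talk, not the published coding scheme.]

```python
# Sketch of the chronicity coding described above: classify a child by
# how many developmental periods contain a report and whether those
# periods are consecutive. Labels and cut-offs are a reading of the
# talk, not the published scheme.

def classify_chronicity(report_periods):
    """report_periods: indices of developmental periods (0, 1, 2, ...)
    in which the child had at least one maltreatment report."""
    periods = sorted(set(report_periods))
    if not periods:
        return "no reports"
    if len(periods) == 1:
        return "situational"  # a single developmental period
    # Consecutive means no gaps between the first and last period.
    consecutive = periods[-1] - periods[0] + 1 == len(periods)
    if len(periods) == 2:
        return "limited continuous" if consecutive else "limited discontinuous"
    return "extended continuous" if consecutive else "extended discontinuous"

print(classify_chronicity({0, 4}))     # maltreated at 4 and again at 14
print(classify_chronicity({0, 1, 2}))  # repeated over consecutive periods
```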