[Voiceover] National Data Archive on Child Abuse and Neglect. [Erin McCauley] All right everyone, it is noon, so I just want to say thank you again to everyone for being here. We have a really special presentation lined up. This is the National Data Archive on Child Abuse and Neglect summer training series. It's an annual series, now in its third year, and the idea behind the series is just to offer more support to our data users in a somewhat less formal setting. This year we are focusing on administrative data and our new historical data acquisitions. As I said, we're the National Data Archive on Child Abuse and Neglect, also called the Archive or NDACAN. We've been housed in the Bronfenbrenner Center for Translational Research at Cornell University for quite some time now, although just this month we also became affiliated with Duke University, where some of our staff are now working along with our co-PI. So we are at both Cornell and Duke now. This series is focused on new horizons for child welfare data. We're continuing to push the use of our administrative data sets as well as showing off our new historical data acquisitions. We have a contract through the Children's Bureau and the Administration for Children and Families to archive data on child abuse and neglect. And this session, as I said, is on administrative and new data acquisitions. Last year we did the whole administrative data cluster, with a big emphasis on how data is collected, how it gets to us, and then how we make it usable. And our first series was on the NYTD data, which is one of our most recent data sets before our historical data acquisitions. So again, if you are interested in those, I recommend checking out our website.
Here's our overview of the summer. If you have been with us so far, we had our introduction to NDACAN: what services are available, the data we have that are highlighted in this series, and the other data that we have available. Then Alex, who is going to be doing today's session, talked about our new historical data acquisitions. This week he is going to show us what we can really do with this data. In the following week we're going to have a session on the rest of the administrative data cluster, which is three different data sets following CPS reports through foster care experiences and then, for youth who age out without finding permanent placement, their experiences in the transition to adulthood. Next we're going to have a session on linking the administrative data. If you have been with us in previous summers, you know that basically every summer we have a session about this; however, we emphasize different kinds of software and different code each time. So first we'll go over the theory of linking these administrative data sets and what you need to do before you can link, but then we'll also be giving code using SPSS. And last we'll have a research example using some linked administrative data. This is going to be done by Frank Edwards. In previous summers he has led our data management series, and he'll be talking about the data management he did, as well as giving a conference-style research presentation, again showing the utility of the data. So now I'm going to pass it over to our presenter, who you will know from last week. [Alex Roehrkasse] Thanks Erin. My name is Alex Roehrkasse. I'm a postdoctoral associate at the Archive. I joined the staff back in August. I'm trained as a sociologist and I do a lot of historical work.
If you were with us last week, you'll know that I talked a bit about two or three different new historical acquisitions that the archive is in the process of bringing online. I mentioned that we are going to be incorporating some data on abuse and neglect in the coming year, but mostly I detailed two new historical data sets: the Children's Bureau Statistical Series and the Voluntary Cooperative Information System, both of which offer state-level information on children in substitute care. And I showed that, with a little bit of careful cleaning, we can harmonize those data sets with the AFCARS, the flagship administrative data set currently available on the archive for measuring children in foster care. We can combine those three sources of data to create long-term historical time series of children in substitute care. I showed some descriptive results last week. What I'm going to do today is basically illustrate some of the analytic research you can do using some of these new historical time series. The title of my talk today is "Correlates of Foster Care Caseloads in the United States, 1982-2018." The research example today is going to be fairly basic. It has many limitations. The goal is just to illustrate the kinds of things you might imagine using the data to do, and the kinds of choices and challenges you might encounter as you're doing historical research with some of these new data on child welfare that are available through the archive. So the main question we're going to be interested in today is: what kinds of macro-level demographic, social, economic, and political factors help explain variation and change in levels of children in substitute care? What do I mean by explain? Well, we can't really do an experiment in this research setting.
It would be impossible, to say nothing of the ethics. So instead we're going to try to develop a research design that leverages the fact that we have quite a bit of data from a number of different observations, namely states, over quite a long time, specifically 37 years in the case I'm talking about today, to create a sort of quasi-experimental setting where, under certain assumptions, we can start to get a better sense of the causal relationship between certain macro-level factors and the outcome we're interested in, the rate of children in substitute care. And of course, we're hoping that from this exercise we might learn something new about how we should adjust our policies to achieve child welfare goals. That includes both policies that are specifically targeted at child welfare outcomes and other policies that aren't specifically about child welfare but which have important if indirect effects on child welfare outcomes. A special focus of the research exercise today is that we're going to be interested not only in the ways that certain inputs or explanatory factors change over time, but in how the relationship between those explanatory factors and the outcome of interest might also change over time. So of course the rate of children in substitute care is going to go up and down, and, say, rates of women incarcerated will also go up and down over time. But in this sort of historical research exercise we are also going to be interested in whether the relationship between those two factors strengthens or weakens over time. And I think that's an important question to ask for two reasons. One, it's a hallmark of thoroughly historical research, where we're not making assumptions that things work the same way over long periods of time. But it also has important policy implications, because we may be relying on research that focuses on prior historical periods.
If the relationships between variables are changing over time, we might be making policy prescriptions based on relationships that no longer hold. And so allowing for more historical flexibility in our analysis helps us understand how often we need to be updating our research on child welfare. Okay, what do we know from prior research? There hasn't been a lot of research on this topic using this sort of research design, but the research we do have is quite good, and from it we know a few things. We know that there is a fairly strong relationship between state-level welfare generosity and rates of children in substitute care. The theory here is that when states provide more social and financial assistance to families, families are less likely to face disruption and children are less likely to enter substitute care. There's also evidence of a positive relationship between female incarceration and children in substitute care. Again the theory here is pretty straightforward: when we put mothers in prison, we're more likely to have to place children in foster care. Research has also shown a relationship between violent crime rates and children in substitute care. This is a pretty strong association, but the conceptual relationship here is a little less clear. Swann and Sylvester have a very good and influential piece that shows that during the 80s and 90s, violent crime increases explained a fairly large proportion of increases in children in substitute care. But Swann and Sylvester were interested in violent crime really only as a backdoor way of getting at drug use. This was during the crack cocaine epidemic, and they thought that this epidemic would have a strong effect on children entering substitute care. Unfortunately they didn't have any data on drug use that was systematic across states and across time.
And so instead they relied on other research showing a strong relationship between drug use and violent crime arrests, and then made the argument that when they observed a positive relationship between violent crime arrests and foster care caseloads, what they were really observing was a relationship between crack cocaine use and children in substitute care. Now, if we're interested in the contemporary period, we can use much better data that captures drug use more directly in all sorts of ways. But if we are interested in doing long-term historical research, we're still never going to get that data for these prior historical periods. And so in this research exercise we're going to use some of these same measures, but we're going to be very cautious about how we interpret them, and we'll circle back to this issue later in the presentation. Okay, so what are the data and methods we're going to use to do an analysis like this? First I'll talk about the research design. The structure of our data is time-series cross-sectional. What does that mean? Well, the cross-sectional component of our data indicates that for any given period of time we have multiple geographic units, namely states. So in any given year we're going to have either 50 states plus DC, so 51 observations, or slightly fewer than that; there's some missing data. The time-series component says that for any geographic unit we have multiple measures across time. For most states we're going to have 37 observations spanning 1982 to 2018. So our unit of analysis is going to be the state-year, and our sample size is going to be something like 1,600 state-years. Our modeling approach is going to be a fixed-effects weighted-least-squares log-log model. Okay, I'll walk through this equation and then I'll say what all of the words mean. In the equation here we have three subscripts: i indexes states, j indexes census regions or divisions.
The Census Bureau divides the US states into four different geographic groupings, the Northeast, the South, the Midwest, and the West, and furthermore into nine divisions. So depending on the specific model in question, j is going to index either one of four regions or one of nine divisions. And then t is going to index time, specifically each year. Y here, our outcome, is going to be the number of children in substitute care at the end of the year per 1,000 children in that state. And so I have an observation of that outcome in each state-year. But then you see we're going to take the log of that outcome, and I'll explain why in a minute. On the right-hand side of the equation we have two matrices of explanatory variables, X and Z, so each of those is a matrix of covariates. X represents the continuous covariates and Z represents the dichotomous covariates. And then we have two vectors of parameters, beta X and beta Z, which capture the relationship between those covariates and the outcome. We're also going to take the log of the continuous explanatory variables. And then we have two other parameters, gamma i and theta jt. Gamma i is going to be an intercept that corresponds to each state, so each state is going to get its own intercept. Theta jt is going to be a time intercept, so each year is going to get its own intercept. But we're also going to allow those year intercepts to vary by census region or census division. So when we say that it's a log-log model, it just means that we're taking the log of the outcome and the log of the explanatory variables.
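Since the slide with the equation itself isn't reproduced in this transcript, here is a reconstruction from the verbal description above; the exact notation is my assumption, not copied from the slide:

```latex
\log y_{ijt} = \beta_X' \log X_{ijt} + \beta_Z' Z_{ijt} + \gamma_i + \theta_{jt} + \varepsilon_{ijt}
```

Here y_ijt is children in substitute care per 1,000 children in state i (region or division j) in year t, X_ijt are the continuous covariates (entered in logs), Z_ijt the dichotomous covariates, gamma_i the state intercepts, and theta_jt the region-year or division-year intercepts.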
And we do that for two reasons, really. First of all, it helps us satisfy some distributional assumptions of the linear regression model. But it also helps us compare the magnitudes of different effects, because instead of measuring unit changes, we're now capturing elasticities, the effect of a proportional change in the explanatory variable on a proportional change in the outcome variable. That allows us to compare the magnitudes of effects across different inputs, but also across time. And then by including these separate intercepts for states, for years, and for region-years or division-years, we're essentially controlling for a lot of unobserved factors that may influence the outcome, factors that are either stable across time or vary across time in systematic ways. This is helpful because it helps us to identify the true impact of our observed explanatory variables, and our model will be robust to certain violations of assumptions that are essential to the random effects model. Lastly, we're going to cluster our standard errors at the state level. Clustering of standard errors is a somewhat technical issue, but it's quite easy to implement in most standard statistical software packages, and it's really important whenever you are dealing with data that has multiple observations of the same unit over time. In our case, we're observing state rates many times over multiple years. Your residuals can be serially autocorrelated in these settings, and so it's really important to cluster your standard errors, essentially so that you don't underestimate them. We won't go deeper into this here, but clustering your standard errors in this kind of research setting is quite important. Okay, and then we're going to modify our model in one other way that will allow us to capture the ways that relationships might change over time.
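To make the clustering point concrete, here is a minimal numpy sketch, not the presenter's actual code: it builds a synthetic 51-state by 37-year panel with state-level dependence, then compares the cluster-robust "sandwich" variance estimator against naive i.i.d. standard errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel loosely mimicking the design: 51 "states" x 37 "years".
n_states, n_years = 51, 37
state = np.repeat(np.arange(n_states), n_years)

# Give both the regressor and the error a state-level component, so
# observations within a state are correlated across years.
x = rng.normal(size=n_states)[state] + rng.normal(size=n_states * n_years)
u = rng.normal(size=n_states)[state] + rng.normal(size=n_states * n_years)
y = 0.5 * x + u  # true slope is 0.5

X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y          # OLS coefficients
resid = y - X @ beta

# Cluster-robust "sandwich" variance: sum score vectors within each state.
meat = np.zeros((2, 2))
for g in range(n_states):
    sg = X[state == g].T @ resid[state == g]
    meat += np.outer(sg, sg)
se_cluster = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

# Naive i.i.d. standard errors, which ignore the within-state correlation.
sigma2 = resid @ resid / (len(y) - 2)
se_naive = np.sqrt(np.diag(sigma2 * XtX_inv))

print(f"slope SE, clustered: {se_cluster[1]:.4f}  naive: {se_naive[1]:.4f}")
```

On data like these the naive standard errors are too small because they ignore the serial correlation within states; the clustered ones account for it, which is exactly the underestimation problem described above.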
So in this time-dependent effects model, we're taking a subset of covariates out of X and calling them W ijt, and for those covariates we'll allow the parameter beta W to vary by time. This means that for each variable in W we're going to get T parameters. So we're going to have quite a bit of output. It's not going to be very helpful to display these in a table, which is going to make data visualization quite important, but this will allow us to see if the effects of variables are changing over time: not just the levels of the inputs, but the relationship between the inputs and outputs. Okay, so just to recap quickly, we're going to use two different data sources to measure the outcome of interest, children in substitute care at the end of the year. For the years 1982 to 1995 we'll use the Voluntary Cooperative Information System. This was a voluntary system, and so in some years we get data for every state and in other years we don't. So we're going to have an unbalanced panel. From 1995 to 2018 we can rely on the AFCARS. The AFCARS also has missing data in the late 1990s but gives us complete data from 2000 on. And you'll see the two data sources overlap in 1995. Wherever there is a discrepancy between the Voluntary Cooperative Information System and the AFCARS, I'll just average the two values. And then into this model we're going to put a bunch of different explanatory variables. First I should say that to create the outcome of interest we need to calculate a rate. The VCIS and the AFCARS will give us counts of children in care in each state-year, but counts aren't that helpful, because the child populations of states vary and change quite a bit. So we need to construct an underlying rate of children in substitute care, and for that we need a measure of the underlying child population in each state-year. We'll get that measure from the Census Bureau's intercensal estimates.
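The outcome construction just described, averaging counts where the two sources overlap and dividing by the intercensal child population, can be sketched as follows. All the numbers here are invented for illustration; they are not real VCIS or AFCARS values.

```python
# Hypothetical state-year records: foster care counts from VCIS and AFCARS,
# and child population from Census intercensal estimates (all made up).
records = [
    {"state": "A", "year": 1995, "vcis": 10200, "afcars": 9800, "child_pop": 1_500_000},
    {"state": "B", "year": 1995, "vcis": None,  "afcars": 4100, "child_pop": 800_000},
]

for r in records:
    counts = [c for c in (r["vcis"], r["afcars"]) if c is not None]
    # Where the two sources overlap (1995) and disagree, average them.
    count = sum(counts) / len(counts)
    # Outcome: children in substitute care per 1,000 children.
    r["rate_per_1000"] = 1000 * count / r["child_pop"]

print(records[0]["rate_per_1000"])  # (10200 + 9800) / 2, per 1.5M children
```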
The results I'm going to talk about today are for the full child population. I should say the VCIS and AFCARS include enough racial information to do these kinds of analyses separately by ethnoracial group. For simplicity we won't do that today, but we will include measures that capture variation and change in the ethnoracial composition of the child population. So we'll have one measure capturing the proportion of the child population that is Black or African American and non-Hispanic, and then another measure capturing the proportion of the child population that is Hispanic of any race. We have another explanatory variable that captures violent crime; this is going to be measured as violent crime arrests per 1,000 population, and those data are going to come from the UCR. We're going to have two separate measures of imprisonment by sex, so we'll have imprisonment rates for men and women separately; that's going to be the per capita number of people imprisoned at any given point in time in each state, and those data come from the NPS. The University of Kentucky puts out a really helpful omnibus data product that includes long time-series measurements of a bunch of different welfare policy measures for all states. From that data set, as a measure of welfare generosity, we're going to use the maximum benefit for a family of three under AFDC, and we'll also measure the number of people per capita receiving AFDC benefits in each state in each year. We'll include measures of the unemployment rate, the minimum wage (whether that's the state or federal minimum, whichever is higher), and the population poverty rate. Lastly, we'll include dichotomous variables that capture the partisan control of state governments.
So we think that AFDC is a good way of capturing welfare generosity, but there may be other policies that affect children's likelihood of entering substitute care, and we think that those policies, even though we can't measure them directly, might be correlated with partisan control of state governments. So we're going to measure that, like I said, using two dummy variables: one indicating whether the state government is unified Democratic, meaning the governor's mansion and both houses of the legislature are controlled by Democrats, and a separate dummy capturing whether the state government is unified Republican. I talked a little bit last week about the importance of missing data in historical research. Almost all historical data sets contain missing data, and it is almost never an acceptable strategy to simply drop observations with missing data. Of course there are a number of different strategies for handling missing data, and I won't review them all here. We could use full information maximum likelihood. We could use Bayesian methods for dealing with our missing data. In this analysis I'm going to use what's probably the most common approach, which is multiple imputation. For our multiple imputation model we're going to assume that the data are missing at random, so not completely at random. That is to say, we can use information from the nonmissing observations to make valid inferences about the values of missing observations. And so we'll develop a model to impute those values. We'll do it using chained equations, and we'll do it 10 times. The result is that we'll have 10 versions of our data set, each slightly different, which expresses the uncertainty from our imputation model. But each has complete data, so we can run our data model on these imputed data sets, and uncertainty resulting from the imputations will be propagated through the data model and expressed in our results. Okay, what are those results?
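After fitting the model on each of the 10 imputed data sets, results are typically combined with Rubin's rules, which is one standard way the imputation uncertainty gets propagated into the final estimates. A small sketch with invented coefficient estimates shows how the between-imputation spread feeds into the pooled standard error:

```python
import math

# Hypothetical elasticity estimates and their variances from m = 10
# analyses, one per imputed data set (numbers invented for illustration).
estimates = [0.48, 0.52, 0.50, 0.47, 0.53, 0.49, 0.51, 0.50, 0.46, 0.54]
variances = [0.010, 0.011, 0.009, 0.010, 0.012, 0.010, 0.011, 0.009, 0.010, 0.011]

m = len(estimates)
pooled = sum(estimates) / m                    # pooled point estimate
within = sum(variances) / m                    # average within-imputation variance
between = sum((e - pooled) ** 2 for e in estimates) / (m - 1)
total = within + (1 + 1 / m) * between         # Rubin's total variance
pooled_se = math.sqrt(total)

print(f"pooled estimate {pooled:.3f}, SE {pooled_se:.3f}")
```

The pooled standard error is always at least as large as the average within-imputation standard error, because the disagreement among the 10 imputed data sets adds uncertainty rather than hiding it.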
First I think it's helpful to show some descriptive results of the trends in the outcome we're interested in over time, and also some of the trends in the explanatory variables that we are interested in. So here's what we're interested in explaining. This is the rate of children in substitute care per 1,000. All of these figures are going to be national-level trends; we collapsed all of our state-level data into a national time series just to get some basic background here. And what you can see is that from a low in the early 1980s, foster care caseloads increased really rapidly through the late 80s and early to mid-1990s. They decreased almost as quickly over the late 1990s and 2000s. And then for the last 10 years or so they increased again, perhaps plateauing in the last few years. Over the same period we see some interesting similarities in violent crime trends. Similarly, violent crime arrests were increasing precipitously in the late 1980s, plateaued a little bit earlier, and then decreased steadily thereafter. So violent crime arrests haven't started to increase in the last decade like foster care caseloads have, but their decreases have kind of stopped, and you can see violent crime arrest rates have more or less leveled out for the last decade or so. Over the same period, welfare generosity has decreased very steadily and fairly drastically: from a high of about $800 per family in 2018 dollars, the average maximum AFDC payment for a family of three has decreased to somewhere around $400. And over the same period, female incarceration rates have increased steadily and quite drastically: at the beginning of the period it looks like slightly fewer than 10 women per hundred thousand were incarcerated, and now it's closer to 60.
Okay, so these trends are illustrative, but they don't actually tell us much about the relationships between these trends, and particularly the way these relationships might vary by state and by time. So here are the results of our state-level models. Let me walk through them bit by bit. Recall that our coefficients are going to be elasticities, so they represent the proportional change in the outcome resulting from a proportional change in the input. You see a dashed vertical line at zero, which indicates no effect. The dots in the plots represent the coefficient estimates, and the line ranges are the 95% confidence intervals. So for example, if the dot lies at 0.5, that would mean that for a 10% increase in that explanatory variable there would be a 5% increase in the outcome, and for a 100% increase in the explanatory variable there would be a 50% increase in the outcome. Dots that lie to the right of the dashed line are positive effects; dots that lie to the left of the line are negative effects. Each dot represents a different explanatory variable, and those are each labeled on the left side. Each panel represents a different specification of our models. The first panel on the left requires that year fixed effects are equal across all states. The middle panel allows our year fixed effects to vary by census region. And the rightmost panel is the most flexible specification, which allows year fixed effects to vary by division. We should be encouraged that our results don't seem to change much across the different specifications. If the results across these panels were quite different, we might take caution in interpreting our results, because they would be sensitive to the choices we made about specifying our model. The fact that they are fairly consistent means we can have more confidence in our results. And we see a few things.
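One caveat worth adding to that reading of a dot at 0.5: the "10% in, 5% out" arithmetic is the small-change approximation to a log-log coefficient, and it drifts for large changes. A quick numerical check (my illustration, not from the talk):

```python
import math

def loglog_effect(elasticity: float, pct_change: float) -> float:
    """Exact proportional change in the outcome implied by a log-log
    coefficient, given a proportional change in the explanatory variable."""
    return (1 + pct_change) ** elasticity - 1

beta = 0.5
small = loglog_effect(beta, 0.10)  # 10% increase in the input
large = loglog_effect(beta, 1.00)  # 100% increase in the input
print(f"10% increase -> {small:.1%} increase in the outcome")
print(f"100% increase -> {large:.1%} increase in the outcome")
```

For the 10% change the exact answer is about 4.9%, so the 5% reading is fine; for a 100% change it is about 41%, noticeably below the naive 50%, which is why elasticity interpretations are usually phrased in terms of small changes.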
First of all, violent crime stands out as the strongest predictor of rates of children in substitute care. In the preferred model, the one on the right, for any given proportional increase in violent crime rates, rates of children in substitute care increase by about half as much. That's a pretty large effect. There aren't other large statistically significant effects. You'll see that for female imprisonment the dot is quite close to the line, indicating that the effect is small, but the line range around the point estimate is also extremely small, so small you can't even see it. So those results are actually statistically significant, but the effects are quite small. Other effects, such as the poverty rate and whether the state government is unified Republican, are marginally statistically significant, and some of those effects are sizable. But it's important to note that in all of these results we've required that the relationship between variables is fixed over time. Or rather, we've estimated our model averaging out any variation in the effect over historical time. So it could be the case, for example, that female incarceration used to matter quite a bit but doesn't matter anymore, and as a result of averaging those effects over a long historical period we don't see any effect. And this is why models that allow us to measure time-dependent effects explicitly can be quite helpful. So here are results from our time-dependent effects model, where the elasticities are allowed to vary over time. Here the colored lines represent the point estimates and the colored ribbons represent the 95% confidence intervals around the estimates. And what you can see is that the effect of violent crime is not only higher than these other select variables we've pulled out, it has actually changed a bit over time.
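Mechanically, letting a coefficient vary by year amounts to interacting the covariate with year dummies, so you estimate one elasticity per year. A synthetic numpy sketch (again, not the actual model or data) where the true effect strengthens over time:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic panel: one covariate w whose true effect drifts over "years".
n_states, n_years = 51, 37
year = np.tile(np.arange(n_years), n_states)
w = rng.normal(size=n_states * n_years)
true_beta = np.linspace(0.2, 0.6, n_years)   # effect strengthens over time
y = true_beta[year] * w + 0.1 * rng.normal(size=w.size)

# Interact w with year dummies: column t equals w where year == t, else 0.
W_by_year = np.zeros((w.size, n_years))
W_by_year[np.arange(w.size), year] = w
X = np.column_stack([np.ones(w.size), W_by_year])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_t = coef[1:]                            # one estimated effect per year
print(beta_t[0], beta_t[-1])                 # should track roughly 0.2 and 0.6
```

As the talk notes, this produces T parameters per variable, which is why a coefficient table becomes unwieldy and a line-plus-ribbon plot of beta_t over time is the natural display.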
At the same time that violent crime rates were increasing in the 1980s and 1990s, the importance of violent crime as a predictor of child welfare outcomes was also increasing. Then, as violent crime rates were decreasing in the 2000s and 2010s, there wasn't a corresponding decrease in the importance of violent crime for predicting children in substitute care. Violent crime has remained an important predictor of children in substitute care at the same level that it was in the late 1990s. And so by allowing the models to be a bit more flexible in this way, we can start to understand not only how the levels of different inputs affect outcomes of interest, but how relationships among these different factors change over historical time. Okay, so what should we conclude from all this? Let's revisit the meaning of our violent crime measure. I said before that Swann and Sylvester claimed this was a good way of capturing drug use. Perhaps that's so, but I am not so sure. The relationship between drug use and violent crime has probably changed over time. Moreover, violent crime captures other things that probably also matter to child welfare, and these might also be changing over time. For example, when we measure violent crime we're not measuring violent crime victimization, we're measuring violent crime arrests. Arrests are a joint function of the underlying actual crime rate and criminal justice enforcement, or policing. If we think about policing as an institution of social control, we might also expect that it would be correlated with other institutions of social control, like child welfare services and child protective services. And so we might see a positive association between violent crime rates and children in substitute care not because of a relationship with drug use, not even because of a relationship with violent crime, but because in those times and places institutions of social control, including both police and child welfare services, are more active.
We also want to be careful not to give too much weight to the things we can measure compared to the things we can't measure at all. So we included, as I described before, these intercepts for states and for region-years and division-years. That modeling strategy helps us to better identify the true relationship between observed variables and the outcome of interest. But if we look at the proportion of variance that's explained by these intercepts, by these unobserved factors, it's quite large. That indicates to us that there's quite a bit going on that we're not even capturing, and so we should have some humility when we interpret these results and understand that we are only really able to explain explicitly a fairly limited proportion of the variation we see in the outcome of interest. What conclusions can we draw from this? Well, I think that these results start to show that historical research is much strengthened by attention not only to changing levels of inputs but to changing relationships between inputs and outcomes. I think they give us more detail about historical processes, but they also give us, I would say, more confidence that the policy implications we're deriving from research are up to date. In particular, what we've seen is that the relationship between violent crime, for example, and rates of children in substitute care has been very stable for the last 20 years or so. Other relationships, though, may be changing, and if we find change in those relationships, we should bring caution to any policy prescriptions we derive from research that's based on a different historical period. And I think a basic research exercise like this also points to a number of different ways that people like you might take future historical research.
Clearly we need to continue to improve our measures and our interpretation of measures, whether by adding new data to models like this or by rethinking the outcome itself. I didn't talk much about this, but the way we've measured the outcome of interest here today is as the stock of children in custody at any given point in time. That measure is somewhat different from the rate of children entering into care at any given time, and we might think about stocks and inflows as being driven by somewhat different sets of factors. We might also continue to improve research by continuing to search for historical data that allows us to extend our time series even further back in time, or of course wait for the future to come. And then finally, I think if the relationships we are observing in research like this are valid, we should also expect to see similar relationships with other closely related outcomes of interest. For example, if we did a similar model where the outcome of interest was investigations of child abuse and neglect, or substantiations of investigations of child abuse and neglect, and we were to find similar relationships, that would help give us confidence that we are observing true relationships. But if they diverged widely, then we might be a little more cautious about how we interpret some of these results. Those would all be avenues that people could take up to improve research in this area. Okay, that's it for me. Thank you so much for listening. It was really a joy to present this research. I hope it was helpful for any historical research you have planned, and if not, that it might inspire you to take up historical research in this area. I look forward to your questions, and in case you don't have time to ask them now or they come to you a little later, my contact information is here and you should feel free to email me with any questions relating to my talk. Thanks so much.
[Erin McCauley] Well, thank you Alex, that was a wonderful presentation, and it certainly shows the utility of this data and the kinds of questions we can ask and answer, to some extent, using the new data acquisitions. Before we move to the Q and A, I just want to highlight next week's session, because I know some people have to leave for other meetings. Can we move to the next slide? Next week we're going to have a presentation on the administrative data cluster. It's going to be led by Clayton Covington, and then I'm going to come in and preview the linking, which will be the following session. We're going to be talking about our administrative data for CPS history and foster care involvement, and then, for youth aging out of foster care without finding permanent placement, I will be talking about the NYTD data set. So we hope you can join us; it will be Wednesday over lunch again. Thank you again for attending. I will now open up the Q and A, so if you have a question, type it into the Q and A box and we can read it aloud and then answer it. I'll just give people a few minutes to type out any questions you may have. And thank you again to Alex, that was really a wonderful presentation. So I see something coming in on the chat. Please try to put questions in the Q and A, but I will read this one: 'This is really elegant research. I have used NCANDS at two points in time to compare some of the covariates you mentioned. This is a real strength of many of your data sets.' Lovely comment, thank you. We have a question coming in on the Q and A: 'How do we get copies of the presentations?' Well, at the end of the summer series, around August, we're going to start transcribing them, and we will have the entire series available on our website, including our presentations, a transcript of each presentation, as well as a video.
So that's on our website, and I recommend checking that; I'll also send out a notification through our listserv and on our Twitter telling people when the whole series is online. [Alex Roehrkasse] Erin, if people were interested in just getting copies of the slides, is that something you could send them if they contacted us individually, or do they need to wait for the recording of the presentation? [Erin McCauley] We prefer that those wait so that we can make the slide show ADA compliant before we distribute it, but if you would like the presentation before distribution and you have a timeline that makes waiting difficult, then definitely reach out to us and we will see what we can accommodate; we just have to get permission to distribute it. [Clayton Covington] Erin, as people are continuing to ponder their questions, you mentioned earlier how we will announce when this and other presentations are available on our website, with Twitter as one of our outlets, and I just wanted to promote that again. So if you all are looking for the latest and greatest news here at the National Data Archive on Child Abuse and Neglect, you can follow our account at NDACAN_CU and get the latest updates, where we not only mention our recent data releases and updates but also curate content to help explain the utility and viability of our data sets for you as researchers and as people interested in the field of child welfare. [Erin McCauley] Excellent point, Clayton. I just put our Twitter handle out in the chat, so if you pull that up you will see it there. And then we have two new questions. First, did Alex share a link to the article he published with the data he shared here? Alex? [Alex Roehrkasse] I don't think I've published anything yet, so that should be forthcoming. I expect we'll probably highlight that on the Twitter account when it does come out. So stay tuned. [Erin McCauley] Yeah, so this is actually a preview of not-yet-published work using our new data.
But you know, you can always cite the presentation if you have a paper that needs to cite it now, and reach out to us about how to do that. But yeah, look forward to this coming out into the world and being highlighted on Twitter. [Alex Roehrkasse] It looks like there was another question. Someone asked if I could share the full citations of the prior research that I shared. I'm happy to do that; if you contact me, I can send you those full citations, and if you Google the authors' names I'm sure you'll find them; there are lots of different articles. But shoot me an email and I'll gladly share the full citations. [Erin McCauley] Great, thank you for that, and I just sent your email address directly to the person who asked; you can also put it back up on the screen if you'd like. All right, we have another question coming in: 'Wonderful presentation. Alex, can you say again in simple terms what were the most significant findings? I just want to make sure I understood how you interpreted the implications of these findings.' That's in the chat. [Alex Roehrkasse] Oh, great, thank you. Yeah, thanks, I appreciate that. So how would I describe in simple terms the most significant findings of this research exercise? I would say first, one of the things we've learned is that, in broad strokes, most of the things that we thought predicted child welfare outcomes like children in foster care based on data from the 1980s and 1990s also appear to be true for data in the 2000s and 2010s. So some of what we've done is simply update previous research to incorporate new historical data. And we see that a lot of the relationships that we saw back then still hold up today. We didn't know that, though, and it could have been otherwise; those relationships could have changed quite a bit more than we see them change, so it's important that we actually ask that question and answer it explicitly.
But instead of just adding new data and running the same models, what we did was allow the relationships between the variables to change over time, and so we got a better sense of specifically which relationships are changing. And I think one of the things we learned is that even though violent crime rates have decreased markedly in the 21st century, the relationship between violent crime and child welfare outcomes like children in substitute care is as strong as it ever was. And so I think that highlights, at the end of the day, the importance when doing historical research of investigating how the relationships between these variables might or might not change over time. [Erin McCauley] All right, I think we will wrap it up with a couple of thank-yous to Alex for his wonderful presentation. We're so lucky to have him for these two sessions, giving us kind of an on-the-ground perspective on the administrative data that was our historical data acquisition. I can't wait to see both his publications come out and all the publications that you will create using this new data. We will see you next week. Thank you again so much for coming, and if there are any lingering questions, Alex's email is up there, and all of our email addresses are available on the website. So thank you very much, everyone, take care. [Voiceover] The National Data Archive on Child Abuse and Neglect is a project of the Bronfenbrenner Center for Translational Research at Cornell University. Funding for NDACAN is provided by the Children's Bureau.