
Utilizing Post-Release Outcome Information To Measure the Effectiveness of

Correctional Education Programs

Eric Lichtenberger and Todd Ogle

United States Department of Education Office of Safe and Drug-Free Schools

April 2008

The content of this report does not necessarily reflect the view or policies of the U.S. Department of Education, nor does the mention of trade names, commercial products or organizations imply endorsement by the U.S. government. This publication also contains URLs for information created and maintained by private organizations. This information is provided for the reader’s convenience. The U.S. Department of Education is not responsible for controlling or guaranteeing the accuracy, relevance, timeliness or completeness of this outside information. Further, the inclusion of information or URL does not reflect the importance of the organization, nor is it intended to endorse any views expressed, or products or services offered.

About the Authors

Eric J. Lichtenberger, Ph.D. – Dr. Lichtenberger is a Senior Research Associate for the Center for Assessment, Evaluation, and Educational Programming at Virginia Tech. Previously, he taught graduate and undergraduate courses in vocational education, human resources, and entrepreneurship. Vocational education and political science were the emphases of his doctoral studies. Over the past five years, he has been primarily involved in evaluation, research, and database development in the areas of correctional education and offender workforce development. His research agenda includes studying how individual-level factors, as well as local labor market conditions, contribute to successful post-release transitions. His long-term research goals relate to building a more thorough understanding of how correctional education could become more integrated in state-level, as well as local-level, economic development.

Todd Ogle, Ph.D. – Dr. Ogle is an analyst in the Virginia Tech Office of Institutional Research and Effectiveness. His responsibilities include leading the effort to deliver the University’s upcoming Reaffirmation of Accreditation by the Southern Association of Colleges and Schools via electronic format and leading the effort to upgrade the university’s faculty activity reporting process to an electronic system. Formerly, he worked at the Center for Assessment, Evaluation, and Academic Programming as a research associate, collecting, transforming, and analyzing correctional education data for use in evaluations for the Virginia Department of Correctional Education.


Table of Contents

Introduction
Planning the Evaluation
    Establishing Goals and Objectives
    Understanding and Utilizing Existing Data
        Inmate Data
        Program Participation Data
            Enrollment Dates
            Time from Participation to Release
            Program Type
            Contact Hours
            Pre-Release Program Outcomes
            Educational Hierarchy
        Obtaining Existing Data
            Administrative Burden and the Outside Agencies
            Working with Raw Data
            Understanding the Data Format
            Integrating Data or Methodologies Used in Existing Evaluations
            Using a Unique Identifier
            Forming Release Cohorts
            Distinguishing Between Completers and Non-Completers
            Distinguishing Between No-Fault Non-Completers and At-Fault Non-Completers
Conducting the Evaluation
    Recidivism
        Geographic Scope and Timeframe
        Precipitating Event
        Survival Time
        New Sentence Length
        Most Serious New Offense
        Practical Examples
    Post-Release Employment Measures
        Limitations of UI-Wage Data
        Advantages of UI-Wage Data
        Practical Examples
        Occupational/Industrial Relatedness Earnings
    Post-Release Educational Attainment
        National Student Clearinghouse
        Practical Examples
Isolating Correctional Education Program Impact
    Techniques for Isolating the Impact of Correctional Education Programs
        Purposefully Selected Comparison Groups
        Creating Comparison Groups Based on Key Variables
Summary
Works Cited

Introduction

Evaluating Correctional Education (CE) programs and communicating the results is increasingly necessary in today's climate of limited resources and heightened accountability for government programs. Program evaluation can and should be utilized for internal auditing and continuous improvement, but it can also be used to present program results to external audiences, such as policymakers and others who might control funding sources. Communicating the results internally can provide the information necessary to make decisions related to program improvement and resource allocation. Communicating the results externally, while sometimes required by governing bodies and funding agencies to meet basic reporting requirements, has been influential in shaping statutes, budgetary decisions, and public opinion.

Lichtenberger and Ogle (2006) argue that a lack of appropriate program evaluation creates an informational vacuum in which CE administrators lose a great deal of control over how their programs are perceived by those influencing or controlling funding. Klein and Tolbert (2004) make the case that increasing support for correctional instruction (external justification) will require producing better and timelier information about the status and outcomes of such services. Without such information, it is reasonable to expect an erosion of support for correctional education programs. Making this justification to external constituencies requires going beyond providing participation head counts, program completion rates, and certification or GED attainment rates. Today's climate demands a linkage between correctional education programs and the outcomes of program completers when they are re-integrated into society. Specifically, legislators and the public at large want to know whether correctional education participants recidivate and whether they obtain and retain employment. In order to properly respond to such inquiries, post-release outcome data are required.

The Office of Vocational and Adult Education, U.S. Department of Education, with editorial assistance from the Pacific Institute for Research and Evaluation, created this paper to provide CE administrators and research analysts with strategies for effectively collecting post-release outcome data and putting them to use both internally and externally. This paper highlights the approaches used by select states, as well as their outside evaluators and researchers, to measure the extent to which CE programs meet post-release program goals and objectives. In doing so, the authors bring to light issues associated with specific post-release program measures such as employment, recidivism, and educational attainment and the procedures used to collect and analyze the related measures. The paper consists of three major topics: 1) planning the post-release follow-up, 2) conducting the post-release follow-up, and 3) isolating the impact of CE programs.


Planning the Evaluation

Planning for the evaluation is the single most important step in the process of establishing program impact. Thorough and thoughtful planning can improve the success of the evaluation. The following questions must be answered during the planning process: 1) What information is required to determine if the goals have been met; 2) Is the information available; and 3) If the information is available, how can it be obtained? Once the required information is identified, the sources of the information can be determined. Certain pieces of information can be found in existing sources through a technique called data matching, while other information can only be obtained through surveys. Once again, the method (data matching or surveying) depends on the information required, the availability of that information, and the way in which it can be obtained. This section provides information related to establishing the goals and objectives of the evaluation, identifying the sources of information required to meet those goals, and planning to obtain the information.

Establishing Goals and Objectives

Any plan for the use of post-release outcome data begins with identifying the purpose of the evaluation—to determine the effect of a program. With this clarity in mind, the evaluator can determine exactly what type of information will be required to determine if that purpose has been achieved. There are numerous types of correctional education programs: vocational, academic/GED, college, apprenticeship, parenting, transition, and cognitive restructuring. It is important to be aware that the goals of the various types of programs are not always the same. Taking program type into account allows an evaluator to determine what post-release measures are appropriate in determining the program's level of success as predicted by its goals and objectives. It also allows an evaluator to pinpoint the programs that had more of an impact on post-release outcomes if multiple CE programs are being measured together. It could be argued that every CE program has the goal of reducing recidivism; however, for some programs, the goal of reducing recidivism is more direct than for others.

Often, the evaluator must balance internal (program improvement) and external (i.e., Federal reporting, state legislature requests) goals. It is important to keep the goals of the program in mind while attempting to meet the demands of external constituents. The post-release outcomes that are measured will differ depending upon the type of program and its goals and objectives. An evaluation of an apprenticeship program might focus on measures of recidivism, employment, and earnings, and perhaps even the relatedness of the participants' current employment to the respective apprenticeship program. Examining the participants' post-release enrollment in college courses might not be an appropriate measure, based on the goals and objectives of the apprenticeship program; however, that same measure would be very important in evaluating a college program, such as those funded through the Incarcerated Youth Offender (Specter) Grant. The nature of the CE program, as well as the program's goals and objectives, will dictate the post-release outcome measures hypothesized in the evaluation plan.

Finally, it is imperative that the evaluation objectives are stated in clear and measurable terms that indicate an increase or a decrease in a behavior, a skill, an attitude, or knowledge. The evaluator must obtain information to determine the degree to which the objectives have been met; if they are not written in terms of specific outcomes (obtained employment) with measurable criteria (within one quarter following release), the evaluator will be forced to apply goals based on the data that are available. For example, if one objective of a Youth Offender program is to see a percentage of completers enroll in post-secondary education following release, the simplest approach is to begin with a count of those who indeed enrolled. This simple measure will need to be followed by objectives that determine what happened as a result of this post-secondary enrollment, e.g., completion rate, resulting employment.

For novice evaluators, however, we suggest not being overly ambitious during the first evaluation and letting the complexity of the objectives evolve in subsequent evaluations, as a result of increased familiarity with the information and suggestions for further investigation. In order to address complex issues such as the relationship between vocational training and post-release employment relatedness, basic questions must be answered first. For example, in year one of an evaluation of post-release employment outcomes, initial questions regarding the number of former offenders employed in the first quarter, second quarter, etc., and their subsequent earnings can be answered. After having established a working relationship with the relevant state's employment commission in year one, the evaluator can in year two begin working with North American Industrial Classification System codes to identify the industries in which the former offenders were employed. Future questions can delve into issues of cost/benefit ratios and other more complex topics. This methodical approach to the evaluation allows for the building of relationships with cooperating agencies and a base level of understanding of the data prior to more complex, exploratory efforts.

Understanding and Utilizing Existing Data

Once the objectives of the evaluation have been established, identifying the information needed to determine impact and locating that information are the next most important tasks. Much of the information needed in a post-release follow-up actually exists in hard copy or digital form somewhere in Department of Corrections and Correctional Education program files. The evaluator must become familiar with this information, obtain the information, and format it prior to making any connections with post-release data.

The remainder of this section will discuss the inmate and program participation data required for a post-release follow-up, how to obtain that data, and how it should be formatted for the evaluation.

Inmate Data

For every CE completer included in the sample, certain pieces of information are critical to the success of the post-release evaluation. At a minimum, two unique identifiers (such as inmate number and social security number) and release date must be available. There are also attributes of individual inmates that can be related to post-release outcomes. These attributes include gender, race, age at intake, age at release, highest level of education attained prior to intake, TABE scores, most serious offense, infractions committed while incarcerated, prior state or federal record, and drug or alcohol treatment administered during incarceration. This is not an all-inclusive list, and the evaluator should rely on the objectives of the evaluation to determine what information might be necessary.


In order to determine the post-release outcomes of CE programs, it is necessary to have information related not only to the characteristics of the individual inmates, but also to the attributes of each occurrence of an individual's participation in CE programming (e.g., for each course or program, the participant's course outcome, course exit date, and number of contact hours with the instructor). Without individual-level program participation information, it would be difficult, if not impossible, to adequately evaluate programs. In a technical report that dealt with creating a common set of data elements across CE programs, Klein and Tolbert (2004) made a similar argument as they posited that, to avoid inappropriate and potentially misleading comparisons, analyses of correctional education programs should focus on outcomes gleaned from inmate-level data.

Program Participation Data

Beyond the inmate attributes mentioned above, the following data regarding treatments are useful when attempting to evaluate correctional education programs using post-release outcomes: the start date or the date of entry into the specific CE program, the exit date, the name of the program, the type of program (e.g., vocational, GED, college, cognitive, parenting), contact hours, and an outcome for the given program or course. These pieces of information will be discussed in the context of their importance in conducting correctional education post-release outcome evaluations.
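
To make these data elements concrete, the following sketch (in Python) shows one way a single participation record might be structured. The field names and sample values are purely illustrative and would need to be mapped to the actual fields in a given CE database.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ParticipationRecord:
        """One occurrence of an inmate's participation in a CE course or program.

        Field names are illustrative; actual CE databases will differ.
        """
        inmate_id: str        # unique identifier shared with DOC data
        program_name: str     # e.g., "Carpentry I"
        program_type: str     # e.g., "vocational", "GED", "college"
        entry_date: date      # start of enrollment
        exit_date: date       # end of enrollment
        contact_hours: float  # instructor contact hours for this record
        outcome: str          # e.g., "completed", "no-fault withdrawal"

    # Example record (hypothetical values)
    record = ParticipationRecord(
        inmate_id="A12345",
        program_name="Carpentry I",
        program_type="vocational",
        entry_date=date(2002, 1, 14),
        exit_date=date(2002, 5, 30),
        contact_hours=220.0,
        outcome="completed",
    )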

Enrollment Dates

Establishing the dates during which an individual was enrolled in a particular CE program or course is important for ensuring that the participation occurred during the sentence (release cohort) of study, and equally important for making sure that the CE participation occurred prior to release. These dates can also provide a richer set of information, such as each individual's participation history. Having the timeframe of a course also makes it possible to determine whether the number of days between the program exit date and the offender's release date is related to post-release outcomes. For example, a correlation could be calculated between post-release earnings and the number of days between a vocational program exit date and the release date. One could assume that the relationship would be negative, as offenders who have participated in programs closer to their release date would not only possess a more up-to-date skill set, but those skills would be fresher. Therefore, as the number of days decreases, the likelihood of positive post-release outcomes would increase. An evaluator could also use entry and exit dates to determine, for example, whether non-completers who spend more days enrolled in a particular course or program have better post-release outcomes than those who spend fewer days enrolled.
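
As a minimal illustration of that analysis, assuming a matched dataset with hypothetical columns for exit date, release date, and post-release earnings, the correlation could be computed as follows in Python with the pandas library:

    import pandas as pd

    # Hypothetical matched dataset: one row per vocational completer, with the
    # program exit date, release date, and first-year post-release earnings.
    df = pd.DataFrame({
        "exit_date": pd.to_datetime(["2001-03-15", "2001-11-02", "2002-06-20"]),
        "release_date": pd.to_datetime(["2002-01-10", "2001-12-15", "2002-07-01"]),
        "post_release_earnings": [9500.0, 14200.0, 16800.0],
    })

    # Days between program exit and release; the hypothesis above predicts a
    # negative correlation between this gap and post-release earnings.
    df["days_exit_to_release"] = (df["release_date"] - df["exit_date"]).dt.days

    print(df["days_exit_to_release"].corr(df["post_release_earnings"]))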

Time from Participation to Release

Participants in correctional education programs may not necessarily get released during the year in which they participated. In fact, it could be several years until all of the CE participants from a given year are released, unless each participant's potential sentence length is taken into consideration prior to enrollment. CE program administrators could control for this by balancing each potential participant's sentence length with the length of time it generally takes to complete a program. The Indiana Department of Correction attempts to balance a potential participant's ability, the requirements of a program, the length of time it takes to complete a program, and the participant's sentence length minus the reduced sentence afforded to program completers. Florida also has a risk/needs ranking system that guides inmate placement into education programs. The obvious benefits of this approach far outweigh the administrative balancing required to make it work. If the given CE participant completed a vocational program, the closer the program exit date to the release date, the better. One could argue that the completer's skill set would be more up to date (assuming the vocational program is current) and, with a shorter time between program exit and release, the participant could be in a better position to remember what was actually learned. For instance, if someone takes an information technology course and is released two years after exiting the program, not only will there have been considerable changes in the subject matter, but that individual might not be able to accurately recall what was learned in the course. Furthermore, even if the amount of time between a participant's program exit and release from prison is relatively short, it sometimes takes a while for the data related to post-release outcomes to mature or become available. For example, if unemployment insurance wage records are being used to determine post-release employment and earnings outcomes, there is usually a lag time between when the data are collected and when they are available for analysis. For both the Virginia Department of Correctional Education using data obtained from the Virginia Employment Commission and the Indiana Department of Correction using data from the Workforce Development Board, the lag time is usually one quarter. For example, earnings/employment records for those employed during April, May, or June (the second quarter) would not be available until the following October. This lag time must be taken into consideration when developing an evaluation plan.

Program Type

Storing the program type information for each inmate is critical in linking the participation with an appropriate outcome. As stated previously, the program type dictates the appropriateness of the pre-release and post-release outcomes. Additionally, storing the program type allows for richer post-release analysis, such as, in the case of vocational programming, evaluating the relationship between the program and the industry in which the former inmate is employed post-release.

Contact Hours

Determining the contact hours for each participant in each course allows evaluators to determine the intensity (or dosage) of a particular program. This can be accomplished using a calculation based on hours per day, entry date, and exit date for that particular record of program participation. This pre-release CE measure also allows evaluators to estimate how close a participant came to successfully completing a course. According to Klein and Tolbert (2004), seat time may vary from state to state and, given the difference in intensity of coursework, may affect student outcomes.
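
As an illustration, contact hours might be estimated from enrollment dates as in the sketch below; the weekday-only instructional calendar assumed here is hypothetical and should be replaced by the facility's actual schedule.

    from datetime import date, timedelta

    def estimate_contact_hours(entry: date, exit: date, hours_per_day: float) -> float:
        """Estimate instructor contact hours from enrollment dates.

        Assumes instruction occurs Monday through Friday; actual calculations
        should follow the facility's instructional calendar.
        """
        days = 0
        current = entry
        while current <= exit:
            if current.weekday() < 5:  # Monday=0 ... Friday=4
                days += 1
            current += timedelta(days=1)
        return days * hours_per_day

    # A participant enrolled for roughly four months at 2.5 hours per weekday
    print(estimate_contact_hours(date(2002, 1, 14), date(2002, 5, 30), 2.5))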

It is important for the evaluation to document such program attributes so that others are able to contextualize the results and methodology. For example, if a given department of correctional education implements two types of carpentry programs at two separate facilities, and the first program has only 100 contact hours between the participant and instructor while the second program has 250, the expectations for the participants in the two programs will likely differ. Also, if someone from another CE agency wanted to replicate the evaluation performed in the aforementioned state, having the contact-hour information is important to ensure comparability.

Pre-Release Program Outcomes

Program or course outcomes are important when making comparisons between program completers and those who did not participate in programs, for example. Without these outcomes, the evaluator will be unable to isolate potential differences that exist between the groups. One of the universal goals of correctional education programs is to instill skills, attitudes, or other attributes within participants that will enable them to be successful upon release. According to Klein and Tolbert (2004), researchers should seek to isolate inmates participating in coursework during a specific academic year or those admitted during a specific period of time. This works well with pre-release outcomes, but it complicates the analysis of post-release outcomes.

A potential problem exists when an offender has participated in and completed more than one program. In that case, it becomes difficult to determine which program gets "credit" for that individual's positive post-release outcomes. If the evaluation is a descriptive report that presents the outcomes for a group of program participants or completers, isolating the impact is not necessary, especially if no causal claims are being made. In other words, the individual participated in a program and achieved specific outcomes; the same individual also participated in a different program. Selecting which program completion to examine is made easier by creating an educational hierarchy for each participant. Advanced statistical procedures, such as regression analysis, can assist the evaluator in determining the proportional contributions of multiple programs.

Educational Hierarchy

Creating an educational hierarchy based on each offender's educational programming history is one way to isolate the impact of specific programs. For instance, if an offender completed both a pre-GED as well as a GED program, logically it would be appropriate to focus on the highest completion level (GED). If that same individual participated in a college-level program and passed several courses without earning an associate or bachelor degree, the highest level could be something similar to "some college." The highest level of completion seems to work well with "academic" correctional education courses. However, many offenders participate in both academic and vocational or apprenticeship programs, and it remains difficult to place vocational program participation on that "academic" hierarchy. Arguably, the goals of academic and vocational programs are similar (reducing recidivism and increasing post-release employment opportunities), but traditionally, the programs' objectives are quite different. The highest level could be a combination of academic and vocational program outcomes. Therefore, if an ex-offender completed a GED program along with a construction-related vocational program, that individual could be considered as having achieved a higher level of education than a participant in a single program.
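
One minimal way to encode such a hierarchy is sketched below; the level names and their ordering, particularly the placement of vocational completions, are illustrative choices that each agency would need to make for itself.

    # Hypothetical ordinal ranking of program completions; the levels and their
    # ordering are illustrative and should reflect a given agency's programs.
    HIERARCHY = {
        "pre-GED": 1,
        "GED": 2,
        "vocational": 2,   # ranked alongside GED here, per (hypothetical) agency policy
        "some college": 3,
        "associate degree": 4,
    }

    def highest_level(completions):
        """Return the highest completion level from a participant's history."""
        return max(completions, key=lambda level: HIERARCHY[level])

    # An offender who completed both a pre-GED and a GED program
    print(highest_level(["pre-GED", "GED"]))  # -> "GED"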


Obtaining existing data

Using post-release outcomes to evaluate CE programs—particularly those related to recidivism and employment—is a challenging task even when the post-release outcome information is readily available. The challenge lies in the discrepancy between the type and format of the data used in CE program reporting and the type and format of the data required to match with post-release outcome data. Most CE programs are accustomed to systematically reporting the number of participants (head counts), demographic categorizations, contact hours, completion rates, and the number of certificates or degrees earned during each reporting cycle. This information is generally within the control of the program, as it is most likely collected and stored by CE staff.

However, when it comes to the information needed to measure post-release outcomes, whether via data matching or survey, many CE programs must look to other departments or agencies for access. The exception would be when correctional education is a functional unit within a state's department of corrections. Such CE programs are one step ahead in their access to inmate-level information that is stored with information related to education programs. Those CE programs without direct access to inmate-level data must identify the information that is needed from the department of corrections and then establish a memorandum-of-understanding with the department to facilitate the transfer of that information.

The key issues involved with obtaining the required information are: 1) being able to communicate effectively with the agencies that have the information required, 2) ensuring secure transfer and storage of data, and 3) providing the agencies that have the data with the information they need to process the request. Upon establishing contact with the agencies that have the data, memoranda-of-understanding can be written between agencies. The memorandum-of-understanding must contain language that addresses the issues stated above, particularly the security aspects, but should also outline the tasks that will be performed by the partnering agencies. Placing as little burden as possible on the partnering agencies will improve the likelihood of successful data collection in the future.

Administrative Burden and the Outside Agencies

Fulfilling data matching requests is generally only a small aspect of the typical research associate’s or database manager’s job. At times, they could be working on other projects and are under pressure associated with time constraints; therefore, performing the matching requests is relatively low on their list of priorities. The amount of time it takes for the outside agency to fulfill the request varies widely, based on the current job responsibilities of the individual assigned to performing the request and the level of automation on both ends. However, since all these agencies are funded through taxpayer monies, many maintain at least a minimum level of customer service, and it is generally agreed upon that inter-agency cooperation is a professional courtesy.

For example, in Virginia, the third-party researchers at the Center for Assessment, Evaluation, and Educational Programming (CAEEP) made efforts to reduce the burden on the Virginia Department of Corrections (inmate demographics and recidivism), the Virginia Employment Commission (unemployment insurance data), and the State Council of Higher Education for Virginia (post-secondary enrollments). Initially, CAEEP researchers had difficulty developing a relationship with the State Council of Higher Education for Virginia (SCHEV) in an effort to obtain higher education enrollment and course outcome records, along with the degrees-conferred records. A miscommunication between the researchers and the database manager at SCHEV stemmed from a perception that CAEEP wanted the data manipulated and a report generated by SCHEV. Efforts made by CAEEP to ease the administrative burden on these partnering agencies went a long way in the development of an effective working relationship. These efforts included a willingness to prepare requests in such a way that made the partners' queries easier, a willingness to work with raw data that had not been aggregated or transformed, and learning about the partners' reporting cycles and the needed lead time on requests.

Also, the proactive approach to data security held by CAEEP researchers served to ease many of the concerns that were initially held by individuals employed at those agencies. All data transfers took place via secure File Transfer Protocol (FTP) sites. Files were never sent as email attachments, since the files often include offender names, social security numbers, and other sensitive information. All computers used at CAEEP are shut down each night and have password protection. Laptop computers are not used in data compilation, though they are sometimes used in report generation and presentations. Once a project reaches the point at which the final report can be generated, all of the information that could potentially identify an individual is purged from the working database. However, a backup with the individual identifiers is stored on a secure server.

Working with Raw Data

At times, the correctional education agencies performing the evaluation will be given the choice of obtaining data in a raw format or requesting that an outside agency transform the data into a format familiar to CE administrators or data managers. Based on experience, the authors are convinced that being able to work with raw data helps to establish credibility with the cooperating agencies and can speed up the data exchange/transfer process. Many times, the outside agencies employ research associates or database managers who are given the responsibility of performing the data-match request. In other words, the correctional education employee who is performing the evaluation creates a file that includes the preferred unique identifier along with any other required fields and securely sends that file to the outside agency that has the desired data. External agencies can have different requirements for the file. Some accept Microsoft Excel™ spreadsheets or tab-delimited files, which are often the easiest to create. Other agencies require that data fields be formatted in a fixed-length format, requiring that each field be placed in a predefined position with a limit on the number of characters allowed.
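
For illustration, a fixed-length request file might be produced as in the following sketch; the field positions used here (SSN in the first nine columns, last name in the next twenty) are invented for the example, since each partnering agency publishes its own layout.

    # Build a data-match request file in a fixed-length format. The field
    # positions below are purely illustrative.
    requests = [
        ("123456789", "DOE"),
        ("987654321", "SMITH"),
    ]

    with open("match_request.txt", "w") as out:
        for ssn, last_name in requests:
            # SSN: 9 characters, no dashes; last name: left-justified, 20 chars
            out.write(f"{ssn:<9}{last_name:<20}\n")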

Data providers are sometimes asked to fulfill requests that require the creation of proxy variables from the matched raw data, such as a variable that identifies employment for offenders during a particular quarter, year, or post-release period. Many of the proxy variables can be easily developed in database software such as Microsoft Access™ or FileMaker Pro™. Such programs usually include aggregate functions, which allow users to create more sophisticated proxy variables from the available raw data, including average wages, percent of post-release quarters employed, number of credits earned, and grade point averages.
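
The same kind of aggregation can be sketched outside of desktop database software; the example below uses Python and pandas with hypothetical raw UI-wage records, in which an individual may have one row per employer per quarter.

    import pandas as pd

    # Hypothetical raw UI-wage records: one row per person per employer per quarter.
    wages = pd.DataFrame({
        "ssn": ["123456789"] * 3 + ["987654321"] * 2,
        "quarter": ["2002Q3", "2002Q4", "2002Q4", "2002Q3", "2003Q1"],
        "earnings": [2100.0, 2400.0, 800.0, 3100.0, 3300.0],
    })

    # Proxy variable 1: total earnings per person per quarter (an individual can
    # have multiple wage records in one quarter, one per employer).
    quarterly = wages.groupby(["ssn", "quarter"], as_index=False)["earnings"].sum()

    # Proxy variable 2: average quarterly earnings per person.
    avg_wages = quarterly.groupby("ssn")["earnings"].mean()

    # Proxy variable 3: a simple employed-in-quarter flag.
    quarterly["employed"] = quarterly["earnings"] > 0
    print(avg_wages)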


Understanding the Data Format

The first step in understanding the available data is to request a data dictionary, if one is available. Become familiar with the data fields, the format of those fields, and the data reporting cycles. Knowing the specific data to ask for enhances the requesting agency's credibility and helps in communicating the correctional education agency's data needs. It could also serve to speed up the data exchange. Evaluators could argue that it is difficult to know what to ask for without having used the data in the past. However, a data dictionary provides the necessary information, if it includes detailed descriptions of each data field. Other researchers or agencies that have used matched data in the past can also offer especially helpful advice about the utility of specific data fields. The CE research associates and database managers also have a working knowledge of the data and should be able to answer questions related to the quality and validity of the information being requested.

Integrating Data or Methodologies Used in Existing Evaluations

Whenever possible, making use of existing data or methodologies from existing evaluations not only speeds up the evaluation process, but also lends credibility to the evaluation. In the case of the Virginia Department of Correctional Education (VDCE), CAEEP initially attempted to integrate data from an annual recidivism study conducted by the Virginia Department of Corrections (VDOC) into its Incarcerated Youth Offender Program (IYOP) historical analysis. However, when creating release cohorts, VDCE utilizes the fiscal year (since the fiscal year more closely parallels the academic year), while VDOC recidivism studies utilize the calendar year. Though additional work was required to line up the dates of the data, using the existing VDOC data lends validity to the findings because those data have been tested and validated.

Conversely, integrating methodologies can be a challenge. CAEEP encountered an issue around the operational definition of recidivism used by the VDCE—re-incarceration in a state-level facility in Virginia within three years of release. Based on that definition, at least three years of intake data are required, so the recidivism study for former offenders released in calendar year 2000 could not be conducted until at least 2004. Due to IYOP reporting requirements, which include a more inclusive definition of recidivism, the rates for people released in 2004 could be calculated in the same year, but they did not fit the VDOC operational definition, making these rates ineligible for analysis. As an alternative, CAEEP researchers employed the exact same methodologies as VDOC (in determining who recidivated) after altering the process to include those who had been released recently.

Using a Unique Identifier

First and foremost, it is necessary to have a unique identifier (two are better) for each participant so that the correctional education program datasets can be matched to post-release outcome and other related datasets. It is critical to ensure that the unique identifier is also being used by the agency collecting the data. For example, a unique inmate number can be used to exchange data between the CE program database and a separate one containing demographic, behavioral, and other information. At the same time, if a correctional education agency is interested in obtaining information regarding post-release employment from the state agency responsible for maintaining unemployment insurance (UI) wage records, a valid social security number (SSN) for each participant is typically required. According to Klein and Tolbert (2004), among the states included in their study of the feasibility of collecting common correctional education measures across the U.S., most states collect inmates' SSNs. In Florida, the SSN is the identifier shared between the Department of Corrections and the Education and Training Placement Information Program, which provides employment and education outcomes for its state agencies. Texas also uses the SSN between Corrections and the Texas Workforce Commission. Some states are limited in their ability to use SSNs due to privacy laws. Thus, establishing a well-defined memorandum-of-understanding between agencies on the usage of SSNs, including security and privacy assurances, is a first step in removing such barriers.
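
A data match of this kind can be illustrated with a simple merge on the shared identifier; the files, field names, and values below are hypothetical.

    import pandas as pd

    # Hypothetical CE participation file and UI-wage extract, matched on SSN.
    ce = pd.DataFrame({
        "ssn": ["123456789", "987654321"],
        "program_type": ["vocational", "GED"],
        "release_date": pd.to_datetime(["2002-08-01", "2002-09-15"]),
    })
    ui_wages = pd.DataFrame({
        "ssn": ["123456789", "123456789", "321549876"],
        "quarter": ["2002Q4", "2003Q1", "2002Q4"],
        "earnings": [2400.0, 2900.0, 1800.0],
    })

    # A left join keeps every CE participant; missing wage rows indicate that
    # no covered employment was found for that person in the extract.
    matched = ce.merge(ui_wages, on="ssn", how="left")
    print(matched)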

Using the SSN as an identifier has its pitfalls, namely the accuracy and validity of the number itself. However, the SSN is the only identifier that allows for data matching with partnering agencies that store employment and education data. Obtaining the most complete and valid set of SSNs for a given cohort of inmates, then, is critical to post-release follow-up success. Texas' Windham School District cites improving the percentage of records with valid SSNs as one of its future research strategies in its January 2007 evaluation report (Hunter, 2007). The Florida Department of Corrections validates SSNs on an ongoing basis by sending batches to the U.S. Social Security Administration.

There are basic business rules employed by the Social Security Administration in allocating SSNs, which can be used to screen for invalid numbers. For example, the first three digits of the SSN—called the area number—are assigned based on the zip code from which the application originated. An SSN beginning with “000” or “666” has never been assigned, nor have area numbers in the 800s or 900s or a number above 772 in the 700s. The second two numbers in an SSN are the group number, and no SSN with “00” has ever been assigned. Finally, the last four digits of the SSN are the serial number, and “0000” has never been used, nor have SSNs consisting entirely of the same numeral (111-11-1111, 999-99-9999, etc.). Familiarity with these rules allows an initial screening to identify invalidities quite readily.
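
These rules translate directly into a screening routine; the sketch below implements only the historical assignment rules described above, so passing the screen shows that a number is not obviously impossible, not that it is valid.

    def ssn_passes_initial_screen(ssn: str) -> bool:
        """Screen an SSN against the allocation rules described above.

        Checks only the historical (pre-randomization) assignment rules;
        passing does not prove the number is valid, only that it is not
        obviously impossible.
        """
        digits = ssn.replace("-", "")
        if len(digits) != 9 or not digits.isdigit():
            return False
        area, group, serial = int(digits[:3]), digits[3:5], digits[5:]
        # Area numbers never assigned: 000, 666, 800-999, and 773-799.
        if area == 0 or area == 666 or area >= 800 or 772 < area < 800:
            return False
        if group == "00":
            return False
        if serial == "0000":
            return False
        # Numbers consisting entirely of the same numeral were never issued.
        if len(set(digits)) == 1:
            return False
        return True

    print(ssn_passes_initial_screen("123-45-6789"))  # True
    print(ssn_passes_initial_screen("666-12-3456"))  # False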

Forming Release Cohorts

Organizing former offenders into release cohorts allows the development of a long-range, post-release evaluation that might include participants or completers from more than one year who happened to be in the given release cohort. Evaluators in Virginia, Texas, Indiana, and Florida format their data as release cohorts in order to match employment records. There are two commonly used approaches for grouping former offenders for comparison. The first is to create a release cohort based on a specific time frame, such as a calendar year, fiscal year, or academic year of release. Thus, a group being measured may include people released between July 1, 2002 and June 30, 2003. Using this technique improves the comparability of groups because, whether or not the inmates in the cohort have a record of CE program participation or completion, they all have a release date during the established range. The evaluator can then aggregate the results for the year and use those results for long-term trend analysis or for other uses in the future.
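
The sketch below illustrates this first approach, assigning each release date to a July 1 through June 30 fiscal-year cohort as in the example above; states with different fiscal years would adjust the boundary month.

    from datetime import date

    def fiscal_year_cohort(release_date: date) -> str:
        """Assign a release date to a July 1 - June 30 fiscal-year cohort.

        A release on 2002-08-15 falls in FY2003 (July 1, 2002 - June 30, 2003).
        """
        fy_end = release_date.year + 1 if release_date.month >= 7 else release_date.year
        return f"FY{fy_end}"

    print(fiscal_year_cohort(date(2002, 8, 15)))  # FY2003
    print(fiscal_year_cohort(date(2003, 3, 2)))   # FY2003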

The second common approach is to form release cohorts based upon another attribute (such as program completion), collapsing the release or participation time-frame data. In the context of post-release employment outcomes, the cohort of program completers is examined for obtaining employment one quarter after release, two quarters after release, etc., regardless of the dates of program completion or release. When using release cohorts in measuring post-release employment outcomes, it is beneficial to use re-incarceration data to determine if the ex-offender recidivated. For example, if the ex-offender was released in the third quarter of 2000 and recidivated in the first quarter of 2001, it would be possible for the individual to have worked in the third quarter of 2000, the fourth quarter of 2000, and the first quarter of 2001. Therefore, for this ex-offender within this release cohort, the only portion of the earnings/employment records that would be pertinent would be the three identified quarters. It should be noted that ex-offenders whose release dates occur early in a quarter have more time to obtain employment in that quarter. The same is true when dealing with recidivists, as those who recidivate later in a quarter obviously had more time to potentially be employed. Those quarters should be included, but the results should be interpreted with the understanding that earnings in the release quarter and the recidivism quarter will generally be lower.
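
The logic of identifying the pertinent quarters can be expressed in a few lines; the function below reproduces the third quarter of 2000 through first quarter of 2001 example.

    def pertinent_quarters(release, recidivism):
        """List the calendar quarters in which an ex-offender could have worked,
        from the release quarter through the re-incarceration quarter inclusive.

        Both arguments are (year, quarter) tuples.
        """
        year, q = release
        quarters = []
        while (year, q) <= recidivism:
            quarters.append(f"{year}Q{q}")
            q += 1
            if q > 4:
                q, year = 1, year + 1
        return quarters

    print(pertinent_quarters((2000, 3), (2001, 1)))  # ['2000Q3', '2000Q4', '2001Q1']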

Distinguishing Between Completers and Non-Completers

Distinguishing between participants who do and do not complete programs is critical to the analysis (Hunter, 2007, p. 6). Logically, participants who complete programs have been exposed to all program objectives and have met all necessary requirements. Among participants who do not complete a course, there are potential inherent differences, such as intrinsic motivation, that could affect post-release outcomes. These differences are masked when they are not identified in the evaluation, resulting in an artificially weakened effect of program completion. The opposite is also true: the effect of generic program participation as a whole is artificially strengthened when the program completers and the non-completers are combined in an analysis.

For example, suppose a group of program participants consists of 100 completers and 100 non-completers, the group's average quarterly earnings are $5,000, and the overall recidivism rate for the group is 20% (40 recidivists). If the completers and non-completers are not identified, the differences between the two groups in both earnings and recidivism are masked. Potentially, the completers could be earning $7,500 per quarter, which would equate to average quarterly earnings of $2,500 for the non-completers. Also, the recidivism rate for the completers could be 10%, making the recidivism rate for the non-completers 30%. High schools do not usually group together dropouts, transfers, and graduates when it comes time for program evaluation. If the goal of CE programs is for participants to complete all requirements, there needs to be a way to identify completers and non-completers.
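
A few lines of code confirm the arithmetic of this hypothetical example and show how the pooled figures conceal the subgroup differences:

    # Pooling 100 completers with 100 non-completers masks large subgroup
    # differences (all figures are the hypothetical values from the text).
    n_completers, n_non = 100, 100

    completer_earnings, non_completer_earnings = 7500.0, 2500.0
    pooled_earnings = (n_completers * completer_earnings +
                       n_non * non_completer_earnings) / (n_completers + n_non)
    print(pooled_earnings)  # 5000.0 - the "average participant" figure

    completer_rate, non_completer_rate = 0.10, 0.30
    recidivists = n_completers * completer_rate + n_non * non_completer_rate
    print(recidivists, recidivists / (n_completers + n_non))  # 40.0, 0.20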

Distinguishing Between No-Fault Non-Completers and At-Fault Non-Completers

Creating categories within the non-completers can be helpful in identifying program strengths and weaknesses. Some participants fail to complete due to no fault of their own, such as a medical issue, early release, or transfer to another facility. Conversely, there are non-completers who fail to complete due to things that would traditionally earn a student a failing grade—inability to meet the program's learning objectives or dismissal from the course for a disciplinary issue. The assumption behind identifying and appropriately categorizing these participants is the expectation that no-fault non-completers would have post-release outcomes more similar to those who completed the program. Not distinguishing the outcomes of these two groups masks the effect of the program in a post-release analysis.


Conducting the Evaluation

Gathering post-release outcome information may seem overwhelming at first, but careful planning will increase the odds of a successful evaluation. There are two methods generally employed for measuring post-release outcomes: data matching and surveying. Data matching involves the systematic matching of unique-identifying variables (e.g., social security numbers or inmate identification numbers) to records from other agencies that contain post-release outcomes information. Surveys typically involve parole officers as the most direct means of contacting released inmates. Both methods have their strengths and limitations. The benefits of utilizing the different methods of follow-up data collection can be maximized by including appropriate details in the evaluation plan. The following section outlines the strengths and weaknesses of each approach, describes the information available and methods commonly used to obtain that information using both approaches, and provides practical examples of states that are using each approach.

Data matching is a reliable technique for gathering quantitative data related to inmates' post-release activities. Because this method generally makes use of department of corrections data, unemployment insurance data, and post-secondary agency (e.g., Board of Regents, State Council) data, a level of accuracy is assured in these numbers. Self-reported information on recidivism, employment, and education can be inaccurate, whether intentionally or unintentionally. The nature of matched data also lends itself to aggregation and trend analysis. Perhaps most importantly, data matching allows for near census-level sampling, since it is not dependent upon making contact with and gathering a response from individuals. However, there are limitations to data matching.

Limitations of data matching include: 1) the lag time inherent in gathering data, 2) the fact that the information available is often not very detailed, 3) the unavailability of qualitative data, and 4) the fact that the data are approximations of what is being measured. Regarding lag time, when recidivism is defined as re-arrest, re-conviction, or even re-commitment within three years of release, at least three years of intake data (plus the additional time the partnering agency needs to enter and validate the information) are required before the evaluator can use that information in an analysis. Because these data are often gross measurements, it is sometimes impossible to deduce certain information from them. For example, unemployment insurance records are typically stored by the quarter and have no indication of hourly wage or salary, only the total earnings for that quarter. Qualitative information regarding the individual's specific job title, responsibilities, and attitudes toward the job is not obtainable via this approach. Likewise, the simple attainment of employment can be used as an approximation or proxy for "success," while the individual's assessment of his or her success may be quite different.

Surveying former offenders can provide information that is not available via data matching, such as job descriptions, attitudes, and employment and educational goals. However, contacting released inmates requires the assistance of parole or probation officers in most instances. One could make the argument that ex-offenders who are released to community supervision or parole are easier to contact for traditional follow-up purposes: logically, someone connected to a corrections-related agency is required to make systematic contact with ex-offenders and likely keeps track of such information as a record of employment.


The parole or probation officers can initiate direct contact via face-to-face interview, mail, telephone, or e-mail. Many times, a combination of the above is required to obtain the needed data. Alternatively, the survey can be directed at the parole or probation officers who act as proxy for the inmates. Please note, however, that the use of a proxy could be problematic if the survey questions require the parole or probation officers to recall detailed information. For example, a parole officer might be able to respond to a survey question asking about employment, but might not know an hourly wage or the average number of hours per week the ex-offender worked during a particular timeframe.

The cost of conducting a survey for the purposes of post-release follow-up could be prohibitive. In addition, response rates are sometimes too low to definitively evaluate programs, let alone make generalizations. Despite these drawbacks, working with parole or probation officers can reduce costs, since they are most likely collecting the information as part of their duties.

Clearly, ex-offenders sometimes are difficult to contact post-release. Survey response rates of five to ten percent are not uncommon in follow-up studies involving this population. However, the studies that achieve higher response rates tend to use smaller, more targeted samples, rather than an entire cohort of released ex-offenders. In research that compared earnings information gathered via data matching and surveys, Kornfeld and Bloom (1999) concluded that unemployment insurance wage records are reliable as the primary follow-up data source for a full sample, and that individual surveys of a portion of the larger sample are appropriate for providing more detailed information that is unattainable via data matching.

The authors of this paper endorse a mixed-method approach that utilizes both data matching and surveys. Because the costs associated with surveys are higher than those of data matching, it would be most appropriate to couple a data match with a survey of a small subset of the target population, consistent with Kornfeld and Bloom (1999). This process allows the collection of qualitative information that can portray CE program activities and achievements through narrative descriptions. Regardless of whether a survey alone or a hybrid approach is used for the follow-up, utilizing as much existing data as possible is critical for improving the accuracy of the results and for saving time. Any demographic, participation, or treatment information that is already available is information the parole or probation officers or ex-offenders do not have to provide. Keeping the survey as brief as possible can help to improve the response rate.

Recidivism

Establishing the way recidivism is defined is important both for the audience to understand the implications of the results and for other researchers to replicate the methodology for comparability purposes. Practically speaking, the definition of recidivism is limited by the availability and accessibility of the data that are necessary to calculate the outcomes. Recidivism is generally defined in one of four ways: 1) re-arrest, 2) re-conviction, 3) supervision revocation (probation or post-release supervision), and 4) re-commitment/re-incarceration. It should be noted that at times supervision revocation could result in re-commitment/re-incarceration, so the way in which parole and probation violators fit within the definition of recidivism must be clearly articulated by the evaluator. If re-arrest data are not available to the researcher, but re-commitment data are, then the definition is relatively evident. Some studies have employed multiple measures of recidivism and have looked at re-arrest, re-conviction, and re-commitment/re-incarceration, notably the Three-State Recidivism Study (Steurer and Smith, 2001).

Geographic Scope and Timeframe

There are also two dimensions that factor into the operational definition of recidivism: a) geographic scope and b) timeframe. The geographic scope for defining recidivism is usually limited by the CE agency's administrative reach. For instance, a state-level CE agency employing data matching to measure recidivism would most likely have to limit the scope of its measurement to within the given state, unless parallel data are available and accessible from cooperating states. The timeframe is also important, as different studies employ different periods from release to the end of the study or evaluation. Even though three years from release to the study's end date seems to be the standard definitional timeframe in recidivism studies, the only true constraint for this dimension is data availability.

In addition to the geographic scope and timeframe dimensions, there are three factors related to each instance of recidivism that are important to obtain, and each one requires specific data to measure. The first two factors are closely related to the conceptual framework developed by Kirshstein and Best (1997), who described the three dimensions of recidivism as the precipitating event, the time between release and the event, and the re-admitting facility.

Precipitating Event

The first factor, the precipitating event, would generally be constrained by the definition of recidivism used in the evaluation, i.e., the event may be a re-arrest, re-conviction, supervision revocation, or re-commitment/re-incarceration. Once the type of event is established, there are other aspects that provide useful information in the evaluation of a CE program, namely the most serious offense for which the offender was re-arrested, re-convicted, or re-committed. The most serious offense for the event could be compared to parallel data from the previous offense to determine which is more serious. For example, when looking at cost to society, it is important to note whether a former offender was picked up for a technical violation of parole rather than a violent crime.

Survival Time

The second factor, the time between release and the precipitating event, parallels what the current literature defines as survival time. Keep in mind that this factor is always constrained by the timeframe dimension of the particular recidivism definition; therefore, the maximum amount of survival time would be from the date of release to the end of the study period. To calculate survival time, the date of release is subtracted from the date of the precipitating event.
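
As a minimal sketch, survival time reduces to a date difference that is censored at the study's end when no event occurs. The Python below uses hypothetical dates and field names, not the layout of any particular state's data:

from datetime import date

def survival_time(release_date, event_date, study_end):
    # Days from release to the precipitating event; if no event occurred,
    # the observation is censored at the end of the study period.
    if event_date is None:
        return (study_end - release_date).days
    return (event_date - release_date).days

# Released July 1, 2004; re-committed March 15, 2006; study ends June 30, 2007.
print(survival_time(date(2004, 7, 1), date(2006, 3, 15), date(2007, 6, 30)))  # 622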

New Sentence Length

The third factor goes beyond Kirshstein and Best's (1997) conceptual framework and emphasizes the length of the new sentence, or the time served, as a result of the precipitating event. This factor is most applicable for evaluations or studies defining recidivism as re-commitment/re-incarceration. The length of the resulting new sentence is a function of the precipitating event, as events related to more serious offenses would have longer resulting sentences. Many times, the resulting new sentence is only an estimate, and it is rare for the resulting new sentence to equate to time actually served.

Systematically capturing these extra pieces of information related to the precipitating event could have numerous implications for the evaluation of CE programs. For example, if a CE group has the same recidivism rate at the end of a particular study period as a similar non-CE group, the results might be viewed as undesirable. However, many differences could have emerged during the study period showing that the CE group demonstrated more positive post-release outcomes relative to the non-CE group. Despite having the same recidivism rate, the CE-group recidivists may have had a greater average or median survival time than the non-CE group, which could be viewed as a positive program outcome. Regarding the most serious new offense related to the precipitating event, if a higher percentage of the CE-group recidivists were re-committed for property offenses, while a higher percentage of the non-CE group were re-committed for violent offenses, then it could be argued that the CE group outperformed the non-CE group on that specific measure. Furthermore, it could be determined whether a higher percentage of the CE-group recidivists had more or less serious offenses in comparison to the most serious offense related to the previous sentence. If the percentage of CE-group recidivists with less serious new offenses is greater than that of the non-CE-group recidivists, then the outcome could be considered positive, or at least not as negative, depending upon one's perspective.

Most Serious New Offense

Another factor related to the precipitating event, and one highly related to the most serious new offense, is the resulting new sentence. Generally speaking, the resulting new sentence is most applicable to those studies or evaluations using re-commitment as the precipitating event within the definition of recidivism. Taking the previously mentioned example of a CE group and a non-CE group with the same recidivism rates: if one group has shorter average or median resulting sentences (or better yet, shorter average or median time served from the resulting sentence), then that group could be perceived as having better post-release outcomes. From a fiscal perspective, the shorter the resulting new sentence for an instance of recidivism, the less the re-incarceration will cost. The resulting new-sentence length allows the researcher or evaluator to move beyond making mere assumptions regarding the precipitating event; it allows for exploration of the actual result of the precipitating event.

Please note that there is a difference between sentence length and time served, with time served being the stronger measure. Rarely are the sentence length and the time served the same. The sentence length serves as somewhat of an estimate and can be used when time served is not available due to time constraints. Time served can generally be approximated by subtracting the date of the precipitating event (in this case re-commitment/re-incarceration) from the end date of the study, assuming the individual has not been released again. From the fiscal perspective, if an evaluator can demonstrate that the CE group has resulting sentences that are shorter, either estimated or actually served, than those of the non-CE group, then it could be considered a positive outcome.

Practical Examples

The Virginia Department of Correctional Education (VDCE) utilizes CAEEP as an outside evaluator to establish measures of recidivism for all released program participants, including program completers, with the CE participation data linked after the fact. This provides the department with the same recidivism measures for non-participants, so that comparison or control groups can be established to measure program impact (see the Establishing Programmatic Impact section). For Virginia, the precipitating event is re-commitment to state custody, and both parole violators and offenders who commit new crimes can fall within the definition. At times, however, new-crime recidivists and parole violators are reported separately. The geographic scope used by the department is limited to the Commonwealth of Virginia, or VDCE's administrative reach, as data from other states are not utilized.

The VDCE is housed within the Virginia Department of Education, so to measure recidivism, three key datasets are obtained from the Virginia Department of Corrections. The first dataset includes all of the ex-offenders released during a given fiscal year. There is one record per released ex-offender, some of whom are CE participants. It is possible for an individual to appear in one fiscal-year release cohort more than once, if the individual was released, re-committed, and re-released during the same fiscal year. Furthermore, that individual could be considered both a recidivist and a non-recidivist, depending upon the outcome of the subsequent release, as the outcome of the individual's first release obviously results in an instance of recidivism. The second dataset includes all of the new court commitments, with one record per new court commitment. The major fields of interest within this dataset are the date of re-commitment, the most serious new offense for the re-commitment, and the resulting new sentence. The third dataset includes all parole violations, with one record per parole violation. The major field of interest within the parole violation dataset is the date of parole violation. An ex-offender is assumed not to have recidivated if that person has no record within either dataset that occurred after the current release date. If an ex-offender does have such a record, the first instance of either a parole violation or new court commitment is systematically stored in the database, along with the most serious new offense and the resulting new sentence (for a new court commitment). That information is then used to evaluate CE programs in terms of recidivism and the related measures of survival time, most serious new offense, whether the precipitating event was a parole violation or new court commitment, and the resulting new sentence or time served (if available).
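
A minimal sketch of that first-event lookup, with hypothetical record layouts standing in for the three datasets, might look like this:

from datetime import date

release = {"inmate_id": "A123", "release_date": date(2004, 7, 15)}
new_commitments = [{"inmate_id": "A123", "date": date(2006, 2, 1),
                    "most_serious_offense": "burglary", "new_sentence_years": 3}]
parole_violations = [{"inmate_id": "A123", "date": date(2005, 11, 20)}]

def first_recidivism_event(release, commitments, violations):
    # Collect every event for this individual dated after the current release,
    # then return the earliest one, or None if no such record exists.
    events = [dict(r, event_type="new_commitment") for r in commitments
              if r["inmate_id"] == release["inmate_id"]
              and r["date"] > release["release_date"]]
    events += [dict(r, event_type="parole_violation") for r in violations
               if r["inmate_id"] == release["inmate_id"]
               and r["date"] > release["release_date"]]
    return min(events, key=lambda e: e["date"]) if events else None

event = first_recidivism_event(release, new_commitments, parole_violations)
print(event["event_type"], event["date"])  # parole_violation 2005-11-20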

In Florida, correctional education was once housed outside of the Florida Department of Corrections (FDOC), as it is in Virginia today. Correctional education was merged into the FDOC about 20 years ago, and thus correctional education evaluators have direct access to inmate and programming data. The FDOC Offender Based Information System is a decentralized system in which data entry and report generation can occur at facilities across the state. FDOC captures, stores, and analyzes recidivism data that include the date of recommitment, whether the recommitment was for a technical violation or a new offense, and, if the latter, the new offense type. Instructors at the various program centers enter outcome information directly into the system. Headquarters enters data pertaining to inmate demographics, sentence, behavior, etc., for use in correctional education participant recruitment. Program listing and roll data are also entered at headquarters. Arrest records come from Florida jails, while all other post-release outcomes come from the Florida Education and Training Placement Information Program (FETPIP), which is described in the section on earnings outcomes.

With the integration of services and data in the Florida system, Department of Corrections Research Unit staff perform the evaluation of correctional education programs in-house. This requires a highly trained and capable staff of programmers, data analysts, and administrators, and it has proven quite effective for Florida. The efforts of the in-house staff at FDOC are used for internal purposes and also in partnership with outside researchers (Bales et al., 2003). In presenting recidivism-related data in an evaluation report, many studies focus on the difference in recidivism rates between the CE group and a non-CE group at various points throughout the study period. Average and median survival times can also be calculated to determine whether differences exist between the two groups.

Post-Release Employment Measures

Numerous recent studies and evaluations of correctional education programs have examined the post-release employment outcomes of program participants and/or completers (Lichtenberger, 2006; Smith et al., 2006; Sabol, 2004; Kling, 2006; Hunter, 2007). Employment outcomes are similar to other post-release outcomes in that they are limited by the availability and accessibility of the data required to measure each specific outcome. As with the other types of post-release data, the geographic scope (where) is limited to the given correctional education agency's administrative reach. The timeframe (when) is an equally important aspect of employment outcome measures and, once again, is limited by the availability and accessibility of data. Many times, outcomes involving time-related patterns are employed, such as the number of quarters employed throughout the study period, the number of quarters until employment, and the number of consecutive quarters employed; however, such measures are more a function of the precipitating event. The precipitating event (what) is the most important aspect of the post-release outcome measure. The precipitating event can have numerous specific qualities or can be as simple as the answer to the following question: Was the participant/completer employed at any point post-release? Specific employment patterns can be examined by refining the precipitating event or combining two or more separate precipitating events. An example would be the number of quarters employed within the construction industry or manufacturing industry throughout the course of the study period. The precipitating event could be further refined by linking it to CE program data, particularly vocational education programming, so that the outcome measure is the number of quarters employed within an industry or sub-industry related to the vocational programming.
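
The quarter-based patterns mentioned above reduce to simple passes over per-quarter employment flags. A short sketch with hypothetical data:

# One flag per post-release quarter: True if any UI-reported earnings (hypothetical).
employed_by_quarter = [False, True, True, False, True, True, True, False]

quarters_employed = sum(employed_by_quarter)
quarters_until_employment = (employed_by_quarter.index(True)
                             if any(employed_by_quarter) else None)

# Longest run of consecutive quarters employed.
longest_run = run = 0
for worked in employed_by_quarter:
    run = run + 1 if worked else 0
    longest_run = max(longest_run, run)

print(quarters_employed, quarters_until_employment, longest_run)  # 5 1 3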

Generally speaking, when it comes to obtaining the information necessary to measure a given post-release employment outcome, unemployment insurance (UI) wage records are used. The agency responsible for maintaining UI-wage records is generally the given state's employment or workforce commission. The unique identifier used to link the CE dataset to the UI-wage records is almost always the social security number, which makes valid SSNs necessary to perform the match. In the past, researchers have argued that the benefits of using UI-wage records far outweigh the limitations, and there is no reason to believe that has changed (Kornfeld and Bloom, 1999; King and Schexnayder, 1998). In fact, one could argue that with the technological improvements of the past decade, especially in data and record management systems, the advantages of using UI-wage records further outweigh the potential limitations. However, as with any source of information considered in their planning, researchers and evaluators should examine both the limitations and advantages of using UI-wage records. An important thing to consider when compiling earnings records is the release date and any subsequent re-incarcerations and releases. One method to control for an ex-offender being released and recommitted is to utilize release cohorts (see page 12). This method reduces the possibility of having to deal with an ex-offender more than once; however, it remains possible that numerous ex-offenders are released, recommitted for parole violations, and released again, all within the same fiscal year. In that fiscal year, one individual would be treated as two separate instances: first as a recidivist, second as a released former inmate. In order to prevent having individuals with more than one release per year (which could lead to statistical calculation errors), the truly unique identifier could be the SSN or inmate number combined with the release date. This could be done with the calendar year or fiscal year. The Virginia Department of Correctional Education uses fiscal year release cohorts (July 1 through June 30), since the fiscal year more closely parallels the traditional academic year, as well as the budgetary cycle. This protocol is followed despite the department utilizing calendar year release cohorts in nearly all of its reporting and evaluations.
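
To illustrate the composite identifier and cohort assignment, here is a brief sketch; the fiscal-year naming convention below is an assumption for illustration, not VDCE's documented practice:

from datetime import date

def fiscal_year_cohort(release_date):
    # July 1 through June 30; a July 2004 release falls in the FY2005 cohort.
    return release_date.year + 1 if release_date.month >= 7 else release_date.year

def release_instance_key(ssn, release_date):
    # SSN alone is not unique across multiple releases in the same year;
    # pairing it with the release date yields one key per release instance.
    return (ssn, release_date.isoformat())

print(fiscal_year_cohort(date(2004, 7, 15)))                  # 2005
print(release_instance_key("123-45-6789", date(2004, 7, 15)))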

Limitations of UI-Wage Data

One of the main limitations of using UI-wage data to measure post-release employment outcomes is that some employers are not required to report these data, so some employment-related information cannot be captured. This occurs with some railroad jobs, certain jobs within the agriculture industry, self-employment, and some employment with religious organizations. Another disadvantage, as argued by Lichtenberger (2006) and Kornfeld and Bloom (1999), is the tendency for ex-offenders to gain employment in informal labor markets or engage in "under the table" payment arrangements with employers. In these contexts, data matching with UI-wage records might miss some, if not all, employment-related information, particularly earnings. As Kornfeld and Bloom (1999) discovered, individual earnings reported to the U.S. Internal Revenue Service are generally 14-25% higher than UI-wage data indicate, and self-reported earnings are generally 13-53% higher, with male youths with a prior arrest toward the higher end of both ranges. Correctional education evaluators should take note, since the portion of the Kornfeld and Bloom (1999) study group most similar to the groups typically used in current CE evaluations had the greatest discrepancies in data matching. Lichtenberger (2006) mentioned the prevalence of under-the-table payment arrangements in the construction industry; consequently, if the outcomes of completers of construction-related vocational programs are being measured, earnings could be underestimated. For the above reasons, evaluators should understand that the results of using UI-wage records are most likely conservative in nature and present the worst-case scenario.

Another limitation is that unless a multi-state agreement can be established to gain access to UI-wage records, an evaluator is limited to obtaining information within the study's state. In areas where bordering states have better job markets, or if a particular group of CE program completers is highly mobile, out-of-state employment could be more prevalent, but it would be immeasurable without establishing agreements with bordering states. As with the other post-release outcome measures, employment is limited to the administrative reach of the correctional education agency. There is also a lag time associated with using UI-wage records: it takes about six months for the necessary data to become available for use in an evaluation. Finally, because UI-wage records provide earnings in a quarterly format, they are coarse in nature, and accurate hourly, weekly, or monthly wages are more difficult to establish, although calculations can be used to approximate such wages.

Advantages of UI-Wage Data

The low cost of data matching has been established as an advantage relative to other follow-up methods such as surveys. Kornfeld and Bloom (1999) reported that it can cost $100 per respondent to obtain two years of employment and earnings history via traditional survey methodology. Obtaining the same information via matched UI-wage records can cost less than $1 per respondent, often just pennies. In many cases, the exchange of UI-wage records does not include any direct charges, other than the amount of time needed to meet the requirements of the agency holding the UI-wage records. Such requirements include preparing the file with the unique identifiers in the proper format and ensuring that proper safeguards are in place to protect the information. Furthermore, UI-wage records are considered an objective data source, so an evaluator does not have to rely on the recollections of the CE program completer or a parole officer regarding employment and earnings histories. An additional advantage is the potential for long-term follow-up and longitudinal studies, because it is relatively easy to continue updating the employment and earnings dataset using the same methodology. Attributes of the employers of ex-offenders are also included, allowing industrial, geographic, and employer size-class profiles to be established.

Practical Examples

VDCE obtains UI-wage records from the Virginia Employment Commission (VEC) to measure program outcomes related to employment and earnings. Each record within the dataset relates to one individual's employment with one business entity. The data exchange between the VDCE and the VEC utilizes a secure file transfer protocol (FTP) site. Social security numbers are sent to the VEC to be matched to the most current UI dataset. As previously stated, there is usually a six-month lag time for availability of data; if the SSNs are sent for matching during the first quarter of 2007, the most recent complete quarter of earnings information available would be the third quarter of 2006. Because of the lag time, it is difficult to build earnings and employment histories for ex-offenders unless they have been released for a sufficient amount of time. This problem illustrates the difficulty in gathering post-release data for programs during the same year the ex-offenders are released.

VEC performs a match that goes back 20 quarters from the most current quarter, which allows an ex-offender's earnings records to be located at any time during that extended timeframe. The record supplied by VEC includes 20 quarters of earnings information, along with other identifiers for the individual worker, such as first name, last name, middle name, and date of birth. These fields can be used to make sure the social security numbers are valid. If any of the name fields or the date of birth differ from the information in the department of corrections file, the SSN is most likely invalid.
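
A minimal validity check along those lines, using hypothetical field names, could be written as:

def ssn_match_is_valid(doc_record, ui_record):
    # Compare identifiers returned with the UI-wage record against the
    # department of corrections file; a mismatch suggests an invalid SSN.
    same_dob = doc_record["dob"] == ui_record["dob"]
    same_name = (doc_record["last_name"].strip().lower()
                 == ui_record["last_name"].strip().lower())
    return same_dob and same_name

doc = {"last_name": "Smith", "dob": "1975-03-02"}
ui = {"last_name": "SMITH ", "dob": "1975-03-02"}
print(ssn_match_is_valid(doc, ui))  # True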

In addition to the various fields related to the individual, there are numerous fields about the business entity employing the individual that can be used to develop fields identifying each business's industry, sub-industry, size class, and location. The location of the employer is embedded in several fields; however, the most useful fields for categorizing employers by location are the zip code and the city/county field established through the employer's Federal Information Processing Standards (FIPS) code.

The Indiana Department of Corrections (IDOC) utilizes UI-wage records obtained from the Indiana Department of Workforce Development (IDWD) to establish post-release employment and earnings measures. Unlike in Virginia, IDOC is only able to obtain UI-wage records for program completers who have been released into five specific counties within the state. UI-wage records from those counties were already being used by Workforce Development in another capacity, so in an effort to ease the administrative burden on the IDWD and increase the likelihood that IDOC would obtain UI-wage records, Corrections requested information for those five counties only.

The relationship between IDOC and IDWD has been described as mutually beneficial, because IDOC is able to use the data to engage in program evaluation that facilitates making changes and improvements in programs. This process, in turn, eases some of the burden placed on IDWD when the program completers are released and eligible for its services.

The data-matching method that IDOC and IDWD employ in the exchange is a double-blind technique, which differs from the technique used by VDCE. The SSN for each program completer is purged from the dataset after the match, along with the identifying information for each specific employer. Marker fields remain within the dataset to link the UI-wage records back to the given CE completer's specific school and program type. This allows IDOC to evaluate the outcomes related to the specific programs that take place at each of the schools providing correctional education. Attributes of each of the employers also remain, so that employers can be categorized by industry, sub-industry, size class, and geographic location. The previously mentioned information can be reported for any of the categories, singly or in combination.

This practical example illustrates the importance of compromising with the agency that manages the UI-wage records in an effort to ease the workload required on the agency's part to perform the match and conduct a safe exchange. Although IDOC is unable to obtain information related to all of its CE program completers, the information from the five counties has proven to be very important and is far more valuable than the alternative of no information at all.

The Florida Department of Corrections (FDOC) collects post-release employment information via a relationship with the Florida Education and Training Placement Information Program (FETPIP). FETPIP is a sub-unit of the Florida Department of Education and performs the data match with other Florida agencies, such as the Agency for Workforce Innovation (UI-wage records), on behalf of the participating educational agencies, FDOC being one. FETPIP gathers and stores information on employment and earnings, continuing education, military service, incarceration, and public assistance for all Florida public school graduates and dropouts, college and university students, Workforce Investment Act participants, and others. FETPIP does not, however, provide information regarding earnings prior to incarceration. As is the case in Virginia, the completer's SSN is the identifier shared between FDOC and FETPIP. FDOC provides data to FETPIP on its participants annually and requests follow-up data from FETPIP on an ad hoc basis.

Florida is a unique example of a very comprehensive and rich dataset that effectively removes many of the barriers that evaluators in other states encounter. However, similarities exist with other states, such as Virginia, Indiana and Texas. Each state makes use of UI-wage data in its post-release employment and earnings measures, and each is required to use the SSN to do so.

The Windham School District (WSD) in Texas obtains post-release employment information for its Youth Offender Program completers using UI-wage records in much the same manner as Virginia, Indiana, and Florida. WSD goes a step further and determines whether the former offenders' employment is related to their CE program participation (see page 26). As in Virginia, five quarters of employment and earnings information are provided by the Texas Workforce Commission via a data match using the SSN. In a recent evaluation report (Hunter, 2007), WSD examined post-release employment outcomes for vocational program completers and a comparison group composed of non-completers of a vocational course. Using only UI-wage data, WSD was able to ascertain former offenders' performance at obtaining employment, retaining employment, and increasing their earnings. Employment was considered to be any quarterly earnings after release. Former offenders were considered to have retained employment if they had earnings in the first, second, and third quarters after their initial quarter of employment. Once a former offender obtained employment, earnings in the fourth quarter after the initial quarter (the one-year anniversary) were sought. An increase in earnings from the initial quarter of employment to the fourth quarter was considered an indication of a salary increase.
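
Those three definitions can be computed directly from a vector of quarterly earnings. The sketch below uses hypothetical amounts and assumes quarters are indexed from the release quarter onward:

# Quarterly UI-reported earnings from the release quarter onward (hypothetical);
# zero means no reported earnings in that quarter.
quarterly_earnings = [0, 2100, 2300, 2250, 2600, 2900]

employed = any(q > 0 for q in quarterly_earnings)
if employed:
    first = next(i for i, q in enumerate(quarterly_earnings) if q > 0)
    # Retained: earnings in each of the three quarters after the initial quarter.
    window = quarterly_earnings[first + 1:first + 4]
    retained = len(window) == 3 and all(q > 0 for q in window)
    # Increase: fourth quarter after the initial quarter vs. the initial quarter.
    fourth = quarterly_earnings[first + 4:first + 5]
    increased = bool(fourth) and fourth[0] > quarterly_earnings[first]
    print(employed, retained, increased)  # True True True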

Specifically related to its evaluation of the Incarcerated Youth Offender Program (IYOP), California obtains post-release employment information from a comprehensive service provider, the Community Connections Resource Center (CCRC). California evaluators are able to achieve a 100% response rate on a follow-up survey that includes a series of questions related to the ex-offender's current CE activities, either through direct contact with the ex-offender, by contacting the offender's parole officer, or through members of the ex-offender's family. The information presented in the 2006 IYOP report (for the 2005 academic year) indicates that ex-offenders are contacted at the following points: 30 days, 90 days, 180 days, 270 days, and one year. It should be noted that these contact points are the minimum contacts, and that ex-offenders can initiate contact with the CCRC at any point during that year to request such resources as employment placement, drug treatment, or counseling.

Employment is only one of several outcomes reported in California's annual IYOP evaluation. IYOP also reports whether the ex-offender is in school, in school and employed, in vocational training, in vocational training and employed, in residential treatment/detox, on a parole hold, or a parole violator. All of the IYOP participants are released on parole, with each participant meeting with a CCRC case manager 90 days prior to release for parole and again 72 hours prior to release.

Like California, North Carolina contracts with outside evaluators to survey males and females who have participated in the Youth Offender Program. Due to the difficulty in obtaining post-release data, North Carolina’s evaluators conduct follow-up interviews up to three times per year with 30 YOP participants at up to six facilities annually while they are still incarcerated. The evaluation team also surveys approximately 250 additional participants pre-release. The evaluators attempt to maintain regular contact with the former participants pre-release, anticipating that they will agree to an interview post-release. That post-release data collection includes surveys sent to addresses the offender provides to the North Carolina Department of Corrections and phone interviews with former offenders released within the previous six months.

Occupational/Industrial Relatedness

An important measure of the effectiveness of CE vocational, apprenticeship, and job training programs is the extent to which program completers' post-release employment is related to the CE training they received. If a high percentage of completers are obtaining jobs in related occupations or industries, it demonstrates that CE programs are providing sufficient training for entry-level employment and that CE programs are focusing on occupations/industries that not only have job openings, but job openings for which ex-offenders can gain employment. The current research shows that there are generally two different measures of relatedness: 1) industrial and 2) occupational.

Recent evaluations have used one or both measures: Hunter (2007) examined both for the Windham School District; Lichtenberger (2006) investigated construction-related vocational programs for Virginia’s DCE; and Smith, Bechtel, Patrick, Smith & Wilson-Gentry (2006) focused on post-release outcomes of federal prison industry participants. This paper’s authors concur that occupational relatedness is the stronger measure, although it is more difficult to systematically collect for reasons that will be explained later in this section.

Occupational relatedness is determined by developing a crosswalk between the occupations for which vocational or job training programs prepare ex-offenders and the post-release occupations of the ex-offenders who participated in such programs. Occupational relatedness can generally be determined through the use of post-release surveys provided to ex-offenders or their proxies, such as a parole officer. This process seems to have a relative advantage in determining occupational relatedness, as ex-offenders or their proxies can provide a job title or brief position description, and an evaluator can classify that job title or position description within a specific occupation at a later time to complete the crosswalk.

Data matching, particularly the use of UI-wage records, is better suited to establishing industrial relatedness. Industrial relatedness is determined by developing a crosswalk between the industries and sub-industries for which the vocational programs prepare ex-offenders and the industries and sub-industries to which the employers of ex-offenders belong. When using industrial relatedness as a post-release outcome measure, a major assumption is made: that the industry or sub-industry of an ex-offender's employer is indicative of the type of job the ex-offender holds. However, if an ex-offender is employed by a firm within the construction industry, how reasonable is it to assume that the ex-offender is a construction worker (occupation), when a typical construction firm might have accountants, lawyers, truck drivers, human resource specialists, and administrative assistants on its payroll?

North American Industrial Classification System (NAICS) codes are generally used to establish industrial relatedness. NAICS codes are provided within the UI-wage records and are an attribute of each employer. The more digits of the NAICS code an evaluator examines, the more specific the information provided. The first two digits of the NAICS code indicate the general industry (e.g., agriculture, retail, service, construction), with each additional digit specifying the sub-industry more precisely. NAICS codes are at least two digits and may include up to six digits. As stated earlier, the WSD evaluation (Hunter, 2007) utilized both industrial and occupational relatedness as post-release measures. For those who completed vocational programs and were released on parole, Hunter (2007) was able to capture occupational relatedness by using parole employment data. For each ex-offender's record of employment, the occupation is recorded using the Dictionary of Occupational Titles (DOT) three-digit codes. In turn, each vocational program has a list of DOT occupations for which it prepares individuals. For example, the masonry program would prepare individuals to become bricklayers, brick masons, masons, and brick and block masons, all of which fall under either DOT code 844, Cement Finishing and related occupations, or DOT code 861, Brick and Stone Masons and Tile Setters. If an ex-offender was reported in the parole employment data to have worked in any of the previously mentioned occupations (DOT code 844 or DOT code 861), then occupational relatedness was achieved.

In the Windham study (Hunter, 2007), for those who did not match parole employment data, the researcher examined UI-wage records received from the Texas Workforce Commission (TWC). In a similar fashion to the Virginia Employment Commission data used by Lichtenberger (2006), TWC employment data use NAICS codes rather than DOT codes. The limitation of this method is that the NAICS code is based on industry rather than occupation; it indicates the industry/sub-industries to which each employer belongs, not the job the ex-offender performed. The Windham study (Hunter, 2007) cross-walked from four-digit NAICS codes to Standard Occupational codes to the DOT codes to determine whether industrial relatedness was achieved.

Lichtenberger (2006) employed a more direct approach, establishing the NAICS sub-industries for which the construction-related vocational programs offered by the VDCE prepared offenders. That information was then cross-walked to the actual six-digit NAICS codes of the employers. For example, using these codes, completers from the painting and drywall program were considered to have directly related employment when their employer was within the drywall and insulation or painting and wall covering sub-industries.
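
A sketch of such a crosswalk follows; the program names are hypothetical and the six-digit codes are illustrative, not necessarily those used in the study:

# Illustrative crosswalk from CE vocational programs to the six-digit NAICS
# sub-industries they prepare completers to enter (codes are examples only).
PROGRAM_TO_NAICS = {
    "painting_and_drywall": {"238310", "238320"},  # drywall/insulation; painting
    "masonry": {"238140"},
}

def industrially_related(program, employer_naics):
    # Related if the employer's six-digit NAICS code appears in the crosswalk.
    return employer_naics in PROGRAM_TO_NAICS.get(program, set())

print(industrially_related("painting_and_drywall", "238320"))  # True
print(industrially_related("masonry", "722511"))               # False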

Post-Release Educational Attainment

Post-program educational attainment is an additional post-release outcome that has traditionally been used as a measure of successful transition outside of correctional education, for secondary career and technical education (vocational) programs and adult education programs. Most often, post-program educational attainment measures are limited to a simple dichotomous measure of post-secondary enrollment within a particular timeframe and geographic scope. For example, an evaluator asks the following question: among this group of program participants, how many were enrolled in post-secondary institutions within the state within three fiscal quarters of exiting the program? In correctional education, program exit must be combined with the participant's release, but answering that same question remains important.

Post-release post-secondary educational attainment is likely to be of great interest to those attempting to evaluate the effectiveness of Incarcerated Youth Offender (IYO) programs (Specter Grant), particularly if the short-term goal of the IYO program is to expose participants to college-level courses to increase the likelihood that participants will continue their education post-release. Furthermore, if one of the objectives for correctional GED programs is to increase the post-release enrollment rate of program completers at post-secondary institutions, then appropriate post-release outcomes should be measured that are directly related to enrollment.

As is the case with recidivism, post-release educational outcomes have many dimensions that are constrained by the availability and accessibility of the data necessary to measure each specific outcome. In general, the scope of each post-release educational outcome needs to be specifically stated regarding the when, the where, and the what. Each outcome measure has a timeframe dimension that is constrained by the study period (when). As previously discussed, there is the geographic scope that is limited to the correctional education agency's administrative reach or the availability of the data necessary to measure the outcome (where). The third dimension parallels the precipitating event in the recidivism section and involves the qualities of the specific outcome (what). Once defined, the qualities of the specific outcome are most often quantitative in nature. These outcomes could be employed to answer the following questions, assuming the timeframe and geographic scope are already defined: How many CE program participants enrolled in post-secondary courses, and how many CE program participants obtained a post-secondary degree or certificate?

Some correctional education agencies are able to obtain these data from key educational institutions, while others are able to obtain information from state-level educational clearinghouses that encompass all post-secondary institutions within a given state. Furthermore, the National Student Clearinghouse provides data related to course enrollments and degree/certificate obtainment from most post-secondary institutions and trade schools. The clearinghouse service is described in greater detail later in this section.

There is an added level of complexity with post-release educational outcomes because the evaluator potentially has to deal with multiple records for each individual if course-related records are being examined. Each of the course-related records has related timeframe information, usually in the format of separate fields for the year and semester in which the course was taken, and a specific course outcome, such as a grade or pass/fail indicator. When dealing with post-secondary course data, each record also could have a marker that indicates the level of the course, such as remedial, undergraduate, or graduate, and the name of the institution where the course was taken. Each institution is associated with specific information, which could be employed to refine the qualities of the outcomes.

Whether the outcome is related to the amount of time that passed before the individual enrolled, or the type of institution at which the individual earned an associate degree, the more complex the outcome, the more difficult it is to systematically determine whether the outcome has been achieved. The scope of the records needs to be limited to ensure that the enrollment was truly post-release. As stated in the previous paragraph, if the records are course-centered, each record will have fields indicating the enrollment semester and year, and a combination of those fields can be used to approximate an enrollment date. This estimated date for each record can then be compared to the CE participant's release date for the current sentence to ensure that the enrollment occurred post-release.

Once that step is accomplished, all of the post-release records could be systematically examined to determine whether a participant was enrolled at any point post-release, whether the participant was enrolled at various points during the study, and how long it took the given individual to become enrolled. The same methodology could be employed when dealing with a dataset that includes post-secondary degrees conferred, or post-secondary programs completed, to ensure that the degree was awarded, or the program was completed, post-release. This makes access to release-date information essential. When utilizing release cohorts in an evaluation, determining outcomes becomes easier because each member of the release cohort has a release date within a specified timeframe.
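
A minimal version of that screening step, with hypothetical semester start dates, might be:

from datetime import date

# Approximate start dates for each semester label (hypothetical convention).
SEMESTER_START = {"spring": (1, 15), "summer": (6, 1), "fall": (8, 25)}

def approximate_enrollment_date(year, semester):
    month, day = SEMESTER_START[semester.lower()]
    return date(year, month, day)

def enrolled_post_release(course_record, release_date):
    # Keep only course records whose approximated enrollment date falls
    # after the CE participant's release date for the current sentence.
    return approximate_enrollment_date(
        course_record["year"], course_record["semester"]) > release_date

course = {"year": 2005, "semester": "Fall"}
print(enrolled_post_release(course, date(2004, 7, 15)))  # True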

The authors believe that the sophistication of correctional education evaluations related to post-release, post-secondary educational attainment should evolve over time, not only as a function of the evaluator's increased ability to develop new and improved outcome measures with available data, but also as a result of the additional research questions raised by earlier evaluations. Furthermore, being overly ambitious early in the study regarding the outcomes to be measured can prove damaging in the end, if the results seem clouded by their complexity or are not easy to replicate. Therefore, initially using simple outcome measures specifically related to enrollment and degree attainment (e.g., how many participants were enrolled within a year or less of release) would be advisable before developing more sophisticated outcome measures that require combining the outcomes from multiple course-related records. An example of the latter would be the percentage of post-release courses with a grade of "C" or better, or with a passing outcome if the course is pass/fail.

National Student Clearinghouse

According to the overview of its student tracker for educational organizations service, the National Student Clearinghouse (2007) "serves as a central repository and single point of contact for the collection and timely exchange of accurate, comprehensive enrollment, and degree/certificate records on behalf of participating institutions." Over 3,000 public and private post-secondary institutions, representing more than 91% of currently enrolled students in the U.S., continually update the available dataset by providing the Clearinghouse with reports on all enrolled students. The following information is available to participating educational agencies: name of institution, type of institution (four-year, two-year, less than two-year), attendance dates, and enrollment status (full-time, half-time, less than half-time, and graduated). Degrees earned and major courses of study are available from the institutions participating in the Clearinghouse's Degree Verify Program. The cost is the greater of $0.54 per match or a flat fee of $425, and data matching is performed through batch file processing over a secure FTP site.

Practical Examples

The Virginia Department of Correctional Education is able to obtain two different datasets related to post-release post-secondary educational attainment. These data are maintained by the State Council for Higher Education in Virginia (SCHEV), which oversees and coordinates higher education within the state. The first set of data includes course enrollment and course outcome information for every course in which an ex-offender was enrolled. Typical descriptors within each course record include: institution, level, campus, course abbreviation, discipline, semester, year, credit hours, and grade. The second set of data includes information for each post-secondary degree or certificate earned by an ex-offender. Typical descriptors for each degree/certificate record include: institution, campus, year, semester, discipline, and type of degree. A limitation of these datasets is that they only include information from higher education institutions in Virginia; if the offender enrolled in or earned a degree at a college or higher education institution outside Virginia, the information is not captured.

Despite colleges and universities not using SSNs as student identification numbers in recent years, SSNs are still used by the State Council for Higher Education in Virginia as the unique identifier in its datasets. Institutions of higher education now assign student identification numbers that are unique for each student within the college or university, but nothing guarantees that another college or university has not assigned the same sequence of numbers to a student on its campus. Also, if a student attends more than one college or university, the student could have multiple unique identifiers. It also is possible that more than one person could have the same identifier. Either of these cases violates the two main assumptions of unique identifiers discussed earlier, making a data match impossible. For this reason, it is more efficient to use social security numbers.

The following information is related to pre-release post-secondary educational outcomes and could be useful in a program evaluation. At times, the Virginia Department of Correctional Education (see Lichtenberger and Onyewu, 2005) obtains aggregate course outcome information from the community colleges contracted to provide post-secondary educational programming within Virginia correctional facilities. This outcome information allows for grade distribution comparisons between selected courses offered within prisons and parallel courses offered on campus at the same community college during the same semester. For example, among the offenders enrolled in Sociology 101 through Southwest Virginia Community College during the fall semester of 2004, an evaluator can determine the percentage of offenders who earned an A, B, or C, and compare those percentages to those of the "typical" students enrolled in the same class on campus.

As is the case for post-release employment outcomes, Florida obtains post-release education outcome data via FETPIP (see page 23). As described previously, FETPIP is part of the Florida Department of Education. In the context of educational outcomes, FETPIP provides the Florida Department of Corrections with the numbers of individuals who enroll in district post-secondary education; in community colleges (at the associate of arts or science level, the adult vocational level, the vocational college credit level, or some other level); in state colleges and universities; and in private colleges and universities.

Isolating Correctional Education Program Impact

In the evaluation of CE programs, it is necessary to determine the extent to which the programs had a positive impact on the post-release outcomes that have been measured. Oftentimes it is necessary to use comparison groups composed of individuals who did not receive a particular treatment when reporting post-release information in an effort to isolate the impact. The use of comparison groups places the results of an evaluation of a particular CE program in context and, if designed appropriately, the use of comparison groups can strengthen the assertion that CE programs had an impact on post-release outcome measures.

Comparison groups should be integrated into the planning of the evaluation and purposefully selected, because not all comparison groups are the same. A major problem stems from using comparison groups whose composition is not matched to the CE group on key variables that could affect the post-release outcomes claimed to be attributable to CE programming. The failure to control for pre-existing differences between a CE group and a comparison group may lead to spurious claims of programmatic impact. In fact, the credibility of an entire evaluation can be called into question if claims of programmatic outcomes have been made without controlling for pre-existing differences between groups.

Although it is tempting to label evaluations that use comparison groups as quasi-experimental (experimental designs require control groups), most CE program evaluations and research studies employ ex post facto methodology. In a review of literature that focused on the relationship between correctional education and recidivism, Jancic (1998) posited that the major limitation of most CE research is the lack of a true control group, and that many studies are not even quasi-experimental, let alone true experimental designs. He also argued that the classic experimental approach is not possible because, in most systems, inmates have a choice as to whether they participate in educational programs.

Duguid, Hawkey, and Pawson (1996) had a parallel viewpoint in their paper on the use of the scientific method and case studies in the evaluation of correctional education programs. They argued that in order to properly employ an experimental design in correctional education research, program participants would need to be randomly selected from the entire population of offenders and randomly assigned to either the treatment or control group. Random selection and assignment take away the offenders' choice to participate and are unacceptable, if not unethical, when applied to most correctional education programs. Jancic (1998) suggested that some correctional education research studies are thought to be quasi-experimental because of their use of non-randomized, non-equivalent control groups. However, most of the studies claiming to be quasi-experimental are actually ex post facto designs or basic descriptive reports that include comparisons of participants and/or completers to non-participants or non-completers. In the latter category, the non-participants and non-completers are usually naturally occurring or convenience groups that are used to make baseline comparisons or to describe the current situation.

According to Pedhazur and Schmelkin (1991), for a research design to be quasi-experimental, it must employ a manipulation of an independent variable during the study. Most correctional education research studies labeled as quasi-experimental manipulate independent variable(s) through the use of statistical control after the data are gathered, which would make those studies non-experimental ex post facto designs. The point is not that experimental or quasi-experimental designs are better than ex post facto research, but rather that experimental designs (both true and quasi) are generally not performed within correctional education evaluations and research studies due to practical limitations.

There are two main types of ex post facto research designs: retrospective and prospective. With retrospective designs, the treatment group is identified after the treatment takes place, and the history of the group members is traced backward. With prospective designs, the treatment group is identified after the treatment takes place, and the CE treatment group is followed forward, as one would do with a release cohort. The authors believe that employing a combination of prospective and retrospective ex post facto methodologies is important when attempting to isolate program impact. Logically, it is necessary to follow up with a group (prospective) to measure post-release outcomes; however, to isolate the impact of the program on the post-release outcome measures, it is also necessary to examine particular variables retrospectively, such as those related to previous educational programming, offense and incarceration history, and employment patterns prior to incarceration.

Techniques for Isolating the Impact of Correctional Education Programs

As stated earlier, the goal of the more advanced CE-program evaluations is to isolate the impact of the treatment. In most cases, the treatment is one's level of participation in a CE program, the completion of a correctional education program, or the highest correctional education program completed, based on the educational hierarchy discussed earlier in this paper. There are two main ways to isolate the impact of programs. One approach is to employ advanced statistical methods, such as multiple regression, to determine the relative importance of each variable, such as CE completion or participation, in predicting differences in post-release outcome measures such as recidivism, employment outcomes, and post-release educational attainment. The second approach is to use a closely matched comparison group in an effort to address the confounding problems mentioned earlier. The authors hesitate to use the term "control group," since that implies an experimental methodology when, in fact, most CE studies or evaluations are ex post facto studies.

Multiple Regression. This statistical procedure establishes the predictive value of each independent variable (CE program completion, race, offense type, etc.) while holding the other independent variables constant, in order to explain the variance in the dependent variable. More importantly, multiple regression establishes the overall predictive value of the entire model under investigation, which includes all of the independent variables. The predictive value of each variable is generally measured with a statistic called the beta weight. The beta weight is the number of standard deviations by which the dependent variable changes when the given independent variable increases by one standard deviation and the other independent variables are held constant.

A benefit of using multiple regression is that it allows evaluators to add as many CE treatments as possible to the statistical model by including each treatment as an independent variable. For instance, number of vocational programs completed, highest educational level documented, and highest Test of Adult Basic Education (TABE) score(s) could all be included in the regression equation. This procedure can determine the relative importance of each independent variable in predicting the post-release outcome.
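
As a sketch of how beta weights emerge, the variables can be standardized before an ordinary least squares fit; the data below are fabricated for illustration and imply nothing about actual program effects:

import numpy as np

# Fabricated rows: CE completion (0/1), age at release, prior commitments;
# dependent variable: post-release quarterly earnings.
X = np.array([[1, 24, 1], [0, 31, 2], [1, 28, 0], [0, 22, 3],
              [1, 35, 1], [0, 27, 2], [1, 30, 2], [0, 25, 1]], dtype=float)
y = np.array([2400, 1500, 2600, 900, 2900, 1700, 2100, 1600], dtype=float)

# Standardizing both sides makes the fitted coefficients beta weights
# (change in the outcome, in standard deviations, per one-SD change).
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()

betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
for name, b in zip(["CE completion", "age at release", "prior commitments"], betas):
    print(f"{name}: beta = {b:.2f}")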

In order to create a multiple regression model, it is necessary to use all of the independent variables that could have an effect on the given post-release outcome measure. It is equally important to establish how the different independent variables are related to each other. When two independent variables are strongly related, such as time served and offense type, the variables are believed to convey much of the same information. In fact, including both of these variables in the regression model would be redundant, as the combination would provide very little information that would be considered unique. This problem is known as collinearity, formally defined as an independent variable being highly correlated with one or more of the other independent variables. According to Howell (2002), this creates a situation where the independent variable has little new to offer in explaining the variability in the given post-release outcome measure. Therefore, it is important for evaluators to examine the correlations between the independent variables, taking into account practical aspects and the importance of the variables. Evaluators can then more accurately decide which variables to include in or exclude from the regression model.
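
A quick correlation check of the kind described above, again with fabricated numbers:

import numpy as np

# Fabricated values for two independent variables suspected of overlapping:
# time served (months) and an offense severity score for the same group.
time_served = np.array([12, 48, 24, 96, 36, 60, 18, 72], dtype=float)
offense_severity = np.array([2, 7, 3, 9, 5, 8, 2, 8], dtype=float)

r = np.corrcoef(time_served, offense_severity)[0, 1]
print(f"r = {r:.2f}")  # roughly 0.95 for these numbers
if abs(r) > 0.8:  # the 0.8 cutoff is a common rule of thumb, not a fixed standard
    print("High correlation: consider including only one of these variables.")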

Purposefully Selected Comparison Groups

Non-Completers. Utilizing a group composed of program participants who failed to complete the given correctional education program is one way to create a comparison group. Developing a comparison group of non-completers attempts to control for the motivation necessary to enroll, but it could introduce other potential confounding issues. Failure to complete the program could be indicative of negative behavior patterns that may also carry over into the post-release setting. Based on an ex-offender's inability to complete the program, one would expect relatively poorer post-release outcomes compared to ex-offenders who completed the program. Unless some other method is employed to control for such predispositions, it remains difficult to establish programmatic impact using such a comparison group.

A slightly more sophisticated approach would be to create a comparison group of no-fault non-completers (see page 13), those who did not finish the program due to an institutional transfer or release. Generally speaking, such events are beyond the control of the CE participants. The assumption is that, had the transfer or release not occurred, the CE participants would have completed the program. If completers of the correctional education program are provided certificates or other transferable credentials, this approach could be a reasonable method of isolating the impact of certificate attainment, particularly if a control for enrollment time is employed.

Wait-Listed Non-Participants. An alternative method of creating a comparison group for an evaluation is to utilize wait-listed non-participants from parallel programs. One could argue that using a comparison group composed of these individuals could help control for the motivation necessary to enroll. In other words, the wait-listed non-participants had the same level of motivation to sign up for a particular CE program as those who actually participated. Unfortunately, this does not address the level of motivation necessary to successfully complete a program, nor does it address other variables that could be related to program completion. As with using non-completers as a comparison group, unless some other method is employed to control for pre-existing differences between the wait-listed non-participants and the program completers, claims of program impact on post-release outcomes are spurious at best.

Creating Comparison Groups Based on Key Variables

Another method used to create comparison groups is to limit the composition of the comparison group by selection criteria based on one or more independent variables. This methodology was used by Anderson (1995) in an evaluation of Ohio Department of Rehabilitation and Corrections educational programs that focused on program impact regarding recidivism and earnings. In the Anderson study, the comparison groups were constructed using three variables: a tested reading score at intake based on the TABE, the self-reported highest grade level completed, and participation in correctional education while incarcerated. Although this approach is better than using comparison groups of convenience, such as all non-participants released during the same timeframe, it still lacks control for other factors that could be having an impact on post-release outcomes. Hence, the controls are limited to the selection criteria.

Matched Pairs. Once a group of CE participants or completers is identified and the important independent variables are established, it is possible to find a similarly composed group of non-participants, if the data are available. In the most straightforward approach, CE participant #1 is selected and the independent variables related to that individual are examined. A search is then conducted to find the individual from the non-participant pool that best matches CE participant #1 on the selected independent variables. For example, if CE participant #1 is a white female drug offender with a history of alcohol abuse (in practice, more independent variables would be used), the pool of non-participants would be queried to find another white female drug offender with an alcohol abuse history.

This methodology attempts to isolate the impact of the CE program and control for confounding by matching on characteristics that may be related to the post-release outcomes, such as employment or recidivism. According to Rudner and Peyton (2006), the approach breaks down when too few, or irrelevant, independent variables are matched. Also, the more independent variables used to create the matched pairs, the larger the reservoir of non-participants required to perform an exact match, as the sketch below illustrates.
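A minimal Python sketch of this procedure (pandas assumed; the matching variables are hypothetical) pairs each participant with the first exact match found and removes that match from the reservoir:

    import pandas as pd

    releases = pd.read_csv("release_cohort.csv")
    match_vars = ["race", "sex", "offense_type", "alcohol_history"]

    participants = releases[releases["ce_participant"] == 1]
    reservoir = releases[releases["ce_participant"] == 0].copy()

    pairs = []
    for idx, person in participants.iterrows():
        # Non-participants identical on every matching variable
        exact = (reservoir[match_vars] == person[match_vars]).all(axis=1)
        candidates = reservoir[exact]
        if candidates.empty:
            continue  # no exact match available for this participant
        match_idx = candidates.index[0]
        pairs.append((idx, match_idx))
        reservoir = reservoir.drop(match_idx)  # match without replacement

Note how participants without an exact match are simply skipped; with many matching variables, a much larger reservoir is needed to avoid losing cases this way.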

Propensity-Score Matching. Propensity-score matching is a refined approach that combines the ideas behind multiple regression with the traditional matched-pair approach described above. It is discussed in Rosenbaum and Rubin (1985) and employed in the Saylor and Gaes studies (1997, 1999). The first step in the matching process is to create a binary logistic regression model based on the characteristics of the CE program participants and use it to calculate propensity scores for both the correctional education completers and those in the potential comparison group. For instance, if most correctional education completers had relatively high educational levels, then that independent variable would be heavily weighted in determining the propensity score; in turn, a non-participant with a relatively high educational level would generally receive a higher propensity score. In other words, propensity scores are predictive measures of the likelihood that a non-participant has the same characteristics as those in the correctional education group.


Certain members of the CE group might not possess characteristics similar to the rest of the group; in that case, those members' propensity scores would be lower, and they would be matched to non-participants with similar characteristics. Independent variables such as offense type, marital status, gender, race, custody level, infractions history, pre-incarceration employment history, and other correctional education program history are often used in the binary logistic regression model to determine the propensity score. Once the propensity scores are calculated, the propensity score of CE completer #1 is used as a reference; the non-participant with the closest propensity score is selected as the matched pair and removed from the reservoir of non-participants, and so on.
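The following Python sketch follows that logic (pandas and scikit-learn assumed; the covariate names are hypothetical and are assumed to be numerically coded): a logistic regression produces the propensity scores, and each completer is then paired with the closest-scoring non-participant, without replacement:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical file containing CE completers and non-participants
    releases = pd.read_csv("release_cohort.csv")
    covariates = ["offense_type_code", "married", "male", "custody_level",
                  "infractions", "employed_before", "education_level"]

    X = releases[covariates]
    y = releases["ce_completer"]  # 1 = completer, 0 = non-participant

    # Propensity score: predicted probability of being a CE completer
    model = LogisticRegression(max_iter=1000).fit(X, y)
    releases["pscore"] = model.predict_proba(X)[:, 1]

    completers = releases[y == 1]
    reservoir = releases[y == 0].copy()

    pairs = []
    for idx, row in completers.iterrows():
        if reservoir.empty:
            break  # reservoir of non-participants exhausted
        # Nearest neighbor on the propensity score
        nearest = (reservoir["pscore"] - row["pscore"]).abs().idxmin()
        pairs.append((idx, nearest))
        reservoir = reservoir.drop(nearest)  # match without replacement

Even after matching, the balance of the covariates across the two groups should be checked, since nearest-neighbor matching does not guarantee close matches when scores diverge.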

Summary

Successful evaluation of correctional education programs requires planning. That planning begins with establishing concrete, measurable program objectives and the measures to determine whether those objectives have been met. Evaluators should begin by matching measures for which they can obtain information to the objectives of the CE programs. Once the information is identified, evaluators should plan a systematic approach to collecting data that builds upon simple starting points. It is not possible to answer every question related to program performance in one evaluation. Instead, it is prudent to design a roadmap that answers the desired questions over the course of several focused, discrete evaluations.

When the evaluation begins to provide results, the information gathered can be put to work in the continuous improvement of the CE programs, enhancing communication with internal (instructors, for example) and external (the state legislature, the public at large) constituencies. The evaluation should answer questions appropriate to the CE programs. Administrators understand CE programming and the special constraints, such as transfers and lockdowns, that must be dealt with in this setting. Evaluation of CE programming should be a proactive measure intended to close the loop in the continuous improvement cycle, not a reaction to research performed by outside parties.

Finally, isolating the impact of CE programming on post-release outcomes can help justify the cost of correctional education itself. Statistical controls and carefully selected comparison groups can unmask the positive effect that correctional programming can have on recidivism rates, post-release employment and earnings, and post-release educational attainment. Taking these approaches a step further allows the evaluator to assemble a cost/benefit model that shows the return on investment provided by the programming and is easily communicated to the external constituencies that affect correctional education program funding.

Correctional education program evaluation using post-release outcome measures is critical to the continuation of funding for correctional education. This document has provided detail on specific outcomes, data types, and strategies associated with collecting post-release outcome information. The process may seem daunting initially, but it can and should be undertaken systematically, building from simple measures to more complex questions over time.


Works Cited

Anderson, S.V. (1995). Evaluation of the impact of correctional education programs on recidivism. Columbus, OH: Ohio Department of Rehabilitation and Correction.

Bales, W., Bedard, L., Quinn, S., Ensley, D., Holley, G., Duffee, A., & Sanford, S. (2003). Recidivism: An analysis of public and private state prison releases in Florida. Available at: http://www.dc.state.fl.us/pub/recidivismfsu/. Accessed April 16, 2008.

Duguid, S., Hawkey, C., & Pawson, R. (1996). Using recidivism to evaluate effectiveness in prison education programs. Journal of Correctional Education, 47(2), 74-85.

Howell, D.C. (2002). Statistical methods for psychology. Pacific Grove, CA: Duxbury.

Hunter, R. (2007). Evaluation of training services: Career and technical education. Huntsville, TX: Windham School Division.

Janic, M. (1998). Does correctional education have an effect on recidivism? Journal of Correctional Education, 49(4), 152-161.

King, C.T. & Schexnayder, D.T. (1998). The use of linked employer-employee UI wage data: Illustrative uses in Texas policy analysis. Austin, TX: Center for the Study of Human Resources.

Kirshstein, R. & Best, C. (1997). Using correctional education data: Issues and strategies. Washington, DC: Office of Correctional Education, Office of Vocational and Adult Education, U.S. Department of Education.

Klein, S. & Tolbert, M. (2006). Correctional education data guidebook. Available at: http://www.cedatanetwork.org/pdf/guidebook.pdf. Accessed July 19, 2007.

Kling, J.R. (2006). Incarceration length, employment, and earnings. The American Economic Review, 96(3), 863-876.

Kornfeld, R. & Bloom, H.S. (1999). Measuring program impacts on earnings and employment: Do unemployment insurance wage reports from employers agree with surveys of individuals? Journal of Labor Economics, 17(1), 168-197.

Lichtenberger, E. & Onyewu, N. (2005). Virginia Department of Correctional Education's Incarcerated Youth Offender Program: A historical report. Richmond, VA: Department of Correctional Education.

Lichtenberger, E. (2006). A comprehensive analysis of the post-release employment relatedness patterns for participants in construction-related career and technical education programs. Richmond, VA: Department of Correctional Education.

Lichtenberger, E. & Ogle, J.T. (2006). The collection of post-release outcome data for the evaluation of correctional education programs. Journal of Correctional Education, 57(3), 230-238.

National Student Clearinghouse. Available at: http://www.studentclearinghouse.org/. Accessed April 16, 2008.

Pedhazur, E.J. & Schmelkin, L.P. (1991). Measurement, design, and analysis: An integrated approach. Hillsdale, NJ: Lawrence Erlbaum Associates.

Rosenbaum, P.R. & Rubin, D.B. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician, 39(1), 33-38.

Rudner, L.M. & Peyton, J. (2006). Consider propensity scores to compare treatments. Practical Assessment, Research & Evaluation, 11(9), 2-4.

Sabol, W. (2004). Local labor conditions and post-prison employment: Evidence from Ohio. In S.D. Bushway, M.A. Stoll, & D.F. Weiman (Eds.), Barriers to reentry? The labor market for released prisoners in post-industrial America. New York: Russell Sage Foundation.

Saylor, W.G. & Gaes, G.G. (1997). PREP: Training inmates through industrial work participation, and vocational and apprenticeship instruction. Corrections Management Quarterly, 1(2), 32-43.

Saylor, W.G. & Gaes, G.G. (1999). The differential effect of industries and vocational training on post-release outcome for ethnic and racial groups. Washington, DC: Office of Research and Evaluation, Federal Bureau of Prisons.

Smith, C.J., Bechtel, J., Patrick, A., Smith, R.R., & Wilson-Gentry, L. (2006). Correctional industries preparing inmates for re-entry: Recidivism and post-release employment. Final report submitted to the National Institute of Justice, Washington, DC (NCJ 214608). Available at: http://www.ncjrs.gov/pdffiles1/nij/grants/214608.pdf. Accessed April 16, 2008.

Specter (Incarcerated Youth Offenders) Grant. Available at: http://www.in.gov/icpr/webfile/formsdiv/50525.pdf. Accessed April 16, 2008.

Steurer, S.J., Smith, L., & Tracy, A. (2001). OCE/CEA three state recidivism study. Submitted to the Office of Correctional Education, United States Department of Education. Available at: http://ceanational.org/PDFs/3StateFinal.pdf. Accessed April 16, 2008.