Standard Four

Narrative for Standard Four in the CAEP self-study report

Neither the Pennsylvania Department of Education (PDE) nor the Commonwealth of Pennsylvania currently shares data related to program impact. Completers have not been systematically followed into practice in Pennsylvania, and many EPP completers take positions outside the Commonwealth. Student growth measures are not available to the provider. Therefore, gauging our completers' contributions to student-learning growth is particularly challenging. At present we leverage the data and evidence we have garnered and triangulate across data sources. Although in places our data are lean, multiple sources lend support to the reliability and validity of conclusions drawn from available evidence. The EPP endeavors to overcome the challenge of following completers through several strategies and continues to improve its efforts to seek adequate data regarding completers overall and their students' growth in particular. In the future, these efforts will be enhanced as we will be able to track the placement and retention of all Penn State educators by certificate area (e.g., elementary, middle school mathematics, secondary school English language arts) and school level (e.g., elementary, middle, and high school) within the Commonwealth through data recently made available by PDE and addressed through a funded project awarded to EPP faculty (Tracking Teacher Shortages Project). These data will enhance our ability to address all Standard 4 components. Some of the strategies the EPP has employed to gather reliable and valid evidence for Standard 4 components, along with planned next steps, follow.

The Commonwealth of Pennsylvania utilizes a Student Learning Objectives (SLO) process to assess and document educator effectiveness based on student achievement of content standards. The SLO process is part of Pennsylvania's comprehensive, multiple-measure system of Educator Effectiveness authorized by Act 82 (HB 1901) and serves as one source of evidence of completers' impact on student learning and development. Within the SLO form, teacher ratings, goals, performance measures, and student achievement are jointly developed and assessed by the teacher and administrator/supervisor on a rating scale of Failing, Needs Improvement, Proficient, and Distinguished. The SLO process is used to improve teachers' effectiveness and positive impact on student learning. Teacher and student learning goals are set in advance to ensure student growth in specific learning objective areas. The administrative manual and available training provide rubrics, guidelines, and procedures to ensure consistency in the execution of SLO (http://www.education.pa.gov/Documents/Teachers-Administrators/Educator%20Effectiveness/Educator%20Effectiveness%20Administrative%20Manual.pdf). This consistency contributes to the reliability and validity of the data regarding teacher effectiveness. The nature of the student learning data collected varies, and although this can present challenges in interpreting across completers, the systematic collection and examination of data that measure student learning and development on state learning objectives support SLO as a measure to include in decisions about program performance. Representative Agricultural Education (AG ED) completers' SLO, along with sample SLO from other programs' completers coupled with examples of student work, provide evidence of impact through pre- and post-assessment of learning objectives aligned with academic standards. AG ED serves as a case for examination of SLO because the program has an established culture of SLO examination; therefore, SLO shared by the AG ED program are likely representative evidence of completer impact. The EPP sought SLO and student work from completers more broadly across other programs, but few completers volunteered their SLO, so those included from other programs are selective. While some SLO are provided, these data are limited. The EPP recognizes the potential for SLO to provide evidence of completer impact on students' learning and will seek more programmatic and systematic mechanisms to collect SLO from completers.

In recognition that adequately addressing Standard 4 would be a challenge, the EPP executed a contract with the Survey Research Center (SRC), http://www.survey.psu.edu/, in October 2015 for $42,794 (SRC Contract) as another effort to locate and follow completers into the field and to collect information addressing components of Standard 4, including placements, preparation, satisfaction, burnout, and evidence of student learning and development. Unfortunately, the SRC work was only partially successful. The findings shared from the contract in 2016 include only summary data from a brief survey of graduates. Of the 844 graduates identified for the database, 347 were working in a field related to their degree, 23 were pursuing additional education, 29 were actively seeking employment, and 40 chose "other." Of the 347 working in a field related to their degree, 174 were working as teachers: 64 at the primary level, 39 at the middle level, 50 at the secondary level, and 21 without a stated level. These limited data were of little utility as evidence for Standard 4, and no additional tools were developed or administered by SRC.

Another strategy the EPP employed to follow completers and their students' learning and development was to solicit proposals for research projects by stakeholders to be used as case studies. This resulted in projects conducted by faculty from EPP programs. One awarded project sought to identify SPLED graduates since 2003 in order to establish future survey contact and to ask completers about their satisfaction with their EPP training and their adoption and implementation of knowledge, skills, and dispositions in their own practice. The work was grounded in the RE-AIM conceptual model (Reach, Effectiveness, Adoption, Implementation, Maintenance). The project was largely successful in locating completers: eighty-seven who had graduated between 2003 and 2014 responded to the survey (RE-AIM). Representation was well distributed across undergraduate, IUG, and master's level completers, and responses came from completers across completion years. Items addressed quality of training, whether practices were adopted in day-to-day work, and frequency of use, thereby capturing knowledge and skills of completers as well as assessing program quality to identify areas for improvement. Data also included self-reported use of knowledge and skills in P-12 instruction. Open-ended responses were also included and analyzed. Overall internal consistency reliability for ratings of quality of training was α=.92. Care was taken to use several questions per subscale to increase scale reliability and the validity of completers' data, and reliabilities for each subscale were also strong: Foundations for Teaching and Learning in Special Education (7 items, α=.75); Instructional Design and Delivery (13 items, α=.87); Collaboration and Communication (5 items, α=.88); Behavior Management (6 items, α=.95); and Assessment (7 items, α=.81). Data from the survey are compared with results from other EPP and PA survey efforts as a source of validity evidence, as discussed below. As the summary report indicates, completers, mostly practicing in classrooms, rated their preparation as effective and indicated that they would recommend the program. Areas of particular strength, with a median rating of Very Effective, were writing goals and objectives, progress monitoring, data-based decision making, and explicit instruction. Ratings were slightly lower for preparedness in co-teaching and in applying technology to support student learning. While ratings in these areas were still quite strong, it may be that the variety of technologies and the pace of innovation in technologies to support individuals with disabilities present a challenge for practicing teachers. Faculty at the EPP recently received a federal training grant that will support instruction for candidates and increased exposure to these technologies. Respondents endorsed at high percentages that they had adopted practices taught in the program to support student learning and development, including, for example, 97% who adopted practices of objective writing and goal setting, 97% who adopted practices of explicit instruction, and 95% who reported effective use of classroom routines for student learning.
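
For reference, and assuming the internal consistency coefficients reported above are coefficient alpha (Cronbach, 1951), the standard formulation for a subscale of $k$ items is

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)$$

where $\sigma_i^2$ is the variance of item $i$ and $\sigma_X^2$ is the variance of the total subscale score. Values in the .75 to .95 range, as reported above, indicate that items within each subscale covary substantially relative to total score variance, supporting their interpretation as coherent scales.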

Qualitative results from this effort were also informative. Among the comments, one theme was that practicing completers felt more instruction in the IEP process would be helpful. Given this feedback, the program piloted the use of TeachLivE (convention.apa.org) to simulate IEP meeting processes. The technology will also be used this coming year in pre-student-teaching courses and in behavior management courses to offer opportunities for candidates to practice and receive feedback prior to field experiences with students and families.

A second awarded project sought to examine the preparedness of 2010-2014 technical and vocational education completers. Completers were asked to rate both the importance of knowledge related to teacher success and their preparedness in those areas. The work was grounded in Darling-Hammond's (e.g., 2002) research on preparing educators for the future, and previously published and established measurement strategies were employed. For example, mean weighted discrepancy scores were calculated, and of particular interest were areas of perceived high importance but lower preparedness. Identifying areas of discrepancy can inform continuous improvement in programs as they prepare classroom-ready completers. The project focused on six domains, consistent with previous research and representative of EPP goals and InTASC standards: 1) designing curriculum and instruction to promote learning; 2) supporting diverse learners; 3) using assessment to guide learning and teaching; 4) creating a productive classroom environment and teaching critical thinking; 5) professional development; and 6) use of technology. The survey yielded 82 responses: 58 from Agricultural Education (74% response rate) and 24 from Work Force Education (24% response rate). Overall, graduates from both programs (AG ED and WF ED) rated technology as having the least discrepancy. This is an interesting contrast to the data from the other funded case study, but it is consistent with other survey responses of completers; across EPP measures, technology is generally an area of strength for most programs. Areas suggested for program improvement, due to larger discrepancy scores, include assessment, supporting diverse learners, and classroom environment (AGWF Internal Report).
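
For reference, a widely used formulation of the mean weighted discrepancy score comes from Borich's (1980) needs assessment model; we assume here, given the project's use of previously published measurement strategies, that this is the variant employed. For competency item $j$ rated by $n$ respondents,

$$\mathrm{MWDS}_j = \frac{\bar{I}_j}{n}\sum_{i=1}^{n}\left(I_{ij} - P_{ij}\right)$$

where $I_{ij}$ and $P_{ij}$ are respondent $i$'s importance and preparedness ratings for item $j$, and $\bar{I}_j$ is the mean importance rating for that item. Larger positive values flag competencies rated as highly important but with comparatively low preparedness, the areas of greatest interest for program improvement.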

Other similar projects are underway. Mathematics education, for example, recently surveyed alumni who were certified as secondary mathematics teachers over the last 30 years. The primary purpose of the project is to provide evaluation for continuous improvement and decision making about the program. The program had conducted a similar survey of about 600 alumni roughly 20 years ago, which will provide a unique point of contrast. The survey includes employment data regarding years taught, career paths, and quality of training. These data will be particularly relevant for component 4.3. Postcards with a survey link were sent to the home addresses of the 1,300 identified alumni; the link was also sent to the email addresses available for 450 of those 1,300. The mailing and emails went out in June 2018 with a requested September 2018 return.

As another source of evidence regarding completers, PDE has begun surveying completers as they apply for licensure. Sample data (n=171) are available beginning May 2017. Six questions were posed, addressing how well the EPP communicates certification requirements, how well the EPP communicates program requirements, how well completers feel prepared to design and implement instruction and assessments aligned with state standards, how well completers feel prepared to work with all students, how well field experiences provided opportunities to work with diverse learners in a variety of settings, and whether completers received appropriate feedback, mentoring, and coaching to build knowledge and skills. Disaggregated findings are provided but reveal little about trends by program, mostly because there is little variance in responses. Overall, EPP completers rated their experiences very positively. Regarding communication about certification, 86% of respondents agreed or strongly agreed that certification requirements were effectively communicated. Regarding communication about program requirements, completers reported that they were provided information from several sources, with advising reported most often. Regarding how well they felt prepared to design and implement instruction and assessments aligned with state standards, 95% agreed or strongly agreed. In the area of opportunities to work with all students, 89% agreed or strongly agreed that those experiences were provided. Finally, 90% agreed or strongly agreed that they received adequate mentoring, feedback, and coaching. Over time more data will be available from PDE, an external, independent source whose data can be triangulated with other evidence. Several areas targeted by the PDE survey overlap with InTASC standards and other EPP data collection efforts (PDE survey results). However, data are not provided in raw form; therefore, while trends can be tracked, little is known about item-level characteristics for the EPP relative to other providers, and the psychometric properties of the instrument cannot be examined.

A number of program completers leave Pennsylvania upon graduation and serve as professional educators in other states such as Virginia and Maryland; these data are dynamic, however, because many return to teach in PA. The EPP, and teacher preparation programs in PA generally, export many educators to other states. It is therefore also necessary to determine the perceptions of teaching preparedness among our out-of-state program completers. To that end, we have obtained contact information for approximately 50 program completers who teach in Virginia and Maryland. Focus group interviews with these program completers will help us determine whether they are effectively applying the professional knowledge, skills, and dispositions that their preparation experiences were designed to achieve.

In an additional effort to follow completers, the EPP has administered a survey grounded in InTASC standards to completers for three cycles (2015, 2017, 2018). The survey was not administered in 2016 because the SRC was expected to be surveying completers at that time. The survey has evolved somewhat over the three administrations, limiting some generalizability, but the foundational InTASC standards can be compared over time. The 2015 survey was very long, with multiple indicators per standard; the version administered in 2017 and 2018 was shortened to streamline administration and increase completion. The versions administered in those years posted strong internal consistency reliability, with α=.91 in 2017 and α=.94 in 2018. Completers, both those teaching and those not teaching, were surveyed. Completers rated their perceptions of preparedness, and perceptions of program quality can be evaluated from these survey items. Expectations would suggest that ratings would decline slightly for completers who were teaching, as they faced the known challenges of early-career professionals. Overall, regarding learners and learning, three questions addressed learners' development, all learners, and environments. Mean completer ratings were above 'adequate' and approached 'proficient' for each of the three administrations. In both 2017 and 2018, as anticipated, those teaching rated themselves lower across items than the mean of the entire group; this trend did not hold for any of the areas for completers reporting in 2015. Regarding content, the pattern continued, with all completers rating themselves above 'adequate' and those teaching in 2017 and 2018, but not 2015, endorsing ratings slightly below the group mean. Instructional practice, in reference to including all students, included three items: assessment, planning for instruction, and instructional strategies. Completers rated these items highly, with those teaching in 2017 and 2018 providing slightly lower ratings. Of these areas, planning for instruction, while still receiving a mean 'adequate' rating, was the lowest. For professional ability, across items, all rated themselves above 'adequate', with those teaching in 2017 and 2018 posting lower ratings. For the professional responsibility items, the patterns held, with all rating themselves as proficient. In 2017 and 2018, two items regarding the ability to align instruction with state and national standards were added; every completer surveyed reported proficiency on these two items. Assessing the ability to align instruction with standards parallels an item on the PDE survey and can be triangulated for validity support.

Regarding employer perceptions, until recently data on where completers took positions were not available. The EPP employed numerous strategies, such as leveraging data from the alumni association and searching social media, to reach out to completers. PDE recently started providing current PA placement data for completers. These data will be of some use for following individuals into the field; however, many of our completers do not remain in PA, and the data provided by PDE do not include contact information. Nonetheless, they are a place to start. In preparation for the ability to track completers into positions in Pennsylvania, the EPP identified concentrations of recent completers (since 2012) who took positions in varied districts. We intentionally selected schools for this pilot that varied in district size, setting (suburban, rural, and urban), and grade levels taught. Principal emails were found through district websites. Principals (n=17), all with more than one EPP completer in their school, were selected. Each was sent an email with a 10-item survey grounded in InTASC standards and parallel with EPP-developed student surveys to provide mechanisms to support validity, with the rating scale: not at all, limited, adequate, and proficient. Two principals responded, in reference to completers certified in PK-4, Reading Specialist (2), Spanish P-12 (2), Biology 7-12 (1), Cooperative Education 7-12 (1), Social Studies 7-12 (2), Health and Physical Education P-12 (1), English 7-12 (2), Special Education P-8 (1), and Special Education P-12 (1). These two principals rated completers as 'proficient' on each standard except one: the elementary principal rated the PK-4 completers as 'adequate' on Penn State teachers' use of multiple methods of assessment for progress monitoring and decision making. These data have significant limitations and, as a small pilot with a very low response rate, provide limited evidence. The EPP, however, plans to conduct a stronger principal survey. Faculty colleagues at the EPP have recently received data from PDE on the certification and placement of all educators. Thus, in the future, we will be able to track the placement and retention of all Penn State educators by certificate area (e.g., elementary, middle school mathematics, high school English language arts) and school level (e.g., elementary, middle, and high school) within the Commonwealth, in accord with component 4.3. Further, PDE will be releasing all principal and superintendent emails, which will greatly facilitate effective sampling. Unfortunately, there were too few responses in the pilot to address the psychometric properties of the scale. However, the alignment with InTASC standards and the ability to correlate with completer surveys hold promise that data from the survey will meet standards for reliability and validity. Because the pilot response rate is concerning, we will contact principals in advance of future administrations to make them aware of the survey's intent, which should increase the response rate and support adequate sampling.

Three cycles of an employer survey of career fair participants also represent ongoing assessment that provides evidence of program quality. These data are provided in compiled graph form, so assessing internal consistency reliability is not possible; however, responses are generally consistent across the data collected. In Spring 2018, 130 district personnel were present, with 21 PA schools represented. Because some districts bring more than one person, and districts outside the state may elect to send fewer people, these figures are not particularly helpful in understanding the sample. Fifty-one career fair attendees were surveyed regarding the degree to which program completers are perceived to be classroom ready; the survey included nine Likert-type items and one open-ended response item. Forty-eight (98%) rated completer preparedness as excellent or very good; 46 (90%) rated their satisfaction with elementary completers' preparedness at the highest level; and 38 (79%) rated secondary completers' preparedness at the highest level. While we can track future cycles of similar data, sampling concerns limit the conclusions that can be drawn. Nonetheless, valuable qualitative comments were also provided. These findings and evidence, coupled with anecdotal evidence such as communications from hiring districts (e.g., Loudoun letter), contribute to the conclusion that the EPP is preparing classroom-ready completers. Future analysis will target consistency in responses over time, which will inform program improvement.

Regarding employer and completer perceptions, the AG ED program also publishes a showcase each year. Three cycles of these showcases are included (AG ED Showcases); they contain testimonials by employers and completers that serve as an additional indicator of program quality.

A number of EPP programs utilize advisory committees as a structure for feedback on how program completers contribute to an expected level of student-learning growth. For example, the Workforce Education and Development and Agricultural Education and Extension programs have well-established advisory committees, which are charged with providing oversight of the quality of these teacher preparation programs. These advisory committees are composed of representatives from the Pennsylvania Department of Education, school administration, professional teachers, educator preparation providers, and business and industry. Committees meet twice per year (i.e., fall and spring) and regularly address and discuss observations related to teacher effectiveness and how program completers are applying knowledge, skills, and dispositions to enhance teaching and learning for students at the classroom level (Advisory Meeting Agendas). Information gathered from these advisory committees is used to guide and enhance the teacher preparation programs for the future. These committees play an important role in reviewing, monitoring, and assessing academic programs as well as providing a unique perspective on programmatic issues. The ultimate goal of these advisory committees is to provide independent feedback that assists the EPP in enhancing academic programs to influence student learning.

Standard 4 presents known challenges for the EPP. Despite several efforts and a multiple-measures approach to addressing Standard 4 components, the supporting evidence has limitations, and planned strategies to establish additional data sources are underway. Many efforts are grounded in the InTASC standards, which allow opportunities to triangulate across sources to support the validity of conclusions. Measures are also grounded in the Penn State Conceptual Framework (Conceptual Framework). Across multiple sources of data, completers perceive their preparation as relevant to the responsibilities they will face and do face on the job, and they report that their preparation was effective. Regarding impact on students' learning and development, we do not currently have a systematic way to address student growth for all completers. Ideally, student-level data yoked to teachers would be available; in the meantime, SLO, student work, and data from the case studies provide some evidence that completers' students are making adequate growth. Additional SLO data would be helpful, as would additional sources of evidence to complement those already available from case studies and surveys. Data from case studies, surveys, SLO, and student work lend support that completers apply professional knowledge, skills, and dispositions in their practice. An extension of the planned focus group interviews to include cases of completers' teaching and student data may provide additional insight into this component of Standard 4.

EPP completers are employed and in demand. Unsolicited communications to the EPP, such as those from district personnel, support that our completers are in high demand. The career fair survey and teacher showcase support that our completers are well prepared and perform well in classrooms. Once completer placement data and principal email addresses are readily available from PDE, additional opportunities to track completers' milestones and career trajectories will open. Examination of and reflection on data collected for the self-study also suggest that greater opportunities to communicate with and support completers may improve our ability to collect evidence in support of Standard 4. Some mechanisms are in place, but a more systematic and programmatic approach may be warranted. The Penn State College Alumni Society includes almost 12,000 members. The alumni society board engages in many aspects of the College and is open and willing to support ongoing and new initiatives. The board is active in helping to support the Alumni Student Teaching Network (ASTN), for example, which provides programs that enhance completers' knowledge, skills, and dispositions in sessions that run concurrently with field experiences. In an additional effort, alumni relations has initiated a mentoring program that supports classroom-ready completers as they move into positions (https://ed.psu.edu/alumni-friends/student-alumni-mentoring-program). As another example of support for completers, the Workforce Education and Development (WF ED) program has a well-established initiative, the Professional Academy for career and technical educators (CTE). The objective of the Professional Academy is to enhance the art of teaching by deepening knowledge in areas of interest to participants while broadening professional networks. Each year school administrators select 10 educators to participate in the Professional Academy. Academy activities regularly include facilitated group discussion, online discussion, modeling of instructional strategies, and presentation planning to promote enhanced learning and skill development. Benefits for participating teachers include firsthand experience with formal and informal professional development presentations and an expanded network of support to enhance their professional practice in the classroom. The EPP will consider strategies to leverage these programs to better follow completers into classrooms and to collect data on impact on student development and learning, program quality, career readiness, and milestones.

The Evidence Table for Standard Four, complete with components and the documents referenced in parentheses above, can be found at http://sites.psu.edu/caepreview/2018/09/10/caep-standard-four-evidence-table/