Standard Five

Narrative for Standard Five in the CAEP self-study report

The EPP maintains a Quality Assurance System (QAS) to inform continuous improvement based on data and evidence that are collected, maintained, and shared. Data inform practices and procedures and provide the basis for inquiry, additional data collection, revisions to programs, and new initiatives. Data and evidence are used to improve programs and consistency across programs, and to measure the impact of programs and completers on P-12 student learning and development.

Data are collected, monitored, stored, reported, and used by stakeholders both within and outside the EPP. Collecting evidence in response to the Standards, and reflecting on it through the self-study process, has illuminated many strengths of our quality assurance system, including where programs are using data well for continuous improvement and innovation, as well as clear areas for improvement. Through self-study, for example, areas where more or better data and evidence are needed were revealed. Further, some inconsistencies across programs and locations became apparent. Finally, greater transparency and publicity regarding what data are available and how data and evidence are shared is warranted.

The QAS is under continuous review at several levels. Penn State is a well-resourced, world-class university. The University, the College, and the EPP reside within a culture of data use for decision making, including the use of data for decisions about practice. The University infrastructure provides a foundation for a comprehensive quality assurance system, including technology tools used to securely house data, professional staff who support data access, use, and reporting, and numerous other resources for those who use data. Specifically, the University has numerous and varied offices that continuously oversee data and data management. These include, for example, data security (https://security.psu.edu/), LionPATH for student records (https://lionpathsupport.psu.edu/), iTwo for institutional data (http://ais.its.psu.edu/services/itwo/), and institutional assessment, which oversees institutional data, assessment, and accreditation (https://www.opa.psu.edu/), all of which play roles in the Quality Assurance System.

Access, security, reliability, and validity of the data the EPP uses start with the Assessment and Accreditation Coordinator (AAC) for the College of Education, within the office of the Associate Dean for Undergraduate and Graduate Studies. The AAC interfaces with the University systems. The need for a systematic approach to assessment data was identified as an Area for Improvement (AFI) after the most recent NCATE review; therefore, in preparation for CAEP review, a more systematic approach to data management was adopted. The AAC assures the security of sensitive student data and, through training on University systems, conducts queries so that EPP faculty and staff can access data for program decision making and continuous improvement. The AAC is well qualified to serve in this role based upon a career path that bridged decades in education, software, and data management in post-secondary institutions and other settings. The AAC position itself demands continuous learning and improvement in order to remain current on IHE practice, data management, the software tools used by the University, and the various systems that house data and are used for reporting. The AAC accepts this challenge: the AAC holds Microsoft certifications in Access and Excel, attends required trainings and professional development opportunities at Penn State, and is certified as a Master Technology Trainer by PDE. The AAC is the primary student information system analyst for the College, originally using the University's Data Warehouse system, which has now transitioned to LionPATH and iTwo (PeopleSoft and Oracle). The AAC represents the College of Education by serving on University committees such as the Student Records Advisory Group, the College Data Management Interface (IT) Group, and the Assessment Management System Evaluation Committee. These committees examine the practice and policy of data use for the University and its stakeholders. To assure best practices and to serve the EPP in data use and reporting to stakeholders, the AAC attends annual webinars by PDE, Title II, AACTE, and assessment vendors, among others. In recognition of the critical role of data and evidence in accreditation, the AAC has attended the fall CAEPCon for the past six years to inform our quality assurance system.

The AAC regularly downloads data and reports from ETS's and Pearson's online score reporting systems and PDE's Teacher Information Management System, and uploads student records into ETS for Title II. The AAC also has access to the following data systems at the restricted (sensitive data) level in order to complete reports for the College of Education: Penn State's older data warehouse, the current LionPATH student information system, the iTwo business analytics system, and the human resources reporting system. The AAC completes the following annual reports for the College: Title II, CAEP, PDE, and U.S. News and World Report. Within the College of Education, the AAC provides student data to departments and programs for business, accreditation, and certification needs and serves as a liaison with University-level assessment efforts. Within the College, the AAC works closely with the Associate Dean for Undergraduate and Graduate Studies, the Associate Dean for Research, Outreach, and Technology, and the Assistant Dean for Multicultural Programs in the continuous review of data access and data needs.

The AAC coordinates access to many of the measures that provide the data and evidence the EPP uses for tracking, review, decision making, and reporting. For example, all data from proprietary measures are coordinated by the AAC. The AAC and other members of the QAS have access to these data to disaggregate by program, student characteristics, or other factors, for individual decision making and for comparison across programs. Successive cycles of these data are evaluated for trends.

Although coordinated by the AAC, data from multiple EPP measures are collected by other members of the QAS. For example, enrollment data, admissions data, and student monitoring data are often accessed by individual programs and by members of the advising and certification staff housed under the Director of Advising and the Associate Dean for Undergraduate and Graduate Studies.

Validity of decisions made from these data is often enhanced by system controls. For example, candidates are not admitted to majors until all requirements are met, and LionPATH prevents them from enrolling in courses for which they are not eligible. LionPATH also creates degree audits at key points during a candidate's academic career, such as entrance to major and at the time of graduation. These audits provide verifiable evidence that degree and state certification requirements are being met. Overall, these systems provide the data to make decisions about academic progress and graduation and to inform certification recommendations. These data are continuously monitored. Candidates are also monitored through Starfish, an advising system that allows advisers to view and record advising notes and to provide students feedback on their academic progress. Data in Starfish may also inform decisions about dispositions, such as integrity violations.

The Director of CIFE and supervisors, in concert with program faculty, collect, analyze, monitor, and report measures such as the ST-1 and PDE-430; the CIFE Director serves as the coordinator of these data. Recent monitoring led to questions regarding the execution of assessments in field placements, and actions were taken to increase consistency across field placements, with supervisors as key stakeholders in administering and interpreting these assessments. CIFE also coordinates and reports clearances information, a data source required for field placements, adequate progress, and certification eligibility, and a factor in dispositions. Program faculty enact the assessments for content and methods courses that inform student grades and GPA. Development and monitoring of these assessments is conducted by program faculty and is overseen by the program, College, and University curriculum committees, the office of institutional assessment, and the Faculty Senate.

In this culture of data-based decision making, members of the QAS actively seek answers to questions regarding the characteristics of assessments and data in order to answer questions of practice for continuous improvement. For example, in addition to collecting, managing, and reporting data required for College decision making, the AAC responds to about a dozen requests for data each month. Recent examples include a list of names and emails for all active M.Ed. in LLAED students over the 2017-2018 academic year, to inform program delivery; three years of course enrollments and grades, disaggregated by gender, ethnicity, and passing grades, for 10 instructors enrolled in the Equity-Minded Teaching Institute, for use by faculty as they embark on new teaching methods in key EPP and College courses; undergraduate Education students enrolled and discontinued across all campuses, for advising and program monitoring; the two most recent years of Vocational II completers, to recruit as Field Resource personnel; the total number of departmental program completers within an academic year, including all campuses, degrees, and certifications, disaggregated by gender and race/ethnicity, for trend analyses by the Associate Dean; a list of ten years of school principals and superintendents recommended for certification, to examine potential program changes at the UP and Great Valley campuses; admitted underrepresented freshmen with contact information, for the College's Multicultural Student Center; and STEM field graduates, for an NSF grant.

PCCC serves as the instrument by which representatives from all educator certification programs across campuses meet as a collective unit to discuss and assess candidate performance and programs, as well as state mandates and processes that impact Pennsylvania teacher certification programs. Candidate data, overall and disaggregated by program, are presented and often shared in monthly PCCC meetings. Data are also shared at Curricular Affairs meetings with the Directors of Undergraduate Education for each of the programs. While these two committees are particularly interested in disaggregated data on key measures and metrics, in this culture of data-based decision making the AAC, program faculty, and members of the College leadership all communicate with data regularly.

Programs use numerous EPP-created assessments as part of the quality assurance system. Most are program specific, such as the ST-1 (ST-1 Template) or the SLO for Agricultural Education (SLO Template). Surveys used by the EPP and by specific programs are largely grounded in the InTASC standards (Candidate and Completer Survey). This shared foundation provides opportunities to examine disaggregated data for trends, outliers, and areas for targeted improvement. EPP-created assessments are grounded in disciplinary academic standards.

Through the self-study, three primary concerns regarding EPP measures became evident. The first concern (SPA feedback) focused on rubrics, in ways that programs without a SPA should also attend to; many programs have parallel assignments but may use quite different rubrics. Second, for some measures it is not clear that they are grounded in national academic standards, and perhaps they should be. Third, enhanced communication across programs about the assessments in use may benefit individual programs and the EPP overall. Added consistency, where appropriate, would also benefit the examination of individual programs.

The EPP uses data from the administration of numerous measures to support data-based decision making and continuous improvement. These measures include both proprietary and EPP-developed instruments. Care is taken to address the quality of the measures administered so that empirical evidence supports reliable and valid decision making.

Data from the measures the EPP uses as part of the QAS are relevant, verifiable, representative, cumulative, and actionable. Regarding relevance, the measures the EPP employs assess what they are intended to assess. For example, the University uses SAT scores and high school GPAs (Basic Skills Disaggregated by Program Area). The SAT is intended to be used in admissions decisions and is a relevant metric for inclusion (https://collegereadiness.collegeboard.org/pdf/sat-suite-assessments-technical-manual.pdf).

High school GPA is known to be an even better predictor of college success (https://files.eric.ed.gov/fulltext/ED563073.pdf).

Proprietary assessments that measure content knowledge have established reliability and validity metrics. For example, the American Council on the Teaching of Foreign Languages (ACTFL) documents its tester certification and assessment procedures (https://www.actfl.org/sites/default/files/assessments/ACTFL%20OPI%20Tester%20Certification%20Information%20Packet%202017.pdf; https://www.actfl.org/assessment-professional-development/assessments-the-actfl-testing-office). Surface and Dierdorff (2008) reported strong interrater reliabilities for oral fluency (α > .90) across a sample of 5,881 interviews representing 19 languages, and the ACTFL FAQ (https://www.languagetesting.com/faq) provides additional information regarding the reliability and validity of the tests. Similarly, the Praxis content exams have established psychometric properties (https://www.ets.org/s/praxis/pdf/validity.pdf; https://www.ets.org/s/praxis/pdf/technical_manual.pdf). All EPP program-wide surveys are grounded in the InTASC standards, and case study data collection was grounded in theoretical models (Candidate and Completer Survey; AG WF Internal Report). Recent rubric development informed by SPA feedback has led to increased relevancy as programs carefully tie metrics to academic standards (SPA feedback on content and pedagogy). Programs take clear steps when developing instruments to assure relevance (Rubric Development).

The data available from EPP measures are verifiable. University data management systems assure that all student-level data collected and stored by the University are verifiable through LionPATH and iTwo. For EPP-administered assessments, such as the PDE-430 and ST-1, ongoing annual training is provided to supervisors to discuss and practice scoring and to provide feedback that supports scoring reliability (Supervisor Training; Mentor Guidebooks).

Evidence from EPP measures of knowledge, skills, and dispositions is representative because these are generally population measures administered to all candidates, and therefore representative of the EPP programs. The PDE survey is administered to every candidate for licensure, who must complete the embedded survey in order to complete the application for licensure; the data provided therefore represent the population. EPP surveys are administered to all completers who apply for certification in Pennsylvania (TIMS).

The EPP strives to secure three cycles of data for all measures. As demonstrated in the exemplars provided by all non-SPA programs, three years of course grades and proprietary assessments are available (Exemplars). Three years of all institutional data are available as well (Certification rates and GPA disaggregated by year and program).

The EPP employs several mechanisms to assure that data are actionable. For proprietary exam scores, institutional data (including admissions data, course grades, and GPA), and the EPP-created surveys, the AAC provides access and reporting upon request and through the PCCC. These data are also shared by the Associate Dean for Undergraduate and Graduate Studies in Curricular Affairs meetings and Dean's advisory meetings for systematic review and use in continuous improvement. Canvas, the institution's learning management system, provides information on student performance on course-level measures that can be shared with faculty upon request. The supervisors, program coordinators, and the CIFE Director share ST-1 and PDE-430 data with program faculty. Starfish provides entry-to-major data and dispositions data for access by program faculty and advising staff.

The EPP does not select candidates at admission and therefore cannot investigate selection criteria in relation to candidate progress and completion. Selection into the program majors is monitored by program staff and faculty, by advising, and by LionPATH. Validity of the selection criteria (completion of all ETM requirements, including coursework, and a 3.0 grade point average) is established by hard controls in the LionPATH system. Other ETM requirements, clearances, and volunteer hours are tracked by advising in coordination with program faculty. To stay in EPP majors, candidates must maintain a 3.0 GPA, which is monitored through Starfish and LionPATH.

Examples of how EPP programs have employed data to inform decision making include enhancements made to the WFED program based upon data collected during safety inspections. In response to these data, the program developed a new course on occupational safety and health to better prepare candidates to provide a safe and healthful teaching and learning environment. This is particularly important because career and technical programs today have the same hazards found in the workplace, and teachers need to learn how to manage safety in classroom and lab settings. The new course, WF ED 411, is now part of our CTE teacher certification program (Safety Inspection).

The Secondary Education Social Studies program area regularly monitors data on teacher-candidate preparation and qualification. Chiefly, the faculty are concerned with candidates receiving adequate content preparation to be successful in classroom placements and on state-mandated subject matter knowledge testing. The key data sources consulted are 1) grades earned in academic courses aligned with content preparation expectations and 2) scores earned on the Praxis Social Studies Subject Test (5081) by content area. As is typical at universities of this size, Penn State relies on academic departments outside the College of Education, such as History, Political Science, and Geography, to offer academic content courses. The College of Education has little input on the directions these departments take, so faculty must maintain communication with contacts in other departments and monitor whether certification program requirements remain adequately aligned with content preparation needs. This necessitates reexamination year over year to discern whether requirements need to be adjusted, such as by replacing an older course with a new alternative that provides more effective preparation. As a consequence, Secondary Education Social Studies has undertaken program revisions several times over the current decade: in 2012-13, again in 2015-16, and currently in 2018. Each program revision has been motivated by the goal of steering teacher-candidates into the most effective available content preparation for their methods coursework, field experiences, and teacher testing.

The EPP determined through feedback from candidates and survey results that special education graduates felt they lacked knowledge and skill in developing IEPs and holding IEP meetings (Re-AIM). The program introduced TeachLIVE as a tool for potential curricular revision. A pilot study in Spring 2018 determined that the software was beneficial as measured by student learning outcomes. Additional research will be conducted with this instructional tool during the 2018-2019 academic year.

Measures of completer impact

We post measures of completer impact on our website, share them at program meetings, and share ongoing data trends, including enrollments and graduation rates, at monthly PCCC meetings, curriculum meetings, and program meetings. Both the Associate Dean for Undergraduate and Graduate Studies and the Director of Advising examine and monitor impact and trends in enrollment and certification. These data are shared with the College leadership team at Dean's Council meetings. Admissions data are tracked by the Recruitment and Retention Specialist and shared at PCCC and Alumni Board meetings.

Impact that completers' teaching has on P-12 learning and development

The Pennsylvania Department of Education (PDE) is currently developing a reporting system that connects P-12 student outcomes data with the classroom teacher of record and the educator preparation program (EPP) the teacher completed. Reports of this kind are already available to EPPs in other states and are very helpful tools for assessing impact on student learning. These data will represent strong evidence of the impact of completers, will be posted as they become available, and will be tracked annually.

Assessing student learning in P-12 classrooms is challenging. Neither where completers take positions nor student learning outcomes by teacher are currently tracked. The EPP employs several methods to collect impact data. Some of these methods are less representative, such as anecdotal evidence from districts. To examine impact on P-12 learners, the EPP also conducted case studies, examined SLOs of practicing completers, and contacted practicing completers to obtain evidence of student learning in P-12 classrooms. While evidence collected from the case studies and the available SLOs appears to support the conclusion that our completers impact student learning, this conclusion is tentative given the data available. The EPP recognizes that additional data are necessary to evaluate impact. As such, until the PDE data referenced above are available, a more systematic approach to collecting SLOs will be taken in 2018. Further, a representative sample of classroom observations will also be used to examine impact. Case studies of completers will include individual interviews with completers, principals, and colleagues and will incorporate evidence of student learning tied to academic standards (SLO and Teacher Evals; AG ED Showcase).

Indicators of teaching effectiveness

We graduate candidates who are classroom-ready. The EPP measures candidates' teaching effectiveness through the ST-1 and PDE-430 assessments. The College of Education also contacted completers who are classroom teachers to request evidence of their teaching effectiveness collected through teacher evaluation tools administered by school districts, such as SLOs and principal evaluations of teaching. Case studies were conducted to inform indicators of teaching effectiveness (Re-Aim; SLO and Teacher Eval).

Employer satisfaction

PDE recently started providing current PA placement data for completers. These data will be helpful for following completers who remain in PA. In anticipation of these placement data, the EPP conducted a pilot study with a Principal Survey, grounded in the InTASC standards and aligned with completers' surveys. A purposive sampling plan was employed to represent known district characteristics; principals with more than one EPP completer in their school were selected. The response rate was very low, but the EPP plans to conduct a stronger principal survey upon the release of completer locations from PDE. This will allow the EPP to track the placement and retention of all Penn State educators by certificate area (e.g., elementary, middle school mathematics, high school English Language Arts) and school level (elementary, middle, and high school) within the Commonwealth (TIMS).

Three cycles of an employer survey of career fair participants also represent ongoing assessment that provides evidence of employer satisfaction. In Spring 2018, fifty-one employers who attended the career fair were surveyed regarding the degree to which program completers are perceived to be classroom ready. The survey included nine Likert-type items and one open-ended item. Forty-eight (98%) rated completer preparedness as excellent or very good; 46 (90%) rated their satisfaction with elementary completers' preparedness at the highest level; and 38 (79%) rated secondary completers' preparedness at the highest level. Qualitative comments were also provided. Findings from the survey contribute to the conclusion that the EPP is preparing classroom-ready completers. Future analysis will target consistency in responses over time, which will inform program improvement (Spring Educator Data).

The AG ED program demonstrates employer satisfaction through publication of a showcase each year. Three cycles are included (AG ED Showcase); these include testimonials from employers and completers.

Results of Completer surveys

PDE surveys completers with six questions regarding how well the EPP communicates certification requirements, how well the EPP communicates program requirements, how well completers feel prepared to design and implement instruction and assessments aligned with state standards, how well completers feel prepared to work with all students, how well field experiences provided opportunities to work with diverse learners in a variety of settings, and whether completers received appropriate feedback, mentoring, and coaching to build knowledge and skills. EPP completers rated their experiences very positively. Regarding communication about certification, 86% of respondents agreed or strongly agreed that certification requirements were effectively communicated. Regarding communication about program requirements, completers reported that they were provided information from several sources, with advising reported most often. Regarding how well they felt prepared to design and implement instruction and assessments aligned with state standards, 95% agreed or strongly agreed. In the area of opportunities to work with all students, 89% agreed or strongly agreed that those experiences were provided. Finally, 90% agreed or strongly agreed that they received adequate mentoring, feedback, and coaching. Over time, more data will be available from PDE, an external, independent source whose data can be triangulated with data from other EPP assessments (TIMS).

We have obtained contact information for approximately 50 program completers who teach in Virginia and Maryland. Focus group interviews with these program completers will assist us to determine if they are effectively applying the professional knowledge, skills, and dispositions that their preparation experiences were designed to achieve.

The EPP administered a survey, grounded in the InTASC standards, to completers in 2017 and 2018. The survey demonstrated strong internal consistency reliability, with α = .91 in 2017 and α = .94 in 2018. Completers, both those teaching and those not teaching, rated their perceptions of preparedness; perceptions of program quality can be evaluated from these survey items. Across all areas of the survey (learners and learning, content, instructional practice, and professional responsibility), completers rated their preparedness as adequate to near proficient. Every candidate surveyed reported proficiency on two items that assessed the ability to align instruction to state and national standards (Candidate and Completer Survey Data).
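For reference, the internal consistency coefficient reported above is assumed here to be Cronbach's (coefficient) alpha, the standard index for multi-item survey scales; under that assumption, a minimal statement of the formula is:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where \(k\) is the number of survey items, \(\sigma^{2}_{Y_i}\) is the variance of responses to item \(i\), and \(\sigma^{2}_{X}\) is the variance of respondents' total scores. Values approaching 1 indicate that items vary together, which is consistent with the reported values of .91 and .94.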

Program outcomes and consumer information

Graduation rates, hiring rates, and ability to meet licensing requirements.

EPP students enter the major after completing 48 credit hours; therefore, a true metric of graduation rates from the EPP specifically is not available. The six-year graduation rate for Penn State at University Park is 85%. There is no available metric for hiring rates. Although PDE has started to share completers' positions within the state, many EPP completers take positions outside the Commonwealth. As teacher shortages increase, more completers may elect to seek employment in PA. The EPP uses GPA and eligibility for certification as a graduation metric.

The EPP posts the number of completers hired into PA school districts annually on its website and systematically tracks these data. It is important to note that these data are dynamic, so care is taken to update the numbers at least twice annually. For example, completers sometimes take positions outside the state and then return, and some wait to become certified and then seek employment in the state.

The EPP systematically tracks certification. Nearly 100% of all graduates are eligible for certification upon completing program requirements (Applications Recommended).

Student Loan default rates and other consumer information

The University posts default rates and tuition rates on its website. The national student loan default rate is 11.5%; the rate for all public four-year schools is 7.5%; and the rate for all Pennsylvania schools is 9.9%. Penn State compares favorably, with a 5.2% loan default rate.

For Fall 2018, the estimated cost of attending Penn State at University Park, including tuition and fees, is approximately $45,000 a year for PA-resident undergraduates and $52,000 a year for non-PA residents.

The average starting teacher salary in PA is approximately $44,000. Many of our completers take positions in Maryland and Virginia, where average starting salaries are approximately $44,000 and $40,000, respectively. These numbers will be tracked for trends (https://www.niche.com/blog/teacher-salaries-in-america/).

The EPP welcomes collaboration and feedback from stakeholders, and stakeholders are included in decision making, program evaluation, and the selection and implementation of continuous improvement efforts.

Among others, EPP stakeholders include members of the Program, College, and University communities; members of the Commonwealth, including children in schools and their families; program alumni; employers; and school and community partners. Program, College, and University stakeholders inform curriculum and program decisions, memoranda of understanding, and resource allocation, for example. As the land-grant institution of Pennsylvania, where our completers may serve as educators across the Commonwealth, the EPP regards children in schools and their families as important stakeholders.

There are numerous examples of how stakeholders inform EPP decision making, program evaluation, and continuous improvement. As one example, when EPP enrollments declined, a task force composed of College personnel, teaching professionals, and other school community professionals from around the state was convened and charged with exploring recruitment solutions. Members of this task force recommended that the EPP hire a designated recruitment and retention specialist. Community members of the task force also served as the selection committee that decided who would fill that position.

Another example is the PDS partnership. Through this award-winning, nationally recognized Professional Development School partnership (NAPDS), shared decision making is demonstrated in program execution (SLICE and Principal Agendas). For example, mentor teachers inform program curriculum change, and inquiry projects target elements of teaching practice for evaluation and continuous improvement. Our PDS hosts monthly meetings with administrators and PSU representatives to discuss needs, new ideas, and potential changes and to address any concerns (SLICE and Principal Agendas). SLICE agendas provide insight into the coordination and collaboration of this mutually beneficial partnership between the EPP and the district. Inquiry conferences provide a window into how teachers in the PDS, as stakeholders, inform the preparation of candidates. PDS stakeholders, including students, parents, and mentor teachers, are surveyed annually (Mentor Surveys) (https://ed.psu.edu/pds/teacher-inquiry).

In a related example, the CIFE office's partnership with districts demonstrates shared decision making and program change. In collaboration with individual districts, we are imagining new partnerships for field experiences. Through these collaborations we are working together to decide how these new partnerships may work, how we will collect evidence regarding the impact of these changes on the development of candidates' knowledge, skills, and dispositions, and how these partnerships may impact teachers in the school community as well as P-12 student learning. Important decisions will include how we will measure these potential changes and impacts. Pilot work has been initiated with two-semester placements and discussions with the district about how to incorporate mentor teachers as supervisors in these sites (MOU).

Advisory committees and alumni societies play an active and important role as stakeholders in EPP programs. For example, the Advisory Committees for the Workforce Education and Development and Agricultural Education and Extension programs meet regularly to provide oversight of these EPP teacher preparation programs. These committees review, monitor, and assess academic programs. Stakeholders on these committees include representatives from the Pennsylvania Department of Education, school administration, professional teachers, educator preparation providers, and business and industry. During advisory meetings, they discuss observations related to teacher effectiveness and how program completers are applying knowledge, skills, and dispositions to enhance teaching and learning for students at the classroom level. These groups provide a mechanism for stakeholders to directly inform program decisions and inquiry projects (AG ED Advisory Council).

The Alumni Society Teacher Network (ASTN) serves in similar roles (ASTN Schedule of Student Teacher Activities). The network is involved in program curriculum decision making through seminars conducted in conjunction with field placements. With attention to continuous improvement, these seminars are evaluated and changed annually based upon candidate feedback.

The Evidence Table for Standard Five, complete with components and the documents referenced in parentheses above, can be found at http://sites.psu.edu/caepreview/2018/09/10/caep-standard-five-evidence-table/