The unit has an assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the unit and its programs.
2.1 Assessment System and Unit Evaluation
2.1.a Assessment System
The School of Education (SOE) regularly collects, analyzes, and reports data pertaining to candidates’ and graduates’ performance, program quality, and unit operations to evaluate and improve the unit and its programs. The system reflects the four guiding themes of the unit’s conceptual framework, which emphasizes the importance of reflective and active professionals who are committed to capable leadership and to student academic growth and success.
The system is designed so that the progress of its candidates is reviewed at various transition points and data are gathered from multiple assessment measures. Candidates provide evidence not only that they are capable of moving from one transition point to another, but also that their proficiencies are developing as they move through the transition points. Key candidate assessment measures are aligned with professional and institutional standards, and data collected at each decision point are used both to determine whether candidates have met established requirements and to inform decision making for unit improvement. Program data are collected on teacher licensure, general content area knowledge, candidate ability to plan and manage the learning environment, candidate dispositions, and candidate impact on student learning.
The five transition points at which candidates are assessed are: admission to the program; admission to practicum/field experience; admission to clinical/internship; program completion; and post-graduation.
Candidates in the graduate program are presently assessed at three transition points: admission to the program, admission to practicum, and program completion. The unit has considered adding decision points at ‘entry to internship’ and at ‘post-graduation’; this will, however, require program review and revision.
In addition to candidates’ knowledge and skills, candidate dispositions are also measured. The Dispositions Survey includes among its elements the ideal of fairness and the belief that all students can learn.
Built into the unit’s assessment system is a process for assuring that assessments are consistent, fair, and free of bias, as indicated in the unit’s Conceptual Framework. Assessments, scoring protocols, and the results of assessment instruments are reviewed, revised, and approved by the unit and other stakeholders, as appropriate. Candidates are exposed to the knowledge, skills, and dispositions that will be evaluated on key assessments. They are informed that they will be assessed, the points at which assessment will occur, and the level of proficiency they are expected to attain. The same assessments are administered to all students in any given program.
Remediation is required for candidates who are unable to meet established standards at a satisfactory level. Referrals to remediation may take place at any point during the degree program through the unit’s Student Support System as well as through the institution’s Center for Student Success.
Program quality and unit operations are evaluated through areas the unit has targeted for monitoring. For example, the unit monitors its operations and programs, its candidates’ achievements, and its faculty effectiveness.
External evaluation of the institution and its schools and colleges is done at the institutional level through the regional accrediting agency (Middle States). Data on direct and indirect institution-level measures, including the Noel-Levitz Senior Exit Survey, the National Survey of Student Engagement, candidates’ course evaluations, and enrollment, retention, and graduation rates, are collected and disaggregated by academic unit.
Internal evaluation of program quality is done using candidate and employer surveys. Data compiled and analyzed by the Unit’s Assessment Coordinator are shared and discussed in the unit.
Candidate/Graduate Performance
Candidate knowledge, skills, and dispositions are evaluated by faculty through course-based assessments as well as assessments at program transition points. Data from these assessments are used to inform faculty practice. To monitor the satisfaction and effectiveness of its recent alumni, employer surveys are administered. These surveys gather data that inform the unit about the performance of its graduates and the quality of the program. Exit surveys will be administered to graduating seniors and graduate students during the spring 2013 semester.
All full-time faculty in the SOE have earned doctorates in their fields and qualify for their assignments in the unit. Additionally, in keeping with the University’s recruitment policy, faculty have extensive experience in K-12 settings. Data on faculty effectiveness come from student evaluations, yearly classroom observations by the unit supervisor, yearly self-evaluations, and external comprehensive unit evaluation. Feedback to faculty on classroom observations and faculty self-evaluations is given on an individual basis by the academic supervisor or unit manager. Faculty, in turn, use these data to develop professional development plans for subsequent years.
To monitor its unit operations, the unit identified five areas on which to focus its attention: 1) the quality of its leadership role in governance and management of its curriculum, instruction, and resources; 2) its recruitment, admission, and retention system; 3) its advisement system; 4) its record-keeping system; and 5) its system for handling candidate complaints. To assist the process, a unit operations exit survey was developed, on which candidates rate the unit in these areas. Additionally, program director reports and institutional reports provide data on unit operations.
2.1.b Data Collection, Analysis, and Evaluation
Much of the data are collected, analyzed, and disseminated by the Unit’s Assessment Coordinator on a regular basis and reviewed by the Dean. Praxis results, data of a confidential nature (e.g., student complaints, evaluations of faculty), and institution-level reports are collected, analyzed, and disseminated by the Dean. Monthly unit meetings include a standing Assessment report on the agenda, which may cover assessment issues at both the unit and institutional levels. These monthly reports allow Unit faculty to discuss and advise on the issues on a regular basis. Microsoft Excel and the Institution’s database (Banner) are the software programs used to store, analyze, and organize quantitative assessment information. Qualitative data are filed in paper or electronic format and/or scanned into PDF format.
Candidate performance data are collected from candidates, cooperating teachers, principals, university clinical supervisors, faculty, and institutional participation in national testing. Key Assessment instruments are distributed on each program’s Transition Points schedule. The collection, analysis, and dissemination of the data occur in the nearest semester of the regular academic year. Typically, data from candidates’ key assessments collected during the Fall semester are analyzed and shared during the Spring semester; data collected during the Spring semester are analyzed over the Summer and shared at the beginning of the next academic year.
Program quality data are collected from candidates, alumni, and employers. These data are gleaned from applicable item results at the institutional level from course evaluations, alumni surveys, and both the Noel-Levitz and NSSE surveys, and at the unit level from an Exit Survey and Alumni and Employer Surveys. Course evaluations are conducted every semester. The Exit Survey is collected at the end of the spring semester each year. Alumni and Employer Surveys are collected every third year. Institution-level data are collected at the discretion of the institution. As a part of program quality, faculty effectiveness data are also collected via course evaluations, classroom observations (each spring semester), and annual faculty evaluations and feedback by the Unit Chair, Dean, and faculty colleagues (Retention, Promotion and Tenure Committee).
Unit operations data at the unit level are collected from candidates through a Unit Operations Exit Survey. Unit operations are also assessed at the institutional level through institutional reports pertaining to the unit submitted by the Dean. Unit operations data are collected, analyzed, and disseminated once per academic year.
2.1.c Use of Data for Program Improvement
Candidates receive feedback from their evaluations on Key Assessments. This feedback provides an opportunity for discussions not only among faculty, but also between candidates and faculty, which can inform the unit about areas where changes might merit investigation.
Prior to academic year 2012-2013, a grade point average of 2.33 and successful completion of an English Proficiency exam, a computer literacy exam, and Praxis I were requirements for candidates entering all initial programs in the unit. PRAXIS I was the only external requirement at that point and was typically a ‘stumbling block’ for many candidates in completing the requirements for program admission. In order to lessen this barrier, and so as not to stunt the growth of the program, the unit removed this exam as an entry requirement and in its place instituted PRAXIS II, a requirement for licensure by the Board of Education, as a requirement for admission to student teaching. To support candidates taking these examinations, the unit, through its SAFRA grant, and the institution’s Center for Student Success have provided tutorials to assist candidates with this exam. A coordinator of Student Support Services for the unit has been granted release time to implement this effort.
Commentary from initial-level candidates and cooperating teachers during the field experience and student teaching courses, particularly in the Elementary Education program, indicated a need to update the unit’s classroom learning environment. The Unit did not have the technology (Promethean boards) that was standard equipment in the public schools, and the use of this type of technology was not addressed within the coursework. Promethean boards in the SOE were provided through a Title III English as a Second Language (ESL) grant written by the School of Education. These boards were installed in the Unit’s classrooms early this semester. Additionally, the unit’s SAFRA grant provides for the professional development of faculty in this and many other areas that improve classroom instruction.
Course grade performance in content areas revealed that candidates mostly perform above the admission requirement. Based on this course grade evidence, the faculty decided to increase the minimum grade point average required for entry to the unit from 2.33 to 2.50.
2.2 Moving Toward Target or Continuous Improvement
2.2.b Continuous Improvement
The School of Education is steadily evolving toward a culture of evidence through data. The SOE regularly collects, analyzes, and reports data pertaining to a variety of elements within its degree programs. At a higher level of evolution, the SOE is beginning to systematize the use of assessment results to identify areas for improving candidate performance and programs. Further evolution will include the use of assessment data to inform decisions regarding strategic priorities, resource allocation, and faculty development within the academic Unit. Professional development sessions, in the form of Unit “brown bags,” were initiated in Fall 2012.
Enrollment and retention are a concern of the entire Institution, and the SOE is no exception. The unit will investigate persistence patterns among pre-education applicants in hopes of identifying barriers and finding solutions to them. Typically, 100% of candidates become completers.