MIDWESTERN INDEPENDENCE AND EDUCATIONAL TECHNOLOGY USE:
EVALUATION STRATEGIES OF THE NEBRASKA CATALYST PROJECT
Dr. Neal Grandgenett, Project Evaluator, University of Nebraska at Omaha
Dr. Jean Jones, Project Director, Nebraska Department of Education
Abstract:
This chapter discusses the efforts of the Nebraska Catalyst Project and its collaborative evaluation process for monitoring the integration of educational technology into pre-service teacher education in the state. Nebraska is a very independent operational environment for educational institutions, with 535 K-12 school districts and 17 institutions of higher education preparing Nebraska teachers. Such institutional independence meant that the higher education institutions and K-12 school districts, although individually quite excellent, had limited experience in working together on educational technology related goals. The Nebraska Catalyst Project was a bold step toward shared institutional strategic planning, decision-making, and faculty training related to educational technology. The evaluation mechanism was an important component of this successful project and relied on four key strategies to monitor progress: 1) developing a well-organized reporting system, 2) encouraging joint work on institutional assessments, 3) establishing an online format for evaluation information, and 4) systematically returning feedback to the individual institutions. This chapter describes the evaluation component of the Nebraska Catalyst Project and how it operated in the context of these four strategies, within the very independent educational environment of the state.

Midwestern Independence and the Nebraska Catalyst Project

Textbooks on Nebraska history typically record a rugged beginning for the state and often showcase the courage and independence of the many pioneer families who settled as Nebraskans. Fighting the harsh elements of unpredictable weather and expansive prairie, these families learned to be very independent by successfully building their own houses, raising their own food, and generally protecting their hard-won homesteads. Although always pleased to work with other families and build communities when possible, these families developed a proud heritage of independence in demonstrating that they could indeed make it on their own.

That spirit of Midwestern independence is a proud and powerful value in the state of Nebraska, and its influence is still noticeable today in many of the state’s institutions. For example, Nebraska is the only one of the 50 states to maintain a unicameral legislature, operating with a single legislative body. Education in the state is another good example of this Midwestern independence: more than 535 school districts operate relatively independently, with no state-mandated achievement testing (districts select their own assessments). Some of the independence and variety of education in Nebraska can be examined at the extensive Nebraska Department of Education website at http://reportcard.nde.state.ne.us/. The 17 institutions of higher education that prepare teachers in the state are also very independent, and have selected unique curricula that have resulted in strong, but also very diverse, sets of learning experiences for pre-service teachers.

The Nebraska Catalyst Project was launched into this setting of educational independence. The 17 participating institutions included both public and private institutions, and ranged in size from a very small private college with 12 teacher education students to a large research university with over 3000 teacher education students. Partnering school districts were equally diverse, ranging from a large urban district with more than 50% minority students to numerous rural districts with small numbers of teachers and students. Collaboration across these higher education and K-12 school district environments, related to enhancing the use of educational technology, was relatively uncommon for these organizations. However, all of these institutions did share a genuine interest in educational technology, which they saw as an important learning tool in the classroom. Building upon this shared interest, but within an environment of rugged independence, the Nebraska Catalyst Project undertook its efforts to build collaboration and enhance the use of educational technology in teacher preparation.


Building on Independence while Laying the Foundation for Partnership

When the Nebraska Catalyst Project was funded by the U.S. Department of Education’s Preparing Tomorrow’s Teachers to Teach with Technology (PT3) program in October of 1999, it was clear that the culture of independence among the participating institutions was going to be a real challenge. Institutions were rarely open to sharing individual educational technology innovations across institutions, and some of the larger university-based organizations even appeared to be in more of a competitive than collaborative mode with each other. Within this context, the project sought to create a new “culture of dialogue” between institutions, and to become a forum for both discussion and mutual understanding. Like many teacher preparation institutions in the United States, Nebraska found that educational technology presented a unique challenge for teacher educators (Cavanaugh, 2003; Suleiman, 2001; Burke, 2000; McCoy, 1999). Topics related to faculty support, student expectations, and modeling frequently surfaced in the initial discussions.

The challenges of starting the partnership process were considerable and often reflected a need to further define the partnership, particularly how institutions might work together and what they would work together to accomplish. There was a realization that change would need to be documented based upon these initial definitions. To help facilitate this definitional process, the two goals of the Nebraska Catalyst Project were frequently reviewed in initial partnership meetings. These goals were 1) to serve, through an effective statewide consortium, as a catalyst for creating systemic improvements in the preparation of new teachers to use technology, and 2) to strengthen, through partnerships with K-12 school districts and teachers, teacher education programs statewide so that new teachers are prepared to effectively use educational technology in K-12 settings. The definition and goals discussion eventually formalized four task forces, with planned efforts targeting enhanced graduation requirements, stronger institutional assessments, increased networking with K-12 educators, and enhanced distance education. These initial discussions also reflected the challenges posed by the considerable diversity of the institutions in technology awareness, technical sophistication, levels of integration, and availability of resources.

However, from the start of the Nebraska Catalyst Project, the 17 participating institutions were very clear in their commitment and responsibility to future teachers: to effectively integrate and model the use of educational technology within their own teaching. Each of the Nebraska teacher preparation institutions expressed a real desire to better integrate educational technology into their programs, no matter what their current state of integration, from the very minimal to the quite extensive. This mutual statewide aspiration parallels the national desire to improve in this important teacher preparation area, as reflected in national reports such as “Technically Speaking: Why All Americans Need to Know More About Technology” (Committee on Technology Literacy, 2002).

A strong working relationship between the project director and evaluator helped formalize an institutional understanding that evaluation would consistently be a strong component throughout the project. The close link between the project operation and the project evaluation was modeled by these two project leaders. Both individuals often led discussions at the site facilitator meetings, and consistently worked to build an initial institutional awareness of the importance of the evaluation process to both the project and individual institutions.

Personal visits by the project director to each institution offered an opportunity to further build a collaborative environment and to begin the evaluation/assessment process. Two surveys were conducted for needs assessment purposes: one focused on the status of the institutions relative to the various grant objectives and the project design, and the other was an institutional self-assessment of the status of teacher preparation with educational technology integration. These surveys provided institutional baseline data from two very distinct perspectives, as well as a data-driven foundation for many project-related discussions and concrete performance indicators for demonstrating growth in specific areas. The project director, evaluator, and the institutional site facilitators all periodically referenced the results of these two survey instruments during discussions. Most importantly, however, the initial needs assessment carefully modeled the desired four key strategies of the evaluation process by illustrating the utility of well-organized reporting, encouraging collaborative work, establishing an online format, and systematically returning feedback to the individual institutions.

Many of the ongoing meetings of the Nebraska Catalyst Project sought to encourage discussion of current efforts and challenges as well as to establish working relationships from which to build trust. One excellent example of this relationship-building process was the formation of “strategic planning Cadres”. These Cadres were composed of teams of at least five individuals from three or four institutions, guided by a PT3 colleague from a neighboring state who served as facilitator. These original Cadres have continued to plan together and share ideas, and have maintained the special bond formed at the inception of the grant.

To further build a culture of dialogue and inclusion, the Catalyst Project embraced mutual work across a broad spectrum, helping some smaller institutions get started with educational technology and expanding innovative ideas that larger institutions had developed. The Nebraska Catalyst also broadened the dialogue when possible, expanding its partnership to other Nebraska stakeholder institutions, such as the Nebraska Department of Education, the Nebraska Association of Colleges of Teacher Education, the Nebraska Council of Teacher Education, the state’s Educational Service Units, the Nebraska Distance Learning Association, the Nebraska Educational Telecommunications Commission, and the Nebraska Educational Technology Association. The project also initiated a newly organized group of Nebraska pre-service students called SETA (Students Educational Technology Association), to help expand dialogue with the pre-service teachers themselves.

The ongoing institutional dialogue helped to continually focus the group’s desire to strengthen institutional teacher education settings as they related to training future teachers to use technology effectively. To provide additional focus, four task forces were formalized: Task Force I, focusing on assessment development; Task Force II, focusing on educational technology related degree completion requirements; Task Force III, focusing on K-12 teacher cadre development; and Task Force IV, focusing on enhancing distance learning. Task Forces I and II were eventually combined to work more effectively together and to undertake the development of various prototype instruments for assessing pre-service teachers’ classroom readiness in educational technology, which evolved into a key effort within both the project and the overall evaluation process.

The integration of various national standards, particularly the National Educational Technology Standards (International Society for Technology in Education, 2000), emerged as a core theme across the four task forces. Considering standards in the planning process has been shown to be critical to effective educational technology reform (Dewert, 1999; Peck, 1998). In the Nebraska Catalyst Project, the standards, in essence, became a common language for the educational reform discussions, and helped to structure the dialogue on what might eventually be accomplished by the working groups. Building strong institutional linkages to teacher competencies in educational technology, and establishing a variety of high-quality assessment strategies for those competencies, further emerged as operational objectives of the standards theme. A good institutional linkage between teacher competencies in educational technology and a good assessment process for those competencies would seem to be very important for a teacher preparation program’s successful integration of educational technology (Krueger, Hanson, & Smaldino, 2000; Smith et al., 2000; Waugh, Levin, & Buell, 1999). The common language of standards allowed everyone to both contribute to and benefit from the dialogue. Group discussions often sought to build upon the work and perspectives already underway at an institution, as well as to carefully review efforts being undertaken elsewhere in the country. Each organization had its own individual perspective on educational reform, and the ideas that surfaced within task forces were typically both passionate and creative.

Systematic Evaluation with Four Key Strategies

A strong evaluation process is one that is very systematic in its approach to understanding and mapping change within a project. As defined by Weiss (1998), evaluation can be considered to be “the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy” (p. 4). In the Nebraska Catalyst Project, the evaluation process was carefully organized around this definition to include four key strategies of:
    1) developing a well-organized reporting system,
    2) encouraging joint work on institutional assessments,
    3) establishing an online format for evaluation information, and
    4) systematically returning feedback to the individual institutions.

Key Strategy 1: Developing a Well-Organized Reporting System
The evaluation process within the project needed to be very collaborative and flexible to successfully enlist the participation of all 17 institutions. This process sought to assist the member institutions both in providing the raw data needed to track the overall progress of the project and in retrieving data summaries that might help inform their individual institutions as they sought to improve pre-service education. Through this evaluation effort, focused upon blending both project and institutional needs, the participating institutions were encouraged to help decide what data elements would be particularly important at their institution, and how those data might best be summarized to contribute to the common evaluation effort. A carefully structured reporting process, shared by the 17 institutions, helped make this blending of individual and collaborative evaluation purposes more workable and convenient for the participating institutions. The evaluation component attempted to model the use of educational technology for data tracking and analysis, and thus helped the Catalyst Project itself model the use of educational technology, which has been shown to be critical in the effective reform of teacher preparation programs related to educational technology (Wilkerson, 2003; Whetstone and Carr-Chellman, 2001; Carlson and Gooden, 1999).

The partner institutions were required to contribute institutional reports during each of the three project years. Project orientation sessions, specific to the mid-year or year-end reporting periods, were often held, and institutions received considerable background information and support at these sessions. For example, at many meetings, the site facilitator at each location was given a well-organized notebook that provided an overview of the reporting requirements for that period of the project. In addition to the notebook, a CD or disk with all forms loaded was also offered to facilitators. As the project developed and became more technically sophisticated, reporting forms were made available exclusively on the web, so that partner institutions had convenient access to all necessary forms, an easy way to quickly submit the information, and answers to frequently asked questions. An overview PowerPoint presentation that explained each form in detail was also made available at both the orientation sessions and on the website. Institutions were constantly reminded of the importance of prompt and accurate reporting, and the reporting of evaluation-related information was also tied to an institution’s eligibility for future funding. Thus, all member institutions were usually quite prompt and supportive in their reporting process. The need and purpose for an evaluation process was continually reinforced at various project meetings, with periodic evaluation updates provided to participants as routine agenda items at task force meetings.
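
To make the flavor of such a structured, web-based reporting form more concrete, the following minimal sketch (in Python) shows one hypothetical way an institutional report could be captured as a structured record and checked for completeness before submission. The field names are illustrative assumptions only and do not reproduce the Catalyst Project's actual forms.

    # Hypothetical sketch of a structured institutional reporting record;
    # field names are illustrative, not the project's actual form fields.
    from dataclasses import dataclass, fields
    from typing import Optional

    @dataclass
    class InstitutionalReport:
        institution: str
        reporting_period: str                       # e.g., "Year 2, mid-year"
        faculty_trained: Optional[int] = None
        courses_integrating_technology: Optional[int] = None
        assessments_piloted: Optional[int] = None
        narrative_summary: Optional[str] = None

        def missing_fields(self):
            """List any fields left blank, so a site facilitator can be
            prompted to complete them before the report is submitted."""
            return [f.name for f in fields(self) if getattr(self, f.name) is None]

    report = InstitutionalReport("Example College", "Year 2, mid-year", faculty_trained=12)
    print("Still needed:", report.missing_fields())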

Key Strategy 2: Encouraging Joint Work on Institutional Assessments
Eventually, a strong interest emerged from the discussions in developing institutional assessments, usable by all institutions, that measured the integration of educational technology into teacher preparation. Institutions saw the lack of effective assessment of educational technology competency and use within a program as a critical gap. Participating institutions, as well as other stakeholders (such as the Nebraska Department of Education), volunteered individuals to help work on this important joint effort. This shared interest greatly aided the instrumentation component of the evaluation process, and partner representatives were often ready to help retrieve data from their respective organizations or to work collaboratively to pilot a particular assessment instrument. Initial efforts on the assessment process encompassed a variety of organizational approaches and perspectives, as institutions worked collaboratively to identify or develop a wide range of assessments, such as performance/portfolio, self-report, self-reflection, focus group, survey, and classroom observation strategies. These assessments were often built upon existing individual interests already present at a particular institution.

The management of this wide range of assessment instrumentation also became a general interest area and a key topic for discussion in the task forces. Collaborative work eventually resulted in a prototype “Assessment ToolKit” for helping to manage these assessments, as well as to help document their reliability and validity (the ToolKit is also available at http://www.necatalyst.org). In addition to instrument management, the ToolKit offered institutions an opportunity to interact online with regard to their experience with the various instruments, building a community of learners. All this focus on assessment made collecting evaluation data an almost natural by-product of these discussions, and member institutions were very good about participating in the evaluation process through this collaborative instrumentation.
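
As one concrete illustration of what documenting an instrument's reliability can involve, the short sketch below (in Python) estimates internal-consistency reliability (Cronbach's alpha) from pilot responses. The respondent data and item counts are invented for illustration; the chapter does not specify which reliability statistics the ToolKit actually reported.

    # Hypothetical sketch: Cronbach's alpha for a piloted self-report instrument.
    # The pilot data below are invented for illustration only.
    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: 2-D array with rows = respondents, columns = items."""
        k = item_scores.shape[1]                              # number of items
        sum_item_var = item_scores.var(axis=0, ddof=1).sum()  # sum of item variances
        total_var = item_scores.sum(axis=1).var(ddof=1)       # variance of total scores
        return (k / (k - 1)) * (1 - sum_item_var / total_var)

    pilot = np.array([[4, 5, 4, 4],    # five respondents rating four items (1-5 scale)
                      [3, 3, 2, 3],
                      [5, 5, 4, 5],
                      [2, 3, 3, 2],
                      [4, 4, 5, 4]])
    print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")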

Key Strategy 3: Establishing an Online Format for Evaluation Information
Compatible with the online Assessment ToolKit concept, the participating institutions found that online formats for general evaluation reporting were particularly useful for project-related reporting requirements, and were convenient in terms of both cost and later data analysis. Evaluation data were also automatically retrieved as institutions completed various instruments and reporting forms online. Online formats for the pre-service teacher instruments were particularly helpful, since an instructor could simply take a group of students to a computer laboratory to administer the instrument together, or perhaps assign the students to take the online instrument at their own convenience later in the week.

The Nebraska Catalyst website emerged as a way not only to store and access the evaluation-related assessments, but also to further organize task force efforts. Online discussions using software such as Facilitate.com extended meeting discussions beyond the face-to-face settings and greatly enhanced a partner’s ability to ask questions and generally receive informational support. Institutions could thus seek additional assistance for their various efforts, such as strategic planning, and extend the work already done within the Catalyst evaluation process to further refine and develop their own institutional approaches. Member institutions were very good about participating through the online venues, which included website interaction, a listserv, online facilitation software, and online reporting templates.

Key Strategy 4: Systematically Returning Feedback to the Individual Institutions
The overall project evaluation process eventually became quite collaborative in operation, allowing institutions to help lead particular assessments and to enlist the shared efforts of partners in improving and piloting various assessments, so that all institutions might use and benefit from them. Some institutions focused on portfolios, while others focused on self-reports, observation instruments, and other strategies. The project also worked to better formalize these assessments by funding outside experts to assist with validity and reliability work, establishing pilot administrations of the instruments, and encouraging replications by participating institutions.

Summary institutional reports that reflected institutional progress on both single and joint instruments were an important feedback component of the evaluation process. As institutions had pre-service teachers take a particular instrument, participate in a focus group, or contribute to a portfolio, the evaluation process always produced a brief individual institutional summary while also contributing to the overall evaluation data. Such feedback helped the individual institutions better recognize the personal utility of the evaluation information, as well as its importance to the overall project itself.
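
The brief sketch below (in Python) illustrates, with invented data, one simple way pooled instrument responses could be grouped by institution to produce the kind of short institutional summary described above. The institution names, instrument labels, and scores are hypothetical and are not drawn from the project's data.

    # Hypothetical sketch of Key Strategy 4: returning brief per-institution
    # summaries from pooled instrument data (all values invented).
    from collections import defaultdict
    from statistics import mean

    pooled_responses = [
        ("College A", "TAPSI self-report", 3.8),
        ("College A", "TAPSI self-report", 4.2),
        ("College B", "TAPSI self-report", 2.9),
        ("College B", "Observation rubric", 3.5),
    ]

    def institutional_summaries(records):
        """Group pooled responses by institution and instrument, then average."""
        grouped = defaultdict(list)
        for institution, instrument, score in records:
            grouped[(institution, instrument)].append(score)
        return {key: round(mean(scores), 2) for key, scores in grouped.items()}

    for (institution, instrument), avg in institutional_summaries(pooled_responses).items():
        print(f"{institution} | {instrument}: mean score {avg}")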

The Key Evaluation Strategies in Action: The Catalyst Assessments

The development of shared assessments indeed became a real collaborative interest and target effort for the 17 participating institutions of the Nebraska Catalyst Project. These assessments also provided a way to operationalize the four key evaluation strategies, by contributing to a well-structured reporting process, encouraging collaborative work on assessment, facilitating online formats for data retrieval, and providing a focused venue for institutional feedback. Considerable work was undertaken to make sure that project-related assessments were of the highest quality possible, whether they came from outside sources or were jointly developed within the project.

The participating institutions found that a considerable number of informal instruments measuring the growth and progress of pre-service teachers’ technology skills and integration competencies were already in use by institutions of higher education (IHEs) across the country. However, instruments that had been systematically developed, piloted, and carefully refined, that targeted reliability in administration and validity in content, and that were based upon standards or competencies, were somewhat atypical. This is consistent with national trends. During the last decade, many institutions appear to have moved toward a wider variety of assessment strategies, and have embraced strategies that are generally more qualitative and performance-based in format, such as portfolios (Milman, 1999; Georgi & Crowe, 1998; McKinney, 1998; Petrakis, 1996).

As a first step, more than 50 different assessments already available nationally were shared and discussed with partner institutions. Site facilitators actively shared ideas on what might be useful to their institution, but also useful to others. When possible, they encouraged graduate students and various faculty members to become involved in the discussions. Based on the ongoing assessment discussions and collaborative work, the project partners eventually decided to view the diversity and independence of their institutions as a strength rather than a weakness in the project. It was believed that, since there was a wide range of ways that educational technology might be appropriately infused within a particular teacher preparation program or used by pre-service teachers at an institution, a wide range of assessment strategies was appropriate for monitoring and evaluating that integration process. Each institution operated within its own unique context, and what worked well for one organization might not work well for another. All institutions eventually saw the potential benefit of having a wide range of assessments available to them, and thus all were willing to help develop or pilot an assessment at another institution. The institutional assessment instruments that surfaced within the Nebraska Catalyst Project were thus both diverse and collaboratively developed.

Instruments were eventually made available to all organizations through the extensive project website (http://www.necatalyst.org). The most successful assessments in the project consisted of the following prototypes, which continue to be used within various subgroups of the participating institutions.

Self-Report Instruments

In the context of educational technology reform within higher education institutions, self-report mechanisms can be an important piece of a multiple assessment strategy (Gershner, Snider, Huestis, Foster, 2000). Many institutions in the Nebraska Catalyst Project believed that a self-reporting process was a valuable approach for examining what their pre-service teachers were learning about educational technology. The Technology Ability Perception Self-Report Instrument (or TAPSI) was developed within the project as a general self-report instrument related to a pre-service teacher’s perceived educational technology skills and knowledge. It is currently in use at several participating institutions, and available to all interested institutions through the Nebraska Catalyst website. An online knowledge rating scale instrument was also developed and piloted. This scale was used by several Nebraska Catalyst institutions to help pre-service teachers reflect upon current knowledge levels in the use and integration of educational technology in the teaching and learning process. Both instruments also take advantage of the convenience of an online format.
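
As a simple illustration of how responses from such a self-report or knowledge rating scale might be summarized for an individual pre-service teacher, the sketch below (in Python) averages invented ratings within a few illustrative competency categories. The category names and ratings are assumptions for demonstration and do not represent TAPSI's actual items or scoring rules.

    # Hypothetical sketch: summarizing one pre-service teacher's self-report
    # ratings (1-5 scale) into per-category means; categories are illustrative.
    from statistics import mean

    self_report = {
        "Technology operations and concepts": [4, 3, 4],
        "Planning and designing learning": [3, 2, 3],
        "Assessment and evaluation": [2, 2, 3],
    }

    profile = {category: round(mean(ratings), 1) for category, ratings in self_report.items()}
    for category, score in profile.items():
        print(f"{category}: {score} / 5")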

Student Portfolios

Student portfolios can be both effective and challenging as a way of examining teacher competency in educational technology (Wright, Stallworth, & Ray, 2002). What a pre-service teacher reports as technology competencies, and what they have actually had experience integrating into their teaching and learning experiences, are two very different issues. Several institutions focused on portfolio-related efforts to help demonstrate what their pre-service teachers were learning about technology. The development of a prototype for an electronic student-based portfolio was undertaken through a direct collaboration between the Nebraska Catalyst Project and the two Nebraska PT3 Implementation projects underway within the state (and particularly that of the University of Nebraska at Omaha). The initial prototype of a student portfolio, which is “institutionally flexible”, now contains information from more than 2000 students across various classes, and has been considered by an NCATE institutional visitation team to be an evolving model that might be recommended to other institutions. The Technology Skills Certificate effort is a similar portfolio-related effort that is continuing at a NECatalyst participating institution (the University of Nebraska at Lincoln) and is being refined as a “class-based” electronic portfolio for pre-service teachers. Within this assessment mechanism, students involved in a particular class or set of courses undertake a variety of technology-based performance assessments, which eventually result in a certificate of successful completion. Furthering the work of these initial online portfolio efforts, the project was also involved in the development of a web-based qualitative grade book prototype (called I-Beam).

Classroom Observation Instruments

Examining the teaching process through systematic observation has been a useful way to provide feedback to teachers and their preparation programs, and it can be effectively adjusted to also reflect educational technology use (Ewens, 2001; Tseng, 1998). Having pre-service teachers actually demonstrate what they have learned about educational technology through classroom demonstration can no doubt help inform institutional decision-making. This information can particularly help inform an institution about whether a pre-service teacher will actually incorporate educational technology into the teaching and learning process once they are in a classroom. Each of the Nebraska Catalyst institutions expressed an interest in a systematic way to observe student teachers and their use of technology in that field experience. In response to this interest, the Classroom Observation Instrument was created with the assistance of WestEd, the NSF Center for Assessment and Evaluation of Student Learning (CAESL), in San Francisco. This instrument was piloted during 2001, used further in 2002, and is structured to formalize the identification of classroom uses of educational technology by both teachers and students. It is particularly useful in examining whether a student teacher is using educational technology within that capstone field experience. It includes a rubric for examining various levels of educational technology use (as well as supportive constructs such as constructivism).

Focus Group Efforts

A stakeholder focus group can be a powerful way to gather input about the general effectiveness of a program (Reynolds, 1996). In the Nebraska Catalyst Project, a focus-group reflection process for pre-service teachers was found to be an important evaluation strategy that offered a unique perspective and a useful way for institutions to examine and document their relative progress related to educational technology initiatives. An extensive focus group protocol for pre-service teachers, also made available to institutions, asked for input about how well their respective institutions appeared to be preparing them to use educational technology, and offered them an opportunity to share their vision for technology use in education. In addition to the instrument, a report template was offered as a model for institutions as they analyzed the data and used the responses for future program planning.

Student and Faculty Surveys

Simply asking students and faculty about their technology use can go a long way toward informing an institution of its technology integration needs (Denton, Clark, & Allen, 2002). In a follow-up to the face-to-face focus group effort mentioned above (and based upon that protocol), a web-based survey was prepared for pre-service teachers for use within the Nebraska Catalyst institutions. This web-based survey broadened the input base of pre-service teachers, and provided valuable additional feedback from them on the perceived value and revision needs of their pre-service preparation programs. A faculty survey was also established to help gain faculty members’ perceptions of their institutional programs. Survey questions for both students and faculty focused on two main areas: 1) their knowledge and experiences related to educational technology within the institutional program, and 2) their general attitudes related to educational technology. Both instruments remain freely available to institutions for potential revision and use.

Snapshots of Teachers in the Field

Asking current teachers about their perceptions of the success of the teacher certification programs that prepared them is often a useful strategy for program review (Imbimbo & Silvernail, 1999; Attwenger, 1997). In the Nebraska Catalyst Project, the perceptions of what in-service or field-based teachers had learned in their teacher preparation programs were also a part of the assessment-related information for the project evaluation process. A web-based snapshot instrument was administered during February of 2000, 2001, and 2002, and focused on determining the beliefs, technology use, and technology-based needs of Nebraska teachers, as connected to pre-service and in-service programs. A total of 7600 Nebraska teachers eventually responded, providing a rich perspective on current practices and needs. The results were also disseminated to the Nebraska State Board of Education and the state legislature.

Commercially Developed Instruments
The Nebraska Catalyst Project also embraced various commercial instruments within the evaluation process. The use of institutional instruments such as the School Technology and Readiness (STaR) Chart, as one example, has been shown to be an effective way of helping monitor the institutional integration of educational technology (Fulton, 2000). This excellent instrument was also used to help inform the project evaluation process for the 17 Catalyst institutions. The instrument helped the project track integration across the 17 partners, and its questions provided feedback on technology-integrated courses, faculty support, field experiences, and technology standards integration. The instrument also offered institutions an important perspective for planning dialogue associated with offering, through their teacher preparation programs, the skills needed for 21st-century learners.

The Evaluation Model Matured with Feelings of Collaborative Success

As the evaluation process continued to evolve and mature, it became a strong component within the Nebraska Catalyst Project. Institutions participated fully, and appeared to feel successful in the collaborative effort to collect data and refine instruments. As the overall project matured, institutions and site facilitators even contributed more than was necessary for the reporting process, sending the project evaluator additional pieces of information or regularly contributing samples of institutional efforts and documents being developed. Evaluation-related assessment efforts even resulted in several master’s theses and two doctoral dissertations. As this evolution continued, further evidence of the utility of the four key strategies of the evaluation model emerged.

The well-organized reporting effort, identified as the first key evaluation strategy, became almost routine for institutions. After a well-timed “reminder e-mail”, institutions would go to the project website, review the reporting requirements, and complete the needed online or interactive forms. The collaborative assessment work, as the second key evaluation strategy, eventually became a real source of synergy and partnership for the project, as institutions formed natural sub-groupings to work on particular assessment efforts. The flexibility of assessment use, with a few assessments used by all institutions and some used by just a few of them, was a nicely balanced approach to serving the evaluation needs of both the individual institutions and the project itself. The online mechanisms for evaluation, as the third key strategy, were also well embraced by the institutions, which recognized the convenience of online formats for assessments, institutional reporting, and the sharing of information through interactive forms and listservs. A strong website helped to make this convenience a reality. Perhaps most important was the fourth key strategy, that of closing the feedback loop to individual institutions. It was easy for the institutions to recognize the utility of contributing evaluation-related information when such information also contributed directly to an understanding of their own institution.

When the value of independence is as ingrained as it is in a state like Nebraska, long-term collaboration among diverse partner institutions is not always simple, and not necessarily always desired when institutions already feel that they are doing an excellent job. However, the Nebraska Catalyst Project went a long way toward establishing the belief that diverse teacher preparation institutions could indeed share strategies, work closely together in a larger context, and still maintain local control over their institutional assessment processes. The evaluation process of the project eventually became the model for that shared institutional vision.

Most importantly, all 17 participating teacher preparation institutions, public and private, large and small, saw consistent progress in the project. Some participating institutions made particularly significant progress, and for the first time conducted an online course, involved faculty in educational technology training, or initiated new graduation requirements for educational technology. Other institutions took initial efforts much further. For example, the online portfolio established at one of the institutions, and refined through shared feedback, has become a model often requested for presentation at various national conferences, including a 2003 meeting of the National Council for Accreditation of Teacher Education (NCATE).

Strategic planning in educational technology, relatively uncommon before the Catalyst Project (only 9 of the 17 institutions had ever undertaken such planning), is now commonplace: all 17 institutions now regularly participate in such strategic planning efforts or make it a key feature of their overall institutional planning. Assessment is another new area of strength for many institutions, with all institutions now having at least some assessment strategies in place, when before the project only 6 or 7 institutions conducted any assessment of educational technology at all. Perhaps most impressively, all institutions are now involving school districts and teachers in this planning and assessment process, up from just 2 institutions when the Nebraska Catalyst Project started.

The project reports for the Nebraska Catalyst Project indeed illustrate how 17 diverse institutions can still move forward together, by systematically tracking progress on individual institutional assessments, selected or modified from the many project-related assessments. Project sub-reports contributed to both individual institution and overall project insight, with institutional sub-reports distributed routinely to each of the 17 institutions, providing a basis for continued strategic planning at their institution. Most importantly, it has been found that institutional independence can still be an asset to collaboration, when innovation is both embraced and shared across partner institutions.

In Summary, a Few Lessons Learned about Institutional Independence and Evaluation

The Nebraska Catalyst Project has found that the process of evaluating educational reform, like educational reform itself, is indeed best recognized as a collaborative venture. The independence of institutions, when recognized as a potential source of shared leadership and input, can be one of the greatest strengths of a collaborative project. Undertaken with this in mind, the project evaluation can help inform the project of its successes and ongoing challenges as well as help maintain a consistent vision for reform. We learned several simple but powerful lessons along the way that reinforce that a collaborative focus and institutional independence can exist side by side, and that such a blend can actually strengthen the evaluation process. These eight lessons learned include the following.

1) A well-organized evaluation process is critical in building a coherent project partnership and in contributing to eventual project success.
2) A strong evaluation begins with a strong partnership between the project director and the project evaluator, which essentially models the collaborative environment desired within the evaluation process.
3) The planning for institutional “buy-in” within an evaluation process requires communication at all levels (deans, faculty, administrators, pre-service teachers, and local districts).
4) The diversity of organizations (large/small, public/private) can operate as an advantage within the project evaluation rather than a barrier, when shared leadership and innovation is encouraged across institutions.
5) Instruments that are online (web-based) and on-target (tied to standards) are the instruments most embraced by institutions striving for efficient and low-cost assessment strategies.
6) Although larger institutions may have more resources to undertake evaluation related efforts, smaller institutions have much to contribute as well, through activities such as piloting new assessment tools and trying new evaluation initiatives more quickly.
7) The periodic use of outside facilitators and consultants within the project-related evaluation activities can be very helpful, such as in external review of assessments and related data summaries.
8) An awareness of what other stakeholders within a state are doing, for example in efforts like statewide distance education, technology support funding, and local control for school districts, can be of critical assistance to collaborative efforts, and to establishing a context for better understanding evaluation-related information.

In summary, after three years of extensive efforts within the Nebraska Catalyst Project, we are proud of the institutional progress. We also found that we were well positioned for continued joint efforts, and that shared dialogue was now much easier to undertake. Such progress is founded upon a great deal of hard work, a strong collaborative focus, and a careful, well-planned, and flexible evaluation process.

It has been said that "You can't teach today's students with yesterday's materials, and expect them to have success tomorrow" (Teacher Librarian, March/April, 1999, p. 34). It is indeed becoming a technological world of fast-paced change, and preparing our pre-K-12 students for the challenges of tomorrow no doubt demands teacher preparation programs that take full advantage of educational technology. The success of such programs will no doubt depend upon careful evaluation strategies. As our state, like many others, braces for some substantial budget cuts, better collaboration among institutions, and better monitoring of success based upon data, are becoming ever more critical necessities for both the shared health of all institutions and the success of individual ones. The Nebraska Catalyst Project, and its three years of collaborative project and evaluation related efforts, helped position Nebraska to better exist in this challenging environment by helping all of us understand that Midwestern independence is indeed a Nebraskan trait of which to be proud, and also a potential source of collaborative innovation and success.

Acknowledgements
This chapter and the Nebraska Catalyst Project itself were made possible by a grant awarded through the Preparing Tomorrow’s Teachers to Teach with Technology (PT3) program. In addition, the Nebraska Catalyst Project has benefited from a strong collaborative base of creative and talented professionals who have worked on the various assessment strategies and prototypes. Many individuals have led or assisted in leading these daunting tasks, including Del Harnish, Paul Clark, Al Steckelberg, Bob Pawloski, Mike Timms, Mike Dempsy, and Neal Topp, to name just a few of these innovative developers. More information about these individual assessments, and the contributions of various Nebraskans, can be found at the Nebraska Catalyst web site at http://www.necatalyst.org.

References

     Attwenger, K.V. (1997). Computer and Internet use among special education graduate students. Research Report of the University of Southern Maine, ED 407 941.
     Burke, J. (2000). New directions – Teacher Technology Standards. Research report: Office of Educational Research and Improvement, ED 459695.
     Carlson, R., Gooden, J.S. (1999). Are teacher preparation programs modeling technology use for pre-service teachers? ERS Spectrum, 17(3), p11-15.
     Cavanaugh, C. (2003). Information age teacher education: Educational collaboration to prepare teachers for today's students. TechTrends, 47(2), p24-27.
     Committee on Technology Literacy. (2002). Technically speaking: Why all Americans need to know more about Technology. National Academy Press, Washington, D.C. ISBN 0-309-08262-5.
     Denton, J.J., Clark, F.E., Allen, N.J. (2002). A dilemma for technology professional development in colleges of education: Building capacity vs. providing tech support. Texas A&M University Research Report, ED 464617.
     Dewert, M.H. (1999). The times they are a-changing’: A look at technology-related requirements for teacher licensure and certification. Journal of Computing in Teacher Education, 15(2), p.4-6.
     Ewens, D. (2001). Observation of teaching and learning in adult education: How to prepare for it, how to do it and how to manage it. Learning and Skills Development Agency, London (England). ED 466 990.
     Fulton, K. (2000). Teacher preparation StaR chart: A self-assessment tool for colleges of education. A paper available from the CEO Forum on Education and Technology, 1341 G Street, N.W., Suite 1100, Washington, D.C. ED 437382.
     Georgi, D., Crowe, J. (1998). Digital portfolios: A confluence of portfolio assessment and technology. Teacher Education Quarterly, 25(1), p73-84.
     Gershner, V., Snider, S.L., Huestis, A., Foster, J.M. (2000). Integrating technology at the preservice teacher level: Examining the process of change. Paper presented at the Society for Information Technology and Teacher Education Conference 2000, San Diego, California. ED 444 536.
     Imbimbo, J., Silvernail, D. (1999). Prepared to teach? Key findings of the New York City Teacher Survey. New York City Board of Education, New York, New York. ED 442 764.
     International Society for Technology in Education. (2000). National Educational Technology Standards. International Society for Technology in Education, 1787 Agate Street, Eugene, OR 97403-1923. ISBN 1-56484-150-2.
     Krueger, K., Hanson, L., Smaldino, S. (2000). Preservice teacher technology competencies: A model for preparing teachers of tomorrow to use technology. TechTrends, 44(3), p47-50
     McKinney, M. (1998). Pre-service teachers’ electronic portfolios: Integrating technology, self-assessment, and reflection. Teacher Education Quarterly, 25(1), p85-103.
     McKoy, A. (1999). Integration of technology into higher education. Paper presented at SITE 99: Society for Information Technology and Teacher Education International Conference, San Antonio, Texas.
     Milman, N. (1999). Web-based electronic teaching portfolios for pre-service teachers. Paper presented at SITE 99: Society for Information Technology and Teacher Education International Conference, San Antonio, Texas.
     Peck, K.L. (1998). Ready…fire…AIM! Toward meaningful standards for educators and students. TechTrends, 43(2), p47-53.
     Petrakis, E. (1996). Using a portfolio to assess pre-service teachers technology competencies. Journal of Computing in Teacher Education, 13(1), p12-13.
     Reynolds, M.R. (1996). Focus groups for informal evaluation of non-instructional interventions. Paper presented at the annual meeting of the Mid South Educational Research Association Annual Conference 1996, Tuscaloosa, Alabama. ED 406 408.
     Smith, P. L., Harris, C. M., Simmons, L., Waters, J., Jordan, W., Martin, D., Smith, N., Cobb, P. (2000). Using multimedia portfolios to assess pre-service teacher and P-12 student learning. A report submitted to ERIC document service, ED 445052.
     Suleiman, M. (2001). Technology and teacher preparation: Towards a humanistic framework. An independent research paper within the ERIC Document Service, ED 454221.
     Teacher Librarian. (1999). Poster: You can't teach today's students with yesterday's materials, and expect them to have success tomorrow. As provided in the March/April issue of Teacher Librarian, 1999, 26(4), p34.
     Tseng, K.K. (1998). Observation instrument for assessing preservice teacher technology use. Paper presented at the Annual Meeting of the International Technology Education Association Annual Conference 1998, Fort Worth, Texas. ED 419103.
     Waugh, M., Levin, J., and Buell, J. (1999). The technology competencies database: Computer support for assessment, teaching, and portfolio management. Journal of Technology and Teacher Education, 7(4), p351-63.
     Weiss, C.H. (1998). Evaluation. Prentice Hall: Upper Saddle River, NJ 07458. ISBN 0-13-309725-0.
     Whetstone, L., Carr-Chellman, A. (2001). Preparing preservice teachers to use technology: Survey results. TechTrends, 46(4), p11-17, 45.
     Wilkerson, T. L. (2003). A triad model for preparing preservice teachers for the integration of technology in teaching and learning. Action in Teacher Education, 24(4), p27-32.
     Wright, V.H., Stallworth, B.J., Ray, B. (2002). Challenges of electronic portfolios: Student perceptions and experiences. Journal of Technology and Teacher Education, 10(1), p49-61.
