UNIVERSITY OF HOUSTON CLEAR LAKE

SCHOOL OF EDUCATION
 
 
 
 
 
 
 

ASSESSING THE IMPACT OF THE NASA JOHNSON SPACE CENTER DISTANCE EDUCATION AND MENTORING PROGRAM, TEXAS AEROSPACE SCHOLARS
(YEAR ONE, 2000 - 2001)
 
 
 
 
 
 

By RITA KORIN KARL
 
 
 
 
 
 
 
 

A Project Proposal submitted to the

School of Education

in partial fulfillment of the

requirements for the degree

of Master of Science








Approved:

________________________________

James Sherrill, Ph.D.

Associate Dean

 ________________________________

Atsusi Hirumi, Ph.D.

Associate Professor, Project Supervisor

________________________________

Glenn Freedman, Ph.D.

Education Faculty, Committee Member
 
 

September, 2001

Abstract

To meet the growing demand for a high-tech workforce, the Texas Aerospace Scholars (TAS) program aims to encourage 11th grade students to consider careers in math, science, engineering or technology. This study evaluates the initial impact of TAS on students’ future career choices. Extant data, collected by the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) during the first year of the program, will be analyzed to evaluate two primary program features: (a) the on-line distance education science and engineering curriculum, and (b) the mentoring relationships between NASA engineers and scientists and the students. The students’ attitudes toward the program and toward engineering as a career, as well as their choices of colleges and intended majors, will be assessed. The results of the summative evaluation will be used as a basis for making decisions about program continuation.

Table of Contents

List of Figures and Tables
Chapter 1. Introduction

Introduction
Project Background
Problem Statement
Significance

Chapter 2. Review of Literature

Science and Engineering Education Programs
Mentoring Programs
Mentoring Gifted High School, Female and Minority Students
Implications for Program Evaluation and Continuous Improvement

Distance Education
Distance Education for Gifted Students
Web-based Education Programs
Web-based Science Education Programs
Web-based Mentoring Programs
Implications for Program Evaluation and Continuous Improvement

Program Evaluation
What is Program Evaluation?
Forms, Approaches and Types of Educational Evaluation
Program Evaluation Procedures
Validity and Reliability of Results
Implications for Program Evaluation and Continuous Improvement

Chapter 3. Method

Population and Sample
Research Design
Instruments
Data Collection Procedure
Data Analysis
Limitations

References

Appendix

A - National Science Foundation Science and Engineering Indicators
B - National Merit Scholars Planned Majors
C - Joint Committee Standards on Program Evaluation
D - Student Application, Commitment Form, Talent Release Form
E - TAS Databases
F - Data Analysis Outline
G - TAS Evaluation Instruments
 

List of Figures and Tables

Figure 1. Program components.

Figure 2. Texas Aerospace Scholars' assumptions and target impacts.

Figure 3. NASA Project development cycle.

Figure 4. Texas Aerospace Scholars student profile.

Figure 5. Texas Aerospace Scholars district representation - House.

Figure 6. Texas Aerospace Scholars district representation - Senate.

Table 1. Differences between formative and summative evaluation.

Table 2. TAS timeline and data collection points.
 
 

Chapter 1
Introduction

According to the National Science Foundation (1998), the number of available jobs in science and engineering is growing faster than the number of college students pursuing degrees in these fields. Current national statistics show that 1 in 10 positions in the high-tech industry is vacant. Projections show that from 1998 to 2008 the number of positions for computer scientists is expected to increase by 118%, for computer engineers by 108%, and for engineering, mathematics and natural science managers by 44% (National Science Foundation, 1998; see Appendix A).

To meet the growing demand for a high-tech workforce, the Texas Aerospace Scholars (TAS) program aims to encourage 11th grade students to consider careers in math, science, engineering and technology through a distance education and mentoring program. Funded by the State of Texas, NASA’s Johnson Space Center (JSC) and two private organizations (Rotary NASA and the Houston Livestock Show and Rodeo), TAS is designed to harness the excitement of the space program and encourage gifted students to choose science, engineering or technology as a career. Although there are other space science education programs for gifted scholars, TAS is JSC's first program to combine distance education with both face-to-face and on-line mentoring. NASA has accumulated a large amount of data regarding the program, both numerical and anecdotal. The problem is that little has been done to synthesize and analyze these data. A comprehensive view that incorporates all of the data submitted by students, staff, mentors and educators involved in the program would help to paint an accurate portrayal of the first year of the program.

JSC is committed to a four-year investigation of the TAS program. This study evaluates the initial impact of TAS on students’ future career choices by analyzing qualitative and quantitative extant data collected during the first year. The study focuses on the program’s two key features: (a) the on-line distance education science curriculum, and (b) the mentoring relationships between NASA engineers and 11th grade students. Student evaluations, grades, testimonials and surveys will be used to assess students’ attitudes toward the program, engineering as a career, the on-line Internet activities, and the on-site mentoring workshop. Students’ choices of colleges and intended majors will also be measured. The results will be used as a basis for making decisions regarding program improvement and continuation. Additional research studies at two, three and four-year increments are planned to give a full and accurate picture of the program’s impact.

Project Background

An on-line distance education and mentoring program was chosen to provide access to the greatest number of students, and the summer workshop was included to allow students direct contact with their mentors. Every Texas legislator may nominate one or two outstanding high school juniors, from a pool of students recommended by their respective high schools, to participate in TAS. Nominations are based on students' academic standing and their interest in science, math, engineering and technology.
Utilizing exciting, hands-on interaction with the space program, TAS aims to inspire students to consider engineering careers. Almost every aspect of the program represents an innovation in United States secondary education, as described in the following paragraphs.
 

In the winter of the first year, students begin a four-month interactive multimedia science curriculum with engineering design challenges and research projects, tutorials, videos, on-line discussion sessions and bulletin boards. Scholars completed six lessons and submitted seven research and design projects on human space exploration from February to May 2000. The course included lessons, assignments, tests and attendance at on-line discussion groups, and required two to four hours of participation per week. NASA educators and engineers reviewed scholar assignments via e-mail and real-time discussion sessions.

The TAS curriculum encourages higher order thinking skills, problem solving, creative design, teamwork, and mentoring with NASA engineers. Scholars are required to be U.S. citizens, to be at least 16 years of age, and to have access to the Internet and e-mail. Scholars then attend a one-week summer workshop in June or July at the NASA Johnson Space Center in Houston, Texas.

The summer workshop builds on the knowledge and experience gained from the distance education curriculum by placing scholars in teams with NASA engineering mentors who provide positive role models. The students work cooperatively in teams alongside their mentors to design a human mission to Mars. A variety of briefings and field trips with astronauts, mission controllers, scientists and engineers fill out the week. At the end of the week, students present their mission to an audience of NASA administrators, parents and Texas legislators. After the summer workshop, all mentors are encouraged to maintain contact with students, providing continued support and advice through a one-on-one interpersonal bond.

All scholars complete a NASA Commitment Form, committing them to a four-year follow-up research study, and a Talent Release Form that allows NASA to use scholar video and audio interviews. Students completed one test after each of six assignments, a program evaluation and post-program surveys. The student tests, interviews, evaluations, surveys and unsolicited scholar testimonials will guide the summative evaluation of the program. While the full impact of the program can only be assessed through a longitudinal study completed with scholars over time, initial milestones can be assessed. The final outcome measure of the success of the TAS program will be whether participants choose to remain in science and engineering throughout their college years and actually enter the workforce. Prior to this final outcome, milestones can be set for each year: at the end of year one, a choice of college and intended major; in years two and three, remaining in a science or engineering major; and ultimately, in year four, a choice of occupation or graduate school. Figure 1 outlines the six major program components and the element of time.
 

Figure 1. TAS Program components.
 
 

Statement of the Problem

In 1992, approximately 40 percent of all National Merit Scholars were interested in majoring in either the natural sciences or engineering (National Science Foundation, 1992; see Appendix B). Despite freshmen's high levels of intention to pursue a science and engineering major, the percentage of students majoring in natural science, mathematics, and engineering fields declines from 27 to 17 percent between the freshman and senior years (National Science Foundation, 1993; see Appendix A). The Commission for the Advancement of Women and Minorities in Science, Engineering, and Technology Development (CAWMSET) notes that the shortage of skilled workers in high-tech jobs may lead to an economic crisis unless more under-represented individuals pursue education and careers in science, engineering and technology (CAWMSET, 2000). To address these concerns, JSC considered the types of students likely to enter these fields and concluded that outstanding scholars with an interest in science, mathematics, engineering and technology were most likely to major in them.

Certain assumptions were made regarding how to identify, target, and cultivate these students, and an early intervention program for high school scholars in their junior year was developed. Several assumptions were also made about the delivery of the program. To attract teenagers, a science and engineering curriculum would need to be challenging, highly interactive, and result in a sense of accomplishment. In addition, the creation of an environment that would support these gifted scholars was deemed paramount to the success of the program. Mentoring by actual scientists and engineers working at NASA was considered the best way to maintain students' interest in and commitment to a course of study (and a career) in these fields over time. Integral to the development of the curriculum was an assessment of the learners themselves and their learning styles. Gifted students were assumed to have a high degree of proficiency with technology and the Internet, and a Web-based distance education program was devised based on adult learning principles.

NASA’s Educational Policy states, "We involve the educational community in our endeavors to inspire America's students, create learning opportunities, and enlighten inquisitive minds" (NASA, 2001). The NASA Education Program Mission Statement reads, "NASA uses its unique resources to support educational excellence for all" (NASA, 2001, para. 4). Of the TAS scholars, who come from across the state of Texas, 42% are female and 26% are minorities. Although other science and engineering programs have been developed for outstanding scholars, and mentoring programs geared to affect scholars' future career choices exist, this program is unique in that it integrates both distance education and mentoring, on-line and in real-time, by peers and professionals.

The specific problem addressed by this study is that NASA has gathered considerable amounts of data about TAS, but little to no effort has been made to organize, analyze, and report the data. The purpose of this study is to organize and examine data already collected by NASA during the first year of the program. Specifically, the study will evaluate scholars’ attitudes, achievement, and choices of colleges and intended majors. The study will also examine important decisions made by the instructional technology designer and NASA education staff about the program (e.g., Was it beneficial to choose the Internet over other methods of education delivery? To include mentoring as an integral component of the program? To address the needs of gifted scholars over other candidates? To use on-line evaluation, video interviews and on-line post-program surveys?). Important impact questions will also be addressed, such as: Does TAS help gifted scholars make informed college and career choices? Does it encourage scholars to look at the math, science, engineering and technology fields with greater certainty? Does it help to foster their decisions by providing role models and visible options to these students? Is distance education the best way of reaching these students? Figure 2 illustrates the assumptions made by NASA and the instructional designer and the measurable impacts over time.

Figure 2. Texas Aerospace Scholars Assumptions and Target Impacts.

Significance

This study will benefit three particular groups: (a) key stakeholders (entities with a vested interest in the increase of engineers in the workforce); (b) instructional designers (who are looking for Web-based models that utilize mentoring); and (c) other educational organizations (who are looking for model Web and mentoring programs).

Key stakeholders include the State of Texas, NASA, the Houston Livestock Show and Rodeo and Rotary NASA, who want data regarding the program's impact on students' attitudes toward careers in science, engineering, and technology to guide continuing funding decisions. In addition, several other states and at least one other country are looking at the TAS program as a model. The results of this preliminary evaluation may have an impact on other sites and other industries considering replicating the program to meet the need for greater numbers of scientists and engineers entering the workforce.
Instructional designers who are interested in distance education and in on-line science education and mentoring programs can use the program evaluation when deciding which aspects of the program to replicate. The program and evaluation can serve as a blueprint for other curriculum designers and educators should they encounter a similar need for an interactive Web-based educational and mentoring program.

Although Web-based distance education is still in its infancy, many program models such as cyberschools, virtual schools and educational Web-sites have blossomed in the past few years. Most of them are aimed at high school students, some are aimed at gifted scholars, and a few utilize mentoring. TAS attempts to incorporate effective instructional design models and to utilize cutting-edge technology and mentoring techniques. This project will bring together all of the extant data accumulated during the first year of the program, organize and analyze it, and present findings aimed at guiding the future improvement and continuation of the program.

Chapter one provides a brief introduction to TAS – its purpose, basic design and significance. Chapter two reviews literature related to science and engineering education, mentoring gifted scholars, using distance education with high school students, and program evaluation – each of the major variables under study. Chapter three outlines the method to be employed in the program evaluation.
 
 
 
 

Chapter 2
Review of Related Literature

This chapter reviews literature related to the major variables under study and is divided into three major sections. Section one reviews literature related to science education and mentoring programs for high school students. Section two reviews literature on distance education for high school students, focusing on Web-based programs and section three reviews literature on program evaluation approaches and procedures. Implications of findings derived from the reviews are discussed at the end of each major section.

Space Science and Engineering Education

A variety of engineering programs exist for high school students: several target gifted scholars, several involve mentoring, several are targeted to women and minorities, and a few involve computer technology or distance education. This section gives an overview of high school science and engineering programs and of similar programs that utilize mentoring. It concludes with a discussion of the benefits of mentoring gifted students, including minorities and young women.

A variety of science and engineering intervention programs exist for high school students across the country. In 1976, the Science, Engineering, Communications, Mathematics Enrichment Program (SECME) began a teacher training summer institute at the University of Virginia. Teachers who have attended the SECME summer institute bring hands-on science and engineering back to their predominantly minority classrooms. SECME students work collaboratively on projects side-by-side with their teachers. Mousetrap car competitions, bottle rockets, and egg-drop engineering designs are used to bring physics, calculus and engineering principles to students.

Seven southeastern university deans formed SECME in 1975, concerned about the dearth of minorities in high-technology fields, which they believed were key to the nation's economic health (Hamilton, 1997). The universities' engineering and education faculty designed the courses along with alumni teachers. Teachers brought teams of students in for hands-on mousetrap car competitions and took home curricula for integrating computer technology and engineering into the classroom.

SECME has involved 37 participating universities and 65 industry and government partners (including NASA and Lockheed Martin). Currently, a total of 84 school systems, 583 schools and nearly 30,000 students from 15 states participate in SECME. Since 1980, nearly 50,000 SECME seniors have graduated from high school, 75% of them with the goal of attending a four-year college. Their average SAT scores are 147 points higher than the U.S. African American average and 97 points higher than the U.S. Hispanic average, and half of the students planned to major in science, math, engineering and technology fields (Hamilton, 1997).

In 1996, a comprehensive engineering program began at Madison West High School near the University of Wisconsin. The course, Principles of Engineering, is offered to high school sophomores, juniors and seniors. Developed with a grant from the National Science Foundation, the course explores the relationship between math, science and technology. Students keep daily logs as any engineer or scientist would. The course introduces students to important engineering concepts and has them work on real-world case studies resembling engineering problems. Students are given a problem to solve and choose their own methods.

To aid in the development of their solutions, students are exposed to brainstorming, thumbnail sketching, and various problem-solving techniques. After developing solutions on paper, students choose one solution to prototype. Engineering systems and design principles are addressed, including functionality, quality, safety, ergonomics, environmental considerations, and appearance. The program was developed to encourage students to consider careers in engineering in response to the significant labor shortage faced by the U.S. in technology professions (Gomez, 2000).

The Uninitiates' Introduction to Engineering (UNITE), offered in cooperation with the U.S. Army, is a summer program for minority students who want to pursue their interest in engineering and technology study and build their math and science knowledge and skills. It is offered on five university campuses in Michigan, New York, Florida, New Mexico and Delaware, and has been sponsored by the United States Army since 1980. It is an aggressive program that encourages and assists high school students in preparing for entrance into engineering schools. Each year UNITE coordinators identify talented high school students with an aptitude for math, science, and engineering and provide them the opportunity to participate in college-structured summer courses. The courses combine hands-on applications, lectures, laboratories, and problem solving as well as tours of private and governmental engineering facilities. The students are introduced to ways in which math and science are applied to real-world situations and are related to careers in engineering and technology (UNITE, 2001).

The Mathematics, Engineering and Science Achievement (MESA) program was designed as a curriculum-based program catering to the specific academic and career interests of students. Created in 1970 in Oakland, California, the program currently operates in eight states. MESA boasts that 80% of its graduates go to college the fall after they graduate (compared to 57% of students in those states). It also boasts that 80% of under-represented students who receive bachelor’s degrees in engineering at the 23 institutions where MESA is situated are MESA students. MESA targets African American, Hispanic, American Indian, and female students (Rodriguez, 1997).

MESA provides opportunities in mathematics, engineering and science for students in grades 7–12, helping them to prepare for college level studies in science and technology fields. Activities include SAT workshops, field trips, speakers, competitions, classroom enrichment activities, and college preparation. Weekend academies, summer camps, satellite teleconferencing, teacher curricula, in-service and after-school enrichment programs, parents programs, teacher training, student visits to college campuses and student competitions are some of the programs MESA centers engage in. Berkeley's College of Engineering and Johns Hopkins University's Applied Physics Laboratory are just two of MESA's high-profile facilitators.

At the Johns Hopkins MESA site, components include academic tutorials, counseling, field trips, incentive awards, communication skills, science fair and engineering projects, math competitions and computer use. Students are counseled to enroll in the high school courses prerequisite for college entry in science and engineering majors. Engineers, scientists, mathematicians, and college students provide academic tutorials in math, science, English, and communication. Visits to colleges and industry allow students to interact with professionals and observe the scientific community. Students engage in various scientific and engineering projects throughout the academic year. Incentive awards are given periodically to students who meet program goals and maintain a "B" average or better (Johns Hopkins University, 2001).

Characteristics of all four of these programs include hands-on activities and projects, interaction with professionals in the field, pre-college tutorials, site visits, and real-world problem solving. The use of technology tools and design competitions focus students on higher-order thinking, teamwork, and technology skills by applying them creatively to real problems. Key factors of the science programs reviewed in this section will be used to guide the design of the program evaluation and to guide future program improvement.

Mentoring Programs

A variety of science and engineering programs include mentoring for high school students. As in many of the MESA programs (e.g., Johns Hopkins), the perceived importance of utilizing science and engineering professionals as tutors and role models has resulted in a variety of programs that utilize mentoring specifically to encourage students to consider scientific and technological career paths. This section gives an overview of seven science and engineering programs for high school students that utilize mentoring.

Located in Los Angeles, California State University-Dominguez Hills teamed up with industry and 11 school districts to form the California Academy of Mathematics and Science, a public high school designed to attract students interested in math, science and engineering. Of all the students, 52% are female, 29% are Asian, 28% Hispanic, 28% African American, and 15% white. The school selects from students in the top 35% who indicate a strong interest and potential in math and science. Industry contributions account for about 16% of the school’s budget.

Industry also provides an extensive mentoring program of about 150 scientists and engineers paired one-to-one with students. The mentors meet with students once a month and stay in contact with them throughout their high school experience. Industry partners also offer internships to provide hands-on experience in science and engineering careers; approximately 90% of the students work as interns in their junior year. The National Science Foundation and the Sloan Foundation support the internship and mentoring programs. The university also provides professors as mentors, teachers, and curriculum developers. So far the school has graduated two classes (230 students), with 99% going on to four-year colleges and 75% majoring in science and engineering (Panitz, 1996).

PRIME is the Pre-college Initiative for Minorities in Education. It is designed to encourage African American college-bound high school graduates to pursue engineering as a career. The Tennessee Technological University program involves academic courses in mathematics, engineering, seminars, tours and tutorials. The key component of this program involves using undergraduate engineering students as mentors and role models for these young students. The impact of the mentors on their younger peers plays a significant role in the success of PRIME (Marable, 1999).

To enhance the participation of minority students in engineering, Tennessee Technological University (TTU) established the Minority Engineering Program in 1986. Mentoring for minority students is a key component of success in navigating engineering programs at predominantly white institutions. To attract talented minority students to TTU, a six-week summer 'bridge' program was developed. The program aims to address some of the issues identified as affecting minority students, including academic preparation, inadequate peer support, lack of role modeling and mentoring, and racism, among others (Marable, 1999).

The program aims to reduce the stress of the high school to college transition by building confidence and self-esteem, providing minority students with mentors, and developing academic skills. Twenty students study mathematics and engineering, take field trips, and complete seminars on the use of computers, test taking and study skills. Undergraduate mentors work with the students on a daily basis, and the relationships continue throughout the first year of college. The American Council on Education states that the most effective means whereby minority students can be mentored is through peer counseling (Marable, 1999). Other studies argue that peer support is an important key to retention, helping students cope with the stress levels often associated with a major like engineering (Marable, 1999).

TTU undergraduate mentors share their knowledge and experience of engineering with students of a similar social, ethnic and cultural background. During the first two years of the PRIME program all of the program participants remained in engineering. The students reported that their knowledge of engineering was enhanced and that they were better prepared for college course work. Students described their mentors as "a great help," and said "they encouraged us to strive toward getting the degree because it will result in a good-paying job." Other comments included "we are still really close," "they would say work hard, it pays off in the long run," and "they were role models for us…they taught me to be professional" (Marable, 1999, para. 23-25). It is believed that the student-mentoring component provided the social and emotional support students need to persist in majors like engineering. "The ultimate success of PRIME rests on the number of participants who become successful engineers" (Marable, 1999, para. 26).

The New Mexico Comprehensive Regional Center for Minorities, with a grant from the National Science Foundation, established an outreach program for high school students with disabilities, including summer computer institutes for middle and high school students. The group sponsors numerous programs, including summer camps that introduce high school seniors to college life and stipends for disabled college students who serve as mentors to pre-college students with disabilities. The mentor program is believed to be critical to the success of the effort (Coppula, 1997). One mentor agreed, noting that discrimination and academic difficulties discourage many high school students with disabilities from pursuing engineering: "We get them tutoring and answer their questions…we make them feel good about themselves, show them someone cares. It gives me a great feeling of accomplishment" (Coppula, 1997, para. 8).

The Junior Engineering Mentoring (JEM) Program at Lacey High School in New Jersey was launched in 1996. Local engineers from the Oyster Creek Nuclear Generating Station work with high school students, mentoring them and providing first-hand experiences in real-world situations. JEM provides teachers with curriculum materials to strengthen their subject areas and to show how careers integrate math, science and technology. Students in JEM compete in the annual National Engineering Design Challenge, which draws students from over 2,000 high schools around the country. Over 75% of students in the JEM program go on to engineering schools (Molkenthin, 2001).

The National Engineering Design Challenge (NEDC) is a cooperative program with the National Society of Professional Engineers and the National Talent Network. NEDC challenges teams of high school students, working with an engineering adviser, to design, fabricate, and demonstrate a working model of a new product that provides a solution to a social need. NEDC is just one of several programs sponsored by JETS, the Junior Engineering Technical Society, at the University of Missouri College of Engineering. The JETS mission is to guide high school students toward their college and career goals. The JETS program provides activities, events, competitions, programs, and materials to educate students about the engineering world. It is a partnership between high schools, engineering colleges, corporations and engineering societies across the country (JETS, 2001).

JETS sponsors the Tests of Engineering Aptitude, Mathematics, and Science (TEAMS) competition enabling high school students to learn team development and problem-solving skills using classroom mathematics to solve real-world problems. JETS also sponsors the National Engineering Aptitude Search+ (NEAS+), a self-administered academic survey that enables individual students to determine their current level of preparation in engineering basic skills subjects like applied mathematics, science, and reasoning, and encourages tutoring and mentoring (JETS, 2001).

Common characteristics of science and engineering programs that utilize mentoring include tutoring, real-world problem solving, teamwork, activities, events, competitions and materials for educators. These programs often match students with mentors of the same ethnicity, gender or disability, and the mentoring experience is often long-term, continuing through students' college years. Other characteristics of some programs include peer mentoring, summer camps, pre-college instruction, and the use of technology. These characteristics will be considered during the evaluation of the Texas Aerospace Scholars program.

Mentoring Gifted High School, Female and Minority Students
 

The term mentor is believed to have originated with Homer’s Odyssey. Before embarking on his journey, Ulysses placed the care and education of his son in the hands of his wise friend Mentor. A mentor has since signified a well-respected teacher who can provide intellectual and emotional counseling to a younger individual (Casey & Shore, 2000). In today’s world, a mentor (usually an adult) acts as a guide, role model and teacher to a younger individual in a field of mutual interest. This section gives an overview of the literature on the benefits of mentoring gifted scholars in particular.

Scientist biographies and interviews have shown that mentoring is one of the most important influences on gifted scholars’ vocational and emotional successes. One quarter of the 56 space scientists surveyed by Scobee and Nash (1983) reported that mentors played a critical role in the evolution of their interest, and mentoring was one of the three experiences they most highly recommended for students.

Research on mentoring gifted adolescents has shown that gifted students value a mentor’s social and vocational modeling just as much as the intellectual stimulation the relationship provides (Casey & Shore, 2000). Gifted scholars have a thirst for knowledge and move at an accelerated pace; a mentor who can challenge them to explore their interests provides the excitement and motivation that is often missing in the classroom.

Gifted students have an easier time relating to adults because of their advanced cognitive abilities. They need role models to whom they can relate their own experiences and who can give them guidance and advice on handling their intense work ethic, intellectual needs and drive to make the world a better place. Hands-on activities, work experiences and role modeling are the most often cited experiences that have helped scientists and engineers choose and stay with their careers (Scobee & Nash, 1983).

Research has shown that these students not only have the ability to work well with adults but also the capacity to learn from them. Gifted scholars may have non-traditional approaches to learning and interact more successfully with adults because of their advanced cognitive abilities. Gifted scholars have a higher degree of self-motivation and the ability to work independently. Working with mentors may fulfill a gifted student’s desire to become immersed in an area of interest, interact at an adult level, and develop specific talents (Casey & Shore, 2000).

Research has shown that there is a need for good vocational counseling for the gifted for a variety of reasons. Mentors can help gifted adolescents consider their future career choices. Gifted scholars seek out information regarding why people work, the lifestyles related to high-level occupations and moral concerns (such as risk-taking and delayed families) associated with certain careers. They must consider the impact of such high aspirations and the educational investment they must make in choosing a higher-level career (Casey & Shore, 2000).

High-level occupations (such as science, engineering and technology) make heavy intellectual and lifestyle demands and require creativity and the ability to deal with problems that do not have known solutions. Mentoring can enhance gifted students' experiences with intellectual risk-taking and self-directed learning. The opportunity to develop high-level thinking skills, problem solving techniques and inquiry-based learning can be facilitated through mentoring programs (Casey & Shore, 2000).

Research has shown that gifted scholars feel a distinct sense of isolation in communities where being smart is not necessarily equated with being popular; they seem to require emotional support as well as career advice (Bennett, 1997). Mentors can provide them with examples of ways that they have learned to deal with these tendencies in their own lives. Mentors can also help students evaluate their talents realistically, for gifted adolescents often feel they have to hide their talents in order to gain peer acceptance (Casey & Shore, 2000).

When the mentoring experience is well structured and the mentor is well suited to the student, the relationship can provide the gifted student with encouragement, inspiration, and insight. One of the most valuable experiences a gifted student can have is exposure to a mentor who is willing to share personal values, a particular interest, time, talents, and skills (Berger, 1990). Mentoring allows students to learn new skills and explore potential career options. The relationship is a dynamic one that depends on interaction, with the mentor passing on values, attitudes, passions, and traditions to the gifted student (Berger, 1990).

Gifted students generally have a variety of potentials as they enjoy and are good at many things (Berger, 1990). The wide range of interests, abilities and choices available to these students is called ‘multi-potentiality’ (Kerr, 1990). Since gifted scholars often have multiple interests and potentials they may require substantial information regarding career options. In high school, gifted scholars have been seen to have decision-making problems that result in heavy course loads and participation in a variety of school activities. These students are also often leaders in school, community, or church groups.

Parents may notice signs of stress or confusion regarding college planning while students are maintaining high grades. Kerr (1990) notes a variety of interventions for these high school students including: vocational testing, visits to colleges, volunteer work, internships and work experiences with professionals in an area of interest, establishing a relationship with a mentor in the area of interest, and exposure to a variety of career models.

Mentor relationships with professionals seem to be highly suitable for gifted adolescents, particularly those who have mastered the basics of high school academics. Many of these students excel in their schools because they can memorize subject material rather than truly understand it, and they may not know how to study at all. These students need to learn how to set priorities and establish long-term goals, something mentors can help them learn to do (Kerr, 1985).

Gifted scholars have more career options and future alternatives than they can realistically consider. Parents often notice that mentors have a maturing effect on these students: mentors help them develop a vision of what they can become, find a sense of direction, and focus their efforts (Berger, 1990).

School personnel and parents may overlook a gifted scholar's need for vocational counseling, assuming the student will simply succeed on his or her own (Kerr, 1981). Often parents assume that career choices for gifted students will take care of themselves; they may assume that students will choose a career in college and that there is no pressing need for career planning. However, studies of National Merit Scholars, Presidential Scholars and graduates of gifted education programs have shown that gifted students may have socio-emotional problems and that their needs may differ from those of other students (Berger, 1990).

The benefits of mentoring gifted students include support for students struggling with their multi-potentiality, adult role models, the chance to explore a variety of career options, aid in setting goals and priorities, and emotional support for feelings of isolation. Gifted scholar characteristics that can aid in the development of a successful mentoring relationship include self-motivation, a desire to interact with adults for intellectual stimulation, and interest in a variety of topics and careers.

Research indicates that special intervention programs for girls in math and science can make a difference. Six months after attending a one-day career conference, girls’ math and science career interests and course-taking plans were higher than prior to the conference (American Association of University Women, 1992). Three years of follow-up of an annual four-week summer program on math/science and sports for groups of average minority junior high girls found that their math and science course-taking plans increased an average of 40% and that they actually took the courses (American Association of University Women, 1992).

Two and a half years of follow-up of a two-week residential science institute for minority and white high school junior girls (already interested in science) found that the program decreased the participants' stereotypes about scientific professions and helped to reduce their feelings of isolation. The program also helped to solidify participants' decisions to choose a career in math or science (American Association of University Women, 1992).

Research and case studies focusing on mentors and mentoring often cite the effects of the mentor in terms of career choice and career advancement, especially for young women and minority students (Kerr, 1983). Kaufmann's (1981) study of Presidential Scholars from 1964 to 1968 focused primarily on the nature, role, and influence of the students’ most significant mentors. The most frequently mentioned benefits of having a mentor were having a role model, support, and encouragement. The students noted that their mentors set an example for them, offered them intellectual stimulation and communicated to them an excitement about learning. Kaufmann's research highlighted the importance of mentors for gifted young women. The study, conducted 15 years after the students graduated from high school, indicated that the women whose salaries were equal to those of the men had all had at least one mentor.

Students' self-confidence and aspirations grow with mentoring relationships, especially for students from disadvantaged populations (McIntosh & Greenlaw, 1990). Mentor programs throughout the United States match bright disadvantaged youngsters with professionals. Students learn about the professional’s lifestyle as well as the profession and the education that precedes the job. These relationships often extend past the boundaries of schools, with mentors becoming extended family members and even colleagues (McIntosh & Greenlaw, 1990).

Introduced in April of 2001, a new U.S. House bill, called the ‘Go Girl Bill,’ aims to encourage girls in grades 4-12 to pursue studies and careers in science, mathematics, engineering, and technology. Services provided under the bill would include tutoring, online and in-person mentoring, and underwriting costs for internship opportunities. Specifically, the bill aims to encourage girls to major and plan for careers in science, mathematics, engineering, and technology at institutions of higher education. The bill would provide academic advice and assistance in high school course selection, educate parents about the difficulties girls face in maintaining an interest in, and a desire to achieve in, science, mathematics, engineering, and technology, and enlist the parents' help in overcoming these difficulties. Services provided would include tutoring and mentoring relationships, both in person and through the Internet.

The bill would also provide after-school activities and summer programs designed to encourage interest, and develop skills, in science, mathematics, engineering, and technology. It would also support visits to institutions of higher education to acquaint girls with college-level programs, and meetings with educators and female college students to encourage girls to pursue degrees in science, mathematics, engineering, and technology (U.S. House, 2001).

Clearly the impact of mentoring on minorities and young women is just as critical as the impact of mentoring on gifted scholars, perhaps even more so. The new bill in the U.S. House supports the research in recent years that illustrates the positive impact interventions and mentoring have on young women and minorities. Mentors give them intellectual stimulation and communicate to them an excitement about the field of science. Their self-confidence and aspirations grow, and feelings of isolation are lessened.

Implications for Program Evaluation and Continuous Improvement

The review of literature shows the potential of utilizing intervention programs and mentoring to encourage gifted students to consider careers in engineering, science, math and technology. The importance of computer technology in these programs also underscores the decision by NASA to utilize a Web-based course and a variety of computer tools and software. Variables that will be assessed in the program evaluation include characteristics noted in the recent literature on science and engineering intervention programs, such as hands-on activities and projects, interaction with professionals, pre-college academic courses, use of technology, the workshop experience, and team competition.

The findings also underscore the impact of mentoring on gifted scholars, women and minorities. They support the intuitive decision of JSC instructional designers in choosing mentoring as the support system for sustaining scholars' interest in science and engineering. TAS provides mentors at two levels of professional development: senior engineers established in their careers, and college students whose chosen career progression will lead to a science or engineering career. The program evaluation will assess the specific characteristics of the mentoring, including adult and peer mentoring (engineers and co-ops), the use of technology (on-line mentoring), the workshop experience (face-to-face mentoring), academic tutoring (student project assessment by mentors) and long-term mentoring (continued guidance by mentors for one year). Assessment measures of all scholars, and of female and minority scholars, will be dependent variables in the program evaluation study.
 

Distance Education

A variety of distance education venues exist for high school students, including both virtual schools and educational Web-sites. This section begins with a discussion of the use of distance education with high school students and the benefits of using the Internet over other forms of distance education. This section continues with an overview of virtual schools and educational Web-sites, and concludes with a section on Web-based science programs and a section on Web-based mentoring programs. Characteristics of model programs will be considered for program evaluation and continuous program improvement.

Distance Education for Gifted Students
 

Research has shown that gifted adolescents have a high degree of self-motivation and the ability to work independently (Casey & Shore, 2000). Gifted students can take advantage of independent distance education programs because they need less guidance to accomplish the reading and research projects they must complete on their own.

Distance education removes the impediments created by a student’s geographic location. Independently motivated students can participate in distance learning and mentoring experiences (with access to data, educational activities, knowledge and resources) without being near the center of learning. Scholars do not have to be residents of large cities to have learning experiences that would otherwise only take place in large population centers, and selected outstanding scholars from across large regions can easily work together. Distance education also breaks down many of the stereotypes seen in face-to-face classrooms, such as those based on race, social status and gender, and can provide a common experience for students with similar characteristics and needs.

Distance education is well suited to the characteristics of gifted scholars, who are generally self-motivated, independent, curious and hard-working, and have experience using technology. While other students may be interested in pursuing a distance program, lack of access, time, and ability hinders their persistence. Only the most motivated will stay with the course, as distance learning is often more difficult than regular course work. For regular students, distance courses can seem too time-consuming, too difficult, or too much in competition with their other activities (Casey & Shore, 2000).

Among the advantages of Web-based distance education is that students can learn at almost any time, in almost any place, and about almost any topic. Students use Web resources to create real-life projects tailored to their learning styles. Student-to-student peer workgroups allow a measure of interactivity with other students. Most virtual schools have formal course descriptions and assessments, so students know what is expected of them, and some allow students access to their own online portfolios. There is age and demographic diversity in most virtual schools. Disabled students can participate from their homes. Few racial or cultural barriers are visible on-line. Rural students have access to the same wide range of courses available to urban or suburban schools (Tuttle, 1998).

There are limitations to the on-line distance education environment. Families may not be able to afford the cost of the hardware, software and services needed to complete the on-line activities (computer, modem, Internet access and tuition). However, many students who do not have a computer in the home can still participate in distance programs through local school or library facilities if their interest warrants the extra effort. Some virtual schools require video-conferencing equipment and CD-ROM players in addition to other hardware, which may be another limiting factor. It seems clear that at this time distance learning on the Web is not the answer for all students (Tuttle, 1998); however, it may be the answer for students who are self-motivated and have initial expertise in the use of technology.

Web-based Education Programs

Virtual schools (or cyberschools) are becoming more common in the United States, with large numbers of students taking advantage of the ease of the Web as an alternative method of learning. This section describes seven virtual schools for high school students.

In Eugene, Oregon, the Lane County School District Cyberschool features courses that last from two and a half weeks to two semesters. The instructors e-mail students course syllabi, URLs for Web-sites and a book list. A class listserv is used to exchange messages with classmates and the teacher. Additional book lists, Web-sites, real audio lectures and experts' e-mail addresses are offered to students. Students earn full high school credit for these classes (Yahoo, 2001).

In Moab, Utah, the Electronic High School combines technology with classroom courses. The curriculum is text-based and includes Internet resources. Each course lasts one semester (a quarter of the year), and students complete exercises and take tests on-line, using e-mail to submit their work. Students receive full credit and can choose to take all their courses on-line. Each course costs $55 (Yahoo, 2001).

In Lake Grove, N.Y., the Babbage Net School has students meet in a chat room twice a week with their instructor. The instructor asks questions and students respond in real-time. An on-screen file cabinet contains worksheets, Web-sites, assignments, tests, sounds, images and any other material provided by the instructor. Assignments are submitted by e-mail. The tuition for a full-year course is $1,500 (Yahoo, 2001).

In British Columbia, Canada, the Netchako Electronic Busing Program is an individualized learning program that began as a support network for home-schooled students. Parents and on-line teachers create a learning plan for each student based on district learning outcomes. Families are provided with a computer, software and access to e-mail and the Internet. The courses are free (Yahoo, 2001).

In Los Angeles, California, the Dennison On-line Internet Academy is a private school registered with the California State Board of Education. The teachers are university professors and the curriculum is for high school students. Students select research topics and work at their own pace. Students draw upon multimedia software, the Internet and other resources. Students report daily to their instructors via e-mail and chat sessions. Tuition is $3600 a year (Yahoo, 2001).

The Willoway Cyberschool is a private video-conferencing virtual school for grades 5-8 that offers on-line courses in many disciplines to students across the country. Students spend 40 minutes daily in a virtual video conference class, 40 minutes a day researching online and 20 minutes a day talking with peers. Teachers help students take charge of their own learning, solve problems and become technically literate. Projects include Hyperstudio stacks, designing Web pages, constructing models and writing reports. Students access individual password-protected assignments. The cost is $2,250 a year (Yahoo, 2001).

The Virtual High School (VHS), with sites in twelve U.S. states, Jordan and Germany, is funded by a grant from the U.S. Department of Education. VHS provides content through the Internet utilizing teacher-designed labs and multimedia courses. Students receive guidance from teachers and are evaluated by e-mail, group discussion, on-line evaluation and projects. A wide variety of courses are offered: when the school began in 1998, 29 courses were offered for grades 9-12, and by the second year, 40 courses were offered. Students generate questions, design and create projects, work in teams and work at a fast pace. VHS combines motion video with music and sound in several of its courses (Hammonds, 1998).

Common characteristics of these virtual schools include electronic mail, on-line chats, bulletin boards and listservs, Internet resources, archived resources, and any time, anywhere access. Students generally need a personal computer, CD-ROM and Internet access without firewalls. The characteristics of model virtual schools will be studied during the evaluation of the TAS program. A review of Web-based educational programs gives further insights into key factors to consider for program evaluation and continuous improvement.

Web-based Science Education Programs

This section begins with an overview of ten Web-based science education programs, several of which are sponsored by NASA. It concludes with a comparison between the ten programs and a recent research study that reviewed over 400 science, math and technology education Web-sites.

Imagination Place! in KAHooTZ is an interactive on-line club for children interested in the world of invention and design. Created by the Center for Children and Technology (CCT), Imagination Place is an electronic setting where students investigate technology and invention in their everyday world through activities on and off the computer, analyze problems in their daily lives, and stretch their imaginations to come up with inventive solutions. Students use technology to create their innovations and chat with club members around the globe about their designs (CCT, 2001). "You can easily use it as an assessment tool, to see the processes that they've gone through in putting it together, to understand what they've been thinking and also to assess their level of understanding of the topic," says one teacher from Melbourne (CCT Testimonials, 2001, para. 4).

NASA’s Classroom of the Future: Exploring the Environment poses situations students must solve collaboratively using Internet and technology tools and skills. NASA’s Classroom of the Future: Earth Science Explorer poses real-world questions to students, who must work collaboratively using research tools and problem solving techniques (Classroom of the Future, 2001). At the Center for Educational Technology's International Space Station Challenge, students find activities and real-world challenges that allow them to investigate and solve open-ended problems (Center for Educational Technology, 2001). Cooperative learning and collaboration with experts highlight many of NASA's on-line science education sites.

Amazing Space is a set of Web-based activities, primarily designed for K-12 students and teachers, developed by the Hubble Space Telescope Institute. The lessons are interactive and include photographs taken by the Hubble Space Telescope, high-quality graphics, videos, and animation designed to enhance student understanding and interest. Activities include building comets, investigating black holes, playing with the building blocks of galaxies, collecting solar system trading cards, training to be a scientist by enrolling in the Hubble Deep Field Academy, and creating a schedule for the Second Servicing Mission to upgrade the Hubble Space Telescope (Amazing Space, 2001).

The NASA Quest Project is a resource for teachers and students who are interested in meeting and learning about NASA people and the national space program. NASA Quest includes on-line profiles of NASA experts, live interactions with NASA experts each month, and audio and video programs delivered over the Web. Quest also offers lesson plans and student activities, collaborative activities in which kids work with one another, a place where teachers can meet one another, and an e-mail service in which individual questions are answered. Frequent live, interactive events allow participants to come and go as dictated by their own individual and classroom needs. These projects are open to anyone, without cost (NASA Quest, 2001).

The Observatorium, from the Learning Technologies Project, is a science resource site that includes some Java enabled simulations as well as some Shockwave tutorials for students to explore in or out of the classroom (Learning Technologies Project, 2001). Links to a variety of on-line primers and tutorials for students can be found at the Lunar and Planetary Institute along with a series of hands-on activities in planetary geology and an Earth/Mars comparison project for students and educators (Lunar and Planetary Institute, 2001).

Certain Web-based student involvement programs engage students in actual scientific research: collecting real data, using actual research facilities, and working on scientific projects. The GLOBE Program brings together students, teachers, and scientists from around the world to work together and learn more about the environment. By participating in GLOBE, teachers guide their students through daily, weekly, and seasonal environmental observations, such as air temperature and precipitation. Using the Internet, students send their data to the GLOBE Student Data Archive, where scientists and other students use the data for their research. Teachers integrate computers and the Web into their classrooms and get students involved in hands-on science (GLOBE, 2001).

Explore Science showcases interactive on-line activities for students and teachers. Explore Science utilizes the Shockwave plug-in to create real-time correlations between equations and graphs, helping students visualize and experiment with major concepts from algebra through pre-calculus. A large number of multimedia experiments include virtual two- and three-dimensional simulations, such as the Air Track activity that models a basic air track with two blocks. In the simulation, students can change the coefficient of restitution, initial masses, and velocities. Topics such as electricity, magnetism, gravity, density, light, color, sound, the Doppler effect, resonance, aerodynamics, interference patterns, heat, inertia, orbital mechanics, genetics, torque, time, and vector addition are all simulated for students using interactive on-line computer tools (Explore Science, 2001).

Common characteristics of the ten reviewed science education Web-sites include the use of interactivity, multimedia, simulations, data collection, teamwork (over distance and with scientists), primers, tutorials, activities, on-line discussion sessions and e-mail with scientists, and real-world problem solving. Although these features were common to Web-based science education programs, they were not representative of Web-based educational materials in general.

The comparative study by Miodoser, Nachmias, Lahav, and Oren (2000) reviewed 436 educational Web-sites focusing on mathematics, science and technology. The study illustrates that most Web-based math, science and technology programs do not utilize the potential of telecommunications technologies to facilitate online collaboration and interaction.

According to Miodoser, Nachmias, Lahav, and Oren (2000) most educational Web-sites are still predominantly text-based and do not yet exhibit evidence of current pedagogical approaches to education including the use of inquiry-based activities, application of constructivist learning principles, and the use of alternative evaluation methods.

The originators, goals, target populations, pedagogical beliefs, and technological features of each site were found to be very different. The study examined 100 variables across four dimensions: descriptive information, pedagogical considerations, knowledge attributes, and communication features. A team of science educators identified and evaluated the sites in 1998. Their findings included a breakdown of the sites' originators: academic institutions and museums each comprised one third of the sites, with public and private organizations comprising the remaining third (Miodoser, Nachmias, Lahav, & Oren, 2000).

Most sites were aimed at junior high and high school students (62%), with 22% aimed at upper elementary and elementary students and 16% at higher education. The small number of sites that supported collaborative learning were all targeted at the high school level. The largest number of inquiry-based sites also targeted high school students. The museum environments supported the most collaborative, inquiry-based, and problem-solving activities, with contextual help and links.

While 93% of the educational sites supported individual work, only 3% supported collaborative work. Only 38% of the sites supported inquiry-based learning. Informational (65%) and structured activities (48%) comprised the most frequent instructional methods. Only a small percentage of sites (7-13%) included open-ended activities, Web-based tools, and virtual environments (Miodoser, Nachmias, Lahav, & Oren, 2000).

In all, 76% of the sites supported browsing, and 33% included question-and-answer tasks. Only 3-7% included more complex interactions or the use of on-line tools. Interaction with people (mainly asynchronous) was found in only 13% of the Web-sites, and feedback was present in only a small number of sites (Miodoser, Nachmias, Lahav, & Oren, 2000).

Information retrieval (52%) and memorizing (42%) comprised most of the sites' educational approaches. Information analysis and inferencing were found in about one third of the sites. Only a few sites supported problem solving, creation, or invention. Of the researched sites, 83% relied on resources within the site, and only 31% provided links to other Web resources. Very few sites referred learners to experts and peers. Evaluation methods (standard or alternative) were rare (Miodoser, Nachmias, Lahav, & Oren, 2000).

Current pedagogical approaches support learning that requires student involvement in the construction of knowledge, interaction with experts and with peers, adaptation of instruction to individual needs, and new ways of assessing students' learning (Dick, 1996; Gagne, Briggs & Wagner, 1992). Contrary to the expectation that science and math educational Web-sites would address constructivist principles, only 28.2% of the sites included inquiry-based activities, and most were highly structured, with control given to the computer rather than the learner. Only 2.8% supported any kind of collaborative learning. Few sites offered complex or on-line activities (3-6%), and few offered any form of feedback, human or automatic (5-16%). Only a few offered links to on-line tools (12.8%), external resources (31%), or experts (8.7%) (Miodoser, Nachmias, Lahav, & Oren, 2000).

The potential of the Internet to provide communication between learners and with instructors or experts was not realized in the sites studied. Most communication supported was by e-mail (65%), while chat and other telecommunication between peers was almost non-existent. Distance work was supported in less than 2% of the sites. Only 4% included methods for synchronous communication, surprising given the popularity of chats and gaming among school-age users of the Web. Features aimed at supporting learning communities were not found in any site. Interactivity (based on Java applets or Shockwave) was rare, with most interactivity resembling classic CAI transactions (i.e., multiple-choice questions, assembling configurations, etc.) (Miodoser, Nachmias, Lahav, & Oren, 2000).

The results of this study show that while many institutions are taking advantage of the Web to increase access to science and math education, few utilize constructivist learning principles or promote interactivity. However, interactive simulations, interactions with experts and peers, and creative real-world problem solving are common in the NASA and NASA-related sites located in the review of literature. Interaction with experts and peers is also an essential feature of mentoring programs.

Web-based mentoring programs

Mentoring is a key aspect of the TAS Program. This section provides an overview of Web-based mentoring programs for high school students, including one developed specifically for students interested in science, engineering and computing.

The Telementoring Young Women in Science, Engineering and Computing program used the Internet to provide high school students pursuing scientific and technical fields with electronic access to role models. On-line mentors offered career guidance and emotional support. Sponsored by the National Science Foundation, Telementoring was a three-year project from 1995-1998 that drew on the strengths of telecommunications technology to build on-line communities of support among female high school students, professional women in technical fields, parents, and teachers.

Using telecommunications, the project's goal was to provide the validation, advice, and support critical for young women making decisions about pursuing courses and careers in science, engineering, computing, and related fields. Other goals included enabling young women to work through their concerns and fears when considering further studies in technical and scientific fields, and addressing the isolation young women experience in engineering, computing, and science classrooms.

Telementoring developed a telecommunications environment that enabled high school girls enrolled in technical courses in six states to communicate on an ongoing basis with successful women professionals and college students. These environments were designed to provide girls with the validation and advice not available in traditional educational settings. The project also developed informational parent forums and an archive of equity materials to assist parents and teachers in supporting girls' pursuits in engineering and computing.

Students and mentors reported talking on-line at least once a week through e-mail and discussion groups. The predominant topics were college and future careers. Students predominantly had positive feelings about their mentors and felt supported by them. Over 90% of the mentors stated they would do it again, and felt they had some impact on broadening their students' horizons regarding career choices. In all, 28% of students felt that communicating with a mentor had altered their perceptions about women in science in a positive way. Some girls had thought that their mentors would be serious, non-sociable, older, and not 'cool', and most were pleasantly surprised (Bennett, 1997). Of the students surveyed, 68% said they would like to have their mentors' lifestyles. Students indicated they were more likely to apply for an internship, join a science club, and join a study group. Students especially found that mentors impacted their views on science and technology when their personal hobbies (such as reading and music) were found to relate to their career interests (Bennett, 1997).

The Electronic Emissary program was prototyped in the fall of 1992 and went on-line early in February 1993. It is one of the longest-running Internet-based telementoring and research efforts serving K-12 students and teachers around the world. Each team is composed of a K-12 educator (teacher or parent), one or more K-12 students, one or more volunteer subject matter experts selected by the participating educator and student(s), and an on-line facilitator who works with the Electronic Emissary Project at the University of Texas at Austin.

E-mail between students and their mentor is facilitated by graduate students in education at the University of Texas at Austin and other experienced educators with an interest in on-line learning. One mentor notes, "I found that my role would range from being a ‘listener’ to ‘technician’ to ‘prompter,’ and, once, even as referee" (Figg, 1997). On-line evaluations ask team members to describe their project's purpose, audience, and structure; what aspects of the project seemed to work well; and what they would do differently if they were to do the project again.

The International Telementor Program is an international project-based mentoring opportunity for students around the world. Since 1995, more than 16,000 students, mentors, and teachers from around the world have participated in the program. Primary corporate sponsors include Hewlett-Packard and Sun Microsystems. The program's mission is to create project-based on-line mentoring support in math, science, engineering, communication, and career education planning for students and teachers in classrooms and home-school environments, with a focus on serving at-risk student populations. The International Telementor Program serves 5th-12th grade students, and college and university students, in targeted worldwide communities, through electronic mentoring relationships with business and academic professionals. Mentors have helped students with a wide range of projects, including studying whales in the North Atlantic and peregrine falcons, developing Web-sites, programming on-line quizzes, and assembling a statistical analysis of the performance of a professional baseball team.

Characteristics of these on-line mentoring programs include matching mentors with students based on gender and interest, career guidance and emotional support, parent forums and materials, and projects involving mentors, educators and students. These characteristics of on-line mentoring programs will be considered as variables under study in the program evaluation.

Implications for program evaluation and continuous improvement

Findings from the review of literature support the intuitive decision of NASA instructional designers in choosing distance education to reach the greatest number of students. Use of the Web to facilitate mentoring and sustain scholars' interest in science and engineering seems appropriate for self-motivated students who seek out and benefit from adult relationships.

The review of literature validates science education Web-sites that utilize multimedia, are highly interactive, allow for access to experts in the field, and provide inquiry based activities. Variables that will be assessed in the program evaluation include characteristics noted in the recent literature on Web-based distance education programs such as the use of on-line discussion sessions and e-mail, bulletin boards, listservs, Internet resources, and archived resources.

Characteristics specific to Web-based science education programs that will be considered during the program evaluation include the use of interactivity, multimedia, simulations, data collection, teamwork (over distance and with scientists), primers, tutorials, activities, and real-world problem solving, all based on constructivist learning principles that promote interactivity.

The review of literature regarding on-line mentoring programs illustrates the importance of matching on-line mentors with students based on gender and interest, offering both career guidance and emotional support, and involving students and mentors in activities and projects that encourage them to work together. These characteristics differ from those of traditional mentoring programs only in the delivery method, in this case the Internet, and reflect the same basic mentoring variables to be considered in the program evaluation.

Program Evaluation

This section reviews literature on program evaluation to answer four key questions: (1) What is program evaluation? (2) What forms, approaches and types of evaluation are used for instructional programs? (3) What are the procedures for conducting a program evaluation of education programs, distance education and mentoring programs? and (4) How can issues of validity and reliability be addressed in program evaluation?

What is program evaluation?

The idea of formal evaluation dates back as far as 2000 B.C., when the Chinese began a functional evaluation system for their civil servants. Since then, many definitions of evaluation have been developed. In 1994 the Joint Committee on Standards for Educational Evaluation described evaluation as the systematic investigation of the worth or merit of an object (Western Michigan University, 2001). According to The User-Friendly Handbook for Evaluation of NASA’s Educational Programs (2001, p. 5), "This definition centers on the goal of using evaluation for a purpose. Evaluations should be conducted for action-related reasons and the information provided should facilitate deciding a course of action."

Research aims at producing new knowledge, but is not necessarily applied to a specific decision-making process. In contrast, program evaluation is deliberately undertaken to guide decision making (Wolf, 1979). Research comes about from scholarly interest or academic requirement, and is an investigation undertaken with a definite problem in mind. Experimental research produces knowledge that is generalizable; this is the critical nature of research. If the researcher’s results cannot be duplicated, they are usually dismissed. Little (or no) interest may attach to knowledge that is specific to a particular sample of individuals from a single location, studied at a particular point in time (Wolf, 1979). In contrast, program evaluation seeks to produce knowledge that is specific to a particular setting. Evaluators concern themselves with evaluating a particular program in a particular location, with the resulting evaluative information being of high local relevance for the teachers and administrators in that setting. The results may have no scientific relevance for any other location. Scholarly journals generally do not publish the results of evaluation studies, since they rarely produce knowledge that is sufficiently general in nature to warrant widespread dissemination (Wolf, 1979).

The major attributes studied during program evaluation represent educational values, goals or objectives that we seek to develop in learners as a result of exposing them to a set of educational experiences. Learner achievements, attitudes, and self-esteem are all educational values. Traditionally, measurement in education is undertaken for the purpose of making comparisons between individuals with regard to some characteristic. In evaluation, it is not necessary or even desirable to make such comparisons; what is of interest is the effectiveness of a program (Wolf, 1979). In such a situation there is no requirement that learners be presented with the same tests, questionnaires or surveys. Resulting information from different sets of questions can be combined and summarized to describe the performance of an entire group. Evaluation and measurement therefore lean towards different ends: evaluation toward describing the effects of a program, and measurement toward the description and comparison of individual performance. Evaluation is directed toward judging the worth of a total program, and can also be used for judging the effectiveness of a program for a particular group of learners.

NASA managers are encouraged to conduct program evaluation to communicate information about a program to stakeholders, to help improve a program, and to provide new insights about a program. When evaluating a program, information should be gathered on whether the program’s goals are being met and how the varied aspects of the program are working, the end being a continuous improvement process. Sanders (1992, p. 3) echoes this idea: "Successful program development cannot occur without evaluation." Sanders cites the benefits to both students and educators, including improvement of educational practices, the elimination of curricular weaknesses, the recognition and support of educators, and prioritizing areas of need in school improvement.

Worthen, Sanders and Fitzpatrick (1997) expand the definition of evaluation to "…the identification, clarification, and application of defensible criteria to determine an evaluation object’s value (worth or merit), quality, utility, effectiveness, or significance in relation to those criteria" (p. 5). The inquiry and judgment methods used in evaluation include determining standards (relative or absolute), collecting relevant information, and applying those standards to determine the object’s value. Formal evaluation is a systematic and structured process in which criteria are explicitly stated, accurate information is gathered, and, as a result, judgments are made. Examples of the use of evaluation in education are varied. Evaluations can be used to empower teachers, determine school budgets, judge the quality of curricula, support accreditation, determine the value of school programs, and satisfy external funding agencies' requests for reports on the programs they support (Worthen, Sanders & Fitzpatrick, 1997).

The two key methodologies employed in program evaluation are quantitative and qualitative. Quantitative inquiry focuses on the testing of specific hypotheses, uses structured designs and statistical methods of analysis, and encourages standardization, precision, objectivity, and reliability of measurements, as well as replicability of findings. Quantitative tools include standardized tests and multiple-choice questionnaires. In contrast, qualitative inquiry is typically conducted in natural settings, uses the evaluator as the primary 'instrument', emphasizes rich description of phenomena, employs multiple data-gathering methods, and uses an inductive approach to data analysis. Qualitative tools include focus groups, non-participant observation, and open-ended questioning (Worthen, Sanders, & Fitzpatrick, 1997).

Forms, approaches and types of educational evaluation

There are two basic forms of evaluation, formative and summative, as first described by Scriven (1967). There are also six basic approaches to program evaluation (Worthen, Sanders, & Fitzpatrick, 1997) and six types of program evaluation designs that may be applied with each approach (Fitz-Gibbon & Morris, 1987; Wolf, 1979). This section reviews forms, approaches and types of educational evaluation to be considered in designing the Texas Aerospace Scholars program evaluation study.

Formative and summative evaluation. Formative evaluation occurs during program development, or while a program is in progress, to improve the program in some way. It is generally conducted to provide program staff with information used in improving the program under study. Formative evaluation focuses on how well goals are being achieved, rather than whether they were the right ones in the first place (Thorpe, 1988).

The purpose of data collection in a formative evaluation is diagnostic. The measures are sometimes informal in a formative study, and the sample size is usually small (Worthen, Sanders, & Fitzpatrick, 1997). Formative evaluation is most often conducted by an internal evaluator, since inside knowledge of the program is of great value and a possible lack of objectivity is not much of a problem.

According to NASA (2000), formative evaluation includes both implementation evaluation and progress evaluation. Implementation evaluation assesses whether a program is being conducted as planned. The underlying principle is that, "before you can evaluate the outcome or impact of a program, you must make sure the program and its components are really operating, and if they are operating according to the proposed plan or description" (NASA, 2000). Progress evaluation assesses progress in meeting the program’s goals. This involves collecting data to determine the impact of the activities on the participants at various stages. If data collected early shows failure to meet some of the program’s goals, the project can undergo alterations. This data can also form the basis of a summative evaluation.

Summative evaluation is conducted and made public to provide decision makers and potential consumers with "judgments about that program’s worth or merit in relation to important criteria" (Worthen, Sanders & Fitzpatrick, 1997, p. 14). Summative evaluation occurs after a program is completed and is used to determine the program's effectiveness, value, or quality. The decision about whether to retain or eliminate a program is considered summative (Scriven, 1967).

In summative evaluation the target audience includes program personnel, but also potential consumers, funding sources, and supervisors, encompassing both internal and external customers. Validity and reliability are of major concern in a summative evaluation, the sample size is usually large, and the purpose of data collection is judgmental. Both forms of evaluation are equally important: formative during the developmental stages of a program, and summative to judge its final worth.

Summative evaluations are primarily conducted by external evaluators. With an external evaluator, objectivity is higher and the results are therefore more credible. However, summative evaluation can be conducted in certain cases by internal evaluators, when the organization is so structured that the internal evaluator can be insulated or shielded from the consequences of a negative evaluation. The larger the organization, the more insulated the evaluator may be from outside pressures to provide a positive evaluation (Worthen, Sanders & Fitzpatrick, 1997). While summative evaluation is most often completed by an external evaluator (with less bias and more objectivity), NASA recommends that "it is better to have an internal evaluator than none at all" (NASA, 2000). A compromise can be reached by having an external agent review and validate the findings and conclusions after an internal evaluation has been completed (NASA, 2000). Table 1 below illustrates the primary differences between formative and summative evaluation.


Table 1. Differences between formative and summative evaluation.

Since the 1960s, evaluation has blossomed into a full-fledged profession, with Congress funding evaluation for a variety of social programs, including education. NASA includes project evaluation in the project development cycle, as noted in Figure 3.


Figure 3. NASA Project development cycle.

(The User Friendly Handbook for Evaluation of NASA Educational Programs, 2000).
 

According to NASA (2000), summative evaluation takes place after a program has been established and should address the following questions:
 


A summative evaluation will be used for the Texas Aerospace Scholars' study, specifically to report to key stakeholders whether at the end of the first year the program has met its year one impact goal. Additionally, the effectiveness of the distance education and mentoring components (based on scholar and mentor satisfaction and participation levels) will be addressed. Finally, the cost of the program in view of its ability to be replicated will be summarized.

Approaches to program evaluation. This section examines six approaches to program evaluation described by Worthen, Sanders and Fitzpatrick (1997). Worthen et al. (1997) note that since program evaluation is a fairly new field, their approaches do not meet the criteria to be considered scientific models. The categories are not theories and do not allow evaluators to make predictions or conjectures. They conclude that these approaches are simply useful information. The six approaches are management-oriented, consumer-oriented, participant-oriented, expertise-oriented, adversary-oriented, and objectives-oriented.

In the management-oriented approach the central concern is identifying and meeting the informational needs of managerial decision makers. A manager identifies a decision to be made, an evaluator collects information about the pros and cons of the alternatives, and the manager decides what to do. Data is gathered in relation to issues on which changes are already being considered, or at specific steps in the program's implementation. Effective program management is the goal, and the evaluator works backwards from specific decision points to determine what information is needed to inform the decision-making process. One example of this approach would be determining whether a specific training intervention in a corporate environment is increasing employee output. Models include the CIPP (context, input, process, product) and UCLA evaluation models (Worthen, Sanders and Fitzpatrick, 1997).

In the consumer-oriented approach the central issue is developing evaluative information on educational "products," for use by educational consumers in choosing among competing instructional products. This approach concentrates on providing information that will be useful to those who are in a position to take action based on the data. The emphasis is on people, and the evaluator involves user groups throughout the evaluation. The consumer-oriented approach features independent reviews of educational products patterned after the Consumers Union. One example of this approach would be an evaluation in which educators' survey responses on various textbook series are used to rank the series' usefulness in the classroom. Models of this approach include Scriven's Key Evaluation Checklist and Komoski's Educational Products Information Exchange (Worthen, Sanders and Fitzpatrick, 1997).

The participant-oriented approach puts the evaluator in the position of seeking understanding of all evaluation questions from the point of view of participants, staff, and administrators. This approach utilizes more qualitative data collection methods and is used in less focused evaluations. In this more naturalistic type of inquiry, participants (stakeholders) are central in determining values, criteria, needs, and data for the evaluation. The evaluator seeks first-hand experience of the situation, studying it in situ without constraining or manipulating it. Evaluators acknowledge multiple realities and seek, by inductive reasoning, to understand the various perspectives, while at the same time evolving an appropriate methodology. Such evaluations are based on personal observation and interpretation, and are by definition highly subjective. Examples can include evaluations that use case studies, story-telling, qualitative techniques, and ethnography. Models of this approach include Stake's Responsive Evaluation (1975) and Guba and Lincoln's Naturalistic Evaluation (1981) (Worthen, Sanders and Fitzpatrick, 1997).

The expertise-oriented approach is the oldest form of evaluation in which a professional in the field is called on to evaluate a specific program. The approach may be a formal professional review as in accreditation, or it may be an ad hoc group as when consultants are hired. These experts are brought in to provide subjective judgments about the program being evaluated. Site visitation is frequently the mode for conducting expertise-oriented evaluations.  Models of this approach can include school or university accreditation, funding agency review panels and blue-ribbon panels (Worthen, Sanders and Fitzpatrick, 1997).

The adversary-oriented approach attempts not only to generate but also to balance opposing viewpoints on a program. The process can include open hearings in which all viewpoints are expressed. Planned opposition in the points of view of different evaluators (pro and con) is the central focus of this type of evaluation, which borrows ideas from the field of law. The adversary-oriented approach acknowledges that bias is inevitable in evaluation; instead of trying to control it, the approach attempts to balance it by adopting a judicial model in which different external evaluation teams present opposing points of view to a judge or jury. Expensive to conduct and relatively undeveloped, this type of evaluation can be controversial. The legal paradigm may obscure the fact that the evaluation is concerned with merit, not guilt, and with worth, not winning. As with court cases, adversary-oriented evaluations happen only when there is a problem. Examples of this approach include an evaluation of the classroom techniques and materials used to teach a particular topic in which the school's students are repeatedly failing, or of the continued support of a costly school program such as after-school tutoring or team-teaching. Models of this approach include Owens' advocate-adversary evaluation (1970) and Wolf's judicial evaluation model (1975) (Worthen, Sanders and Fitzpatrick, 1997).

The objectives-oriented approach emphasizes the goals and objectives of a program and tries to determine the extent to which they have been attained. In this type of evaluation the evaluator takes on the role of measurement specialist. Although dozens of objectives-based models exist, the approach generally begins by clarifying broad goals and then defining more specific objectives. Subsequently, the evaluator finds a situation in which achievement of the objectives can be shown, develops or selects a measurement technique, collects data, and compares the performance data with the intended outcomes. The objectives-oriented model tends to be utilized in most program and classroom evaluations: program or lesson objectives are determined and then used as the comparative criteria to suggest whether change has or has not occurred due to participation, usually through testing. Two models of objectives-oriented evaluation are examined further because they serve the same purposes as the TAS program evaluation.

Two models of the objectives-oriented approach are Tyler's evaluation approach (1950) and the Kirkpatrick model (1994). Ralph Tyler postulated three major elements of evaluation: objectives, learning experiences, and appraisal procedures. Tyler described evaluation techniques as a set of procedures used to assess learner progress towards the achievement of specific program objectives. Tyler's approach followed these seven steps: (a) establish broad goals or objectives, (b) classify the goals or objectives, (c) define objectives in behavioral terms, (d) find situations in which achievement of objectives can be shown, (e) develop or select measurement techniques, (f) collect performance data, and (g) compare performance data with behaviorally stated objectives.

In 1959, Donald Kirkpatrick proposed a ten-step process for planning and implementing an effective training program, of which evaluation is step ten. Steps one through nine include determining needs, setting objectives, determining subject content, selecting participants, determining the best scheduling, selecting appropriate facilities, selecting instructors, selecting audio-visual aids, and coordinating the program (Kirkpatrick, 1994).

Kirkpatrick developed a four-level model of criteria for evaluating training. The four levels are learner reactions, learning, job behavior, and observable results. Each of the four levels is important, although the process becomes more difficult and time-consuming as you move from one level to the next. Kirkpatrick believes that the higher the level reached, the more valuable the information becomes (Kirkpatrick, 1994). Kirkpatrick's three major reasons for program evaluation are (a) to justify a program's existence and show how it contributes to the organization's goals, (b) to decide whether to continue the program, and (c) to gain information on how to improve the program.

Reaction implies customer satisfaction. The future of a program often depends on positive reactions from its participants. If participants are not happy, they will not be as motivated to learn. Learning can be defined as the extent to which participants change attitudes, improve knowledge and increase skills as a result of a program. Some programs are aimed more at attitudinal change, some are geared more towards the acquisition of a new knowledge set or a specific skill, and some address all three. For behavior to change after an intervention, four conditions are necessary. The person must:

1. Have a desire to change
2. Know what to do and how to do it
3. Work in the right climate
4. Be rewarded for changing

The first two conditions can be met with (a) a positive attitude (reaction) and (b) the necessary knowledge and skills (learning). The third condition refers to the climate set by the participant's supervisor(s). Climates can be described as preventing, discouraging, neutral, encouraging, or requiring. If a participant is forbidden to do what he or she was taught in the program, the climate is preventing. If the supervisor makes it clear that the participant should not change as a result of the program, because change would make the supervisor unhappy, the climate is discouraging. If the supervisor ignores the training, the climate is considered neutral. If the supervisor encourages the participant to apply his or her learning, the climate is encouraging. If the supervisor ensures that the learning is implemented, the climate is requiring.

When applying this principle to students outside of the workforce, several individuals may fill the 'supervisor's' role. These can include parents, peers, teachers and guidance counselors.

The fourth condition, rewards, can either be internal or external. Feelings of satisfaction or pride are internal, while external rewards can include monetary rewards, praise and recognition. In a preventing or discouraging climate, there is a very small chance that behavior will be affected by the intervention. In a neutral climate, a change will depend on the other conditions. If the climate is encouraging or requiring, then the amount of change depends on the first and second conditions. One way to create a positive climate is to involve supervisors in the program at some level.

Results are defined as the final outcomes that occur because the participants attended the program. The final objectives of a program need to be clearly stated for evaluation to occur. Some programs have outcomes that are difficult to measure, but if the desired behaviors can be stated clearly, then tangible results can be determined (Kirkpatrick, 1994).

The objectives-oriented approach, specifically the Kirkpatrick model, will be used for the program evaluation of the Texas Aerospace Scholars program. Program objectives have been determined and will be used as the criteria for determining whether change has or has not occurred due to participation. The strengths of the objectives-oriented approach are that it is logical, scientifically acceptable, simple to use, and easily understood, and that it helps educators reflect on their intentions. It uses empirical methods for evaluating goals or objectives and provides pertinent information for the key stakeholders. Many types of designs exist for use in program evaluation.

Types of program evaluation designs. Fitz-Gibbon and Morris (1987) present a positivistic approach to program evaluation. They offer six ways to design an evaluation: (a) true control group, pre-test, post-test, (b) true control group, post-test only, (c) non-equivalent control group, pre-test, post-test, (d) single group time series, (e) time series with non-equivalent control group, and (f) before and after. These designs assume an experimental orientation to program evaluation and can only be effectively implemented in close to ideal conditions.

The 'true control group, pre-test, post-test', 'non-equivalent control group, pre-test, post-test', and 'time series with non-equivalent control group' designs are the three most commonly employed when choosing a design for program evaluation. The 'true control group, post-test only' design is used somewhat less frequently. When looking at a single group where no control group is available, the 'single group time series' and the 'before and after' (or 'pre-test, post-test only') designs are frequently employed to evaluate a program's effectiveness.
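For illustration, the six designs can be sketched using the conventional shorthand of the experimental design literature, in which R denotes random assignment, O an observation or measurement, X the treatment, and a slash separates the groups in a design. This notation is offered here only as an aid and is not drawn from Fitz-Gibbon and Morris's text:

    (a) true control group, pre-test, post-test:        R O X O / R O O
    (b) true control group, post-test only:             R X O / R O
    (c) non-equivalent control group, pre/post-test:    O X O / O O
    (d) single group time series:                       O O O X O O O
    (e) time series, non-equivalent control group:      O O O X O O O / O O O O O O
    (f) before and after:                               O X O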

Time-series designs are often used when a treatment comes in the middle of a larger educational endeavor so as to relate a specific treatment within the context of an overall program. They can be completed with a single group or a non-equivalent contrast group.

The 'before and after' design is a 'pre-test, post-test only' design used when there is no control group readily available; it is sometimes also called a 'one-shot case study'. A particular group is exposed to a treatment, and after the treatment, learner performance is measured. This type of study can be used effectively in the formative evaluation of portions of a course in development. It can also be used in the evaluation of highly specialized programs where it can be presumed that the learners have no former knowledge of the topic and it is not possible to have a comparison group. In this last case the likelihood that other interventions or students' maturation could impact learner outcomes is extremely low and can be rejected on logical grounds (Wolf, 1979). Learner mortality (drop-out numbers) can be measured and, if low, rejected as a threat. An examination of selection procedures can provide information about the type of learners to whom the results may apply in a larger sense, and replication of the intervention can fortify the validity of the results over time. In short, what might typically be rejected as an unacceptable research design can be quite serviceably used in evaluation (Wolf, 1979).

Ideally, comparison studies should utilize control groups; however, in most real-world applications, and in most NASA projects, these conditions simply cannot be created (NASA, 2000). The setting up of control groups in educational settings is often at cross-purposes with the formation of alternative programs, which often target highly specialized populations. Educational evaluators must overcome this dilemma, as it is crucial to the success of the evaluation process.

While randomization of control groups is the best approach for conducting experimental studies, there are many reasons why it often cannot occur in educational evaluation. There may be only a single treatment being studied, with all eligible learners assigned to it. Another obstacle to randomization occurs when a measure of choice is involved in the enrollment of a learner (or parent) in a program; by and large, teachers, students and faculty participate in NASA efforts because they choose to (NASA, 2000).

The 'before and after' or 'one-shot case study' design will be used for the program evaluation of the Texas Aerospace Scholars program. The chief requirements for the use of this design are the novelty and specialization of the course and a group of learners without prior proficiency in the subject. This type of evaluation study is recommended for highly specialized programs where the learners have no former knowledge of the topic and it is not possible to have a comparison group.

Procedures for evaluating instructional programs

There are several different procedures employed to evaluate learners’ behavior related to a program’s educational objectives. This section will review procedures for evaluating instructional programs, distance education programs and mentoring programs. The review of literature is used to help formulate the procedures for the Texas Aerospace Scholars evaluation study.

Traditionally, there is a tendency to equate evaluation solely with formal testing. However, utilizing test scores alone can limit the types of outcomes that can be measured. More than one evaluation procedure can be used for each specific objective. These can include written tests, self-reports, essays, oral questioning, observations, reports, situational (hands-on) tests, and the rating of learners' products (Wolf, 1979).

When the purpose of the evaluation is summative in nature, developing scores based on clusters of items that measure the same sorts of things is more valuable than using individual item scores (Wolf, 1979). Depending on the type of material to be learned, the timing of the evaluations is also paramount. If learners have very little prior information on a topic, then post-program evaluation alone is recommended; otherwise, pre- and post-tests are generally recommended.
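To make the idea of cluster scoring concrete, the following minimal sketch shows one way such scores could be computed. The survey items, responses, and the use of the Python pandas library are illustrative assumptions and are not drawn from the actual TAS instruments.

    # Illustrative sketch only: item names and responses are hypothetical,
    # not drawn from the actual TAS instruments.
    import pandas as pd

    # Each row holds one scholar's responses on a 5-point scale.
    responses = pd.DataFrame({
        "interest_q1": [4, 5, 3],
        "interest_q2": [5, 4, 4],
        "confidence_q1": [3, 4, 2],
        "confidence_q2": [4, 4, 3],
    })

    # Items that measure the same construct are clustered together.
    clusters = {
        "interest_in_engineering": ["interest_q1", "interest_q2"],
        "confidence_in_science": ["confidence_q1", "confidence_q2"],
    }

    # One score per cluster per learner: the mean of that cluster's items.
    for name, items in clusters.items():
        responses[name] = responses[items].mean(axis=1)

    # Summarize the group's performance on each cluster.
    print(responses[list(clusters)].describe())

Note that the final step summarizes the group rather than ranking individual learners, consistent with the earlier distinction between evaluation and measurement.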

In 1994, the Joint Committee Standards for Program Evaluation were established by sixteen professional associations. The committee identified standards for program evaluation that were approved by the American National Standards Institute as an American National Standard. Sound evaluations of educational programs, projects, and materials in a variety of settings should have four basic attributes: utility, feasibility, propriety, and accuracy.

The utility standards are intended to ensure that an evaluation will serve the information needs of intended users. The feasibility standards are intended to ensure that an evaluation will be realistic, prudent, diplomatic, and frugal. The propriety standards are intended to ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results. The accuracy standards are intended to ensure that an evaluation will reveal and convey technically adequate information about the features that determine worth or merit of the program being evaluated (American National Standards Institute, 1994). These standards should be applied to all evaluation studies (Appendix C).

Worthen, Sanders & Fitzpatrick (1997) state that a good design should include the following six steps:
 

1. Focusing the evaluation
2. Collecting information
3. Organizing information
4. Analyzing information
5. Reporting information
6. Administering the evaluation

Worthen et al. (1997) recommend evaluators and stakeholders examine various approaches carefully to identify any important issues that are relevant to the specific questions they wish the evaluation to answer. Some evaluations use multiple approaches, or combinations of approaches, to address different questions. Some items that need evaluating may be complex and require multiple measures, because no one measure is sufficient to capture the totality of the phenomenon. Sources of information that can be used when conducting a program evaluation include the various program participants.

The most common sources for original data include:
 


Before a program evaluation begins, the conditions for gathering information should be ascertained. To determine the appropriate conditions for collecting information:
 

When designing the program evaluation, evaluators must plan a method for how they will organize and assess all of the data collected. To determine the methods for organizing, analyzing, and interpreting information evaluators need to identify the statistical or summarizing technique to be employed for analyzing both quantitative and qualitative information, and designate some means for conducting the analysis (for example, computer software). Before conducting a program evaluation, evaluators must complete a management plan.
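As a hypothetical illustration of a simple summarizing technique for qualitative information, coded comments from focus groups or open-ended survey questions can be tallied to show which themes recur. The theme labels below are invented for illustration and do not come from the TAS data.

    # Illustrative sketch only: the coded themes are hypothetical.
    from collections import Counter

    # Each entry is the theme code an evaluator assigned to one
    # focus-group comment during qualitative analysis.
    coded_comments = [
        "mentor_support", "course_difficulty", "mentor_support",
        "technology_problems", "mentor_support", "course_difficulty",
    ]

    # Tally how often each theme appears across all comments.
    theme_counts = Counter(coded_comments)
    for theme, count in theme_counts.most_common():
        print(theme, count)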

Evaluators must make a management plan that outlines methods for structuring and controlling resources, including time, money, and people, and that specifies tasks and timelines, personnel and resources, and cost. Evaluators should analyze personnel needs and assignments by outlining personnel role specifications, and considering skills required for completing each task. For a small project, the choices may be one evaluator and one or two assistants. For larger projects, one can utilize existing professional staff or consultants.

When developing the budget, the evaluator should consider all of the following:


Similar to Worthen et al. (1997), NASA recommends six phases to the evaluation process:
 

1. Develop a conceptual model of the program and identify key evaluation points
2. Develop evaluation questions and define measurable outcomes
3. Develop an evaluation design
4. Collect data
5. Analyze data
6. Provide information to interested audiences

This conceptual model focuses on four closely connected elements: program inputs (funding sources), activities (development and services), short-term impacts (immediate results), and long-term impacts (broader impacts on the system or individual). The procedures used for program evaluation in general are used with distance learning programs, with some modifications.

Procedures for evaluating distance education programs. How do we evaluate what students learn in distance learning environments? Distance learning requires the learner to study independently of a teacher, very often in the form of private study at home. One cannot directly observe this process, though one might notice some of its effects in the questions the learner asks or in tests or assignments. This is very different from a course defined in terms of attendance at a series of classes, which can be observed and evaluated directly (Thorpe, 1988).

In traditional course work there is one key figure, the teacher. Distance learning does not diminish the role of this key figure; however, other staff roles, such as subject matter experts, are often just as important. In distance learning a team effort is generally utilized, and therefore each of these inputs into the learners' experience should be evaluated.

There is also more to be evaluated in a distance learning environment, because individuals have a greater degree of choice about how they learn. They may use all of the materials or only a part of them. Since there is a more diverse range of inputs in distance learning, the evaluators' assessment of the data needs to encompass all of them (Thorpe, 1988).

Distance learning involves learners who may elect to complete the course merely for their own satisfaction. These individual decisions can be represented statistically in the aggregate data on drop-out, course completion, grades, examination passes and so forth (Thorpe, 1988). However, it may not be possible to turn to a single statistic, such as percentage pass rate, to provide a criterion of success. First, comparative data should be provided on the demographic characteristics of the learners, including educational qualifications. Second, data on intentions and outcomes other than pass rates are necessary to provide a context and other criteria of success (Thorpe, 1988).

Traditional measures to evaluate distance education courses include student evaluations, surveys, grades and attrition rates. Student evaluation is a strong indicator of course quality. Appropriate evaluation forms designed for distance education must be provided for learners, for example placing surveys on-line for ease of access.

Questions about technology demands and instructor and peer interaction are as important as content-related questions (Wade, 1999). Surveys can be used before courses begin to see what types of technology skills the students have, and what needs they perceive they might have in the virtual classroom. Post-course surveys can be used to see how students perform in follow-up course work or to determine why students may have dropped out of a class. Grades alone cannot provide all the answers and should be looked at in conjunction with other factors, including task performance and participation (Wade, 1999).

The Open University in Britain suggests that when evaluating distance learning, the operation and outcomes of the program should be evaluated both while it is in progress and after it is completed. In addition, the results of the evaluation should be used both to improve support services and to give feedback to producers about the effectiveness of their learning materials and any unexpected problems or opportunities that may have become apparent while using them. Specific questions regarding various components and aspects of distance education courses can include whether the course was:

Traditional evaluations of distance education programs have concentrated on quantitative procedures that have been in practice for years. More recently, evaluators of distance education programs have begun to propose qualitative procedures that include the collection of non-numerical types of information. In general, distance education seems to be as effective as traditional education with regard to learning outcomes, and distance learners have more favorable attitudes than traditional learners do, expressing the feeling that they learn just as well in distance education environments (Hanson & Maushak, 1996).

Simonson (1997) discusses both quantitative procedures and qualitative procedures.  Traditional quantitative evaluation procedures use carefully matched variables and pre- and post-course data analysis. Beginning in 1986, a more naturalistic method based on qualitative data utilizing interviews, focus groups, observations and journals was begun at The Open University of Great Britain. Six categories of evaluation information were established: measures of activity, efficiency, outcomes, program aims, policy and organization.

Measures of activity are counts of events, people and objects.  Questions can include:

Measures of efficiency questions can include:

Measures of outcomes are usually considered to be the most important in determining course effectiveness. Surveys and interviews with students are recommended to supplement course grades. Measures of program aims can be specified in terms of what and whom the program aims to teach. Surveys can be used to collect information to establish the extent to which these aims were met. Measures of policy include determining the demand for distance education activities and often take the form of market research. Students can also be surveyed to determine whether there were impediments to a course's success, such as lack of access to computers or libraries. Measures of organization evaluate a distance program in terms of its internal organization and procedures. Evaluators may be asked to determine ways of improving course development or delivery to help an organization be more efficient. By surveying and interviewing program organization leaders an evaluator can help to determine levels of efficiency or inefficiency (Simonson, 1997).

Iowa's Star Schools Project developed a similar method combining both qualitative and quantitative methodologies. The five components addressed in this method include accountability, effectiveness, impact, organizational context, and unanticipated consequences. Accountability assesses whether program planners did what they said they were going to do, and is the first step in determining the effectiveness of the program. Counts of numbers, people and things are collected, such as the number of students, number of class sessions and number of materials produced. Effectiveness questions focus on participants' attitudes and knowledge. Grades and test scores are supplemented by answers to survey questions and minutes of focus group sessions. Impact refers to whether the program made a difference: what changes occurred as a result of the program, and whether they are tied to the program's stated objectives. A key element of the measurement of impact is longitudinal data, and follow-up studies, surveys, and other recorded data are recommended (Simonson, 1997).

The distance learning component of the program will be evaluated with regard to current methods and procedures recommended in the literature. Student levels of participation and satisfaction will be assessed using both quantitative and qualitative methodologies. Traditional quantitative measures of participation will be represented statistically in the aggregate data on drop-out, course completion, and grades. Data regarding the learners' demographics, characteristics and intentions will provide a context and other criteria of success (such as measures of minority and female participation).
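As a hypothetical illustration of how such aggregate participation measures might be computed, the sketch below derives completion and drop-out rates from enrollment records; the records and field names are invented and are not TAS data.

    # Illustrative sketch only: records and field names are hypothetical,
    # not actual TAS data.

    # Each record marks whether an enrolled scholar completed the course.
    records = [
        {"scholar": "A", "completed": True},
        {"scholar": "B", "completed": False},
        {"scholar": "C", "completed": True},
        {"scholar": "D", "completed": True},
    ]

    enrolled = len(records)
    completed = sum(1 for r in records if r["completed"])

    completion_rate = completed / enrolled
    dropout_rate = 1 - completion_rate

    print("Completion rate:", completion_rate)
    print("Drop-out rate:", dropout_rate)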

Students' level of satisfaction with the team of mentors and staff will be analyzed, along with students' assessments of technology demands and on-line interactions. Measures of activity, efficiency, and outcomes will be compared to program goals and objectives, and unanticipated consequences will be addressed if any are found. Evaluations of mentoring programs utilize the procedures recommended for general program evaluation, with some modifications.

Procedures for evaluating mentoring programs. The U.S. Department of Education bulletin, Yes, You Can: A Guide for Establishing Mentoring Programs to Prepare Youth for College (October, 1998), recommends that evaluations be ongoing and continuous. The authors advocate evaluation plans designed at the beginning of a program's development so that planners are caused "to think about proposed activities and the ways in which these activities could be assessed as successful or not" (U.S. Dept. of Ed., 1998, para. 16). The staff should be involved in the planning phases of evaluation as well, so that evaluation is perceived not as threatening but as an opportunity to further program goals. The staff can also play an important role in interpreting the findings, since they are the most knowledgeable about the daily operations of the program. The data collected for evaluations can be used as positive reinforcement for mentors and can alert program directors and staff to any problems with program implementation. The bulletin also suggests sample questions for the summative evaluation of mentoring programs.

According to Erwin Flaxman at Columbia University's Institute on Education and the Economy (1993), mentoring programs can yield a variety of student outcomes. Flaxman recommends that the quality of mentoring and its effect on students be measured by authentic assessment tools such as portfolios, exhibitions, and student records (Flaxman, 1993).

Peterson (1989) recommends assembling background information to describe both the program and its participants, and gathering feedback from all participants in the program. Peterson contends that feedback gathered over the course of the program will naturally provide more information than a single survey at the end. In the summative report the evaluator should compare the feedback with his or her own expectations (given the program's resources and constraints) to identify problems or concerns about the program, and make recommendations or decisions regarding its continuation (Peterson, 1989).

The National Centers for Leadership in Academic Medicine (NCLAM), sponsored by the Office on Women's Health in the Department of Health and Human Services, recommends that two types of evaluations be performed to measure the success of mentor program goals and objectives: quantitative evaluation of objective measures and qualitative evaluation of subjective measures. Quantitative evaluation can assess whether goals and objectives were reached, for example, "Were ten mentors recruited and trained within two months of the start of the program?" Qualitative evaluation can be performed using common tools such as surveys and group discussions, with questions that measure aspects such as personal satisfaction and the extent to which training materials and other mentoring tools provided by the institution were helpful (NCLAM, 2001).

The National Mentoring Center at the Northwest Regional Educational Laboratory produced the handbook Making the Case: Measuring the Impact of Your Mentoring Program (1999), which describes both process and outcome evaluation procedures, including sample process-evaluation questions and the outcomes that an evaluation should assess.

The I Have a Dream-Houston program, founded in 1988, was inspired by New York philanthropist Eugene Lang, who promised 61 East Harlem sixth-graders he would send them to college if they completed high school. I Have a Dream-Houston offers disadvantaged youth mentoring and college scholarships. Using an assessment instrument designed by university researchers, program staff collects data annually to track the effectiveness of the various program components in achieving their ultimate goal: high school graduation and ongoing success for students. As the data come in, they allow the program to better judge the success of its strategies in recruiting and matching mentors (Sherman, 2001).

The National Foundation for the Improvement of Education (1999) discusses procedures used to evaluate the effectiveness of mentoring. While the most effective programs focus on the quality of program outcomes, the Foundation also recommends having mentors evaluate their students, and having students and staff evaluate the mentors. Evaluations should try to demonstrate that mentoring programs offer a significant return on investment, for example that the cost of mentoring is less than the cost of recruitment or remediation (National Foundation for the Improvement of Education, 1999).

While traditional program evaluation approaches are recommended for mentoring programs, some evaluation questions, outcomes and indicators are specific to the mentoring relationship. Quantitative outcome indicators that will be used in the evaluation of the mentoring component of the Texas Aerospace Scholars program include the percentage of homework assignments completed, increased enrollment in and completion of college preparatory courses, and enrollment in a post-secondary education program. Qualitative measures will address students' attitude changes as a result of the mentoring relationship, including a reduced feeling of isolation, greater access to knowledge, and an enhanced sense of the ability to achieve. Quantitative measures of the mentor program itself can also be analyzed, including the mentor drop-out rate and mentors' decisions to repeat the program. Qualitative measures will address the types of relationships that formed, whether the relationships continued, and what types of activities resulted.
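As a minimal illustration of how the quantitative indicators above reduce to simple proportions, the following Python sketch uses hypothetical counts; none of these figures are actual TAS data.

    # A minimal sketch, with hypothetical counts, of two quantitative
    # mentoring indicators named above: homework completion and mentor drop-out.
    homework_assigned, homework_completed = 6, 5   # per scholar (illustrative)
    mentors_started, mentors_dropped = 20, 2       # illustrative counts

    print(f"homework completed: {homework_completed / homework_assigned:.0%}")
    print(f"mentor drop-out rate: {mentors_dropped / mentors_started:.0%}")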

Validity and Reliability of Results

While content validity is central to matching evaluation procedures to specific course and program objectives, conventional notions of reliability do not apply to the evaluation of a program in the same way that they do to measuring an individual's performance. In evaluation studies, concern focuses on estimating performance for a group of learners, so the evaluation study centers on the reliability of group estimates of performance rather than individual performance. Typically the reliability of group estimates is higher than the reliability of individual estimates (Wiley & Bock, 1968). Therefore, instruments that might not be acceptable for individual measurement can be used with reasonable confidence in an evaluation study. A consequence of the greater reliability of group estimates is that evaluators can use sub-scores and even single items to characterize group performance in a fairly reliable fashion (Wolf, 1979). Overall, the reliability of measures of learner performance is not as critical in group evaluation as in individual measurement.
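The advantage of group estimates can be illustrated with a short simulation. The Python sketch below uses hypothetical score parameters (a true mean of 75 and an error standard deviation of 10, chosen only for illustration) to show that the variability of a group mean shrinks as the group grows, which is why group-level measures tolerate noisier instruments.

    import random
    import statistics

    random.seed(42)

    # Each observed score is a "true" score plus measurement error.
    def observed_scores(n_learners, true_mean=75, error_sd=10):
        return [random.gauss(true_mean, error_sd) for _ in range(n_learners)]

    # Replicate the "study" many times and see how much the group mean varies.
    for n in (1, 10, 100):
        means = [statistics.mean(observed_scores(n)) for _ in range(1000)]
        print(f"n = {n:3d}: spread of the group mean (SD) = {statistics.stdev(means):.2f}")

    # The spread shrinks roughly as 1/sqrt(n), so a group estimate is far more
    # stable than any single learner's score measured with the same instrument.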

According to Wolf (1979), eight major factors can jeopardize the validity of an evaluation study: history, maturation, testing, instrumentation, statistical regression, selection, mortality and interactions. History refers to the influence of events taking place at the same time as the treatment; evaluators must be able to consider alternative explanations for the results that are not related to the treatment itself. Maturation refers to the normal growth of an individual, biological and/or psychological, that is not related to the treatment; evaluators must determine to what extent a change in the learner is a result of the treatment and what could be a result of normal maturation. Testing refers to the effect that taking a test has on subsequent scores. One way to eliminate this factor is to administer only a single test after the period of instruction; since this precludes pre-testing, pre- and post-tests should instead involve different but related test items. Instrumentation refers to the quality of the scoring of tests when different standards may be employed in each case; the evaluator should be sensitive to the nature of the instruments used and the units of measurement employed. Statistical regression arises when the evaluator selects individuals based on extremely high or low scores; when re-tested, these individuals will generally have more moderate scores. Selection concerns the different criteria used in forming comparison groups in the study; evaluators must determine how much the comparison groups' results reflect similar or different criteria. Mortality refers to the dropout rate of the learners being studied; when a course is not compulsory, the effect of attrition can be considerable.

Interactions among all of these factors can lead to erroneous conclusions when comparing results. The presence of the various factors can determine whether learner performance is related to the treatment or due to something else. Since the goal of evaluation is to determine a program's effectiveness, the evaluation fails when the evaluator cannot say whether learner outcomes are a result of the program's effectiveness, the maturation of the students, other experiences the learners had, or something else entirely (Wolf, 1979).

One of the best ways to improve the validity of an evaluation study is to replicate it. Replication can occur in three ways. The first is to separate learners into groups, with the evaluation of each group considered a replication. The second is to carry out the evaluation study with successive groups of learners in different years; the results of the two studies can be compared, and judgments and decisions made at a future time in a longitudinal study. Inconsistencies can arise if the program is significantly altered from one year to the next; however, the results of more than one study are generally considered a superior basis for making important decisions regarding a program's ultimate fate. The third way is to introduce the treatment simultaneously in a variety of institutions, with a separate evaluation occurring in each location (Wolf, 1979).

In selecting a population sample for an evaluation, chunk samples (selected on the basis of availability), judgment samples (based on the judgment of the evaluator) and probability samples (randomly selected) are generally used. The precision of results obtained from a sample depends on the size of the sample: the larger the sample, the more precise the estimate. When 100% of the population is measured, sampling error is zero and the estimate is most precise. Sampling is widely used in research, where exhaustive testing of every member of the population can be prohibitive; in educational evaluation studies, however, the situation can be quite different. Often all the learners in a population are available and sampling is unnecessary. It is better to have all the information about a group than information about a part of it, no matter how carefully selected, and testing all learners in a program is highly desirable for reliable evaluation. NASA recommends gathering data from as many members of the population as possible, and from at least 80% of participants, to ensure validity. Whenever possible, one should assess whether there is a systematic difference between those who respond and those who do not (NASA, 2000).
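As a concrete check of this guideline, the Python sketch below compares each instrument's response rate with the 80% threshold, using the approximate year one participation figures reported in Chapter 3 (160 scholars; 100% completion of post-program evaluations; roughly 50% returns on each follow-up survey).

    # A minimal sketch of NASA's (2000) 80% response-rate guideline, applied to
    # the approximate year-one participation figures reported in Chapter 3.
    POPULATION = 160  # year-one scholars

    responses = {
        "post-program evaluation": 160,  # 100% completion
        "follow-up survey #1": 80,       # approximately 50% return
        "follow-up survey #2": 80,       # approximately 50% return
    }

    for instrument, n in responses.items():
        rate = n / POPULATION
        note = "meets guideline" if rate >= 0.80 else "below 80%: check for response bias"
        print(f"{instrument}: {rate:.0%} ({note})")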

Implications for Program Evaluation and Continuous Improvement

An objectives-oriented approach will be used for the summative program evaluation of the Texas Aerospace Scholars program, specifically the Kirkpatrick model of four levels of evaluation: reaction, learning, behavior and results. Levels one and two will be measured using participant evaluations and tests. Level three, behavior, will be based on the first interim target, the year one impact: students' choice of a college and major. Level four, results, will need to be addressed in future program evaluations.

Kirkpatrick (1994) offers the following guidelines for evaluating results: use a control group (if possible), allow time for results to be achieved, measure both before and after the program, repeat the measurement, consider cost vs. benefits, and be satisfied with evidence if proof is not available.

Many factors can influence program results (level four) as well as changes in behavior (level three). Perhaps more students are choosing computer programming as a major not just because of an intervention program but because wages for computer programmers have skyrocketed. Level four evaluation is a major challenge and beyond the scope of this study, but a comparison between pre- and post-program attitudes, learning and behaviors can still justify a program's worth. When one cannot assess results conclusively, one can go back a level or two and assess reaction, learning, and perhaps changes in behavior. Often these are very good indicators of a program's merit, if not the ultimate one (Kirkpatrick, 1994).
 

Chapter 3
Method

The program evaluation will report summative evaluation data gathered during the first year of the TAS program. Chapter 3 describes the method used to gather and analyze the data. Included is a description of the subjects, evaluation design, instruments, data collection procedures, data analysis techniques and limitations of the study.

Subjects

In year one, the Texas Aerospace Scholars program comprised 160 gifted high school juniors, representing a majority of regions across the state of Texas, who exhibited a strong interest in science, math, technology and engineering. They were nominated by a local legislator and, in most cases, selected by their schools; some were self-selected, and a few were politically selected. Figure 4 details the ethnic breakdown of the students, including 26% non-white scholars; in addition, 42% of the total population was female. Figures 5 and 6 indicate the participating legislative districts (senators and representatives) on the map of Texas.
 
 


Figure 4. Texas Aerospace Scholars student profile (other = not reported).

Figure 5. Texas Aerospace Scholars district representation - House.
 
 

Figure 6. Texas Aerospace Scholars district representation - Senate.

For the purposes of this first-year evaluation, all scholars, mentors, teachers and co-ops were included in the sample. When 100% of the population is used, sampling error is zero and the estimates are most precise; in the case of TAS, all the learners were available and sampling was unnecessary. All (100%) of the scholars completed post-program evaluations, and approximately 95% completed course assignments and exams. Returns of 50% on follow-up surveys and 50% on adult evaluations, including post-program briefings, were used as the cut-off for inclusion in the data collection.

Research Design

The objectives-oriented approach, applying Kirkpatrick's (1994) summative evaluation model, will be used to assess the impact of the TAS distance education and mentoring space science program for gifted scholars. A non-experimental design will be employed that utilizes descriptive statistics and anecdotal (qualitative) self-reports to assess scholars' levels of participation and satisfaction and their choice of majors. Descriptive statistics, including percentages and frequencies, will be plotted to assess whether the scholars who had a high level of participation and satisfaction were the ones who chose engineering and science as their intended majors.

Non-experimental studies are conducted after a program has occurred and use descriptive information to supplement other program statistics. The evaluation will assess whether a high level of participation in the program is predictive of the year one milestone: an intended choice of major in college. Descriptive statistics will be used to determine what factors scholars perceived as influential in sustaining their interest in science and engineering. In addition, a review of the data collected from mentor professionals in the program will be used to determine what factors they perceived as influential in sustaining scholar interest in the field.

Continued data collection for a future longitudinal study of all subsequent-year scholars is recommended as the best means of assessing the full merit of the program over time. The same group of scholars will be followed over four years as they continue to participate in on-line TAS activities and mentoring relationships. A high level of participation in these follow-up activities may be a predictor of future milestones, including the ultimate goal: the choice of a career in science, engineering or technology.

Ethical considerations were addressed by NASA before scholars began the program through the use of parent permission, follow-up commitment and talent release forms (Appendix D), as an assessment of the program was planned, though not yet formalized.

Instruments
 
 

The instruments used for the data collection included tests, evaluations, surveys, testimonials and focus group interviews (Appendix G). The instruments were developed prior to (and during) the first year of the program by the instructional designer and program manager in anticipation of the summative evaluation. Each instrument was correlated with course and program objectives, with the goal of systematically assessing each part of the program, including participation, success, satisfaction and impact.
 
 

Student Achievement Tests. Student tests were used to gather Level two data–student learning. The achievement tests were given after each unit of study: eight questions were developed for an on-line quiz that scholars took after completing the reading in the unit, and scholars were given immediate feedback on each test. The tests reflected the course objectives listed at the beginning of each unit. A grade of 75% or better was deemed passing. Over 95% of the scholars completed every test, and over 90% of the participating scholars passed each course unit.

The rationale for item development for each course test (of eight questions) was each unit's specific objectives: each unit comprised approximately eight chapters, with each quiz question reflecting a major concept presented in that chapter.
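To make the scoring rule concrete, the Python sketch below applies the eight-question format and the 75% passing threshold described above; the answer key and responses are hypothetical, not actual TAS quiz content.

    QUESTIONS_PER_QUIZ = 8
    PASSING_THRESHOLD = 0.75

    def score_quiz(responses, answer_key):
        """Return the fraction of quiz items answered correctly."""
        correct = sum(r == k for r, k in zip(responses, answer_key))
        return correct / len(answer_key)

    answer_key = ["a", "c", "b", "d", "a", "b", "c", "d"]  # hypothetical key
    responses  = ["a", "c", "b", "d", "a", "b", "a", "b"]  # 6 of 8 correct
    assert len(answer_key) == QUESTIONS_PER_QUIZ

    grade = score_quiz(responses, answer_key)
    print(f"Score: {grade:.0%} -> {'pass' if grade >= PASSING_THRESHOLD else 'not passing'}")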

The validity of the test instruments was established as they were developed by a team of eight NASA Johnson Space Center subject matter experts (scientists and engineers) and three working teachers in the field; a two-thirds rule of thumb was used, with agreement of at least two-thirds of the panel required on each quiz question. The reliability of the instruments was not addressed, since neither pre-testing nor a control group was used for program assessment.
 
 

Program Evaluation Surveys. Student and mentor surveys were used to gather Level one data–participant reactions, and Level two data–scholar attitudes. The surveys were given after the conclusion of the summer workshop; program evaluations were completed by scholars and all adult participants once the on-line course and summer workshop were finished. Each on-line unit was rated on a 1-5 scale (5 being the highest rating), the time it took to complete was recorded, and comments and suggestions were solicited for each. Post-program evaluation questions addressed scholars' change in attitudes towards engineering, college plans, intended college major, and future careers as a result of the distance learning course. Each component of the on-site summer workshop was rated, including tours, field trips, activities, team projects, logistics and staff, and each adult participant rated each student's academic skills and interest level on the same 1-5 scale. Program evaluations were informally assessed and used to improve the quality of the Web-site, the on-line course and the summer workshop for the second year. All (100%) of the participating scholars completed the on-line evaluation while they were at the summer workshop, and ratings for all components (except logistics) averaged over 3 (good, very good or excellent).
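As a sketch of how such component ratings are aggregated, the following Python snippet computes a mean rating per component; the rating values and the components shown are illustrative stand-ins, not the actual evaluation data.

    from statistics import mean

    # component -> list of scholar ratings on the 1-5 scale (5 = highest);
    # the values below are hypothetical.
    ratings = {
        "tours":        [5, 4, 5, 3, 4],
        "team project": [4, 4, 5, 5, 4],
        "logistics":    [3, 2, 3, 3, 2],
    }

    for component, scores in ratings.items():
        print(f"{component}: mean rating {mean(scores):.2f} (n={len(scores)})")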

The rationale for the development of the post-program evaluation questions was to elicit information about scholars' attitudes towards engineering, intended choice of college, majors, and choice of careers. In addition, the questions addressed specific program components, including the Web-site, the distance education course, and the on-site summer experience (participation and satisfaction levels). The rationale for the development of the adult participants' post-program evaluations was to elicit information about specific program components, including each individual scholar, the Web-site, the distance education chat sessions, and the on-site summer experience.

The validity of the evaluation instruments was established as they were developed by a team of eight NASA Johnson Space Center subject matter experts (scientists and engineers) and three working teachers in the field; a two-thirds rule of thumb was used, with agreement of at least two-thirds of the panel required to confirm complete coverage of all program components. The reliability of the instruments was not addressed, since no control group was used for program assessment.
 
 

Post-Program Surveys. Surveys were used to gather additional Level one data–participant reactions, and primarily Level three data–scholar behavior. The surveys were completed on-line at two months and again at six months after the program's completion. A dozen questions polled scholars' post-program interest in science, technology and engineering, their intended choice of college and major, and whether they gave presentations about TAS at their schools or took part in other NASA-related events. In addition, scholars were surveyed about their volunteer activities, any jobs they held, and their desire to co-op at NASA or a related contractor. Approximately 50% of the participating scholars completed each of the two on-line surveys.

The rationale for the development of the follow-up survey questions was to elicit any attitudinal changes based on the distance education course work, the summer workshop and the mentoring experience (attitudes towards engineering, intended choice of college, majors, and choice of careers). In addition, scholar characteristics were profiled, including school course load, work and volunteer participation, and continued activity in the TAS program (mentor correspondence, curriculum review, chat participation, technology survey participation, and program promotion in the community).

The validity of the post-program survey instruments was established as they were developed by a team of eight NASA Johnson Space Center subject matter experts (scientists and engineers) and three working teachers in the field; a two-thirds rule of thumb was used, with agreement of at least two-thirds of the panel required on the topics addressed in each follow-up survey. The reliability of the survey instruments was not addressed, since no control group was used for program assessment.
 
 

Anecdotal Reports. Anecdotal reports reflect Level one data–participant reactions, and Level three data–scholar behavior. Over 75 unsolicited scholar testimonials were received by mail and e-mail and filed for informal assessment purposes. Fifteen scholars from weeks two and three were interviewed on video for use on the TAS Web-site, in the TAS promotional video, and for informal assessment purposes. Scholars were asked questions about how they were selected, the on-line course work, the on-site summer workshop, the team mentoring projects, and how TAS might impact their future choice of career.

The rationale for the development of the video testimonial questions was to elicit discussion of the nomination process, the distance learning experience, the on-site teamwork, the space program, and attitudinal changes resulting from the program.

The validity of the video testimonial questions was established as they were developed by a team of eight NASA Johnson Space Center subject matter experts (scientists and engineers) and three working teachers in the field; a two-thirds rule of thumb was used, with agreement of at least two-thirds of the panel required on the testimonial questions. The reliability of the questions was not addressed, since no control group was used for program assessment.
 
 

Focus Group Interviews. Focus group interviews were conducted to gather Level one data–adult participant reactions. TAS mentors and co-ops participated in follow-up round-table evaluation meetings. Comments about what worked, the Web-site, team project work, and the on-site summer workshop were gathered and summarized. Suggestions for improvement were incorporated into the second year of the program. Over half of the participating co-ops and mentors participated in the round-tables.

The rationale for conducting adult participant round-table discussions was to elicit information about specific program components, including each individual scholar, the Web-site, the distance education chat sessions, and the on-site summer experience.

The validity of the discussion group questions was established as they were developed by all participating NASA Johnson Space Center mentors (scientists and engineers); a two-thirds rule of thumb was used, with agreement of at least two-thirds of the participants required to confirm that the discussion topics covered all program components. The reliability of the instruments was not addressed, since no control group was used for program assessment.
 
 
 
 

Data Collection Procedure
 
 

Data were collected from the beginning of the on-line course in February 2000 through six months past the summer workshop, in February 2001 (Appendix E). Table 2 details the data collection points during the first year.
 
 

Table 2. TAS timeline and data collection points.

January 2000 - Scholars Selected
February-May 2000 - On-line Distance Education Course Work
February-May 2000 - On-line Course Tests
May-June 2000 - Final Projects Submitted to Mentors
May-June 2000 - On-line Team Chats with Mentors
July-August 2000 - Summer Workshop
July-August 2000 - Video Interviews
July-August 2000 - Program Evaluations from Scholars
August-September 2000 - Program Evaluations from Mentors, Co-ops and Teachers
August 2000-February 2001 - Unsolicited Testimonials
September 2000 - Mentor/Co-op Round-tables
October 2000 - Follow-up Survey #1
February 2001 - Follow-up Survey #2
September 2001 - Program Evaluation

 

During the distance education course, six on-line unit tests were given to all participating scholars, one at the completion of each unit. Tests were created using Active Server Pages. During the summer workshop, a random selection of scholars was interviewed on video for testimonials about the program. At the close of each session of the summer workshop, scholars, mentors and co-ops completed an on-line post-program evaluation; the evaluations were also created using Active Server Pages. Within one month after the conclusion of the summer workshops, focus group discussions were held with program mentors and co-ops. Two months following the conclusion of the summer program, scholars were asked to complete an on-line survey; the surveys were created using the Survey Says software. From February 2000 to February 2001, unsolicited testimonials were received from scholars by e-mail and letter.

Data Analysis

The focus of the analysis of the first year of data will be to assess students' perceptions of engineering as a career, their choice of colleges, and their intended majors and degrees after completing the TAS program. The correlations of student satisfaction and participation levels with attitudinal changes, college choices and intended majors will be the focus of the year one evaluation study. The correlations will be calculated using the Pearson product-moment correlation coefficient formula.
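The Pearson calculation itself is straightforward; the Python sketch below implements the product-moment formula and applies it to hypothetical satisfaction and participation values (placeholders, not TAS data).

    import math

    def pearson_r(x, y):
        """Pearson product-moment correlation between two paired lists."""
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
        sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
        return cov / (sd_x * sd_y)

    satisfaction  = [5, 4, 4, 3, 5, 2, 4, 3]  # hypothetical 1-5 ratings
    participation = [6, 5, 6, 4, 6, 2, 5, 3]  # hypothetical units completed

    print(f"r = {pearson_r(satisfaction, participation):.2f}")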

A secondary focus will be the success of the distance education program based on student participation and grades, and the success of the mentoring component based on scholar evaluations, surveys, testimonials and discussion groups. An outline of the data analysis components is given in Appendix F.

The distance education units will be assessed using descriptive statistics of the final test scores, completion rates, and time allocation ratings. Raw and standard scores will be analyzed; the mean, median and standard deviation will be reported; and frequency polygons of raw scores will be plotted.
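The descriptive summary for each unit reduces to a few standard computations; the Python sketch below, run on hypothetical raw quiz scores rather than actual TAS data, produces the mean, median and standard deviation, plus the binned frequencies that a frequency polygon plots.

    from collections import Counter
    from statistics import mean, median, stdev

    raw_scores = [88, 75, 100, 63, 88, 75, 88, 100, 50, 75, 88, 63]  # illustrative

    print(f"mean {mean(raw_scores):.1f}, median {median(raw_scores):.1f}, "
          f"sd {stdev(raw_scores):.1f}")

    # Bin scores into 10-point intervals; a frequency polygon plots these counts
    # at the bin midpoints and connects them with line segments.
    bins = Counter((score // 10) * 10 for score in raw_scores)
    for low in sorted(bins):
        print(f"{low}-{low + 9}: {'*' * bins[low]} ({bins[low]})")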

The Web-site will be assessed using scholar and adult post-program evaluations. Descriptive statistics will be presented regarding the various Web-site components and interactive features: the mean, median and standard deviation will be reported, and frequency polygons of raw scores will be plotted. Some anecdotal information from post-hoc self-reports will be summarized in narrative form.

The distance education course work will be assessed using scholar post-program evaluations. Descriptive statistics will be presented along with some anecdotal information from post-hoc self-reports; both quantitative and some qualitative information will be presented, with the focus on satisfaction and attitudinal changes. The mean, median and standard deviation will be reported, and frequency polygons of raw scores will be plotted.

The on-site summer mentoring experience will be assessed using scholar and adult post-program evaluations. Descriptive statistics will be presented for each component of the summer program, including events, activities, briefings and teamwork, and some anecdotal information from post-hoc self-reports will be summarized in narrative form. Both quantitative and qualitative information will be assessed, with the focus on participation levels, satisfaction and attitudinal changes. The mean, median and standard deviation will be reported, and frequency polygons of raw scores will be plotted.

The follow-up surveys will be analyzed using descriptive statistics and the presentation of some anecdotal information. The descriptive data will be presented with the focus on attitudinal changes, college choices, intended majors, intended careers and follow-up TAS activity participation levels. The mean, median and standard deviation will be reported, and frequency polygons of raw scores will be plotted. The correlations of satisfaction and participation levels with attitudinal changes, college choices and intended majors will be the primary final focus of the year one evaluation study.

Cost information will be presented in tables to provide as full a picture of the program as possible. Supplemental program information will be summarized and presented in terms of measures of central tendency, including the extent of endorsement or rejection by participants (scholars, mentors and co-ops).

Limitations
 
 

Texas Aerospace Scholars began without a formal evaluation plan in place before the curriculum was developed. However, as formal assessment was an eventual goal for NASA, a large amount of data was collected. Descriptive and anecdotal information was collected from all participants, but not systematically. A variety of instruments were developed in an attempt to provide an overview of the various program components.

This large amount of data is now difficult to organize, includes a great deal of anecdotal material, and has never been systematically analyzed; data complexity and quality may affect the time needed for data collection and analysis. This study utilizes extant data (collected by JSC in 2000-2001); however, only informal analysis of the data has been completed. While quantitative data will be processed using technology when possible, qualitative data may be more time consuming to assess thoroughly. It may be necessary to shorten the analytic process somewhat, thereby limiting the value of the findings.

NASA recommends using a "mixed-method" approach combining both qualitative and quantitative data, since a majority of NASA projects are targeted to participants not in carefully controlled, restrictive environments but in complex social environments, and ignoring the complexity of this background could impoverish the evaluation (NASA, 2000). Analyzing the data to a sufficiently complex level of detail may therefore be a limitation, given the time constraints of the study (one semester).

Since the program was initiated with a single group of learners, and neither a comparison group nor the initial status of the learners is available, judgments about the attainment of objectives may be difficult to make. Because TAS is a highly specialized, independent distance curriculum, it can be considered so novel that the initial status of the learners (and the likelihood of external events influencing learner performance) can be ignored; in this case, presumptive conclusions can be drawn on the basis of performance after instruction. Such judgments, however, contain a high degree of presumption and may not be acceptable if a decision is to be made at this point in time. It is therefore recommended that these results be used for formative evaluation and that major decisions be postponed until more information is obtained.

The study will only assess the impact of the program immediately following its completion and after a period of eight months (through the scholars' senior spring semester). The study will focus only on the year one milestone established prior to the research: the scholars' choice of college major as an early outcome measure. A full picture of the impact of the four-year follow-up program can only be obtained from a longitudinal study; therefore, a future study, or series of studies, should be conducted at that time.
 

References

Amazing Space, Space Telescope Science Institute. Web-site. [On-line]. Available http://amazing-space.stsci.edu/general-info.html [September 3, 2001].
American Association of University Women. (1992). How schools shortchange girls: a study of major findings on girls and education. Washington, D.C.: National Education Association and The American Association of University Women.
American National Standards Institute. (1994). Joint committee on standards for educational evaluation, the program evaluation standards.  Thousand Oaks, CA.: Sage Publications. Web-site. [On-line]. Available http://www.eval.org/EvaluationDocuments/progeval.html [September 3, 2001].
Beeby, C. E. (1975, June). The Meaning of evaluation. Paper delivered at evaluation conference, Department of Education, Wellington, New Zealand.
Bennett, D. T. (1997). Providing role models on-line, Electronic Learning, 16, 50-51.
Berger, S. L. (1990). Mentor relationships and gifted learners. ERIC Digest #E486 [ERIC Identifier: ED321491 ]. Reston, VA: ERIC Clearinghouse on Handicapped and Gifted Children. Web-site. [On-line]. Available: http://www.ed.gov/databases/ERIC_Digests/ed321491.html [June 5, 2001].
Casey, K. M., & Shore, B. (2000). Mentors’ contributions to gifted adolescents’ affective, social, and vocational development. Roeper Review, 22, no. 4, 227-30.
Center for Children and Technology. KAHooTZ, Imagination Place! Testimonials. Web-site. [On-line]. Available http://www.kahootz.com.au/testimonials.html [July 6, 2001].
Center for Educational Technology. International Space Station Challenge. Web-site. [On-line]. Available http://voyager.cet.edu/iss/ [September 5, 2001].
Classroom of the Future: Earth Science Explorer. Web-site. [On-line]. Available http://www.cotf.edu/ete/modules/msese/explorer.html [September 3, 2001].
Classroom of the Future: Exploring the Environment. Web-site. [On-line]. Available http://www.cotf.edu/ete/ [August 3, 2001].
Commission for the Advancement of Women and Minorities in Science, Engineering, and Technology Development (CAWMSET). Land of Plenty: Diversity as America’s competitive edge in science, engineering and technology. Washington, D.C.: National Science Foundation. Web-site. [On-line]. Available http://www.nsf.gov/od/cawmset/ [June 15, 2001].
Coppula, D. (1997). Making room at the table. American Association of Engineering Education Prism, v. 6, 14.
Dick, W., & Carey, L. (1996). The systematic design of instruction. New York: HarperCollins.
Explore Science. Web-site. [On-line]. Available http://www.explorescience.com/ [September 5, 2001].
Figg, C. (1997). Reflections from the Journal of a First-Time Facilitator for the Electronic Emissary Project. University of Texas at Austin. Web-site. [On-line]. Available http://emissary.ots.utexas.edu/emissary/candace.html [September 5, 2001].
Fitz-Gibbon, C.T., & Morris, L.L. (1987). How to design a program evaluation. Newbury Park, CA: Sage.
Flaxman, E. (1993). Standards for mentoring in career development. New York, NY: Columbia University's Institute on Education and the Economy.
Gagne, R., Briggs, L. & Wagner, W. (1992). Principles of instructional design (4th ed.) New York: Holt, Rinehart, and Winston.
GLOBE Program. Web-site. [On-line]. Available http://www.globe.gov/ [September 5, 2001].
Gomez, A.G. (2000). Engineering, But How? The Technology Teacher, 59, no. 6, 17-22.
Hamilton, K. (1997). Mousetrap cars, Egg Drops, and Bridge Building. Black Issues in Higher Education, v. 14, 22-5, July 10, 1997.
Hammonds, L.O. (1998). The Virtual High School. The Clearing House, 71, no. 6, 324-5, July/August 1998.
Hanson, D. & Maushak, N. (1996). Distance education: review of the literature. Ames, Iowa: Research Institute for Studies in Education.
JETS. Junior Engineering Technical Society at the University of Missouri College of Engineering. Web-site. [On-line]. Available http://www.jets.org/nedc/nedc.htm [September 6, 2001].
Johns Hopkins Applied Research Laboratory. MESA Program. Web-site. [On-line]. Available http://www.jhuapl.edu/mesa/content.htm [September 16, 2001].
Kaufmann, F. (1981). The 1964-1968 Presidential scholars: a follow-up study. Exceptional Children, 48, 164-169.
Kerr, B. (1983). Raising the career aspirations of gifted girls. The Vocational Guidance Quarterly, 32, 37-43.
Kerr, B. (1985). Smart Girls, Gifted Women. Columbus, OH: Ohio Psychology.
Kerr, B. (1991). Career Planning for Gifted and Talented Youth. ERIC Digest #E492. Reston, VA: ERIC Clearinghouse on Handicapped and Gifted Children.
Kirkpatrick, D. L. (1994). Evaluating training programs: the four levels. San Francisco, CA: Berrett-Koehler.
Learning Technologies Project. The Observatorium. Web-site. [On-line]. Available http://observe.ivv.nasa.gov/nasa/entries/entry_6.html [September 5, 2001].
Lunar and Planetary Institute. Web-site. [On-line]. Available http://cass.jsc.nasa.gov/education/EPO/students.html [September 5, 2001].
McIntosh, M. E., & Greenlaw, M. J. (1990). Fostering the postsecondary aspirations of gifted urban minority students. In S. Berger (Ed.), ERIC Flyer Files. Reston, VA: ERIC Clearinghouse on Handicapped and Gifted Children.
Marable, T.D. (1999). The Role of student mentors in a pre-college engineering program, Peabody Journal of Education, 74, no. 2.
Mioduser, D., Nachmias, R., Lahav, O., & Oren, A. (2000). Web-based learning environments: current pedagogical and technological state. Journal of Research on Computing in Education, 33, no 1, 55-76, Fall 2000.
Molkenthin, R. (2001). Model school programs: Lacey Township High School. Tech Directions, 60, no. 7, 27-30, Fall 2001.
National Aeronautics and Space Administration. (2000). The User-Friendly Handbook for Evaluation of NASA Educational Programs. Washington, D.C.: National Aeronautics and Space Administration.
National Aeronautics and Space Administration. (2001). Operating Principles for NASA’s Educational Plan. Web-site. [On-line]. Available http://education.nasa.gov/implan/principl.html [July 2, 2001].
National Aeronautics and Space Administration. (2001). The Role of Education in NASA’s Strategic Plan. Web-site. [On-line]. Available  http://education.nasa.gov/implan/role.html [July 2, 2001].
National Aeronautics and Space Administration (NASA) Quest. Web-site. [On-line]. Available http://quest.arc.nasa.gov/index.html [September 3, 2001].
National Centers for Leadership in Academic Medicine. (2001). Recommendations for a successful mentoring program. Washington, DC: U.S. Department of Health and Human Services' Office on Women’s Health. Web-site. [On-line]. Available http://www.4woman.gov/owh/col/mentoring.htm [September 6, 2001].
National Education Agency Foundation for the Improvement of Education. (1999). Creating a Teacher Mentoring Program. Washington, DC: National Education Agency Foundation for the Improvement of Education. Web-site. [On-line]. Available http://www.nfie.org/publications/mentoring.htm#content [September 6, 2001].
National Mentoring Center. (1999). Making the case: measuring the impact of your mentoring program. Northwest Regional Educational Laboratory. Web-site. [On-line]. Available http://www.nwrel.org/mentoring/pdf/makingcase.pdf  [September 16, 2001].
National Science Foundation. (1998). Science and Engineering Indicators 1998. National Science Foundation. Web-site. [On-line]. Available http://www.nsf.gov/sbe/srs/seind98/start.htm [July 3, 2001].
National Science Foundation. (1993). Planned majors of National Merit Scholars. Web-site. [On-line]. Available http://www.nsf.gov/search97cgi/vtopic [July 3, 2001].
Panitz, B. (1996). Strengthening the Pipeline. American Association of Engineering Education Prism, v. 5, p.13.
Peterson, R.W. (1989). How to organize and evaluate a mentor program. Irvine, CA: University of California at Irvine. Web-site. [On-line]. Available
http://apollo.gse.uci.edu/MentorTeacher/Contents.html [September 12, 2001].
Rodriguez, R. (1997). Reaching Out, But in What Direction? Black Issues in Higher Education, v. 13.
Sanders, J. R. (1992). Evaluating School Programs: An Educator’s Guide. Newbury Park, CA: Corwin Press.
Scobee, J. & Nash, W. R. (1983). A survey of highly successful space scientists concerning education for gifted and talented students. Gifted Child Quarterly, 27, 147-151.
Scriven, M. (1967). The methodology of evaluation. In R. E. Stake (Ed.), Perspectives of curriculum evaluation (American Educational Research Association Monograph Series on Evaluation, No. 1, pp. 39-83). Chicago: Rand-McNally.
Sherman, L. (2001). Program profile: I Have A Dream-Houston. National Mentoring Center Bulletin. Northwest Regional Educational Laboratory. Web-site. [On-line]. Available
http://www.nwrel.org/mentoring/bulletin2/ihad.html [September 16, 2001].
Simonson, M. R. (1997). Evaluating teaching and learning at a distance. New Directions for Teaching and Learning, no. 71, Fall, 1997.
Telementoring Young Women in Science, Engineering and Computing. Web-site. [On-line]. Available http://www.edc.org/CCT/telementoring/ [September 5, 2001].
Texas Aerospace Scholars. (2001). Curriculum: Shuttle, Station, Moon and Earth to Mars. Web-site. [On-line]. Available: http://aerospacescholars.org [June 10, 2001].
Texas Aerospace Scholars. (2001). On-line scholar databases. On file on-line at the NASA Johnson Space Center, Houston, TX.
Texas Aerospace Scholars. (2001). Texas Aerospace Scholars (Year 1 and Year 2) Technology Survey Results. On file on-line at the NASA Johnson Space Center, Houston, TX.
Texas Aerospace Scholars. (2001). Texas Aerospace Scholars. Web-site. [On-line]. Available http://aerospacescholars.jsc.nasa.gov [June 10, 2001].
Thorpe, M. (1988). Evaluating open and distance learning.  Essex, U.K.: Longman.
Tuttle, H. (1998). What is a Virtual School? Multimedia Schools, 5, no 3, 46-8, May/June 1998.
Tyler, R. (1950). Basic Principles of Curriculum and Instruction.  Chicago: University of Chicago Press.
UNITE. Uninitiates' Introduction to Engineering. Web-site. [On-line]. Available http://www.jets.org/unite/unite.htm [September 6, 2001].
United States Department of Education. (1998). Yes, You Can: A Guide for Establishing Mentoring Programs to Prepare Youth for College. Web-site. [On-line]. Available http://www.ed.gov/pubs/YesYouCan/index.html [September 9, 2001].
The Virtual High School. (2001). Web-site. [On-line] Available: http://vhs.concord.org [June 29, 2001].
The United States House of Representatives, Representative Woolsey, L. C. (sponsor). (2001). H.R.1536: To amend the Elementary and Secondary Education Act of 1965 to provide grants to local educational agencies to encourage girls to pursue studies and careers in science, mathematics, engineering, and technology.  Washington D.C.: United States House of Representatives.
Wade, W. (1999). Assessment in distance learning: what do students know and how do we know that they know it?  T.H.E. Journal, 27, no. 3, 94-6, October 1999.
Web66.  (2000). Web66’s On-line List of Virtual Schools. Web-site. [On-line] Available http://web66.coled.umn.edu/Schools/Lists/OnLine.html [June 29, 2001].
Western Michigan University. The Web-site for the Joint Committee on Standards for Educational Evaluation Glossary. Web-site. [On-line]. Available http://ec.wmich.edu/glossary/ [September 3, 2001].
Wiley, D., & Bock, R. (1968). Quasi-experimentation in Educational Settings. School Review, 75, 353-366.
Wolf, R. M. (1979). Evaluation in Education: Foundations of Competency Assessment and Program Review.  New York: Praeger Publishers.
Worthen, B. R., Sanders, J. J. & Fitzpatrick, J. L. (1997). Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.). New York: Longman.
Yahoo. (2001). Yahoo K-12 Distance Learning. Web-site.  [On-line]. Available http://www.yahoo.com/Education/Distance_Learning/K_12/ [July 3, 2001].
 

Appendix A

National Science Foundation Science and Engineering Indicators

Excel Files:

  1. Total science and engineering jobs: 1996 and projected 2006
  2. National trends in science course taking at age 17, by sex and race/ethnicity: 1986, 1990, 1992, and 1994
  3. Freshman Choice of Major in broad science and engineering fields, by race/ethnicity and sex, 1972-1992.
  4. Proportion of freshmen intending to major in science and engineering, by field, sex, and race/ethnicity: 1976-96
  5. Undergraduate enrollment in engineering, by sex, race/ethnicity, and citizenship: 1979-96
  6. Undergraduate enrollment in engineering and engineering technology programs: 1979-96
  7. Earned bachelor’s degrees by race/ethnicity/citizenship and field: 1977-1991.


Appendix B

Planned College Majors of Merit Scholars: 1982-1992

Excel File

Planned College Majors of Merit Scholars 1982-1992

Appendix C

Joint Committee Standards for Program Evaluation (1994)

Utility Standards

The utility standards are intended to ensure that an evaluation will serve the information needs of intended users.

U1. Stakeholder Identification--Persons involved in or affected by the evaluation should be identified, so that their needs can be addressed.

U2. Evaluator Credibility--The persons conducting the evaluation should be both trustworthy and competent to perform the evaluation, so that the evaluation findings achieve maximum credibility and acceptance.

U3. Information Scope and Selection--Information collected should be broadly selected to address pertinent questions about the program and be responsive to the needs and interests of clients and other specified stakeholders.

U4. Values Identification--The perspectives, procedures, and rationale used to interpret the findings should be carefully described, so that the bases for value judgments are clear.

U5. Report Clarity--Evaluation reports should clearly describe the program being evaluated, including its context, and the purposes, procedures, and findings of the evaluation, so that essential information is provided and easily understood.

U6. Report Timeliness and Dissemination--Significant interim findings and evaluation reports should be disseminated to intended users, so that they can be used in a timely fashion.

U7. Evaluation Impact--Evaluations should be planned, conducted, and reported in ways that encourage follow-through by stakeholders, so that the likelihood that the evaluation will be used is increased.

Feasibility Standards

The feasibility standards are intended to ensure that an evaluation will be realistic, prudent, diplomatic, and frugal.

F1. Practical Procedures--The evaluation procedures should be practical, to keep disruption to a minimum while needed information is obtained.

F2. Political Viability--The evaluation should be planned and conducted with anticipation of the different positions of various interest groups, so that their cooperation may be obtained, and so that possible attempts by any of these groups to curtail evaluation operations or to bias or misapply the results can be averted or counteracted.

F3. Cost Effectiveness--The evaluation should be efficient and produce information of sufficient value, so that the resources expended can be justified.

Propriety Standards

The propriety standards are intended to ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.

P1. Service Orientation--Evaluations should be designed to assist organizations to address and effectively serve the needs of the full range of targeted participants.

P2. Formal Agreements--Obligations of the formal parties to an evaluation (what is to be done, how, by whom, when) should be agreed to in writing, so that these parties are obligated to adhere to all conditions of the agreement or formally to renegotiate it.

P3. Rights of Human Subjects--Evaluations should be designed and conducted to respect and protect the rights and welfare of human subjects.

P4. Human Interactions--Evaluators should respect human dignity and worth in their interactions with other persons associated with an evaluation, so that participants are not threatened or harmed.

P5. Complete and Fair Assessment--The evaluation should be complete and fair in its examination and recording of strengths and weaknesses of the program being evaluated, so that strengths can be built upon and problem areas addressed.

P6. Disclosure of Findings--The formal parties to an evaluation should ensure that the full set of evaluation findings along with pertinent limitations are made accessible to the persons affected by the evaluation, and any others with expressed legal rights to receive the results.

P7. Conflict of Interest--Conflict of interest should be dealt with openly and honestly, so that it does not compromise the evaluation processes and results.

P8. Fiscal Responsibility--The evaluator's allocation and expenditure of resources should reflect sound accountability procedures and otherwise be prudent and ethically responsible, so that expenditures are accounted for and appropriate.

Accuracy Standards

The accuracy standards are intended to ensure that an evaluation will reveal and convey technically adequate information about the features that determine worth or merit of the program being evaluated.

A1. Program Documentation--The program being evaluated should be described and documented clearly and accurately, so that the program is clearly identified.

A2. Context Analysis--The context in which the program exists should be examined in enough detail, so that its likely influences on the program can be identified.

A3. Described Purposes and Procedures--The purposes and procedures of the evaluation should be monitored and described in enough detail, so that they can be identified and assessed.

A4. Defensible Information Sources--The sources of information used in a program evaluation should be described in enough detail, so that the adequacy of the information can be assessed.

A5. Valid Information--The information gathering procedures should be chosen or developed and then implemented so that they will assure that the interpretation arrived at is valid for the intended use.

A6. Reliable Information--The information gathering procedures should be chosen or developed and then implemented so that they will assure that the information obtained is sufficiently reliable for the intended use.

A7. Systematic Information--The information collected, processed, and reported in an evaluation should be systematically reviewed and any errors found should be corrected.

A8. Analysis of Quantitative Information--Quantitative information in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.

A9. Analysis of Qualitative Information--Qualitative information in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.

A10. Justified Conclusions--The conclusions reached in an evaluation should be explicitly justified, so that stakeholders can assess them.

A11. Impartial Reporting--Reporting procedures should guard against distortion caused by personal feelings and biases of any party to the evaluation, so that evaluation reports fairly reflect the evaluation findings.

A12. Metaevaluation--The evaluation itself should be formatively and summatively evaluated against these and other pertinent standards, so that its conduct is appropriately guided and, on completion, stakeholders can closely examine its strengths and weaknesses.

Appendix D

Student Application, Commitment Form, Talent Release Form

Texas Aerospace Scholars Program

Student Application

I. STUDENT DATA Please Print Legibly or Type

______________________________________________________________________________________

Name: Last First Middle Name Preferred

______________________________________________________________________________________

Home Address

______________________________________________________________________________________

City State Zip Code Telephone

___________________________________ ______________________________________

Date of Birth Student’s E-mail Address

U.S. Citizen Yes ________ No ________ Gender Male ________ Female ________

Ethnic Group – check one (optional)
 
 

II. PARENTAL CONSENT

I understand that my child is being considered for a position in the Texas Aerospace Scholars Program which will include a 1-week period, Sunday through Friday, at the Johnson Space Center between June and August. Direct supervision will be provided by a NASA sponsor. I certify below, that I

Emergency Contact _________________________________________________________________

Relationship ____________________________ Telephone Number ____________________

Parental Signature ________________________________________ Date ____________________

Parents’ Phone Number (if different from above) __________________________________________

Texas Aerospace Scholars Program

III. ESSAY FORM
Name: __________________________________ Date: ______________________

Prepare a 300-word legibly written or typed essay addressing the following:


______________________________________________________________________________________

Name of High School

Please attach a copy of your transcript to this Student Application Form (If available, transcript should include final grades for courses through the 2000 Spring academic semester.)

Texas Aerospace Scholars Program

IV. TEACHER RECOMMENDATION

TOP PORTION TO BE COMPLETED BY STUDENT BEFORE SUBMITTING TO TEACHER FOR RECOMMENDATION.

Student’s Name: __________________________________________________________________________

Last First M.I.


Name and Title ____________________________________________________________________________

School/Organization ________________________________________________________________________

How long have you known the student and in what capacity?

______________________________________________________________________________________

______________________________________________________________________________________

Please tell us in narrative form why you endorse this student for the Texas Aerospace Scholars Program. Address what you know about the student’s academic performance and participation in school activities. Attach an additional sheet of paper if necessary. PLEASE PLACE THIS RECOMMENDATION FORM IN A SEALED ENVELOPE BEFORE RETURNING TO STUDENT.

Signature ___________________________________________________Date ____________________

May we contact you for additional information? Yes ___ No ___ Telephone No. ____________________

This form should be returned to the student in a sealed envelope as a part of the student packet.

Talent Authorization and Release

Texas Aerospace Scholars Program

I _________________________, hereby grant to the National Aeronautics and Space Administration (NASA), and others acting on its behalf, the right to record my person and voice using audio, photographic, and video techniques and to use these recordings in the making of NASA training productions, public information productions, and any other productions intended for official NASA business. I further grant to NASA the right to use any such productions. I hereby waive all rights of any nature in such recording(s) and the exhibition thereof.

It is understood that this grant includes the right to use, reproduce, distribute, and exhibit such photographic, video, or audio productions in any and all media throughout the world without limitation, and to authorize others to do so.

It is further understood that this grant is provided at no cost to the government and that no compensation of any kind shall be due or expected.

Signed:__________________________________

Printed Name:____________________________

If a minor, signature of parent or guardian:

_________________________________________

Date:____________________________________

Witness:_________________________________

The Talent Release form must be received at NASA-JSC no later than June 1, 2000.

Mail to:

Texas Aerospace Scholars

NASA Johnson Space Center

Mailcode AH2

2101 NASA Road 1

Houston, TX 77058

Attn: Jeannie Aquino

Follow-Up Commitment Form

Texas Aerospace Scholars Program

I _________________________, agree that as an Aerospace Scholar, after my distance education program and on-site internship end, I will

Signed:__________________________________

Printed Name:____________________________

If a minor, signature of parent or guardian:

_________________________________________

Date:____________________________________

Witness:_________________________________

The Follow-Up Commitment Form must be received at NASA-JSC no later than June 1, 2000.

Mail to:

Texas Aerospace Scholars

NASA Johnson Space Center

Mailcode AH2

2101 NASA Road 1

Houston, TX 77058

Attn: Jeannie Aquino

Appendix E

TAS Databases

The TAS databases collected by NASA-JSC in 2000-2001 include student test scores, program evaluations, post-program surveys, student testimonials (written and audio), and mentor and co-op roundtable meeting minutes.

All data files are in Excel, PowerPoint, or Word format and are located at: http://www.ghg.net/ritakarl/research1.html

Numerical ratings range from 1 to 5 (1 = Poor, 5 = Excellent).

Test Scores

Quick Quiz! Unit Tests (Excel file)

Program Evaluations

Attitudes towards engineering (students) (Excel file)

Numerical Evaluations: Distance Education, Web-site, On-Site Workshop: General (students) (Excel file)

Numerical Evaluations: On-Site Workshop: Rating of Logistics (students) (Excel file)

Numerical Evaluations: On-Site Workshop: Rating of Staff (students) (Excel file)

Written Comments: Distance Education, Web-site, On-Site Workshop, Future Careers (students) (Excel file)

Adult Participant Evaluations

Numerical Evaluations: On-Site Workshop (teachers) (Excel file)

Numerical Evaluations and Written Comments: Trips, Program, Application Process (mentors, teachers, co-ops) (Excel file)

Mentor and Co-op Roundtable Meeting Minutes (Word file)

Post-Program Survey Data

Survey #1 Numerical Data Charts (attitudes towards engineering) (PowerPoint file)

Survey #1 Written Answers (Excel file)

Survey #2 Numerical Data Charts (attitudes towards engineering) (PowerPoint file)

Survey #2 Written Answers (Excel file)
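To illustrate how these extant rating data might be prepared for analysis, a minimal sketch in Python follows. The file name ratings.xlsx and the assumption of one row per student and one column per evaluation item are hypothetical stand-ins; the TAS spreadsheets are described above, but their internal layout is not specified here.

# Minimal sketch: summarizing a hypothetical 1-5 rating spreadsheet.
# Assumptions (not from the source): "ratings.xlsx" holds one row per
# student and one column per evaluation item, rated 1 (Poor) to 5
# (Excellent). Actual TAS file names and layouts may differ.
import pandas as pd

ratings = pd.read_excel("ratings.xlsx")                   # load one data file
ratings = ratings.apply(pd.to_numeric, errors="coerce")   # blank cells become NaN

summary = pd.DataFrame({
    "n": ratings.count(),     # number of respondents per item
    "mean": ratings.mean(),   # average rating per item
    "sd": ratings.std(),      # standard deviation per item
})
print(summary.round(2))

Per-item summaries of this kind could support the descriptive analysis of the numerical evaluations listed above.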

Appendix F

Data Analysis Outline

Appendix G

TAS Evaluation Instruments

Quick Quiz! Unit Tests

Post-Program Evaluations (Scholar, Mentor, Co-op)

Follow-Up Survey #1

Follow-Up Survey #2

Mentor and Co-op Focus Group Discussions