Sunday, December 1, 2013

MAA Calculus Study: Persistence through Calculus

A successful Calculus program must do more than simply ensure that students who pass are ready for the next course. It also needs to support as many students as possible in attaining this readiness, and it must encourage those students to continue with their mathematics. As I wrote in my January 2010 column, "The Problem of Persistence," the fact that a student needs further mathematics for his or her intended career and has done well in the last mathematics course is no guarantee that he or she will decide to continue studying mathematics. This loss between courses is a significant contributor to the disappearance from STEM fields of at least half of the students who enter college intending to pursue a degree in science, technology, engineering, or mathematics. Chris Rasmussen and Jess Ellis, drawing on data from MAA’s Calculus Study, have now shed further light on this problem. This column draws on some of the results they have gleaned from our data.

For the MAA Calculus Study, students were surveyed both at the start and end of the fall term in mainstream Calculus I. A student was classified as a persister if she or he indicated at the start of the term an intention to continue on to Calculus II and still held that intention at the end of the term. A student was classified as a switcher if she or he intended at the start of the term to continue on to Calculus II, but changed his or her mind by the end of the term.

Not all students completed both the start and end of term surveys. While 50% of all Calculus I students received an A or B in the course, A or B students accounted for 80% of those who completed both surveys. Almost all of the remainder received a C. This implies that our data reflect what happened to the students who were doing well in the class. Of the students who started the term with the intention of taking Calculus II (74% of the students who answered both surveys), 15% turned out to be switchers. Less than 2% of all Calculus I students started with the expectation that they would not continue on to Calculus II but changed their minds by the end of the course.

The rates of switchers varied considerably. Women were far more likely to switch (20%) than men (11%). Those at large research universities were also more likely to switch (16%), particularly if they were taught by a graduate teaching assistant (19%). Rates varied by intended major, from a low of 6% switchers for those headed into engineering to 23% for pre-med majors and 27% for business majors taking mainstream calculus.

Classroom instruction had a significant effect on switcher rates (see Figure 1). "Good Teaching" reflects the collection of highly correlated observations described in this column in March 2013, "MAA Calculus Study: Good Teaching." "Progressive Teaching" refers to those practices described in the following column from April, "MAA Calculus Study: Progressive Teaching." Good Teaching is most important. In combination, Good and Progressive Teaching can significantly lower switcher rates.

Figure 1.

Our study offered students who had chosen to switch out a variety of reasons from which they could select any with which they agreed. Just over half reported that they had changed their major to a field that did not require Calculus II. A third of these students, as well as a third of all switchers, identified their experience in Calculus I as responsible for their decision. Another third of all switchers reported that they switched because they found calculus to require too much time and effort.

This observation is supported by other data from our study showing that switchers visited their instructors and tutors more often than persisters did and spent more time studying calculus. As noted above, these are students who were doing well but who decided that continuing would require more effort than they could afford.

I am concerned by these good students who find calculus simply too hard. As I documented in my column from May 2011, "The Calculus I Student," these students experienced success in high school, and an overwhelming majority had studied calculus in high school. They entered college with high levels of confidence and strong motivation. Their experience of Calculus I in college has had a profound effect on both confidence and motivation.

The solution should not be to make college calculus easier. However, we do need to find ways of mitigating the shock that hits so many students when they transition from high school to college. We need to do a better job of preparing students for the demands of college, working on both sides of the transition to equip them with the skills they need to make effective use of their time and effort.

Twenty years ago, I surveyed Calculus I students at Penn State and learned that most had no idea what it means to study mathematics. Their efforts seldom extended beyond trying to match the problems at the back of the section to the templates in the book or the examples that had been explained that day. The result was that studying mathematics had been reduced to the memorization of a large body of specific and seemingly unrelated techniques for solving a vast assortment of problems. No wonder students found it so difficult. I fear that this has not changed.

Friday, November 1, 2013

An International Comparison of Adult Numeracy

This past October, the Organization for Economic Cooperation and Development (OECD) released the first results from its survey of adult skills, OECD Skills Outlook 2013 [1]. It presents more evidence that the United States is lagging behind other economically developed nations in building a quantitatively literate workforce. A rich source of data, the report is unusual both in its focus on the numerical skills of adults, covering ages 16 through 65, and in its parallel investigations of literacy and "problem solving in technology-rich environments." Intriguingly, its data suggest that, although their numerical skills rank near the bottom, U.S. workers consider the numerical demands of their work and their ability to handle those demands to be greater than do workers in most other developed countries.

The OECD measured numerical proficiency at five levels:
1. Can perform basic calculations in common, concrete situations.
2. Can identify and act on mathematical information in a common context.
3. Can identify and act on mathematical information in an unfamiliar or complex context.
4. Can perform multi-step tasks and work with a broad range of mathematical information in unfamiliar or complex contexts.
5. Can understand complex mathematical or statistical ideas and integrate multiple types of mathematical information where interpretation is required.

As an illustration of a task at level 3 (from the Reader’s Companion to the report [2, p. 30]): In 2005, the Swedish government closed its Barsebäck nuclear power plant, which was generating 3,572 GWh (Gigawatt hours) of power per year. Given that a wind power station generates about 6,000 MWh (Megawatt hours) of power per year, that 1 MWh = 1,000,000 Wh (Watt hours), and 1 GWh = 1,000,000,000 Wh, how many wind power stations would be needed to replace the Barsebäck plant?
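
The arithmetic behind the task is a single unit conversion followed by a division; here is a worked version (my addition, not part of the OECD item):

```latex
3{,}572 \text{ GWh/year} = 3{,}572{,}000 \text{ MWh/year}, \qquad
\frac{3{,}572{,}000 \text{ MWh/year}}{6{,}000 \text{ MWh/year per station}} \approx 595.3
```

So roughly 600 wind power stations would be needed to replace the plant.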

Now the discouraging news. Only just over a third, 34.4%, of U.S. adults were capable of solving such a problem. In many OECD countries, over half the working age population was numerate at level 3 or above, including Austria (50.8%), the Czech Republic (51.9%), Finland (57.8%), Japan (62.5%), Norway (54.8%), the Slovak Republic (53.7%), and Sweden (56.6%). Germany came in just under at 49.1%. South Korea, at 41.4%, suffered from the fact that many of its older workers, especially those over 45, have skills that are far below those of younger Koreans. Other countries in which less than 40% of the population reached level 3 include Poland (38.9%), France (37.3%), and Ireland (36.4%). Only Italy (28.9%) and Spain (28.6%) came in lower than the United States. [1, Table A2.5, p. 262]

While the top 5% of U.S. adults are capable of working at level 4, the scores at the 95th percentile in the United States were well below those in most other OECD countries. The exceptions were France, Ireland, Italy, South Korea (again the unequal opportunity effect for older workers), Poland, and Spain. Only Finland had more than 2% of the adult population capable of working at level 5. In the United States, 0.7% of the adult population was capable of answering questions at level 5. [1, Table A2.8, p. 266]

The OECD data also reveal that the weakness of U.S. adults is not a recent phenomenon. The report separates numeracy skill levels by age decade: 16–24, 25–34, 35–44, 45–54, and 55–65. The United States is near the bottom of every age cohort, though it stayed above Italy and Spain and managed to climb above France and Ireland for adults 45 and older and above Poland and South Korea for adults 55 and older. [1, Table A3.2 (N), p. 272]

Given the low marks on numerical ability, it is interesting that when U.S. workers were asked whether they need to use their numeracy skills at work, the percentages were near the top of the OECD list. All of the following comparisons are for workers in the top 25% in terms of numeracy level. In the United States, 28.8% of these workers said that they need to use their numeracy skills frequently, as opposed to 28.0% in Finland, 26.7% in Germany, and only 17.7% in Japan. Only the Czech Republic at 30.0% and the Slovak Republic at 29.4% reported higher rates of frequent use of numerical skills. [1, Table A4.3, p. 303]

In addition, U.S. workers are more inclined than workers in the most numerate countries to consider their numeracy skills to overqualify them for their jobs. In the United States, 9.4% of workers considered their numeracy skills greater than the requirements of their job. In Italy, it was 12.6%; in Spain, 15.8%. In contrast, only 7.9% of the workers in Japan and 7.0% in Finland considered their numeracy skills to be greater than the demands of their job. [1, Table A4.25, p. 358] Across the OECD countries, there is a strong negative correlation between numerical ability and the perception of how well one has mastered the numerical skills required for one’s work.

That should be the most troubling aspect of this study.

 


[1] OECD (2013), OECD Skills Outlook 2013: First Results from the Survey of Adult Skills, OECD Publishing. http://dx.doi.org/10.1787/9789264204256-en

[2] OECD (2013), The Survey of Adult Skills: Reader’s Companion, OECD Publishing.


[3] The OECD countries in the survey were Australia, Austria, Czech Republic, Denmark, Estonia, Finland, France, Germany, Ireland, Italy, Japan, Korea, Netherlands, Norway, Poland, Slovak Republic, Spain, Sweden, United States, and three subnational entities: Flanders (Belgium), England (UK), and Northern Ireland (UK). Some data are also presented for Cyprus and the Russian Federation.

Tuesday, October 1, 2013

Evidence of Improved Teaching

Last December I discussed the NRC report, Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering. One of its themes is the importance of adopting “evidence-based teaching strategies.” Carefully collected quantitative evidence that particular instructional strategies for undergraduate mathematics really are more effective than traditional instruction is hard to find. I was pleased to see two articles over the past month that present such evidence for active learning strategies.

One of the articles is the long-anticipated piece by Jerry Epstein, "The Calculus Concept Inventory—Measurement of the Effect of Teaching Methodology in Mathematics," which appeared in the September 2013 Notices of the AMS [1]. Because this article is so readily available to all mathematicians, I will not say much about it. Epstein’s Calculus Concept Inventory (CCI) represents a notable advance in our ability to assess the effectiveness of different pedagogical approaches to basic calculus instruction. He presents strong evidence for the benefits of Interactive Engagement (IE) over more traditional approaches. As with the older Force Concept Inventory developed by Hestenes et al. [2], the CCI has a great deal of face validity: It measures the kinds of understandings we implicitly assume our students pick up in studying the first semester of calculus, and it makes clear how little basic conceptual understanding is absorbed under traditional pedagogical approaches. Epstein claims statistically significant improvements in conceptual understanding from the use of Interactive Engagement, stronger gains than those seen from other types of interventions, including plugging the best instructors into a traditional lecture format. Because the CCI is so easily implemented and scored, it should spur greater study of what is most effective in improving undergraduate learning of calculus.

The second paper is "Assessing Long-Term Effects of Inquiry-Based Learning: A Case Study from College Mathematics" by Marina Kogan and Sandra Laursen [3]. This was a carefully controlled study of the effects of Inquiry-Based Learning (IBL) on persistence in mathematics courses and performance in subsequent courses. They were able to compare IBL and non-IBL sections taught at the same universities during the same terms.

IE and IBL describe comparable pedagogical approaches. Richard Hake defined IE as
“… those [methods] designed at least in part to promote conceptual understanding through interactive engagement of students in heads-on (always) and hands-on (usually) activities which yield immediate feedback through discussion with peers and/or instructors.” [4]
IBL does this and also is expected to incorporate a structured curriculum that builds toward the big ideas, a component that may or may not be present in IE. For the Kogan and Laursen study, IBL was simply the label that the universities chose to apply to certain sections. Nevertheless, the study’s trained observers found significant differences between IBL and non-IBL sections. They rated IBL sections “higher for creating a supportive classroom atmosphere, eliciting student intellectual input, and providing feedback to students on their work” than non-IBL sections. IBL sections spent an average of 60% of class time on student-centered activities; in non-IBL sections the instructor talked at least 85% of the time.

Kogan and Laursen compared IBL and non-IBL sections for three courses:
  • G1, the first term of a three-term sequence covering multivariable calculus, linear algebra, and differential equations, taken either in the freshman or sophomore year;
  • L1, a sophomore/junior-level introduction to proof course; and
  • L2, an advanced junior/senior-level mathematics course with an emphasis on proofs.

For L1 and L2, students did not know in advance whether they were enrolling in IBL or non-IBL sections. The IBL section of G1 was labeled as such. In all cases, the authors took care to control for discrepancies in student preparation and ability.

IBL had the least impact on the students in the advanced course, L2. IBL students had slightly higher grades in subsequent mathematics courses (2.6 for non-IBL, 2.8 for IBL) and took slightly fewer subsequent mathematics courses (1.5 for non-IBL, 1.4 for IBL).

For the introduction to proof course, L1, IBL students again had slightly higher grades in the following term (2.8 for non-IBL, 3.0 for IBL). There were statistically significant gains (p < 0.05) from IBL in the number of subsequent courses that students took and that were required for a mathematics major, both for the overall population (0.5 for non-IBL, 0.6 for IBL) and, especially, for women (0.6 for non-IBL, 0.8 for IBL).

For L1, the sample size was large enough (1077 non-IBL, 204 IBL over seven years) to investigate persistence and subsequent performance broken down by student overall GPA, recorded as low (< 2.5), medium (2.5 to 3.4), or high (> 3.4). For the non-IBL students, differences in overall GPA were reflected in dramatic differences in their grades in subsequent mathematics courses required for the major, all statistically significant at p < 0.001. Low GPA students averaged 1.96, medium GPA students averaged 2.58, and high GPA students averaged 3.36. All three categories of IBL students performed better in subsequent required courses, but the greatest improvement was seen with the weakest students. Taking this course as IBL wiped out much of the difference between low GPA students and medium GPA students. It also decreased the difference between medium and high GPA students in subsequent required courses. For IBL students, low GPA students averaged 2.43, medium GPA students averaged 2.75, and high GPA students averaged 3.38 in subsequent required courses. See Figure 1.
Figure 1: Average grade in subsequent courses required for the major following introduction to proof class taught either as non-IBL or IBL.
While the number of subsequent courses satisfying the requirements for a mathematics major was higher for all students taking the IBL section of L1, here the greatest gain was among those with the highest GPA. For low GPA students, the number of courses was 0.50 for non-IBL and 0.51 for IBL; for medium GPA the number was 0.53 for non-IBL, 0.62 for IBL; and for high GPA the number was 0.49 for non-IBL, 0.65 for IBL. See Figure 2.
Figure 2: Average number of subsequent courses taken and required for the major following introduction to proof class taught either as non-IBL or IBL.
For the first course in the sophomore sequence, G1, IBL did have a statistically significant effect on grades in the next course in the sequence (p < 0.05). The average grade in the second course was 3.0 for non-IBL students, 3.4 for IBL students. There also was a modest gain in the number of subsequent mathematics courses that students took and that were required for the students’ majors: 1.96 courses for non-IBL students, 2.09 for IBL students.

These are the highlights of the Kogan and Laursen paper. Most striking is the very clear evidence that IBL does no harm, despite the fact that spending more time on interactive activities inevitably cuts into the amount of material that can be “covered.” In fact, it was in the course with the densest required syllabus, G1, that IBL showed the clearest gains in preparing students for the next course.

IBL is often viewed as a luxury in which we might indulge our best students. In fact, as this study demonstrates, it can have its greatest impact on those students who are most at risk.
 


[1] J. Epstein. 2013. The Calculus Concept Inventory—Measurement of the Effect of Teaching Methodology in Mathematics. Notices of the AMS 60 (8), 1018–1026. http://www.ams.org/notices/201308/rnoti-p1018.pdf

[2] D. Hestenes, M. Wells, and G. Swackhamer. 1992. Force concept inventory. Physics Teacher 30, 141–158. http://modelinginstruction.org/wp-content/uploads/2012/08/FCI-TPT.pdf

[3] M. Kogan and S. Laursen. 2013. Assessing Long-Term Effects of Inquiry-Based Learning: A Case Study from College Mathematics. Innovative Higher Education 39 (3). http://link.springer.com/article/10.1007/s10755-013-9269-9

[4] R.R. Hake. 1998. Interactive engagement versus traditional methods: A six-thousand student survey of mechanics test data for physics courses. American J. Physics 66 (1), 64–74. http://www.physics.indiana.edu/~sdi/ajpv3i.pdf

Sunday, September 1, 2013

JPBM Presentation to PCAST

On July 18, 2013, I had the pleasure of being part of a presentation from the Joint Policy Board for Mathematics (JPBM, the umbrella organization for AMS, ASA, MAA, and SIAM) to a joint meeting of the President’s Council of Advisors on Science and Technology (PCAST) and the British Prime Minister’s Council for Science and Technology. The title of the presentation was Mathematics Education: Toward 2025, and the focus was on the recent NRC report, The Mathematical Sciences in 2025 (see my column on this report from February 1, 2013). I was one of four presenters. The others were Mark Green, vice-chair of the committee that produced the report; Eric Friedlander, Past-President of AMS; and Frank Kelly, Chair of the Council for the Mathematical Sciences (the British equivalent of JPBM). A webcast of the presentations and copies of the slides are available on the White House PCAST website.

The impetus for JPBM’s request to make this presentation was PCAST’s Engage to Excel report (see my column from March 1, 2012). While there is much in this report with which the mathematical community disagrees, especially the implication that mathematicians are not engaged in trying to improve undergraduate education, it was quickly decided that a positive message would be most productive. We told the Council that we appreciate the attention they have drawn to undergraduate mathematics education, we assured them that our community is actively seeking ways to improve the teaching and learning of post-secondary mathematics, and we offered to work with PCAST as we move forward.

There was a great deal of preparation in the months leading up to the presentation. It would be impossible to overstate the importance of Jim Gates’ role in making this happen. He has been a strong friend of the mathematical community, helping to ensure that our voice is heard. It was through his efforts that the July meeting was made possible. I also must emphasize the role that David Levermore played in helping to refine our message and coordinate the preparation of our presentations. I had hoped and expected that he would be included in those making the presentation to PCAST. Unfortunately, he was cut from the list of proposed speakers.

Leaders of all four mathematical societies helped to develop our position statement, which was distributed to PCAST in advance of the meeting and is available on the web as Meeting the Challenges of Improved Post-Secondary Education in the Mathematical Sciences. It includes a substantial appendix describing many of the activities of the JPBM societies that are directed toward the improvement of undergraduate mathematics education, the provision of evidence of what works, and the encouragement of widespread adoption of approaches to teaching and learning that are known to improve student outcomes. Following is the one-page opening statement from this document, written by Eric Friedlander, David Levermore, and me, and created with extensive feedback from and ultimate endorsement by the leadership of all four societies.

MEETING THE CHALLENGES OF IMPROVED POST-SECONDARY
EDUCATION IN THE MATHEMATICAL SCIENCES
DAVID M. BRESSOUD, ERIC M. FRIEDLANDER, C. DAVID LEVERMORE

The mathematical sciences play a foundational and crosscutting role in enabling substantial advances across a broad array of fields: medicine, engineering, technology, biology, chemistry, computer science, social sciences, and others. Due to this foundational role, the delivery of excellent post-secondary mathematics education is essential to the present and future well being of our nation and its citizens.

We greatly appreciate the engagement of PCAST in the challenges of post-secondary mathematics education. A key finding of the 2012 PCAST Engage to Excel report is that mathematics education is a critical component of all undergraduate STEM degrees. We share this perspective of mathematics education as an enabler of STEM careers, provider of broad mathematics literacy, and shaper of the next generation of leaders in our increasingly technological, data-driven, and scientific society.

The report also found that current deficiencies in mathematics learning are partly driving the loss of STEM majors in the early college years. We acknowledge many of the shortcomings highlighted by the report. The wake-up call delivered by PCAST has sharpened the awareness of the mathematical sciences community of the need for intensive, broad-scale efforts to address these problems. We emphasize that efforts by a great many in the mathematical sciences community predated PCAST's report, that progress is being made, and that plans are in place to broaden these to a community-wide effort.

Our task is to encourage and help lead constructive actions that will address the difficult and varied challenges facing post-secondary education in the mathematical sciences. How should mathematics educators improve developmental education in order to enable students to aspire to STEM careers? How should mathematical scientists in colleges and universities augment their cooperative efforts with “partner disciplines” to best serve the needs of students needing basic university mathematics? How should mathematical sciences departments reshape their curricula to suit the needs of a well-educated workforce in the 21st century? How can technology be best used to serve educational needs? 

These questions must be answered in the context of a changing landscape. There are growing disparities in the preparation of incoming students. A third of all undergraduate mathematics students are enrolled in precollege level mathematics. At the other extreme, almost 700,000 high school students in the US completed a course of calculus this past year. The mathematical sciences themselves are changing as the needs of big data and the challenges of modeling complex systems reveal the limits of traditional curricula.


The NRC report The Mathematical Sciences in 2025 eloquently describes the opportunities and challenges of this shifting landscape. This report should serve as a springboard for initiatives in mathematics education that more closely intertwine the learning of mathematics with the appreciation of its applications. However, the mathematical community alone cannot bring about the scale of changes called for in Engage to Excel. Building on all the activities in mathematics education underway or that have arisen as a result of the PCAST report, we ask for PCAST’s help in promoting greater awareness, collaboration, and cooperation among all of the scientific disciplines who are working to prepare the STEM workforce of the future.

Thursday, August 1, 2013

MAA Calculus Study: Effects of Calculus in High School

This month I return to the MAA Calculus Study, Characteristics of Successful Programs in College Calculus, with a report on what we learned about the effects of taking calculus in high school. Because this study only looked at students in Calculus I, we can say nothing about how many of these students never take another calculus class or how many start their college mathematics with Calculus II or higher. Since the survey was conducted in the fall term, we cannot even say anything about students who might postpone taking Calculus I until later in the academic year. But, thanks to this survey, we can say a lot about the students who study calculus in high school and then begin with Calculus I in their first term at college.

First of all, we have a good idea of how many students this involves. Of the 300,000 who enrolled in Calculus I in fall 2010, just over half had studied calculus in high school. That represents just about one quarter of the 600,000 or so students who had studied calculus in high school the previous year. Of those who had taken the AP Calculus exam that spring, just over a quarter were in Calculus I that fall. From the latest CBMS report [1], about 55,000 incoming freshmen arrived with credit for Calculus I. This leaves almost two-thirds of those who studied calculus in high school neither acquiring college credit for their high school work nor enrolling in Calculus I in a fall term.
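
Rounding to the figures just quoted, the bookkeeping behind that "almost two-thirds" runs as follows (my arithmetic, treating the two subtracted groups as disjoint):

```latex
600{,}000 - \underbrace{150{,}000}_{\text{enrolled in Calculus I}} - \underbrace{55{,}000}_{\text{credit for Calculus I}} = 395{,}000 \approx \tfrac{2}{3} \times 600{,}000
```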

We can break these data down further by score on the AP exam:
  • Half of those who scored a 1 or 2 on the BC exam or a 3 on the AB exam enrolled in Calculus I in the fall.
  • A third of those who scored a 3 on the BC exam enrolled in Calculus I.
  • A quarter of those who scored 1, 2, or 4 on the AB exam enrolled in Calculus I.
  • 1 in 8 of those who scored a 5 on AB or a 4 on BC enrolled in Calculus I.
  • 1 in 20 of those who scored a 5 on BC enrolled in Calculus I.
  • A quarter of those who scored a 3 or higher on the AB or BC exam received college credit for Calculus I.
Of those who arrived with credit for Calculus I, we do not know how many used it to place into a higher calculus class and how many never studied any further calculus in college.

At the research universities, those characterized by offering a doctorate in mathematics and dominated by the flagship state universities, over 70% of the students in Calculus I had completed a course of calculus in high school. We have data for about 5,000 of these students and so can report fairly accurately on the effect of studying calculus in high school on student performance in Calculus I in college. As shown in Figure 1, less than a third studied no calculus in high school, less than a third studied calculus but did not take the AP Calculus exam, a third took the AB exam, and a small but significant percentage (8%) took the BC exam.
 
Figure 1: Distribution of high school calculus experience among Calculus I students at research universities.
There is a common perception among students that having studied calculus in high school confers a significant advantage in Calculus I in college, and our data tend to confirm it. Figure 2 compares the final grades of the total student population at research universities with three subsets: those who did not study calculus in high school, those who did but did not take the AP Calculus exam, and those who did and took the AP Calculus exam. We see a much lower percentage of A’s and a much higher percentage of DFW’s (grades of D or F or a withdrawal from the course) among students who did not study calculus in high school as opposed to those who did. There is little difference in Calculus I grades between those who did and those who did not take the AP Calculus exam.

Figure 2: Final grades of Calculus I students in research universities by experience with high school calculus. 
If we separate the grades of those who took the AP Calculus exam by their performance on this exam, we see that it makes a large difference in their final grade (Figure 3). Students who score a 1 or 2 on the AP exam are comparable to students who did not study calculus in high school, although there is a slightly lower probability of receiving a D or F or withdrawing from the course. Students with a 3 on the AB exam are comparable to the average student who took calculus in high school and did not take the AP Calculus exam, but with a somewhat higher probability of earning a B. Not surprisingly, students who earned a 4 or higher on the AB exam or a 3 or higher on the BC exam did very well in Calculus I: 45% received an A, over two thirds at least a B. It is interesting that even among these students, roughly a quarter received a D or F or withdrew from the course. In fact, the rate of DFW is remarkably consistent across all levels of preparation, suggesting that the decision to stop working or to withdraw from the course is one aspect of course performance that has little to do with high school preparation.

Figure 3: Final grades of Calculus I students in research universities by performance on AP Calculus exam.


The MAA national study of calculus, Characteristics of Successful Programs in College Calculus, is funded by NSF grant no. 0910240. The opinions expressed in this column do not necessarily reflect those of the National Science Foundation.

Monday, July 1, 2013

Measuring Teacher Quality

A perennial issue at every college and university is how to measure teacher quality. It is important because it directly influences decisions about retention, tenure, and promotion. Everyone complains about basing such decisions on end-of-course evaluations. This column will explore a recent study by Scott Carrell and James West [1], undertaken at the United States Air Force Academy (USAFA), that strongly suggests that such evaluations are even less useful than commonly believed and that the greatest long-term learning does not come from those instructors who receive the strongest evaluations at the end of the class.

The study authors chose to measure teacher effectiveness in Calculus I by examining value added in both Calculus I and Calculus II: comparing student performance on common course assessments for each instructor after controlling for differences in student preparation and background, including prior academic record, SAT verbal and math scores, sex, and race and ethnicity. It is generally acknowledged that better teachers produce better results in their students, but this has only been extensively studied with elementary school students, and even there it is not without its problems. The authors reference a 2010 study by Rothstein [2] that shows a strong positive correlation between the quality of fifth grade teachers and student performance on assessments taken in fourth grade, suggesting a significant selection bias: the best students end up with the best teachers. This may be even truer at the university level, where students have much more control over whom they take a class with. For this reason, Carrell and West were very careful to measure the comparability of the classes. At USAFA, everyone takes Calculus I, and there is little personal choice in which section to take, so such selection bias is less likely to occur. The authors also tested for and found no evidence of backward correlation: the best Calculus II instructors were not correlated with higher grades in Calculus I.
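
To make the value-added idea concrete, here is a minimal sketch of this kind of regression. It is not Carrell and West’s actual specification, and every column name is hypothetical; the point is only the structure: instructor fixed effects plus the student controls named above.

```python
# Minimal sketch of a value-added regression in the spirit of Carrell & West.
# All column names (exam_score, instructor, section, ...) are hypothetical,
# and the authors' actual model is considerably richer.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("usafa_calc1.csv")  # hypothetical file: one row per student

# Regress the common-exam score on instructor fixed effects plus the
# student background controls named in the text.
model = smf.ols(
    "exam_score ~ C(instructor) + sat_math + sat_verbal"
    " + C(sex) + C(race) + acad_composite",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["section"]})

# The coefficients on the instructor dummies estimate each instructor's
# "value added": how that instructor's students perform on the common
# exam after preparation and background are controlled for.
value_added = model.params.filter(like="C(instructor)")
print(value_added.sort_values(ascending=False).head())
```

The estimated instructor coefficients, rather than raw section averages, are what get compared, so an instructor who happens to draw weaker students is not penalized for it.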

The authors had a large sample size with which to work, all of the students who took Calculus I from fall 2000 through spring 2007, over 10,000 students and 91 instructors. The faculty make-up at USAFA is unusual among post-secondary institutions. There is a small core of permanent faculty. Only 15% of Calculus I instructors held the rank of Associate or Full Professor, and only 31% held a doctorate in mathematics or a mathematical science. Most of the teaching is done by officers who hold a master’s degree and are doing a rotation through USAFA. The average number of years of teaching experience among all Calculus I instructors was less than four years. Because of this, there is tight control on these courses, which facilitates a careful statistical study. There are common syllabi and examinations. All instructors get to see the examinations before they are given so that there is opportunity, if an instructor so wishes, to “teach to the test,” emphasizing those parts of the curriculum that are known to be important for the assessment.

Positive responses to the following prompts all had positive influence on student performance in Calculus I, significant at the 0.05 level:

  1. Instructor’s ability to provide clear, well-organized instruction.
  2. Value of questions and problems raised by instructor.
  3. Instructor’s knowledge of course material.
  4. The course as a whole.
  5. Amount you learned in the course.
  6. The instructor’s effectiveness in facilitating my learning in the course.
Surprisingly, these evaluations of the Calculus I instructor all had a negative impact on student performance in Calculus II, with responses 1, 2, and 6 significant at the 0.05 level.

On the other hand, faculty rank, highest degree, and years of teaching experience were negatively correlated with examination performance in Calculus I, but positively correlated with performance in Calculus II, with statistical significance for years of teaching experience for both the negative impact in Calculus I and the positive impact in Calculus II.

The suggested implication is that less experienced instructors tend to focus on the particular skills and abilities needed to succeed in the next assessment and that students like that approach. Experienced instructors may pay more attention to the foundational knowledge that will serve the student in subsequent courses, and students appear to be less immediately appreciative of what these instructors are able to bring to the class.

This study strongly suggests that end-of-course student evaluations are, at best, an incomplete measure of an instructor’s effectiveness. It also suggests a long-term weakness of simply preparing students for their next assessment, though it should be emphasized that this is merely a guess as to why less experienced instructors appear to get better performance from their students on Calculus I assessments.

At Macalester College, we recognize the importance of student reflection on what they learned months or years earlier. When a promotion or tenure case comes to the Personnel Committee, we collect online student evaluations of that faculty member from all of the students who have taken a course with him or her over roughly the past five years, combining both recent and current appraisals with longer term assessments of the effect that instructor has had.

References:

[1] Scott E. Carrell and James E. West. 2010. “Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors.” Journal of Political Economy 118 (3): 409–432. Available at http://www.nber.org/papers/w14081

[2] Jesse Rothstein. 2010. “Teacher Quality in Educational Production: Tracking, Decay, and Student Achievement.” Quarterly Journal of Economics 125 (1): 175–214. Available at http://gsppi.berkeley.edu/faculty/jrothstein/published/rothstein_vam_may152009.pdf

Saturday, June 1, 2013

Who Needs Algebra II?

In May this year, the National Center on Education and the Economy (NCEE) released its report, "What Does It Really Mean to Be College and Work Ready?" [1]. The report is in two parts: Mathematics and English Literacy. It is based on a national study of the proficiencies required and actually used for the most popular associate’s degree programs at two-year colleges. Just a few weeks earlier, Jordan Weissman published a piece in The Atlantic, "Here’s How Little Math Americans Actually Use at Work" [2]. That was based on a 2010 report written by Michael Handel at Northeastern University, "What Do People Do at Work? A Profile of U.S. Jobs from the Survey of Workplace Skills, Technology, and Management Practices" [3]. This large-scale survey includes an assessment of what mathematics is actually used in the workplace.

I hope it will come as a surprise to no one that not everyone actually uses the content of Algebra II at work, or that the College Algebra taught in two-year colleges is essentially high school Algebra II. Weissman highlights Handel’s finding that fewer than a quarter of all workers use any mathematics more advanced than fractions, ratios, and percentages. He raises the question whether requiring Algebra II for high school graduation places an unnecessary roadblock in the way of too many students. NCEE poses more nuanced questions: Why are so many students being hurried through the critical early mathematics that they will need to be work and college ready, especially fractions, ratios, and percentages, just so that they can get to Algebra II? Isn’t there a better way to prepare them for what they will need?

I will begin with the data. Handel divided the workforce into five categories:

  • Upper White Collar (management, professional, technical occupations)
  • Lower White Collar (clerical, sales)
  • Upper Blue Collar (craft and repair workers, construction trades, mechanics)
  • Lower Blue Collar (factory workers, truck drivers)
  • Service (food service, home health care, child care, janitors)
Generally, the best paying and most desirable jobs are Upper White Collar (UWC) and Upper Blue Collar (UBC). We should be equipping our students so that they can aspire to such jobs. It’s still not true that everyone needs Algebra II, but 35% of UWC workers reported using basic algebra, geometry, and/or statistics in their work. This level of mathematics is even more important for UBC workers, with 41% reporting using mathematics at the level of basic algebra, geometry, and/or statistics. This is still not Algebra II (which Handel lists as “complex Algebra” as opposed to “basic Algebra”), which was reported being used by 14% of UWC workers and 16% of UBC workers. Much less is it Calculus, which was reported by 8% of both UWC and UBC workers. But about 40% of those working in UWC or UBC jobs need a working knowledge of some high school mathematics, a higher bar than simply having passed the relevant courses. It is interesting to observe that UBC workers are more likely to use mathematics than UWC workers.

The NCEE report looked at the mathematics required for the nine most popular associate’s degree programs at two-year colleges: Accounting, Automotive Technology, Biotech/Electrical Technology, Business, Computer Programming, Criminal Justice, Early Childhood Education, Information Technology, and Nursing, as well as the General Track. This ties nicely to the Handel study because the nine are generally seen as preparation for UWC or UBC careers. NCEE selected seven two-year colleges in seven states and examined the texts, assignments, and exams in the introductory courses for these disciplines as well as for the mathematics courses required for these fields. There were three notable insights:

First, except for some work on geometric visualization, NCEE found no content in either College Algebra or Statistics, two college-credit bearing courses, that goes beyond the high school curriculum described in the Common Core State Standards in Mathematics (CCSS-M). They found that College Algebra had a large component of middle school topics, especially CCSS-M for grades 6–8 in Expressions and Equations, Functions, Number Systems, Geometry, and Ratios and Proportions. Statistics was a mix of CCSS-M middle and high school level statistics, with a significant component of grades 6–8 Ratios and Proportions and Expressions and Equations.

Second, the introductory textbooks in the disciplinary fields used nothing beyond Algebra I. Ratios and proportions are important, as is interpreting quantitative relationships expressed in tables, graphs, and formulae, but, as the report says,
When mathematics is present in the texts, equations are not solved, quadratics are absent, and functions are present but not named or analyzed, just treated as formulae. […] Students do not have to perform algebraic manipulations nor construct graphs or tables. […] The area of high school content with the highest representation in the texts, Number Systems, is found in six percent of the text chapters. [p. 16]
Third, the mathematical knowledge that was tested in these introductory courses in the disciplinary fields was far lower than what was in the textbooks. Not only was there nothing requiring Algebra II on the exams, the NCEE team could find nothing, or almost nothing, that reflected knowledge of Algebra I. Furthermore, the questions that were asked on examinations were of low difficulty. The NCEE team used the PISA (Program for International Student Assessment) Item-Difficulty Coding Framework with four levels. Examples of what is expected at each level include

  • Level 0: perform simple calculations and make direct inferences;
  • Level 1: use simple functional relationships and formal mathematical symbols, interpret models;
  • Level 2: use multiple relationships, manipulate mathematical symbols, modify existing models; and
  • Level 3: solve multi-step applications of formal procedures, evaluate arguments, create models.
The team found that over 60% of the mathematical questions on the examinations given in introductory courses were at Level 0. Few rose to Level 2, much less Level 3. (This was not the case in College Algebra and Statistics where most of the examination items were at Level 1 or 2 and some attained Level 3. This suggests that even though the material of College Algebra and Statistics does not go beyond topics covered in CCSS-M, the level of expected proficiency may be higher than what is typically encountered in high school.)

NCEE did find three mathematical topics required for the introductory courses that are covered neither in CCSS-M nor in the College Algebra or Statistics classes: complex applications of measurements, schematic diagrams (2-D schematics of 3-D objects and flow charts), and geometric visualization. They also found a much greater demand for knowledge of statistics, probability, and modeling (“how to frame a real-world problem in mathematical terms”) than is commonly taught in most mainstream high school mathematics programs today.

What makes the NCEE report even more depressing is that it restricted its attention to college-credit bearing courses. Most of the mathematics taught at two-year colleges is below the level of College Algebra (see Figure 1). The mathematical requirements for UWC and UBC jobs may not be high, but we do not seem to be doing a very good job of preparing students even for what they will need.

Figure 1. Fall term Mathematics course enrollments (thousands). “Introductory” includes College Algebra, Trigonometry, and Precalculus.
Source: CBMS.

All of this raises serious questions about whether Algebra II should be expected of all graduating high school students. It parallels the situation that has been my primary concern: Should Calculus be expected of all graduating high school students who are going directly into a four-year undergraduate program, especially those who may need to take Calculus in college? I would far prefer a student who can operate at PISA Level 3 in Algebra I over a student who cannot handle problems above Level 1 in Algebra II. I would prefer Level 3 in Precalculus over Level 1 in Calculus. When students are short-changed in their mathematical preparation simply so that Algebra II or Calculus appears on the high school transcript, with little regard to what that actually means, then neither they nor society as a whole is well served.

It also raises questions about what mathematics should be required for an associate’s degree. College Algebra constitutes a significant hurdle for most two-year college students. Should there be alternatives? In this case, I believe that most two-year college students would be better served with a program that combines demanding use of the topics of Algebra I with a college-level introduction to Statistics.

We are not at the point where we can demand Algebra II for high school graduation. To do so would either create unacceptable rates of high school failure or force us to change what we mean by “understanding Algebra II.” But I worry that if we simply lower our sights and decide that, since few of our students actually will use anything from Algebra II once they have graduated, it should not be expected for graduation, then that will actually weaken the preparation that occurs in the earlier grades. Elementary and middle school mathematics should be laying the foundation for a student to succeed in Algebra II. If we want our students to have a strong working knowledge of the high school mathematics that is needed for 40% of the UWC and UBC jobs, then we want them to have the mathematical preparation that would enable them to succeed in Algebra II.

References:

[1] National Center on Education and the Economy. 2013. What does it really mean to be college and work ready? The mathematics requirements of first year community college students. Washington, DC. Available at http://www.ncee.org/college-and-work-ready/

[2] Jordan Weissman. April 24, 2013. Here’s how little Math Americans actually use at work. The Atlantic. Available at http://www.theatlantic.com/business/archive/2013/04/heres-how-little-math-americans-actually-use-at-work/275260/

[3] Michael Handel. 2010. What do people do at work? A Profile of U.S. jobs from the Survey of Workplace Skills, Technology, and Management Practices. OECD (forthcoming). Available at http://www.northeastern.edu/socant/?page_id=366

Wednesday, May 1, 2013

MAA Calculus Study: Graphing Calculators and CAS


This column continues my report on results of the MAA National Study of Calculus I, Characteristics of Successful Programs in College Calculus. This month I am sharing what we learned about the use of graphing calculators (with or without computer algebra systems) and computer software such as Maple or Mathematica. Our results draw on three of the surveys:
  • Student survey at start of term: We asked students how calculators and/or computer algebra systems (CAS) were used in their last high school mathematics class and how comfortable they are in using these technologies.
  • Student survey at end of term: We asked students how calculators or CAS had been used both in class and for out of class assignments.
  • Instructor survey at start of term: We asked instructors what technologies would be allowed on examinations and which would be required on examinations.
Our first question asked students how calculators were used on exams in their last high school mathematics class (see Figure 1). As in previous columns, “research” refers to the responses of students taking Calculus I at research universities (highest degree in mathematics is doctorate), “undergrad” refers to undergraduate colleges (highest degree is bachelor’s), “masters” to masters universities (highest degree is masters), and “two-year” to two-year colleges (highest degree is associate’s).

  Figure 1. GC = graphing calculator. CAS = graphing calculator with computer algebra system capabilities (e.g. TI-89 or TI-92).
There are several interesting observations to be made from this graph. First, not surprisingly, almost all Calculus I students reported having used graphing calculators on their exams at least some of the time (“always” and “sometimes” were mutually exclusive options). Second, there is a difference by type of institution. Students at undergraduate colleges were most likely to have used graphing calculators on high school exams (94%), then those at research universities (91%), then masters universities (86%), and finally two-year colleges (77%). The differences are small but statistically significant; my best guess is that they reflect the economic backgrounds of these students. Third, for most students, access to a graphing calculator was not always allowed. Even so, it remains common practice in high schools (covering roughly one-third of all students) to always allow graphing calculators on mathematics exams.
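
For readers who want to see what "small but statistically significant" means operationally, here is a minimal two-proportion z-test in the spirit of these comparisons. The respondent counts are invented for illustration; the actual survey cell sizes differ.

```python
# Hypothetical illustration of why a 3-point gap can be "small but
# statistically significant": a two-proportion z-test. The respondent
# counts below are invented, not the study's actual sample sizes.
from statsmodels.stats.proportion import proportions_ztest

# e.g., 94% of undergraduate-college students vs. 91% of
# research-university students reporting graphing calculator use
# on their high school exams.
yes_counts = [940, 3640]    # hypothetical "yes" responses
n_students = [1000, 4000]   # hypothetical numbers of respondents

z_stat, p_value = proportions_ztest(yes_counts, n_students)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# With samples of this size, even a three-point difference in
# proportions is significant at the 0.05 level.
```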

Another striking observation from Figure 1 is that the percentage of students who were always allowed to use graphing calculators on exams is almost identical to the percentage of students who were always allowed to use graphing calculators with CAS capabilities on exams. For all categories of students, over half of them were allowed to use graphing calculators with CAS capabilities at least some of the time, which suggests that over half of the students in college Calculus I own or have had access to such calculators.

The next graph (Figure 2) shows how students at the start of the term reported their comfort level with using graphing calculators or computer algebra systems (Maple and Mathematica were provided as examples of what we meant). The most interesting feature of this graph is that students at two-year colleges are much more likely to be comfortable with Maple or Mathematica than those at four-year programs. I suspect that the reason behind this is that most Calculus I students at two-year colleges are sophomores who took pre-calculus at that college the year before. This gave them more opportunity to experience these computer algebra systems.

Figure 2. Student attitude toward use of graphing calculator or CAS on a computer such as Maple or Mathematica.
The graphs in Figures 3–5 show what students reported at the end of the term about use of technology. For the graph in Figure 3, students were asked how frequently each of these occurred in class. Percentage shows the fraction of students who responded “about half the class sessions,” “most class sessions,” or “every class session.” We note large differences in instructor use of technology generally (for this question, “technology” was not defined), and especially sharp differences for instructor use of graphing calculators or CAS (with Maple and Mathematica given as examples). It is interesting that students are most likely to encounter computer algebra systems in undergraduate and two-year colleges, much less likely in masters and research universities.

 Figure 3. End of term student reports on frequency of use of technology (at least once/month). For this question, CAS refers to a computer algebra system on a computer, such as Maple or Mathematica.
The first two sets of bars in Figure 4 show student responses to “Does your calculator find the symbolic derivative of a function?” The first set gives the percentage responding “N/A, I do not use a calculator.” The second set displays the percentage responding “yes.” Looking at the complement of these two responses, we see that across all types of institutions, roughly 50% of students taking Calculus I own a graphing calculator without CAS capabilities. The third set records the percentage responding “yes” to the question, “Were you allowed to use a graphing calculator during your exams?” Note that there are some discrepancies between what students and instructors report about allowing graphing calculators on exams (Figures 4 and 6), but the basic pattern that graphing calculators are allowed far less frequently at research universities than at other types of institutions is consistently demonstrated.

 Figure 4. End of term student reports on calculator use. No calculator = do not use a calculator. Calculator with CAS = use a calculator with CAS capabilities. Calc allowed on exams = graphing calculators were allowed on exams.
We also asked how often “The assignments completed outside of class time required that I use technology to understand ideas.” Again, we see much less use of technology at research universities, the greatest use at undergraduate and two-year colleges.

 Figure 5. Frequency with which technology (either graphing calculators or computers) was used for out of class assignments. Almost never = less than once per month (includes never). Sometimes = at least once per month but less than once per week. Often = at least once per week.
The last two graphs (Figures 6 and 7) are taken from the instructor responses at the start of the term: what technology they would allow on their exams and what technology they would require on their exams. Again, we see a clear indication that technology, especially the use of graphing calculators without CAS capabilities, is much less common at research universities than other types of institutions.

It is interesting to observe that there are large numbers of instructors who allow but do not require technology on the exams. At research universities, 26% require the use of some kind of technology, and a further 25% allow but do not require the use of some sort of technology. For undergraduate colleges, 38% of instructors require technology, an additional 42% allow it. At masters universities, 42% require, and a further 33% allow. At two-year colleges, 52% require, and an additional 36% allow.
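
Adding each pair gives the share of instructors who at least allow some form of technology on their exams (my arithmetic on the percentages just quoted):

```latex
\text{research: } 26 + 25 = 51\%, \quad \text{undergraduate: } 38 + 42 = 80\%,
\text{masters: } 42 + 33 = 75\%, \quad \text{two-year: } 52 + 36 = 88\%
```

Even at research universities, then, just over half of instructors permit some form of technology on at least part of an exam, consistent with the closing observation below.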

 Figure 6. Start of term report by instructor of intended use of technology on exams. GC = graphing calculator. Most of those who checked “other” reported that they allowed graphing calculators on some but not all parts of the exam. Some reported allowing only scientific calculators.

Figure 7. Start of term report by instructor of intended use of technology on exams. GC = graphing calculator. Most of those who checked “other” reported that they required graphing calculators on some but not all parts of the exam. Some reported requiring only scientific calculators.
We see a pattern of very heavy use of graphing calculators in high schools, driven, no doubt, by the fact that students are expected to use them for certain sections of the Advanced Placement Calculus exams. They are still the dominant technology at colleges and universities, but there the use is as likely to be voluntary as required. This implies that in many colleges and universities questions and assignments are posed in such a way that graphing calculators confer little or no advantage. The use of graphing calculators at the post-secondary level varies tremendously by type of institution. Yet even at the research universities, over half the instructors allow the use of graphing calculators for at least some portions of their exams. 

The MAA national study of calculus, Characteristics of Successful Programs in College Calculus, is funded by NSF grant no. 0910240. The opinions expressed in this column do not necessarily reflect those of the National Science Foundation.