Assessment, a Mouse, and a Cookie  

By Nicole Lesher, PhD, VP of Institutional Effectiveness 

[Editor’s note: This article was originally published in AALHE Intersection, Summer 2016, pages 15–17.]

You may be familiar with the contemporary classic children’s book by Laura Numeroff, If You Give a Mouse a Cookie. This delightful, brilliantly illustrated book tells the tale of a little boy who gives a mouse a cookie. Eating the cookie leads the mouse to want a glass of milk, which leads to asking for a straw, and so on, until the ever-so-energized mouse eventually circles back to wanting a cookie again, while the boy is left exhausted. Over years of doing assessment for my university, I have often thought of this story. Assessment for us, just as in the story, has turned out to be quite circular, typically taking us right back to where we started.

Our university assessment program includes the use of faculty-developed, course-embedded signature assignments to assess program learning outcome competencies. These assignments are evaluated using four-point, analytic rubrics (modified AAC&U VALUE rubrics developed by faculty). Curriculum maps have been developed for each program to identify where competencies are introduced, reinforced, and mastered. Courses are selected at the beginning and at the end of each program to allow for both formative and summative assessment. Assignments or artifacts are embedded in these courses to gather data to assess learning outcomes. Throughout each program, faculty are tasked with assessing artifacts within their courses. Assessment data are collected across all programs and evaluated in accordance with our assessment calendar.

An assessment team made up of faculty from each program met to assess the quantitative reasoning core competency in undergraduate programs. We assembled artifacts (with student names redacted) and assessed them ourselves – the “cookie.” While faculty had already collected and assessed artifacts across programs, participating in the actual assessment and using the rubric was an excellent way to immerse the team in the process. The exercise was initially created to expose the team to the rubric and the types of assignments to be assessed, but it instead became an integral part of the assessment process.

The AAC&U rubric was developed over many months by teams of faculty and other educational professionals from over a hundred higher education institutions. Even so, the quantitative reasoning results were surprising: a number of artifacts could not be adequately assessed with the quantitative reasoning rubric. In these cases, the assignments either did not effectively measure the outcome or the instructions needed modification to enable sufficient measurement. For example, one assignment required students to examine the termination of marriage, including divorce, annulment, and issues of child custody and support. Students were also required to evaluate the division and allocation of property as well as the tax implications of a divorce, and to examine miscellaneous but important issues tied to the practice of family law, including the legal rights of women and the status of children, as well as the interplay between torts and other aspects of law. Having three parts made the assignment extremely complex for assessment purposes. Students did need to apply rules to a factual scenario leading to a conclusion, and they were required to use math skills and to communicate quantitative information. However, the rubric is designed to assign ratings to the traits Interpretation, Representation, Calculation, Application/Analysis, and Communication. Assessing the assignment was difficult, since the rubric required the use of data presented in graphs and/or the presentation of mathematical information, and the assignment called for neither.

In this assignment, mathematical results were used only minimally. The focus of the assignment was on applying the rule, and while quantitative data were used in the question, it did not call for real interpretation or analysis of data. Therefore, there was a mismatch between what the assignment was asking for and what we needed to measure. In another assignment, students were instructed to research a specific aspect of social psychology and write an in-depth explanation of the perspective and how it affects groups and individuals. However, the actual student work did not provide sufficient quantitative evidence to support the student’s thesis, and the data used did not connect strongly to the purpose of the work. With more specific instructions about what is being measured, this assignment could have been used to measure quantitative reasoning.