Design and implement assessments and assignments that reward deep approaches to learning
A number of phrases are used to describe deep learning, such as transfer, critical thinking and problem solving. Transfer occurs when knowledge is not just retained, but learned in a meaningful way (Anderson et al., 2001). When designing assignments, particularly the final project of a continuous assessment module, I want the learning to be meaningful – the students should have engaged with it, integrated different concepts into a whole, and have an achievement that can be talked about with confidence during a job interview. With that in mind, I have a number of approaches to outline for designing and managing assessments.
Chickering and Gamson (1987) include among their seven principles for good practice in undergraduate education that high expectations be communicated. If one does not set the bar high, then the most strategic of students (as Biggs or Entwistle would call them) may assume that a low bar will maximise their chances of doing well overall in terms of effort versus reward. One way of communicating high expectations is via a rubric. By distinguishing between novice or unacceptable efforts and exemplary ones, students will have a clearer idea of how high the bar is set. Of course, a rubric will also assist the strategic learner, because such a learner focuses on time management, is alert to assessment and intends to achieve high grades – the ASSIST inventory (Entwistle, 2000) clearly illustrates this.
I have been using rubrics for a number of years. Under pressure, I’ve simply used them as a grading tool, but increasingly I’ve found them very useful as a way to get students thinking more deeply about how to approach an assessment, so in that sense I see them as another tool to promote deep approaches to learning (as per Marton and Säljö, 1976, for example). For the criterion of “Scalable Architecture” I have levels of achievement ranging from poor (“Badly designed application lacking a layered architecture; it would be difficult to maintain and would require a lot of refactoring to enable reuse of components.”) to proficient (“Excellent effort at designing an architecture that allows for component reuse; it should be easy to maintain for years to come.”). If I simply stated “Scalable Architecture = 15%” and left it at that, some students would fail to think more deeply about the criterion and hope for the best. Key words and phrases such as “component reuse” and “easy to maintain” force the students to consider some of the fundamental principles of software engineering, such as loose coupling and layered architectures, and applying these demonstrates a deeper understanding. Gagné et al. (1992) would consider these examples of “higher-order rules” that are fundamental to problem solving.
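To make concrete what the rubric’s key phrases are pointing at, here is a minimal sketch (my own illustration, not material from the module or its repository) contrasting a tightly coupled design with a loosely coupled, reusable one:

```python
# Illustrative sketch only: hypothetical classes showing the "poor" and
# "proficient" ends of the Scalable Architecture criterion.
from typing import Protocol


# Tightly coupled: the report constructs its own data source, so it cannot
# be reused with a different source or tested in isolation.
class HardwiredReport:
    def render(self) -> str:
        rows = MySqlDatabase().query("SELECT ...")  # hypothetical, hardwired dependency
        return "\n".join(str(r) for r in rows)


# Loosely coupled: the data source sits behind a small interface, so the
# report layer is reusable with any source (database, file, test mock).
class RowSource(Protocol):
    def rows(self) -> list[dict]: ...


class Report:
    def __init__(self, source: RowSource) -> None:
        self.source = source

    def render(self) -> str:
        return "\n".join(str(r) for r in self.source.rows())


class InMemorySource:
    """A stand-in source, e.g. for unit tests."""

    def __init__(self, data: list[dict]) -> None:
        self.data = data

    def rows(self) -> list[dict]:
        return self.data


print(Report(InMemorySource([{"id": 1}])).render())
```

The second design is the one the rubric’s “component reuse” and “easy to maintain” wording is meant to steer students towards: the report layer never names a concrete data source, so swapping or mocking one requires no refactoring.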
Exemplars are helpful in conjunction with rubrics. When one has a criterion and a grade such as exemplary, it is useful to point to examples that illustrate, say, what exemplary code looks like. I provide a significant amount of code in an online GitHub repository, which I have been improving and adding to over the years, as good examples to be measured against. This is suggested by Andrade (2000); however, she also suggests that examples of poor-quality work could be supplied to illustrate what constitutes poor, developing or novice efforts. This is something I rarely do and should consider incorporating. Andrade also suggests that rubrics promote the development of understanding and good thinking; for example, rather than just thinking about how to do something, students are also encouraged to think about how not to do something.
Chickering and Gamson also suggest that good practice in undergraduate education includes the development of reciprocity and cooperation among students. Johnson, Johnson and Smith (1998) describe five basic elements of cooperative learning:

- positive interdependence: the individuals in the group will either succeed together or fail together, so the best approach is to try to succeed by working together;
- individual accountability: as much as is practical, the activity is designed to avoid team members piggybacking off each other;
- face-to-face promotive interaction: small groups of perhaps 2 to 4 have meaningful face-to-face interaction, helping each other to solve problems, for example;
- use of teamwork skills: teamwork skills must be taught – leadership, decision-making, communication, conflict management, and so on;
- group processing: students assess how they are working as a team to achieve group and individual goals, maintaining effective working relationships.
Each of those five elements has been employed to address what has been a challenge with group projects over the past number of years. Typical issues include weak students being rewarded with the overall mark for a project, and strong students carrying a team yet falling short of their potential mark. With the group project worth 60% of the 10-credit module, I broke the marks down into three parts. The first, worth 10%, was project initiation, where the group of 4 or 5 students find their feet and put a rough plan, an architecture, etc. in place. I scaffolded this with what is called the Inception Deck, a set of 10 PowerPoint slides covering things like the elevator pitch, mission, risks, scope, etc. This helped to emphasise early on that positive interdependence meant they had to work as an effective team to overcome the risks and complete the mission of the project. As long as everyone took part in that initiation, everyone got the same mark out of 10%. The remaining 50% was split in two: 25% shared, based on my marking of the project in its own right, and 25% individual, made up of 15% for contribution, using self- and peer-assessment forms (this addresses individual accountability), and 10% for a short reflection. To support the reflection and the self- and peer-assessments, I asked the students to keep a private journal and to reference it in their forms and reflection (rather than me having to read all their lengthy journals). My hope is that each student knows a) that they cannot hide, because they will lose marks by not contributing, and b) that the mark they get is a fair one, aligned to a large extent with their efforts. I gave the students a hypothetical scenario in which a project scored 64% and four team members made different contributions and reflected with varying degrees of success, and I explained how I calculated the final mark for each student, which ranged from 46% to 74%.
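A breakdown like this can be sketched in a few lines. The 10/25/15/10 split is from the scheme above, but the exact way contribution and reflection are scaled here is my assumption, not the formula used in the module:

```python
# Hypothetical mark calculation: the 10/25/15/10 split is as described,
# but the scaling of contribution and reflection is an assumption.

def student_mark(project_pct: float, took_part_in_initiation: bool,
                 contribution_factor: float, reflection_pct: float) -> float:
    """Return one student's group-project mark out of 60.

    project_pct             the group's shared project mark, 0-100 (e.g. 64)
    took_part_in_initiation everyone who took part gets the full 10%
    contribution_factor     peer/self-assessed contribution, 1.0 = full share
    reflection_pct          mark for the individual reflection, 0-100
    """
    initiation = 10.0 if took_part_in_initiation else 0.0  # 10% shared phase
    shared = 25.0 * project_pct / 100.0                    # 25% shared project mark
    contribution = 15.0 * min(contribution_factor, 1.0)    # 15% individual contribution
    reflection = 10.0 * reflection_pct / 100.0             # 10% individual reflection
    return initiation + shared + contribution + reflection

# Three students on a project marked 64%: differing contribution and
# reflection spread the final marks, so no one can hide behind the team.
for name, contrib, refl in [("A", 1.0, 90), ("B", 0.9, 70), ("C", 0.5, 40)]:
    print(name, round(student_mark(64, True, contrib, refl), 1))
```

The point of the sketch is structural rather than numerical: the shared components reward the team outcome, while the individual components let marks diverge with effort, which is how a single 64% project can yield a spread of final marks.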
Formative feedback can be very useful for keeping students on track and motivated. Nicol and Macfarlane-Dick (2006) provide a model with seven principles for effective feedback. A rubric, as discussed earlier, is a very useful tool for feedback; students can at any point judge their own progress against the rubric, but it can also be used by a lecturer to indicate, prior to formal submission, where a student is in relation to the goals of the assessment. This can assist with the first two principles: clarifying what good performance is and facilitating self-assessment and reflection. I also encouraged the students to keep a weekly reflective journal – it was not mandatory, but I explained that it could be used later on when writing their final reflection. The self- and peer-reflection form also gave the students an opportunity to reflect on how they did as individuals and in relation to other group members – something of use for the next project they encounter.
In the two lengthy group meetings that took place online, where a demo was requested (though not always provided) and questions were asked, the difficult balance to strike was between giving honest feedback, however harsh, and still providing motivation. Feedback must enable students to self-correct, as Nicol and Macfarlane-Dick suggest in their third principle. I would preface some of my comments with questions such as “Do you think you might have overcomplicated things a bit?” or “How did you arrive at that particular solution?” and tease things out, hoping the students would arrive at the problem themselves rather than me being too blunt. I would also avoid simply listing things that are wrong, as such a list acts as a checklist, which, as Lunsford (1997, quoted in Nicol and Macfarlane-Dick, 2006) suggests, leads to a less holistic approach of addressing little problems with little thought. I like the idea that three well-considered comments (corrective in nature) are sufficient for many assessments; the scale and nature of the assessment will dictate this, so, for example, a project worth 50% of a 10-credit module might have eight learning outcomes, and thus it might make sense to limit the number of feedback items to eight.
Integration of concepts
Smith and MacGregor’s (1992) concept of the learning community, briefly discussed in LO1, suggests that knowledge shouldn’t be fragmented, but integrated. My approach to assessment is to integrate concepts where possible (whether I or other lecturers have taught them), demonstrating how they relate to each other and to the workplace. As I explained to the students, the group project is as close as they will get to real work experience using processes that many in industry use. I give the students a reasonably well-defined problem, but one that allows for significant interpretation. I enjoy a bit of role playing as the “product owner”, to use Scrum parlance, and invite the students to treat me as the person paying for the product who has the greatest level of knowledge (I use a Google+ community to answer the questions students post). I explain that I don’t know everything up front and that my understanding, as well as theirs, will evolve as the project goes along. I treat project specifications as living documents and explain how this relates to my industry experience (again the concept of Brown et al.’s (1989) cognitive apprenticeship arises), and I ask students to consider their own experience from other fields. This does require continual engagement throughout the project.
Anderson, L.W., Krathwohl, D.R., Bloom, B.S., 2001. A taxonomy for learning, teaching, and assessing: a revision of Bloom’s taxonomy of educational objectives. Longman.
Andrade, H.G., 2000. Using rubrics to promote thinking and learning. Educational leadership, 57(5), pp.13-19.
Brown, J.S., Collins, A., Duguid, P., 1989. Situated Cognition and the Culture of Learning. Educational Researcher 18, 32–42.
Chickering, A.W., Gamson, Z.F., 1987. Seven Principles for Good Practice in Undergraduate Education. AAHE Bulletin.
Entwistle, N., 2000. Promoting deep learning through teaching and assessment: conceptual frameworks and educational contexts. ESRC Teaching and Learning Research Programme, First Annual Conference.
Gagné, R.M., Briggs, L.J., Wager, W.W., 1992. Principles of Instructional Design. Harcourt Brace Jovanovich College Publishers.
Johnson, D.W., Johnson, R.T., Smith, K.A., 1998. Maximizing instruction through cooperative learning. ASEE Prism; Washington 7, 24.
Marton, F. and Säljö, R., 1976. On qualitative differences in learning: I – Outcome and process. British Journal of Educational Psychology, 46(1), pp.4-11.
Nicol, D.J. and Macfarlane-Dick, D., 2006. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), pp.199-218.
Smith, B.L., MacGregor, J.T., 1992. What is collaborative learning?