Reflections on the MBA Roundtable + LEPE Experiential Learning 2019 Workshop (Part 1)

Jan 25, 2019

Defining and then Measuring Experiential

Experiential project-based learning is the hottest curriculum category in business education (especially MBA) and has been for at least five years. LEPE has been at the forefront of research and discussion on the topic since 2009. Even so, best practices remain elusive as guideposts for schools running, growing, or launching project-based learning (PBL) programs.

Three thought leaders from the MBA experiential group LEPE are working to change this: Kerry Laufer of Dartmouth Tuck, Michellana Jester of MIT Sloan, and Shannon McKeen of UNC Kenan-Flagler.

Experiential is very expensive to deliver… but we believe it is worth it.

Shannon McKeen, during his presentation at the MBA Roundtable Experiential Workshop

Strikingly, the opening of the MBA Roundtable Experiential Workshop was spent on the foundational challenge of defining experiential learning. The category has so far defied standardization, and the confusion over how to define it reflects that.

All three LEPE presenters spoke about their work to define experiential within their own schools, in some cases ruffling feathers along the way when educators found their work left outside the newly drawn experiential boundaries. The consensus in the room, audience included, seemed to be that each school will have to define experiential for itself. The eternal challenge of defining experiential continues!

My first impression of this market, when we launched in 2014, was that we need consistently applied labels, with clear definitions, for the activities under the umbrella of experiential learning. It is simply hard to think about things without clear language to use, and harder yet to share information across schools. As long as this issue remains, building best practices and benchmarking will be difficult.

Kerry Laufer put forth that project-based experiential learning is the most complex experiential format and stands meaningfully apart from the other varieties. She noted that this difference does not necessarily mean superiority. The difference, she said, is that other formats rely on designed content, while an experiential project is more like a “runaway train” that takes on a life of its own, which makes assessment complex.

I have long heard from educators about the challenge of assessing individual student work and overall project outcomes, owing to the moving parts of external client involvement and a scope and content that are never entirely consistent from project to project. Ironically, or maybe because of this difficulty, MBA Roundtable Executive Director Jeff Bieganek announced that the leading-edge assessment work being done for experiential projects is ahead of the curve compared to the more routine assessment work in the general business curriculum, where things are more straightforward.

How do we get students to complete surveys (and to give honest feedback)?

This topic sparked much conversation among the audience and is apparently a super common issue. Michellana Jester shared a clever tactic from her program (G-Lab at MIT Sloan): students can only view their feedback results once every member of their team has completed the peer evaluation. Students are notified of any stragglers on their team, bringing some good old-fashioned peer pressure to bear toward total survey completion. Kerry Laufer reported employing a stack ranking for her teams, solving the issue of students giving all of their peers a 5 out of 5.

This kind of clustering in survey results is something we have observed in aggregate student data in EduSourced, so I can confirm it is an issue common to most schools. Shannon McKeen reports that issuing a peer survey halfway through the project engenders more “raw” feedback and is less prone to this clustering effect than surveys issued at the end of the project.
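
Out of curiosity, here is a minimal sketch of that result-gating tactic in Python. To be clear, this is my own illustration, not code from G-Lab, EduSourced, or any other real tool; every class and method name here is a hypothetical stand-in.

```python
# Minimal sketch of the result-gating tactic (hypothetical names throughout).

class PeerReviewRound:
    """One round of 360 peer evaluations for a single project team."""

    def __init__(self, team_members):
        self.team_members = set(team_members)
        self.submissions = {}  # reviewer -> {reviewee: feedback text}

    def submit(self, reviewer, feedback_by_peer):
        if reviewer not in self.team_members:
            raise ValueError(f"{reviewer} is not on this team")
        self.submissions[reviewer] = feedback_by_peer

    def stragglers(self):
        """Teammates who have not submitted yet (surfaced to apply peer pressure)."""
        return self.team_members - set(self.submissions)

    def results_for(self, member):
        """Anonymized feedback a student may see only once everyone has submitted."""
        if self.stragglers():
            raise PermissionError(
                "Results locked; waiting on: " + ", ".join(sorted(self.stragglers()))
            )
        return [fb[member] for fb in self.submissions.values() if member in fb]

round_ = PeerReviewRound(["Ana", "Ben", "Cai"])
round_.submit("Ana", {"Ben": "Great analysis", "Cai": "Missed our deadline"})
print(round_.stragglers())  # {'Ben', 'Cai'}: results stay locked until they submit
```

A stack-ranking variant, as Laufer described, would additionally force each reviewer to order their teammates rather than score them independently, which rules out the everyone-gets-a-5 outcome.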

Actionable Takeaway

Issue peer evaluations at least twice during a project: once at the midpoint and again at the end. Share the results back with students as a learning opportunity, leaving time to improve before the project ends.

After this session, we went back and built out the EduSourced surveys feature, which includes 360 peer feedback that takes design inspiration from workplace peer review tools. Our peer review does exactly what is described above: it allows students to review their (anonymized) peer review results and learn from them. EduSourced surveys can even be scheduled in advance or fully automated to save time.

What is the goal of peer evaluations?

Should 360 peer evaluation results be used for grading, or should they be a learning tool only? This was a point of contention among the audience; historically, peer survey results have been a major input to grading. The presenters explained that making the survey punitive discourages students from providing their most honest feedback, undercutting its learning value.

Given the challenge of assessment in experiential projects, several audience members asked: if not survey results, then what should be used for grading? Shannon McKeen offered that a rubric should be used to grade the project’s output, as in any other course. Further, it was suggested that perhaps students should be graded on the quality (and quantity) of the feedback they give their teammates rather than on the feedback they receive. I have seen this feedback-given grading concept emerging as a trend in my conversations with engineering and business school programs across the country. In my opinion, it is an elegant solution to the two primary challenges of peer reviews: student completion rate and honesty of input.
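
To make that idea concrete, here is a crude, purely hypothetical sketch of grading on feedback given. Nothing here comes from the workshop: the completion/quality split, the weights, and the 1-to-5 quality rating (assumed to be assigned by an instructor against a rubric) are all my own illustrative assumptions.

```python
# Purely illustrative: grade students on the feedback they GIVE, not receive.
# The weights and the 1-5 quality scale are hypothetical assumptions.

def feedback_given_grade(comments, teammates, quality_ratings,
                         w_completion=0.5, w_quality=0.5):
    """
    comments: {teammate: feedback text} written by the student being graded
    teammates: peers the student was asked to review
    quality_ratings: {teammate: 1-5 instructor rating of that comment}
    Returns a 0-100 grade blending completion rate and average quality.
    """
    completion = sum(1 for t in teammates if comments.get(t)) / len(teammates)
    rated = [quality_ratings[t] for t in teammates if t in quality_ratings]
    quality = sum(rated) / len(rated) / 5 if rated else 0.0
    return 100 * (w_completion * completion + w_quality * quality)

# One review written and rated 4/5, one skipped entirely:
grade = feedback_given_grade(
    comments={"Ben": "Cite sources in the final deck", "Cai": ""},
    teammates=["Ben", "Cai"],
    quality_ratings={"Ben": 4},
)
print(round(grade, 1))  # 65.0
```

Because a scheme like this rewards writing reviews, and writing useful ones, it directly targets the completion and honesty problems raised above.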

Measuring learning objectives can seem daunting with an open-ended challenge like an experiential project. One participant offered that experiential programs by nature tend to be loaded with everyone’s “hopes and dreams” at the school, but that this ambition must be tempered; a useful guideline is to aim for two or three learning objectives. Another participant offered that her school pares down its learning objectives by barring anything that should have been taught in previous courses, mostly technical competencies. Their projects focus instead on soft skills.

Related: the 2016 MBA Roundtable Annual Symposium found (via a large employer survey) that soft skills were by far the biggest source of disappointment for employers hiring freshly minted MBAs. Focusing only on soft skills as the measured learning objective of an experiential project sounds like a good plan to me, given that the primary goal of an experiential project with an employer client is to prepare students for employment.

Part two of the MBA Roundtable Experiential Workshop reflection is posted here.

Recommended reading (if you haven’t already read it): Taking Measure of Experiential Learning, from the presenters of this workshop. Keep an eye out for an upcoming entry in the Texas Education Review that further expands on this article.

For more about experiential learning in business (and engineering) schools, check out our Experiential Academy.
