Tuesday, October 28, 2008

Academic Analytics: Using Institutional Data to Improve Student Success

This was this morning's pre-conference workshop, run by John Campbell and Kim Arnold from Purdue. The session was OK in many respects, but sadly the basic premise got lost in translation. They did some very impressive statistical modelling, pulling together a whole range of data sets to look for indicators of "at risk" students, and arrived at a model profile that was 80% successful in predicting "risk". They then applied this model (or a customised version of it - more on this bit later) to a course weekly, giving students an early-warning indicator (traffic lights) and using these to make interventions (email, SMS, f2f) pointing them to additional support.
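For what it's worth, here's a rough Python sketch of what that weekly loop amounts to - score each student, map the score onto red/amber/green, and nudge anyone who isn't green. This is entirely my own invention, not anything Purdue showed: the cut-offs, the channels and the predict_risk/send functions are all assumed for illustration.

    def traffic_light(risk_score):
        """Map a 0-1 risk score onto a red/amber/green flag (cut-offs invented)."""
        if risk_score >= 0.7:
            return "red"
        if risk_score >= 0.4:
            return "amber"
        return "green"

    def weekly_run(students, predict_risk, send):
        """Score every student, flag them, and nudge anyone who isn't green."""
        for s in students:
            flag = traffic_light(predict_risk(s))
            if flag == "red":
                send(s, channel="f2f", message="Please book a chat with your tutor.")
            elif flag == "amber":
                send(s, channel="email", message="Some extra support resources for you.")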

What was good - the focus on actionable data, the timeliness of interventions, models that proved (in their context) reasonably predictive, and the focus on large first-year modules.
What was more problematic - the model is customised for each course (i.e. module), and each one took approx 16 hours per week, every week, in management, analysis and publication, so just 3 modules (3 x 16 = 48 hours) would swallow a whole FTE (a long-winded way of saying it's not scalable); I'm also not sure the very complex stats necessarily identified different students from those you could pick out with a couple of simple indicators (may be a sledgehammer to crack a nut - see the sketch below); and the traffic lights worked for them, but I thought yuck! - likely to be very instrumental.
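The sledgehammer question is at least easy to test if you have the data: compare the set of students the full model flags with the set a couple of crude rules would flag, and look at the overlap. A rough sketch - the cut-offs and field names here are entirely made up:

    def flagged_by_model(students, predict_risk, cutoff=0.7):
        """Students the statistical model rates as high risk."""
        return {s["id"] for s in students if predict_risk(s) >= cutoff}

    def flagged_by_simple_rules(students):
        """Students a couple of blunt indicators would catch anyway."""
        return {s["id"] for s in students
                if s["phase_test_pct"] < 40 or s["vle_hours_per_week"] < 1}

    def compare_flags(students, predict_risk):
        model = flagged_by_model(students, predict_risk)
        simple = flagged_by_simple_rules(students)
        return {"both": model & simple,
                "model_only": model - simple,
                "simple_only": simple - model}

If "model_only" comes back nearly empty, the complex stats aren't telling you much that the blunt rules wouldn't.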

Their indicators fell into 3 categories: "educational preparedness" (another way of saying entry qualifications), "performance" (phase test results) and "effort" (amount of time logged into the VLE).
I'm gonna leave the last one for you to ponder, cos most of you know my take on that sort of thing... except to say that they found it was the best predictive indicator of success. So my question would be: thinking what we think and knowing what we know, how can that be? (There's a toy sketch below of how the three strands might feed a single score.)
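They didn't say (or I didn't catch) what form the model itself takes, so purely for illustration here's a toy logistic-style score combining the three strands. The weights are invented, not theirs - though in their data it was the "effort" strand that carried the most predictive weight:

    import math

    def risk_score(entry_points, phase_test_pct, vle_hours_per_week):
        """Toy 0-1 'at risk' score from the three indicator categories (weights invented)."""
        # Better preparation, performance and effort should all pull risk down,
        # hence the negative weights; the intercept just centres the toy example.
        z = 4.0 - 0.02 * entry_points - 0.03 * phase_test_pct - 0.5 * vle_hours_per_week
        return 1 / (1 + math.exp(-z))

    # e.g. modest entry points, a shaky phase test, barely any time in the VLE
    print(round(risk_score(entry_points=80, phase_test_pct=35, vle_hours_per_week=0.5), 2))  # -> 0.75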

3 comments:

Paul Helm said...

In a tragic case of SHU not being very academically analytical (or in this case maybe just not mentioning things to colleagues), I was in this session also. I agree with Louise - it is an interesting idea but nowhere near scalable. I could sort of see it working for killer modules, but I don't think it would tell us anything we didn't already know about them. It has given me some food for thought about how we may approach IR, particularly about how we might "sell" the institutional readiness stuff to the university.

anne said...

Having to make this a moving feast of competition space as it's just becoming increasingly difficult to follow the right threads:) so here's a nice little story about an academic kinda gal in Orlando who finds success in finally being able to take the photo she wants of traffic lights using an analytic(al) approach in readiness for her institution's best student experience survey.
http://www.flickr.com/photos/jasminflower/2226186902/

Abbi said...

Did they add anything to the articles and sessions they've already run on this?
One of my concerns from the online seminar I attended was the language they used in contact with the 'at risk' students - they used peer comparison and basically told 'at-risk' students they were 'falling behind their peers' and 'were unlikely to succeed on the course unless they sought help'. The onus was very much on the students rather than looking at LTA approaches on the course.
Having said that - I like the idea of using different data strands to produce 'actionable intelligence'.