csedu
  • Unlocking the Gates - interesting because it predates MOOCs and talks about why universities started putting course materials online in the first place. The story of Berkeley's course archiving system, organized originally by Larry Rowe's Berkeley Multimedia Research Center (BMRC) and driven by the constraint of low cost, is one of the more interesting case studies.
  • Disrupting Class - Clayton Christensen's [“The Innovator's Dilemma”] take on MOOCs and higher ed. The essence of disruptive innovation is that the entrenched players aren't interested because the new technology doesn't suitably solve the problem they think their customers have. Innovators must therefore go looking for customers who DO have a need for what the new technology offers, and new companies spring up to serve that need. Eventually the new technology's price/performance improves enough that it catches up to or overtakes the traditional technology, OR end users reach a point where they cannot absorb additional product improvements, at which point it becomes a cost game (and the innovators, who have been forced to learn to be lean, tend to come out the winners). The best of the entrenched players then “migrate upstream” by focusing on a premium product for customers willing to pay more, and the less capable entrenched players disappear. Christensen argues that large-scale online education could unfold in just this way.
  • College Unbound - Jeff Selingo is a leading journalist in this area. The main message of the book is that colleges' expenses (and tuitions) have been driven up by trying to recruit more paying students by building better facilities (dorms, stadiums, concert halls, golf courses…), to the point where higher ed has become a for-profit, marketing-driven business in which most dollars are NOT spent improving the learning experience.

MOOC experience reports and evaluations

Summaries of many of the following reports

  • Edinburgh MOOCs Report 2013 (and a brief commentary; if you google the report title, there are many other commentaries as well)
  • MITx/HarvardX report on first year of MOOCs from those universities - who took the courses and why, plus more detailed analyses of each course
  • Changing "course": Reconceptualizing educational variables for massive open online courses. Jennifer DeBoer, Andrew Ho, Glenda Stump, Lori Breslow (Harvard & MIT). Argues that most traditional variables used for evaluating brick-and-mortar courses, such as attrition, need to be reconceptualized for MOOCs since the current interpretations are either meaningless for MOOCs (wrong semantics) or cannot be measured the way they can for traditional courses.

Some high-order bits from the inaugural Learning@Scale conference

Student engagement/behaviors in MOOCs

Combining machine and human grading for open-ended questions, to better leverage human effort

  • Two papers from Learning@Scale 2014 that propose two different approaches to using machine learning to scale human grading. One combines ML with peer learning to see whether ML can reduce the amount of human effort per response while improving, or at least not decreasing, grading accuracy (answer: ranges from “just barely” to “no”). The other proposes a UI and automatic clustering to improve human grader leverage by applying grades & feedback to whole clusters of similar answers at a time (they get roughly an 8x efficiency improvement and ~3x more responses receiving feedback, with no loss in grading accuracy).
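A toy sketch (mine, not the paper's actual system) of the cluster-then-grade idea: group similar short answers so a grader can attach one grade and one piece of feedback per cluster.

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.cluster import KMeans

  # hypothetical short answers to an open-ended question
  answers = [
      "the loop never terminates because i is not incremented",
      "infinite loop: i is never incremented",
      "off by one error in the array index",
      "the array index goes one past the end of the array",
  ]

  # vectorize the answers and cluster them; the grader then handles clusters, not individuals
  X = TfidfVectorizer().fit_transform(answers)
  labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

  for cluster in sorted(set(labels)):
      members = [a for a, l in zip(answers, labels) if l == cluster]
      print(f"cluster {cluster}: grader writes ONE comment covering {len(members)} answers")
      for a in members:
          print("   ", a)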

Automatic evaluation of students' computer programs

* Detecting similar solution approaches and offering feedback - paper summaries and ideas in progress

Helping beginning programmers

* Language-independent conceptual “bugs” in novice programming. Roy D. Pea. J. Edu. Computing Research, 2(1), 1986.

  • Beginning programming students are observed to make two types of conceptual errors, both rooted in their taking an “intentional stance” toward the computer/programming system, which in turn is hypothesized to derive from students using “informal conversation with another human” as their analogical basis for “formal conversation [programming] with a computer”.
  • “Parallelism bugs”: applying informal-speech semantics to declarative-looking code (“While I am sick, don't call me”, or “Area := width * height”) ⇒ students think code that comes *far after* the While or If is somehow magically influenced by it. E.g., if width and height are set AFTER the assignment to Area, students think it will still work (see the sketch after this list).
  • “Egocentrism bugs”: students expect the computer to infer intentions and fill in missing info/statements; this is complicated by the fact that some programming systems do try, in a limited way, to do this at a low level (assuming initial default values for uninitialized variables; automatic type promotion/conversion; Perl-like “do what I mean” operational semantics).
  • (This suggests that an ability to “play Mr. Computer” and mentally single-step through a program may be valuable for novices. It also makes me wonder whether novices whose native language isn't English do better, because the terminology in programming languages doesn't resemble informal conversation in their language, so it *must* be mastered as a formalism.)
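A minimal sketch (mine, in Python rather than the Pascal-style pseudocode Pea uses) of the parallelism bug in the Area example:

  width = 0
  height = 0
  area = width * height   # computed NOW, from the current values of width and height

  width = 3                # a student with the parallelism bug expects area to update here
  height = 4

  print(area)              # prints 0, not 12 -- the earlier assignment is not re-evaluated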

Offering help

MOOC Analytics and Dashboards

  • Stuart Reges: The Mystery of "b := (b = false)", SIGCSE 2008. On the 1988 CS AP-A exam, 5 questions in particular turned out to be statistically significant predictors not only of success on that exam, but also of success on the supposedly separate AB exam covering more advanced material. (The single most “powerhouse” question is the one in the paper's title, which asks students about the result of executing that statement.) These questions deal with assignment and recursion, supporting Knuth's informal hypothesis that successful programmers have “algorithmic thinking” skills consisting of a mental model of program execution, and in particular of dynamic program state, and supporting Dehnadi and Bornat's (2006) claim that assignment and recursion are the toughest concepts for beginning CS students, so success on those would be a predictor of future success.
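For reference, here is what that statement actually does, in a Python analogue of the Pascal original (Python spells the comparison ==):

  b = True
  b = (b == False)   # the right-hand side is evaluated first, using the OLD value of b
  print(b)           # False, since True == False is False

  b = (b == False)   # False == False is True
  print(b)           # True -- the statement simply negates b, but seeing that requires
                     # mentally tracking dynamic program state, which is the paper's point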

* Data Analysis

* Dashboards

IRT (Item Response Theory)

* Includes a couple of different articles. One of them is similar to the talk David Pritchard gave: it goes over the differences in the assumptions needed to do IRT on a high-stakes test vs. in a MOOC, offers some ideas about modeling how student ability changes over time and how questions get easier after multiple attempts, and concludes by wondering how to exploit assessment of a student's learning rate to build a tutor. The other article explains how to do score equating with IRT – that is, how to compare the results of students taking two tests that are slightly different. This matters because if you use exams with multiple versions to prevent cheating, you want to be able to compare results across the versions.
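For reference, the standard two-parameter (2PL) item response function these articles build on (textbook IRT, not something specific to either article), as a small Python sketch:

  import math

  def p_correct(theta, a, b):
      # probability that a student with ability theta answers an item with
      # discrimination a and difficulty b correctly
      return 1.0 / (1.0 + math.exp(-a * (theta - b)))

  # an average-ability student (theta = 0) on a moderately hard item (b = 0.5)
  print(p_correct(theta=0.0, a=1.0, b=0.5))   # ~0.38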

* Using the General Diagnostic Model to Measure Learning and Change in a Longitudinal Large-Scale Assessment - This paper looks at how to use IRT to measure improvement in students' performance between different test administrations, e.g., when they take a test once and then retake a similar test a few months later. Specifically, it looks at particular school populations and examines how different schools improved differently.

* If at First You Don’t Succeed, Try, Try Again: Applications of Sequential IRT Models to Cognitive Assessments - This paper explains a model, called SIRT (sequential IRT), for dealing with questions that allow multiple attempts. It then goes through three possible variants: one where each question has a parameter that determines how much easier it gets on subsequent attempts, one where each student has a parameter that determines how much that student can learn/improve after each attempt, and one where each student has a completely different set of ability parameters for each attempt.
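A rough sketch of the first variant (the item gets easier by a fixed amount on each additional attempt); the parameter names are mine, not the paper's:

  import math

  def p_correct_on_attempt(theta, b, delta, attempt):
      # the item's effective difficulty drops by delta with every extra attempt
      effective_difficulty = b - delta * (attempt - 1)
      return 1.0 / (1.0 + math.exp(-(theta - effective_difficulty)))

  # the same student on the same item across three attempts
  for attempt in (1, 2, 3):
      print(attempt, round(p_correct_on_attempt(theta=0.0, b=1.0, delta=0.4, attempt=attempt), 2))
  # prints roughly 0.27, 0.35, 0.45 -- success becomes more likely with each attempt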

* Assessing Item Fit for Unidimensional Item Response Theory Models Using Residuals from Estimated Item Response Function - This paper goes through a method for assessing how well an IRT model actually fits the data.

* A Note on Explaining Away and Paradoxical Results in Multidimensional Item Response Theory - There is a paradox that sometimes arises in applying MIRT where getting a question correct lowers a student's estimated ability and getting the question incorrect raises it. This paradox is an instance of a more general phenomenon known as Berkson's paradox, which can be explained in terms of conditional independence and Bayes nets. The paradox may be a problem if we consider tests to be contests; it is not so much of a problem if we consider tests to be purely for measurement.
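A toy Bayes-net demonstration of explaining away (numbers are made up; skills A and B are independent a priori, and having either one makes a correct answer likely):

  from itertools import product

  p_a = 0.5   # prior P(student has skill A)
  p_b = 0.5   # prior P(student has skill B)

  def p_correct(a, b):
      return 0.9 if (a or b) else 0.1   # either skill makes a correct answer likely

  def joint(a, b, correct):
      pa = p_a if a else 1 - p_a
      pb = p_b if b else 1 - p_b
      pc = p_correct(a, b) if correct else 1 - p_correct(a, b)
      return pa * pb * pc

  # conditioning on skill A being present "explains away" the correct answer and
  # LOWERS the probability of skill B -- the same structure as the MIRT paradox
  p_b_given_correct = (sum(joint(a, 1, True) for a in (0, 1))
                       / sum(joint(a, b, True) for a, b in product((0, 1), repeat=2)))
  p_b_given_correct_and_a = joint(1, 1, True) / sum(joint(1, b, True) for b in (0, 1))
  print(round(p_b_given_correct, 3))         # ~0.643
  print(round(p_b_given_correct_and_a, 3))   # 0.5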

* Correlating skill and improvement in 2 MOOCs with a student's time on tasks - This paper finds a negative correlation between a student's amount of resource use and their skill in the course, and little or negative correlation between time spent with resources and skill improvement. The authors claim that these negative correlations can largely be explained by the wide variation in the initial skill level of students taking MOOCs. They examined two courses and found significantly different correlations, so they suggest looking at more courses for further investigation.
