Optimizing the learning analytics process using a reinforcement learning framework
Samuel P M Choi and S S Lam
The Open University of Hong Kong
Hong Kong SAR, China
Learning analytics (LA) is a relatively new research field concerned with analysing data collected from various sources in order to provide insights for enhancing learning and teaching. As suggested by Campbell and Oblinger (2007), a complete LA process typically involves five distinct yet interrelated stages (capture, report, predict, act, and refine), which together form a sequential decision process. To date, research efforts have focused mostly on questions arising within individual stages, and a formal framework for quantifying and guiding the LA process as a whole is still lacking. In this paper, we discuss how reinforcement learning (RL), a well-understood sub-field of machine learning, can be employed to address the sequential decision problem embedded in the LA process. In particular, we map the LA stages onto an RL framework consisting of a state space, an action space, a transition function, and a reward function, and illustrate with an example how the three most studied optimality criteria in RL (the finite-horizon, discounted infinite-horizon, and average-reward models) can be applied to the LA process. The underlying assumptions, advantages, and issues of the proposed RL framework are also discussed.
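For concreteness, the RL framework named in the abstract is the standard Markov decision process (MDP) formulation; the following is a minimal sketch of that textbook setup, with notation that is ours rather than the paper's. An MDP is a tuple $(S, A, T, R)$, where $S$ is the state space (illustratively, the observable state of a course or cohort), $A$ the action space (e.g., candidate pedagogical interventions), $T : S \times A \to \Pi(S)$ the transition function, and $R : S \times A \to \mathbb{R}$ the reward function. Writing $r_t$ for the reward received at step $t$, $h$ for a finite planning horizon, and $\gamma \in [0,1)$ for a discount factor, the three optimality criteria are typically defined as

\[
E\!\left[\sum_{t=0}^{h} r_t\right] \ \text{(finite horizon)}, \qquad
E\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_t\right] \ \text{(discounted infinite horizon)}, \qquad
\lim_{h \to \infty} \frac{1}{h}\, E\!\left[\sum_{t=0}^{h} r_t\right] \ \text{(average reward)}.
\]

The choice among them encodes how an LA designer weighs short-term gains (e.g., within a single course offering) against long-term outcomes (e.g., across successive offerings after repeated refinement).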