Undergrad Research Project - Multi-Core Scheduling using Deep Reinforcement Learning

Fall 2017

Mario Srouji
Ruslan Salakhutdinov
Project description

Scheduling in multi-core contexts has been, and continues to be, an optimization problem that balances multiple heuristics. Whether the goal is utilization of the machine's cores and available resources, low-power scheduling, throughput maximization, or latency reduction, many aspects must be considered when running workloads on a machine. In current state-of-the-art systems, a deterministic, well-defined scheduler decides task deployment according to a fixed policy. As a result, there is often no direct way to optimize the scheduling of a workload for several of these desired outcomes at once. Moreover, because workloads and tasks are dynamic, a better policy may exist that schedules tasks more effectively across the machine's resources.

This raises the possibility of using Machine Learning not only to aid the decisions the scheduler makes, but even to replace the scheduler altogether with an intelligent agent that makes dynamic task-deployment decisions based on a wide variety of state information. Deep Reinforcement Learning has advanced considerably and offers much room for improvement in the context of multi-core scheduling. The desired result of this project is to show that a Deep RL scheduler can not only make better scheduling decisions than a state-of-the-art scheduler given the same workload, but also make scheduling decisions based on multiple criteria, depending on how the user weights the value of each heuristic.
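One way the user-weighted combination of heuristics could be expressed is as a scalar reward for the RL agent. The sketch below is a minimal illustration under assumed conventions: the heuristic names, the normalization of each metric to [0, 1] (higher is better), and the `scheduling_reward` helper are all hypothetical, not part of the project description.

```python
def scheduling_reward(metrics, weights):
    """Combine several scheduling heuristics into one scalar reward.

    metrics: dict of normalized measurements in [0, 1], higher is better
             (costs such as latency or power are assumed to be expressed
             as 1 - normalized cost).
    weights: dict giving the user's relative value for each heuristic.
    Returns the weighted average of the metrics named in `weights`.
    """
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total


# Illustrative example: a user who values latency reduction twice as much
# as the other heuristics.
metrics = {"utilization": 0.8, "throughput": 0.6,
           "low_latency": 0.9, "low_power": 0.5}
weights = {"utilization": 1.0, "throughput": 1.0,
           "low_latency": 2.0, "low_power": 1.0}
print(scheduling_reward(metrics, weights))  # prints 0.74
```

The agent would then be trained to maximize the cumulative value of this reward over a workload, so changing the weights changes which scheduling behavior the learned policy favors.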
