The first part of this course will present an introduction to standard dynamic programming techniques for Markov Decision Processes (MDPs). Concepts such as value iteration will be surveyed, along with advanced topics including stochastic shortest-path problems. The case of linear quadratic Gaussian systems will be discussed, with particular emphasis on adaptive control design. The second part of the course will cover general adaptive and learning methodologies for decision making under model uncertainty, such as reinforcement learning. This part will also survey state-of-the-art methods for large-scale decision problems and distributed decision making as applicable to real-world systems, for example, the power system.
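To give a flavor of the dynamic programming material, the following is a minimal illustrative sketch of value iteration on a toy two-state, two-action MDP. All numbers (transition probabilities, rewards, discount factor) are made up for illustration and are not part of the course material.

```python
import numpy as np

# Toy MDP: 2 states, 2 actions (all values are illustrative assumptions).
# P[a][s][s'] = probability of moving from state s to s' under action a.
P = np.array([
    [[0.9, 0.1],   # action 0, from state 0
     [0.2, 0.8]],  # action 0, from state 1
    [[0.5, 0.5],   # action 1, from state 0
     [0.0, 1.0]],  # action 1, from state 1
])
# R[s][a] = expected immediate reward for taking action a in state s.
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(2000):
    # Bellman optimality backup:
    # Q(s,a) = R(s,a) + gamma * sum_{s'} P(a,s,s') V(s')
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
print(V, policy)
```

For this toy instance, the iteration converges geometrically (contraction with modulus `gamma`) to the unique fixed point of the Bellman optimality operator; the greedy policy extracted at convergence is optimal.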