Carnegie Mellon University

Mor Harchol-Balter

Professor, Computer Science Department
Courtesy Professor, Electrical and Computer Engineering

Address

5000 Forbes Avenue
Pittsburgh, PA 15213

Bio

Mor Harchol-Balter is a Professor of Computer Science at Carnegie Mellon University. She received her Ph.D. in Computer Science from U.C. Berkeley in 1996 under the direction of Manuel Blum. She joined CMU in 1999 and served as Head of the PhD program from 2008 to 2011. Mor is a Fellow of the ACM and a Senior Member of the IEEE. She is a recipient of the McCandless Junior Chair, the NSF CAREER award, and several teaching awards, including the Herbert A. Simon Award. She has received faculty awards from a dozen companies, including Google, Microsoft, IBM, EMC, Facebook, Intel, Yahoo!, and Seagate.

Mor's work focuses on designing new resource allocation policies for distributed systems, including load balancing, power management, and scheduling policies. She is known for her new techniques in queueing-theoretic modeling and analysis, as well as for her work in computer systems implementation. Mor is heavily involved in the SIGMETRICS / PERFORMANCE research community, where she has received multiple best paper awards and served as Technical Program Chair in 2007, General Chair in 2013, and Keynote Speaker in 2016. She is also the author of a popular textbook, Performance Analysis and Design of Computer Systems, published by Cambridge University Press, which bridges Operations Research and Computer Science. Mor is best known for her enthusiastic keynote talks and her many PhD students, most of whom are professors at top academic institutions.

Education

PhD, 1996
Computer Science
University of California, Berkeley

Research

Theoretical Work on Algorithms and Analysis
Queueing behavior and stochastic analysis of resource allocation in distributed computer systems. Load sharing policies, routing policies, cycle stealing, replication algorithms, power-efficient policies, threshold policies, and multi-class/multi-server prioritization. Fairness metrics in scheduling. Correlated arrival processes and analysis under high-variability workloads. Known for the "All-Can-Win Theorem", which shows that scheduling policies biased toward favoring small jobs can also be preferable for large jobs.
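
As an illustrative sketch only, with assumed arrival rate, job-size distribution, and run length, the toy single-server simulation below contrasts First-Come-First-Served with preemptive Shortest-Remaining-Processing-Time, the kind of size-based policy this line of work analyzes; favoring jobs with small remaining size lowers mean response time.

import heapq
import random

random.seed(0)

def gen_jobs(n=200_000, lam=0.7):
    """Poisson arrivals at rate lam, exponential job sizes with mean 1 (load ~ lam)."""
    t, jobs = 0.0, []
    for _ in range(n):
        t += random.expovariate(lam)
        jobs.append((t, random.expovariate(1.0)))
    return jobs

def fcfs(jobs):
    """First-Come-First-Served: jobs run to completion in arrival order."""
    free_at, total = 0.0, 0.0
    for arrival, size in jobs:
        free_at = max(arrival, free_at) + size
        total += free_at - arrival
    return total / len(jobs)

def srpt(jobs):
    """Preemptive Shortest-Remaining-Processing-Time on a single server."""
    total, clock, i, n = 0.0, 0.0, 0, len(jobs)
    active = []                               # min-heap of [remaining_size, arrival_time]
    while i < n or active:
        next_arr = jobs[i][0] if i < n else float("inf")
        if active:
            rem, arr = active[0]
            if clock + rem <= next_arr:       # shortest job finishes before next arrival
                heapq.heappop(active)
                clock += rem
                total += clock - arr          # response time = completion - arrival
                continue
            active[0][0] -= next_arr - clock  # work done until the arrival preempts
        clock = max(clock, next_arr)          # admit the next arrival
        heapq.heappush(active, [jobs[i][1], jobs[i][0]])
        i += 1
    return total / n

jobs = gen_jobs()
print("mean response time, FCFS:", round(fcfs(jobs), 3))
print("mean response time, SRPT:", round(srpt(jobs), 3))

The All-Can-Win result itself concerns per-size comparisons, such as SRPT versus Processor-Sharing, which this sketch does not attempt to reproduce.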

Stochastic Analysis & Queueing Techniques
Developing new methods for stochastic analysis. Examples include: (i) Recursive Dimensionality Reduction (RDR), a technique that reduces a Markov chain growing unboundedly in many dimensions to one growing unboundedly in only one dimension, using the idea of busy-period transitions; (ii) Recursive Renewal Reward (RRR) and Clearing Analysis on Phases (CAP), techniques that yield closed-form solutions for many one-dimensional infinite repeating Markov chains, including the M/M/k with setup chain; (iii) exact analysis of replication systems: the first exact (product-form) solution for replication systems with any number of servers, any number of classes, and any degree of replication.
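
For flavor only (this is not RDR, RRR, or CAP, and the rates and truncation level below are arbitrary assumptions), the snippet checks the textbook closed form pi_n = (1 - rho) * rho^n for the simplest repeating chain, the M/M/1 queue, against a direct numerical solve of a truncated generator matrix.

import numpy as np

lam, mu, N = 0.7, 1.0, 200         # arrival rate, service rate, truncation level
rho = lam / mu

# Generator Q of the M/M/1 birth-death chain, truncated at N jobs in system.
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = lam          # arrival: n -> n+1
    if n > 0:
        Q[n, n - 1] = mu           # departure: n -> n-1
    Q[n, n] = -Q[n].sum()          # diagonal makes each row sum to zero

# Stationary distribution solves pi Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.concatenate([np.zeros(N + 1), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

closed_form = (1 - rho) * rho ** np.arange(N + 1)
print("max |numerical - closed form| =", np.abs(pi - closed_form).max())
print("mean number in system:", (np.arange(N + 1) * pi).sum(),
      "vs closed form rho/(1-rho) =", rho / (1 - rho))

Techniques such as RRR and CAP target repeating chains considerably richer than this birth-death example, e.g. the M/M/k with setup.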

Computer Systems Implementation
Resource management in data centers and other distributed systems. Implementation of QoS tail-latency guarantees for flows in networked storage systems, based on techniques from Deterministic and Stochastic Network Calculus (PriorityMeister and SNC-Meister). Online scheduling of jobs with heterogeneous preferences in data centers with heterogeneous servers (TetriSched). Dynamic power management in multi-tier data centers (AutoScale). Scheduling of memory controllers (ATLAS). Pricing and queueing optimizations for cloud services. Kernel-level connection scheduling in Web servers. Solutions for coping with transient overload in Web servers. QoS for e-commerce applications involving the backend database. Prioritization mechanisms for OLTP transactions in database servers. Job scheduling in supercomputing centers.
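
As a toy sketch only (the constants and structure are my own illustrative assumptions, not the published AutoScale algorithm), the snippet below captures the general idea of delayed scale-down: surplus servers are left on for a wait period before being powered off, so that setup costs are not paid every time load dips briefly and then bounces back.

import math

WAIT = 60.0            # seconds a surplus server stays on before powering down (assumed)
JOBS_PER_SERVER = 10   # rough per-server capacity target (assumed)

class DelayedScaleDown:
    def __init__(self, servers_on=4):
        self.on = servers_on
        self.surplus_since = None   # time at which we first had surplus capacity

    def step(self, t, jobs_in_system):
        """Return how many servers to keep on at time t given the current load."""
        needed = max(1, math.ceil(jobs_in_system / JOBS_PER_SERVER))
        if needed >= self.on:
            self.on = needed        # scale up right away (setup delay ignored here)
            self.surplus_since = None
        else:
            if self.surplus_since is None:
                self.surplus_since = t
            elif t - self.surplus_since >= WAIT:
                self.on -= 1        # release one surplus server after the wait
                self.surplus_since = None
        return self.on

policy = DelayedScaleDown()
for t, load in [(0, 35), (10, 36), (20, 12), (50, 11), (90, 9), (150, 8)]:
    print("t =", t, "servers on:", policy.step(t, load))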

Modeling and Workload Characterization
Known for the discovery that UNIX process CPU lifetimes follow a heavy-tailed Pareto distribution. Statistical characterization of workloads including UNIX processes, web traffic, OLTP, supercomputing, memory workloads, and parallel jobs.
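
As an illustration with assumed parameters (the shape below is only in the neighborhood of the roughly 1/x tail reported for process lifetimes, and the sample is synthetic rather than measured), the snippet draws lifetimes from a Pareto distribution and reports how much of the total CPU work the largest 1% of jobs account for, which is the property that makes heavy-tailed workloads so consequential for scheduling and load balancing.

import random

random.seed(1)
alpha, n = 1.1, 1_000_000                    # assumed Pareto shape and sample size
lifetimes = sorted(random.paretovariate(alpha) for _ in range(n))

total = sum(lifetimes)
top1 = sum(lifetimes[int(0.99 * n):])        # the largest 1% of processes
print(f"top 1% of processes hold {100 * top1 / total:.1f}% of the total CPU work")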