Tower at Night


Main Program:
Dates: March 13 and 14, 2012
Location: Carnegie Mellon Univ.
Baker Hall A51
Giant Eagle Auditorium

Pre-Conference Workshop
Date: March 12, 1:00-6:00 p.m.
Location: Carnegie Mellon Univ.
ECE Department
Hamerschlag Hall, HH 1107
Bombardier Smart Infrastructure Collaboration Center


Abstracts due:
Feb. 01, 2012
Acceptance notices sent:
Feb. 15, 2012
Presentations due:
Mar. 01, 2012

Abstracts / Presentations / Papers

Leonard Hyman:  Carnegie Mellon’s Eighth Annual Conference on the Electricity Industry:
     What They Really Said: A Personal Take


Casten, Thomas
     Forbes, Kevin
     Mathieu, Johanna
     Wells, Chuck




D1.A1, #1
Title:  Wind Integration:  Status and Prospects.  
J. Charles Smith, Executive Director, UWIG 
Abstract:  A short overview of the major issues associated with integrating large amounts of wind energy into electric power systems will be provided.  The basic characteristics of wind power variability and uncertainty will be addressed, along with the use of flexibility and forecasting to manage them.  A range of issues will be explored, including integration cost, capacity value, energy storage, and market design.  An example from the field of wind forecasting will be used to illustrate some of the themes of the conference.

D1.A1, #2
MISO Energy: Example of Use of Economic Studies
Dale Osborn
Abstract: MISO ran an analysis for the EPRI Economics of Transmission Planning study (1021927). Several generation and transmission futures were examined for two transmission plans designed to serve MISO. The Regional Generation Outlet Study (RGOS) 765 kV transmission option has 6,000 MW more power transfer capability from MISO to PJM than a Native Voltage transmission option, at an incrementally higher cost. Both transmission options meet MISO's needs, and transmission built in one region also benefits other regions. The example provided shows that the decision to build incrementally more transmission on economic justification depends on which participants' benefits are included in the plan for cost allocation.

D1.A1, #3
Plans for Wind Integration in PJM: Progress and Challenges
Paul McGlynn

D1.A1, #4
Plans for Wind Integration in ISO-NE: Progress and Challenges
Jonathan Black
Abstract: ISO New England (ISO-NE) is working to reliably and efficiently integrate growing penetrations of wind power in the region. This presentation will provide an overview of ISO-NE’s work to date to successfully integrate wind, including the completion of the New England Wind Integration Study, and ongoing activities required to facilitate the large-scale integration of wind resources. Emphasis will be on the development of a centralized wind power forecasting system for the region, including the wind plant data requirements that will enable the successful implementation of this system.

D1.A1, #5
System Planning Challenges of Wind Integration in ERCOT
Shun-Hsien Huang (Fred)

D1.A1, #6
Dynamic Monitoring and Decision Systems (DYMONDS) Simulator for Low-Cost Green Azores Islands
Marija Ilic

Coffee Break

D1.A2, #1
Title: The Challenges of Large-scale Wind Power Integration
Dr. Niamh Troy & Prof. Mark O’Malley
Abstract: As higher penetrations of wind power are achieved, power system planning and operation become increasingly complex due to the variable and unpredictable nature of wind power. This presentation will outline the primary challenges that must be overcome for power systems to integrate large levels of wind power.  The variable nature of wind power will increase variability in net load (load minus wind generation), which must be met by conventional generation on the system, resulting in a greater demand for operational flexibility from these units. Expected or unexpected increases in net load, which can arise due to declining wind power output, will force conventional plant to ramp up their output; if sufficient ramping capability is not available, fast-starting units will need to come online. Periods of low demand coinciding with high wind power output can lead to conventional plant being shut down, a problem which has been exacerbated of late by reduced demand resulting from widespread economic recession. The culmination of adding more variability and unpredictability to a power system is that thermal units will undergo increased start-ups, ramping and periods of operation at low load levels, collectively termed “cycling”. In some systems wind is allowed to self-dispatch, so the forecast output from wind farms is not included in the day-ahead schedule. This can lead to increased transmission constraints, which will further intensify plant cycling.

Prior to the large-scale deployment of renewables, uncertainty in power systems was limited to load forecast error and the unplanned outages of generators or transmission lines. In order to maintain a secure system, adequate levels of spinning and non-spinning reserve were maintained to cover this error. Incorporating variable renewable generation adds an additional source of uncertainty given the unpredictable nature of renewable power sources. With low levels of renewables on power systems, additional reserve is needed to cover the additional uncertainty associated with renewables. However, as the penetration of renewables grows, it becomes increasingly inefficient to rely on reserves alone to cover the uncertainty related to renewables. More robust schedules produced via stochastic scheduling, which considers multiple scenarios corresponding to multiple values of the stochastic variable (in this case the power output from the renewable generation), may be necessary. In addition, to make the most efficient use of the renewable generation, forecasts need to be utilized. Because the accuracy of these forecasts increases as the forecast horizon decreases, it is important that updated forecasts are used to revise the commitment decisions through a rolling unit commitment mechanism. This can in turn lead to a reduced reserve requirement.
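The reserve saving from recommitting against fresher forecasts can be sketched as follows. This is a toy model with entirely illustrative numbers (unit sizes, forecast errors, and the greedy `commit` rule are all assumptions, not from any real system):

```python
# Toy rolling unit commitment: as the wind forecast improves closer to
# real time, the reserve held against forecast error shrinks, so fewer
# blocks of thermal capacity need to be committed.

def commit(net_load, reserve, unit_caps):
    """Greedily commit capacity blocks until they cover net load plus reserve."""
    committed, total = [], 0
    for cap in sorted(unit_caps):
        if total >= net_load + reserve:
            break
        committed.append(cap)
        total += cap
    return committed

units = [50] * 10                      # MW blocks of thermal capacity
load, wind_forecast = 500, 200         # MW

# Day-ahead: hold reserve against a 30% wind forecast error.
da = commit(load - wind_forecast, 0.30 * wind_forecast, units)
# Rolling hour-ahead recommitment: error shrinks to 10%, so less reserve.
ha = commit(load - wind_forecast, 0.10 * wind_forecast, units)
```

The hour-ahead schedule commits one fewer block than the day-ahead one, which is the reduced reserve requirement the paragraph describes.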
Wind power will also impact a system's dynamic performance. As wind power tends to displace conventional generation, it also displaces the inertial response provided by these units, which is vital to maintaining system security when faults or outages occur. In addition, wind turbines supply asynchronous power to the system, which can impact the system's voltage stability. However, it is possible to implement control features that emulate inertial response and mitigate the impact on voltage stability, and these will be necessary in order to raise the upper limit on the maximum penetration of wind power on a system.

At the longest time-scale, power system planning to ensure that the future portfolio has sufficient capacity will have to be adapted to consider the uncertain contribution from wind plants. Market signals and policies will also have to ensure that the future portfolio can deliver a plant mix which is sufficiently flexible to meet an increasingly variable net load.

D1.A2, #2
Title: Enhanced DPlan for Enabling Large Penetration of DERs in Electricidade de Portugal (EDP).

Marcelino LF
Pedro Carvalho 

Deployment of advanced metering infrastructure promises increasing access to data about residential end-use electricity consumption. Real-time data could be used to achieve fast time-scale demand response (DR), allowing loads to participate in both energy and ancillary services markets.  Importantly, we expect DR will be a low-cost resource capable of providing much of the additional regulation, load following, and ramping required of power systems with high penetrations of variable and uncertain renewables.  However, how much and what kind of data do we really need to control loads with high fidelity?  What is the value of offline versus real-time data? How do data needs change if loads are controlled centrally via direct load control or in a decentralized manner (e.g., each load or load aggregation controls itself to minimize energy costs)? Our work focuses on using offline and real-time data along with load aggregation models, state estimation techniques, and control strategies, to control heterogeneous populations of residential thermostatically controlled loads (TCLs) to deliver power system services and participate in short time-scale energy markets. We have shown that if real-time state information from each TCL is available to a central controller, the TCL population can follow a signal very closely.  However, if real-time information is only available from the distribution substation the population can still follow the signal, but the tracking performance degrades.  These results shed light on the required sensing, communications, and data needed for loads to provide reliable power system services.  In addition to presenting these findings, I will discuss our preliminary work to understand data and modeling requirements for decentralized control.
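The full-state-information case can be illustrated with a minimal sketch. This is not the authors' model: the population size, rated power, and switching rule below are all hypothetical, and each TCL is reduced to a bare on/off state:

```python
import random

# Minimal sketch: a central controller with full state information toggles
# individual thermostatically controlled loads (TCLs) so the population's
# aggregate power tracks a reference signal to within one load's rating.

random.seed(0)
N, P_RATED = 1000, 4.0                  # number of loads, kW per load (assumed)
on = [random.random() < 0.5 for _ in range(N)]

def track(target_kw):
    """Switch just enough TCLs to bring aggregate power to the target."""
    agg = sum(on) * P_RATED
    switch = round((target_kw - agg) / P_RATED)   # loads to turn on (+) / off (-)
    candidates = [i for i, s in enumerate(on) if s != (switch > 0)]
    for i in candidates[:abs(switch)]:
        on[i] = not on[i]
    return sum(on) * P_RATED

signal = [2000, 2100, 1900, 2050]       # kW reference over four steps
errors = [abs(track(s) - s) for s in signal]
```

With per-load state the tracking error never exceeds half a load's rating; degrading the information (e.g., only substation-level aggregate measurements) is what widens this error in the work described above.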

D1.A2, #3

Model-Predictive Scheduling for ERCOT
Le Xie
Abstract: Compared with conventional static security constrained economic dispatch, look-ahead dispatch leverages the near-term forecasts of variable generation for an overall more cost-effective utilization of generation assets. This talk presents a case study of applying look-ahead dispatch in the nodal market operations of Electric Reliability Council of Texas (ERCOT) system. The major components of the ERCOT nodal market and the incorporated look-ahead functions are introduced first. Then an ERCOT-scale case study to demonstrate the benefits of look-ahead dispatch compared with static security-constrained economic dispatch is presented. Nodal price behavior with static and look-ahead dispatch is also compared and analyzed. This is joint work with Yingzhong Gu (Texas A&M), Diran Obadina (ERCOT), and Xu Luo (formerly at ERCOT).
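The benefit of look-ahead over static dispatch can be shown on a two-unit toy problem. The numbers below are illustrative assumptions, not ERCOT data: a myopic dispatch shuts a peaker down during a one-interval load dip and pays a second startup cost, while a look-ahead dispatch sees the dip is temporary and rides through it:

```python
CHEAP_CAP, CHEAP_COST = 300, 20           # MW, $/MWh baseload unit (assumed)
START, NO_LOAD, PEAK_COST = 500, 50, 100  # $, $/interval, $/MWh peaker (assumed)
load = [400, 300, 400]                    # MW per interval

def schedule_cost(peaker_on):
    """Total cost of a given on/off schedule for the peaker."""
    total, was_on = 0, False
    for demand, on in zip(load, peaker_on):
        need = max(demand - CHEAP_CAP, 0)      # MW the peaker must supply
        total += min(demand, CHEAP_CAP) * CHEAP_COST
        if on:
            total += NO_LOAD + need * PEAK_COST
            if not was_on:
                total += START                 # startup cost on each restart
        was_on = on
    return total

myopic    = schedule_cost([True, False, True])  # shuts down during the dip
lookahead = schedule_cost([True, True,  True])  # keeps the unit online
```

Here the look-ahead schedule trades one interval of no-load cost for a second startup, the kind of intertemporal saving the case study quantifies at ERCOT scale.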

D1.A2, #4
The RenewElec Project: Exploring Challenges and Opportunities for Integrating Variable and Intermittent Renewable Resources
Stephen Rose, Emily Fertig, David Luke Oates, Paulina Jaramillo, and Jay Apt, CEIC
Abstract: Thirty-three U.S. states have enacted legislation requiring that renewable electricity make up as much as 40% of electric generation. In his 2011 State of the Union Address, President Obama said, “So tonight, I challenge you to join me in setting a new goal: By 2035, 80 percent of America’s electricity will come from clean energy sources.” In contrast to most conventional sources of power, electricity produced from wind and solar, the two most abundant sources of renewable power (after large hydroelectric projects), is both variable and intermittent: variable because the wind does not blow all the time and clouds sometimes cover the sun, and intermittent because there is no sun at all during the night. Today, wind contributes roughly two percent and solar about one one-hundredth of a percent of all U.S. electricity generated.

Proponents of renewables argue that large amounts of variable and intermittent power can be easily accommodated in the present power system. Others argue that even levels as low as 10% of generation by variable and intermittent power can cause serious disruptions to power system operation. A much expanded role for variable and intermittent renewables is technically possible, but only if we adopt a systems approach that considers and anticipates the many changes in power system design and operation that will be required to make this possible, while doing so at an affordable price and with acceptable levels of security and reliability.

There is a considerable risk that if we do not do the necessary planning, and develop the necessary new policy environment, serious problems could develop, resulting in a major backlash against renewables in a decade or two. The U.S. cannot afford to let that happen. Reducing emissions of CO2 by 80% by mid-century (HR2454 called for 83%) will take everything we have got, including as much wind and solar as we can manage. By supporting high-quality, interdisciplinary research, the RenewElec project will help the nation make the transition to the use of significant amounts of electric generation from variable and intermittent sources of renewable power in a way that:
• Is cost-effective;
• Provides reliable electricity supply with a socially acceptable level of local or large-scale outages;
• Allows a smooth transition in the architecture and operation of the present power system;
• Allows and supports competitive markets with equitable rate structures;
• Is environmentally benign; and
• Is socially equitable.

D1.A2, #5
Limitations on Reduction of Wind Power Intermittency with Storage Technologies
Christina Jaworsky, Konstantin Turitsyn, Mechanical Engineering, MIT
Abstract: Stochastic variations and unpredictability of wind energy are major concerns of the power industry and hinder the wide-scale adoption of wind power. Compensation of short-term variability is one of the major challenges that the industry will face in the coming years. Our study focuses on statistical analysis of fluctuations of wind power on the minute-to-hour time scales. Using publicly available wind measurement data, we show that the statistics of fluctuations are strongly non-Gaussian and highly correlated in this time frame. Specifically, we show that traditional Gaussian models can underestimate the probability of rare events by several orders of magnitude. In the second part of our work we analyze the potential impact of advanced control and storage technologies in reducing the intermittency of wind power. Using convex optimization techniques, we study the theoretical limits on the performance of storage technologies. Specifically, we analyze the interplay between the statistics of electric power fluctuations and the characteristics of storage available in the system. We quantify the trade-off between the reduction in power intermittency, storage capacity, and charging rate. Finally, we present a general approach to the intermittency mitigation problem that incorporates multiple objectives and system constraints.
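The capacity/rate trade-off can be illustrated with a lossless toy model (not the authors' optimization; the wind trace, capacity, and power rating below are arbitrary assumptions): a storage unit absorbs deviations of wind output from its mean, and the residual swing shrinks but cannot be eliminated once the rate or capacity limit binds.

```python
wind = [10, 60, 20, 80, 30, 70, 15, 65]   # MW, hypothetical minute data
target = sum(wind) / len(wind)            # flatten toward the mean output
E_MAX, P_MAX = 40.0, 25.0                 # energy capacity, power rating (assumed)
soc = E_MAX / 2                           # state of charge, start half full

smoothed = []
for w in wind:
    p = w - target                        # desired charge (+) / discharge (-)
    p = max(min(p, P_MAX, E_MAX - soc),   # rate and headroom limits
            -min(P_MAX, soc))             # rate and stored-energy limits
    soc += p                              # lossless toy model, unit time steps
    smoothed.append(w - p)                # power seen by the grid

raw_swing = max(wind) - min(wind)
smooth_swing = max(smoothed) - min(smoothed)
```

The smoothed swing is well below the raw one but strictly positive: intermittency is reduced, not removed, which is the limit the study characterizes.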

Lunch Speaker:
Sustainability Initiative and Big Data
Dr. Krishna Kant, National Science Foundation

D1.P1, #1
System-Wide Centrally Coordinated Power System Operation and Control Challenges & Future Directions
Bruce Fardanesh
Abstract: In this presentation the requirements and benefits of centrally coordinated power system operation and control are discussed with a premise of reducing operating margins with more reliance on controls to achieve maximum asset utilization while maintaining system reliability and security. Methodologies and system analysis tools are proposed for enabling the migration from today’s control methodologies to future coordinated automatic closed-loop control of power systems.

D1.P1, #2
Adapting AGC to Manage High Renewable Resource Systems
Ralph Masiello and Warren Katzenstein

D1.P1, #3
Grid Analytics to Enhance Distribution Feeder Performance
Mirrasoul J. Mousavi
Abstract: With the advent of smart grid and proliferation of networked sensors and Intelligent Electronic Devices (IEDs) throughout the T&D system, grid data become abundant. The abundance of raw data provides unprecedented opportunities and at the same time significant challenges in the midst of rising data volumes across various levels of the control hierarchy. To achieve a resilient and self-healing grid of the 21st century, the data must be converted to actionable information at all levels through the application of analytics. This presentation will focus on the grid side of utility analytics and present the process by which analytics is applied to substation IED data to enhance feeder performance and situational awareness in a cost-effective manner. Real world scenarios and use cases will be presented along with a review of end-to-end automation system requirements to deliver the value by incorporating grid analytics in real-time operations.

D1.P1, #4
Overview of Grid Data, Collection, Communications, and Use
Bob Cummings, NERC
Abstract: NERC is embarking on a period of increased use of and reliance on high-speed data acquisition for monitoring and controlling the Bulk Electric System. Through the American Recovery and Reinvestment Act of 2009, funding became available for the installation of several phasor measurement units (PMUs) and networks to bring those data to data concentrators. Those data streams will be coming on-line within the next year. Use of those new data sources will offer significant improvements in the areas of state estimation, dynamic system ratings, early detection of inter-area oscillatory behavior, and prediction of voltage stability. Additionally, we will be using those data to perform trend analysis and develop system performance metrics to foster power system reliability. However, to take best advantage of those data, significant hurdles need to be overcome in data handling, storage, and processing. My discussion will focus on the challenges of bringing these data streams on-line and making them useful.

D1.P1, #5
Potential of FACTS and Flywheels for Transient Stabilization Against Large Wind Disturbances and Faults
Milos Cvetkovic and Kevin Bachovchin
Abstract: Transient stability problems in networks with high wind penetration are analyzed. Disturbances caused by wind power perturbations and major equipment failures are considered and their impact on system dynamics is analyzed. Next, the potential of power-electronically-switched devices for the stabilization of system dynamics during high energy perturbations is considered. In particular, the use of shunt Flexible AC Transmission Systems (FACTS) devices, which are the most common power-electronically-switched devices in today’s power grids, and flywheel energy storage systems is analyzed.  The impact of nonlinear FACTS and flywheel control on the system stability is quantified by benchmarking it against today’s FACTS control using the electrical power system on the Flores Island. Major contributions are made in exploring the potential of nonlinear control logic using the dynamical model of the interconnected system.

D1.P1, #6
Zooming-in and Zooming-out Computer Methods for Contingency Screening in Large Scale Power Systems
Sanja Cvijic and Marija Ilic (CMU), and Peter Feldmann (IBM)
Abstract: Contingency analysis remains one of the most computationally difficult challenges in power system analysis. Its complexity is due to the repetition of non-linear power flow computations for a huge number of possible outages. Yet the topology of a network in the event of an outage does not change much compared to normal operation. This project proposes a modular, multi-step “DC” power flow algorithm that enables re-use of certain steps in power flow computations between outage cases and normal operation. Instead of re-computing the entire distribution factor matrix, this algorithm handles contingencies in an intuitive way by requiring only the minimum number of computations to be repeated. The algorithm will be demonstrated for the case of an internal line outage within an area and a tie-line outage.

To deal with the non-linear nature of AC power flow (ACPF), the method should smooth abrupt variations of network parameters while enabling detection of solutions even in cases when ACPF does not converge.
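The re-use idea can be shown on a 3-bus triangle network with unit line susceptances (an illustrative textbook construction, not the project's algorithm): post-outage flows follow from line outage distribution factors (LODFs) built out of the base-case solution, with no full power flow re-solve.

```python
def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] @ [x, y] = [e, f]."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

B = (2.0, -1.0, -1.0, 2.0)      # reduced susceptance matrix, bus 2 slack
lines = [(0, 1), (1, 2), (0, 2)]

def flows(p0, p1):
    """DC power flow for injections p0, p1 at buses 0, 1 (bus 2 slack)."""
    th = solve2(*B, p0, p1) + (0.0,)
    return [th[i] - th[j] for i, j in lines]

base = flows(1.0, 0.0)          # base case: 1 MW from bus 0 to bus 2
ptdf = flows(1.0, -1.0)         # unit transfer across line (0, 1)

k = 0                           # outage of line (0, 1)
lodf = [ptdf[l] / (1.0 - ptdf[k]) for l in range(3)]
post = [base[l] + lodf[l] * base[k] for l in range(3)]
post[k] = 0.0                   # the outaged line carries nothing
```

The LODF-based post-outage flows match what a full re-solve of the reduced two-line network gives (all power rerouted onto line 0-2), which is why distribution-factor re-use scales to huge outage lists.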

Coffee Break

D1.P2, #1
The Value of Real-time Data in Controlling Electric Loads for Demand Response
Johanna Mathieu and Duncan Callaway

D1.P2, #2
The Benefits and Challenges of Data-Based Management in Power Systems: Lessons Learned from Three U.S. Case Studies
Jessica Harrison
Abstract: A number of advanced resources such as voltage conservation and distributed energy resources offer potential benefits for the electricity system, including energy savings, increased reliability and emissions reductions. However, thoughtful integration of these resources is necessary to emphasize their benefits and limit any potential liabilities. Liabilities include more frequent or greater imbalances between supply‐demand and reductions in power quality. Power system data, in particular, appears to offer a valuable means for successfully integrating these technologies. Improved monitoring and better access to grid data, for example, allows operators to more readily identify and respond to system events or even prevent them. This paper highlights three case studies to examine how advanced energy resources can benefit the grid, what liabilities exist with improper integration, and what role power system data might play in facilitating their successful integration. The paper also briefly highlights what barriers exist in using power system data. The three case studies look at integrating demand response into the wholesale markets; operating voltage conservation programs and verifying their savings; and gaining visibility and control of distributed energy resources. Findings from the paper highlight the fact that data can play an important role in both electricity grid operations and electricity market operations and indicate the need for additional research in this area.

D1.P2, #3
Cluster-Based Baselining Method for Determining Load Reductions
Jason Black, Battelle, and Yi Zhang and Weiwei Chen, General Electric
Abstract: Demand response programs often provide payments for the amount of load reduction a consumer provides during an event period. Measuring the amount of this load reduction is highly problematic, since it involves first determining the amount that would have been used if there were no event (known as the baseline) and then assuming that any amount of metered usage that is less than the baseline is due to the consumer actively reducing their consumption. Baselines are typically based on historical data, with some allowing for adjustments based on actual meter readings in the time periods prior to an event (often known as a morning adjustment). This paper presents a baseline method that uses a cohort to establish a dynamic control group for comparison of event-day consumption, rather than making direct comparisons to consumption on previous days. The cohorts are developed according to the highest correlation between normal daily consumption patterns. The baseline for a participant in a DR event is then established by the average consumption of non-participating members of their cohort. In this way, daily weather variability and other factors affecting loads are incorporated automatically into the baseline. The idiosyncratic variations in an individual load within a day are not captured, but these are also ignored by existing baseline methods. Our method is compared to a representative, history-based baseline method using real, 15-min meter data from a set of residential customers for a summer season. To determine the accuracy of the baseline method, baselines are calculated for each non-event day and compared to the actual meter data on these days. The results of this comparison show that the cohort-based baselines are significantly more accurate for predicting actual, normal consumption.
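The cohort construction can be sketched in a few lines. The four-interval profiles and names below are hypothetical, not the paper's data: the baseline for a DR participant is the average event-day consumption of the non-participating loads whose normal-day profiles correlate best with its own.

```python
def corr(x, y):
    """Pearson correlation of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

history = {                      # normal-day profiles, kW (hypothetical)
    "p":  [1.0, 2.0, 3.0, 2.0],  # the DR participant
    "n1": [1.1, 2.1, 3.2, 2.1],  # similar shape
    "n2": [1.0, 1.9, 3.1, 2.0],  # similar shape
    "n3": [3.0, 1.0, 0.5, 3.0],  # dissimilar shape
}
# Cohort: the two non-participants best correlated with the participant.
others = sorted((k for k in history if k != "p"),
                key=lambda k: corr(history["p"], history[k]), reverse=True)
cohort = others[:2]

event_day = {"n1": [1.2, 2.2, 3.3, 2.2], "n2": [1.1, 2.0, 3.2, 2.1],
             "n3": [2.9, 1.1, 0.6, 3.1]}
baseline = [sum(event_day[k][t] for k in cohort) / len(cohort)
            for t in range(4)]
```

Because the cohort members experience the same event-day weather as the participant, their average picks up weather-driven shifts automatically, which is the advantage claimed over purely historical baselines.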

D1.P2, #4
Not All Megawatts are Created Equal
Tom Casten, Marija Ilic, Masoud Nazari
Abstract: Contrary to most regulatory policy, all megawatts are not equal; one megawatt-hour generated near users with a distributed generation (DG) plant can displace 1.2 to 1.5 centrally generated (CG) megawatt-hours, and displace even more central generation and wires capacity.  Treating all megawatts as equal – paying a DG MWh the same price as a CG MWh – discourages DG deployment, which causes society to suffer higher costs and higher emissions.  To induce an optimal electric supply, policy makers and regulators need to recognize the relative value of DG megawatts.

DG megawatts usually flow directly to users, regardless of who purchased the power, thus bypassing long wires and associated line losses.  By reducing the power flowing through the wires, DG also reduces line losses on all remaining centrally generated power.  Furthermore, most DG burns less fossil fuel than central generation.  Fueled combined heat and power (CHP) plants located near thermal users do two jobs with one fire, recycling normally wasted exhaust energy to displace boiler fuel; CHP efficiency rises to 85% versus an average CG delivered efficiency of only 33%.  Some DG plants recycle waste process energy into electricity with zero incremental fuel or pollution. Other DG plants convert renewable energy – sun, wind, biomass, hydro, etc. – into power with no fossil fuel.  In other words, each DG megawatt-hour reduces fossil fuel use versus generating the same MWh in an electricity-only central fossil-fueled plant, and then displaces more than one CG megawatt-hour.

Society needs DG to meet today’s goals of reducing fossil fuel use and energy imports, reducing greenhouse gas emissions, protecting and adding local jobs and decreasing grid vulnerability to extreme weather and terrorists.  Federal Energy Regulatory Commission Chair Jon Wellinghoff recently noted a danger of overinvesting in transmission, ‘if we don’t break down the non-economic barriers to putting in the distributed generation.’  Recognizing the extra value of DG megawatts will speed achievement of today’s societal goals.
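The arithmetic behind the displacement claim can be made concrete. The loss figures below are assumptions for illustration only (the efficiencies are the ones cited in the abstract): a DG megawatt-hour consumed at the load avoids the marginal line losses of hauling central generation across the grid, and a CHP plant needs far less fuel per MWh than the delivered central average.

```python
avg_loss = 0.07         # average T&D losses, fraction of generation (assumed)
marginal_factor = 2.0   # marginal losses run roughly twice the average (assumed)

# Central MWh displaced by one DG MWh delivered at the load.
cg_displaced = 1.0 / (1.0 - marginal_factor * avg_loss)

chp_eff, cg_eff = 0.85, 0.33                # efficiencies cited in the abstract
fuel_saved = 1.0 / cg_eff - 1.0 / chp_eff   # MWh of fuel saved per MWh generated
```

Under these assumed losses one DG MWh displaces about 1.16 central MWh from losses alone; the abstract's 1.2 to 1.5 range additionally credits avoided wires and generation capacity.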

D1.P2, #5
Ships at Sea: The Original Micro-grids?
Timothy J. McCoy
Abstract: Historically, there has been only limited interaction between the terrestrial power and energy community and the naval power systems engineering community. While both communities resolve challenges associated with generation, transmission and distribution of electricity from source to user, the specifics of power architectures and the technical issues associated with each application have been different enough to create two distinct technical communities. Recent changes on the terrestrial side, including distributed generation, planning for high renewable penetration and plug-in electric vehicles, system protection, quality of service and especially micro-grid and smart grid initiatives, are making some of the historical differences between terrestrial and shipboard power systems disappear. The US Navy has been installing electric power systems aboard its ships since 1877. Initially, these were small direct current systems to replace the oil lamps that were the incumbent technology of that time. Today, shipboard power systems include several multi-megawatt generators, medium-voltage radial and zonal distribution systems, and numerous loads including electric propulsion, weapons and sensors. This talk will highlight some of the technical similarities and differences between terrestrial and shipboard power systems, concluding with a discussion of the regulatory and technological drivers for innovation aboard seagoing platforms.

D1.P2, #6
Hardware Acceleration for Load Flow Computation
Jeremy Johnson, Dept. of Computer Science, Dept. of ECE, Drexel University, Chika Nwankpa, Prawat Nagvajara (with P. Vachranukunkiet, T. Chagnon, K. Cunningham)
Abstract: This talk discusses the use of special-purpose hardware to accelerate load flow computation for power systems. Sparse LU factorization accounts for 85% of the computational cost, and our focus is on accelerating this part of the computation. Initial work focused on building a processor designed to compute the LU factorization of sparse matrices. A prototype was developed using a Field Programmable Gate Array (FPGA). This design took advantage of properties of the matrices that arise from power system analysis and obtained substantial speedup over general-purpose processors available at the time. Much of the speedup was due to hardware that performs indexing and updates when performing row operations (the merge unit).

The disadvantage of this approach is the time required to design, implement and maintain the system. Much of the processor design was devoted to data flow, control and a special-purpose cache, components found in a general processor, though with some modifications specific to the problem. This led to a second approach, where an accelerator for just the merge operation was implemented as a core that could be included in a more general processor. This approach was carried out with several parameterized processor designs that allow configuration of the memory hierarchy. Initial performance results show promising speedup with substantially less design and implementation effort.

In addition to high performance, accelerators and FPGAs consume substantially less power than general-purpose processors or high-speed platforms such as GPUs. This suggests that a network of load flow units could be deployed in the field, providing real-time analysis and feedback.
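The 85% figure implies an Amdahl's-law bound on what LU acceleration alone can buy. The hardware speedup values below are assumed for illustration; only the 85% fraction comes from the talk:

```python
LU_FRACTION = 0.85      # share of load flow runtime spent in sparse LU

def overall_speedup(lu_speedup):
    """Amdahl's law: only the LU fraction benefits from the accelerator."""
    return 1.0 / ((1.0 - LU_FRACTION) + LU_FRACTION / lu_speedup)

bound = 1.0 / (1.0 - LU_FRACTION)   # limit as the LU step becomes free
```

Even an infinitely fast merge unit cannot push the overall load flow speedup past about 6.7x, which is why the remaining 15% of the computation eventually matters too.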

Dinner Speaker:
Business Sustainability
Leonard Hyman
ABSTRACT: Discussions of sustainability rarely include the issue of business sustainability. A line of business is sustainable if, after the original capital investment, the revenues from the sale of product to willing customers at a competitive price cover the cost of product plus profit, without government subsidies or periodic injections of cash to support the line of business. Unless the line of business meets that definition, it exists with the risk of termination if external supports are removed, which is scarcely sustainable. In addition, planners of sustainable energy initiatives generally intend to lean heavily on the electric utility industry, which relies on frequent injections of outside capital to raise money for new facilities and equipment and to pay existing investors a return on and of capital. That business model is not sustainable without the implicit guarantee from government that competitors will not be allowed to enter the business or act in a way that would jeopardize the utility's ability to raise new capital. Yet government policy over the past 20 years has encouraged more competition in the electric business. Thus the foundation assumed for much sustainability planning may not, itself, be sustainable. Efforts to promote sustainability in the environmental sense could founder on lack of sufficient consideration for sustainability in the business sense.

Plenary Talk
Rethinking Electricity Markets - Data You Can Believe In: “Planning, Markets and Change in the Electricity Supply (and Demand?) Industry”
Richard Schuler
Abstract: Facilitating the evolution of the electricity supply (and demand) industry may be the biggest benefit to be derived from greater collaboration and understanding among economists and engineers, who far too frequently have been like ships passing in the dark because of their different assumptions, tools and perspectives. By focusing on improved, believable data and when and how those data might be made to be forthcoming, both through the greater use of markets and their restructuring in some instances and their override in others, mechanisms and flexible systems that might accommodate more diverse innovation may be devised. Furthermore, institutions, planning sequences and markets can be structured in ways that are more consistent with the hopes and aspirations of society, if customers are let into the game and economists and engineers collaborate. So, the smart grid (whatever that means) is viewed both as an enabling tool for customer reality-checks and as an opportunity for vast improvements in supply-side performance through better collaboration over space and time. Could such a schema built around the electric industry become a model and an important cog in a pathway toward society’s “sustainability” (our stealth planning tool)?

D2.A1, #1
Title: “How are Markets Adjusting to Large Amounts of Renewable Generation?”
Judy Chang
Principal, The Brattle Group
Kamen Madjarov
Associate, The Brattle Group

Abstract:  Many of the markets across North America are revising their market rules to accommodate the high penetration of variable energy generation, such as wind and solar.  In this presentation, we will first describe the type of system challenges that may arise with high penetration of wind and solar energy.  Then we will discuss how grid operators are setting up new routines and procedures to ensure system reliability while providing sufficient opportunities for renewable generators to sell into the established markets efficiently.  While some of the details of these market rules are still being formed, we will discuss the drivers behind how and why these rules may help grid operators and affect various market participants. 

D2.A1, #2
ARPA-E Investment in a More Flexible Grid
Tim Heidel

D2.A1, #3
Title: The Impact of Distributed Energy Resources on Utility Rate Structure 
Authors: Desmond W. H. Cai, Sachin Adlakha, Steven Low, K. Mani Chandy, and Paul De Martini

Abstract: Recent technological improvements in photovoltaic (PV) technology and the growth of distributed electricity generators (such as Bloom boxes) have demonstrated that distributed energy resources (DERs) have huge potential to become economically viable sources of electricity. However, the economic impact of such distributed generation on utility companies is not well understood. The introduction of DERs implies that consumers with DERs would demand less energy from the grid. Thus, to recover their fixed cost of investment, the utility companies would be forced to increase the rate of electricity. This in turn would exert positive feedback, making it more attractive for other consumers to switch to DERs. Such positive feedback could have a significant impact on the ability of the utility company to sustain future investments. In this work, we construct a comprehensive model of DER adoption and study its impact on the utility companies. Our model incorporates parameters such as the distribution of consumers in an area, the cost of PV, the efficiency of PV, operational costs for utility companies, etc. Using this model, we then study various scenarios for DER adoption and their impact on the financial viability of utility companies. We use our model to demonstrate that utility rate structure could have a significant impact on DER adoption.
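The positive-feedback loop described in this abstract can be illustrated with a deliberately simple sketch. All numbers and the linear adoption response below are hypothetical and are not the authors' model:

```python
# Toy positive-feedback loop between DER adoption and utility rates.
# Hypothetical parameters, for illustration only.

def simulate_adoption(years=10, fixed_cost=1000.0, total_demand=100.0,
                      adoption=0.05, price_sensitivity=0.004):
    """Each year the utility spreads its fixed cost over remaining grid
    demand; higher rates push more consumers toward DERs."""
    history = []
    for _ in range(years):
        grid_demand = total_demand * (1.0 - adoption)
        rate = fixed_cost / grid_demand          # $/unit needed to recover fixed cost
        # Hypothetical adoption response: a higher rate induces more switching.
        adoption = min(0.95, adoption + price_sensitivity * rate)
        history.append((rate, adoption))
    return history

hist = simulate_adoption()
```

In this toy model the rate and the adoption share ratchet upward together each year, which is exactly the feedback the authors analyze.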

D2.A1, #4
Title: Stratum Electricity Market: Toward Multi-temporal Distributed Risk Management for a Sustainable Electricity Provision
Zhiyong Wu, Marija Ilic
Abstract: In this paper we extend the notion of sustainable long-term electricity provision to incorporate engineering, economic, financial and environmental attributes. We establish performance metrics which can be used to evaluate how close the regulated industry, and the industry under restructuring, come to meeting the sustainability objectives. We argue that the long-term uncertainties facing the electricity industry are enormous and multi-dimensional, ranging from the supply and demand sides to the regulatory and policy sides. The current electricity market design is incomplete and not adequate for managing long-term risks in such a way that the right incentives are given to ensure sustainable services. Therefore, we believe that an extension of the current DART to enable stakeholders to express their preferences for long-term energy provision and consumption is necessary. The Stratum Electricity Market (SEM) is proposed as a means for market participants to provide long-term, mid-term and short-term bids/offers based on their own best knowledge and risk preferences. These bids/offers are used by ISOs to create system aggregated demand and supply curves for reliable energy services over various temporal and spatial horizons under uncertainties.

The SEM contains a series of forward sub-markets which clear over different time horizons and at different spatial granularity. The short-term markets under the SEM, similar to the current DART markets operated by the ISOs, are designed to balance short-term deviations from mid- and long-term commitments at the nodal level. One innovative design of the SEM is the adoption of heat rate instead of energy as the trading product in the mid- and long-term markets. The cost of power generation can be estimated from the heat rate of a given generation technology and the associated fuel price. Heat-rate-based products could give market suppliers a better way of managing fuel price volatility and a better projection of their own profits. At the same time, they enable the flexibility to lock in profits provided that long-term fuel contracts of the same duration can be obtained. The mid- and long-term heat-rate products are traded as zonal instead of nodal products to promote market transparency and liquidity. New market rules, rights and regulations (3Rs) concerning the sub-market interactions, product hierarchy and financial settlements are also proposed in the paper. The 3Rs are designed to support flexible and effective adjustments by the market participants when creating their portfolios and to manage their inter-temporal and inter-spatial risks as more information is revealed. Long-term commitments are financially binding in the sense that they can be bought back partially or fully in the short-term markets. Both physical and financial players are allowed to participate in the markets as long as sufficient collateral can be secured.
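The heat-rate product idea can be illustrated with a minimal arithmetic sketch. The prices and heat rate below are hypothetical, chosen only to show how pairing a heat-rate product with a fuel contract of the same duration locks in a margin:

```python
# Sketch of the heat-rate product logic: generation cost = heat rate x fuel price.
# All numbers are hypothetical.

def generation_cost(heat_rate_mmbtu_per_mwh, fuel_price_per_mmbtu):
    """Fuel cost of energy ($/MWh) implied by a heat-rate product."""
    return heat_rate_mmbtu_per_mwh * fuel_price_per_mmbtu

def locked_margin(contract_energy_price, heat_rate, contracted_fuel_price):
    """Profit per MWh locked in by pairing the heat-rate product with a
    long-term fuel contract of the same duration."""
    return contract_energy_price - generation_cost(heat_rate, contracted_fuel_price)

# A 7 MMBtu/MWh unit with gas contracted at $4/MMBtu, selling energy at $40/MWh:
margin = locked_margin(40.0, 7.0, 4.0)
```

The supplier's exposure to fuel price volatility disappears once the fuel contract is secured, which is the hedging benefit the abstract attributes to heat-rate products.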

D2.A1, #5
The Relationship between Wind Energy and System Operator Actions to Ensure Power Grid Reliability:  Econometric Evidence from the 50Hertz Transmission System in Germany
Kevin F. Forbes, Department of Business and Economics, The Center for the Study of Energy and Environmental Stewardship, The Catholic University of America, Washington, DC, USA; Marco Stampini, Inter-American Development Bank, Washington, DC, USA; Ernest M. Zampelli, Department of Business and Economics, The Center for the Study of Energy and Environmental Stewardship, The Catholic University of America, Washington, DC, USA
Abstract: This paper examines whether the integration of wind energy into a power system has any implications for the actions taken by a system operator to ensure reliability. The issue arises because the stability of an electricity control area requires that the supply of electricity match electricity demand at all times, not merely on average. Maintaining stability is greatly facilitated by having accurate forecasts of both supply and demand. Forecast errors impose a “cost” on the power system because intervention by the system operator is required when actual energy levels are not equal to the forecasted levels. Large forecast errors also have implications for the level of operational uncertainty.

The analysis focuses on the 50Hertz transmission control area in Germany (formerly Vattenfall), a power system accounting for approximately 41% of Germany’s installed wind energy capacity. Over the sample period of 1 November 2008 through 31 December 2009, wind energy in 50Hertz accounted for approximately 20.4 percent of consumption. Evidence is presented that the errors in forecasting wind energy in this control area are very large relative to the errors in forecasting load. An econometric model is formulated to evaluate the effect of wind energy on power system operations. The empirical analysis indicates that the wind energy forecasting errors have operational consequences. The results also suggest that the higher wind energy’s share of forecasted demand, the more likely it is that the system operator will need to undertake measures to ensure “safe, secure, and reliable operations.” More specifically, the presence of wind energy raises the probability of such measures by a factor of 15 relative to the counterfactual case of no wind energy.
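The reported "factor of 15" is a ratio of predicted probabilities from a fitted model. The following sketch shows the form of such a comparison using a hypothetical logit model; the coefficients are invented for illustration and are not the paper's estimates:

```python
import math

# Hypothetical logit model of operator intervention probability as a
# function of wind's share of forecasted demand. Coefficients are made up.

def p_intervention(wind_share, b0=-4.0, b1=12.0):
    """Probability of reliability measures given wind's share of
    forecasted demand (hypothetical coefficients)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * wind_share)))

# Ratio of predicted probabilities with 20% wind vs. no wind:
factor = p_intervention(0.20) / p_intervention(0.0)
```

With these invented coefficients the factor comes out near 9; the paper's econometric estimates imply a factor of about 15 for the 50Hertz system.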

D2.A1, #6
A Price-based Approach to Demand Side Management
Lizhi Wang
Abstract: We present a price-based approach to demand side management. First, we define a new efficacy measure for a given retail electric rate (such as flat rates, time-of-use rates, real-time rates, etc.). Numerical results using data from PJM will be presented to demonstrate the efficacy measures for different electric rates. Next, we propose a trilevel optimization approach to designing optimal electric rates with respect to the efficacy measure.

D2.A1, #7
Markets Instead of Penalties: Creating a Common Market for Wind and for Energy Storage Systems
Mark B. Lively
Abstract: Electric systems typically penalize generators for being out of balance. Surpluses are bought at a price less than the nominal price. Shortages are charged at a price higher than the nominal price. Such a market is actually a penalty plan, since the generator always loses. An alternative approach is a true spot market for imbalances, where the price is based on the condition of the utility as a whole. When the utility is long, imbalances are priced lower than the nominal price. When the utility is short, imbalances are priced higher than the nominal price. In such a true spot market for imbalances, generators operating to improve the utility's long/short position receive favorable prices, instead of always receiving unfavorable prices. Since Energy Storage Systems should always be operating in a manner that improves the utility's long/short position, Energy Storage Systems can thrive in this true spot market for imbalances. A true spot market for imbalances will also encourage right-sizing, both of wind and of offsetting Energy Storage Systems. Too many wind-caused imbalances will make the price less favorable for wind and more favorable for Energy Storage Systems.
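The pricing rule described above can be sketched with a hypothetical linear price response (not the author's exact formula): the imbalance price moves below the nominal price when the utility is long and above it when the utility is short, so a deviation that offsets the system imbalance is paid favorably.

```python
# Sketch of a "true spot market for imbalances" with a hypothetical
# linear pricing rule.

def imbalance_price(nominal_price, system_imbalance_mw, slope=0.5):
    """system_imbalance_mw > 0 means the utility is long (surplus)."""
    return nominal_price - slope * system_imbalance_mw

def settlement(generator_deviation_mw, nominal_price, system_imbalance_mw):
    """Payment (+) or charge (-) for a generator's deviation at the spot price."""
    return generator_deviation_mw * imbalance_price(nominal_price, system_imbalance_mw)

# The utility is short by 10 MW; a storage unit injecting 2 MW extra is
# paid above the nominal $50/MWh price, whereas under a penalty scheme it
# would always be paid below it.
pay = settlement(2.0, 50.0, -10.0)
```

The same rule charges a generator that worsens the imbalance an unfavorable price, which is the right-sizing incentive the abstract describes.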

Coffee Break

D2.A2, #1
Title: Optimal Power Flow over Radial Networks

Steve Low, Subhomesh Bose, Mani Chandy, Masoud Farivar, Lingwen Gan,
Dennice Gayme, Caltech, and Chris Clarke, SCE

Abstract: We consider the optimal power flow (OPF) problem over radial networks and propose two convex relaxations. OPF is generally nonconvex; recently, a sufficient condition was proved for general meshed networks under which a semidefinite relaxation is exact. We prove that, if the network is radial (a tree), then the sufficient condition is always satisfied, and hence the semidefinite relaxation is always exact, provided the upper and lower bounds on the voltage magnitudes satisfy a simple pattern. Using the DistFlow model, we then propose a simple SOCP relaxation of OPF for radial networks and prove, under a mild condition, that the relaxation is exact.
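As background, the DistFlow model the relaxation builds on can be sketched in its linearized (LinDistFlow) form, which drops the loss terms. The three-bus feeder and per-unit values below are hypothetical:

```python
# LinDistFlow sketch: squared voltage magnitudes down a radial feeder,
# neglecting losses. Hypothetical per-unit data.

def lindistflow_voltages(v0_sq, branches, loads):
    """Squared voltage magnitudes down a radial feeder.

    branches: list of (r, x) per line section, root to leaf.
    loads:    list of (p, q) consumed at each downstream bus.
    """
    n = len(branches)
    # Power through branch k is the sum of all downstream loads (no losses).
    flows = [(sum(p for p, _ in loads[k:]), sum(q for _, q in loads[k:]))
             for k in range(n)]
    v_sq = [v0_sq]
    for (r, x), (P, Q) in zip(branches, flows):
        v_sq.append(v_sq[-1] - 2.0 * (r * P + x * Q))
    return v_sq

# Two-section feeder, substation at 1.0 p.u.:
v = lindistflow_voltages(1.0, [(0.01, 0.02), (0.01, 0.02)],
                         [(0.3, 0.1), (0.2, 0.05)])
```

The SOCP relaxation in the paper works with the full DistFlow equations (including the quadratic loss terms); the linearized sweep above only shows the branch-flow bookkeeping that both share.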

D2.A2, #2
Energy-efficient Control of a Smart Grid with Sustainable Homes Based on Distributed Risk
Brian Williams, Masahiro Ono and Wesley Graybill
Abstract: Our goal is to develop distributed control systems for a smart grid that is supplied by renewable generation, such as wind and solar, and that provides energy to sustainable homes. A major new source of uncertainty in this smart grid is the intermittent energy production of renewable sources, coupled with uncertainty in demand. This combined uncertainty increases the risk of blackouts and system failure, as well as the need for spinning reserves, which rely upon fossil fuels. We propose a robust, distributed control method for smart grids and sustainable homes based on four key principles. First, we have developed chance-constrained model-predictive controllers that use stochastic models and operator-specified risk constraints (called chance constraints) to operate efficiently at specified risk levels. Second, these chance-constrained optimization problems are solved efficiently based on the concept of iterative risk allocation, which reformulates a series of stochastic optimization problems into an equivalent set of well-understood deterministic optimization problems. Third, risk allocation is implemented within a distributed control paradigm by treating risk as a resource to be allocated, and by designing mechanisms for controlling a risk market. Finally, increased flexibility is introduced on the demand side by introducing explicit, qualitative representations of user activity plans, and by reducing peak demand and sensitivity to uncertainty with controllers that exploit the increased flexibility in these plans.
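The deterministic reformulation behind the second principle can be illustrated for a single Gaussian chance constraint: P(load <= capacity) >= 1 - risk is equivalent to mean + z_{1-risk} * sigma <= capacity. The numbers below are hypothetical:

```python
from statistics import NormalDist

# Deterministic equivalent of a Gaussian chance constraint.
# Hypothetical capacity and load statistics.

def deterministic_margin(mean, sigma, risk):
    """Value the capacity must exceed for the chance constraint
    P(load <= capacity) >= 1 - risk to hold."""
    z = NormalDist().inv_cdf(1.0 - risk)
    return mean + z * sigma

def satisfies_chance_constraint(capacity, mean, sigma, risk):
    return deterministic_margin(mean, sigma, risk) <= capacity

# Load ~ N(100, 10^2); a 120-unit capacity satisfies a 5% risk bound:
ok = satisfies_chance_constraint(capacity=120.0, mean=100.0, sigma=10.0, risk=0.05)
```

Iterative risk allocation then distributes a total risk budget across many such constraints, re-solving the resulting deterministic problems; the conversion above is the building block it iterates on.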

D2.A2, #3
Security Challenges in Cyber-Physical Systems
Nicolas Christin, Bruno Sinopoli, Adrian Perrig
Abstract: Cyber Physical Systems (CPS) refer to the embedding of widespread sensing, computation, communication, and control into physical spaces. Application areas are as diverse as aerospace, chemical processes, civil infrastructure, energy, manufacturing and transportation, most of which are safety-critical. The availability of cheap communication technologies such as the Internet makes such infrastructures susceptible to cyber security threats, which may affect national security, as some of them, such as the power grid, are vital to the normal operation of our society. Any successful attack may significantly hamper the economy, the environment, or may even lead to loss of human life. As a result, security is of primary importance to guarantee safe operation of power grids. In this talk, we first present a broad overview of the security issues that apply to smart grids. We then provide an illustration with a study of the effects of false data injection attacks on control systems. This study yields a necessary and sufficient condition under which the attacker could destabilize the system without being detected. We conclude with a few thoughts on future smart grid security research and challenges.
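The false data injection idea can be illustrated in the simplest scalar-state setting: if the attacker's injection lies in the range of the measurement matrix (a = Hc), the least-squares residual used for bad-data detection is unchanged and the attack goes undetected. The three-meter system below is hypothetical and is not the speakers' model:

```python
# Stealthy false data injection against a scalar-state estimator.
# Hypothetical measurement matrix and readings.

def residual_norm(h, z):
    """Least-squares residual of measurements z against the model z = h*x."""
    x_hat = sum(hi * zi for hi, zi in zip(h, z)) / sum(hi * hi for hi in h)
    return sum((zi - hi * x_hat) ** 2 for hi, zi in zip(h, z)) ** 0.5

h = [1.0, 2.0, 3.0]              # measurement matrix (one state, three meters)
z = [1.1, 1.9, 3.05]             # noisy measurements of true state x = 1

c = 0.5                          # attacker's chosen state perturbation
z_stealth = [zi + hi * c for hi, zi in zip(h, z)]   # stealthy: a = h*c
z_clumsy = [z[0] + 1.0, z[1], z[2]]                 # naive single-meter attack

r0 = residual_norm(h, z)                 # baseline residual
r_stealth = residual_norm(h, z_stealth)  # identical to baseline: undetected
r_clumsy = residual_norm(h, z_clumsy)    # inflated residual: flagged
```

The stealthy attack shifts the state estimate by exactly c while leaving the residual untouched, whereas the clumsy attack inflates the residual and would trip a bad-data detector.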

D2.A2, #4
Title:  Setting a standard for electricity pilot studies: Meta-analysis and guidelines for design and reporting

Authors:  Alexander Davis (presenter), Tamar Krishnamurti, Baruch Fischhoff, & Wändi Bruine de Bruin
Abstract: Many evaluation studies have examined the impacts of interventions targeting residential electricity use, overall and during peak-demand hours. We offer a general approach to designing, reporting, and evaluating such studies. We apply it to 32 studies of in-home displays, dynamic pricing, or automated devices, conducted in the US or Canada. We then adjust the treatment effects of each study for bias. In-home displays are the most effective at reducing overall electricity use (~4% using reported data; ~1% after adjusting for biases). Additionally, dynamic pricing significantly reduced peak demand (~12% with reported data; ~6% after adjusting for biases), an effect that was amplified by adding home automation (~24% with reported data; ~14% after adjusting for biases). Most studies had three methodological flaws: intervention selection bias, arising from letting participants choose their program, thereby limiting researchers’ ability to separate the effects of that choice from those of the intervention; volunteer bias, arising from reliance on volunteer participants, thereby limiting researchers’ ability to generalize results to non-volunteers; and attrition bias, arising from participants dropping out of the study, limiting researchers’ ability to separate the effects of the program from the causes of dropout. Using estimates from medical clinical trials showing that these biases are associated with overestimating treatment effects, we approximate that the typical evaluation study miscalculates treatment effects by about 50%. The precision of our estimates is limited by the incomplete reporting of most studies. Our framework provides practical guidance for designing and reporting on evaluation studies, so as to improve the return on the resources invested in them. We suggest that these guidelines are also applicable to other domains in social science.

D2.A2, #5
Adaptive Load Management (ALM): Possible Implementation of Demand Response According to Well-Understood Value and Choice
Jhi-Young Joo and Marija Ilic

D2.A2, #6
What Could Deskside Supercomputers Do For The Power Grid?
Authors: Franz Franchetti, Tao Cui, and Cory Thoma 
Abstract: The computational power of commodity systems has reached unprecedented levels: a $1,000 personal computer with multicore CPUs and CUDA-enabled graphics processors (GPUs), running on 200 W of power, has a peak performance of over 1 Tflop/s (tera floating-point operations per second), which rivals the top supercomputer (ASCI Red) from 15 years ago. In this talk we discuss how this change in computational capabilities enables previously infeasible algorithms to be used even in real-time settings. We focus on two examples: (1) probabilistic power flow for the distribution network using Monte Carlo simulation, and (2) privacy-enhanced smart meters that use secure multiparty computation to share information without actually revealing it.
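Example (1) can be sketched in miniature: sample the uncertain load many times, run a (here deliberately crude) power flow per sample, and estimate the probability of a limit violation. The one-line feeder model and distributions below are hypothetical stand-ins; the point is that commodity hardware can afford very many such samples:

```python
import random

# Toy Monte Carlo probabilistic power flow on a single feeder.
# Hypothetical feeder model and load distribution.

def feeder_voltage(load_pu, r=0.05):
    """Crude one-line voltage-drop model: V = 1 - r * load (per unit)."""
    return 1.0 - r * load_pu

def undervoltage_probability(n_samples=20000, v_min=0.95, seed=1):
    rng = random.Random(seed)
    violations = 0
    for _ in range(n_samples):
        load = rng.gauss(0.8, 0.2)        # uncertain aggregate load (p.u.)
        if feeder_voltage(load) < v_min:
            violations += 1
    return violations / n_samples

prob = undervoltage_probability()
```

In a real deskside-GPU implementation each sample would be a full distribution power flow rather than a one-line formula, but the sampling structure is the same and is embarrassingly parallel.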

Lunch Speaker:
DOE OE Grid Modeling Initiative                       
Jay Caspary, DoE
Abstract: The DOE's Office of Electricity Delivery & Energy Reliability (OE) has a major grid modeling effort underway. Jay Caspary has joined OE on an assignment from SPP, along with Lauren Azar and Anjan Bose, to bring expertise and perspectives to help shape future DOE initiatives. In this presentation, Jay will describe the outreach and research results to date in his preparation of a white paper on grid analytics: framing DOE's strengths and weaknesses, addressing gaps, and making recommendations on DOE's efforts regarding tools and skills, leveraging the capabilities of the national labs and other agencies (e.g., FERC, DOD) to provide long-term value from OE in support of national needs. In his remarks, Jay will also discuss a $10M funding request for FY13 to support advanced grid modeling research in OE.

Data is a critical success factor for future grid modeling efforts, not only at DOE but throughout the bulk power industry. Information is power, and the lack of understanding and transparency regarding existing assets is an impediment to collaborative, coordinated and cost-effective bulk power system planning and operations. Getting consensus on data and metrics is paramount given aging infrastructure and the challenges of building new facilities.

D2.P1, #1
Mapping Energy Futures: Overlay of an Environmental Transmission Model on the Super OPF to Simultaneously Account for Air Quality, Locational Reliability and Price
Richard Schuler, William Schulze, John Taber, Ray Zimmerman, Max
Zhang, Jubo Yan, Charles Marquet, Kale Smith (Cornell)
Dan Shawhan, Andy Kindle (RPI)
Dan Tylavsky and Di Shi (Arizona State University)

Abstract: Energy futures for the United States depend critically on the electric power system. Meeting goals of energy independence, and of cleaner energy sources for industrial, commercial, residential, and transportation uses, depends on investment in the future power system. A planning tool that optimizes investment in generation is needed because the electric power industry faces the possibility of increased loads from plug-in hybrids, increased loads from other energy users trying to find cleaner sources of energy, renewable portfolio standards, and integration of a smart grid that allows for demand response. These challenges need to be met while maintaining reliability and with a $1000 bid cap for generators (in areas with markets) that defeats a free-market solution for new investment in generation. Both reliability and investment require planning. This paper reports on an integrated engineering, economic and environmental modeling framework for the electric power system (the SuperOPF Planning Tool), developed with support from the Department of Energy CERTS program. The model maximizes the net expected benefits of electricity production, optimizes investment in new generation, and includes environmental and other regulations and impacts. This paper presents the results of the first stress testing of the SuperOPF Planning Tool using a reduced network model of the Northeast for a number of policies: a base case, with no new environmental legislation; enactment of the Kerry-Lieberman CO2 allowance proposal in 2012; following Fukushima, retirement of all US nuclear plants by 2022, with and without Kerry-Lieberman; marginal damages from SO2 and NOx emissions charged to coal, gas and oil-fired generation; plug-in hybrid electric vehicle load filling; wind incentives in place; and two cases which combine these. The cases suggest that alternative policies may have very different outcomes in terms of electricity prices, emissions, and health outcomes. In all cases, however, new natural gas combined-cycle plants are the dominant technology for future investment. Policies can change how much new generation is built, whether other types of plants are built, or what types of plants are retired. We are in the process of completing a detailed model for the Eastern Interconnection using a 4,400-node reduced network that retains all high-voltage lines. Our research shows that this level of network detail is required since future investment in generation is driven in great part by line flow capacities. We plan to extend the model to the entire nation in the coming year.

D2.P1, #2
Title: PMU Placement to Ensure Observable Frequency and Voltage Dynamics: A Structured System Approach.

Sergio Pequito, Qi-Xing Liu, Soummya Kar, Marija Ilic
Carnegie Mellon University, ECE Department,
5000 Forbes Avenue, Pittsburgh, PA 15213

Abstract: Modern sensing infrastructures based on fast sensing devices, such as phasor measurement units (PMUs), are nowadays cheaper but still subject to significant power constraints. Thus, one has to design a network topology considering the trade-off between energy cost and the overall performance of distributed estimation techniques. Among the possible performance criteria for estimation, one can use the concept of an observability measure. Nevertheless, little work has been done with this criterion, leaving the following question open: how can we perform PMU placement so as to increase the observability measure of the system? Several sensor placement methods developed in the literature consider only generic observability from structured systems. Those approaches are independent of the actual parameter values and are only a function of the underlying structured system, i.e., the zero and non-zero pattern of the physical system.

In this paper, we give a partial answer to the aforementioned question. To that end, we propose a method that, contrary to most of the literature, considers the parameters and explains how those parameters affect the design of the PMU placement. The goal of the proposed method is twofold: (i) derive the minimum number K of PMUs such that the system is observable and the observability measure is maximized, and (ii) given the K PMU locations found in (i), determine where K′ additional sensors should be placed to maximize the observability measure. Simulation results using the proposed method are presented for the IEEE 14-bus standard test system.
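The flavor of observability-driven placement can be conveyed with a greedy rank-based heuristic. This is a simplification for illustration, not the authors' method, and the small candidate measurement rows below are hypothetical:

```python
# Greedy sensor placement: add candidate measurement rows until the
# observability matrix reaches full rank. Hypothetical 3-state system.

def rank(rows, tol=1e-9):
    """Matrix rank via Gaussian elimination on copies of the rows."""
    m = [list(r) for r in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > tol), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > tol:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def greedy_placement(candidates, n_states):
    """Pick candidate rows that most increase the observability rank."""
    chosen, rows = [], []
    while rank(rows) < n_states and len(chosen) < len(candidates):
        best = max((i for i in range(len(candidates)) if i not in chosen),
                   key=lambda i: rank(rows + [candidates[i]]))
        chosen.append(best)
        rows.append(candidates[best])
    return chosen

# Three states, four candidate PMU measurement rows (hypothetical):
cands = [[1, 0, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]]
picked = greedy_placement(cands, 3)
```

Note how the duplicate candidate row is skipped because it adds no rank; the paper's contribution is to replace this binary rank criterion with a parameter-dependent observability measure.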

D2.P1, #3
Data Analysis Finds Hidden Information in Electrical Systems
Dave Loucks
Abstract: Depending on the type and size of the electrical system, different techniques are used to monitor performance. Sensors that monitor voltage, current, phase angle (including time-aligned synchrophasors), temperature (e.g., IR imaging, DTS, etc.), vibro-acoustic signals, and other inputs are commonly used. Data from these sensors are processed (e.g., FFT, state estimation, etc.) to identify information “hidden” within the raw data. This paper discusses another method of processing data to reveal conductor impedances at a higher resolution than previously available. This method can identify loosening pressure junctions, failing breaker contacts, misaligned draw-out breaker stabs and improperly terminated cables. The model also allows for the calculation of real-time conductor losses. Therefore, besides providing a solution to track system reliability, this method can be used as an input to an economic dispatching model when options exist for routing power over multiple paths.

D2.P1, #4
Title: Energy Management by Distributed Sociotechnical Systems

Bruce McMillin, Suzanna Long, Joon-Ho Choi, Badrul Chowdhury, Mariesa Crow
Missouri University of Science and Technology

Abstract: The US National Science Foundation’s vision of a sustainable energy future requires developing fundamental knowledge, exploring alternative energy sources and technologies, investigating novel pathways for human energy futures, and facilitating a public understanding of sustainable energy. Sustainability must equally embrace environmental, economic, and social justice. Too often sociotechnical systems are forged from technology with people as a secondary consideration; such solutions may be underutilized or even discarded because people cannot live in them. Thus, if you build it, will they come? This paper outlines an energy sustainability science synthesized from the scientific integration of technological, economic, and social actors into a sociotechnical community. The proposed science forms a societal nexus of Sustainable Energy, integrating the three E’s of sustainability (Environmental, Economic, and Equality/social justice) with Energy on a common footing. The science is rooted in the fundamentals of information sharing, SmartGrid technology and economics, human-centered environmental control, and consumer acceptance. Validation of the science is proposed through models, evaluations, and practice/implementation.

D2.P1, #5
Title: Factored Models for Multiscale Decision-Making in Smart Grid Customers
Authors: Prashant Reddy, Machine Learning Department, School of Computer Science, Carnegie Mellon University and Manuela Veloso, Computer Science Department, School of Computer Science, Carnegie Mellon University 
Abstract: Active participation of customers in the management of demand, and renewable energy supply, is a critical goal of the Smart Grid vision. However, this is a complex problem with numerous scenarios that are difficult to test in field projects. Rich and scalable simulations are required to develop effective strategies and policies that elicit desirable behavior from customers. We present a versatile agent-based "factored model" that enables rich simulation scenarios across distinct customer types and varying agent granularity. We formally characterize the decisions to be made by Smart Grid customers as a multiscale decision-making problem and show how our factored model representation handles several temporal and contextual decisions by introducing a novel "utility optimizing agent." We further contribute innovative algorithms for (i) statistical learning-based hierarchical Bayesian timeseries simulation, and (ii) adaptive capacity control using decision-theoretic approximation of multiattribute utility functions over multiple agents. Prominent among the approaches being studied to achieve active customer participation is one based on offering customers financial incentives through variable-price tariffs; we also contribute an effective solution to the problem of "customer herding" under such tariffs. We support our contributions with experimental results from simulations based on real-world data on an open Smart Grid simulation platform. (Work under submission.)

D2.P1, #6
Big Data!  How Will It Be Used?

Howard Illan, Energy Mark
Abstract: New data will be available in quantities greater than ever before. These data will include operating data from PMUs that will enable measurements previously unavailable. Are evaluation methods ready to be implemented to evaluate and use these data? In addition, even greater amounts of interval use data will be available from interval metering. Are evaluation and pricing mechanisms ready to use the data effectively? The alternative will result in garbage-in/garbage-out analysis. I will present my views on these issues, recognizing the pitfalls and suggesting paths to workable solutions.

Coffee Break - Sponsored by OSIsoft

D2.P2, #1
Dynamic Line Ratings a Solution for Congested Transmission Lines
Tip Goodwin

D2.P2, #2
Title: Decentralized Algorithms for Oscillation Monitoring in Power Systems from Overwhelming Volumes of Phasor Data

Aranya Chakrabortty (North Carolina State University)
George Michailidis (University of Michigan)
Yufeng Xin (UNC Chapel Hill)

Abstract: As the number of sensors (Phasor Measurement Units, or PMUs) in the US transmission grid scales up to thousands within the next few years under the US Department of Energy’s smart grid demonstration initiative, Independent System Operators (ISOs) and utility companies are struggling to understand how the resulting gigantic volumes of real-time data can be efficiently harvested, processed, and utilized to solve wide-area monitoring and control problems. It is rather intuitive that the current state-of-the-art centralized processing architecture will not be sustainable under such data bursts, and decentralized algorithms must be developed instead. In this talk we will propose such an algorithm for one of the most critical applications of phasor data, namely modal decomposition for detecting slow/fast oscillation modes in the system and evaluation of their respective damping factors. Given multiple coherent generation clusters in the system, we will first use data from all PMU sources to calculate the oscillatory modes, their damping, and their participation in a centralized fashion. Next, we will categorize the PMUs into several disjoint sets, and use the data from each of these sets to evaluate the modal frequencies for the entire system individually, assuming that the network has a connected topology guaranteeing system observability. A global estimate for any eigenvalue of interest will then be computed from the geometric mean of the eigenvalues obtained from the disjoint estimations, and finally parametric bounds will be derived to indicate how this geometric mean, representing the ‘fused’ distributed solution, compares to the centralized solution. We will illustrate our results with common power system network models as well as the WECC system using real disturbance data.
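The fusion step can be sketched in miniature: each disjoint PMU subset produces its own estimate of an oscillation frequency, and a global estimate is the geometric mean of the subset estimates. The crude zero-crossing estimator and the per-subset values below are hypothetical stand-ins for the authors' modal analyses:

```python
import math

# Toy sketch of geometric-mean fusion of per-subset modal estimates.
# Hypothetical signal and subset estimates.

def zero_crossing_frequency(samples, dt):
    """Crude frequency estimate (Hz) from upward zero crossings."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings / (len(samples) * dt)

def geometric_mean(estimates):
    return math.exp(sum(math.log(e) for e in estimates) / len(estimates))

# A damped 0.5 Hz inter-area mode sampled at 50 samples/s for 10 s:
dt = 0.02
t = [k * dt for k in range(500)]
sig = [math.exp(-0.1 * s) * math.sin(2 * math.pi * 0.5 * s + 0.3) for s in t]
f_est = zero_crossing_frequency(sig, dt)

# Hypothetical frequency estimates of the same mode from three PMU subsets:
fused = geometric_mean([0.48, 0.50, 0.52])
```

In the proposed algorithm, each subset would run a full modal decomposition rather than a zero-crossing count, and the paper's parametric bounds quantify how far the fused geometric mean can drift from the centralized solution.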

D2.P2, #3
Title: “Real time data consumption and archival for the power grid”

Dr. Charles H. Wells, OSIsoft, LLC
Center of Excellence
Abstract: Secure wide area transport of high speed time synchronized data with minimal latency to a centralized location is discussed. Near real time consumption of these data by multiple clients and the types of basic calculations are outlined, with code examples of unwrapping of discontinuous phase angle data. Archival volumes and data compression are reviewed, and compression examples with real PMU data are shown. This will debunk the “myth” that PMU data must be stored in uncompressed form.
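Since the abstract promises code examples for unwrapping discontinuous phase angle data, here is a minimal sketch of the standard technique: whenever consecutive PMU angle samples jump by more than 180 degrees, the wrap-around is removed by adding or subtracting 360 degrees. The sample sequence is hypothetical:

```python
# Phase angle unwrapping for PMU data reported in (-180, 180] degrees.

def unwrap_degrees(angles):
    """Remove the +/-360 degree discontinuities from a wrapped angle series."""
    out = [angles[0]]
    offset = 0.0
    for prev, cur in zip(angles, angles[1:]):
        jump = cur - prev
        if jump > 180.0:        # wrapped downward through -180
            offset -= 360.0
        elif jump < -180.0:     # wrapped upward through +180
            offset += 360.0
        out.append(cur + offset)
    return out

# A steadily advancing angle that wraps at +180 degrees:
wrapped = [170.0, 175.0, -180.0, -175.0, -170.0]
unwrapped = unwrap_degrees(wrapped)
```

After unwrapping, the series advances monotonically, which is what downstream calculations (frequency estimation, angle-difference monitoring) require.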

D2.P2, #4
Title: Cloud Computing-Based Architecture for Future Distribution Systems
Jignesh Solanki, West Virginia University
Abstract: Smart grid investment led by department of energy initiated nationwide revitalization of electric grid. Many utilities have started installing AMI meters, solar farms, wind farms, battery storage and static var compensators specifically in electric distribution networks. The installation of meters has resulted in voluminous amount of data generating many research areas. One of them is Demand Response (DR) that utilizes consumer behavior modeling considering different scenarios and levels of consumer rationality represented by Price Elasticity Matrices (PEMs). DR when applied to a real world distribution network considering a day-ahead real time pricing scenario can result in considerable boost in system voltages paving way for further demand curtailment through demand side management techniques like Volt/Var Control (VVC). The other research area is validating the existing models with the additional data as well as building new models for the different types of distributed generation prolifering in distribution systems. An analysis of distribution feeders with these developed models indicates their impacts and paves way for mitigation solutions in cases where distribution system cannot handle this generation. Yet another important research area is Multi Processing for coordination among all events of demand side management, distributed generation integration and control. Utilities are overwhelmed with amount of data coming in and hence cloud computing technology can provide an excellent platform to handle the data. Current cloud computing technology has the potential needed to provide the security, scalability and performance required for near-real time distribution network operation. Most importantly, smartly designed cloud architecture could significantly reduce the cost incurred with traditional data centers. 
There are several open challenges for researchers in the Smart Grid area, and this talk will present current research results on demand response, Volt/VAR control, and solar integration in distribution networks, and will introduce some of these challenges.
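The PEM idea mentioned above can be illustrated with a small sketch: each entry E[i][j] of the elasticity matrix gives the relative change in hour-i demand per relative change in hour-j price, with negative diagonal entries (load drops when its own price rises) and positive off-diagonals (load shifts to cheaper hours). All numbers below are illustrative assumptions, not calibrated values from the talk.

```python
# Hypothetical 3-hour price-elasticity-matrix (PEM) demand-response sketch.
E = [
    [-0.10,  0.02,  0.02],   # hour 0 responds to prices in hours 0..2
    [ 0.02, -0.10,  0.02],
    [ 0.02,  0.02, -0.10],
]
base_demand = [100.0, 120.0, 150.0]   # MW per hour (illustrative)
base_price  = [30.0, 30.0, 30.0]      # $/MWh flat baseline
rtp_price   = [25.0, 30.0, 45.0]      # day-ahead real-time prices

def respond(E, d0, p0, p):
    """Demand after DR: d_i = d0_i * (1 + sum_j E[i][j] * (p_j - p0_j) / p0_j)."""
    new = []
    for i, d in enumerate(d0):
        rel = sum(E[i][j] * (p[j] - p0[j]) / p0[j] for j in range(len(p0)))
        new.append(d * (1.0 + rel))
    return new

# Load rises slightly in the cheap hour and drops in the expensive one,
# modeling partially rational consumers shifting consumption:
print(respond(E, base_demand, base_price, rtp_price))
```

In the DR scenario described in the abstract, such a response model would feed the day-ahead pricing loop; richer PEMs encode different levels of consumer rationality by scaling or sparsifying the off-diagonal terms.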

D2.P2, #5
Markets and Demand Management Coupling with Renewable Energy Sources
Alberto J. Lamadrid
Abstract: As more countries adopt renewable energy sources (RES) to cover their electricity needs, new challenges arise from the operational and reliability points of view. With further adoption of zero-marginal-cost resources, and due to the inherent uncertainty associated with these resources, operational considerations, such as ramping costs for conventional generation, will become important for generators. The engagement of demand may play a significant role in the financial viability of the conventional generators that compensate for the variability added to the operation. Additionally, due to the locations where it is economical to install RES, congestion in the system is likely to increase, leading to actual spilling of renewable capacity. The objective of this paper is to study the effect of internalizing costs and engaging resources aimed at operating a secure system while maintaining its sustainability. The measures studied are, first, the role that ramping costs play in the operation of the system, to counteract the unpredictable nature of RES; second, the optimal management of Energy Storage Systems (ESS), given the inter-temporal constraints they face, to be coupled with RES; and third, the use of deferrable demand to better manage uncertainty in RES and demand. This extends the concept of deferrable demand to include thermal storage, and in particular the use of new technologies such as ice batteries to replace standard forms of air conditioning.

An empirical analysis is done by simulation in MATPOWER (Zimmerman, Murillo-Sanchez, and Thomas (2011)) of a multi-period, stochastic, security-constrained AC optimal power flow (SCOPF). A set of specific constraints reflecting ramping costs for all generation is included. The expected amount of Load Not Served (LNS) is also endogenously solved. Wind is modeled as the RES, with a characterization similar to historical data from the National Renewable Energy Laboratory (NREL). The network model is a heavily modified version of a reduction of the Northeast Power Coordinating Council system (NPCC; Allen, Lang, and Ilic (2008)). The objective function to be minimized follows a formulation similar to a security-constrained AC optimal power flow (SC AC OPF; see Condren, Gedra, and Damrongkulkamjorn (2006)), with a probabilistic approach to the management of the possible contingencies that the system may face (Chen et al. (2005)). The commitment of the generators is assumed to have been solved beforehand. One of the main features of this formulation is the physical ability to move from a high-probability case (a “base case”) to any contingency, because physical ramping constraints are explicitly included. For each time period, several base cases are part of the formulation, each one reflecting an overall “state of nature”. Such states of nature can correspond, for example, to observed overall levels of output from a stochastic resource in the system. Additionally, the equations describing the behavior of an ESS are included. Since energy can be used both to perform arbitrage over the day and to cover contingencies, a set of additional variables is included to track the maximum and minimum expected charge levels over the optimization horizon. This formulation adds flexibility in the ESS usage and solves for the optimal schedule of the ESS for ancillary services.
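The storage bookkeeping described above can be sketched in a few lines: given a candidate charge/discharge schedule, track the state of charge, enforce the capacity bounds, and record the running maximum and minimum levels that the formulation constrains. The function name, efficiency value, and limits below are illustrative assumptions, not the paper's model.

```python
def simulate_ess(schedule_mw, soc0_mwh, cap_mwh, eta=0.9):
    """Positive schedule entries charge, negative discharge (hourly steps,
    so MW == MWh per step). Returns the state-of-charge trajectory and the
    running (max, min) levels checked against the [0, cap] bounds."""
    soc = soc0_mwh
    traj, hi, lo = [], soc0_mwh, soc0_mwh
    for p in schedule_mw:
        if p >= 0:
            soc += eta * p        # charging loses a fraction (1 - eta)
        else:
            soc += p / eta        # discharging draws extra from storage
        if not (0.0 <= soc <= cap_mwh):
            raise ValueError(f"SoC {soc:.1f} MWh violates [0, {cap_mwh}] MWh")
        traj.append(soc)
        hi, lo = max(hi, soc), min(lo, soc)
    return traj, hi, lo

# Charge in two cheap hours, discharge in two expensive ones:
traj, hi, lo = simulate_ess([10, 10, -5, -5], soc0_mwh=20, cap_mwh=50)
```

In the stochastic formulation, `hi` and `lo` would become decision variables bounded across all scenarios, which is what lets the same stored energy serve both daily arbitrage and contingency reserves.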

The results compare the total costs of covering demand as calibrated from historical data for the NPCC network. Three main metrics are used to evaluate the performance of the system: (1) the amount of wind that is accommodated in the network; (2) the expected operating costs incurred by the system, with compensation including payments for energy and for reserves for contingencies and ramping; and (3) the amount of conventional capacity needed to securely serve the load. The implication is that if regulators ensure that the correct economic incentives are given to market participants, the higher cost of capacity associated with renewables will lead to investment in new capabilities that make it feasible to manage the total system load more effectively. Consequently, there will be systemic benefits in terms of the generation capacity needed to cover peak demand, while consumers will not witness a change in the amount of energy services delivered.

The main contribution is the use of a rigorous stochastic optimization model that extends the dispatch models used by Independent System Operators (ISOs). We find significant benefits in the use of controllable demand, both from the fleet-operation point of view and from the perspective of welfare and payments in the system for reliability purposes. One of the premises of ESS adoption is that time arbitrage will make storage more economically viable. We have argued before that charging an ESS when energy is cheap and discharging it during expensive hours is not enough to justify the investment costs given current technology. Moreover, such an approach ignores the general-equilibrium effects that will eliminate price differences over the day. The case illustrated in this article is a case in point: establishing a procedure to defer load (e.g., a contract, or the use of novel devices to cover temperature-sensitive demand) achieves less congestion in the system, better utilization of the RES available, and a decrease in spatial and temporal price differences. In all measures used, there is a clear superiority of engaging demand. The main advantage from the operational point of view is the reduction in observed congestion, leading to substantial savings for consumers and Pareto-improving usage of the available generation assets. For generators, a load-following reserve product is proposed to correct a market failure derived from not accounting for the private costs of ramping conventional generation to counteract the variability of RES. While such a product will inevitably reduce the amount of wind dispatched, or in the best cases leave it at the same level if the wind resource is “nicely” behaved, it is necessary to include collateral damages such as increased emissions and the health effects on populations close to the affected generation plants. This is a message we hope will resonate with policy makers and ISOs.

D2.P2, #6
A Proposed Framework For A Simple Plug-And-Play Information Exchange Standard
Andrew Hsu and Marija Ilic, CMU ECE
Abstract: As newer and different components are introduced to both the transmission and distribution networks of the electric energy system, the information exchange protocol must be updated to reflect the tasks that operators and grid participants wish to perform. Conventional SCADA collects and transmits measurements to the central operator, which performs state estimation to confirm the topology of the network in a top-down approach.
Based on work on distributed power flow calculation, a distributed state estimator is proposed that requires “smart” calculators on, and corresponding to, all network components, including both lines and nodal components. This relieves the system operator of the need for parameter knowledge and redundant measurements to perform a centralized state estimation calculation. A new, simple information exchange standard can be derived from this to facilitate the distributed power flow calculation, based on what information must be communicated to the system operator. In addition, these “smart” components can independently determine connectivity with their neighbors, thus verifying topology data for the system operator.
One use for this power flow data would be to check for line congestion, which can be done by each line component instead of by the system operator, who would otherwise have to check all lines in the system with a centralized calculation. This task can be combined with other “smart” devices such as dynamic line rating units (DLRs), which can determine whether a line would violate its thermal limits for a given power flow and, if so, raise a red flag to the system operator.
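The per-line check just described can be sketched as follows: each smart line component compares its own measured flow against a thermal limit, which a DLR unit may update, and sends a message to the operator only on violation. The class, method, and message names are illustrative assumptions, not part of the proposed standard.

```python
class SmartLine:
    """A line component that checks its own flow against a thermal limit."""

    def __init__(self, line_id, static_limit_mva):
        self.line_id = line_id
        self.limit_mva = static_limit_mva

    def update_rating(self, dlr_limit_mva):
        """A co-located DLR unit may raise or lower the effective limit
        based on ambient conditions (temperature, wind cooling, etc.)."""
        self.limit_mva = dlr_limit_mva

    def check(self, flow_mva):
        """Return a red-flag message for the operator, or None if OK.
        Flow may be negative depending on direction, so compare magnitude."""
        if abs(flow_mva) > self.limit_mva:
            return {"line": self.line_id, "flow": flow_mva,
                    "limit": self.limit_mva, "alarm": "THERMAL_LIMIT"}
        return None

line = SmartLine("L1-2", static_limit_mva=100.0)
assert line.check(95.0) is None   # within the static rating: stay silent
line.update_rating(90.0)          # hot, windless day: DLR derates the line
flag = line.check(95.0)           # same flow now violates the dynamic limit
```

The operator then receives only exception traffic rather than every measurement, which is the communication saving the distributed scheme is after.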
Other uses may be found in the distribution network, where many small consumers (and potential producers and storage elements) may participate without centralized planning or direct communication. A standardized information exchange protocol would enable these smaller elements, whose participation may change frequently both temporally and geographically, to engage in distributed power flow calculations, which would give more detailed load information than simple aggregation, as well as allow for distributed contingency checking, as in the line congestion case, enabling utilities to respond more effectively.
A simple standardized information exchange protocol is proposed, based on a mathematical algorithm for distributed computation, to facilitate calculation in the system using computing and communication units and to synergize with other smart devices. This may open many more options in how the future electric energy system is operated, monitored, and controlled.