Improving Consistency in Regional Transmission Planning

Dated: September 28, 2015

Posted in: Commentary

By Denis Bergeron P.E., Senior Utility Analyst, Maine Public Utilities Commission


This paper describes how the latitude afforded in the interpretation of various transmission planning standards allows inconsistency between planning regions, and even between adjacent transmission utilities within the same planning region. The discussion below highlights a couple of examples of such inconsistencies and proposes a possible solution. The examples provided here are illustrative only and are not intended as critiques of particular projects or practices.

Problem statement:

Federal and regional transmission planning standards rely on highly subjective terms that are intended to ensure comparable reliability levels among planning authorities and utilities, but those terms currently allow a wide range of interpretations. Systems planned on widely divergent interpretations of these subjective terms will lack equivalence and homogeneity, defeating the purpose of standards. Furthermore, in regions where transmission costs are socialized over a system of multiple service territories, disparate interpretation or application of planning standards can lead to consumers paying the same rate for uneven levels of reliability.

Power system reliability is measured in two ways: resource adequacy and transmission security. The industry has adopted a statistical convention that “one in ten” is an appropriate level of reliability for resource adequacy purposes, but no statistical convention for reliability has been established for transmission security. Why not? In the New England Planning Advisory Committee meetings, stakeholders debate the initial system conditions (the base case) assumed for nearly every transmission needs assessment conducted. Those debates almost always center on the probability of occurrence of the events assumed in the base case and ultimately lead back to the question of whether the stress is “reasonable.” The introduction of statistical parameters to narrow the range of interpretation afforded by the current language can enhance the uniformity of transmission planning between utilities and address one of the more controversial topics in the transmission needs assessment process: the development of the base case.

Mandatory Transmission Planning Standards:

The 2005 Energy Policy Act resulted in the creation of a non-industry reliability organization with the responsibility to institute and enforce mandatory reliability standards. A system of voluntary conventions and standards had existed prior to FERC’s designation of NERC as the Electric Reliability Organization (ERO), but compliance with them had been voluntary. The evolution of the standards from voluntary conventions to enforceable mandates can be traced through numerous FERC rulemakings and continues today.

How Transmission Planning Needs are Determined:

The transmission planning exercise has remained fairly static for many years. An electrical model that simulates the operation of the transmission system, including all of the existing generators and transmission elements, is developed. That model is then subjected to a number of events that might occur in order to test how the system is likely to perform. FERC’s orders adopting the NERC Transmission Planning Standards (the TPLs) have attempted to: standardize the initial conditions (base case) under which the system is studied, define the types of “planning events” (contingencies) to which it should be subjected, and establish performance levels that it must achieve. If performance levels are not achieved, transmission operators must propose corrective action plans that address the performance issues.

FERC, NERC, and the regional entities[1] have not yet been as prescriptive about base case conditions as they have about the contingency events, despite recognition that the assumed base case conditions are as important as the contingencies. FERC attempted to address the issue by directing that transmission planners should conduct sensitivity studies.

As stated in the NOPR, system conditions are as important as contingencies in evaluating the performance of present and future systems, and yet TPL-001-0 does not specify the rationale for determining critical system conditions and study years. [C]onsistent with our discussion of the issue above regarding sensitivity studies and critical system conditions, the Commission concludes that proposed modification (1), which requires that critical system conditions be determined by conducting sensitivity studies, is justified. Accordingly, we direct the ERO to modify the Reliability Standard to require that critical system conditions and study years be determined by conducting sensitivity studies with due consideration of the range of factors outlined above.[2]


Section B.R2.1.4 of NERC’s TPL-001-4 incorporates this approach by indicating that sensitivities should “stress the System within a range of credible conditions that demonstrate a measurable change in System response.” The Northeast Power Coordinating Council simply asserts that studies should “stress the system.”[3] The directory provides no guidance on how credible or probable the conditions should be, but when studying the system for its response under extreme contingencies, “EC testing should use a dispatch pattern considered to be highly probable for the year and load level being studied.”[4] ISO New England’s planning procedure provides different guidance: design studies will assume “conditions that reasonably stress the system.”[5]

One conclusion that can be drawn from the language quoted above is that there is agreement that transmission systems should be planned to withstand unexpected events under challenging but credible, probable, or reasonable conditions. However, there seems to be no convention about what those terms mean. Is the stress imposed by the “range of credible conditions” specified by NERC the same as the “stressed system” dictated by NPCC, or the “reasonable stress” in the ISO New England Planning Procedure?

The degree of variability permitted by the current standards can be demonstrated by examining the probabilities of the base cases of transmission planning studies. The probability of a base case in which numerous independent events take place simultaneously is determined by the multiplication rule of statistics: given independent events A and B, the probability of both occurring together is the product of the probability of A and the probability of B. Using this approach, it appears that the application of the planning standards in the adjacent control areas of New York and New England is quite different.

  • Transmission planners in NYISO assume the system is under stress when it is at the 50/50 peak load, with generator availability modeled as EFORd to reflect unit outages.[6] Because these events must occur together, the probability of this circumstance is the product of the probabilities of each independent circumstance: hitting the single peak hour out of 8,760 hours, times the 50% probability level, times the average availability rate of the generation fleet,[7] gives (1/8,760) × .5 × .9269 = 5.291 × 10⁻⁵.
  • Transmission planners in New England apply a different level of stress by modeling the system at the 90/10 peak forecast, hand-picking the two most “impactful” generators,[8] and assuming they are out of service. Multiple generation dispatches with varying levels of probability are developed, and the probability of the scenarios can range widely depending on the reliability of the two generators assumed out of service. In this generic example, the New England base case probability is the product of the probabilities of each independent circumstance: hitting the single peak hour out of 8,760 hours, times the 10% probability level, times the unavailability rate of each of the two generators assumed out of service, gives (1/8,760) × .1 × .075 × .075 = 6.421 × 10⁻⁸.
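The arithmetic in the two bullets above can be reproduced in a few lines. This sketch uses only the figures quoted in the text (8,760 hours, the 50/50 and 90/10 load probabilities, the .9269 fleet availability, and the .075 EFORd):

```python
# Multiplication rule applied to the two base cases described above.
HOURS_PER_YEAR = 8_760

# New York: single peak hour, 50/50 load level, average fleet availability.
p_ny = (1 / HOURS_PER_YEAR) * 0.5 * 0.9269

# New England: single peak hour, 90/10 load level, and the two
# hand-picked generators (EFORd = .075 each) out of service.
p_ne = (1 / HOURS_PER_YEAR) * 0.1 * 0.075 * 0.075

print(f"New York base case:    {p_ny:.3e}")        # 5.291e-05
print(f"New England base case: {p_ne:.3e}")        # 6.421e-08
print(f"Ratio (NY / NE):       {p_ny / p_ne:.0f}") # 824
```

The ratio of roughly 824 is the source of the “more than 800 times” comparison drawn below.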

New York’s base case is more than 800 times more likely than New England’s. Two adjacent control areas, both with a requirement to “stress the System within a range of credible conditions,” seem to have different views of what “credible” means.

Even within the same control area and in the same planning exercise, vastly different levels of “credible” or “reasonable” stress are being applied across the sensitivity scenarios. An analysis of nineteen base case sensitivities in a recent New England area Needs Assessment[9] demonstrated wide disparity in the probability of those base cases. In that analysis, the most likely scenario had a probability of 3.95 × 10⁻¹⁸ and the least likely a probability of 9.36 × 10⁻³⁶. The ratio of the two indicates that the least likely scenario is 4.22 × 10¹⁷ times less likely to occur, begging the question of whether both sensitivities can be considered to be imposing “reasonable stress” within the same study.[10]
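The ratio cited in that analysis follows directly from the two probabilities given in the text:

```python
# Most and least likely base case sensitivities from the Needs Assessment
# analysis discussed above.
p_most_likely = 3.95e-18
p_least_likely = 9.36e-36

ratio = p_most_likely / p_least_likely
print(f"{ratio:.2e}")  # 4.22e+17
```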


A Possible Solution:

Transmission planning practices in the U.S. are clearly on an arc towards greater uniformity and standardization, but as demonstrated above, a significant “eye of the beholder” aspect still exists. One way to increase standardization would be to develop a numerical definition of what is meant by credible, probable, or reasonable by establishing a value or range for the probabilities of base case sensitivities.[11] The calculation can be made with information that is already available:[12]

  • NERC GADS data specific to each region and divided by plant type provides Expected Forced Outage Rates for generating plants and can be used to develop a probability for choosing specific plants out of service.
  • Peak load forecasts are already expressed in terms of their statistical probability.
  • Hourly interface flows between systems are available.
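As a thought experiment, the screening such a convention implies can be sketched in a few lines. Everything below is hypothetical: the probability floor, the scenario definitions, and the helper function are illustrative assumptions, not values drawn from any standard or study.

```python
# Hypothetical sketch of screening base case sensitivities against an
# agreed probability floor. The floor and scenarios are illustrative only.
HOURS_PER_YEAR = 8_760
PROBABILITY_FLOOR = 1e-7  # hypothetical consensus value, not a real standard

def base_case_probability(load_probability, unit_efords):
    """Multiplication rule: peak-hour odds, times the load-level
    probability, times the EFORd of each unit assumed out of service."""
    p = (1 / HOURS_PER_YEAR) * load_probability
    for eford in unit_efords:
        p *= eford
    return p

# Illustrative scenarios built from the EFORd value (.075) cited earlier.
scenarios = {
    "90/10 peak, one fossil unit out": (0.1, [0.075]),
    "90/10 peak, two fossil units out": (0.1, [0.075, 0.075]),
}

for name, (load_p, efords) in scenarios.items():
    p = base_case_probability(load_p, efords)
    verdict = "within range" if p >= PROBABILITY_FLOOR else "monitor only"
    print(f"{name}: {p:.2e} -> {verdict}")
```

Under this illustrative floor, the one-unit-out scenario clears the threshold while the two-unit-out scenario does not; a scenario falling below the floor would not necessarily be discarded, but might simply be monitored rather than affirmatively addressed.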


A transmission planning convention that selects a probability value or range to represent what a “reasonable” stress level means would allow transmission planners the latitude to modify system conditions to stress the system, so long as the probability of each scenario falls within the prescribed range. The concept offers a number of advantages:

  • the ability to set clear guidelines for when it is acceptable to utilize non-consequential loss of load as a response to challenging system conditions,
  • the ability to distinguish between scenarios that should be affirmatively addressed, and those that are less likely and perhaps should simply be monitored,
  • greater uniformity of planning within and between control areas, and
  • the opportunity for stakeholders and planners to have quantitative discussions regarding the appropriate levels of reliability.


The electric industry has a long-established reliability metric of 1-in-10 when planning for resource adequacy. The introduction of quantitative parameters to bound the input assumptions of transmission planning studies, such as those discussed here, is a step in that direction for transmission security.


[1] Regional entities are control areas such as PJM, MISO, NY ISO and ISO New England.

[2] Order 693 ¶1765

[3] NPCC Reliability Reference Directory 1 Design and Operation of the Bulk Power System section 5.1.1 “Design Criteria.”

[4] Ibid. Appendix C “Procedure for Testing and Analysis of Extreme Contingencies” section 3.0 “Modeling Assumptions”

[5] ISO New England Planning Procedure 3 “Reliability Standards for the New England Area Bulk Power Supply System.”

[6] NY ISO 2014 RNA Assumption Matrix.

[7] This is the average power plant fleet availability level as reported in NERC’s Generation Availability Review Dashboard.

[8] NERC 2007–2010 GADS data from New England shows the EFORd for fossil units in New England in 2011 to be .075.

[9] NH/VT 2022 Needs Assessment Study Scope – Final 6/12/2013. It should be noted here that New England’s planning practice does not utilize economic dispatch in the modeling. Instead, generation in the models is chosen to run in a way that drives stress on internal interfaces. Because the generators assumed to be offline are “hand-picked,” applying the multiplicative rule to their failure rates generates some very small values.

[10] The statistical disparity between study scenarios is not unique to this study. Similar disparities have been observed in other studies.

[11] The correct number or range should be developed through a consensus process and yield a reliability range that is at least as stringent as the probabilistic value that is widely accepted for resource adequacy.

[12] It is important not to confuse the use of statistics here as an effort to be predictive. Rather the focus is to develop comparability between sensitivities using publicly available data and simple calculation that can serve as an effective metric.