Reprint: 36th IEEE Conference on Decision and Control, San Diego, CA, December 10-12, 1997

Incorporation of Uncertain Intent Information in Conflict Detection and Resolution

James K. Kuchar and Lee C. Yang

Department of Aeronautics and Astronautics

Massachusetts Institute of Technology

Cambridge, MA 02139

jkkuchar@mit.edu

Abstract

This paper describes fundamental issues in conflict detection and resolution when intent information is available to the decision-maker. Different categories of intent (termed implicit and explicit) are defined and described in terms of aircraft automation. A model of conflict detection is presented that incorporates intent information and the probability that intent information is followed. Tactical and strategic conflict decision problems are described. In the strategic case, the need to consider both confidence in intent and criticality of intent is discussed. In the tactical case, the required amount of confidence in intent needed to prevent avoidance action is shown in a simplified example.

Introduction

In order to efficiently detect and resolve conflicts between aircraft, estimates of both the current state of the traffic environment and its future state are required. The current states, typically obtained through sensors such as radar or datalinked position information, are used to indicate whether a conflict currently exists and to provide a starting point for projecting future trajectories. The future states of the traffic must be estimated using some form of dynamic model that propagates the states forward in time. Predicted conflicts between aircraft can then be identified in time for resolution actions to be taken to maintain safe and efficient traffic flow.

The dynamic model used to propagate states generally requires some assumptions regarding the future intentions of each aircraft. Often, it is assumed that aircraft will fly on their current headings at their current speeds and altitude rates. Such a model is acceptable if aircraft indeed tend to fly in straight lines. However, if an aircraft will be maneuvering (e.g., changing heading, speed, or altitude), the straight-line projection of its states becomes inaccurate. If additional intent information is available (e.g., that the aircraft will be leveling off at a certain altitude), a more accurate future trajectory may be predicted. However, the use of faulty intent information may result in unpredicted conflicts or unnecessary maneuvering. Thus, the confidence that can be placed in intent information is a key parameter affecting its use.

Different types of intent information may be available, each with certain benefits and limitations. This paper provides a categorization of different types of intent, develops a general model of how intent can be used in the conflict detection process, and illustrates the concepts in a simplified example.

Categories of Intent Information

Implicit Intent

The current air traffic system operates using implicit intent information. Implicit intent is that which is inferred through predefined rules-of-the-road, filed flight plans, or air traffic control (ATC) clearances. Intent to follow predefined rules of the road is typically implied through the use of standard phraseology in communications [1]. For example, an aircraft might contact ATC with "XYZ Tower, November 12345 is ten east with information Alpha". In this case, under current flight rules, ATC understands the implied intent that the aircraft is inbound for landing from its present position (ten miles east of the airport) and that the aircraft has the current information regarding winds and runways in use. Although it is not a detailed description of the aircraft's intentions, such implicit intent can still be of some use to ATC (e.g., it is unlikely that the aircraft would climb in the previous example).

In an ATC clearance example, the controller might vector an aircraft using a phrase such as "Airline 123, turn right heading 220". This would then be followed by the pilot's readback, "Turning right heading 220, Airline 123". ATC then can operate using the implicit intent information that the aircraft will be turning as described.

Implicit intent can also be obtained through a filed flight plan. Unless otherwise informed, ATC assumes that the aircraft will change course to follow this flight path.

By using radar track data, ATC generally obtains limited feedback that aircraft are following implicit intentions. If an aircraft does deviate from the intended path, there is typically some delay before this non-conformance can be detected by ATC. In such cases, or if ATC wishes to increase confidence that intentions will be followed, ATC may request additional feedback from the pilot. For example, ATC might ask an aircraft to "verify heading 220" as a means by which confidence can be increased that the intent information is being followed. It is left to the controller's judgment whether these additional calls are required to increase confidence in adherence to the intended path.

Typically, uncertainties in implicit intent information are too large to allow accurate conflict prediction without relying on current position data from radar. However, the implicit intent information plays a critical role in reducing workload by aiding controllers in predicting the future traffic situation.

Explicit Intent

As digital datalink of information between aircraft and ground centers becomes more feasible, a more explicit form of intent information will be available [2]. With explicit intent information, ATC has continuous feedback regarding the programmed trajectory in an aircraft's automation system. Examples of explicit intent information include the aircraft's autopilot target states or Flight Management System (FMS) flight plans.

Information on target states might include the current speed, heading, altitude, and vertical speed commands in the aircraft's autopilot mode control panel. Conceivably, more complex systems could also include information on higher-order states such as the maximum bank angle or acceleration capability of the aircraft. With explicit intent information, the fact that an aircraft intends to level off at a certain altitude, for example, can be verified with a cross-check of the commanded altitude in its autopilot.

Additional confidence can be obtained by also having access to the current autoflight mode of the aircraft: confidence is likely to be greater if it is known that the aircraft is in an autopilot Altitude Capture mode as opposed to an open descent hand-flown by the crew.

Finally, access to the FMS flight plan could allow conflict prediction well beyond the current time because the entire flight path of the aircraft could be examined. Depending on the sophistication of the FMS, this path could be three- or four-dimensional (position and time) and could include models of aircraft performance and winds.

Access to the real-time feedback in explicit intent will likely increase the confidence that ATC can place in future predictions of aircraft position, and thereby increase the performance of conflict detection and resolution. However, even explicit intent information cannot be trusted completely. There is always the possibility, for example, of a pilot manually flying straight through a heading change even though the FMS had been predicting a turn. Also, flight plans may change in response to weather or ATC clearances. Thus, some consideration must be given to the possibility that an intended change in state may ultimately not occur, or that a change in state may occur when it was not expected.

Generalized Model of Conflict Detection

To describe the decision process in conflict situations, the discussion below begins with an example in which intent information is not available. Later, intent information is introduced, and its impact on the decision process is discussed.

Consider a simplified case in which two aircraft are flying in the airspace as shown in Figure 1. Aircraft 1 and 2 are currently on intersecting horizontal paths at the same altitude, and there is some probability of a conflict. At each point in time, a decision must be made whether to intervene and resolve the potential conflict or to wait and not intervene. If the decision is made to intervene at time t, aircraft 1 follows an Avoidance trajectory, labeled A, over which there is some reduced probability of conflict PC(A, t). If intervention is not taken, aircraft 1 follows a Nominal trajectory, labeled N, over which there is a different probability of conflict PC(N, t). Note that these probabilities are functions of the specific encounter between aircraft, the trajectories that might be flown, and time. Additionally, the avoidance trajectory need not be a vertical maneuver as shown in Fig. 1, but could involve speed or heading changes and could be performed by either or both of the aircraft.

Fig. 1: Example Conflict Scenario

The probability that an action taken at time t is successful, PSA(t), is given by:

PSA(t) = 1 - PC(A, t) (1)

which corresponds to the probability that a conflict does not exist along the Avoidance trajectory A.

The probability that the action at time t is unnecessary, PUA(t), is given by:

PUA(t) = 1 - PC(N, t) (2)

That is, PUA is the probability that a conflict will not occur when no action is taken.

PUA and PSA describe the basic performance tradeoff of the decision to take action. To maximize the safety and efficiency of maneuvering, PSA should be close to 1. To minimize the frequency of unnecessary action, PUA should be close to 0. Generally, these goals cannot be met simultaneously. If action is taken earlier, PSA will increase but so will PUA. Alternatively, if action is delayed, PUA will decrease, but so will PSA.

This tradeoff can be viewed graphically using a System Operating Characteristic (SOC) curve [3]. SOC curves can generally be constructed if relevant uncertainties in the situation (e.g., sensor accuracy or pilot response latency) can be modeled. As shown in the example in Figure 2, as the decision time is moved, the tradeoff between PSA and PUA is constrained to follow the SOC curve. As sensor accuracy is improved, the SOC curve will shift toward the ideal operating point in the upper-left corner of the plot. Thus, the SOC curve provides a means of viewing achievable system performance and can be used to aid in placing decision thresholds.
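The tradeoff expressed by Eqs. 1 and 2 can be illustrated numerically. The sketch below assumes a hypothetical encounter in which the nominal predicted vertical separation is 900 ft, avoidance adds a 1000 ft/min climb over the available lead time, and prediction errors are zero-mean normal with an 83-ft standard deviation; all numbers are illustrative only, not taken from the paper's analysis:

```python
from statistics import NormalDist

# Hypothetical toy model: a zero-mean normal error blurs the predicted
# vertical separation; a conflict is separation below 1000 ft.
SIGMA_FT = 83.0        # altitude error standard deviation (illustrative)
THRESHOLD_FT = 1000.0  # separation defining a conflict

def p_conflict(mean_sep_ft):
    # P(actual separation < threshold) under the normal error model
    return NormalDist(mean_sep_ft, SIGMA_FT).cdf(THRESHOLD_FT)

def soc_point(lead_time_s):
    """One (P_UA, P_SA) point for a given decision lead time."""
    p_c_nominal = p_conflict(900.0)                          # no action
    p_c_avoid = p_conflict(900.0 + 1000.0 / 60.0 * lead_time_s)  # climb
    p_ua = 1.0 - p_c_nominal   # Eq. 2: action would have been unnecessary
    p_sa = 1.0 - p_c_avoid     # Eq. 1: action resolves the conflict
    return p_ua, p_sa

# Earlier action (longer lead time) raises P_SA; with only a fixed
# altitude error, P_UA stays constant.
for t in (6, 30, 60, 120):
    p_ua, p_sa = soc_point(t)
    print(f"lead {t:4d} s  P_UA={p_ua:.3f}  P_SA={p_sa:.3f}")
```

Sweeping the decision time traces out the (PUA, PSA) pairs that form an SOC curve for this toy model.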

Fig. 2: Example System Operating Characteristic Curve

Strategic vs. Tactical Analyses

The SOC curve as described does not account for economic factors such as the fuel or time requirements of the action. Such costs can be included by developing models of the additional path length, time, and possibly workload that occurs when action is taken. Because it is difficult to directly combine safety and economic cost, it is often useful to examine the two factors separately and treat a given situation as either tactical (safety-based) or strategic (economics-based) [4].

In strategic models of conflict detection and resolution, the goal is to make decisions that minimize some cost function. It is typically assumed that all conflicts will be safely resolved, though the cost may increase as conflicts become more critical. In tactical models, economic costs are often neglected and the focus is on maximizing safety (either by minimizing probability of collision or by maximizing minimum separation). The following discussion examines both strategic and tactical situations, but does not attempt to define when a given situation is strategic or tactical.

Incorporation of Intent Information

Next, consider a case similar to that discussed previously, except that intent information is now also available. Again, the example that is used has been simplified to better illustrate the fundamental decision issues. Assume that the intent information indicates that aircraft 2 plans on beginning a descent at some point a as shown in Figure 3. The question now is whether action should still be taken by aircraft 1 given the new information about aircraft 2's intentions. More complex cases are certainly possible in which each aircraft has intent information and/or each may maneuver. The discussion below can be extended to cover these more complex situations.

Fig. 3: Example Scenario with Intent Information

There are two factors that need to be considered in this decision. The first is that there may not be complete confidence that aircraft 2 will indeed follow its intended path. As discussed earlier, even with knowledge of the aircraft's FMS path and autopilot modes, the pilot could intervene at any time to change the flight path of the aircraft from what had been predicted. The second factor relates to the criticality of the situation if the intended path is not followed. Thus, the decision of whether avoidance action should be taken requires both (1) an estimate of the likelihood that the aircraft will follow its intended path and (2) an estimate of the severity of the situation should the intent not be followed. Generally, if it is very likely that the intended path will be followed or if the situation can be easily resolved should intent not be followed, then action need not be taken. Alternatively, a situation in which ATC would try to increase confidence in intent is one in which there would be a critical outcome should intent not be followed.

Strategic Model of Intent

Strategic decision-making with intent information can be described formally using a decision tree [5], as shown in Figure 4. At each moment in time, a decision must be made whether to take some form of action in response to a possible conflict. Event N occurs when action is not taken, and event A occurs when action is taken. Additionally, the intended flight path of aircraft 2 may or may not be followed, as denoted by events I and X respectively. The probability that the intended path is followed, PI, is a measure of the confidence in the intent information. Note that PI could itself be a function of whether action was taken, but is assumed constant here.

Depending on whether action was taken and whether the intended flight path was followed, there is then some corresponding outcome and ultimate cost. Typically this cost is some combination of factors related to efficiency and safety (e.g., expected fuel burn, delay, probability of conflict, or probability of collision). The costs may be difficult to define and are often subjective, case-specific, and vary with time. In general, however, four cost parameters can be defined to represent the relative costs of the four possible outcomes. Using the event notation introduced above, the four possible outcomes are AI, AX, NI, and NX, with corresponding costs JAI, JAX, JNI, and JNX. Thus, NX, for example, corresponds to the case in which action is not taken and aircraft 2 does not follow the intended flight path.

Fig. 4: Intent Information Decision Tree

In the earlier example in Figure 3, there is no conflict if event NI occurs. Assuming that actions are only taken that increase safety, event AI provides a greater safety margin than event NI. JAI is then primarily a function of the economic cost of performing the action (which happens to be unnecessary because intent was followed), and is generally greater than JNI. If the intended path is not followed, a critical conflict may occur that would have been better offset by taking action. Thus, JNX is greater than JAX because action taken at the current time (AX) is more efficient than action that occurs later when it is determined that aircraft 2 has indeed not followed its intended path (NX).

In a different case in which the intended path was projected to cause a conflict, these relationships could be reversed (i.e., JNI > JAI, etc.).

The expected cost of taking action, JA, is then:

JA = PI JAI + (1-PI) JAX (3)

and the expected cost of not taking action, JN, is:

JN = PI JNI + (1-PI) JNX (4)

Assuming that the decision to take action is based on minimizing the expected cost, action should be taken when JA < JN, or equivalently when:

PI < (JNX - JAX) / [(JAI - JNI) + (JNX - JAX)] (5)

if the denominator is positive. If the denominator is negative, then action should be taken when PI is greater than the right side of Eq. 5. The denominator is positive when JNI < JAI and JNX > JAX, as is the case in the example in Figure 3. If instead the intended path is projected to cause a conflict, then JNI > JAI and JNX < JAX, and the denominator will be negative.
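The decision rule JA < JN can be evaluated directly from Eqs. 3 and 4, which avoids any sign-of-denominator bookkeeping. The sketch below uses hypothetical cost values (JAI = 2, JAX = 3, JNI = 1, and two values of JNX) to show how the critical value of PI moves between the two regimes:

```python
def critical_pi(j_ai, j_ax, j_ni, j_nx):
    """Right-hand side of Eq. 5: the threshold on P_I separating the
    act / do-not-act regimes. All costs here are hypothetical."""
    return (j_nx - j_ax) / ((j_ai - j_ni) + (j_nx - j_ax))

def should_act(p_i, j_ai, j_ax, j_ni, j_nx):
    # Compare the expected costs of Eqs. 3 and 4 directly
    j_a = p_i * j_ai + (1 - p_i) * j_ax
    j_n = p_i * j_ni + (1 - p_i) * j_nx
    return j_a < j_n

# Near point a: J_NX dominates, so the threshold is near 1 and action
# is warranted unless confidence in intent is very high.
print(critical_pi(j_ai=2.0, j_ax=3.0, j_ni=1.0, j_nx=50.0))   # ~0.979
# Far from point a: J_NX barely exceeds J_AX, and the threshold drops.
print(critical_pi(j_ai=2.0, j_ax=3.0, j_ni=1.0, j_nx=3.1))    # ~0.09
```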

Strategic Examples

Applying the decision-tree analysis to the example in Figure 3, consider a case in which both aircraft are close to the point a in the figure. In this case, JNX might be large because there is little time and space to resolve a conflict should intent not be followed. According to Eq. 5, if JNX is large relative to the other costs, the critical value of PI approaches 1. Therefore, action should be taken unless the confidence probability (PI) is close to 1. That is, unless it is very certain that the intended path will be followed, action is needed to mitigate the chance that intent is not followed.

Alternatively, if the aircraft are far from point a, then JNX is relatively small and approaches (though is still greater than) JAX. In this case, Eq. 5 shows that the critical value of PI approaches 0. That is, action need not be taken even for relatively small values of PI because ample space and time remains to resolve a case in which intent is not followed.

Tactical Model of Intent

In tactical safety-critical analyses, intent information may be more appropriately applied to the SOC curve representation of the decision problem. In this case, two separate SOC curves can be constructed: one for the case in which intent is followed [using PUA(I, t) and PSA(I, t)], and one for the case in which intent is not followed [using PUA(X, t) and PSA(X, t)]. Here, the notation explicitly includes I or X to denote whether the intended path is followed or not. An overall SOC curve is then constructed by combining the respective probabilities using PI as a weighting factor. Then, the overall probability of unnecessary action is:

PUA(t) = PI PUA(I, t) + (1 - PI) PUA(X, t) (6)

and the overall probability of successful action is:

PSA(t) = PI PSA(I, t) + (1 - PI) PSA(X, t) (7)

The resulting combined SOC curve automatically incorporates the confidence in intent and the severity of the situation if the intended path is not followed. By examining SOC curves constructed for varying values of PI, the required amount of confidence for a given situation can be determined.
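Eqs. 6 and 7 amount to a convex combination of the two conditional SOC curves, weighted by the confidence PI. A minimal sketch with hypothetical probability values:

```python
def combined_soc(p_i, p_ua_i, p_sa_i, p_ua_x, p_sa_x):
    """Combine the intent-followed (I) and intent-not-followed (X)
    SOC points using confidence P_I as the weight (Eqs. 6 and 7)."""
    p_ua = p_i * p_ua_i + (1 - p_i) * p_ua_x   # Eq. 6
    p_sa = p_i * p_sa_i + (1 - p_i) * p_sa_x   # Eq. 7
    return p_ua, p_sa

# Hypothetical case: if intent is followed, action looks unnecessary
# (high P_UA); if not, action is clearly needed (low P_UA).
for p_i in (0.0, 0.5, 0.9, 1.0):
    print(p_i, combined_soc(p_i, p_ua_i=0.95, p_sa_i=0.99,
                            p_ua_x=0.10, p_sa_x=0.90))
```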

Tactical Example

The tactical case study involves a situation similar to that shown earlier in Figure 3. The specific parameters used in this example are listed below; their values are meant to be representative of one of many possible encounters between aircraft, and are used for illustrative purposes only. There are two co-altitude aircraft, each at a ground speed of 200 kt, on opposite intersecting ground tracks. If neither aircraft changes course or altitude, a conflict is predicted to occur (defined as separation less than 1000 ft vertically). However, aircraft 2 is predicted to begin a descent at 1000 ft/min at a point 3 nmi in front of its current position, and would then pass safely below aircraft 1. There is some probability, 1 - PI, that this descent will not occur. Aircraft 1 begins 13 nmi from aircraft 2 and may attempt to avoid the threat by climbing at 1000 ft/min, following a 2 s pilot/aircraft response delay after the decision to act. Trajectories are assumed to be piecewise linear (acceleration during maneuvering is neglected). It is also assumed that there is a zero-mean, normally-distributed relative altitude error with a standard deviation of 83 ft, corresponding to recent accuracy requirements [3,6]. For simplicity, no other error sources are considered.

Following the procedure outlined earlier, SOC curves were generated for this situation for two cases: aircraft 2 follows intent (I), and aircraft 2 does not follow intent (X). For each case, the SOC curves were generated by computing the probability that the aircraft would have less than 1000 ft vertical separation over the Nominal (level flight) and Avoidance (climb at 1000 ft/min) trajectories. These calculations were performed by determining the expected vertical separation between aircraft (based on simple kinematics) and then computing the probability of having less than 1000 ft separation from a normal distribution. Different probabilities were computed over a series of time steps beginning at the initial conditions described above and ending when the aircraft pass one another.

Because altimetry errors are the only source of uncertainty (other than the likelihood of following intent), the probability of conflict along the Nominal trajectory, PC(N, t), is independent of the time or horizontal distance between aircraft (but has a different value between case I and X). Thus, for a given intent situation of I or X, PUA is a constant and each SOC curve collapses into a vertical line. Accordingly, the curves themselves do not convey much information and are not reproduced here. If lateral position error were also included, then PUA would vary with time, producing SOC curves more similar in appearance to Fig. 2.

The values of PSA, however, do vary with time, and provide some insight into the decision process. In general, the later that action is taken, the smaller the value of PSA because there is less space in which to maneuver. By computing PSA as a function of the range between aircraft, one can find the point at which action should be taken. A series of analyses was performed to determine the minimum range between aircraft, r, at which action must be taken such that PSA ≥ 0.95. Thus, if r = 8 nmi, for example, then action must begin when the aircraft are no closer than 8 nmi in order to resolve the conflict with 95% confidence. This minimum range is a function of the likelihood that intent is followed (PI) and is plotted in Figure 5. For reference, the distance between aircraft when aircraft 2 is predicted to begin its descent (7 nmi) is shown with a dashed line.
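The procedure for finding a minimum action range r can be sketched as follows. The fragment below simplifies the encounter to a purely head-on closure at 400 kt and assumes a fixed 1050-ft descent contribution by aircraft 2 at the crossing point when intent is followed (derived from the 3-nmi descent point under this simplified geometry); it illustrates the method but is not expected to reproduce the exact curve in Fig. 5:

```python
from statistics import NormalDist

# Simplified head-on geometry (illustrative only)
V_CLOSE_KT = 400.0          # combined closing speed of two 200-kt aircraft
CLIMB_FPM = 1000.0          # aircraft 1 avoidance climb rate
DELAY_S = 2.0               # pilot/aircraft response delay
SIGMA_FT = 83.0             # relative-altitude error standard deviation
THRESH_FT = 1000.0          # vertical separation defining a conflict
DESCENT_AT_CPA_FT = 1050.0  # assumed aircraft-2 descent by crossing (case I)

def p_sa(r_nmi, p_i):
    """Weighted probability of successful action (Eq. 7) if the climb
    begins with the aircraft r_nmi apart."""
    t_cpa = r_nmi / V_CLOSE_KT * 3600.0                 # seconds to crossing
    climb = max(0.0, t_cpa - DELAY_S) * CLIMB_FPM / 60.0
    def p_clear(mean_sep):                              # P(sep >= threshold)
        return 1.0 - NormalDist(mean_sep, SIGMA_FT).cdf(THRESH_FT)
    p_sa_i = p_clear(climb + DESCENT_AT_CPA_FT)         # intent followed
    p_sa_x = p_clear(climb)                             # intent not followed
    return p_i * p_sa_i + (1 - p_i) * p_sa_x

def min_action_range(p_i, target=0.95):
    """Smallest range at which acting still achieves P_SA >= target,
    found by scanning outward in 0.01-nmi steps."""
    r = 0.0
    while r < 13.0:
        if p_sa(r, p_i) >= target:
            return r
        r += 0.01
    return None

for p in (0.0, 0.5, 0.9, 1.0):
    print(f"P_I={p:.1f}  r={min_action_range(p):.2f} nmi")
```

With no confidence in intent (PI = 0), this simplified model requires action near 7.8 nmi; exactly where the curve breaks as PI approaches 1 depends on the finer details of the encounter geometry.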

As Fig. 5 shows, the relationship between r and PI is non-linear with a break-point at approximately r = 7 nmi. The interesting nature of this curve deserves some discussion. For example, consider a case in which there is no confidence in the intent information (or equivalently, there is no intent information available). Then, PI = 0, and from Fig. 5, r is approximately 7.8 nmi. Thus, action must be taken even before aircraft 2 has reached the point of intended descent: due to the low confidence placed in intent, action must be taken as a preventive measure. If aircraft 2 actually does descend, then this action was unnecessary. Similar unnecessary events have occurred in actual operations with earlier versions of the Traffic Alert and Collision Avoidance System (TCAS) [7].

Fig. 5: Minimum Relative Range for Action, r

(95% confidence of resolving conflict)

The break point in the plot corresponds approximately to the time at which aircraft 2 intends to begin its descent. In order to delay action beyond this time, confidence that a descent will begin (PI) must be relatively high (approximately 0.9). If there is more than 90% confidence that aircraft 2 will begin its descent, then r decreases rapidly. At the limit where PI = 1, the range at which action must be taken is approximately 3.9 nmi. This corresponds to a case in which action is delayed until well beyond the point at which aircraft 2 should have begun its descent. With only 3.9 nmi until closest point of approach, however, even if aircraft 1 climbs and aircraft 2 descends, there is a reasonable probability (>0.05) that separation will fall below 1000 ft, and thus action must be taken.

Note that action can be delayed by improving the confidence in the intent information. Consider a scenario in which there is limited implicit intent information that aircraft 2 will descend. PI might therefore have a low value of, for example, 0.2. As discussed above, at approximately 7.8 nmi from closest point of approach, action will need to be taken or confidence will have to be increased. Thus, as the aircraft move toward one another, ATC could request additional verification that aircraft 2 does intend to descend (either through verbal confirmation or through some form of explicit intent). If confidence were then increased to 0.95, for example, action could be delayed. If aircraft 2 still has not begun to descend when the range is approximately 3.9 nmi, then action must be taken regardless of the confidence in intent. Thus, this type of analysis can be used not only to determine when action must be taken, but also to determine how much of a confidence increase is needed in order to delay action.

If estimates of the confidence in intent can be placed on different forms of intent (e.g., implicit, explicit target state, or explicit FMS path), then the potential benefit of access to this information can be determined from an analysis similar to that shown in Fig. 5. If, for example, PI could not be made larger than 0.8 regardless of the type of intent, then there would be little benefit to having access to intent information (r varies only slightly for PI < 0.8). If, on the other hand, explicit intent information increased PI beyond the break point in the figure, then there might be sufficient benefit to have access to explicit intent information. A thorough analysis, however, would require consideration of a wide variety of both tactical and strategic situations.

Conclusions

Because automated conflict detection and resolution tools will be needed in the complex future air traffic environment, quantitative models are required to aid in design, analysis, and operation. Intent information can play a critical role in the decision to take action to resolve a potential conflict, and thus needs to be part of these models. This paper outlines some of the fundamental issues that arise in problems involving intent, and provides an initial approach to modeling the decision tradeoffs.

The use of intent information in the current air traffic system relies heavily on human integration, intuition, and flexibility. When making decisions, air traffic controllers generally use mental estimates of the criticality of a situation should intent not be followed and of their confidence that intent will indeed be followed. As criticality grows and confidence decreases, the controller will either vector traffic or request confirmation that intent will be followed. In order for automated conflict detection tools to be acceptable, models must be designed to match or emulate these human decisions as much as possible.

Because of the many factors that must be considered in conflict detection and resolution, it is difficult to generate a model that can be applied beyond a few simplified examples. Difficulties include estimating appropriate cost functions and safety levels and determining a priori what the potential outcomes might be if intent is not followed. Additionally, there are often factors (e.g., severe weather) beyond the scope of most models that have an important impact on conflict decisions.

Of particular interest for further study are analyses and tools in which the value of different forms of intent information can be determined. If such tools are available, then aircraft equipage and datalink requirements can be determined as a function of traffic density or airspace category. In operational situations, it may be possible to vary the time at which action must be taken by taking into account the equipage and resulting confidence in the intentions of different aircraft. The tactical conflict example provided here is a preliminary step in this direction. Additional work is required, however, to help estimate the level of confidence that can be placed in different types of intent information under different situations.

References

[1] Federal Aviation Administration, "Airman's Information Manual", Washington, DC, August 18, 1994.

[2] Hahn, E., and C. Wanke, "Preliminary Requirements for Avionics Intent Information for Free Flight", 14th Digital Avionics Systems Conference, Cambridge, MA, November 5-9, 1995.

[3] Kuchar, J., "Methodology for Alerting System Performance Evaluation", Journal of Guidance, Control, and Dynamics, Vol. 19, No. 2, 1996, pp. 438-444.

[4] Krozel, J., and M. Peters, "Conflict Detection and Resolution for Free Flight", Air Traffic Control Quarterly, Special Issue on Free Flight, 1997.

[5] Hillier, F., and G. Lieberman, Introduction to Operations Research, McGraw-Hill, New York, 1995.

[6] Radio Technical Commission for Aeronautics (RTCA), Minimum Performance Specifications for TCAS Airborne Equipment, Document RTCA/DO-185, Washington, DC, September, 1983.

[7] Mellone, V., and S. Frank, "Behavioral Impact of TCAS II on the National Air Traffic Control System", 7th International Symposium on Aviation Psychology, Ohio State University, April 27, 1993.