The Minimum Radar Vectoring Altitude
In my work as a safety manager and investigator for ATC incidents, it is vital to understand why practitioners make their decisions and why their actions make sense to them, whether the outcome later turns out to be positive or negative. Seeing their work through their eyes helps to support the system, ensuring that things go right while also preventing things from going wrong. What made sense to one controller or engineer at the time usually makes sense to others in the same situation.
Our controllers make decisions based on their local rationality, and in 99.9% of cases the outcome is positive. One example came when a controller reported to us that he had taken an aircraft below the minimum radar vectoring altitude (MRVA). This is normally prohibited: the procedures do not allow it because the MRVA is the lowest safe altitude, clear of obstacles. Within controlled airspace, this makes absolute sense in most cases.
In this special case, the pilot of a small aircraft had navigational difficulties and was running short on fuel. He wanted to land at a nearby aerodrome, but his instruments no longer worked properly for an IFR approach. He requested a descent below the MRVA to get below the clouds and approach the aerodrome visually. Without waiting for permission from the controller, he descended below the minimum on his own. According to the procedures, a controller cannot tolerate this and has to instruct the pilot to climb back to the MRVA. On the other hand, such constraints sometimes do not apply in an emergency.
In this case the controller weighed up the obstacle situation within seconds and decided not to instruct the pilot to climb, but rather to assist him by giving position information and pointing out the location of the aerodrome.
The pilot eventually managed to land safely. A reader's first thought might be: “How could he break a procedure and tolerate a descent below the minimum altitude?” But once you look at the situation from the inside, you understand that it made sense to the controller: he did not want to make the pilot more nervous by instructing him to climb in this emergency situation. He knew the obstacle situation and wanted to help the crew land as soon as possible. With these quick decisions, the controller possibly saved the lives of the crew.
And here is a close link to another principle: because the controllers knew about our just culture policy, they were able to report this case so that other controllers could learn from it. They did not have to fear consequences, and they knew that safety management would look at the case in context, not only in terms of the rules and procedures that might have been broken.
Another example can be seen in separation infringements where a controller did not recognise traffic on the radar screen. In our experience, the relevant traffic was sometimes displayed in different colours for various reasons (level band filter, warning colour, group code, etc). We then ask: “Why did the controller not recognise the traffic displayed, and why were the colours not perceived in the way that the designers expected?” This allows us to investigate the colour settings further.
Sometimes system and procedure designers, managers and investigators have their own vision of how things will work – work-as-imagined. But these cases show that it is most important to see what makes sense to the field experts in practice, how they make their decisions, and how they see their world.
Christiane Heuerding
Safety Manager, ACC Bremen
DFS, Germany
Source: EUROCONTROL (2014). Systems Thinking for Safety: A White Paper. Brussels.