Ever wanted a deeper dive into the life of the International Space Station? The flight directors in charge of the teams that oversee its systems have written a 400-page book that offers an inside look at the time and energy the flight control team at the Mission Control Center at NASA’s Johnson Space Center in Houston devote to the development, planning and integration of a mission.
At 2:49 a.m. Central Standard Time, a red alarm illuminated the giant front wall display in Mission Control in Houston. The alert read: TOXIC ATMOSPHERE Node 2 LTL IFHX NH3 Leak Detected.
The meaning was clear. Ammonia was apparently leaking into the Interface Heat Exchanger (IFHX) of the Low Temperature Loop (LTL) cooling system in the Node 2 module.
“Flight, ETHOS, I expect the crew to be pressing in emergency response while I confirm,” said the flight controller from Environmental and Thermal Operating Systems (ETHOS). In other words, the crew needed to don oxygen masks to protect themselves from ammonia while ETHOS looked more closely at these data.
This was not a drill. When the red alarm appeared, the flight director turned her full attention to ETHOS. The words—unwelcome at any time from ETHOS—were especially jarring at an hour when the crew and the ground were humming along on a busy day of running experiments. Of the many failures for which the flight control team prepares, especially in simulations, this failure presents one of the most life-threatening situations, and one the team never wants to encounter on the actual vehicle.
On January 14, 2015, this scenario happened on the International Space Station (ISS). Data on the ETHOS console indicated toxic ammonia could be bleeding in from the external loops, through the water-based IFHX, and into the cabin (see Chapter 11). Software on the ISS immediately turned off the fans and closed the vents between all modules to prevent the spread of ammonia. At the sound of the alarm, crew members immediately began their memorized response of getting to the Russian Segment (considered a safe haven, since that segment does not have ammonia systems) and closing the hatch that connected to the United States On-orbit Segment (USOS). They took readings with a sensitive sensor to determine the level of ammonia in the cabin. The flight control team—especially the flight director, ETHOS, and the capsule communicator (CAPCOM [a holdover term from the early days of the space program])—waited anxiously for the results while they looked for clues in the data to see how much ammonia, if any, was entering the cabin. Already, the flight director anticipated multiple paths that the crew and ground would take, depending on the information received.
No ammonia was detected in the cabin of the Russian Segment. At the same time, flight control team members looked at multiple indications in their data and did not see the expected confirming cues of a real leak. In fact, it was starting to look as if an unusual computer problem was providing incorrect readings, resulting in a false alarm. After looking carefully at the various indications and starting up an internal thermal loop pump, the team verified that no ammonia had leaked into the space station. The crew was not in danger. After 9 hours, the flight control team allowed the crew back inside the USOS. Throughout the “false ammonia event,” as it came to be called, the team’s vigilance, discipline, and confidence came through. No panicking. Only measured responses to quickly exchange information and instructions.
Hearts were pumping rapidly, yet onlookers would have noticed little difference from any other day.
A key to the success of the ISS Program is that it is operated by thoroughly trained, well-prepared, competent flight controllers. The above example is just one of many where the team is unexpectedly thrust into a dangerous situation that can put the crew at risk or jeopardize the success of the mission. Both the flight controllers and the crews, often together, take part in simulations. Intense scenarios are rehearsed over and over again so that when a real failure occurs, the appropriate reaction has become second nature.
After these types of simulations, team members might figure out a better way to do something, and then tuck that additional knowledge into their “back pocket” in the event of a future failure. Perhaps the most famous example of this occurred following a simulation in the Apollo Program. After the instructor team disabled the main spacecraft, the flight controllers began thinking about using the lunar module as a lifeboat. When the Apollo 13 spacecraft was damaged significantly by an exploding oxygen tank, the flight control team already had some rough ideas as to what they might do. Since the scenario was not considered likely owing to all the safety precautions, the team had not developed detailed procedures. However, the ideas were there.