Lessons from the Red Sea: A Reflection on Scott Snook’s Friendly Fire

In December 2024, the shootdown of a U.S. Navy F/A-18 Super Hornet over the Red Sea by the guided-missile cruiser USS Gettysburg sent shockwaves through military and civilian communities alike. The ship was acting as the anti-air warfare (AAW) coordinator for a nearby aircraft carrier, and the incident highlighted the vulnerabilities that emerge in high-stakes, highly networked military environments.

To fully grasp the significance of this event, it’s worth revisiting Scott Snook’s Friendly Fire, which offers an in-depth analysis of a similar tragedy: the accidental shootdown of two U.S. Army Black Hawk helicopters by Air Force F-15s over northern Iraq in 1994, during Operation Provide Comfort. Both incidents underscore how organizational dynamics, human error, and technological complexity can combine to create devastating outcomes.

The Incident

The USS Gettysburg, equipped with the advanced Aegis combat system, was tasked with protecting an aircraft carrier from potential aerial threats. Operating in a region on edge due to escalating missile and drone attacks by Houthi forces, the ship was on heightened alert. During a tense moment, the crew misclassified a friendly contact, an F/A-18 returning to the carrier, as hostile and engaged it. Both aviators ejected and survived, one with minor injuries, but the aircraft was lost.

This scenario, while devastating, is not unique. In Friendly Fire, Snook demonstrates how incidents of fratricide often arise from systemic vulnerabilities rather than individual failings. The parallels between the Red Sea shootdown and the 1994 Black Hawk tragedy offer critical insights into the nature of such accidents.

Snook’s Framework: Organizational Accidents

Snook’s Friendly Fire introduces the concept of an “organizational accident,” a framework that examines how multiple small failures across a complex system can interact to produce catastrophic results. This perspective is invaluable when analyzing the USS Gettysburg incident.

1. Fragmented Communication

Communication breakdowns were pivotal in both the Red Sea and Black Hawk incidents. In the 1994 case, conflicting rules of engagement, poor information-sharing, and a lack of shared situational awareness among F-15 pilots, AWACS controllers, and ground personnel led to the fatal misidentification of friendly helicopters.

On the USS Gettysburg, similar issues likely played a role. Acting as the AAW coordinator, the ship was responsible for integrating information from its own radar systems, the carrier strike group, and regional command centers. Any gaps in communication, whether due to unclear threat identification protocols or misaligned data streams, could have contributed to the decision to fire. Did the Gettysburg crew fail to receive confirmation of the F/A-18’s friendly status? Or did procedural gaps prevent the aircrew from effectively communicating their identity and intentions?
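
To make the idea of misaligned data streams concrete, here is a minimal sketch in Python of how a friendly aircraft can fail to correlate with a friendly report when two feeds disagree on timing. The field names, callsign, and five-second window are all invented for illustration; nothing here reflects actual Aegis, Link 16, or strike-group message formats.

```python
# Hypothetical sketch: a friendly report that arrives late fails to
# correlate with the radar track. Field names, callsign, and the 5 s
# window are invented; this is not a real combat-system data format.
from dataclasses import dataclass

@dataclass
class RadarTrack:
    track_id: str
    timestamp: float  # seconds since a shared epoch

@dataclass
class FriendlyReport:
    callsign: str
    timestamp: float

def correlate(track: RadarTrack, reports: list[FriendlyReport],
              window_s: float = 5.0) -> str:
    """Mark the track FRIENDLY only if a report falls inside the window;
    otherwise it stays UNKNOWN, which a tense crew may read as hostile."""
    for report in reports:
        if abs(track.timestamp - report.timestamp) <= window_s:
            return f"FRIENDLY ({report.callsign})"
    return "UNKNOWN"

# The carrier's 'friendly inbound' report arrives 12 seconds late.
track = RadarTrack(track_id="T-1042", timestamp=100.0)
reports = [FriendlyReport(callsign="HORNET 102", timestamp=112.0)]
print(correlate(track, reports))  # UNKNOWN
```

The point is not the specific threshold but the failure mode: when two feeds drift even slightly out of alignment, a contact that should read FRIENDLY silently degrades to UNKNOWN.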

2. Expectation Bias in High-Stakes Environments

In both cases, the operators were working under intense pressure and primed to expect threats. In Friendly Fire, Snook highlights how the F-15 pilots in northern Iraq were predisposed to view unknown aircraft as enemy targets, leading to snap judgments.

For the USS Gettysburg, the heightened threat environment in the Red Sea likely amplified a similar expectation bias. With the ship and its crew on edge due to recent missile and drone attacks, any unidentified radar contact might have been perceived as a potential hostile. This bias, coupled with the crew’s responsibility to defend a high-value target like an aircraft carrier, likely created an atmosphere where rapid, aggressive action felt necessary.
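
Expectation bias can be framed as a Bayesian prior. The toy calculation below, using entirely invented numbers, shows how the same ambiguous radar evidence yields a very different judgment once recent attacks have raised the prior probability that an unknown contact is hostile.

```python
# Toy Bayesian illustration of expectation bias. All numbers are invented.

def posterior_hostile(prior: float, p_e_hostile: float,
                      p_e_friendly: float) -> float:
    """Bayes' rule: P(hostile | evidence)."""
    evidence = p_e_hostile * prior + p_e_friendly * (1.0 - prior)
    return p_e_hostile * prior / evidence

# An ambiguous radar return, only slightly more likely if hostile:
p_e_hostile, p_e_friendly = 0.6, 0.4

# Calm period: unknown contacts are rarely hostile.
print(round(posterior_hostile(0.05, p_e_hostile, p_e_friendly), 2))  # 0.07

# After days of drone and missile attacks, the prior climbs.
print(round(posterior_hostile(0.50, p_e_hostile, p_e_friendly), 2))  # 0.6
```

The same evidence moves the assessment from roughly 7 percent to 60 percent purely because the prior changed, which is exactly what weeks of drone and missile attacks do to a crew’s mental model.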

3. Organizational Complexity and Automation

Snook emphasizes how complexity in military organizations can create gaps where errors flourish. In the 1994 Black Hawk shootdown, overlapping responsibilities and unclear protocols between various units created confusion.

The USS Gettysburg’s role as an AAW coordinator introduces a similar level of complexity. Modern naval operations depend on automated systems like Aegis, which are designed to process vast amounts of sensor data and assist operators in making split-second decisions. However, automation can introduce its own vulnerabilities. Did the Aegis system misclassify the F/A-18 as a threat based on its flight path, speed, or proximity? If so, did the human operators blindly trust the system’s assessment without fully verifying the target’s identity? These questions underscore the delicate balance between human judgment and machine reliability.
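
The deliberately simplified classifier below illustrates how a purely kinematic heuristic could flag a returning fighter. The thresholds, field names, and the idea that a missed IFF reply falls straight through to a hostile rule are assumptions made for this sketch; real combat-system logic is classified and far more sophisticated.

```python
# Deliberately simplified rule-based classifier. Thresholds and field
# names are invented and do not describe any real combat system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    speed_kts: float             # ground speed in knots
    range_nm: float              # distance to the defended ship
    closing: bool                # heading toward the defended asset
    iff_response: Optional[str]  # e.g. "valid", or None if missed

def auto_classify(c: Contact) -> str:
    if c.iff_response == "valid":
        return "FRIENDLY"
    # Fast, inbound, and close matches an attack profile, which is
    # also exactly what a jet returning to its carrier looks like.
    if c.closing and c.speed_kts > 350 and c.range_nm < 30:
        return "HOSTILE?"
    return "UNKNOWN"

# A returning fighter whose IFF reply was missed or not processed:
returning_jet = Contact(speed_kts=420, range_nm=18, closing=True,
                        iff_response=None)
print(auto_classify(returning_jet))  # HOSTILE?
```

If operators treat the machine’s "HOSTILE?" as a conclusion rather than a prompt for verification, automation bias closes the loop.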

4. Normalization of Deviance

One of Snook’s most compelling arguments is the concept of “normalization of deviance”—the gradual acceptance of procedural shortcuts or minor rule violations as standard practice. Over time, these deviations create systemic vulnerabilities.

In the case of the Gettysburg, normalization of deviance might have manifested in how the crew handled threat identification. Did they rely too heavily on automated systems for friend-or-foe identification, skipping additional confirmation steps in the name of speed? Or did the urgency of defending the carrier lead them to relax standard protocols for target verification?
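
A toy model of that drift, with invented step names and timings, shows how time pressure can quietly shorten a verification sequence until only one fallible check remains. Repeat that often enough and the shortened sequence becomes the de facto procedure Snook warns about.

```python
# Toy model of procedural drift. Step names and timings are invented,
# not drawn from actual Navy procedure.

VERIFICATION_STEPS = [
    ("query_iff", 5.0),               # interrogate the transponder
    ("check_flight_schedule", 20.0),  # compare against known friendly sorties
    ("voice_challenge", 30.0),        # attempt radio contact
    ("visual_or_eo_confirm", 45.0),   # confirm with an electro-optical sensor
]

def steps_performed(time_available_s: float) -> list[str]:
    """Any step that no longer fits in the remaining time gets skipped.
    Each skip looks harmless in isolation; repeated often enough, the
    shortened sequence becomes 'how we do it': normalization of deviance."""
    performed, elapsed = [], 0.0
    for name, cost_s in VERIFICATION_STEPS:
        if elapsed + cost_s <= time_available_s:
            performed.append(name)
            elapsed += cost_s
    return performed

print(steps_performed(120.0))  # all four independent checks
print(steps_performed(20.0))   # ['query_iff'], one fallible check left
```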

Accountability and Systemic Vulnerabilities

Both the Red Sea and Black Hawk incidents force us to grapple with the issue of accountability. It’s tempting to place blame on the individual operators who made the final decision to fire, but Snook’s analysis reminds us that true accountability lies in addressing systemic weaknesses.

The U.S. military’s response to the Black Hawk shootdown included reforms to airspace coordination and stricter adherence to identification protocols. Similarly, the Red Sea tragedy will likely prompt a review of naval air defense operations. Key areas for improvement might include:

Enhanced Identification Protocols: Ensuring that Aegis and other combat systems have more robust mechanisms for distinguishing between friendly and hostile aircraft, even in ambiguous situations (a minimal sketch of such a rule follows this list).

Streamlined Communication Channels: Improving the flow of information between AAW coordinators, pilots, and regional command to prevent misclassification of friendly forces.

Training for High-Stakes Scenarios: Conducting more rigorous simulations of high-pressure environments to help operators manage stress and expectation bias.
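
To make the first recommendation concrete, here is a minimal sketch of a "two independent sources" engagement rule. The source names and the rule itself are hypothetical and do not describe actual weapons-release doctrine.

```python
# Hypothetical 'two independent sources' engagement gate. The source
# names and the rule are invented for illustration only.

def engagement_recommended(classifications: dict[str, str]) -> bool:
    """Recommend engagement only when no source reports FRIENDLY and at
    least two independent sources report HOSTILE."""
    votes = list(classifications.values())
    if "FRIENDLY" in votes:
        return False
    return votes.count("HOSTILE") >= 2

print(engagement_recommended({"radar": "HOSTILE"}))                     # False
print(engagement_recommended({"radar": "HOSTILE", "esm": "HOSTILE"}))   # True
print(engagement_recommended({"radar": "HOSTILE", "iff": "FRIENDLY"})) # False
```

The design choice worth noticing is the asymmetry: a single FRIENDLY indication vetoes engagement, while hostility must be corroborated.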

The Human Element

At its core, Snook’s Friendly Fire is a sobering reminder of the human element in organizational accidents. Both the Black Hawk and Red Sea incidents reveal how even the most advanced systems remain vulnerable to human limitations. Operators are not infallible, and the systems they rely on can fail in ways that are difficult to predict.

The USS Gettysburg shootdown is not merely a story of technological failure or operator error—it is a story of systemic fragility. Understanding and addressing this fragility requires a willingness to confront uncomfortable truths about how military organizations function under stress.

Conclusion

The shootdown of an F/A-18 by the USS Gettysburg in the Red Sea and the 1994 Black Hawk tragedy share striking similarities. Both incidents arose from a deadly combination of communication failures, high-pressure environments, and the complexities of modern military operations.

Scott Snook’s Friendly Fire provides a framework for understanding these events as organizational accidents rather than isolated mistakes. By addressing systemic vulnerabilities and prioritizing accountability at every level, we can honor the lives lost in 1994 and the aviators put at risk in 2024, and work to prevent similar incidents in the future.

The stakes—both in terms of human lives and operational effectiveness—are simply too high to do otherwise.
