We Can Work It Out (I)

Clearing the Skies: Building Decision-Making Strategies for Crises at Work

Dual-Process Theory – Part III

The first two parts of this series examined the tragic crash of Air France Flight 447 and its lessons on decision-making. This catastrophic event vividly illustrates the dangers of instinctive reactions in high-pressure scenarios. The pilots’ emotional, reflexive response, guided by System 1 thinking as described in Tversky and Kahneman’s Dual-Process Theory, resulted in a fatal stall and the loss of 228 lives. While System 1 is fast and intuitive, it is prone to errors, particularly when emotions and stress take over. In contrast, System 2, a slower, more analytical process, could have guided the pilots to make deliberate and informed decisions, potentially averting disaster [1] [3] [5]. In Part 2, we explored the aviation industry’s structured protocols, developed to address such challenges. Among these, the Unreliable Airspeed Indication (IAS) Procedure stands out as a prime example of deliberate, step-by-step decision-making that mitigates errors and promotes composure, situational awareness, and teamwork in high-stakes situations [4].

These protocols emphasize the importance of preparation, situational awareness, and clear role distribution, ensuring that instinctive mistakes are avoided and decisions are made with clarity and precision. What sets this approach apart from existing crisis management models is its foundation in real-world, high-stakes environments like aviation, where failure often has catastrophic consequences. This level of rigor and systematic preparation provides a unique framework that can be adapted to various industries beyond aviation.

Transfer and Adaptation of Aviation Protocols

Aviation procedures, such as the Unreliable IAS procedure, offer more than just solutions for cockpit crises — they provide universally applicable strategies for managing uncertainty and pressure. These protocols are meticulously designed to address high-stakes situations where human error is most likely, making them valuable in fields beyond aviation. Transferring these principles to other industries, such as healthcare, IT, or business, can help teams navigate complex challenges, reduce errors, and improve outcomes. So let us transform these procedures to prevent rash and painful decision-making in high-pressure scenarios.

Adaptive Decision-Making and Crisis Navigation Procedure

When facing a crisis, a structured approach is essential. Aviation’s principles offer a proven framework that can be adapted to revolutionize how you handle high-stakes decisions. We will call it the Adaptive Decision-Making and Crisis Navigation Procedure. To illustrate its application, imagine yourself as a product manager in the midst of a challenging project:

You’re leading the development of new customer relationship management (CRM) software. Two weeks before the deadline, your client excitedly asks about the “automated feedback analytics” feature they were told would be included. You are startled, because this feature wasn’t part of the agreed deliverables and isn’t technically feasible within your framework. You discover that a salesperson casually assured the client this feature was “no problem” but never communicated it back to the team. Now you face:

  • a client expecting an additional, unplanned feature;
  • a development team without the resources to deliver it within the given time frame;
  • senior management banking on this project’s success for the company’s reputation.

What do you do? This is the critical moment when acting impulsively can lead to greater problems. Instead, this is exactly where the procedure should come into play, guiding you to pause, assess the situation, and make deliberate, informed decisions. Here’s how to apply the Adaptive Decision-Making and Crisis Navigation Procedure:

1. Recognize the Problem (Gain Situational Awareness)

Stop! Before you act, take your time, assess the situation, and gather all relevant facts. Define the issue clearly: the client expects an additional feature, and your resources are limited. Think of this step as a mental pause button that prevents rash reactions and ensures you see the full scope of the challenge.

Key Guidelines:

  • Avoid Complaining or Blaming: Focus your energy on identifying solutions rather than dwelling on mistakes or assigning fault.
  • Gather Facts Objectively: Collect only relevant and verifiable information without allowing emotions to influence your observations.
  • Stay Calm to Maintain Clarity: Panic clouds judgment and can spread quickly. A composed mindset allows you to evaluate the current environment, conditions, and risks more effectively.

Try This:

Adopt a “Pause and Assess” habit. When a crisis arises, set a timer for five minutes to list the key facts about the situation. During this time, focus solely on the facts — no judgments, no opinions. This brief pause helps reset your perspective, ensures you act based on accurate information, and minimizes emotional reactions.

2. Maintain Control

Stay calm and stabilize the situation you are in. Focus on clear communication and on reinforcing your team’s confidence:

  • Maintain What’s Working: Ensure that tasks already running smoothly remain undisturbed. Avoid pulling people away from critical responsibilities with a hasty “all-hands-on-deck” mindset, as this can disrupt areas that are functioning well and essential to the project’s success.
  • Understand Crucial Parameters: Familiarize yourself with the project’s most important aspects, including key deliverables and agreed-upon objectives for the current stage. Refer to project documentation to avoid jeopardizing these by making impulsive decisions.
  • Communicate Confidence: Engage with your team in a way that conveys composure and stability. Lead by example, creating an atmosphere of control and calmness to prevent unnecessary stress from spreading.

Try This:

Practice the “Control-Reset Technique” during team meetings or crises. When tension rises, take a moment to pause the discussion and ask each team member to briefly state:

  1. What’s working well in their area.
  2. One critical task they are focusing on.

This simple exercise not only redirects attention to what’s under control but also reinforces confidence within the team. It helps you identify potential risks while maintaining a sense of stability and focus on priorities. By centering on what’s working, you build a foundation of calm to tackle emerging challenges effectively.

3. Consult Guidelines

Certain processes, like feature design or scope management, are inherently prone to recurring challenges in any project. To navigate these effectively, create a “Quick Reference Handbook” (QRH) tailored to your team’s needs. This guide should provide general principles and actionable steps for addressing common patterns, such as handling unplanned feature requests or managing misaligned expectations. By recognizing recurring patterns, you can proactively prepare for these scenarios, ensuring your team knows exactly how to respond when they arise.

Try This: Develop a “Pattern Recognition Matrix” alongside your QRH. For each recurring issue (e.g., last-minute feature requests or scope changes), identify its usual triggers and the best responses. Use this matrix to update your QRH with real-world examples and solutions, making it a living resource that evolves with your projects. Review and refine these patterns regularly to keep your team prepared for both predictable and unexpected challenges.
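
For teams that keep their playbooks in version control, a team QRH can even be expressed as a small data structure. The sketch below is one minimal way to do this in Python; the entry names, triggers, and response steps are invented for illustration, and a real handbook would grow out of your own retrospectives.

```python
from dataclasses import dataclass

@dataclass
class QRHEntry:
    """One recurring crisis pattern in a team's Quick Reference Handbook."""
    pattern: str          # short description of the recurring issue
    triggers: list[str]   # signals that usually precede the issue
    responses: list[str]  # ordered, pre-agreed response steps

# Hypothetical matrix of recurring patterns (illustrative content only).
qrh = {
    "unplanned-feature-request": QRHEntry(
        pattern="Client expects a feature that was never agreed on",
        triggers=["side-channel promises by sales", "vague statements of work"],
        responses=[
            "Pause and assess: collect the facts, no blame",
            "Check the signed deliverables and current team capacity",
            "Offer a scoped alternative or a follow-up release",
        ],
    ),
}

def consult(issue: str) -> list[str]:
    """Return the pre-agreed response steps, or a generic fallback."""
    entry = qrh.get(issue)
    if entry is None:
        return ["No QRH entry: build a step-by-step response for this case"]
    return entry.responses

for step in consult("unplanned-feature-request"):
    print("-", step)
```

The point of the structure is less the code than the discipline: every recurring pattern gets explicit triggers and a pre-agreed response, so under pressure the team consults the matrix instead of improvising.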

4. Confirm and Troubleshoot

Once you have identified the errors, act quickly to mitigate them before they escalate. Foster a culture where mistakes can be openly acknowledged without fear of backlash, as this promotes learning and continuous improvement. Keep in mind that high workloads can impair cognitive responses and decision-making, so monitoring and managing workload during troubleshooting is essential. Whenever possible, follow the instructions outlined in your quick reference guide. If no guidance exists, develop a step-by-step response tailored to the situation. Focus on critical tasks, assign clear roles, and establish deadlines.

For example, you might:

  • assign a team member to develop a workaround or temporary solution;
  • communicate transparently to manage your customers’ expectations;
  • break the problem into smaller, manageable parts and prioritize solutions;
  • assemble cross-functional teams with members from diverse departments (e.g., development and product management) to address the issue collaboratively.
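
The decomposition step, breaking the problem into parts with clear roles and deadlines, can also be made concrete. As a rough sketch (all task names, owners, priorities, and durations below are invented for the CRM scenario, not prescribed by the procedure):

```python
from datetime import date, timedelta

def plan(tasks: list[dict], start: date) -> list[tuple]:
    """Order tasks by priority and attach a concrete deadline to each."""
    ordered = sorted(tasks, key=lambda t: t["priority"])
    return [(t["task"], t["owner"], start + timedelta(days=t["days"]))
            for t in ordered]

# Hypothetical breakdown of the unplanned-feature crisis into owned tasks.
tasks = [
    {"task": "Align client expectations in a call",
     "owner": "Product manager", "priority": 1, "days": 1},
    {"task": "Draft a temporary analytics workaround",
     "owner": "Dev lead", "priority": 1, "days": 3},
    {"task": "Scope the full feature for a later release",
     "owner": "Cross-functional team", "priority": 2, "days": 5},
]

for task, owner, due in plan(tasks, date(2025, 1, 2)):
    print(f"{task} -> {owner} (due {due.isoformat()})")
```

Writing the plan down in this form forces the three commitments the procedure asks for: every part has an owner, a priority, and a deadline, so nothing critical is left to improvisation.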

Ready to Take the Next Steps

Now that you’ve mastered the fundamentals of recognizing problems, maintaining control, and troubleshooting during crises, it’s time to dive into the next crucial elements of effective crisis navigation. In the upcoming section, we’ll explore how to harness the power of communication, collaboration, and strategic support to resolve challenges with confidence and clarity. Stay with us to complete the journey toward a fully adaptive decision-making framework!

References

[1] BEA — Bureau d’enquêtes et d’analyses pour la sécurité de l’aviation civile. (2011). Aircraft accident report: On the accident on 1st June 2009 on the Airbus A330-203. Paris. https://aaiu.ie/foreign_reports_fr/final-report-accident-to-airbus-a330-203-registered-f-gzcp-air-france-af-447-rio-de-janeiro-paris-1st-june-2009/

[2] The Beatles. (1965). We Can Work It Out. https://www.youtube.com/watch?v=Qyclqo_AV2M

[3] Kahneman, D. (2013). Thinking, fast and slow. First paperback edition. New York: Farrar, Straus and Giroux.

[4] Skybrary. Unreliable Airspeed Indications. Retrieved 2025.01.02. From: https://skybrary.aero/articles/unreliable-airspeed-indications

[5] Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science (New York, N.Y.), 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124

[6] Skybrary. Quick Reference Handbook (QRH). Retrieved 2025.01.02. From: https://skybrary.aero/articles/quick-reference-handbook-qrh