
Quality in Practice: The Martian's Dilemma


(Spoilers ahead for “The Martian” - a film that came out over a decade ago, so TBH you’re lucky you even got a spoiler warning here.)

There’s a scene in “The Martian” that perfectly captures the tension at the heart of most quality decisions. NASA needs to launch a rescue mission fast. They face a choice: run the standard 10 days of launch site testing, which historically catches a problem only 1 time in 20, or skip it and redirect that time to more critical preparations.

From a purely technical standpoint, it’s obvious. Skip the testing. Use those 10 days to reduce bigger risks. Maximize the astronaut’s chances of survival.

But here’s the catch: if something goes wrong, explaining why you chose to skip established testing procedures becomes… complicated. It would be easy to frame this as “doing the right thing” versus “politics”, but there’s more to it than that.
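To see why the mission-quality argument is so tempting, here’s a toy expected-value calculation. Only the 1-in-20 figure comes from the film; the other numbers are invented purely for illustration, not real NASA estimates:

```python
# Toy model of the launch decision in "The Martian".
# Only the 1-in-20 figure is from the film; the rest are
# illustrative assumptions, not real NASA numbers.

p_test_finds_issue = 1 / 20      # historical yield of the 10 days of testing
p_failure_if_missed = 0.50       # assumed: chance a missed issue dooms the mission
risk_cut_per_day = 0.004         # assumed: risk removed per day of other prep

# Risk added by skipping: an issue exists, goes unfound, and proves fatal.
risk_from_skipping = p_test_finds_issue * p_failure_if_missed

# Risk removed by spending those 10 days on higher-yield preparation.
risk_removed_elsewhere = 10 * risk_cut_per_day

print(f"risk added by skipping tests:  {risk_from_skipping:.3f}")
print(f"risk removed by 10 days prep:  {risk_removed_elsewhere:.3f}")
print("skip the tests" if risk_removed_elsewhere > risk_from_skipping
      else "run the tests")
```

With these (made-up) numbers the maths says skip, which is exactly the purely technical conclusion above. The institutional view isn’t captured anywhere in this calculation, and that’s the point.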

The Two Competing Quality Definitions

In that room at NASA, there were essentially two different quality frameworks at work:

The Mission Quality View: Primary goal: Get the astronaut home alive. Key metric: Overall mission success probability. Optimal decision: Skip low-yield testing for high-yield risk reduction.

The Institutional Quality View: Primary goal: Protect NASA’s reputation and future missions. Key metric: Public perception if things go wrong. Optimal decision: Follow established procedures, maintain defensible process.

Neither view is wrong. They’re just optimising for different stakeholders who matter.

This Happens in Software All The Time

I see this tension constantly in software teams:

Product wants to ship the customer demo on time. They’ll accept some bugs if it means showing the feature to that Fortune 500 company next week.

Engineering wants to refactor the authentication system before building new features on top of it. They know the current system will cause problems down the line.

Support wants better error messages and logging so they can actually help customers when things break.

Sales wants the integration with that popular third-party tool because three deals are waiting on it.

Operations wants proper monitoring and alerting because they’re the ones who get called at 3am when things break.

All of these are legitimate quality concerns. All represent value to people who matter. And they often conflict directly.

The Temptation of False Unity, and a Better Approach

The usual response to this tension is to pretend it doesn’t exist. We tell ourselves quality is quality, and we just need to “do the right thing.”

But this leads to passive-aggressive decisions. Engineering secretly prioritises tech debt work. Product quietly cuts scope. Support starts building their own tools. Everyone thinks they’re upholding quality while the overall system suffers.

Instead of pretending these tensions don’t exist, make them visible and have an explicit negotiation:

1. Identify all the stakeholders who matter. For that customer demo: Sales (need working demo), Engineering (want sustainable code), Support (need debuggable system), Customers (want reliable product).

2. Understand what each values most. Sales: Feature completeness for demo scenarios. Engineering: Code maintainability. Support: Error visibility and diagnostics. Customers: System stability under load.

3. Make trade-offs explicit. “We can have the demo working perfectly, but we’ll need to add proper error handling afterwards” rather than “everything will be fine.”

4. Distribute the risk intelligently. Maybe you build the happy path perfectly for the demo but add extensive monitoring so you know immediately if edge cases hit production.

A Real Example

I worked with a team facing exactly this scenario. Sales needed a new feature integrated for a key-customer demo in two weeks. Engineering estimated it would take four weeks to do it “properly.”

Instead of fighting about what “properly” meant, we got specific:

Sales needed: Core workflow that would work reliably for demo scenarios. Engineering needed: No shortcuts that would create ongoing maintenance burden. The solution: We had one strong engineer build just the core workflow pieces needed for the demo, and crucially, had that same engineer running the demos to ensure we didn’t accidentally try anything we shouldn’t. Meanwhile, the rest of the team started building the full feature properly, with the same interfaces as the demo version (but fleshed out), so we could do a rip-and-replace once the demos were over.

Sales got their demo that worked flawlessly. Engineering got to build the feature right without compromising the main codebase. And everyone knew exactly what was temporary demo code versus what was going into production.
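The “same interfaces, rip-and-replace” approach can be sketched like this. The names here are invented for illustration; the point is that both implementations satisfy one interface, so swapping demo code for production code is a one-line change:

```python
# Sketch of the demo/production swap described above.
# All names are hypothetical; the pattern is what matters.
from typing import Protocol


class OrderWorkflow(Protocol):
    """The shared interface both implementations satisfy."""
    def submit(self, order_id: str) -> str: ...


class DemoWorkflow:
    """Happy-path-only version, built fast for the demo.
    Handles exactly the scenarios the demo will exercise."""
    def submit(self, order_id: str) -> str:
        return f"order {order_id} submitted (demo path)"


class ProductionWorkflow:
    """Full implementation, built in parallel by the rest of the
    team behind the same interface as the demo version."""
    def submit(self, order_id: str) -> str:
        # Real validation, error handling, retries, logging go here.
        return f"order {order_id} submitted"


def make_workflow(demo_mode: bool) -> OrderWorkflow:
    # Flip this flag to rip-and-replace once the demos are over.
    return DemoWorkflow() if demo_mode else ProductionWorkflow()


workflow = make_workflow(demo_mode=True)
print(workflow.submit("A-123"))
```

Because the call sites only depend on `OrderWorkflow`, nothing downstream changes when the production version lands, and everyone can see at a glance which class is the temporary demo code.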

The NASA Lesson for Software Teams

The genius of the NASA example isn’t that they found the “right” answer. It’s that they made the trade-off explicitly and took responsibility for it.

They could have pretended there was no trade-off (“we’ll just work harder and do both!”). They could have quietly skipped testing and hoped no one noticed. Instead, they acknowledged the tension, made a conscious choice, and owned the consequences. Given the information they had, it was the right decision, and we as the audience could see that - even though in the film that 1-in-20 event does happen, making the second half far more exciting than it would otherwise have been!

Software teams should do the same. When stakeholders want different things: make the trade-offs visible, get explicit agreement on priorities, document what you’re optimising for (and what you’re not), and plan how to address deferred concerns.

The Bottom Line

Quality isn’t about finding the one “right” answer. It’s about consciously balancing the competing needs of different stakeholders who matter.

The teams that struggle with quality are often the ones pretending these trade-offs don’t exist. The teams that succeed are the ones who make them explicit, negotiate honestly, and take ownership of their choices.

Sometimes the technically optimal solution isn’t politically viable. Sometimes the politically safe choice increases technical risk. That’s not a failure of quality - that’s the reality of building software for humans in human organizations.

The key is making those choices consciously rather than pretending they don’t exist.


Originally published on Edmund Pringle’s Substack. Follow Ed for more on software quality and engineering leadership.