Dayo Adetoye (PhD) · Managing Uncertainty and Complexity · 11 min read

The Great Security Bluff:

Why Your Controls Might Fail When You Need Them Most

Can you be confident that your security controls are battle-ready for a real-world test against threat actors? Are you betting the house on a control that you last tested during last year's audit? This blog post provides critical analysis and strategies for gaining assurance that your controls will withstand contact with adversaries.

Introduction

In the relentless battle against cyber threats, the resilience of your security controls could mean the difference between a near-miss and a catastrophic breach. Yet, how often do we ask ourselves: are our controls truly ready to withstand an adversary’s test? It’s easy to place trust in measures that passed last year’s audit or met compliance standards, but the real question is whether they can hold up in the chaotic, high-pressure reality of a live attack.

The cybersecurity landscape is anything but static. On the one hand, threat actors constantly evolve their tactics, probing for weaknesses in even the most robust defenses. On the other hand, your organization’s technical environment is constantly changing as you add, configure, and manage assets within your infrastructure, which may expose unforeseen weaknesses through misconfiguration, human error, and unexpected interactions between systems and controls. Is the architecture in your network diagram actually the true picture of your technical environment?

Organizations often overestimate the strength of their safeguards, blinded by assumptions or outdated testing methods. This gap between theoretical security and operational readiness leaves businesses vulnerable — right when they need their controls to perform most.

Assurance through Continuous Validation

At the heart of all this lies the need for continuous validation — a systematic approach to testing and measuring the effectiveness of your controls. This blog introduces a framework for gaining assurance through rigorous analysis and simulation. By calculating the Threat Mitigation Potential of individual controls, we can quantify their ability to protect against specific threats.

Introducing Threat Mitigation Potential (TMP)

At the heart of any effective cybersecurity strategy lies a critical question: How well do your controls mitigate threats in the real world? While compliance and audit results can provide some assurance, they often fall short of revealing how well controls perform under actual attack conditions.

Threat Mitigation Potential

To address this gap, we introduce the concept of Threat Mitigation Potential (TMP)—a comprehensive model designed to quantify the real-world effectiveness of a security control. TMP provides a structured way to evaluate a control’s ability to reduce risk, accounting for three pivotal factors:

  1. Mitigation Effectiveness (Efficacy): The probability, expressed as a percentage, that the control successfully performs its intended function. For example, an antivirus might block malware 85% of the time, or a firewall might intercept 90% of malicious traffic.
  2. Efficacy Decay: Decline in the degree of confidence that the control continues to function as expected over time. This factor introduces a dynamic element, modeled using an exponential decay function to reflect the natural reduction in confidence as time passes without rigorous testing or validation.
  3. Deployment Coverage (Coverage): The extent, also expressed as a percentage, to which the control is deployed across relevant assets. A control that’s only applied to 50% of your systems leaves significant gaps in your defenses.

TMP combines these factors into a practical framework, enabling you to measure the true performance of your controls. It highlights strengths, uncovers blind spots, and provides actionable insights to prioritize and optimize your defenses.

In the sections ahead, we’ll explore how TMP provides a rigorous framework for evaluating your controls’ ability to mitigate threats, ensuring you have a solid foundation to continuously strengthen your defensive security posture.

Control Efficacy Decay

Security controls can fail for a variety of reasons: misconfigurations, outdated detection signatures, conflicts with other controls, or unforeseen changes in the environment. To ensure a control remains effective when it’s needed most, continuous testing and validation are essential. Without this, confidence in a control’s ability to meet its threat mitigation objectives diminishes over time.

We represent the decline in confidence using an exponential decay function, which models the effect of time $t$ (in days) on a control’s efficacy. The decay function $\Delta(t)$ is defined as follows:

Control Efficacy Decay Over Time
$$\Delta(t) = e^{-kt}$$

Where:

  • $k$: the efficacy decay rate parameter, which reflects the rate at which confidence in the control’s effectiveness diminishes. A higher $k$ indicates faster decay, suitable for more critical controls where the risks associated with failure are higher. Conversely, a lower $k$ implies slower decay and is suitable for less critical controls.

The graph below illustrates control efficacy decay over time with an example value of $k = 0.018$:

[Figure: control efficacy decay over time]

This model provides a straightforward yet powerful way to quantify the importance of continuous validation. The longer a control remains untested, the less certainty there is about its ability to perform as intended. This decline underscores the need to integrate regular testing into your security operations to maintain confidence in your defenses.
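
For readers who want to experiment, here is a minimal Python sketch of the decay curve; the function name and the sample days are illustrative, not part of the model, and it assumes the same example rate $k = 0.018$ used in the graph above:

```python
import math

def efficacy_decay(t_days: float, k: float = 0.018) -> float:
    """Delta(t) = e^(-k * t): confidence in a control after t untested days."""
    return math.exp(-k * t_days)

# Confidence after 0, 30, 90, and 365 days without validation (k = 0.018)
for t in (0, 30, 90, 365):
    print(f"day {t:>3}: {efficacy_decay(t):.2f}")   # 1.00, 0.58, 0.20, 0.00
```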

Continuous Validation and Efficacy Decay

Testing a control to ensure it functions as expected provides reassurance and resets our confidence in the control’s effectiveness back to its initial value. This process of continuous validation helps counteract the natural decline in confidence over time, effectively “refreshing” the control’s efficacy.

We model this behavior with the following function: suppose $v$ represents the interval (in days) between tests. The adjusted efficacy decay, $\Delta'(t, v)$, accounts for these validation intervals and is defined as:

Control Efficacy Decay with Validation
$$\Delta'(t, v) = e^{-k \times (t \ \text{mod} \ v)}$$

Where:

  • $v$: validation cadence of the control, in days.
  • $t$: time elapsed, in days.
  • $k$: the efficacy decay rate parameter, which reflects the rate at which confidence in the control’s effectiveness diminishes.

The modulus operation $(t \ \text{mod} \ v)$ resets the decay whenever validation occurs at interval $v$.

The graphs below illustrate how regular validation impacts the control’s efficacy decay, highlighting the importance of consistent testing to sustain confidence in your security controls.

[Figure: control efficacy decay with weekly vs. bi-annual validation cadence]

As shown in the graph, a weekly validation cadence keeps efficacy decay at roughly 90% or above, whereas a bi-annual cadence allows confidence to diminish to nearly 0% before the next test. This highlights the critical role of testing frequency, represented by $v$, in sustaining an effective and reliable defensive posture.
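
A similar Python sketch, again assuming the illustrative rate $k = 0.018$, reports the worst-case confidence reached just before each scheduled test for a few common cadences (the helper name and cadence labels are hypothetical):

```python
import math

def adjusted_decay(t_days: float, v_days: float, k: float = 0.018) -> float:
    """Delta'(t, v) = e^(-k * (t mod v)): confidence resets each time the control is validated."""
    return math.exp(-k * (t_days % v_days))

# Worst-case confidence just before the next test, for a few cadences (k = 0.018)
for label, v in [("weekly", 7), ("monthly", 30), ("quarterly", 90), ("bi-annual", 182)]:
    worst = adjusted_decay(v - 1, v)   # the day before validation resets the decay
    print(f"{label:>10} (v = {v:>3} days): worst-case decay ~ {worst:.2f}")
```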

Simulating Threat Mitigation

With an efficacy measure in place, we can use a Monte Carlo simulation to evaluate how effectively a control mitigates threats over time.

The simulation is governed by the following rule:

Deriving TMP through Monte Carlo Simulation
$$
\Gamma(r,t,v) =
\begin{cases}
\text{mitigated}, & \text{if } r < \text{Efficacy} \times \Delta'(t,v) \times \text{Coverage} \\
\text{not mitigated}, & \text{otherwise.}
\end{cases}
$$

Here’s what each parameter represents:

  • $\Gamma(r,t,v)$: The outcome of a single experiment, indicating whether the threat was mitigated or not.
  • $r$: A random number drawn from a uniform distribution $(0, 1)$.
  • $t$: The simulated day on which the threat event occurs.
  • $v$: The validation cadence of the control.

In this model, the control mitigates the threat if its mitigation potential—calculated as the product of Efficacy, $\Delta'(t,v)$ (decayed efficacy), and Coverage—exceeds the random number $r$. We repeat this process over many iterations to determine how often the control successfully mitigates threats versus when it fails.

By aggregating the results, we can calculate the Threat Mitigation Potential (TMP) of the control, providing a quantifiable measure of its effectiveness in real-world scenarios.
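
The simulation itself is only a few lines. The sketch below is one possible Python implementation of the rule above; the 365-day horizon, the trial count, and the function name are illustrative assumptions rather than part of the model:

```python
import math
import random

def simulate_tmp(efficacy: float, coverage: float, v: float, k: float,
                 horizon_days: int = 365, trials: int = 100_000) -> float:
    """Estimate TMP by sampling threat events on random days and applying Gamma(r, t, v)."""
    mitigated = 0
    for _ in range(trials):
        t = random.uniform(0, horizon_days)   # day on which the threat event lands
        r = random.random()                   # uniform draw from (0, 1)
        decay = math.exp(-k * (t % v))        # Delta'(t, v): decay since the last validation
        if r < efficacy * decay * coverage:   # the mitigation rule Gamma(r, t, v)
            mitigated += 1
    return mitigated / trials

# Illustrative control: 85% efficacy, 80% coverage, monthly validation, k = 0.02
print(f"Estimated TMP: {simulate_tmp(0.85, 0.80, v=30, k=0.02):.2f}")  # typically ~0.51
```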

Deriving Threat Mitigation Potential

Instead of approximating the Threat Mitigation Potential (TMP) through thousands of Monte Carlo simulations, we can derive it analytically (the derivation is given below). The formula for TMP is as follows:

Threat Mitigation Potential (TMP) Defined
$$\text{TMP} = \text{Efficacy} \times \frac{1 - e^{-kv}}{kv} \times \text{Coverage}$$

Where:

  • Efficacy: The probability, expressed as a percentage, that the control successfully performs its intended function.
  • Coverage: The extent, also expressed as a percentage, to which the control is deployed across relevant assets.
  • $v$: The validation cadence of the control, in days.
  • $k$: The efficacy decay rate parameter, which reflects the rate at which confidence in the control’s effectiveness diminishes.

This formula provides an exact calculation of TMP, incorporating the effects of control efficacy, validation cadence, and deployment coverage into a single metric. By directly computing TMP, organizations can better understand and quantify the real-world impact of their security controls.

By recognizing that $\Delta'(t,v)$ is a continuous function of $t$ and that $t$ is uniformly distributed, we can simplify the calculation. For a constant validation cadence $v$, the function $\Delta'(t,v)$ forms a repeating cycle. Therefore, the expected value of $\Delta'(t,v)$ as $t \to \infty$ is the same as its expected value over one interval $[0, v]$.

Within this interval, $\Delta'(t,v) = \Delta(t) = e^{-kt}$. The expected value of $\Delta(t)$ over $[0, v]$ can be derived as follows:

$$
\begin{aligned}
F(t) &= -\frac{1}{k} e^{-kt} + C && \text{the antiderivative of } \Delta(t) \text{; } C \text{ is a constant} \\
E[\Delta(t)] &= \frac{F(t_1) - F(t_0)}{t_1 - t_0} && \text{expected value of } \Delta(t) \text{ over } [t_0, t_1] \\
&= \frac{e^{-kt_0} - e^{-kt_1}}{k (t_1 - t_0)} && \text{simplification} \\
&= \frac{1 - e^{-kv}}{kv} && \text{substitution: } t_0 = 0,\ t_1 = v
\end{aligned}
$$

Since $r$ in the simulation $\Gamma(r,t,v)$ is uniformly distributed, the probability that the control mitigates a threat is given by:

$$
\begin{aligned}
\text{TMP} = E[\Gamma(r,t,v)] &= \text{Efficacy} \times E[\Delta(t)] \times \text{Coverage} && \text{expected value of a control's mitigation potential} \\
&= \text{Efficacy} \times \frac{1 - e^{-kv}}{kv} \times \text{Coverage} && \text{substitution}
\end{aligned}
$$
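
In code, the closed-form result reduces to a one-line function. The minimal Python sketch below reuses the illustrative parameters from the Monte Carlo example, and the two estimates should agree closely:

```python
import math

def tmp(efficacy: float, coverage: float, v: float, k: float) -> float:
    """Closed-form TMP = Efficacy * (1 - e^(-k*v)) / (k*v) * Coverage."""
    return efficacy * (1 - math.exp(-k * v)) / (k * v) * coverage

# Same illustrative parameters as the Monte Carlo sketch above
print(f"Analytic TMP: {tmp(efficacy=0.85, coverage=0.80, v=30, k=0.02):.2f}")  # ~0.51
```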

Applying Threat Mitigation Potential in Practice

Let’s explore a couple of examples to illustrate how TMP can guide practical security decisions while aligning controls with an organization’s risk tolerance.

Example 1: How Often Should We Validate Our Anti-Phishing Control?

Imagine a control designed to prevent phishing attacks by analyzing email content and blocking suspicious messages. Its parameters are:

  • Efficacy: The control successfully blocks phishing attempts 85% of the time when functioning as intended.
  • Coverage: The control is deployed across 80% of the organization’s email systems.
  • Validation Cadence: The control is tested monthly, so $v = 30$ days.
  • Decay Rate ($k$): The efficacy decay parameter is set to 0.02, reflecting a moderate decline in confidence without validation.
  • Risk Tolerance Threshold: The organization requires controls to maintain at least a 70% TMP to meet its risk appetite.

Using the TMP formula:

$$\text{TMP} = \text{Efficacy} \times \frac{1 - e^{-kv}}{kv} \times \text{Coverage}$$

Substituting the values:

$$\text{TMP} = 0.85 \times \frac{1 - e^{-0.02 \times 30}}{0.02 \times 30} \times 0.80$$

After solving:

$$\text{TMP} \approx 0.85 \times 0.752 \times 0.80 \approx 0.51$$

The result, 51% TMP, indicates that the control’s current configuration is insufficient to meet the organization’s risk tolerance of 70%.

Improving TMP

  1. Increase Validation Cadence: Testing the control weekly ($v = 7$ days) instead of monthly yields:

    $$\text{TMP} = 0.85 \times \frac{1 - e^{-0.02 \times 7}}{0.02 \times 7} \times 0.80$$

    Solving gives:

    $$\text{TMP} \approx 0.85 \times 0.933 \times 0.80 \approx 0.63$$

    The TMP increases to 63%, narrowing the gap to the 70% threshold but still falling short.

  2. Increase Coverage: Deploying the control across 95% of email systems raises the original TMP to:

    $$\text{TMP} = 0.85 \times 0.752 \times 0.95 \approx 0.61$$
  3. Combined Approach: Testing weekly and increasing coverage to 95% achieves:

    $$\text{TMP} = 0.85 \times 0.933 \times 0.95 \approx 0.75$$

With these combined improvements, the control now meets the 70% TMP requirement, aligning it with the organization’s risk tolerance.
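
The four scenarios above are easy to check with a short script; this sketch simply plugs the example's stated parameters into the closed-form TMP formula (the scenario labels are illustrative):

```python
import math

def tmp(efficacy: float, coverage: float, v: float, k: float) -> float:
    """Closed-form TMP = Efficacy * (1 - e^(-k*v)) / (k*v) * Coverage."""
    return efficacy * (1 - math.exp(-k * v)) / (k * v) * coverage

# Anti-phishing control: efficacy 85%, decay rate k = 0.02
for label, v, cov in [
    ("monthly, 80% coverage", 30, 0.80),   # baseline        ~0.51
    ("weekly,  80% coverage",  7, 0.80),   # faster testing  ~0.63
    ("monthly, 95% coverage", 30, 0.95),   # wider coverage  ~0.61
    ("weekly,  95% coverage",  7, 0.95),   # combined        ~0.75
]:
    print(f"{label:<24} TMP = {tmp(0.85, cov, v, k=0.02):.2f}")
```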

Example 2: Is Our Perimeter Firewall Meeting Risk Tolerance Goals?

A perimeter firewall designed to block malicious network traffic has the following characteristics:

  • Efficacy: The firewall blocks malicious traffic 90% of the time when functioning as intended.
  • Coverage: The firewall protects 70% of the organization’s critical assets.
  • Validation Cadence: The firewall is tested quarterly, so $v = 90$ days.
  • Decay Rate ($k$): The efficacy decay parameter is set to 0.015, reflecting a conservative decline in confidence without validation.
  • Risk Tolerance Threshold: Critical controls must maintain at least a 75% TMP to align with the organization’s risk appetite.

Using the TMP formula:

$$\text{TMP} = \text{Efficacy} \times \frac{1 - e^{-kv}}{kv} \times \text{Coverage}$$

Substituting the values:

$$\text{TMP} = 0.90 \times \frac{1 - e^{-0.015 \times 90}}{0.015 \times 90} \times 0.70$$

After solving:

$$\text{TMP} \approx 0.90 \times 0.549 \times 0.70 \approx 0.35$$

The result, 35% TMP, is well below the required 75%.

Improving TMP

  1. Increase Validation Cadence: Testing monthly ($v = 30$ days) gives:

    $$\text{TMP} = 0.90 \times \frac{1 - e^{-0.015 \times 30}}{0.015 \times 30} \times 0.70 \approx 0.51$$
  2. Increase Coverage: Expanding the firewall to protect 90% of assets raises the original TMP to:

    $$\text{TMP} = 0.90 \times 0.549 \times 0.90 \approx 0.44$$
  3. Combined Approach: Monthly testing and 90% coverage achieves:

    $$\text{TMP} = 0.90 \times 0.805 \times 0.90 \approx 0.65$$
  4. Aggressive Weekly Validation: Testing weekly with 90% coverage achieves:

    $$\text{TMP} = 0.90 \times \frac{1 - e^{-0.015 \times 7}}{0.015 \times 7} \times 0.90 \approx 0.77$$

This approach finally aligns the control with the risk appetite, demonstrating how aggressive testing and broader deployment can meet organizational risk tolerance goals.
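
As with Example 1, a small sweep over the candidate cadences and coverage levels reproduces these figures; only the example's stated parameters are used, and the labels are illustrative:

```python
import math

def tmp(efficacy: float, coverage: float, v: float, k: float) -> float:
    """Closed-form TMP = Efficacy * (1 - e^(-k*v)) / (k*v) * Coverage."""
    return efficacy * (1 - math.exp(-k * v)) / (k * v) * coverage

# Perimeter firewall: efficacy 90%, decay rate k = 0.015
for label, v, cov in [
    ("quarterly, 70% coverage", 90, 0.70),   # baseline        ~0.35
    ("monthly,   70% coverage", 30, 0.70),   # faster testing  ~0.51
    ("quarterly, 90% coverage", 90, 0.90),   # wider coverage  ~0.44
    ("monthly,   90% coverage", 30, 0.90),   # combined        ~0.65
    ("weekly,    90% coverage",  7, 0.90),   # aggressive      ~0.77
]:
    print(f"{label:<26} TMP = {tmp(0.90, cov, v, k=0.015):.2f}")
```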

Decision Support

These examples demonstrate how TMP enables organizations to:

  • Quantify Control Effectiveness: TMP translates security control performance into actionable insights that align with risk management objectives.
  • Optimize Resource Allocation: TMP helps identify where to invest in increased validation or coverage to maximize risk mitigation.
  • Balance Risk and Cost: TMP offers a data-driven foundation for weighing operational costs (e.g., testing frequency) against the benefits of improved risk mitigation.

Conclusion

The Threat Mitigation Potential (TMP) framework provides a structured and quantitative approach to assess and improve the real-world performance of security controls. By incorporating efficacy, coverage, and validation cadence into a single metric, TMP transforms abstract risk discussions into actionable decision-making tools.

These examples illustrate how organizations can apply TMP to evaluate whether their controls align with risk tolerance thresholds and explore strategies for improvement. Continuous validation emerges as a key enabler, reinforcing the importance of proactive testing and deployment in maintaining an effective security posture.

In today’s dynamic threat environment, where the stakes are high and adversaries relentless, TMP equips decision-makers with the confidence and clarity needed to ensure controls are ready to defend when it matters most. Start incorporating TMP into your risk management practices today and position your organization for a resilient tomorrow.
