There is considerable backlash toward GoDaddy right now, because their security program sent out phishing emails promising fake holiday bonuses to employees during a pandemic that has hit people particularly hard (emotionally, financially, and otherwise). If you aren't familiar with the situation, you can read more about it here.
Quite a few places have already pointed out the blatant insensitivity of this test, but I wanted to address some other points I thought were equally important.
Phishing emails should replicate real-world attackers. Maybe.
Sure, malicious actors are malicious. They are more than willing to take any and every advantage over their victims in order to make a profit. So naturally, we should adopt the same "nothing is sacred" approach to simulated testing in order to be 100% authentic, right? Honestly, it's significantly more complicated than that.
Let's start with the most basic facet: malicious actors are acting illegally. As defenders, we have to operate within legal constraints. You aren't going to steal your employees' personal funds or belongings from their homes to make a simulation authentic, are you? The lines must be drawn somewhere, and the fact remains that we need to operate successfully within these constraints. Offensive simulations aren't going to be exactly like real attacks. If you're doing it right, that shouldn't even be the point anyway.
Even if you were to argue that phishing attempts should be as realistic as possible within legal constraints, GoDaddy's phishing test would still fall short. The emails used in this campaign came from internal domains, with valid DKIM signatures. Real phishing attacks simply won't look like this. Some might argue that you could have account takeovers (ATOs) on legitimate internal accounts, but then it wouldn't exactly be phishing anyway; at the very least it's a hybrid of sorts, which is unfairly represented in these tests.
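To make that concrete, here's a minimal sketch of the kind of authentication check that separates the two cases. It assumes the third-party dkimpy package, and the saved message file and internal domain are hypothetical stand-ins. An external phisher generally can't produce a valid DKIM signature aligned with a domain they don't control; a simulation sent from inside trivially can.

```python
# A minimal sketch: check whether a message carries a valid DKIM
# signature whose signing domain aligns with its From: domain.
# Assumes the third-party `dkimpy` package (pip install dkimpy);
# the file name and "internal.example.com" domain are hypothetical.
import email
import email.utils
import re

import dkim  # from the dkimpy package

with open("suspicious_message.eml", "rb") as f:
    raw = f.read()

msg = email.message_from_bytes(raw)

# dkim.verify() performs the DNS lookup and cryptographic check.
dkim_valid = dkim.verify(raw)

# Extract the d= tag (the signing domain) from the DKIM-Signature header.
sig = msg.get("DKIM-Signature", "")
m = re.search(r"\bd=([^;\s]+)", sig)
signing_domain = m.group(1) if m else None

from_addr = email.utils.parseaddr(msg.get("From", ""))[1]
from_domain = from_addr.rpartition("@")[2]

print(f"DKIM valid:     {dkim_valid}")
print(f"Signing domain: {signing_domain}")
print(f"From domain:    {from_domain}")

# A valid, aligned signature for your internal domain means the message
# really did originate from infrastructure authorized for that domain --
# exactly what an outside phisher normally can't achieve.
if dkim_valid and signing_domain == from_domain == "internal.example.com":
    print("Authentically internal -- not representative of real phishing.")
```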
So what should a successful phishing test look like? Well, that depends on the organization and its context. What does your normal attack profile look like? What are the TTPs you are trying to protect against? There's no one-size-fits-all answer here.
I think a lot of people get carried away here and put the cart before the horse. They are so focused on making these simulations "as realistic as possible" that they forget what the goal was in the first place. And this brings us to our next point...
What is the goal of the security program (and the phishing campaign)?
It may seem obvious, but for a security program to be successful, it needs concretely defined goals. You'd probably be surprised how often this step is missed. Many people and organizations assume goals without ever formally documenting them. But program goals which aren't documented usually end up either too vague or misaligned with the company's goals and mission, and it becomes all too easy for them to fail even their own objectives.
Let's examine a generic phishing program in this light. We're going to start by asking a few basic questions.
- Who is this phishing campaign for?
- What is this campaign trying to achieve?
- How do we measure the success of this campaign?
Knowing the target audience of your campaign is going to be important here. You shouldn't be crafting the same material for all groups of people. You have different knowledge bases, experiences, workflows, etc. to consider, and all of this is going to impact how effective your program will be. For example, would you give the same testing material to IT veterans as you would to a finance team?
Knowing what you are trying to achieve with the campaign is absolutely critical. You simply cannot be successful if you can't define this. Are you trying to identify knowledge gaps? Are you trying to identify opportunities for improvement? Are you looking to educate employees? These things are all similar, but subtly distinct. For example, if you want to educate your employees to strengthen your organization's overall posture, but then go on a shaming spree which alienates your employees, sows distrust in your program, and fosters a climate of shadow IT, you are going to miss the mark by miles. And in that scenario, the biggest failure is not the employees, but the program and the leadership teams who signed off on it. Formal documentation is probably everyone's least favourite thing, but believe me, it goes a long way. Just the process of writing things down will help you think about things differently than you otherwise might.
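To make "write it down" concrete, here's a minimal sketch of the kind of structure a documented campaign goal might take. The field names and example values are my own illustrative assumptions, not any industry standard.

```python
# A minimal sketch of a formally documented campaign goal. The fields
# and values are illustrative assumptions, not a standard template.
from dataclasses import dataclass

@dataclass
class CampaignGoal:
    audience: str        # who the campaign is for
    objective: str       # what it is trying to achieve
    success_metric: str  # how success will be measured
    tolerance: float     # the click rate the organization can accept
    follow_up: str       # what happens after someone clicks

finance_q1 = CampaignGoal(
    audience="finance team",
    objective="identify knowledge gaps around invoice-themed lures",
    success_metric="click rate and phish-report rate over the quarter",
    tolerance=0.05,
    follow_up="short targeted training session; no individual shaming",
)
```

Even a skeleton like this forces the questions above to be answered explicitly, rather than assumed.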
Knowing what to measure, and how to measure it, is another deceptively tricky task. Sometimes we get so stuck on the seemingly obvious pieces that we miss the contextually important ones. Let's say 500 employees click on these phishing emails. Is this bad? Is this expected? Is this within the organization's tolerance? The right answer is: it depends. 500 employees out of 1,000 is a different story than 500 out of 10,000, or out of 100,000. And even then, how do we know what the acceptable tolerance is? 500/100,000 is 0.5%. Qualitatively that may seem pretty good, but what is the impact of this 0.5%? What kind of access do those people have? How could it be abused? What's the cost of all of them clicking on truly malicious links? What's the cost of even a single person doing so? We need to understand how to tie these numbers back to impact before we can tell which numbers are good and which are bad. (This is where a Business Impact Analysis can come in handy.)
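As a toy illustration of tying those numbers back to impact, here's a minimal sketch using entirely hypothetical figures. In practice, the per-team cost estimates would come out of that Business Impact Analysis rather than thin air.

```python
# A minimal sketch of tying click rates back to impact. Every number
# here is hypothetical; the point is the shape of the reasoning.

clicked = 500
population = 100_000
click_rate = clicked / population
print(f"Click rate: {click_rate:.1%}")  # 0.5% -- but is that "good"?

# Impact depends on *who* clicked and what compromising them would cost.
# These per-team estimates would come from a Business Impact Analysis.
est_cost_per_compromise = {
    "engineering": 250_000,  # source access, lateral movement potential
    "finance": 400_000,      # wire fraud exposure
    "support": 40_000,       # comparatively limited blast radius
}
clicks_by_team = {"engineering": 120, "finance": 30, "support": 350}

worst_case_exposure = sum(
    clicks * est_cost_per_compromise[team]
    for team, clicks in clicks_by_team.items()
)
print(f"Worst-case exposure if these were real: ${worst_case_exposure:,}")
# The same 0.5% click rate can represent wildly different risk profiles
# depending on how the clicks are distributed across the organization.
```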
All of these questions need to be asked, and it's a shame that many programs run without ever asking them. I'm not saying GoDaddy never asked these questions, but from the sound of this campaign, I suspect they did not.
Are the consequences even impactful?
Let's say we get past all of this, we run a campaign, we get the results, and now we are ready for mitigation. This is one of the most precarious pieces. We've identified a problem, now how do we correct for this?
This is another major pitfall for corporate security programs: employee awareness and education is often given only trivial consideration. The extent of it usually looks like some repetitive, unengaging online video training course. Watch some short videos, answer one or two questions after each, and forget it all when it's over. That doesn't exactly sound like the recipe for a successful program, and if your goal is to improve the organization's security posture, you're likely to fall short here as well.
Each company is going to need a tailored solution for their program. And to be clear, I'm not saying video courses can't be a successful part of that solution and/or program. But they can't be the totality of it. It can't feel like a punishment, or people will resent it, and that leads to resistance (even if subconscious). Security teams are supposed to be there to help people navigate issues they may not deeply understand. People need to feel supported in doing the right things, rather than victimized for failing to pass a bar they may not even be able to see. It's important to remember that these internal campaigns should be a metric for evaluation, not blame. We need to respond accordingly: in a way that strengthens the organization's posture rather than undermines it.
I'm not going to sit here and dictate how each company should be doing this, but I do recommend that leadership teams put some serious contemplation into this piece.
~500 employees clicked on these emails. What does this say about security culture at the organization?
With approximately 500 employees reportedly clicking on GoDaddy's phishing emails (out of roughly 7,000 total employees), about 1 in 14 people in the organization, roughly 7%, was vulnerable to this campaign. Regardless of where an organization sets its tolerance threshold, that is a lot of employees. Quite a few people are saying this proves the ignorance of the employees, but I contest that view. I believe it speaks much more to the organization's security culture. How does the organization educate its employees about security? What kind of support and involvement comes from its leadership teams? What are the security boundaries they are trying to protect, and who is trying to protect them? The answers to these questions (and many more) will strongly influence a company's security culture; and a strong security culture, intrinsically built into the fabric of the organization's processes and procedures, will drastically impact the effectiveness of any security controls put in place.
Successful security programs need empathy and trust. Without trust, your program will fail.
In my experience, one of the most neglected facets of any security program is the human element. At the end of the day, we need to remember that there's a reason we're bothering to do all of this, and that reason can almost always be tied back to people in some way. Communities and relationships are built on trust and empathy; without them, neither will thrive. Think about the worst-case scenario there. Why do people so often create shadow IT in organizations? They don't trust the processes in place to be sufficient, so they try to augment or circumvent them to solve a need or a problem. If we can find ways to solve those needs first, shadow IT becomes impractical: it becomes more work and a larger obstacle for people, and that naturally leads to its decline. But we have to be willing to put the time, energy, and resources into identifying and addressing these concerns.
A big part of that depends on how the organization (and the security team) treats employees. If we view employees as enemies, or "ticking time-bombs", or any such analogy, it erodes the trust we need to build successful relationships between security teams and the rest of the organization. People are not going to come forward to discuss their concerns and observations if they feel they will be victimized or penalized for doing so. It may not seem natural to some, but a strong, positive security culture only grows when employees feel rewarded for being part of the initiative. This doesn't even have to be grand: simply acknowledging and thanking people goes a long way, and yet it is one of the most overlooked tools in the industry.
Personally, I think infosec could stand to have greater compassion. Even if you agree that GoDaddy's campaign was technically sound and reasonable for emulating "real adversaries", how does it impact the trust their employees will have in the organization and the security program going forward? And if that layer of trust is gone, how successful will any future programs, campaigns, or controls be? It's all interconnected. Second-order effects are just as important (and impactful) as first-order effects.
My takeaway
In the end, I think this is a glaring example of how not to run a security program. Not simply because of the obvious harm it caused, but because a campaign run in this manner contravenes every reasonable and effective process and paradigm a healthy security program should be using.
We need to be thinking bigger, and better. We need to be doing more.