When is phishing education going too far? [closed]
I currently work on the IT security team at my workplace in a senior role. Recently, I assisted management in designing our phishing / social engineering training campaigns, in which IT security sends out phishing "test" emails to see how good company employees are at spotting such emails.
We have adopted a highly targeted strategy based not only on the user's job role but also on the content such employees are likely to see. The content has been varied to include emails asking for sensitive actions (e.g. updating a password), fake social media posts, and targeted advertising.
We have been getting pushback from end users who say they have no way of distinguishing a legitimate email that they would receive day to day from a truly malicious phishing email. There have been requests for our team to scale back the difficulty of these tests.
Edit to address some comments saying that spear-phishing simulations are too extreme or that the simulations are badly designed:
In analyzing the past results of phishing simulations, the users who clicked tended to show certain patterns. Also, one particularly successful phish that resulted in financial loss (an unnecessary online purchase) pretended to be from a member of senior management.
To respond to comments on the depth of targeting / GDPR: customization is based on public company data (e.g. job function), rather than private user data known only to that person. The "content that users are likely to see" is based on typical scenarios, not on what content users at our workplace specifically see.
Questions
When is phishing education going too far?
Is pushback from the end users demonstrative that their awareness is still lacking and that they need further training, specifically given the inability to recognize legitimate from malicious emails?
phishing user-education
closed as primarily opinion-based by Xander, ThoriumBR, Conor Mancone, Rory Alsop♦ Apr 16 at 21:03
Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.
30
I would re-word the title from "education" to "testing" or "simulations"
– schroeder♦
Apr 14 at 19:05
10
This question seems to me like it lacks key details. Why are your users claiming that the phishing emails you send them are indistinguishable from legitimate ones? Is it because they truly are (at least with the tools at a normal user's disposal), or is it because they're screwing up? Receiving an email from a person you've not previously had contact with is not inherently suspicious, so it matters how you are measuring failure. Based on them actually handing over sensitive information? Or just based on them clicking a link in an email that they could not reasonably know was fake in advance?
– Mark Amery
Apr 15 at 13:00
1
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 15 at 18:22
asked Apr 14 at 15:58 by Anthony; edited Apr 16 at 3:47
12 Answers
I think there is an underlying problem that you will need to address. Why do the users care that they are failing?
Phishing simulations should, first and foremost, be an education tool, not a testing tool.
If there are negative consequences to failing, then yes, your users are going to complain if the tests are more difficult than you have prepared them for. You would complain, too.
So, your response should be:
- educate them more (or differently) so that they can pass the tests (or rather, the comprehension tests, which is what they should be)
- remove negative consequences to failing
This might not require any content changes to your education material, but might only require a re-framing of the phishing simulations for users, management, and your security team.
Another tactic to try is to graduate the phishing simulations so that they get harder as the users are successful in responding to phishing. I have done this with my custom programmes. It's more complex on the back end, but the payoffs are huge if you can do it.
Your focus needs to be the evolving maturity of your organisation's ability to resist phishing attacks, not getting everyone to be perfect on tests. Once you take this perspective, the culture around these tests and the complaints will change.
Do it right, and your users will ask for the phishing simulations to be made harder, not easier. If you aim for that end result, you will have a much more resilient organisation.
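The graduated approach described above could be sketched roughly as follows. This is a hypothetical illustration, not part of any real phishing platform; `next_level` and `MAX_LEVEL` are invented names.

```python
# Hypothetical sketch of graduated phishing simulations: each user starts
# at the easiest difficulty and only advances after passing at the
# current level; a failure repeats the level rather than punishing.

MAX_LEVEL = 3  # e.g. obvious fake -> role-targeted -> personally targeted

def next_level(current_level: int, passed: bool) -> int:
    """Advance one difficulty level on a pass; stay put on a fail."""
    if passed and current_level < MAX_LEVEL:
        return current_level + 1
    return current_level

# Simulated campaign history for one user: pass, fail, pass, pass.
level = 1
for outcome in [True, False, True, True]:
    level = next_level(level, outcome)
print(level)  # 3
```

The useful property is that difficulty tracks demonstrated ability, so users are never tested far beyond what they have been prepared for.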
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 16 at 21:02
We have been getting push back from end users that they have no way of distinguishing a legitimate email that they would receive day to day from truly malicious phishing emails.
This is an indication that tests that could be rooted out as fakes by trained security professionals are being used to evaluate people who aren't. You may have the skills to pick an email apart and interpret the headers, but Dan in Accounting probably doesn't and his management's not likely to agree that a master class in RFC 822 is a good use of his time.
Crafting targeted emails to increase the hit rate has to be done based on intelligence collected about your users and your purported sender. This is not information to which a phisher will be privy and, as Michael Hampton pointed out in his comment, rises to spearphishing. That's a different ball game played on a different field.
If there are adversaries (real or potential) capable of good-enough spearphishing to damage your business, all of the phishing countermeasures and training won't help. Your job is to deploy tools that give Dan in Accounting a way to distinguish the real emails from the fakes. That might mean security on the sending end, such as a cryptographic signature that users' mail clients can check, posting a prominent warning when something is unsigned or the signature doesn't match. You can't depend on humans to get this stuff right 100% of the time, especially as your organization gets larger and people don't know each other so well.
This seems to suggest that the fix is to have an automated process that can check for these kinds of signs and warn the user. Gmail does this, putting up a red banner warning if the mail looks suspicious, e.g. fake headers.
– user25221
Apr 16 at 11:16
RFC822 has been superseded long ago by RFC5322.
– Patrick Mevzek
Apr 16 at 14:48
@PatrickMevzek RFC 2822 existed between the two, but some of us old geezers are going to cling to the old numbers 'til you pry them from our cold, dead hands.
– Blrfl
Apr 16 at 15:21
Which is against the IETF way of doing things. If an RFC supersedes another one, there is no reasons to cling to the former version. Except for historical reasons and to show knowledge. Newer versions include bugfixes and disambiguitions. But this is mostly unrelated to the question.
– Patrick Mevzek
Apr 16 at 15:23
@PatrickMevzek Clingage is in name only; I certainly wouldn't implement something based on an obsoleted RFC.
– Blrfl
Apr 16 at 15:49
There's one possible point to make that I haven't seen in other answers, but have seen in the real world.
Users say they "have no way of distinguishing a legitimate email that they would receive day to day from truly malicious phishing emails". What this may tell you, is that legitimate emails about password renewals, service changes and such, do not obey the rules that users are expected to follow.
I have certainly seen organisations whose training materials tell users not to click links in emails, and definitely not to put their passwords into the sites those links point to, or to install software from them. And the service teams at those organisations then send out mass emails about service updates that require action (such as password updates, software installs, etc), with helpful links to click.
One thing that might help would be to clarify that users should report these legitimate emails. It might not help the users directly, but it may help to remind the service team that their emails have rules to follow, which should make things clearer for users in the long run.
15
This. I've worked in an organisation that sends similar phishing test emails, but then regularly sends "legitimate" emails that are indistinguishable from spam/phishing, often containing links to external sites (sometimes requiring logins) which my company has previously had no connection with. The problem may very well be that the legitimate mails are too spammy, rather than your tests going too far.
– Mohirl
Apr 15 at 12:27
6
This is absolutely a problem. If the organization is sending out legitimate emails that the users are expected to click links in, and you are not explicitly identifying those emails as legitimate and teaching the users how to identify them, your legitimate emails are actively working against and undoing the training you are trying to provide.
– Colin Young
Apr 15 at 13:41
8
I used to make a point of reporting emails from IT security to IT security as apparent phishing attempts. They never liked it.
– Michael Kay
Apr 15 at 16:37
@MichaelKay: It really irks me that so many organizations send out real messages that are indistinguishable from phishing attempts. If Acme's VISA card moves from BankCorp to MegaBank, it should not inform customers about it by leaving phone messages asking them to visit AcmeVISAupdate.com [a domain the customers have never used before] (I had a real one do precisely that; names changed to protect the guilty), but instead inform them how to get the information using the phone number or web site printed on their card.
– supercat
Apr 15 at 21:25
I've been known to receive purchase orders from a previously unknown sender saying simply "please find our purchase order attached". And of course the spam filter might well zap them before I have to make a decision.
– Michael Kay
Apr 15 at 22:36
- When is phishing education going too far?
When the cost exceeds the benefit. Benefit is generally measured in lower click-through rates and increased rates of reporting of genuine phishing emails. Cost can be measured in:
- the effort to implement the test
- false positive reporting of (not) phishing emails
- lower engagement rates on legitimate emails
- ill will towards the Security group.
The last is the hardest to measure, and often ignored, but if your job is to trick your own people, you shouldn't be surprised if they start viewing you with suspicion.
- Is pushback from the end users demonstrative that their awareness is still lacking and that they need further training, specifically given the inability to recognize legitimate from malicious emails?
Um, maybe?
If their click-through rates remain high, then awareness is still lacking and they need further training.
If click-through rates in general have dropped, but the test emails consistently fool them, then their concerns about the testing may be legitimate.
It sounds like your content is pretty closely tailored to your users and even their job roles. This may be what is generating the negative reaction. Ideally, a phishing test should not rely upon knowledge or understanding of internal email practices, just as an attacker should not have access to those. (And note, your internal messaging should not look like your external messaging, for the same reason).
You may want to consider outsourcing your phishing tests. The organizations that are dedicated to offering this service have a better feel for what "in the wild" looks like, and their tools for measuring and reporting on engagement rates are usually better than you can do on your own.
Personally, I'm not fond of phish testing, because I believe it erodes trust between users and Security. But the fact of the matter is it's one of the best ways to improve your users' defences.
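The two benefit metrics named at the top of this answer (click-through rate and reporting rate) are straightforward to compute from raw simulation results; a minimal sketch, where the dict keys are assumptions rather than the schema of any real phishing-test product:

```python
# Illustrative only: compute click-through and reporting rates from a
# list of per-user simulation results.

def campaign_metrics(results):
    """results: list of dicts with boolean 'clicked' and 'reported'."""
    n = len(results)
    if n == 0:
        return {"click_through": 0.0, "reported": 0.0}
    clicked = sum(r["clicked"] for r in results)
    reported = sum(r["reported"] for r in results)
    return {"click_through": clicked / n, "reported": reported / n}

sample = [
    {"clicked": True,  "reported": False},
    {"clicked": False, "reported": True},
    {"clicked": False, "reported": True},
    {"clicked": False, "reported": False},
]
print(campaign_metrics(sample))  # click_through 0.25, reported 0.5
```

Trending these two numbers across campaigns, rather than judging individual users, is what the cost/benefit framing above actually needs.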
1
Forgive me if I am wrong, but if a few people click, wouldn't that be a failure?
– Vipul Nair
Apr 14 at 17:44
8
@VipulNair eradication is not a realistic goal for phish training. I believe I've seen 10-20% click-through described as ideal improvement. I have seen organizations celebrate pushing down below 50%.
– gowenfawr
Apr 14 at 17:55
4
@gowenfawr most recent research shows that getting below 10% is not realistic. Even CISOs click phishing emails (one CISO I know gets 600 emails a day and sometimes he clicks on a well-crafted phish).
– schroeder♦
Apr 14 at 19:11
Where are you guys getting these stats on targets for click through? I'm not in our IS group but I'm on their steering team, we're routinely around 5 - 6% click through for a fairly non-technical workforce of around 500 employees, and what I would consider very realistic test emails. I'm surprised that your comments seem to imply we're way ahead of average (or my interpretation of how difficult our simulated emails are is totally wrong).
– dwizum
Apr 15 at 13:23
2
@dwizum Lance Spitzner, who's a SME in this area, claims <5% is "good". However, my comments about 10-20% and starting >50% stem from personal experience with a handful of organizations. My gut says that Lance has a self-selecting population ("people who care enough about this to hire him") and that 10-20% is a realistic churn point for good organizations. You may very well be doing better than average :)
– gowenfawr
Apr 15 at 13:39
There's one way in which this may have gone too far:
We have adopted a highly targeted strategy based not only on the user's job role but also on the content such employees are likely to see.
You need to ask yourself whether employees at your company will actually be subject to this level of spearphishing. If the answer is no, then you've gone too far. Of course, this all depends on what the group does. If it's the DNC, then the answer is yes.
You've seemingly committed a very common mistake among us security professionals: You have gone too much into the mindset of the attacker and you are trying too hard to defeat your fellow employees, instead of making them your allies.
Your phishing campaign should be based on your threat model and risk analysis. Are your employees likely to be a target of carefully crafted spearphishing attacks, or is the higher risk the more common untargeted, mass-phishing campaign of moderate attacker skill?
In the latter case, don't do things to your employees that are exceptionally unlikely according to your risk analysis. You simply can't explain to management why you're doing it, and it will seem that you are trying to get a high out of appearing smarter and "beating" regular employees (which of course you can in your field of expertise, just as they could beat you hands down in budgeting, handling customer complaints or supply management).
If you do have targeted, high-skill spearphishing campaigns in your threat model, then you need to gradually escalate and plan a campaign in multiple steps, because your goal is to teach, not to defeat and embarrass. So you do what every teacher does: you start with a simple base exercise and then follow with more difficult ones.
Example
For example, in a three-step process, you would start with a mail that is fairly easy to spot as a fake, but also contains elements that are more difficult to see. When a user correctly identifies it as a phishing mail, you congratulate them and then point out all the clues, including the better hidden ones. This is the learning part - they get positive reinforcement for the clues they spotted, and are taught additional clues that they missed.
In the second round, you send a phishing mail that is roughly targeted (say, to a department or function) and has fewer obvious and more of the difficult to spot clues. At least half of them should include those that were taught in the previous mail.
Again when a user correctly spots the phishing attempt, you congratulate and point out all the clues, including the new ones you introduced. This reinforces, teaches new clues and raises awareness that some clues can be more difficult to spot than the user thought before.
In the third round, you send your personally targeted mails, with no obvious clues, but at least half the hidden clues must be in the set the user was taught before.
Again, if a user correctly identifies it, you congratulate them and highlight all the clues, so they can again learn even more.
In all cases, if a user misidentifies the phishing mail, you also point out all the clues, and then repeat that step until they get it. Don't progress to more difficult lessons while the learner is still struggling with the current one.
This is much more work on your part, but will provide a much stronger reinforcement and higher involvement on the employees side, and in the end you are doing it for them.
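The escalation rule above, that at least half the hidden clues in a later round must come from the set the user was already taught, can be checked mechanically; a sketch, with hypothetical clue names:

```python
# Hypothetical check for the escalation rule: when composing the
# next-round mail, at least half of its clues must already have been
# taught to the user in earlier rounds.

def valid_next_round(new_clues: set, taught_clues: set) -> bool:
    """True if at least half of the new email's clues were taught before."""
    if not new_clues:
        return False
    familiar = len(new_clues & taught_clues)
    return familiar * 2 >= len(new_clues)

taught = {"mismatched-domain", "generic-greeting", "urgent-tone"}
round3 = {"mismatched-domain", "urgent-tone",
          "spoofed-display-name", "lookalike-url"}
print(valid_next_round(round3, taught))  # 2 of 4 clues familiar -> True
```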
The question of "going too far" requires context; what part is going too far?
The thing that phishing tests are trying to do is to make people suspicious of their email, because when they aren't then they are at risk of literally inviting unauthorized users onto the network.
So there shouldn't be an overwhelming number of test emails, to the point that users are sifting through known bad emails to get to the ones they need to do their job. But there should be enough of them that it is commonly known someone in the organization is playing the attacker and trying to get them to click the wrong link, because there are already people outside the organization trying to do exactly that.
The question then becomes when someone does go for the ploy, are you glad that you caught them instead of a malicious actor? As other people have mentioned here (and @BoredToolBox should not have been downvoted in my opinion) this is about education.
If you put that into the wording of the question, then surely it's not meant as "how much education is going too far?", right?
What is probably going too far in most organizations is the reaction to people who are clicking through, especially if there is a punitive aspect to it. You should be glad when you are the one that caught the action, because it is a chance for you to help the user understand what could possibly have happened and why you are performing this exercise. People should not be punished or shamed.
Imagine that this was an exercise on how to prevent an illness from spreading worker to worker. A deadly virus that will lay dormant until it has found an appropriate host and will then possibly kill everyone, but they don't know that it is spread by people that are randomly coming in the front door handing them packages.
We have enough common sense to know not to just accept packages from people that walk into the building, but what people don't see is that this is exactly what is happening with their emails. So this is about a change in culture and perspective, and I don't really see what part of the knowledge of this is going too far when you are talking about education.
5
The purpose of phishing simulations is not to make people suspicious, but to practice the procedures and behaviours taught in a safe simulation of an attack.
– schroeder♦
Apr 14 at 19:16
Right...but if they leave that simulation without being suspicious of emails then what was the point? They should be suspicious of anything that looks different, and the point of training is to make them so, right?
– Roostercrab
Apr 14 at 20:00
No. That's my entire point. The goal is not suspicion. I'm afraid to explain further will be to simply repeat my first comment.
– schroeder♦
Apr 14 at 20:43
I guess the question then is what you want them to think when they look through their email inbox, if not suspicion... I know that I am suspicious of emails, and having users share my suspicion is the prime objective.
– Roostercrab
Apr 15 at 2:13
Faced something similar and currently part of a team that runs something similar. Here are my two cents:
Education is a very tricky concept, as different individuals learn in different ways. But what I have seen is that if you condense the information you want to convey into 2-4 points, in as few words as possible, that always helps. We do something like this when it comes to educating people:
Whenever you get an email from someone outside the org, ask these questions:
- Do you personally know this email address?
- Do the email address and the domain name look fishy to you?
- Do you really want to click that link, or to give this person your personal info?
And lastly, we always mention:
if you are not sure, please forward the email to the address that verifies these reports (e.g. this@yourorg.com)
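The "does the domain look fishy" question can even be partially automated as a triage hint. The sketch below is purely illustrative (the trusted-domain list and the two-edit threshold are assumptions, not anyone's real policy): it flags senders whose domain is a near-miss of a trusted one, which is the lookalike-domain trick phishers commonly use.

```python
# Hypothetical triage helper mirroring the checklist above.
# TRUSTED_DOMAINS and the two-edit threshold are illustrative assumptions.

TRUSTED_DOMAINS = {"yourorg.com", "partner.example"}

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic programme."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def triage(sender: str) -> str:
    """Label a sender's domain: trusted, suspicious lookalike, or external."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    # A domain one or two edits away from a trusted one is a classic lookalike.
    if any(edit_distance(domain, d) <= 2 for d in TRUSTED_DOMAINS):
        return "suspicious-lookalike"
    return "external"

print(triage("alice@yourorg.com"))    # trusted
print(triage("billing@yuororg.com"))  # suspicious-lookalike
print(triage("news@unrelated.net"))   # external
```

Such a label is a hint for the user, not a verdict: it complements the human checklist rather than replacing it.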
Definitely. All they need to do (I guess) is ignore that email, or maybe forward it to your internal security team for review.
I guess what needs to be done here is more education, because the employees need to know how a successful phish can hurt not only the company but the employee as well.
The question is when a phishing campaign to educate employees crosses the line. You are answering "how to better educate them".
– Vipul Nair
Apr 14 at 17:49
Downvoted for the reason @VipulNair stated
– Kevin Voorn
Apr 14 at 18:38
@VipulNair Isn't "not being able to educate" education gone too far?
– BoredToolBox
Apr 15 at 4:38
And the top-voted one says the exact same thing.
– BoredToolBox
Apr 15 at 4:39
I don't know whether this applies to your case or not, but one potential problem may be if your expectations about user awareness are higher than the security norms put into use. For example:
- You may educate users to always check the https certificates, but at the same time some internal web sites may use self-signed or expired certificates, or even require submitting usernames and passwords through plain unencrypted http.
- Or you may educate users that all official internal tools reside on your company domain, but in reality you use popular third-party services like Gmail or Slack connected with OAuth.
While the first example is an actual issue with the infrastructure, the second is a safe practice paired with out-of-date recommendations. I have seen both happen in the wild, and in these cases the principles you are trying to teach cannot be applied in day-to-day practice, which may ultimately lead to confusion and failure to comply.
I'm not sure the size of your organization, but the most practical advice I can offer is that you can go too far when you overthink it.
- Make some spoofy emails, send them to users, see what users do.
We use a tool (KnowBe4)- run a few trials against the users, and use that to educate them/get them aware. We capture who passed, who failed, and use the overall process to educate and demonstrate that we educate.
Don't overthink the audience with custom targeting; don't do complicated data analysis... If you are, you are probably wasting time you could spend on the next challenge.
If you see there's spear phishing at your execs or certain folks, engage them personally and often, and maybe make an operational change so that if they are fooled, you catch it. For example, if someone's trying to get your CFO to release wire payments, then the CFO had better have an additional maker/checker process, or get a secondary non-email (voice?) confirmation that a wire should go out.
It sounds to me like there may be two issues here:
Users are frustrated that they are regularly being lambasted because they fail a test they consider impossible.
Users are annoyed that IT is wasting their time with endless tests of dubious value.
RE #1, there are three possibilities:
A: You ARE making impossible demands on your users. At least, impossible in the sense that you are demanding they demonstrate a level of sophistication far beyond what can reasonably be expected of people who are not experts on security. To spin an analogy, it might be reasonable to demand that all employees be prepared to perform basic first aid: put on a bandage, give someone an aspirin, etc. But surely you would not expect all employees to be able to perform emergency heart surgery. If you start giving them practice drills on emergency heart surgery and blast the employees who are unable to adequately describe how they would implant a stent or who can't correctly list all 182 steps in a heart transplant, clearly that would be unreasonable. Making unrealistic demands and then berating employees for failing to meet them accomplishes nothing except building resentment and killing morale.
B: Your expectations are completely reasonable, and the employees are insufficiently trained. If that's the case, the obvious answer is to provide training. If you have never provided any training, and you are now berating employees for not knowing something that they have never been taught, again, you are being unreasonable. Bear in mind that what is "obvious" to a computer security professional is not necessarily obvious to someone with no such background. I'm sure there are many things about accounting that are obvious to professional accountants but not to me, or things about auto maintenance that are obvious to professional mechanics, etc.
C: Your expectations are completely reasonable, and the employees are too lazy or irresponsible to make the effort. If that's the case, it's a management issue. Someone has to give the employees the proper incentive to work harder, which could range from an encouraging pep talk to firing those who don't measure up.
RE #2: When I was in the Air Force, of course security was a major concern. We had people who wanted to destroy our aircraft and kill us. But even in that extreme situation, the security people were well aware that more strict security is not always best. The standard was that security should be as effective as possible to deal with realistic risks while interfering as little as possible with people doing their jobs.
In this case, of course it's a bad thing if some hostile hacker gets hold of passwords and steals or vandalizes your data. That could cost you big money, maybe even drive you out of business. But unless the threat is huge, you can't expect the employees to spend 90% of their time warding off threats and only 10% doing work that brings in income for the company. That's a recipe for going broke, too. You have to have a reasonable balance between protecting against threats and making it impossible for anyone to do their job.
I suspect that your simulation is using knowledge about your intended targets that no genuine phisher would ever know. That is why they complain about your fakes being too hard to distinguish from the real thing. In a word, you are cheating.
Not necessarily; there can be malicious actors within an organisation.
– meowcat
Apr 16 at 1:39
Please review Shannon's Maxim: The enemy knows the system.
– forest
Apr 16 at 2:31
What things might a "genuine phisher" not know?
– schroeder♦
Apr 16 at 7:39
Adding to @meowcat: you'd also be surprised how much information you can find online about someone (it varies per person).
– Alex Probert
Apr 16 at 9:23
If a malicious actor inside the system can send me an email that MS Exchange assures me comes from my employer, but has spoofed the sender, so it appears to come from my manager, but doesn't, then no amount of training is going to let me reliably distinguish good from bad. I can devote effort to examining emails from outside the organization to see if they are trustworthy. If I have to expend the same effort on every single internal email then the battle is already lost.
– BoarGules
Apr 16 at 10:23
12 Answers
I think there is an underlying problem that you will need to address. Why do the users care that they are failing?
Phishing simulations should, first and foremost, be an education tool not a testing tool.
If there are negative consequences to failing, then yes, your users are going to complain if the tests are more difficult than you have prepared them for. You would complain, too.
So, your response should be:
- educate them more (or differently) so that they can pass the tests (or rather, the comprehension tests, which is what they should be)
- remove negative consequences to failing
This might not require any content changes to your education material, but might only require a re-framing of the phishing simulations for users, management, and your security team.
Another tactic to try is to graduate the phishing simulations so that they get harder as the users are successful in responding to phishing. I have done this with my custom programmes. It's more complex on the back end, but the payoffs are huge if you can do it.
Your focus needs to be the evolving maturity of your organisation's ability to resist phishing attacks, not getting everyone to be perfect on tests. Once you take this perspective, the culture around these tests and the complaints will change.
Do it right, and your users will ask for the phishing simulations to be harder not easier. If you aim for that end result, you will have a much more resilient organisation.
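The graduated approach described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: tier names, the three-pass threshold, and the drop-back-on-failure rule are all assumptions.

```python
# Hypothetical sketch of graduated phishing simulations: each user starts at
# the easiest tier and advances only after consecutive passes. A failure is
# treated as a teaching moment, not a punishment: the user stays put (or
# drops back a tier) and the streak resets.

TIERS = ["obvious", "moderate", "targeted"]  # easiest to hardest (assumed)
PASSES_TO_ADVANCE = 3                        # assumed threshold

class PhishingProgramme:
    def __init__(self):
        self.tier = {}    # user -> index into TIERS
        self.streak = {}  # user -> consecutive passes at current tier

    def next_template(self, user: str) -> str:
        """Which difficulty tier to send this user next."""
        return TIERS[self.tier.get(user, 0)]

    def record_result(self, user: str, passed: bool) -> None:
        level = self.tier.get(user, 0)
        if passed:
            streak = self.streak.get(user, 0) + 1
            if streak >= PASSES_TO_ADVANCE and level < len(TIERS) - 1:
                self.tier[user] = level + 1  # graduate to a harder tier
                self.streak[user] = 0
            else:
                self.streak[user] = streak
        else:
            self.tier[user] = max(0, level - 1)  # ease off, don't punish
            self.streak[user] = 0

p = PhishingProgramme()
for _ in range(3):
    p.record_result("dan", True)
print(p.next_template("dan"))  # moderate
```

The back-end bookkeeping is the complex part in practice (per-user history, template pools per tier), but the maturity model itself is this simple.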
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 16 at 21:02
edited Apr 14 at 20:46
answered Apr 14 at 18:33
schroeder♦
We have been getting push back from end users that they have no way of distinguishing a legitimate email that they would receive day to day from truly malicious phishing emails.
This is an indication that tests that could be rooted out as fakes by trained security professionals are being used to evaluate people who aren't. You may have the skills to pick an email apart and interpret the headers, but Dan in Accounting probably doesn't and his management's not likely to agree that a master class in RFC 822 is a good use of his time.
Crafting targeted emails to increase the hit rate has to be done based on intelligence collected about your users and your purported sender. This is not information to which a phisher will be privy and, as Michael Hampton pointed out in his comment, rises to spearphishing. That's a different ball game played on a different field.
If there are adversaries (real or potential) capable of good-enough spearphishing to damage your business, all of the phishing countermeasures and training won't help. Your job is to deploy tools that will give Dan in Accounting a way to distinguish the real ones from the fakes. That might mean security on the sending end like a cryptographic signature that users' mail clients can check and post a prominent warning when something is unsigned or the signature doesn't match. You can't depend on humans to get this stuff right 100% of the time, especially as your organization gets larger and people don't know each other so well.
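As a toy illustration of the "signature the mail client can check" idea: in practice this is what DKIM or S/MIME provide, but the mechanics can be shown with a shared-key MAC. Everything here is an assumption for illustration (the shared key, the headers chosen, the out-of-band key provisioning):

```python
# Illustrative only: real deployments would use DKIM or S/MIME rather than
# a shared HMAC key, which any insider could abuse. The point is the shape:
# the sending side attaches a tag, the client recomputes and compares, and
# anything unsigned or mismatched gets a prominent warning.
import hashlib
import hmac

SHARED_KEY = b"org-wide-secret"  # assumed to be provisioned out of band

def sign_headers(sender: str, subject: str) -> str:
    """Tag the headers the client will verify."""
    msg = f"{sender}\n{subject}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify(sender: str, subject: str, tag: str) -> bool:
    """Constant-time comparison against the recomputed tag."""
    return hmac.compare_digest(sign_headers(sender, subject), tag)

tag = sign_headers("ceo@yourorg.com", "Quarterly results")
assert verify("ceo@yourorg.com", "Quarterly results", tag)
# A spoofed sender fails verification, so the client can warn Dan for us:
assert not verify("ceo@yuororg.com", "Quarterly results", tag)
```

The payoff is exactly the one argued above: the machine does the 100%-reliable comparison, and the human only has to notice the warning banner.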
This seems to suggest that the fix is to have an automated process that can check for these kinds of signs and warn the user. Gmail does this, putting up a red banner warning if the mail looks suspicious, e.g. fake headers.
– user25221
Apr 16 at 11:16
RFC 822 was superseded long ago by RFC 5322.
– Patrick Mevzek
Apr 16 at 14:48
@PatrickMevzek RFC 2822 existed between the two, but some of us old geezers are going to cling to the old numbers 'til you pry them from our cold, dead hands.
– Blrfl
Apr 16 at 15:21
Which is against the IETF way of doing things. If an RFC supersedes another, there is no reason to cling to the former version, except for historical reasons or to show knowledge. Newer versions include bugfixes and disambiguations. But this is mostly unrelated to the question.
– Patrick Mevzek
Apr 16 at 15:23
@PatrickMevzek Clingage is in name only; I certainly wouldn't implement something based on an obsoleted RFC.
– Blrfl
Apr 16 at 15:49
answered Apr 14 at 20:13
Blrfl
There's one possible point to make that I haven't seen in other answers, but have seen in the real world.
Users say they "have no way of distinguishing a legitimate email that they would receive day to day from truly malicious phishing emails". What this may tell you is that legitimate emails about password renewals, service changes and such do not obey the rules that users are expected to follow.
I have certainly seen organisations whose training materials tell users not to click links in emails, and definitely not to put their passwords into the sites those links point to, or to install software from them. And the service teams at those organisations then send out mass emails about service updates that require action (such as password updates, software installs, etc), with helpful links to click.
One thing that might help would be to clarify that users should report these legitimate emails. It might not help the users directly, but it may help to remind the service team that their emails have rules to follow, which should make things clearer for users in the long run.
This. I've worked in an organisation that sends similar phishing test emails, but then regularly sends "legitimate" emails that are indistinguishable from spam/phishing, often containing links to external sites (sometimes requiring logins) which my company has previously had no connection with. The problem may very well be that the legitimate mails are too spammy, rather than your tests going too far.
– Mohirl
Apr 15 at 12:27
This is absolutely a problem. If the organization is sending out legitimate emails that the users are expected to click links in, and you are not explicitly identifying those emails as legitimate and teaching the users how to identify them, your legitimate emails are actively working against and undoing the training you are trying to provide.
– Colin Young
Apr 15 at 13:41
I used to make a point of reporting emails from IT security to IT security as apparent phishing attempts. They never liked it.
– Michael Kay
Apr 15 at 16:37
@MichaelKay: It really irks me that so many organizations send out real messages that are indistinguishable from phishing attempts. If Acme's VISA card moves from BankCorp to MegaBank, it should not inform customers by leaving phone messages asking them to visit AcmeVISAupdate.com [a domain the customers have never used before]; I had a real one do precisely that (names changed to protect the guilty). It should instead tell them how to get the information using the phone number or web site printed on their card.
– supercat
Apr 15 at 21:25
I've been known to receive purchase orders from a previously unknown sender saying simply "please find our purchase order attached". And of course the spam filter might well zap them before I have to make a decision.
– Michael Kay
Apr 15 at 22:36
There's one possible point to make that I haven't seen in other answers, but have seen in the real world.
Users say they "have no way of distinguishing a legitimate email that they would receive day to day from truly malicious phishing emails". What this may tell you, is that legitimate emails about password renewals, service changes and such, do not obey the rules that users are expected to follow.
I have certainly seen organisations whose training materials tell users not to click links in emails, and definitely not to put their passwords into the sites those links point to, or to install software from them. And the service teams at those organisations then send out mass emails about service updates that require action (such as password updates, software installs, etc), with helpful links to click.
One thing that might help would be to clarify that users should report these legitimate emails. It might not help the users directly, but it may help to remind the service team that their emails have rules to follow, which should make things clearer for users in the long run.
15
This. I've worked in an organisation that sends similar phishing test emails, but then regularly sends "legitimate" emails that are indistinguishable from spam/phishing, often containing links to external sites (sometimes requiring logins) which my company has previously had no connection with. The problem may very well be that the legitimate mails are too spammy, rather than your tests going to o far.
– Mohirl
Apr 15 at 12:27
6
This is absolutely a problem. If the organization is sending out legitimate emails that the users are expected to click links in, and you are not explicitly identifying those emails as legitimate and teaching the users how to identify them, your legitimate emails are actively working against and undoing the training you are trying to provide.
– Colin Young
Apr 15 at 13:41
I used to make a point of reporting emails from IT security to IT security as apparent phishing attempts. They never liked it.
– Michael Kay
Apr 15 at 16:37
@MichaelKay: It really irks me that so many organizations send out real messages that are indistinguishable from phishing attempts. If Acme's VISA card moves from BankCorp to MegaBank, it should not inform customers about it by leaving phone messages asking them to visit AcmeViSAupdate.com [a domain the customers have never used before], but should instead tell them how to get the information using the phone number or web site printed on their card. I had a real issuer do precisely that (names changed to protect the guilty).
– supercat
Apr 15 at 21:25
I've been known to receive purchase orders from a previously unknown sender saying simply "please find our purchase order attached". And of course the spam filter might well zap them before I have to make a decision.
– Michael Kay
Apr 15 at 22:36
answered Apr 15 at 11:02
James_pic
- When is phishing education going too far?
When the cost exceeds the benefit. Benefit is generally measured in lower click-through rates and increased rates of reporting of genuine phishing emails. Cost can be measured in:
- the effort to implement the test
- false-positive reports of legitimate (non-phishing) emails
- lower engagement rates on legitimate emails
- ill will towards the Security group.
The last is the hardest to measure, and often ignored, but if your job is to trick your own people, you shouldn't be surprised if they start viewing you with suspicion.
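The measurable side of this cost/benefit framing can be sketched in a few lines. This is a hypothetical helper (the field names `clicked` and `reported` are illustrative, not any platform's export format) that turns per-user campaign results into the two headline numbers the answer mentions:

```python
def campaign_metrics(results):
    """Summarise one simulated-phishing campaign.

    `results` is a list of per-user dicts with boolean fields:
      'clicked'  - the user followed the lure link
      'reported' - the user flagged the mail to Security
    Both field names are illustrative assumptions.
    """
    n = len(results)
    clicked = sum(r["clicked"] for r in results)
    reported = sum(r["reported"] for r in results)
    return {
        "click_through_rate": clicked / n,
        "report_rate": reported / n,
    }
```

Tracking both rates over successive campaigns is what lets you tell "awareness is improving" (click-through falls, reporting rises) apart from "the tests just got easier".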
- Is pushback from the end users demonstrative that their awareness is still lacking and need further training, specifically the inability to recognize legitimate from malicious emails?
Um, maybe?
If their click-through rates remain high, then awareness is still lacking and they need further training.
If click-through rates in general have dropped, but the test emails consistently fool them, then their concerns about the testing may be legitimate.
It sounds like your content is pretty closely tailored to your users and even their job roles. This may be what is generating the negative reaction. Ideally, a phishing test should not rely upon knowledge or understanding of internal email practices, just as an outside attacker would not have access to those. (And note, your internal messaging should not look like your external messaging, for the same reason.)
You may want to consider outsourcing your phishing tests. The organizations that are dedicated to offering this service have a better feel for what "in the wild" looks like, and their tools for measuring and reporting on engagement rates are usually better than you can do on your own.
Personally, I'm not fond of phish testing, because I believe it erodes trust between users and Security. But the fact of the matter is it's one of the best ways to improve your users' defences.
Forgive me if I am wrong, but if a few people click, wouldn't that be a failure?
– Vipul Nair
Apr 14 at 17:44
@VipulNair eradication is not a realistic goal for phish training. I believe I've seen 10-20% click-through described as ideal improvement. I have seen organizations celebrate pushing down below 50%.
– gowenfawr
Apr 14 at 17:55
@gowenfawr most recent research shows that getting below 10% is not realistic. Even CISOs click phishing emails (one CISO I know gets 600 emails a day and sometimes he clicks on a well-crafted phish).
– schroeder♦
Apr 14 at 19:11
Where are you guys getting these stats on targets for click through? I'm not in our IS group but I'm on their steering team, we're routinely around 5 - 6% click through for a fairly non-technical workforce of around 500 employees, and what I would consider very realistic test emails. I'm surprised that your comments seem to imply we're way ahead of average (or my interpretation of how difficult our simulated emails are is totally wrong).
– dwizum
Apr 15 at 13:23
@dwizum Lance Spitzner, who's an SME in this area, claims <5% is "good". However, my comments about 10-20% and starting >50% stem from personal experience with a handful of organizations. My gut says that Lance has a self-selecting population ("people who care enough about this to hire him") and that 10-20% is a realistic churn point for good organizations. You may very well be doing better than average :)
– gowenfawr
Apr 15 at 13:39
edited Apr 14 at 18:21
schroeder♦
answered Apr 14 at 17:26
gowenfawr
There's one way in which this may have gone too far:
We have adopted a highly targeted strategy based not only on the user's job role but also on the content such employees are likely to see.
You need to ask yourself whether employees at your company will actually be subject to this level of spearphishing. If the answer is no, then you've gone too far. Of course, this all depends on what the group does. If it's the DNC, then the answer is yes.
answered Apr 15 at 0:54
Cliff AB
You've seemingly committed a very common mistake among us security professionals: You have gone too much into the mindset of the attacker and you are trying too hard to defeat your fellow employees, instead of making them your allies.
Your phishing campaign should be based on your threat model and risk analysis. Are your employees likely to be a target of carefully crafted spearphishing attacks, or is the higher risk the more common untargeted, mass-phishing campaign of moderate attacker skill?
In the latter case, don't do things to your employees that are exceptionally unlikely according to your risk analysis. You simply can't explain to management why you're doing it, and it will seem that you are trying to get a high out of appearing smarter than and "beating" regular employees (which of course you can in your field of expertise, just as they could beat you hands down in budgeting, handling customer complaints or supply management).
If you do have targeted, high-skill spearphishing campaigns in your threat model, then you need to gradually escalate and plan a campaign in multiple steps, because your goal is to teach, not to defeat and embarrass. So you do what every teacher does: you start with a simple base exercise and then follow with more difficult ones.
Example
For example, in a three-step process, you would start with a mail that is fairly easy to spot as a fake, but also contains elements that are more difficult to see. When a user correctly identifies it as a phishing mail, you congratulate them and then point out all the clues, including the better hidden ones. This is the learning part - they get positive reinforcement for the clues they spotted, and are taught additional clues that they missed.
In the second round, you send a phishing mail that is roughly targeted (say, to a department or function) and has fewer obvious clues and more of the difficult-to-spot ones. At least half of the clues should be ones that were taught in the previous mail.
Again when a user correctly spots the phishing attempt, you congratulate and point out all the clues, including the new ones you introduced. This reinforces, teaches new clues and raises awareness that some clues can be more difficult to spot than the user thought before.
In the third round, you send your personally targeted mails, with no obvious clues, but at least half the hidden clues must be in the set the user was taught before.
Again, if a user correctly identifies it, you congratulate them and highlight all the clues, so they can again learn even more.
In all cases, if a user misidentifies the phishing mail, you also point out all the clues, and then repeat that step until they get it. Don't progress to more difficult lessons while the learner is still struggling with the current one.
This is much more work on your part, but will provide a much stronger reinforcement and higher involvement on the employees side, and in the end you are doing it for them.
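The repeat-until-passed progression described above amounts to a small per-user state machine. Here is a minimal sketch of that idea (the class and method names are hypothetical, not any training platform's API): a user advances to the next, harder round only after correctly spotting the current round's phish, and repeats the round on a miss.

```python
class PhishTrainingPlan:
    """Track each user's progress through escalating test rounds.

    Round 0 is the easy, obvious phish; later rounds get harder.
    A user advances only after correctly spotting the current
    round's mail; a miss keeps them on the same round, as the
    answer recommends. Names here are illustrative assumptions.
    """

    def __init__(self, num_rounds=3):
        self.num_rounds = num_rounds
        self.progress = {}  # user -> index of the round they face next

    def current_round(self, user):
        return self.progress.get(user, 0)

    def record_result(self, user, spotted_phish):
        rnd = self.current_round(user)
        if spotted_phish and rnd < self.num_rounds:
            self.progress[user] = rnd + 1  # advance to a harder round
        # on a miss, the user simply repeats the same round
        return self.current_round(user)
```

The point of the structure is that difficulty is driven by each individual's demonstrated skill, not by a fixed campaign calendar, which is what keeps the exercise a lesson rather than an ambush.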
add a comment |
You've seemingly made a very common mistake among us security professionals: you have gone too deep into the attacker's mindset and are trying too hard to defeat your fellow employees, instead of making them your allies.
Your phishing campaign should be based on your threat model and risk analysis. Are your employees likely to be the target of carefully crafted spearphishing attacks, or is the higher risk the more common untargeted, mass-phishing campaign of moderate attacker skill?
In the latter case, don't do things to your employees that are exceptionally unlikely according to your risk analysis. You won't be able to explain to management why you're doing it, and it will look like you're trying to get a high out of appearing smarter and "beating" regular employees (which of course you can in your field of expertise, just as they could beat you hands down in budgeting, handling customer complaints, or supply management).
If you do have targeted, high-skill spearphishing campaigns in your threat model, then you need to escalate gradually and plan a campaign in multiple steps, because your goal is to teach, not to defeat and embarrass. So you do what every teacher does: you start with a simple base exercise and then follow with more difficult ones.
Example
For example, in a three-step process, you would start with a mail that is fairly easy to spot as a fake, but that also contains elements that are more difficult to see. When a user correctly identifies it as a phishing mail, you congratulate them and then point out all the clues, including the better-hidden ones. This is the learning part: they get positive reinforcement for the clues they spotted, and are taught additional clues that they missed.
In the second round, you send a phishing mail that is roughly targeted (say, to a department or function) and that has fewer obvious clues and more difficult-to-spot ones. At least half of its clues should be ones that were taught in the previous mail.
Again, when a user correctly spots the phishing attempt, you congratulate them and point out all the clues, including the new ones you introduced. This reinforces what they know, teaches new clues, and raises awareness that some clues can be harder to spot than the user previously thought.
In the third round, you send your personally targeted mails with no obvious clues, but at least half of the hidden clues must be in the set the user was taught before.
Again, if a user correctly identifies the mail, you congratulate them and highlight all the clues, so they can learn even more.
In all cases, if a user misidentifies the phishing mail, you also point out all the clues, and then repeat that step until they get it. Don't progress to more difficult lessons while the learner is still struggling with the current one.
This is much more work on your part, but it will provide much stronger reinforcement and higher involvement on the employees' side, and in the end you are doing it for them.
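The progression rules above (escalate only on success; let each new mail lean on clues already taught) are simple enough to sketch, if it helps make them concrete. A rough, purely illustrative Python sketch — `next_round` and `reinforces` are made-up names, not any phishing platform's API:

```python
def next_round(current, total, user_spotted_it):
    """Advance to the next simulation round only when the user passed
    the current one; otherwise repeat it (after debriefing the clues)."""
    if not user_spotted_it:
        return current                 # don't escalate while they struggle
    return min(current + 1, total - 1)

def reinforces(taught_clues, new_mail_clues):
    """A new mail should draw at least half of its clues from the set
    the user was already taught, so each round builds on the last."""
    if not new_mail_clues:
        return False
    overlap = len(set(new_mail_clues) & set(taught_clues))
    return 2 * overlap >= len(new_mail_clues)
```

So a third-round mail planting, say, a spoofed display name and a lookalike domain would only qualify if at least one of those two clues had already appeared in an earlier debrief.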
answered Apr 16 at 9:22
Tom
The question of "going too far" requires context: which part is going too far?
What phishing tests are trying to do is make people suspicious of their email, because when they aren't, they are at risk of literally inviting unauthorized users onto the network.
So there shouldn't be such an overwhelming number of test emails that people are sifting through known-bad mail to get to the messages they need to do their job, but there should be enough that it is commonly known that someone in the organization is playing the attacker and trying to get them to click the wrong link, because there are already people outside the organization trying to get them to do exactly that.
The question then becomes: when someone does fall for the ploy, aren't you glad that you caught them instead of a malicious actor? As other people have mentioned here (and @BoredToolBox should not have been downvoted, in my opinion), this is about education.
If you put that into the wording of the question, I'm sure it's not meant as "How much education is going too far?", right?
What probably is going too far in most organizations is the reaction to people who click through, especially if there is a punitive aspect to it. You should be glad when you are the one who caught the action, because it is a chance for you to help the user understand what could have happened and why you are performing this exercise. People should not be punished or shamed.
Imagine this were an exercise on how to prevent an illness from spreading from worker to worker: a deadly virus that lies dormant until it has found an appropriate host and may then kill everyone, but people don't know that it is spread by strangers randomly coming in the front door and handing them packages.
We have enough common sense not to just accept packages from people who walk into the building, but what people don't see is that this is exactly what is happening with their email. So this is about a change in culture and perspective, and I don't really see what part of teaching this counts as going too far when you are talking about education.
The purpose of phishing simulations is not to make people suspicious, but to practice the procedures and behaviours taught in a safe simulation of an attack.
– schroeder♦
Apr 14 at 19:16
Right... but if they leave that simulation without being suspicious of emails, then what was the point? They should be suspicious of anything that looks different, and the point of training is to make them so, right?
– Roostercrab
Apr 14 at 20:00
No. That's my entire point. The goal is not suspicion. I'm afraid that to explain further would simply be to repeat my first comment.
– schroeder♦
Apr 14 at 20:43
I guess the question then is what you want them to think when they look through their inbox, if not suspicion... I know that I am suspicious of emails, and having users share my suspicion is the prime objective.
– Roostercrab
Apr 15 at 2:13
edited Apr 14 at 19:13
schroeder♦
answered Apr 14 at 19:00
Roostercrab
I've faced something similar and am currently part of a team that runs something similar. Here are my two cents:
Education is a tricky concept, as different individuals learn in different ways. But what I have seen is that condensing the information you want to convey into 2-4 points, in as few words as possible, always helps. We do something like this when it comes to educating people:
Whenever you get an email from someone outside the org, ask these questions:
- Do you personally know this email address?
- Do the email address and the domain name look fishy to you?
- Do you really want to click that link or give this person your personal info?
And lastly we always mention:
if you are not sure, please forward this email to the email id that verifies this: this@yourorg.com
Definitely, since all they need to do (I guess) is ignore that email, or forward it to your internal security team for review.
I guess what needs to be done here is more education, because employees need to know how a successful phish can hurt not only the company but the employee as well.
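Those checklist questions can even be wired into a crude triage helper, if only to show how little logic is involved (entirely illustrative — `triage_email` and its rules are made up here, and real mail filtering is far more involved):

```python
def triage_email(sender, trusted_domains, has_link_or_attachment):
    """First-pass triage mirroring the checklist: an unknown sender
    plus a link or attachment means 'forward it to security'."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in trusted_domains:
        return "ok"
    if has_link_or_attachment:
        return "forward to security for review"
    return "reply only if you expected this mail"
```

A mail from, say, `billing@paypa1-support.example` carrying a link would land in the "forward to security" bucket, which is exactly the behaviour the last bullet asks for.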
The question is when a phishing campaign to educate employees crosses the line. You are answering "how to better educate them".
– Vipul Nair
Apr 14 at 17:49
Downvoted for the reason @VipulNair stated.
– Kevin Voorn
Apr 14 at 18:38
@VipulNair Isn't "not being able to educate" education gone too far?
– BoredToolBox
Apr 15 at 4:38
And the top-voted one says the exact same thing.
– BoredToolBox
Apr 15 at 4:39
edited Apr 14 at 18:25
schroeder♦
answered Apr 14 at 17:24
BoredToolBox
I don't know whether this applies to your case or not, but one potential problem is when your expectations about user awareness are higher than the security norms actually put into use. For example:
- You may educate users to always check HTTPS certificates, while at the same time some internal web sites use self-signed or expired certificates, or even require submitting usernames and passwords over plain unencrypted HTTP.
- Or you may educate users that all official internal tools reside on your company domain, while in reality you use popular third-party services like Gmail or Slack connected with OAuth.
While the first example is an actual issue with the infrastructure, the second one is a safe practice paired with out-of-date recommendations. I have seen both happen in the wild, and in these cases the principles you are trying to teach cannot be applied in day-to-day practice, which may ultimately lead to confusion and failure to comply.
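On the first bullet, the mismatch is at least easy to audit before (or instead of) teaching it. A minimal sketch using Python's standard `ssl` module; the date string is in the format `SSLSocket.getpeercert()` returns in its `notAfter` field, and the commented-out fetch is only indicative:

```python
import ssl

def days_until_expiry(not_after, now):
    """Days until a certificate's notAfter date, where `not_after`
    uses the string format returned by SSLSocket.getpeercert(),
    e.g. 'Jan 5 09:34:43 2030 GMT', and `now` is a Unix timestamp."""
    expires = ssl.cert_time_to_seconds(not_after)
    return (expires - now) / 86400.0

# Fetching the certificate itself (needs network access) could look like:
#
# import socket
# ctx = ssl.create_default_context()
# with ctx.wrap_socket(socket.create_connection((host, 443)),
#                      server_hostname=host) as s:
#     not_after = s.getpeercert()["notAfter"]
```

Running something like this against the internal sites you tell users to scrutinize would quickly surface the expired or self-signed certificates that undermine the training.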
answered Apr 15 at 11:58 by Zoltan
I'm not sure of the size of your organization, but the most practical advice I can offer is that you can go too far when you overthink it.
- Make some spoofed emails, send them to users, and see what they do.
We use a tool (KnowBe4): run a few trials against the users and use the results to educate them and raise awareness. We capture who passed and who failed, and use the overall process both to educate and to demonstrate that we educate.
Don't overthink the audience with custom targeting, and don't do complicated data analysis. If you are, you are probably wasting time you could spend on the next challenge.
If you see spear phishing aimed at your execs or certain individuals, engage them personally and often, and consider an operational change to make sure that if they are fooled, you catch it. For example, if someone is trying to get your CFO to release wire payments, the CFO should have an additional maker/checker process, or a secondary non-email (voice?) confirmation that a wire should go out.
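The maker/checker idea can be sketched in a few lines. This is a hypothetical illustration, not a real payment system; the class names, addresses, and the one-extra-approver threshold are all assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class WireRequest:
    """A wire payment that the person who created it cannot self-approve."""
    maker: str
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, checker: str) -> None:
        # The core maker/checker rule: a second, distinct person must sign off.
        if checker == self.maker:
            raise ValueError("maker cannot approve their own request")
        self.approvals.add(checker)

    def releasable(self) -> bool:
        return len(self.approvals) >= 1

req = WireRequest(maker="cfo@example.com", amount=50_000)
try:
    req.approve("cfo@example.com")  # self-approval is rejected
except ValueError:
    pass
req.approve("controller@example.com")
print(req.releasable())  # True
```

Even if a phisher fools the CFO, the payment still stalls until a second person looks at it, which is the point of the operational change.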
answered Apr 15 at 21:01 by subs
It sounds to me like there may be two issues here:
1. Users are frustrated that they are regularly being lambasted for failing a test they consider impossible.
2. Users are annoyed that IT is wasting their time with endless tests of dubious value.
RE #1, there are three possibilities:
A: You ARE making impossible demands on your users. At least, impossible in the sense that you are demanding they demonstrate a level of sophistication far beyond what can reasonably be expected of people who are not experts on security. To spin an analogy, it might be reasonable to demand that all employees be prepared to perform basic first aid: put on a bandage, give someone an aspirin, etc. But surely you would not expect all employees to be able to perform emergency heart surgery. If you start giving them practice drills on emergency heart surgery and blast the employees who are unable to adequately describe how they would implant a stent or who can't correctly list all 182 steps in a heart transplant, clearly that would be unreasonable. Making unrealistic demands and then berating employees for failing to meet them accomplishes nothing except building resentment and killing morale.
B: Your expectations are completely reasonable, and the employees are insufficiently trained. If that's the case, the obvious answer is to provide training. If you have never provided any training, and you are now berating employees for not knowing something that they have never been taught, again, you are being unreasonable. Bear in mind that what is "obvious" to a computer security professional is not necessarily obvious to someone with no such background. I'm sure there are many things about accounting that are obvious to professional accountants but not to me, or things about auto maintenance that are obvious to professional mechanics, etc.
C: Your expectations are completely reasonable, and the employees are too lazy or irresponsible to make the effort. If that's the case, it's a management issue. Someone has to give the employees the proper incentive to work harder, which could range from an encouraging pep talk to firing those who don't measure up.
RE #2: When I was in the Air Force, of course security was a major concern. We had people who wanted to destroy our aircraft and kill us. But even in that extreme situation, the security people were well aware that more strict security is not always best. The standard was that security should be as effective as possible to deal with realistic risks while interfering as little as possible with people doing their jobs.
In this case, of course it's a bad thing if some hostile hacker gets hold of passwords and steals or vandalizes your data. That could cost you big money, maybe even drive you out of business. But unless the threat is huge, you can't expect employees to spend 90% of their time warding off threats and only 10% doing work that brings in income for the company. That's a recipe for going broke, too. You have to strike a reasonable balance between protecting against threats and letting people do their jobs.
answered Apr 16 at 13:59 by Jay
I suspect that your simulation is using knowledge about your intended targets that no genuine phisher would ever know. That is why they complain about your fakes being too hard to distinguish from the real thing. In a word, you are cheating.
Not necessarily; there can be malicious actors within an organisation.
– meowcat
Apr 16 at 1:39
Please review Shannon's Maxim: The enemy knows the system.
– forest
Apr 16 at 2:31
What things might a "genuine phisher" not know?
– schroeder♦
Apr 16 at 7:39
Adding to @meowcat: you'd also be surprised how much information you can find online about someone (it varies per person).
– Alex Probert
Apr 16 at 9:23
If a malicious actor inside the system can send me an email that MS Exchange assures me comes from my employer, but has spoofed the sender, so it appears to come from my manager, but doesn't, then no amount of training is going to let me reliably distinguish good from bad. I can devote effort to examining emails from outside the organization to see if they are trustworthy. If I have to expend the same effort on every single internal email then the battle is already lost.
– BoarGules
Apr 16 at 10:23
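BoarGules's point about spoofed internal senders can be illustrated with a crude header comparison. This is a hypothetical heuristic only: real mail filters rely on SPF/DKIM/DMARC verdicts rather than a From/Return-Path comparison, and the addresses below are made up:

```python
from email import message_from_string
from email.utils import parseaddr

def from_matches_return_path(raw: str) -> bool:
    """Crude spoofing heuristic: does the From: domain match Return-Path:?

    Illustrates the kind of header mismatch a spoofed "internal" mail can
    exhibit; it is not a substitute for SPF/DKIM/DMARC checks.
    """
    msg = message_from_string(raw)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    rp_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2].lower()
    return bool(from_domain) and from_domain == rp_domain

raw = (
    "From: Manager <manager@example.com>\n"
    "Return-Path: <bounce@attacker.example>\n"
    "Subject: urgent wire\n\n"
    "Please pay."
)
print(from_matches_return_path(raw))  # mismatched domains -> False
```

The catch, as the comment notes, is that a capable attacker (or insider) can make these headers consistent, which is why header inspection alone cannot be the user's burden.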
answered Apr 15 at 21:16 by BoarGules
I would re-word the title from "education" to "testing" or "simulations"
– schroeder♦
Apr 14 at 19:05
This question seems to me like it lacks key details. Why are your users claiming that the phishing emails you send them are indistinguishable from legitimate ones? Is it because they truly are (at least with the tools at a normal user's disposal), or is it because they're screwing up? Receiving an email from a person you've not previously had contact with is not inherently suspicious, so it matters how you are measuring failure. Based on them actually handing over sensitive information? Or just based on them clicking a link in an email that they could not reasonably know was fake in advance?
– Mark Amery
Apr 15 at 13:00
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 15 at 18:22