
React to incidents in an organised way by using the Playbook model

Imagine that someone detects a breach in one of your systems. How would you react? Would you dig into all of your network and host logs immediately? Or would you contain the situation first by disconnecting the affected machine(s) from the network?

You shouldn’t start thinking about these questions only once an incident has already occurred. Incident response procedures should be described in a standardised way, and your team should be able to follow them without hesitation.

Simply put: you need an Incident Response Playbook.

How the Playbook works

The Playbook collects ‘plays’. Each play contains a list of actions needed to accomplish an incident response task. Plays are extremely useful: they are more than a pile of complex queries or detection code for whatever ‘bad stuff’ hit you. In your plays, you will find fully documented, prescriptive procedures that allow you to find – and act upon – undesired activity in a structured way.

Every play contains a set of sections:

  • Report ID and title: a structured identifier indicating the data source, the type of report – such as ‘investigative’ or ‘containment’ – and a descriptive title.
  • Objective statement: here you describe the ‘what’ and ‘why’ of a play. This should provide background and reasoning on why the play exists. Don’t give too many specifics; this should stay high-level.
  • Scope and applicability: describe who should run the play, and when or how often.
  • Methodology and procedures: this is the ‘meat’ of the play; here you describe the procedures in detail.
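
To make the structure above concrete, here is a minimal sketch of a play as a structured record. The field names mirror the sections just described; the schema itself (and the ‘PROXY-INV-001’ naming convention) is a hypothetical illustration, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class Play:
    """One play from an incident response playbook (illustrative schema)."""
    report_id: str   # e.g. "PROXY-INV-001": data source + report type + number
    title: str
    objective: str   # high-level 'what' and 'why' of the play
    scope: str       # who runs it, and when or how often
    procedures: list[str] = field(default_factory=list)  # step-by-step actions

# A made-up example play for investigating suspicious proxy traffic:
play = Play(
    report_id="PROXY-INV-001",
    title="Investigate outbound connections to newly registered domains",
    objective="Catch malware callbacks hiding behind freshly registered domains.",
    scope="Run by the on-duty analyst at the start of every shift.",
    procedures=[
        "Query the proxy logs for domains registered in the last 30 days.",
        "Check each hit against the current IOC list.",
        "Escalate confirmed matches to containment.",
    ],
)
print(play.report_id, "-", play.title)
```

Keeping plays in a structured form like this makes it easy to index them by data source or report type, and to review whether every play still has an owner and an up-to-date procedure list.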

Every play counters a different threat: a playbook can contain plays for malware traffic, phishing, ransomware and many more situations.

The Playbook follows your way

The biggest benefit of the Playbook is its flexibility. It is not a rigid framework: the open-ended nature of the play objectives allows your security experts to explore different ways of achieving them.

Need a hand setting up a Playbook? Feel free to contact me for assistance.


Why I’m happy to help the CCB

As you may know, the CCB (Center for Cybersecurity Belgium) is working on a vulnerability disclosure policy. It is meant to be an enabler for ethical hacking in Belgium. Organisations embracing and publishing such a policy can allow (external) ethical hackers to verify and test their security posture and to disclose any issues found, in a coordinated and responsible way.

Note that hacking, possession of hacking tools and similar acts are illegal in Belgium. The new rule will be that permission can be granted by the company that is the target of the hacking. This can be based on a contract (for professional engagements) or on a vulnerability disclosure policy (like the one the CCB will propose).

The CCB wants to make sure that the advice responds to the needs of security researchers and professionals, so they invited a number of people who are involved daily with ethical hacking and who know about responsible disclosure – such as yours truly – to participate.

It’s too soon to talk about the outcome of the discussions with the CCB, but the policy itself is definitely something we are all looking forward to. We are very hopeful that the inclusion of professionals in the conversation will improve the chances that the policy will add value and clarity to the current murky legal situation of ethical hackers. And that is what everyone active in cybersecurity and ethical hacking has been longing for: clear legal limits within which one can act without risking prosecution.

A disclosure policy would state, among other things, how ethical hackers can disclose vulnerabilities to the company in a correct way and what the do’s and don’ts are when probing their targets: basically, what the acceptable boundaries are for hacking that specific organisation. If the hackers play by those rules, they can’t be prosecuted for their security research at that organisation.

What would you like to add to such a policy? Any advice? Looking forward to reading it in the comments.


The youth is out there…

Have you read the research from Kaspersky Lab on how a lack of guidance tempts youth towards cyber-crime instead of steering them away from it? At Toreon, we didn’t need an extensive and expensive study to realise that youth is the future and that an interest in IT and cybersecurity can’t be sparked young enough. That is why, at the end of Cyber Security Awareness Month and in collaboration with BruCON, we met up with kids and students to teach them about IT, hacking and cybersecurity.

Hak4Kidz
During the second Hak4Kidz Belgium event, BruCON invited children and youngsters between 7 and 15 years old. Six Toreon volunteers assisted in teaching them how much fun IT and science are. The event was fully booked in no time.

A few of the things that the children learned:

  • Issues are fun puzzles waiting to be solved
  • Failure means you get to try again
  • By sharing knowledge, you can focus on solving new problems instead of re-solving old issues over and over again.


Student CTF
During the Student CTF, we took it to the next level. For most CTFs, the gap between the skillsets needed and those taught in school is too large, making it impossible for students to participate. That’s why we created 39 challenges for around a hundred students, from both specialised and less specialised fields of study, from the University of Ghent and HOWEST. We didn’t expect them to simply solve the challenges on their own: we started with introductions to SQL injection, traffic analysis and Android reverse engineering, and gave lots of tips and tricks.


We learned a lot too!
The children and students were not the only ones who learned a lot during these days. We were able to reaffirm how important it is to reach and guide youth in time, but most of all: what an incredible amount of talent is getting ready to enter the real world. The winning team of the Student CTF was even able to solve 36 of the 39 challenges!

What do you think? Did we teach the right things? Would you handle it differently? Or are you interested in a next edition of one of these events? You can let us know in the comments!


4 pitfalls to avoid when building a CSOC

Setting up a new Cyber Security Operations Center (CSOC) within your organisation is a big step towards more efficient incident monitoring and response, provided you can avoid the following mistakes:

1. Putting technology before people and processes

We’ve all been there: a new technology is released that promises you and your CSOC team the world: better detection rates, fewer false positives, more visibility, better intelligence, and so on. You’ve seen the demos, done the Proofs of Concept, and you feel convinced and ready to buy…

But first also consider the operational cost of running and maintaining the new solution.

Don’t get me wrong: technologies like SIEM, Breach Detection, Advanced Endpoint Protection and Live Forensics can help your organisation quickly and efficiently detect, block, analyse and remediate attacks, but they also require:

  • Sufficient CSOC personnel with the correct skill-set and free time to use the solution, interpret the results and update the rule-sets, filters and Indicators of Compromise.
  • Sufficient CSOC and IT personnel to handle the extra events generated by new technology.
  • Sufficiently documented CSOC processes regarding incident detection, management and response.

Without these key resources, your new investment will not provide you with the expected results.

2. Doing too many things at once
Most organisations have a limited budget and limited resources assigned to CSOC activities. Most CSOCs perform some form of incident monitoring, analysis and remediation. Other tasks, like manual intelligence gathering and advanced malware analysis, can also help detect and respond to very advanced attacks, but they require a lot of resources or people with a very specific (read: expensive) skillset. It might not be realistic to incorporate these tasks into your CSOC’s daily activities without sacrificing some of the more “basic” capabilities.

However, most of these “advanced capabilities” can be outsourced or automated in one way or another, eliminating the need for dedicated CSOC personnel to execute them. For intelligence gathering, there are free and commercial threat intelligence feeds you can hook up to your SIEM. For automated malware analysis, there are free sandboxing solutions like malwr.com and Cuckoo. For manual in-depth malware analysis, it might make more sense to hire an external malware analyst when you need one.
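
As a small illustration of that automation idea, the sketch below parses a plain-text indicator feed (one indicator per line, ‘#’ for comments) into records you could forward to a SIEM. The feed contents, format and field names are assumptions for the example, not any specific vendor’s API.

```python
# Hypothetical plain-text threat feed: one C&C domain per line, '#' comments.
raw_feed = """\
# sample threat feed, one C&C domain per line
evil-updates.example
payload-host.example
"""

def parse_feed(text: str, source: str) -> list[dict]:
    """Turn a plain-text indicator feed into SIEM-ready records."""
    records = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        records.append({"indicator": line, "type": "domain", "source": source})
    return records

iocs = parse_feed(raw_feed, source="demo-feed")
print(len(iocs), "indicators parsed")  # 2 indicators parsed
```

In practice a scheduled job would fetch the feed, run a parser like this, and push the resulting records into the SIEM’s watchlist, so no analyst time is spent on routine ingestion.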

What your CSOC does internally and what will be outsourced or automated will depend on the budget, the maturity level of your organisation and the skill set of your CSOC staff.

3. Starting without corporate buy-in

Your CSOC needs executive support to do its job properly. Endless discussions can arise about what the CSOC is or isn’t authorised to do, especially when a major incident occurs. A good example is whether or not the CSOC is allowed to disconnect an infected machine from the corporate network.

You can prevent this by creating a CSOC charter: a policy stating which tasks the CSOC is authorised to perform and which resources and efforts are expected from the other departments. This document should be formally approved by top-level executives.

4. Lacking a playbook
All tasks within the security incident handling process should be formally documented beforehand. Don’t fall into the trap of starting to document only when an incident occurs!

Your team needs step-by-step guides on how to perform incident response tasks, for example how to detect C&C connections using a SIEM or how to reinstall an infected workstation. A good format for this type of documentation is the incident response playbook.
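
To give an idea of what the ‘detect C&C connections’ step of such a guide boils down to, here is a toy version: match proxy log lines against a list of known C&C domains. The log format and domain list are made up for illustration; a real play would document the equivalent query against your SIEM.

```python
# Hypothetical IOC set and proxy log lines (timestamp, client IP, method, URL).
known_c2 = {"evil-updates.example", "payload-host.example"}

proxy_log = [
    "2016-11-02T10:01:44 10.0.0.12 GET http://intranet.local/news",
    "2016-11-02T10:02:09 10.0.0.31 GET http://evil-updates.example/beacon",
]

def find_c2_hits(log_lines, c2_domains):
    """Return (timestamp, client_ip, url) for lines contacting a C&C domain."""
    hits = []
    for line in log_lines:
        timestamp, client_ip, _method, url = line.split()
        host = url.split("/")[2]  # hostname part of http://host/path
        if host in c2_domains:
            hits.append((timestamp, client_ip, url))
    return hits

for ts, ip, url in find_c2_hits(proxy_log, known_c2):
    print(f"{ts} {ip} contacted C&C at {url}")
```

The value of writing this down in a play is that the on-duty analyst runs the same check the same way every time, and the next step (for example, escalating the client IP to containment) is already spelled out.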