
Regulatory Links

1.1 Firewall & Network Architecture

Fig.1 – DC Firewall

Fig.2 – DR Firewall

1.2 Scenario 1: 3rd party services down (CBE / ISCORE / NPC / 123)

Description: All 3rd party services are connected to ADIB through our firewall system.

There are two service providers for each function connected to the DC Firewall, and another two connections to the DR Firewall, as shown in Fig.1 & Fig.2 in 1.1.

Impact: High

Probability: Low, as we have two links to DC, and another two to DR.

BCP: Each 3rd party function is connected to our DC Firewall through two service providers, so if one link fails, the second link takes over automatically.

If both links go down, we switch the function to our DR site, where another two links are already connected to the same function.

If the failure is in the Cisco router itself, we switch to the DR site.
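
To make this failover behaviour concrete, here is a minimal monitoring sketch in Python. It assumes a host that can ping one test address per provider link; the link names and IP addresses are illustrative placeholders (documentation-range IPs), not ADIB's real values, and the ping flags are Linux syntax.

import subprocess

# Placeholder probe targets, one per provider link at DC (documentation-range
# IPs -- replace with the real per-link test addresses).
DC_LINKS = {
    "provider-1": "192.0.2.1",
    "provider-2": "192.0.2.2",
}

def link_is_up(ip):
    """Return True if a single ping to `ip` succeeds (Linux `ping` flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

down = [name for name, ip in DC_LINKS.items() if not link_is_up(ip)]
if not down:
    print("Both DC links up - no action.")
elif len(down) == 1:
    # One link down: the second link carries traffic automatically.
    print(down[0] + " down - second link active; raise a ticket.")
else:
    # Both links down: per the BCP, switch the function to the DR site.
    print("Both DC links down - initiate DR switchover (RTO: 30 min).")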

RTO:

  • 30 min. to switch the function from DC to DR and complete the required routing.
  • 4 hours to replace the faulty router; however, service is restored within 30 min. of switching to DR.

1.3 Scenario 2: Firewall equipment malfunction

Description: All 3rd party services are connected to ADIB through our firewall system.

There are two service providers for each function connected to the DC Firewall, and another two connections to the DR Firewall, as shown in Fig.1 & Fig.2 in 1.1.

Impact: Medium, as several layers of backup are in place.

Probability: Low, as we have a redundant firewall and switches in the DC, and a single standby firewall in DR.

BCP: If any component of the DC firewall setup fails, traffic fails over automatically to the redundant firewall in DC.

In case of a double failure of the DC firewall setup, we switch to the DR firewall and route all functions to DR.
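
The escalation path above amounts to a simple decision rule, sketched below in Python. The function and its boolean inputs are hypothetical, not part of any deployed monitoring tool.

def firewall_failover_action(dc_primary_up, dc_standby_up):
    """Return the BCP action for the DC firewall pair (illustrative only).

    Encodes the escalation described above: a single failure is absorbed
    by the redundant DC firewall; a double failure forces a DR switchover.
    """
    if dc_primary_up and dc_standby_up:
        return "Normal operation - no action."
    if dc_primary_up or dc_standby_up:
        return ("Single DC firewall failed - automatic failover to the "
                "redundant unit; replace the faulty component.")
    return ("Double DC firewall failure - switch to the DR firewall and "
            "route all functions to DR (RTO: 2 hours).")

# Example: primary down, standby up -> automatic failover message.
print(firewall_failover_action(dc_primary_up=False, dc_standby_up=True))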

RTO: 2 hours to switch all functions from DC to DR.

1.4 Scenario 3: SWIFT link failure

Description: The SWIFT service is connected to ADIB through our firewall system.

There are two service providers for the SWIFT function (Orange & Link.NET) connected to the DC Firewall, and both are connected to the DR Firewall as well, as shown in Fig.1 & Fig.2 in 1.1.

Impact: High

Probability: Low, as we have two links to DC, and another two to DR.

BCP: SWIFT is connected to our DC Firewall through two service providers, so if one link fails, the second link takes over automatically.

If both links go down, we switch SWIFT to our DR site, where another two links are already connected to the SWIFT router.

If the failure is in the Cisco router itself, we switch to the DR site.

RTO:

  • 30 min. to switch SWIFT from DC to DR and complete the required routing.
  • 4 hours to replace the faulty router; however, service is restored within 30 min. of switching to DR.
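
As a worked illustration of these figures, the Python sketch below checks elapsed downtime against the 30-minute service-restoration RTO. The constant and function names are hypothetical helpers, not an existing tool.

from datetime import datetime, timedelta

SWITCH_TO_DR_RTO = timedelta(minutes=30)  # service restoration via DR
ROUTER_REPLACEMENT = timedelta(hours=4)   # full repair of the faulty router

def rto_breached(incident_start, now):
    """True if elapsed downtime exceeds the 30 min. service-restoration RTO."""
    return (now - incident_start) > SWITCH_TO_DR_RTO

# Example: an outage running for 45 minutes has breached the 30 min. RTO.
start = datetime(2024, 1, 1, 9, 0)
print(rto_breached(start, start + timedelta(minutes=45)))  # True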

2.1 Reuters Network diagram

Fig.1 – Reuters Network

2.2 Scenario 1: Reuters link failure

Description: Treasury uses several systems in its work: Reuters (as shown in Fig.1 in 2.1), Bloomberg, a connection with CBE, and SunGard.

Treasury's main office and connections are in the Garden City (GC) head office, and its BCP location is in Lebanon Square.

Impact: Severe. Treasury is a very important department for the bank and cannot afford to lose its systems even for one hour.

Probability: Low, as BT connectivity to GC is of a high standard and there is a backup site in Lebanon Square.

BCP:

For Reuters, we have three services:

  • Reuters Dealing: connected to a BT line, with another BT line connected to the BCP site.
  • Reuters Eikon (News): connected to a BT line and the internet at both GC and the BCP site.
  • Insert Link: connected to a BT line in GC; the data is entered manually at the BCP site.

For SunGard: it is connected to the Reuters server in GC, so no BCP provision is needed.

For CBE connections: Treasury has two CBE machines in GC and another one at the BCP site.

For Bloomberg: it is connected to the internet through our firewall in GC, and it works over the internet at the BCP site as well.
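
The fallback paths above can be collected into a simple lookup table for on-call use. The Python sketch below is illustrative only; the keys and wording are paraphrased from this section.

# Treasury systems mapped to their documented fallback paths (wording
# paraphrased from this section; illustrative lookup only).
TREASURY_BCP = {
    "Reuters Dealing": "Second BT line at the BCP site (Lebanon Square)",
    "Reuters Eikon": "BT line or internet, available at both GC and BCP",
    "Insert Link": "Manual data entry at the BCP site",
    "SunGard": "None needed - rides on the Reuters server in GC",
    "CBE": "Dedicated CBE machine at the BCP site",
    "Bloomberg": "Internet access at the BCP site",
}

def fallback_for(system):
    """Return the documented fallback for a Treasury system, or a warning."""
    return TREASURY_BCP.get(system, "No documented fallback - escalate.")

print(fallback_for("Bloomberg"))  # -> Internet access at the BCP site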

RTO:

  • 30 min. to switch the BT line from Garden City to the BCP site.
  • 10 min. to re-map the CBE IPs from Garden City to the BCP site.