Application Hosting Service

1.1 Server Architecture

Fig. 1: Server architecture (referenced by the scenarios below)

1.2 Scenario 1: Mail Exchange Failure

Description: This could be caused by a database failure or by a hardware failure of the Exchange server.

Impact: High. A database failure will deny some users access to email, while a hardware failure will result in slow email response.

Probability: Low, as there are redundant Exchange servers in the DC and DR (see Fig. 1 in Section 1.1).

BCP: There are two CAS and two Mailbox (MB) virtual Exchange servers in the DC, and one CAS and one MB server in the DR. These virtual servers run on two redundant high-end physical servers in the DC, and on one high-end and four smaller physical servers in the DR (a minimal sketch of this layout follows at the end of this scenario).

If a database failure occurs, the backup will be restored.

If a hardware failure occurs, the redundant servers will carry the load while the faulty Exchange server is restored under the maintenance contract and the SLA with HP.

RTO: 2 hours to restore the database backup; 6 hours to replace the faulty hardware component.
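To make the redundancy argument above concrete, the following is a minimal Python sketch (not part of the documented BCP) that models the Exchange inventory described in this scenario using hypothetical host names, and checks whether email service remains available after a simulated site failure.

from dataclasses import dataclass

# Hypothetical inventory based on the counts in this scenario:
# DC: 2 CAS + 2 MB virtual servers; DR: 1 CAS + 1 MB.
@dataclass
class VirtualServer:
    name: str      # hypothetical host name
    role: str      # "CAS" or "MB"
    site: str      # "DC" or "DR"
    healthy: bool = True

INVENTORY = [
    VirtualServer("dc-cas-01", "CAS", "DC"),
    VirtualServer("dc-cas-02", "CAS", "DC"),
    VirtualServer("dc-mb-01", "MB", "DC"),
    VirtualServer("dc-mb-02", "MB", "DC"),
    VirtualServer("dr-cas-01", "CAS", "DR"),
    VirtualServer("dr-mb-01", "MB", "DR"),
]

def service_available(servers):
    """Email service needs at least one healthy CAS and one healthy MB somewhere."""
    roles_up = {s.role for s in servers if s.healthy}
    return {"CAS", "MB"} <= roles_up

if __name__ == "__main__":
    # Simulate losing the whole DC (e.g. a physical host failure taking down its VMs).
    for s in INVENTORY:
        if s.site == "DC":
            s.healthy = False
    print("Mail service still available:", service_available(INVENTORY))  # True, via DR

Running the sketch with the DC marked down shows the service surviving on the DR pair, which is the basis for rating the probability of a full outage as low.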

1.3 Scenario 2: File Server Failure

Description: This could be caused by a database failure or by a hardware failure of the file server.

Impact: High. It will cause a service outage for check clearing, BO, and shared folders.

Probability: Low, as there are redundant file servers in the DC and DR (see Fig. 1 in Section 1.1).

BCP: There are three virtual file servers in the DC and another three in the DR. These virtual servers run on two redundant high-end physical servers in the DC, and on one high-end and four smaller physical servers in the DR.

If a database failure occurs, the last file server snapshot will be restored.

If a hardware failure occurs, the redundant servers will carry the load while the faulty server is restored under the maintenance contract and the SLA with HP.

RTO: 2 hours to restore the database backup; 6 hours to replace the faulty hardware component.

1.4 Scenario 3: GFS Server Down

Description: This could be caused by a database failure or by a hardware failure of the GFS server.

Impact: High, as 25% of Global FS (GFS) users will be denied access until the issue is fixed.

Probability: Low, as there are redundant GFS servers in the DC and DR (see Fig. 1 in Section 1.1).

BCP: There are four virtual GFS servers in the DC and another four in the DR. These virtual servers run on two redundant high-end physical servers in the DC, and on one high-end and four smaller physical servers in the DR.

If a database failure occurs, the last GFS server snapshot will be restored.

If a hardware failure occurs, the redundant servers will carry the load while the faulty server is restored under the maintenance contract and the SLA with HP.

RTO: 2 hours to restore the database backup; 6 hours to replace the faulty hardware component.

1.5 Scenario 4: Ethix Finance (IBS) Server Down

Description: This could be caused by a database failure or by a hardware failure of the Ethix Finance server.

Impact: High. Performance will be slower for 35% of Ethix Finance users, as load is balanced across the three virtual Ethix servers.

Probability: Low, as there are redundant Ethix Finance servers in the DC and DR (see Fig. 1 in Section 1.1).

BCP: There are three virtual Ethix Finance servers in the DC and another three in the DR. These virtual servers run on two redundant high-end physical servers in the DC, and on one high-end and four smaller physical servers in the DR.

If a database failure occurs, the last Ethix Finance server snapshot will be restored.

If a hardware failure occurs, the redundant servers will carry the load while the faulty server is restored under the maintenance contract and the SLA with HP.

RTO: 2 hours to restore the database backup; 6 hours to replace the faulty hardware component.

1.6 Scenario 5: XenApp Server Failure

Description: This could be caused by a failure of the Citrix farm database or by a hardware failure of the Xen server.

Impact: Medium. It affects all client systems and all services, but causes only minor performance degradation of the applications published on Citrix, such as PHX.

Probability: Low, as there are redundant Xen servers in the DC and DR (see Fig. 1 in Section 1.1).

BCP: There are 70 virtual Citrix farm servers in the DC and another 70 in the DR. These virtual servers run on two redundant high-end physical servers and five smaller servers in the DC, and on one high-end and seven smaller physical servers in the DR.

If a database failure occurs, the last XenApp server snapshot will be restored.

If a hardware failure occurs, the redundant servers will carry the load while the faulty server is restored under the maintenance contract and the SLA with HP.

RTO: 1 hour to restore the database backup; 6 hours to replace the faulty hardware component.

1.7 Scenario 6: SWIFT Server Failure

Description: This could be caused by a hardware failure, software corruption, or an Oracle database failure.

Impact: High, as it will make processing of SWIFT messages unavailable.

Probability: Medium. There are two servers in the DC with online replication to another server in the DR, as shown in Fig. 1 in Section 1.1.

BCP: In the event of any hardware or software failure, SWIFT services will be switched to the DR (a minimal health-check sketch follows this scenario).

RTO: 1 hour to switch to the DR.
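As an illustration only, and not part of the documented procedure, the sketch below shows a simple TCP reachability probe that operations staff could use to confirm the primary SWIFT servers are down before invoking the switch to the DR. The host names and port are hypothetical placeholders.

import socket

# Hypothetical host names and port; substitute the real DC/DR addresses.
PRIMARY_SWIFT_HOSTS = ["dc-swift-01.example.local", "dc-swift-02.example.local"]
DR_SWIFT_HOST = "dr-swift-01.example.local"
SERVICE_PORT = 443          # assumed port for the probe, not the documented one
TIMEOUT_SECONDS = 5

def is_reachable(host: str, port: int = SERVICE_PORT) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    primaries_down = not any(is_reachable(h) for h in PRIMARY_SWIFT_HOSTS)
    if primaries_down:
        print("Both DC SWIFT servers unreachable; escalate and prepare switch to DR.")
        print("DR server reachable:", is_reachable(DR_SWIFT_HOST))
    else:
        print("At least one DC SWIFT server is reachable; no DR switch required.")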

1.8 Scenario 7: Core Banking DB (Sybase) Down

Description: This could occur in the event of a database failure, either after end-of-day processing or during the working day.

Impact: Severe, as it will affect all banking services and will require a full switch to the DR.

Probability: Low, as the database runs on two redundant high-end Sun/Solaris DB servers in the DC, with another one in the DR, as shown in Fig. 1 in Section 1.1.

BCP: The IT Head will evaluate the fault and, together with the O&T Head, decide how much time is allowed for fixing the database before performing a full switch to the DR.

If the corruption occurred before the start of the day, the full backup from the previous night will be restored.

If the corruption occurred during working hours, the full backup from the previous night will be restored, followed by the incremental backups up to the time of the failure, so at most 10 minutes of data are lost (see the restore-chain sketch after this scenario).

RTO: 4 hours for the full switch to the DR and restoration of the required database.
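The following is a minimal, illustrative Python sketch of the restore logic described above, using hypothetical backup timestamps: pick the last nightly full backup before the failure, apply every incremental backup taken after it up to the failure time, and report the resulting data-loss window.

from datetime import datetime, timedelta

# Hypothetical backup catalogue: one nightly full backup plus
# incremental backups taken every 10 minutes during the working day.
FULL_BACKUPS = [datetime(2024, 1, 15, 1, 0)]           # last night, 01:00
INCREMENTALS = [datetime(2024, 1, 15, 8, 0) + timedelta(minutes=10 * i)
                for i in range(40)]                     # 08:00 .. 14:30

def plan_restore(failure_time: datetime):
    """Return (full_backup, incrementals_to_apply, data_loss) for a failure time."""
    full = max(b for b in FULL_BACKUPS if b <= failure_time)
    incs = [b for b in INCREMENTALS if full < b <= failure_time]
    last_point = incs[-1] if incs else full
    return full, incs, failure_time - last_point

if __name__ == "__main__":
    failure = datetime(2024, 1, 15, 11, 37)
    full, incs, loss = plan_restore(failure)
    print(f"Restore full backup from {full}, then {len(incs)} incrementals.")
    print(f"Estimated data loss: {loss}")   # at most the 10-minute incremental interval

With 10-minute incrementals, the data-loss window can never exceed 10 minutes, which is the figure quoted in this scenario.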

1.9 Scenario 8: Core Banking DB (SQL) Down

Description: This could occur in the event of a database failure, either after end-of-day processing or during the working day.

Impact: Severe, as it will affect the GFS, Ethix Finance (IBS), and Signature services and will require a full switch to the DR.

Probability: Low, as the database runs on two redundant high-end VMware servers in the DC with online replication to one high-end and four smaller servers in the DR, as shown in Fig. 1 in Section 1.1.

BCP: The IT Head will evaluate the fault and, together with the O&T Head, decide how much time is allowed for fixing the database before performing a full switch to the DR.

If the corruption is in the production database (DC) only, a full switch to the DR will be made and operations will continue there directly.

If the corruption is in both the production database (DC) and the DR copy, a new database must be built in the DC, the full backup from the previous night restored, and the incremental backups applied, with at most 10 minutes of data loss (see the decision sketch after this scenario).

RTO: 4 hours for the full switch to the DR and restoration of the required database.
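A minimal, illustrative Python sketch of the decision logic above; the corruption flags are hypothetical inputs, not probes against the real databases.

def recovery_action(dc_corrupted: bool, dr_corrupted: bool) -> str:
    """Map the state of the DC and DR database copies to the documented recovery path."""
    if not dc_corrupted:
        return "No action: production database in the DC is intact."
    if not dr_corrupted:
        return "Switch to DR and operate there directly."
    return ("Rebuild the database in the DC, restore last night's full backup, "
            "then apply incrementals (data loss of up to 10 minutes).")

if __name__ == "__main__":
    print(recovery_action(dc_corrupted=True, dr_corrupted=False))
    print(recovery_action(dc_corrupted=True, dr_corrupted=True))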

1.10 Scenario 9: SharePoint Failure

Description: This could be caused by a database failure or by a hardware failure of the file servers.

Impact: Medium. It will cause an outage of the bank forms services.

Probability: Medium.

BCP: There is one SharePoint server in the DC and another in the DR. These servers run on a VMware host.

In the event of software corruption, the server will be restored from the last backup.

RTO: 4 hours to restore the server backup.

1.11 Scenario 10: Business Objects (BO) Server Failure

Description: This could be caused by a database failure or by a hardware failure of the file servers.

Impact: Medium. It will result in performance degradation of BO.

Probability: Medium.

BCP: There are four BO servers in the DC and another four in the DR. These servers run on a VMware host, as shown in Fig. 1 in Section 1.1.

In the event of software corruption, the server will be restored from the last backup.

RTO: 2 hours to restore the server backup.
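For quick reference during an incident, the RTO figures stated in Scenarios 1-10 above could be kept in a small lookup structure. The sketch below is illustrative only; the values are copied from this section and the key names are informal labels, not official service names.

# RTOs as stated in Scenarios 1-10 above, in hours. Where a scenario lists two
# figures, both are kept (database/backup restore vs. hardware replacement).
RTO_HOURS = {
    "Mail Exchange":            {"db_restore": 2, "hw_replace": 6},
    "File server":              {"db_restore": 2, "hw_replace": 6},
    "GFS":                      {"db_restore": 2, "hw_replace": 6},
    "Ethix Finance (IBS)":      {"db_restore": 2, "hw_replace": 6},
    "XenApp / Citrix":          {"db_restore": 1, "hw_replace": 6},
    "SWIFT":                    {"dr_switch": 1},
    "Core banking DB (Sybase)": {"dr_switch_and_restore": 4},
    "Core banking DB (SQL)":    {"dr_switch_and_restore": 4},
    "SharePoint":               {"backup_restore": 4},
    "Business Objects (BO)":    {"backup_restore": 2},
}

def worst_case_rto(service: str) -> int:
    """Largest stated RTO figure for a service, in hours."""
    return max(RTO_HOURS[service].values())

if __name__ == "__main__":
    for svc in RTO_HOURS:
        print(f"{svc}: worst-case RTO {worst_case_rto(svc)} h")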