
Recovery Time Objective (RTO) & Recovery Point Objective (RPO)

RTO summary

Hardware Failure

| Outage | RTO | RPO | Action | Condition |
| --- | --- | --- | --- | --- |
| Hardware failure of any HP server in the bank | 6 hours | Zero seconds: the core banking configuration exists on all replicated servers, and every bank server's configuration is held on storage, so no server configuration is lost. | Replace the faulty hardware under the maintenance contract and SLA with HP. | Although the RTO for hardware replacement is 6 hours, services are not affected because of the redundant servers. |
| Hardware failure of any Cisco equipment in the bank | 4 hours | Zero seconds: every Cisco device's configuration is also held on the HP NA server; configuration changes are logged and the previous configuration is retained. | Replace the faulty hardware under the maintenance contract and SLA with BMB. | Although the RTO for hardware replacement is 4 hours, services are not affected because of the redundant switches and routers. |
| Hardware failure of any Avaya equipment in the bank | 6 hours | N/A | Replace the faulty hardware under the maintenance contract and SLA with SISCOM. | Although the RTO for hardware replacement is 6 hours, services are not affected because of the redundant IVR and Avaya servers. |
Network Services

| Outage | RTO | RPO | Action | Condition |
| --- | --- | --- | --- | --- |
| Total loss of the 6th of October Telecom Egypt exchange | 4 hours | N/A | Switch the entire core banking system and applications from DC to DR. | If Telecom Egypt cannot resolve the problem within that time. |
| Core switch / router malfunction | 4 hours | N/A | Switch the entire core banking system and applications from DC to DR. | Only if both redundant switches or routers in the DC fail. |
| Omar Makram Head Office isolated | 3 hours | N/A | Activate the BCP and move users to work from other locations. | If Telecom Egypt cannot resolve the problem within that time. |
| Garden City Head Office isolated | 3 hours | N/A | Activate the BCP and move users to work from other locations. | If Telecom Egypt cannot resolve the problem within that time. |
| DC cable cut | 30 min. | N/A | Switch to the microwave link and resume bank operations. | If both redundant cables (fiber and copper) fail. |
| Branch isolated | 1 hour | N/A | Route the branch's clients to the nearest branch. | If both service providers and the 3G connection are lost. |

A minimal sketch of the failover logic implied by this section follows the table below.
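The network rows share one pattern: the DC-to-DR switch happens only when both redundant paths are down and the provider cannot restore service within the RTO window. The sketch below illustrates that decision rule. It is not part of the bank's tooling; the function name, inputs, and the monitoring data it assumes are hypothetical.

```python
from datetime import datetime, timedelta
from typing import Optional

# Target from the table: core banking network failover must complete within 4 hours.
RTO_CORE_NETWORK = timedelta(hours=4)

def should_fail_over_to_dr(primary_link_up: bool,
                           redundant_link_up: bool,
                           outage_start: datetime,
                           provider_eta: Optional[datetime]) -> bool:
    """Return True when the plan would call for switching core banking from DC to DR.

    Mirrors the table's conditions: fail over only if *both* redundant paths are
    down and the provider cannot restore service within the 4-hour RTO window.
    """
    if primary_link_up or redundant_link_up:
        # Redundancy still covers the outage, so no DR switch is needed.
        return False
    rto_deadline = outage_start + RTO_CORE_NETWORK
    if provider_eta is not None and provider_eta <= rto_deadline:
        # The provider expects to restore service within the RTO window.
        return False
    return True

# Example: both paths down since 09:00, provider ETA 14:00 (past the 13:00 deadline).
start = datetime(2024, 1, 1, 9, 0)
print(should_fail_over_to_dr(False, False, start, datetime(2024, 1, 1, 14, 0)))  # True
```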
Application Hosting Service

| Outage | RTO | RPO | Action | Condition |
| --- | --- | --- | --- | --- |
| Mail Exchange failure | 2 hours | Zero seconds (online replication); if both databases fail, the RPO is one day. | Restore the database backup. | In case of database failure. |
| File server failure | 2 hours | Zero seconds (online replication); if both databases fail, the RPO is one day. | Restore the latest file server snapshot. | In case of database failure. |
| GFS server down | 2 hours | N/A | Restore the last GFS server snapshot. | In case of database failure. |
| Ethix Finance (IBS) server down | 2 hours | N/A | Restore the last Ethix Finance server snapshot. | In case of database failure. |
| XenApp server failure | 1 hour | N/A | Restore the last XenApp server snapshot. | In case of database failure. |
| Oracle DB (ADILease, Collection, IScore, MF) on VMware | 4 hours | Zero seconds (online replication); if both databases fail, the RPO is one day. | Restore the Oracle DB from the previous day's backup. | In case of database failure. |
| Swift server failure | 1 hour | Zero seconds (online replication). | Switch Swift services to DR. | In case of hardware failure, software corruption, or Oracle database failure. |
| Core banking DB (Sybase) down | 4 hours | 10 min. | Full switch to DR and restore the required database. | In case of database failure, either after end of day or during the working day. |
| Core banking DB (SQL) down | 4 hours | Zero seconds (online replication); if both databases fail, the RPO is one day. | Full switch to DR and restore the required database. | In case of database failure, either after end of day or during the working day. |
| SharePoint failure | 4 hours | Zero seconds (online replication); if both databases fail, the RPO is one day. | Restore the server from the last backup. | In case of software corruption. |
| Business objective server failure | 2 hours | N/A | Restore the server from the last backup. | In case of software corruption. |
| CMS / AS400 (card management system for debit cards) | 4 hours | Zero seconds (online replication); if both databases fail, the RPO is one day. | | |
| Interfaces (base2, etc.) | 4 hours | N/A | | |
| MasterCard online | 0 hours | N/A | | |
| Salset (payroll system) | 4 hours | N/A | | |

The two-tier RPO used throughout this section (zero seconds via replication, one day via backup) is illustrated by the worked example after the table.
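Several rows above state the same RPO reasoning: as long as online replication is healthy, the data-loss window is zero seconds; if both the primary and replica databases are lost, the restore point falls back to the last nightly backup, so roughly one day of data is at risk. A minimal worked example, assuming a nightly backup schedule and hypothetical timestamps:

```python
from datetime import datetime, timedelta

def effective_rpo(replication_healthy: bool,
                  last_nightly_backup: datetime,
                  failure_time: datetime) -> timedelta:
    """Effective data-loss window for a replicated database, per the table:
    zero seconds while online replication holds; otherwise everything since
    the last nightly backup (roughly one day) is lost."""
    if replication_healthy:
        return timedelta(0)
    return failure_time - last_nightly_backup

# Example: both the primary and replica databases fail at 16:30;
# the last nightly backup finished at 02:00 the same day.
loss = effective_rpo(False,
                     datetime(2024, 1, 2, 2, 0),
                     datetime(2024, 1, 2, 16, 30))
print(loss)  # 14:30:00 worth of transactions would need to be recovered another way
```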
File Hosting Services (Storage)

| Outage | RTO | RPO | Action | Condition |
| --- | --- | --- | --- | --- |
| Core storage malfunction | 4 hours | Zero seconds (online replication); if both databases fail, the RPO is one day. | Switch all services to DR. | In case of hardware failure of the EMC storage system. |
| Virtualization storage hardware malfunction | 2 hours | N/A | Restore the server backup. | High redundancy is available, so only minor performance degradation is expected. |
Call Center and Voice Recording Services

| Outage | RTO | RPO | Action | Condition |
| --- | --- | --- | --- | --- |
| Call center server / DB / IVR failure | 30 min. / 1 hour | 1 day | Switch the IVR to the DR site; switch the short number to Borsa. | In case of IVR failure; in case the call center agents relocate. |
| Avaya voice recording failure | 1 hour | 1 day | Switch the short number to Borsa. | In case the call center agents relocate. |
Central Bank and Regulatory Secured Links (Firewall)

| Outage | RTO | RPO | Action | Condition |
| --- | --- | --- | --- | --- |
| Third-party services down (CBE / ISCORE / NPC / 123) | 30 min. | N/A | Switch the faulty function from DC to DR and apply the required routing. | If both redundant links to the DC are lost. |
| Firewall equipment malfunction | 2 hours | N/A | Switch all third-party functions from DC to DR. | If both the main and redundant firewalls at the DC are lost. |
| Swift link failure | 30 min. | N/A | Switch Swift from DC to DR and apply the required routing. | If both Swift service providers (Orange and Link.NET) connected to the DC firewall fail. |
Treasury Systems (Reuters, Bloomberg, SunGard, CBE)

| Outage | RTO | RPO | Action | Condition |
| --- | --- | --- | --- | --- |
| Reuters link failure | 30 min. / 10 min. | N/A | Switch the BT line from Garden City to the BCP site; re-map the CBE IPs from Garden City to the BCP site. | If the Reuters link is down or the Garden City premises are not accessible. |
Thin Client Environment

| Outage | RTO | RPO | Action | Condition |
| --- | --- | --- | --- | --- |
| Failure of the Citrix farm database | 1 hour | N/A | Restore the last server snapshot. | In case of database failure in the DC. |
Voice System (Non-Call Center)

| Outage | RTO | RPO | Action | Condition |
| --- | --- | --- | --- | --- |
| Cisco Unified Communications Manager (CUCM) / other voice component failure | 4 hours | N/A | Replace the faulty hardware under the maintenance contract and SLA with BMB. | Only if all three CUCM subscribers fail at the same time in three different locations. A failure of the publisher CUCM only locks the database, so no changes to the system are allowed, but call flow is not interrupted. |
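When the plan is exercised, each drill result can be checked against the targets above. The sketch below encodes a small subset of the table as data and compares a measured recovery against it; the class, function, and drill figures are illustrative assumptions, not values from the plan.

```python
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional

@dataclass
class RecoveryTarget:
    outage: str
    rto: timedelta
    rpo: Optional[timedelta]  # None where the table lists N/A

# Illustrative subset of the summary encoded as data.
TARGETS = [
    RecoveryTarget("Core banking DB (Sybase) down", timedelta(hours=4), timedelta(minutes=10)),
    RecoveryTarget("Swift link failure", timedelta(minutes=30), None),
    RecoveryTarget("DC cable cut", timedelta(minutes=30), None),
]

def meets_target(target: RecoveryTarget,
                 measured_rto: timedelta,
                 measured_rpo: timedelta) -> bool:
    """True if a drill's measured recovery time and data loss stay within the plan."""
    if measured_rto > target.rto:
        return False
    if target.rpo is not None and measured_rpo > target.rpo:
        return False
    return True

# Example drill: the Sybase failover completed in 3.5 hours with 5 minutes of data loss.
print(meets_target(TARGETS[0], timedelta(hours=3, minutes=30), timedelta(minutes=5)))  # True
```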