Incident Response

Incident Response
- Incident Reporting

Incident Response Use Cases
- Lab Setup
- Role Playing - Shift Manager
- Demonstrating: Investigating and Escalating
- Report from Malware Analyst
- Exercise 1.1: Exploring Suspicious Executable Detected using SIEM
- Exercise 1.2: Investigating Multiple Failed Logins using SIEM
- Exercise 3: Mitigating Risk
- Exercise 4.1: Asking the Right Questions
- Scenario 4.1: Asking the Right Questions
- Scenario 4.2: Suspicious or Malicious?
- Exercise 4.2: Reviewing the Shift Log
- Exercise 4.3: Investigating an Unauthorized Login Attempt
- Exercise 4.4: Investigating Firewall Traffic
- Exercise 4.5: Reviewing the Security Operations Mailbox
- Exercise 5.1: Reviewing New Intelligence
- Exercise 5.2: Assessing Threat Severity
- Exercise 6: Recommending Remediation
- Exercise 7: Conducting a Post-Incident Review
- Exercise 8: Communicating with Operations and Senior Management

Business Continuity
- Business Continuity Plan Development (8 Topics)
- BCP Invocation Process (2 Topics)
- Emergency Procedures (7 Topics)
- Crisis Management Team (10 Topics)
- BCP Seating Plan
- Overview

Disaster Recovery
- Scope of Critical Services
- Network Services
- Application Hosting Service
- File Hosting Services
- Call Centre and Voice Recording Services
- Regulatory Links
- Thin Client Environment
- Voice System (Non-Service Desk)
- Printing Services
- Recovery Time Objective (RTO) & Recovery Point Objective (RPO)
- Single Point of Failure
- Redundancy Requirements
- Alternate Locations
- Contact Protocol (4 Topics)
Thin Client Environment
1.1 Server Architecture
1.2 Scenario 1: Server Down
Description: A server outage could be caused by a failure of the Citrix farm database or by a hardware failure.
Impact: Medium. All Thin Client users will be affected, but only as minor performance degradation of the applications published on Citrix.
Probability: Low, as we have redundancy across the DC and DR sites. Please refer to Fig. 1 in 9.1.
BCP: There are 70 virtual Citrix farm servers in the DC and another 70 in the DR site. These virtual servers run on two high-end redundant physical servers plus five smaller servers in the DC, and on one high-end server plus seven smaller physical servers in the DR site.
If the database fails, the last server snapshot will be restored.
If hardware fails, operations will continue on the redundant servers while the faulty server is restored as per the maintenance contract and the SLA with HP.
RTO: 1 hour to restore the database backup; 6 hours to replace the faulty hardware part.
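The two recovery paths and their RTOs above can be sketched as a simple decision routine. This is an illustrative sketch only: the function and field names, and the idea of a programmatic status check, are assumptions for clarity, not part of the actual BCP tooling.

```python
from dataclasses import dataclass

# RTO targets from the scenario above (hours).
RTO_DB_RESTORE_HOURS = 1   # restore the last database snapshot
RTO_HW_REPLACE_HOURS = 6   # replace the faulty part under the HP SLA

@dataclass
class FarmStatus:
    """Hypothetical observed state of the Citrix farm."""
    db_reachable: bool
    hw_healthy: bool

def recovery_action(status: FarmStatus) -> tuple[str, int]:
    """Map an observed failure to the documented recovery path and its RTO."""
    if not status.db_reachable:
        # Database failure: restore the last server snapshot.
        return ("restore_db_snapshot", RTO_DB_RESTORE_HOURS)
    if not status.hw_healthy:
        # Hardware failure: run on the redundant servers while the
        # faulty server is restored per the maintenance contract.
        return ("failover_and_replace_hw", RTO_HW_REPLACE_HOURS)
    return ("no_action", 0)
```

The ordering reflects the plan: a database failure is handled first (shorter RTO, snapshot restore), while a hardware failure relies on the built-in redundancy until the part is replaced.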