=== ADVISE ===
Mobius supports multiple modeling formalisms, including ADversary VIew Security Evaluation (ADVISE). ADVISE allows modelers to create and analyze a specific adversary profile in an executable state-based security model of a system (11LEM02). Using graphical primitives, ADVISE provides a high-level modeling formalism to easily create the adversary profile and the executable state-based security model of the system under consideration.

====ADVISE Primitives====

ADVISE models consist of five primitive objects: attack steps, accesses, knowledges, skills, and goals. Attack steps represent the actions taken by the adversary. Accesses, knowledges, and skills represent the accesses, knowledge, and skills of the adversary, respectively. Goals represent the goals (or flags) of the adversary.

=====Attack Step=====

Attack steps represent the actions taken by the adversary. Attack steps are represented graphically as rectangles. An attack step is fully described by its inputs, outputs, and properties. An attack step can have inputs from accesses, knowledges, skills, and goals. It can also have outputs to the same four types of elements (accesses, knowledges, skills, and goals). An attack step also contains four properties: attack cost, attack execution time, preconditions, and outcomes.

The attack cost of an attack step represents the relative cost of the attack step to the adversary. For example, if an attack step is difficult or expensive (such as hacking into a locked vault door), then the attack cost will be relatively large. If the attack step is easy or inexpensive (such as opening an unlocked door), then the attack cost will be relatively small. Since an adversary is more likely to take the easiest route to the goal, she will likely choose an attack step with a lower attack cost over one with a higher attack cost.

The attack execution time of an attack step represents the relative time it will take to complete the attack step. For example, if an attack step will take a long time to complete (such as downloading several terabytes of logs), then the attack execution time will be relatively large. If the attack step will not take much time to complete (such as downloading a few kilobytes of a small subset of the logs), then the attack execution time will be relatively small. Since an adversary is more likely to take the quickest route to the goal, she will likely choose an attack step with a shorter attack execution time over one with a longer attack execution time.
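The cost and time preferences described above can be sketched as a simple ranking. This is only an illustration of the idea, not the actual Mobius ADVISE adversary decision function (which also weighs factors such as detection risk and goal payoff), and the step names and numbers are invented:

```python
# Illustrative sketch only: rank hypothetical candidate attack steps by the
# two properties just described (cost, then execution time).  Names and
# values are invented examples, not part of the ADVISE formalism.
candidate_steps = [
    {"name": "hack vault door",    "cost": 900.0, "time": 48.0},
    {"name": "open unlocked door", "cost": 10.0,  "time": 0.1},
]

# Prefer the cheapest step, breaking ties by shorter execution time.
best = min(candidate_steps, key=lambda s: (s["cost"], s["time"]))
print(best["name"])  # open unlocked door
```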

The preconditions of an attack step represent the conditions that must hold before the adversary is able to attempt the attack step. The preconditions of an attack step are closely related to the inputs of the attack step, since the inputs provide the state variables of the model that can be used in the conditional expression. For example, if the attack step requires a certain access and level of skill to attempt, then the inputs of the attack step would be that access and that skill, and the precondition of the attack step would be a conditional expression such as <code>return (access1 && (skill1 > 0.7));</code>. As a more concrete example, suppose the attack step is to pick the lock on a safe. The access in this case would be close proximity to the lock, and the skill would be lock-picking.
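As a sketch of how such a precondition gates an attack step, the C++-style expression above can be mirrored in Python. The variable names come from the example expression, but the function itself is purely illustrative and not Mobius code:

```python
# Illustrative only: in Mobius the precondition is a C++-style boolean
# expression over the step's input state variables; this mirrors
# "return (access1 && (skill1 > 0.7));".
def precondition(access1, skill1):
    # The step is enabled only with the access and sufficient skill.
    return access1 and skill1 > 0.7

print(precondition(access1=True, skill1=0.8))   # True: step may be attempted
print(precondition(access1=True, skill1=0.5))   # False: skill too low
print(precondition(access1=False, skill1=0.9))  # False: no access to the lock
```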

The outcomes of an attack step represent the possible results that occur if the attack step is successfully completed. The outcomes of an attack step are closely related to the outputs of the attack step, since the outputs provide the state variables of the model that can be modified depending on the resulting outcome. Every attack step has one or more outcomes, each with a probability of occurring, and the probabilities of an attack step's outcomes always sum to 1. Each outcome also has an associated detection probability, which represents the probability that the adversary will be detected if this outcome occurs. Since an adversary will likely want to avoid being detected, she will likely choose an attack step whose outcomes have a lower weighted detection probability over one whose outcomes have a higher weighted detection probability.
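The weighted detection probability of an attack step is simply the expectation of detection over its outcomes. A small sketch with invented numbers (this is not a Mobius API):

```python
# Each outcome pairs an occurrence probability with a detection
# probability; the numbers here are illustrative only.
outcomes = [
    {"prob": 0.6, "detect": 0.1},  # quiet outcome, rarely detected
    {"prob": 0.4, "detect": 0.5},  # noisy outcome, often detected
]

# The occurrence probabilities of a step's outcomes must sum to 1.
assert abs(sum(o["prob"] for o in outcomes) - 1.0) < 1e-9

# Expected (weighted) detection probability across the outcomes.
weighted_detection = sum(o["prob"] * o["detect"] for o in outcomes)
print(round(weighted_detection, 2))  # 0.26 = 0.6*0.1 + 0.4*0.5
```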

=====Access=====

Accesses represent the relevant accesses the adversary may have (or eventually gain) in the executable state-based security model of the system. Accesses are state variables that store whether or not the adversary has the given access. An access may represent a physical access (such as close proximity to a target or having a key to a relevant door in the model) or a more abstract access (such as administrator privileges on a target machine). Accesses are represented graphically as purple squares.

=====Knowledge=====

Knowledges represent the relevant knowledge the adversary may have (or eventually gain) in the executable state-based security model of the system. Knowledges are state variables that store whether or not the adversary has knowledge of the given information. Knowledge may represent a fact such as the adversary knowing a certain password of a target system or the type of encryption that is used between two target nodes. Knowledges are represented graphically as green circles.

=====Skill=====

Skills represent the relevant skills the adversary may have (or sometimes may eventually gain) in the executable state-based security model of the system. Skills are state variables that store the extent to which the adversary is skilled in the given area. For example, a certain adversary may have a lock-picking skill of 0.7, which could mean that she is adept at lock-picking but not yet a master of the skill. Skills are represented graphically as blue triangles.

=====Goal=====

Goals (or flags) represent what the adversary is ultimately trying to achieve. Goals are state variables that indicate whether or not the adversary has yet accomplished the goal. Goals can represent achievements such as successfully accessing sensitive information, shutting down a target system, or escaping from a bank with stolen jewels. Goals are represented graphically as gold ovals.
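Taken together, the four state-variable types can be pictured as a small adversary state. The sketch below is purely illustrative: the names are invented, and Mobius stores these as model state variables rather than a dictionary.

```python
# Illustrative adversary state: accesses, knowledges, and goals are
# boolean state variables; skills hold a proficiency level (here in [0, 1]).
adversary_state = {
    "accesses":   {"near_safe": True},         # physical proximity gained
    "knowledges": {"safe_combination": False}, # fact not yet learned
    "skills":     {"lock_picking": 0.7},       # adept, not yet a master
    "goals":      {"steal_jewels": False},     # ultimate goal not achieved
}

# A precondition could read these variables, and an outcome could modify
# them; for example, a successful attack step achieves the goal:
adversary_state["goals"]["steal_jewels"] = True
print(adversary_state["goals"]["steal_jewels"])  # True
```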

====Editor====

This section describes the atomic formalism that represents ADVISE, with emphasis on the creation, editing, and manipulation of atomic models using the Mobius ADVISE editor.

TODO - Finish this section

====Edit====

TODO - finish this section

====View====

TODO - finish this section

====Elements====

Elements are ADVISE model primitives. The Elements menu includes the following types of elements:

:* Attack step - rectangle
:* Access - purple square
:* Knowledge - green circle
:* Skill - blue triangle
:* Goal - gold oval

TODO - finish this section

Revision as of 22:21, 29 October 2014