# PurpleDome

A simulation environment for attacks on computer networks.

PurpleDome creates simulated systems which hack each other.

It creates several virtual machines to simulate a target network. A Kali attacker machine is spawned and uses the configured attacks to blast at the targets. Those attacks can be Kali command line tools, Caldera abilities or Metasploit modules.

The goal is to test and improve sensors and detection logic on the targets and in the network.

The system is reproducible and at the same time quite flexible (in terms of target systems, vulnerabilities on the targets, and attacks).

## Installation

On a current Ubuntu system, just execute init.sh to install the required packages and set up the Python virtual environment:

```bash
./init.sh
```

The default VM technology is Vagrant with VirtualBox.

Before using any PurpleDome commands, switch into the Python environment:

```bash
source venv/bin/activate
```

## My first experiment

Run

```bash
python3 ./experiment_control.py -vvv run --configfile hello_world.yaml
```

This will:

- use Vagrant to generate the attacker and the targets
- run them
- run several attacks from the attacker against the targets
- zip the sensor logs and attack logs together
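Put together, a first run from a fresh checkout boils down to the commands already shown above:

```bash
# One-time setup: install packages and create the virtual environment (Ubuntu)
./init.sh

# Activate the virtual environment (needed in every new shell)
source venv/bin/activate

# Run the hello_world experiment with maximum verbosity
python3 ./experiment_control.py -vvv run --configfile hello_world.yaml
```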

Building the machines with Vagrant will take some time on the first run. Please be patient.

After the experiment has run, open the zip file containing the attack log and all the sensor logs:

```bash
file-roller loot/2021_11_11___12_13_14/2021_11_11___12_13_14.zip
```

or jump directly into the human-readable attack log:

```bash
evince tools/human_readable_documentation/build/latex/purpledomesimulation.pdf
```

(which is included in the zip as well)
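If you prefer the command line over a graphical archive viewer, a plain unzip works just as well (the timestamped path is only an example; use the directory of your own run):

```bash
# List the archive contents without extracting
unzip -l loot/2021_11_11___12_13_14/2021_11_11___12_13_14.zip

# Extract everything into a scratch directory for inspection
unzip loot/2021_11_11___12_13_14/2021_11_11___12_13_14.zip -d /tmp/purpledome_loot
```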

## Running the basic commands

All command line tools have built-in help. You can access it with the "--help" parameter.

```bash
python3 ./experiment_control.py -v run
```
- `-v` sets the verbosity. To spam stdout, use `-vvv`
- `run` is the default command
- `--configfile` is optional. If it is not supplied, experiment.yaml is used
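For example, to see all available commands and options of the experiment runner:

```bash
# Show the built-in help; the other *_control.py tools offer the same kind of help
python3 ./experiment_control.py --help
```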

Most of the configuration is done in the YAML config file. For more details, check out the full documentation.
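To build your own experiment, a reasonable starting point is to copy the shipped example configuration and adapt it; the file name my_experiment.yaml below is just a placeholder:

```bash
# Start from the example config (my_experiment.yaml is a placeholder name)
cp hello_world.yaml my_experiment.yaml

# Edit the copy to your liking, then run it
python3 ./experiment_control.py -vvv run --configfile my_experiment.yaml
```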

## Testing

Basic code checks and unit tests can be run with:

```bash
make test
```

That way you can also see whether your environment is set up properly.

It will also check the plugins you write for compatibility.
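If you want to run parts of the test suite directly rather than through the Makefile, something along these lines should work inside the activated virtual environment (this assumes the usual pytest layout of the tests directory and is not the official entry point):

```bash
# Run the unit tests directly with pytest (assumption: standard tests/ layout)
python -m pytest tests/

# Or run the full matrix defined in tox.ini
tox
```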

The tool `./pydantic_test.py` is not included in `make test`. But you can use it manually to verify your YAML config files. As they tend to become quite complex, this is a time saver.
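The exact command line interface of the script is not documented here; assuming it takes the configuration file as an argument, a check would look roughly like this:

```bash
# Validate an experiment config before running it.
# The argument style is an assumption; check the script's --help for the real interface.
python3 ./pydantic_test.py hello_world.yaml
```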

## More documentation

This README is just a short overview. In-depth documentation can be found in the doc folder.

The documentation uses Sphinx. To build it, go into the doc folder and call:

```bash
make html
```

Use your browser to open build/html/index.html and start reading.
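Put together, building and opening the documentation looks like this (xdg-open is just one way to open the page in your default browser):

```bash
cd doc
make html

# Open the generated documentation
xdg-open build/html/index.html
```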

## Development

The code is hosted at https://github.com/avast/PurpleDome. Feel free to fork it and create a pull request.

Development happens in feature branches branched off from the develop branch, and all PRs go back there. The release branch is a temporary branch from develop and is used for bug fixing before a PR to main creates a new release. Commits in main are marked with tags, and the changelog.txt file describes the new features in human-readable form.

https://nvie.com/posts/a-successful-git-branching-model/

Short:

- As a user, the main branch is relevant for you
- Start a feature branch from develop
- When doing a hotfix, branch from main (see the command sketch below)
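For the hotfix case, the commands mirror the feature-branch workflow shown below, just starting from main (hotfix_my_fix is a placeholder branch name):

```bash
$ git checkout main
$ git pull --rebase=preserve
$ git checkout -b hotfix_my_fix
```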

## Git

### Branching your own feature branch

```bash
$ git checkout development
$ git pull --rebase=preserve
$ git checkout -b my_feature
```

Do some coding, commit.

### Rebase before pushing

```bash
$ git checkout development
$ git pull --rebase=preserve
$ git checkout my_feature
$ git rebase development
```

Code review will happen on GitHub. If everything is fine, you should squash the several commits you made into one (so one commit = one feature). This will make code management and debugging a lot simpler when your commit is added to the develop and main branches:

```bash
$ git rebase --interactive
$ git push --force
```