Artificial Intelligence (AI), in its current state, cannot replace ethical
hackers and will not take their jobs anytime soon. What AI can
actually offer is a recommender system that helps ethical hackers optimize
their time to solve (TTS) by providing suggestions at successive
steps and milestones of the hacking process. These suggestions
are based on the available information about the target system
and the attacker's current access capabilities.
In this master's thesis, we shift the focus from training autonomous
AI agents for ethical hacking to building a dataset of attack trees
serving as a basis for a recommender system. We propose a comprehensive
model for building attack trees that accounts for possible human
actions. We consider machine pwning as a case study and model
attack trees for a set of machines (boxes) offered for training by some of
the most popular ethical hacking training platforms. At the same
time, our attack trees serve as training material: quick, concise
cheat sheets that replace long, detailed walkthroughs. Furthermore,
we conduct a human study and find that our attack trees help reduce
the TTS for learners with diverse technical backgrounds and levels of expertise.
Finally, we compare our work to similar approaches in the literature
and sketch preliminary ideas on how our approach can benefit an AI-based
recommender system.