MITRE ATT&CK (https://attack.mitre.org/) is an amazing knowledge base of adversary techniques.
MITRE ATT&CK® is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community.
With the creation of ATT&CK, MITRE is fulfilling its mission to solve problems for a safer world — by bringing communities together to develop more effective cybersecurity. ATT&CK is open and available to any person or organization for use at no charge.
The ATT&CK matrix view is still being ported over to the new React user interface. The mappings themselves are still tracked by the back-end and available via reporting.
Commands can be automatically tagged with MITRE ATT&CK techniques (this is what populates the "Commands by ATT&CK" output). To locate these mappings, look at the associated Python file for each command: /Mythic/Payload_Types/[agent name]/mythic/agent_functions/[cmd_name].py. In addition to defining the general properties of the command (such as parameters, description, and help information), this file contains a field called attackmapping that takes an array of MITRE T# values. For example, looking at the apfell agent's download command:
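A sketch of what such a command definition looks like. The real file subclasses Mythic's CommandBase (imported from Mythic's payload-type container libraries) and defines many more attributes (parameters, author, create_tasking, etc.); the stand-in base class and the specific T# values below are illustrative, not apfell's exact mapping.

```python
# Simplified sketch of an agent_functions command definition.
# CommandBase here is a stand-in for Mythic's actual base class.

class CommandBase:
    """Stand-in for Mythic's real CommandBase."""
    attackmapping: list = []

class DownloadCommand(CommandBase):
    cmd = "download"
    description = "Download a file from the target host"
    # MITRE ATT&CK technique IDs; synced to the Mythic server
    # when the agent's container starts up
    attackmapping = ["T1020", "T1041"]

print(DownloadCommand.attackmapping)  # ['T1020', 'T1041']
```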
When this command syncs to the Mythic server, those T numbers are stored and used to populate the ATT&CK matrix. When you issue the download command, Mythic does a lookup to see if there are any MITRE ATT&CK associations for that command; if there are, Mythic creates entries for the "Tasks by ATT&CK" mappings. This is why you're able to see the exact command associated. As long as you're keeping with the old MITRE ATT&CK mappings, simply add your T# to the list like shown above, then run sudo ./mythic-cli payload start [agent name]. That'll restart the agent's container and trigger a re-sync of information.
Note that the current Mythic instance doesn't support the new sub-technique view or mapping scheme. Incorporating it is on the roadmap now that MITRE has finalized the structure.
How to install Mythic and agents in an offline environment
This guide will assume you can install Mythic on a box that has Internet access and then migrate to your offline testing/development environment.
1. Install Mythic following the normal installation process.
2. With Mythic running, install any other agents or profiles you might need/want.
3. Export your docker containers. Make sure you also save the tags.
4. Download donut from PyPI and download Apollo's other dependencies (Apollo installs these dynamically within the Docker container at build time, so pre-fetch them). This is Apollo-specific, so there might be others depending on your agent.
5. Tar the Mythic directory.
6. Push mythic_images.tar, mythic_tags, and mythic.tar.gz to your offline box.
7. Import the docker images and restore the tags.
8. Extract the Mythic directory.
9. Update Apollo's Dockerfile (at the time of use, it might not be 0.1.1 anymore; check #current-payloadtype-versions for the latest). This is Apollo-specific, so you might need to copy in pieces for other agents/c2 profiles depending on what components they dynamically try to install.
10. Start Mythic.
Normally, Mythic containers will try to re-build every time you bring them down and back up. This might not be great for an offline environment. The configuration variable REBUILD_ON_START can be set to false to tell Mythic that the containers should specifically NOT be rebuilt when restarted.
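The corresponding Mythic/.env change is a single line (the variable name comes from the text above; the rest of the file stays as generated):

```
REBUILD_ON_START=false
```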
Internal documentation docker container
Mythic provides an additional docker container that serves static information about the agents and c2 profiles. In the main Mythic/.env file you'll see which port to run this on via the "DOCUMENTATION_PORT" key. You don't need to worry about this port too much since Mythic uses an Nginx reverse proxy to transparently proxy connections back to this container based on the web requests you make.
The documentation stands up a Golang HTTP server based on Hugo (it's HTTP, not HTTPS). It reads all of the markdown files in the /Mythic/documentation-docker/ folder and creates a static website from them. For development purposes, if you make changes to the markdown files here, the website will automatically update in real time. From the Mythic UI, if you hit /docs/ you'll be directed automatically to the main documentation home page.
Pull the code from the official GitHub repository: https://github.com/its-a-feature/Mythic
This is made to work with docker and docker-compose, so they both need to be installed. If docker is not installed on your Ubuntu machine, you can use the ./install_docker_ubuntu.sh script to install it for you. If you're running on Debian, use the ./install_docker_debian.sh script instead.
You need to have Docker server version 20.10.22 or above (the latest version is 23.0.1) for Mythic and the docker containers to work properly. If you do sudo apt upgrade and sudo apt install docker-compose-plugin on a new version of Ubuntu or Debian, then you should be good. You can check your version with sudo docker version.
Mythic must be installed on Linux. While macOS supports Docker and Docker-Compose, macOS doesn't handle the shared host networking that Mythic relies on for C2 containers. You can still access the browser interface from any OS, but the Mythic instance must be installed on Linux.
It's recommended to run Mythic on a VM with at least 2 CPUs and 4GB of RAM.
All configuration is done via the mythic-cli binary. However, to help with GitHub sizes, the mythic-cli binary is no longer distributed with the main Mythic repository. Instead, you will need to make the binary via sudo make from the main Mythic folder. This will create the build container for mythic-cli, build the binary, and copy it into your main Mythic folder automatically. From there on, you can use the mythic-cli binary like normal.
Mythic configuration is all done via Mythic/.env, which means for your configuration you can either add/edit values there or add them to your environment.
Mythic/.env doesn't exist by default. You can either let Mythic create it for you when you run sudo ./mythic-cli start for the first time, or you can create it ahead of time with just the variables you want to configure.
If you need to run mythic-cli as root for Docker and you set your environment variables as a user, be sure to run sudo -E ./mythic-cli so that your environment variables are carried over into your sudo call. The following are the default values that Mythic will generate on first execution of sudo ./mythic-cli mythic start unless overridden:
A few important notes here:
MYTHIC_SERVER_PORT will be the port opened on the server where you're running Mythic. The NGINX_PORT is the one that's opened by Nginx and acts as a reverse proxy to all other services. The NGINX_PORT is the one you'll connect to for your web user interface and should be the only port you need to expose externally (unless you prefer to SSH port forward your web UI port).
The allowed_ip_blocks setting allows you to restrict access to the login page of Mythic. This should be set as a series of netblocks with NO host bits set, i.e. 127.0.0.0/16,192.168.10.0/24,10.0.0.0/8
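The "no host bits set" rule can be sanity-checked before editing .env. Python's ipaddress module parses networks strictly by default and rejects any CIDR block with host bits set, which mirrors the constraint described above:

```python
import ipaddress

# allowed_ip_blocks entries must be CIDR netblocks with NO host bits set.
# ipaddress.ip_network (strict by default) enforces exactly that rule.
blocks = "127.0.0.0/16,192.168.10.0/24,10.0.0.0/8"
nets = [ipaddress.ip_network(b) for b in blocks.split(",")]
print([str(n) for n in nets])  # ['127.0.0.0/16', '192.168.10.0/24', '10.0.0.0/8']

# A netblock with host bits set is rejected:
try:
    ipaddress.ip_network("192.168.10.5/24")
except ValueError as err:
    print("invalid:", err)
```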
*_BIND_LOCALHOST_ONLY - these settings determine whether the associated container binds its port to 127.0.0.1:port or 0.0.0.0:port. These are all set to true by default (except for the nginx container) so that you're not exposing these services externally.
If you want to have services (agent, c2 profile, etc) on a host other than where the Mythic server is running, then you need to make sure that RABBITMQ_BIND_LOCALHOST_ONLY and MYTHIC_SERVER_BIND_LOCALHOST_ONLY are both set to false so that your remote services can access Mythic.
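A minimal sketch of the corresponding Mythic/.env entries for that remote-services case (variable names from the text above):

```
RABBITMQ_BIND_LOCALHOST_ONLY=false
MYTHIC_SERVER_BIND_LOCALHOST_ONLY=false
```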
The above configuration does NOT affect the port or SSL information related to your agents or callback information. It's strictly for your operator web UI.
When the mythic_server container starts for the first time, it goes through an initialization step where it uses the password and username from Mythic/.env to create the mythic_admin_user user. Once the database exists, the mythic_server container no longer uses those values.
The mythic-cli binary is used to start/stop/configure/install components of Mythic. You can see the help menu at any time with mythic-cli -h, mythic-cli --help, or mythic-cli help.
By default, Mythic does not come with any Payload Types (agents) or C2 Profiles. This is for a variety of reasons, but one of the big ones being time/space requirements - all Payload Types and C2 Profiles have their own Docker containers, and as such, collectively they could eat up a lot of space on disk. Additionally, having them split out into separate repositories makes it much easier to keep them updated.
Available Mythic Agents can be found on GitHub at https://github.com/MythicAgents
Available Mythic C2 Profiles can be found on GitHub at https://github.com/MythicC2Profiles
To install a Payload Type or C2 Profile, use the mythic-cli binary with sudo ./mythic-cli install github <url> [branch] [-f]. If you have an agent already installed but want to update it, you can run the same command again. If you supply a -f at the end, Mythic will automatically overwrite the current version that's installed; otherwise you'll be prompted for each piece.
You won't be able to create any payloads within Mythic until you have at least one Agent and a matching C2 Profile installed
If you want to enable SIEM-based logging, install the basic_logger via sudo ./mythic-cli install github https://github.com/MythicC2Profiles/basic_logger. This profile listens to the emit_log RabbitMQ queue and allows you to configure how you want to save/modify the logs. By default they just go to stdout, but you can configure it to write out to files or even submit the events to your own SIEM.
If you came here right from the previous section, your Mythic instance should already be up and running. Check out the next section to confirm that's the case. If at any time you wish to stop Mythic, simply run sudo ./mythic-cli stop, and if you want to start it again, run sudo ./mythic-cli start. If Mythic is currently running and you need to make a change, you can run sudo ./mythic-cli restart without any issue; that command will automatically stop things and then restart them.
The default username is mythic_admin, but that user's password is randomly generated when Mythic is started for the first time. You can find this random value in the Mythic/.env file. Once Mythic has started at least once, this value is no longer needed, so you can edit or remove this entry from the Mythic/.env file.
Mythic starts with NO C2 Profiles or Agents pre-installed. Due to size issues and the growing number of agents, this isn't feasible. Instead, use the ./mythic-cli install github <url> [branch] [-f] command to install an agent from a GitHub (or GitLab) repository.
If something seems off, here's a few places to check:
Run sudo ./mythic-cli status to get a status update on all of the docker containers. They should all be up and running. If one has exited or has only been up for less than 30 seconds, that container might be your issue. All of the Mythic services also report back a health check, which can be useful to determine if a certain container is having issues. The status command gives a lot of information about which services are running, on which ports, and whether they're externally accessible.
To check the logs of any container, run sudo ./mythic-cli logs [container_name]. For example, to see the output of mythic_server, run sudo ./mythic-cli logs mythic_server. This will help track down if the last thing that happened was an error of some kind.
If all of that looks ok, but something still seems off, it's time to check the browser.
First open up the developer tools for your browser and see if there are any errors that might indicate what's wrong. If there's no error though, check the network tab to see if there are any 404 errors.
If that's not the case, make sure you've selected a current operation (more on this in the Quick Usage section). Mythic uses websockets that pull information about your current operation to provide data. If you're not currently in an active operation (indicated at the top of your screen in big letters), then Mythic cannot provide you any data.
Mythic starts every service (web server, database, each payload type, each C2 profile, rabbitmq, documentation) in its own Docker container. As much as possible, these containers leverage common image bases to reduce size, but due to the nature of so many components, there's going to be a decent footprint. For consideration, here's the Docker footprint for a fresh install of Mythic:
If you want to save space or if you know you're not going to be using a specific container, you can remove that container from docker-compose with sudo ./mythic-cli remove [container name].
Mythic uses docker containers to logically separate different components and functions. There are three main categories:
Mythic's main core. This consists of docker containers stood up with docker-compose:
mythic_server - A GoLang gin webserver instance
mythic_postgres - An instance of a postgresql database
mythic_rabbitmq - An instance of a rabbitmq container for message passing between containers
mythic_nginx - An instance of a reverse Nginx proxy
mythic_graphql - An instance of a Hasura GraphQL server
mythic_jupyter - An instance of a Jupyter notebook
mythic_documentation - An instance of a Hugo webserver for localized documentation
Installed Services
Any folder in Mythic/InstalledServices will be treated like a docker container (payload types, c2 profiles, webhooks, loggers, translation containers, etc.)
To stop a specific container, run sudo ./mythic-cli stop {c2_profile_name}.
If you want to reset all of the data in the database, use sudo ./mythic-cli database reset.
If you want to start/restart any specific payload type container, you can run sudo ./mythic-cli start {payload_type_name} and just that container will start/restart. If you want to start multiple, just put spaces between them: sudo ./mythic-cli start {container 1} {container 2}.
Mythic shares the networking with the host it's on. This allows Mythic to not worry about exposing specific ports ahead of time for each container, since they can be dynamically set by users. However, this does mean that Mythic needs to run as root if any ports under 1024 need to be used.
All of Mythic's containers share a single docker-compose file. When you install an agent or C2 Profile, this docker-compose file will automatically be updated. However, you can always add/remove entries from this file via mythic-cli, and list out what's registered in the docker-compose file versus what you have available on your system.
This makes it easy to track what's available to you and what you're currently using.
Mythic's architecture breaks out as follows:
Operators connect via a browser to the main Mythic server, a GoLang gin web server. This main Mythic server connects to a PostgreSQL database where information about the operations lives. Each of these is in its own docker container. When Mythic needs to talk to any payload type container or c2 profile container, it does so via RabbitMQ, which is in its own docker container as well.
When an agent calls back, it connects through these c2 profile containers which have the job of transforming whatever the c2 profile specific language/style is back into the normal RESTful API calls that the Mythic server needs.
There is currently a base container that contains Python 3.11, Mono, .Net Core 7.0, Golang 1.20, and the macOS 12.1 SDK. More containers will be added in the future, but having the base containers pre-configured on DockerHub speeds up install time for operators.
In the Mythic UI, you can click the hamburger icon (three horizontal lines) in the top left to see the current Server version and UI version.
There are two scenarios to updating Mythic: updates within the minor version (1.4.1 to 1.4.2) and updates to the minor (1.4.1 to 1.5) or major version (1.4 to 2.0).
This is when you're on version 1.2 for example and want to pull in new updates (but not a new minor version like 1.3 or 1.4). In this case, the database schema should not have changed.
1. Pull in the latest code for your version (if you're still on the current version, this should be as easy as a git pull).
2. Make a new mythic-cli binary with sudo make.
3. Restart Mythic to pull the latest code changes into the docker containers with sudo ./mythic-cli mythic start.
This is when you're on version 1.2 for example and want to upgrade to version 1.3 or 2.1 for example. In this case, the database schema has changed.
In order to upgrade, you'll end up losing all of the information in your database, so make sure you download any files, export any reports, or save any tasking you've done. This is irreversible! The entire database will be cleared and reset back to default.
1. Reset the database with sudo ./mythic-cli database reset.
2. Make sure Mythic is stopped: sudo ./mythic-cli stop.
3. Purge all of your containers: sudo docker system prune -a.
4. Pull in the version you want to upgrade to (if you're upgrading to the latest, it's as easy as git pull).
5. Delete your Mythic/.env file - this file contains all of the per-install generated environment variables. There might be new environment variables leveraged by the updated Mythic, so be sure to delete this file and a new one will be automatically generated for you.
6. Restart Mythic to pull the latest code changes into the docker containers with sudo ./mythic-cli start.
Agents and C2 Profiles are hosted in their own repositories, and as such, might have a different update schedule than the main Mythic repo itself. So, you might run into a scenario where you update Mythic, but now the current Agent/C2Profiles services are no longer supported.
You'll know if they're no longer supported because when the services check in, they'll report their current version number. Mythic has a range of supported version numbers for Agents, C2 Profiles, Translation services, and even scripting. If something checks in that isn't in the supported range, you'll get a warning notification in the UI about it.
To update these (assuming that the owner/maintainer of that Agent/C2 profile has already done the updates), simply stop the services (sudo ./mythic-cli stop agentname or sudo ./mythic-cli stop profileName) and run the install command again. The install command should automatically determine that a previous version exists, remove it, and copy in the new components. Then you just need to either start those individual services or restart Mythic overall.
By default, the server will bind to 0.0.0.0 on port 7443 with a self-signed certificate (unless otherwise configured). This IP is an alias, meaning that it will be listening on all IPv4 addresses on the machine. Browse to https://127.0.0.1:7443 if you're on the same machine that's running the server, or browse to any of the IPv4 addresses on the machine that's running the server.
Browse to the server with any modern web browser. You will be automatically redirected to the /login url. This url is protected by allowed_ip_blocks.
The default username is mythic_admin and the default password is randomized. The password is stored in Mythic/.env after first launch, but you can also view it with sudo ./mythic-cli config get MYTHIC_ADMIN_PASSWORD. You can opt to set this before you initially start if you want (or you can change it later through the UI) by setting that environment variable before starting Mythic for the first time.
Mythic uses JSON Web Tokens (JWT) for authentication. When you use the browser (vs the API on the command line), Mythic stores your access and refresh tokens in a cookie as well as in the local session storage. This should be seamless as long as you leave the server running; however, the history of the refresh tokens is saved in memory. So, if you authenticate in the browser, then restart the server, you’ll have to sign in again.
If you're using Chrome and a self-signed certificate that's default generated by Mythic, you will probably see a warning like this when you try to connect:
This is fine and expected since we're not using a LetsEncrypt or a proper domain certificate. To get around this, simply click somewhere within the window and type thisisunsafe. Your browser will now temporarily accept the cert and allow you through.
At some point in the future, your browser will decide to remind you that you're using a self-signed certificate. Mythic cannot actually read this error message due to Chrome's security policies. When this happens, simply refresh your page. You'll be brought back to the same big warning page as the image above, and you can type thisisunsafe again to continue your operations.
A cross-platform, post-exploit, red teaming framework designed to provide a collaborative and user friendly interface for operators.
Mythic is a multiplayer, command and control platform for red teaming operations. It is designed to facilitate a plug-n-play architecture where new agents, communication channels, and modifications can happen on the fly. Some of the Mythic project's main goals are to provide quality of life improvements to operators, improve maintainability of agents, enable customizations, and provide more robust data analytic capabilities to operations.
Fundamentally, Mythic uses a web-based front end (React) and Docker containers for the back-end. A GoLang server handles the bulk of the web requests via GraphQL APIs and WebSockets. This server then handles connections to the PostgreSQL database and communicates to the other Docker containers via RabbitMQ. This enables the individual components to be on separate physical computers or in different virtual machines if desired.
A helpful view of the current state of the C2 Profiles and Agents for Mythic can be found here:
A reverse Nginx proxy provides a single port to connect through to reach back-end services. Through this reverse proxy, operators can connect to:
React UI
Hugo documentation container (documentation on a per-agent and per-c2 profile basis)
Hasura GraphQL console (test GraphQL queries, explore/modify the database)
Jupyter Notebook (with Mythic Scripting pre-installed and pre-created examples)
Data modeling, tracking, and analysis are core aspects of Mythic's ability to provide quality of life improvements to operators. From the very beginning of creating a payload, Mythic tracks the specific command and control profile parameters used, the commands loaded into the payload and their versions, who created it, when, and why. All of this is used to provide a more coherent operational view when a new callback checks in. Simple questions such as “which payload triggered this callback”, "who issued this task", and even “why is there a new callback” now all have contextual data to give answers. From here, operators can start automatically tracking their footprints in the network for operational security (OpSec) concerns and to help with deconflictions. All agents and commands can track process creates, file writes, API calls, network connections, and more. These artifacts can be automatically recorded when the task is issued or even reported back by agents as they happen.
Mythic also incorporates MITRE ATT&CK mappings into the standard workflow. All commands can be tagged with ATT&CK techniques, which will propagate to the corresponding tasks issued as well. In a MITRE ATT&CK Matrix view (and ATT&CK Navigator view), operators can view coverage of possible commands as well as coverage of commands issued in an operation. Operators can also provide comments on each task as they operate to help make notes about why a task was important. Additionally, to make deconfliction and reporting easier, all commands, output, and comments can be globally searched through the main web interface.
Check out the code on GitHub: https://github.com/its-a-feature/Mythic
Join the #Mythic channel in the BloodHound public slack https://bloodhoundgang.herokuapp.com/
Reach out on Twitter: @its_a_feature_
Mythic is meant to be used by multiple operators working together to accomplish operations. That typically means there's a lead operator, multiple other operators, and potentially people that are just spectating. Let's see how operators come into play throughout Mythic.
Every user has their own password for authenticating to Mythic. On initial startup, one account is created with a password specified via the MYTHIC_ADMIN_PASSWORD environment variable or by creating a Mythic/.env file with a MYTHIC_ADMIN_PASSWORD=passwordhere entry. This account can then be used to provision other accounts (or they can be created via the ability). If the admin password isn't specified via the environment variable or via the Mythic/.env file, then a random password is used. This password is only used on initial setup to create the first user; after that, this value is no longer used.
Every user's password must be at least 12 characters long. If somebody tries to log in with an unknown account, all operations will get a notification about it. Similarly, if a user fails to log into their account 10 times in a row, the account will lock. The only account that will not lock out is the initial account that was created. Instead, that account will throttle authentication attempts to one per minute.
There are a few different kinds of operator permissions throughout Mythic.
Admin - This is a global setting that grants users the ability to see all operations, unlock all callbacks, and interact with everything in Mythic. The only account that has this initially is the first account created and only Admin accounts can grant other admin accounts this level of permission. Similarly, only admin accounts can remove admin permissions from other admin accounts.
Operation Admin - This is the lead of a specific operation. The operation admin can unlock anybody else's callback, can bypass any opsec check, and has full rights over that operation.
Operator - This is the normal permissions for a user. They can be added to operations by Admins or by the Operation Admin, and within an operation, they can bypass opsec checks for operators and only unlock the callbacks they locked.
Spectator - This account permission has no permissions to make any modifications within Mythic for their operation. They can still query and see all tasks/responses/artifacts/etc within an operation, but they cannot issue tasks, lock callbacks, create payloads, etc.
Browser Scripts allow users to script the output of agent commands. They are JavaScript functions that can return structured data to indicate for the React user interface to generate tables, buttons, and more.
Browser Scripts are located in the hamburger icon in the top left -> "Operations" -> BrowserScripts.
Every user has the default browser scripts automatically imported upon user creation based on which agents are installed.
Anybody can create their own browser scripts, and they'll be applied only to that operator. You can also deactivate your own script so that you don't have to delete it, but it will no longer be applied to your output. This deactivates it globally and takes effect when the task is toggled open/closed. For individual tasking, you can use the speed dial at the bottom of the task and select "Toggle Browserscript".
If you are an overall admin or the lead of this operation, then you can optionally Apply the script to the operation. This forces your script to apply to the output of everybody's command in the operation and will override anybody else's script that would otherwise be applied. If you are curious as to whether your script will be in effect or not, the In Effect column indicates this.
The bottom table shows all of the scripts that are in effect for the current operation and will allow you to view the contents of the scripts even if you don't have permission to modify them.
When you're creating a script, the function declaration will always be function(task, responses), where task is a JSON representation of the current task you're processing and responses is an array of the responses displayed to the user. Each element of responses is always a string. If you actually returned JSON data back, be sure to run JSON.parse on it to convert it back to a JSON dictionary. So, to access the first response value, you'd say responses[0].
You should always return a value. It's recommended that you do proper error checking and handling. You can check the status of the task by looking at the task variable and checking the status and completed attributes.
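Actual browser scripts are JavaScript, but the control flow described above can be sketched language-neutrally (Python here for illustration): check the task's completed and status fields first, then JSON.parse each string response before using it. The field names mirror the text above; the return shapes and error handling are illustrative, not Mythic's exact browser-script contract.

```python
import json

def render(task: dict, responses: list):
    """Illustrative analogue of a browser script's logic (real scripts are JS)."""
    # Always check task state before trying to parse output
    if not task.get("completed"):
        return {"plaintext": "task still running..."}
    if "error" in task.get("status", ""):
        return {"plaintext": "task errored; showing raw output"}
    try:
        # Each response is a string; parse it if the agent returned JSON
        data = json.loads(responses[0])
    except (IndexError, json.JSONDecodeError):
        return {"plaintext": "no structured output"}
    return {"table": data}

print(render({"completed": True, "status": "success"}, ['{"files": []}']))
```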
Even if a browser script is pushed out for a command, its output can be toggled on and off individually.
This is a quick primer on using Mythic for the first time
This section will quickly go from first connection to running a basic agent. This walkthrough assumes you have the apfell agent and the http c2 profile installed.
When you log in with the admin account, you'll automatically have your current operation set to the default operation. Your current operation is indicated in the top bar in big letters. When other operators sign in for the first time, they won't have an operation set to their current operation. You can always click on the operation name to get back to the operations management page (or click the hamburger icon on the left and select operations on the side).
You need a payload to use. Click the hazard icon at the top and then select "New Payload" on the top right of the new screen. You can also get here by selecting the hamburger icon on the top left and selecting "Create" -> "Create Payload".
You'll be prompted to select an operating system. This is used to filter down the possible payloads to generate. Next, select the payload type you want to build and fill out any necessary build parameters for the agent. Select any commands you want stamped into the payload initially. This view shows commands not yet selected on the left and commands already selected on the right. Some may be pre-selected for you based on the agent developer's choices (some are built in and can't be removed, some are suggested, etc.). If you hover over any of the commands, you can see descriptive information about them. You can potentially load commands in later, but for this walkthrough select all of them. Click Next.
For c2 profiles, toggle the HTTP profile. Change the Callback host parameter to be where you want the agent to connect to (if you're using redirectors, you specify that here); similarly, specify the Callback port for where you want the agent to connect.
The HTTP profile by default listens on port 80. If you want to connect to port 443 with SSL instead, you need to go to the C2 profile management page (click the headphones at the top) and adjust the configuration for the HTTP profile.
Provide a name for the agent (a default one is auto populated) and provide a description that will auto populate the description field for any callbacks created based on this payload. Click Next.
Once you click submit, you'll get a series of popups in the top giving feedback about the creation process. The blue notification popups will go away after a few seconds, but the green success or red error messages must be manually dismissed. This provides information about your newly created agent.
If the server within the HTTP profile container wasn't running when you created the payload (it's not by default), the Mythic server will automatically start it for you as part of this creation process.
Click the hazard icon on the top again to go to the created payloads page. This is where you'll be able to see all of the payloads created for the current operation. You can delete the payload, view the configuration, or download the payload. For this walkthrough, download the payload (green download icon).
Now move the payload over to your target system and execute it. The apfell.js payload can be run on macOS with osascript and the file name. Once you've done that, head to the Active Callbacks page from the top navigation bar via the phone icon.
This is where you'll be able to interact with any callback in the operation. Click the button for the row with your new agent to bring up information in the bottom pane where you can type out commands and issue them to the agent.
Operations are collections of operators, payloads, tasks, artifacts, callbacks, and files. While payload types and c2 profiles are shared across an entire Mythic instance, operations allow fine grained control over the visibility and access during an assessment.
Operation information can be found via the hamburger icon in the top left, then selecting "Operations" -> "Modify Operations" page. If you're a global Mythic admin, you'll see all operations here. Otherwise, you'll only see operations that are associated with your account. Only a global Mythic admin can create new operations.
Every operation has at least one member - the lead operator. Other operators can be assigned to the operation with varied levels of access.
operator is your normal user. lead is the lead of that operation. spectator can't do anything within Mythic - they essentially have Read-Only access across the entire operation. They can't create payloads, issue tasking, add comments, or send messages. They can search and view callbacks/tasking, but that's it.
For more fine-grained control than that listed above, you can also create block lists. These are named lists of commands that an operator is not allowed to execute for a specific payload type. These block lists are then tied to specific operators. This offers a middle-ground between normal operator with full access and a spectator with no access. You can edit these block lists via the yellow edit button.
For the configure button for the operation, there are many options. You can specify a Slack webhook along with the channel. By default, whenever you create a payload via the "Create Payloads" page, it is tagged as alert-able - any time a new callback is created based on that payload, this slack webhook will be invoked. If you want to prevent that for a specific payload, go to the payloads page, select the "Actions" dropdown for the payload in question, and select to stop alerting. If you have the Slack webhook set on the operation overall, other payloads will continue to generate alerts, but not the ones you manually disable. You can always enable this feature again in the same way.
For the operators edit button, you can edit who is assigned to the operation, what their roles are, and specify which (if any) block lists should be assigned to that user.
Because many aspects of an assessment are tied to a specific operation (payloads, callbacks, tasks, files, artifacts, etc), there are many things that will appear empty within the Mythic UI until you have an operation selected as your current operation. This lets the Mythic back-end know which data to fetch for you. If you don't have an operation as your active one, then you'll see no operation name listed on the top center of your screen. Go to the operations page and, if you're assigned to an operation that you can see, you can select to "Make Current". This process will require you to log out and log back in for the effect to take place and the new data to be fetched.
This section will highlight a few of the pieces of Mythic that operators are most likely to use on a daily basis.
- use JavaScript to transform your command output into tables, buttons, links, and more
- the main operational page for interacting with callbacks, also allows you to see graph/tree views of your callbacks
- view the uploads and downloads for the operation
- search commands, command parameters, and command output across the operation
- view/comment/edit/add credentials for your operation
- Expanded - this allows you to view callbacks full screen so that you have more operational screen space
- all of the screencaptures throughout the operation can be viewed and downloaded here
- view all of the events going on throughout an operation (new payloads, new callbacks, users signing in, etc) as well as a basic chat program to send messages to all operators in the operation
The base containers for the Mythic agents are located under the itsafeaturemythic DockerHub repository. Since Mythic now has all of the C2 Profiles and Payload Types split out into different GitHub organizations, you might need to update those projects as well.
Click Register New Script to create a new one. This is for one-off scripts you create. If you want to make it permanent across databases and for other operators, then you need to add the script to the corresponding Payload Type's container. More information about that process can be found in the corresponding documentation.
Comments are a single text description that can be added to any task in an operation. All members of the operation can see and modify the comment, but the last person that adds or modifies it will show up as the one that added it.
Comments can be found in many places throughout Mythic. On almost any page where you see a task and output, you'll be able to see task comments. These comments can be added by selecting the dropdown for the task status and selecting comment. When there is a comment, you can click the chat bubble icon to show/hide them.
Comments can be removed by either clicking the red trash icon or editing the comment to be a blank string "".
Comments are a nice way to highlight certain tasks and output as important for later use, but just like everything else, they can easily get lost in an operation. When searching across any primary object on the search page (tasks, files, credentials, etc), you can opt to search by comment as well.
Socks proxy capabilities are a way to tunnel other traffic through another protocol. Within Mythic, this means tunneling other proxy-aware traffic through your normal C2 traffic. Mythic specifically leverages a modified Socks5 protocol without authentication (it's going through your C2 traffic after all).
The Mythic server runs within a Docker container, and as such, you have to define which ports to expose externally. Mythic/.env has a special environment variable you can use to expose a range of ports at a time for this exact reason - MYTHIC_SERVER_DYNAMIC_PORTS="7000-7010". By default this uses ports 7000-7010, but you can change this to any range you want and then simply restart Mythic to make the changes take effect.
Click the main Search field at the top and click the "Socks" icon on the far right (or click the socks icon at the top bar).
When you issue a command to start a socks proxy with Mythic, you specify an action "start/stop" and a port number. The port number you specify is the one you access remotely and leverage with your external tooling (such as proxychains).
An operator issues a command to start socks on port 3333. This command goes to the associated payload type's container which does an RPC call to Mythic to open that port for Socks.
Mythic opens port 3333 in a go routine.
An operator configures proxychains to point to the Mythic server on port 3333.
An operator runs a tool through proxychains (ex: proxychains curl https://www.google.com).
Proxychains connects to Mythic on port 3333 and starts the Socks protocol negotiations.
The tool sends data through proxychains, and Mythic stores it in memory. In this temporary data, Mythic assigns each connection its own ID number.
The next time the agent checks in, Mythic takes this socks data and hands it off to the agent as part of the normal Action: get_tasking or post_response process.
The agent checks if it's seen that ID before. If it has, it looks up the appropriate TCP connection and sends off the data. If it hasn't, it parses the Socks data to see where to open the connection. It then sends the resulting data and the same random ID back to Mythic via Action: post_response.
Mythic gets the response, parses out the Socks-specific data, and sends it back to proxychains.
The above is a general scenario for how data is sent through for Socks. The Mythic server itself doesn't look at any of the data that's flowing - it simply tracks port to Callback mappings and shuttles data appropriately.
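Since the Mythic server only shuttles opaque bytes, the small piece of Socks5 logic lives in the agent. Below is a minimal, illustrative sketch (not Mythic's actual agent code; the function name and error handling are assumptions) of parsing the no-auth Socks5 CONNECT request described above to learn where to open a TCP connection:

```python
import socket
import struct

def parse_socks5_connect(data: bytes):
    """Parse a Socks5 CONNECT request (RFC 1928) the way an agent might after
    receiving tunneled Socks data from Mythic. Returns (host, port)."""
    ver, cmd, _rsv, atyp = data[0], data[1], data[2], data[3]
    if ver != 0x05 or cmd != 0x01:  # only CONNECT, no authentication
        raise ValueError("unsupported Socks request")
    if atyp == 0x01:                                  # IPv4 address
        host = socket.inet_ntoa(data[4:8])
        port = struct.unpack(">H", data[8:10])[0]
    elif atyp == 0x03:                                # domain name
        length = data[4]
        host = data[5:5 + length].decode()
        port = struct.unpack(">H", data[5 + length:7 + length])[0]
    else:
        raise ValueError("IPv6 not handled in this sketch")
    return host, port
```

After parsing, the agent opens the TCP connection itself and simply relays raw bytes between that socket and the Mythic-tracked connection ID.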
Your proxy connections are at the mercy of the latency of your C2 channel. If your checkin time is every 10s, then you'll get one message of traffic sent every 20s (round trip time). This breaks a LOT of protocols. Therefore, it's recommended that you change the sleep of your agent down to something very low (0 or as close to it).
Don't forget to change the sleep interval of your agent back to your normal intervals when you're done with Socks so that you reduce the burden on both the server and your agent.
Credentials can be found from the search page on the top navigation bar or by clicking the key icon at the top.
As part of command output, credentials can be registered automatically if the agent parses out the material. Otherwise, users can also manually register credentials. There are a few pieces of information required:
The type of credential - This is more for situational awareness right now, but in the future will help the flow of how to treat the credential before use.
Account - the account this credential applies to
Realm - the domain for the credential or a generic realm in case this is a credential for something else. If the account is a local account, the Domain is the name of the computer.
Credential - the actual credential
Comment - any comment you want to store about the credential
On this page you can also see the task that created credentials (which can be Manual Entry), who added in the credential, and when it was added.
Command parameters can hook into this for auto populating by selecting which data they're interested in and then further manipulating it if they want.
Tasks can register credentials with the server in their responses by following Credentials format.
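As a sketch of what that looks like on the wire, an agent's post_response can carry a credentials array alongside its normal output. The key names below follow the general shape of Mythic's documented Credentials format, but treat them as illustrative and check the format reference for the authoritative fields:

```python
# Hedged sketch: an agent response that registers a parsed credential with
# Mythic. Field names are illustrative of the documented Credentials format.
def build_credential_response(task_id: str) -> dict:
    return {
        "task_id": task_id,
        "user_output": "Dumped 1 credential",
        "credentials": [
            {
                "credential_type": "plaintext",  # the type of credential
                "account": "alice",              # the account it applies to
                "realm": "CONTOSO",              # domain, or computer name for local accounts
                "credential": "hunter2",         # the actual credential material
                "comment": "parsed from command output",
            }
        ],
    }
```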
Unified, Persistent File Browser
The file browser is a visual representation of the directory listings that agents perform. Not all agents support this feature, however.
From any callback dropdown in the "Active Callbacks" window, select "File Browser" and the view will be rendered in the lower-half of the screen. This information is a combination of the data across all of the callbacks, and is persistent.
The view is divided into two pieces - a graphical hierarchy on the left and a more detailed view of a folder on the right. The top layer on the left will be the hostname and everything below it will correspond to the file structure for that host.
You'll notice a green checkmark for the files folder. The green checkmark means that an agent reported back information for that folder specifically (i.e. somebody tasked an ls of that folder or issued a list command via the button on the table side). This is in contrast to the other folders in that tree - those folders are "implicitly" known because we have the full path returned for the folder we did access. If there is a red circle with an exclamation point, it means that you tried to perform an ls on the directory, but it failed.
On the right hand side, the table view has a few pieces along the top:
The text field is the path associated with the information below, with the corresponding hostname right above it. If you haven't received any information from any agent yet or you haven't clicked on a path, this will default to the current directory ".".
The first button is the list button. This looks at the far right hand side Callback number, finds the associated payload type, then looks for the command with is_file_browse set to true. It then issues that command with the host and path shown in the first two fields. If you want to list the contents of a directory that you can't see in the UI, just modify these two values and hit list.
The second button is the upload button. This will look for the is_upload field for the payload type associated with the identified Callback and execute that command. In most cases this will cause a popup dialog where you can upload your file.
The last field allows you to toggle viewing deleted files or not.
For each entry in the table menu on the right, there are some actions you can do by clicking the gear icon:
The file browser only shows some of the information that's returned. There are portions that are Operating System specific though - like UNIX permissions, extended attributes, or SDDLs. This information doesn't make sense to display in the main table, so clicking the View Permissions action will display a popup with more specific information.
The Download History button will display information about all the times that file has been downloaded. This is useful when you repeatedly download the same file over and over again (ex: downloading a user's Chrome Cookies file every day). If you've downloaded a file, there will be a green download icon next to the filename. This will always point to the latest version of the file, but you can use the download history option to view all other instances in an easy pane. This popup will also show the comments associated with the tasks that issued the download commands.
The other three are self explanatory - tasking to list a file/folder, download a file, or remove a file/folder. If a file is removed and reports back the removal to hook into the file browser, then the filename will have a small trash icon next to it and the name will have a strikethrough.
For any active callback, select the dropdown next to it and select "Expand Callback". This will open a new tab for that callback where you can actually view the tasking full screen with metadata on the side.
All uploads and downloads for an operation can be tracked via the clip icon or the search icon at the top.
This page simply shows all uploads and downloads tracked by Mythic and breaks them up by task.
From here, you can see who downloaded or uploaded a file, when it happened, and where it went to or came from. Clicking the download button will download the file to the user's machine.
If you want to download multiple files at once from the Downloads section, click the toggle for all the files you want and select the Zip & Download selected button at the top right to download them. You can also preview the first 512KB of a file either as a string or as a hex xxd-formatted view. This makes it easier to browse downloaded files without having to actually download them and open them up in a new tool.
Each file has additional information such as the SHA1 and MD5 of each file that can be viewed by clicking the blue info icon. If there's a comment on the task associated with the file upload or download, that comment will be visible here as well.
Mythic allows you to track types of tags as well as instances of tags. A tag type would be something like "contains credential" or "objective 1" - these take a name, a description, and a color to be displayed to the user. An instance of a tag would then include more detailed information such as the source of the information, the actual credential contained or maybe why that thing is tagged as "objective 1", and can even include a link for more information.
Tagging allows more logical grouping of various aspects of an operation. You can create a tag for "objective 1" then apply that tag to tasks, credentials, files, keylogs, etc. This information can then be used for easier deconflictions, attack path narratives, and even a way to signal information to other members of your assessment that something might be worth while to look at.
The tag icon at the top of the screen takes you to the tag management page where you can view/edit/create various types of tags and see how many times that tag is used in the current operation.
Tags are available throughout the various Mythic pages - anywhere you see the tag icon you can view/edit/add tags.
The main page to see and interact with active callbacks can be found from the phone icon at the top of the screen.
The top table has a list of current callbacks with a bunch of identifying information. All of the table headers can be clicked to sort the information in ascending or descending order.
Callback - The identifying callback number. The blue or red button will bring the bottom section into focus, load the previously issued tasks for that callback, and populate the bottom section with the appropriate information (discussed in the next section).
If the integrity_level of the callback is <= 2, then the callback button will be blue. Otherwise it'll be red (indicating high integrity) and there will be an * next to the username. It's up to the agent to report back its own integrity level.
Host - The hostname for the machine the callback is from
IP - The IP associated with the host
User - The current user context of the callback
PID - The process ID for the callback
OS (arch) - This is the OS and architecture information for the host
Initial Checkin - The time when the callback first checked in. This date is stored in UTC in the database, but converted to the operator's local time zone on the page.
Last Checkin - How long it's been since the last checkin in day:hour:minute:second time
Description - The current description of the callback. The default value for this is specified by the default description section when creating a payload. This can be changed via the callback's dropdown.
Next to the Interact button is a dropdown button that provides more accessible information:
Expand Callback - This opens up the callback in a separate window where you can either just view that whole callback full screen, or selectively add other callbacks to view in a split view
Edit Description - This allows you to edit the description of a callback. This will change the side description at the end and also rename the tab at the bottom when somebody clicks interact. To set this back to the default value, interact with the callback and type set description reset, or set it to an empty string.
Hide Callback - This removes the callback from the current view and sets it to inactive. Additionally, from the Search page, you can make the callback Active again, which will bring it back into view here.
Hide Multiple - allows you to hide multiple callbacks at once instead of doing one at a time.
Process Browser - This allows you to view a unified process listing from all agents related to this host, but issue new process listing requests from within this callback's context
Locked - If a callback is locked by a specific user, this will be indicated here (along with a changed user and lock icon instead of a keyboard on the interacting button).
File Browser - this allows you to view a unified file browser across all of the agents.
Task Multiple - this allows you to task multiple callbacks of the same Payload Type at once.
The bottom area is where you'll find the tasks, process listings, file browsers, and comments related to specific callbacks. Clicking the keyboard icon on a callback will open or select the corresponding tab in this area.
When you start typing a command, you can press Tab to finish out and cycle through the matching commands. If you don't type anything and hit Tab, then you'll cycle through all available commands. You can use the up and down arrow keys to cycle through the tasking history for that callback, and you can use ctrl+r to do a reverse grep search through your previous history as well.
Submitting a command goes through a few phases that are also color coded to help visually see the state of your task:
Preprocessing - This is when the command is submitted to Mythic, but execution is passed to the associated Payload Type's command file for processing. These capabilities are covered in more depth in the Payload Types section.
Submitted - The task has finished pre-processing and is ready for the agent to request it.
Processing - The agent has pulled down the task, but has not returned anything.
Processed - The agent has returned at least one response for the task, but hasn't explicitly marked the task as completed.
Completed - The agent has reported the task completed successfully.
Error - The agent reported that there was an error with executing the task.
Once you've submitted tasking, there's a bit of information that'll be automatically displayed.
The user that submitted the task
The task number - You can click on this task number to view just that task and its output in a separate page. This makes it easy to share the output of a task between members of an operation.
The command and any parameters supplied by the operator
The very bottom right hand of the screen has a little filter button that you can click to filter out what you see in your callbacks. The filtering only applies as long as you're on that callback page (i.e. it gets reset when you refresh the page).
Screenshots for an entire operation can be accessed via the camera icon in the top bar or the search page.
The screenshots display as they're coming in and will indicate how many chunks are left before you have the full image. At any point you can click on the image and view what's available so far.
The event feed is a live feed of all events happening within Mythic. This is where Mythic records messages for new callbacks, payload creations, users signing in/out, etc.
The event feed is located at the alarm bell icon in the top right.
The event feed is a running list of all that's going on within an operation. If Mythic has an error, these will be recorded in the event log with a red background and a button to allow "resolution" of the problem. If you resolve the problem, then the background will change to green. You can also delete messages as needed:
Here we can see that the operator selects the different payload options they desire in the web user interface and clicks submit. That information goes to Mythic, which looks up all the database objects corresponding to the user's selection. Mythic then registers a payload in a building state. Mythic sends all this information to the corresponding Payload Type container to build an agent to meet the desired specifications. The corresponding build command parses these parameters, stamps in any required user parameters (such as callback host, port, jitter, etc) and uses any user supplied build parameters (such as exe/dll/raw) to build the agent.
In the build process, there's a lot of room for customization. Since it's all async through rabbitMQ, you are free to stamp code together, spin off subprocesses (like mono or go) to build your agent, or even make web requests to CI/CD pipelines to build the agent for you. Eventually, this process either returns an agent or some sort of error. That final result gets sent back to Mythic via rabbitMQ, which then updates the database and user interface to allow an operator to download their payload.
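The build step described above can be sketched as a small function. This is purely illustrative: the function name, parameters, and return shape are assumptions for this example and not Mythic's actual builder API, and echo stands in for whatever compiler or pipeline the container really invokes.

```python
import subprocess

# Hedged sketch of a Payload Type container's build step: stamp in user
# parameters, invoke a toolchain, and report success or error back to Mythic.
def build_agent(callback_host: str, callback_port: int,
                output_format: str = "exe") -> dict:
    # Stamp user-supplied C2 parameters into the agent configuration.
    config = f'HOST="{callback_host}"; PORT={callback_port}'
    # The container can shell out to any toolchain (go, mono, csc, ...) or
    # even call a remote CI/CD pipeline; echo stands in for a real compiler.
    result = subprocess.run(
        ["echo", f"built {output_format} agent with {config}"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return {"status": "error", "message": result.stderr}
    return {"status": "success", "payload": result.stdout.encode()}
```

Because the exchange happens over rabbitMQ, the build can take as long as it needs without blocking the Mythic server itself.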
API tokens are special JSON web tokens (JWTs) that Mythic can create per-user that don't expire automatically. This allows you to do long-term scripting capabilities without having to periodically check if your current access-token is expired, going through the refresh process, and then continuing along with whatever you were doing.
They're located in your settings page (click your name in the top right and click settings).
When making a request with an API token, set the Header of apitoken with a value of your API token. This is in contrast to normal JWT usage where the header is Authorization and the value is Bearer: <token here>.
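A quick sketch of building such a request from Python. The endpoint path below is illustrative only (check your Mythic version's API documentation for real routes); the point is the apitoken header:

```python
import urllib.request

# Hedged sketch: attach a Mythic API token via the apitoken header rather
# than the usual Authorization: Bearer <token> scheme.
def build_api_request(server: str, token: str) -> urllib.request.Request:
    return urllib.request.Request(
        url=f"{server}/api/v1.4/callbacks/",  # illustrative endpoint
        headers={"apitoken": token},
    )
```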
This page describes how messages flow within Mythic. The following subpages have Mermaid sequence diagrams explaining how messages flow amongst the various microservices for Mythic when doing things like creating payloads, issuing tasks, and transferring files.
Here we can see an agent send a message to Mythic. The C2 Profile container is simply a fancy redirector that knows how to pull the message off the wire; it doesn't do anything other than that. From there, Mythic starts processing the message. It pulls out the UUID so it can determine which agent/callback we're talking about. This is where a decision point happens:
If the Payload Type associated with the payload/callback for the UUID of the message has a translation container associated with it, then Mythic will send the message there. It's here that the rest of the message is converted from the agent's special sauce C2 format into the standard JSON that Mythic expects. Additionally, if the Payload Type handles encryption for itself, then this is where that happens.
If there is no translation container associated with the payload/callback for the UUID in the message, then Mythic moves on to the next step and starts processing the message.
Mythic then processes the message according to the "action" listed.
Mythic then potentially goes back to the translation container to convert the response message back to the agent's custom C2 spec before finally returning everything back through the C2 Profile docker container.
What happens when you want to transfer a file from Mythic -> Agent? There are two different options: tracking a file via a UUID and pulling down chunks, or just sending the file as part of your tasking.
This is an example of an operator uploading a file, which gets processed at the Payload Type's create_tasking function where it tracks and registers the file within Mythic. Now the tasking has a UUID for the file rather than the file contents itself. This allows Mythic and the Agent to uniquely reference a file. The agent gets tasking, sees the file id, and submits more requests to fetch the file. Upon finally getting the full file, it resolves the relative upload path into an absolute path and sends an update back to Mythic to let it know that the file the operator said to upload to ./test is actually at /abs/path/to/test on the target host.
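That final path-resolution step is tiny but important for keeping Mythic's file tracking accurate. A minimal, illustrative helper (the function name is an assumption, not agent code):

```python
import os

# Sketch: resolve the operator's relative upload path against the agent's
# current working directory before reporting the absolute path to Mythic.
def resolve_upload_path(relative_path: str, cwd: str) -> str:
    return os.path.normpath(os.path.join(cwd, relative_path))
```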
Conversely, you can opt to not track the file (or track the file within Mythic, but not send the UUID down to the agent). In this case, you can't easily reference the same instance of the file between the Agent and Mythic:
You're able to upload and transfer the file just fine, but when it comes to reporting back information on it, Mythic and the Agent can't agree on the same file, so it doesn't get updated. You might be thinking that this is silly, of course the two know what the file is, it was just uploaded. Consider the case of files being deleted or multiple instances of a file being uploaded.
When you're downloading a file from the Agent to Mythic (such as a file on disk, a screenshot, or some large piece of memory that you want to track as a file), you have to indicate in some way that this data is specific to a file and not destined to be part of the information displayed to the user. The way this works is pretty much the inverse of what happens for uploads. Specifically, an agent has a file it wants to transfer, so it tells Mythic "I have data, I'm going to send it as X chunks of size Y, can you give me a UUID so we can track this?". Mythic tracks the data and gives back a UUID. Now the agent sends each chunk individually and Mythic can track it. This allows a single task to be able to send back multiple files concurrently or sequentially and still track it all.
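The chunk negotiation above can be sketched in a few lines. The message key names below follow the general shape of Mythic's download flow but should be treated as illustrative placeholders:

```python
# Hedged sketch of the download handshake: register total_chunks first,
# then send each chunk individually once Mythic hands back a file UUID.
def plan_download(data: bytes, chunk_size: int) -> dict:
    total_chunks = -(-len(data) // chunk_size)  # ceiling division
    # Step 1: the agent announces how many chunks are coming.
    register = {"total_chunks": total_chunks, "full_path": "C:\\secrets.txt"}
    # Step 2: each chunk is sent on its own, tagged with the returned UUID.
    chunks = [
        {"chunk_num": i + 1,
         "chunk_data": data[i * chunk_size:(i + 1) * chunk_size]}
        for i in range(total_chunks)
    ]
    return {"register": register, "chunks": chunks}
```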
Commands keep track of a wealth of information such as name, description, help information, if it needs admin permissions, the current version, any parameters, artifacts, MITRE ATT&CK mappings, which payload type the command corresponds to, who created or last edited the command, and when. That is a lot of information, so let's break it down a bit.
All PayloadTypes get 2 commands for free - clear and help. The reason these two commands are 'free' is because they don't actually make it down to the agent itself. Instead, they cause actions to be taken on the Mythic server.
The clear command does just that - it clears tasks that are sitting in the queue waiting for an agent to pick them up. It can only clear tasks that are in the submitted stage, not ones that have already reached the processing stage, because that means an agent has already requested them.
clear - entering the command just like this will clear all of the tasks in that callback that are in the appropriate stages.
clear all - entering the command just like this will clear all tasks you've entered on that callback that are in the appropriate stages.
clear # - entering the command just like this will attempt to clear the task indicated by the number after clear.
If a command is successfully cleared by this command before an agent can get to it, then that task will get an automated response stating that it was cleared and which operator cleared it. The clear task itself will get back a list of all the tasks it cleared.
The help command allows users to get lists of commands that are currently loaded into the agent. Just help gives basic descriptions, but help [command] gives users more detailed command information. These commands look at the loaded commands for a callback and at the backing Python files for each command to give information about usage, command parameters, and elevation requirements.
More information on P2P message communication can be found here.
1. P2P agents do their "C2 Comms" (which in this case is reaching out to agent1 over the decided TCP port) and start their Checkin/Encrypted Key Exchange/etc.
2. When Agent1 gets a connection from Agent2, there's a lot of unknowns. Agent2 could be a new payload that isn't registered as a callback in Mythic yet. It could be an already existing callback that you're re-linking to or linking to for the first time from this callback. Either way, Agent1 doesn't know anything about Agent2 other than it connected to the right port, so it generates a temporary UUID to refer to that connection and waits for a message from Agent 2 (the first message sent through should always be from Agent2->Agent1 with a checkin message). Agent1 sends this information out with its next message as a "Delegate" Message of the form:
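A reconstructed sketch of that wrapper, based on the field descriptions in this section (all values are placeholders):

```python
# Shape of an Agent1 message carrying Agent2's checkin as a delegate.
# The blob, temp UUID, and profile name are placeholder values.
agent1_message = {
    "action": "get_tasking",   # delegates can accompany any action
    "tasking_size": 1,
    "delegates": [
        {
            "message": "<Agent2's checkin message, opaque to Agent1>",
            "uuid": "tempUUID-1234",  # Agent1's temporary ID for the connection
            "c2_profile": "tcp",      # the P2P profile the two agents share
        }
    ],
}
```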
This "delegates" key sits at the same level as the "action" key and can go with any message (upload, checkin, post_response, etc). The "message" field is the checkin message from Agent2, the "uuid" field is the tempUUID that agent1 generated, and the "c2_profile" is the name of the C2 profile that the two agents are using to connect.
3. When Mythic parses this delegate message, it can automatically assume that there's a connection between Agent1 and Agent2 because Agent1's message has a delegate from Agent2.
4. When Mythic is done processing Agent2's checkin message, it takes that result and adds it as a "delegate" message back for Agent1's message.
5. When Agent1 gets its message back, it sees that there is a delegate message. That message is of the format:
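A reconstructed sketch of that response shape, matching the field descriptions in this section (placeholder values):

```python
# Shape of Mythic's delegate response routed back through Agent1.
# mythic_uuid tells Agent1 the real UUID behind its temporary ID.
mythic_response = {
    "action": "get_tasking",
    "tasks": [],
    "delegates": [
        {
            "message": "<Mythic's response destined for Agent2>",
            "uuid": "tempUUID-1234",  # echoed so Agent1 can match the connection
            "mythic_uuid": "<Agent2's real callback UUID>",
        }
    ],
}
```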
6. You can see that the response format is a little different. We don't need to echo back the C2 profile because the agent already knows that information. The "message" field is the Mythic response that goes back to Agent2. The "uuid" field is the same tempUUID that the agent sent in the message to Mythic. The "mythic_uuid" field is Mythic indicating back to Agent1 that it doesn't know what tempUUID is, but that the agent that sent that message actually has this UUID. That allows Agent1 to update its records. The main reason this is important is the case where the connection between Agent1 and Agent2 goes away. Agent1 has to have some way of indicating to Mythic that Agent2 is no longer talking to it. Mythic only knows Agent2 by its UUID, so if Agent1 tried to report that it could no longer talk to tempUUID, Mythic would have no idea who that is.
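The response delegate from Mythic back to Agent1 might look like the following sketch. The "message", "uuid", and "mythic_uuid" keys come from the description above; the outer "action" and placeholder values are illustrative:

```json
{
  "action": "get_tasking",
  "tasks": [],
  "delegates": [
    {
      "message": "<Mythic's response for Agent2>",
      "uuid": "<the same tempUUID Agent1 sent>",
      "mythic_uuid": "<Agent2's registered callback UUID>"
    }
  ]
}
```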
There's a lot of moving pieces within Mythic and its agents, so it's helpful to take a step back and see how messages are flowing between the different components.
Here we can see an operator issue tasking to the Mythic server. The Mythic server registers the task as "preprocessing" and informs the operator that it got the task. Mythic then sends the task off to the corresponding Payload Type container for processing. The container looks up the corresponding command's python file, parses the arguments, validates the arguments, and passes the resulting parameters to the create_tasking function. This function can leverage a bunch of RPC functionality going back to Mythic to register files, send output, etc. When it's done, it sends the final parameters back to Mythic, which updates the task to either Submitted or Error. Now that the task is out of the preprocessing state, an agent can receive it the next time it checks in.
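As a sketch of that hand-off point, a command's create_tasking function might look like the following. The real function in a Payload Type container works with classes from the mythic_container package; a minimal stand-in task class is defined here so the example is self-contained:

```python
import asyncio

# Stand-in for the real task object a Payload Type container receives;
# the actual class comes from the mythic_container package.
class MythicTask:
    def __init__(self, args: dict):
        self.args = args
        self.display_params = ""

async def create_tasking(task: MythicTask) -> MythicTask:
    # Arguments were already parsed and validated by the command's
    # argument class; here we finalize what gets sent to the agent.
    task.display_params = f"-path {task.args['path']}"
    return task

task = asyncio.run(create_tasking(MythicTask({"path": "/tmp"})))
print(task.display_params)  # -path /tmp
```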
MITRE ATT&CK is a great way to track what both offense and defense are doing in the information security realm. To help Mythic operators keep track, each command can be tagged with its corresponding MITRE ATT&CK information:
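For example, a command file in a Payload Type container typically lists technique IDs on the command class itself. The stand-in base class below is illustrative; check your agent's command files for the exact shape:

```python
# Stand-in for the real CommandBase from the mythic_container package.
class CommandBase:
    attackmapping: list = []

class ScreenshotCommand(CommandBase):
    cmd = "screenshot"
    description = "Capture the contents of the screen"
    # MITRE ATT&CK technique IDs; this is what populates the
    # "Commands by ATT&CK" coverage view.
    attackmapping = ["T1113"]
```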
There can be as many or as few mappings as desired for each command. This information is used in two different ways, both located under the MITRE ATT&CK button at the top.
The "Fetch All Commands Mapped to MITRE" button uses this information to populate the matrix with everything that's possible across all of the payload types and commands registered within Mythic. This gives a coverage map of what could be done. Clicking each matrix cell gives a breakdown of which commands from which payload types achieve that objective:
The "Fetch All Issued Tasks Mapped to MITRE" only shows this information for commands that have already been executed in the current operation. This shows what's been done, rather than what's possible. Clicking on a cell with this information loaded gives the exact task and command arguments that occurred with that task:
The database schema describes the current state of Mythic within the mythic_postgres Docker container. Mythic tracks everything in the Postgres database so that if an operator needs to close their browser, or the server where Mythic runs reboots, nothing is lost. Because all data is tracked in the database and simply streamed to the operator's interface, all operators stay in sync about the state of the operation - each operator doesn't have to browse all of the file shares themselves to see what's going on, and you don't have to grep through a plethora of log files to find that one task you ran that one time.
The database lives in the postgres-docker folder and is mapped into the mythic_postgres container as a volume. This means that if you need to move Mythic to a new server, simply stop Mythic with ./mythic-cli stop, copy the Mythic folder to its new home, and start everything back up again with ./mythic-cli start.
On the first start of Mythic, the database schema is loaded from a schema file located in mythic-docker.
Since the database schema is the source of truth for all of Mythic, mythic scripting, and all of the operator's interfaces, it needs to be easily accessible in a wide range of cases.
The mythic_server container connects directly to the mythic_postgres container to stay in sync and quickly react to agent messages. The mythic_graphql container (Hasura) also connects directly to the database and provides a GraphQL interface to the underlying data. This GraphQL interface is what both the React UI and Mythic scripting use, and it provides a role-based access control (RBAC) layer on top of the database.
How do you, as an operator or developer, find out more about the database schema? The easiest way is to click the hamburger icon in the top left of Mythic, select "Services", and then select the "GraphQL Console". This drops you into the Hasura login screen; the randomly generated password for Hasura can be found in your Mythic/.env file.
From here, the API tab, shown below, provides an easy way to dynamically explore the various queries, subscriptions, and modifications you can make to the database right here or via scripting.
Since Mythic Scripting simply uses this GraphQL interface as well, anything you put in that center body pane can be submitted as a POST request to Mythic's graphql endpoint (shown above) to achieve the same result. The majority of the functions within Mythic Scripting are simply ease-of-use wrappers around these same queries.
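Since the scripting wrappers are just GraphQL POSTs, you can build the same request yourself. A minimal sketch follows; the host, endpoint path, and header shape are assumptions for a typical Mythic/Hasura setup, and the query is illustrative:

```python
import json

def build_graphql_request(query: str, variables: dict, apitoken: str) -> dict:
    """Assemble the pieces of a POST to Mythic's graphql endpoint."""
    return {
        "url": "https://mythic.example.com/graphql/",  # hypothetical host
        "headers": {
            "Authorization": f"Bearer {apitoken}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"query": query, "variables": variables}),
    }

req = build_graphql_request(
    "query Callbacks { callback { id host user } }", {}, "<api token>")
# send with any HTTP client, e.g. requests.post(req["url"], ...)
```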
If you want to have even more fun exploring how the GraphQL interface manipulates the database schema, you can check out the built-in Jupyter Notebook and test out your modifications there as well. As shown in the two screenshots below, you can create scripts to interact with the GraphQL endpoints to return only the data you want.
This scripting, combined with the Hasura GraphQL console allows operators to very easily get direct access and real-time updates to the database without having to know any specific SQL syntax or worry about accidentally making a schema change.
Payload types are the different kinds of agents that can be created and used with Mythic.
Payload type information is located in the C2 Profiles and Payload Types page by clicking the headphone icon in the top of the page.
From this initial high-level view, a few important pieces of information are shown:
Container status indicates if the backing container is online or offline based on certain RabbitMQ Queues existing or not. This status is checked every 5 seconds or so.
The name of the payload type which must be unique
Which operating systems the agent supports
To modify the Payload Type itself, you need to modify the corresponding class in the Payload Type's docker container. This class will extend the PayloadType class.
The documentation container contains detailed information about the commands, OPSEC considerations, supported C2 profiles, and more for each payload type when you install it. From the Payload Types page, you can click the blue document icon to automatically open up the local documentation website to that agent.
Every command is different – some take no parameters, some take arrays, strings, integers, or a number of other things. To help accommodate this, you can add parameters to commands so that operators know what they need to be providing. You must give each parameter a name and they must be unique within that command. You can also indicate if the parameter is required or not.
You can then specify what type of parameter it is from the following:
String
CredentialJson
Number
Array
Boolean
Choose One
Choose Many
File
PayloadList
ConnectionInfo
LinkInfo
If you select "PayloadList", the user is given a dropdown list of already created (but not deleted) payloads to choose from as a template. The associated tasking can then take this to fetch the payload contents, send down the payload's UUID, or use it to generate a new payload based on the one selected.
If you select "ConnectionInfo", the user is able to select and associate payloads/callbacks with hosts. This also allows the user to select the payload's p2p information for linking agents. The main example here is if you've generated a payload as part of lateral movement, priv esc, or persistence tasking and now want to connect to the payload via p2p (or if you lost connection and are reconnecting to the callback).
If you select "LinkInfo", the user gets a list of already established p2p "links" and can select ones to re-establish or to remove.
Lastly, if you choose File, then the user will be allowed to upload a file via the GUI when you type out the command. By default, the file UUID is sent as the parameter so that you're ready to do file transfers via chunking.
If a command takes named parameters, but none are supplied on the command line, a GUI modal will pop up to assist the operator.
There is no absolute requirement that the input parameters be in JSON format, it's just recommended.
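As a sketch, declaring parameters in a command's python file might look like the following. The real CommandParameter and ParameterType come from the mythic_container package; simplified stand-ins are used here so the example is self-contained:

```python
from enum import Enum

# Stand-ins for the real types in the mythic_container package.
class ParameterType(Enum):
    String = "String"
    File = "File"

class CommandParameter:
    def __init__(self, name: str, type: ParameterType,
                 description: str = "", required: bool = True):
        self.name = name          # must be unique within the command
        self.type = type
        self.description = description
        self.required = required

upload_args = [
    CommandParameter(name="remote_path", type=ParameterType.String,
                     description="Path to write the file on target"),
    CommandParameter(name="file", type=ParameterType.File,
                     description="File to upload; its UUID is sent by default"),
]
# every parameter name is unique, as required
assert len({p.name for p in upload_args}) == len(upload_args)
```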
In order to modify the command or any of its components, you need to modify the corresponding python class in the Payload Type container.
All of a payload type's parameters are configurable from within the Payload Type's docker container. Edit the corresponding information in the class that extends the PayloadType class.
There are a few interesting pieces to call out here:
"Is this payload a wrapper for another payload"
A "Payload Wrapper" is a special form of payload type that simply acts as a wrapper for another payload type. An easy example of this is msbuild or macros from the Windows environment. These are payloads you might drop onto a system, but they aren't the real payload you're trying to execute. They're just wrappers for the actual end payload. That's the same goal here.
Does this payload support dynamic loading - This is where you specify whether your payload allows you to load new modules into it or not. If this is false, then when creating a payload you will not be able to choose which commands you want stamped into it - they'll ALL always be stamped in. If this is true, then you can freely choose which commands you want stamped in at creation time and load in new commands later.
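A sketch of those attributes on a payload type class follows. The attribute names are illustrative stand-ins for the fields in the real PayloadType base class from the mythic_container package:

```python
class PayloadType:
    """Stand-in for the real base class in the Payload Type container."""

class MyAgent(PayloadType):
    name = "my_agent"                # must be unique across payload types
    supported_os = ["Windows", "Linux"]
    wrapper = False                  # True for wrappers like msbuild/macros
    supports_dynamic_loading = True  # pick commands at build, load more later
```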
Command and Control (C2) profiles are the way an agent actually communicates with Mythic to get tasking and post responses. There are two main pieces for every C2 profile:
Server code - code that runs in a docker container to convert the C2 profile communication specification (twitter, slack, dropbox, websocket, etc) into the corresponding RESTful endpoints that Mythic uses
Agent code - the code that runs in a callback to implement the C2 profile on the target machine.
C2 profiles can be found by going to Payload Types and C2 Profiles (headphone icon) from the top navigational bar.
Each C2 profile is in its own docker container, the status of which is indicated on the C2 Profiles page.
Each docker container has a python or golang service running in it that connects to a RabbitMQ message broker to receive tasking. This allows Mythic to modify files, execute programs, and more within other docker containers.
The documentation container contains detailed information about the OPSEC considerations, traffic flow, and more for each container when you install the c2 profile. From the C2 Profiles page, you can click the blue document icon to automatically open up the local documentation website to that profile.
All installed docker containers are located at Mythic/InstalledServices/, each with their own folder. The currently running ones can be checked with sudo ./mythic-cli status. Check A note about containers for more information about them.
Containers allow Mythic to have each Payload Type establish its own operating environment for payload creation without causing conflicting or unnecessary requirements on the host system.
Payload Type containers only come into play for a few special scenarios:
Payload Creation
Tasking
Processing Responses
For more information on editing or creating new containers for payload types, see Payload Type Development.
C2 Profiles can optionally provide some operational security checks before allowing a payload to be created. For example, you might want to prevent operators from using a known-bad named pipe name, or you might want to prevent them from using infrastructure that you know is burned.
These checks all happen within a single function per C2 profile, called opsec:
From the code snippet above, you can see that this function gets in a request with all of the parameter values for that C2 Profile that the user provided. You can then either return success or error with a message as to why it passed or why it failed. If you return the error case, then the payload won't be built.
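A simplified sketch of such a check follows. The real opsec function in a C2 container receives a request object from the mythic_container package and returns a typed response; plain dicts stand in for both here, and the pipe name is a hypothetical example:

```python
def opsec(parameters: dict) -> dict:
    """Inspect user-supplied C2 parameter values before a payload is built."""
    # Example policy: block a named pipe name known to be flagged (the
    # specific name here is hypothetical).
    if parameters.get("pipename") == "msagent_flagged":
        return {"status": "error",
                "message": "pipename is on the known-bad list; pick another"}
    return {"status": "success", "message": "OPSEC checks passed"}

print(opsec({"pipename": "totally_legit_pipe"})["status"])  # success
```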
C2 servers know the most about their own configuration. You can pass in an agent's configuration and check it against the server's configuration to make sure everything matches up. You can also pass in an agent's configuration and get information about how to generate potential redirector rules so that only your agent's traffic makes it through.
There are two kinds of C2 profiles - egress profiles that talk directly out of the target network or peer-to-peer (p2p) profiles that talk to neighboring agents.
The default HTTP and the dynamicHTTP profiles are both examples of egress profiles. They talk directly out of the target network. Egress profiles have associated Docker containers that allow you to do the translation between your special sauce c2 profile and the normal RESTful web requests to the main Mythic server. More information on how this works and how to create your own can be found here: C2 Related Development.
Peer-to-peer profiles in general are a bit different. They don't talk directly out to the internet; instead, they allow agents to talk to each other.
This distinction between P2P and Egress for Mythic is made by a simple boolean indicating the purpose of the c2 container.
P2P profiles announce their connections to Mythic via P2P Connections. When Mythic gets these messages, it can start mapping out what the internal mesh looks like. To help view this from an operator perspective, there are additional views on the main Callbacks page.
This view uses a D3 directed graph to illustrate the connections between the agents. There's a central "Mythic Server" node that all egress agents connect to. When a route is announced, the view is updated to move one of the callbacks to be a child of another callback.
The "HTTP" C2 profile speaks the exact same protocol as the Mythic server itself. All other C2 profiles will translate between their own special sauce back to this format. This profile has a docker container as well that you can start that uses a simple JSON configuration to redirect traffic on another port (with potentially different SSL configurations) to the main Mythic server.
This container code starts a small Golang gin web server that accepts messages on the specified port and proxies all connections to the /agent_message endpoint within Mythic. This allows you to host the Mythic instance on port 7443, for example, and expose the default HTTP profile on port 443 or 80.
Clicking the "Configure" button gives a few options for how to edit and interact with the profile.
If you want to use SSL with this profile, set use_ssl to true in the configuration and the C2 profile will automatically generate self-signed certificates. If you want to use your own certificates, you can upload them through the UI by clicking the "Manage Files" button next to the http profile and uploading your files. Then simply update the configuration with the names of the files you uploaded.
This section allows you to see some information about the C2 profile, including sample configurations.
The name of a C2 profile cannot be changed once it's created, but everything else can. The Supported Payloads section shows which payload types can speak the language of this C2 profile.
This dialog displays the current parameters associated with the C2 profile. These are the values you must supply when using the C2 profile to create an agent.
There are a few things to note here:
randomize - This specifies if you want to randomize the value that's auto-populated for the user.
format_string - This is where you specify how to generate the string in the hint when creating a payload. For example, setting randomize to true and a format_string of \d{10} will generate a random 10-digit integer. This can be seen with the same test parameter in the above screenshot.
Every time you view the parameters, select to save an instance of the parameters, or go to create a new payload, another random instance from this format_string will be auto-populated into that c2 profile parameter's hint field.
An operator can provide non-default but specific values for all of the fields of a C2 profile and save them off as an instance. These instances can then be used to auto-populate all of the C2 profile's values when creating a payload so that you don't have to manually type them each time.
This is a nice time saver when creating multiple payloads throughout an operation. It's likely that in an operation you will have multiple different domains, domain fronts, and other external infrastructure. It's more convenient and less error prone to provide the specifics for that information once and save it off than requiring operators to type in that information each time when creating payloads.
The Save Parameters button is located next to each C2 profile, reached by clicking the "headphones" icon at the top of the screen.
To create a new named instance, select the Save Instance button to the right of a C2 profile and fill out any parameters you want to change. The name at the top must be unique, though.
This page describes how the HTTP profile works and configuration options
This profile uses HTTP Get and Post messages to communicate with the main Mythic server. Unlike the default HTTP though, this profile allows a lot of customization from both the client and server. There are two pieces to this as with most C2 profiles - Server side code and Agent side code. In general, the flow looks like:
From the UI perspective, there are only two parameters - a JSON configuration and the auto-populated operation specific AES key. For the JSON configuration, there is a generic structure:
There are some important pieces to notice here:
GET - this block defines what "GET" messages look like. This section is used for requesting tasking.
ServerBody - this block defines what the body of the server's response will look like
ServerHeaders - this block defines what the server's headers will look like
ServerCookies - this block defines what the server's cookies will look like
AgentMessage - this block defines the different forms of agent messages for doing GET requests. This defines what query parameters are used, what cookies, headers, URLs, etc. The format here is generally the same as for the "POST" messages and is described in the next section.
POST - this block defines what "POST" messages look like
ServerBody - this block defines what the body of the server's response will look like
ServerHeaders - this block defines what the server's headers will look like
ServerCookies - this block defines what the server's cookies will look like
AgentMessage - this block defines the different forms of agent messages for doing POST requests. This defines what query parameters are used, what cookies, headers, URLs, etc.
jitter - this is the jitter percentage for callbacks
interval - this is the interval in seconds between callbacks
chunk_size - this is the chunk size used for uploading/downloading files
key_exchange - this specifies whether the agent does an encrypted key exchange or just uses a static encryption key
kill_date - this defines the date agents should stop checking in, in YYYY-MM-DD format. The date is checked when the agent first starts and before each tasking request; if it is the specified date or later, the agent automatically exits.
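A skeleton of that generic structure might look like the following sketch. The key names match the list above; the values are illustrative:

```json
{
  "GET": {
    "ServerBody": [],
    "ServerHeaders": {"Server": "nginx"},
    "ServerCookies": {},
    "AgentMessage": []
  },
  "POST": {
    "ServerBody": [],
    "ServerHeaders": {"Server": "nginx"},
    "ServerCookies": {},
    "AgentMessage": []
  },
  "jitter": 23,
  "interval": 10,
  "chunk_size": 512000,
  "key_exchange": true,
  "kill_date": "2030-01-01"
}
```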
Now let's talk about the AgentMessage parameter. This is where you define all of the key components of how your GET and POST messages from agent to Mythic look, as well as indicating where in those requests you want to put the actual message the agent is trying to send. The format in general looks like this:
In the AgentMessage section for the "GET"s and "POST"s, you can have 1 or more instances of the above AgentMessage format (the above is an example of one instance). When the agent goes to make a GET or POST request, it randomly picks one of the formats listed and uses it. Let's look into what's actually being described in this AgentMessage:
urls - this is a list of URLs. One is randomly selected each time this overall AgentMessage is selected. This allows you to supply fallback mechanisms in case one IP or domain gets blocked.
uri - this is the URI to be used at the end of each of the URLs specified. This can be a static value, like /downloads.php, or one that changes for each request. For example, in the above scenario we supply /<test:string>. The meaning behind that format is explained in the server side configuration section, but the point to look at here is the next piece - urlFunctions.
urlFunctions - this describes transforms for modifying the URI of the request. In the above example, we replace the <test:string> with a random selection from ["jquery-3.3.1.min.js", "jquery-3.3.1.map"].
AgentHeaders - this defines the different headers that the agent will set when making requests. Note: if you're doing domain fronting, this is where you'd set that value.
QueryParameters - this defines the query parameters (if any) that will be sent with the request. When doing transforms and dynamic modifications, there is a standard format that's described in the next section. When doing query parameters, if you're going to do anything base64 encoded, make sure it's URL-safe encoding. Specifically, the /, +, and = characters need to be URL encoded (i.e. replaced with their %hexhex equivalents).
Cookies - this defines any cookies that are sent with the agent messages.
Body - this defines any modifications to the body of the request that should be made.
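Putting those keys together, a single AgentMessage instance might look like the following sketch. The key names come from the list above; the domains and values are hypothetical:

```json
{
  "urls": ["https://domain1.example.com", "https://domain2.example.com"],
  "uri": "/<test:string>",
  "urlFunctions": [
    {
      "name": "<test:string>",
      "value": "",
      "transforms": [
        {
          "function": "choose_random",
          "parameters": ["jquery-3.3.1.min.js", "jquery-3.3.1.map"]
        }
      ]
    }
  ],
  "AgentHeaders": {"Host": "code.jquery.com"},
  "QueryParameters": [
    {"name": "q", "value": "message", "transforms": []}
  ],
  "Cookies": [],
  "Body": []
}
```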
The defining feature of the HTTP profile is being able to do transforms on the various elements of HTTP requests. What does this look like though?
These transforms have a few specific parts:
name - this is the parameter name supplied in the request. For query parameters, this is the name in front of the = sign (ex: /test.php?q=abc123). For cookie parameters, this is the name of the cookie (ex: q=abc123;id=abc1).
value - this is the starting value before the transforms take place. You can set this to whatever you want, but if you set it to message, then the starting value for the transforms will be the message that the agent is trying to send to Mythic.
transforms - this is a list of transforms that are executed. The value starts off as indicated in the value field, then each resulting value is passed on to the next transform. In this case, the value starts as "", then gets 30 random alphabet letters, then those letters are base64 encoded.
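That example transform chain might be written like the following sketch (the parameter name "q" is hypothetical; the function names and structure match the transform list below):

```json
{
  "name": "q",
  "value": "",
  "transforms": [
    {"function": "random_alpha", "parameters": ["30"]},
    {"function": "base64", "parameters": []}
  ]
}
```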
Transforms have 2 parameters: the name of the function to execute and an array of parameters to pass in to it.
The initial set of supported functions are:
base64 - takes no parameters
prepend - takes a single parameter: the thing to prepend to the input value
append - takes a single parameter: the thing to append to the input value
random_mixed - takes a single parameter: the number of elements to append to the input value. The elements are chosen from upper case letters, lower case letters, and numbers.
random_number - takes a single parameter: the number of elements to append to the input value. The elements are chosen from the numbers 0-9.
random_alpha - takes a single parameter: the number of elements to append to the input value. The elements are chosen from upper case and lower case letters.
choose_random - takes an array of elements to choose from
To add new transforms, a few things need to happen:
In the HTTP profile's server code, the function and a reverse of the function need to be added, and the options need to be added to the create_value and get_value functions. This allows the server to understand the new transforms. If you look in the server code, you'll see functions like prepend (which prepends the value) and r_prepend (which does the reverse).
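As a sketch of that pairing, a transform and its reverse might look like this. The function names mirror the prepend/r_prepend pair mentioned above; the implementations are illustrative:

```python
def prepend(value: str, param: str) -> str:
    # forward: used by the agent when building the value it sends
    return param + value

def r_prepend(value: str, param: str) -> str:
    # reverse: used by the server to recover the original message
    return value[len(param):]

wire_value = prepend("BASE64_AGENT_MESSAGE", "/static/")
assert r_prepend(wire_value, "/static/") == "BASE64_AGENT_MESSAGE"
```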
In the agent's HTTP profile code, the options for the functions need to also be added so that the agent understands the functions. Ideally, when you do this you add the new functions to all agents, otherwise you start to lose parity.
A final example of an agent configuration can be seen below:
Like with all C2 profiles, the HTTP profile has its own docker container that handles connections. The purpose of this container is to accept connections from HTTP agents, undo all of the special configurations you specified for your agent to get the real message back out, then forward that message to the actual Mythic server. Upon getting a response from Mythic, the container performs more transforms on the message and sends it back to the Agent.
The docker container just abstracts all of the C2 features out from the actual Mythic server so that you're free to customize and configure the C2 as much as you want without having to actually adjust anything in the main server.
There is only ONE HTTP docker container per Mythic instance though, not one per operation. Because of this, the HTTP profile's server-side configuration will have to do that multiplexing for you. Below is an example of the setup:
Let's look into what this sort of configuration entails. We already discussed the agent side configuration above, so now let's look into what's going on in the HTTP C2 Docker container. The Server configuration has the following general format:
There are a couple key things to notice here:
instances - you generally have one instance per operation, but there's no strict limit. This is an array of configurations; when the Docker server starts up, it loops through these instances and starts each one.
no_match - this allows you to specify what happens if there's an issue reaching the main Mythic server or if the docker container gets a request that doesn't match one of your specified endpoints. This is also what happens if a message arrives at the right endpoint but isn't a properly encoded agent message (ex: it fails to decrypt or the agent message can't be extracted from the URL).
action - this allows you to specify which of the options you want to leverage
redirect - this simply returns a 302 redirect to the specified URL
proxy_get - this proxies the request to the specified URL and returns that URL's contents with the specified status code
proxy_post - this proxies the request to the specified URL and returns that URL's contents with the specified status code
return_file - this allows you to return the contents of a specified file. This is useful for a generic 404 page or a saved static page for a site you might be faking.
port - which port this instance should listen on
key_path - the local path to the key file for an SSL connection. If you upload the file through the web UI, the path here should simply be the name of the file.
cert_path - the local path to the cert file for an SSL connection. If you upload the file through the web UI, the path here should simply be the name of the file. Both this and the key_path must be specified and point to valid files for the connection to the container to use SSL.
debug - set this to true to allow debug messages to be printed. There can be a lot of them though, so once you have everything working, be sure to set this to false to speed things up a bit.
GET and POST - simply take the GET and POST sections from your agent configuration mentioned above and paste them here. No changes necessary.
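A skeleton of the server configuration might look like the following sketch. The key names match the list above; the exact shape of no_match and the file names are illustrative, and the GET/POST sections would be pasted in from your agent configuration:

```json
{
  "instances": [
    {
      "port": 443,
      "key_path": "privkey.pem",
      "cert_path": "fullchain.pem",
      "debug": false,
      "no_match": {
        "action": "redirect",
        "redirect": "https://www.example.com"
      },
      "GET": {},
      "POST": {}
    }
  ]
}
```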
When it comes to the URIs you choose to use, there's an additional feature you can leverage. You can keep them static (like /download.php) and specify/configure the query parameters/cookies/body, but you can also register more dynamic URIs. There's an example of this above:
What this does for the server side is register a pattern via Sanic's Request Parameters. This allows you to register URIs that can change every time, but still be valid. For example:
this would require you to have a URI of something like /downloads/category/abcdefh/page/5, or the docker container will hit a not_found case and follow that action path. Combine this with the urlFunctions and you can have something that always generates unique URIs, as follows:
Specifically for urlFunctions, the "name" must match the thing that'll be replaced. Unlike query parameters and cookie values, where the name specifies the name for the value, the name here specifies which field is to be replaced with the result of the transforms.
One last thing to note: you cannot have two URIs within a single instance's GET or POST sections that collide. For example, you can't have two URIs that are both /download.php but vary only in query parameters. As far as the docker container's service is concerned, the differentiation is between the URIs, not the query parameters, cookies, or body configuration. Two different instances can have overlapping URIs, but that's because they are different web servers bound to different ports.
What if you want all of your messages to be "POST" requests or "GET" requests? By default, the agent tries to do GET requests when getting tasking and POST requests for everything else; however, if there are no GET AgentMessage instances in that array (i.e. {"GET":{"AgentMessage":[]}}), then the agent should use POST messages instead, and vice versa. This allows you to have yet another layer of customization in your profiles.
Because this profile offers a LARGE amount of flexibility, it can get quite confusing. So, included with dynamicHTTP is a linter (a program to check your configurations for errors). In C2_Profiles/dynamicHTTP/c2_code/ there is a config_linter.py file. Run this program with ./config_linter.py [path_to_agent_config.json], where [path_to_agent_config.json] is a saved version of the configuration you'd supply to your agent. The program will check that file for errors, check your server's config.json for errors, and then verify that there's a match between the agent config and server config (if there were no match, your server wouldn't be able to understand your agent's messages). Here's an example:
Mythic can generate JSON or XML style reports. If you need a PDF version, simply generate the XML, open it locally in your browser, and save it off as a PDF.
Report generation is located behind the checkered flag icon in the top navigation bar.
You can select your output format, whether you want to include MITRE ATT&CK mappings inline with each task, and whether you want a MITRE ATT&CK summary at the end. You can also optionally exclude certain callbacks, usernames, and hostnames from the generated report.
The final generated report can be downloaded from this screen when it's ready via a toast notification. If you navigate away before it's done though, the report is also always available from the "files" section of the search page (click the paper clip icon at the top and select "Uploads" instead of "Downloads").
Artifacts track potential indicators of compromise and other notable events occurring throughout an operation.
A list of all current artifacts can be found by clicking the fingerprint icon at the top of Mythic.
This page tracks all of the artifacts automatically created by executing tasks. This should provide a good idea for both defenders and red teamers about the artifacts left behind during an operation and should help with deconfliction requests.
Artifacts are created in a few different ways:
A command's tasking automatically creates an artifact.
An agent reports back a new artifact in an ad-hoc fashion
How to use the Scripting API
Mythic utilizes a combination of a GoLang Gin webserver and a Hasura GraphQL interface. Most actions that happen with Mythic go through the GraphQL interface (except for file transfers). We can hit the same GraphQL endpoints and listen to the same WebSocket endpoints that the main user interface uses as part of scripting, which means scripting can technically be done in any language.
Install the PyPi package via pip: pip3 install mythic. The current mythic package is version 0.1.0rc2. The code for it is public.
The easiest way to play around with the scripting is to do it graphically - select the hamburger icon (three horizontal lines) in the top left of Mythic, select "Services", then "GraphQL Console". This will open up /console in a new tab.
From here, you need to authenticate to Hasura - run sudo ./mythic-cli config get hasura_secret on the Mythic server and you'll get the randomized Hasura secret to log in. At this point you can browse around the scripting capabilities (API at the top) and even look at all the raw database data via the "Data" tab.
The Jupyter container has a lot of examples of using Mythic Scripting to do a variety of things. You can access the Jupyter container by clicking the hamburger icon (three horizontal lines) in the top left of Mythic, selecting "Services", then "Jupyter Notebooks". This will open up /jupyter in a new tab.
From here, you need to authenticate to Jupyter - run sudo ./mythic-cli config get jupyter_token on the Mythic server to get the authentication token. By default, this is mythic, but it can be changed at any time.
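Since most of Mythic's functionality sits behind the Hasura GraphQL endpoint, scripting largely boils down to POSTing query bodies to it. The following sketch builds the kind of JSON body the GraphQL console sends; the callback table and its columns are assumptions here - browse the console's "Data" tab on your own instance for the exact schema.

```python
import json

# Hypothetical GraphQL query body; the "callback" table and its column names
# are assumptions - check the Hasura console's "Data" tab for your schema.
query_body = {
    "query": "query GetCallbacks { callback { id host user pid } }",
    "variables": {},
}

# This body would be POSTed to the /graphql/ endpoint on the Mythic server,
# authenticated either with the hasura_secret (admin console access) or a
# Mythic API token, depending on how you connect.
print(json.dumps(query_body))
```

The same pattern applies to mutations and subscriptions, which is why scripting can technically be done in any language that can speak HTTP and WebSockets.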
All of the following features describe information that can be included in responses. These sections describe some additional JSON formats and data that can be used to have your responses be tracked within Mythic or cause the creation of additional elements within Mythic (such as files, credentials, artifacts, etc).
You can hook multiple features in a single response because they're all unique keys. For example, anything you want displayed to the user should go in the user_output field.
When we talk about Hooking Features in the message of an agent, we're really talking about a specific set of dictionary key/value pairs that have special meaning. All responses from the agent to the Mythic server already have to be in a structured format. Each of the following sections goes into what their reserved keywords mean, but some simpler ones are:
task_id - string - UUID associated with tasks
user_output - string - used with any command to display information back to the user
completed - boolean - used with any command to indicate that the task is done (switches to the green completed icon)
status - string - used to indicate that a command is not only done, but has encountered an error or some other status to return to the user
process_response - this is passed to your command's python file for processing in the process_response function.
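Putting those reserved keys together, a single response entry might look like the following sketch. The task_id is a placeholder UUID, and the exact envelope around the responses array (encryption, base64, action wrapper) depends on your C2 profile's encoding.

```python
import json

# One entry in the "responses" array of a post_response message, using the
# reserved keys described above. task_id is a placeholder value.
message = {
    "action": "post_response",
    "responses": [
        {
            "task_id": "placeholder-task-uuid",
            "user_output": "ran whoami: DOMAIN\\user",
            "completed": True,
            "status": "success",
        }
    ],
}
print(json.dumps(message))
```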
Actions are special messages that don't adhere to the normal message types that you see for the rest of the features in this section. There are only a handful of these messages:
- Initial checkin messages and key exchanges
- Getting tasking
- Sending tasking responses
Inside of these messages is where the features listed throughout this section appear.
As you're developing an agent to hook into these features, it's helpful to know where to look if you have questions. All of the Task, Command, and Parameter definitions/functions available to you are defined in the mythic_container PyPi package, which is hosted on the MythicMeta Organization on GitHub. Information about the Payload Type itself (BuildResponse, SupportedOS, BuildParameters, PayloadType, etc) can be found in the file in the same PyPi repo.
MITRE ATT&CK is a knowledge base of adversary tactics and techniques mapped out to various threat groups. It provides a common language between red teams and blue teams when discussing operations, TTPs, and threat hunting. For Mythic, this provides a great way to track all of the capabilities the agents provide and all of the capabilities exercised so far in an operation.
For more information on MITRE ATT&CK, check out the following:
MITRE ATT&CK integrations can be found by clicking the chart icon from the top navigation bar.
There are a few different ways to leverage this information.
Clicking the "Fetch All Commands Mapped to MITRE" action will highlight all of the matrix cells that have a command registered to that ATT&CK technique. Clicking on a specific cell will bring up more specific information on which payload type and which command is mapped to that technique. All of this information comes from the MITRE ATT&CK portion of commands.
This is a slightly different view. This button will highlight and show the cells that have been exercised in the current operation. A cell will only be highlighted if a command was executed in the current operation with that ATT&CK technique mapped to it.
The cell view will show the exact command that caused the cell to be highlighted with a link (via task number) back to the full display of the task:
If there is an issue with the mapping, clicking the red X button will remove the mapping.
Agent reports new artifacts created on the system or network
Any command is able to reply with its own artifacts that are created along the way. The following response can be returned as a separate C2 message or as part of the command's normal output.
The following response is part of the normal agent response, so it is base64 encoded and put in the normal response format.
Agents can report back their own artifacts they create at any time. They just include an artifacts keyword with an array of the artifacts. There are two components to this:
base_artifact - the type of base artifact being reported. If this base_artifact type isn't already captured in the "Global Configurations" -> "Artifact Types" page, then this base_artifact value will be created.
artifact - the actual artifact being created. This is a free-form field.
Artifacts created this way will be tracked in the Artifacts page (click the fingerprint icon at the top).
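As a sketch, an ad-hoc artifact report inside a post_response might look like this. The base_artifact and artifact values are illustrative; only the two key names come from this page.

```python
import json

# "artifacts" rides alongside the other reserved keys in a response entry.
# base_artifact should match (or will create) an entry under
# "Global Configurations" -> "Artifact Types"; artifact is free-form text.
message = {
    "action": "post_response",
    "responses": [
        {
            "task_id": "placeholder-task-uuid",
            "artifacts": [
                {"base_artifact": "Process Create", "artifact": "cmd.exe /c whoami"}
            ],
        }
    ],
}
print(json.dumps(message))
```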
This refers to the act of connecting two agents together with a peer-to-peer protocol. The details here are agnostic of the actual implementation of the protocol (it could be SSH, TCP, Named Pipes, etc), but the goal is to provide a way to give one agent the context it needs to link or establish some peer-to-peer connectivity to a running agent.
This also comes into play when trying to connect to a new executed payload that hasn't gone through the checkin process with Mythic yet to get registered as a Callback.
When creating a command, give a parameter of type ParameterType.ConnectionInfo. Now, when you type your command without parameters, you'll get a popup like normal. However, for this type, there will be three dropdown menus for you to fill out:
This field is auto populated based on two properties:
The list of all hosts where you have registered callbacks
The list of all hosts where Mythic is aware you've moved a payload to
Once you've selected a host, the Payload dropdown will populate with the associated payloads that Mythic knows are on that host. These payloads are in two main groups:
The payloads that spawned the current callbacks on that host
The payloads that were moved over via a command that created a new payload and registered it to the new host
This payload simply acts as a template of information so that you can select the final piece.
Select the green + next to host, manually specify which host your payload lives on, then select from the dropdown the associated payload that was used. Then click add. Now Mythic is also tracking that the selected payload lives on the indicated host. You can continue with the host/payload/c2_profile dropdowns like normal.
When trying to connect to a new agent, you have to specify which specific profile you're wanting to connect to. This is because on any given host and for any given payload, there might be multiple c2 profiles within it (think HTTP, SMB, TCP, etc). This field will auto populate based on the C2 profiles that are in the payload selected in the drop down above it.
You'll only be able to select C2 profiles that are marked as is_p2p, i.e. peer-to-peer profiles. This is because it doesn't make any sense to remotely link to, for example, an HTTP callback profile.
Once you've selected all of the above pieces, the task will insert all of that selected profile's specific instantiations as part of the task for the agent to use when connecting. This can include things like specific ports to connect to, specific pipe names to use, or any other information that might be needed to make the connection.
All of the above is to help an operator identify exactly which payload/callback they're trying to connect to and via which p2p protocol. As a developer, you have the freedom to instead allow operators to specify more generic information via the command line, such as link-command-name hostname or link-command-name hostname other-identifier. The caveat is that this now requires the operator to know more detailed information about the connection ahead of time.
The ParameterType.ConnectionInfo parameter type is useful when you want to make a new connection between a callback and a payload you just executed, or to another callback that your current callback hasn't connected to before. A common command that leverages this parameter type would be link. However, this isn't too helpful if you want to remove a certain connection or if you just want to re-establish a connection that died. To help with this, there's ParameterType.LinkInfo which, as the name implies, gives information about the links associated with your callback.
When you use a parameter type of ParameterType.LinkInfo, you'll get a dropdown menu where the user can select from live or dead links to leverage. When you select a current/dead link, the data that's sent down to your create_tasking function is exactly the same as when you use ParameterType.ConnectionInfo - i.e. information about the host, payload uuid, callback uuid, and the p2p c2 profile parameter information.
This describes how to report back p2p connection information to the server
This message type allows agents to report back new or removed connections between themselves or elsewhere within a p2p mesh. Mythic uses these messages to construct a graph of connectivity that's displayed to the user and for handling routing for messages through the mesh.
The agent message to Mythic has the following form:
Just like other messages, this message has the same UUID and encryption requirements found in . Some things to note about the fields:
edges is an array of JSON objects describing the state of the connections that the agent is adding/removing. Each edge in this array has the following fields:
source - one end of the p2p connection (more often than not, this is the agent that's reporting this information)
destination - the other end of the p2p connection
metadata - additional information about the connection that the agent wants to report. For example, when dealing with SMB bind pipes, this could contain information about the specific pipe name instances that are being used if they're being programmatically generated.
action - indicates if the connection described above is to be added or removed from Mythic
c2_profile - indicates which c2 profile is used for the connection
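A sketch of an agent announcing a new p2p link with those fields; the UUIDs, metadata string, and profile name are placeholders.

```python
import json

# "edges" sits at the top level of the agent message, next to "action".
# All identifier values below are placeholders.
message = {
    "action": "post_response",
    "responses": [],
    "edges": [
        {
            "source": "uuid-of-reporting-agent",
            "destination": "uuid-of-new-p2p-agent",
            "metadata": "pipe name: placeholder_pipe",
            "action": "add",      # or "remove" to tear the route down
            "c2_profile": "smb",  # placeholder p2p profile name
        }
    ],
}
print(json.dumps(message))
```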
After getting a message like this, Mythic responds with a message of the following form:
This is very similar to most other response messages from the Mythic server.
When an agent sends a message to Mythic with a delegate component, Mythic will automatically add a route between the delegate and the agent that sent the message.
For example, suppose agentA is an egress agent sending messages to a C2 Docker container, and it links to agentB, a p2p agent. There are now a few options:
If the p2p protocol involves sending messages back-and-forth between the two agents, then agentA can determine if agentB is a new payload, trying to stage, or a complete callback. When agentB is a complete callback, agentA can announce a new route to agentB.
When agentA sends a message to Mythic with a delegate message from agentB, Mythic will automatically create a route between the two agents.
This distinction is important due to how the p2p protocol might work. It could be the case that agentB never sends a get_tasking request and simply waits for messages sent to it. In this case, agentA would have to do some sort of p2p comms with agentB to determine who it is so it can announce the route or start the staging process for agentB.
Unified process listing across multiple callbacks for a single host
The supported ui features flag on the command that does this tasking needs to have supported_ui_features = ["process_browser:list"] set if you want to be able to issue a process listing from the UI process listing table. If you don't care about that, then you don't need that feature set for your command.
There are many instances where you might have multiple agents running on a single host and you run common tasking like process lists over and over and over again. You often do this because the tasking has scrolled out of view, maybe it's just stale information, or maybe you couldn't quite remember which callback actually had that task. This is where the unified process listing comes into play.
With a special format for process listing, Mythic can track all the different process lists together for a single host. It doesn't matter which callback you ran the task from; as long as you pull up the process_list view for that host, all of the tasks will be available and viewable.
Naturally, this has a special format to make it the most useful. Like almost everything else in Mythic, this requires structured output; for each process, we want the following:
All that's needed is an array of all the processes with the above information in the processes field of your post_response action. That allows Mythic to create a process hierarchy (if you supply both process_id and parent_process_id) and a sortable/filterable table of processes. The above example shows a post_response with one response in it. That one response has a processes field with an array of processes it wants to report.
Any field that ends with _time expects the value to be an int64 of unix epoch time in milliseconds. You're welcome to supply any additional field you want about a process - it all gets aggregated together and provided as part of the "metadata" for the process that you can view in the UI in a nice table listing.
For example, a macOS agent might report back signing flags and entitlements and a windows agent might report back integrity level and user session id.
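A minimal sketch of such a response follows. The process_id and parent_process_id keys come from this page; the name, user, and signing_info fields are illustrative of the arbitrary extra metadata an agent can attach.

```python
import json

# A post_response with one response entry carrying a "processes" array.
# process_id/parent_process_id enable the hierarchy view; extra fields
# (name, user, signing_info here are illustrative) become process metadata.
message = {
    "action": "post_response",
    "responses": [
        {
            "task_id": "placeholder-task-uuid",
            "processes": [
                {
                    "process_id": 437,
                    "parent_process_id": 1,
                    "name": "placeholder_process",
                    "user": "root",
                    "signing_info": "placeholder entitlements",
                }
            ],
        }
    ],
}
print(json.dumps(message))
```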
Discussion / Explanation for Common Errors
All Payload Type containers leverage the mythic_container PyPi package or the github.com/MythicMeta/MythicContainer golang package. These packages keep track of a version that syncs up with Mythic when the container starts. As Mythic gains new functionality or changes how things are done, these containers might not be supported anymore. At any given time, Mythic could support only a single version or a range of versions. A list of all PyPi reported versions and their corresponding Mythic version/Docker image versions can be found here.
The agent in question needs to have its container updated or downgraded to be within the range specified by your version of Mythic. If you're using a Docker image from itsafeaturemythic (i.e. in your Mythic/Payload_Types/[agent name]/Dockerfile it says FROM itsafeaturemythic/something), then you can look here to see which version you should change to.
This is a warning that a C2 Profile's internal service was started, but has since stopped. Typically this happens as a result of rebooting the Mythic server, but if for some reason a C2 Profile's Docker container restarts, you'll get this notification as well.
If a C2 Profile is manually stopped by an operator instead of it stopping automatically for some other reason, the warning message will reflect the variation:
Go to the C2 Profiles page and click "Start Internal Server". If the container went down and is still down, then you won't be able to start the server from the UI; instead, you'll see a "Can't Start Server" button. If that's the case, you need to track down why that container stopped on the host.
The "Failed to correlate UUID" message means that data came in through some C2 Profile, made its way to Mythic, Mythic base64 decoded the data successfully and looked at the first characters for a UUID. In Mythic messages, the only piece that's normally not encrypted is this random UUID4 string in the front that means something to Mythic, but is generally meaningless to everybody else. Mythic uses that UUID to look up the callback/payload/stage crypto keys and other related information for processing. In this case though, the UUID that Mythic sees isn't registered within Mythic's database. Normally people see this because they have old agents still connecting in, but they've since reset their database.
Looking at the rest of the message, we can see additional data. All C2 Profile docker containers add an additional header when forwarding messages to the Mythic server with mythic: c2profile_name. So, in this case we see 'mythic': 'http', which means that the http profile is forwarding this message along to the Mythic server.
First check if you have any agents that you forgot about from other engagements, tests, or deployments. The error message should tell you where they're connecting from. If the UUID that Mythic shows isn't actually a UUID format, then that means that some other data made its way to Mythic through a C2 profile. In that case, check your Firewall rules to see if there's something getting through that shouldn't be getting through. This kind of error does not impact the ability for your other agents to work (if they're working successfully), but does naturally take resources away from the Mythic server (especially if you're getting a lot of these).
If you are going back-and-forth between Windows and Linux doing edits on files, then you might accidentally end up with mixed line endings in your files. This typically manifests after an edit: when you restart the container, it goes into a reboot loop. The above error can be seen by using sudo ./mythic-cli logs [agent name].
Running dos2unix on your files will convert the line endings to the standard Linux characters and you should then be able to restart your agent with sudo ./mythic-cli start [agent name]. At that point everything should come back up.
Agents can report back credentials they discover
The agent can report back multiple credentials in a single response. The credential_type field represents the kind of credential and must be one of the following:
plaintext
certificate
hash
key
ticket
cookie
The other fields are pretty straightforward, but they must all be provided for each credential. There is one optional field that can be specified here: comment. You can set comments manually on the credentials page, but you can also add a comment to every credential you report to provide a bit more context about it.
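A sketch of reporting a discovered credential follows. The credential_type values and the optional comment key come from this page; the account, realm, and credential field names are assumptions to illustrate the shape, and the values are fabricated placeholders.

```python
import json

# "credentials" rides in a normal response entry. credential_type must be
# one of the listed types; account/realm/credential are assumed field names
# and all values here are placeholders.
message = {
    "action": "post_response",
    "responses": [
        {
            "task_id": "placeholder-task-uuid",
            "credentials": [
                {
                    "credential_type": "plaintext",
                    "realm": "EXAMPLE.DOMAIN",
                    "account": "exampleuser",
                    "credential": "placeholder-password",
                    "comment": "found in a config file",  # optional
                }
            ],
        }
    ],
}
print(json.dumps(message))
```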
Mythic has a special page specifically for viewing screenshots by clicking the camera icon at the top of any of the pages.
When it comes to registering screenshots with Mythic, the process is almost identical to File Downloads (Agent -> Mythic); however, we set the is_screenshot flag to true in the download portion of the message:
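A sketch of that initial screenshot registration message; it mirrors the chunked file download registration, with is_screenshot set to true (exact nesting may vary by Mythic version).

```python
import json

# Initial screenshot registration: same shape as a file download
# registration, but is_screenshot tells the UI to show it on the
# screenshots page instead of the files page.
message = {
    "action": "post_response",
    "responses": [
        {
            "task_id": "placeholder-task-uuid",
            "download": {
                "total_chunks": 1,
                "is_screenshot": True,
            },
        }
    ],
}
print(json.dumps(message))
```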
Download a file from the target to the Mythic server
This section is specifically for downloading a file from the agent to the Mythic server. Because these can be very large files that you task to download, a bit of processing needs to happen: the file needs to be chunked and routed through the agents.
In general, there's a few steps that happen for this process (visually this can be found on the Message Flow page):
The operator issues a task to download a file, such as download file.txt
This gets sent down to the agent in tasking, the agent locates the file, and determines it has the ability to read it. Now it needs to send the data back
The agent first gets the full path of the file so that it can return that data. That's a quality of life requirement so that operators don't need to supply the full path when specifying files, while still letting Mythic properly track all files, especially ones that have the same name.
The agent then sends an initial message to the Mythic server indicating that it has a file to send. The agent specifies how many chunks it'll take, and which task this is for. If the agent specifies that there are -1 total chunks, then Mythic will expect the agent to return a total chunk count at some point so that Mythic knows the transfer is over. This can be helpful when the agent isn't able to seek the file length ahead of time.
The Mythic server then registers that file in the database so it can be tracked. This results in a file UUID. The Mythic server sends back this UUID so that the agent and Mythic can make sure they're talking about the same file for the actual transfer.
The agent then starts chunking up the data and sending it chunk by chunk. Each message will have chunk_size amount of data base64 encoded, the file UUID from the registration step, and which chunk number is being sent. Chunk numbers start at 1.
The Mythic server responds back with a successful acknowledgement that it received each chunk
It's not an extremely complex process, but it does require a bit more back-and-forth than a fire-and-forget style. This process allows Mythic to track how many chunks it'll take to download a file and how many have been downloaded so far. The rest of this page will walk through those steps with more concrete code examples.
When an agent is ready to transfer a file from agent to Mythic, it first needs to get the full_path of the file and determine how many chunks it'll take to transfer the file. It then creates the following structure:
The host field allows us to track whether you're downloading files on the current host or remotely. If you leave this out or leave it blank (""), then it'll automatically be populated with the callback's hostname. Because you can use this same process for downloading files and downloading screenshots from the remote endpoint in a chunked fashion, the is_screenshot flag allows this distinction. This helps the UI track whether something should be shown in the screenshot pages or in the files pages. If this information is omitted, then the Mythic server assumes it's a file (i.e. is_screenshot is assumed to be false). This message is what's sent as an action: post_response message.
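Putting those fields together, a sketch of the initial download registration looks like the following. The field names come from this page; the nesting under a download key mirrors the screenshot flow described in this section, but verify the exact shape against your Mythic version.

```python
import json

# Initial download registration sent as a post_response message.
# The path is illustrative; host left blank means "this callback's host".
message = {
    "action": "post_response",
    "responses": [
        {
            "task_id": "placeholder-task-uuid",
            "download": {
                "total_chunks": 4,           # or -1 if the size isn't known yet
                "full_path": "/etc/passwd",  # illustrative path
                "host": "",                  # blank -> callback's hostname
                "is_screenshot": False,      # optional; false is the default
            },
        }
    ],
}
print(json.dumps(message))
```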
Mythic will respond with a file_id:
The agent sends back each chunk sequentially and calls out the file_id it's referring to along with the actual chunked data.
The chunk_num field is 1-based. So, the first chunk you send is "chunk_num": 1.
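As a sketch, each chunk message then references the file_id Mythic returned, with the chunk's bytes base64 encoded (all values here are placeholders).

```python
import base64
import json

chunk_bytes = b"placeholder file contents"

# One chunk of the transfer: references the file_id Mythic handed back,
# a 1-based chunk_num, and the base64 of this chunk's bytes.
message = {
    "action": "post_response",
    "responses": [
        {
            "task_id": "placeholder-task-uuid",
            "download": {
                "file_id": "uuid-returned-by-mythic",
                "chunk_num": 1,  # 1-based and sent in order
                "chunk_data": base64.b64encode(chunk_bytes).decode(),
            },
        }
    ],
}
print(json.dumps(message))
```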
Mythic tracks the size of chunks, but doesn't search via offsets to insert chunks out of order, so make sure you send the chunks in order.
If your agent language is strongly typed or you need to supply all of the fields in every request, then for these additional file transfer messages, make sure the total_chunks field is set to null; otherwise Mythic will think you're trying to transfer another file.
For each chunk, Mythic will respond with a success message if all went well:
Once all of the chunks have arrived, the file will be downloadable from the Mythic server.
Upload a file from Mythic to the target
This section is specifically for uploading a file from the Mythic server to an agent. Because these messages aren't just to an agent directly (they can be routed through a p2p mesh), they can't be as simple as a GET request to download a file. The file needs to be chunked and routed through the agents. This isn't specific to the upload command; this is for any command that wants to leverage a file from Mythic.
In general, there's a few steps that happen for this process (this can be seen visually on the Message Flow page):
The operator issues some sort of tasking that has a parameter type of "File". You'll notice this in the Mythic user interface because you'll always see a popup for you to supply a file from your file system.
Once you select a file and hit submit, Mythic loops through all of the files selected and registers them. This process sends each one down to Mythic, saves it off to disk, and assigns it a UUID. These UUIDs are what's stored in place of the raw bytes when you submit your task. So, if you had an upload command that takes a file and a path, your arguments would end up looking like {"file": "uuid", "path": "/some/path"} rather than {"file": raw bytes of file, "path": "/some/path"}.
In the Payload Type's corresponding command python file there is a function called create_tasking that handles the processing of tasks from users before handing them off to the database to be fetched by an agent. If your agent supports chunking and transferring files that way, then you don't need to do anything else, but if your agent requires that you send down the entire file's contents as part of your parameters, you need to get the associated file.
To get the file with Mythic, there's an RPC call we can do:
Lines 2-5 are where we do an RPC call with Mythic to search for files; specifically, we want ones where the file_id matches the one that was passed down as part of our parameters. This should only return one result but, for consistency, the result will always come back as an array. So, file_resp.response is an array of file information; we'll take the filename from the first entry. We can use this to get the original filename back out from Mythic from the user's upload command. If there's something we want to modify about the file (such as adding a comment automatically or indicating that the file should be deleted from disk after the agent fetches it), we can use the update_file RPC function to do so.
At this point, if you wanted to use the raw bytes of a file instead of the UUID as part of your tasking, your get_file query should indicate get_contents=True; then you can access the raw bytes via file_resp.response[0]["contents"]. You can then swap out the contents of the parameter with task.args.add_arg("arg name", "base64 of file contents here").
If you want to register a NEW file with Mythic from the payload container that the user didn't first upload, you need to use the create_file RPC call.
4. The agent gets the tasking and sees that there's a file UUID it needs to pull. It sends an initial message to Mythic saying that it will be downloading the file in chunks of a certain size and requests the first chunk. If the agent is going to be writing the file to disk (versus just pulling down the file into memory), then the agent should also send full_path. This allows Mythic to track a new entry in the database with an associated task for uploads. The full_path key lives at the same level as user_output, total_chunks, etc in the post_response.
5. The Mythic server gets the request for the file, makes sure the file exists and belongs to this operation, then gets the first chunk of the file as specified by the agent's chunk_size and also reports to the agent how many chunks there are.
6. The Agent can now use this information to request the rest of the chunks of the file.
The agent reporting back full_path is what allows Mythic to track the file in the Files search page as a file that has been written to disk. If you don't report back a full_path, or report full_path as an empty string, then Mythic thinks that the file transfer only lived in memory and didn't touch disk. This is separate from reporting that a file was written to disk as part of artifact tracking on the Reporting Artifacts page.
It's not an extremely complex process, but it does require a bit more back-and-forth than a fire-and-forget style. The rest of this page will walk through those steps with more concrete code examples.
There is no expectation when doing uploads or downloads that the operator must type the absolute path to a file; that would be a bit of a strict requirement. Instead, Mythic allows operators to specify relative paths and has an option in the upload action to specify the actual full path (this option also exists for downloading files so that the absolute path can be returned for better tracking). This allows Mythic to properly track absolute file system paths that might have the same resulting file name without an extra burden on the operator.
Files can (optionally) be pulled down multiple times (if you set delete_after_fetch to True, then the file won't exist on disk after the first fetch and thus can't be re-used). This is to prevent bloating up the server with unnecessary files.
An agent pulling down a file to the target is similar to downloading a file from the target to Mythic. The agent makes the following request to Mythic:
The full_path parameter is helpful for accurate tracking. This allows an operator to be in the /Temp directory and simply call the upload function to the current directory, while letting Mythic track the full path for easier reporting and deconfliction.
The full_path parameter is only needed if the agent plans to write the file to disk. If the agent is pulling down a file to load into memory, then there's no need to report back a full_path.
The agent gets back a message like:
This process repeats as many times as is necessary to pull down all of the contents of the file.
If there is an error pulling down a file, the server will respond with as much information as possible and blank out the rest (i.e.: {'action': 'post_response', 'responses': [ {'total_chunks': 0, 'chunk_num': 0, 'chunk_data': '', 'file_id': '', 'task_id': '', 'status': 'error', 'error': 'some error message'} ] }).
If the task_id was there in the request, but there was an error with some other piece, then the task_id will be present in the response with the right value.
There's a reason that files aren't base64 encoded and placed inside the initial tasking blobs. Keeping the files tracked by the main Mythic system and used in a separate call allows the initial messages to stay small and allows for the agent and C2 mechanism to potentially cache or limit the size of the transfers as desired.
Consider the case of using DNS as the C2 mechanism. If the file mentioned in this section was sent through this channel, then the traffic would potentially explode. However, having the option for the agent to pull the file via HTTP or some other mechanism gives greater flexibility and granular control over how the C2 communications flow.
This page discusses how to register that commands are loaded/unloaded in a callback
It's a common feature for agents to be able to load new functionality. Even within Mythic, you can create agents that start off with only a base set of commands and load more in later. Using the commands keyword, agents can report back that commands are added ("action": "add") or removed ("action": "remove").
This is easily visible when interacting with an agent. When you start typing a command, you'll see an autocomplete list appear above the command prompt with commands that include what you've typed so far. When you load a new command and register it back with Mythic in this way, that new command will also appear in that autocomplete list.
Keystrokes are sent back from the agent to the Mythic server
Agents can report back keystrokes at any time. There are three components to a keystroke report:
user - the user that is being keylogged
window_title - the title of the window to which the keystrokes belong
keystrokes - the actual recorded keystrokes
Having the information broken out into these separate pieces allows Mythic to group entries by user and window_title for easier readability.
If the agent doesn't know the user or the window_title fields, they should still be included, but can be empty strings. If empty strings are reported for either of these two fields, they will be replaced with "UNKNOWN" in Mythic.
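A sketch of a keystroke report using those three components; the grouping keyword is assumed to be keylogs, and sending multiple entries in the array is one way to cover several users/windows in a single message. Values are illustrative.

```python
import json

# Keystroke report: each entry carries the three components described above.
# The "keylogs" grouping keyword is an assumption for illustration.
message = {
    "action": "post_response",
    "responses": [
        {
            "task_id": "placeholder-task-uuid",
            "keylogs": [
                {
                    "user": "exampleuser",
                    "window_title": "Untitled - Notepad",
                    "keystrokes": "example typed text",
                },
                # empty strings are allowed and become "UNKNOWN" in Mythic
                {"user": "", "window_title": "", "keystrokes": "more text"},
            ],
        }
    ],
}
print(json.dumps(message))
```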
What happens if you need to send keystrokes for multiple users/windows?
You probably noticed as you used Mythic that there's a status associated with your task. This status goes through a variety of words/colors depending on where things are in the pipeline and what the agent has done with the task. This provides a way for the operator to know what's happening behind the scenes.
By default, a task goes through the following stages with the following statuses:
preprocessing - The task is being sent to the Payload Type container for processing (parsing arguments, confirming values, doing RPC functionality to register files, etc.)
submitted - The task is now ready for an agent to pick it up
processing - The task has been picked up by an agent, but there hasn't been any response back yet
processed - The task has at least one response, but the agent hasn't marked it as done yet
completed - The task is marked as completed by the agent and there wasn't an error in execution
error:* - The agent reported back a status of "error"
The agent can set the status of the task to anything it wants as part of its normal post_response
information. Similarly, in a task's create_tasking
function, you're free to set the task.status
value. Anything of the error:*
format will show up as red in the Mythic UI as an error for the user.
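For example, a create_tasking-style function might flag a validation failure by setting an error status. This is a sketch with a local stand-in for the task object, not the real Mythic class:

```python
# Sketch of setting a custom error status during tasking. TaskStub stands in
# for the task object Mythic passes to create_tasking.
class TaskStub:
    def __init__(self, args: dict):
        self.args = args
        self.status = "preprocessing"

def create_tasking(task: TaskStub) -> TaskStub:
    # ... argument validation / RPC registration would happen here ...
    if "path" not in task.args:
        # any "error:*" status renders red in the Mythic UI
        task.status = "error: missing required 'path' argument"
    return task
```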
Token awareness and Token tasking
Mythic supports Windows tokens in two ways: tracking which tokens are viewable and tracking which tokens are usable by the callback. The difference here, besides how they show up in the Mythic interface, is what the agent can do with the tokens. The idea is that you can list out all tokens on a computer, but that doesn't mean that the agent has a handle to the token for use with tasking.
As part of the normal responses sent back from the agent, there's an additional key you can supply, tokens
that is a list of token objects that can be stored in the Mythic database. These will be viewable from the "Search" -> "Tokens" page, but are not leveraged as part of further tasking.
token_id
is simply a way for your callback to refer to the various tokens it interacts with. You'll use this token_id
to register a token with your callback for use in subsequent tasking.
If you want to be able to leverage tokens as part of your tasking, you need to register those tokens with Mythic and the callback. This can be done as part of the normal post_response
responses like everything else. The key here is to identify the right token - specifically via the unique combination of token_id and host.
If the token 12345 hasn't been reported via the tokens key, then it will be created and then associated with the callback.
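A hedged sketch of that registration message. The callback_tokens key and its fields are assumptions based on the description above; the host name and token id are illustrative.

```python
# Hypothetical post_response entry registering a token with the callback
# ("action": "add"); "remove" would deregister it from the callback.
def build_token_registration(task_id: str, host: str, token_id: int) -> dict:
    return {
        "task_id": task_id,
        "callback_tokens": [
            {"action": "add", "host": host, "token_id": token_id}
        ],
    }
```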
Once the token is created and associated with the callback, there will be a new dropdown menu next to the tasking bar at the bottom of the screen where you can select to use the default token or one of the new ones specified. When you select a token to use in this way when issuing tasking, the create_tasking
function's task
object will have a new attribute, task.token
that contains a dictionary of all the token's associated attributes. This information can then be used to send additional data with the task down to the agent to indicate which tokens should be used for the task as part of your parameters.
Additionally, when getting tasks that have tokens associated with them, the TokenId
value will be passed down to the agent as an additional field:
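A hedged sketch of a fetched task with a token attached; field names other than token are the usual task fields and the values here are illustrative.

```python
# Hypothetical task message as pulled down by the agent when the operator
# selected a token; Mythic adds the token identifier as an extra field.
example_task = {
    "command": "whoami",
    "parameters": "",
    "id": "task-uuid-here",
    "timestamp": 1700000000,
    "token": 12345,  # TokenId the agent should use for this task
}
```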
For the file browser, there are a few capabilities that need to be present and implemented correctly if you want to allow users to list, remove, download, or upload from the file browser. Specifically:
File Listing - there needs to be a command marked as supported_ui_features = ["file_browser:list"] with your payload type that sends information in the proper format.
File Removal - there needs to be a command marked as supported_ui_features = ["file_browser:remove"] with your payload type that reports back success properly.
File Download - there needs to be a command marked as supported_ui_features = ["file_browser:download"] with your payload type.
File Upload - there needs to be a command marked as supported_ui_features = ["file_browser:upload"] with your payload type.
These components together allow an operator to browse the file system, request listings of new directories, track downloaded files, upload new files, and even remove files. Let's go into each of the components and see what they need to do specifically.
There are two components to file listing that need to be handled - what the file browser sends as initial tasking to the command marked as supported_ui_features = ["file_browser:list"]
and what data is sent back to Mythic for processing.
When doing a file listing via the file browser, the command_line for tasking will always be the following JSON string (this is what gets sent as the self.command_line
argument in your command's parse_arguments
function):
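A hedged sketch of that JSON payload as a Python dict; the key names follow the description below, and the host/path values are illustrative.

```python
# Hypothetical command_line JSON the file browser sends for a listing task.
file_browser_tasking = {
    "host": "spooky.local",            # which host to list files on
    "path": "/Users",                  # the parent folder
    "file": "itsafeature",             # the file/folder inside that parent
    "full_path": "/Users/itsafeature", # the combined full path
}
```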
This might be different than the normal parameters you take for the command marked as supported_ui_features = ["file_browser:list"]
. Since the payload type's command handles the processing of arguments itself, we can handle this case and transform the parameters as needed. For example, the apfell
payload takes a single parameter path
as an argument for file listing, but that doesn't match up with what the file browser sends. So, we can modify it within the async def parse_arguments
function:
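A hedged Python reconstruction of that pattern. ArgsStub stands in for the real TaskArguments plumbing; add_arg and load_args_from_json_string mirror helper names assumed from the real API.

```python
import json

# Stand-in for a TaskArguments class whose parse_arguments handles both
# normal operator input (a plain path) and file-browser JSON tasking.
class ArgsStub:
    def __init__(self, command_line: str):
        self.command_line = command_line
        self.args = {}

    def add_arg(self, name, value):
        self.args[name] = value

    def load_args_from_json_string(self, s):
        self.args.update(json.loads(s))

    async def parse_arguments(self):
        if len(self.command_line) > 0 and self.command_line[0] == "{":
            temp = json.loads(self.command_line)
            if "host" in temp:
                # tasking came from the file browser: combine folder + file
                self.add_arg("path", temp["path"] + "/" + temp["file"])
            else:
                self.load_args_from_json_string(self.command_line)
        else:
            self.add_arg("path", self.command_line)
```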
In the above example we check if we are given a JSON string or not by checking that the self.command_line
length is greater than 0 and that the first character is a {
. We can then parse it into a Python dictionary and check for the two cases. If we're given something with host
in it, then it must come from the file browser instead of the operator normally, so we take the supplied parameters and add them to what the command normally needs. In this case, since we only have the one argument path
, we take the path
and file
variables from the file browser dictionary and combine them for our path variable.
Now that we know how to translate file browsing file listing tasking to whatever our command needs, what kind of output do we need to send back?
We have another component to the post_response
for agents.
As a shortcut, if the file you're removing is on the same host as your callback, then you can omit the host
field or set it to ""
and Mythic will automatically add in your callback's host information instead.
If you're listing out the top level folder (/ on Linux/macOS, or a drive like C:\ on Windows), then the parent path should be "" or null.
Most of this is pretty self-explanatory, but there are some nuances. Only list out the inner files for the initial folder/file listed (i.e. don't recursively do this listing). For the files
array, you don't need to include host
or parent_path
because those are both inferred based on the info outside the files
array, and the success
flag won't be included since you haven't tried to actually list out the contents of any sub-folders. The permissions JSON blob allows you to include any additional information you want to show the user in the file browser. For example, with the apfell
agent, this blob includes information about extended attributes, posix file permissions, and user/group information. Because this is heavily OS specific, there's no requirement here other than it being a JSON blob (not a string).
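A hedged sketch of the file_browser component inside a post_response; field names follow the description above and the values are illustrative.

```python
# Hypothetical file_browser listing of /Users on host spooky.local. Entries
# in "files" omit host/parent_path (inferred from the outer data) and
# success (their contents weren't actually listed).
def build_file_browser_listing(task_id: str) -> dict:
    return {
        "task_id": task_id,
        "user_output": "listed /Users for the file browser",
        "file_browser": {
            "host": "spooky.local",
            "is_file": False,
            "permissions": {"posix": "drwxr-xr-x"},  # free-form JSON blob
            "name": "Users",
            "parent_path": "/",  # "" or null when listing the root itself
            "success": True,
            "access_time": 1700000000,
            "modify_time": 1700000000,
            "size": 96,
            "files": [
                {
                    "is_file": True,
                    "permissions": {"posix": "-rw-r--r--"},
                    "name": "notes.txt",
                    "access_time": 1700000000,
                    "modify_time": 1700000000,
                    "size": 1024,
                },
            ],
        },
    }
```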
By having this information in another component within the responses array, you can display any information to the user that you want without being forced to also display this listing each time to the user. You can if you want, but it's not required. If you wanted to do that, you could simply turn all of the file_browser
data into a JSON string and put it in the user_output
field. In the above example, the user output is a simple message stating why the tasking was issued, but it could be anything (even left blank).
There's a special key in there that doesn't really match the rest of the normal "file" data in that file_browser response - update_deleted
. If you include this key as True
and your success
flag is True
, then Mythic will use the data presented here to update which files are deleted.
By default, if you list the contents of ~/Downloads
twice, then the view you see in the UI is a merge of all the data from those two instances of listing that folder. However, that might not always be what you want. For instance, if a file was deleted between the first and second listing, that deletion won't be reflected in the UI because the data is simply merged together. If you want that delete to be automatically picked up and reported as a deleted file, use the update_deleted
flag to say to Mythic "hey, this should be everything that's in the folder, if you have something else that used to be there but I'm not reporting back right now, assume it's deleted".
You might be wondering why this isn't just the default behavior for listing files. There are two other main scenarios that we want to support that run counter to this idea - paginated results (only return 20 files at a time) and filtered results (only return files in the folder that end in .txt). In these cases, we don't want the rest of the data to be automatically marked as deleted because we're clearly not returning the full picture of what's in a folder. That's why it's an optional flag to opt in to the automatic updates. If you want to be explicit about things though (for example, if you delete a file and want to report it back without having to re-list the entire contents of the directory), you can use the next section - File Removal.
There are two components to file removal that need to be handled - what the file browser sends as initial tasking to the command marked as supported_ui_features = ["file_browser:remove"]
and what data is sent back to Mythic for processing.
This is the exact same as the supported_ui_features = ["file_browser:list"]
and the File Browser section above.
Ok, so we listed files and tasked one for removal. Now, how is that removed file tracked back to the file browsing to mark it as removed? Nothing too crazy, there's another field in the post_response
:
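A hedged sketch of that structure (host and path values are illustrative):

```python
# Hypothetical post_response entry reporting deleted files so Mythic can
# mark the matching file-browser entries as removed.
def build_removed_files(task_id: str) -> dict:
    return {
        "task_id": task_id,
        "user_output": "removed file",
        "removed_files": [
            {"host": "spooky.local", "path": "/Users/itsafeature/secret.txt"}
        ],
    }
```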
This removed_files
section simply returns an array of dictionaries that spell out the host and paths of the files that were deleted. On the back-end, Mythic takes these two pieces of information and searches the file browsing data to see if there's a matching path
for the specified host
in the current operation that it knows about. If there is, it gets marked as deleted
and in the UI you'll see a small trashcan next to it along with a strikethrough.
This response isn't ONLY for when a file is removed through the file browser though. You can return this from your normal removal commands as well and if there happens to be a matching file in the browser, it'll get marked as removed. This allows you to simply type things like rm /path/to/file
on the command-line and still have this information tracked in the file browser without requiring you to remove it through the file browser specifically.
There are two components to file download that need to be handled - what the file browser sends as initial tasking to the command marked as supported_ui_features = ["file_browser:download"]
and what data is sent back to Mythic for processing.
This is the exact same as the supported_ui_features = ["file_browser:list"]
and the File Browser section above.
There's nothing special here outside of normal file download processes described in the Download section. When a new file is tracked within Mythic, there's a "host" field in addition to the full_path
that is reported. This information is used to look up if there's any matching browser objects and if so, they're linked together.
Having a command marked as supported_ui_features = ["file_browser:upload"]
will cause that command's parameters to pop-up and allow the operator to supply in the normal file upload information. To see this information reflected in the file browser, the user will then need to issue a new file listing.
This section describes new Payload Types
You want to create a new agent that fully integrates with Mythic. Since everything in Mythic revolves around Docker containers, you will need to ultimately create one for your payload type along with some specific files/folder structures. This can be done with Docker containers on the same host as the Mythic server or with an external VM or host (see the section below on turning a VM into a Mythic container).
Optionally: you can also have a custom C2 profile that this agent supports - see the C2 Profile development documentation for that piece.
The first step is to look into the first-steps page to get everything generally squared away and see the order of things that need your attention. The goal is to get you up and going as fast as possible!
There are a lot of things involved in making your own agent, c2 profile, or translation container. For the bare-bones implementations, you'll have to do very little, but Mythic tries to allow a lot of customizability. To help with this, Mythic uses PyPi packages for a lot of its configuration/connections and Docker images for distribution. This makes it easy for somebody to just kind of "hook up" and get going. If you don't want to do that though, all of this code is available to you within the MythicMeta organization on GitHub.
This organization is broken out into several main repos, including:
Mythic_Scripting
- This holds all of the code for the PyPi package, mythic
, that you can use to script up actions.
MythicContainerPyPi
- This holds all of the code for the mythic_container PyPi package that payload type, C2 profile, and translation containers use to sync with Mythic.
MythicContainer
- This holds all of the same sort of code as the PyPi package, but is for agents that wish to define commands in Golang instead of Python.
The rest of these sections talk about specifics for the docker container for your new agent.
There is one docker container base image, itsafeaturemythic/mythic_base_image
, that contains general environments for Python, Golang, .NET, and even includes a macOS SDK. This allows two payload types that might share the same language to still have different environment variables, build paths, tools installed, etc. Docker containers come into play for a few things:
Metadata about the payload type (this is in the form of python classes or Golang structs)
The payload type code base (whatever language your agent is in)
The code to create the payload based on all of the user supplied input
Metadata about all of the commands associated with that payload type
The code for all of those commands (whatever language your agent is in)
Browser scripts for commands and support scripts for the payload type as a whole (JavaScript)
The code to take user supplied tasking and turn it into tasking for your agent
This part has to happen outside of the web UI at the moment. The web UI is running in its own docker container, and as such, can't create, start, or stop entire docker containers. So, make sure this part is done via CLI access to where Mythic is installed.
Within Mythic/InstalledServices/
make a new folder that matches the name of your agent. Inside of this folder make a file called Dockerfile
. This is where you have a choice to make - either use the default docker container as a starting point and make your additions from there, or use a different container base.
The default container has the core pieces for many different languages. Start your Dockerfile
off with:
On the next lines, just add in any extra things you need for your agent to properly build, such as:
This all happens in a script for Docker, so if a command might prompt for input (like apt-get), make sure to handle that automatically or your dependencies won't get installed properly
If you're curious what else goes into these containers, look in the docker-templates
folder within the Mythic repository.
The Mythic/InstalledServices/[agent name]
folder is mapped to /Mythic
in the docker container. Editing the files on disk results in the edits appearing in the docker container and vice versa.
Most things are left up to you, the agent developer, to decide how best to structure your code and agent code. However, there's a few files that need to be in certain places for Mythic's Docker container to properly kick off your container:
Mythic/InstalledServices/[agent name]/Dockerfile
<-- this defines the docker information for your agent and must be here for docker-compose to properly discover and build your container
Technically, that's all that's required. Within the Dockerfile
you will then need to do whatever is needed to kick off your main program that imports either the MythicContainer PyPi package or the MythicContainer Golang package. As some examples, here's what you can do for Python and Golang:
Mythic/InstalledServices/[agent name]/main.py
<-- if you plan on using Python as your definition language, this main.py
file is what will get executed by Python 3.10.
At that point, your main.py
file will import any other folders/files needed to define your agent/commands and import the mythic_container
pypi package. Here's an example from Apollo:
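Since the Apollo example isn't reproduced here, here's a hedged sketch of a minimal main.py entrypoint. The agent_functions import is an assumed module name for wherever your builder and command definitions live; the mythic_container call follows that package's documented startup pattern. This fragment only runs inside a container with the mythic_container package installed.

```python
import mythic_container
# pull in the payload/command definitions so their classes register
from agent_functions import *

# hand control to the mythic_container service loop, which connects to
# rabbitMQ and syncs the definitions with the Mythic server
mythic_container.mythic_service.start_and_run_forever()
```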
Things are a little different here as we're going from source code to compiled binaries. To keep things in a simplified area for building, running, and testing, a common file like a Makefile
is useful.
Mythic/InstalledServices/[agent name]/Makefile
From here, that make file can have different functions for what you need to do:
Here's an example of the Makefile
that allows you to specify custom environment variables when debugging locally, but also support Docker building:
Pay attention to the build
and run
commands - once you're done building your code, notice that it's copied from the current directory to /
in the Docker Image. This is because when the container starts, your source code is mapped into the Docker image, thus discarding any changes you made to that directory while building. This is also why the run
function copies the binary back into the current directory and executes it there. The reason it's executed this way instead of from /
is so that pathing and local folders are located where you expect them to be in relation to your binary.
To go along with that, a sample Docker file for Golang is as follows:
It's very similar to the Python version, except it runs make build
when building and make run
when running the code. The Python version doesn't need a Makefile
or multiple commands because it's an interpreted language.
There isn't too much that's different if you're going to use your own container, it's just on you to make sure that python3.10/Golang 1.20 is up and running and the entrypoint is set properly.
If you're having this hook up through Mythic via mythic-cli
and the one docker-compose
file, the dynaconf
and mythic-container
(or MythicContainer golang package) are the ones responsible for the Mythic/.env
configuration being applied to your container.
If your container/service is running on a different host than the main Mythic instance, then you need to make sure the rabbitmq_password
is shared over to your agent as well. By default, this is a randomized value stored in the Mythic/.env
file and shared across containers, but you will need to manually share this over with your agent either via an environment variable (MYTHIC_RABBITMQ_PASSWORD
) or by editing the rabbitmq_password
field in your rabbitmq_config.json file.
To start your new payload type docker container, you need to first make sure that the docker-compose file is aware of it (assuming you didn't install it via mythic-cli install github <url>
). You can simply do mythic-cli add <payload name>
. Now you can start just that one container via mythic-cli start <payload name>
.
Your container should pull down all the necessary files, run the necessary scripts, and finally start. If it started successfully, you should see it listed in the payload types section when you run sudo ./mythic-cli status
.
If you go back in the web UI at this point, you should see the red text next to your payload type change to green to indicate that it's now running. If it's not, then something went wrong along the way. You can use the sudo ./mythic-cli logs payload_type_name
to see the output from the container to potentially troubleshoot.
The containers will automatically sync all of their information with the Mythic server when they start, so the first time the Mythic server gets a message from a container it doesn't know about, it'll ask to sync. Similarly, as you do development and restart your Payload Type container, updates will automatically get synced to the main UI.
If you want to go interactive within the container to see what's up, use the following command:
The container has to be running though.
There are scenarios in which you need a Mythic container for an agent, but you can't (or don't want) to use the normal docker containers that Mythic uses. This could be for reasons like:
You have a custom build environment that you don't want to recreate
You have specific kernel versions or operating systems you're wanting to develop with
So, to leverage your own custom VM or physical computer into a Mythic recognized container, there are just a few steps.
External agents need to connect to mythic_rabbitmq
in order to send/receive messages. They also need to connect to the mythic_server
to transfer files and potentially use gRPC. By default, these containers are bound to localhost only. In order to have an external agent connect up, you will need to adjust this in the Mythic/.env
file to have RABBITMQ_BIND_LOCALHOST_ONLY=false
and MYTHIC_SERVER_BIND_LOCALHOST_ONLY=false
and restart Mythic (sudo ./mythic-cli restart
).
Install python 3.10+ (or Golang 1.20) in the VM or on the computer
Create a folder on the computer or VM (let's call it path /pathA
). Essentially, your /pathA
path will be the new InstalledServices/[agent name]
folder. Create a sub folder for your actual agent's code to live, like /pathA/agent_code
. You can create a Visual Studio project here and simply configure it however you need.
Your command function definitions and payload definition are also helpful to have in a folder, like /pathA/agent_functions
.
Edit the /pathA/rabbitmq_config.json
with the parameters you need:
the mythic_server_host
value should be the IP address of the main Mythic install
the rabbitmq_host
value should be the IP address of the main Mythic install unless you're running rabbitmq on another host.
You'll need the password of rabbitmq from your Mythic instance. You can either get this from the Mythic/.env
file, by running sudo ./mythic-cli config get rabbitmq_password
, or if you run sudo ./mythic-cli config payload
you'll see it there too.
External agents need to connect to mythic_rabbitmq
in order to send/receive messages. By default, this container is bound on localhost only. In order to have an external agent connect up, you will need to adjust this in the Mythic/.env
file to have RABBITMQ_BIND_LOCALHOST_ONLY=false
and restart Mythic (sudo ./mythic-cli restart
). You'll also need to set MYTHIC_SERVER_BIND_LOCALHOST_ONLY=false
.
In the file where you define your payload type is where you define what it means to "build" your agent.
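As a hedged sketch of that shape: BuildResponse below is a local stand-in for the class the mythic_container package provides, and the essential step is stamping the user's build configuration (here, just the payload UUID) into the agent source before returning the finished bytes.

```python
from dataclasses import dataclass

@dataclass
class BuildResponse:  # stand-in for mythic_container's BuildResponse
    status: str = "error"
    payload: bytes = b""
    build_message: str = ""

# The real build function receives a self object carrying the selected C2
# profiles, build parameters, and UUID; this sketch takes them directly.
async def build(agent_source: str, payload_uuid: str) -> BuildResponse:
    resp = BuildResponse()
    # stamp the Mythic-generated UUID into the agent before shipping it
    stamped = agent_source.replace("%UUID%", payload_uuid)
    resp.payload = stamped.encode()
    resp.status = "success"
    resp.build_message = "Successfully built!"
    return resp
```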
Run python3.10 main.py
and now you should see this container pop up in the UI
If you already had the corresponding payload type registered in the Mythic interface, you should now see the red text turn green.
You should see output similar to the following:
If your Mythic instance has a randomized password for rabbitmq_password
, then you need to make sure that the password from Mythic/.env
after you start Mythic for the first time is copied over to your vm. You can either add this to your rabbitmq_config.json
file or set it as an environment variable (MYTHIC_RABBITMQ_PASSWORD
).
There are a few caveats to this process over using the normal process. You're now responsible for making sure that the right python version and dependencies are installed, and you're now responsible for making sure that the user context everything is running from has the proper permissions.
One big caveat people tend to forget about is paths. Normal containers run on *nix, but you might be doing this dev on Windows. So if you develop everything with hard-coded Windows paths and then want to convert it to a normal Docker container later, that might come back to haunt you.
Whether you're using a Docker container or not, you can load up the code in your agent_code
folder in any IDE you want. When an agent is installed via mythic-cli
, the entire agent folder (agent_code
and mythic
) is mapped into the Docker container. This means that any edits you make to the code are automatically reflected inside of the container without having to restart it (pretty handy). The only caveat here is that modifications to the Python or Golang definition files require you to restart your container to load up the changes: sudo ./mythic-cli start [payload name]. If you're making changes to those from a non-Docker instance, simply stop your python3.10 main.py process and start it again. This effectively forces those files to be loaded up again and re-synced over to Mythic.
If you're doing anything more than a typo fix, you're going to want to test the fixes/updates you've made to your code before you bother uploading it to a GitHub project, re-installing it, creating new agents, etc. Luckily, this can be super easy.
Say you have a Visual Studio project set up in your agent_code
directory and you want to just "run" the project, complete with breakpoints and configurations so you can test. The only problem is that your local build needs to be known by Mythic in some way so that the Mythic UI can look up information about your agent, your "installed" commands, your encryption keys, etc.
To do this, you first need to generate a payload in the Mythic UI (or via Mythic's Scripting). You'll select any C2 configuration information you need, any commands you want baked in, etc. When you click to build, all of that configuration will get sent to your payload type's "build" function in mythic/agent_functions/builder.py
. Even if you don't have your container running or it fails to build, no worries, Mythic will first save everything off into the database before trying to actually build the agent. In the Mythic UI, now go to your payloads page and look for the payload you just tried to build. Click to view the information about the payload and you'll see a summary of all the components you selected during the build process, along with some additional pieces of information (payload UUID and generated encryption keys).
Take that payload UUID and the rest of the configuration and stamp it into your agent_code
build. For some agents this is as easy as modifying the values in a Makefile, for some agents this can all be set in a config
file of some sort, but however you want to specify this information is up to you. Once all of that is set, you're free to run your agent from within your IDE of choice and you should see a callback in Mythic. At this point, you can do whatever code mods you need, re-run your code, etc.
Following from the previous section, if you just use the payload UUID and run your agent, you should end up with a new callback each time. That can be ideal in some scenarios, but sometimes you're doing quick fixes and want to just keep tasking the same callback over and over again. To do this, simply pull the callback UUID and encryption keys from the callback information on the active callbacks page and plug that into your agent. Again, based on your agent's configuration, that could be as easy as modifying a Makefile, updating a config file, or you might have to manually comment/uncomment some lines of code. Once you're reporting back with the callback UUID instead of the payload UUID and using the right encryption keys, you can keep re-running your build without creating new callbacks each time.
The latest container versions and their associated mythic_container PyPi versions can be found on the Mythic README.
pip3 install mythic-container
(this has all of the definitions and functions for the container to sync with Mythic and issue RPC commands). Make sure you get the right version of this PyPi package for the version of Mythic you're using. Alternatively, go get -u github.com/MythicMeta/MythicContainer
for golang.
When your container starts up, it connects to the rabbitMQ broker system. Mythic then tries to look up the associated payload type and, if it can find it, will update the running status. However, if Mythic cannot find the payload type, then it'll issue a "sync" message to the container. Similarly, when a container starts up, the first thing it does upon successfully connecting to the rabbitMQ broker system is to send its own synced data.
This data is simply a JSON representation of everything about your payload - information about the payload type, all the commands, build parameters, command parameters, browser scripts, etc.
Syncing happens at a few different times and there are some situations that can cause cascading syncing messages.
When a payload container starts, it sends all of its synced data down to Mythic
If a C2 profile syncs, it'll trigger a re-sync of all Payload Type containers. This is because a payload type container might say it supports a specific C2, but that c2 might not be configured to run or might not have checked in yet. So, when it does, this re-sync of all the payload type containers helps make sure that every agent that supports the C2 profile is properly registered.
When a Wrapper Payload Type container syncs, it triggers a re-sync of all non-wrapper payload types. This is because a payload type might support a wrapper that doesn't exist yet in Mythic (configured to not start, hasn't checked in yet, etc). So, when that type does check in, we want to make sure all of the wrapper payload types are aware and can update as necessary.
Latest versions can always be found on the Mythic README.
What are the first things to do when creating a new payload type in Mythic?
The first step is to copy down the example repository https://github.com/MythicMeta/ExampleContainers. Inside of the Payload_Type
folder, there are two projects - one for GoLang and one for Python depending on which language you prefer to code your agent definitions in (this has nothing to do with the language of your agent itself, it's simply the language to define commands and parameters). Pick whichever service you're interested in and copy that folder into your Mythic/InstalledServices
folder. If you want your new container to be referenced as my_agent
then copy the folder over to Mythic/InstalledServices/my_agent
.
Docker does not allow capital letters in container names. So, if you plan on using Mythic's mythic-cli
to control and install your agent, then your agent's name can't have any capital letters in it. Only lowercase, numbers, and _. It's a silly limitation by Docker, but it's what we're working with.
The example repository has a single container that offers multiple services (Payload Type, C2 Profile, Translation Container, Webhook, and Logging). While a single container can have all of that, for now we're going to focus on just the payload type piece, so delete the rest of it. For the python_services
folder this would mean deleting the mywebhook
, translator
, and websocket
folders. For the go_services
folder, this would mean deleting the http
, my_logger
, my_webhooks
, no_actual_translation
folders. For both cases, this will result in removing some imports at the top of the remaining main.py
and main.go
files.
For the python_services
folder, we'll update the apfell/agent_functions/builder.py
file. This file can technically be anywhere that main.py
can reach and import, but for convenience it's in a folder, agent_functions
along with all of the command definitions for the agent. Below is an example from that builder that defines the agent:
More information on each component in the file can be found in Payload Type Info. Now you can run sudo ./mythic-cli add my_agent
and sudo ./mythic-cli start my_agent
and you'll see the container build, start, and you'll see it sync with the Mythic server (more about that process at Container Syncing).
Congratulations! You now have a payload type that Mythic recognizes!
Now you'll want to actually configure your Docker Container, look into building your agent, how to declare new commands, how to process tasking to these commands, and finally hooking your agent into all the cool features of Mythic.
Command information is tracked in your Payload Type's container. Each command has its own Python class or GoLang struct. In Python, you leverage CommandBase and TaskArguments to define information about the command and information about the command's arguments.
CommandBase defines the metadata about the command as well as any pre-processing functionality that takes place before the final command is ready for the agent to process. This class includes the create_go_tasking (Create_Tasking) and process_response (Process Response) functions.
TaskArguments does two things:
defines the parameters that the command needs
verifies / parses out the user supplied arguments into their proper components
this includes taking user supplied free-form input (like arguments to a sleep command - 10 4) and parsing it into well-defined JSON that's easier for the agent to handle (like {"interval": 10, "jitter": 4}). This can also take user-supplied dictionary input and parse it out into the rightful CommandParameter objects.
This also includes verifying all the necessary pieces are present. Maybe your command requires a source and destination, but the user only supplied a source. This is where that would be determined and error out for the user. This prevents you from requiring your agent to do that sort of parsing in the agent.
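As an illustration of that parsing step, here is a minimal, framework-free sketch of turning free-form sleep input into structured JSON (the function name is hypothetical; a real command would do this inside its TaskArguments subclass):

```python
import json

def parse_sleep_args(command_line: str) -> str:
    # Turn free-form input like "10 4" into structured JSON
    # that's easier for the agent to handle.
    parts = command_line.split()
    if not parts:
        # This is where required-argument verification errors out for the user.
        raise ValueError("sleep requires at least an interval")
    args = {
        "interval": int(parts[0]),
        "jitter": int(parts[1]) if len(parts) > 1 else 0,
    }
    return json.dumps(args)

print(parse_sleep_args("10 4"))  # → {"interval": 10, "jitter": 4}
```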
If you're curious how this all plays out in a diagram, you can find one here: #operator-submits-tasking.
Creating your own command requires extending this CommandBase class (i.e. class ScreenshotCommand(CommandBase)) and providing values for all of the above components.
cmd
- this is the command name. The name of the class doesn't matter, it's this value that's used to look up the right command at tasking time
needs_admin
- this is a boolean indicator for if this command requires admin permissions
help_cmd
- this is the help information presented to the user if they type help [command name] from the main active callbacks page
description
- this is the description of the command. This is also presented to the user when they type help.
supported_ui_features
- This is an array of values that indicates where this command might be used within the UI. For example, from the active callbacks page, you see a table of all the callbacks. As part of this, there's a dropdown you can use to automatically issue an exit task to the callback. How does Mythic know which command to actually send? It's this array that dictates that. The following are used by the callback table, file browser, and process listing, but you're able to add in any that you want and leverage them via browser scripts for additional tasking:
supported_ui_features = ["callback_table:exit"]
supported_ui_features = ["file_browser:list"]
supported_ui_features = ["process_browser:list"]
supported_ui_features = ["file_browser:download"]
supported_ui_features = ["file_browser:remove"]
supported_ui_features = ["file_browser:upload"]
version
- this is the version of the command you're creating/editing. This allows a helpful way to make sure your commands are up to date and tracking changes
argument_class
- this correlates this command to a specific TaskArguments class for processing/validating arguments
attackmapping
- this is a list of strings to indicate MITRE ATT&CK mappings. These are in "T1113" format.
agent_code_path
is automatically populated for you, just like in building the payload. This allows you to access code files from within commands in case you need to access files, functions, or create new pieces of payloads. This is really useful for a load command so that you can find and read the functions you're wanting to load in.
You can optionally add in the attributes variable. This is a new class called CommandAttributes where you can set whether or not your command supports being injected into a new process (some commands, like cd or exit, don't make sense to inject, for example). You can also provide a list of supported operating systems. This is helpful when you have a payload type that might compile into multiple different operating system types, but not all the commands work for all the possible operating systems. Instead of having to write "not implemented" or "not supported" function stubs, this allows you to completely filter this capability out of the UI so users don't even see it as an option.
Available options are:
supported_os
an array of SupportedOS fields (ex: [SupportedOS.MacOS])
spawn_and_injectable
is a boolean to indicate if the command can be injected into another process
builtin
is a boolean to indicate if the command should be always included in the build process and can't be unselected
load_only
is a boolean to indicate if the command can't be built in at the time of payload creation, but can be loaded in later
suggested_command
is a boolean to indicate if the command should be pre-selected for users when building a payload
filter_by_build_parameter
is a dictionary of parameter_name:value entries for what's required of the agent's build parameters. This is useful for when some commands are only available depending on certain values when building your agent (such as agent version).
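To illustrate how attribute-based filtering plays out conceptually, here is a small self-contained sketch (class, command, and OS names are made up; the real filtering happens inside Mythic's UI using the CommandAttributes you declare):

```python
from dataclasses import dataclass, field

@dataclass
class CommandAttributesSketch:
    # Stand-in for Mythic's CommandAttributes; an empty supported_os
    # list means "works on every OS".
    supported_os: list = field(default_factory=list)
    spawn_and_injectable: bool = True
    builtin: bool = False

# Hypothetical command table for an agent.
commands = {
    "screenshot": CommandAttributesSketch(supported_os=["MacOS", "Windows"]),
    "exit": CommandAttributesSketch(spawn_and_injectable=False, builtin=True),
    "keylog_linux": CommandAttributesSketch(supported_os=["Linux"]),
}

def commands_for_os(os_name: str):
    # Mimics how the UI hides commands whose attributes exclude the target OS,
    # instead of showing "not supported" stubs.
    return sorted(
        name for name, attrs in commands.items()
        if not attrs.supported_os or os_name in attrs.supported_os
    )

print(commands_for_os("MacOS"))  # → ['exit', 'screenshot']
```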
You can also add in any other values you want for your own processing. These are simply key=value pairs of data that are stored. Some people use this to identify if a command has a dependency on another command. This data can be fetched via RPC calls for things like a load command to see what additional commands might need to be included.
This ties into the CommandParameter fields choice_filter_by_command_attributes, choices_are_all_commands, and choices_are_loaded_commands.
The create_go_tasking function is very broad and covered in Create_Tasking.
The process_response function is similar, but allows you to specify that data shouldn't automatically be processed by Mythic when an agent checks in, but instead should be passed to this function for further processing and to use Mythic's RPC functionality to register the results into the system. The data passed here comes from the post_response message (Process Response).
The script_only flag indicates that this command will be used strictly for things like issuing subtasking, but will NOT be compiled into the agent. The nice thing here is that you can now create commands that don't need to be compiled into the agent for you to execute. These tasks never enter the "submitted" stage for an agent to pick up - instead they simply go into the create_tasking scenario (complete with subtasks and full RPC functionality) and then go into a completed state.
The TaskArguments class defines the arguments for a command and defines how to parse the user supplied string so that we can verify that all required arguments are supplied. Mythic now tracks where tasking came from and can automatically handle certain instances for you via a tasking_location field, which has the following values:
command_line
- this means that the input you're getting is just a raw string, like before. It could be something like x86 13983 200 with a series of positional parameters for a command, it could be {"command": "whoami"} as a JSON string version of a dictionary of arguments, or anything else. In this case, Mythic really doesn't know enough about the source of the tasking or the contents of the tasking to provide more context.
parsed_cli
- this means that the input you're getting is a dictionary that was parsed by the new web interface's CLI parser. This is what happens when you type something on the command line for a command that has arguments (ex: shell whoami or shell -command whoami). Mythic can successfully parse out the parameters you've given into a single parameter_group and gives you a dictionary of data.
modal
- this means that the input you're getting is a dictionary that came from the tasking modal. Nothing crazy here, but it does at least mean that there shouldn't be any silly shenanigans with potential parsing issues.
browserscript
- if you click a tasking button from a browserscript table and that tasking button provides a dictionary to Mythic, then Mythic can forward that down as a dictionary. If the tasking button from a browserscript table submits a String instead, then that gets treated as command_line in terms of parsing.
With this ability to track where tasking is coming from and what form it's in, an agent's command file can choose to parse this data differently. By default, all commands must supply a parse_arguments function in their associated TaskArguments subclass. If you do nothing else, then all of these various forms will get passed to that function as strings (if it's a dictionary, it'll get converted into a JSON string). However, you can provide another function, parse_dictionary, that can handle specifically the cases of parsing a given dictionary into the right CommandParameter objects as shown below:
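A rough sketch of that split, using a stand-in class rather than the real TaskArguments base (a real subclass would populate CommandParameter objects instead of a plain dict, and parse_arguments/parse_dictionary would be async):

```python
import json

class SleepArgumentsSketch:
    # Stand-in for a TaskArguments subclass for a hypothetical sleep command.
    def __init__(self, command_line: str, tasking_location: str = "command_line"):
        self.command_line = command_line
        self.tasking_location = tasking_location
        self.args = {"interval": 0, "jitter": 0}

    def parse_dictionary(self, supplied: dict):
        # Dictionary input (modal / parsed_cli / browserscript) maps straight in.
        for key in self.args:
            if key in supplied:
                self.args[key] = int(supplied[key])

    def parse_arguments(self):
        # Free-form command_line input: could be JSON or positional parameters.
        if self.command_line.startswith("{"):
            self.parse_dictionary(json.loads(self.command_line))
        else:
            parts = self.command_line.split()
            self.args["interval"] = int(parts[0])
            if len(parts) > 1:
                self.args["jitter"] = int(parts[1])

a = SleepArgumentsSketch("10 4")
a.parse_arguments()
print(a.args)  # → {'interval': 10, 'jitter': 4}
```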
In self.args we define an array of our arguments and what they should be, along with default values if none were provided.
In parse_arguments we parse the user supplied self.command_line into the appropriate arguments. The hard part comes when you allow the user to type arguments free-form and then must parse them out into the appropriate pieces.
The main purpose of the TaskArguments class is to manage arguments for a command. It handles parsing the command_line string into CommandParameters, defining the CommandParameters, and providing an easy interface for updating/accessing/adding/removing arguments as needed.
As part of the TaskArguments subclass, you have access to the following pieces of information:
self.command_line
- the parameters sent down for you to parse
self.raw_command_line
- the original parameters that the user typed out. This is useful in case you have additional pieces of information to process or don't want information processed into the standard JSON/Dictionary format that Mythic uses.
self.tasking_location
- this indicates where the tasking came from
self.task_dictionary
- this is a dictionary representation of the task you're parsing the arguments for. You can see things like the initial parameter_group_name that Mythic parsed for this task, the user that issued the task, and more.
self.parameter_group_name
- this allows you to manually specify what the parameter group name should be. Maybe you don't want Mythic to do automatic parsing to determine the parameter group name, maybe you have additional pieces of data you're using to determine the group, or maybe you plan on adjusting it later on. Whatever the case might be, if you set self.parameter_group_name = "value", then Mythic won't continue trying to identify the parameter group based on the current parameters with values.
The class must implement the parse_arguments method and define the args array (it can be empty). This parse_arguments method is the one that allows users to supply "short hand" tasking and still parse out the parameters into the required JSON structured input. If you have defined command parameters though, the user can supply the required parameters on the command line (via -commandParameterName) or via the popup tasking modal (via shift+enter).
When syncing the command with the UI, Mythic goes through each class that extends the CommandBase, looks at the associated argument_class, and parses that class's args array of CommandParameters to create the pop-up in the UI. While the TaskArguments' parse_arguments method simply parses the user supplied input into JSON, it's the CommandParameter class that actually verifies that every required parameter has a value, that all the values are appropriate, and that default values are supplied if necessary.
CommandParameters, similar to BuildParameters, provide information for the user via the UI and validate that the values are all supplied and appropriate.
name
- the name of the parameter that your agent will use. cli_name is an optional variation that you want users to type when typing out commands on the command line, and display_name is yet another optional name to use when displaying the parameter in a popup tasking modal.
type
- this is the parameter type. The valid types are:
String
Boolean
File
Upload a file through your browser. In your create tasking though, you get a String UUID of the file that can be used via SendMythicRPC* calls to get more information about the file or the file contents
Array
An Array of string values
ChooseOne
ChooseMultiple
An Array of string values
Credential_JSON
Select a specific credential that's registered in the Mythic credential store. In your create tasking, get a JSON representation of all data for a credential
Number
Payload
Select a payload that's already been generated and get the UUID for it. This is helpful for using that payload as a template to automatically generate another version of it to use as part of lateral movement or spawning new agents.
ConnectionInfo
Select the Host, Payload/Callback, and P2P profile for an agent or callback that you want to link to via a P2P mechanism. This allows you to generate random parameters for payloads (such as named-pipe names) and not require you to remember them when linking. You can simply select them and get all of that data passed to the agent.
When this is up in the UI, you can also track new payloads on hosts in case Mythic isn't aware of them (maybe you moved and executed payloads in a method outside of Mythic). This allows Mythic to track that payload X is now on host Y and you can use the same selection process as the first bullet to filter down and select it for linking.
LinkInfo
Get a list of all active/dead P2P connections for a given agent. Selecting one of these links gives you all the same information that you'd get from the ConnectionInfo parameter. The goal here is to allow you to easily select to "unlink" from an agent or to re-link to a very specific agent on a host that you were previously connected to.
description
- this is the description of the parameter that's presented to the user when the modal pops up for tasking
choices
- this is an array of choices if the type is ChooseOne or ChooseMultiple
If your command needs you to pick from the set of commands (rather than a static set of values), then there are a few other components that come into play. If you want the user to be able to select any command for this payload type, then set choices_are_all_commands to True. Alternatively, if you only want the user to choose from commands that are already loaded into the callback, then set choices_are_loaded_commands to True. As a modifier to either of these, you can set choice_filter_by_command_attributes to filter down the options presented to the user even more, based on the parameters of the command's attributes parameter. This would allow you to limit the user's list down to commands that are loaded into the current callback that support MacOS, for example. An example of this would be:
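Since the original example isn't reproduced here, the following self-contained sketch shows the selection logic conceptually - choosing from loaded commands and narrowing by a supported_os attribute (all names are stand-ins; in real code you'd set choices_are_loaded_commands=True and choice_filter_by_command_attributes on the CommandParameter and Mythic does this for you):

```python
from dataclasses import dataclass, field

@dataclass
class CmdAttrs:
    # Stand-in for a command's attributes; empty supported_os = all OS.
    supported_os: list = field(default_factory=list)

# Hypothetical commands loaded into the current callback.
loaded_commands = {
    "shell": CmdAttrs(),
    "screencapture": CmdAttrs(supported_os=["MacOS"]),
    "mimikatz": CmdAttrs(supported_os=["Windows"]),
}

def choices(loaded: dict, attribute_filter: dict) -> list:
    # Mimics choices_are_loaded_commands=True combined with
    # choice_filter_by_command_attributes={"supported_os": ["MacOS"]}.
    wanted_os = attribute_filter.get("supported_os", [])
    return sorted(
        name for name, attrs in loaded.items()
        if not attrs.supported_os
        or any(os_name in attrs.supported_os for os_name in wanted_os)
    )

print(choices(loaded_commands, {"supported_os": ["MacOS"]}))  # → ['screencapture', 'shell']
```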
validation_func
- this is an additional function you can supply to do additional checks on values to make sure they're valid for the command. If a value isn't valid, an exception should be raised
value
- this is the final value for the parameter; it'll either be the default_value or the value supplied by the user
default_value
- this is a value that'll be set if the user doesn't supply a value
supported_agents
- If your parameter type is Payload, then you're expecting to choose from a list of already created payloads so that you can generate a new one. The supported_agents list allows you to narrow down that dropdown field for the user. For example, if you only want to see agents related to the apfell payload type in the dropdown for this parameter of your command, then set supported_agents=["apfell"] when declaring the parameter.
supported_agent_build_parameters
- allows you to get a bit more granular in specifying which agents you want to show up when you select the Payload parameter type. It might be the case that a command doesn't just need an instance of the atlas payload type, but only works with the Atlas payload type when it's compiled into .NET 3.5. This parameter value could then be supported_agent_build_parameters={"atlas": {"version":"3.5"}}. This value is a dictionary where the key is the name of the payload type and the value is a dictionary of what you want the build parameters to be.
dynamic_query_function
- More information can be found here, but you can provide a function here for ONLY parameters of type ChooseOne or ChooseMultiple where you dynamically generate the array of choices you want to provide the user when they try to issue a task of this type.
Most command parameters are pretty straightforward - the one that's a bit unique is the File type (where a user is uploading a file as part of the tasking). When you're doing your tasking, this value will be the base64 string of the file uploaded.
To help with conditional parameters, Mythic 2.3 introduced parameter groups. Every parameter must belong to at least one parameter group (if one isn't specified by you, then Mythic will add it to the Default group and make the parameter required).
You can specify this information via the parameter_group_info attribute on the CommandParameter class. This attribute takes an array of ParameterGroupInfo objects. Each one of these objects has three attributes: group_name (string), required (boolean), and ui_position (integer). These things together allow you to provide conditional parameter groups to a command.
Let's look at an example - the new apfell agent's upload command now leverages conditional parameters. This command allows you to either:
specify a remote_path and a filename - Mythic then looks up the filename to see if it's already been uploaded to Mythic before. If it has, Mythic can simply use the same file identifier and pass that along to the agent.
specify a remote_path and a file - This is uploading a new file, registering it within Mythic, and then passing along that new file identifier
Notice how both options require the remote_path parameter, but the file and filename parameters are mutually exclusive.
So, the file parameter has one ParameterGroupInfo that calls out the parameter as required. The filename parameter also has one ParameterGroupInfo that calls out the parameter as required; it additionally has a dynamic_query_function that allows the task modal to run a function to populate the selection box. Lastly, the remote_path parameter has TWO ParameterGroupInfo objects in its array - one for each group. This is because the remote_path parameter applies to both groups. You can also see that we have a ui_position specified for these, which means that regardless of which option you're viewing in the tasking modal, the remote_path parameter will be the first parameter shown. This helps make things a bit more consistent for the user.
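To make the grouping concrete, here's a self-contained sketch of those three parameters and their groups using stand-in classes (the real ones come from mythic_container; the group names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ParameterGroupInfoSketch:
    # Stand-in for Mythic's ParameterGroupInfo.
    group_name: str = "Default"
    required: bool = True
    ui_position: int = 1

@dataclass
class CommandParameterSketch:
    name: str
    parameter_group_info: list = field(default_factory=list)

# Stand-ins for the upload command's conditional parameters:
# remote_path belongs to BOTH groups; file and filename are mutually exclusive.
params = [
    CommandParameterSketch("remote_path", [
        ParameterGroupInfoSketch("new_file", required=True, ui_position=1),
        ParameterGroupInfoSketch("existing_file", required=True, ui_position=1),
    ]),
    CommandParameterSketch("file", [
        ParameterGroupInfoSketch("new_file", required=True, ui_position=2),
    ]),
    CommandParameterSketch("filename", [
        ParameterGroupInfoSketch("existing_file", required=True, ui_position=2),
    ]),
]

def groups_for(param_name: str) -> list:
    p = next(p for p in params if p.name == param_name)
    return [g.group_name for g in p.parameter_group_info]

print(groups_for("remote_path"))  # → ['new_file', 'existing_file']
```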
If you're curious, the function used to get the list of files for the user to select is here:
In the above code block, we're searching for files (not getting their contents), not limiting ourselves to just what's been uploaded to the callback we're tasking, and looking for all files (really, all files that have "" in the name, which is all of them). We then go through to de-dupe the filenames and return that list to the user.
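A framework-free sketch of just the de-dupe step described above (the real function gets its file records from a Mythic RPC file search and returns the choices to the tasking modal):

```python
def dedupe_filenames(file_records: list) -> list:
    # De-dupe by filename, preserving first-seen order, so the user
    # sees each name only once in the selection box.
    seen, unique = set(), []
    for record in file_records:
        name = record["filename"]
        if name not in seen:
            seen.add(name)
            unique.append(name)
    return unique

# Hypothetical search results; the real list comes from an RPC file search.
results = [{"filename": "loot.txt"}, {"filename": "implant.js"}, {"filename": "loot.txt"}]
print(dedupe_filenames(results))  # → ['loot.txt', 'implant.js']
```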
So, with all that's going on, it's helpful to know what gets called, when, and what you can do about it.
If you want to have a different form of communication between Mythic and your agent than the specific JSON messages that Mythic uses, then you'll need a "translation container".
The first thing you'll need to do is specify the name of the container in your associated Payload Type class code. Update the Payload Type's class to include a line like translation_container = "binaryTranslator"
. Now we need to create the container.
The process for making a translation container is almost identical to that for a c2 profile or payload type container; we're simply going to change which classes we instantiate, but the rest of it is the same.
Unlike Payload Type and C2 Profile containers that mainly do everything over RabbitMQ for potentially long-running queues of jobs, Translation containers use gRPC for fast responses.
If a translation_container is specified for your Payload Type, then the three functions defined in the following two examples will be called as Mythic processes requests from your agent.
You then need to get the new container associated with the docker-compose file that Mythic uses, so run sudo ./mythic-cli add binaryTranslator. Now you can start the container with sudo ./mythic-cli start binaryTranslator and you should see the container pop up as a sub heading of your payload container.
Additionally, if you're leveraging a payload type that has mythic_encrypts = False and you're doing any cryptography, then you should use this same process and perform your encryption and decryption routines here. This is why Mythic provides you with the associated keys you generated for encryption, decryption, and which profile you're getting a message from.
For the Python version, we simply instantiate our own subclass of the TranslationContainer class and provide three functions. In our main.py file, simply import the file with this definition and then start the service:
For the GoLang side of things, we instantiate an instance of the translationstructs.TranslationContainer struct with our same three functions. For GoLang though, we have an Initialize function to add this struct as a new definition to track.
Then, in our main.go code, we call the Initialize function and start the services:
These examples can be found at the MythicMeta organization on GitHub:
Docker doesn't allow you to have capital letters in your image names, and when Mythic builds these containers, it uses the container's name as part of the image name. So, you can't have capital letters in your agent/translation container names. That's why you'll see things like service_wrapper instead of serviceWrapper.
Payload Type information must be set and pulled from a definition either in Python or in GoLang. Below is a python example of the apfell agent:
There are a couple key pieces of information here:
Lines 1-2 import all of the basic classes needed for creating an agent
Line 7 defines the new class (our agent). This can be called whatever you want, but the important piece is that it extends the PayloadType class as shown with the ().
Lines 8-27 defines the parameters for the payload type that you'd see throughout the UI.
the name is the name of the payload type
supported_os is an array of supported OS versions
supports_dynamic_loading indicates if the agent allows you to select only a subset of commands when creating an agent or not
build_parameters is an array describing all of the build parameters when creating your agent
c2_profiles is an array of c2 profile names that the agent supports
Line 18 defines the name of a "translation container" which we will talk about in another section, but this allows you to support your own, non-mythic message format, custom crypto, etc.
The last piece is the function that's called to build the agent based on all of the information the user provides from the web UI.
The PayloadType base class is in the PayloadBuilder.py file. This is an abstract class, so your instance needs to provide values for all of these fields.
Build parameters define the components shown to the user when creating a payload.
The BuildParameter
class has a couple of pieces of information that you can use to customize and validate the parameters supplied to your build:
name is the name of the parameter, if you don't provide a longer description, then this is what's presented to the user when building your payload
parameter_type describes what is presented to the user - valid types are:
BuildParameterType.String
BuildParameterType.ChooseOne
BuildParameterType.Array
BuildParameterType.Date
BuildParameterType.Dictionary
BuildParameterType.Boolean
required indicates if there must be a value supplied. If no value is supplied by the user and no default value is supplied here, then an exception is thrown before execution gets to the build function.
verifier_regex is a regex the web UI can use to provide some information to the user about if they're providing a valid value or not
default_value is the default value used for building if the user doesn't supply anything
choices is where you can supply an array of options for the user to pick from if the parameter_type is ChooseOne
dictionary_choices are the choices and metadata about what to display to the user for key-value pairs that the user might need to supply
value is the component you access when building your payload - this is the final value (either the default value or the value the user supplied)
verifier_func is a function you can provide for additional checks on the value the user supplies to make sure it's what you want. This function should either return nothing or raise an exception if something isn't right
As a recap, where does this come into play? In the first section, we showed a section like:
You have to implement the build function and return an instance of the BuildResponse class. This response has these fields:
status - an instance of BuildStatus (Success or Error)
Specifically, BuildStatus.Success or BuildStatus.Error
payload - the raw bytes of the finished payload (if you failed to build, set this to None or empty bytes like b'' in Python)
build_message - any stdout data you want the user to see
build_stderr - any stderr data you want the user to see
build_stdout - any stdout data you want the user to see
updated_filename - if you want to update the filename to something more appropriate, set it here. For example: the user supplied a filename of apollo.exe, but based on the build parameters you're actually generating a dll, so you can update the filename to be apollo.dll. This is particularly useful if you're optionally returning a zip of information so that the user doesn't have to change the filename before downloading. If you plan on doing this to update the filename for a wide variety of options, then it might be best to leave the file extension field in your payload type definition blank ("") so that you can more easily adjust the extension.
The most basic version of the build function would be:
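As a rough illustration of that shape (using stand-in classes rather than the real mythic_container types; the real build function is async and takes no arguments beyond self), a minimal build might look like:

```python
from dataclasses import dataclass
from enum import Enum

class BuildStatusSketch(Enum):
    # Stand-in for BuildStatus.Success / BuildStatus.Error.
    Success = "success"
    Error = "error"

@dataclass
class BuildResponseSketch:
    # Stand-in for BuildResponse with the fields described above.
    status: BuildStatusSketch
    payload: bytes = b""
    build_message: str = ""
    build_stderr: str = ""

def build(agent_source: bytes, uuid: str) -> BuildResponseSketch:
    try:
        # Stamp the payload UUID into the agent source before returning it;
        # the placeholder string here is made up for the demo.
        finished = agent_source.replace(b"UUID_HERE", uuid.encode())
        return BuildResponseSketch(
            status=BuildStatusSketch.Success,
            payload=finished,
            build_message="New payload created",
        )
    except Exception as e:
        return BuildResponseSketch(status=BuildStatusSketch.Error, build_stderr=str(e))

resp = build(b"callback_uuid = 'UUID_HERE'", "abcd-1234")
print(resp.status.name)  # → Success
```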
Once the build function is called, all of your BuildParameters will already be verified: all parameters marked as required will have a value of some form (user supplied or default_value), and all of the verifier functions will have been called if they exist. This allows you to know that by the time your build function is called, all of your parameters are valid.
Your build function gets a few pieces of information to help you build the agent (other than the build parameters). From within your build function, you'll have access to the following:
self.uuid
- the UUID associated with your payload
This is how your payload identifies itself to Mythic before getting a new Staging and final Callback UUID
self.commands
- a wrapper class around the names of all the commands the user selected.
Access this list via self.commands.get_commands()
self.agent_code_path
- a pathlib.Path object pointing to the path of the agent_code directory that holds all the code for your payload. This is something you pre-define as part of your agent definition.
To access "test.js" in that "agent_code" folder, simply do: f = open(self.agent_code_path / "test.js", 'r').
With pathlib.Path objects, the / operator allows you to concatenate paths in an OS agnostic manner. This is the recommended way to access files so that your code can work anywhere.
self.get_parameter("parameter name here")
The build parameters that are validated from the user. If you have a build_parameter with a name of "version", you can access the user supplied or default value with self.get_parameter("version")
self.selected_os
- This is the OS that was selected on the first step of creating a payload
self.c2info
- this holds a list of dictionaries of the c2 parameters and c2 class information supplied by the user. This is a list because the user can select multiple c2 profiles (maybe they want HTTP and SMB in the payload, for example). For each element in self.c2info, you can access the information about the c2 profile with get_c2profile() and access the parameters via get_parameters_dict(). Both of these return a dictionary of key-value pairs.
the dictionary returned by self.c2info[0].get_c2profile() contains the following:
name
- name of the c2 profile
description
- description of the profile
is_p2p
- boolean of if the profile is marked as a p2p profile or not
the dictionary returned by self.c2info[0].get_parameters_dict() contains key-value pairs, where each key is the key value defined for the c2 profile's parameters and each value is what the user supplied. You might be wondering where to get these keys? Well, it's not too crazy and you can view them right in the UI - Name Fields.
If the C2 parameter has a value of crypto_type=True, then the "value" here will be a bit more than just a string that the user supplied. Instead, it'll be a dictionary with three pieces of information: value - the value that the user supplied, enc_key - a base64 string (or None) of the encryption key to be used, and dec_key - a base64 string (or None) of the decryption key to be used. This gives you more flexibility in automatically generating encryption/decryption keys and supporting crypto types/schemas that Mythic isn't aware of. In the HTTP profile, the key AESPSK has this type set to True, so you'd expect that dictionary.
If the C2 parameter has a type of "Dictionary", then things are a little different.
Let's take the "headers" parameter in the http profile for example. This allows you to set header values for your http traffic such as User-Agent, Host, and more. When you get this value on the agent side, you get a dictionary that looks like the following: {"User-Agent": "the user agent the user supplied", "MyCustomHeader": "my custom value"}. You get the final "dictionary" that's created from the user supplied fields.
One way to leverage this could be:
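For instance, a hypothetical build step could stamp the final headers dictionary into the agent's source as JSON (the placeholder string HEADERS_HERE and the variable names are made up for this sketch):

```python
import json

# Hypothetical dictionary as returned for the "headers" C2 parameter.
headers = {"User-Agent": "Mozilla/5.0", "Host": "cdn.example.com"}

def stamp_headers(template: str, headers: dict) -> str:
    # Replace a placeholder in the agent source with the JSON-encoded headers
    # so the agent uses exactly what the operator configured.
    return template.replace("HEADERS_HERE", json.dumps(headers))

source = "let headers = HEADERS_HERE;"
print(stamp_headers(source, headers))
```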
Finally, when building a payload, it can often be helpful to have both stdout and stderr information captured, especially if you're compiling code. Because of this, you can set the build_message, build_stderr, and build_stdout fields of the BuildResponse to have this data. For example:
Depending on the status of your build (success or error), either the message or build_stderr values will be presented to the user via the UI notifications. However, at any time you can go back to the Created Payloads page and view the build message, build errors, and build stdout for any payload.
When building your payload, if you have to modify files on disk, then it's helpful to do this in a "copy" of the files. You can make a temporary copy of your code and operate there with the following sample:
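A sketch of that pattern using only the standard library (the directory layout, file name, and placeholder string are made up for the demo):

```python
import shutil
import tempfile
from pathlib import Path

def build_in_copy(agent_code_path: Path) -> bytes:
    # Work on a throwaway copy so edits never touch the container's
    # pristine agent code between builds.
    with tempfile.TemporaryDirectory() as workdir:
        copy = Path(workdir) / "agent_code"
        shutil.copytree(agent_code_path, copy)
        main = copy / "main.js"
        main.write_text(main.read_text().replace("UUID_HERE", "abcd-1234"))
        return main.read_bytes()

# Demo setup: create a fake agent_code directory to copy from.
demo = Path(tempfile.mkdtemp()) / "agent_code"
demo.mkdir()
(demo / "main.js").write_text("callback_uuid = 'UUID_HERE';")
print(build_in_copy(demo))  # → b"callback_uuid = 'abcd-1234';"
```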
The last thing to mention are build steps. These are defined as part of the agent and are simply descriptions of what is happening during your build process. The above example makes some RPC calls to SendMythicRPCPayloadUpdateBuildStep to update the build steps back to Mythic while the build process is happening. For something as fast as the apfell agent, it'll appear as though all of these happen at the same time. For something that's more computationally intensive, though, it's helpful to provide information back to the user about what's going on - stamping in values? obfuscating? compiling? more obfuscation? opsec checks? etc. Whatever it is that's going on, you can provide this data back to the operator, complete with stdout and stderr.
So, what's the actual, end-to-end execution flow that goes on? A diagram can be found here: #what-happens-for-building-payloads.
The PayloadType container starts, connects to Mythic, and sends over its data (by parsing all of these python files)
An operator wants to create a payload from it, so they go to "Create Components" -> "Create Payload"
The operator selects an OS type that the agent supports (ex. Linux, macOS, Windows)
The operator selects all c2 profiles they want included, and for each c2 selected, provides any required c2 parameters
The operator selects this payload type
The operator fills out/selects all of the payload type's build parameters
The operator selects all commands they want included in the payload
Mythic takes all of this information and sends it to the payload type container
The container sends the BuildResponse message back to the Mythic server.
MythicRPC provides a way to execute functions against Mythic and Mythic's database programmatically from within your command's tasking files. MythicRPC lives as part of the mythic_container PyPi package (and the github.com/MythicMeta/MythicContainer GoLang package) that's included in all of the itsafeaturemythic Docker images. This PyPi package uses RabbitMQ's RPC functionality to execute functions that exist within Mythic.
The full list of commands can be found here: for Python and for GoLang.
Sub-tasking is the ability for a task to spin off sub-tasks and wait for them to finish before continuing execution of its own. Tasks will wait for all of their sub-tasks to complete before potentially entering a "submitted" state themselves for an agent to pick them up.
When a task has outstanding subtasks, its status will change to "delegating" while it waits for them all to finish.
Task callbacks are functions that get executed when a task enters a "completed=True" state (i.e. when it completes successfully or encounters an error). These can be registered on a task itself or on a subtask:
When Mythic calls these callbacks, it looks for the defined name in the command's completed_functions attribute, where the key is the name of the function specified and the value is the actual reference to the function to call.
Like everything else associated with a Command, all of this information is stored in your command's Python/GoLang file. Sub-tasks are created via RPC functions from within your command's create_tasking function (or any other function - i.e. you can issue more sub-tasks from within task callback functions). Let's look at what a callback function looks like:
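A rough sketch of the shape of a completion callback and its registration. The class and return structure are local stand-ins so the snippet runs standalone; real code would use the mythic_container base classes and make RPC calls inside the callback:

```python
import asyncio

# Sketch: the callback and its registration in completed_functions.
# CommandBase here is a stand-in; real commands subclass the mythic_container one.
async def formulate_output(task, subtask=None, subtask_group=None) -> dict:
    # A real callback would make RPC calls here (e.g. get_responses);
    # this sketch just reports that the task finished.
    return {"TaskID": task["id"], "Success": True, "Stdout": "Task completed!"}

class CommandBase:  # stand-in for the real CommandBase
    completed_functions = {}

class ShellCommand(CommandBase):
    cmd = "shell"
    # the key must match the name registered for the (sub)task's completion
    completed_functions = {"formulate_output": formulate_output}

result = asyncio.run(ShellCommand.completed_functions["formulate_output"]({"id": 5}))
```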
This is useful for when you want to do some post-task processing, actions, or analysis when a task completes or errors out. In the above example, the formulate_output function simply displays a message to the user that the task is done. In more interesting examples, though, you could use the get_responses RPC call like we saw above to get all of the output the subtasks have sent to the user for follow-on processing.
It's often useful to perform some operational security checks before issuing a task based on everything you know so far, or after you've generated new artifacts for a task but before an agent picks it up. This allows us to be more granular and context aware instead of the blanket command blocking that's available from the Operation Management page in Mythic.
OPSEC checks and information for a command is located in the same file where everything else for the command is located. Let's take an example all the way through:
In the case of doing operational checks before a task's create_tasking is called, we have the opsec_pre function. Similarly, the opsec_post function happens after your create_tasking, but before your task is finally ready for an agent to pick it up.
opsec_pre/post_blocked - indicates True/False for whether the function decides the task should be blocked or not
opsec_pre/post_message - the message to the operator about the result of doing this OPSEC check
opsec_pre/post_bypass_role - determines who should be able to bypass this check. The default is lead, indicating that only the lead of the operation can bypass it, but you can set it to operator to allow any operator to bypass the check. You can also set this to other_operator to indicate that somebody other than the operator who issued the task must approve it. This is helpful in cases where it's not necessarily a "block", but something you want to make sure operators acknowledge as a potential security risk.
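A sketch of what an opsec_pre check might look like. The response class is a local stand-in (a real command would use the mythic_container structures), and the path-based rule is purely illustrative:

```python
import asyncio

# Sketch of an opsec_pre check with a stand-in response class; a real command
# would use the mythic_container structures instead. The path rule is
# purely illustrative.
class OpsecPreResponse:
    def __init__(self, blocked: bool, message: str, bypass_role: str = "operator"):
        self.opsec_pre_blocked = blocked
        self.opsec_pre_message = message
        self.opsec_pre_bypass_role = bypass_role

async def opsec_pre(task_args: dict) -> OpsecPreResponse:
    # Block tasks that touch a directory we consider heavily monitored
    if task_args.get("path", "").startswith("C:\\Windows\\System32"):
        return OpsecPreResponse(
            blocked=True,
            message="Touching System32 is risky - operation lead must bypass",
            bypass_role="lead",
        )
    return OpsecPreResponse(blocked=False, message="No risky paths detected")

check = asyncio.run(opsec_pre({"path": "C:\\Windows\\System32\\lsass.exe"}))
```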
As the names of the functions imply, the opsec_pre check happens before the create_tasking function runs and the opsec_post check happens after it runs. If you set opsec_pre_blocked to True, then the create_tasking function isn't executed until an approved operator bypasses the check. Then execution moves on to create_tasking and the opsec_post check. If that one also sets blocked to True, then the task is again blocked until a user bypasses it. At that point, once it's bypassed, the task status switches to Submitted so that an agent can pick up the task on its next check-in.
Sometimes when creating a command, the options you present to the operator might not always be static. For example, you might want to present them with a list of files that have been downloaded; you might want to show a list of processes to choose from for injection; you might want to reach out to a remote service and display output from there. In all of these scenarios, the parameter choices for a user might change. Mythic can now support this.
Since we're talking about a command's arguments, all of this lives in your Command's class that subclasses TaskArguments. Let's take an augmented shell example:
Here we can see that the files CommandParameter has an extra component - dynamic_query_function. This parameter points to a function that also lives within the same class, get_files in this case. This function is a little different than the other functions in the Command file because it runs before you even have a task - it generates parameters for when a user opens the tasking popup in the user interface. As such, this function gets one parameter - a dictionary of information about the callback itself - and it should return an array of strings that will be presented to the user.
Dynamic queries are only supported for the ChooseOne and ChooseMultiple CommandParameter types
You have access to a lot of the same RPC functionality here that you do in create_tasking, with one notable exception - you don't have a task yet, so you have to do things based on the callback_id. You won't be able to create/delete entries via RPC calls, but you can still use pretty much every query capability. In this example, we're doing a get_file query to pull all files that exist within the current callback and present their filenames to the user.
What information do you have at your disposal during this dynamic function call? Not much, but enough to do some RPC calls depending on the information you need to complete this function. Specifically, the PTRPCDynamicQueryFunctionMessage parameter has the following fields:
command - name of the command
parameter_name - name of the parameter
payload_type - name of the payload type
callback - the ID of the callback used for RPC calls
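A sketch of the shape of such a function. The RPC lookup is faked with a local list so the example runs standalone; a real get_files would query Mythic (for example, a file search RPC scoped to the callback) instead:

```python
import asyncio

# Sketch of a dynamic_query_function. The RPC lookup is faked with a local
# list so the example runs standalone.
async def get_files(callback_info: dict) -> list:
    # callback_info mirrors the PTRPCDynamicQueryFunctionMessage fields:
    # command, parameter_name, payload_type, and callback (the callback ID)
    fake_rpc_results = [
        {"filename": "loot.zip", "callback": 2},
        {"filename": "screenshot.png", "callback": 2},
        {"filename": "other.bin", "callback": 9},
    ]
    return [f["filename"] for f in fake_rpc_results
            if f["callback"] == callback_info["callback"]]

choices = asyncio.run(get_files({
    "command": "upload",
    "parameter_name": "files",
    "payload_type": "apfell",
    "callback": 2,
}))
```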
Manipulate tasking before it's sent to the agent
All commands must have a create_go_tasking function with a base case like:
create_go_tasking is new in Mythic v3.0.0. Prior to this, there was the create_tasking function. The new change supports backwards compatibility, but the new function provides a lot more information and structured context that's not available in the create_tasking function. The create_go_tasking function also mirrors GoLang's create_tasking function.
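A rough sketch of the base case. The response class here is a local stand-in so the snippet runs standalone; the real PTTaskCreateTaskingMessageResponse and task message classes come from mythic_container:

```python
import asyncio

class PTTaskCreateTaskingMessageResponse:
    """Stand-in for the mythic_container response class, for illustration."""
    def __init__(self, TaskID: int, Success: bool = True, Error: str = ""):
        self.TaskID = TaskID
        self.Success = Success
        self.Error = Error
        self.DisplayParams = None

async def create_go_tasking(taskData: dict) -> PTTaskCreateTaskingMessageResponse:
    # Base case: acknowledge the task. Argument manipulation and RPC calls
    # would happen here before returning.
    response = PTTaskCreateTaskingMessageResponse(TaskID=taskData["id"], Success=True)
    return response

resp = asyncio.run(create_go_tasking({"id": 10}))
```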
When an operator types a command in the UI, whatever the operator types (or whatever is populated based on the popup modal) gets sent to this function after the input is parsed and validated by the TaskArguments and CommandParameters functions mentioned in .
It's here that the operator has full control of the task before it gets sent down to an agent. The task is currently in the "preprocessing" stage when this function is executed and allows you to do many things via Remote Procedure Calls (RPC) back to the Mythic server.
So, from this create_tasking function, what information do you immediately have available? <-- this class definition provides the basis for what's available.
taskData.Task - Information about the Task that's issued
taskData.Callback - Information about the Callback for this task
taskData.Payload - Information about the backing payload for this callback
taskData.Commands - A list of the commands currently loaded into this callback
taskData.PayloadType - The name of this payload type
taskData.BuildParameters - The build parameters and their values used when building the payload for this callback
taskData.C2Profiles - Information about the C2 profiles included inside of this callback
taskData.args - access to the associated arguments class for this command, which already has all of the values populated and validated. Let's say you have an argument called "remote_path"; you can access it via taskData.args.get_arg("remote_path").
Want to change that value to something else? taskData.args.add_arg("remote_path", "new value").
Want to change the value to a different type as well? taskData.args.add_arg("remote_path", 5, ParameterType.Number)
Want to add a new argument entirely for this specific instance as part of the JSON response? taskData.args.add_arg("new key", "new value"). The add_arg functionality will overwrite the value if the key exists; otherwise it'll add a new key with that value. The default ParameterType for args is ParameterType.String, so if you're adding something else, be sure to change the type. Note: if you have multiple parameter groups as part of your tasking, make sure you specify which parameter group your new argument belongs to. By default, the argument gets added to the Default parameter group. This could result in some confusion where you add an argument, but it doesn't get picked up and sent down to the agent.
You can also remove args (taskData.args.remove_arg("key")) and rename args (taskData.args.rename_arg("old key", "new key")).
You can also get access to the user's commandline via taskData.args.commandline.
Want to know if an arg is in your args? taskData.args.has_arg("key").
taskData.Task.TokenID - information about the token that was used with the task. This requires that the callback has at some point returned tokens for Mythic to track; otherwise this will be 0.
Success - did your processing succeed or not? If not, set Error to a string value representing the error you encountered.
CommandName - If you want the agent to see the command name for this task as something other than the actual command's name, reflect that change here. This can be useful if you are creating an alias for a command. Say your agent has the command ls, but you create a script_only command dir. During the processing of dir, you set the CommandName to ls so that the agent sees ls and processes it as normal.
TaskStatus - If something went wrong and you want to reflect a specific status to the user, you can set that value here. Statuses that start with error: will appear red in the UI.
Stdout and Stderr - set these if you want to provide some additional stdout/stderr for the task but don't necessarily want it to clutter the user's interface. This is helpful if you're doing additional compilations as part of your tasking and want to store debug or error information for later.
Completed - If this is set to True, then Mythic will mark the task as done and won't allow an agent to pick it up.
CompletionFunctionName - if you want a specific local function called when the task completes (such as to do follow-on tasking or more RPC calls), then specify that function name here. This requires a matching entry in the command's completion_functions like follows:
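A minimal sketch of that registration (the callback body is illustrative): the CompletionFunctionName string you set during tasking must match a key in the command's completion_functions dictionary, whose value is the function to call.

```python
import asyncio

# Sketch: CompletionFunctionName must match a key in completion_functions.
async def formulate_output(completion_msg) -> dict:  # illustrative callback
    return {"TaskID": completion_msg["task_id"], "Success": True}

completion_functions = {"formulate_output": formulate_output}

outcome = asyncio.run(completion_functions["formulate_output"]({"task_id": 3}))
```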
ParameterGroupName - if you want to explicitly set the parameter group name instead of letting Mythic figure it out based on which parameters have values, you can specify that here.
DisplayParams - you can set this value to a string that you'd want the user to see instead of taskData.Task.OriginalParams. This allows you to leverage the JSON structure of the popup modals for processing but return a more human-friendly version of the parameters for operators to view. There's a menu item in the UI when viewing a task that lets an operator view the original JSON parameters that were sent down on a case-by-case basis, so this provides a nice way to avoid large, hard-to-read JSON blobs for operators while still preserving the nice JSON scripting features on the back-end.
They all follow the same format:
Browser scripting is a way for you, the agent developer or the operator, to dynamically adjust the output that an agent reports back. You can turn data into tables, download buttons, screenshot viewers, and even buttons for additional tasking.
As a developer, your browser scripts live in a folder called browser_scripts in your mythic folder. These are simply JavaScript files that you then reference from within your command files such as:
As an operator, they exist in the web interface under "Operations" -> "Browser Scripts". You can enable and disable these for yourself at any time, and you can even modify or create new ones from the web interface as well. If you want these changes to be persistent or available for a new Mythic instance, you need to save it off to disk and reference it via the above method.
You need to supply for_new_ui=True in order for the script to be leveraged for the new user interface. If you don't, Mythic will attach the script to the old user interface. All of the following documentation is for the new user interface.
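A sketch of what that reference looks like in a command file. BrowserScript below is a local stand-in mirroring the mythic_container class of the same name; the real one loads browser_scripts/<script_name>.js from your payload type's folder:

```python
# Sketch: referencing a browser script from a command file. BrowserScript is a
# stand-in mirroring the mythic_container class of the same name.
class BrowserScript:
    def __init__(self, script_name: str, author: str, for_new_ui: bool = False):
        self.script_name = script_name
        self.author = author
        self.for_new_ui = for_new_ui  # must be True for the new React UI

# inside a CommandBase subclass you'd set something like:
ls_script = BrowserScript(script_name="ls", author="@example", for_new_ui=True)
```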
Browser Scripts are JavaScript files that take in a reference to the task and an array of the responses available, and return a Dictionary representing what you'd like Mythic to render on your behalf. This is pretty easy if your agent returns structured output that you can then parse and process. If you return unstructured output, you can still manipulate it, but it will be harder for you.
The most basic thing you can do is return plaintext for Mythic to display for you. Let's take an example that simply aggregates all of the response data and asks Mythic to display it:
This function reduces the Array called responses, aggregating all of the responses into one string called combined, then asks Mythic to render it via {'plaintext': combined}.
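As a concrete sketch (assigned to a const so it can run standalone; a real browser script file contains just the bare anonymous function):

```javascript
// Sketch of a minimal plaintext browser script.
const plaintextScript = function (task, responses) {
    // Aggregate every response chunk into one string
    const combined = responses.reduce((prev, cur) => prev + cur, "");
    return { "plaintext": combined };
};

const rendered = plaintextScript({ status: "success" }, ["line one\n", "line two\n"]);
```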
Plaintext is also used when you don't have a browserscript set for a command in general or when you toggle it off. This uses the react-ace text editor to present the data. This view will also try to parse the output as JSON and, if it can, will re-render the output in pretty print format.
A slightly more complex example is to render a button for Mythic to display a screenshot.
This function does a few things:
If the task status includes the word "error", then we don't want to process the response like our standard structured output because we returned some sort of error instead. In this case, we'll do the same thing we did in the first step and simply return all of the output as plaintext.
If the task is completed and isn't an error, then we can verify that we have the responses we expect. In this case, we simply expect a single response with some of our data in it. The one piece of information that the browser script needs to render a screenshot is the agent_file_id or file_id of the screenshot you're trying to render. If you return this information from the agent, then this will be the same file_id that Mythic returns to you for transferring the file. If you display this information via process_response output from your agent, then you're likely to pull the file data via an RPC call, and in that case you're looking for the agent_file_id value. You'll notice that this is an array of identifiers. This allows you to supply multiple at once (for example, you took 5 screenshots over a few minutes, or you took screenshots of multiple monitors) and Mythic will create a modal where you can easily click through all of them.
To actually create a screenshot, we return a dictionary with a key called screenshot that has an array of Dictionaries. We do this so that you can render multiple screenshots at once (such as if you fetched information for multiple monitors at a time). For each screenshot, you just need three pieces of information: the agent_file_id, the name of the button you want to render, and the variant, which is how you want the button presented (contained is a solid button and outlined is just an outline for the button).
If we didn't error and we're not done, then the status will be processed. In that case, if we have data, we also want to display the partial screenshot; if we have no responses yet, we just want to inform the user that we don't have anything yet.
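Putting those branches together, a sketch of a screenshot script might look like the following (assigned to a const so it runs standalone; the agent is assumed to return {"agent_file_id": ["<uuid>", ...]}):

```javascript
// Sketch of a screenshot browser script, following the logic described above.
const screenshotScript = function (task, responses) {
    if (task.status.includes("error")) {
        // error: dump raw output as plaintext
        return { "plaintext": responses.reduce((p, c) => p + c, "") };
    } else if (responses.length > 0) {
        try {
            const data = JSON.parse(responses[0]);
            return { "screenshot": [{
                "agent_file_id": data["agent_file_id"],
                "variant": "contained",
                "name": "View Screenshot",
            }]};
        } catch (error) {
            return { "plaintext": "Error parsing response: " + responses[0] };
        }
    }
    return { "plaintext": "No data to display yet..." };
};

const shot = screenshotScript({ status: "completed" }, ['{"agent_file_id": ["abc-123"]}']);
```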
When downloading files from a target computer, the agent goes through a series of steps to register a file id with Mythic and then starts chunking and transferring data. At the end of this, though, it's super nice if the user can click a button in-line with the tasking to download the resulting file(s) instead of having to go to another page to download them. This is where the download browser script functionality comes into play.
With this script, you're able to specify some plaintext along with a button that links to the file you just downloaded. However, remember that browser scripts run in the browser and are based on the data that's sent to the user to view. So, if the agent doesn't send back the new agent_file_id for the file, then you won't be able to link to it in the UI. Let's take an example and look at what this means:
So, from above we can see a few things going on:
Like many other browser scripts, we're going to modify what we display to the user based on the status of the task as well as whether the agent has returned anything for us to view. That's why there are checks based on the task.status and task.completed fields.
Assuming the agent returned something back and we completed successfully, we're going to parse what the agent sent back as JSON and look for the file_id field.
We can then make the download button with a few fields:
agent_file_id - the file UUID of the file we're going to download through the UI
variant - controls whether the button is solid or just outlined (contained or outlined)
name - the text inside the button
plaintext - any leading text data you want to display to the user instead of just a single download button
So, let's look at what the agent actually sent for this message as well as what this looks like visually:
Notice here in what the agent sends back that there are two main important pieces: file_id, which we pass in as agent_file_id for the browser script, and total_chunks. total_chunks isn't strictly necessary for anything, but if you look back at the script, you'll see that we display it as plaintext to the user while we're waiting for the download to finish so that the user has some idea how long it'll take (is it 1 chunk, 5, 50, etc.).
And here you can see that we have our plaintext leading up to our button. You'll also notice how the download key is an array. So yes, if you're downloading multiple files, as long as you can keep track of the responses you're getting back from your agent, you can render and show multiple download buttons.
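A sketch of such a download script (assigned to a const so it runs standalone). The file_id and total_chunks fields match the agent message described above; the button text and plaintext strings are illustrative:

```javascript
// Sketch of a download browser script using file_id and total_chunks.
const downloadScript = function (task, responses) {
    const all = () => ({ "plaintext": responses.reduce((p, c) => p + c, "") });
    if (task.status.includes("error")) {
        return all();
    } else if (responses.length > 0) {
        try {
            const data = JSON.parse(responses[0]);
            if (task.completed) {
                return { "download": [{
                    "agent_file_id": data["file_id"],
                    "variant": "contained",
                    "name": "Download File",
                    "plaintext": "Finished downloading: ",
                }]};
            }
            // still transferring - show how many chunks we're waiting on
            return { "plaintext": "Downloading " + data["total_chunks"] + " chunks..." };
        } catch (error) {
            return all();
        }
    }
    return { "plaintext": "No data yet..." };
};

const done = downloadScript({ status: "completed", completed: true },
    ['{"file_id": "uuid-1234", "total_chunks": 4}']);
```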
Sometimes you'll want to link back to the "search" page (tasks, files, screenshots, tokens, credentials, etc.) with specific pieces of information so that the user can see a list of information more cleanly. For example, maybe you run a command that generated a lot of credentials (like mimikatz); rather than registering them all with Mythic and displaying them in the task output, you'd rather register them with Mythic and then link the user over to them. That's where the search links come into play. They're formatted very similarly to the download button, but with a slight tweak.
This is almost exactly the same as the download example, but the actual dictionary we're returning is a little different. Specifically, we have:
plaintext - a string we want to display before our actual link to the search page
hoverText - a string for what to display as a tooltip when you hover over the link to the search page
search - the actual query parameters for the search we want to do. In this case, we're showing that we want to be on the files tab, with the searchField of Filename, and we want the actual search parameter to be what is shown to the user in the display parameters (display_params). If you're ever curious about what you should include here for your specific search, the URL updates to reflect what's being shown whenever you're clicking around on the search page; you can navigate to what you'd want, then copy and paste it here.
name - the text of the link to the search page
Just like with the download button, you can have multiple of these search responses.
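A sketch of a search-link script (assigned to a const so it runs standalone). The query-string keys here (tab/searchField/search) follow the example above; copy the exact URL parameters from your own search page, since they may differ by Mythic version:

```javascript
// Sketch of a "search" browser script response.
const searchScript = function (task, responses) {
    if (task.completed && responses.length > 0) {
        return { "search": [{
            "plaintext": "Downloaded files for this task: ",
            "hoverText": "View the downloaded files on the search page",
            // query params mirror what the search page URL shows
            "search": "tab=files&searchField=Filename&search=" + task.display_params,
            "name": "View Files",
        }]};
    }
    return { "plaintext": "No data yet..." };
};

const out = searchScript({ completed: true, display_params: "*.docx" }, ["done"]);
```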
Creating tables is a little more complicated, but not by much. The biggest thing to consider is that you're asking Mythic to create a table for you, so there are a few pieces of information Mythic needs: what the headers are, whether there are any custom styles you want to apply to the rows or specific cells, what the rows are and what the column value is per row, and whether each cell should display data, a button, issue more tasking, etc. So, while it might seem overwhelming at first, it's really nothing too crazy.
Let's take an example and then work through it - we're going to render the following screenshot
This looks like a lot, but it's nothing crazy - there's just a bunch of error handling and dealing with parsing errors or task errors. Let's break this down into a few easier to digest pieces:
In the end, we're returning a dictionary with the key table, which has an array of Dictionaries. This means that you can have multiple tables if you want. For each one, we need three things: information about the headers, the rows, and the title of the table itself. Not too bad, right? Let's dive into the headers:
Headers is an array of Dictionaries with three values each - plaintext, type, and optionally width. As you might expect, plaintext is the value that we'll actually use for the title of the column. type controls what kind of data will be displayed in that column's cells. There are a few options here: string (just displays a standard string), size (takes a size in bytes and converts it into something human readable - i.e. 1024 -> 1KB), date (processes date values and displays and sorts them properly), number (displays numbers and sorts them properly), and finally button (displays a button of some form that does something). The last value here is width - a pixel value of how much width you want the column to take up by default. If you want one or more columns to take up the remaining width, specify "fillWidth": true. Columns allow sorting by default, but this doesn't always make sense. If you want to disable it (for example, for a button column), set "disableSort": true in the header information.
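Tying the pieces together, a sketch of the overall table shape. The data layout ({name, size, modified}) and the title are assumptions for illustration:

```javascript
// Sketch: headers (with type/width/fillWidth), rows keyed by header plaintext,
// and a table title. The input data shape is assumed.
const tableScript = function (task, responses) {
    const files = JSON.parse(responses[0]); // assume [{name, size, modified}, ...]
    const rows = files.map(f => ({
        "rowStyle": {},
        "Name": { "plaintext": f.name },
        "Size": { "plaintext": f.size },
        "Modified": { "plaintext": f.modified },
    }));
    return { "table": [{
        "headers": [
            { "plaintext": "Name", "type": "string", "fillWidth": true },
            { "plaintext": "Size", "type": "size", "width": 100 },
            { "plaintext": "Modified", "type": "date", "width": 200 },
        ],
        "rows": rows,
        "title": "File Listing",
    }]};
};

const listing = tableScript({},
    ['[{"name":"a.txt","size":1024,"modified":"2023-01-01"}]']);
```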
Now let's look at the actual rows to display:
Ok, lots of things going on here, so let's break it down:
As you might expect, you can use this key to specify custom styles for the row overall. In this example, we're adjusting the display based on if the current row is for a file or a folder.
If we're displaying anything other than a button for a column, then we need to include the plaintext key with the value we're going to use. You'll notice that aside from rowStyle, each of these other keys matches up with the plaintext header values so that we know which values go in which columns.
In addition to just specifying the plaintext value that is going to be displayed, there are a few other properties we can specify:
startIcon - specify the name of an icon to use at the beginning of the plaintext value. The available startIcon values are: folder/openFolder, closedFolder, archive/zip, diskimage, executable, word, excel, powerpoint, pdf/adobe, database, key, code/source, download, upload, png/jpg/image, kill, inject, camera, list, delete. These values also apply to the endIcon attribute.
startIconHoverText - the text you want to appear when the user hovers over the start icon
endIcon - the same as startIcon, except the icon appears at the end of the text
endIconHoverText - the text you want to appear when the user hovers over the end icon
plaintextHoverText - the text you want to appear when the user hovers over the plaintext value
copyIcon - true/false to indicate whether you want a copy icon to appear at the front of the text. If present, this allows the user to copy all of the text in plaintext to the clipboard, which is handy if you're displaying exceptionally long pieces of information.
startIconColor - the color for your start icon. You can either use a color name, like "gold", or an rgb value, like "rgb(25,142,117)".
endIconColor - the same as startIconColor, but applies to the icon at the end of your text
The first kind of button we can do is just a popup to display additional information that doesn't fit within the table. In this example, we're displaying all of Apple's extended attributes via an additional popup.
The button field takes a few values, but nothing crazy. name is the name of the button you want to display to the user. The type field is what kind of button we're going to display - in this case we use dictionary to indicate that we're going to display a dictionary of information to the user (the other type is task, which we'll cover next). The value here should be the Dictionary we want to display. We'll display the dictionary as a table where the first column is the key and the second column is the value, and we can provide the column titles we want to use. We can optionally make this button disabled by providing a disabled field with a value of true. Just like with the normal plaintext section, we can also specify startIcon and startIconColor. Lastly, we provide a title field for what we want to title the overall popup for the user.
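A sketch of one such cell. The keys used for the popup's column titles (leftColumnTitle/rightColumnTitle) and the attribute data are assumptions for illustration; check your Mythic version for the exact key names:

```javascript
// Sketch of a single table cell whose button pops up a dictionary.
const xattrCell = {
    "plaintext": "file.txt",
    "button": {
        "name": "View XATTRs",
        "type": "dictionary",
        "value": { "com.apple.quarantine": "0081" },  // example attribute data
        "leftColumnTitle": "Attribute",   // assumed key name
        "rightColumnTitle": "Value",      // assumed key name
        "title": "Extended Attributes for file.txt",
        "startIcon": "list",
    },
};
```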
If the data you want to display to the user isn't structured (not a dictionary, not an array), then you probably just want to display it as a string. This is pretty common if you have long file paths or other data that doesn't fit nicely in table form.
Just like with the other button types, we can use startIcon, startIconColor, and hoverText for this button as well.
This button type allows you to issue additional tasking.
This button has the same name and type fields as the dictionary button, and just like with the dictionary button, we can make the button disabled or not with the disabled field. You might be wondering which task we'll invoke with the button. This works the same way we identify which command to issue via the file browser or the process browser - ui_feature. These can be anything you want; just make sure you have the corresponding feature listed somewhere in your commands or you'll never be able to task it. Just like with the dictionary button, we can specify startIcon and startIconColor. The openDialog flag allows you to specify that the tasking popup modal should open and be partially filled out with the data you supplied in the parameters field. Similarly, the getConfirmation flag allows you to force an accept/cancel dialog to get the user's confirmation before issuing a task. This is handy, especially if the tasking is something potentially dangerous (killing a process, removing a file, etc.). If you're setting getConfirmation to true, you can also set acceptText to something that makes sense for your tasking, like "yes", "remove", "delete", or "kill".
The last thing here is the parameters field. If you provide parameters, then Mythic will automatically use them when tasking. In this example, we're pre-creating the full path for the files in question and passing that along as the parameters to the download command.
Remember: your parse_arguments function gets called when your input isn't a dictionary or when your parse_dictionary function isn't defined. So keep that in mind - string arguments go here.
When you issue ls -Path some_path on the command line, Mythic's UI automatically parses that into {"Path": "some_path"} for you, and since you now have a dictionary, it goes to your parse_dictionary function.
When you set the parameters in a browser script, Mythic doesn't first try to pre-process them like it does when you're typing on the command line.
If you want to pass in a parsed parameter set, then you can just pass in a dictionary: "parameters": {"Path": "my path value"}.
If you set "parameters": "-Path some_path", just like you would type on the command line, then you need to have a parse_arguments function that will parse that out into the appropriate command parameters. If your command doesn't take any parameters and just uses the input as a raw command line, then you can do like above and have "parameters": "path here".
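A sketch of a task button cell with pre-filled dictionary parameters. The ui_feature string and paths here are illustrative; your command must advertise a matching ui_feature for the button to resolve to a task:

```javascript
// Sketch of a task button cell: clicking issues whichever loaded command
// advertises the matching ui_feature, with pre-filled parameters.
const downloadCell = {
    "plaintext": "passwords.db",
    "button": {
        "name": "Download",
        "type": "task",
        "ui_feature": "file_browser:download",  // illustrative feature name
        "parameters": { "Path": "/home/user/passwords.db" },  // parsed dict form
        "startIcon": "download",
        "getConfirmation": true,
        "acceptText": "download",
    },
};
```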
Sometimes the data you want to display is an array rather than a dictionary or a big string blob. In this case, you can use the table button type and provide all of the same data you did when creating this table to create a new table (yes, you can even have menu buttons on that table).
Tasking and extra-data display buttons are nice and all, but if you have a lot of options, you don't want to waste all that valuable text space with buttons. To help with that, there's one more type of button we can do: menu. With this we can wrap the other kinds of buttons:
Notice how we have the exact same information for the task and dictionary buttons as before; they're just in an array format now. It's as easy as that. You can even keep your logic for disabling entries or conditionally not add them at all. This allows us to create a dropdown menu like the following screenshot:
These menu items also support the startIcon, startIconColor, and hoverText properties.
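A sketch of a menu button wrapping a task entry and a dictionary entry. The "value" key holding the array of sub-buttons, the ui_feature string, and the process data are all assumptions for illustration:

```javascript
// Sketch of a menu button wrapping other button types.
const menuCell = {
    "plaintext": "lsass.exe",
    "button": {
        "name": "Actions",
        "type": "menu",
        "value": [  // assumed key for the array of wrapped buttons
            {
                "name": "Kill Process",
                "type": "task",
                "ui_feature": "process_browser:kill",  // illustrative feature name
                "getConfirmation": true,
                "acceptText": "kill",
                "parameters": "624",  // example PID passed as raw command line
            },
            {
                "name": "Details",
                "type": "dictionary",
                "value": { "pid": 624, "arch": "x64" },
                "title": "Process Details",
            },
        ],
    },
};
```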
From the opsec_pre and opsec_post functions, you have access to the entire task/callback information like you do in . Additionally, you have access to the entire RPC suite just like in .
In the PTTaskCreateTaskingMessageResponse, you can set a variety of attributes to reflect changes back to Mythic as a result of your processing:
This additional functionality is broken out into a series of files () that you can import at the top of your Python command file.
How to design C2 profile code for Mythic
There are two components for C2 code: agent side coding and server side coding.
This is the component that goes into your agent and handles hooking into the Mythic features, does chunking for files, reports back artifacts, keystrokes, etc.
This is the component that lives within the C2 Docker container and actually translates the C2 requests from special sauce to Mythic's HTTP + JSON.
See the C2 Related Development section for more SOCKS specific message details.
To start / stop SOCKS (or any interactive-based protocol), use the SendMythicRPCProxyStart and SendMythicRPCProxyStop RPC calls within your Payload Type's tasking functions.
For SOCKS, you want to set LocalPort to the port you want to open up on the Mythic server - this is where you'll point your proxy-aware tooling (like proxychains) to tunnel those requests through your C2 channel and out your agent. For SOCKS, the RemotePort and RemoteIP don't matter. The PortType will be CALLBACK_PORT_TYPE_SOCKS (i.e. socks).
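A sketch of the message shape such a call might carry. It's modeled as a plain dict here so the snippet runs standalone; real code would use SendMythicRPCProxyStart from mythic_container with its message class:

```python
# Sketch: the fields involved when starting SOCKS from a tasking function.
# Modeled as a plain dict for illustration; real code uses the
# mythic_container message classes.
CALLBACK_PORT_TYPE_SOCKS = "socks"

def build_proxy_start_message(task_id: int, local_port: int) -> dict:
    return {
        "TaskID": task_id,
        "LocalPort": local_port,  # port opened on the Mythic server
        "PortType": CALLBACK_PORT_TYPE_SOCKS,
        # RemoteIP / RemotePort are ignored for SOCKS
    }

msg = build_proxy_start_message(task_id=12, local_port=7000)
```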
See the C2 Related Development section for more RPFWD specific message details.
To start / stop RPFWD (or any interactive-based protocol), use the SendMythicRPCProxyStart and SendMythicRPCProxyStop RPC calls within your Payload Type's tasking functions.
For RPFWD, you want to set LocalPort to the port you want to open up on the host where your agent is running. RemoteIP and RemotePort are used for Mythic to make remote connections based on the incoming connections your agent gets on LocalPort within the target network. The PortType will be CALLBACK_PORT_TYPE_RPORTFWD (i.e. rpfwd).
Within a Command
class, there are two functions - create_tasking
and process_response
. As the names suggest, the first one allows you to create and manipulate a task before an agent pulls it down, and the second one allows you to process the response that comes back. If you've been following along in development, then you know that Mythic supports many different fields in its post_response
action so that you can automatically create artifacts, register keylogs, manipulate callback information, etc. However, all of that requires that your agent format things in a specific way for Mythic to pick them up and process. That can be tiring.
The process_response
function takes in one argument class that contains two pieces of information: The task
that generated the response in the first place, and the response
that was sent back from the agent. Now, there's a specific process_response
keyword you have to send for Mythic to shuttle data off to this function instead of processing it normally. When looking at a post_response
message, it's structured as follows:
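A hedged sketch of that structure (field values are illustrative):

```python
# Illustrative post_response message from an agent. Everything under the
# process_response key is shipped untouched to the Payload Type container.
message = {
    "action": "post_response",
    "responses": [
        {
            "task_id": "uuid of task",
            "user_output": "normal output shown to the operator",
            "process_response": {"anything": ["you", "want"], "count": 2},
        }
    ],
}
```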
Now, anything in that process_response
key will get sent to the process_response
function in your Payload Type container. This value for process_response
can be any type - int, string, dictionary, array, etc.
Some caveats:
You can send any data you want in this way and process it however you want. In the end, you'll be doing RPC calls to Mythic to register the data
Not all things make sense to go this route. Sending data to the container for processing happens asynchronously and in parallel to the rest of the processing for the message. For example, if your message has just the task_id
and a process_response
key, then as soon as the data is shipped off to your process_response
function, Mythic is going to send the all clear back down to the agent and say everything was successful. It doesn't wait for your function to finish processing anything, nor does it expect any output from your function.
We do this sort of separation because your agent shouldn't be waiting on the hook for unnecessary things. We want the agent to get what it wants as soon as possible so it can go back to doing agent things.
Some functionality like SOCKS and file upload/download don't make sense for the process_response
functionality because the agent needs the response in order to keep functioning. Compare this to something like registering a keylog, creating an artifact, or providing some output to the user which the agent tends to think of in a "fire and forget" style. These sorts of things are fine for async parallel processing with no response to the agent.
The function itself is really simple:
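A minimal sketch of it (in a real Payload Type class this is a method receiving mythic_container message objects; the plain dict/str stand-ins here are hypothetical, used only to keep the sketch self-contained):

```python
import asyncio

# Hypothetical sketch of a process_response function.
async def process_response(self, task, response):
    # 'response' is whatever the agent placed under the process_response key;
    # it can be any JSON type. Normally you'd issue Mythic RPC calls here
    # to register artifacts, keylogs, etc.
    return f"task {task['id']} sent back: {response!r}"

# Demo invocation with stand-in data:
result = asyncio.run(process_response(None, {"id": 5}, "data"))
```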
where task
is the same task data you'd get from your create_go_tasking
function, and response
is whatever you sent back.
You have full access to all of the RPC methods to Mythic from here just like you do from the other functions.
So, you want to add a new command to a Payload Type. What does that mean, where do you go, what all do you have to do?
Luckily, the Payload Type containers are the source of truth for everything related to them, so that's the only place you'll need to edit. If your payload type uses its own custom message format, then you might also have to edit your associated translation container, but that's up to you.
Make a new .py
file with the rest of your commands.
This new file should match the requirements of the rest of the commands
Once you're done making edits, restart your payload type container via: ./mythic-cli start [payload type name]
. This will restart just that one payload type container, reloading the python files automatically, and re-syncing the data with Mythic.
Command and Control (C2) profiles are a little different in Mythic than you might be used to. Specifically, C2 profiles live in their own docker containers and act as a translation mechanism between whatever your special sauce C2 protocol is and what the back-end Mythic server understands (HTTP + JSON). Their entire role in life is to get data off the wire from whatever special communications format you're using and forward that to Mythic.
By defining a C2 protocol specification, other payload types can register that they speak that C2 protocol as well and easily hook in without having to do back-end changes. By having C2 protocols divorced from the main Mythic server, you can create entirely new C2 protocols more easily and you can do them in whatever language you want. If you want to do all your work in GoLang, C#, or some other language for the C2 protocol, go for it. It's all encapsulated in the C2's Docker container with whatever environment you desire.
Since there's so much variability possible within a C2 Docker container, there's some required structure and python files similar to how Payload Types are structured. This is covered in C2 Profile Code.
When we look at how C2 Profiles work within Mythic, there are two different stages to consider:
How is the C2 Profile defined so that Mythic can track all of the parameters and present them to the user when generating payloads.
How does the C2 Profile's code run so that it can listen for agent traffic and communicate with Mythic.
Just like with Payload Types, C2 Profiles can either run within Docker or on a separate host or within a VM somewhere. This isn't a hard requirement, but makes it easier to share them. The format is the same as for Payload Types (and even Translation Containers) - the only difference is which classes/structs we instantiate. Check out Payload Type Development for the general structure.
If you're going to be using the mythic-cli
to install and run your C2 Profile, then Mythic will mount your Mythic/InstalledServices/[c2 profile name]
folder as /Mythic
inside of the Docker container as a volume. This means that any changes to the Mythic/InstalledServices/[c2 profile name]
folder that happen on disk will be mirrored inside of the Docker container.
Some differences to note:
Just like how Payload Types have two sections (agent code and Mythic definition files), C2 Profiles have the same sort of thing (agent code and Mythic definition files).
Wherever your server code is located, there's a required file, config.json
, that the user can edit from the Mythic UI.
config.json
- this is a JSON file that exposes any configuration parameters that you want to expose to the user (such as which port to open up, do you need SSL, etc).
server_binary_path
- this is the actual program that Mythic executes when you "start" a C2 Profile. This file can be whatever you want as long as it's executable.
The Mythic definition files for your profile (what kind of parameters does it take, what's the name, etc) as well as your OPSEC checks and potential redirector generation code.
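As a hedged illustration, a config.json might look like the following; the key names here are modeled loosely on the example http profile and should be treated as hypothetical for your own profile:

```json
{
    "instances": [
        {
            "port": 80,
            "use_ssl": false,
            "cert_path": "",
            "key_path": "",
            "debug": false
        }
    ]
}
```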
Once you get all of that created, you'll want to register the new C2 Profile with Mythic. Normally you install C2 Profiles from the web with sudo ./mythic-cli install github https://github.com/C2Profiles/[profile name]
. However, since you already have the code and folder structure in your Mythic/InstalledServices
folder, we can just 'tell' Mythic that it exists. You can do this via sudo ./mythic-cli add [profile name]
. You can then start just that one container with sudo ./mythic-cli start [profile name]
. When the container starts, a few things happen:
The Docker container kicks off main.py
or main
depending on Python or GoLang
The optional rabbitmq_config.json
as well as environment variables passed in are processed and used to start the service. It then processes all of the files within the c2_functions
folder to look for your C2 Profile class (You'll notice here that your class extends the C2Profile class). Once it finds that class, it gets a dictionary representation of all of that information (C2 profile name, parameters, etc) and then connects to RabbitMQ to send that data to Mythic.
When Mythic gets that synchronization message from the container with all of the dictionary information, it tries to import the C2 Profile. If it is able to successfully import it (or update the current instance), then it'll report an event message that the C2 profile successfully synced.
Once the sync happens, the Docker container sends periodic heartbeat messages to Mythic to let it know that the container is still up and going. This is how the UI is able to determine if the container is up or down.
The C2 Profile doesn't need to know anything about the actual content of the messages that are coming from agents, and in most cases wouldn't be able to read them anyway since they'll be encrypted. Depending on the kind of communications you're planning on doing, your C2 Profile might wrap or break up an agent's message (e.g.: splitting a message to go across DNS and getting it reassembled), but once your C2 Profile re-assembles the agent message, it can just forward it along. In most cases, simply sending the agent message as an HTTP POST to Mythic's http://MythicServerHost:MythicServerPort/agent_message
endpoint, where MythicServerHost
and MythicServerPort
are both available via environment variables, is good enough. You'll get an immediate result back from that, which your C2 profile should hand back to the agent.
Mythic will try to automatically start your server file when the container starts. This same file is what gets executed when you click to "start" the profile in the UI.
Every Docker container has environment variables, MYTHIC_SERVER_HOST
which points to 127.0.0.1
by default and MYTHIC_SERVER_PORT
which points to 17443
by default. This information is pulled from the main /Mythic/.env
file. So, if you change Mythic's main UI to HTTP on port 7444, then each C2 Docker container's MYTHIC_SERVER_PORT
environment variable will update. This allows your code within the docker container to always know where to forward requests so that the main Mythic server can process them.
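For example, server code could build the forwarding URL from those environment variables; a sketch using only Python's standard library:

```python
import os

# MYTHIC_SERVER_HOST / MYTHIC_SERVER_PORT are injected into every container
# from the main /Mythic/.env file; fall back to the documented defaults.
mythic_host = os.environ.get("MYTHIC_SERVER_HOST", "127.0.0.1")
mythic_port = os.environ.get("MYTHIC_SERVER_PORT", "17443")

# Agent messages get forwarded here as HTTP POST requests.
agent_message_url = f"http://{mythic_host}:{mythic_port}/agent_message"
```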
The C2 Profile has nothing to do with the content of the messages that are being sent. It has no influence on the encryption or what format the agent messages are in (JSON, binary, stego, etc). If you want to control that level of granularity, you need to check out the Translation Containers.
When forwarding messages to Mythic, they must be in a specific format: Base64(UUID + message). This just allows Mythic to have a standard way to process messages that are coming in and pull out the needed pieces of information. The UUID allows mythic to look up the associated Payload Type and see what needs to happen (is it a payload that's staging, is it a callback, does processing need to go to a translation container first, etc).The message
is typically an encrypted blob, but could be anything.
C2 Profiles can access the same RPC functions that Payload Types can; however, since C2 profiles don't have things like a task_id
, there is some functionality they won't be able to leverage.
This one is a little less intuitive than the C2 Docker container directly reaching out to the Mythic server for functionality. It allows an operator's tasking to directly manipulate a C2 component. There are no "default" functions here; it's all based on the C2 profile itself. Technically, this goes both ways - C2 Profiles can reach back and execute functionality from Payload Types as well.
Payload Types and C2 Profiles can specify an attribute, custom_rpc_functions
, which are dictionaries of key
-value
pairs (much like the completion functions) where the key
is the name of the function that a remote service can call, and the value
is the actual function itself. These functions have the following format:
The incoming data is a dictionary in the incomingMsg.ServiceRPCFunctionArguments
and the resulting data goes back through the Result
key.
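A runnable sketch of that shape, with tiny stand-in classes in place of the real mythic_container message types (the class names below are hypothetical):

```python
import asyncio

class ServiceRPCMessage:
    """Stand-in for the real incoming RPC message class."""
    def __init__(self, args: dict):
        self.ServiceRPCFunctionArguments = args

class ServiceRPCResponse:
    """Stand-in for the real RPC response class."""
    def __init__(self):
        self.Success = False
        self.Result = {}

async def add_numbers(incomingMsg: ServiceRPCMessage) -> ServiceRPCResponse:
    # Pull caller-supplied arguments out of ServiceRPCFunctionArguments
    # and hand data back through the Result key.
    resp = ServiceRPCResponse()
    args = incomingMsg.ServiceRPCFunctionArguments
    resp.Result = {"sum": args.get("a", 0) + args.get("b", 0)}
    resp.Success = True
    return resp

# key = name a remote service calls; value = the function itself
custom_rpc_functions = {"add_numbers": add_numbers}

resp = asyncio.run(custom_rpc_functions["add_numbers"](ServiceRPCMessage({"a": 2, "b": 3})))
```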
This section talks about the different components for creating messages from the agent to a C2 docker container and how those can be structured within a C2 profile. Specifically, this goes into the following components:
Files
How agent messages are formatted
How to perform the initial checkin and do encrypted key exchanges
How to get tasking
How to post responses
Another major component of the agent side coding is the actual C2 communications piece within your agent. This piece is how your agent actually implements the C2 components to do its magic.
Every C2 profile has zero or more C2 Parameters that go with it. These describe things like callback intervals, API keys to use, how to format web requests, encryption keys, etc. These parameters are specific to that C2 profile, so any agent that "speaks" that c2 profile's language will leverage these parameters. If you look at the parameters in the UI, you'll see:
Name
- When creating payloads or issuing tasking, you will get a dictionary of name
-> user supplied value
for you to leverage. This is a unique key per C2 profile (ex: callback_host
)
description
- This is what's presented to the user for the parameter (ex: Callback host or redirector in URL format
)
default_value
- If the user doesn't supply a value, this is the default one that will be used
verifier_regex
- This is a regex applied to the user input in the UI for a visual cue that the parameter is correct. An example would be ^(http|https):\/\/[a-zA-Z0-9]+
for the callback_host
to make sure that it starts with http:// or https:// and contains at least one letter/number.
required
- Indicate if this is a required field or not.
randomized
- This is a boolean indicating if the parameter should be randomized each time. This comes into play each time a payload is generated with this c2 profile included. This allows you to have a random value in the c2 profile that's randomized for each payload (like a named pipe name).
format_string
- If randomized
is true
, then this is the regex format string used to generate that random value. For example, [a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}
will generate a UUID4 each time.
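As a sketch of what that UUID4-style format string produces (Mythic does this generation server side; the helper below is purely illustrative):

```python
import random
import string

def random_chunk(n: int) -> str:
    # Equivalent of one [a-z0-9]{n} group in the format string
    return "".join(random.choice(string.ascii_lowercase + string.digits)
                   for _ in range(n))

# [a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}
value = "-".join(random_chunk(n) for n in (8, 4, 4, 4, 12))
```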
This page describes how an agent message is formatted
All messages go to the /agent_message
endpoint via the associated C2 Profile docker container. These messages can be:
POST request
message content in body
GET request
message content in FIRST header value
message content in FIRST cookie value
message content in FIRST query parameter
For query parameters, the Base64 content must be URL Safe Encoded - this has different meaning in different languages, but means that for the "unsafe" characters of +
and /
, they need to be swapped out with -
and _
instead of %encoded. Many languages have a special Base64 Encode/Decode function for this. If you're curious, this is an easy site to check your encoding:
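In Python, for example, this is the difference between b64encode and urlsafe_b64encode:

```python
import base64

raw = b"\xfb\xff\xfe"  # bytes whose standard Base64 contains '+' and '/'
standard = base64.b64encode(raw).decode()          # uses '+' and '/'
url_safe = base64.urlsafe_b64encode(raw).decode()  # swaps in '-' and '_'
```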
All agent messages have the same general structure, but it's the message inside the structure that varies.
Each message has the following general format:
There are a couple of components to note here in what's called an agentMessage
:
UUID
- This UUID varies based on the phase of the agent (initial checkin, staging, fully staged). This is a 36 character long string of the format b50a5fe8-099d-4611-a2ac-96d93e6ec77b
. Optionally, if your agent is dealing with more of a binary-level specification rather than strings, you can use a 16 byte big-endian value here for the binary representation of the UUID4 string.
EncBlob
- This section is encrypted, typically by an AES256 key, but when agents are staging, this could be encrypted with RSA keys or as part of some other custom crypto/staging you're doing as part of your payload type container.
JSON
- This is the actual message that's being sent by the agent to Mythic or from Mythic to an agent. If you're doing your own custom message format and leveraging a translation container, then this format will obviously be different and will match up with your custom version; however, in your translation container you will need to convert back to this format so that Mythic can process the message.
action
- This specifies what the rest of the message means. This can be one of the following:
staging_rsa
checkin
get_tasking
post_response
translation_staging (you're doing your own staging)
+
- when you see something like UUID + EncBlob
, that's referring to byte concatenation of the two values. You don't need to do any special processing; just put the second element's bytes immediately after the first element's bytes.
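Putting those pieces together for the simplest (plaintext) case, an agentMessage can be sketched as follows; with encryption enabled, the JSON bytes would be replaced by the encrypted blob:

```python
import base64
import json

uuid = "b50a5fe8-099d-4611-a2ac-96d93e6ec77b"  # payload/staging/callback UUID
inner = json.dumps({"action": "get_tasking", "tasking_size": 1}).encode()

# UUID + message is plain byte concatenation; the whole thing is then Base64ed.
agent_message = base64.b64encode(uuid.encode() + inner).decode()

# Mythic's side: strip the Base64, peel off the 36-character UUID, parse the rest.
decoded = base64.b64decode(agent_message)
outer_uuid, body = decoded[:36].decode(), json.loads(decoded[36:])
```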
If you want to have a completely custom agent message format (different format for JSON, different field names/formatting, a binary or otherwise formatted protocol, etc), then there are only two things you have to do for it to work with Mythic.
Base64 encode the message
The first bytes of the message must be the associated UUID (payload, staging, callback).
Mythic uses these first few bytes to do a lookup in its database to find out everything about the message. Specifically for this case, it looks up if the associated payload type has a translation container, and if so, ships the message off to it first before trying to process it.
Delegate messages are messages that an agent is forwarding on behalf of another agent. The use case here is an agent forwarding peer-to-peer messages for a linked agent. Mythic supports this by having an optional delegates
array in messages. An example of what this looks like is in the next section, but this delegates
array can be part of any message from an agent to mythic.
When sending delegate messages, there's a simple standard format:
Within a delegates array are a series of JSON dictionaries:
UUID
- This field is some UUID identifier used by the agent to track where a message came from and where it should go back to. Ideally this is the same as the UUID for the callback on the other end of the connection, but can be any value. If the agent uses a value that does not match up with the UUID of the agent on the other end, Mythic will indicate that in the response. This allows the middle-man agent to generate some UUID identifier as needed upon first connection and then learn of and use the agent's real UUID once the messages start flowing.
message
- this is the actual message that the agent is transmitting on behalf of the other agent
c2_profile
- This field indicates the name of the C2 Profile associated with the connection between this agent and the delegated agent. This allows Mythic to know how these two agents are talking to each other when generating and tracking connections.
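A hedged example of a get_tasking message carrying one delegate (all values illustrative):

```python
# The middle-man agent forwards agentB's (typically encrypted) message
# verbatim; it never needs to be able to read it.
get_tasking_with_delegate = {
    "action": "get_tasking",
    "tasking_size": 1,
    "delegates": [
        {
            "uuid": "agentA's locally-generated ID for this connection",
            "message": "base64 agentMessage from agentB, forwarded as-is",
            "c2_profile": "smb",
        }
    ],
}
```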
The new_uuid
field indicates that the uuid
field the agent sent doesn't match up with the UUID in the associated message. If the agent uses the right UUID with the agentMessage then the response would be:
Why do you care and why is this important? This allows an agent to randomly generate its own UUID for tracking connections with other agents and provides a mechanism for Mythic to reveal the right UUID for the callback on the other end. This implicitly gives the agent the right UUID to use if it needs to announce that it lost the route to the callback on the other end. If Mythic didn't correct the agent's use of UUID, then when the agent loses connection to the P2P agent, it wouldn't be able to properly indicate it to Mythic.
Ok, so let's walk through an example:
agentA is an egress agent speaking HTTP to Mythic. agentA sends messages directly to Mythic, such as the {"action": "get_tasking", "tasking_size": 1}
. All is well.
somehow agentB gets deployed and executed, this agent (for sake of example) opens a port on its host (same host as agentA or another one, doesn't matter)
agentA connects to agentB (or agentB connects to agentA if agentA opened the port and agentB did a connection to it) over this new P2P protocol (smb, tcp, etc)
agentB sends to agentA a staging message if it's doing EKE, a checkin message if it's already an established callback (like the example of re-linking to a callback), or a checkin message if it's doing like a static PSK or plaintext. The format of this message is exactly the same as if it wasn't going through agentA
agentA gets this message, and is like "new connection, who dis?", so it makes a random UUID to identify whomever is on the other end of the line and forwards that message off to Mythic with the next message agentA would be sending anyway. So, if the next message that agentA would send to Mythic is another get tasking, then it would look like: {"action": "get_tasking", "tasking_size": 1, "delegates": [ {"message": agentB's message, "c2_profile": "Name of the profile we're using to communicate", "uuid": "myRandomUUID"} ] }
. That's the message agentA sends to Mythic.
Mythic gets the message, processes the get_tasking for agentA, then sees it has delegate
messages (i.e. messages that it's passing along on behalf of other agents). So Mythic recursively processes each of the messages in this array. Because that message
value is the same as if agentB was talking directly to Mythic, Mythic can parse out the right UUIDs and information. The c2_profile
piece allows Mythic to look up any c2-specific encryption information to pass along for the message. Once Mythic is done processing the message, it sends a response back to agentA like: {"action": "get_tasking", "tasks": [ normal array of tasks ], "delegates": [ {"message": "response back to what agentB sent", "uuid": "myRandomUUID that agentA generated", "new_uuid": "the actual UUID that Mythic uses for agentB"} ] }
. If this is the first time that Mythic has seen a delegate from agentB through agentA, then Mythic knows that there's a route between the two and via which C2 profile, so it can automatically display that in the UI
agentA gets the response back, processes its get_tasking like normal, sees the delegates
array and loops through those messages. It sees "oh, it's myRandomUUID, i know that guy, let me forward it along" and also sees that it's been calling agentB by the wrong name; it now knows agentB's real name according to Mythic. This is important because if agentA and agentB ever lose connection, agentA can report back to Mythic, using the right UUID that Mythic knows, that it can no longer speak to agentB.
This same process repeats and just keeps nesting for agentC that would send a message to agentB that would send the message to agentA that sends it to Mythic. agentA can't actually decrypt the messages between agentB and Mythic, but it doesn't need to. It just has to track that connection and shuttle messages around.
Now that there's a "route" between the two agents that Mythic is aware of, a few things happen:
when agentA now does a get_tasking
message (with or without a delegate message from agentB), if mythic sees a tasking for agentB, Mythic will automatically add in the same delegates
message that we saw before and send it back with agentA so that agentA can forward it to agentB. That's important - agentB never had to ask for tasking, Mythic automatically gave it to agentA because it knew there was a route between the two agents.
if you DON'T want that to happen though - if you want agentB to keep issuing get_tasking requests through agentA with periodic beaconing - then in agentA's get_tasking you can set get_delegate_tasks
to False, i.e. ({"action": "get_tasking", "tasking_size": 1, "get_delegate_tasks": false}
). Then even if there are tasks for agentB, Mythic WILL NOT send them along with agentA; agentB will have to ask for them directly
If this wasn't part of some task, then there would be no task_id to use. In this case, we can add the same edges
structure at a higher point in the message:
This page describes the format for getting new tasking
The contents of the JSON message from the agent to Mythic when requesting tasking is as follows:
There are two things to note here:
tasking_size
- This parameter defaults to one, but allows an agent to request how many tasks it wants to get back at once. If the agent specifies -1
as this value, then Mythic will return all of the tasking it has for that callback.
delegates
- This parameter is not required, but allows for an agent to forward on messages from other callbacks. This is the peer-to-peer scenario where inner messages are passed externally by the egress point. Each of these is a self-contained agentMessage, and the c2_profile
indicates the name of the C2 Profile used to connect the two agents. This allows Mythic to properly decode/translate the messages, even for nested messages.
get_delegate_tasks
- This is an optional parameter. If you don't include it, it's assumed to be True
. This indicates whether or not this get_tasking
request should also check for tasks that belong to callbacks that are reachable from this callback. So, if agentA has a route to agentB, agentB has a task in the submitted
state, and agentA issues a get_tasking
, agentA can decide if it wants just its own tasking or if it also wants to pick up agentB's task as well.
Why does this matter? This is helpful if your linked agents issue their own periodic get_tasking
messages rather than simply waiting for tasking to come to them. This way the parent callback (agentA in this case) doesn't accidentally consume and toss aside the task for agentB; instead, agentB's own periodic get_tasking
message has to make its way up to Mythic for the task to be fetched.
Mythic responds with the following message format for get_tasking requests:
There are a few things to note here:
tasks
- This parameter is always a list, but contains between 0 and tasking_size
number of entries.
parameters
- this encapsulates the parameters for the task. If a command has parameters like: {"remote_path": "/users/desktop/test.png", "file_id": "uuid_here"}
, then the parameters
field will have that JSON blob as a STRING value (i.e. the command is responsible for parsing that out).
delegates
- This parameter contains any responses for the messages that came through in the first message.
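The parameters-as-a-string point above matters agent side, since the command has to parse them out itself. A sketch:

```python
import json

# Task as delivered to the agent; note that "parameters" is a STRING of JSON.
task = {
    "id": "uuid of task",
    "command": "upload",
    "parameters": '{"remote_path": "/users/desktop/test.png", "file_id": "uuid_here"}',
}

# The command itself is responsible for parsing that string out.
params = json.loads(task["parameters"])
remote_path = params["remote_path"]
```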
The contents of the JSON message from the agent to Mythic when posting tasking responses is as follows:
There are two things to note here:
responses
- This parameter is a list of all the responses for each tasking.
For each element in the responses array, we have a dictionary of information about the response. We also have a task_id
field to indicate which task this response is for. After that though, comes the actual response output from the task.
If you don't want to hook a certain feature (like sending keystrokes, downloading files, creating artifacts, etc), but just want to return output to the user, the response section can be as simple as:
{"task_id": "uuid of task", "user_output": "output of task here"}
To continue adding to that JSON response, you can indicate that a command is finished by adding "completed": true
or indicate that there was an error with "status": "error"
.
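Combining those pieces, a hedged sketch of a full post_response message (values illustrative):

```python
post_response_msg = {
    "action": "post_response",
    "responses": [
        # Simple output back to the operator, marked complete.
        {"task_id": "uuid of task 1",
         "user_output": "output of task here",
         "completed": True},
        # A task that failed; "status": "error" flags the error state.
        {"task_id": "uuid of task 2",
         "user_output": "access denied",
         "status": "error",
         "completed": True},
    ],
}
```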
Mythic responds with the following message format for post_response requests:
There are two things to note here:
responses
- This parameter is always a list and contains a success or error + error message for each task that was responded to.
delegates
- This parameter contains any responses for the messages that came through in the first message
This page has the various different ways the initial checkin can happen and the encryption schemes used.
You will see a bunch of UUIDs mentioned throughout this section. All UUIDs are UUIDv4 formatted UUIDs (36 characters in length) and formatted like:
In general, the UUID concatenated with the encrypted message provides a way to give context to the encrypted message without requiring a lot of extra pieces and without having to do a bunch of nested base64 encodings. 99% of the time, your messages will use your callbackUUID in the outer message. The outer UUID gives Mythic information about how to decrypt or interpret the following encrypted blob. In general:
payloadUUID as the outer UUID tells Mythic to look up that payload UUID, then look up the C2 profile associated with it, find a parameter called AESPSK
, and use that as the key to decrypt the message
tempUUID as the outer UUID tells Mythic that this is a staging process. So, look up the UUID in the staging database to see information about the blob, such as if it's an RSA encrypted blob or is part of a Diffie-Hellman key exchange
callbackUUID as the outerUUID tells Mythic that this is a full callback with an established encryption key or in plaintext.
However, when your payload first executes, it doesn't have a callbackUUID, it's just a payloadUUID. This is why you'll see clarifiers as to which UUID we're referring to when doing specific messages. The whole goal of the checkin
process is to go from a payload (and thus payloadUUID) to a full callback (and thus callbackUUID), so at the end of staging and everything you'll end up with a new UUID that you'll use as the outer UUID.
If your already existing callback sends a checkin message more than once, Mythic simply uses that information to update information about the callback rather than trying to register a new callback.
The plaintext checkin is useful for testing or when creating an agent for the first time. When creating payloads, you can generate encryption keys per c2 profile. To do so, the C2 Profile will have a parameter that has an attribute called crypto_type=True
. This will then signal to Mythic to either generate a new per-payload AES256_HMAC key or (if your agent is using a translation container) tell your agent's translation container to generate a new key. In the http
profile for example, this is a ChooseOne
option between aes256_hmac
or none
. If you're doing plaintext comms, then you need to set this value to none
when creating your payload. Mythic looks at that outer PayloadUUID
and checks if there's an associated encryption key with it in the database. If there is, Mythic will automatically try to decrypt the rest of the message, which will fail. This checkin has the following format:
integrity_level
is an integer from 1-4 that indicates the integrity level of the callback. On Windows, these levels correspond to low integrity (1), medium integrity (2), high integrity (3), or SYSTEM integrity (4). On Linux, these don't have a great mapping, but you can think of (2) as a standard user, (3) as a user that's in the sudoers file or is able to run sudo, and (4) as the root user.
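A hedged sketch of a plaintext checkin body; the action, uuid, and integrity_level fields come from this page, while the remaining host-metadata fields are illustrative of what an agent typically reports:

```python
checkin_msg = {
    "action": "checkin",
    "uuid": "payload UUID here",  # still the payload UUID at this point
    "integrity_level": 2,         # 1-4; 2 == standard/medium-integrity user
    # Illustrative host metadata (hypothetical field names):
    "user": "itsafeature",
    "host": "WORKSTATION-1",
    "pid": 4444,
}
```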
The JSON section is not encrypted in any way, it's all plaintext.
The checkin has the following response:
From here on, the agent messages use the new UUID instead of the payload UUID. This allows Mythic to track a payload trying to make a new callback vs a callback based on a payload.
This method uses a static AES256 key for all communications. This will be different for each payload that's created. When creating payloads, you can generate encryption keys per c2 profile. To do so, the C2 Profile will have a parameter that has an attribute called crypto_type=True
. This will then signal to Mythic to either generate a new per-payload AES256_HMAC key or (if your agent is using a translation container) tell your agent's translation container to generate a new key. In the http
profile for example, this is a ChooseOne
option between aes256_hmac
or none
. The key passed down to your agent during build time will be the base64 encoded version of the 32Byte key.
The message sent will be of the form:
The message response will be of the form:
From here on, the agent messages use the new UUID instead of the payload UUID.
This first message from Agent -> Mythic has the Payload UUID as the outer UUID and the Payload UUID inside the checkin JSON message. Once the agent gets the reply with a callbackUUID, all future messages will have this callbackUUID as the outer UUID.
Padding: PKCS7, block size of 16
Mode: CBC
IV is 16 random bytes
Final message: IV + Ciphertext + HMAC
where HMAC is SHA256 with the same AES key over (IV + Ciphertext)
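The framing described above (PKCS7 padding, then IV + Ciphertext + HMAC, where the HMAC covers IV + Ciphertext) can be sketched in Python. The AES-CBC encryption itself requires a third-party crypto library, so a placeholder stands in for the ciphertext here; the padding, assembly, and HMAC verification logic are the parts being demonstrated:

```python
import hashlib
import hmac
import os

def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    """PKCS7: append N bytes, each with value N, to reach a block boundary."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n] * n)

def frame_message(aes_key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    """Assemble IV + Ciphertext + HMAC-SHA256(key, IV + Ciphertext)."""
    mac = hmac.new(aes_key, iv + ciphertext, hashlib.sha256).digest()
    return iv + ciphertext + mac

def verify_and_split(aes_key: bytes, blob: bytes):
    """Split a framed message and verify its HMAC before any decryption."""
    iv, ciphertext, mac = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(aes_key, iv + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("HMAC check failed - do not decrypt")
    return iv, ciphertext

# demo with a placeholder "ciphertext" - a real agent would AES-CBC encrypt
key = os.urandom(32)
iv = os.urandom(16)
fake_ct = pkcs7_pad(b"example plaintext")   # stands in for AES-CBC output
blob = frame_message(key, iv, fake_ct)
assert verify_and_split(key, blob) == (iv, fake_ct)
```

Verifying the HMAC before decrypting (encrypt-then-MAC) is what lets the receiver reject tampered messages without ever touching the padding.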
There are two currently supported options for doing an encrypted key exchange in Mythic:
Client-side generated RSA keys
leveraged by the apfell-jxa and poseidon agents
Agent-specific custom EKE
The agent starts running and generates a new 4096 bit Pub/Priv RSA key pair in memory. The agent then sends the following message to Mythic:
where the AES key initially used is defined as the initial encryption value when generating the payload. When creating payloads, you can generate encryption keys per c2 profile. To do so, the C2 Profile will have a parameter that has an attribute called crypto_type=True
. This will then signal to Mythic to either generate a new per-payload AES256_HMAC key or (if your agent is using a translation container) tell your agent's translation container to generate a new key. In the http
profile for example, this is a ChooseOne
option between aes256_hmac
or none
.
This message causes the following response:
The response is encrypted with the same initial AESPSK value as before. However, the session_key
value is encrypted with the public RSA key that was in the initial message and base64 encoded. The response also includes a new staging UUID for the agent to use. This is not the final UUID for the new callback, this is a temporary UUID to indicate that the next message will be encrypted with the new AES key.
The next message from the agent to Mythic is as follows:
This checkin data is the same as all the other methods of checking in, the key things here are that the tempUUID is the temp UUID specified in the other message, the inner uuid is the payload UUID, and the AES key used is the negotiated one. It's with this information that Mythic is able to track the new messages as belonging to the same staging sequence and confirm that all of the information was transmitted properly. The final response is as follows:
From here on, the agent messages use the new UUID instead of the payload UUID or temp UUID and continues to use the new negotiated AES key.
Padding: PKCS7, block size of 16
Mode: CBC
IV is 16 random bytes
Final message: IV + Ciphertext + HMAC
where HMAC is SHA256 with the same AES key over (IV + Ciphertext)
PKCS1_OAEP
This is specifically OAEP with SHA1
4096 bits in size
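A minimal sketch of how an agent could assemble the first staging message, assuming the staging_rsa message layout described above (the outer message is base64 of the UUID plus the encrypted JSON). The PEM key and payload UUID are placeholders, and the AES encryption of the inner JSON with the payload's initial key is omitted for brevity:

```python
import base64
import json
import secrets
import string

PAYLOAD_UUID = "payload-uuid-placeholder"  # hypothetical value

# The agent would generate a 4096-bit RSA key pair in memory here (e.g. with
# a crypto library) and export the public key; a placeholder PEM stands in.
pub_key_pem = b"-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"

checkin = {
    "action": "staging_rsa",
    "pub_key": base64.b64encode(pub_key_pem).decode(),
    # random session id so Mythic can track this staging sequence
    "session_id": "".join(secrets.choice(string.ascii_letters) for _ in range(20)),
}

# The inner JSON would be AES256-HMAC encrypted with the initial key (omitted
# here), then prefixed with the payload UUID and base64 encoded as the outer
# message that actually goes over the wire.
inner = json.dumps(checkin).encode()
outer = base64.b64encode(PAYLOAD_UUID.encode() + inner)
```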
Mythic looks up the information for the payloadUUID and calls your translation container's translate_from_c2_format
function. That function gets a dictionary of information like the following:
To get the enc_key
, dec_key
, and type
, Mythic uses the payloadUUID to then look up information about the payload. It uses the profile
associated with the message to look up the C2 Profile parameters and look for any parameter with a crypto_type
set to true
. Mythic pulls this information and forwards it all to your translate_from_c2_format
function.
Ok, so that message gets your payloadUUID/crypto information and forwards it to your translation container, but then what?
Normally, when the translate_to_c2_format
function is called, you just translate from your own custom format to the standard JSON dictionary format that Mythic uses. No big deal. However, we're doing EKE here, so we need to do something a little different. Instead of sending back an action of checkin
, get_tasking
, post_response
, etc, we're going to generate an action of staging_translation
.
Mythic is able to do staging and EKE because it can save temporary pieces of information between agent messages. Mythic allows you to do this too if you generate a response like the following:
Let's break down these pieces a bit:
action
- this must be "staging_translation". This is what indicates to Mythic once the message comes back from the translate_from_c2_format
function that this message is part of staging.
session_id
- this is some random character string you generate so that we can differentiate between multiple instances of the same payload trying to go through the EKE process at the same time.
enc_key
/ dec_key
- this is the raw bytes of the encryption/decryption keys you want for the next message. The next time you get the translate_from_c2_format
message for this instance of the payload going through staging, THESE are the keys you'll be provided.
crypto_type
- this is more for you than anything, but gives you insight into what the enc_key
and dec_key
are. For example, with the http
profile and the staging_rsa
, the crypto type is set to aes256_hmac
so that I know exactly what it is. If you're handling multiple kinds of encryption or staging, this is a helpful way to make sure you're able to keep track of everything.
next_uuid
- this is the next UUID that appears in front of your message (instead of the payloadUUID). This is how Mythic will be able to look up this staging information and provide it to you as part of the next translate_from_c2_format
function call.
message
- this is the actual raw bytes of the message you want to send back to your agent.
This process just repeats as many times as you want until you finally return from translate_from_c2_format
an actual checkin
message.
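Putting the pieces above together, a translate_from_c2_format implementation doing custom EKE might build its staging response like this sketch (key generation and the actual message encryption are elided; the field names are the ones described above):

```python
import os
import secrets

def build_staging_response(agent_message: bytes) -> dict:
    """Sketch of a translate_from_c2_format result that tells Mythic this
    message is part of staging rather than a normal checkin/get_tasking."""
    new_key = os.urandom(32)  # negotiated AES256 key for the *next* message
    return {
        "action": "staging_translation",   # must be exactly this string
        "session_id": secrets.token_hex(10),  # differentiates parallel stagings
        "enc_key": new_key,   # raw key bytes Mythic hands back next round
        "dec_key": new_key,
        "crypto_type": "aes256_hmac",      # bookkeeping for your own code
        "next_uuid": "temp-uuid-placeholder",  # hypothetical; prefixes the next message
        "message": agent_message,  # raw bytes to send back to your agent
    }
```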
What if there's other information you need/want to store though? There are three RPC endpoints you can hit that allow you to store arbitrary data as part of your build process, translation process, or custom c2 process:
create_agentstorage
- this takes a unique_id string value and the raw bytes data value. The unique_id
is something that you need to generate, but since you're in control of it, you can make sure it's what you need. This returns a dictionary:
{"unique_id": "your unique id", "data": "base64 of the data you supplied"}
get_agentstorage
- this takes the unique_id string value and returns a dictionary of the stored item:
{"unique_id": "your unique id", "data": "base64 of the data you supplied"}
delete_agentstorage
- this takes the unique_id string value and removes the entry from the database
...
- This section varies based on the action that's being performed. The different variations here can be found in , , and
delegates
- This section contains messages from other agents that are being passed along. This is how messages from nested peer-to-peer agents can be forwarded out through an egress callback. If your agent isn't forwarding messages on from others (such as in a p2p mesh or as an egress point), then you don't need this section. More info can be found here:
What happens when agentA and agentB can no longer communicate though? agentA needs to send a message back to Mythic to indicate that the connection is lost. This can be done with a dedicated key in the agent's message. Using all of the information agentA has about the connection, it can announce that Mythic should remove an edge between the two callbacks. This can either happen as part of a response to a tasking (such as an explicit task to unlink two agents) or just something that gets noticed (like a computer rebooted and now the connection is lost). In the first case, we see the example below as part of a normal post_response message:
Each response style is described in . The format described in each of the Hooking features sections replaces the ... response message
piece above
delegates
- This parameter is not required, but allows for an agent to forward on messages from other callbacks. This is the peer-to-peer scenario where inner messages are passed externally by the egress point. Each of these messages is a self-contained "".
This section requires you to have a translation container associated with your payload type. The agent sends your own custom message to Mythic:
When creating payloads, Mythic will send a C2 Profile's parameters to the associated C2 Profile container for an "opsec check". This is a function that you can choose to write (or not) to look over the C2-specific parameter values that an operator selected to see if they pass your risk tolerance. This function is part of your C2 Profile's class definition:
In the end, the function is returning success or error for if the OPSEC check passed or not.
OPSEC checks for C2 profiles are executed every time a Payload is created. This means when an operator does it through the UI, when somebody scripts it out, and when a payload is automatically generated as part of tasking (such as for lateral movement or spawning new callbacks).
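A simplified stand-in for such a check is sketched below. The real function is an async method in your C2Profile class definition (via the mythic_container package) and receives a structured message; the flat dict, parameter names, and return shape here are illustrative only:

```python
def opsec_check(parameters: dict) -> dict:
    """Illustrative OPSEC check: inspect operator-chosen C2 parameter values
    against your risk tolerance and return success or error accordingly."""
    # Hypothetical rule: refuse to build payloads with no encryption selected.
    if parameters.get("AESPSK") == "none":
        return {"status": "error",
                "message": "no encryption selected - flagging as an OPSEC risk"}
    return {"status": "success", "message": "OPSEC check passed"}
```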
Check Agent's Configuration Before Generation
Configuration checks are optional checks implemented by a C2 profile to alert the operator if they're generating an agent with a C2 configuration that doesn't match the current C2 docker services.
This check occurs every time an agent is generated, and this output is added to the payload's build_message
. Thus, an operator sees it when generating a payload, but it's always viewable again from the created payloads page.
The function is part of your C2 Profile's class definition, so it has access to your local config.json
file as well as the instance configuration from the agent.
This is a function operators can manually invoke for a payload to ask the payload's C2 profiles to generate a set of redirection rules for that payload. Nothing in Mythic knows more about a specific C2 profile than the C2 profile itself, so it makes sense that a C2 profile should be able to generate its own redirection rules for a given payload.
These redirection rules are up to the C2 Profile creators, but can include things like Apache mod_rewrite rules, Nginx configurations, and more.
Operationally, users can invoke this function from the created payloads page with a dropdown menu for the payload they're interested in. Functionally, this code lives in the class definition of your C2 Profile.
This function gets passed the same sort of information that the opsec check and configuration check functions get; namely, information about all of the payload's supplied c2 profile parameter values. This function can also access the C2 Profile's current configuration.
The format of the function is as follows:
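As an illustration only (not the real Mythic signature), a redirect-rules generator might take the payload's C2 parameter values and emit Apache mod_rewrite lines that pass matching traffic to Mythic and bounce everything else; the parameter names and rule content here are hypothetical:

```python
def generate_redirect_rules(parameters: dict) -> list:
    """Hypothetical sketch: build mod_rewrite rules that redirect any
    request whose User-Agent doesn't match this payload's configured one."""
    ua = parameters.get("headers", {}).get("User-Agent", "Mozilla/5.0")
    return [
        "RewriteEngine On",
        # send non-matching User-Agents somewhere harmless
        f'RewriteCond %{{HTTP_USER_AGENT}} "!^{ua}$"',
        "RewriteRule ^.*$ https://example.com/ [L,R=302]",
    ]
```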
How does Reverse Port Forward work within Mythic
Reverse port forwards provide a way to tunnel incoming connections on one port out to another IP:Port somewhere else. It normally provides a way to expose an internal service to a network that would otherwise not be able to directly access it.
Agents transmit dictionary messages that look like the following:
These messages contain three components:
exit
- boolean True or False. This indicates to either Mythic or your Agent that the connection has been terminated from one end and should be closed on the other end (after sending data
). Because Mythic and two HTTP connections sit between the actual tool you're trying to proxy and the agent that makes those requests on your tool's behalf, we need this sort of flag to indicate that a TCP connection has closed on one side.
server_id
- unsigned int32. This number is how Mythic and the agent can track individual connections. Every new connection will generate a new server_id
. Unlike SOCKS where Mythic is getting the initial connection, the agent is getting the initial connection in a reverse port forward. In this case, the agent needs to generate this random uint32 value to track connections.
data
- base64 string. This is the actual bytes that the proxied tool is trying to send.
In Python translation containers, if exit
is True, then data
can be None
These RPFWD messages are passed around as an array of dictionaries in get_tasking
and post_response
messages via a (added if needed) rpfwd
key:
or in the post_response
messages:
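A sketch of what these messages might look like, with illustrative field values, showing the rpfwd array sitting at the same level as action in both message types:

```python
import base64

# illustrative get_tasking message carrying rpfwd data alongside "action"
get_tasking_msg = {
    "action": "get_tasking",
    "tasking_size": 1,
    "rpfwd": [
        {
            "exit": False,
            "server_id": 1234567890,  # agent-generated uint32 per connection
            "data": base64.b64encode(b"raw bytes from the connection").decode(),
        }
    ],
}

# the post_response variant carries the same rpfwd array next to its action
post_response_msg = {
    "action": "post_response",
    "responses": [],
    "rpfwd": get_tasking_msg["rpfwd"],
}
```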
Notice that they're at the same level as "action" in these dictionaries; that's because they're not tied to any specific task. The same goes for delegate messages.
For the most part, the message processing is pretty straightforward:
Agent opens port X on the target host where it's running
ServerA makes a connection to PortX
Agent accepts the connection, generates a new uint32 server_id, and sends any data received to Mythic via rpfwd
key.
Mythic looks up the server_id
. If Mythic has seen this server_id, then it can pass it off to the appropriate thread or channel to continue processing. If we've never seen the server_id before, then it's likely a new connection that opened up, so we need to handle that appropriately: Mythic makes a new connection out to the RemoteIP:RemotePort specified when starting the rpfwd
session. Mythic forwards the data along and waits for data back. Any data received is sent back via the rpfwd
key the next time the agent checks in.
For existing connections, the agent looks at if exit
is True or not. If exit
is True, then the agent should close its corresponding TCP connection and clean up those resources. If it's not exit, then the agent should base64 decode the data
field and forward those bytes through the existing TCP connection.
The agent should also be streaming data back from its open TCP connections to Mythic in its get_tasking
and post_response
messages.
That's it really. The hard part is making sure that you don't exhaust all of the system resources by creating too many threads, running into deadlocks, or any number of other potential issues.
While not perfect, the poseidon agent has a generally working implementation for Mythic: https://github.com/MythicAgents/poseidon/blob/master/Payload_Type/poseidon/poseidon/agent_code/rpfwd/rpfwd.go
How does SOCKS work within Mythic
SOCKS provides a way to negotiate and transmit TCP connections through a proxy (https://en.wikipedia.org/wiki/SOCKS). This allows operators to proxy network tools through the Mythic server and out through supported agents. SOCKS5 allows many more options for authentication compared to SOCKS4; however, Mythic currently doesn't leverage the authenticated components, so if you open up this port on your Mythic server, it's important that you lock it down.
Opened SOCKS5 ports in Mythic do not leverage additional authentication, so MAKE SURE YOU LOCK DOWN YOUR PORTS.
Without going into all the details of the SOCKS5 protocol, agents transmit dictionary messages that look like the following:
These messages contain three components:
exit
- boolean True or False. This indicates to either Mythic or your Agent that the connection has been terminated from one end and should be closed on the other end (after sending data
). Because Mythic and two HTTP connections sit between the actual tool you're trying to proxy and the agent that makes those requests on your tool's behalf, we need this sort of flag to indicate that a TCP connection has closed on one side.
server_id
- integer. This number is how Mythic and the agent can track individual connections. Every new connection from a proxied tool (like through proxychains) will generate a new server_id
that Mythic will send with data to the Agent.
data
- base64 string. This is the actual bytes that the proxied tool is trying to send.
In Python translation containers, if exit
is True, then data
can be None
These SOCKS messages are passed around as an array of dictionaries in get_tasking
and post_response
messages via a (added if needed) socks
key:
or in the post_response
messages:
Notice that they're at the same level as "action" in these dictionaries; that's because they're not tied to any specific task. The same goes for delegate messages.
For the most part, the message processing is pretty straightforward:
Get a new SOCKS array
Get the first element from the list
If we know the server_id
, then we can forward the message off to the appropriate thread or channel to continue processing. If we've never seen the server_id before, then it's likely a new connection that opened up from an operator starting a new tool through proxychains, so we need to handle that appropriately.
For new connections, the first message is always a SOCKS Request message with encoded data for the IP:PORT to connect to. This means that SOCKS authentication is already done. There's also a very specific message that gets sent back as a response to this. This small negotiation piece isn't something that Mythic created; it's just part of the SOCKS protocol to ensure that a tool like proxychains gets confirmation the agent was able to reach the desired IP:PORT
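That initial SOCKS Request has a fixed layout defined by RFC 1928 (VER, CMD, RSV, ATYP, destination address, destination port). A small parser for the CONNECT case, which an agent would apply to the first data it sees for a new server_id, might look like:

```python
import ipaddress
import struct

def parse_socks5_connect(req: bytes):
    """Parse a SOCKS5 CONNECT request (RFC 1928):
    VER(0x05) CMD(0x01) RSV(0x00) ATYP DST.ADDR DST.PORT."""
    ver, cmd, _rsv, atyp = req[0], req[1], req[2], req[3]
    if ver != 0x05 or cmd != 0x01:
        raise ValueError("not a SOCKS5 CONNECT request")
    if atyp == 0x01:                                  # IPv4: 4 raw bytes
        host = str(ipaddress.IPv4Address(req[4:8]))
        port_off = 8
    elif atyp == 0x03:                                # domain: length-prefixed
        length = req[4]
        host = req[5:5 + length].decode()
        port_off = 5 + length
    elif atyp == 0x04:                                # IPv6: 16 raw bytes
        host = str(ipaddress.IPv6Address(req[4:20]))
        port_off = 20
    else:
        raise ValueError("unknown address type")
    (port,) = struct.unpack("!H", req[port_off:port_off + 2])
    return host, port

# example: CONNECT 10.0.0.5:443
req = bytes([5, 1, 0, 1, 10, 0, 0, 5]) + struct.pack("!H", 443)
assert parse_socks5_connect(req) == ("10.0.0.5", 443)
```

The agent then replies with the fixed SOCKS reply message (success or failure) before any proxied bytes flow.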
For existing connections, the agent looks at if exit
is True or not. If exit
is True, then the agent should close its corresponding TCP connection and clean up those resources. If it's not exit, then the agent should base64 decode the data
field and forward those bytes through the existing TCP connection.
The agent should also be streaming data back from its open TCP connections to Mythic in its get_tasking
and post_response
messages.
That's it really. The hard part is making sure that you don't exhaust all of the system resources by creating too many threads, running into deadlocks, or any number of other potential issues.
While not perfect, the poseidon agent has a generally working implementation for Mythic: https://github.com/MythicAgents/poseidon/blob/master/Payload_Type/poseidon/poseidon/agent_code/socks/socks.go
This section talks about the different components for creating the server side docker container for a new C2 profile. This piece accepts messages from the agent, decodes them, forwards them off to the Mythic server, and replies back with the response. Specifically, this goes into the following components:
Docker containers
Configuration Files
Server
All C2 profiles are backed by a Docker container or intermediary layer of some sort.
What do the C2 docker containers do? Why are things broken out this way? In order to make things more modular within Mythic, most services are broken out into their own containers. When it comes to C2 profiles, they simply serve as an intermediary layer that translates between your special sauce C2 mechanism and the normal RESTful interface that Mythic uses. This allows you to create any number of completely disjoint C2 types without having to modify anything in the main Mythic codebase.
C2 Profile Containers, like Payload Type and Translation Containers, all use the same mythic_base_container
Docker image. From there you can leverage the github.com/MythicMeta/MythicContainer
GoLang package or the mythic_container
PyPi package depending on if you want to write the meta information about your C2 profile in GoLang or Python. The actual code that binds to ports and accepts messages can be written in any language.
There are a few things needed to make a C2 container. For reference on general code structure options, check out Payload Type Development. The general structure is the same for Payload Types, C2 Profiles, and Translation containers.
Instead of declaring a new class of Payload Type though, you declare a new class of type C2Profile. For example, in Python you can do:
The key here for a C2 Profile though is the server_binary_path
- this indicates what actually gets executed to start listening for agent messages. This can be whatever you want, in any language you want; you just need to make sure it's executable and identified here. If it's a script that should pick up an interpreter from the environment, be sure to include a #!
line at the top. For example, the default containers leverage python3, so they have #! /usr/bin/env python3
at the top. This file is always executed via bash as a sub-process, like ./server
Your server
code *MUST* send an HTTP header of Mythic: ProfileNameHere
when connecting to Mythic. This allows Mythic to know which profile is connecting
Within the server_folder_path
should be a file called config.json
, this is what the operator is able to edit through the UI and should contain all of the configuration components. The one piece that doesn't belong in this file is any additional files the operator might need to add (like SSL certs).
This is a small class that just defines the metadata aspects of the C2 profile. A big piece here is the definition of the parameters
array. Each element here is a C2ProfileParameter
class instance with a few possible arguments:
There are a few main values you can supply for parameter_type
when defining the parameters of your c2 profile:
String
- This is simply a text box where you can input a string value
ChooseOne
- this is a dropdown choice where the operator makes a choice from a pre-defined list of options
Array
- This allows an operator to input an array of values rather than a single string value. When this is processed on the back-end, it becomes a proper array value like ["val1", "val2"]
.
Date
- This is a date in the YYYY-MM-DD format. However, when specifying a default value for this, you simply supply an offset of the number of days from the current day. For example, to have a default value for the user be one year from the current date, the default_value would be 365
. Similarly, you can supply negative values and it'll be days in the past. This manifests as a date-picker option for the user when generating a payload.
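The offset-to-date conversion described above can be sketched in a few lines; this shows how a default_value of 365 becomes a concrete YYYY-MM-DD string one year out:

```python
from datetime import date, timedelta

def date_default(offset_days: int) -> str:
    """Turn a Date parameter's default offset (in days from today,
    negative for the past) into the YYYY-MM-DD value shown to the user."""
    return (date.today() + timedelta(days=offset_days)).isoformat()
```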
Dictionary
- This one is a bit more complicated, but allows you to specify an arbitrary dictionary for the user to generate and allows you to set some restrictions on the data. Let's take a look at this one more closely:
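A hypothetical parameter definition of this kind, written as plain data rather than the exact Mythic class API (the names, defaults, and "default_show" field are illustrative, not the real http profile values):

```python
# Illustrative shape of a Dictionary parameter named "headers" with
# pre-set entries the operator can edit or extend when building a payload.
headers_parameter = {
    "name": "headers",
    "parameter_type": "Dictionary",
    "description": "HTTP headers sent with each request",
    "default_value": [
        {"name": "User-Agent", "default_value": "Mozilla/5.0", "default_show": True},
        {"name": "Host", "default_value": "", "default_show": False},
    ],
}
```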
This is saying that we have a Dictionary called "headers" with a few pre-set options.