This section describes how to create new Payload Types
You want to create a new agent that fully integrates with Mythic. Since everything in Mythic revolves around Docker containers, you will need to ultimately create one for your payload type. This can be done with docker containers on the same host as the Mythic server or with an external VM/host machine.
What does Mythic's setup look like? We'll use the diagram below - don't worry though, it looks complicated but isn't too bad really:
Mythic itself is a docker-compose
file that stands up a bunch of microservices. These services expose various pieces of functionality like a database (PostgreSQL
), the web UI (React Web UI
), internal documentation (Hugo Documentation
), and a way for the various services to communicate with each other (RabbitMQ
). Don't worry though - you don't need to worry about 99% of this.
Mythic by itself doesn't have any agents or command-and-control profiles - these all are their own Docker containers that connect up via RabbitMQ
and gRPC
. This is what you're going to create - a separate container that connects in (the far right hand side set of containers in the above diagram).
When you clone down the Mythic repo, run make
to generate the mythic-cli
binary, and then run sudo ./mythic-cli start
, you are creating a docker-compose
file automatically that references a bunch of Dockerfiles
in the various folders within the Mythic
folder. These folders within Mythic
are generally self-explanatory for their purpose, such as postgres-docker
, rabbitmq-docker
, mythic-docker
, and MythicReactUI
.
When you use the mythic-cli
to install an agent or c2 profile, these all go in the Mythic/InstalledServices
folder. This makes it super easy to see what you have installed.
Throughout development you'll have a choice - do development remotely from the Mythic server and hook in manually, or do development locally on the Mythic server. After all, everything boils down to code that connects to RabbitMQ
and gRPC
- Mythic doesn't really know if the connection is locally from Docker or remotely from somewhere else.
The first step is to clone down the example repository https://github.com/MythicMeta/ExampleContainers. The format of the repository is that of the External Agent template. This is the format you'll see for all of the agents and c2 profiles on the overview page.
Inside of the Payload_Type
folder, there are two folders - one for GoLang and one for Python depending on which language you prefer to code your agent definitions in (this has nothing to do with the language of your agent itself, it's simply the language to define commands and parameters). We're going to go step-by-step and see what happens when you install something via mythic-cli
, but we'll do each step manually.
Pick whichever service you're interested in and copy that folder into your Mythic/InstalledServices
folder. When you normally install via mythic-cli
, it clones down your repository and does the same thing - it copies what's in that repository's Payload_Type
and C2_Profiles
folders into the Mythic/InstalledServices
folder.
Now that a folder is in the Mythic/InstalledServices
folder, we need to let the docker-compose
file know that it's there. Assuming you copied over python_services
, you then need to run sudo ./mythic-cli add python_services
. This adds that python_services
folder to the docker-compose
. This is automatically done normally as part of the install process.
As part of updating docker-compose
, this process adds a bunch of environment variables to what will be the new container.
Now that docker-compose
knows about the new service, we need to build the image that will be used to make the agent's container. We can use sudo ./mythic-cli build python_services
. This tells docker
to look in the Mythic/InstalledServices/python_services
folder for a Dockerfile
and use it to build a new image called python_services
. As part of this, Mythic will automatically then use that new image to create a container and run it. If it doesn't, then you can create and start the container with sudo ./mythic-cli start python_services
.
Again, all of this happens automatically as part of the normal installation process when you use sudo ./mythic-cli install
. We're doing this step-by-step though so you can see what happens.
At this point, your new example agent should be visible within Mythic. If it's not, we can check logs to see what the issue might be with sudo ./mythic-cli logs python_services
(this is a wrapper around sudo docker logs python_services
and truncates to the latest 500 lines).
Steps 2.1-2.4 all happen automatically when you install a service via mythic-cli
. If you don't want to install via mythic-cli
then you can do these steps manually like we did here.
Now that you've seen the pieces and steps for installing an existing agent, it's time to start diving into what's going on within that python_services
folder.
The only thing that absolutely MUST exist within this folder is a Dockerfile
so that docker-compose
can build your image and start the container. You can use anything as your base image, but Mythic provides a few to help you get started with various environments:
itsafeaturemythic/mythic_go_base
has GoLang 1.21 installed
itsafeaturemythic/mythic_go_dotnet
has GoLang 1.21 and .NET
itsafeaturemythic/mythic_go_macos
has GoLang 1.21 and the macOS SDK
itsafeaturemythic/mythic_python_base
has Python 3.11 and the mythic_container
pypi package
itsafeaturemythic/mythic_python_go
has Python 3.11, the mythic_container
pypi package, and GoLang v1.21
itsafeaturemythic/mythic_python_macos
has Python 3.11, the mythic_container
pypi package, and the macOS SDK
This allows two payload types that might share the same language to still have different environment variables, build paths, tools installed, etc. Docker containers come into play for a few things:
Sync metadata about the payload type (this is in the form of python classes or GoLang structs)
Contains the payload type code base (whatever language your agent is in)
The code to create the payload based on all of the user supplied input (builder function)
Sync metadata about all of the commands associated with that payload type
The code for all of those commands (whatever language your agent is in)
Browser scripts for commands (JavaScript)
The code to take user supplied tasking and turn it into tasking for your agent
Start your Dockerfile
off with one of the above images:
FROM itsafeaturemythic/mythic_python_base:latest
On the next lines, just add in any extra things you need for your agent to properly build, such as:
RUN pip install python_module_name
RUN shell_command
RUN apt-get install -y tool_name
The latest container versions and their associated mythic_container
PyPi versions can be found here: Container Syncing. The mythic_python_*
containers will always have the latest PyPi
version installed if you're using the :latest
version.
If you're curious what else goes into these containers, look in the docker-templates
folder within the Mythic repository.
Within the Dockerfile
you will then need to do whatever is needed to kick off your main program that imports either the mythic_container PyPi package or the MythicContainer GoLang package. As some examples, here's what you can do for Python and GoLang:
Mythic/InstalledServices/[agent name]/main.py
<-- if you plan on using Python as your definition language, this main.py
file is what will get executed by Python 3.11 assuming you use the Dockerfile shown below. If you want a different structure, just change the CMD
line to execute whatever it is you want.
FROM itsafeaturemythic/mythic_python_base:latest
RUN python3 -m pip install donut-shellcode
WORKDIR /Mythic/
CMD ["python3", "main.py"]
At that point, your main.py
file should import any other folders/files needed to define your agent/commands and import the mythic_container
PyPi package.
Any changes you make to your Python code are automatically reflected within the container. Simply run sudo ./mythic-cli start [agent name]
to restart the container and have python reprocess your files.
If you want to do local testing without docker
, then you can add a rabbitmq_config.json
in the root of your directory (i.e. [agent name]/rabbitmq_config.json
) that defines the environment parameters that help the container connect to Mythic:
{
"rabbitmq_host": "127.0.0.1",
"rabbitmq_password": "PqR9XJ957sfHqcxj6FsBMj4p",
"mythic_server_host": "127.0.0.1",
"webhook_default_channel": "#mythic-notifications",
"debug_level": "debug",
"rabbitmq_port": 5432,
"mythic_server_grpc_port": 17444,
"webhook_default_url": "",
"webhook_default_callback_channel": "",
"webhook_default_feedback_channel": "",
"webhook_default_startup_channel": "",
"webhook_default_alert_channel": "",
"webhook_default_custom_channel": "",
}
Things are a little different here as we're compiling binaries. To keep things in a simplified area for building, running, and testing, a common file like a Makefile
is useful. This Makefile
would be placed at Mythic/InstalledServices/[agent name]/Makefile
.
From here, that Makefile can have different targets for what you need to do. Here's an example of a Makefile
that allows you to specify custom environment variables when debugging locally, but also support Docker building:
BINARY_NAME?=main
DEBUG_LEVEL?="warning"
RABBITMQ_HOST?="127.0.0.1"
RABBITMQ_PASSWORD?="password here"
MYTHIC_SERVER_HOST?="127.0.0.1"
MYTHIC_SERVER_GRPC_PORT?="17444"
WEBHOOK_DEFAULT_URL?=
WEBHOOK_DEFAULT_CHANNEL?=
WEBHOOK_DEFAULT_FEEDBACK_CHANNEL?=
WEBHOOK_DEFAULT_CALLBACK_CHANNEL?=
WEBHOOK_DEFAULT_STARTUP_CHANNEL?=
build:
go mod tidy
go build -o ${BINARY_NAME} .
cp ${BINARY_NAME} /
run:
cp /${BINARY_NAME} .
./${BINARY_NAME}
run_custom:
DEBUG_LEVEL=${DEBUG_LEVEL} \
RABBITMQ_HOST=${RABBITMQ_HOST} \
RABBITMQ_PASSWORD=${RABBITMQ_PASSWORD} \
MYTHIC_SERVER_HOST=${MYTHIC_SERVER_HOST} \
MYTHIC_SERVER_GRPC_PORT=${MYTHIC_SERVER_GRPC_PORT} \
WEBHOOK_DEFAULT_URL=${WEBHOOK_DEFAULT_URL} \
WEBHOOK_DEFAULT_CHANNEL=${WEBHOOK_DEFAULT_CHANNEL} \
WEBHOOK_DEFAULT_FEEDBACK_CHANNEL=${WEBHOOK_DEFAULT_FEEDBACK_CHANNEL} \
WEBHOOK_DEFAULT_CALLBACK_CHANNEL=${WEBHOOK_DEFAULT_CALLBACK_CHANNEL} \
WEBHOOK_DEFAULT_STARTUP_CHANNEL=${WEBHOOK_DEFAULT_STARTUP_CHANNEL} \
./${BINARY_NAME}
To go along with that, a sample Docker file for Golang is as follows:
FROM itsafeaturemythic/mythic_go_base:latest
WORKDIR /Mythic/
COPY [".", "."]
RUN make build
CMD make run
It's very similar to the Python version, except it runs make build
when building and make run
when running the code. The Python version doesn't need a Makefile
or multiple commands because it's an interpreted language.
If your container/service is running on a different host than the main Mythic instance, then you need to make sure the rabbitmq_password
is shared over to your agent as well. By default, this is a randomized value stored in the Mythic/.env
file and shared across containers, but you will need to manually share this over with your agent either via an environment variable (MYTHIC_RABBITMQ_PASSWORD
) or by editing the rabbitmq_password
field in your rabbitmq_config.json file. You also need to make sure that RABBITMQ_BIND_LOCALHOST_ONLY
is set to false
and restart Mythic to make sure the RabbitMQ
port isn't bound exclusively to 127.0.0.1.
The folder that gets copied into Mythic/InstalledServices
is what's used to create the docker
image and container names. It doesn't necessarily have to be the same as the name of your agent / c2 profile (although that helps).
Docker does not allow capital letters in container names. So, if you plan on using Mythic's mythic-cli
to control and install your agent, then your agent's name can't have any capital letters in it. Only lowercase, numbers, and _. It's a silly limitation by Docker, but it's what we're working with.
The example services repo uses a single container that offers multiple services (Payload Type, C2 Profile, Translation Container, Webhook, and Logging). While a single container can provide all of that, for now we're going to focus on just the payload type piece, so delete the rest of it.
For the python_services
folder this would mean deleting the mywebhook
, translator
, and websocket
folders. For the go_services
folder, this would mean deleting the http
, my_logger
, my_webhooks
, no_actual_translation
folders. For both cases, this will result in removing some imports at the top of the remaining main.py
and main.go
files.
For the python_services
folder, we'll update the basic_python_agent/agent_functions/builder.py
file. This file can technically be anywhere that main.py
can reach and import, but for convenience it's in a folder, agent_functions
along with all of the command definitions for the agent. After trimming out the other services, the remaining main.py and main.go files look like the following:
#from mywebhook.webhook import *
import mythic_container
import asyncio
import basic_python_agent
#import websocket.mythic.c2_functions.websocket
#from translator.translator import *
#from my_logger import logger
mythic_container.mythic_service.start_and_run_forever()
package main
import (
basicAgent "GoServices/basic_agent/agentfunctions"
//httpfunctions "GoServices/http/c2functions"
//"GoServices/my_logger"
//"GoServices/my_webhooks"
//mytranslatorfunctions "GoServices/no_actual_translation/translationfunctions"
"github.com/MythicMeta/MythicContainer"
)
func main() {
// load up the agent functions directory so all the init() functions execute
//httpfunctions.Initialize()
basicAgent.Initialize()
//mytranslatorfunctions.Initialize()
//my_webhooks.Initialize()
//my_logger.Initialize()
// sync over definitions and listen
MythicContainer.StartAndRunForever([]MythicContainer.MythicServices{
//MythicContainer.MythicServiceC2,
//MythicContainer.MythicServiceTranslationContainer,
//MythicContainer.MythicServiceWebhook,
//MythicContainer.MythicServiceLogger,
MythicContainer.MythicServicePayload,
})
}
Check out the Payload Type page for information on what the various components in the agent definition mean and how to start customizing how your agent looks within Mythic.
To make your agent installable via mythic-cli
, the repo/folder needs to be in a common format. This format just makes it easier for mythic-cli
to add things to the right places. This is based on the External Agent format here (https://github.com/MythicMeta/Mythic_External_Agent). If you're creating a new payload type, then add your entire folder into the Payload_Type
folder. Similarly, when you get around to making documentation for your agent, you can add it to the documentation folder. If there are things you don't want to include, then in the config.json
file you can mark specific sections to exclude.
If you want your new C2 profile or Agent to show up on the overview page (https://mythicmeta.github.io/overview/) then you need to reach out to @its_a_feature_
on twitter or @its_a_feature_
in the Bloodhound slack to get your agent added to the agents list here (https://github.com/MythicMeta/overview/blob/main/agent_repos.txt). You could also make a PR to that file if you wanted to.
Having your agent hosted on the https://github.com/MythicAgents
organization means that it's easier for people to find your agent and we can collect stats on its popularity. For an example of what this means, check out the overview page and see the biweekly clone stats as well as the green chart icon for a historic list of view/clones of the repo.
If you don't want to have your agent hosted on the MythicAgents organization, but still want to make it available on that site, that's fine too. Just let me know or update the PR for that file appropriately.
In addition to simply hosting the agent/c2 profile, there's now a sub-page that shows off all of the agent's capabilities so it's easier to compare and see which ones meet your needs. That page is here (https://mythicmeta.github.io/overview/agent_matrix.html) and is populated based on an agent_capabilities.json
file in the root of your repository. This is just a json
file that gets ingested at midnight every day and used to update that matrix. The format is as follows:
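The embedded example file didn't render here, but based on the key descriptions that follow, a rough (entirely hypothetical) agent_capabilities.json sketch looks something like:
{
    "agent_version": "1.0.0",
    "mythic_version": "3.2",
    "os": ["Windows", "Linux", "macOS"],
    "languages": ["golang"],
    "features": {
        "mythic": ["file_browser", "screenshots"],
        "custom": ["my custom feature"]
    },
    "payload_output": ["binary", "shellcode"],
    "architectures": ["x64", "arm64"],
    "c2": ["http", "websocket"],
    "supported_wrappers": ["service_wrapper"]
}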
The os
key provides all the operating systems your agent supports. These are the things that would be available after installing your agent for the user to select when building a payload. The languages
key identifies what languages your agent supports (typically only one, but could be multiple). The features
section identifies which features your agent supports. For the mythic
sub-key, the options are listed at the bottom of the matrix page, along with their descriptions and links to documentation in case you want to implement that feature in your agent. The custom
sub-key is just additional features that your agent supports that you want to call out. The payload_output
key identifies which output formats your agent supports as well as the architectures
key identifying which architectures your agent can be built for. The c2
key identifies which C2 Profiles your agent supports and the supported_wrappers
key identifies which wrapper
payloads your agent supports. As you might expect, the mythic_version
is which Mythic version your agent supports and the agent_version
is the current agent version in use.
Payload Type information is set in and pulled from a definition written either in Python or GoLang. Below are basic examples of both:
from mythic_container.PayloadBuilder import *
from mythic_container.MythicCommandBase import *
from mythic_container.MythicRPC import *
import json
import pathlib
class Apfell(PayloadType):
name = "apfell"
file_extension = "js"
author = "@its_a_feature_"
supported_os = [SupportedOS.MacOS]
wrapper = False
wrapped_payloads = []
note = """This payload uses JavaScript for Automation (JXA) for execution on macOS boxes."""
supports_dynamic_loading = True
c2_profiles = ["http", "dynamichttp"]
mythic_encrypts = True
translation_container = None # "myPythonTranslation"
build_parameters = []
agent_path = pathlib.Path(".") / "apfell"
agent_icon_path = agent_path / "agent_functions" / "apfell.svg"
agent_code_path = agent_path / "agent_code"
build_steps = [
BuildStep(step_name="Gathering Files", step_description="Making sure all commands have backing files on disk"),
BuildStep(step_name="Configuring", step_description="Stamping in configuration values")
]
async def build(self) -> BuildResponse:
# this function gets called to create an instance of your payload
resp = BuildResponse(status=BuildStatus.Success)
return resp
There are a couple key pieces of information here:
Lines 1-2 import all of the basic classes needed for creating an agent
Line 7 defines the new class (our agent). This can be called whatever you want, but the important piece is that it extends the PayloadType
class as shown with the ()
.
Lines 8-27 define the parameters for the payload type that you'd see throughout the UI.
the name is the name of the payload type
supported_os is an array of supported OS versions
supports_dynamic_loading indicates if the agent allows you to select only a subset of commands when creating an agent or not
build_parameters is an array describing all of the build parameters when creating your agent
c2_profiles is an array of c2 profile names that the agent supports
Line 18 defines the name of a "translation container" which we will talk about in another section, but this allows you to support your own, non-mythic message format, custom crypto, etc.
The last piece is the function that's called to build the agent based on all of the information the user provides from the web UI.
The PayloadType
base class is in the PayloadBuilder.py
file. This is an abstract class, so your instance needs to provide values for all these fields.
package agentfunctions
import (
"bytes"
"encoding/json"
"fmt"
agentstructs "github.com/MythicMeta/MythicContainer/agent_structs"
"github.com/MythicMeta/MythicContainer/mythicrpc"
"os"
"os/exec"
"path/filepath"
"strings"
)
var payloadDefinition = agentstructs.PayloadType{
Name: "basicAgent",
FileExtension: "bin",
Author: "@xorrior, @djhohnstein, @Ne0nd0g, @its_a_feature_",
SupportedOS: []string{agentstructs.SUPPORTED_OS_LINUX, agentstructs.SUPPORTED_OS_MACOS},
Wrapper: false,
CanBeWrappedByTheFollowingPayloadTypes: []string{},
SupportsDynamicLoading: false,
Description: "A fully featured macOS and Linux Golang agent",
SupportedC2Profiles: []string{"http", "websocket", "poseidon_tcp"},
MythicEncryptsData: true,
BuildParameters: []agentstructs.BuildParameter{
{
Name: "mode",
Description: "Choose the build mode option. Select default for executables, c-shared for a .dylib or .so file, or c-archive for a .Zip containing C source code with an archive and header file",
Required: false,
DefaultValue: "default",
Choices: []string{"default", "c-archive", "c-shared"},
ParameterType: agentstructs.BUILD_PARAMETER_TYPE_CHOOSE_ONE,
},
{
Name: "architecture",
Description: "Choose the agent's architecture",
Required: false,
DefaultValue: "AMD_x64",
Choices: []string{"AMD_x64", "ARM_x64"},
ParameterType: agentstructs.BUILD_PARAMETER_TYPE_CHOOSE_ONE,
},
{
Name: "proxy_bypass",
Description: "Ignore HTTP proxy environment settings configured on the target host?",
Required: false,
DefaultValue: false,
ParameterType: agentstructs.BUILD_PARAMETER_TYPE_BOOLEAN,
},
{
Name: "garble",
Description: "Use Garble to obfuscate the output Go executable.\nWARNING - This significantly slows the agent build time.",
Required: false,
DefaultValue: false,
ParameterType: agentstructs.BUILD_PARAMETER_TYPE_BOOLEAN,
},
},
BuildSteps: []agentstructs.BuildStep{
{
Name: "Configuring",
Description: "Cleaning up configuration values and generating the golang build command",
},
{
Name: "Compiling",
Description: "Compiling the golang agent (maybe with obfuscation via garble)",
},
},
}
func build(payloadBuildMsg agentstructs.PayloadBuildMessage) agentstructs.PayloadBuildResponse {
payloadBuildResponse := agentstructs.PayloadBuildResponse{
PayloadUUID: payloadBuildMsg.PayloadUUID,
Success: true,
UpdatedCommandList: &payloadBuildMsg.CommandList,
}
return payloadBuildResponse
}
A quick note about wrapper payload types - there's only a few differences between a wrapper payload type and a normal payload type. A configuration variable, wrapper
, determines if something is a wrapper or not. A wrapper payload type takes as input the output of a previous build (normal payload type or wrapper payload type) along with build parameters and generates a new payload. A wrapper payload type does NOT have any c2 profiles associated with it because it's simply wrapping an existing payload.
An easy example is thinking of the service_wrapper
- this wrapper payload type takes in the shellcode version of another payload and "wraps" it in the execution of a service so that it'll properly respond to the service control manager on windows. A similar example would be to take an agent and wrap it in an MSBuild format. These things don't have their own C2, but rather just package/wrap an existing agent into a new, more generic, format.
To access the payload that you're going to wrap, use the self.wrapped_payload
attribute during your build
execution. This will be the base64 encoded version of the payload you're going to wrap.
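A minimal Python sketch of this, assuming a hypothetical wrapper that just embeds those bytes directly:
import base64

async def build(self) -> BuildResponse:
    resp = BuildResponse(status=BuildStatus.Success)
    # self.wrapped_payload is the base64 encoded payload the user chose to wrap
    wrapped_bytes = base64.b64decode(self.wrapped_payload)
    # ... stamp wrapped_bytes into your wrapper's template/source here ...
    resp.payload = wrapped_bytes  # placeholder: a real wrapper would return its own generated artifact
    return resp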
To access the payload that you're going to wrap, use the payloadBuildMsg.WrappedPayload
attribute during your build
execution. This will be the raw bytes of the payload you're going to wrap.
When you're done generating the payload, you'll return your new result the exact same way as normal payloads (as part of the build process).
Build parameters define the components shown to the user when creating a payload.
The BuildParameter
class has a couple of pieces of information that you can use to customize and validate the parameters supplied to your build:
The most up-to-date code is available in the https://github.com/MythicMeta/MythicContainerPyPi
repository.
class BuildParameterType(str, Enum):
"""Types of parameters available for building payloads
Attributes:
String:
A string value
ChooseOne:
A list of choices for the user to select exactly one
ChooseMultiple:
A list of choices for the user to select 0 or more
Array:
The user can supply multiple values in an Array format
Date:
The user can select a Date in YYYY-MM-DD format
Dictionary:
The user can supply a dictionary of values
Boolean:
The user can toggle a switch for True/False
File:
The user can select a file that gets uploaded - a file UUID gets passed in during build
TypedArray:
The user can supply an array where each element also has a drop-down option of choices
"""
String = "String"
ChooseOne = "ChooseOne"
ChooseMultiple = "ChooseMultiple"
Array = "Array"
Date = "Date"
Dictionary = "Dictionary"
Boolean = "Boolean"
File = "File"
TypedArray = "TypedArray"
The most up-to-date code is available at https://github.com/MythicMeta/MythicContainer
.
type BuildParameterType = string
const (
BUILD_PARAMETER_TYPE_STRING BuildParameterType = "String"
BUILD_PARAMETER_TYPE_BOOLEAN = "Boolean"
BUILD_PARAMETER_TYPE_CHOOSE_ONE = "ChooseOne"
BUILD_PARAMETER_TYPE_CHOOSE_MULTIPLE = "ChooseMultiple"
BUILD_PARAMETER_TYPE_DATE = "Date"
BUILD_PARAMETER_TYPE_DICTIONARY = "Dictionary"
BUILD_PARAMETER_TYPE_ARRAY = "Array"
BUILD_PARAMETER_TYPE_NUMBER = "Number"
BUILD_PARAMETER_TYPE_FILE = "File"
BUILD_PARAMETER_TYPE_TYPED_ARRAY = "TypedArray"
)
// BuildParameter - A structure defining the metadata about a build parameter for the user to select when building a payload.
type BuildParameter struct {
// Name - the name of the build parameter for use during the Payload Type's build function
Name string `json:"name"`
// Description - the description of the build parameter to be presented to the user during build
Description string `json:"description"`
// Required - indicate if this requires the user to supply a value or not
Required bool `json:"required"`
// VerifierRegex - if the user is supplying text and it needs to match a specific pattern, specify a regex pattern here and the UI will indicate to the user if the value is valid or not
VerifierRegex string `json:"verifier_regex"`
// DefaultValue - A default value to show the user when building in the Mythic UI. The type here depends on the Parameter Type - ex: for a String, supply a string. For an array, provide an array
DefaultValue interface{} `json:"default_value"`
// ParameterType - The type of parameter this is so that the UI can properly render components for the user to modify
ParameterType BuildParameterType `json:"parameter_type"`
// FormatString - If Randomize is true, this regex format string is used to generate a value when presenting the option to the user
FormatString string `json:"format_string"`
// Randomize - Should this value be randomized each time it's shown to the user so that each payload has a different value
Randomize bool `json:"randomize"`
// IsCryptoType - If this is True, then the value supplied by the user is for determining the _kind_ of crypto keys to generate (if any) and the resulting stored value in the database is a dictionary composed of the user's selection and an enc_key and dec_key value
IsCryptoType bool `json:"crypto_type"`
// Choices - If the ParameterType is ChooseOne or ChooseMultiple, then the options presented to the user are here.
Choices []string `json:"choices"`
// DictionaryChoices - if the ParameterType is Dictionary, then the dictionary choices/preconfigured data is set here
DictionaryChoices []BuildParameterDictionary `json:"dictionary_choices"`
}
// BuildStep - Identification of a step in the build process that's shown to the user to eventually collect start/end time as well as stdout/stderr per step
type BuildStep struct {
Name string `json:"step_name"`
Description string `json:"step_description"`
}
name
is the name of the parameter, if you don't provide a longer description, then this is what's presented to the user when building your payload
parameter_type
describes what is presented to the user - valid types are:
BuildParameterType.String
During build, this is a string
BuildParameterType.ChooseOne
During build, this is a string
BuildParameterType.ChooseMultiple
During build, this is an array of strings
BuildParameterType.Array
During build, this is an array of strings
BuildParameterType.Date
During build, this is a string of the format YYYY-MM-DD
BuildParameterType.Dictionary
During build, this is a dictionary
BuildParameterType.Boolean
During build, this is a boolean
BuildParameterType.File
During build, this is a string UUID of the file (so that you can use a MythicRPC call to fetch the contents of the file)
BuildParameterType.TypedArray
During build, this is an array of arrays, always in the format [[type, value], [type, value], [type, value], ...]
required
indicates if there must be a value supplied. If no value is supplied by the user and no default value supplied here, then an exception is thrown before execution gets to the build
function.
verifier_regex
is a regex the web UI can use to provide some information to the user about if they're providing a valid value or not
default_value
is the default value used for building if the user doesn't supply anything
choices
is where you can supply an array of options for the user to pick from if the parameter_type is ChooseOne
dictionary_choices
are the choices and metadata about what to display to the user for key-value pairs that the user might need to supply
value
is the component you access when building your payload - this is the final value (either the default value or the value the user supplied)
verifier_func
is a function you can provide for additional checks on the value the user supplies to make sure it's what you want. This function should either return nothing or raise an exception if something isn't right
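For example, here's a minimal sketch of a verifier (assuming verifier_func is passed as a keyword argument like the other fields, and using a made-up "version" parameter):
def version_verifier(value):
    # raise an exception if the supplied value isn't acceptable; returning nothing means it's fine
    if not str(value).isdigit():
        raise ValueError("version must be a positive integer")

BuildParameter(name="version",
               parameter_type=BuildParameterType.String,
               description="agent version number",
               default_value="1",
               verifier_func=version_verifier)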
As a recap, where does this come into play? In the first section, we showed a build_parameters definition like:
build_parameters = [
BuildParameter(name="string", parameter_type=BuildParameterType.String,
description="test string", default_value="test"),
BuildParameter(name="choose one", parameter_type=BuildParameterType.ChooseOne,
description="test choose one",
choices=["a", "b"], default_value="a"),
BuildParameter(name="choose one crypto",
parameter_type=BuildParameterType.ChooseOne,
description="choose one crypto",
crypto_type=True, choices=["aes256_hmac", "none"]),
BuildParameter(name="date", parameter_type=BuildParameterType.Date,
default_value=30, description="test date offset from today"),
BuildParameter(name="array", parameter_type=BuildParameterType.Array,
default_value=["a", "b"],
description="test array"),
BuildParameter(name="dict", parameter_type=BuildParameterType.Dictionary,
dictionary_choices=[
DictionaryChoice(name="host", default_value="abc.com", default_show=False),
DictionaryChoice(name="user-agent", default_show=True, default_value="mozilla")
],
description="test dictionary"),
BuildParameter(name="random", parameter_type=BuildParameterType.String,
randomize=True, format_string="[a,b,c]{3}",
description="test randomized string"),
BuildParameter(name="bool", parameter_type=BuildParameterType.Boolean,
default_value=True, description="test boolean")
]
BuildParameters: []agentstructs.BuildParameter{
{
Name: "mode",
Description: "Choose the build mode option. Select default for executables, c-shared for a .dylib or .so file, or c-archive for a .Zip containing C source code with an archive and header file",
Required: false,
DefaultValue: "default",
Choices: []string{"default", "c-archive", "c-shared"},
ParameterType: agentstructs.BUILD_PARAMETER_TYPE_CHOOSE_ONE,
},
{
Name: "architecture",
Description: "Choose the agent's architecture",
Required: false,
DefaultValue: "AMD_x64",
Choices: []string{"AMD_x64", "ARM_x64"},
ParameterType: agentstructs.BUILD_PARAMETER_TYPE_CHOOSE_ONE,
},
{
Name: "proxy_bypass",
Description: "Ignore HTTP proxy environment settings configured on the target host?",
Required: false,
DefaultValue: false,
ParameterType: agentstructs.BUILD_PARAMETER_TYPE_BOOLEAN,
},
{
Name: "garble",
Description: "Use Garble to obfuscate the output Go executable.\nWARNING - This significantly slows the agent build time.",
Required: false,
DefaultValue: false,
ParameterType: agentstructs.BUILD_PARAMETER_TYPE_BOOLEAN,
},
},
You have to implement the build
function and return an instance of the BuildResponse
class. This response has these fields:
status
- an instance of BuildStatus (Success or Error)
Specifically, BuildStatus.Success
or BuildStatus.Error
payload
- the raw bytes of the finished payload (if you failed to build, set this to None
or empty bytes like b''
in Python).
build_message
- any general build message you want the user to see
build_stderr
- any stderr data you want the user to see
build_stdout
- any stdout data you want the user to see
updated_filename
- if you want to update the filename to something more appropriate, set it here.
For example: the user supplied a filename of apollo.exe
but based on the build parameters, you're actually generating a dll, so you can update the filename to be apollo.dll
. This is particularly useful if you're optionally returning a zip of information so that the user doesn't have to change the filename before downloading. If you plan on doing this to update the filename for a wide variety of options, then it might be best to leave the file extension field in your payload type definition blank ""
so that you can more easily adjust the extension.
The most basic version of the build function would be:
async def build(self) -> BuildResponse:
# this function gets called to create an instance of your payload
return BuildResponse(status=BuildStatus.Success)
func build(payloadBuildMsg agentstructs.PayloadBuildMessage) agentstructs.PayloadBuildResponse {
payloadBuildResponse := agentstructs.PayloadBuildResponse{
PayloadUUID: payloadBuildMsg.PayloadUUID,
Success: true,
}
return payloadBuildResponse
}
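And a hedged Python sketch of using updated_filename from the field list above - the "mode" build parameter here is hypothetical, but the apollo.exe to apollo.dll rename mirrors the example described earlier:
async def build(self) -> BuildResponse:
    resp = BuildResponse(status=BuildStatus.Success)
    # hypothetical "mode" parameter: a c-shared build produces a DLL instead of an EXE
    if self.get_parameter("mode") == "c-shared":
        resp.updated_filename = "apollo.dll"
    resp.build_message = "Successfully built!"
    return resp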
Once the build
function is called, all of your BuildParameters
will already be verified (all parameters marked as required
will have a value
of some form (user supplied or default_value) and all of the verifier functions will be called if they exist). This allows you to know that by the time your build
function is called, all of your parameters are valid.
Your build function gets a few pieces of information to help you build the agent (other than the build parameters). From within your build function, you'll have access to the following:
self.uuid
- the UUID associated with your payload
This is how your payload identifies itself to Mythic before getting a new Staging and final Callback UUID
self.commands
- a wrapper class around the names of all the commands the user selected.
Access this list via self.commands.get_commands()
for cmd in self.commands.get_commands():
command_code += open(self.agent_code_path / "{}.js".format(cmd), 'r').read() + "\n"
self.agent_code_path
- a pathlib.Path
object pointing to the path of the agent_code
directory that holds all the code for your payload. This is something you pre-define as part of your agent definition.
To access "test.js" in that "agent_code" folder, simply do:
f = open(self.agent_code_path / "test.js", 'r')
.
With pathlib.Path
objects, the /
operator allows you to concatenate paths in an OS agnostic manner. This is the recommended way to access files so that your code can work anywhere.
self.get_parameter("parameter name here")
The build parameters that are validated from the user. If you have a build_parameter with a name of "version", you can access the user supplied or default value with self.get_parameter("version")
self.selected_os
- This is the OS that was selected on the first step of creating a payload
self.c2info
- this holds a list of dictionaries of the c2 parameters and c2 class information supplied by the user. This is a list because the user can select multiple c2 profiles (maybe they want HTTP and SMB in the payload for example). For each element in self.c2info, you can access the information about the c2 profile with get_c2profile()
and access to the parameters via get_parameters_dict()
. Both of these return a dictionary of key-value pairs.
the dictionary returned by self.c2info[0].get_c2profile()
contains the following:
name
- name of the c2 profile
description
- description of the profile
is_p2p
- boolean of if the profile is marked as a p2p profile or not
the dictionary returned by self.c2info[0].get_parameters_dict()
contains the following:
key
- value
where each key
is the key
value defined for the c2 profile's parameters and value
is what the user supplied. You might be wondering where to get these keys - you can view them right in the UI (see Name Fields).
If the C2 parameter has a value of crypto_type=True
, then the "value" here will be a bit more than just a string that the user supplied. Instead, it'll be a dictionary with three pieces of information: value
- the value that the user supplied, enc_key
- a base64 string (or None) of the encryption key to be used, dec_key
- a base64 string (or None) of the decryption key to be used. This gives you more flexibility in automatically generating encryption/decryption keys and supporting crypto types/schemas that Mythic isn't aware of. In the HTTP profile, the key AESPSK
has this type set to True, so you'd expect that dictionary.
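As a rough Python sketch (using the http profile's AESPSK parameter), pulling those pieces out might look like:
for c2 in self.c2info:
    params = c2.get_parameters_dict()
    aes = params.get("AESPSK", {})    # crypto_type parameters arrive as a dictionary, not a plain string
    selected_type = aes.get("value")  # what the user selected, e.g. "aes256_hmac" or "none"
    enc_key = aes.get("enc_key")      # base64 string or None
    dec_key = aes.get("dec_key")      # base64 string or None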
If the C2 parameter has a type of "Dictionary", then things are a little different.
Let's take the "headers" parameter in the http
profile for example. This allows you to set header values for your http
traffic such as User-Agent, Host, and more. When you get this value on the agent side, you get a dictionary of values that looks like the following:
{"User-Agent": "the user agent the user supplied", "MyCustomHeader": "my custom value"}
. You get the final "dictionary" that's created from the user supplied fields.
One way to leverage this could be:
for c2 in self.c2info:
c2_code = ""
try:
profile = c2.get_c2profile()
c2_code = open(
self.agent_code_path
/ "c2_profiles"
/ "{}.js".format(profile["name"]),
"r",
).read()
for key, val in c2.get_parameters_dict().items():
if key == "AESPSK":
c2_code = c2_code.replace(key, val["enc_key"] if val["enc_key"] is not None else "")
elif not isinstance(val, str):
c2_code = c2_code.replace(key, json.dumps(val))
else:
c2_code = c2_code.replace(key, val)
except Exception as p:
build_msg += str(p)
pass
// PayloadBuildMessage - A structure of the build information the user provided to generate an instance of the payload type.
// This information gets passed to your payload type's build function.
type PayloadBuildMessage struct {
// PayloadType - the name of the payload type for the build
PayloadType string `json:"payload_type" mapstructure:"payload_type"`
// Filename - the name of the file the user originally supplied for this build
Filename string `json:"filename" mapstructure:"filename"`
// CommandList - the list of commands the user selected to include in the build
CommandList []string `json:"commands" mapstructure:"commands"`
// build param name : build value
// BuildParameters - map of param name -> build value from the user for the build parameters defined
// File type build parameters are supplied as a string UUID to use with MythicRPC for fetching file contents
// Array type build parameters are supplied as []string{}
BuildParameters PayloadBuildArguments `json:"build_parameters" mapstructure:"build_parameters"`
// C2Profiles - list of C2 profiles selected to include in the payload and their associated parameters
C2Profiles []PayloadBuildC2Profile `json:"c2profiles" mapstructure:"c2profiles"`
// WrappedPayload - bytes of the wrapped payload if one exists
WrappedPayload *[]byte `json:"wrapped_payload,omitempty" mapstructure:"wrapped_payload"`
// WrappedPayloadUUID - the UUID of the wrapped payload if one exists
WrappedPayloadUUID *string `json:"wrapped_payload_uuid,omitempty" mapstructure:"wrapped_payload_uuid"`
// SelectedOS - the operating system the user selected when building the agent
SelectedOS string `json:"selected_os" mapstructure:"selected_os"`
// PayloadUUID - the Mythic generated UUID for this payload instance
PayloadUUID string `json:"uuid" mapstructure:"uuid"`
// PayloadFileUUID - The Mythic generated File UUID associated with this payload
PayloadFileUUID string `json:"payload_file_uuid" mapstructure:"payload_file_uuid"`
}
// PayloadBuildC2Profile - A structure of the selected C2 Profile information the user selected to build into a payload.
type PayloadBuildC2Profile struct {
Name string `json:"name" mapstructure:"name"`
IsP2P bool `json:"is_p2p" mapstructure:"is_p2p"`
// parameter name: parameter value
// Parameters - this is an interface of parameter name -> parameter value from the associated C2 profile.
// The types for the various parameter names can be found by looking at the build parameters in the Mythic UI.
Parameters map[string]interface{} `json:"parameters" mapstructure:"parameters"`
}
type CryptoArg struct {
Value string `json:"value" mapstructure:"value"`
EncKey string `json:"enc_key" mapstructure:"enc_key"`
DecKey string `json:"dec_key" mapstructure:"dec_key"`
}
Finally, when building a payload, it can often be helpful to have both stdout and stderr information captured, especially if you're compiling code. Because of this, you can set the build_message
, build_stderr
, and build_stdout
fields of the BuildResponse
to have this data. For example:
async def build(self) -> BuildResponse:
# this function gets called to create an instance of your payload
resp = BuildResponse(status=BuildStatus.Success)
# create the payload
build_msg = ""
#create_payload = await MythicRPC().execute("create_callback", payload_uuid=self.uuid, c2_profile="http")
try:
command_code = ""
for cmd in self.commands.get_commands():
try:
command_code += (
open(self.agent_code_path / "{}.js".format(cmd), "r").read() + "\n"
)
except Exception as p:
pass
base_code = open(
self.agent_code_path / "base" / "apfell-jxa.js", "r"
).read()
await SendMythicRPCPayloadUpdatebuildStep(MythicRPCPayloadUpdateBuildStepMessage(
PayloadUUID=self.uuid,
StepName="Gathering Files",
StepStdout="Found all files for payload",
StepSuccess=True
))
base_code = base_code.replace("UUID_HERE", self.uuid)
base_code = base_code.replace("COMMANDS_HERE", command_code)
all_c2_code = ""
if len(self.c2info) != 1:
resp.build_stderr = "Apfell only supports one C2 Profile at a time"
resp.set_status(BuildStatus.Error)
return resp
for c2 in self.c2info:
c2_code = ""
try:
profile = c2.get_c2profile()
c2_code = open(
self.agent_code_path
/ "c2_profiles"
/ "{}.js".format(profile["name"]),
"r",
).read()
for key, val in c2.get_parameters_dict().items():
if key == "AESPSK":
c2_code = c2_code.replace(key, val["enc_key"] if val["enc_key"] is not None else "")
elif not isinstance(val, str):
c2_code = c2_code.replace(key, json.dumps(val))
else:
c2_code = c2_code.replace(key, val)
except Exception as p:
build_msg += str(p)
pass
all_c2_code += c2_code
base_code = base_code.replace("C2PROFILE_HERE", all_c2_code)
await SendMythicRPCPayloadUpdatebuildStep(MythicRPCPayloadUpdateBuildStepMessage(
PayloadUUID=self.uuid,
StepName="Configuring",
StepStdout="Stamped in all of the fields",
StepSuccess=True
))
resp.payload = base_code.encode()
if build_msg != "":
resp.build_stderr = build_msg
resp.set_status(BuildStatus.Error)
else:
resp.build_message = "Successfully built!\n"
except Exception as e:
resp.set_status(BuildStatus.Error)
resp.build_stderr = "Error building payload: " + str(e)
return resp
Depending on the status of your build (success or error), either the message or build_stderr values will be presented to the user via the UI notifications. However, at any time you can go back to the Created Payloads page and view the build message, build errors, and build stdout for any payload.
The last thing to mention are build steps. These are defined as part of the agent and are simply descriptions of what is happening during your build process. The above example makes some RPC calls for SendMythicRPCPayloadUpdatebuildStep
to update the build steps back to Mythic while the build process is happening. For something as fast as the apfell
agent, it'll appear as though all of these happen at the same time. For something that's more computationally intensive though, it's helpful to provide information back to the user about what's going on - stamping in values? obfuscating? compiling? more obfuscation? opsec checks? etc. Whatever it is that's going on, you can provide this data back to the operator complete with stdout and stderr.
So, what's the actual, end-to-end execution flow that goes on? A diagram can be found here: Message Flow.
PayloadType container is started, it connects to Mythic and sends over its data (by parsing all these python files or GoLang structs)
An operator wants to create a payload from it, so they click the hazard icon at the top of Mythic, click the "Actions" dropdown and select "Generate New Payload".
The operator selects an OS type that the agent supports (ex. Linux, macOS, Windows)
The operator selects the payload type they want to build (this one)
edits all build parameters as needed
The operator selects all commands they want included in the payload
The operator selects all c2 profiles they want included
and for each c2 selected, provides any c2 required parameters
Mythic takes all of this information and sends it to the payload type container
The container sends the BuildResponse
message back to the Mythic server.
Starting with Mythic v3.2.12, PyPi version 0.4.1, and MythicContainer version 1.3.1, there's a new function you can define as part of your Payload Type definition. In addition to defining a build
process, you can also define a on_new_callback
(or onNewCallbackFunction
) function that will get executed whenever there's a new callback based on this payload type.
Below are examples in Python and in Golang for how to define and leverage this new functionality. One of the great things about this is that you can use this to automatically issue tasking for new callbacks. The below examples will automatically issue a shell
command with parameters of whoami
.
These function calls get almost all the same data that you'll see in your Create Tasking calls, except they're missing information about a Task
. That's simply because there's no task yet, this is the moment that a new callback is created.
Mythic tracks an operator for all issued tasking. Since there's no operator directly typing out and issuing these tasks, Mythic associates the operator that built the payload with any tasks automatically created in this function.
class Apfell(PayloadType):
name = "apfell"
...
async def build ...
async def on_new_callback(self, newCallback: PTOnNewCallbackAllData) -> PTOnNewCallbackResponse:
new_task_resp = await SendMythicRPCTaskCreate(MythicRPCTaskCreateMessage(
AgentCallbackUUID=newCallback.Callback.AgentCallbackID,
CommandName="shell",
Params="whoami",
))
if new_task_resp.Success:
return PTOnNewCallbackResponse(AgentCallbackUUID=newCallback.Callback.AgentCallbackID, Success=True)
return PTOnNewCallbackResponse(AgentCallbackUUID=newCallback.Callback.AgentCallbackID, Success=False,
Error=new_task_resp.Error)
func onNewBuild(data agentstructs.PTOnNewCallbackAllData) agentstructs.PTOnNewCallbackResponse {
newTasking, err := mythicrpc.SendMythicRPCTaskCreate(mythicrpc.MythicRPCTaskCreateMessage{
AgentCallbackID: data.Callback.AgentCallbackID,
CommandName: "shell",
Params: "whoami",
})
if err != nil {
logging.LogError(err, "failed to create new task")
}
if newTasking.Success {
logging.LogInfo("created new task")
} else {
logging.LogError(err, "failed to create new tasking")
}
return agentstructs.PTOnNewCallbackResponse{
AgentCallbackID: data.Callback.AgentCallbackID,
Success: true,
Error: "",
}
}
func Initialize() {
agentstructs.AllPayloadData.Get("poseidon").AddPayloadDefinition(payloadDefinition)
agentstructs.AllPayloadData.Get("poseidon").AddBuildFunction(build)
agentstructs.AllPayloadData.Get("poseidon").AddOnNewCallbackFunction(onNewBuild)
agentstructs.AllPayloadData.Get("poseidon").AddIcon(filepath.Join(".", "poseidon", "agentfunctions", "poseidon.svg"))
}
When your container starts up, it connects to the RabbitMQ broker system. Mythic then tries to look up the associated payload type and, if it can find it, will update the running status. However, if Mythic cannot find the payload type, then it'll issue a "sync" message to the container. Similarly, when a container starts up, the first thing it does upon successfully connecting to the RabbitMQ broker system is to send its own synced data.
This data is simply a JSON representation of everything about your payload - information about the payload type, all the commands, build parameters, command parameters, browser scripts, etc.
Syncing happens at a few different times and there are some situations that can cause cascading syncing messages.
When a payload container starts, it sends all of its synced data down to Mythic
If a C2 profile syncs, it'll trigger a re-sync of all Payload Type containers. This is because a payload type container might say it supports a specific C2, but that c2 might not be configured to run or might not have checked in yet. So, when it does, this re-sync of all the payload type containers helps make sure that every agent that supports the C2 profile is properly registered.
When a Wrapper Payload Type container syncs, it triggers a re-sync of all non-wrapper payload types. This is because a payload type might support a wrapper that doesn't exist yet in Mythic (configured to not start, hasn't checked in yet, etc). So, when that type does check in, we want to make sure all of the wrapper payload types are aware and can update as necessary.
Latest versions can always be found on the Mythic README.
There are scenarios in which you need a Mythic container for an agent, but you can't (or don't want) to use the normal docker containers that Mythic uses. This could be for reasons like:
You have a custom build environment that you don't want to recreate
You have specific kernel versions or operating systems you're wanting to develop with
So, to leverage your own custom VM or physical computer into a Mythic recognized container, there are just a few steps.
External agents need to connect to mythic_rabbitmq
in order to send/receive messages. They also need to connect to the mythic_server
to transfer files and potentially use gRPC. By default, these containers are bound to localhost only. In order to have an external agent connect up, you will need to adjust this in the Mythic/.env
file to have RABBITMQ_BIND_LOCALHOST_ONLY=false
and MYTHIC_SERVER_BIND_LOCALHOST_ONLY=false
and restart Mythic (sudo ./mythic-cli restart
).
Install python 3.10+ (or Golang 1.21) in the VM or on the computer
pip3 install mythic-container
(this has all of the definitions and functions for the container to sync with Mythic and issue RPC commands). Make sure you get the right version of this PyPi package for the version of Mythic you're using (Container Syncing). Alternatively, go get -u github.com/MythicMeta/MythicContainer
for golang.
Create a folder on the computer or VM (let's call it path /pathA
). Essentially, your /pathA
path will be the new InstalledServices/[agent name]
folder. Create a sub folder for your actual agent's code to live, like /pathA/agent_code
. You can create a Visual Studio project here and simply configure it however you need.
Your command function definitions and payload definition are also helpful to have in a folder, like /pathA/agent_functions
.
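You'll also want a main.py in /pathA that imports those definitions and starts the service, mirroring the earlier python_services example. A minimal sketch (the agent_functions module name simply reflects the folder layout assumed above):
import mythic_container
import agent_functions  # assumes agent_functions/__init__.py imports your builder and command files

mythic_container.mythic_service.start_and_run_forever()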
Edit the /pathA/rabbitmq_config.json
with the parameters you need
{
"rabbitmq_host": "127.0.0.1",
"rabbitmq_password": "PqR9XJ957sfHqcxj6FsBMj4p",
"mythic_server_host": "127.0.0.1",
"webhook_default_channel": "#mythic-notifications",
"debug_level": "debug",
"rabbitmq_port": 5432,
"mythic_server_grpc_port": 17444,
"webhook_default_url": "",
"webhook_default_callback_channel": "",
"webhook_default_feedback_channel": "",
"webhook_default_startup_channel": "",
"webhook_default_alert_channel": "",
"webhook_default_custom_channel": "",
}
the mythic_server_host
value should be the IP address of the main Mythic install
the rabbitmq_host
value should be the IP address of the main Mythic install unless you're running rabbitmq on another host.
You'll need the password of rabbitmq from your Mythic instance. You can either get this from the Mythic/.env
file, by running sudo ./mythic-cli config get rabbitmq_password
, or if you run sudo ./mythic-cli config payload
you'll see it there too.
External agents need to connect to mythic_rabbitmq
in order to send/receive messages. By default, this container is bound on localhost only. In order to have an external agent connect up, you will need to adjust this in the Mythic/.env
file to have RABBITMQ_BIND_LOCALHOST_ONLY=false
and restart Mythic (sudo ./mythic-cli restart
). You'll also need to set MYTHIC_SERVER_BIND_LOCALHOST_ONLY=false
.
The file where you define your payload type is also where you define what it means to "build" your agent.
Run python3.10 main.py
and now you should see this container pop up in the UI
If you already had the corresponding payload type registered in the Mythic interface, you should now see the red text turn green.
You should see output similar to the following:
itsafeature@spooky my_container % python3 main.py
INFO 2023-04-03 21:17:10,899 initialize 29 : [*] Using debug level: debug
INFO 2023-04-03 21:17:10,899 start_services 267 : [+] Starting Services with version v1.0.0-0.0.7 and PyPi version 0.2.0-rc9
INFO 2023-04-03 21:17:10,899 start_services 270 : [*] Processing webhook service
INFO 2023-04-03 21:17:10,899 syncWebhookData 261 : Successfully started webhook service
INFO 2023-04-03 21:17:10,899 start_services 281 : [*] Processing agent: apfell
INFO 2023-04-03 21:17:10,902 syncPayloadData 104 : [*] Processing command jsimport
INFO 2023-04-03 21:17:10,902 syncPayloadData 104 : [*] Processing command chrome_tabs
DEBUG 2023-04-03 21:17:10,915 SendRPCMessage 132 : Sending RPC message to pt_sync
INFO 2023-04-03 21:17:10,915 GetConnection 84 : [*] Trying to connect to rabbitmq at: 127.0.0.1:5672
INFO 2023-04-03 21:17:10,999 GetConnection 98 : [+] Successfully connected to rabbitmq
INFO 2023-04-03 21:17:11,038 ReceiveFromMythicDirectTopicExchange 306 : [*] started listening for messages on emit_webhook.new_callback
INFO 2023-04-03 21:17:11,038 ReceiveFromMythicDirectTopicExchange 306 : [*] started listening for messages on emit_webhook.new_feedback
INFO 2023-04-03 21:17:11,051 ReceiveFromMythicDirectTopicExchange 306 : [*] started listening for messages on emit_webhook.new_startup
INFO 2023-04-03 21:17:13,240 syncPayloadData 123 : [+] Successfully synced apfell
If your Mythic instance has a randomized password for rabbitmq_password
, then you need to make sure that the password from Mythic/.env
after you start Mythic for the first time is copied over to your VM. You can either add this to your rabbitmq_config.json
file or set it as an environment variable (MYTHIC_RABBITMQ_PASSWORD
).
There are a few caveats to this process compared to the normal Docker flow. You're now responsible for making sure that the right Python version and dependencies are installed, and for making sure that the user context everything runs under has the proper permissions.
One big caveat people tend to forget about is paths. Normal containers run on *nix, but you might be doing this dev on Windows. So if you develop everything with Windows paths hard-coded and then want to convert it to a normal Docker container later, that might come back to haunt you.
Whether you're using a Docker container or not, you can load up the code in your agent_code
folder in any IDE you want. When an agent is installed via mythic-cli
, the entire agent folder (agent_code
and mythic
) is mapped into the Docker container. This means that any edits you make to the code are automatically reflected inside of the container without having to restart it (pretty handy). The only caveat is that modifications to the Python or Golang definition files require you to restart your container to load up the changes: sudo ./mythic-cli start [payload name]
. If you're making changes to those from a non-Docker instance, simply stop your python3 main.py
and start it again. This effectively forces those files to be loaded up again and re-synced over to Mythic.
If you're doing anything more than a typo fix, you're going to want to test the fixes/updates you've made to your code before you bother uploading it to a GitHub project, re-installing it, creating new agents, etc. Luckily, this can be super easy.
Say you have a Visual Studio project set up in your agent_code
directory and you want to just "run" the project, complete with breakpoints and configurations so you can test. The only problem is that your local build needs to be known by Mythic in some way so that the Mythic UI can look up information about your agent, your "installed" commands, your encryption keys, etc.
To do this, you first need to generate a payload in the Mythic UI (or via Mythic's Scripting). You'll select any C2 configuration information you need, any commands you want baked in, etc. When you click to build, all of that configuration will get sent to your payload type's "build" function in mythic/agent_functions/builder.py
. Even if you don't have your container running or it fails to build, no worries, Mythic will first save everything off into the database before trying to actually build the agent. In the Mythic UI, now go to your payloads page and look for the payload you just tried to build. Click to view the information about the payload and you'll see a summary of all the components you selected during the build process, along with some additional pieces of information (payload UUID and generated encryption keys).
Take that payload UUID and the rest of the configuration and stamp it into your agent_code
build. For some agents this is as easy as modifying the values in a Makefile, for some agents this can all be set in a config
file of some sort, but however you want to specify this information is up to you. Once all of that is set, you're free to run your agent from within your IDE of choice and you should see a callback in Mythic. At this point, you can do whatever code mods you need, re-run your code, etc.
Following from the previous section, if you just use the payload UUID and run your agent, you should end up with a new callback each time. That can be ideal in some scenarios, but sometimes you're doing quick fixes and want to just keep tasking the same callback over and over again. To do this, simply pull the callback UUID and encryption keys from the callback information on the active callbacks page and plug that into your agent. Again, based on your agent's configuration, that could be as easy as modifying a Makefile, updating a config file, or you might have to manually comment/uncomment some lines of code. Once you're reporting back with the callback UUID instead of the payload UUID and using the right encryption keys, you can keep re-running your build without creating new callbacks each time.
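What that stamping looks like is entirely agent specific; as a purely hypothetical illustration, a config stub might carry both values and a switch:
# Hypothetical agent config stub - your real agent might use a Makefile, a JSON file,
# or compile-time defines instead; the values come straight from the Mythic UI.
PAYLOAD_UUID = "b50a5fe8-099d-4611-a2ac-96d93e6ec77b"     # from the payloads page: new callback each run
CALLBACK_UUID = "a21bab2e-462e-49ab-9800-fbedaf53ad15"    # from the active callbacks page: reuse one callback
AES_PSK = "hfN9Nk29S8LsjrE9ffbT9KONue4uozk+/TVMyrxDvvM="  # matching encryption key shown in the UI
USE_EXISTING_CALLBACK = True                              # flip to False to check in as a brand new callback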
So, you want to add a new command to a Payload Type. What does that mean, where do you go, what all do you have to do?
Luckily, the Payload Type containers are the source of truth for everything related to them, so that's the only place you'll need to edit. If your payload type uses its own custom message format, then you might also have to edit your associated translation container, but that's up to you.
Make a new .py
file with your command class and make sure it gets imported before mythic_container.mythic_service.start_and_run_forever
is called so that the container is aware of the command before syncing over.
This new file should match the requirements of the rest of the commands
Once you're done making edits, restart your payload type container via: ./mythic-cli start [payload type name]
. This will restart just that one payload type container, reloading the python files automatically, and re-syncing the data with Mythic.
Make a new .go
file with your new command struct instance. You can either do this as part of an init
function so it gets picked up automatically when the package/file is imported, or you can have specific calls that initialize and register the command.
Eventually, run agentstructs.AllPayloadData.Get("agent name").AddCommand
so that the Mythic container is aware that the command exists. Make sure this line is executed before your MythicContainer.StartAndRunForever
function call.
This new file should match the requirements of the rest of the commands
Once you're done making edits, restart your payload type container via: ./mythic-cli build [payload type name]
. This will rebuild and restart just that one payload type container and re-sync the data with Mythic.
Command information is tracked in your Payload Type's container. Each command has its own Python class or GoLang struct. In Python, you leverage CommandBase
and TaskArguments
to define information about the command and information about the command's arguments.
CommandBase defines the metadata about the command as well as any pre-processing functionality that takes place before the final command is ready for the agent to process. This class includes the create_go_tasking
(Create_Tasking) and process_response
(Process Response) functions.
TaskArguments does two things:
defines the parameters that the command needs
verifies / parses out the user supplied arguments into their proper components
this includes taking user supplied free-form input (like arguments to a sleep command - 10 4
) and parsing it into well-defined JSON that's easier for the agent to handle (like {"interval": 10, "jitter": 4}
). This can also take user-supplied dictionary input and parse it out into the rightful CommandParameter objects.
This also includes verifying all the necessary pieces are present. Maybe your command requires a source and destination, but the user only supplied a source. This is where that would be determined and an error returned to the user. This saves your agent from having to do that sort of parsing and validation itself.
If you're curious how this all plays out in a diagram, you can find one here: Message Flow.
from mythic_container.MythicCommandBase import *
from mythic_container import MythicCommandBase


class ScreenshotCommand(CommandBase):
    cmd = "screenshot"
    needs_admin = False
    help_cmd = "screenshot"
    description = "Use the built-in CGDisplay API calls to capture the display and send it back over the C2 channel. No need to specify any parameters as the current time will be used as the file name"
    version = 1
    author = ""
    attackmapping = ["T1113"]
    argument_class = ScreenshotArguments  # the TaskArguments subclass defined alongside this command
    browser_script = BrowserScript(script_name="screenshot", author="@its_a_feature_")
    attributes = CommandAttributes(
        spawn_and_injectable=True,
        supported_os=[SupportedOS.MacOS],
        builtin=False,
        load_only=False,
        suggested_command=False,
    )
    script_only = False

    async def create_go_tasking(self, taskData: MythicCommandBase.PTTaskMessageAllData) -> MythicCommandBase.PTTaskCreateTaskingMessageResponse:
        response = MythicCommandBase.PTTaskCreateTaskingMessageResponse(
            TaskID=taskData.Task.ID,
            Success=True,
        )
        return response

    async def process_response(self, task: PTTaskMessageAllData, response: any) -> PTTaskProcessResponseMessageResponse:
        resp = PTTaskProcessResponseMessageResponse(TaskID=task.Task.ID, Success=True)
        return resp
Creating your own command requires extending this CommandBase class (i.e. class ScreenshotCommand(CommandBase))
and providing values for all of the above components.
cmd
- this is the command name. The name of the class doesn't matter, it's this value that's used to look up the right command at tasking time
needs_admin
- this is a boolean indicator for if this command requires admin permissions
help_cmd
- this is the help information presented to the user if they type help [command name]
from the main active callbacks page
description
- this is the description of the command. This is also presented to the user when they type help.
supported_ui_features
- This is an array of values that indicates where this command might be used within the UI. For example, from the active callbacks page, you see a table of all the callbacks. As part of this, there's a dropdown you can use to automatically issue an exit
task to the callback. How does Mythic know which command to actually send? It's this array that dictates that. The following are used by the callback table, file browser, and process listing, but you're able to add in any that you want and leverage them via browser scripts for additional tasking:
supported_ui_features = ["callback_table:exit"]
supported_ui_features = ["file_browser:list"]
supported_ui_features = ["process_browser:list"]
supported_ui_features = ["file_browser:download"]
supported_ui_features = ["file_browser:remove"]
supported_ui_features = ["file_browser:upload"]
supported_ui_features = ["task_response:interactive"]
version
- this is the version of the command you're creating/editing. This allows a helpful way to make sure your commands are up to date and tracking changes
argument_class
- this correlates this command to a specific TaskArguments
class for processing/validating arguments
attackmapping
- this is a list of strings to indicate MITRE ATT&CK mappings. These are in "T1113" format.
agent_code_path
is automatically populated for you like in building the payload. This allows you to access code files from within commands in case you need to access files, functions, or create new pieces of payloads. This is really useful for a load
command so that you can find and read the functions you're wanting to load in.
You can optionally add in the attributes
variable. This is a new class called CommandAttributes
where you can set whether or not your command supports being injected into a new process (some commands like cd
or exit
don't make sense for example). You can also provide a list of supported operating systems. This is helpful when you have a payload type that might compile into multiple different operating system types, but not all the commands work for all the possible operating systems. Instead of having to write "not implemented" or "not supported" function stubs, this will allow you to completely filter this capability out of the UI so users don't even see it as an option.
Available options are:
supported_os
an array of SupportedOS fields (ex: [SupportedOS.MacOS]
) (in Python for a new SupportedOS you can simply do SupportedOS("my os name")
).
spawn_and_injectable
is a boolean to indicate if the command can be injected into another process
builtin
is a boolean to indicate if the command should be always included in the build process and can't be unselected
load_only
is a boolean to indicate if the command can't be built in at the time of payload creation, but can be loaded in later
suggested_command
is a boolean to indicate if the command should be pre-selected for users when building a payload
filter_by_build_parameter
is a dictionary of parameter_name:value
for what's required of the agent's build parameters. This is useful for when some commands are only available depending on certain values when building your agent (such as agent version).
You can also add in any other values you want for your own processing. These are simply key=value
pairs of data that are stored. Some people use this to identify if a command has a dependency on another command. This data can be fetched via RPC calls for things like a load
command to see what additional commands might need to be included.
This ties into the CommandParameter fields choice_filter_by_command_attributes
, choices_are_all_commands
, and choices_are_loaded_commands
.
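As an illustration of stashing that extra key=value data (the dependencies key is made up, and passing it as an extra keyword argument to CommandAttributes is an assumption based on the description above):
attributes = CommandAttributes(
    supported_os=[SupportedOS.MacOS],
    load_only=True,
    dependencies=["ls"],  # hypothetical custom key=value data a load command could query via RPC later
)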
The create_go_tasking
function is very broad and covered in Create_Tasking
The process_response
is similar, but allows you to specify that data shouldn't automatically be processed by Mythic when an agent checks in, but instead should be passed to this function for further processing and to use Mythic's RPC functionality to register the results into the system. The data passed here comes from the post_response
message (Process Response).
The script_only
flag indicates if this Command will be used strictly for things like issuing subtasks, but will NOT be compiled into the agent. The nice thing here is that you can now generate commands that don't need to be compiled into the agent for you to execute. These tasks never enter the "submitted" stage for an agent to pick up - instead they simply go into the create_tasking scenario (complete with subtasks and full RPC functionality) and then go into a completed state.
The TaskArguments class defines the arguments for a command and defines how to parse the user supplied string so that we can verify that all required arguments are supplied. Mythic now tracks where tasking came from and can automatically handle certain instances for you. Mythic now tracks a tasking_location
field which has the following values:
command_line
- this means that the input you're getting is just a raw string, like before. It could be something like x86 13983 200
with a series of positional parameters for a command, it could be {"command": "whoami"}
as a JSON string version of a dictionary of arguments, or anything else. In this case, Mythic really doesn't know enough about the source of the tasking or the contents of the tasking to provide more context.
parsed_cli
- this means that the input you're getting is a dictionary that was parsed by the new web interface's CLI parser. This is what happens when you type something on the command line for a command that has arguments (ex: shell whoami
or shell -command whoami
). Mythic can successfully parse out the parameters you've given into a single parameter_group and gives you a dictionary
of data.
modal
- this means that the input you're getting is a dictionary that came from the tasking modal. Nothing crazy here, but it does at least mean that there shouldn't be any silly shenanigans with potential parsing issues.
browserscript
- if you click a tasking button from a browserscript table and that tasking button provides a dictionary to Mythic, then Mythic can forward that down as a dictionary. If the tasking button from a browserscript table submits a String
instead, then that gets treated as command_line
in terms of parsing.
With this ability to track where tasking is coming from and what form it's in, an agent's command file can choose to parse this data differently. By default, all commands must supply a parse_arguments
function in their associated TaskArguments
subclass. If you do nothing else, then all of these various forms will get passed to that function as strings (if it's a dictionary it'll get converted into a JSON string). However, you can provide another function, parse_dictionary
that can handle specifically the cases of parsing a given dictionary into the right CommandParameter objects as shown below:
async def parse_arguments(self):
    if len(self.command_line) == 0:
        raise ValueError("Must supply arguments")
    if self.command_line[0] == "{":
        try:
            self.load_args_from_json_string(self.command_line)
            return
        except Exception as e:
            pass
    # if we got here, we weren't given a JSON string but raw text to parse
    # here's an example, though error prone because it splits on " " characters
    pieces = self.command_line.split(" ")
    self.add_arg("arg1", pieces[0])
    self.add_arg("arg2", pieces[1])

async def parse_dictionary(self, dictionary_arguments):
    self.load_args_from_dictionary(dictionary_arguments)
In self.args
we define an array of our arguments and what they should be along with default values if none were provided.
In parse_arguments
we parse the user supplied self.command_line
into the appropriate arguments. The hard part comes when you allow the user to type arguments free-form and then must parse them out into the appropriate pieces.
class LsArguments(TaskArguments):
    def __init__(self, command_line, **kwargs):
        super().__init__(command_line, **kwargs)
        self.args = [
            CommandParameter(
                name="path",
                type=ParameterType.String,
                default_value=".",
                description="Path of file or folder on the current system to list",
                parameter_group_info=[ParameterGroupInfo(
                    required=False
                )]
            )
        ]

    async def parse_arguments(self):
        self.add_arg("path", self.command_line)

    async def parse_dictionary(self, dictionary):
        if "host" in dictionary:
            # then this came from the file browser
            self.add_arg("path", dictionary["path"] + "/" + dictionary["file"])
            self.add_arg("file_browser", type=ParameterType.Boolean, value=True)
        else:
            self.load_args_from_dictionary(dictionary)
The main purpose of the TaskArguments class is to manage arguments for a command. It handles parsing the command_line
string into CommandParameters
, defining the CommandParameters
, and providing an easy interface into updating/accessing/adding/removing arguments as needed.
As part of the TaskArguments
subclass, you have access to the following pieces of information:
self.command_line
- the parameters sent down for you to parse
self.raw_command_line
- the original parameters that the user typed out. This is useful in case you have additional pieces of information to process or don't want information processed into the standard JSON/Dictionary format that Mythic uses.
self.tasking_location
- this indicates where the tasking came from
self.task_dictionary
- this is a dictionary representation of the task you're parsing the arguments for. You can see things like the initial parameter_group_name
that Mythic parsed for this task, the user that issued the task, and more.
self.parameter_group_name
- this allows you to manually specify what the parameter group name should be. Maybe you don't want Mythic to do automatic parsing to determine the parameter group name, maybe you have additional pieces of data you're using to determine the group, or maybe you plan on adjusting it later on. Whatever the case might be, if you set self.parameter_group_name = "value"
, then Mythic won't continue trying to identify the parameter group based on the current parameters with values.
The class must implement the parse_arguments
method and define the args
array (it can be empty). This parse_arguments
method is the one that allows users to supply "short hand" tasking and still parse out the parameters into the required JSON structured input. If you have defined command parameters though, the user can supply the required parameters on the command line (via -commandParameterName
or via the popup tasking modal via shift+enter
).
When syncing the command with the UI, Mythic goes through each class that extends the CommandBase, looks at the associated argument_class
, and parses that class's args
array of CommandParameters
to create the pop-up in the UI.
While the TaskArgument's parse_arguments
method simply parses the user supplied input and sets the values for the named arguments, it's the CommandParameter's class that actually verifies that every required parameter has a value, that all the values are appropriate, and that default values are supplied if necessary.
CommandParameters, similar to BuildParameters, provide information for the user via the UI and validates that the values are all supplied and appropriate.
class CommandParameter:
    def __init__(
        self,
        name: str,
        type: ParameterType,
        display_name: str = None,
        cli_name: str = None,
        description: str = "",
        choices: [any] = None,
        default_value: any = None,
        validation_func: callable = None,
        value: any = None,
        supported_agents: [str] = None,
        supported_agent_build_parameters: dict = None,
        choice_filter_by_command_attributes: dict = None,
        choices_are_all_commands: bool = False,
        choices_are_loaded_commands: bool = False,
        dynamic_query_function: callable = None,
        parameter_group_info: [ParameterGroupInfo] = None
    ):
        self.name = name
        if display_name is None:
            self.display_name = name
        else:
            self.display_name = display_name
        if cli_name is None:
            self.cli_name = name
        else:
            self.cli_name = cli_name
        self.type = type
        self.user_supplied = False  # keep track of if this is using the default value or not
        self.description = description
        if choices is None:
            self.choices = []
        else:
            self.choices = choices
        self.validation_func = validation_func
        if value is None:
            self._value = default_value
        else:
            self.value = value
        self.default_value = default_value
        self.supported_agents = supported_agents if supported_agents is not None else []
        self.supported_agent_build_parameters = supported_agent_build_parameters if supported_agent_build_parameters is not None else {}
        self.choice_filter_by_command_attributes = choice_filter_by_command_attributes if choice_filter_by_command_attributes is not None else {}
        self.choices_are_all_commands = choices_are_all_commands
        self.choices_are_loaded_commands = choices_are_loaded_commands
        self.dynamic_query_function = dynamic_query_function
        if not callable(dynamic_query_function) and dynamic_query_function is not None:
            raise Exception("dynamic_query_function is not callable")
        self.parameter_group_info = parameter_group_info
        if self.parameter_group_info is None:
            self.parameter_group_info = [ParameterGroupInfo()]
name
- the name of the parameter that your agent will use. cli_name
is an optional variation that you want users to type when typing out commands on the command line, and display_name
is yet another optional name to use when displaying the parameter in a popup tasking modal.
type
- this is the parameter type. The valid types are:
String - gets a string value
Boolean - gets a boolean value
File
Upload a file through your browser. In your create tasking though, you get a String UUID of the file that can be used via SendMythicRPC* calls to get more information about the file or the file contents
Array
An Array of string values
TypedArray
An array of arrays, ex: [ ["int", "5"], ["char*", "testing"] ]
ChooseOne - gets a string value
ChooseMultiple
An Array of string values
Credential_JSON
Select a specific credential that's registered in the Mythic credential store. In your create tasking, get a JSON representation of all data for that credential
Number
Payload
Select a payload that's already been generated and get the UUID for it. This is helpful for using that payload as a template to automatically generate another version of it to use as part of lateral movement or spawning new agents.
ConnectionInfo
Select the Host, Payload/Callback, and P2P profile for an agent or callback that you want to link to via a P2P mechanism. This allows you to generate random parameters for payloads (such as named-pipe names) and not require you to remember them when linking. You can simply select them and get all of that data passed to the agent.
When this is up in the UI, you can also track new payloads on hosts in case Mythic isn't aware of them (maybe you moved and executed payloads in a method outside of Mythic). This allows Mythic to track that payload X is now on host Y and you can use the same selection process as the first bullet to filter down and select it for linking.
LinkInfo
Get a list of all active/dead P2P connections for a given agent. Selecting one of these links gives you all the same information that you'd get from the ConnectionInfo
parameter. The goal here is to allow you to easily select to "unlink" from an agent or to re-link to a very specific agent on a host that you were previously connected to.
description
- this is the description of the parameter that's presented to the user when the modal pops up for tasking
choices
- this is an array of choices if the type is ChooseOne
or ChooseMultiple
If your command needs you to pick from the set of commands (rather than a static set of values), then there are a few other components that come into play. If you want the user to be able to select any command for this payload type, then set choices_are_all_commands
to True
. Alternatively, you could specify that you only want the user to choose from commands that are already loaded into the callback, then you'd set choices_are_loaded_commands
to True
. As a modifier to either of these, you can set choice_filter_by_command_attributes
to filter down the options presented to the user even more based on the parameters of the Command's attributes
parameter. This would allow you to limit the user's list down to commands that are loaded into the current callback that support MacOS for example. An example of this would be:
CommandParameter(name="test name",
                 type=ParameterType.ChooseMultiple,
                 description="so many choices!",
                 choices_are_all_commands=True,
                 choice_filter_by_command_attributes={"supported_os": [SupportedOS.MacOS]}),
choices
- for the TypedArray
type, the choices
here is the list of options you want to provide in the dropdown for the user. So if you have choices as ["int", "char*"]
, then when the user adds a new array entry in the modal, those two will be the options. Additionally, if you set the default_value
to char*
, then char*
will be the value selected by default.
validation_func
- this is an additional function you can supply to do extra checks on values to make sure they're valid for the command. If a value isn't valid, an exception should be raised. A small sketch of one is shown after this list.
value
- this is the final value for the parameter; it'll either be the default_value or the value supplied by the user. This isn't something you set directly.
default_value
- this is a value that'll be set if the user doesn't supply a value
supported_agents
- If your parameter type is Payload
then you're expecting to choose from a list of already created payloads so that you can generate a new one. The supported_agents
list allows you to narrow down that dropdown field for the user. For example, if you only want to see agents related to the apfell
payload type in the dropdown for this parameter of your command, then set supported_agents=["apfell"]
when declaring the parameter.
supported_agent_build_parameters
- allows you to get a bit more granular in specifying which agents you want to show up when you select the Payload
parameter type. It might be the case that a command doesn't just need instance of the atlas
payload type, but maybe it only works with the Atlas
payload type when it's compiled into .NET 3.5. This parameter value could then be supported_agent_build_parameters={"atlas": {"version":"3.5"}}
. This value is a dictionary where the key is the name of the payload type and the value is a dictionary of what you want the build parameters to be.
dynamic_query_function
- More information can be found here, but you can provide a function here for ONLY parameters of type ChooseOne or ChooseMultiple where you dynamically generate the array of choices you want to provide the user when they try to issue a task of this type.
typedarray_parse_function
- This allows you to have typed arrays more easily displayed and parsed throughout Mythic (useful for BOF/COFF work). More information for this can be found here.
Most command parameters are pretty straightforward - the one that's a bit unique is the File type (where a user is uploading a file as part of the tasking). When you're doing your tasking, this value
will be the base64 string of the file uploaded.
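Since validation_func came up in the list above, here's a minimal sketch of one; the exact calling convention (a callable that receives the supplied value and raises on bad input) is an assumption, and the port parameter is made up:
def validate_port(value):
    # reject anything that isn't a usable TCP port
    if int(value) < 1 or int(value) > 65535:
        raise ValueError("port must be between 1 and 65535")
    return value

CommandParameter(
    name="port",
    type=ParameterType.Number,
    description="Port to connect to",
    validation_func=validate_port,
)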
To help with conditional parameters, Mythic 2.3 introduced parameter groups. Every parameter must belong to at least one parameter group (if one isn't specified by you, then Mythic will add it to the Default
group and make the parameter required
).
You can specify this information via the parameter_group_info
attribute on CommandParameter
class. This attribute takes an array of ParameterGroupInfo
objects. Each one of these objects has three attributes: group_name
(string), required
(boolean), and ui_position
(integer). These things together allow you to provide conditional parameter groups to a command.
Let's look at an example - the new apfell
agent's upload
command now leverages conditional parameters. This command allows you to either:
specify a remote_path
and a filename
- Mythic then looks up the filename to see if it's already been uploaded to Mythic before. If it has, Mythic can simply use the same file identifier and pass that along to the agent.
specify a remote_path
and a file
- This is uploading a new file, registering it within Mythic, and then passing along that new file identifier
Notice how both options require the remote_path
parameter, but the file
and filename
parameters are mutually exclusive.
class UploadArguments(TaskArguments):
    def __init__(self, command_line, **kwargs):
        super().__init__(command_line, **kwargs)
        self.args = [
            CommandParameter(
                name="file", cli_name="new-file", display_name="File to upload", type=ParameterType.File, description="Select new file to upload",
                parameter_group_info=[
                    ParameterGroupInfo(
                        required=True,
                        group_name="Default"
                    )
                ]
            ),
            CommandParameter(
                name="filename", cli_name="registered-filename", display_name="Filename within Mythic", description="Supply existing filename in Mythic to upload",
                type=ParameterType.ChooseOne,
                dynamic_query_function=self.get_files,
                parameter_group_info=[
                    ParameterGroupInfo(
                        required=True,
                        group_name="specify already uploaded file by name"
                    )
                ]
            ),
            CommandParameter(
                name="remote_path",
                cli_name="remote_path",
                display_name="Upload path (with filename)",
                type=ParameterType.String,
                description="Provide the path where the file will go (include new filename as well)",
                parameter_group_info=[
                    ParameterGroupInfo(
                        required=True,
                        group_name="Default",
                        ui_position=1
                    ),
                    ParameterGroupInfo(
                        required=True,
                        group_name="specify already uploaded file by name",
                        ui_position=1
                    )
                ]
            ),
        ]
So, the file
parameter has one ParameterGroupInfo
that calls out the parameter as required. The filename
parameter also has one ParameterGroupInfo
that calls out the parameter as required. It also has a dynamic_query_function
that allows the task modal to run a function to populate the selection box. Lastly, the remote_path
parameter has TWO ParameterGroupInfo
objects in its array - one for each group. This is because the remote_path
parameter applies to both groups. You can also see that we have a ui_position
specified for these which means that regardless of which option you're viewing in the tasking modal, the parameter remote_path
will be the first parameter shown. This helps make things a bit more consistent for the user.
If you're curious, the function used to get the list of files for the user to select is here:
async def get_files(self, inputMsg: PTRPCDynamicQueryFunctionMessage) -> PTRPCDynamicQueryFunctionMessageResponse:
    fileResponse = PTRPCDynamicQueryFunctionMessageResponse(Success=False)
    file_resp = await MythicRPC().execute("get_file", callback_id=inputMsg.Callback,
                                          limit_by_callback=False,
                                          filename="",
                                          max_results=-1)
    if file_resp.status == MythicRPCStatus.Success:
        file_names = []
        for f in file_resp.response:
            if f["filename"] not in file_names and f["filename"].endswith(".exe"):
                file_names.append(f["filename"])
        fileResponse.Success = True
        fileResponse.Choices = file_names
        return fileResponse
    else:
        fileResponse.Error = file_resp.error
        return fileResponse
In the above code block, we're searching for files, not getting their contents, not limiting ourselves to just what's been uploaded to the callback we're tasking, and looking for all files (really it's all files that have "" in the name, which would be all of them). We then go through to de-dupe the filenames (this particular example also only keeps names ending in .exe) and return that list to the user.
So, with all that's going on, it's helpful to know what gets called, when, and what you can do about it.
Manipulate tasking before it's sent to the agent
All commands must have a create_go_tasking function with a base case like:
async def create_go_tasking(self, taskData: MythicCommandBase.PTTaskMessageAllData) -> MythicCommandBase.PTTaskCreateTaskingMessageResponse:
    response = MythicCommandBase.PTTaskCreateTaskingMessageResponse(
        TaskID=taskData.Task.ID,
        Success=True,
    )
    return response
TaskFunctionCreateTasking: func(taskData *agentstructs.PTTaskMessageAllData) agentstructs.PTTaskCreateTaskingMessageResponse {
    response := agentstructs.PTTaskCreateTaskingMessageResponse{
        Success: true,
        TaskID:  taskData.Task.ID,
    }
    return response
},
When an operator types a command in the UI, whatever the operator types (or whatever is populated based on the popup modal) gets sent to this function after the input is parsed and validated by the TaskArguments and CommandParameters functions mentioned in Commands.
It's here that the operator has full control of the task before it gets sent down to an agent. The task is currently in the "preprocessing" stage when this function is executed and allows you to do many things via Remote Procedure Calls (RPC) back to the Mythic server.
So, from this create tasking function, what information do you immediately have available? https://github.com/MythicMeta/MythicContainerPyPi/blob/main/mythic_container/MythicCommandBase.py#L1071-L1088 <-- this class definition provides the basis for what's available.
taskData.Task
- Information about the Task that's issued
taskData.Callback
- Information about the Callback for this task
taskData.Payload
- Information about the backing payload for this callback
taskData.Commands
- A list of the commands currently loaded into this callback
taskData.PayloadType
- The name of this payload type
taskData.BuildParameters
- The build parameters and their values used when building the payload for this callback
taskData.C2Profiles
- Information about the C2 Profiles included inside of this callback.
taskData.args
- access to the associated arguments class for this command that already has all of the values populated and validated. Let's say you have an argument called "remote_path", you can access it via taskData.args.get_arg("remote_path")
.
Want to change the value of that to something else? taskData.args.add_arg("remote_path", "new value")
.
Want to change the value of that to a different type as well? taskData.args.add_arg("remote_path", 5, ParameterType.Number)
Want to add a new argument entirely for this specific instance as part of the JSON response? taskData.args.add_arg("new key", "new value")
. The add_arg
functionality will overwrite the value if the key exists, otherwise it'll add a new key with that value. The default ParameterType for args is ParameterType.String
, so if you're adding something else, be sure to change the type. Note: If you have multiple parameter groups as part of your tasking, make sure you specify which parameter group your new argument belongs to. By default, the argument gets added to the Default
parameter group. This could result in some confusion where you add an argument, but it doesn't get picked up and sent down to the agent.
You can also remove args taskData.args.remove_arg("key")
, rename args taskData.args.rename_arg("old key", "new key")
You can also get access to the user's commandline as well via taskData.args.commandline
Want to know if an arg is in your args? taskData.args.has_arg("key")
taskData.Task.TokenID
- information about the token that was used with the task. This requires that the callback has at some point returned tokens for Mythic to track, otherwise this will be 0.
In the PTTaskCreateTaskingMessageResponse
, you can set a variety of attributes to reflect changes back to Mythic as a result of your processing: https://github.com/MythicMeta/MythicContainerPyPi/blob/main/mythic_container/MythicCommandBase.py#L820
Success
- did your processing succeed or not? If not, set Error
to a string value representing the error you encountered.
CommandName
- If you want the agent to see the command name for this task as something other than what the actual command's name is, reflect that change here. This can be useful if you are creating an alias for a command. So, your agent has the command ls
, but you create a script_only command dir
. During the processing of dir
you set the CommandName
to ls
so that the agent sees ls
and processes it as normal.
TaskStatus
- If something went wrong and you want to reflect a specific status to the user, you can set that value here. Status that start with error:
will appear red
in the UI.
Stdout
and Stderr
- set these if you want to provide some additional stdout/stderr for the task but don't necessarily want it to clutter the user's interface. This is helpful if you're doing additional compilations as part of your tasking and want to store debug or error information for later.
Completed
- If this is set to True
then Mythic will mark the task as done and won't allow an agent to pick it up.
CompletionFunctionName
- if you want to have a specific local function called when the task completes (such as to do follow-on tasking or more RPC calls), then specify that function name here. This requires a matching entry in the command's completion_functions
like follows:
completion_functions = {"formulate_output": formulate_output}
ParameterGroupName
- if you want to explicitly set the parameter group name instead of letting Mythic figure it out based on which parameters have values, you can specify that here.
DisplayParams
- you can set this value to a string that you'd want the user to see instead of the taskData.Task.OriginalParams
. This allows you to leverage the JSON structure of the popup modals for processing, but return a more human-friendly version of the parameters for operators to view. There's a new menu-item in the UI when viewing a task that you can select to view all the parameters, so on a case-by-case basis an operator can view the original JSON parameters that were sent down, but this provides a nice way to prevent large JSON blobs that are hard to read for operators while still preserving the nice JSON scripting features on the back-end.
This additional functionality is broken out into a series of files (https://github.com/MythicMeta/MythicContainerPyPi/tree/main/mythic_container/MythicGoRPC) that you can import at the top of your Python command file.
They all follow the same format:
async def SendMythicRPC*(MythicRPC*Message) -> MythicRPC*MessageResponse
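As a sketch of that pattern, the call below assumes an RPC named SendMythicRPCResponseCreate (with TaskID and Response fields) exists in that repository - check the linked files for the authoritative names - and would register output for a task from inside create_go_tasking:
from mythic_container.MythicGoRPC import SendMythicRPCResponseCreate, MythicRPCResponseCreateMessage

output_resp = await SendMythicRPCResponseCreate(MythicRPCResponseCreateMessage(
    TaskID=taskData.Task.ID,
    Response="informational output for the operator".encode(),
))
if not output_resp.Success:
    raise Exception(output_resp.Error)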
This section talks about the different components for creating messages from the agent to a C2 docker container and how those can be structured within a C2 profile. Specifically, this goes into the following components:
How agent messages are formatted
How to perform initial checkins and do encrypted key exchanges
How to Get Tasking
How to Post Responses
Uploading Files
Another major component of the agent side coding is the actual C2 communications piece within your agent. This piece is how your agent actually implements the C2 components to do its magic.
Every C2 profile has zero or more C2 Parameters that go with it. These describe things like callback intervals, API keys to use, how to format web requests, encryption keys, etc. These parameters are specific to that C2 profile, so any agent that "speaks" that c2 profile's language will leverage these parameters. If you look at the parameters in the UI, you'll see:
Name
- When creating payloads or issuing tasking, you will get a dictionary of name
-> user supplied value
for you to leverage. This is a unique key per C2 profile (ex: callback_host
)
description
- This is what's presented to the user for the parameter (ex: Callback host or redirector in URL format
)
default_value
- If the user doesn't supply a value, this is the default one that will be used
verifier_regex
- This is a regex applied to the user input in the UI for a visual cue that the parameter is correct. An example would be ^(http|https):\/\/[a-zA-Z0-9]+
for the callback_host
to make sure that it starts with http:// or https:// and contains at least one letter/number.
required
- Indicate if this is a required field or not.
randomized
- This is a boolean indicating if the parameter should be randomized each time. This comes into play each time a payload is generated with this c2 profile included. This allows you to have a random value in the c2 profile that's randomized for each payload (like a named pipe name).
format_string
- If randomized
is true
, then this is the regex format string used to generate that random value. For example, [a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}
will generate a UUID4 each time.
This page describes how an agent message is formatted
All messages go to the /agent_message
endpoint via the associated C2 Profile docker container. These messages can be:
POST request
message content in body
GET request
message content in FIRST header value
message content in FIRST cookie value
message content in FIRST query parameter
For query parameters, the Base64 content must be URL Safe Encoded - this has different meaning in different languages, but means that for the "unsafe" characters of +
and /
, they need to be swapped out with -
and _
instead of %encoded. Many languages have a special Base64 Encode/Decode function for this. If you're curious, this is an easy site to check your encoding: https://www.base64url.com/
message content in body
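For the query parameter case above, most languages ship a URL-safe Base64 variant; in Python, for example:
import base64

# swaps the unsafe + and / characters for - and _ rather than %-encoding them
encoded = base64.urlsafe_b64encode(b"<agent message bytes>").decode()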
All agent messages have the same general structure, but it's the message inside the structure that varies.
Each message has the following general format shown below. The message is a JSON string, which is then typically encrypted (doesn't have to be though), with a UUID prepended, and then the entire thing base64 encoded:
base64(
    UUID + EncBlob( // the following is all encrypted
        JSON({
            "action": "", // indicating what the message is - required
            "...": ...,   // JSON data relating to the action - required
            // this piece is optional and just for p2p mesh forwarding
            "delegates": [
                {"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"},
                {"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"}
            ]
        })
    )
)
There are a couple of components to note here in what's called an agentMessage
:
UUID
- This UUID varies based on the phase of the agent (initial checkin, staging, fully staged). This is a 36 character long string of the format b50a5fe8-099d-4611-a2ac-96d93e6ec77b
. Optionally, if your agent is dealing with more of a binary-level specification rather than strings, you can use a 16 byte big-endian value here for the binary representation of the UUID4 string.
EncBlob
- This section is encrypted, typically with an AES256 key, but when agents are staging, this could be encrypted with RSA keys or as part of some other custom crypto/staging you're doing as part of your payload type container.
JSON
- This is the actual message that's being sent by the agent to Mythic or from Mythic to an agent. If you're doing your own custom message format and leveraging a translation container, then this format will obviously be different and will match up with your custom version; however, in your translation container you will need to convert back to this format so that Mythic can process the message.
action
- This specifies what the rest of the message means. This can be one of the following:
staging_rsa
checkin
get_tasking
post_response
translation_staging (you're doing your own staging)
...
- This section varies based on the action that's being performed. The different variations here can be found in Hooking Features , Initial Checkin, and Agent Responses
delegates
- This section contains messages from other agents that are being passed along. This is how messages from nested peer-to-peer agents can be forwarded out through an egress callback. If your agent isn't forwarding messages on from others (such as in a p2p mesh or as an egress point), then you don't need this section. More info can be found here: Delegates (p2p)
+
- when you see something like UUID + EncBlob
, that's referring to byte concatenation of the two values. You don't need to do any specific processing or whatnot - just put the second element's bytes right after the first element's bytes.
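For a plaintext message, assembling this is just JSON plus byte concatenation plus base64; a minimal Python sketch:
import base64
import json

def build_agent_message(uuid: str, body: dict) -> str:
    # JSON the action body, prepend the 36-character UUID, then base64 the whole thing
    return base64.b64encode(uuid.encode() + json.dumps(body).encode()).decode()

msg = build_agent_message("a21bab2e-462e-49ab-9800-fbedaf53ad15",
                          {"action": "get_tasking", "tasking_size": -1})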
Let's look at a few concrete examples without encryption and already base64 decoded:
a21bab2e-462e-49ab-9800-fbedaf53ad15
{
    "action": "checkin",
    "uuid": "a21bab2e-462e-49ab-9800-fbedaf53ad15",
    "user": "bob",
    "domain": "domain.com",
    "pid": 123
}
a21bab2e-462e-49ab-9800-fbedaf53ad15
{
    "action": "get_tasking",
    "tasking_size": -1
}
a21bab2e-462e-49ab-9800-fbedaf53ad15
{
    "action": "get_tasking",
    "tasking_size": -1,
    "delegates": [
        {"message": agentMessage, "c2_profile": "tcp", "uuid": "uuid here"},
        {"message": agentMessage, "c2_profile": "smb", "uuid": "uuid here"}
    ]
}
a21bab2e-462e-49ab-9800-fbedaf53ad15
{
    "action": "post_response",
    "responses": [
        {
            "task_id": "c34bab2e-462e-49ab-9800-fbedaf53ad15",
            "completed": true,
            "user_output": "hello world"
        },
        {
            "task_id": "bab3ab2e-462e-49ab-9800-fbedaf53ad15",
            "completed": false,
            "user_output": "downloading file...",
            "download": {
                "total_chunks": 12,
                "chunk_size": 512000,
                "filename": "test.txt",
                "full_path": "C:\\Users\\test\\test.txt",
                "host": "ABC.COM",
                "is_screenshot": false
            }
        }
    ]
}
If you want to have a completely custom agent message format (different format for JSON, different field names/formatting, a binary or otherwise formatted protocol, etc), then there's only two things you have to do for it to work with Mythic.
Base64 encode the message
The first bytes of the message must be the associated UUID (payload, staging, callback).
Mythic uses these first few bytes to do a lookup in its database to find out everything about the message. Specifically for this case, it looks up if the associated payload type has a translation container, and if so, ships the message off to it first before trying to process it.
This page has the various different ways the initial checkin can happen and the encryption schemes used.
You will see a bunch of UUIDs mentioned throughout this section. All UUIDs are UUIDv4 formatted UUIDs (36 characters in length) and formatted like:
b446b886-ab97-49b2-b240-969a75393c06
In general, the UUID concatenated with the encrypted message provides a way to give context to the encrypted message without requiring a lot of extra pieces and without having to do a bunch of nested base64 encodings. 99% of the time, your messages will use your callbackUUID in the outer message. The outer UUID gives Mythic information about how to decrypt or interpret the following encrypted blob. In general:
payloadUUID as the outer UUID tells Mythic to look up that payload UUID, then look up the C2 profile associated with it, find a parameter called AESPSK
, and use that as the key to decrypt the message
tempUUID as the outer UUID tells Mythic that this is a staging process. So, look up the UUID in the staging database to see information about the blob, such as if it's an RSA encrypted blob or is part of a Diffie-Hellman key exchange
callbackUUID as the outerUUID tells Mythic that this is a full callback with an established encryption key or in plaintext.
However, when your payload first executes, it doesn't have a callbackUUID, it's just a payloadUUID. This is why you'll see clarifiers as to which UUID we're referring to when doing specific messages. The whole goal of the checkin
process is to go from a payload (and thus payloadUUID) to a full callback (and thus callbackUUID), so at the end of staging and everything you'll end up with a new UUID that you'll use as the outer UUID.
The plaintext checkin is useful for testing or when creating an agent for the first time. When creating payloads, you can generate encryption keys per c2 profile. To do so, the C2 Profile will have a parameter that has an attribute called crypto_type=True
. This will then signal to Mythic to either generate a new per-payload AES256_HMAC key or (if your agent is using a translation container) tell your agent's translation container to generate a new key. In the http
profile for example, this is a ChooseOne
option between aes256_hmac
or none
. If you're doing plaintext comms, then you need to set this value to none
when creating your payload. Mythic looks at that outer PayloadUUID
and checks if there's an associated encryption key with it in the database. If there is, Mythic will automatically try to decrypt the rest of the message, which will fail. This checkin has the following format:
Base64( PayloadUUID + JSON({
    "action": "checkin", // required
    "uuid": "payload uuid", // uuid of the payload - required
    "ips": ["127.0.0.1"], // internal ip addresses - optional
    "os": "macOS 10.15", // os version - optional
    "user": "its-a-feature", // username of current user - optional
    "host": "spooky.local", // hostname of the computer - optional
    "pid": 4444, // pid of the current process - optional
    "architecture": "x64", // platform arch - optional
    "domain": "test", // domain of the host - optional
    "integrity_level": 3, // integrity level of the process - optional
    "external_ip": "8.8.8.8", // external ip if known - optional
    "encryption_key": "base64 of key", // encryption key - optional
    "decryption_key": "base64 of key", // decryption key - optional
    "process_name": "osascript", // name of the current process - optional
    })
)
The JSON section is not encrypted in any way, it's all plaintext.
Here's an example checkin message:
ODA4NDRkMTktOWJmYy00N2Y5LWI5YWYtYzZiOTE0NGMwZmRjeyJhY3Rpb24iOiJjaGVja2luIiwiaXBzIjpbIjE3Mi4xNi4xLjEiLCIxOTIuMTY4LjAuMTE4IiwiMTkyLjE2OC4yMjguMCIsIjE5Mi4xNjguNTMuMSIsIjE5OC4xOS4yNDkuMyIsImZkMDc6YjUxYTpjYzY2OjA6YTYxNzpkYjVlOmFiNzplOWYxIiwiZmQ1MzpkYTlmOjk4MWE6NWI0Mjo4YjA6MzNjOTplMGE1OjIyNTYiLCJmZTgwOjoxIiwiZmU4MDo6MTQ3ZDpkYWZmOmZlZWM6YjQ2NCIsImZlODA6OjE0N2Q6ZGFmZjpmZWVjOmI0NjUiLCJmZTgwOjoxNDdkOmRhZmY6ZmVlYzpiNDY2IiwiZmU4MDo6MTQ3ZDpkYWZmOmZlZWM6YjQ2NyIsImZlODA6OjIyOmQxYzk6MWMyZTo5Mjk3IiwiZmU4MDo6MzQ3ZDpkYWZmOmZlY2U6M2ExNyIsImZlODA6OjNjMmQ6ODZiYjo4ZDk5OjJjNjciLCJmZTgwOjo4ODU3OjJhZmY6ZmU2NToyNTExIiwiZmU4MDo6ODg1NzoyYWZmOmZlNjU6MjUxMSIsImZlODA6OmFlZGU6NDhmZjpmZTAwOjExMjIiLCJmZTgwOjpjZTgxOmIxYzpiZDJjOjY5ZSIsImZlODA6OmQxMDM6N2IyNDo2YzliOjhlMjIiXSwib3MiOiJWZXJzaW9uIDEzLjQgKEJ1aWxkIDIyRjY2KSIsInVzZXIiOiJpdHNhZmVhdHVyZSIsImhvc3QiOiJzcG9va3kubG9jYWwiLCJwaWQiOjY1ODYsInV1aWQiOiI4MDg0NGQxOS05YmZjLTQ3ZjktYjlhZi1jNmI5MTQ0YzBmZGMiLCJhcmNoaXRlY3R1cmUiOiJhbWQ2NCIsImRvbWFpbiI6IiIsImludGVncml0eV9sZXZlbCI6MiwiZXh0ZXJuYWxfaXAiOiIiLCJwcm9jZXNzX25hbWUiOiIvVXNlcnMvaXRzYWZlYXR1cmUvRG9jdW1lbnRzL015dGhpY0FnZW50cy9wb3NlaWRvbi9QYXlsb2FkX1R5cGUvcG9zZWlkb24vcG9zZWlkb24vYWdlbnRfY29kZS9wb3NlaWRvbl93ZWJzb2NrZXRfaHR0cC5iaW4ifQ==
The checkin has the following response:
Base64( PayloadUUID + JSON({
    "action": "checkin",
    "id": "UUID", // new UUID for the agent to use
    "status": "success"
    })
)
From here on, the agent messages use the new UUID instead of the payload UUID. This allows Mythic to track a payload trying to make a new callback vs a callback based on a payload.
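A minimal sketch of consuming that plaintext response on the agent side (strip the 36-character UUID off the front of the base64-decoded blob, then parse the JSON):
import base64
import json

def parse_checkin_response(raw: str) -> str:
    decoded = base64.b64decode(raw)
    body = json.loads(decoded[36:])  # the first 36 bytes are the UUID
    if body.get("action") == "checkin" and body.get("status") == "success":
        return body["id"]            # the new UUID to use for all future messages
    raise ValueError("checkin failed")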
This method uses a static AES256 key for all communications. This will be different for each payload that's created. When creating payloads, you can generate encryption keys per c2 profile. To do so, the C2 Profile will have a parameter that has an attribute called crypto_type=True
. This will then signal to Mythic to either generate a new per-payload AES256_HMAC key or (if your agent is using a translation container) tell your agent's translation container to generate a new key. In the http
profile for example, this is a ChooseOne
option between aes256_hmac
or none
. The key passed down to your agent during build time will be the base64 encoded version of the 32Byte key.
The message sent will be of the form:
Base64( PayloadUUID + AES256(
    JSON({
        "action": "checkin", // required
        "uuid": "payload uuid", // uuid of the payload - required
        "ips": ["127.0.0.1"], // internal ip addresses - optional
        "os": "macOS 10.15", // os version - optional
        "user": "its-a-feature", // username of current user - optional
        "host": "spooky.local", // hostname of the computer - optional
        "pid": 4444, // pid of the current process - optional
        "architecture": "x64", // platform arch - optional
        "domain": "test", // domain of the host - optional
        "integrity_level": 3, // integrity level of the process - optional
        "external_ip": "8.8.8.8", // external ip if known - optional
        "encryption_key": "base64 of key", // encryption key - optional
        "decryption_key": "base64 of key", // decryption key - optional
        "process_name": "osascript", // name of the current process - optional
    })
    )
)
Here's an example message with encryption key of hfN9Nk29S8LsjrE9ffbT9KONue4uozk+/TVMyrxDvvM=
and message:
ODA4NDRkMTktOWJmYy00N2Y5LWI5YWYtYzZiOTE0NGMwZmRjnZ/FcM9jnfvzAv/RYFPAvkGH8+nWHAGqxcBXSlPvq8jbCRoZrVvSSZOxNwg15q3Etz9hEb7Qunv1Sm3/8SSzp+ne4fxFObunQWzHo+7tS68csvn/uxqhiyvD83KK66xtPyGzPFlK1ZXD+wxDbo2M3iSYPEp0m5w+rQhzm5aTA6Gk6p0KSXovYvnY3TsJtdgVPlY1cFt75UzTd0iIFU8hJ+KbhyMUjJujLA6++sVrXuFps2TbAi21Z5Hr/g3/S6HAk/RSedKyXEZ6Hbbgx3gESsHa/QuVjP9Lz+Y6H9I4DtgEunCHddvruJUPqYxFGT2m8WbGc6AH6+m2ucexym0yBUryuFWfsrW6QSfcGUaVb4DWrVHtqHcXctYRNb7pOf0T/P26pFt77fgii4j0RgzTGod9QDWhSfvte+ffUWjsWKyixUffjIffj45sgDS0tvtT2Rej8gFiIpAs9F/oOH/ps5pRQeflULd1eH0GKh5WUcDwsjUa89KeOcts44J+E5+7trQ3q2q9Uy8S96DM8Nr5QryokeCD7J0goKZQPdutVXzwIvI9RT7zCQpV8CrRTpQ63L9P9IhIpyT+TDvorQd0v/I/DGb6Ev/ZUAxbyAR0JLJGjYYv1NUno5Ru2Plv1wsn82YanVF1V2LE1ii6DC7jclrkgfKN9Qhli+hIiUwSJ3YvFTT1ybHf/Fyw4ZZ6PiOIZIWgcJmHUHx//1TNvlTrmABitRpwb75yuJ6ZfYnKv/BlrQtJ9nFveNeYKP/rL7uYwPq3RY9IJRK7DBOqy53qiiysRfhimraW//sXc6duBmASW0ijZ21HKaqdVr72PMIJpEWghIznzpzEVpJqYj0uR9K/bL5W6kfIP43dyDBzGAGd87VBIcUTsIJLWaOHGPVmO3OmmtIfW34ivsX1TElTVjyrmKneQ+OTWww0RbXZdE5swvucXqC8wTuwybgwQWVPCvrBTBlv3iXgkP4dOjbvr1YZS+HpdbT5OEhwIqnDCXIqItVYx9Hz5BdfcBFbXUXk0SIQzWQj9xw+olYYQMrxomNvjuGxBkOmhTJf6yUyRK1Mp8b992FPBzLVRexYFc5FZxrI8CJeS91R3C21gb3SZH4EdKk1S3mR40O427TGYG5Hcqzqz5n0M6+cWORxUp7LKT34kDwgzHQK1h5kEoaGvGB1QDtx8GLsbfk/BqBoV2oHGJP1HHbVgYMgBTrkYObXOKFW8WyaUWcB1p/dSmW5Ww==
The message response will be of the form:
Base64( PayloadUUID + AES256(
    JSON({
        "action": "checkin",
        "id": "callbackUUID", // callback UUID for the agent to use
        "status": "success"
    })
    )
)
Here's that sample message's response:
ODA4NDRkMTktOWJmYy00N2Y5LWI5YWYtYzZiOTE0NGMwZmRjyHcKh56jliiv87ReJE7QqK8edpLcV5cfywt8Lg1jWJzPc8b37zB9/mliG1HKH0dyF/jZqiSzUfSWEjgfhKa3DoLUqJOvnbpOYYsL3GvfWrps3/HQhZogSjwXnQmTehbADhXrOqA4622YMFjJbpykxdq7kpufn+12GDidwNybOlbg9ej8D/PpZVVdqL2RdASe
From here on, the agent messages use the new UUID instead of the payload UUID.
With that same example from above, the agent gets back a response of success with a new callback UUID. From there on, since it's using a static encryption key, we'll see a get tasking message like the following:
ODIyYmZmMWItYmRhMC00YmNlLWE0ZDMtYTZiZGIxMWI4YTVm3F56rkDEESX1GBAOQy3yaGiiAQABGkGxY66lNP7JS1rie8e7KbFHXwICOj67vvXpo5cik/9LYBqfQ8Ce5E3eUF1mExFX3EOzgAJd6Ey4fR93LoUTeMQQQZ3+ZMCnphaaDVbvJXCuWgoTMr/wO17H1k4zoAaMi+PHk0BXaaNyHMc=
Notice how the outer UUID is different, but the encryption key is still the same.
Padding: PKCS7, block size of 16
Mode: CBC
IV is 16 random bytes
Final message: IV + Ciphertext + HMAC
where the HMAC is HMAC-SHA256 with the same AES key over (IV + Ciphertext)
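A minimal sketch of that scheme using the Python cryptography package (the function name is illustrative; your agent will implement this in its own language, but the IV + Ciphertext + HMAC layout is what matters):
import base64
import hashlib
import hmac
import os

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_agent_message(uuid: str, aes_key_b64: str, plaintext_json: bytes) -> str:
    key = base64.b64decode(aes_key_b64)       # 32-byte AES256 key from the payload build
    iv = os.urandom(16)                       # 16 random bytes
    padder = padding.PKCS7(128).padder()      # PKCS7 with a 16-byte block size
    padded = padder.update(plaintext_json) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    mac = hmac.new(key, iv + ciphertext, hashlib.sha256).digest()  # HMAC over IV + Ciphertext
    return base64.b64encode(uuid.encode() + iv + ciphertext + mac).decode()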
There are two currently supported options for doing an encrypted key exchange in Mythic:
Client-side generated RSA keys
leveraged by the apfell-jxa and poseidon agents
Agent specific custom EKE
The agent starts running and generates a new 4096 bit Pub/Priv RSA key pair in memory. The agent then sends the following message to Mythic:
Base64( PayloadUUID + AES256(
    JSON({
        "action": "staging_rsa",
        "pub_key": "base64 of public RSA key",
        "session_id": "20char string", // unique session ID for this callback
    })
    )
)
where the AES key initially used is defined as the initial encryption value when generating the payload. When creating payloads, you can generate encryption keys per c2 profile. To do so, the C2 Profile will have a parameter that has an attribute called crypto_type=True
. This will then signal to Mythic to either generate a new per-payload AES256_HMAC key or (if your agent is using a translation container) tell your agent's translation container to generate a new key. In the http
profile for example, this is a ChooseOne
option between aes256_hmac
or none
.
When it says "base64 of public RSA key" you can do one of two things:
Base64 encode the entire PEM exported key (including the ---BEGIN and ---END blocks)
Use the already base64 encoded data that's in between the ---BEGIN and ---END blocks
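A sketch of generating that client-side key pair and the pub_key value with the Python cryptography package (this follows option 1 above and base64 encodes the entire PEM export):
import base64

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
pub_key = base64.b64encode(public_pem).decode()  # goes in the "pub_key" field of the staging_rsa message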
Here is an example of the first message using encryption key hfN9Nk29S8LsjrE9ffbT9KONue4uozk+/TVMyrxDvvM=
:
ODA4NDRkMTktOWJmYy00N2Y5LWI5YWYtYzZiOTE0NGMwZmRj8g4Anp52+vJpizSe8aymY4zNe2qz6xOMb1P69phayqfka57u2gdDBPzOKlkCEYjWlqIFr4Cpfa0krrXDTiaLLyT/wWKulFO8Z7h+/YqIi/6S8pW4hi+5Ht8543vJvlfuVMnK3YIL9ci/xJvkXoUPUI0Gb2fz2+AILD/+9mJrLx4OuJ/FAlVgSlfC4MOMJSOnOKX0D2Q2zsThJyfxzMs/sY9wUEOuYJMVZG5OZzupb7r7GPwZ0ZyeZrxDukR3r979E+2ZTSYWTDMv58PeyRUtLcaMhqPCZJTyDy4ZNJ04MxHbIQCYXsnlcybHczwMGUYw99/bqd1XVD9GKP5zmj3bP600+PbHg0G0N1qHhSrcagCQAIRka1ybSyYmlYILKYUwgmlVCmIT5ERmlXbJu9xqxzKCzfxYoBWpy6I72goDPpZDoK+LFsCIpQAJoRUA/u0KD61ujJCvr+gs/TRv9UIcd+AzR0r7m/ziawaoh6YdYJfPoJBEWi4eozNSaxrnQBOkCul3cOW/SZbZ/UVP84fThFlFLQdGiajmayoa0aLGDnKSh1l8pyX4Of1fajKX3XbY2bLALeU8Tw99E9daNSKhORqMAlmIrfvAHhDHs1vj3ZXj+rKl5We4JYSNSFOL9JzB5OlctV5bd+IuruFc3fLZVkdivjpGczz9iXh3p7Q3M5Xt6m+ZxUwuGa1otrJV55skF3Lns7p6owDw71weJH0h9JvvgoXOTtf1u9HI0ACBzHxThX+yMhmBBP0wU1Lngl6hF4o/1uwNk96fbAGLg0b9njziGC2OQ0D88kaqZ8jJ7C2XQyf4hetQCZCyYPSgtjMw1Yq1qRM0fHbU5cAkKvQmiJMeByHetctfDcs4SvnY28Tb1SfGCnxzxMJ+IQIbatKcQhwbhpq0iavsuG7NUVIPGBhB/8hw3PkkKDb3gqgoKuOD0y8zRK/+DrVbDT3DmzGrmJAkfFXqahjW/aaSNHmqdxxXoI/3Ft1FGocLYAj9bGclW4nzjarRpvtA8fUwMg/vX1RZqFVN15FTp8qsjzKsL8ld0aWlaGcRulfQr9oIKyC+P0EV3a0rMuBO2q2SuSWefyVRaMWCx0gY2Gtrm+bN3ddb+koyUsNdoI7lTY5HirQ3qG0unq28D6Clm8Cok91kMQEaGZ28pZvFZVs3iaLxiPxhmfj+UAQ95ncziJqGrbAiJgTVAmF0bUHAOSD2HORzVeGHxKgFsqSnJvK5B1NUCDIa1ok3sbGo8yg7tc/63pcUPBGcMRRQg3WBN8msj14fDoXAJg3MGG+qzomagdyRFQieMfFeOm1O8SU/a3U9uFwSqhwo4EsE5sIgKPTwN7OVEFbEzNA5tpr65lBxlzC4y1o9Juo25G5QXhuCuSN2frsu4QMlTnxi8P0HHed17hJjY8kaBG6Pm3h9HH098nxiIStZBWSYWQPSXy4AImrT+3vcovjLXPColbKd3M5wRog5WQ1j05O2NQGZwBFDktWioMqIDGCWECvHWgCvPiLmeeCwsWEncqnmrRCwLpI+DXxUVEX9oJFBZhlnfX2iaWeuDw==
This message causes the following response:
Base64( PayloadUUID + AES256(
JSON({
"action": "staging_rsa",
"uuid": "UUID", // new UUID for the next message
"session_key": Base64( RSAPub( new aes session key ) ),
"session_id": "same 20 char string back"
})
)
)
Here's that sample response from our above sample message:
ODA4NDRkMTktOWJmYy00N2Y5LWI5YWYtYzZiOTE0NGMwZmRjN6UJrODGAkQnyC4NyX2XVAzF9U95TS8xKaPdVd1MFVeWWDZE6f81wxuwzZBmZogjLzm+PNznszFvrSSXvRDiBy8ZpXUCirDOLlblL/LXMJ/aD8hzghvhf+q628NR9XX43IY2kNdQ2VMONDWpwwa1YLvrNe6YU2cCRmbE8mjrVhrj4j0t4tg1Kor6IXBhcZDTmFxBTHb19LSegwmjjr6Inmx8jCN0hnR77o5PsE6l4q+S9FPrlajMpsPKfs1fgdse0Qn6Fv/yJ3t6AyRAJJkxjtgRE64aHXm4cbSw73M9/QnCnzgFWVIlhwNKuHYfMo05XatXUOV76DXut9nkdzY288xQqRB7AV0mNkhR5BhuuUtHFJZ2/rgLl8Kp8B/9Izz4F7JZm4fyx4l1t0d0zAwx1lz32f4LUhX4cvhKm2qsICw7q34mcSgNYZVJu7KYgOZPb6D9GNtyTLWsnwK8mJmK+9xtVrtNM2ncdifcXVXhohVUAO+cWJUZyYQ2fu3Jx7zowAoUz/huSJ0SsvTYzifM8A+Ab6V2I2UE46TZwcnVBwsHCrVXLcDlHrzGLEynepq/RG5yNetx3nUka7hgX0sByWxTsiDwb6ks+TA9365GtKD351FTouEjraDbWYL6tOuqy/OtHVRenhuow7xH1vsUN/3bfCeaKCrow0SSxL12hNgDk/dbhQlV90F54EkYjFB7VKWjBlwngaF07akdQTgPhYy/bl94dwjHFzhWUDGWJBagzyQOHJo7UOrtN7qoWvbSRFwACd7yz7ugZmo7X6DVhcvIMFdeBA/nMmRSC4CbxSxvVYJWZwO4SFGYHDXLFmcpdM/MuPSXljMDZa+n4NqvWFpHV7bI0fAqut4oVv9Hd/X4q5gMzJnXhWgL+EwQbR3jSb0fR6iLK5jD3mRB4zIugkFHZFouhHJKKzjkMwCl8oPsZskN5INnFnqNz8+lBKcFd4Qh64CZAzLE5dZ62apb9GAG9DRPyxXqp1miCLKJsSENdPU80HQPQMxl01sehlC67RvFYM/8dc7VldDEP99Sa6l9/sJSfynnCA2lsPc5PsnSiCnnk9n85ZqDXy0daheEUA7DpJDO0pWl1f2U2edNKpXhn1oirsLOOSpaZbN/gFpVirfa0Vt8oe5FB2IHgw4B+K85eUuZsdFcGu+xhlRwE1pi34RspdBDeiWISyALXG0QRvRtviZmkX+gn41NrpmIFhOaDBCE3lrWzJjasHSr3H4kgRFrFy2qCDwtrdmVeh6Tpad8ZQN3DZQE6mtnpDgQLgT9/mRQr1/pyEn4CiRacIOvBu+xoiAhLrOcJoTAQI3pVYZaPDNQLyum9CTFXRmEEXTmRao0+qq/tCjYhF11b7u5BBB73gy+YLs1hT/RKNOFqBuQ/ywz+g0BOYJt1lvKU08FtVOJq6ddXhVtyvxQz7OriA1Giji1SayQNLIgVxUmEjhBkgccdD98YhTAMPBeeVru14cny/87Ohd8toqK9DW7MEYI4RyOYbVgpWSfdYAC6T2VbC/d87mb31vX4oCDOqZWL2nsvlybzWCObDi76hbPzhP2H6xcE81qo9QKGKG+2ZfNIwK/aHhPznO5fQ5Qcyuos/jzYVuwxau4S8vOnu7Wraivf8BoVZT+lLqC35bS3Xfhfz3yWqHcVJNjs9AlsC86HYwUfRPJDSrDFSMla7bhQ8fuJXAXKSxfjCspvaIu/5UQ7zFQl+jOEuCbmKcYhLJThEBXVJhTShYc/Euz4+I7wzmhBmfxueXlerB5Kg7tfQZDp8zsE+nccFpVJ5yTjKv+CgLFyRVNLpK9ISukKIKj3BXwhGjJEyY6A1BAwl6v4JnllLd+GLo4ZSWrBIkkednbImRATrFsHChkTkf4Crhj5Ihrc5objEx6sxC9ss3OvcagSbKZF8t/ojN5R1m7LyIXEInKuZktNwOY0tCRvCIEaObD+CRDLGx5sB6Jy8S+dLxmF3P2e9zs+/RG0qmPyKyuaSbkIPHB5mZh/GDbV+86n24QxpIk+udi0IDE2cgBBJBEhhEFF44+MX2E0DgY/f698RWpSNuZcWsOmpmcsk1vH9L2Mv3meairLxT3EptYLX2Tcg6RQDs+ZdFT5eoe3ld4NpHZgecr/RRy868jSPPNU5lL4DPsJSXNXz6cD1jvgqpLaOQCtq0fOreSgG1dL1F92lAeXkCf9P1UU4BeYST8Ar03/oZb+DlXrpzqJt9jE6zs+79ywV9ZSUwXoVMPaMre8p+anHf82qL6DVUMebzyI9JBEtMqsbEqrXuXgFOVj/GM1wqJjGXHb82BKtichi2QfxeS8vUxfxV+SBfJ7qT3i6jp5OC1na8xu+v6tME0ywlZd/LrOp2Rgqj0A58Jmw6HZ4b4SD4SOT2tyBkhIjyMrZiBvXAPwzesFdrYSA3hfj5VEJCHlr9dKo8q/emOmEb8womZ3qADTwzhKYu0fxGFY3vXqMgrpasHj6uoY6xrtNf1CBDCudq+dHQUclPx2PyRL7qcR+7f0ntbc1xEGgofhLdmFMiBskQSNSYnGZAEzOwdCFwiZlzwjqPltHge
The response is encrypted with the same initial AESPSK value as before. However, the session_key
value is encrypted with the public RSA key that was in the initial message and base64 encoded. The response also includes a new staging UUID for the agent to use. This is not the final UUID for the new callback, this is a temporary UUID to indicate that the next message will be encrypted with the new AES key.
The next message from the agent to Mythic is as follows:
Base64( tempUUID + AES256(
JSON({
"action": "checkin", // required
"uuid": "payload uuid", //uuid of the payload - required
"ips": ["127.0.0.1"], // internal ip addresses - optional
"os": "macOS 10.15", // os version - optional
"user": "its-a-feature", // username of current user - optional
"host": "spooky.local", // hostname of the computer - optional
"pid": 4444, // pid of the current process - optional
"architecture": "x64", // platform arch - optional
"domain": "test", // domain of the host - optional
"integrity_level": 3, // integrity level of the process - optional
"external_ip": "8.8.8.8", // external ip if known - optional
"encryption_key": "base64 of key", // encryption key - optional
"decryption_key": "base64 of key", // decryption key - optional
"process_name": "osascript", // name of the current process - optional
})
)
)
With our new temp UUID, the agent sends the following:
MzAzMzY1M2UtMjlkOC00ZDRjLTkxMDgtNTZjMTUxZjc3OWQ34uGQ1yO25qWInocvzRCTRenTlUB7u1oRScx+09PeZZfUrJtdfiEeMD6Xz/kdKUsZjr9LWhFdFcu/AmHzAqH3LmIuSOxnMexmlGT9ngU7NuMvSdRlYvVcsIPYMLbRptletLttCBIu7LhDbuifYFRNQ21TBDkpVgYTXoUk5+JzzTesGdWAhOwLlWvijpKM4nrPLx0fZagEHH4SycHRUuHlei8T7F1YFPm8RhxbONMAd1ckjDnPm5kdUPx0JwpuP975MV4cuHdez+mR6C/JP4B9yeP9hhnHmSFKq7OghnHQQ39prPQ9WArSJ+N8UJ+XOiACpjYon2Qyf0FqhRDdoojrY4sCRrF3Khw9mry+5j5WlHubsICpfi52X9QQMAGzUNeuUve6jMKLQwSclb+IzJ2KKUHtA0qcsdvqyQ2mvXxicAn0OinnP6Vk7ktqsn35UQi3+uuPP0PWf53Iji25/mRCO9MbEa8WQ7epon4H4Erc1yw+Dvfb61BoasPbzspFFVtcuqRkeUYUiIHkR9uzVmSgUJqk/R4cRFico7nK+Q0E6gL1Qyk4P7yPLm3E98wkvoB1Y108r9tKyAFjfZ13MrZpZCsdt7y335hrymeZpt0V6/+ug3BIY1brcxE74fAnO4H5fUan7kPnvQmf/SsO4B9nHHRR/2pC1KYZF+vGw1My6alFPyyGZpzBnrsqyouFqhqOjS3iv8Yv3JY/IxpgJ/T4tXEs1MvgI19lufsBqX1PK9PiB04Y8+Igld6+6RTAwF+vf4utJTp4I/eeH8b0KZ9ABWzvjrPwj1nf1hN2Q0FU4YYBoXzKZ5kE8yZvYtfJpSqDGbsGW2gFr0nC2DybQ2QweLQ1RJAcBlU462jwP4h1ohuHRL2cynGqKaJa7XeILg5Da0ubMTg5fdEPoXMxaNYHdwbzV04WE7zKCru2T16AoncwWxzwcTqy6bRONdRPNFY72HlXmLgQ3R0a1J8VhlAt+7Z5I7rnz8S67rKL9xD1B4mJhhSeCj4k8y1/AgQ9i7ZIeLrcjsUcOo7Hw5Dl4QOBncuOn74DHWnZHgxUDQtB41GBwbJyeoi6ryHdMjUBEOK34f5msUh+HMDCHkLZi+4oM84K3L5oCQrPh1+b6FH5oXGO8pOXi2wHCAtzfF9LF5MfEAa5Lt6ZJpk9fzqZ6fPbFB0R/X/lKLyp6VaMGk5CBpLvwmyNGZnjSXra1r212sqIqFdGu9sVzMJ5daVLsjFPCg==
This checkin data is the same as all the other methods of checking in, the key things here are that the tempUUID is the temp UUID specified in the other message, the inner uuid is the payload UUID, and the AES key used is the negotiated one. It's with this information that Mythic is able to track the new messages as belonging to the same staging sequence and confirm that all of the information was transmitted properly. The final response is as follows:
Base64( tempUUID + AES256(
JSON({
"action": "checkin",
"id": "UUID", // new UUID for the agent to use
"status": "success"
})
)
)
With our example, the agent gets back the following:
MzAzMzY1M2UtMjlkOC00ZDRjLTkxMDgtNTZjMTUxZjc3OWQ3Xgjq3vE9vduJliEd24jskrB+0gcqLc1ROCegwkvSjrqBLGFhrurNCsQnKIFYZ+YP6AGNjgzIlAXbLAPlsRAa6ge6BLQOsywskyHsE/2+65etgEH9plUzOdEv/nknwfdJKV7n7PHQsQ9w4nsV7j9DkeiuIQ+CnlBBRaPpCGYKo8m8keswNY7DssL1FE1t0DQ5
From here on, the agent messages use the new UUID instead of the payload UUID or temp UUID and continues to use the new negotiated AES key.
Lastly, here's an example after that exchange with the new callback UUID doing a get tasking request:
NTU4NWI2YzMtMmEzOC00ZGZlLWIwMDItNDI5ZjQ5Mzk4YzIxtpTh3cK5yOJ+RlbVJkeVLSRd8ExZbahaQoXg9AbW5SD+wdueD+tPhtB18kcJqy9s10qfsTx/8gMlcw5emRMVm+w9bnScW0BKARoldBlp+31La3/+HsqEKvYaEK9gGcBlEK7mDVqaJlYxgkwWRNGZs4i3eIHpKCc9Gyyz7dyaQUk=
Padding: PKCS7, block size of 16
Mode: CBC
IV is 16 random bytes
Final message: IV + Ciphertext + HMAC
where HMAC is SHA256 with the same AES key over (IV + Ciphertext)
PKCS1_OAEP
This is specifically OAEP with SHA1
4096 bits in size
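To tie those pieces together, here's a hedged Python sketch (again using the cryptography library) of the agent side of this exchange: generate the 4096-bit keypair in memory, build the staging_rsa message, and later unwrap the session_key with OAEP/SHA1. The variable names and the session_id generation are illustrative only:
import base64, os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding as asym_padding

# Generate the in-memory 4096-bit keypair used only for staging
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

staging_message = {
    "action": "staging_rsa",
    "pub_key": base64.b64encode(public_pem).decode(),  # option 1: base64 of the full PEM export
    "session_id": base64.b64encode(os.urandom(15)).decode(),  # any unique 20-character string
}
# staging_message would then be JSON-encoded, AES256-encrypted with the payload's initial key,
# prefixed with the PayloadUUID, and base64 encoded as shown above.

def unwrap_session_key(session_key_b64: str) -> bytes:
    # Mythic encrypts the new AES session key with our public key using OAEP + SHA1
    return private_key.decrypt(
        base64.b64decode(session_key_b64),
        asym_padding.OAEP(
            mgf=asym_padding.MGF1(algorithm=hashes.SHA1()),
            algorithm=hashes.SHA1(),
            label=None,
        ),
    )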
This section requires you to have a Translation Container associated with your payload type. The agent sends your own custom message to Mythic:
Base64( payloadUUID + customMessage )
Mythic looks up the information for the payloadUUID and calls your translation container's translate_from_c2_format
function. That function gets a dictionary of information like the following:
{
"enc_key": None or base64 of key if Mythic knows of one,
"dec_key": None or base64 of key if Mythic knows of one,
"uuid": uuid of the message,
"profile": name of the c2 profile,
"mythic_encrypts": True or False if Mythic thinks Mythic does the encryption or not,
"type": None or a keyword for the type of encryption. currently only option besides None is "AES256"
"message": base64 of the message that's currently in c2 specific format
}
To get the enc_key
, dec_key
, and type
, Mythic uses the payloadUUID to then look up information about the payload. It uses the profile
associated with the message to look up the C2 Profile parameters and look for any parameter with a crypto_type
set to true
. Mythic pulls this information and forwards it all to your translate_from_c2_format
function.
Ok, so that message gets your payloadUUID/crypto information and forwards it to your translation container, but then what?
Normally, when the translate_to_c2_format
function is called, you just translate from your own custom format to the standard JSON dictionary format that Mythic uses. No big deal. However, we're doing EKE here, so we need to do something a little different. Instead of sending back an action of checkin
, get_tasking
, post_response
, etc, we're going to generate an action of staging_translation
.
Mythic is able to do staging and EKE because it can save temporary pieces of information between agent messages. Mythic allows you to do this too if you generate a response like the following:
{
"action": "staging_translation",
"session_id": "some string session id you want to save",
"enc_key": the bytes of an encryption key for the next message,
"dec_key": the bytes of a decryption key for the next message,
"crypto_type": "what type of crypto you're doing",
"next_uuid": "the next UUID that'll be in front of the message",
"message": "the raw bytes of the message that'll go back to your agent"
}
Let's break down these pieces a bit:
action
- this must be "staging_translation". This is what indicates to Mythic once the message comes back from the translate_from_c2_format
function that this message is part of staging.
session_id
- this is some random character string you generate so that we can differentiate between multiple instances of the same payload trying to go through the EKE process at the same time.
enc_key
/ dec_key
- this is the raw bytes of the encryption/decryption keys you want for the next message. The next time you get the translate_from_c2_format
message for this instance of the payload going through staging, THESE are the keys you'll be provided.
crypto_type
- this is more for you than anything, but gives you insight into what the enc_key
and dec_key
are. For example, with the http
profile and the staging_rsa
, the crypto type is set to aes256_hmac
so that I know exactly what it is. If you're handling multiple kinds of encryption or staging, this is a helpful way to make sure you're able to keep track of everything.
next_uuid
- this is the next UUID that appears in front of your message (instead of the payloadUUID). This is how Mythic will be able to look up this staging information and provide it to you as part of the next translate_from_c2_format
function call.
message
- this is the actual raw bytes of the message you want to send back to your agent.
This process just repeats as many times as you want until you finally return from translate_from_c2_format
an actual checkin
message.
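As a rough sketch, a Python translate_from_c2_format handler following this flow might look like the following; is_staging_message, make_temp_uuid, build_staging_reply, and parse_custom_format are hypothetical helpers specific to your own custom wire format:
import base64, os

def translate_from_c2_format(request: dict) -> dict:
    # 'request' is the dictionary described earlier: uuid, enc_key/dec_key, type, profile, message...
    raw = base64.b64decode(request["message"])
    if is_staging_message(raw):  # hypothetical parser for your custom wire format
        new_key = os.urandom(32)  # negotiate a fresh AES256 key for the next message
        return {
            "action": "staging_translation",
            "session_id": base64.b64encode(os.urandom(15)).decode(),
            "enc_key": new_key,
            "dec_key": new_key,
            "crypto_type": "aes256_hmac",
            "next_uuid": make_temp_uuid(),  # hypothetical helper returning a new UUID string
            "message": build_staging_reply(new_key),  # hypothetical: raw bytes sent back to your agent
        }
    # Otherwise translate the custom format into Mythic's normal JSON (checkin, get_tasking, ...)
    return parse_custom_format(raw)  # hypothetical translator to the standard dictionary format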
What if there's other information you need/want to store though? There are three RPC endpoints you can hit that allow you to store arbitrary data as part of your build process, translation process, or custom c2 process:
create_agentstorage
- this takes a unique_id string value and the raw bytes data value. The unique_id
is something that you need to generate, but since you're in control of it, you can make sure it's what you need. This returns a dictionary:
{"unique_id": "your unique id", "data": "base64 of the data you supplied"}
get_agentstorage
- this takes the unique_id string value and returns a dictionary of the stored item:
{"unique_id": "your unique id", "data": "base64 of the data you supplied"}
delete_agentstorage
- this takes the unique_id string value and removes the entry from the database
This page describes the format for getting new tasking
The contents of the JSON message from the agent to Mythic when requesting tasking is as follows:
Base64( CallbackUUID + JSON(
{
"action": "get_tasking",
"tasking_size": 1, //indicate the maximum number of tasks you want back
//if passing on messages for other agents, include the following
"delegates": [
{"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"},
{"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"}
],
"get_delegate_tasks": true, //optional, defaults to true
}
)
)
There are two things to note here:
tasking_size
- This parameter defaults to one, but allows an agent to request how many tasks it wants to get back at once. If the agent specifies -1
as this value, then Mythic will return all of the tasking it has for that callback.
delegates
- This parameter is not required, but allows for an agent to forward on messages from other callbacks. This is the peer-to-peer scenario where inner messages are passed externally by the egress point. Each of these agentMessage values is a self-contained "Agent Message" and the c2_profile
indicates the name of the C2 Profile used to connect the two agents. This allows Mythic to properly decode/translate the messages even for nested messages.
get_delegate_tasks
- This is an optional parameter. If you don't include it, it's assumed to be True
. This indicates whether or not this get_tasking
request should also check for tasks that belong to callbacks that are reachable from this callback. So, if agentA has a route to agentB, agentB has a task in the submitted
state, and agentA issues a get_tasking
, agentA can decide if it wants just its own tasking or if it also wants to pick up agentB's task as well.
Why does this matter? This is helpful if your linked agents issue their own periodic get_tasking
messages rather than simply waiting for tasking to come to them. This way the parent callback (agentA in this case) doesn't accidentally consume and toss aside the task for agentB; instead, agentB's own periodic get_tasking
message has to make its way up to Mythic for the task to be fetched.
Mythic responds with the following message format for get_tasking requests:
Base64( CallbackUUID + JSON(
{
"action": "get_tasking",
"tasks": [
{
"command": "command name",
"parameters": "command param string",
"timestamp": 1578706611.324671, //timestamp provided to help with ordering
"id": "task uuid",
}
],
//if we were passing messages on behalf of other agents
"delegates": [
{"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"},
{"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"}
]
}
)
)
There are a few things to note here:
tasks
- This parameter is always a list, but contains between 0 and tasking_size
number of entries.
parameters
- this encapsulates the parameters for the task. If a command has parameters like: {"remote_path": "/users/desktop/test.png", "file_id": "uuid_here"}
, then the parameters
field will have that JSON blob as a STRING value (i.e., the command is responsible for parsing it out).
delegates
- This parameter contains any responses for the messages that came through in the first message.
The main difference between submitting a response with a post_response
and submitting responses with get_tasking
is that in a get_tasking
message with a responses
key, you'll also get back additional tasking that's available. With a post_response
message and a responses
key, you won't get back additional tasking that's ready for your agent. You can still get socks
, rpfwd
, interact
, and delegates
messages as part of your message back from Mythic, but you won't have a tasks
key.
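For a concrete (if simplified) picture, here's a Python sketch of building a get_tasking message and walking the tasks that come back; it assumes an unencrypted payload so the body is just Base64(CallbackUUID + JSON), and the UUID shown is a placeholder:
import base64, json

CALLBACK_UUID = "11111111-2222-3333-4444-555555555555"  # placeholder callback UUID

def build_get_tasking(tasking_size: int = 1) -> str:
    msg = {"action": "get_tasking", "tasking_size": tasking_size}
    # With aes256_hmac the JSON would be encrypted first, as shown in the earlier sections
    return base64.b64encode(CALLBACK_UUID.encode() + json.dumps(msg).encode()).decode()

def handle_get_tasking_response(blob: str) -> list:
    reply = json.loads(base64.b64decode(blob)[36:])  # strip the 36-character UUID prefix
    for task in reply.get("tasks", []):
        # 'parameters' arrives as a string; the command itself parses any JSON inside it
        print(task["id"], task["command"], task["parameters"])
    return reply.get("tasks", [])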
The contents of the JSON message from the agent to Mythic when posting tasking responses is as follows:
Base64( CallbackUUID + JSON(
{
"action": "post_response",
"responses": [
{
"task_id": "uuid of task",
... response message (see below)
},
{
"task_id": "uuid of task",
... response message (see below)
}
], //if we were passing messages on behalf of other agents
"delegates": [
{"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"},
{"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"}
]
}
)
)
There are two things to note here:
responses
- This parameter is a list of all the responses for each tasking.
For each element in the responses array, we have a dictionary of information about the response. We also have a task_id
field to indicate which task this response is for. After that though, comes the actual response output from the task.
If you don't want to hook a certain feature (like sending keystrokes, downloading files, creating artifacts, etc), but just want to return output to the user, the response section can be as simple as:
{"task_id": "uuid of task", "user_output": "output of task here"}
Each response style is described in Hooking Features. The format described in each of the Hooking features sections replaces the ... response message
piece above.
To continue adding to that JSON response, you can indicate that a command is finished by adding "completed": true
or indicate that there was an error with "status": "error"
.
delegates
- This parameter is not required, but allows for an agent to forward on messages from other callbacks. This is the peer-to-peer scenario where inner messages are passed externally by the egress point. Each of these messages is a self-contained "Agent Message".
Mythic responds with the following message format for post_response requests:
Base64( CallbackUUID + JSON(
{
"action": "post_response",
"responses": [
{
"task_id": UUID,
"status": "success" or "error",
"error": 'error message if it exists'
}
],
//if we were passing messages on behalf of other agents
"delegates": [
{"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"},
{"message": agentMessage, "c2_profile": "ProfileName", "uuid": "uuid here"}
]
}
)
)
If your initial responses
array to Mythic has something improperly formatted and Mythic can't deserialize it into GoLang structs, then Mythic will simply set the responses
array going back as empty. So, you can't always check for a matching response array entry for each response you send to Mythic. In this case, Mythic can't respond back with task_id
in this response array because it failed to deserialize it completely.
There are two things to note here:
responses
- This parameter is always a list and contains a success or error + error message for each task that was responded to.
delegates
- This parameter contains any responses for the messages that came through in the first message
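Here's a similarly hedged Python sketch of the post_response side: build the message, then check the per-task statuses in Mythic's reply. The helper names and UUIDs are placeholders:
import base64, json

def build_post_response(callback_uuid: str, task_id: str, output: str) -> str:
    msg = {
        "action": "post_response",
        "responses": [
            {"task_id": task_id, "user_output": output, "completed": True},
        ],
    }
    return base64.b64encode(callback_uuid.encode() + json.dumps(msg).encode()).decode()

def check_post_response_reply(blob: str) -> None:
    reply = json.loads(base64.b64decode(blob)[36:])
    # The responses array may come back empty if Mythic couldn't deserialize what we sent,
    # so don't assume every task_id we submitted shows up here
    for entry in reply.get("responses", []):
        if entry.get("status") == "error":
            print(f"task {entry['task_id']} failed: {entry.get('error')}")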
How does SOCKS work within Mythic
Socks provides a way to negotiate and transmit TCP connections through a proxy (https://en.wikipedia.org/wiki/SOCKS). This allows operators to proxy network tools through the Mythic server and out through supported agents. SOCKS5 allows a lot more options for authentication compared to SOCKS4; however, Mythic currently doesn't leverage the authenticated components, so it's important that if you open up this port on your Mythic server that you lock it down.
Opened SOCKS5 ports in Mythic do not leverage additional authentication, so MAKE SURE YOU LOCK DOWN YOUR PORTS.
Without going into all the details of the SOCKS5 protocol, agents transmit dictionary messages that look like the following:
{
"exit": True,
"server_id": 1234567,
"data": ""
}
These messages contain three components:
exit
- boolean True or False. This indicates to either Mythic or your Agent that the connection has been terminated from one end and should be closed on the other end (after sending data
). Because Mythic and 2 HTTP connections sit between the actual tool you're trying to proxy and the agent that makes those requests on your tool's behalf, we need this sort of flag to indicate that a TCP connection has closed on one side.
server_id
- uint32. This number is how Mythic and the agent can track individual connections. Every new connection from a proxied tool (like through proxychains) will generate a new server_id
that Mythic will send with data to the Agent.
data
- base64 string. This is the actual bytes that the proxied tool is trying to send.
In Python translation containers, if exit
is True, then data
can be None
These SOCKS messages are passed around as an array of dictionaries in get_tasking
and post_response
messages via a (added if needed) socks
key:
{
"action": "get_tasking",
"tasking_size": 1,
"socks": [
{
"exit": False,
"server_id": 2,
"data": "base64 string"
},{
"exit": True,
"server_id": 1,
"data": ""
}
],
"delegates": []
}
or in the post_response
messages:
{
"action": "post_response",
"responses": [
{
"user_output": "blah",
"task_id": "uuid here",
"completed": true
}
],
"socks": [
{
"exit": False,
"server_id": 2,
"data": "base64 string"
},{
"exit": True,
"server_id": 1,
"data": ""
}
],
"delegates": []
Notice that they're at the same level of "action" in these dictionaries - that's because they're not tied to any specific task, the same goes for delegate messages.
For the most part, the message processing is pretty straight forward:
Get a new SOCKS array
Get the first element from the list
If we know the server_id
, then we can forward the message off to the appropriate thread or channel to continue processing. If we've never seen the server_id before, then it's likely a new connection that opened up from an operator starting a new tool through proxychains, so we need to handle that appropriately.
For new connections, the first message is always a SOCKS Request message with encoded data for IP:PORT to connect to. This means that SOCKS authentication is already done. There's also a very specific message that gets sent back as a response to this. This small negotiation piece isn't something that Mythic created, it's just part of the SOCKS protocol to ensure that a tool like proxychains gets confirmation the agent was able to reach the desired IP:PORT
For existing connections, the agent looks at if exit
is True or not. If exit
is True, then the agent should close its corresponding TCP connection and clean up those resources. If it's not exit, then the agent should base64 decode the data
field and forward those bytes through the existing TCP connection.
The agent should also be streaming data back from its open TCP connections to Mythic in its get_tasking
and post_response
messages.
That's it really. The hard part is making sure that you don't exhaust all of the system resources by creating too many threads, running into deadlocks, or any number of other potential issues.
While not perfect, the poseidon agent has a generally working implementation for Mythic: https://github.com/MythicAgents/poseidon/blob/master/Payload_Type/poseidon/poseidon/agent_code/socks/socks.go
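Along the same lines, here's a stripped-down Python sketch of the agent-side dispatch described above; open_connection_from_socks_request is a hypothetical helper that parses the SOCKS request, connects to the requested IP:PORT, and queues the SOCKS reply back to Mythic:
import base64, socket

connections = {}  # server_id -> open TCP socket

def handle_socks_messages(socks_msgs: list) -> None:
    for msg in socks_msgs:
        server_id = msg["server_id"]
        data = base64.b64decode(msg["data"] or "")
        if msg["exit"]:
            # Connection closed on Mythic's side - tear down ours too
            conn = connections.pop(server_id, None)
            if conn:
                conn.close()
            continue
        if server_id not in connections:
            # New server_id: the first message is the SOCKS request with the target IP:PORT
            connections[server_id] = open_connection_from_socks_request(server_id, data)
        else:
            connections[server_id].sendall(data)
Reading data back from each open socket (and queuing it into the next get_tasking or post_response message) would happen in separate threads or channels, which is where most of the resource-management pain mentioned above comes from.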
How does Reverse Port Forward work within Mythic
Reverse port forwards provide a way to tunnel incoming connections on one port out to another IP:Port somewhere else. It normally provides a way to expose an internal service to a network that would otherwise not be able to directly access it.
Agents transmit dictionary messages that look like the following:
{
"exit": True,
"server_id": 1234567,
"data": "",
"port": 80, // optional, but if you support multiple rpfwd ports within a single callback, you need this so Mythic knows which rpfwd you're getting data from
}
These messages contain four components:
exit
- boolean True or False. This indicates to either Mythic or your Agent that the connection has been terminated from one end and should be closed on the other end (after sending data
). Because Mythic and 2 HTTP connections sit between the actual tool you're trying to proxy and the agent that makes those requests on your tool's behalf, we need this sort of flag to indicate that a TCP connection has closed on one side.
server_id
- unsigned int32. This number is how Mythic and the agent can track individual connections. Every new connection will generate a new server_id
. Unlike SOCKS where Mythic is getting the initial connection, the agent is getting the initial connection in a reverse port forward. In this case, the agent needs to generate this random uint32 value to track connections.
data
- base64 string. This is the actual bytes that the proxied tool is trying to send.
port
- an optional uint32 value that specifies the port you're listening on within your agent. If your agent allows for multiple rpfwd commands within a single callback, then you need to specify this port
so that Mythic knows which rpfwd command this data is associated with and can redirect it out to the appropriate remote IP:Port combination. This port
value is specifically the local port your agent is listening on, not the port for the remote connection.
In Python translation containers, if exit
is True, then data
can be None
These RPFWD messages are passed around as an array of dictionaries in get_tasking
and post_response
messages via a (added if needed) rpfwd
key:
{
"action": "get_tasking",
"tasking_size": 1,
"rpfwd": [
{
"exit": False,
"server_id": 2,
"data": "base64 string",
"port": 80,
},{
"exit": True,
"server_id": 1,
"data": "",
"port": 445,
}
],
"delegates": []
}
or in the post_response
messages:
{
"action": "post_response",
"responses": [
{
"user_output": "blah",
"task_id": "uuid here",
"completed": true
}
],
"rpfwd": [
{
"exit": False,
"server_id": 2,
"data": "base64 string",
"port": 80,
},{
"exit": True,
"server_id": 1,
"data": "",
"port": 80,
}
],
"delegates": []
Notice that they're at the same level of "action" in these dictionaries - that's because they're not tied to any specific task, the same goes for delegate messages.
For the most part, the message processing is pretty straight forward:
Agent opens port X on the target host where it's running
ServerA makes a connection to PortX
Agent accepts the connection, generates a new uint32 server_id, and sends any data received to Mythic via rpfwd
key. If the agent is tracking multiple ports, then it should also send the port the connection was received on with the message.
Mythic looks up the server_id
(and optionally port) for that callback. If Mythic has seen this server_id before, it can pass the data off to the appropriate thread or channel to continue processing. If Mythic has never seen the server_id before, then it's likely a new connection that just opened up, so it needs to be handled appropriately: Mythic makes a new connection out to the RemoteIP:RemotePort specified when starting the rpfwd
session. Mythic forwards the data along and waits for data back. Any data received is sent back via the rpfwd
key the next time the agent checks in.
For existing connections, the agent looks at if exit
is True or not. If exit
is True, then the agent should close its corresponding TCP connection and clean up those resources. If it's not exit, then the agent should base64 decode the data
field and forward those bytes through the existing TCP connection.
The agent should also be streaming data back from its open TCP connections to Mythic in its get_tasking
and post_response
messages.
That's it really. The hard part is making sure that you don't exhaust all of the system resources by creating too many threads, running into deadlocks, or any number of other potential issues.
While not perfect, the poseidon agent has a generally working implementation for Mythic: https://github.com/MythicAgents/poseidon/blob/master/Payload_Type/poseidon/poseidon/agent_code/rpfwd/rpfwd.go
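A minimal Python sketch of the agent's side of that flow, assuming a send_to_mythic callback (hypothetical) that queues rpfwd dictionaries into the next get_tasking or post_response message:
import base64, random, socket, threading

def rpfwd_listener(local_port: int, send_to_mythic) -> None:
    # Listen on the agent side; every accepted connection gets its own random server_id
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", local_port))
    listener.listen(5)
    while True:
        conn, _ = listener.accept()
        server_id = random.getrandbits(32)  # agent-generated uint32 tracking value
        threading.Thread(target=pump, args=(conn, server_id, local_port, send_to_mythic), daemon=True).start()

def pump(conn, server_id, local_port, send_to_mythic):
    while True:
        data = conn.recv(4096)
        send_to_mythic({
            "exit": len(data) == 0,  # an empty read means the client hung up
            "server_id": server_id,
            "data": base64.b64encode(data).decode(),
            "port": local_port,  # which rpfwd listener this connection belongs to
        })
        if not data:
            conn.close()
            break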
Delegate messages are messages that an agent is forwarding on behalf of another agent. The use case here is an agent forwarding peer-to-peer messages for a linked agent. Mythic supports this by having an optional delegates
array in messages. An example of what this looks like is in the next section, but this delegates
array can be part of any message from an agent to mythic.
When sending delegate messages, there's a simple standard format:
{
"action": "some action here",
"delegates": [
{
"message": agentMessage,
"uuid": UUID,
"c2_profile": "ProfileName"
}
]
}
Within a delegates array are a series of JSON dictionaries:
UUID
- This field is some UUID identifier used by the agent to track where a message came from and where it should go back to. Ideally this is the same as the UUID for the callback on the other end of the connection, but can be any value. If the agent uses a value that does not match up with the UUID of the agent on the other end, Mythic will indicate that in the response. This allows the middle-man agent to generate some UUID identifier as needed upon first connection and then learn of and use the agent's real UUID once the messages start flowing.
message
- this is the actual message that the agent is transmitting on behalf of the other agent
c2_profile
- This field indicates the name of the C2 Profile associated with the connection between this agent and the delegated agent. This allows Mythic to know how these two agents are talking to each other when generating and tracking connections.
{
"action": "some action here",
"delegates": [
{
"message": agentMessage,
"uuid": "same UUID as the message agent -> mythic",
"new_uuid": UUID that mythic uses
}
]
}
The new_uuid
field indicates that the uuid
field the agent sent doesn't match up with the UUID in the associated message. If the agent uses the right UUID with the agentMessage then the response would be:
{
"action": "some action here",
"delegates": [
{
"message": agentMessage,
"uuid": "same UUID as the message agent -> mythic"
}
]
}
Why do you care and why is this important? This allows an agent to randomly generate its own UUID for tracking connections with other agents and provides a mechanism for Mythic to reveal the right UUID for the callback on the other end. This implicitly gives the agent the right UUID to use if it needs to announce that it lost the route to the callback on the other end. If Mythic didn't correct the agent's use of UUID, then when the agent loses connection to the P2P agent, it wouldn't be able to properly indicate it to Mythic.
Ok, so let's walk through an example:
agentA is an egress agent speaking HTTP to Mythic. agentA sends messages directly to Mythic, such as the {"action": "get_tasking", "tasking_size": 1}
. All is well.
somehow agentB gets deployed and executed, this agent (for sake of example) opens a port on its host (same host as agentA or another one, doesn't matter)
agentA connects to agentB (or agentB connects to agentA if agentA opened the port and agentB did a connection to it) over this new P2P protocol (smb, tcp, etc)
agentB sends to agentA a staging message if it's doing EKE, a checkin message if it's already an established callback (like the example of re-linking to a callback), or a checkin message if it's doing like a static PSK or plaintext. The format of this message is exactly the same as if it wasn't going through agentA
agentA gets this message, and is like "new connection, who dis?", so it makes a random UUID to identify whomever is on the other end of the line and forwards that message off to Mythic with the next message agentA would be sending anyway. So, if the next message that agentA would send to Mythic is another get tasking, then it would look like: {"action": "get_tasking", "tasking_size": 1, "delegates": [ {"message": agentB's message, "c2_profile": "Name of the profile we're using to communicate", "uuid": "myRandomUUID"} ] }
. That's the message agentA sends to Mythic.
Mythic gets the message, processes the get_tasking for agentA, then sees it has delegate
messages (i.e. messages that it's passing along on behalf of other agents). So Mythic recursively processes each of the messages in this array. Because that message
value is the same as if agentB was talking directly to Mythic, Mythic can parse out the right UUIDs and information. The c2_profile
piece allows Mythic to look up any c2-specific encryption information to pass along for the message. Once Mythic is done processing the message, it sends a response back to agentA like: {"action": "get_tasking", "tasks": [ normal array of tasks ], "delegates": [ {"message": "response back to what agentB sent", "uuid": "myRandomUUID that agentA generated", "new_uuid": "the actual UUID that Mythic uses for agentB"} ] }
. If this is the first time that Mythic has seen a delegate from agentB through agentA, then Mythic knows that there's a route between the two and via which C2 profile, so it can automatically display that in the UI
agentA gets the response back, processes its get_tasking like normal, sees the delegates
array and loops through those messages. It sees "oh, it's myRandomUUID, I know that guy, let me forward it along" and also sees that it's been calling agentB by the wrong name; it now knows agentB's real name according to Mythic. This is important because if agentA and agentB ever lose connection, agentA can report back to Mythic that it can no longer speak to agentB, using the right UUID that Mythic knows.
This same process repeats and just keeps nesting for agentC that would send a message to agentB that would send the message to agentA that sends it to Mythic. agentA can't actually decrypt the messages between agentB and Mythic, but it doesn't need to. It just has to track that connection and shuttle messages around.
Now that there's a "route" between the two agents that Mythic is aware of, a few things happen:
when agentA now does a get_tasking
message (with or without a delegate message from agentB), if mythic sees a tasking for agentB, Mythic will automatically add in the same delegates
message that we saw before and send it back with agentA so that agentA can forward it to agentB. That's important - agentB never had to ask for tasking, Mythic automatically gave it to agentA because it knew there was a route between the two agents.
if you DON"T want that to happen though - if you want agentB to keep issuing get_tasking requests through agentA with periodic beaconing, then in agentA's get_tasking you can add get_delegate_tasks
to False. i.e ({"action": "get_tasking", "tasking_size": 1, "get_delegate_tasks": false}
) then even if there are tasks for agentB, Mythic WILL NOT send them along with agentA. agentB will have to ask for them directly
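Putting that together, here's a hedged Python sketch of how an egress agent might wrap a peer's message into its own traffic and learn the peer's real UUID from the new_uuid correction; forward_to_peer and the "smb" profile name are example placeholders:
uuid_map = {}  # our placeholder UUID -> the real UUID Mythic told us to use

def wrap_delegate(next_message: dict, peer_blob: str, placeholder_uuid: str) -> dict:
    # Attach the peer's opaque message to whatever we were about to send to Mythic anyway
    next_message.setdefault("delegates", []).append({
        "message": peer_blob,
        "c2_profile": "smb",  # example: the P2P profile linking us to the peer
        "uuid": uuid_map.get(placeholder_uuid, placeholder_uuid),
    })
    return next_message

def handle_delegate_replies(reply: dict, forward_to_peer) -> None:
    for d in reply.get("delegates", []):
        if "new_uuid" in d:
            # Mythic corrected us: remember the peer's real UUID for future messages
            uuid_map[d["uuid"]] = d["new_uuid"]
        forward_to_peer(d["uuid"], d["message"])  # hypothetical: send back over the P2P link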
What happens when agentA and agentB can no longer communicate though? agentA needs to send a message back to Mythic to indicate that the connection is lost. This can be done with the edges key. Using all of the information agentA has about the connection, it can announce that Mythic should remove an edge between the two callbacks. This can either happen as part of a response to a tasking (such as an explicit task to unlink two agents) or just something that gets noticed (like a computer rebooted and now the connection is lost). In the first case, we see the example below as part of a normal post_response message:
{
"user_output": "some ouser output here",
"task_id": "uuid of task here",
"edges": [
{
"source": "uuid of source callback",
"destination": "uuid of destination callback",
"action": "remove"
"c2_profile": "name of the c2 profile used in this connection"
}
]
}
If this wasn't part of some task, then there would be no task_id to use. In this case, we can add the same edges
structure at a higher point in the message:
{
"action": "get_tasking" (could be "post_response", "upload", etc)
"edges": [
{
"source": "uuid of source callback",
"destination": "uuid of destination callback",
"action": "add" or "remove"
"c2_profile": "name of the c2 profile used in this connection"
}
]
}
How is an agent supposed to work with a Peer-to-peer (P2P) profile? It's pretty simple and largely the same as working with a Push C2 egress connection:
If a payload is executed (it's not a callback yet), then make a connection to your designated P2P method (named pipes, tcp ports, etc). Once a connection is established, start your normal encrypted key exchange or checkin process.
If an existing callback loses connection for some reason, then make a connection to your designated P2P method (named pipes, tcp ports, etc). Once a connection is established, send your checkin message again to inform Mythic of your existence
At this point, just wait for messages to come to you (no need to do a get_tasking poll) and as you get any data (socks, edges, alerts, responses, etc) just send them out through your p2p connection.
Messages for interactive tasking have three pieces:
{
"task_id": "UUID of task",
"data": "base64 of data",
"message_type": int enum of types
}
If you have a command called pty
and issue it, then when that task gets sent to your agent, you have your normal tasking structure. That tasking structure includes an id for the task that's a UUID. All follow-on interactive input for that task uses the same UUID (task_id
in the above message).
The data
is pretty straight forward - it's the base64 of the raw data you're trying to send to/from this interactive task. The message_type
field is an enum of int
. It might seem complicated at first, but it really boils down to providing a way to support sending control codes through the web UI, scripting, and through an opened port.
const (
Input = 0
Output = 1
Error = 2
Exit = 3
Escape = 4 //^[ 0x1B
CtrlA = 5 //^A - 0x01 - start
CtrlB = 6 //^B - 0x02 - back
CtrlC = 7 //^C - 0x03 - interrupt process
CtrlD = 8 //^D - 0x04 - delete (exit if nothing sitting on input)
CtrlE = 9 //^E - 0x05 - end
CtrlF = 10 //^F - 0x06 - forward
CtrlG = 11 //^G - 0x07 - cancel search
Backspace = 12 //^H - 0x08 - backspace
Tab = 13 //^I - 0x09 - tab
CtrlK = 14 //^K - 0x0B - kill line forwards
CtrlL = 15 //^L - 0x0C - clear screen
CtrlN = 16 //^N - 0x0E - next history
CtrlP = 17 //^P - 0x10 - previous history
CtrlQ = 18 //^Q - 0x11 - unpause output
CtrlR = 19 //^R - 0x12 - search history
CtrlS = 20 //^S - 0x13 - pause output
CtrlU = 21 //^U - 0x15 - kill line backwards
CtrlW = 22 //^W - 0x17 - kill word backwards
CtrlY = 23 //^Y - 0x19 - yank
CtrlZ = 24 //^Z - 0x1A - suspend process
)
When something is coming from Mythic -> Agent, you'll typically see Input
, Exit
, or Escape
-> CtrlZ
. When sending data back from Agent -> Mythic, you'll set either Output
or Error
. This enum example also includes what the user typically sees in a terminal (ex: ^C
when you type CtrlC) along with the hex value that's normally sent. Having data split out this way can be helpful depending on what you're trying to do. Consider the case of trying to do a tab-complete
. You want to send down data and the tab character (in that order). For other things though, like escape
, you might want to send down escape
and then data (in that order for things like control sequences).
You'll probably notice that some letters are missing from the control codes above. There's no need to send along a special control code for \n
or \r
because we can send those down as part of our input. Similarly, clearing the screen isn't useful through the web UI because it doesn't quite match up as a full TTY.
This data is located in a similar way to SOCKS and RPFWD:
{
"action": "some action",
"interactive": [ {"task_id": UUID, "data": "base64", "message_type": 0 } ]
}
the interactive
keyword takes an array of these sorts of messages to/from the agent. This keyword is at the same level in the JSON structure as action
, socks
, responses
, etc.
When sending responses back for interactive tasking, you send back an array in the interactive
keyword just like you got the data in the first place.
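As a small Python sketch of that flow (write_to_pty, close_pty, and the queuing of the returned dictionary into your agent's next message are hypothetical hooks into your own agent):
import base64

# Matches the enum above: 0 = Input, 1 = Output, 2 = Error, 3 = Exit
INPUT, OUTPUT, ERROR, EXIT = 0, 1, 2, 3

def handle_interactive(messages: list, write_to_pty, close_pty) -> None:
    for msg in messages:
        data = base64.b64decode(msg["data"])
        if msg["message_type"] == INPUT:
            write_to_pty(msg["task_id"], data)  # hypothetical: feed bytes to the pty for this task
        elif msg["message_type"] == EXIT:
            close_pty(msg["task_id"])  # hypothetical: tear down the pty for this task

def pty_output_entry(task_id: str, data: bytes, is_error: bool = False) -> dict:
    # Goes back to Mythic inside the top-level "interactive" array of the next message
    return {
        "task_id": task_id,
        "data": base64.b64encode(data).decode(),
        "message_type": ERROR if is_error else OUTPUT,
    }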
MythicRPC provides a way to execute functions against Mythic and Mythic's database programmatically from within your command's tasking files via RabbitMQ.
MythicRPC lives as part of the mythic_container
PyPi package (and github.com/MythicMeta/MythicContainer
GoLang package) that's included in all of the itsafeaturemythic
Docker images. This PyPi package uses RabbitMQ's RPC functionality to execute functions that exist within Mythic.
The full list of commands can be found here: https://github.com/MythicMeta/MythicContainerPyPi/tree/main/mythic_container/MythicGoRPC for Python and https://github.com/MythicMeta/MythicContainer/tree/main/mythicrpc for GoLang.
Browser scripting is a way for you, the agent developer or the operator, to dynamically adjust the output that an agent reports back. You can turn data into tables, download buttons, screenshot viewers, and even buttons for additional tasking.
As a developer, your browser scripts live in a folder called browser_scripts
in your mythic
folder. These are simply JavaScript files that you then reference from within your command files such as:
browser_script = BrowserScript(script_name="download_new", author="@its_a_feature_", for_new_ui=True)
As an operator, they exist in the web interface under "Operations" -> "Browser Scripts". You can enable and disable these for yourself at any time, and you can even modify or create new ones from the web interface as well. If you want these changes to be persistent or available for a new Mythic instance, you need to save it off to disk and reference it via the above method.
Browser Scripts are JavaScript files that take in a reference to the task and an array of the responses available, then return a Dictionary representing what you'd like Mythic to render on your behalf. This is pretty easy if your agent returns structured output that you can then parse and process. If you return unstructured output, you can still manipulate it, but it will just be harder for you.
The most basic thing you can do is return plaintext for Mythic to display for you. Let's take an example that simply aggregates all of the response data and asks Mythic to display it:
function(task, responses){
const combined = responses.reduce( (prev, cur) => {
return prev + cur;
}, "");
return {'plaintext': combined};
}
This function reduces the Array called responses
and aggregates all of the responses into one string called combined
then asks Mythic to render it via: {'plaintext': combined}
.
Plaintext is also used when you don't have a browserscript set for a command in general or when you toggle it off. This uses the react-ace text editor to present the data. This view will also try to parse the output as JSON and, if it can, will re-render the output in pretty print format.
A slightly more complex example is to render a button for Mythic to display a screenshot.
function(task, responses){
if(task.status.toLowerCase().includes("error")){
const combined = responses.reduce( (prev, cur) => {
return prev + cur;
}, "");
return {'plaintext': combined};
}else if(task.completed){
if(responses.length > 0){
let data = JSON.parse(responses[0]);
return {"screenshot":[{
"agent_file_id": [data["agent_file_id"]],
"variant": "contained",
"name": "View Screenshot",
"hoverText": "View screenshot in modal"
}]};
}else{
return {"plaintext": "No data to display..."}
}
}else if(task.status === "processed"){
// this means we're still downloading
if(responses.length > 0){
let data = JSON.parse(responses[0]);
return {"screenshot":[{
"agent_file_id": [data["agent_file_id"]],
"variant": "contained",
"name": "View Partial Screenshot",
"hoverText": "View partial screenshot in modal"
}]};
}
return {"plaintext": "No data yet..."}
}else{
// this means we shouldn't have any output
return {"plaintext": "Not response yet from agent..."}
}
}
This function does a few things:
If the task status includes the word "error", then we don't want to process the response like our standard structured output because we returned some sort of error instead. In this case, we'll do the same thing we did in the first step and simply return all of the output as plaintext
.
If the task is completed and isn't an error, then we can verify that we have our responses that we expect. In this case, we simply expect a single response with some of our data in it. The one piece of information that the browser script needs to render a screenshot is the agent_file_id
or file_id
of the screenshot you're trying to render. If you want to return this information from the agent, then this will be the same file_id
that Mythic returns to you for transferring the file. If you display this information via process_response
output from your agent, then you're likely to pull the file data via an RPC call, and in that case, you're looking for the agent_file_id
value. You'll notice that this is an array of identifiers. This allows you to supply multiple at once (for example: you took 5 screenshots over a few minutes or you took screenshots of multiple monitors) and Mythic will create a modal where you can easily click through all of them.
To actually create a screenshot, we return a dictionary with a key called screenshot
that has an array of Dictionaries. We do this so that you can actually render multiple screenshots at once (such as if you fetched information for multiple monitors at a time). For each screenshot, you just need three pieces of information: the agent_file_id
, the name
of the button you want to render, and the variant
is how you want the button presented (contained
is a solid button and outlined
is just an outline for the button).
If we didn't error and we're not done, then the status will be processed
. In that case, if we have data we want to also display the partial screenshot, but if we have no responses yet, then we want to just inform the user that we don't have anything yet.
When downloading files from a target computer, the agent will go through a series of steps to register a file id with Mythic and then start chunking and transferring data. At the end of this though, it's super nice if the user is able to click a button in-line with the tasking to download the resulting file(s) instead of then having to go to another page to download it. This is where the download browser script functionality comes into play.
With this script, you're able to specify some plaintext along with a button that links to the file you just downloaded. However, remember that browser scripts run in the browser and are based on the data that's sent to the user to view. So, if the agent doesn't send back the new agent_file_id
for the file, then you won't be able to link to it in the UI. Let's take an example and look at what this means:
function(task, responses){
if(task.status.includes("error")){
const combined = responses.reduce( (prev, cur) => {
return prev + cur;
}, "");
return {'plaintext': combined};
}else if(task.completed){
if(responses.length > 0){
try{
let data = JSON.parse(responses[0]);
return {"download":[{
"agent_file_id": data["file_id"],
"variant": "contained",
"name": "Download",
"plaintext": "Download the file here: ",
"hoverText": "download the file"
}]};
}catch(error){
const combined = responses.reduce( (prev, cur) => {
return prev + cur;
}, "");
return {'plaintext': combined};
}
}else{
return {"plaintext": "No data to display..."}
}
}else if(task.status === "processed"){
if(responses.length > 0){
const task_data = JSON.parse(responses[0]);
return {"plaintext": "Downloading a file with " + task_data["total_chunks"] + " total chunks..."};
}
return {"plaintext": "No data yet..."}
}else{
// this means we shouldn't have any output
return {"plaintext": "Not response yet from agent..."}
}
}
So, from above we can see a few things going on:
Like many other browser scripts, we're going to modify what we display to the user based on the status of the task as well as if the agent has returned anything for us to view or not. That's why there's checks based on the task.status
and task.completed
fields.
Assuming the agent returned something back and we completed successfully, we're going to parse what the agent sent back as JSON and look for the file_id
field.
We can then make the download button with a few fields:
agent_file_id
is the file UUID of the file we're going to download through the UI
variant
allows you to control if the button is a solid or just outlined button (contained
or outlined
)
name
is the text inside the button
plaintext
is any leading text data you want to display to the user instead of just a single download button
So, let's look at what the agent actually sent for this message as well as what this looks like visually:
Notice here in what the agent sends back that there are two main important pieces: file_id
which we use to pass in as agent_file_id
for the browser script, and total_chunks
. total_chunks
isn't strictly necessary for anything, but if you look back at the script, you'll see that we display that as plaintext to the user while we're waiting for the download to finish so that the user has some sort of idea how long it'll take (is it 1 chunk, 5, 50, etc).
And here you can see that we have our plaintext leading up to our button. You'll also notice how the download
key is an array. So yes, if you're downloading multiple files, as long as you can keep track of the responses you're getting back from your agent, you can render and show multiple download buttons.
Sometimes you'll want to link back to the "search" page (tasks, files, screenshots, tokens, credentials, etc) with specific pieces of information so that the user can see a list of information more cleanly. For example, maybe you run a command that generated a lot of credentials (like mimikatz) and rather than registering them all with Mythic and displaying them in the task output, you'd rather register them with Mythic and then link the user over to them. That's where the search links come into play. They're formatted very similarly to the download button, but with a slight tweak.
function(task, responses){
if(task.status.includes("error")){
const combined = responses.reduce( (prev, cur) => {
return prev + cur;
}, "");
return {'plaintext': combined};
}else if(task.completed){
if(responses.length > 0){
try{
let data = JSON.parse(responses[0]);
return {"search": [{
"plaintext": "View on the search page here: ",
"hoverText": "opens a new search page",
"search": "tab=files&searchField=Filename&search=" + task.display_params,
"name": "Click Me!"
}]};
}catch(error){
const combined = responses.reduce( (prev, cur) => {
return prev + cur;
}, "");
return {'plaintext': combined};
}
}else{
return {"plaintext": "No data to display..."}
}
}else if(task.status === "processed"){
if(responses.length > 0){
const task_data = JSON.parse(responses[0]);
return {"plaintext": "Downloading a file with " + task_data["total_chunks"] + " total chunks..."};
}
return {"plaintext": "No data yet..."}
}else{
// this means we shouldn't have any output
return {"plaintext": "Not response yet from agent..."}
}
}
This is almost exactly the same as the download
example, but the actual dictionary we're returning is a little different. Specifically, we have:
plaintext
as a string we want to display before our actual link to the search page
hoverText
as a string for what to display as a tooltip when you hover over the link to the search page
search
is the actual query parameters for the search we want to do. In this case, we're showing that we want to be on the files
tab, with the searchField
of Filename
, and we want the actual search
parameter to be what is shown to the user in the display parameters (display_params
). If you're ever curious about what you should include here for your specific search, whenever you're clicking around on the search page, the URL will update to reflect what's being shown. So, you can navigate to what you'd want, then copy and paste it here.
name
is the text represented that is the link to the search page.
Just like with the download button, you can have multiple of these search
responses.
Creating tables is a little more complicated, but not by much. The biggest thing to consider is that you're asking Mythic to create a table for you, so there's a few pieces of information that Mythic needs such as what are the headers, are there any custom styles you want to apply to the rows or specific cells, what are the rows and what's the column value per row, in each cell do you want to display data, a button, issue more tasking, etc. So, while it might seem overwhelming at first, it's really nothing too crazy.
Let's take an example and then work through it - we're going to render the following screenshot
function(task, responses){
if(task.status.includes("error")){
const combined = responses.reduce( (prev, cur) => {
return prev + cur;
}, "");
return {'plaintext': combined};
}else if(task.completed && responses.length > 0){
let folder = {
backgroundColor: "mediumpurple",
color: "white"
};
let file = {};
let data = "";
try{
data = JSON.parse(responses[0]);
}catch(error){
const combined = responses.reduce( (prev, cur) => {
return prev + cur;
}, "");
return {'plaintext': combined};
}
let ls_path = "";
if(data["parent_path"] === "/"){
ls_path = data["parent_path"] + data["name"];
}else{
ls_path = data["parent_path"] + "/" + data["name"];
}
let headers = [
{"plaintext": "name", "type": "string", "fillWidth": true},
{"plaintext": "size", "type": "size", "width": 200,},
{"plaintext": "owner", "type": "string", "width": 400},
{"plaintext": "group", "type": "string", "width": 400},
{"plaintext": "posix", "type": "string", "width": 100},
{"plaintext": "xattr", "type": "button", "width": 100, "disableSort": true},
{"plaintext": "DL", "type": "button", "width": 100, "disableSort": true},
{"plaintext": "LS", "type": "button", "width": 100, "disableSort": true},
{"plaintext": "CS", "type": "button", "width": 100, "disableSort": true}
];
let rows = [{
"rowStyle": data["is_file"] ? file : folder,
"name": {
"plaintext": data["name"],
"copyIcon": true,
"startIcon": data["is_file"] ? "file":"openFolder",
"startIconHoverText": data["name",
"startIconColor": "gold",
},
"size": {"plaintext": data["size"]},
"owner": {"plaintext": data["permissions"]["owner"]},
"group": {"plaintext": data["permissions"]["group"]},
"posix": {"plaintext": data["permissions"]["posix"]},
"xattr": {"button": {
"name": "View XATTRs",
"type": "dictionary",
"value": data["permissions"],
"leftColumnTitle": "XATTR",
"rightColumnTitle": "Values",
"title": "Viewing XATTRs",
"hoverText": "View more attributes"
}},
"DL": {"button": {
"name": "DL",
"type": "task",
"disabled": !data["is_file"],
"ui_feature": "file_browser:download",
"parameters": ls_path
}},
"LS": {"button": {
"name": "LS",
"type": "task",
"ui_feature": "file_browser:list",
"parameters": ls_path
}},
"CS": {"button": {
"name": "CS",
"type": "task",
"ui_feature": "code_signatures:list",
"parameters": ls_path
}}
}];
for(let i = 0; i < data["files"].length; i++){
let ls_path = "";
if(data["parent_path"] === "/"){
ls_path = data["parent_path"] + data["name"] + "/" + data["files"][i]["name"];
}else{
ls_path = data["parent_path"] + "/" + data["name"] + "/" + data["files"][i]["name"];
}
let row = {
"rowStyle": data["files"][i]["is_file"] ? file: folder,
"name": {"plaintext": data["files"][i]["name"]},
"size": {"plaintext": data["files"][i]["size"]},
"owner": {"plaintext": data["files"][i]["permissions"]["owner"]},
"group": {"plaintext": data["files"][i]["permissions"]["group"]},
"posix": {"plaintext": data["files"][i]["permissions"]["posix"],
"cellStyle": {
}
},
"xattr": {"button": {
"name": "View XATTRs",
"type": "dictionary",
"value": data["files"][i]["permissions"],
"leftColumnTitle": "XATTR",
"rightColumnTitle": "Values",
"title": "Viewing XATTRs"
}},
"DL": {"button": {
"name": "DL",
"type": "task",
"disabled": !data["files"][i]["is_file"],
"ui_feature": "file_browser:download",
"parameters": ls_path
}},
"LS": {"button": {
"name": "LS",
"type": "task",
"ui_feature": "file_browser:list",
"parameters": ls_path
}},
"CS": {"button": {
"name": "CS",
"type": "task",
"ui_feature": "code_signatures:list",
"parameters": ls_path
}}
};
rows.push(row);
}
return {"table":[{
"headers": headers,
"rows": rows,
"title": "File Listing Data"
}]};
}else if(task.status === "processed"){
// this means we're still downloading
return {"plaintext": "Only have partial data so far..."}
}else{
// this means we shouldn't have any output
return {"plaintext": "Not response yet from agent..."}
}
}
This looks like a lot, but it's nothing crazy - there's just a bunch of error handling and dealing with parsing errors or task errors. Let's break this down into a few easier to digest pieces:
return {"table":[{
"headers": headers,
"rows": rows,
"title": "File Listing Data"
}]};
In the end, we're returning a dictionary with the key table
which has an array of Dictionaries. This means that you can have multiple tables if you want. For each one, we need three things: information about headers, the rows, and the title of the table itself. Not too bad right? Let's dive into the headers:
let headers =[
{"plaintext": "name", "type": "string", "fillWidth": true},
{"plaintext": "size", "type": "size", "width": 200,},
{"plaintext": "owner", "type": "string", "width": 400},
{"plaintext": "group", "type": "string", "width": 400},
{"plaintext": "posix", "type": "string", "width": 100},
{"plaintext": "xattr", "type": "button", "width": 100, "disableSort": true},
{"plaintext": "DL", "type": "button", "width": 100, "disableSort": true},
{"plaintext": "LS", "type": "button", "width": 100, "disableSort": true},
{"plaintext": "CS", "type": "button", "width": 100, "disableSort": true}
];
Headers is an array of Dictionaries with three values each - plaintext
, type
, and optionally width
. As you might expect, plaintext
is the value that we'll actually use for the title of the column. type
is controlling what kind of data will be displayed in that column's cells. There are a few options here: string
(just displays a standard string), size
(takes a size in bytes and converts it into something human readable - i.e. 1024 -> 1KB), date
(process date values and display them and sort them properly), number
(display numbers and sort them properly), and finally button
(display a button of some form that does something). The last value here is width
- this is a pixel value of how much width you want the column to take up by default. If you want one or more columns to take up the remaining width, specify "fillWidth": true
. Columns allow sorting by default, but this doesn't always make sense. If you want to disable sorting (for example, for a button column), set "disableSort": true
in the header information.
Now let's look at the actual rows to display:
let rows = [{
"rowStyle": data["is_file"] ? file : folder,
"name": {
"plaintext": data["name"],
"copyIcon": true,
"startIcon": data["is_file"] ? "file":"openFolder",
"startIconHoverText": data["name",
"startIconColor": "gold",
},
"size": {"plaintext": data["size"]},
"owner": {"plaintext": data["permissions"]["owner"]},
"group": {"plaintext": data["permissions"]["group"]},
"posix": {"plaintext": data["permissions"]["posix"]},
"xattr": {"button": {
"name": "View XATTRs",
"type": "dictionary",
"value": data["permissions"],
"leftColumnTitle": "XATTR",
"rightColumnTitle": "Values",
"title": "Viewing XATTRs",
"hoverText": "View more attributes"
}},
"DL": {"button": {
"name": "DL",
"type": "task",
"disabled": !data["is_file"],
"ui_feature": "file_browser:download",
"parameters": ls_path
}},
"LS": {"button": {
"name": "LS",
"type": "task",
"ui_feature": "file_browser:list",
"parameters": ls_path
}},
"CS": {"button": {
"name": "CS",
"type": "task",
"ui_feature": "code_signatures:list",
"parameters": ls_path
}}
}];
Ok, lots of things going on here, so let's break it down:
rowStyle
- as you might expect, you can use this key to specify custom styles for the row overall. In this example, we're adjusting the display based on whether the current row is for a file or a folder.
If we're displaying anything other than a button for a column, then we need to include the plaintext
key with the value we're going to use. You'll notice that aside from rowStyle
, each of the other keys matches up with the plaintext
header values so that we know which values go in which columns.
In addition to just specifying the plaintext
value that is going to be displayed, there are a few other properties we can specify:
startIcon
- specify the name of an icon to use at the beginning of the plaintext
value. The available startIcon
values are:
folder/openFolder, closedFolder, archive/zip, diskimage, executable, word, excel, powerpoint, pdf/adobe, database, key, code/source, download, upload, png/jpg/image, kill, inject, camera, list, delete
^ the above values also apply to the endIcon
attribute
startIconHoverText
- this is text you want to appear when the user hovers over the icon
endIcon
- this is the same as the startIcon
except it's at the end of the text
endIconHoverText
- this is the text you want to appear when the user hovers over the icon
plaintextHoverText
- this is the text you want to appear when the user hovers over the plaintext value
copyIcon
- use this to indicate true/false if you want a copy
icon to appear at the front of the text. If this is present, this will allow the user to copy all of the text in plaintext
to the clipboard. This is handy if you're displaying exceptionally long pieces of information.
startIconColor
- You can specify the color for your start icon. You can either do a color name, like "gold"
or you can do an rgb value like "rgb(25,142,117)"
.
endIconColor
- this is the same as the startIconColor
but applies to any icon you want to have at the end of your text
The first kind of button we can do is just a popup to display additional information that doesn't fit within the table. In this example, we're displaying all of Apple's extended attributes via an additional popup.
{
"button":
{
"name": "View XATTRs",
"type": "dictionary",
"value": data["permissions"],
"leftColumnTitle": "XATTR",
"rightColumnTitle": "Values",
"title": "Viewing XATTRs",
"hoverText": "View additional attributes"
}
}
The button field takes a few values, but nothing crazy. name
is the name of the button you want to display to the user. The type
field is what kind of button we're going to display - in this case we use dictionary
to indicate that we're going to display a dictionary of information to the user. The other type is task
that we'll cover next. The value
here should be a Dictionary value that we want to display. We'll display the dictionary as a table where the first column is the key and the second column is the value, so we can provide the column titles we want to use. We can optionally make this button disabled by providing a disabled
field with a value of true
. Just like with the normal plaintext
section, we can also specify startIcon
and startIconColor.
Lastly, we provide a title
field for what we want to title the overall popup for the user.
If the data you want to display to the user isn't structured (not a dictionary, not an array), then you probably just want to display it as a string. This is pretty common if you have long file paths or other data you want to display but don't fit nicely in a table form.
{
"button":
{
"name": "View Strings",
"type": "string",
"value": "my data string\nwith newlines as well",
"title": "Viewing XATTRs",
"hoverText": "View additional attributes"
}
}
Just like with the other button types, we can use startIcon
, startIconColor
, and hoverText
for this button as well.
This button type allows you to issue additional tasking.
{
"button":
{
"name": "DL",
"type": "task",
"disabled": !data["is_file"],
"ui_feature": "file_browser:download",
"parameters": ls_path,
"hoverText": "List information about the file/folder",
"openDialog": false,
"getConfirmation": false
}
}
This button has the same name
and type
fields as the dictionary button. Just like with the dictionary button we can make the button disabled or not with the disabled
field. You might be wondering which task we'll invoke with the button. This works the same way we identify which command to issue via the file browser or the process browser - ui_feature
. These can be anything you want, just make sure you have the corresponding feature listed somewhere in your commands or you'll never be able to task it. Just like with the dictionary button, we can specify startIcon
and startIconColor
. The openDialog
flag allows you to specify that the tasking popup modal should open and be partially filled out with the data you supplied in the parameters
field. Similarly, the getConfirmation
flag allows you to force an accept/cancel
dialog to get the user's confirmation before issuing a task. This is handy, especially if the tasking is something potentially dangerous (killing a process, removing a file, etc). If you're setting getConfirmation
to true, you can also set acceptText
to something that makes sense for your tasking, like "yes", "remove", "delete", "kill", etc.
The last thing here is the parameters
. If you provide parameters, then Mythic will automatically use them when tasking. In this example, we're pre-creating the full path for the files in question and passing that along as the parameters to the download
function.
Remember: your parse_arguments
function gets called when your input isn’t a dictionary or if your parse_dictionary
function isn’t defined. So keep that in mind - string arguments go here
when you issue ls -Path some_path
on the command line, Mythic’s UI is automatically parsing that into {"Path": "some_path"}
for you and since you have a dictionary now, it goes to your parse_dictionary
function
when you set the parameters in the browser script, Mythic doesn’t first try to pre-process them like it does when you’re typing on the command.
If you want to pass in a parsed parameter set, then you can just pass in a dictionary. So, "parameters": {"Path": "my path value"}.
If you set "parameters": "-Path some_path"
just like you would type on the command line, then you need to have a parse_arguments
function that will parse that out into the appropriate command parameters. If your command doesn't take any parameters and just uses the input as a raw command line, then you can do as shown above and set "parameters": "path here"
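To make that concrete, here is a minimal, hypothetical sketch of an argument class whose parse_arguments function handles both the JSON dictionary the UI produces and a freeform "-Path some_path" string coming from a browser script button (the command and parameter names are made up for illustration):
from mythic_container.MythicCommandBase import *

class LsArguments(TaskArguments):
    def __init__(self, command_line):
        super().__init__(command_line)
        self.args = {
            "path": CommandParameter(name="path", type=ParameterType.String,
                                     description="Path to list", default_value=".")
        }

    async def parse_arguments(self):
        if len(self.command_line) == 0:
            # nothing supplied - fall back to the default value
            return
        if self.command_line[0] == "{":
            # the UI modal (or scripting) handed us an already-parsed JSON dictionary
            self.load_args_from_json_string(self.command_line)
        elif self.command_line.startswith("-Path "):
            # freeform "-Path some_path" text from the command line or a browser script button
            self.add_arg("path", self.command_line[len("-Path "):].strip())
        else:
            # treat the whole string as the path
            self.add_arg("path", self.command_line)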
Sometimes the data you want to display is an array rather than a dictionary or big string blob. In this case, you can use the table
button type and provide all of the same data you did when creating this table to create a new table (yes, you can even have menu buttons on that table).
{
"button":
{
"name": "view table",
"type": "table",
"title": "my custom new table",
"value": {
"headers": [
{"plaintext": "test1", "width": 100, "type": "string"}, {"plaintext": "Test2", "type": "string"}
],
"rows": [
{"test1": {"plaintext": "row1 col 1"}, "Test2": {"plaintext": "row 1 col 2"}}
]
}
}
}
Tasking and extra data display buttons are nice and all, but if you have a lot of options, you don't want to waste all that valuable text space on buttons. To help with that, there's one more type of button we can do: menu
. With this we can wrap the other kinds of buttons:
"button": {
"name": "Actions",
"type": "menu",
"value": [
{
"name": "View XATTRs",
"type": "dictionary",
"value": data["files"][i]["permissions"],
"leftColumnTitle": "XATTR",
"rightColumnTitle": "Values",
"title": "Viewing XATTRs"
},
{
"name": "Get Code Signatures",
"type": "task",
"ui_feature": "code_signatures:list",
"parameters": ls_path
},
{
"name": "LS Path",
"type": "task",
"ui_feature": "file_browser:list",
"parameters": ls_path
},
{
"name": "Download File",
"type": "task",
"disabled": !data["files"][i]["is_file"],
"ui_feature": "file_browser:download",
"parameters": ls_path
}
]
}
Notice how we have the exact same information for the task
and dictionary
buttons as before, but they're just in an array format now. It's as easy as that. You can even keep your logic for disabling entries or conditionally omit them entirely. This allows us to create a dropdown menu like the following screenshot:
These menu items also support the startIcon
, startIconColor
, and hoverText
properties.
If you have certain kinds of media you'd like to display right inline with your tasking, you can do that. All you need is the agent_file_id
(the UUID value you get back when registering a file with Mythic) and the filename
of whatever media it is you're trying to show.
return { "media": [{
"filename": `${task.display_params}`,
"agent_file_id": data["file_id"],
}]};
Above is an example using the media
key that sets the filename
to be the display parameters for the task (in this case it was a download command so the display parameters are the path to the file to download) and the agent_file_id
is set to the file_id
that was returned as part of the agent's tasking. In this case, the raw agent user_output
was:
{"file_id": "ff41f25d-fcaa-4d5b-a573-061d40238e33", "total_chunks": "1"}
File Transfer Update: 100% complete
Finished Downloading
If you don't want to have media auto-render for you as part of this browser script, you can either disable the browser script or go to your user settings and there's a new toggle for if you want to auto-render media. If you set that to off and save it, then the next time a browser script (or anything else) tries to auto-render media, it'll first give you a button to click to authorize it before showing.
This also applies to the new media
button on the file downloads page.
If you want to render your data in a graph view rather than a table, then you can do that now too! This uses the same graphing engine that the active callback's graph view uses. There are three main pieces for returning graph data: nodes
, edges
, and a group_by
string where you can optionally group your nodes via certain properties.
Each node has a few properties:
id
- this is a unique way to identify this node compared to others. This is also how you'll identify the node when it comes to creating edges.
img
- if you want to display an image for your node, you give the name of the image here. Because this is React, we need to identify these all ahead of time. For now, the available images are as follows: group, computer, user, lan, language, list, container, help, diamond, skull. We can always add more though - if you find a free icon on Font Awesome or Material UI then let me know and I can get that added.
style
- this is where you can provide React styles that you want applied to your image. These are the same as CSS styles, except that -
are removed and camel casing is used instead. For example, instead of an attribute of background-color
like normal CSS, you'd define it as backgroundColor
.
overlay_img
- this is the same as img
except that you can define a SECOND image to have overlayed on the top right of your original one.
overlay_style
- this is the same as style
except that it applies to overlay_img
instead of img
.
data
- this is where any information about your actual node lives
there should be a label
value in here that's used to display the text under your node
buttons
- this is an array of button actions you'd like to add to the context menu of your node. This is the same as the normal buttons from Tables, except there's no need for a menu
button.
Each edge has a few properties:
source
- all of the data (as a dictionary) about the source node. Mythic will try to do things like source.id
to get the ID for the source node.
destination
- all of the data (as a dictionary) about the destination/target node.
label
- the text to display as a label on the edge
data
- dictionary of information about your edge
animate
- boolean true/false if you want the edge to be animated or a solid color
color
- the color you want the edge to be
buttons
- this is an array of button actions you'd like to add to the context menu of your edge.
Sometimes when creating a command, the options you present to the operator might not always be static. For example, you might want to present them with a list of files that have been downloaded; you might want to show a list of processes to choose from for injection; you might want to reach out to a remote service and display output from there. In all of these scenarios, the parameter choices for a user might change. Mythic can now support this.
Since we're talking about a command's arguments, all of this lives in your Command's class that subclasses TaskArguments. Let's take an augmented shell
example:
class ShellArguments(TaskArguments):
    def __init__(self, command_line):
        super().__init__(command_line)
        self.args = {
            "command": CommandParameter(
                name="command", type=ParameterType.String, description="Command to run"
            ),
            "files": CommandParameter(name="files", type=ParameterType.ChooseOne, default_value=[],
                                      dynamic_query_function=self.get_files)
        }

    async def get_files(self, inputMsg: PTRPCDynamicQueryFunctionMessage) -> PTRPCDynamicQueryFunctionMessageResponse:
        fileResponse = PTRPCDynamicQueryFunctionMessageResponse(Success=False)
        file_resp = await SendMythicRPCFileSearch(MythicRPCFileSearchMessage(
            CallbackID=inputMsg.Callback,
            LimitByCallback=False,
            Filename="",
        ))
        if file_resp.Success:
            file_names = []
            for f in file_resp.Files:
                if f.Filename not in file_names and f.Filename.endswith(".exe"):
                    file_names.append(f.Filename)
            fileResponse.Success = True
            fileResponse.Choices = file_names
            return fileResponse
        else:
            fileResponse.Error = file_resp.Error
            return fileResponse

    async def parse_arguments(self):
        if len(self.command_line) > 0:
            if self.command_line[0] == "{":
                self.load_args_from_json_string(self.command_line)
            else:
                self.add_arg("command", self.command_line)
        else:
            raise ValueError("Missing arguments")
Here we can see that the files
CommandParameter has an extra component - dynamic_query_function
. This parameter points to a function that also lives within the same class, get_files
in this case. This function is a little different than the other functions in the Command file because it occurs before you even have a task - this is generating parameters for when a user does a popup in the user interface. As such, this function gets one parameter - a dictionary of information about the callback itself. It should return an array of strings that will be presented to the user.
You have access to a lot of the same RPC functionality here that you do in create_tasking
, with one notable exception - you don't have a task yet, so you have to do things based on the callback_id
. You won't be able to create/delete entries via RPC calls, but you can still use pretty much every query capability. In this example, we're doing a file search
query to pull all files that exist within the current callback and present their filenames to the user.
What information do you have at your disposal during this dynamic function call? Not much, but enough to do some RPC calls depending on the information you need to complete this function. Specifically, the PTRPCDynamicQueryFunctionMessage parameter has the following fields:
command - name of the command
parameter_name - name of the parameter
payload_type - name of the payload type
callback - the ID of the callback used for RPC calls
Sub-tasking is the ability for a task to spin off sub-tasks and wait for them to finish before potentially entering a "submitted" state themselves for an agent to pick them up. When creating subtasks, your create_go_tasking
function will finish completing like normal (it doesn't wait for subtasks to finish).
When a task has outstanding subtasks, its status will change to "delegating" while it waits for them all to finish.
Subtasking provides a way to separate out complex logic into multiple discrete steps. For example, if a specific task you're trying to do ends up with a complex series of steps, then it might be more beneficial for the agent developer and operator to see them broken out. For example, a psexec
command actually involves a lot of moving pieces from making sure that you:
have a service executable (or some short-lived task that is ok to get killed by the service control manager)
can access the remote file system
can write to the remote file system in some way (typically smb)
can create a scheduled task
can delete the scheduled task
can remove the remote file
That's a lot of steps and conditionals to report back. If any step fails, are you able to track down where it failed and the status of any of the cleanup steps (if any performed at all)? That starts to become a massive task, especially when other parts of the task might already be separate tasks within the agent. Creating/manipulating scheduled tasks could be its own command, same with copying files to a remote share. So, at that point you're either duplicating code, or you have some sort of shared dependency. It would be easier if you could just issue these all as subtasks and let each one handle its job as needed in smaller, isolated chunks.
Creating subtasks is pretty easy:
async def create_go_tasking(self, taskData: PTTaskMessageAllData) -> PTTaskCreateTaskingMessageResponse:
    response = PTTaskCreateTaskingMessageResponse(
        TaskID=taskData.Task.ID,
        Success=True,
    )
    await SendMythicRPCTaskCreateSubtask(MythicRPCTaskCreateSubtaskMessage(
        TaskID=taskData.Task.ID,
        CommandName="run",
        Params="cmd.exe /S /c {}".format(taskData.args.command_line)
    ))
    return response
This function can be called from within your create_go_tasking
function or even from task callbacks (covered in the next section). We're specifying the name of the command to run along with the parameters to issue (as a string). We can even specify a SubtaskCallbackFunction
to get called within our current task when the subtask finishes. It's a way for the parent task to say "when this subtask is done, call this function so I can make more decisions based on what happened". These callback functions look like this:
async def downloads_complete(completionMsg: PTTaskCompletionFunctionMessage) -> PTTaskCompletionFunctionMessageResponse:
    response = PTTaskCompletionFunctionMessageResponse(Success=True)
    ...
    response.Success = False
    response.TaskStatus = "error: Failed to search for files"
    await SendMythicRPCResponseCreate(MythicRPCResponseCreateMessage(
        TaskID=completionMsg.TaskData.Task.ID,
        Response=f"error: Failed to search for files: {files.Error}".encode()
    ))
    return response
Notice how this function's parameters don't start with self
. This isn't a function in your command class, but rather a function outside of it. With the data passed in via the PTTaskCompletionFunctionMessage
you should still have all you need to do MythicRPC* calls though.
This PTTaskCompletionFunctionMessage
has all the normal information you'd expect for the parent task (just like you'd see in your create_go_tasking
function) as well as all the same information for your subtask
. This makes it easy to manipulate both tasks from this context.
These callback functions are called in the parent task that spawned the subtask in the first place.
If you're creating subtasks and you want tokens associated with them (such as matching the token supplied for the parent task), then you must manually supply it as part of creating your subtask (ex: Token=taskData.Task.TokenID
). Mythic doesn't assume subtasks also need the token applied.
Here we have the flow for a command, shell
, that issues a subtask called run
and registers two completion handlers - one for when run
completes and another for when shell
completes. Notice how execution of shell
's create tasking function continues even after it issues the subtask run
. That's because this is all asynchronous - the result you get back from issuing a subtask only indicates whether Mythic successfully registered the task, not the final execution status of the task.
Task callbacks are functions that get executed when a task enters a "completed=True" state (i.e. when it completes successfully or encounters an error). These can be registered on a task itself:
async def create_go_tasking(self, taskData: MythicCommandBase.PTTaskMessageAllData) -> MythicCommandBase.PTTaskCreateTaskingMessageResponse:
    response = MythicCommandBase.PTTaskCreateTaskingMessageResponse(
        TaskID=taskData.Task.ID,
        CompletionFunctionName="formulate_output",
        Success=True,
    )
    return response
or on a subtask:
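The embedded example isn't rendered here, but based on the RPC call shown earlier, registering a completion function on a subtask is just a matter of adding the SubtaskCallbackFunction field (a sketch; the command and function names mirror the other examples in this section):
await SendMythicRPCTaskCreateSubtask(MythicRPCTaskCreateSubtaskMessage(
    TaskID=taskData.Task.ID,
    CommandName="run",
    Params="whoami",
    # must match a key in this command's completion_functions dictionary
    SubtaskCallbackFunction="formulate_output"
))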
When Mythic calls these callbacks, it looks for the defined name in the command's completion_functions
attribute like:
completion_functions = {"formulate_output": formulate_output}
Where the key
is the same name of the function specified and the value
is the actual reference to the function to call.
Like everything else associated with a Command, all of this information is stored in your command's Python/GoLang file. Sub-tasks are created via RPC functions from within your command's create_tasking
function (or any other function - i.e. you can issue more sub-tasks from within task callback functions). Let's look at what a callback function looks like:
async def formulate_output(task: PTTaskCompletionFunctionMessage) -> PTTaskCompletionFunctionMessageResponse:
    response = PTTaskCompletionFunctionMessageResponse(Success=True, TaskStatus="success")
    # Check if the task is complete
    if task.TaskData.Task.Completed is True:
        # Check if the task was a success
        if "error" not in task.TaskData.Task.Status:
            # Get the interval and jitter from the task information
            interval = task.TaskData.args.get_arg("interval")
            jitter = task.TaskData.args.get_arg("jitter")
            # Format the output message
            output = "Set sleep interval to {} seconds with a jitter of {}%.".format(
                interval / 1000, jitter
            )
        else:
            output = "Failed to execute sleep"
        # Send the output to Mythic
        resp = await SendMythicRPCResponseCreate(MythicRPCResponseCreateMessage(
            TaskID=task.TaskData.Task.ID,
            Response=output.encode()
        ))
        if not resp.Success:
            raise Exception("Failed to execute MythicRPC function.")
    return response
This is useful for when you want to do some post-task processing, actions, analysis, etc when a task completes or errors out. In the above example, the formulate_output
function simply displays a message to the user that the task is done. In more interesting examples though, you could use the get_responses
RPC call like we saw above to get information about all of the output subtasks have sent to the user for follow-on processing.
It's often useful to perform some operational security checks before issuing a task based on everything you know so far, or after you've generated new artifacts for a task but before an agent picks it up. This allows us to be more granular and context aware instead of the blanket command blocking that's available from the Operation Management page in Mythic.
OPSEC checks and information for a command is located in the same file where everything else for the command is located. Let's take an example all the way through:
class ShellCommand(CommandBase):
    cmd = "shell"
    needs_admin = False
    help_cmd = "shell {command}"
    description = """This runs {command} in a terminal by leveraging JXA's Application.doShellScript({command}).
WARNING! THIS IS SINGLE THREADED, IF YOUR COMMAND HANGS, THE AGENT HANGS!"""
    version = 1
    author = "@its_a_feature_"
    attackmapping = ["T1059", "T1059.004"]
    argument_class = ShellArguments
    attributes = CommandAttributes(
        suggested_command=True
    )

    async def opsec_pre(self, taskData: PTTaskMessageAllData) -> PTTTaskOPSECPreTaskMessageResponse:
        response = PTTTaskOPSECPreTaskMessageResponse(
            TaskID=taskData.Task.ID, Success=True, OpsecPreBlocked=True,
            OpsecPreBypassRole="other_operator",
            OpsecPreMessage="Implemented, but not blocking, you're welcome!",
        )
        return response

    async def opsec_post(self, taskData: PTTaskMessageAllData) -> PTTTaskOPSECPostTaskMessageResponse:
        response = PTTTaskOPSECPostTaskMessageResponse(
            TaskID=taskData.Task.ID, Success=True, OpsecPostBlocked=True,
            OpsecPostBypassRole="other_operator",
            OpsecPostMessage="Implemented, but not blocking, you're welcome! Part 2",
        )
        return response

    async def create_go_tasking(self, taskData: MythicCommandBase.PTTaskMessageAllData) -> MythicCommandBase.PTTaskCreateTaskingMessageResponse:
        response = MythicCommandBase.PTTaskCreateTaskingMessageResponse(
            TaskID=taskData.Task.ID,
            Success=True,
        )
        await SendMythicRPCArtifactCreate(MythicRPCArtifactCreateMessage(
            TaskID=taskData.Task.ID, ArtifactMessage="{}".format(taskData.args.get_arg("command")),
            BaseArtifactType="Process Create"
        ))
        response.DisplayParams = taskData.args.get_arg("command")
        return response
In the case of doing operational checks before a task's create_tasking
is called, we have the opsec_pre
function. Similarly, the opsec_post
function happens after your create_tasking
, but before your task is finally ready for an agent to pick it up.
opsec_pre/post_blocked
- this indicates True/False for if the function decides the task should be blocked or not
opsec_pre/post_message
- this is the message to the operator about the result of doing this OPSEC check
opsec_pre/post_bypass_role
- this determines who should be able to bypass this check. The default is operator
to allow any operator to bypass it, but you can change it to lead
to indicate that only the lead of the operation should be able to bypass it. You can also set this to other_operator
to indicate that somebody other than the operator that issued the task must approve it. This is helpful in cases where it's not necessarily a "block", but something you want to make sure operators acknowledge as a potential security risk
As the name of the functions imply, the opsec_pre
check happens before create_tasking
function runs and the opsec_post
check happens after the create_tasking
function runs. If you set opsec_pre_blocked
to True, then the create_tasking
function isn't executed until an approved operator bypasses the check. Then, execution goes back to create_tasking
and the opsec_post
. If that one also sets blocked to True, then the task is blocked again until an appropriate operator bypasses it. Once bypassed, the task status simply switches to Submitted
so that an agent can pick up the task on next checkin.
From the opsec_pre
and opsec_post
functions, you have access to the entire task/callback information like you do in create_tasking. Additionally, you have access to the entire RPC suite just like in create_tasking.
If you want to have a different form of communication between Mythic and your agent than the specific JSON messages that Mythic uses, then you'll need a "translation container".
The first thing you'll need to do is specify the name of the container in your associated Payload Type class code. Update the Payload Type's class to include a line like translation_container = "binaryTranslator"
. Now we need to create the container.
The process for making a translation container is almost identical to a c2 profile or payload type container, we're simply going to change which classes we instantiate, but the rest of it is the same.
Unlike Payload Type and C2 Profile containers that mainly do everything over RabbitMQ for potentially long-running queues of jobs, Translation containers use gRPC for fast responses.
If a translation_container
is specified for your Payload Type, then the three functions defined in the following two examples will be called as Mythic processes requests from your agent.
You then need to get the new container associated with the docker-compose file that Mythic uses, so run sudo ./mythic-cli add binaryTranslator
. Now you can start the container with sudo ./mythic-cli start binaryTranslator
and you should see the container pop up as a sub heading of your payload container.
Additionally, if you're leveraging a payload type that has mythic_encrypts = False
and you're doing any cryptography, then you should use this same process and perform your encryption and decryption routines here. This is why Mythic provides you with the associated keys you generated for encryption, decryption, and which profile you're getting a message from.
For the Python version, we simply instantiate our own subclass of the TranslationContainer class and provide three functions. In our main.py
file, simply import the file with this definition and then start the service:
mythic_container.mythic_service.start_and_run_forever()
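The embedded GitHub example isn't rendered here, but a minimal sketch of that subclass, modeled on the ExampleContainers repository, looks roughly like the following (the class name is made up, and the Tr* message class names should be verified against your mythic_container version):
import json
from mythic_container.TranslationBase import *

class MyBinaryTranslator(TranslationContainer):
    name = "binaryTranslator"
    description = "Translates between Mythic's JSON format and my agent's custom format"
    author = "@my_handle"

    async def generate_keys(self, inputMsg: TrGenerateEncryptionKeysMessage) -> TrGenerateEncryptionKeysMessageResponse:
        # called when Mythic needs encryption/decryption keys for a payload using this translator
        response = TrGenerateEncryptionKeysMessageResponse(Success=True)
        response.EncryptionKey = b""
        response.DecryptionKey = b""
        return response

    async def translate_to_c2_format(self, inputMsg: TrMythicC2ToCustomMessageFormatMessage) -> TrMythicC2ToCustomMessageFormatMessageResponse:
        # Mythic JSON dictionary -> bytes in the agent's custom format
        response = TrMythicC2ToCustomMessageFormatMessageResponse(Success=True)
        response.Message = json.dumps(inputMsg.Message).encode()
        return response

    async def translate_from_c2_format(self, inputMsg: TrCustomMessageToMythicC2FormatMessage) -> TrCustomMessageToMythicC2FormatMessageResponse:
        # bytes from the agent's custom format -> Mythic JSON dictionary
        response = TrCustomMessageToMythicC2FormatMessageResponse(Success=True)
        response.Message = json.loads(inputMsg.Message)
        return response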
For the GoLang side of things, we instantiate an instance of the translationstructs.TranslationContainer struct with our same three functions. For GoLang though, we have an Initialize function to add this struct as a new definition to track.
Then, in our main.go
code, we call the Initialize function and start the services:
mytranslatorfunctions.Initialize()
// sync over definitions and listen
MythicContainer.StartAndRunForever([]MythicContainer.MythicServices{
MythicContainer.MythicServiceTranslationContainer,
})
These examples can be found at the MythicMeta organization on GitHub: https://github.com/MythicMeta/ExampleContainers/tree/main/Payload_Type
Docker doesn't allow you to have capital letters in your image names, and when Mythic builds these containers, it uses the container's name as part of the image name. So, you can't have capital letters in your agent/translation container names. That's why you'll see things like service_wrapper
instead of serviceWrapper
Just like with Payload Types, a Translation container doesn't have to be a Dockerized instance. To turn any VM into a translation container just follow the general flow at 1. Payload Type Development
Within a Command
class, there are two functions - create_go_tasking
and process_response
. As the names suggest, the first one allows you to create and manipulate a task before an agent pulls it down, and the second one allows you to process the response that comes back. If you've been following along in development, then you know that Mythic supports many different fields in its post_response
action so that you can automatically create artifacts, register keylogs, manipulate callback information, etc. However, all of that requires that your agent format things in a specific way for Mythic to pick them up and process. That can be tiring.
The process_response
function takes in one argument class that contains two pieces of information: The task
that generated the response in the first place, and the response
that was sent back from the agent. Now, there's a specific process_response
keyword you have to send for Mythic to shuttle data off to this function instead of processing it normally. When looking at a post_response
message, it's structured as follows:
{
    "action": "post_response",
    "responses": [
        {
            "task_id": "some uuid",
            "process_response": {"myown": "data format"},
            // all of the other fields you want to leverage
        }
    ]
}
Now, anything in that process_response
key will get sent to the process_response
function in your Payload Type container. This value for process_response
can be any type - int, string, dictionary, array, etc.
Some caveats:
You can send any data you want in this way and process it however you want. In the end, you'll be doing RPC calls to Mythic to register the data
Not all things make sense to go this route. Sending data to the container for processing happens asynchronously and in parallel to the rest of the processing for the message. For example, if your message has just the task_id
and a process_response
key, then as soon as the data is shipped off to your process_response
function, Mythic is going to send the all clear back down to the agent and say everything was successful. It doesn't wait for your function to finish processing anything, nor does it expect any output from your function.
We do this sort of separation because your agent shouldn't be waiting on the hook for unnecessary things. We want the agent to get what it wants as soon as possible so it can go back to doing agent things.
Some functionality like SOCKS and file upload/download don't make sense for the process_response
functionality because the agent needs the response in order to keep functioning. Compare this to something like registering a keylog, creating an artifact, or providing some output to the user which the agent tends to think of in a "fire and forget" style. These sorts of things are fine for async parallel processing with no response to the agent.
The function itself is really simple:
async def process_response(self, task: PTTaskMessageAllData, response: any) -> PTTaskProcessResponseMessageResponse:
    resp = PTTaskProcessResponseMessageResponse(TaskID=task.Task.ID, Success=True)
    return resp
where task
is the same task data you'd get from your create_go_tasking
function, and response
is whatever you sent back.
You have full access to all of the RPC methods to Mythic from here just like you do from the other functions.
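As a slightly more involved sketch, here's a hypothetical process_response that expects the agent to send a dictionary and registers an artifact from it via RPC (the "artifact" key and its contents are made up for illustration):
async def process_response(self, task: PTTaskMessageAllData, response: any) -> PTTaskProcessResponseMessageResponse:
    resp = PTTaskProcessResponseMessageResponse(TaskID=task.Task.ID, Success=True)
    # "response" is whatever the agent put in its process_response key - assume something
    # like {"artifact": "cmd.exe /c whoami"} purely for this example
    if isinstance(response, dict) and "artifact" in response:
        await SendMythicRPCArtifactCreate(MythicRPCArtifactCreateMessage(
            TaskID=task.Task.ID,
            ArtifactMessage=response["artifact"],
            BaseArtifactType="Process Create"
        ))
    return resp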
This is an optional function to supply when you have a command parameter of type TypedArray
. This doesn't apply to BuildParameters or C2Profile Parameters because you can only supply those values through the GUI or through scripting. When issuing tasks though, you generally have the option of using a modal popup window or freeform text. This typedarray_parse_function
is to help with parsing freeform text when issuing tasks.
This is an optional function you can supply as part of your command parameter definitions.
A TypedArray
provides two things from the operator to the agent - an array of values and a type for each value. This makes its way to the payload type containers during tasking and building as an array of arrays (ex: [ ["int", "5"], ["string", "hello"] ]
). However, nobody types like that on the command line when issuing tasks. That's where this function comes into play.
Let's say you have a command, my_bof
, with a TypedArray parameter called bof_args
. bof_args
has type options (i.e. the choices
of the parameter) of int
, wstring
, and char*
. When issuing this command on the command line, you'd want the operator to be able to issue something a bit easier than multiple nested arrays. Something like:
my_bof -bof_args int:5 char*:testing wstring:"this is my string"
my_bof -bof_args int/5 char*/testing wstring/"this is my string"
my_bof -bof_args int(5) char*(testing) wstring(this is my string)
...
The list of options can go on and on and on. There's no ideal way to do it, and everybody's preferences for doing something like this are a bit different. This is where the TypedArray parsing function comes into play. Your payload type can define or explain in the command description how arguments should be formatted when using freeform text, and then your parsing function can parse that data back into the proper array of arrays.
This function takes in a list of strings formatted how the user presented it on the command line, such as:
[ "int:5", "char*:testing", "wstring:this is my string" ]
The parse function should take that and split it back out into our array of arrays:
[ ["int": "5"], ["char*", "testing"], ["wstring", "this is my string"] ]
This function gets called in a few different scenarios:
The user types out my_bof -bof_args int:5 char*:testing
on the command line and hits enter
If the modal is used then this function is not called because we already can create the proper array of arrays from the UI
The user uses Mythic scripting or browser scripts to submit the task
The user types out my_bof -bof_args int:5 char*:testing
on the command line and hits SHIFT+enter to open up a modal dialog box. This will call your parsing function to turn that array into an array of arrays so that the modal dialog can display what the user has typed out so far.
These functions are essentially the first line of processing that can happen on the parameters the user provides (the first optional parsing being done in the browser). For your typed array parameter, bof_args
in this case, you just need to make sure that one of the following is true after you're done parsing with either of these functions:
the typed array parameter's value is set to an array of strings, ex: ["int:5", "char*:testing"]
the typed array parameter's value is set to an array of arrays where the first entry is empty and the second entry is the value, ex: [ ["", "int:5"], ["", "char*:testing"] ]
After your parse_arguments
or parse_dictionary
function is called, the mythic_container code will check for any typed_array parameters and check their value. If the value is one of the above two instances, it'll make it match what's expected for the typedarray parse function, call your function, then automatically update the value.
If you're using the Mythic UI and want to make sure your parsed array happened correctly, it's pretty easy to see. Expand your task, click the blue plus sign, then go to the keyboard icon. It'll open up a dialog box showing you all the various stages of parameter parsing that's happened. If you just typed out the information on the command line (no modal), then you'll likely see an array of arrays where the first element is always ""
displayed to the user, but the agent's parameters
section will show you the final value after all the parsing happens. That's where you'll see the result of your parsing.
See the C2 Related Development section for more SOCKS specific message details.
To start / stop SOCKS (or any interactive based protocol), use the SendMythicRPCProxyStart
and SendMythicRPCProxyStop
RPC calls within your Payload Type's tasking functions.
For SOCKS, you want to set LocalPort
to the port you want to open up on the Mythic Server - this is where you'll point your proxy-aware tooling (like proxychains) to then tunnel those requests through your C2 channel and out your agent. For SOCKS, the RemotePort and RemoteIP don't matter. The PortType
will be CALLBACK_PORT_TYPE_SOCKS
(i.e. socks
).
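As a hedged sketch of starting SOCKS from a command's tasking function (the RPC and message names follow this section's naming, and the "port" argument is made up - check your mythic_container version for the exact spellings):
async def create_go_tasking(self, taskData: PTTaskMessageAllData) -> PTTaskCreateTaskingMessageResponse:
    response = PTTaskCreateTaskingMessageResponse(TaskID=taskData.Task.ID, Success=True)
    # open a SOCKS port on the Mythic server; RemoteIP/RemotePort are ignored for SOCKS
    proxy = await SendMythicRPCProxyStart(MythicRPCProxyStartMessage(
        TaskID=taskData.Task.ID,
        LocalPort=taskData.args.get_arg("port"),
        PortType="socks"
    ))
    if not proxy.Success:
        response.Success = False
        response.TaskStatus = "error: {}".format(proxy.Error)
    return response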
See the C2 Related Development section for more RPFWD specific message details.
To start / stop RPFWD (or any interactive based protocol), use the SendMythicRPCProxyStart
and SendMythicRPCProxyStop
RPC calls within your Payload Type's tasking functions.
For RPFWD, you want to set LocalPort
to the port you want to open up on the host where your agent is running. RemoteIP
and RemotePort
are used for Mythic to make remote connections based on the incoming connections your agent gets on LocalPort
within the target network. The PortType
will be CALLBACK_PORT_TYPE_RPORTFWD
(i.e. rpfwd
).
In general, your agent will open up LocalPort
on your target machine. When it gets a connection to that port, it'll generate a random uint32 to identify that connection, and forward along what it gets from the connection to Mythic along with that identifier. Mythic will get that information and open a new connection to RemoteIP:RemotePort
and forward that data along. At that point, both sides are just forwarding data back and forth. Eventually, one end of the connection will terminate and at that point a final message will get sent to tell the other side to also close.
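A similar hedged sketch for rpfwd (same naming caveats as the SOCKS example; the parameter names are made up), where RemoteIP and RemotePort now matter:
await SendMythicRPCProxyStart(MythicRPCProxyStartMessage(
    TaskID=taskData.Task.ID,
    LocalPort=taskData.args.get_arg("port"),        # opened on the host where the agent runs
    RemoteIP=taskData.args.get_arg("remote_ip"),    # Mythic connects out to RemoteIP:RemotePort
    RemotePort=taskData.args.get_arg("remote_port"),
    PortType="rpfwd"
))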
Using a C2 platform, you don't generally have the ability to send follow-on information to a running task. Each task is run in isolation, especially if you're running a "shell" command. However, there are times when it's very useful to have an interactive shell on the target host. Depending on the operating system and configuration, you could of course attempt to use SOCKS to make a local SSH connection, or even use RPFWD to cause a shell to make a "remote" connection to yourself and tunnel it externally. These have their own pros/cons.
Another option would be to allow follow-on input within your current task without requiring additional network connections or going to something like sleep 0
for it to work. This is where "interactive" tasking comes into play.
If your command specifies a supported_ui_feature
of task_response:interactive
, then Mythic will display a slightly different interface for that task's output.
Notice here that as part of the task response, there's a whole new set of controls and input fields. The controls allow you to send things like CtrlC, CtrlZ, CtrlD through the browser down to your agent. Additionally, on the right-hand side there's a control for what kind of line endings to set when you hit enter - None, LF, CR, or CRLF. The None
option is particularly useful when you want to send something along like "q" or " " to the other side without sending "q\n" or " \n".
The other three buttons there allow you to toggle ANSI Terminal coloring, indicators for your tasking, and word wrapping respectively.
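A hedged sketch of a command definition that opts into this interface (the command name and arguments class are made up; supported_ui_features is the CommandBase attribute that carries the value):
class PseudoShellArguments(TaskArguments):
    def __init__(self, command_line):
        super().__init__(command_line)
        self.args = {}

    async def parse_arguments(self):
        pass

class PseudoShellCommand(CommandBase):
    cmd = "pseudo_shell"
    needs_admin = False
    help_cmd = "pseudo_shell"
    description = "Spawn a shell whose task output accepts follow-on interactive input"
    version = 1
    author = "@my_handle"
    attackmapping = []
    argument_class = PseudoShellArguments
    # tells the Mythic UI to render the interactive controls for this task's responses
    supported_ui_features = ["task_response:interactive"]

    async def create_go_tasking(self, taskData: PTTaskMessageAllData) -> PTTaskCreateTaskingMessageResponse:
        return PTTaskCreateTaskingMessageResponse(TaskID=taskData.Task.ID, Success=True)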
Just like with SOCKS and RPFWD, you can use MythicRPCProxyStart to open up a port with this sort of task. If you do, then you can interactively work with the task through your own terminal instead of through the web UI.
Note: When tasking through the UI, all input is tracked as normal tasking, but no "tasks" are created when interacting via the opened port. Output is still saved and displayed within the UI.
For information on the message format, check out Interactive Tasking.