If you want to look into all the new features available to you as a payload developer from an agent perspective, check out Agents 2.1.* -> 2.2.2. The rest of this page is for higher-level updates and UI changes. Either way, to leverage these updates and to upgrade your Mythic instance, you will need to delete your current database with sudo ./mythic-cli database reset, then pull in the updates and run sudo ./mythic-cli mythic start again.
If you're coming from the 2.1.* Mythic instances, your current agents and c2 profiles will NOT work. They will NOT work. This update changed a lot of the underlying ways those agents/c2 profiles communicate and sync with Mythic, so you'll need to update them. In most cases, once the developer has announced that their agent or profile is updated, you can use the ./mythic-cli install github <url> [branch] [-f] command to remove the old version and pull in the new version.
When updating minor versions (2.1 to 2.2), make sure you drop your database with sudo ./mythic-cli database reset and delete your Mythic/.env file.
One of the issues we've had with Mythic in the past is that people just install it and start it without any additional configuration. There's no issue with this, but it does mean that all payload types and all c2 profiles have their containers pulled down and started. If you're not expecting it, this can be a lot of space. There are ways to configure this, but there needed to be a way that's a bit more apparent. So, to help with this (and to help with versioning/updates overall), all Payload Types and C2 Profiles are now split out from the main Mythic repository.
Agents - https://github.com/MythicAgents
C2 Profiles - https://github.com/MythicC2Profiles
To install an agent or c2 profile, you can use the ./mythic-cli install github <url> command and point it at the repository address for that agent or c2 profile.
The hope is that making this an explicit step will allow people to be more cognizant about which agents/c2 profiles they're installing, how much space they're using, and allow payload types/c2 profiles to be updated at a higher frequency than the Mythic server itself.
The format required for these external agents/c2 profiles is documented and templated out for you here: https://github.com/its-a-feature/Mythic_External_Agent. Simply mirror that folder structure and config file, then host your agent on github, and everybody will be able to install and leverage your code.
If you're interested in creating new agents or c2 profiles for people to use, let me know on Twitter, and I can add you to the appropriate organization so that you have full control over that project. The nice thing about having all of the agents/profiles aggregated under the same organization is that it makes them easier for people to find than a bunch of random GitHub links.
In addition to moving all of the Payload Types and C2 Profiles into their own Mythic-based organizations on GitHub, a lot of the "meta" information for Mythic that was stored with it is now in its own organization called MythicMeta.
Inside this organization you'll find the code for:
Docker Templates - This contains the code and files used to create the Docker Images that are used with the standard Mythic Agents. If you're curious what actually goes into an itsafeaturemythic/csharp_payload:0.0.11 Docker image, this is where you'll find that.
Payload Type PyPi Container - This is the code that's in the mythic_payloadtype_container PyPi package hosted on PyPi.
C2 Profile PyPi Container - This is the code that's in the mythic_c2_container PyPi package hosted on PyPi.
Translator PyPi Container - This is the code that's in the mythic_translator_container PyPi package hosted on PyPi.
Mythic Scripting - This is the code that's used for the mythic (and right now specifically the mythic_rest) scripting capabilities and hosted on PyPi.
For any of these, if you don't want to leverage the standard Docker images, or if you want to turn your own VM into a supported container, feel free to leverage this code.
Mythic used to leverage a large number of bash scripts to accomplish the tasks of starting/stopping docker-compose and the agent/c2 profile containers, resetting the database, processing various configuration files, and more. While bash is on all Linux systems where Mythic can run, that doesn't mean that all of the additional support binaries exist (jq, realpath, openssl, docker, etc). This can result in a bit of a headache; plus, maintaining bash scripts is a nightmare. To get around this, Mythic now comes with a pre-compiled Golang binary, mythic-cli, with the source code available to all in the Mythic_CLI repository.
All of the documentation on this website should already be updated to show how to use the mythic-cli binary instead of the support scripts, but if you find a place that doesn't, be sure to report it.
As part of this update, Mythic now leverages a single docker-compose file for all containers. Mythic used to leverage docker-compose for all of the core services and then regular docker run commands for the agents and c2 profiles. This worked initially, but now that more and more environment variables are randomized and configurable, there needs to be a way to centrally configure everything. So, starting with Mythic 2.2.4, all configuration happens within the Mythic/.env file and all containers are in the Mythic/docker-compose.yml file. When you install agents, Mythic will automatically parse and update this docker-compose file to add in the necessary information. Similarly, you can add/remove agents/c2 profiles at any time from this file via mythic-cli {payload|c2} {add|remove} [name]. If you already have agents installed that you want to register, the add command will allow you to update your docker-compose file without having to re-install your agent or c2 profile. You can also use the mythic-cli {payload|c2} list feature to show which containers exist within your docker-compose file and which ones exist on disk.
This will take a little bit of time to get used to, but it will be easier for maintenance and expansion going forward than a bunch of bash scripts.
C2 profiles gained a new function, opsec, where they can take in all of the parameters an operator supplies when creating a payload and determine whether they're safe or not. This is an optional function; more detail is on the OPSEC page for C2 Profiles and in the general overview.
Payloads in Mythic got a few updates as well to make things easier for developers and for analysis.
On the Payloads page, if you delete a payload, then that payload can no longer be used to generate callbacks. If that payload tries to call back to Mythic, you'll get a warning in the UI and in the event feed letting you know that a deleted payload is trying to check in. This is helpful if a payload gets "burned" and you want to make sure it can't flood your system with callbacks.
Once you've created a payload, on the Payloads page there is a new element in the actions dropdown for "trigger a new build". This will take that payload and task Mythic with generating it again. This is helpful when doing development so that you can quickly troubleshoot build errors without having to apply all of your same build settings each time.
If you want to take a specific build configuration and use it across Mythic installs, or use it after you've done a sudo ./mythic-cli database reset, you can go to the Payloads page and select to export a payload configuration. This takes all of the information needed to generate a payload and saves it as a new JSON file through your browser. Then, at some time later, on the Payloads creation page, there's a new button on the top right to import a configuration. You can load in this JSON file and automatically trigger a new build of that payload.
Sometimes it's helpful to see payload information for payloads that you didn't build manually. This can be for commands that might build new instances of payloads for spawning, lateral movement, or privilege escalation. These are "ephemeral" payloads within Mythic and are typically hidden from view. On the Payloads page, you can click a button to view ephemeral payloads so that you can see configurations, build stats, etc for these as well. Similarly, if you delete these then they can't be used to generate new callbacks.
Payloads can now track their stdout and stderr in addition to a build_message. There are a lot of things that go into building dynamic payloads, so it's helpful to track stdout and stderr for later analysis or troubleshooting. This information can be seen from the Payloads page, and can be set during the build function.
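A rough sketch of what that can look like inside an agent's build function - this assumes your agent shells out to an external build command, so adjust the command and error handling to your own build process (the BuildResponse attributes are covered in more detail in the payload type container section later on this page):

```python
import asyncio

from mythic_payloadtype_container.PayloadBuilder import *


class MyAgent(PayloadType):
    # ... name, file_extension, author, and your other normal attributes ...

    async def build(self) -> BuildResponse:
        resp = BuildResponse(status=BuildStatus.Success)
        # "make" is just a placeholder for whatever actually compiles your agent
        proc = await asyncio.create_subprocess_shell(
            "make",
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        stdout, stderr = await proc.communicate()
        # saved with the payload and viewable later from the Payloads page
        resp.build_stdout = stdout.decode()
        resp.build_stderr = stderr.decode()
        if proc.returncode != 0:
            resp.status = BuildStatus.Error
            resp.build_message = "Failed to build agent"
        else:
            resp.build_message = "New agent created successfully"
        return resp
```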
All accounts within Mythic now must have a password that's at least 12 characters long. If you try to set a password that's shorter, then you'll get a warning message and the password setting will fail.
When a user tries to log in with a name that Mythic doesn't know, all operations will get a warning about it. If a user tries to log in 10 times unsuccessfully, their account will be locked and an admin account will need to re-enable it via the Settings tab in the top right. The initial admin account can't be locked out though (how else would anybody ever get in) - instead, this account goes into a throttle phase where you can only do one password attempt a minute. In both cases (throttle and lock out), all operations will get a notification that this is happening.
We removed the ability for operators to self-create an account via a "register" button and instead require an admin to pre-create accounts (or use scripting to do so) for all users. To do this via scripting, you can do something like the following.
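This sketch uses the mythic scripting PyPi package and assumes the mythic_rest client exposes a create_operator call along the lines shown here; check the Mythic Scripting repository for the exact names and signatures for your version:

```python
import asyncio

from mythic import mythic_rest


async def main():
    # authenticate with an existing admin account first
    mythic = mythic_rest.Mythic(
        username="mythic_admin",
        password="admin_password_here",
        server_ip="10.10.0.5",
        server_port="7443",
        ssl=True,
        global_timeout=-1,
    )
    await mythic.login()
    # passwords must now be at least 12 characters long
    resp = await mythic.create_operator(
        mythic_rest.Operator(username="new_operator", password="SomeLongPassword123")
    )
    print(resp.response)


asyncio.run(main())
```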
One of the big issues we found for Mythic over time has been that with a large number of callbacks (200+), or a medium number with a low sleep interval (like 15 agents at sleep 0), the performance of Mythic (both the server and the UI) noticeably deteriorates. After digging into it, it turns out that Mythic was only ever using a single core due to how the event loop was leveraged. Mythic now spawns multiple worker processes and shares the load across the available CPUs.
This presents a different set of problems from what we were doing before - for example, you can't cache requests or data in memory because that's not shared amongst the worker processes. So, to help with this, we added a small Redis database. It might seem weird that we now have two databases within Mythic - Redis and Postgres - but the use cases and the data stored in them are wildly different.
The Redis database is cleared each time Mythic starts and is used to hold temporary data that's needed by all of the different worker processes. Currently, this is two things:
JWT refresh tokens
SOCKS messages
Access to this data needs to be super fast, but doesn't need to be persistent, which is why it's not stored in the Postgres database.
There have been issues with the current SOCKS implementation within Mythic, so with the help of Thiago Mallart's pull request for Reverse Port Forwarding, we updated the implementation within Mythic to no longer leverage an external binary, but instead do it all with threads in the main Mythic server. Once we put this implementation through more intensive testing, we will be pulling in Thiago's pull request for reverse port forwarding as well.
One of the things Mythic strives to do is provide an extensible and customizable framework for you to create an agent that functions however you want. While the current Mythic format allows you to generate an agent however you want, the messages that you use are still pretty heavily tied to Mythic's JSON format. Depending on your agent and language of choice, JSON might not be feasible. Even if JSON is feasible, you might not want to use Mythic's JSON messages. So, to help make it easier for your agents to do their own thing, we're introducing the idea of "Translation" containers.
These containers simply act as a way to "translate" between your custom format and the JSON that Mythic needs. In addition to just doing a translation of messages, this container can also handle encryption, decryption, and generation of crypto keys.
If you issue commands to your agent and don't want to adhere to Mythic's format for hooking into features (like the web browser), but do want to utilize that feature, you can use the process_response key in your post_response messages (Process Response) and have your own custom messages sent back to your Command's Python file, then use Mythic's RPC functionality to register the same information. This really does give you the ability to do pretty much everything custom, while still hooking into Mythic.
Translation containers can sit side-saddle with your Payload Type locally in Mythic as well as in your Github repository so that it's easy to bundle and install them together.
More information about translation containers can be found on the Translation Containers page.
The event feed for Mythic got a bunch of updates to help with some of the issues people have faced recently.
There have been instances where people have tens of thousands of messages in the event feed, which was causing page loads to be really slow. So, Mythic now only fetches the latest 100 messages and has a button at the top to fetch the previous 100. This allows you to scroll back without having to load it all in at once.
Since all of the messages aren't loaded into the browser at once, it can be hard to find a warning message from Mythic. So, in addition to a button to fetch the previous 100 messages, there's a button to fetch the next most recent error. This makes it easy to find the unresolved errors while still making things much more performant in the browser.
When developing, it's often very helpful to see messages as they travel through Mythic. There are a lot of things going on between getting messages, decrypting, potentially sending to translation containers, processing requests, bundling it all back up, etc. If you pass in an environment variable of MYTHIC_DEBUG=True when starting Mythic, then all of these stages will send event messages to the event feed with context and information. This can be extremely helpful during development, but will be very overwhelming during production.
One of the issues we saw people face is an overwhelming number of alert messages. Specifically, when Mythic is too open to the internet and gets scanned, Mythic will report back that it can't find the agent messages in all of this scanning traffic. As an operator, you should still be notified that something is sending messages that Mythic can't process, but we don't want to bog down the system. So, Mythic will now "group" like messages. This only happens for "warning" messages, but instead of creating a new event feed entry and a new popup in the UI, Mythic will now increment a count associated with the message. If you mark this warning as "resolved" and you get another one of these messages, then you will get another event entry and another popup.
There were a lot of changes in this Mythic update, and a few things were put in place to support further updates going forward.
As Mythic expands and provides more context tracking and features, the back-end database will continue to expand. The current UI relies on a REST-style back-end. While this was easy to set up and useful initially, it means that we have to add a bunch more web routes as we add new features, which makes maintaining and testing it more tedious. If we need data in a different format or a different sub-section of data, we have to create new routes and expose them that way. Instead, if we just had a way to expose one endpoint and have the client describe the data it wants, then we could more easily add features to the UI without having to adjust the main Mythic server as well.
This is where GraphQL comes into play. This takes a little bit to get used to, but Mythic now includes the Hasura docker container to expose parts of the Postgres database via a GraphQL engine. This has its own permission model and relies on the main Mythic server for authentication.
As we continue to add more components to Mythic (Documentation container, Mythic server, GraphQL, etc), it's not practical to keep exposing multiple ports that you then need to lock down separately. Instead, we can open a single port externally and proxy connections back to all the components that need them. To accomplish this, we now include an Nginx docker container as a reverse proxy. You can still reach Mythic via the normal https://mythic_ip:7443, but now your connections are transparently proxied back to the documentation container and the graphql container for you.
Now that Mythic (formerly Apfell) has been out for a few years, I decided to take a look back at how the UI worked. I learned a lot over the years and decided that it was time to redo the UI. It's currently a mixture of Jinja2, Vue, JavaScript, and jQuery, and requires adding new templates/routes to the Mythic server in order to create new pages. It's not particularly easy for people to go through and trace data or make updates. So, to help with this (and to leverage the new GraphQL components), we're slowly adding a new UI via React.
Using React makes the UI more extensible and makes it easier for other people to contribute. This is going to be a slow process, so for a while there will be two UIs available to you. If you browse to Mythic like normal, you'll see the old/current UI and would never know that there's a new UI as well. If you browse to /new/login, then you'll be presented with the login for the new UI and can start seeing where Mythic is going in the future. We've already started implementing some of our newer features in this React UI, such as:
better Graph views of callbacks via dagre (built on d3) rather than the current manual D3 force directed graphs
representing some sleep info in the table view for callbacks
indicating if callbacks have direct routes to Mythic, if there are linked agents, and if there's still a route at all
The first big update for the payload type containers is to make things easier to update. To do this, many of the files within agentName/mythic are now in a PyPi package (mythic-payloadtype-container) hosted at https://github.com/MythicMeta/Mythic_PayloadType_Container. Specifically, your agentName/mythic folder will now look like:
agent_functions/
browser_scripts/
mythic_service.py
payload_service.sh
rabbitmq_config.json
All of those other files can be deleted.
If you are using a DockerImage from the itsafeaturemythic repo, update to the latest:
csharp_payload==0.0.14
python38_payload==0.0.7
xgolang_payload==0.0.12
leviathan_payload==0.0.7
If you're rolling your own, make sure you use the mythic_payloadtype_container==0.0.45 PyPi package.
The mythic_service.py file is the main one that kicks off and runs all of the container code for sending heartbeats and interacting with Mythic. This is now a super simple script.
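With the PyPi package in place, the file boils down to roughly the following (a sketch - check the MythicMeta templates for the exact contents for your container version):

```python
from mythic_payloadtype_container import mythic_service

# connects out to rabbitmq, sends heartbeats, and services build/tasking requests
mythic_service.start_service_and_heartbeat(debug=False)
```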
If you want more debugging information from this container, simply set debug=True.
The payload_service.sh file just changes to the right directory inside of the Docker container, makes sure that the new Mythic directory is in your Python environment path, and kicks off the mythic_service.py file we just covered.
The Docker files and Docker images for Mythic agents have all been updated. Instead of housing all of this information inside of the main Mythic repo, these have all been moved to the MythicMeta organization under Mythic_Docker_Templates. They all now include a python requirements.txt file so it's easy to see what software is needed if you want to create your own Docker image or turn a VM into the Mythic "container" for your agent.
For agents, the requirements.txt boils down to the mythic-payloadtype-container PyPi package (plus its dependencies). The mythic-payloadtype-container version of 0.0.45 corresponds to container version 9. Mythic now tracks the range of supported versions for all of the containers that connect up to it. This makes it easier to determine if you've updated Mythic but don't have updated containers, or vice versa.
This is a super awesome addition that allows you to configure your PayloadType container with the rabbitmq_config.json file OR via environment variables of the same names, prefixed with MYTHIC_. So, if your rabbitmq instance isn't on the same host as your payload type container (or VM), you could either:
Set the host key in rabbitmq_config.json to the IP of your rabbitmq instance
Set a MYTHIC_HOST environment variable to the IP of your rabbitmq instance
Either one of those will be pulled in (the environment configuration supersedes the json configuration).
The next set of changes comes to how you build your agent. Nothing major changed, just some additional features and a few conventions that changed since we're using a PyPi package now. Your main agent definition file in agentName/mythic/agent_functions/ (typically called builder.py) has some updates.
Replace your previous import with a new set of imports.
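For a typical builder.py, that change looks roughly like this (the old from PayloadBuilder import * line is an assumption about what your file previously had; adjust to match your agent):

```python
# old, pre-2.2.2 style:
# from PayloadBuilder import *

# new style - everything comes from the mythic_payloadtype_container PyPi package
from mythic_payloadtype_container.MythicCommandBase import *
from mythic_payloadtype_container.PayloadBuilder import *
```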
There are two new things for your agent class that extend the functionality of your PayloadType: mythic_encrypts and translation_container.
This will be covered more in the Translation Container section, but your agent can now handle its own encryption/decryption if it wants. To signal this to Mythic, add mythic_encrypts = False in the same set of parameters where you declare the agent name, extension, and author.
If you want to have your own message format, do your own encryption, or generate your own keys, you need to have a "translation container". This will be covered more in depth in the Translation Container section, but if you have one for your payload type, declare it here with the name of the container. For example, if my translation container will be called "translator", then I'd add translation_container = "translator" to the same set of parameters where you declare the agent name, extension, and author.
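Put together, the relevant parameters of a hypothetical agent class might look like the following abbreviated sketch:

```python
from mythic_payloadtype_container.MythicCommandBase import *
from mythic_payloadtype_container.PayloadBuilder import *


class MyAgent(PayloadType):
    name = "my_agent"                     # hypothetical agent name
    file_extension = "bin"
    author = "@you"
    supported_os = [SupportedOS.Linux]
    mythic_encrypts = False               # this agent handles its own encryption/decryption
    translation_container = "translator"  # name of the side-saddle translation container
    # ... the rest of your normal PayloadType attributes and build() function ...
```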
There's a pre-existing Docker container you can use for this, FROM itsafeaturemythic/python38_translator_container:0.0.3, or you can look at the corresponding GitHub repo (https://github.com/MythicMeta/Mythic_Docker_Templates/tree/master/Docker_Translation_base_files) to create your own.
Translation containers, just like the other containers, have a simple mythic_service.py script to kick them off, which consists of the following.
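Assuming you're using the mythic_translator_container PyPi package (the start_service_and_heartbeat name mirrors the payload type container above; check the MythicMeta repo if yours differs):

```python
from mythic_translator_container import mythic_service

# registers this translation container with Mythic and waits for translate/crypto requests
mythic_service.start_service_and_heartbeat(debug=False)
```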
Nothing fundamentally has changed with parsing C2 Profile Parameters for your agent, but there are some additional components. You still get an array (self.c2info) where each entry is related to one of the c2 profiles that the user is trying to add to your payload. For each of those entries you can still call the get_c2profile() function to get information about the profile and get_parameters_dict().items() to iterate over each key/value pair of parameter values. This is where the change happens:
Crypto
When doing crypto with Mythic, there used to be a lot of hard-coded components, such as needing a parameter name to be exactly AESPSK. Now, that is no longer the case. C2 Profiles can declare any parameter to be a crypto-related one simply by putting crypto=True in that C2 Profile's python definition file. As the payload creator though, what this means for you is that you don't just get a single base64 string back. The point here is to make crypto more modular and expansive, so you might get matching encryption/decryption keys, you might get a public key and a private key, you might get blank values to use for plaintext, etc. To support this, if a C2 profile parameter has crypto=True, then the value you get when iterating will be a dictionary with three components:
value - the crypto type the operator selected (e.g. aes256_hmac or none)
enc_key - the encryption key to use (or None)
dec_key - the decryption key to use (or None)
What does this look like in practice? For the http profile, the crypto parameter gives the user a choice to select aes256_hmac or none. One of those two values will be the value, then there will either be None or the base64 of an aes256 key for the enc_key and dec_key components. You can then use these values however you need inside of your agent.
Arrays / Dictionaries
C2 Profiles can also provide a parameter type of dictionary which, when creating the agent, allows the operator to provide Key-Value pairs. This is useful for things like specifying Header values for HTTP (User-Agent, Host, etc). Instead of trying to deal with nested data structures for arrays vs single values in these dictionaries, the end result is an array of tuples. So, when creating your agent and looping through C2 parameter values, you might get an array of key-value pairs.
Example
Let's look at what all of this means with an example. For the apfell agent, stamping in C2 profile parameter values used to be a simple loop over plain string key/value pairs. Now that we have a "dictionary" type for our http Profile's headers and we have the new crypto components, that same section of code has to handle a few different kinds of values.
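The sketch below is not the literal apfell builder code, just the shape of the new loop; base_code stands in for wherever your agent stamps values into its source, and what you do with enc_key/dec_key is up to your agent:

```python
import json

# inside build(), after reading your agent's source into base_code
for c2 in self.c2info:
    profile_info = c2.get_c2profile()  # info about which c2 profile this entry is for
    for key, val in c2.get_parameters_dict().items():
        if isinstance(val, dict):
            # crypto parameter (e.g. AESPSK): {"value": ..., "enc_key": ..., "dec_key": ...}
            base_code = base_code.replace(key, val["enc_key"] if val["enc_key"] is not None else "")
        elif isinstance(val, str):
            # normal string parameter - stamp it straight in
            base_code = base_code.replace(key, val)
        else:
            # dictionary parameter (e.g. headers) - an array of key/value pairs
            base_code = base_code.replace(key, json.dumps(val))
```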
Here we're checking if the value is an instance of dict, which means we're looking at crypto information; we check if the value is a str, which is the normal other data; and if it's neither of those, then it's the array of dictionaries for our header values. You can of course also check if the key matches the name of the C2 Profile Parameter values (AESPSK and headers) and do your check that way too.
The last piece that's updated about building is that you can now be more explicit about what's going on. Rather than only being able to report back success/failure and a single message (which can get really messy with debug output), you can now set the following in your BuildResponse object:
build_message - this is a simple standard message you display to the user when things build correctly (either myBuildResp.build_message = "congrats, new agent created" or myBuildResp.set_build_message("congrats, new agent created")).
build_stderr - this is error information if something goes wrong (either myBuildResp.build_stderr = "compile error here" or myBuildResp.set_build_stderr("compile error here")).
build_stdout - this is helpful stdout information that you might not want to present to the user on success, but that would be helpful to look at later on (either myBuildResp.build_stdout = "additional debugging info here" or myBuildResp.set_build_stdout("additional debugging info here")).
The old style of payloads used message instead of build_message, so you will have to change that one.
There are a lot of small updates with commands that give us a lot of new features, so let's dive into those. These sections are talking about all of the individual command files you have in agentName/mythic/agent_functions/commandName.py. This section will use Apfell's shell command as an example.
Just like with the payload building, we need to change our imports at the top. There are two main changes here. First, from CommandBase import * needs to change to from mythic_payloadtype_container.MythicCommandBase import *. Second, if you were importing various files for RPC, such as from MythicResponseRPC import *, those have all been collapsed into a single file now, so you'd need to import from mythic_payloadtype_container.MythicRPC import *.
Not much changed here, but there are a few things to call out:
CommandParameter objects now have a ui_position attribute you can set to order your arguments in a specific way
The name attribute does NOT have to match the value of the args key. The name attribute is what's presented to the user for the short name of the parameter, the description is what's displayed when the user hovers over that short name, and the dictionary key that's associated with the CommandParameter object overall is what's used when sending information down to the agent. Let's take an example.
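Consider a hypothetical upload-style command's arguments class (a sketch, with parse_arguments shown in its simplest JSON-loading form):

```python
from mythic_payloadtype_container.MythicCommandBase import *


class UploadArguments(TaskArguments):
    def __init__(self, command_line):
        super().__init__(command_line)
        self.args = {
            # the dictionary key "file" is what the agent receives;
            # "Select a File" is the short name the operator sees in the UI
            "file": CommandParameter(
                name="Select a File",
                type=ParameterType.File,
                description="file to upload",
            )
        }

    async def parse_arguments(self):
        if len(self.command_line) > 0:
            self.load_args_from_json_string(self.command_line)
```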
In this case, the user would see Select a File in their popup modal with a file selection button. If they hovered their mouse over Select a File, they'd see the description, file to upload, and when the data goes down to the agent, the agent will get {"file": "uuid here"}. When interacting with these kinds of mismatched names from the create_tasking function, you want to reference the dictionary key value, not the name value (i.e. task.args.get_arg("file")).
All the different kinds of Parameter types can be found in the public GitHub repo: https://github.com/MythicMeta/Mythic_PayloadType_Container.
When using a parameter type of ChooseOne or ChooseMultiple, choices is an array of choices for the user. If your command needs the user to pick from the set of commands (rather than a static set of values), then there are a few other components that come into play. If you want the user to be able to select any command for this payload type, set choices_are_all_commands to True. Alternatively, if you only want the user to choose from commands that are already loaded into the callback, set choices_are_loaded_commands to True. As a modifier to either of these, you can set choice_filter_by_command_attributes to filter down the options presented to the user even further based on the Command's attributes parameter. This would allow you, for example, to limit the user's list to commands that are loaded into the current callback and support macOS. An example of this is shown below.
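This sketch assumes choice_filter_by_command_attributes accepts a dictionary of attribute names to required values; check the Mythic_PayloadType_Container repo for the exact structure it expects:

```python
# inside your TaskArguments subclass's __init__
self.args = {
    "command": CommandParameter(
        name="command",
        type=ParameterType.ChooseOne,
        description="command already loaded in this callback that supports macOS",
        choices_are_loaded_commands=True,
        # assumed form: attribute name -> required value(s)
        choice_filter_by_command_attributes={"supported_os": [SupportedOS.MacOS]},
    )
}
```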
Just like how there's a TaskArguments class and a CommandBase class, there's now a CommandOPSEC class that you can implement for your commands. This will expand over time, but for now there are a few things you can do:
There are currently three tracked attributes about a command - injection_method, process_creation, and authentication - which are all free-form text fields that you can set to help describe anything your command might be doing on host that's an OPSEC consideration
Implement the opsec_pre and opsec_post functions. This is more detailed, so let's take the next few sections to walk through an example.
Your instance of CommandOPSEC is then tied to your command in the same way your instance of TaskArguments is - simply add opsec_class = ShellOPSEC (substituting the name of your subclass) to the same area where you have cmd, needs_admin, and description in your command class.
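An abbreviated sketch for a hypothetical shell command (only the OPSEC-relevant pieces are shown; the free-form attribute text is purely illustrative):

```python
class ShellOPSEC(CommandOPSEC):
    # free-form descriptions of the OPSEC-relevant things this command does on host
    injection_method = ""
    process_creation = "/bin/sh -c execution"
    authentication = ""

    async def opsec_pre(self, task: MythicTask):
        pass  # pre-flight checks, covered in the next section

    async def opsec_post(self, task: MythicTask):
        pass  # post-create_tasking checks, covered below


class ShellCommand(CommandBase):
    cmd = "shell"
    needs_admin = False
    description = "Execute a shell command"
    # ... your other normal command attributes (help_cmd, version, author, argument_class, etc.) ...
    opsec_class = ShellOPSEC
```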
This function, async def opsec_pre(self, task: MythicTask), if implemented, is called before a task's create_tasking function. The point of this function is to do some operational security pre-flight checks before passing execution on to your create_tasking function.
Let's take an example - you're operating in an environment and are about to run a command that does spawn and inject. Before you run that command, you want to query what you know so far about the environment to see if there are any EDR products running that might alert on that activity. So, before even passing execution to the create_tasking function, in the opsec_pre function you can use RPC calls back to Mythic to query information. In our example, let's query the Process table to see if we have any process data for the host where our task is running. We can do this with a simple RPC call.
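Assuming the RPC function for this is named get_processes (use the get_functions helper described later to confirm the exact name for your version):

```python
# inside opsec_pre
processes = await MythicRPC().execute("get_processes", task_id=task.id)
```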
processes now has a response object back from Mythic that holds some information:
processes.status - this is a MythicStatus value that indicates Success or Error for the RPC call overall
processes.error_message - if the status is MythicStatus.Error, then this is populated with the error message
processes.response - this is the actual response we got back from Mythic. This will vary wildly with the function call that you did.
Now that we have the basics, let's expand this example to say that if we don't have any process data, we should block the function's execution until we do. If we do have processes, we should do another query to see if any of them match something dangerous, like Microsoft Defender.
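A sketch of that expanded check - the get_processes name and the shape of its response entries are assumptions here, so adjust the field names to whatever the RPC call actually returns:

```python
# inside your CommandOPSEC subclass (e.g. ShellOPSEC above)
async def opsec_pre(self, task: MythicTask):
    processes = await MythicRPC().execute("get_processes", task_id=task.id)
    if processes.status == MythicStatus.Error or len(processes.response) == 0:
        # no process data for this host yet - block until an operator gathers some or bypasses
        task.opsec_pre_blocked = True
        task.opsec_pre_message = "No process data for this host yet; run a process listing first"
        task.opsec_pre_bypass_role = "operator"
        return
    for proc in processes.response:
        if "Microsoft Defender" in (proc.get("name") or ""):
            task.opsec_pre_blocked = True
            task.opsec_pre_message = f"Found a potentially dangerous process: {proc.get('name')}"
            task.opsec_pre_bypass_role = "lead"
            return
    task.opsec_pre_message = "No EDR-looking processes found in the current process data"
```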
You'll notice that we are setting a few fields on the task depending on what we see:
task.opsec_pre_blocked = True - we set this if we want to stop execution here and report something back to the user
task.opsec_pre_message - this is the message we want to send to the user
task.opsec_pre_bypass_role - we can set this to "operator" or "lead" based on who should be allowed to bypass this opsec issue.
All of the RPC calls that are available during create_tasking are available here as well.
If a task is blocked, the task status in the UI will indicate it's blocked. When you click on the task status there will be two additional menu options - submit a bypass request and view the message.
This is very similar to opsec_pre, except it happens after the create_tasking call. This is useful for when you're generating artifacts as part of your tasking (such as generating new DLLs) and want to make sure they're properly sanitized or obfuscated before allowing an agent to pick them up. All of the same components apply; it's just opsec_post_* instead.
This will expand over time, but currently we're adding an attributes component to Commands that describes non-OPSEC related attributes. Currently, this only includes two things: whether a command is spawn_and_injectable and what kinds of operating systems the command supports.
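This looks roughly like the following, assuming the helper class is named CommandAttributes (it matches the attribute names described above):

```python
# set alongside cmd, needs_admin, description, etc. in your command class
attributes = CommandAttributes(
    spawn_and_injectable=False,        # doesn't make sense to inject this command into a remote process
    supported_os=[SupportedOS.MacOS],  # only show this command when building macOS payloads
)
```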
The supported_os attribute lists the same supported OS types as the Payload Type. Most of the time these two will match up; however, if your agent can compile to multiple different operating systems, this is one way to make it so that during payload creation the user only sees the commands associated with the kind of payload they're trying to make. The spawn_and_injectable variable helps provide some quality of life to operators in case they try to inject a command like "exit" or "cd" into a remote process, which doesn't really make sense.
There are a bunch of features within the Mythic UI that look for specific agent functions to call. Historically, this was indicated with a series of boolean values - is_exit, is_process_list, etc. That was fine initially, but it doesn't make it easy to expand. So, all of these are now grouped into a new attribute called supported_ui_features in a way that's more explicit about where a function will be referenced (for example, the old is_exit flag becomes an entry in this list, as shown below). This also more easily allows us to expand this going forward for future UI elements and allows us to eventually leverage a single command in multiple areas more explicitly. If none of those is_* attributes are True, then you don't even need to supply the supported_ui_features attribute.
This now allows us to go from the old boolean style to the new explicit style, which is way more descriptive about what and where the command is used.
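Sketched below for an exit command - the exact feature string is an assumption based on where the UI uses it, so check the current agent repos for the strings your Mythic version expects:

```python
# old style
# is_exit = True

# new style - maps this command to the callback table's "exit" action in the UI
supported_ui_features = ["callback_table:exit"]
```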
The create_tasking function got a few updates as well. Firstly, all of the calls for RPC functionality have changed. It used to be the case that you had to know which RPC function you wanted to execute, which RPC file had it, and all of the parameters for it. This gets complicated fast, and there isn't always a clear location for a function. For example, where would a search_database function go?
To help with this, there is now a single RPC import, from mythic_payloadtype_container.MythicRPC import *, that has all of the functionality within it. Every function call takes the same general form.
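In sketch form, where function_name and the keyword arguments are placeholders for whatever the specific RPC function expects:

```python
# inside create_tasking (or any of the other command functions)
response = await MythicRPC().execute("function_name", task_id=task.id, some_argument="some value")
```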
You can programmatically get all of the available functions, their prototypes, and their docstrings by running the get_functions function. More information and all of the current 2.2.2 functions can be found on the MythicRPC page. This will query the Mythic server for all of the functions that are exposed via RPC and give detailed information about them.
For each task, you can now track the stdout and stderr for later reference; simply set them on the task during create_tasking.
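A sketch, assuming the MythicTask object exposes stdout and stderr attributes directly:

```python
# inside your CommandBase subclass
async def create_tasking(self, task: MythicTask) -> MythicTask:
    # ... do any compilation or analysis work here ...
    task.stdout = "full compiler output kept for later review"
    task.stderr = "any warnings or errors worth keeping"
    return task
```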
Then, you can see this via the UI when you select a task's status. This is helpful when you're running commands that might result in additional compilation or analysis (such as a load command creating new modules).
For the agent and for Mythic, using structured arguments and output is extremely beneficial. However, from the operator standpoint, this is hard to display, hard to glance at quickly, and quickly bloats the screen. To help with this, tasks can set their own display_params.
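For example, inside create_tasking (a sketch; the summary string is whatever makes sense for your command):

```python
# replace the raw JSON arguments the operator typed with a short human-readable summary
task.display_params = "config.json -> C:\\Users\\Public\\config.json"
```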
Then, once your create_tasking function returns, the commandline information that the operator sees is updated to this new custom value. If you select the status for the task, you can choose to "view all parameters" - this will create a new popup showing the parameters at three different stages.
The process_response function is now finally callable within the Command files. The point of this function is to have a custom, programmatic execution based on the output of a command. When reporting data back from your agent, in your post_response array (the same place you'd set user_output), you can specify process_response with whatever data you want. This is then wrapped up with the task information into an AgentResponse object with two attributes:
task - this is just the same task information you get in create_tasking
response - this is the data you put in the process_response key.
Below is an example of using this to update a callback's sleep information.
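The update_callback RPC name and its sleep_info keyword here are assumptions; confirm them against the get_functions output for your version:

```python
# inside your CommandBase subclass
async def process_response(self, response: AgentResponse):
    # the agent put its current sleep information in the process_response key;
    # store it on the callback so the UI's table view can show it
    await MythicRPC().execute("update_callback",
                              task_id=response.task.id,
                              sleep_info=response.response)
```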
This function also has access to all of the same RPC functionality that create_tasking has. Here we're simply updating the callback's sleep_info with the result of the response. This allows us to programmatically update it based on whether the agent was successful or not, without requiring Mythic to expose a custom response attribute for this field.
To go along with the sub-tasking and task callback functionality, you might run into scenarios where you want to provide some sort of dynamic check or execution within the Command file, but don't want it to actually be a task that gets sent down to the agent. You can now specify script_only=True in your Command file and this will be exactly the case.
Commands that are marked as script_only=True will NOT appear when you go to build a payload because these commands are transparent to your agent. They WILL appear in the type hints when you start typing commands though, and they appear in the list of full available commands when you view metadata about a callback.
You might be wondering why this is useful. Consider the following: psexec is a command you want to implement in your agent, but you implement this functionality manually rather than simply running the Microsoft psexec binary. You can create a script_only psexec command that, when a user types it, will spin off further sub-tasks to check that the computer is reachable, that the port is open, and that you have access, then copy over the file, create the service, and clean it all up. The psexec command in that case is kind of like a conductor that controls all the other tasks, switches based on success/error for each sub-task, and ultimately accomplishes the goal dynamically without needing the command itself to be a specific compiled task in your agent.
The RPC functionality within Mythic as of 2.2.7 is more dynamic, allowing more functionality to be added to the back-end and automatically used by the Payload Type containers without requiring new PyPi packages or new Docker images. To facilitate this, you can always query the latest available RPC functionality from within your tasking.
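Assuming get_functions is exposed on the MythicRPC helper, that looks like:

```python
# inside create_tasking - dump the prototypes and docstrings of every RPC function the server exposes
resp = await MythicRPC().get_functions()
print(resp.response)
```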
That will print out all of the information for the available functions if you're ever in doubt about what's available or how to call the functions. When you find a function you want to call, you invoke it through the same execute helper.
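For example, re-using the get_processes call from the OPSEC section above:

```python
# inside create_tasking
processes = await MythicRPC().execute("get_processes", task_id=task.id)
if processes.status == MythicStatus.Success:
    print(processes.response)
```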
where you always call await MythicRPC().execute with the first parameter being the name of the function to call and all of the other arguments being passed in like normal function arguments.
The function set is moving towards a standard nomenclature - create_* for when you want to create/register/add something to the database, get_* for when you want to fetch something from the database, and delete_* for when you want to remove something from the database or mark it as deleted. The current set of functionality for 2.2.8 is as follows: