
Eventing

What is it?

Eventing is a way to automatically perform actions based on some "event" within Mythic. The format and flow for this was heavily modeled after GitHub Actions, so if you're familiar with that then this should come pretty easily.

Components

There are a few components that make up an eventing workflow. Let's see what happens when you click the "New Event Groups" button on the left-hand side and select a file (or multiple files) to upload:

  1. The original eventing workflow file. This is a YAML, JSON, or TOML file that contains information about the entire workflow. In Mythic's GraphQL API, each eventgroup has a file_id that points to this file so that the contents can always be fetched later. Clicking the image folder icon under "actions" will display this file in the UI.

  2. After loading the workflow file and creating an eventgroup, each step is analyzed and parsed into an eventstep associated with that event group.

  3. When an event happens and triggers a workflow, an eventgroupinstance is created to track that specific instance, along with an eventstepinstance for each step. This is how Mythic keeps all the different running pieces separate from each other. These instances are tracked in the table below.

Once you select a workflow on the left, you'll see a bunch more information appear on the right. Let's look at that top table first:

  • Author - this is the user that uploaded the workflow file

  • Created At - this is when the user uploaded the workflow file

  • Trigger - this is what "event" within Mythic will cause this workflow to execute

  • Keywords - these are optional additional words you can use to kick off this workflow. For example, maybe you want something to execute each time there's a new callback. However, as part of something else you're doing, you want to execute this workflow anyway - you can associate a keyword with the workflow and then use that to execute the workflow at any time (more on that later).

  • Context - this is additional context about the workflow to help decide if/when it should execute. For example, with the new callback trigger, you can use this context to limit which types of payloads you want to execute. A good example would be that you want to run some situational awareness commands when there's a new callback, but you only want to do it for Windows payloads. This is trigger_data in Workflow Triggers.

  • Env - this is extra, global data you can access and pass into all of the steps in a workflow. You can set this via the environment keyword.

  • Actions - this is a set of additional actions you can take on this workflow

    • If the trigger is manual or keyword, then the green play button will appear here and you can manually trigger this workflow.

    • The popout icon allows you to see the step flow graph in a bigger view

    • The file image icon allows you to see the backing file for this workflow

    • The paperclip icon allows you to upload additional files for this workflow. These files can then be referenced from within your steps (this is how, for example, you can handle issuing tasks that might need to upload files)

    • The layered square icon gives context about additional services that might need to be running for this workflow to execute successfully. If this is red (with a number), then one or more additional services are needed (such as custom functions and conditional checks), but they're offline.

  • Graph: The big graph in the middle shows all the steps associated with this workflow and their dependencies. You can right-click any step and click "View Details" to get more contextual information about each specific step within the workflow overall.

A disabled event group workflow can't run new instances, but still shows up by default in the UI. A deleted event group workflow can't create new instances and doesn't show up by default in the UI.

Operator Context (run_as)

Who do event workflow actions run as?

If you're thinking about creating an eventing workflow but are curious who these actions would run as, then you're in the right spot! Nothing should be able to take actions as you without you knowing about it. Mythic tracks everything that's happening, so if an action happens on your behalf that you didn't authorize, then that model starts to break down. Instead, there's the run_as field in the workflow file and required consent. Let's break it down.

bot

Every time an operation is created, a "bot" account is also created and added to the operation as a standard "operator". More detail on the accounts can be found here. The default operational context for a workflow is the "bot" account for the operation. However, we don't want just anybody to upload workflows to do arbitrary things within Mythic. That could get dangerous. Instead, anybody can upload the workflow files, but if the execution context is for the bot, then the admin of the operation must approve it to run.

At any point, the admin of the operation can go back and change prior approval to a deny or a deny to an approval. The last time it changed will always be tracked. Also, if the admin of the operation changes, then all of their prior approvals are removed and the new admin must re-approve them.

Since the "bot" account is a normal operator, you can apply block lists to it as well. That makes it easy to control what bots are able to execute versus what you want to require human intervention to task. Bots are always easy to identify when assigning operators because they have a robot symbol next to their name.

If you leave run_as blank or omit it entirely, then bot is the default value used.

self

A run_as value of self means that the workflow will execute under the context of the operator that uploaded it.

trigger

A run_as value of trigger means that the workflow will execute under the context of the operator that triggered it (or bot if there wasn't an explicit trigger). In this case, each operator must provide consent; the workflow will fail to run for operators that haven't.

lead

A run_as value of lead means that the workflow will execute under the context of the operation admin. Naturally, the operation admin must approve this before this can execute.

anything else

If you supply a value to run_as that's none of the above header values (bot, self, trigger, lead), then it's assumed that you're trying to run within the context of a specific operator. If the name matches an existing operator, then that operator must be part of the operation and have granted consent. If you specify the name of a bot, then the lead of the operation must grant consent first.
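
As a quick sketch of how this looks, run_as lives in the workflow file itself; assuming it sits at the top level alongside name and trigger, it would look like this (the name and trigger here are just placeholders):

name: "example workflow"
description: "example showing run_as"
trigger: manual
run_as: bot   # can be bot, self, trigger, lead, or a specific operator name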

Workflow Triggers

What are they?

Triggers are the "events" that kick off your workflow. They typically involve something "happening" within Mythic's sphere of influence and sometimes allow you to add some additional context via trigger_data.

trigger_data isn't set in stone and can be expanded upon over time. If you have additional ideas for trigger data, let me know!

Trigger Options

  • manual - This workflow is triggered manually in the UI via the green run icon.

    • trigger_data - N/A

  • keyword - This workflow is triggered by a keyword and optional dictionary of contextual data.

    • trigger_data - dictionary of any extra data you want to send along. Typically, a keyword is an extra way of triggering a workflow that's normally triggered some other way; in that case, you should pass along in trigger_data whatever your workflow normally expects.

  • mythic_start - This workflow is triggered when Mythic starts.

    • trigger_data - N/A

  • cron - This workflow is triggered on a cron schedule.

    • trigger_data - Dictionary with the following keys:

      • cron - a normal cron string indicating when you want to execute this workflow (https://crontab.guru/ is a handy reference for cron strings). See the example after this list.

  • payload_build_start - This workflow is triggered when a Payload first starts being built.

    • trigger_data - Dictionary with the following keys:

      • payload_types - a list of all the payload types where you want this to trigger. If you don't specify any, then it will trigger for all payload types.

  • payload_build_finish - This workflow is triggered when a Payload finishes being built (either successfully or with an error).

    • trigger_data - Dictionary with the following keys:

      • payload_types - a list of all the payload types where you want this to trigger. If you don't specify any, then it will trigger for all payload types.

  • task_create - This workflow is triggered when a Task is first created and sent for preprocessing.

    • trigger_data - N/A

  • task_start - This workflow is triggered when a Task is picked up by an agent to start executing.

    • trigger_data - N/A

  • task_finish - This workflow is triggered when a Task finishes (at any point in the task lifecycle) either successfully or with an error.

    • trigger_data - N/A

  • user_output - This workflow is triggered when a Task returns new output in the user_output field for the user to see in the UI.

    • trigger_data - N/A

  • file_download - This workflow is triggered when a file finishes downloading from a callback.

    • trigger_data - N/A

  • file_upload - This workflow is triggered when a file finishes uploading to Mythic.

    • trigger_data - N/A

  • screenshot - This workflow is triggered when a screenshot finishes downloading from a callback.

    • trigger_data - N/A

  • alert - This workflow is triggered when an agent sends an alert back to Mythic.

    • trigger_data - N/A

  • callback_new - This workflow is triggered when a new callback is created.

    • trigger_data - Dictionary with the following keys:

      • payload_types - a list of all the payload types where you want this to trigger. If you don't specify any, then it will trigger for all payload types.

  • task_intercept - This workflow is triggered after a Task finishes its opsec_post check to allow one more chance for a task to be blocked.

    • trigger_data - N/A

  • response_intercept - This workflow is triggered when a Task returns new output in the user_output field for the user to see in the UI, but first passes that output to this workflow for modification before saving it in the database.

    • trigger_data - N/A
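
As an example of supplying trigger_data, here's the cron example referenced above - a minimal sketch of a cron-triggered workflow. The container and function names in the step are hypothetical and would need to match an event container you've created or installed:

name: "scheduled check"
description: "run a custom function every weekday morning"
trigger: cron
trigger_data:
  cron: "0 9 * * 1-5"   # 9:00 AM, Monday through Friday
steps:
  - name: "morning check"
    description: "kick off a custom function on a schedule"
    action: custom_function
    action_data:
      container_name: "my_event_container"   # hypothetical container name
      function_name: "morning_check"          # hypothetical function name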

Steps

What are they?

An eventgroup workflow has a trigger that causes the entire workflow to start. Once it starts, it's the individual steps that perform actions. Once a step is complete, the next step(s) start based on which other steps they depend on. Each step has the following:

  • name - string - a unique name within the eventgroup workflow

  • description - string - a description of what the step is for

  • depends_on - array of strings - a list of other step names that this step needs to complete first before this step can start

  • action - string - the specific action to take for this step

  • action_data - dictionary - per-action specific data that is necessary for that step (e.g., payload configuration information for creating a payload, or a command name and parameters for issuing a task).

  • environment - dictionary - per-step unique environment information you want available during step execution

  • inputs - dictionary - the inputs that are going into the step that come from things outside of your knowledge when you first upload the workflow.

  • outputs - dictionary - the data you want to export from this step so that it's available to be used as part of the inputs of another step

  • continue_on_error - a boolean - indicates if you want to continue onto the next step if this step fails. Normally, if a step hits an error case then the entire workflow is cancelled from then on.
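
Putting those fields together, here's a minimal sketch of two dependent steps, assuming a callback_new trigger so that env.display_id is available (inputs and outputs are covered in more detail below; the commands are just placeholders):

steps:
  - name: "run whoami"
    description: "issue whoami on the new callback"
    inputs:
      CALLBACK_ID: env.display_id
    action: task_create
    action_data:
      callback_display_id: CALLBACK_ID
      command_name: shell
      params: whoami
    continue_on_error: true   # keep going even if this task errors
  - name: "run ifconfig"
    description: "only starts after the whoami step completes"
    depends_on:
    - "run whoami"
    inputs:
      CALLBACK_ID: env.display_id
    action: task_create
    action_data:
      callback_display_id: CALLBACK_ID
      command_name: shell
      params: ifconfig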

inputs

Each step can define inputs that are necessary for the step but that may or may not be known at the time the eventgroup workflow is uploaded to Mythic. The point of inputs is to swap out sections of your action_data based on outside factors. A simple example: you have a workflow that triggers on new callbacks, and you want to issue a new task to that callback. In order to issue that task, you need to know which callback was just created.

Inputs are a dictionary where the key is the placeholder that's swapped out in the action_data (or made available in actions like custom_function) and the value indicates where the data comes from. Let's take an example to make that clearer:

inputs example

name: "whoami on new callbacks"
description: "automatically issue whoami on new callbacks"
trigger: callback_new
trigger_data:
  payload_types:
    - poseidon
keywords:
  - poseidon_callback

steps:
  - name: "issue whoami"
    description:
    inputs:
      CALLBACK_ID: env.display_id
      API: mythic.apitoken
      COMMAND: shell
      RUBEUS_FILE_ID: upload.Rubeus.exe
    action: task_create
    action_data:
      callback_display_id: CALLBACK_ID
      params: whoami
      command_name: COMMAND

In the above example we have an eventgroup workflow called whoami on new callbacks, which, as you might expect, wants to automatically issue whoami on new callbacks. This is triggered based on callback_new, but limited to only the poseidon payload type. It can also be triggered via keywords using the poseidon_callback keyword (more on that later). This workflow has one step called issue whoami that takes 4 inputs, has an action of task_create, and has some action_data.

input value keywords

The inputs dictionary has keys on the left (here they're all caps, but it doesn't matter) and on the right are some specially formatted strings. You'll notice that most are in the form X.Y. Mythic checks these and determines if the X is a special keyword. The possible keywords are:

  • env - fetch the named value from the environment (including information that triggered the workflow). In the above example, we fetch display_id from the callback information that triggered the workflow and store it in the CALLBACK_ID variable for use in action_data.

    • If you're ever curious what data is automatically available to you in this field, you can trigger the event, then right click one of the steps and click "View Details". The first dropdown, "Original and Instance Metadata", will show inputs, outputs, and environment data.

  • upload - this allows you to specify the name of a file that has been uploaded to Mythic. The resulting value is the agent_file_id UUID value for that file or "" if it doesn't exist.

  • download - this allows you to specify the name of a file that was downloaded to Mythic. The resulting value is the agent_file_id UUID value for that file, or "" if it doesn't exist.

  • workflow - this allows you to specify the name of a file that was uploaded as part of the workflow. You can see these files and upload/remove them by clicking the paperclip icon in the actions column when viewing the eventgroup workflow.

  • mythic - this allows you to get various pieces of information from Mythic that you can't get elsewhere. This will expand over time, but currently the only options for after the . are the following:

    • apitoken - this will generate an APIToken that you can use for this step to interact with Mythic's GraphQL scripting. This APIToken will be invalid after the step completes, so plan accordingly. Any actions taken with this APIToken are tracked and available by right clicking an instance of a step that uses this and viewing details.

  • anything else - if your input is of the format X.Y but doesn't match one of the above keywords, then Mythic checks if X matches the name of a step. If so, it looks for an output key in that step that matches Y. If that exists, then Mythic swaps it out. If either of those conditions isn't met, then the static value is used as-is.

After resolving the values for the various inputs keys, the corresponding data is replaced within the action_data - for example, CALLBACK_ID from our inputs matches the CALLBACK_ID in the action_data, so it gets swapped out. Same with COMMAND.

Actions

  • payload_create - This action allows you to start building a payload. This action is over once the payload finishes (success or error).

    • action_data - a dictionary of the data used to create a new payload. This is the same sort of data you get when you click to "export" a payload's configuration on the payloads page.

      • description - the description for the new payload

      • payload_type - the name of the payload type for the payload

      • selected_os - the name of the selected operating system

      • filename - the name of the file you want at the end

      • wrapped_payload - if you're wrapping another payload, specify the wrapped payload's UUID here

      • c2_profiles - an array of c2 profile data which is as follows:

        • c2_profile - the name of the c2 profile

        • c2_profile_parameters - a dictionary of the parameters for the c2 profile in key-value format

      • build_parameters - an array of build parameter dictionary values as follows:

        • name - the name of the build parameter

        • value - the value of the build parameter

    ex:

    steps:
      - name: "apollo bin"
        description: "generate shellcode"
        action: "payload_create"
        action_data:
          payload_type: "apollo"
          description: "apollo test payload shellcode"
          selected_os: "Windows"
          build_parameters:
          - name: "output_type"
            value: "Shellcode"
          filename: "apollo.bin"
          c2_profiles:
          - c2_profile: "websocket"
            c2_profile_parameters:
              AESPSK: "aes256_hmac"
              callback_host: "ws://192.168.0.118"
              tasking_type: "Push"
          commands:
          - shell
          - exit
          - load
        outputs:
          PayloadUUID: "uuid"
        environment:
      - name: "apollo service"
        description: "service exe with apollo shellcode"
        action: "payload_create"
        inputs:
          WRAPPER_UUID: "apollo bin.PayloadUUID"
        depends_on:
        - "bin opsec checker"
        action_data:
          payload_type: "service_wrapper"
          description: "apollo service exe"
          selected_os: "Windows"
          build_parameters:
          - name: "version"
            value: "4.0"
          - name: "arch"
            value: "x64"
          filename: "apollo_service.exe"
          wrapped_payload: WRAPPER_UUID
        outputs:
          PayloadUUID: "uuid"
  • callback_create - This action allows you to create a new callback.

    • action_data - a dictionary of data used to create a new callback:

      • payload_uuid - the payload UUID that was used to create this callback

      • c2_profile - the name of the c2 profile that was used to "create" this callback

      • encryption_key - base64 bytes of an optional encryption key to use

      • decryption_key - base64 bytes of an optional decryption key to use

      • crypto_type - string type of crypto to use (just like you'd select from the dropdown menu when generating a payload) if you want to use something other than what was selected for your payload/c2 profile.

      • user - the username for the callback context

      • host - the hostname of the computer where the callback is executing

      • pid - the pid of the callback process

      • extra_info - a string of any extra information you want to save/store about this callback

      • sleep_info - a string of extra information you want to save/store about the sleep context for this callback (interval, jitter, skew, etc)

      • ip - the ip of the callback (if you only collect one)

      • ips - an array of ip strings

      • external_ip - if you know your external ip and want to supply it

      • os - specific os information about the host

      • domain - the domain name associated with the host

      • architecture - the architecture of the process

      • description - a custom description for the callback

      • process_name - the name of the binary that's backing your process execution
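
    A minimal sketch of what a callback_create step could look like; the payload UUID here comes from a prior payload_create step's output, and the user/host/ip/pid values are just placeholders:

    steps:
      - name: "register callback"
        description: "manually register a callback for a newly built payload"
        action: "callback_create"
        inputs:
          PAYLOAD_UUID: "apollo bin.PayloadUUID"   # output from a prior payload_create step
        action_data:
          payload_uuid: PAYLOAD_UUID
          c2_profile: "websocket"
          user: "placeholder-user"      # placeholder value
          host: "PLACEHOLDER-HOST"      # placeholder value
          ip: "127.0.0.1"               # placeholder value
          pid: 1234                     # placeholder value
          description: "callback created via eventing"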

  • task_create - This action allows you to create a new task. This action is over once the task finishes (success or error).

    • action_data - a dictionary of the data used to create a new task:

      • callback_display_id - the display id of the callback to task

      • command_name - the name of the command you want to issue

      • payload_type - if you're leveraging a command_augment container and the command name isn't unique, then specify which payload_type you are referring to here.

      • params - if you just have a string of parameters to supply, you can provide that here

      • params_dictionary - if your command optionally supports providing structured data, provide a dictionary here of your parameter names -> values

      • parameter_group_name - if you are using parameter groups and want to explicitly say which one to use, put that here

      • token - if you're using a token associated with the callback, specify that here. If you don't put the token id then you won't use a token at all

      • parent_task_id - if you're wanting to issue a task as a subtask of another task, specify the ID of the parent task here

      • is_interactive_task - if the task you're issuing is part of an ongoing interactive task, specify that here (and be sure to specify the parent_task_id too, which would be the one that started the interactive tasking session)

      • interactive_task_type - if this is an interactive task, specify what kind of task input it is as an int value

    ex:

    action_data:
      callback_display_id: CALLBACK_ID
      params: whoami
      command_name: COMMAND
  • custom_function - This action allows you to execute a custom function within a custom event container that you create or install. The benefit here is that you can get access to the entirety of Mythic's Scripting and GraphQL API by using an input of mythic.apitoken. This allows you to do almost anything.

    • action_data - dictionary with the following two things. As part of the function execution, you get access to the environment, inputs, and action data, so there's plenty of ways to get additional context and information for your custom function.

      • container_name - the name of the container that has the custom function you want to execute

      • function_name - the name of the function you want to execute within that container
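
    A minimal sketch of a custom_function step; the container and function names here are hypothetical and would need to match an event container you've created or installed:

    steps:
      - name: "post process"
        description: "run custom logic with access to Mythic's GraphQL API"
        action: "custom_function"
        inputs:
          API: mythic.apitoken   # short-lived API token for this step
        action_data:
          container_name: "my_event_container"   # hypothetical container name
          function_name: "post_process"           # hypothetical function name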

  • conditional_check - This action allows you to run a custom function within a custom event container that you create or install, with the purpose of identifying whether certain steps should be skipped. Normally, if a step hits an error then the entire workflow is cancelled, but you can use a conditional_check to run custom code that determines whether potentially problematic steps should be skipped instead.

    • action_data - dictionary with the following three things. As part of the function execution, you get access to the environment, inputs, and action data, so there's plenty of ways to get additional context and information for your custom function.

      • container_name - the name of the container that has the custom conditional check you want to execute

      • function_name - the name of the conditional check you want to execute within that container

      • steps - an array of step names that you want to potentially skip based on the result of this function call
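
    A minimal sketch of a conditional_check step; the container and function names are hypothetical, and "apollo service" would be a later step in the same workflow that gets skipped based on the result of the check:

    steps:
      - name: "bin opsec checker"
        description: "decide whether the wrapper payload should be built"
        action: "conditional_check"
        action_data:
          container_name: "my_event_container"     # hypothetical container name
          function_name: "should_build_wrapper"    # hypothetical function name
          steps:
          - "apollo service"   # step to potentially skip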

  • task_intercept - This allows you to intercept a task after the task's opsec_post function finishes to have one final opportunity to block the task from executing. This can only be used in conjunction with the task_intercept trigger. You can have additional steps as part of the workflow, but one step must be task_intercept if you have a trigger of task_intercept.

    • action_data - dictionary with the following. You get access to the environment, inputs, action data, and task id so that you can fetch more information about the task and use GraphQL to perform any additional actions you might need.

      • container_name - the name of the container that has the task intercept function you want to execute. There can only be one per container, so you don't need to specify a function name.
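
    A minimal sketch of a task_intercept workflow; the container name is hypothetical. Note that the trigger and the step action must both be task_intercept:

    name: "task interception"
    description: "final opsec gate before tasks execute"
    trigger: task_intercept
    steps:
      - name: "final opsec gate"
        description: "last chance to block a task before it executes"
        action: "task_intercept"
        action_data:
          container_name: "my_event_container"   # hypothetical container name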

  • response_intercept - This allows you to intercept the user_output response of an agent before it goes to the Mythic UI for an operator. Just like with task_intercept, this must exist if you have a trigger of response_intercept and can't be used if that's not the trigger. This allows you to get access to the output that the agent returned, then modify it before forwarding it along to Mythic. This means you can modify the response (add context, change values, etc) before the user ever sees it. Tasks with output that's been intercepted will have a special symbol next to them in the UI.

    • action_data - a dictionary with the following. You get access to the environment, inputs, action data, and response id so that you can fetch more information about the response (and thus task) to use with GraphQL to perform any additional actions you might need.

      • container_name - the name of the container that has the response intercept function you want to execute. There can only be one per container, so you don't need to specify a function name.
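
    A minimal sketch of a response_intercept workflow; the container name is hypothetical. As with task_intercept, the trigger and the step action must match:

    name: "modify agent output"
    description: "adjust user_output before operators see it"
    trigger: response_intercept
    steps:
      - name: "intercept output"
        description: "modify agent output before it's saved to the database"
        action: "response_intercept"
        action_data:
          container_name: "my_event_container"   # hypothetical container name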

outputs

Outputs from a step allow you to expose something from one step to another step. Say, for example, that you have a step that creates a new payload, and you want to use that new payload's UUID as input for the next step, which creates a wrapper around that payload. You need some way to expose that specific UUID (there might be multiple if you're creating multiple payloads). This is where outputs come into play. We already saw this in the inputs section (the "anything else" case), where prior steps' outputs can be examined to find values to use as inputs to later steps.

When Mythic processes outputs for a step, it loops through certain data depending on the action:

  • payload_create - Currently, the following are available as keyword outputs:

    • uuid - this returns the UUID of the payload that was created

    • build_phase - this returns the string build phase of the payload that was created

    • id - this returns the ID of the payload that was created

  • task_create - Currently, the following are available as keyword outputs:

    • status - the string status of the task

    • params - the string parameters that were passed down to the agent

    • original_params - the original parameters that were supplied as a string

    • command_name - the name of the command that was sent down to the agent (if any)

    • display_id - the display id integer of the task that was created

    • token - the token id associated with the task if any

    • parent_task_id - the id of the parent task associated with this task (if any)

    • agent_task_id - the UUID of the task that the agent would see

  • callback_create - Currently, the following are available as keyword outputs:

    • display_id - the display ID of the callback that was created

    • agent_callback_id - the UUID of the callback that was created that agents see

    • id - the int ID of the callback that was created

Everything else that's returned by a step in "outputs" is unmodified. Let's take an example:

steps:
  - name: "apollo bin"
    description: "generate shellcode"
    action: "payload_create"
    action_data:
      payload_type: "apollo"
      description: "apollo test payload shellcode"
      selected_os: "Windows"
      build_parameters:
      - name: "output_type"
        value: "Shellcode"
      filename: "apollo.bin"
      c2_profiles:
      - c2_profile: "websocket"
        c2_profile_parameters:
          AESPSK: "aes256_hmac"
          callback_host: "ws://192.168.0.118"
          tasking_type: "Push"
      commands:
      - shell
      - exit
      - load
    outputs:
      PayloadUUID: "uuid"
      Whatever: "my thing"

In the above example, we have an action of payload_create, which means we have a few output keywords that can be replaced. In our outputs dictionary, we have PayloadUUID that should have the value of uuid (this is one of our keywords specifically for payload_create), so this value is swapped out with the UUID of the payload we created. We have another output, Whatever, that has a value of my thing (this isn't one of our keywords, so it's left as is). This means another step that comes after this one can do something like this:

    inputs:
      WrapperPayloadUUID: "apollo bin.PayloadUUID"
      APIToken: "mythic.apitoken"

Notice we have an input into a step called WrapperPayloadUUID that needs the value of the PayloadUUID output from the step apollo bin. We also have a special scenario where we get the mythic.apitoken (a special API token that'll only exist for this step).