Payload Type Development
This section describes how to develop new Payload Types for Apfell.
Creating a new Apfell agent
You want to create a new agent that fully integrates with Apfell: Apfell builds the code, and the Docker containers for the agent are controlled by Apfell. What does that entail? It's not too crazy; just a few pieces are needed:
Agent base code that's command agnostic, commands, C2 profile information
This allows Apfell to piece everything together at build time
An Apfell container for the payload type that's taskable
This can be done locally or with an external VM/host machine
Transforms that'll turn your code into the final product
Optionally: A custom C2 profile that this agent supports, see C2 Profile Development for that piece though
Registering the agent in the UI
The first step is to register the agent in the web user interface.
Go to "Manage Operations" -> "Payload Management".
Click "+ Payload Type"
The "name" of the agent will be how you identify the agent, the corresponding docker container, and paths within the system.
The "payload file extension" is the file extension of the files that make up the agent. For example, the apfell-jxa agent uses JavaScript files (.js) and the viper agent uses (.py) files. So, here you'd specify things like js, py, c, go, etc (without the leading
.
).The "supported os" field isn't necessary, but provides context for the user
Leave the two toggles unchecked. This agent is not a wrapper and is being created within Apfell.
The "Select payload code files to upload" button allows you to upload base agent code for the agent.
Arbitrary folder structures are now available when editing commands and payload types, so if you need code within specific folders, wait until the payload type is registered, then go through and add folders and upload files to the specific folders. Any code added at this point will be added to the root directory of your agent, i.e. apfell-docker/app/payloads/{name}/payload/{uploading here now}.
The rest of the information is explained here and isn't strictly necessary
For information about the actual agent code, see the Agent Code section.
You should now see the new payload type appear in the area at the bottom. Initially the light will be green, but after 30 seconds it'll flash red to indicate that there's no heartbeat happening from the docker container (which doesn't exist yet). So, let's do that next.
Creating a docker container for your new agent
There are docker containers for each payload type that are customized build environments. This allows two payload types that might share the same language to still have different environment variables, build paths, tools installed, etc. There are only two requirements for the containers:
They must have python 3.6 installed and in the environment path
They must have the same directory structure and entry point as the base payload_type container
These two requirements make sure that the container can connect to Apfell, receive tasking, and execute the python 3.6 based transforms.
This part has to happen outside of the web UI at the moment. The web UI is running in its own docker container, and as such, can't create, start, or stop entire docker containers. So, make sure this part is done via CLI access to where Apfell is installed.
Within Apfell/Payload_Types/, make a new folder that matches the name of your agent. Inside this folder, make a file called Dockerfile. This is where you have a choice to make: either use the default Payload Type docker container as a starting point and make your additions from there, or use a different container base.
Using the default container base
The default container base can be seen at Apfell/Payload_Types/Dockerfile. It's a python:3.6-jessie container that is pretty bare bones except for python 3.6. Start your Dockerfile off with:
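The exact first line depends on how your Apfell install tags the default base image; the tag below is an assumption rather than something confirmed by this guide:

```dockerfile
# Build on top of the default payload_type base container.
# "apfell_payload" is an assumed tag; substitute whatever your install names the
# image built from Apfell/Payload_Types/Dockerfile.
FROM apfell_payload
```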
On the next lines, just add in any extra things you need for your agent to properly build, such as:
This all happens in a script for Docker, so if a command might prompt for input (like apt-get does), make sure to handle that automatically or your tools won't get installed properly.
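For example, here is a hedged sketch of installing extra build dependencies non-interactively; the packages shown are placeholders for whatever your agent actually needs:

```dockerfile
# -y auto-answers apt-get's prompts and DEBIAN_FRONTEND suppresses interactive config dialogs
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y git zip golang
```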
If you want to copy a file into the container, place the file in your folder, Apfell/Payload_Types/{payload type name}/file_to_copy, and copy it in with the following line:
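A hedged example; the destination path inside the container is illustrative:

```dockerfile
# file_to_copy sits next to this Dockerfile in Apfell/Payload_Types/{payload type name}/
COPY ["file_to_copy", "/opt/file_to_copy"]
```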
The file system within the container is separate from the host OS, so make sure to copy in anything you need.
Using your own container
There isn't too much that's different if you're going to use your own container; it's just on you to make sure that python3.6 is up and running and the entrypoint is set properly. Here's an example from the Poseidon container that uses xgo to build:
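The real Poseidon Dockerfile is longer; the abbreviated sketch below only illustrates the idea, and the base image tag, python install commands, and destination paths are assumptions rather than the actual file:

```dockerfile
FROM karalabe/xgo-latest

# get python 3.6 and pip onto the image (exact commands depend on the base distro)
RUN apt-get update && \
    apt-get install -y --no-install-recommends software-properties-common curl && \
    add-apt-repository -y ppa:deadsnakes/ppa && \
    apt-get update && \
    apt-get install -y --no-install-recommends python3.6 && \
    curl -sSL https://bootstrap.pypa.io/pip/3.6/get-pip.py | python3.6

# aio_pika is what the heartbeat/service scripts use to talk to Apfell's rabbitmq
RUN python3.6 -m pip install aio_pika

# mirror the directory structure and entrypoint of the default payload_type container
COPY ["apfell_heartbeat.py", "/Apfell/apfell_heartbeat.py"]
COPY ["apfell_service.py", "/Apfell/apfell_service.py"]
COPY ["payload_service.sh", "/Apfell/payload_service.sh"]
RUN chmod +x /Apfell/payload_service.sh
ENTRYPOINT ["/Apfell/payload_service.sh"]
```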
Notice that it installs python3.6, sets it up correctly, installs the required packages (aio_pika) for Apfell, and gets the directory structure all set up like the default container template does. In this case, you also need to copy the apfell_heartbeat.py, apfell_service.py, and payload_service.sh files into your Apfell/Payload_Types/{payload type name}/ directory so that they can be copied in correctly.
Starting your Docker container
To start your new payload type docker container, use the ./start_payload_types.sh script and supply the name of your payload type. For example, to start just the viper payload type container, run:
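From the main Apfell install directory (sudo is assumed here, just like with the other helper scripts):

```bash
sudo ./start_payload_types.sh viper
```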
Your container should pull down all the necessary files, run the necessary scripts, and finally start. If it started successfully, you should see it listed in the payload types section when you run sudo ./status_check.sh.
If you go back to the web UI at this point, you should see the red light next to your payload type change to green to indicate that it's now getting heartbeats. If it's not, then something went wrong along the way. You can use sudo ./display_output.sh payload_type_name to see the output from the container and potentially troubleshoot.
If you want to go interactively within the container to see what's up, use the following command:
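A typical way to do this, assuming the container is named after your payload type:

```bash
# open an interactive bash shell inside the payload type's container
sudo docker exec -it payload_type_name /bin/bash
```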
From there, you can start exploring around and seeing why it's not running.
Uploading Files
At this point, you should have your container up and running and the payload type registered within the system. Now, if you want to upload more files relating to your base agent, go back to "Manage Operations" -> "Payload Management", click the yellow edit dropdown at the bottom of your payload type, and click "Edit Info and Files". You'll see the file section look something like:
The top section allows you to upload, download, and delete files, or even create and delete folders, that relate to the payload's code. The bottom portion, "Server Files", relates to uploading, downloading, and deleting files that relate to the docker container that does the processing for your new payload type. If you need to upload a new config, upload new files, etc., you can do that here.
If there is no associated Apfell container for the payload type or it is currently offline, then the bottom portion for "Server Files" will not show up since it is not taskable.
Registering Commands
From the "Manage Operations" -> "Payload Management" page, you can start adding commands to your payload type. You don't technically have to upload code with these commands, but the command has to be registered here other the main tasking UI won't allow you to issue the command to your agent.
At the bottom of the screen for the payload type you just registered, you'll see something like the above screenshot. Click the green "+" to add a command, the yellow edit to edit a command, or the red trash button to delete commands.
The main information about all the pieces is already captured in the Understanding Commands section of this documentation. You can upload any files you want when clicking the green "+", but they will all be at the base path of apfell-docker/app/payloads/{payload type name}/commands/{command name}/{uploads go here}. If you want things in a different folder structure, you will need to do that from the edit button after the command has been registered. If you just type code into the visual editor, then that will be saved as commandName.payload_type_extension. So a command of test with a payload type extension of py would result in a test.py file being created.
When you click to edit a command, the display will look like:
Notice that there's nothing in the code editor piece. To edit a specific file, click the blue edit button next to the file you want to edit. This will bring that file's code to the code editing portion:
From here, make any changes you want and click the green save icon. That'll save the code back for that file. To edit another file, just click the blue edit button for that file and the top portion will be replaced.
C2 profile associations
Now that you have the payload type registered, commands registered, and the associated container running, the next step is to associate your payload type with the C2 profiles that it understands. In the "Manage Operations" -> "C2 Profile Management" page, you'll see all of the C2 profiles.
The point here is that the C2 communications piece of an agent isn't tied into the commands or the base agent, it's a separate implementation. The interface should be the same (upload, download, get tasking, etc), but the actual end implementation shouldn't matter to the rest of your agent's code. That C2 comms piece of the code is what you're uploading here.
For each C2 profile that your agent supports, click the yellow dropdown and select "Supported Payloads". You should see your new payload type as an option. Make sure it is selected (in addition to all of the others that were already selected), and you should see a new row appear for you to upload files for your payload type. Upload the C2 comms piece here. To confirm your files uploaded, select the "Download/Delete listener files" from the same yellow dropdown, and you should see a table for your payload type with the files you uploaded.
Creating your agent with transforms
The last piece is to combine it all together into the final payload. This could mean a single script file, a compiled binary, or a series of files; it's up to you. First, decide at a high level what needs to happen with all of your files, and decide how you're identifying the final artifact you're going to call your payload.
In "Manage Operations" -> "Transform Management" is a big python file. You can edit in the browser or download, edit, and re-upload if you want. The important piece to look at is the TransformOperations
class:
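An abbreviated sketch of the shape of that class; the real file contains more methods, and everything here other than outputAsZipFolder and the documented attributes is illustrative:

```python
class TransformOperations:
    # Every transform is an async method with the same signature.
    # self.working_dir and self.saved_state are available inside each transform.

    async def outputAsZipFolder(self, prior_output, parameter):
        # zips up self.working_dir and returns the zip file's bytes
        ...

    # your own transforms live alongside the built-in ones, e.g.:
    async def myCustomTransform(self, prior_output, parameter):
        ...
```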
When you run your transforms, they'll happen sequentially, and they are all functions in this class. So, you can save your own state between transforms by simply writing to the self.saved_state dictionary in one transform and pulling it out in another.
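A small illustrative pair of transforms (the names and the 'arch' key are made up) showing state carried between steps via self.saved_state:

```python
async def recordArchitecture(self, prior_output, parameter):
    # stash the requested architecture for a later transform to use
    self.saved_state['arch'] = parameter if parameter else 'x86_64'
    return prior_output

async def compileForArchitecture(self, prior_output, parameter):
    # pull the value back out in a later step of the chain
    arch = self.saved_state.get('arch', 'x86_64')
    # ... use arch when building, e.g. as a compiler flag ...
    return prior_output
```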
Your transform code will run within the payload type Apfell container (which could be a docker container, VM, or host machine) with self.working_dir set to a newly created random folder for each instance of creating a payload. If you want to see what the final state of all your modified files is before trying to do any special transforms, simply use one transform called outputAsZipFolder. This transform just zips up the self.working_dir directory and returns it.
In this case, your final payload (regardless of what you call it) would actually be a zip file. This is helpful as you start troubleshooting so you can see how everything is getting stamped together, making sure folder structures are as you expect, etc. You can also write out debugging information or temporary files into your self.working_dir folder, and this transform will zip those up as well.
Once you're confident that your code looks and is formatted the way you want, you can start doing other transforms instead. These can be as simple or as complex as you want, can execute shell commands, leverage environment variables, call code from other files you loaded into the container, etc.
Every transform has the same general format:
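A minimal sketch of that format (the method name is whatever you register; this one is illustrative):

```python
async def my_transform(self, prior_output, parameter):
    # prior_output: whatever the previous transform in the chain returned
    # parameter:    optional string supplied for this step when creating the payload
    return prior_output  # whatever you return is fed to the next transform
```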
Since we're doing python 3.6 and asynchronous calls, the function is always async def followed by the function name. The first parameter is self, so you can access things like self.working_dir. Then there are two important parameters:
prior_output is the output of the previous transform. If you're using more than one transform, the output of one transform is provided (via this parameter) as input to the next. You don't have to use this, but it can be helpful.
parameter is an optional value provided as input to that specific step in the transform chain. It comes in as a string, but you can parse and cast it however you need to. This is helpful if you want to take additional context, such as an optional compilation flag.
The last transform in your series must return the actual contents of the payload as bytes. Whatever is returned in the final transform is what will be saved to the payload's name you specify when creating a payload. If you want to stop somewhere along the process and return an error message or information back to the operator, simply raise an exception with the message you want to show, such as raise Exception("My personal error message here").
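A hedged sketch of a final transform that combines both points: it returns the payload's raw bytes and raises an exception for the operator if something went wrong (the transform name and the agent.bin file name are made up):

```python
import os  # imported at the top of the transforms file

async def returnCompiledAgent(self, prior_output, parameter):
    binary_path = os.path.join(self.working_dir, 'agent.bin')
    if not os.path.exists(binary_path):
        # this message is what the operator sees if the build failed
        raise Exception("Compilation failed: agent.bin was not produced")
    with open(binary_path, 'rb') as f:
        return f.read()  # these bytes are saved as the final payload
```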
Creating an external agent
This is the fastest way to get up and going within Apfell as you create an agent.
In the "Payload Management" page, select to create a new Payload Type
See the Payload Types section on what the pieces mean and fill out the initial sections, being sure to toggle "Is this payload going to be created externally to Apfell" to blue. You don't need to upload any files since you're going to take care of all of that on your end.
Submit it. You should now see a new Payload Type show up on the screen with a Blue light. If it's green or red, give it about 30 seconds and it'll settle on Blue (indicating that it's an external payload type with no container associated with it)
Because there is no container, you will not be able to do Create Transforms, Load Transforms, or Command Transforms for this payload type. For more information on what those mean, check out Create and Load Transforms and Command Transforms.
Now you need to associate commands with your payload type
Click the green plus sign next to the Commands for the payload type to add a command.
Check out Understanding Commands for information about all of the pieces. You don't need to submit any code though since your agent is created externally.
In this section, focus on adding the commands and parameters that your agent will support.
Go to the "C2 Profiles Management" page
Your agent will need to "speak" the language of a C2 profile. You can either hit the RESTful endpoints directly (default c2 profile), or you can modify the mapping between the endpoints your agent hits and the actual endpoints (RESTful Patchthrough profile). For more information on how the RESTful Patchthrough profile works, see the RESTful Patchthrough section.
Select the edit dropdown for the profile your agent will speak, and select "Supported Payloads".
In addition to the payloads highlighted, also select your new one. This is a running list of all the payload types that speak this profile. Since this payload type is externally created, you don't need to upload any code.
Now you need to create a payload in the database. Normally, this process would create the actual agent, but since you're creating it externally, all this will do is register the metadata in the database for when you do create your actual agent
Go to "Create Components" -> "Create Payload"
Select the C2 profile you're going to use for this payload and fill in any necessary parameter values
Select your agent type, provide a name, and a default tag that will be used to populate the description field of any new callback based on this payload
Select the commands that you plan to include in the agent
Hit submit
You'll get back a payload UUID and a new entry in the "Payload Management" page
Use this information in your agent
Check out the Agent Code section for how to hit the different endpoints for the agent, what data is required, and what the responses look like. The main thing you need from Step 6 is the payload UUID that you will need to include in the initial checkin (Initial Checkin).
Run your agent and you should get a new callback in your UI that you can task.
Once you have basic tasking, check out Hooking Features for what kinds of information you can include in your responses to hook into more of the features of Apfell.
Turning a VM into an Apfell container
There are scenarios in which you need an Apfell container for an agent, but you can't (or don't want to) use the normal docker containers that Apfell uses. This could be for reasons like:
You have a very custom build environment that you don't want to recreate
You have specific kernel versions or operating systems you're wanting to develop with
You want interactive access to the container where the compilations/transforms are happening for debugging or greater control
So, to turn your own custom VM or physical computer into an Apfell recognized container, there are just a few steps.
Install python 3.6 in the VM
pip3 install aio_pika (this is used to communicate via rabbitmq)
Create a folder on the computer or VM (let's call this path /pathA).
Copy the following files into the /pathA folder from the main Apfell install:
/Apfell/Payload_Types/apfell_heartbeat.py
/Apfell/Payload_Types/apfell_service.py
/Apfell/Payload_Types/apfell_service.sh
/Apfell/Payload_Types/rabbitmq_config.json
Edit the rabbitmq_config.json with the parameters you need (a sketch of this file is shown after these steps):
the host value should be the IP address of the main Apfell install
the name value should be the name of the payload type (this is tied into how the routing is done within rabbitmq)
the container_files_path should be the absolute path to where you're going to host your files within the container. When you look at your container's files to upload/download/delete, it'll all be from within this path. This is also where the temporary directories and zip files will be created during payload creation.
Finally, run /Apfell_service/apfell_service.sh and now your VM should be calling back to the main Apfell server's rabbitmq service.
This script just runs the apfell_heartbeat.py file in the background and starts the apfell_service.py file.
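A hedged sketch of what the edited rabbitmq_config.json might look like; the values are placeholders, and the real file may contain additional fields beyond the three called out above:

```json
{
    "host": "192.168.53.128",
    "name": "my_external_agent",
    "container_files_path": "/pathA/container_files/"
}
```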
If you already had the corresponding payload type registered in the Apfell interface, you should now see the red light turn green. If you haven't done that yet, register a new payload type within the Apfell UI with the name matching the hostname of the VM.
When everything fully registers, the main Apfell server will send down the transforms.py file and save it to a sub-directory in the container_files_path called apfell.
Caveats
There are a few caveats to this process compared to the normal process. You're now responsible for making sure that the right python version and dependencies are installed, and that the user context everything runs under has the proper permissions.