Unlocking the Doors: Navigating AWS ECS Containers with the ECS Exec Command
A versatile tool for seamless interactive shell integration in secure, isolated environments
In this post I will showcase a small utility that has proven useful across many projects for quickly gaining an interactive shell in secure, isolated environments. These environments are heavily restricted and monitored, with a strong emphasis on having all access audited and tightly controlled through a central authentication and authorization tool; for these cases, I have steered towards AWS Identity and Access Management (IAM).
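To give a concrete flavour of how tightly such access can be scoped with IAM, here is a minimal, illustrative sketch. The account ID, policy name, and file name below are placeholders of my own, not values from any real setup; only the naming convention introduced later in this post is assumed:

# Illustrative sketch only: scope ECS Exec to tasks in one DEV cluster.
# The account ID (123456789012) and policy name are placeholders.
cat > ecs-exec-dev-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecs:ExecuteCommand",
      "Resource": "arn:aws:ecs:eu-west-1:123456789012:task/EngineerMindscape-ECS-DEV/*"
    },
    {
      "Effect": "Allow",
      "Action": ["ecs:ListTasks", "ecs:DescribeTasks"],
      "Resource": "*"
    }
  ]
}
EOF
aws iam create-policy \
  --policy-name EngineerMindscape-EcsExec-DEV \
  --policy-document file://ecs-exec-dev-policy.json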
In a previous post I went over the architecture design and how to implement an isolated network that can be accessed via AWS ECS Exec, so I will refer you to that post for an in-depth dive on how to roll it out:
With that foundational work in place, I will now cover a useful utility script which, with the help of a consistent and predictable naming convention, allows all authorized developers and support personnel to quickly access any service’s container across any environment.
Without further ado, the script:
#!/bin/bash
set -e

# Environment is the first argument passed to the script.
ENV=$1
# Service is the second argument passed to the script.
SERVICE=$2
# Container name is the third argument; defaults to the service name
# lowercased with "-" replaced by "_".
CONTAINER_NAME="${3:-$(echo "$SERVICE" | tr 'A-Z-' 'a-z_')}"
# Region for the service; defaults to eu-west-1.
REGION="${4:-eu-west-1}"

usage_string="Usage: ./ecs-exec.sh <env> <service> [<container_name>:lower(service)] [<region>:eu-west-1]"

# Check that ENV was passed.
if [ -z "$ENV" ]; then
  echo "environment not passed in."
  echo "$usage_string"
  exit 1
fi

# Check that SERVICE was passed.
if [ -z "$SERVICE" ]; then
  echo "service target is not passed in."
  echo "$usage_string"
  exit 1
fi

# Check that the required commands are available; execute-command needs
# the Session Manager plugin in addition to the AWS CLI.
for cmd in aws session-manager-plugin; do
  if ! command -v "$cmd" > /dev/null; then
    echo "Error: $cmd is not installed." >&2
    exit 1
  fi
done

CLUSTER_NAME="EngineerMindscape-ECS-$ENV"
SERVICE_NAME="EngineerMindscape-$SERVICE-$ENV"
echo "Cluster Name: $CLUSTER_NAME"
echo "Service Name: $SERVICE_NAME"
echo "Container Name: $CONTAINER_NAME"

# Grab the first task of the service.
TASK_ARN=$(aws ecs list-tasks --region "$REGION" --cluster "$CLUSTER_NAME" --service-name "$SERVICE_NAME" --query 'taskArns[0]' --output text --no-cli-pager)
if [ -z "$TASK_ARN" ] || [ "$TASK_ARN" = "None" ]; then
  echo "Error: no running tasks found for $SERVICE_NAME." >&2
  exit 1
fi
echo "Task ARN: $TASK_ARN"

# Open an interactive shell in the target container.
aws ecs execute-command --region "$REGION" --cluster "$CLUSTER_NAME" --task "$TASK_ARN" --container "$CONTAINER_NAME" --command '/bin/sh' --interactive
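One prerequisite worth restating before we move on (the previous post covers it in depth): ECS Exec must be enabled on the target service itself, or the execute-command call will fail. As a quick reminder, with the service and cluster names following the convention above, this boils down to something like:

aws ecs update-service \
  --region eu-west-1 \
  --cluster EngineerMindscape-ECS-DEV \
  --service EngineerMindscape-EFS-Util-DEV \
  --enable-execute-command \
  --force-new-deployment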
We will now discuss this script in the context of a theoretical service named EngineerMindscape-EFS-Util-DEV.
Option 1: Basic Example with all Defaults
In its simplest form, leveraging all defaults, the script can be invoked as follows:
./ecs-exec.sh DEV EFS-Util
where
DEV: The environment we are targeting, could be any existing environment such as DEV, QA, UAT, INT, PROD, etc.
EFS-Util: The name of the ECS Service we are targeting. Note that the container name must follow a strict naming convention: it must be the same as the service name, all lowercase, with “-” replaced by “_” (see the quick check below). As you may have noticed, this only works when there is a single, default container. If you have more containers, the standardization can be extended to cover sidecar containers as required. For the scope of this demo we will keep it to one container, but the script’s optional parameters leave room for one or more sidecars for flexibility and extensibility.
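You can verify the derived default yourself: the same tr expression the script uses lowercases the service name and swaps “-” for “_”, so for our theoretical service:

$ echo "EFS-Util" | tr 'A-Z-' 'a-z_'
efs_util

This is exactly the container name the basic invocation above connects to.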
Option 2: Sidecar container present
In this slightly more advanced example, we have a sidecar container, for example a Stackdriver container sending data to a custom Prometheus target:
./ecs-exec.sh DEV EFS-Util stackdriver
where the first two arguments are the same as before and
stackdriver: The exact name of the target container to connect to. This parameter can also be the name of the default, essential container within the service if, for any reason, it does not adhere to the expected naming convention.
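If you are unsure of a sidecar’s exact name, the running task can tell you. A small sketch, assuming the same cluster and service names as before:

# List the container names inside the service's first running task.
TASK_ARN=$(aws ecs list-tasks --region eu-west-1 \
  --cluster EngineerMindscape-ECS-DEV \
  --service-name EngineerMindscape-EFS-Util-DEV \
  --query 'taskArns[0]' --output text)
aws ecs describe-tasks --region eu-west-1 \
  --cluster EngineerMindscape-ECS-DEV \
  --tasks "$TASK_ARN" \
  --query 'tasks[0].containers[*].name' --output text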
Option 3: Sidecar container present in another region
In this scenario, there is a sidecar container in a service located in another region. Of course, the script can be expanded to also make the cluster name configurable, to accommodate multiple clusters in the same region or in different ones, but for now, let’s focus on the region:
./ecs-exec.sh DEV EFS-Util stackdriver us-east-1
where the first three arguments are the same as before and
us-east-1: Exact name of the AWS region to connect to.
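Note one consequence of the positional arguments: because the region comes fourth, you cannot skip the container name and rely on its default. To reach the default container in another region, spell the derived name out explicitly:

./ecs-exec.sh DEV EFS-Util efs_util us-east-1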
Here is an example of how a terminal session might look while using this:
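The echo lines below come straight from the script; the task ARN, session ID, and shell prompt are illustrative placeholders, and the exact Session Manager banner will vary with your CLI and plugin versions:

$ ./ecs-exec.sh DEV EFS-Util
Cluster Name: EngineerMindscape-ECS-DEV
Service Name: EngineerMindscape-EFS-Util-DEV
Container Name: efs_util
Task ARN: arn:aws:ecs:eu-west-1:123456789012:task/EngineerMindscape-ECS-DEV/0123456789abcdef0123456789abcdef

Starting session with SessionId: ecs-execute-command-0123456789abcdef0
/ #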
If you have read my other post, you will find an example use case where this script is used; within that post’s repository, it can be found here.
Conclusion
In this brief post, we explored a powerful utility script that streamlines access to containers within secure, isolated AWS environments. Emphasizing stringent access controls and auditing through AWS Identity and Access Management (IAM), this tool is a testament to the importance of security and efficiency in modern cloud infrastructure management. By leveraging AWS ECS Exec, the script enables authorized developers and support staff to swiftly connect to any service container, across various environments, with minimal hassle.
Whether dealing with a primary service container or navigating through sidecar containers, possibly even across different regions, this utility facilitates essential interactive shell access, thereby enhancing operational flexibility and responsiveness.
Through practical examples, from basic usage to more complex scenarios involving sidecar containers and different AWS regions, the post demonstrates the script's versatility. This utility not only exemplifies the power of standardized naming conventions but also underscores the critical role of accessible, yet secure, container management in today's cloud-centric landscapes.