Amazon ECS is a fully managed container orchestration service that helps you quickly deploy, manage, and scale containerized applications within Amazon Web Services. AWS manages the ECS control plane with operational best practices built in. Within Amazon ECS, your containers are defined in a task definition that you use to run individual tasks or tasks within a service. In this context, a service is a configuration that you can use to run and maintain a specified number of tasks simultaneously in a cluster. You can run your tasks and services on a serverless infrastructure that AWS Fargate manages. Alternatively, you can run your tasks and services on a cluster of Amazon EC2 instances that you manage for more control over your infrastructure.

Amazon ECS integrates with other AWS services. For example, you can use AWS Identity and Access Management (IAM) to grant granular permissions to your containers, route container traffic through Elastic Load Balancing, or attach an Auto Scaling policy to your ECS deployment.

Amazon ECS Components

Clusters 

An Amazon ECS cluster is a logical grouping of tasks or services. You can use clusters to isolate your applications so they don't use the same underlying infrastructure.
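For example, creating a cluster is a single API call. With the AWS CLI, a hypothetical cluster for a staging environment (the name below is a placeholder) could be created by passing this payload to aws ecs create-cluster --cli-input-json:

{
    "clusterName": "staging-cluster"
}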

Containers and images 

To run containerized applications on Amazon ECS, your application components must be packaged as containers. A container is a standardized unit of software that holds everything your application requires to run, including code, runtime, system tools, and system libraries. Containers are created from a read-only template called an image, and images are typically built from a Dockerfile, a plaintext file that specifies all of the components included in the container. Images are stored in a registry such as Docker Hub or Amazon ECR, from which Amazon ECS pulls them when launching tasks.

Task definitions 

A task definition is a text file in JSON format that describes one or more containers (up to ten) that form your application. The task definition is the blueprint for your application: it specifies parameters such as the operating system, which container images to use, which ports to open for your application, and which data volumes to use with the containers in the task.
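As an illustration, a minimal task definition might look like the following (the family name, image, and port numbers are placeholder values):

{
    "family": "web-app",
    "networkMode": "bridge",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "memory": 256,
            "cpu": 128,
            "essential": true,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 8080
                }
            ]
        }
    ]
}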

Tasks 

A task is the instantiation of a task definition within a cluster. After creating a task definition for your application within Amazon ECS, you can specify the number of tasks to run on your cluster. You can run a standalone task or run a task as part of a service.
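For example, a standalone task can be launched with aws ecs run-task. Assuming a cluster named demo-cluster and a registered task definition family named web-app (both placeholder names), the --cli-input-json payload might look like:

{
    "cluster": "demo-cluster",
    "taskDefinition": "web-app",
    "count": 2,
    "launchType": "EC2"
}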

Services

You can use an Amazon ECS service to run and maintain your desired number of tasks simultaneously in an Amazon ECS cluster. If any of your tasks fail or stop for any reason, the Amazon ECS service scheduler launches a replacement task based on your task definition, thereby maintaining the desired number of tasks in the service.
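For example, a service that keeps three copies of a task running could be created with aws ecs create-service; the cluster, service, and task definition names below are placeholders:

{
    "cluster": "demo-cluster",
    "serviceName": "web-service",
    "taskDefinition": "web-app",
    "desiredCount": 3,
    "launchType": "EC2"
}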

Container agent 

The container agent runs on each container instance within an Amazon ECS cluster. The agent sends information about the tasks currently running on the instance and its resource utilization to Amazon ECS, and it starts and stops tasks whenever it receives a request from Amazon ECS.

Monitoring and Logging in Amazon ECS

It's important to monitor your ECS infrastructure to ensure containers are launched, provisioned, and terminated as expected. You'll also want to monitor the overall health and performance of your ECS containers and virtual machines. This section discusses setting up the DataSet task definition to automatically pick up logs and metrics from your tasks running within Amazon ECS.

By default, containers use the same logging driver that the Docker daemon uses. However, you can use a different logging driver by specifying one in the logConfiguration parameter of the container definition. We recommend using Docker's default log driver, json-file; however, DataSet also supports the syslog logging driver. Also note that container logs go to stdout and stderr by default, but you can add other log locations by specifying them in the task definition.
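For example, to pin a container to the json-file driver and cap log growth, you could add a logConfiguration block like the following to the container definition (the rotation options shown are illustrative):

"logConfiguration": {
    "logDriver": "json-file",
    "options": {
        "max-size": "10m",
        "max-file": "3"
    }
}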

{
    "networkMode": "bridge",
    "taskRoleArn": null,
    "containerDefinitions": [
        {
            "memory": 500,
            "extraHosts": null,
            "dnsServers": null,
            "disableNetworking": null,
            "dnsSearchDomains": null,
            "portMappings": null,
            "hostname": null,
            "essential": true,
            "entryPoint": null,
            "mountPoints": [
                {
                    "containerPath": "/var/scalyr/docker.sock",
                    "sourceVolume": "var_run_docker_sock",
                    "readOnly": null
                },
                {
                    "containerPath": "/var/lib/docker/containers",
                    "sourceVolume": "var_lib_docker_containers",
                    "readOnly": null
                },
                {
                    "readOnly": null,
                    "containerPath": "/var/log/host/ecs",
                    "sourceVolume": "var_log_ecs"
                }
            ],
            "name": "scalyr-docker-agent",
            "ulimits": null,
            "dockerSecurityOptions": null,
            "environment": [
                {
                    "name": "scalyr_api_key",
                    "value": "Your DataSet/Scalyr Write API Keys"
                }
            ],
            "links": null,
            "workingDirectory": null,
            "readonlyRootFilesystem": null,
            "image": "scalyr/scalyr-agent-docker",
            "command": null,
            "user": null,
            "dockerLabels": null,
            "cpu": 15,
            "privileged": null,
            "memoryReservation": null
        }
    ],
    "volumes": [
        {
            "host": {
                "sourcePath": "/var/run/docker.sock"
            },
            "name": "var_run_docker_sock"
        },
        {
            "host": {
                "sourcePath": "/var/lib/docker/containers"
            },
            "name": "var_lib_docker_containers"
        },
        {
            "host": {
                "sourcePath": "/var/log/ecs"
            },
            "name": "var_log_ecs"
        }
    ],
    "family": "scalyr-agent"
}

You can either add the task definition options via the console fields or paste the above JSON into the Configure via JSON option.

What About Custom Parsers?

What if you want to use a specific parser for your application logs? In a few steps, you can configure the DataSet agent to use a custom parser:

First, create and mount an output log directory (/var/log/applicationLog in this example) and include it in the task definition of the application container.

"volumes": [
    {
        "efsVolumeConfiguration": null,
        "name": "log-vol",
        "host": {
            "sourcePath": "/var/log/applicationLog"
        },
        "dockerVolumeConfiguration": null
    }
]
    
 "mountPoints": [
    {
        "readOnly": null,
        "containerPath": "/var/log/logtest",
        "sourceVolume": "log-vol"
    }
]

Then update the task definition of the scalyr-agent to add and mount the volume you defined above.

"volumes": [
  {
    "efsVolumeConfiguration": null,
    "name": "log-vol",
    "host": {
        "sourcePath": "/var/log/applicationLog"
    },
    "dockerVolumeConfiguration": null
  } 
]
    
"mountPoints": [
  {
    "readOnly": null,
    "containerPath": "/var/scalyr/logtest",
    "sourceVolume": "log-vol"
  }
]

Finally, you can modify the Scalyr Agent configuration by updating agent.json to use a specific parser:

logs": [

  {
    "path": "/var/scalyr/logtest/app.log",
    "attributes": {
        "parser": "appLog"
    }
  }
]

You can create a custom image of the Scalyr Agent with your updated agent.json file so that these modifications persist across environment restarts. More information on creating custom images for the Scalyr Agent is available in our documentation.

Get Started with Monitoring ECS Applications

DataSet provides flexible logging and monitoring capabilities for ECS deployments.

Sign up for a free and fully-functional trial of DataSet and start to get value from your data immediately.