I am using AWS Batch with ECS.
ECS tasks can be configured in the task definition to use the awslogs logDriver and send their logs to CloudWatch, which prevents them from taking up space on the EC2 instance. But the ECS container agent itself also runs in a Docker container, and all of that container's Docker logs are stored on the EC2 instance, which fills up the disk very quickly. Is there any way to set up a logDriver for the ECS container agent itself?
Also, the ECS agent stores its own logs in /var/log/ecs/ecs-agent.log.timestamp, which also take up a lot of space. Any idea how to redirect them to CloudWatch?
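For reference, the task-definition log configuration I mean is the standard awslogs block (the group name and region here are just example values):

```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/my-batch-job",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "ecs"
  }
}
```

This works fine for the task containers themselves; my question is about the agent's own logs.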
You can add a small script to the instance's UserData (as part of the LaunchConfiguration) that installs awslogs and configures it to ship these files. Please find a sample snippet below.
# Install awslogs and the jq JSON parser
yum install -y awslogs jq
# Inject the CloudWatch Logs configuration file contents
# ${EnvName} and ${EnvNumber} below are placeholders (e.g. substituted by
# CloudFormation); the quoted delimiter keeps the shell from expanding them.
cat > /etc/awslogs/awslogs.conf <<- 'EOF'
[general]
state_file = /var/lib/awslogs/agent-state
[/var/log/dmesg]
file = /var/log/dmesg
log_group_name = ${EnvName}-${EnvNumber}#ecs#dmesg
log_stream_name = {cluster}/{container_instance_id}
[/var/log/messages]
file = /var/log/messages
log_group_name = ${EnvName}-${EnvNumber}#ecs#messages
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %b %d %H:%M:%S
[/var/log/docker]
file = /var/log/docker
log_group_name = ${EnvName}-${EnvNumber}#ecs#docker
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%S.%f
[/var/log/ecs/ecs-init.log]
file = /var/log/ecs/ecs-init.log.*
log_group_name = ${EnvName}-${EnvNumber}#ecs#ecs-init.log
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
[/var/log/ecs/ecs-agent.log]
file = /var/log/ecs/ecs-agent.log.*
log_group_name = ${EnvName}-${EnvNumber}#ecs#ecs-agent.log
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
EOF

# Start the CloudWatch Logs agent and keep it enabled across reboots
service awslogs start
chkconfig awslogs on
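Note that `{cluster}` and `{container_instance_id}` in the stream names are plain placeholders, not shell variables: they still need to be substituted with the real values, which the ECS agent exposes through its introspection endpoint at `http://localhost:51678/v1/metadata`. A minimal sketch of that substitution, using the jq installed above (it writes a throwaway config under /tmp so it can run anywhere, and falls back to dummy values when the agent is not reachable; point `conf` at /etc/awslogs/awslogs.conf on the real instance):

```shell
# Use /etc/awslogs/awslogs.conf on the instance; /tmp is just for this sketch
conf=/tmp/awslogs.conf
printf 'log_stream_name = {cluster}/{container_instance_id}\n' > "$conf"  # stand-in for the real config

# Query the ECS agent introspection API; fall back to '{}' if the agent is not up yet
metadata=$(curl -s --max-time 2 http://localhost:51678/v1/metadata || echo '{}')
cluster=$(echo "$metadata" | jq -r '.Cluster // "default"')
# The container instance ID is the last path segment of the ContainerInstanceArn
container_instance_id=$(echo "$metadata" | jq -r '.ContainerInstanceArn // "unknown" | split("/") | .[-1]')

# Replace the placeholders in the awslogs config in place
sed -i -e "s|{cluster}|$cluster|g" \
       -e "s|{container_instance_id}|$container_instance_id|g" "$conf"
```

Since the agent may not be running yet when UserData executes, in practice this substitution is often deferred (e.g. run from an init/upstart job after the ecs service starts) rather than inline in UserData.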