Sharing AWS Credentials Between Users On a Docker Container
18 Jul 2015 Tags: docker and aws

TL;DR:
AWS CLI configurations and credentials are stored in the user's home directory by default (~/.aws). This becomes a problem when other users such as root, www-data, nobody, or cron jobs need access to these credentials. This post shows how to get around this in a Docker environment.
Docker is a great application containment tool for building and shipping software of any type; you can learn more about it on the official Docker site. We love and use Docker at Humanlink together with AWS Elastic Beanstalk.
Being on Elastic Beanstalk means utilizing other amazing Amazon services as well. The AWS CLI and boto (for Python developers) are the de facto tools for interacting with AWS.
There are a few ways to provide credentials to these tools. In production, it is a good idea to supply them as environment variables (such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). During development, however, it is recommended to set these values via the aws configure command, which in turn places config files under ~/.aws/.
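For reference, aws configure prompts for the keys interactively and writes them to plain-text files under ~/.aws/. The values below are placeholders; your own keys go in their place:

$ aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: us-east-1
Default output format [None]: json

$ cat ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>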
I was planning on having a full-blown example with detailed explanations, but that quickly became a long post and lost its focus.
In short, we do not want to pollute the Dockerfile just for the development environment. When running the Docker image locally, we can mount the ~/.aws directory AND set the $HOME environment variable. Otherwise, $HOME defaults to /root, which causes permission problems for non-root users.
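To see why, consider mounting the credentials into /root/.aws without overriding $HOME. A non-root user inside the container cannot read /root (it is typically mode 700), and its own home directory contains no credentials, so the AWS CLI comes up empty-handed. Assuming the image ships the AWS CLI and has a www-data user, the failure looks something like this:

$ docker run -it --rm \
    -v $HOME/.aws:/root/.aws \
    --user www-data \
    myapp aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".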
Since a mounted volume keeps the host's file permissions (and numeric owner IDs) inside the container, we need to give read permission to everyone on your host machine:
$ chmod 644 ~/.aws/*
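Note that these files are only reachable if the ~/.aws directory itself is traversable by other users; depending on how it was created, you may need to loosen its permissions as well:

$ chmod 755 ~/.aws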
We can now run the container:
$ docker run -it --rm \
    -e "HOME=/home" \
    -v $HOME/.aws:/home/.aws \
    myapp
The credentials are now available system-wide inside the Docker container, so any user (or cron job) can use them to communicate with AWS.
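To verify, run the same image as a non-root user; assuming the image contains the AWS CLI and a www-data user, any AWS command should now pick up the shared credentials:

$ docker run -it --rm \
    -e "HOME=/home" \
    -v $HOME/.aws:/home/.aws \
    --user www-data \
    myapp aws s3 ls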