Usage
0. If using Docker instead of a dgctl installation
Prepend any `dgctl` command with the following Docker command. For example, to run `init`:
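What such a wrapper might look like (a sketch only: the image name `diggerhq/dgctl`, the mount path and the working directory are assumptions, not taken from this page):

```shell
# Run dgctl from a container instead of a local install.
# Image name and container paths are assumptions; adjust to the published image.
docker run -it --rm \
  -v "$(pwd)":/app \
  -w /app \
  diggerhq/dgctl init
```

Mounting the current directory lets the containerised `dgctl` read and write project files as if it were installed locally.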
1. Clone the repository
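For example (the repository URL below is a placeholder for your own application repository):

```shell
# Clone your application repository and enter it (placeholder URL).
git clone https://github.com/your-org/your-app.git
cd your-app
```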
2. Initialise Digger project
Create an `infra` directory at the top level of your repository and `cd` into it:
Now run the init command:
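The two steps above can be sketched as (assuming `dgctl` is installed and on your `PATH`):

```shell
mkdir infra   # top-level infra directory in your repository
cd infra
dgctl init    # initialise the Digger project
```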
This should create the following file structure under the `infra` folder:
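The original listing is not preserved on this page; based on the following steps, it includes at least the `dgctl.json` project file:

```
infra/
└── dgctl.json
```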
3. Add a container block
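The exact command is not preserved on this page; it was of roughly this shape (the subcommand and flags are assumptions):

```shell
# Assumed syntax: adds a container block named myApp to dgctl.json
dgctl block add --type=container myApp
```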
A new block will be added to `dgctl.json`, and a new `myApp` directory will be created next to it:
4. Build your infrastructure for AWS
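The build command itself is missing from this page; presumably it is the `dgctl` build step (command name assumed):

```shell
# Assumed command name: compiles dgctl.json into Terraform under ./generated
dgctl build
```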
A new `generated` folder will be created, containing a complete set of Terraform templates needed to run your stack on AWS.
If you are not familiar with Terraform, don't worry: you don't need to learn it! Terraform is used as an "assembly language" that Digger compiles into. You can add your own Terraform or customise it completely, but you don't have to; it works as-is.
5. Provision resources on AWS
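The provisioning command is likewise not preserved here; a sketch (command name assumed):

```shell
# Assumed command name: applies the generated Terraform (plan + apply)
dgctl provision
```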
It will ask for your AWS access key pair and save it in `~/.aws/credentials` if it is not already there.
Then it will provision your infrastructure using Terraform under the hood (`plan` and `apply`).
This may take a few minutes ☕
6. Deploy your application code
Alternatively, you can `cd ..` and run `dgctl block deploy myApp --context=.` from the parent directory.
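Sketched out (this is the only invocation preserved on this page):

```shell
cd ..                                  # back to the repository root
dgctl block deploy myApp --context=.   # build & push the image, update the ECS task definition
```

Here `--context=.` points the `docker build` at the current directory, so it must be run from wherever your application's Dockerfile and source live.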
This will run `docker build` and `docker push`, and update the task definition in AWS ECS.
You can customise the script in the `config.json` file of the corresponding block.
The `block deploy` command is only a convenience shortcut; it is not meant for production use. In CI pipelines we recommend using `docker` and `aws` commands directly. The ECR repository URL can be obtained by running the `dgctl block info` command.
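A minimal sketch of such a CI deploy; every name below (region, account, repository URL, cluster, service) is a placeholder, and the repository URL stands in for the value you would read from `dgctl block info`:

```shell
# Placeholders throughout; substitute values from `dgctl block info` and your AWS account.
REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp"

# Authenticate docker against ECR
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin "$REPO"

# Build and push the application image
docker build -t "$REPO:latest" .
docker push "$REPO:latest"

# Force the ECS service to pull the new image
aws ecs update-service \
  --cluster my-cluster \
  --service myApp-service \
  --force-new-deployment
```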
TODO: get rid of cd'ing into infra directory. The CLI should create infra dir and operate from the root dir instead.