Deploying on AWS with CloudFormation
Replicated 3 Node Cluster
Prerequisites
zip
The CloudFormation script requires that the region in which Grainite will be deployed have at least 3 availability zones. For example, us-west-1 has only 2 availability zones, while us-east-2 has 3.
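To verify how many availability zones a region offers before deploying, the AWS CLI can be used (a sketch, shown here for us-east-2):

```shell
# List the availability zones in a region; Grainite needs at least 3.
aws ec2 describe-availability-zones --region us-east-2 \
    --query 'AvailabilityZones[].ZoneName' --output text
```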
Tokens:
* Helm deploy token (same as GitLab deploy token)
* Helm username
* Quay username
* Quay password
Recommendation: Perform all of the steps below from a Linux VM running within the same virtual private cloud as the target cluster.
Download scripts
The scripts package contains utilities that simplify deploying and managing Grainite clusters, including scripts for creating roles, VPCs, Grainite clusters, and more.
1. Run the following to download the CloudFormation (AWS) and Terraform (GCP) scripts package tar. Replace <token> with the deploy token provided to you, and replace <version> with the desired version of Grainite (e.g. 2316.1), or latest for the latest available version of Grainite.
2. Run the following to extract the script package tar:
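As a sketch (the archive filename is an assumption; use the name of the file downloaded above):

```shell
# <version> is the Grainite version downloaded above, e.g. 2316.1.
tar -xzf grainite-scripts-<version>.tar.gz
ls    # inspect the extracted deployment scripts
```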
Setup and Create Roles
This step only needs to be performed once per region. Subsequent clusters can be created by reusing these roles.
1. Setup Roles
Example:
The command above should have automatically created and registered execution roles for AWSQS::EKS::Cluster and AWSQS::Kubernetes::Helm in the same region where the role is created. Please confirm the roles were created and are privately registered.
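One way to confirm the registration is via the CloudFormation registry (a sketch; the CloudFormation console's Registry page shows the same information):

```shell
# List privately registered extension types in the region; the output
# should include AWSQS::EKS::Cluster and AWSQS::Kubernetes::Helm.
aws cloudformation list-types --visibility PRIVATE --region <region-name>
```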
2. Create a VPC Proxy Role
As the role that will be created through this command is global, it only needs to be run once.
Setup VPC
Create a dedicated VPC with 1 public subnet and a NAT gateway. This step can be skipped if the workload will be deployed in an existing VPC (with a NAT gateway).
Example:
Create an unsecured 3 node EKS cluster
Create a 3-node cluster in an existing VPC and deploy the Grainite workload using 3-node-with-vpc.yaml:
Where:
* -C <helm chart version>: The release version for the helm chart. Example: for release 2316.1, specify -C 23.16.1; for release 2317, specify -C 23.17.0.
* environment_name: Environment tag.
* subnet_cidr: CIDR for the 3 private subnets in the VPC; when using an existing VPC, these subnets should already be created.
* awsqs_stack_name: The name of the stack used to create the role to activate the AWSQS extensions.
* instance_key: Name of the SSH key pair used to connect to worker nodes.
* template_file: The path and filename of the template used to create the cluster. Defaults to 3-node-with-vpc.yaml.
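The mapping from a Grainite release number to the -C helm chart version can be sketched as a small shell helper (illustrative only; chart_version is not part of the scripts package):

```shell
# Convert a Grainite release number (e.g. 2316.1 or 2317) into the
# helm chart version expected by the -C flag (e.g. 23.16.1 or 23.17.0).
chart_version() {
  local rel=$1 major minor patch
  major=${rel:0:2}      # first two digits  -> major, e.g. 23
  minor=${rel:2:2}      # next two digits   -> minor, e.g. 16
  patch=${rel#*.}       # digits after the dot -> patch
  [ "$patch" = "$rel" ] && patch=0   # no dot means patch 0
  echo "${major}.${minor}.${patch}"
}

chart_version 2316.1   # -> 23.16.1
chart_version 2317     # -> 23.17.0
```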
Example (replace the username, passwords, and SSH key):
Deploy a cluster with TLS and encryption enabled
Encryption can also be enabled on an existing cluster; see Enabling Disk Encryption for details. Optionally, the following script can be used to create a cluster with encryption and TLS enabled directly:
Note: This will not create the client certificates necessary for TLS. To create these, follow Step 2 under Enabling TLS.
Where:
All flags are the same as those passed in Create an unsecured 3 node EKS cluster, except for -e, which, when passed, enables encryption at rest.
Access the Kubernetes cluster
The Kubernetes cluster control plane and load balancer are private endpoints and are only accessible from the same VPC and peered VPCs:
First, connect to the cluster by running the following command:
Next, get the cluster's service endpoint by running the following command:
Example:
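As a sketch (the cluster and region names are placeholders), connecting and reading the endpoint typically looks like:

```shell
# Merge the cluster's credentials into the local kubeconfig, then list
# services; the EXTERNAL-IP column of the LoadBalancer service is the
# cluster's service endpoint.
aws eks update-kubeconfig --name <cluster-name> --region <region-name>
kubectl get svc
```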
Destroy the cluster
Run the following command to destroy the cluster:
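A sketch, assuming the cluster-delete subcommand referenced below (the exact arguments are documented in the scripts package):

```shell
# Tear down the cluster and its CloudFormation stack resources.
./aws-grainite cluster-delete
```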
Some of the subnets might not be deleted due to lingering Lambda functions. To clean up manually:
1. Run aws lambda list-functions --region <region-name> to find the function names for the subnets.
2. Run aws lambda delete-function --function-name <function_name> --region <region-name> to delete each one.
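The two cleanup commands above can be combined into a loop (a sketch; the stack-name filter is an assumption, so verify the matched function names before deleting):

```shell
REGION=<region-name>
STACK=<stack-name>   # assumption: leftover functions are named after the stack
# Delete lingering Lambda functions created for the stack's subnets.
for fn in $(aws lambda list-functions --region "$REGION" \
    --query "Functions[?contains(FunctionName, '$STACK')].FunctionName" \
    --output text); do
  aws lambda delete-function --function-name "$fn" --region "$REGION"
done
```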
Persistent volumes for statefulsets are also automatically removed with aws-grainite cluster-delete.
Optional: Clean up persistent volumes if cluster-delete fails
1. List persistent volumes used in the cluster
2. Get the volume ID
Where <volume id> is the ID of the volume.
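A sketch of the two steps, assuming the persistent volumes are EBS-backed:

```shell
# 1. List persistent volumes still bound in the cluster and note the
#    backing EBS volume IDs.
kubectl get pv
# 2. Delete the backing EBS volume by its ID.
aws ec2 delete-volume --volume-id <volume id> --region <region-name>
```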