Prerequisites
Installing Grainite into an existing Kubernetes cluster requires at least three nodes dedicated to Grainite. Grainite pods run in a 1:1 relationship with the nodes in the cluster. For high availability, we recommend that the nodes within the cluster be spread across multiple availability zones. Grainite will remain available as long as two out of the three nodes within the cluster are operational.
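The 2-of-3 availability rule above is a simple majority quorum: the cluster stays available while a majority of its nodes are up. A minimal sketch of that arithmetic (assuming simple majority, which matches the 2-of-3 statement; not taken from Grainite internals):

```shell
# Majority quorum for a 3-node cluster: floor(n/2) + 1 nodes must be up.
nodes=3
quorum=$(( nodes / 2 + 1 ))   # 2 of 3
up=2                           # nodes currently operational
if [ "$up" -ge "$quorum" ]; then
  echo "cluster available ($up of $nodes nodes up, quorum is $quorum)"
else
  echo "cluster unavailable"
fi
```

This is also why spreading the three nodes across availability zones matters: losing one zone still leaves a quorum.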
Nodes within the cluster
To run a Grainite server, nodes within the cluster must be configured as follows:
Number of nodes within the cluster: 3
Storage requirement: 2256 GiB SSD
Operating system: Ubuntu Linux 22.04 LTS
Deployment virtual machine
The deployment virtual machine is used to install the Grainite server into an existing Kubernetes cluster using Helm Charts. For this virtual machine, we recommend the following configuration:
Storage requirement: 200 GiB
Operating system: Ubuntu Linux 22.04 LTS
Software packages to install on the deployment virtual machine
Helm
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
Kubernetes CLI
Cloud provider CLI
Terraform
jq
sudo apt install -y jq
zip
sudo apt install zip
grepcidr
sudo apt install -y grepcidr
ipcalc
sudo apt install -y ipcalc
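Before proceeding, it can save a failed install later to confirm that every tool listed above is on the PATH. A minimal sketch (tool names taken from the list above; the cloud provider CLI varies by provider, so it is omitted):

```shell
# Check that each prerequisite tool from the list above is installed.
checked=0
missing=""
for tool in helm kubectl terraform jq zip grepcidr ipcalc; do
  checked=$((checked + 1))
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "all prerequisites installed"
else
  echo "missing tools:$missing"
fi
```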
Tokens
Make sure you have each of the credentials below, which have been sent to you by the Grainite team. They will be passed as arguments to some of the commands in this guide:
Helm deploy token (same as GitLab deploy token)
Cluster Preparation
Before you begin to install Grainite into your cluster, please ensure that the kubectl context is set up correctly. The following commands get and set the context of currently running clusters in your environment.
Get-Context
This will describe one or many contexts within the environment.
kubectl config get-contexts
Use-Context
kubectl config use-context <context name>
Get-nodes
Obtain a list of nodes within the cluster:
kubectl get nodes
Label-nodes
First, you may need to delete existing labels on the nodes (only required if the nodes already carry labels). The trailing - removes the label:
kubectl label node <node_name> <label>-
Label the nodes where you would like to install Grainite:
kubectl label node <node_name> grainite-env.node=grainite-node --overwrite
Create-namespace
kubectl create namespace <namespace>
Show-labels
Verify the nodes have been labeled correctly:
kubectl get nodes --show-labels
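Since exactly three nodes should carry the Grainite label, you can count them in the --show-labels output. A sketch using sample output (node names and columns below are illustrative, not real cluster state):

```shell
# Count nodes carrying the Grainite label. In a real cluster, replace the
# heredoc with:  kubectl get nodes --show-labels
labeled=$(grep -c 'grainite-env.node=grainite-node' <<'EOF'
node-1   Ready   <none>   10d   v1.27.3   grainite-env.node=grainite-node,kubernetes.io/os=linux
node-2   Ready   <none>   10d   v1.27.3   grainite-env.node=grainite-node,kubernetes.io/os=linux
node-3   Ready   <none>   10d   v1.27.3   grainite-env.node=grainite-node,kubernetes.io/os=linux
EOF
)
echo "labeled nodes: $labeled"
```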
Show-namespace
Verify the namespace has been created correctly:
kubectl get namespaces
Grainite Setup
Set up a Helm Chart repo
helm repo add grainite-repo https://gitlab.com/api/v4/projects/26443204/packages/helm/stable --username <Helm username> --password <Helm deploy token>
Download the Grainite Helm Chart
Latest version
helm pull grainite-repo/grainite
Specific version
helm pull grainite-repo/grainite --version <chart version>
Note: Helm Chart version is specified in the format 23.23.0
Verify the file has been downloaded:
~/grainite/scripts/bin$ ls -l grainite*.tgz
-rw-r--r-- 1 user user 5967 Jun  5 16:31 grainite-23.23.0.tgz
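If you need the chart version programmatically (for example, to pick the matching cluster image tag later), it can be extracted from the downloaded filename. A sketch assuming the grainite-<version>.tgz naming shown above:

```shell
# Extract the chart version from the downloaded chart filename.
chart="grainite-23.23.0.tgz"
version=$(echo "$chart" | sed 's/^grainite-\(.*\)\.tgz$/\1/')
echo "chart version: $version"
```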
Install workload
GCP:
helm install <NAME> <CHART> -n <namespace> \
--create-namespace \
--set kms_sa_name=missing \
--set imageCredentials.registry=quay.io \
--set imageCredentials.username=<Quay username> \
--set imageCredentials.password=<Quay password> \
--set grainiteImage=quay.io/grainite/cluster:<version> \
--set cloudType=gcp \
--set cluster=<cluster name> \
--set configMap.dat_size_gb=1Ti \
--set configMap.num_servers=<nodes in cluster, default 3> \
--set storeCapacity.grainite-dat=1Ti \
--set storeCapacity.grainite-meta=1Ti \
--set storeCapacity.grainite-sav=64Gi \
--set volumeType=pd-ssd \
--set vpc_name=<vpc-name> \
--set internalLB=true \
[--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24]   (required when internalLB=false)
Note: --create-namespace is optional; include it if the namespace was not created in the earlier step.
Example:
helm install grainite ./grainite-23.23.0.tgz -n grainite \
--create-namespace \
--set kms_sa_name=missing \
--set imageCredentials.registry=quay.io \
--set imageCredentials.username=<Quay username> \
--set imageCredentials.password=<Quay password> \
--set grainiteImage=quay.io/grainite/cluster:2323 \
--set cloudType=gcp \
--set cluster=bob-2323 \
--set configMap.dat_size_gb=1Ti \
--set configMap.num_servers=3 \
--set storeCapacity.grainite-dat=1Ti \
--set storeCapacity.grainite-meta=1Ti \
--set storeCapacity.grainite-sav=64Gi \
--set volumeType=pd-ssd \
--set vpc_name=bob-vpc \
--set internalLB=false \
--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24
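When internalLB=false, sourceCIDRList must be a comma-separated list of CIDR blocks. A rough pure-shell shape check before running helm install (grepcidr and ipcalc from the prerequisites can validate more strictly); the CIDR values here are the placeholders from the example above:

```shell
# Rough shape check of a sourceCIDRList value (a.b.c.d/nn entries).
cidrs="1.2.3.4/24,5.6.7.8/24"
ok=true
for cidr in $(echo "$cidrs" | tr ',' ' '); do
  case "$cidr" in
    [0-9]*.[0-9]*.[0-9]*.[0-9]*/[0-9]*) ;;            # looks like a CIDR
    *) echo "invalid CIDR: $cidr"; ok=false ;;
  esac
done
$ok && echo "sourceCIDRList looks valid"
```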
AWS:
helm install <NAME> <CHART> -n <namespace> \
--create-namespace \
--set kms_sa_name=missing \
--set imageCredentials.registry=quay.io \
--set imageCredentials.username=<Quay username> \
--set imageCredentials.password=<Quay password> \
--set grainiteImage=quay.io/grainite/cluster:<version> \
--set cloudType=aws \
--set terminationGracePeriodSeconds=180 \
--set volumeType=gp3 \
--set configMap.num_servers=<nodes in cluster, default 3> \
--set cluster=<cluster name> \
--set storeCapacity.grainite-dat=<1Ti> \
--set storeCapacity.grainite-meta=<1Ti> \
--set storeCapacity.grainite-sav=<64Gi> \
--set internalLB=true \
[--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24]   (required when internalLB=false)
Note: --create-namespace is optional; include it if the namespace was not created in the earlier step.
Example
helm install grainite ./grainite-23.23.0.tgz -n grainite \
--create-namespace \
--set kms_sa_name=missing \
--set imageCredentials.registry=quay.io \
--set imageCredentials.username=<Quay username> \
--set imageCredentials.password=<Quay password> \
--set grainiteImage=quay.io/grainite/cluster:2323 \
--set cloudType=aws \
--set terminationGracePeriodSeconds=180 \
--set volumeType=gp3 \
--set configMap.num_servers=3 \
--set cluster=bob-2323 \
--set storeCapacity.grainite-dat=1Ti \
--set storeCapacity.grainite-meta=1Ti \
--set storeCapacity.grainite-sav=64Gi \
--set internalLB=false \
--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24
Azure:
helm install <NAME> <CHART> -n <namespace> \
--create-namespace \
--set imageCredentials.password=<Quay password> \
--set imageCredentials.registry=quay.io/grainite \
--set imageCredentials.username=<Quay username> \
--set grainiteImage=quay.io/grainite/cluster:<version> \
--set volumeType=StandardSSD_LRS \
--set storeCapacity.grainite-dat=<dat_size> \
--set storeCapacity.grainite-sav=<sav_size> \
--set storeCapacity.grainite-meta=<dat_size> \
--set configMap.dat_size_gb=<dat_size> \
--set configMap.num_servers=3 \
--set kms_sa_name=missing \
--set cloudType=azure \
(--version <chart_version> if using the helm repo directly) \
--set cluster=<cluster_name> \
--set vpc_name=<vpc_name> \
--set externalTrafficPolicy=Local \
--set internalLB=true \
[--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24]   (required when internalLB=false)
Note: --create-namespace is optional; include it if the namespace was not created in the earlier step.
Example
helm install grainite ./grainite-23.23.0.tgz -n grainite \
--set imageCredentials.password=<Quay password> \
--set imageCredentials.registry=quay.io/grainite \
--set imageCredentials.username=<Quay username> \
--set grainiteImage=quay.io/grainite/cluster:2319.9 \
--set volumeType=StandardSSD_LRS \
--set storeCapacity.grainite-dat=1Ti \
--set storeCapacity.grainite-sav=64Gi \
--set storeCapacity.grainite-meta=1Ti \
--set configMap.dat_size_gb=1Ti \
--set configMap.num_servers=3 \
--set kms_sa_name=missing \
--set cloudType=azure \
--set cluster=bob-2319.9 \
--set vpc_name=bob-vpc \
--set externalTrafficPolicy=Local \
--set internalLB=false \
--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24
VMware Tanzu
helm install <NAME> -n <namespace> \
--create-namespace \
--set imageCredentials.password=<image repository password> \
--set imageCredentials.registry=<image repository url> \
--set imageCredentials.username=<image repository username> \
--set grainiteImage=<grainite image name> \
--set storeCapacity.grainite-dat=<dat volume size> \
--set storeCapacity.grainite-sav=<sav volume size> \
--set storeCapacity.grainite-meta=<meta volume size> \
--set configMap.dat_size_gb=<dat volume size> \
--set configMap.num_servers=<num nodes in cluster> \
--set kms_sa_name=missing \
--set cloudType=tanzu \
--set namespace=<Grainite namespace> \
grainite-repo/grainite
Example
#!/bin/bash
dat_size=64Gi
DAT_SAV_RATIO=16
# Split the size into its numeric part and its unit (Gi or Ti).
dat_unit=$(echo ${dat_size} | sed 's/[0-9]//g')
dat_num=$(echo ${dat_size} | sed 's/[GT]i//g')
# sav volume is dat / DAT_SAV_RATIO, expressed one unit smaller.
if [[ "${dat_unit}" == "Gi" ]]; then
  sav_size="$((dat_num * 1024 / DAT_SAV_RATIO))Mi"
elif [[ "${dat_unit}" == "Ti" ]]; then
  sav_size="$((dat_num * 1024 / DAT_SAV_RATIO))Gi"
fi
image_pass=<Quay password>
image_user=<Quay username>
image_name="quay.io/grainite/cluster:2323.0"
helm install -n grainite-ns --create-namespace grainite \
  --set imageCredentials.password=${image_pass} \
  --set imageCredentials.registry=quay.io/grainite \
  --set imageCredentials.username=${image_user} \
  --set grainiteImage=${image_name} \
  --set storeCapacity.grainite-dat=${dat_size} \
  --set storeCapacity.grainite-sav=${sav_size} \
  --set storeCapacity.grainite-meta=${dat_size} \
  --set configMap.dat_size_gb=${dat_size} \
  --set configMap.num_servers=3 \
  --set kms_sa_name=missing \
  --set cloudType=tanzu \
  --set namespace=grainite-ns \
  grainite-repo/grainite
Upon successful deployment, you should see a message similar to the one below
NAME: grainite
LAST DEPLOYED: Tue Jun  6 14:29:40 2023
NAMESPACE: grainite
STATUS: deployed
REVISION: 1
TEST SUITE: None
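The STATUS field is the one to check programmatically. A sketch that parses it out of the release output (the heredoc replays the sample above; in practice you would pipe in the output of helm status grainite -n <namespace>):

```shell
# Extract the STATUS field from helm's release output.
status=$(awk -F': ' '/^STATUS:/ {print $2}' <<'EOF'
NAME: grainite
LAST DEPLOYED: Tue Jun  6 14:29:40 2023
NAMESPACE: grainite
STATUS: deployed
REVISION: 1
TEST SUITE: None
EOF
)
echo "release status: $status"
```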
Verify the installation
The first step in verifying your installation is to ensure the correct namespace is set for kubectl.
kubectl config set-context --current --namespace=<namespace>
Ensure all your pods are running:
kubectl get pods
You should see an output like the one below.
NAME    READY   STATUS    RESTARTS   AGE
gxs-0   1/2     Running   0          33h
gxs-1   1/2     Running   0          33h
gxs-2   1/2     Running   0          34h
Error Recovery
In case of errors, you can restart the Grainite installation by first uninstalling the pods from the Kubernetes cluster.
helm delete grainite -n <namespace>
If you want to destroy the persistent volumes:
kubectl get pvc | grep gxs | awk '{print $1}' | xargs kubectl delete pvc
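Before running that pipeline for real, it is worth checking what the grep gxs | awk '{print $1}' stage actually selects. The sketch below replays it against sample kubectl get pvc output (PVC names are hypothetical); the header line and unrelated PVCs are filtered out, so only the gxs volumes are deleted:

```shell
# Replay the name-extraction stage of the PVC deletion pipeline against
# sample output (hypothetical PVC names).
pvcs=$(grep gxs <<'EOF' | awk '{print $1}'
NAME                  STATUS   VOLUME    CAPACITY
grainite-dat-gxs-0    Bound    pvc-aaa   1Ti
grainite-meta-gxs-0   Bound    pvc-bbb   1Ti
other-app-data        Bound    pvc-ccc   10Gi
EOF
)
echo "$pvcs"
```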