🔧 Setting up a CI/CD pipeline
🔗 Source: dev.to
Terraform Configuration for AWS EKS Cluster
Github Link
Kubernetes DevSecOps CICD Project Using Github Actions and ArgoCD
Github Link
Part one: Setting up the environment
Set up an SSH key exchange between my local computer and my GitHub account.
ssh-keygen
export GIT_SSH_COMMAND="ssh -i ~/.ssh/key"
This command tells Git to use the specified SSH key for authentication during operations like git clone, git pull, and git push.
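A minimal sketch of those two steps, assuming an ed25519 key stored at ~/.ssh/key (the path and email are placeholders):
# generate the key pair; -C adds a comment, -f sets the output path
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/key
# print the public key to paste into GitHub -> Settings -> SSH and GPG keys
cat ~/.ssh/key.pub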
Steps to Create an IAM User and Generate Access Key
In the IAM dashboard, click on Users on the left-hand menu.
Click the Add users button.
Attach existing policies directly. Search for and select AdministratorAccess to give the user full access.
On the success screen, you will see an Access key ID and Secret access key for the user. Make sure to download the CSV or copy these to a safe place as they will not be retrievable again.
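The same user can also be created from the command line. A sketch using the AWS CLI, where pipeline-user is a hypothetical user name:
# create the user and attach the managed AdministratorAccess policy
aws iam create-user --user-name pipeline-user
aws iam attach-user-policy --user-name pipeline-user --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# generate the access key pair; the secret is only shown once
aws iam create-access-key --user-name pipeline-user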
Set up an S3 bucket on AWS to store Terraform state files.
This is a common practice in Infrastructure-as-Code (IaC) deployments to ensure the state is stored remotely, securely, and is accessible for team collaboration.
The bucket will be populated with the state file once we run terraform init and apply the configuration.
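A sketch of that init step with the backend supplied on the command line, assuming the configuration declares an empty backend "s3" block; the bucket and key names below are placeholders:
terraform init \
  -backend-config="bucket=my-terraform-state-bucket" \
  -backend-config="key=eks/terraform.tfstate" \
  -backend-config="region=us-east-1"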
Store the AWS credentials (Access Key, Secret Key, and bucket name) securely in GitHub Secrets for use in a CI/CD pipeline. These credentials will be used by GitHub Actions to authenticate with AWS services, such as deploying infrastructure or uploading state files to an S3 bucket.
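One way to store them from a terminal is the GitHub CLI; the secret names below are assumptions and must match whatever the workflow references:
# each command prompts for the secret value so it never lands in shell history
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set BUCKET_NAME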
The Terraform files define AWS infrastructure resources like S3, DynamoDB, and IAM roles, automating deployment through code.
Deploying AWS resources using Terraform Configuration: Github Link
On push, the CI/CD pipeline in GitHub triggers automatic builds and deployments based on the latest code changes.
The pipeline ran successfully, completing all tasks such as building and deploying without failures.
Part two: Configuring the environment
The jump host server, often used for secure access to private network resources, has been successfully deployed. Let's SSH into it.
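A sketch of that SSH step; the key path, user, and address are placeholders:
ssh -i ~/.ssh/key ubuntu@<JUMP-HOST-PUBLIC-IP>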
Confirm that tools such as Docker, Terraform, the AWS CLI, kubectl, Trivy, and eksctl are installed.
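A quick way to verify is to print each tool's version; any command that fails points to a missing install:
docker --version
terraform -version
aws --version
kubectl version --client
trivy --version
eksctl version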
Create an EKS cluster
eksctl create cluster --name quizapp-eks-cluster --region us-east-1 --node-type t2.large --nodes-min 2 --nodes-max 4
Part three: Setting up MongoDB, SonarQube, Snyk and Docker Hub
Create a MongoDB cluster and a user.
Click "Build a Cluster" and choose a cloud provider, region, and configuration (shared/ free plan).
Click Create Cluster and wait for deployment.
Go to Database Access under the Security tab.
Click "Add New Database User".
Set a username and password, and assign the role (e.g., Atlas Admin or read/write).
Choose where the user can connect from (allow access from your IP or anywhere).
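Connectivity can be checked with mongosh before the connection string goes into the pipeline; the cluster host and user below are placeholders:
# prompts for the database user's password
mongosh "mongodb+srv://<CLUSTER-HOST>.mongodb.net/" --username <DB-USER>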
Set up SonarQube variables for a CI/CD pipeline
SONAR_ORGANIZATION: Specify your organization name in SonarQube.
SONAR_PROJECT_KEY: Define a unique key for your project within the organization.
SONAR_TOKEN: Generate a secure token from your SonarQube account to authenticate API requests.
SONAR_URL: Set the base URL (https://sonarcloud.io).
These variables help integrate SonarQube for code quality analysis within your CI/CD pipeline, ensuring secure authentication and project identification.
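For reference, a sketch of how a pipeline step might pass these variables to the scanner, assuming the sonar-scanner CLI is installed on the runner:
sonar-scanner \
  -Dsonar.organization="$SONAR_ORGANIZATION" \
  -Dsonar.projectKey="$SONAR_PROJECT_KEY" \
  -Dsonar.host.url="$SONAR_URL" \
  -Dsonar.login="$SONAR_TOKEN"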
GitHub Personal Access Token (PAT)
A GitHub Personal Access Token is crucial for enabling secure and controlled interactions between the CI/CD pipeline and the GitHub repository, ensuring both functionality and security.
Authenticating with Snyk
SNYK_TOKEN is essential for securely authenticating with Snyk, automating vulnerability scanning, and ensuring controlled access within your CI/CD pipeline.
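A sketch of how the pipeline might use the token once it is exposed as an environment variable:
# authenticate the CLI, then scan the project's dependencies
snyk auth "$SNYK_TOKEN"
snyk test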
Setting up Docker Hub token
Docker Hub serves as a centralized repository to store and manage your Docker images. This makes it easy to version control your images and share them across different environments.
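A sketch of a non-interactive login with the access token, assuming it is available as DOCKERHUB_TOKEN; the username is a placeholder:
# --password-stdin keeps the token out of the process list and logs
echo "$DOCKERHUB_TOKEN" | docker login -u <DOCKERHUB-USERNAME> --password-stdin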
These keys and tokens are essential for securely authenticating and authorizing access to various services and tools in a CI/CD pipeline. They enable automation, enhance security, and facilitate collaboration across different environments and teams.
Part four: Deploying the React application
On push, the CI/CD pipeline in GitHub triggers automatic builds and deployments based on the latest code changes.
CI/CD pipeline Github Link
The pipeline ran successfully, completing all tasks such as building and deploying without failures.
Connect to the EKS cluster using the command below.
It updates the local kubeconfig for the cluster created earlier, allowing Kubernetes operations on that cluster.
aws eks update-kubeconfig --region us-east-1 --name quizapp-eks-cluster
Validate that the nodes are ready
kubectl get nodes
Configure the Load Balancer on our EKS cluster because our application will have an ingress controller. Download the IAM policy required as a prerequisite for the AWS Load Balancer Controller.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
Create the IAM policy
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
Create OIDC Provider
This allows the cluster to integrate with AWS IAM, assigning IAM roles to Kubernetes service accounts and enhancing security and management.
eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=quizapp-eks-cluster --approve
Create Service Account
eksctl create iamserviceaccount --cluster=quizapp-eks-cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRole --attach-policy-arn=arn:aws:iam::<ACCOUNT-ID>:policy/AWSLoadBalancerControllerIAMPolicy --approve --region=us-east-1
Deploy the AWS Load Balancer Controller using Helm
sudo snap install helm --classic
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=quizapp-eks-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller
Check whether the aws-load-balancer-controller pods are running.
kubectl get deployment -n kube-system aws-load-balancer-controller
Configure ArgoCD
Create the application namespace on the EKS cluster.
kubectl create namespace quiz
kubectl get namespaces
Create a separate namespace for ArgoCD and apply the ArgoCD configuration to install it.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
Confirm the ArgoCD pods are running
kubectl get pods -n argocd
Expose the ArgoCD server as a LoadBalancer
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
Get the initial admin password for our ArgoCD server to perform the deployment.
sudo apt install jq -y
export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
export ARGO_PWD=`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
echo $ARGO_PWD
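With those two values, logging in via the ArgoCD CLI looks roughly like this (a sketch; --insecure skips TLS verification for the default self-signed certificate):
argocd login "$ARGOCD_SERVER" --username admin --password "$ARGO_PWD" --insecure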
Set up monitoring for our EKS cluster using Prometheus and Grafana
Add all the required Helm repos: prometheus-community, grafana, and ingress-nginx
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Install Prometheus (the kube-prometheus-stack chart)
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
Install Grafana
helm install grafana grafana/grafana -n monitoring --create-namespace
Get Grafana admin user password
kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Confirm the services and validate them from the AWS Load Balancer console.
kubectl get svc -n monitoring
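If either service is still of type ClusterIP, it can be exposed the same way the ArgoCD server was; the service names below are the chart defaults for the release names used above, so treat them as assumptions:
kubectl patch svc prometheus-kube-prometheus-prometheus -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'
kubectl patch svc grafana -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'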
Access your Prometheus dashboard: paste Prometheus-LB-DNS:9090 into your browser. Click on Status and select Targets; you will see a lot of targets.
In Grafana, click on Data Sources, select Prometheus, and in the Connection field paste your Prometheus-LB-DNS:9090
Create a dashboard to visualize our Kubernetes cluster metrics.
Import a Kubernetes dashboard by entering the ID 6417.
Deploy Quiz Application using ArgoCD
Configure the app_code GitHub repository in ArgoCD
Create our application which will deploy the frontend, backend, database and ingress.
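The same application could also be created from the ArgoCD CLI instead of the UI. A sketch, where the repository URL and manifest path are hypothetical:
argocd app create quiz-app \
  --repo https://github.com/<YOUR-USER>/<APP-CODE-REPO>.git \
  --path manifests \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace quiz \
  --sync-policy automated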
Deployment is synced and healthy