🔧 Setting up a CI/CD pipeline


News section: 🔧 Programming
🔗 Source: dev.to

Terraform Configuration for AWS EKS Cluster
GitHub Link

Kubernetes DevSecOps CICD Project Using GitHub Actions and ArgoCD
GitHub Link

Part one: Setting up the environment

Set up an SSH key exchange between my local computer and my GitHub account:

ssh-keygen 
export GIT_SSH_COMMAND="ssh -i ~/.ssh/key"

This command tells Git to use the specified SSH key for authentication during operations like git clone, git pull, and git push.
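Alternatively, the key can be configured once in ~/.ssh/config instead of exporting GIT_SSH_COMMAND in every shell (a sketch, assuming the key path ~/.ssh/key from the command above):

```
# ~/.ssh/config — always use this key for github.com
Host github.com
    User git
    IdentityFile ~/.ssh/key
    IdentitiesOnly yes
```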

Steps to Create an IAM User and Generate Access Key

In the IAM dashboard, click Users in the left-hand menu.
Click the Add users button.
Generate an access key for the user.

Attach existing policies directly. Search for and select AdministratorAccess to give the user full access.

On the success screen, you will see an Access key ID and Secret access key for the user. Make sure to download the CSV or copy these to a safe place as they will not be retrievable again.


Set up an S3 bucket on AWS to store Terraform state files.

This is a common practice in Infrastructure-as-Code (IaC) deployments to ensure the state is stored remotely, securely, and is accessible for team collaboration.
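A minimal backend block for this looks roughly like the following (a sketch; the bucket name and state key are placeholders, not the values used in this project):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder bucket name
    key    = "eks/terraform.tfstate"     # path of the state file inside the bucket
    region = "us-east-1"
  }
}
```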

The bucket will be populated once we run terraform init and the state file is stored remotely.

Store the AWS credentials (Access Key, Secret Key, and bucket name) securely in GitHub Secrets for use in the CI/CD pipeline. These credentials will be used by GitHub Actions to authenticate with AWS services, such as deploying infrastructure or uploading state files to the S3 bucket.
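In a GitHub Actions workflow, these values are then referenced as secrets rather than hard-coded (a sketch; the secret names and branch are assumptions about how they were saved):

```yaml
# .github/workflows/terraform.yml (fragment)
on:
  push:
    branches: [ main ]   # assumed branch name
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  BUCKET_NAME: ${{ secrets.BUCKET_NAME }}
```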


The Terraform files define AWS infrastructure resources like S3, DynamoDB, and IAM roles, automating deployment through code.

Deploying AWS resources using Terraform Configuration: GitHub Link


On push, the CI/CD pipeline in GitHub triggers automatic builds and deployments based on the latest code changes.


The pipeline ran successfully, completing all tasks such as building and deploying without failures.


Part two: Configuring the environment

The jump host server, often used for secure access to private network resources, has been successfully deployed. Let's SSH into it.
Confirm that tools such as Docker, Terraform, the AWS CLI, kubectl, Trivy, and eksctl are installed.


Create an EKS cluster:
eksctl create cluster --name quizapp-eks-cluster --region us-east-1 --node-type t2.large --nodes-min 2 --nodes-max 4


Part three: Setting up MongoDB, SonarQube, Snyk, and Docker Hub

Create a MongoDB cluster and a user.

Click "Build a Cluster" and choose a cloud provider, region, and configuration (shared/free plan).
Click Create Cluster and wait for deployment.


Go to Database Access under the Security tab.
Click "Add New Database User".
Set a username and password, and assign the role (e.g., Atlas Admin or read/write).
Choose where the user can connect from (allow access from your IP or anywhere).


Set up SonarQube variables for a CI/CD pipeline

SONAR_ORGANIZATION: Specify your organization name in SonarQube.
SONAR_PROJECT_KEY: Define a unique key for your project within the organization.
SONAR_TOKEN: Generate a secure token from your SonarQube account to authenticate API requests.
SONAR_URL: Set the base URL (https://sonarcloud.io).

These variables help integrate SonarQube for code quality analysis within your CI/CD pipeline, ensuring secure authentication and project identification.
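In the workflow, these variables typically feed a scan step along these lines (a sketch; the action name and version are assumptions, not taken from this project):

```yaml
- name: SonarCloud scan
  uses: SonarSource/sonarcloud-github-action@master
  env:
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
  with:
    args: >
      -Dsonar.organization=${{ secrets.SONAR_ORGANIZATION }}
      -Dsonar.projectKey=${{ secrets.SONAR_PROJECT_KEY }}
      -Dsonar.host.url=${{ secrets.SONAR_URL }}
```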


GitHub Personal Access Token (PAT)

A GitHub Personal Access Token is crucial for enabling secure and controlled interactions between the CI/CD pipeline and the GitHub repository, ensuring both functionality and security.


Authenticating with Snyk

SNYK_TOKEN is essential for securely authenticating with Snyk, automating vulnerability scanning, and ensuring controlled access within your CI/CD pipeline.


Setting up Docker Hub token

Docker Hub serves as a centralized repository to store and manage your Docker images. This makes it easy to version control your images and share them across different environments.
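A typical pipeline step that uses the Docker Hub token looks like this (a sketch; the secret names and image tag are assumptions):

```yaml
- name: Log in to Docker Hub
  uses: docker/login-action@v3
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Build and push image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ${{ secrets.DOCKERHUB_USERNAME }}/quizapp:latest
```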


These keys and tokens are essential for securely authenticating and authorizing access to various services and tools in a CI/CD pipeline. They enable automation, enhance security, and facilitate collaboration across different environments and teams.


Part four: Deploying the React Application

On push, the CI/CD pipeline in GitHub triggers automatic builds and deployments based on the latest code changes.

CI/CD pipeline GitHub Link


The pipeline ran successfully, completing all tasks such as building and deploying without failures.


Connect to the EKS cluster using the command below.

The command updates the local kubeconfig for the cluster we created, allowing Kubernetes operations on that cluster.

aws eks update-kubeconfig --region us-east-1 --name quizapp-eks-cluster

Validate that the nodes are ready:
kubectl get nodes
Configure the load balancer on our EKS cluster, because our application will have an ingress controller. Download the IAM policy required by the load balancer controller.

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

Create the IAM policy

aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

Create OIDC Provider
This allows the cluster to integrate with AWS IAM for assigning IAM roles to Kubernetes service accounts, enhancing security and management.

eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=quizapp-eks-cluster --approve

Create Service Account

eksctl create iamserviceaccount --cluster=quizapp-eks-cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRole --attach-policy-arn=arn:aws:iam::<ACCOUNT-ID>:policy/AWSLoadBalancerControllerIAMPolicy --approve --region=us-east-1


Deploy the AWS Load Balancer Controller using Helm

sudo snap install helm --classic
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=quizapp-eks-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller


Check whether the aws-load-balancer-controller pods are running:
kubectl get deployment -n kube-system aws-load-balancer-controller


Configure ArgoCD

Create the namespace for the EKS Cluster.

kubectl create namespace quiz
kubectl get namespaces

Create a separate namespace for ArgoCD and apply the ArgoCD configuration to install it.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml


Confirm the ArgoCD pods are running:
kubectl get pods -n argocd


Expose the ArgoCD server as a LoadBalancer:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'


Get the password for our ArgoCD server to perform the deployment.
sudo apt install jq -y

export ARGOCD_SERVER=$(kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname')
export ARGO_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
echo "$ARGO_PWD"
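The initial admin password is stored base64-encoded in the Kubernetes secret, which is why the command above pipes the jsonpath output through base64 -d. The decoding step in isolation, with a dummy value standing in for the kubectl output:

```shell
# a dummy base64 value, not the real secret
encoded="c2VjcmV0LXBhc3N3b3Jk"
echo "$encoded" | base64 -d   # prints: secret-password
```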


Set up the Monitoring for our EKS Cluster using Prometheus and Grafana

Add all the required Helm repos: the Prometheus, Grafana, and ingress-nginx repos.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

Install Prometheus:

helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace

Install Grafana:

helm install grafana grafana/grafana -n monitoring --create-namespace


Get Grafana admin user password

kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo


Confirm the services and validate them from the AWS load balancer console:
kubectl get svc -n monitoring


Access your Prometheus dashboard: paste Prometheus-LB-DNS:9090 into your browser. Click on Status and select Targets. You will see a lot of targets.


In Grafana, click on Data Source, select Prometheus, and in the Connection field, paste your Prometheus-LB-DNS:9090.


Create a dashboard to visualize our Kubernetes cluster.
Import a Kubernetes dashboard using ID 6417, with Prometheus as the data source.


Deploy Quiz Application using ArgoCD

Configure the app_code GitHub repository in ArgoCD.


Create our application which will deploy the frontend, backend, database and ingress.
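The same application can be declared as an ArgoCD Application manifest instead of through the UI (a sketch, assuming a hypothetical repo URL, branch, and manifests path):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: quizapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/app_code.git  # placeholder
    targetRevision: main                             # assumed branch
    path: manifests                                  # assumed path
  destination:
    server: https://kubernetes.default.svc
    namespace: quiz
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```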


Deployment is synced and healthy


...
