This post covers installing Karpenter.
Since we're going to install it, let's start from the official migration guide.
https://karpenter.sh/v0.27.3/getting-started/migrating-from-cas/
Before installing Karpenter, there are a few things to set up first.
```bash
CLUSTER_NAME=myeks # your cluster name
AWS_PARTITION="aws" # aws or aws-gov or aws-cn
AWS_REGION="$(aws configure list | grep region | tr -s " " | cut -d" " -f3)"
OIDC_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} \
    --query "cluster.identity.oidc.issuer" --output text)"
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' \
    --output text)
# the cluster endpoint is referenced later by the helm template command
CLUSTER_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} \
    --query "cluster.endpoint" --output text)"
export KARPENTER_VERSION=v0.27.3 # latest version
```

These are the environment variables: the cluster name, region, OIDC endpoint, account ID, and Karpenter version (plus the cluster endpoint, which the helm template command further down needs). Installing Karpenter requires quite a few permissions.

```bash
echo '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}' > node-trust-policy.json

aws iam create-role --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --assume-role-policy-document file://node-trust-policy.json

aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

aws iam create-instance-profile \
    --instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"

aws iam add-role-to-instance-profile \
    --instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}" \
    --role-name "KarpenterNodeRole-${CLUSTER_NAME}"

cat << EOF > controller-trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT#*//}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "${OIDC_ENDPOINT#*//}:aud": "sts.amazonaws.com",
                    "${OIDC_ENDPOINT#*//}:sub": "system:serviceaccount:karpenter:karpenter"
                }
            }
        }
    ]
}
EOF

aws iam create-role --role-name KarpenterControllerRole-${CLUSTER_NAME} \
    --assume-role-policy-document file://controller-trust-policy.json

cat << EOF > controller-policy.json
{
    "Statement": [
        {
            "Action": [
                "ssm:GetParameter",
                "ec2:DescribeImages",
                "ec2:RunInstances",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeInstanceTypeOfferings",
                "ec2:DescribeAvailabilityZones",
                "ec2:DeleteLaunchTemplate",
                "ec2:CreateTags",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateFleet",
                "ec2:DescribeSpotPriceHistory",
                "pricing:GetProducts"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "Karpenter"
        },
        {
            "Action": "ec2:TerminateInstances",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/karpenter.sh/provisioner-name": "*"
                }
            },
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "ConditionalEC2Termination"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}",
            "Sid": "PassNodeIAMRole"
        },
        {
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": "arn:${AWS_PARTITION}:eks:${AWS_REGION}:${AWS_ACCOUNT_ID}:cluster/${CLUSTER_NAME}",
            "Sid": "EKSClusterEndpointLookup"
        }
    ],
    "Version": "2012-10-17"
}
EOF

aws iam put-role-policy --role-name KarpenterControllerRole-${CLUSTER_NAME} \
    --policy-name KarpenterControllerPolicy-${CLUSTER_NAME} \
    --policy-document file://controller-policy.json
```

If the environment variables are set and the right permissions are in place, none of this should throw an error, and once the IAM work is done you're most of the way there. In IAM we created the KarpenterNodeRole, attached the policies it needs, and added the role to the KarpenterNodeInstanceProfile; as the name suggests, this is the role that the nodes Karpenter provisions will run with. The KarpenterControllerRole is the role attached to the Karpenter pod via IRSA.
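If you want a quick sanity check before moving on, the roles and instance profile can be listed with a few read-only calls. A minimal sketch, assuming the same CLUSTER_NAME variable from above:

```bash
# Confirm the node role, its attached policies, and the instance profile exist
aws iam get-role --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --query 'Role.Arn' --output text
aws iam list-attached-role-policies \
    --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --query 'AttachedPolicies[].PolicyName' --output text
aws iam get-instance-profile \
    --instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}" \
    --query 'InstanceProfile.Roles[].RoleName' --output text

# Confirm the controller role and its inline policy
aws iam get-role --role-name "KarpenterControllerRole-${CLUSTER_NAME}" \
    --query 'Role.Arn' --output text
aws iam list-role-policies --role-name "KarpenterControllerRole-${CLUSTER_NAME}"
```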
Next, what you must have are subnets and security groups; these are the hard prerequisite for instances to be provisioned. This walkthrough follows Karpenter's default setup, but to make it easier to follow I added the tag by hand. If you're using node groups and want to match the guide exactly, you can use the script from the guide instead. I added the tag like this: **karpenter.sh/discovery = myeks**. Then add the same tag to the security groups, exactly as for the subnets. This is the tag you'll use later when creating the AWSNodeTemplate.
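For reference, the same tagging can be done from the CLI. A rough sketch, assuming you substitute your own subnet and security group IDs (the IDs below are placeholders):

```bash
# Tag the subnets and security groups Karpenter should discover.
# Replace the placeholder IDs with the ones belonging to your cluster.
aws ec2 create-tags \
    --resources subnet-1 subnet-2 subnet-3 sg-12345 \
    --tags Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}

# Verify the tag is visible on the subnets
aws ec2 describe-subnets \
    --filters "Name=tag:karpenter.sh/discovery,Values=${CLUSTER_NAME}" \
    --query 'Subnets[].SubnetId' --output text
```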
Once that's done, you need to allow the nodes Karpenter provisions to join the cluster.

```bash
kubectl edit configmap aws-auth -n kube-system

apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}
      username: system:node:{{EC2PrivateDNSName}}
```

Add the entry under mapRoles, and make sure to substitute the variable parts with real values. Now we finally install Karpenter. Helm is required for this step.

```bash
helm template karpenter oci://public.ecr.aws/karpenter/karpenter \
    --version ${KARPENTER_VERSION} \
    --namespace karpenter \
    --set clusterName=${CLUSTER_NAME} \
    --set settings.aws.clusterName=${CLUSTER_NAME} \
    --set clusterEndpoint=${CLUSTER_ENDPOINT} \
    --set settings.aws.defaultInstanceProfile=KarpenterNodeInstanceProfile-${CLUSTER_NAME} \
    --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterControllerRole-${CLUSTER_NAME}" \
    --set controller.resources.requests.cpu=1 \
    --set controller.resources.requests.memory=1Gi \
    --set controller.resources.limits.cpu=1 \
    --set controller.resources.limits.memory=1Gi > karpenter.yaml
```

Note that settings.aws.clusterName and clusterName are two different options; don't mix them up. Since we're going NodeLess, this is where we diverge from the Karpenter guide: the Karpenter controller itself will run on Fargate.
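The Fargate profile created below references a pod execution role named AmazonEKSFargatePodExecutionRole. If your account doesn't have one yet, a minimal sketch for creating it might look like this (the role name is simply taken from the ARN used in the next command):

```bash
# Trust policy that lets EKS Fargate pods assume the role
cat << EOF > fargate-trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "eks-fargate-pods.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

aws iam create-role --role-name AmazonEKSFargatePodExecutionRole \
    --assume-role-policy-document file://fargate-trust-policy.json

aws iam attach-role-policy --role-name AmazonEKSFargatePodExecutionRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy
```

With that role in place, create the Fargate profile for the karpenter namespace: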
```bash
aws eks create-fargate-profile \
    --fargate-profile-name karpenter \
    --cluster-name ${CLUSTER_NAME} \
    --pod-execution-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKSFargatePodExecutionRole \
    --selectors namespace=karpenter \
    --subnets "subnet-1" "subnet-2" "subnet-3" # replace with your cluster's subnet IDs
```

Once the karpenter Fargate profile is created, deploy Karpenter's components and its CRDs together.

```bash
kubectl create namespace karpenter
kubectl create -f \
    https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.sh_provisioners.yaml
kubectl create -f \
    https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.k8s.aws_awsnodetemplates.yaml
kubectl apply -f karpenter.yaml
```

After the deployment you can see Karpenter running on Fargate.

```bash
k get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP               NODE                                                        NOMINATED NODE   READINESS GATES
karpenter-5bffc6f5d8-p2pxh   1/1     Running   0          9d    192.168.12.183   fargate-ip-192-168-12-183.ap-northeast-2.compute.internal   <none>           <none>
karpenter-5bffc6f5d8-qgcwn   1/1     Running   0          9d    192.168.13.157   fargate-ip-192-168-13-157.ap-northeast-2.compute.internal   <none>           <none>
```
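If the pods don't reach Running, the controller logs are the first thing to check. A minimal sketch — the deployment is named karpenter by the Helm release, and the container name "controller" is an assumption based on this chart version:

```bash
# Watch the Karpenter controller logs for IRSA, webhook, or permission errors
kubectl logs -f -n karpenter deploy/karpenter -c controller

# Recent events in the namespace can also point at scheduling problems
kubectl get events -n karpenter --sort-by=.lastTimestamp
```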
Depending on the version, the Karpenter pod can contain two containers; in that case the controller and the webhook run as separate containers. Only certain versions can be provisioned onto Fargate, so just use v0.27.3 or later. If something doesn't work along the way, it's usually a problem with one of the steps above — take a look at <https://karpenter.sh/v0.27.3/troubleshooting/>. With that, the installation is finally complete.
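As a last check before moving on, you can confirm that the two CRDs registered correctly — a small sketch using the CRD names installed above:

```bash
kubectl get crd provisioners.karpenter.sh awsnodetemplates.karpenter.k8s.aws
```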
Next up, I'll explain Karpenter's two CRDs.