    K8S Practice

    Bruce · 2022-10-27

    # 1. Project Directory Structure

    • Overall directory layout of the Terraform project that manages Kubernetes
    $ cd terraform-kubernetes-operator
    
    $ tree ./
    ./
    ├── kind  # K8S cluster created with kind
    └── kubernetes  # top-level Kubernetes project directory
        ├── apps    # application services
        ├── devops  # DevOps projects
        └── k8s_configs   # kubeconfig files for connecting to the clusters
    
    5 directories, 10 files
    

    # 2. Deploying a Kubernetes Cluster with kind

    # provider-kind
    • https://github.com/tehcyx/terraform-provider-kind
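
    The versions.tf that pins this provider is not shown in the post; a minimal sketch, assuming the tehcyx/kind source from the GitHub link above (the kyma-incubator/kind provider linked in the references exposes the same kind_cluster resource), might look like:

    terraform {
      required_providers {
        kind = {
          # assumption: source taken from the GitHub repository above;
          # pin a version that matches your local provider cache
          source = "tehcyx/kind"
        }
      }
    }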


    # kind_cluster
    • Cluster name, node image, kubeconfig path, and whether to wait for the cluster to become ready
    provider "kind" {}
    
    resource "kind_cluster" "default" {
      name            = "test-cluster"
      node_image      = "kindest/node:v1.24.0"
      kubeconfig_path = pathexpand(var.kind_cluster_config_path)
      wait_for_ready  = true
      ......
    }
    
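    The kubeconfig path above comes from var.kind_cluster_config_path; the matching variables.tf is not reproduced in the post. A minimal sketch (the default path is an assumption) could be:

    variable "kind_cluster_config_path" {
      description = "Path where the kubeconfig for the kind cluster is written"
      type        = string
      # assumption: any writable path works; pathexpand() in kind_cluster resolves the ~
      default     = "~/.kube/config-kind"
    }
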
    # Control-plane
    • Customize the kubeadm configuration and expose ports 80 and 443
      kind_config {
        kind        = "Cluster"
        api_version = "kind.x-k8s.io/v1alpha4"
    
        node {
          role = "control-plane"
          kubeadm_config_patches = [
            <<-EOT
              kind: InitConfiguration
              imageRepository: registry.aliyuncs.com/google_containers
              networking:
                serviceSubnet: 10.0.0.0/16
                apiServerAddress: "0.0.0.0"
              nodeRegistration:
                kubeletExtraArgs:
                  node-labels: "ingress-ready=true"
              ---
              kind: KubeletConfiguration
              cgroupDriver: systemd
              cgroupRoot: /kubelet
              failSwapOn: false
            EOT
          ]
    
          extra_port_mappings {
            container_port = 80
            host_port      = 80
          }
          extra_port_mappings {
            container_port = 443
            host_port      = 443
          }
          extra_port_mappings {
            container_port = 30643
            host_port      = 30643
          }
          extra_port_mappings {
            container_port = 6443
            host_port      = 6443
          }
        }
    
        node {
          role = "worker"
        }
    
        node {
          role = "worker"
        }
      }
    
    # Ingress-nginx
    resource "null_resource" "wait_for_install_ingress" {
      triggers = {
        key = uuid()
      }
    
    
      provisioner "local-exec" {
        command = <<EOF
        kind load docker-image k8s.gcr.io/ingress-nginx/controller:v1.2.0 --name test-cluster
        kind load docker-image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 --name test-cluster
        kubectl create ns ingress-nginx
        kubectl apply -f ingress.yaml -n ingress-nginx
        printf "\nWaiting for the nginx ingress controller...\n"
        kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s
      EOF
      }
    
      depends_on = [kind_cluster.default]
    
    }
    
    # Installing Kubernetes with kind
    • Prerequisite: the Docker service must already be installed
    # Download the kubectl binary
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    mv kubectl  /usr/local/bin/
    ln -sfv /usr/local/bin/kubectl  /usr/bin/
    chmod +x /usr/local/bin/kubectl
    
    # Install kind on Linux
    curl -Lo /usr/local/bin/kind https://github.com/kubernetes-sigs/kind/releases/download/v0.16.0/kind-linux-amd64
    chmod +x /usr/local/bin/kind
    ln -sfv /usr/local/bin/kind /usr/bin/
    
    # Pre-pull the required images
    docker pull kindest/node:v1.24.0
    docker pull k8s.gcr.io/ingress-nginx/controller:v1.2.0
    docker pull k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
    docker pull registry.k8s.io/ingress-nginx/controller:v1.3.1
    docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0
    
    # Download the nginx-ingress manifest
    wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml -O ingress.yaml
    
    curl -L https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml -o ingress.yaml
    
    
    # Enter the kind project directory
    $ cd terraform-kubernetes-operator/kind
    $ tree ./
    ./
    ├── ingress.yaml
    ├── knid.tf
    ├── main.tf
    ├── outputs.tf
    ├── variables.tf
    └── versions.tf
    
    0 directories, 6 files
    
    # Point Terraform at the local CLI config (provider cache)
    export TF_CLI_CONFIG_FILE=/root/youdianzhishi-terraform/terraform-kubernetes-operator/.terraformrc
    
    # Run the Terraform commands
    $ terraform init
    $ terraform fmt        # or: terraform fmt -recursive
    $ terraform validate
    $ terraform plan
    $ terraform apply      # or: terraform apply -auto-approve
    
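    TF_CLI_CONFIG_FILE above points at a project-local CLI configuration that serves providers from a local mirror/cache instead of downloading them on every init. The file itself is not shown in the post; a typical sketch of such a .terraformrc (the paths and mirror layout are assumptions) looks like:

    # Cache downloaded providers so repeated init runs reuse them
    plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"

    # Serve providers from a pre-populated local directory instead of the registry
    provider_installation {
      filesystem_mirror {
        path    = "/root/.terraform.d/plugins"   # assumption: local provider binaries
        include = ["registry.terraform.io/*/*"]
      }
      direct {
        exclude = ["registry.terraform.io/*/*"]
      }
    }
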
    # Verify the cluster
    $ kubectl get nodes
    NAME                         STATUS   ROLES           AGE     VERSION
    test-cluster-control-plane   Ready    control-plane   6m14s   v1.24.0
    test-cluster-worker          Ready    <none>          5m43s   v1.24.0
    test-cluster-worker2         Ready    <none>          5m43s   v1.24.0
    
    # References
    • https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/
    • https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
    • kind docs: https://kind.sigs.k8s.io/docs/user/configuration/#getting-started
    • kind download: https://github.com/kubernetes-sigs/kind/releases/
    • https://registry.terraform.io/providers/kyma-incubator/kind/latest/docs/resources/cluster

    # 3. Initializing the Kubernetes Provider


    # Initialize the Kubernetes provider

    https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs

    https://github.com/hashicorp/terraform-provider-kubernetes

    $ cd terraform-kubernetes-operator/kubernetes/devops
    $  tree ./
    ./
    ├── main.tf
    └── versions.tf
    
    0 directories, 2 files
    
    # Point Terraform at the local CLI config (provider cache)
    export TF_CLI_CONFIG_FILE=/root/youdianzhishi-terraform/terraform-kubernetes-operator/.terraformrc
    
    # Initialize the Kubernetes provider
    $ terraform init 
    
    Initializing the backend...
    
    Initializing provider plugins...
    - Reusing previous version of hashicorp/kubernetes from the dependency lock file
    - Using previously-installed hashicorp/kubernetes v2.13.1
    
    Terraform has been successfully initialized!
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
    
    
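    The contents of versions.tf and main.tf in the devops directory are not reproduced in the post; a minimal sketch consistent with the init output above (the kubeconfig path and context are assumptions based on the k8s_configs directory used later) could be:

    terraform {
      required_providers {
        kubernetes = {
          source  = "hashicorp/kubernetes"
          version = "2.13.1" # version resolved in the init output above
        }
      }
    }

    # assumption: connect via the kind-generated kubeconfig stored in ../k8s_configs
    provider "kubernetes" {
      config_path    = "../k8s_configs/clustera.config"
      config_context = "kind-test-cluster"
    }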

    # 4. Creating Deployment, Service, and Ingress Resources

    https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment_v1

    https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/service_v1

    https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/ingress_v1
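
    The jenkins.tf listed in the tree below is not reproduced in the post. A condensed sketch along the lines of the deployment_v1 / service_v1 / ingress_v1 docs above, using the names, image, port, and host that appear in the command output later in this section (labels and path settings are assumptions), might be:

    resource "kubernetes_deployment_v1" "jenkins" {
      metadata {
        name      = "jenkins"
        namespace = "devops"
      }
      spec {
        replicas = 1
        selector {
          match_labels = { app = "jenkins" }
        }
        template {
          metadata {
            labels = { app = "jenkins" }
          }
          spec {
            container {
              name  = "jenkins"
              image = "jenkins/jenkins:2.346.3-2-lts-jdk8"
              port {
                container_port = 8080
              }
            }
          }
        }
      }
    }

    resource "kubernetes_service_v1" "jenkins" {
      metadata {
        name      = "jenkins-service"
        namespace = "devops"
      }
      spec {
        selector = { app = "jenkins" }
        port {
          port        = 8080
          target_port = 8080
        }
      }
    }

    resource "kubernetes_ingress_v1" "jenkins" {
      metadata {
        name      = "jenkins-ingress"
        namespace = "devops"
      }
      spec {
        rule {
          host = "devops.jenkins.com"
          http {
            path {
              path      = "/"
              path_type = "Prefix"
              backend {
                service {
                  name = kubernetes_service_v1.jenkins.metadata[0].name
                  port {
                    number = 8080
                  }
                }
              }
            }
          }
        }
      }
    }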

    $ cd terraform-kubernetes-operator/kubernetes/devops
    $ tree ./
    ./
    ├── jenkins.tf
    ├── main.tf
    └── versions.tf
    
    0 directories, 3 files
    
    # Point Terraform at the local CLI config (provider cache)
    export TF_CLI_CONFIG_FILE=/root/youdianzhishi-terraform/terraform-kubernetes-operator/.terraformrc
    
    # Kubeconfig for connecting to the Kubernetes cluster
    $ cd terraform-kubernetes-operator/kubernetes/k8s_configs
    $ vim clustera.config
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: ......
        server: https://192.168.254.21:6443
      name: kind-test-cluster
    contexts:
    - context:
        cluster: kind-test-cluster
        user: kind-test-cluster
      name: kind-test-cluster
    current-context: kind-test-cluster
    kind: Config
    preferences: {}
    users:
    - name: kind-test-cluster
      user:
        client-certificate-data: ......
        client-key-data: ......
    
    # Load the image into the kind cluster
     kind load docker-image jenkins/jenkins:2.346.3-2-lts-jdk8 --name=test-cluster
    
    # Run the Terraform commands
    $ terraform init
    $ terraform fmt        # or: terraform fmt -recursive
    $ terraform validate
    $ terraform plan
    $ terraform apply      # or: terraform apply -auto-approve
    
    # Inspect the Kubernetes cluster created by kind
    $ kubectl get nodes
    NAME                         STATUS   ROLES           AGE    VERSION
    test-cluster-control-plane   Ready    control-plane   137m   v1.24.0
    test-cluster-worker          Ready    <none>          136m   v1.24.0
    test-cluster-worker2         Ready    <none>          136m   v1.24.0
    
    # Get the cluster contexts
    $ kubectl config get-contexts
    CURRENT   NAME                CLUSTER             AUTHINFO            NAMESPACE
    *         kind-test-cluster   kind-test-cluster   kind-test-cluster 
    
    
    # Load the Jenkins image with kind
    $ kind load docker-image jenkins/jenkins:2.346.3-2-lts-jdk8 --name=test-cluster
    Image: "jenkins/jenkins:2.346.3-2-lts-jdk8" with ID "sha256:0e48d93a1a731b327c9d8ea1af728563c3b40db5af4d409bfaab1a38d0ed0ede" not yet present on node "test-cluster-worker", loading...
    Image: "jenkins/jenkins:2.346.3-2-lts-jdk8" with ID "sha256:0e48d93a1a731b327c9d8ea1af728563c3b40db5af4d409bfaab1a38d0ed0ede" not yet present on node "test-cluster-control-plane", loading...
    Image: "jenkins/jenkins:2.346.3-2-lts-jdk8" with ID "sha256:0e48d93a1a731b327c9d8ea1af728563c3b40db5af4d409bfaab1a38d0ed0ede" not yet present on node "test-cluster-worker2", loading...
    
    
    # Check the created namespace and pods
    $ kubectl get pods -n devops
    NAME                       READY   STATUS    RESTARTS   AGE
    jenkins-7fbcb7c885-fvjbw   1/1     Running   0          17s
    
    $ kubectl get deployment -n devops
    NAME      READY   UP-TO-DATE   AVAILABLE   AGE
    jenkins   1/1     1            1           96s
    
    # Check the Jenkins pod logs
    $ kubectl logs -f jenkins-7fbcb7c885-fvjbw -n devops
    Sep 25, 2022 7:11:57 AM Main verifyJavaVersion
    WARNING: You are running Jenkins on Java 1.8, support for which will end on or after September 1, 2022. Please refer to the documentation for details on upgrading to Java 11: https://www.jenkins.io/redirect/upgrading-jenkins-java-version-8-to-11
    Running from: /usr/share/jenkins/jenkins.war
    webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
    2022-09-25 07:11:57.569+0000 [id=1]	INFO	org.eclipse.jetty.util.log.Log#initialized: Logging initialized @901ms to org.eclipse.jetty.util.log.JavaUtilLog
    ......
    2022-09-25 07:12:05.180+0000 [id=26]	INFO	jenkins.install.SetupWizard#init: 
    
    *************************************************************
    *************************************************************
    *************************************************************
    
    Jenkins initial setup is required. An admin user has been created and a password generated.
    Please use the following password to proceed to installation:
    
    a3cd6075bf3646edabb9406c52d01d28
    
    This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
    
    *************************************************************
    *************************************************************
    *************************************************************
    ......
    
    # Test whether the Jenkins Service responds
    $  kubectl get pods -n devops
    NAME                       READY   STATUS    RESTARTS   AGE
    jenkins-7fbcb7c885-fvjbw   1/1     Running   0          30m
    
    $  kubectl get svc -n devops
    NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    jenkins-service   ClusterIP   10.96.153.181   <none>        8080/TCP   3m39s
    
    $  kubectl exec -it jenkins-7fbcb7c885-fvjbw -n devops -- curl 10.96.153.181:8080
    <html><head><meta http-equiv='refresh' content='1;url=/login?from=%2F'/><script>window.location.replace('/login?from=%2F');</script></head><body style='background-color:white; color:white;'>
    
    Authentication required
    <!--
    -->
    </body></html>                  
    
    
    # Check the Kubernetes Ingress
    # nginx-ingress was installed with the kind-specific ingress.yaml; only then can the ingress hostname be curled successfully from the local host
    $ kubectl get ingress -n devops
    NAME              CLASS    HOSTS                ADDRESS   PORTS   AGE
    jenkins-ingress   <none>   devops.jenkins.com             80      4m59s
    
    # Add the hostname to the local hosts file
    $  cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.254.21 devops.jenkins.com
    
    
    # curl the ingress hostname from the local host
    $ curl devops.jenkins.com
    <html><head><meta http-equiv='refresh' content='1;url=/login?from=%2F'/><script>window.location.replace('/login?from=%2F');</script></head><body style='background-color:white; color:white;'>
    
    Authentication required
    <!--
    -->
    </body></html>      
    
    • Add the hostname to the Windows host's local hosts file and test access from a browser

      http://devops.jenkins.com/


    # Troubleshooting
    • Specify the target Kubernetes cluster
    ## provider = kubernetes.clustera
    
    kubernetes_namespace.devops: Creating...
    ╷
    │ Error: Post "http://localhost/api/v1/namespaces": read tcp 127.0.0.1:58000->127.0.0.1:80: read: connection reset by peer
    │ 
    │   with kubernetes_namespace.devops,
    │   on main.tf line 17, in resource "kubernetes_namespace" "devops":
    │   17: resource "kubernetes_namespace" "devops" {
    │ 
    
    • Skip certificate verification
      • Add `insecure = true` to the provider block to skip certificate verification (see the sketch below)
    kubernetes_namespace.devops: Creating...
    ╷
    │ Error: Post "https://192.168.254.21:6443/api/v1/namespaces": x509: certificate is valid for 10.96.0.1, 172.18.0.2, 127.0.0.1, not 192.168.254.21
    │ 
    │   with kubernetes_namespace.devops,
    │   on main.tf line 18, in resource "kubernetes_namespace" "devops":
    │   18: resource "kubernetes_namespace" "devops" {
    │ 
    
    
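    A hedged sketch of how the two fixes above combine in main.tf (the alias name and the namespace resource come from the error output; the kubeconfig path is an assumption):

    provider "kubernetes" {
      alias          = "clustera"
      config_path    = "../k8s_configs/clustera.config"
      config_context = "kind-test-cluster"
      # the kind-generated API server certificate is not valid for 192.168.254.21,
      # so TLS verification is skipped
      insecure       = true
    }

    resource "kubernetes_namespace" "devops" {
      # without this line Terraform uses an unconfigured default provider
      # and falls back to http://localhost, producing the first error above
      provider = kubernetes.clustera
      metadata {
        name = "devops"
      }
    }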

    # 5. Alibaba Cloud ACK

    # ACK product parameters

    https://help.aliyun.com/product/85222.html

    Refer to "阿里云ACK/阿里云ACK手动创建模板.md".


    # Create the ACK cluster
    # Initialize the provider
    • Provider: alicloud/kubernetes

      https://registry.terraform.io/providers/aliyun/alicloud/latest/docs

    # Enter the project directory
    $ cd terraform-kubernetes-operator/kubernetes/ack
    $ tree  ./
    ./
    └── versions.tf
    
    0 directories, 1 file
    
    # Point Terraform at the local CLI config (provider cache)
    export TF_CLI_CONFIG_FILE=/home/terraform/youdianzhishi-terraform/terraform-kubernetes-operator/.terraformrc
    
    # Initialize the provider
    $ terraform  init 
    
    Initializing the backend...
    
    Initializing provider plugins...
    - Finding aliyun/alicloud versions matching "1.183.0"...
    - Using aliyun/alicloud v1.183.0 from the shared cache directory
    
    Terraform has created a lock file .terraform.lock.hcl to record the provider
    selections it made above. Include this file in your version control repository
    so that Terraform can guarantee to make the same selections by default when
    you run "terraform init" in the future.
    
    ╷
    │ Warning: Incomplete lock file information for providers
    │ 
    │ Due to your customized provider installation methods, Terraform was forced to calculate lock file checksums locally for the
    │ following providers:
    │   - aliyun/alicloud
    │ 
    │ The current .terraform.lock.hcl file only includes checksums for linux_amd64, so Terraform running on another platform will fail to
    │ install these providers.
    │ 
    │ To calculate additional checksums for another platform, run:
    │   terraform providers lock -platform=linux_amd64
    │ (where linux_amd64 is the platform to generate)
    ╵
    
    Terraform has been successfully initialized!
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
    
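    The versions.tf in the tree above is not reproduced; based on the init output it pins aliyun/alicloud 1.183.0, roughly:

    terraform {
      required_providers {
        alicloud = {
          source  = "aliyun/alicloud"
          version = "1.183.0" # version resolved during init above
        }
      }
    }

    provider "alicloud" {
      # credentials and region come from the ALICLOUD_* environment variables below
    }
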
    # Alibaba Cloud AK/SK
    • Create the AK/SK in the Alibaba Cloud console with a sub-account, or use the root account (the detailed steps are not covered here)
    • If a RAM sub-account is used, it needs the AliyunCSFullAccess policy plus any custom CS permissions
    export ALICLOUD_ACCESS_KEY="xxxxxx"
    export ALICLOUD_SECRET_KEY="xxxxxx"
    export ALICLOUD_REGION="cn-shanghai"
    
    # Provision network resources
    • VPC and vSwitch

    https://registry.terraform.io/providers/aliyun/alicloud/latest/docs/resources/vpc

    https://registry.terraform.io/providers/aliyun/alicloud/latest/docs/resources/vswitch

    $ cd terraform-kubernetes-operator/kubernetes/ack
    $ vim network.tf
    resource "alicloud_vpc" "vpc" {
      vpc_name   = "k8s_vpc"
      cidr_block = "172.16.0.0/12"
    }
    
    resource "alicloud_vswitch" "vsw" {
      vpc_id     = alicloud_vpc.vpc.id
      cidr_block = "172.16.0.0/16"
      zone_id    = "cn-shanghai-a"
    }
    
    
    # Export the environment variables
    export ALICLOUD_ACCESS_KEY="xxxxxx"
    export ALICLOUD_SECRET_KEY="xxxxxx"
    export ALICLOUD_REGION="cn-shanghai"
    export TF_CLI_CONFIG_FILE=/home/terraform/youdianzhishi-terraform/terraform-kubernetes-operator/.terraformrc
    
    
    # Run the Terraform commands
    $ terraform init
    $ terraform fmt        # or: terraform fmt -recursive
    $ terraform validate
    $ terraform plan
    $ terraform apply      # or: terraform apply -auto-approve
    


    # Provision the ACK cluster

    https://registry.terraform.io/providers/aliyun/alicloud/latest/docs/resources/vpc

    https://registry.terraform.io/providers/aliyun/alicloud/latest/docs/resources/vswitch

    https://registry.terraform.io/providers/aliyun/alicloud/latest/docs/resources/cs_managed_kubernetes

    https://registry.terraform.io/providers/aliyun/alicloud/latest/docs/resources/cs_kubernetes_node_pool

    Alibaba Cloud reference: https://help.aliyun.com/document_detail/197780.html?spm=a2c4g.11186623.0.0.290e71d1xQfvrs

    $ cd terraform-kubernetes-operator/kubernetes/ack
    $  tree  ./
    ./
    ├── ack.tf
    ├── main.tf
    ├── network.tf
    ├── outputs.tf
    ├── variables.tf
    └── versions.tf
    
    0 directories, 6 files
    
    
    # Export the environment variables
    export ALICLOUD_ACCESS_KEY="xxxxxx"
    export ALICLOUD_SECRET_KEY="xxxxxx"
    export ALICLOUD_REGION="cn-shanghai"
    export TF_CLI_CONFIG_FILE=/home/terraform/youdianzhishi-terraform/terraform-kubernetes-operator/.terraformrc
    
    
    # Run the Terraform commands
    $ terraform init
    $ terraform fmt        # or: terraform fmt -recursive
    $ terraform validate
    $ terraform plan
    $ terraform apply      # or: terraform apply -auto-approve
    
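    The ack.tf listed in the tree above is not shown in the post. A rough sketch based on the cs_managed_kubernetes and cs_kubernetes_node_pool docs linked above (cluster name, CIDRs, instance type, node count, and password are assumptions and should be checked against the pinned provider version):

    resource "alicloud_cs_managed_kubernetes" "ack" {
      name                 = "test-ack"                # assumption
      cluster_spec         = "ack.standard"
      worker_vswitch_ids   = [alicloud_vswitch.vsw.id] # vswitch from network.tf
      new_nat_gateway      = true
      pod_cidr             = "10.10.0.0/16"            # assumption
      service_cidr         = "10.20.0.0/16"            # assumption
      slb_internet_enabled = true
    }

    resource "alicloud_cs_kubernetes_node_pool" "default" {
      cluster_id           = alicloud_cs_managed_kubernetes.ack.id
      name                 = "default-pool"
      vswitch_ids          = [alicloud_vswitch.vsw.id]
      instance_types       = ["ecs.c6.large"]          # assumption
      desired_size         = 2                         # assumption
      system_disk_category = "cloud_efficiency"
      system_disk_size     = 40
      password             = "ChangeMe123!"            # assumption: or use key_name
    }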

    Notes

    • In alicloud_cs_kubernetes_node_pool, kubelet resource reservations can also be configured

      Reference: https://registry.terraform.io/providers/aliyun/alicloud/latest/docs/resources/cs_kubernetes_node_pool

    • Review the created ACK resources in the console


    • Connect to the worker nodes remotely


    # Deploy an application on the ACK cluster

    https://registry.terraform.io/providers/hashicorp/kubernetes/latest

    https://registry.terraform.io/providers/aliyun/alicloud/latest/docs/resources/dns_record

    $ cd terraform-kubernetes-operator/kubernetes/ack-devops
    $  tree ./
    ./
    ├── alicoud-dns.tf
    ├── jenkins.tf
    ├── main.tf
    ├── outputs.tf
    └── versions.tf
    
    0 directories, 5 files
    
    
    # Export the environment variables (AK/SK, region, and local Terraform CLI config path)
    export ALICLOUD_ACCESS_KEY="xxxxxx"
    export ALICLOUD_SECRET_KEY="xxxxxx"
    export ALICLOUD_REGION="cn-shanghai"
    export TF_CLI_CONFIG_FILE=/home/terraform/youdianzhishi-terraform/terraform-kubernetes-operator/.terraformrc
    
    # Run the Terraform commands
    $ terraform init
    $ terraform fmt        # or: terraform fmt -recursive
    $ terraform validate
    $ terraform plan
    $ terraform apply      # or: terraform apply -auto-approve
    
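    The alicoud-dns.tf listed in the tree above is not reproduced. Per the dns_record doc linked above, a sketch that points jenkins.chsaos.com at the ingress/SLB address (the address matches the nslookup output in the notes below; in practice it would come from the Service or Ingress status) could be:

    # assumption: public address of the SLB fronting the ingress
    variable "ingress_lb_ip" {
      type    = string
      default = "47.117.160.23"
    }

    resource "alicloud_dns_record" "jenkins" {
      name        = "chsaos.com"
      host_record = "jenkins"
      type        = "A"
      value       = var.ingress_lb_ip
      ttl         = 600
    }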

    Notes

    • Check the created route entries in the console


    • Check the DNS resolution in the console


    $ nslookup jenkins.chsaos.com
    Server:         100.100.2.136
    Address:        100.100.2.136#53
    
    Non-authoritative answer:
    Name:   jenkins.chsaos.com
    Address: 47.117.160.23
    


    # 6. Creating an ASK Cluster

    # Importing a manually created ASK resource
    # Manually create the ASK cluster
    • Create the VPC


    • Import the already-created ASK resource into Terraform

    Note the ASK cluster ID here: c954fd29cbbed40bb8946b530c05022e4

    $ cd terraform-kubernetes-operator/kubernetes/ask
    $ vim versions.tf
    terraform {
      required_version = "1.2.8" // Terraform CLI version; check with `terraform -v`
      required_providers {
        alicloud = {
          source  = "aliyun/alicloud"
          version = "1.178.0"
        }
      }
    }
    
    provider "alicloud" {
      # Configuration options
      region = "cn-shanghai"
    }
    
    $ vim main.tf
    resource "alicloud_cs_serverless_kubernetes" "main" {
    
    }
    
    # Export the environment variables (AK/SK, region, and local Terraform CLI config path)
    export ALICLOUD_ACCESS_KEY="xxxxxx"
    export ALICLOUD_SECRET_KEY="xxxxxx"
    export ALICLOUD_REGION="cn-shanghai"
    export TF_CLI_CONFIG_FILE=/home/terraform/youdianzhishi-terraform/terraform-kubernetes-operator/.terraformrc
    
    # Run the Terraform commands
    $ terraform init
    $ terraform fmt        # or: terraform fmt -recursive
    $ terraform validate
    $ terraform plan
    $ terraform apply      # or: terraform apply -auto-approve
    
    # Import the ASK resource
    ## After the import, main.tf itself is not changed; a `terraform.tfstate` state file is simply generated in the current working directory
    $ terraform import alicloud_cs_serverless_kubernetes.main c954fd29cbbed40bb8946b530c05022e4
    alicloud_cs_serverless_kubernetes.main: Importing from ID "c954fd29cbbed40bb8946b530c05022e4"...
    alicloud_cs_serverless_kubernetes.main: Import prepared!
      Prepared alicloud_cs_serverless_kubernetes for import
    alicloud_cs_serverless_kubernetes.main: Refreshing state... [id=c954fd29cbbed40bb8946b530c05022e4]
    
    Import successful!
    
    The resources that were imported are shown above. These resources are now in
    your Terraform state and will henceforth be managed by Terraform.
    
    $ terraform state list
    alicloud_cs_serverless_kubernetes.main
    
    $ terraform state show alicloud_cs_serverless_kubernetes.main
    # alicloud_cs_serverless_kubernetes.main:
    resource "alicloud_cs_serverless_kubernetes" "main" {
        cluster_spec        = "ack.standard"
        deletion_protection = false
        id                  = "c954fd29cbbed40bb8946b530c05022e4"
        load_balancer_spec  = "slb.s2.small"
        logging_type        = "SLS"
        name                = "test-ask"
        resource_group_id   = "rg-acfmzpb34mgigsa"
        security_group_id   = "sg-uf67ktz6ahsrrltw4b39"
        tags                = {}
        version             = "1.22.15-aliyun.1"
        vpc_id              = "vpc-uf6fje0zfy40jw16ue3w6"
        vswitch_id          = "vsw-uf6fc2pslf66ttrlt2zkm"
        vswitch_ids         = [
            "vsw-uf6fc2pslf66ttrlt2zkm",
        ]
    
        timeouts {}
    }
    
    # Creating ASK with Terraform

    https://registry.terraform.io/providers/aliyun/alicloud/latest/docs/resources/cs_serverless_kubernetes

    Creating an ASK cluster with Terraform: https://help.aliyun.com/document_detail/452133.html

    $ cd terraform-kubernetes-operator/kubernetes/ask
    $ vim versions.tf
    terraform {
      required_version = "1.2.8" // Terraform CLI version; check with `terraform -v`
      required_providers {
        alicloud = {
          source  = "aliyun/alicloud"
          version = "1.178.0"
        }
      }
    }
    
    $ vim main.tf
    # resource "alicloud_cs_serverless_kubernetes" "main" {
    
    # }
    
    
    resource "alicloud_vpc" "vpc" {
      vpc_name   = "k8s_vpc"
      cidr_block = "172.16.0.0/12"
    }
    
    resource "alicloud_vswitch" "vsw" {
      vpc_id     = alicloud_vpc.vpc.id
      cidr_block = "172.16.0.0/16"
      zone_id    = "cn-shanghai-d"
    }
    
    resource "alicloud_cs_serverless_kubernetes" "main" {
      cluster_spec        = "ack.standard"
      deletion_protection = false
      load_balancer_spec  = "slb.s2.small"
      logging_type        = "SLS"
      name                = "test-ask"
      resource_group_id   = "rg-acfmzpb34mgigsa"
      security_group_id   = "sg-uf67ktz6ahsrrltw4b39"
      version             = "1.22.15-aliyun.1"
      vpc_id              = "vpc-uf6fje0zfy40jw16ue3w6"
    
      timeouts {}
      addons {
        # SLB Ingress
        name   = "alb-ingress-controller"
        config = "{\"IngressSlbNetworkType\":\"internet\",\"IngressSlbSpec\":\"slb.s2.small\"}"
      }
      addons {
        name = "metrics-server"
      }
      #   addons {
      #     name = "knative"
      #   }
    }
    
    # Export the environment variables (AK/SK, region, and local Terraform CLI config path)
    export ALICLOUD_ACCESS_KEY="xxxxxx"
    export ALICLOUD_SECRET_KEY="xxxxxx"
    export ALICLOUD_REGION="cn-shanghai"
    export TF_CLI_CONFIG_FILE=/home/terraform/youdianzhishi-terraform/terraform-kubernetes-operator/.terraformrc
    
    # Run the Terraform commands
    $ terraform init
    $ terraform fmt        # or: terraform fmt -recursive
    $ terraform validate
    $ terraform plan
    $ terraform apply      # or: terraform apply -auto-approve
    
    