The Istio installation tested previously was 1.5.1; this post upgrades it to 1.5.2. For the original customized installation, see the earlier article: http://www.huilog.com/?p=1299

  • Download the new release and have the configuration files from the previous installation ready
wget https://github.com/istio/istio/releases/download/1.5.2/istio-1.5.2-linux.tar.gz
tar xvzf istio-1.5.2-linux.tar.gz
# Copy the custom profile YAMLs from the old 1.5.1 release directory into the new one
cd istio-1.5.1/
cp -rf *.yaml ../istio-1.5.2/
cd ../istio-1.5.2/

# Replace the old istioctl binary and its bash completion script with the 1.5.2 versions
cp bin/istioctl /usr/local/bin/
cp tools/istioctl.bash /usr/local/bin/
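
Optionally, confirm that the 1.5.2 client is now the one on the PATH and enable command completion in the current shell. This is just a quick sanity check matching the copy destinations above:
# Show only the local client version (no cluster access required)
istioctl version --remote=false
# Load bash completion for istioctl into the current shell
source /usr/local/bin/istioctl.bash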
  • List the supported versions to verify that istioctl can upgrade from the currently installed version
[root@sh-saas-k8s1-master-dev-01 istio-1.5.2]# istioctl manifest versions

Operator version is 1.5.2.

The following installation package versions are recommended for use with this version of the operator:
  1.5.0

The following installation package versions are supported for upgrade by this version of the operator:
  >=1.4.0
   <1.6
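
Before running the actual upgrade, you can optionally render the manifests that the 1.5.2 operator would apply from the same custom profile and compare them with what is running in the cluster. A sketch, assuming custom_profile.yaml is the profile file used for the original install:
# Render the 1.5.2 manifests from the existing custom profile
istioctl manifest generate -f custom_profile.yaml > /tmp/istio-1.5.2-manifest.yaml
# Server-side diff against the live cluster objects
kubectl diff -f /tmp/istio-1.5.2-manifest.yaml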
  • Run the following command to start the upgrade:
[root@sh-saas-k8s1-master-dev-01 istio-1.5.2]# istioctl upgrade --verbose -y -f custom_profile.yaml 
Control Plane - egressgateway pod - istio-egressgateway-755458b554-6bcfq - version: 1.5.1
Control Plane - ingressgateway pod - istio-ingressgateway-6887ddb56-hbm5d - version: 1.5.1
Control Plane - pilot pod - istiod-b985b6959-xxgrl - version: 1.5.1

Upgrade version check passed: 1.5.1 -> 1.5.2.

Upgrade check: Warning!!! The following IOPS will be changed as part of upgrade. Please double check they are correct:
components:
  egressGateways:
    '[?->0]': -> map[enabled:true k8s:map[resources:map[requests:map[cpu:10m memory:40Mi]]]
      name:istio-egressgateway]
    '[0->?]': map[enabled:false k8s:map[env:[map[name:ISTIO_META_ROUTER_MODE value:sni-dnat]]
      hpaSpec:map[maxReplicas:5 metrics:[map[resource:map[name:cpu targetAverageUtilization:80]
      type:Resource]] minReplicas:1 scaleTargetRef:map[apiVersion:apps/v1 kind:Deployment
      name:istio-egressgateway]] resources:map[limits:map[cpu:2000m memory:1024Mi]
      requests:map[cpu:100m memory:128Mi]] service:map[ports:[map[name:http2 port:80]
      map[name:https port:443] map[name:tls port:15443 targetPort:15443]]] strategy:map[rollingUpdate:map[maxSurge:100%
      maxUnavailable:25%]]] name:istio-egressgateway] ->
  ingressGateways:
    '[#0]':
      k8s: map[env:[map[name:ISTIO_META_ROUTER_MODE value:sni-dnat]] hpaSpec:map[maxReplicas:5
        metrics:[map[resource:map[name:cpu targetAverageUtilization:80] type:Resource]]
        minReplicas:1 scaleTargetRef:map[apiVersion:apps/v1 kind:Deployment name:istio-ingressgateway]]
        resources:map[limits:map[cpu:2000m memory:1024Mi] requests:map[cpu:100m memory:128Mi]]
        service:map[ports:[map[name:status-port port:15020 targetPort:15020] map[name:http2
        port:80 targetPort:80] map[name:https port:443] map[name:kiali port:15029
        targetPort:15029] map[name:prometheus port:15030 targetPort:15030] map[name:grafana
        port:15031 targetPort:15031] map[name:tracing port:15032 targetPort:15032]
        map[name:tls port:15443 targetPort:15443] map[name:tcp port:31400]]] strategy:map[rollingUpdate:map[maxSurge:100%
        maxUnavailable:25%]]] ->
installPackagePath: /tmp/istio-install-packages/istio-1.5.1/install/kubernetes/operator
  ->
values:
  prometheus: map[contextPath:/prometheus hub:docker.io/prom ingress:map[annotations:<nil>
    enabled:false hosts:[prometheus.local] tls:<nil>] nodeSelector:map[] podAntiAffinityLabelSelector:[]
    podAntiAffinityTermLabelSelector:[] provisionPrometheusCert:true retention:6h
    scrapeInterval:15s security:map[enabled:true] tag:v2.15.1 tolerations:[]] ->
  sidecarInjectorWebhook:
    rewriteAppHTTPProbe: false -> true

2020-05-09T07:43:26.355345Z     info    Running the following hooks which match source->target versions 1.5.1->1.5.2: istio.io/istio/operator/pkg/hooks.checkInitCrdJobs
2020-05-09T07:43:26.355365Z     info    Running hook istio.io/istio/operator/pkg/hooks.checkInitCrdJobs
Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
✔ Finished applying manifest for component Pilot.
  Waiting for resources to become ready...
  Waiting for resources to become ready...
  Waiting for resources to become ready...
  Waiting for resources to become ready...
  Waiting for resources to become ready...
  Waiting for resources to become ready...
  Waiting for resources to become ready...
- Applying manifest for component EgressGateways...
- Applying manifest for component IngressGateways...
- Applying manifest for component AddonComponents...
✔ Finished applying manifest for component EgressGateways.
✔ Finished applying manifest for component IngressGateways.
✔ Finished applying manifest for component AddonComponents.


✔ Installation complete

Upgrade submitted. Please use `istioctl version` to check the current versions.
To upgrade the Istio data plane, you will need to re-inject it.
If you’re using automatic sidecar injection, you can upgrade the sidecar by doing a rolling update for all the pods:
    kubectl rollout restart deployment --namespace <namespace with auto injection>
If you’re using manual injection, you can upgrade the sidecar by executing:
    kubectl apply -f < (istioctl kube-inject -f <original application deployment yaml>)
[root@sh-saas-k8s1-master-dev-01 istio-1.5.2]# 
[root@sh-saas-k8s1-master-dev-01 istio-1.5.2]# istioctl version
client version: 1.5.2
control plane version: 1.5.2
data plane version: 1.5.2 (3 proxies), 1.5.1 (6 proxies)

If you installed Istio with the -f option (e.g. istioctl manifest apply -f), then the istioctl upgrade command must be run with the same -f value.

If the -f option is omitted, Istio is upgraded with the default configuration.

After running a series of checks, istioctl asks you to confirm whether to proceed (the -y flag used above skips that prompt).

As the istioctl version output shows, six proxies in the data plane are still on 1.5.1; their pods need to be restarted to finish the upgrade.
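
To see exactly which pods are still running the old sidecar, you can list container images and filter for the 1.5.1 proxy image. This is only a sketch and assumes the default istio/proxyv2 sidecar image name:
# List namespace, pod name and container images, then keep only pods using the 1.5.1 proxy
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' | grep 'proxyv2:1.5.1'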

  • Once the istioctl upgrade has completed, manually restart the pods that contain an Istio sidecar to update the data plane:
[root@sh-saas-k8s1-master-dev-01 istio-1.5.2]# kubectl delete pod -n public-ops-tomcat-dev public-ops-dubbo-demo-
public-ops-dubbo-demo-service-tomcat-dev-5dd45f959d-m9x8s  public-ops-dubbo-demo-web-tomcat-dev-79f758dcf-7x6np       
[root@sh-saas-k8s1-master-dev-01 istio-1.5.2]# kubectl delete pod -n public-ops-tomcat-dev public-ops-dubbo-demo-service-tomcat-dev-5dd45f959d-m9x8s 
pod "public-ops-dubbo-demo-service-tomcat-dev-5dd45f959d-m9x8s" deleted
[root@sh-saas-k8s1-master-dev-01 istio-1.5.2]# kubectl delete pod -n public-ops-tomcat-dev public-ops-dubbo-demo-web-tomcat-dev-79f758dcf-7x6np 
pod "public-ops-dubbo-demo-web-tomcat-dev-79f758dcf-7x6np" deleted
[root@sh-saas-k8s1-master-dev-01 istio-1.5.2]# istioctl version
client version: 1.5.2
control plane version: 1.5.2
data plane version: 1.5.2 (5 proxies), 1.5.1 (4 proxies)

After deleting (and thereby restarting) those two pods, the number of old-version proxies goes down. Once every proxy in the data plane has been upgraded, the Istio upgrade is complete.
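
Rather than deleting pods one by one, namespaces with automatic sidecar injection can be restarted in bulk and the remaining proxy versions checked afterwards. A sketch; the namespace here is just the example used above, and istioctl proxy-status is assumed to list each workload's sidecar version:
# Rolling-restart every deployment in the auto-injected namespace
kubectl rollout restart deployment -n public-ops-tomcat-dev
# Re-check the data plane: no 1.5.1 proxies should remain once all rollouts finish
istioctl version
istioctl proxy-status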