We are currently building an active-active setup across two data centers in the same city. Each data center runs its own Kubernetes cluster, and some shared volumes need to be available to both clusters.

For example, the cluster in the primary data center currently has the following PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: com.tencent.cloud.csi.cfs
  creationTimestamp: "2020-11-25T07:24:15Z"
  name: public-devops-teststatefulset-nginx-online-pvc
  namespace: test
  uid: 3d65a636-f4c3-4c49-b957-c45673f48bc6
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
  storageClassName: cfs-hp
  volumeMode: Filesystem
  volumeName: pvc-3d65a636-f4c3-4c49-b957-c45673f48bc6
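
The excerpt stops here, but since Tencent Cloud CFS is served over the NFS protocol, one possible way to make the same share usable in the second cluster is to pre-create a statically bound PV/PVC pair there. This is only a sketch under that assumption: the server address below is a hypothetical CFS mount target and must be reachable from both data centers.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: public-devops-teststatefulset-nginx-online-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""              # empty so no provisioner gets involved
  nfs:
    server: 10.0.0.100              # hypothetical CFS mount target IP
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: public-devops-teststatefulset-nginx-online-pvc
  namespace: test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
  storageClassName: ""
  volumeName: public-devops-teststatefulset-nginx-online-pv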

Read more

k8s network policies

Below is an example NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
  • Required fields: like all other Kubernetes configuration, a NetworkPolicy needs the apiVersion, kind, and metadata fields. For general information on working with configuration files, see Configure Containers Using a ConfigMap and Object Management.

  • spec: the NetworkPolicy spec contains all of the information needed to define a particular network policy in the given namespace.

  • podSelector: every NetworkPolicy includes a podSelector, which selects the group of Pods the policy applies to. The example policy selects Pods with the label "role=db". An empty podSelector selects all Pods in the namespace.

  • policyTypes: every NetworkPolicy includes a policyTypes list, which may contain Ingress, Egress, or both. The policyTypes field indicates whether the policy applies to ingress traffic to the selected Pods, egress traffic from the selected Pods, or both. If no policyTypes are specified, Ingress is always set by default, and Egress is set if the NetworkPolicy has any egress rules (a concrete default-deny example is sketched after this list).

  • ingress: every NetworkPolicy may include a whitelist of ingress rules. Each rule allows traffic that matches both the from and ports sections. The example policy contains one simple rule: it matches traffic on a single port from one of three sources, the first specified via an ipBlock, the second via a namespaceSelector, and the third via a podSelector.

  • egress: every NetworkPolicy may include a whitelist of egress rules. Each rule allows traffic that matches both the to and ports sections. The example policy contains a single rule, which matches traffic on the specified port to any destination in 10.0.0.0/24.
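
As a concrete illustration of the policyTypes behaviour described above (my own minimal example, not taken from the article): the policy below selects every Pod in the namespace and lists both policy types but no rules, so all ingress and egress traffic for those Pods is denied.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}       # empty selector matches every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress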

Read more

Kubernetes source branch: 1.18

Conclusions first. When kube-apiserver starts:

  • the plugins passed with --admission-control are exactly the plugins the apiserver starts; the default plugins are not included
  • --admission-control cannot be used together with --enable-admission-plugins / --disable-admission-plugins
  • the --enable-admission-plugins parameter does not have to list plugins in loading order
  • when --admission-control is not used, the apiserver also starts the default plugins
  • when --enable-admission-plugins is used, the apiserver also starts the default plugins, unless a plugin is explicitly switched off with --disable-admission-plugins (see the flag sketch right after this list)
  • if the same plugin appears in both --enable-admission-plugins and --disable-admission-plugins, that plugin will be loaded
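
To illustrate the fifth conclusion, here is a sketch of how the flags might be combined on a kube-apiserver command line (the plugin names are just examples, other flags omitted): with these two flags all default admission plugins stay enabled, NodeRestriction is enabled on top of them, and only DefaultStorageClass is switched off.

kube-apiserver \
  --enable-admission-plugins=NodeRestriction \
  --disable-admission-plugins=DefaultStorageClass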

Read more

From a usage perspective there are generally two kinds of storage requirements: exclusive storage and shared storage.

Exclusive storage means each Pod gets its own dedicated storage space that is not shared with other Pods.

Shared storage means multiple Pods share one storage space, and all of them can read and write it.

Either way a StorageClass is required, so let's first create an NFS StorageClass:
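
The excerpt cuts off before the manifest itself. As a sketch only, assuming an external NFS provisioner such as nfs-subdir-external-provisioner is already deployed, the StorageClass might look like this (the provisioner string and parameters are assumptions and must match whatever NFS provisioner is actually running in the cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # must match the deployed provisioner
parameters:
  archiveOnDelete: "false"        # provisioner-specific: do not keep data after the PVC is deleted
reclaimPolicy: Delete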

Read more

1. Testing the istio 1.6 canary feature

istio 1.6 adds a canary (gray release) capability for istio itself, so I tested this canary feature first.

Download istio 1.6 from the official site, unpack it, and copy istioctl into the corresponding directory:

[root@sh-saas-k8s1-master-dev-01 istio]# tar xfzf istio-1.6.0-linux-amd64.tar.gz
[root@sh-saas-k8s1-master-dev-01 istio]# cd istio-1.6.0
[root@sh-saas-k8s1-master-dev-01 istio-1.6.0]# cp bin/istioctl /usr/local/bin/
cp: overwrite ‘/usr/local/bin/istioctl’? y
[root@sh-saas-k8s1-master-dev-01 istio-1.6.0]# ll tools/
total 188
-rw-r--r-- 1 root root  2031 May 21 07:12 convert_RbacConfig_to_ClusterRbacConfig.sh
-rw-r--r-- 1 root root 10669 May 21 07:12 dump_kubernetes.sh
-rw-r--r-- 1 root root 88599 May 21 07:12 _istioctl
-rw-r--r-- 1 root root 85301 May 21 07:12 istioctl.bash
[root@sh-saas-k8s1-master-dev-01 istio-1.6.0]# cp tools/istioctl.bash /usr/local/bin/
cp: overwrite ‘/usr/local/bin/istioctl.bash’? y

[root@sh-saas-k8s1-master-dev-01 istio-1.6.0]# istioctl version
client version: 1.6.0
control plane version: 1.5.2
data plane version: 1.5.2 (6 proxies), 1.5.1 (4 proxies)
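
The canary mechanism works by installing a second, revision-tagged control plane alongside the existing one and then moving namespaces over to it. A sketch of the basic commands from the istio 1.6 documentation follows; the revision name canary and the namespace test are placeholders, not necessarily what the full article uses.

istioctl install --set revision=canary
# point a namespace at the new control plane, then restart its workloads to re-inject sidecars
kubectl label namespace test istio-injection- istio.io/rev=canary
kubectl rollout restart deployment -n test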

Read more

istio 1.5 version upgrade

The istio tested previously was 1.5.1; now it is upgraded to 1.5.2. You can refer to the earlier article on customizing the istio installation: http://www.huilog.com/?p=1299

  • Download the new version and get the configuration files from the previous installation ready
wget https://github.com/istio/istio/releases/download/1.5.2/istio-1.5.2-linux.tar.gz
tar xvzf istio-1.5.2-linux.tar.gz 
cd istio-1.5.1/
cp -rf *.yaml ../istio-1.5.2/
cd ../istio-1.5.2/

cp bin/istioctl /usr/local/bin/
cp tools/istioctl.bash /usr/local/bin/
  • List the supported versions to verify that the istioctl command supports upgrading from the current version
[root@sh-saas-k8s1-master-dev-01 istio-1.5.2]# istioctl manifest versions

Operator version is 1.5.2.

The following installation package versions are recommended for use with this version of the operator:
  1.5.0

The following installation package versions are supported for upgrade by this version of the operator:
  >=1.4.0
   <1.6
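
The excerpt ends with the version check. The actual in-place upgrade step would then be something along the lines of the sketch below, feeding istioctl upgrade the same IstioOperator configuration file used for the original install (my-install.yaml is a placeholder name):

istioctl upgrade -f my-install.yaml
# verify the control plane and data plane versions afterwards
istioctl version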

Read more

Recently, while testing istio, I often found that Pods with an injected sidecar would end up in the Init:CrashLoopBackOff state after a while. For example:

[root@sh-saas-k8s1-master-dev-01 ~]# kubectl get pod --all-namespaces -o wide | grep  'Init'
public-ops-tomcat-dev           public-ops-dubbo-demo-web-tomcat-dev-79f758dcf-64qwr              0/2     Init:CrashLoopBackOff   7          21h     10.253.3.166   10.12.97.23   <none>           <none>

My Kubernetes version is 1.14.10 and the istio version is 1.5.1. Checking the logs of the istio-init container shows the following error:

[root@sh-saas-k8s1-master-dev-01 ~]# kubectl logs -n public-ops-tomcat-dev           public-ops-dubbo-demo-web-tomcat-dev-79f758dcf-64qwr istio-init 
Environment:
------------
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_MARK=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=

Variables:
----------
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_UID=1337
PROXY_GID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=15090,15020
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBEVIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false

Writing following contents to rules file:  /tmp/iptables-rules-1588923880490327697.txt562915423
* nat
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_INBOUND
-N ISTIO_OUTPUT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-port 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-port 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 22 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT

iptables-restore --noflush /tmp/iptables-rules-1588923880490327697.txt562915423
iptables-restore: line 2 failed
iptables-save 
# Generated by iptables-save v1.6.1 on Fri May  8 07:44:40 2020
*mangle
:PREROUTING ACCEPT [643414:2344563772]
:INPUT ACCEPT [643414:2344563772]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [616124:4267707048]
:POSTROUTING ACCEPT [616124:4267707048]
COMMIT
# Completed on Fri May  8 07:44:40 2020
# Generated by iptables-save v1.6.1 on Fri May  8 07:44:40 2020
*raw
:PREROUTING ACCEPT [643414:2344563772]
:OUTPUT ACCEPT [616124:4267707048]
COMMIT
# Completed on Fri May  8 07:44:40 2020
# Generated by iptables-save v1.6.1 on Fri May  8 07:44:40 2020
*nat
:PREROUTING ACCEPT [38474:2000648]
:INPUT ACCEPT [40999:2131948]
:OUTPUT ACCEPT [7987:560379]
:POSTROUTING ACCEPT [8763:600731]
:ISTIO_INBOUND - [0:0]
:ISTIO_IN_REDIRECT - [0:0]
:ISTIO_OUTPUT - [0:0]
:ISTIO_REDIRECT - [0:0]
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A ISTIO_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Fri May  8 07:44:40 2020
# Generated by iptables-save v1.6.1 on Fri May  8 07:44:40 2020
*filter
:INPUT ACCEPT [643414:2344563772]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [616124:4267707048]
COMMIT
# Completed on Fri May  8 07:44:40 2020
panic: exit status 1

goroutine 1 [running]:
istio.io/istio/tools/istio-iptables/pkg/dependencies.(*RealDependencies).RunOrFail(0xd819c0, 0x9739b8, 0x10, 0xc00000cbc0, 0x2, 0x2)
        istio.io/istio@/tools/istio-iptables/pkg/dependencies/implementation.go:44 +0x96
istio.io/istio/tools/istio-iptables/pkg/cmd.(*IptablesConfigurator).executeIptablesRestoreCommand(0xc000109d30, 0x7faeecd9a001, 0x0, 0x0)
        istio.io/istio@/tools/istio-iptables/pkg/cmd/run.go:474 +0x3aa
istio.io/istio/tools/istio-iptables/pkg/cmd.(*IptablesConfigurator).executeCommands(0xc000109d30)
        istio.io/istio@/tools/istio-iptables/pkg/cmd/run.go:481 +0x45
istio.io/istio/tools/istio-iptables/pkg/cmd.(*IptablesConfigurator).run(0xc000109d30)
        istio.io/istio@/tools/istio-iptables/pkg/cmd/run.go:428 +0x24e2
istio.io/istio/tools/istio-iptables/pkg/cmd.glob..func1(0xd5c740, 0xc0000ee700, 0x0, 0x10)
        istio.io/istio@/tools/istio-iptables/pkg/cmd/root.go:56 +0x14e
github.com/spf13/cobra.(*Command).execute(0xd5c740, 0xc00001e130, 0x10, 0x11, 0xd5c740, 0xc00001e130)
        github.com/spf13/cobra@v0.0.5/command.go:830 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0xd5c740, 0x40574f, 0xc00009e058, 0x0)
        github.com/spf13/cobra@v0.0.5/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra@v0.0.5/command.go:864
istio.io/istio/tools/istio-iptables/pkg/cmd.Execute()
        istio.io/istio@/tools/istio-iptables/pkg/cmd/root.go:284 +0x2d
main.main()
        istio.io/istio@/tools/istio-iptables/main.go:22 +0x20

Read more
