Knative Architecture: Installation and Demo


This post records the installation steps for Knative:

  • Installing the Serving component
    • Deploy the CRDs
    • Deploy the Serving core
    • Install Istio
    • Configure DNS
  • Installing the Eventing component
    • Deploy the CRDs
    • Deploy the Eventing core
    • Install the default messaging channel
    • Install the default broker
  • Building the kn CLI from source
  • Demo

Installing the Serving component

Deploy the CRDs

Run:

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.19.0/serving-crds.yaml

Deploy the Serving core

Run:

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.19.0/serving-core.yaml
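Before moving on, it is worth confirming that the Serving control plane actually came up; a quick check (not part of the original steps, just a sanity check) is:

```shell
# Wait until the Serving control-plane pods (activator, autoscaler,
# controller, webhook) all report Running in the knative-serving namespace.
kubectl get pods --namespace knative-serving
```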

Install Istio

Istio is chosen here as the networking layer; other options include Ambassador, Contour, Gloo, Kong, and Kourier. Run:

kubectl apply --filename https://github.com/knative/net-istio/releases/download/v0.19.0/release.yaml

Installing Istio takes a little longer.

Configure DNS

Get the IP of the Istio ingress gateway:

kubectl --namespace istio-system get service istio-ingressgateway
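If your cluster supports LoadBalancer services, the external IP can also be extracted directly with a jsonpath query (a small convenience, not part of the original steps):

```shell
# Extract only the external IP of the Istio ingress gateway.
# On clusters without a LoadBalancer this field will be empty.
kubectl --namespace istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```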

This setup uses magic DNS (xip.io); in production you would of course use a real DNS. Knative provides a simple Kubernetes Job, called "default domain", that configures Knative Serving to use xip.io as the default DNS suffix.

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.19.0/serving-default-domain.yaml

After installation you can see that many new pods have been created.

Installing the Eventing component

Deploy the CRDs

Run:

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.19.0/eventing-crds.yaml


Deploy the Eventing core

Run:

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.19.0/eventing-core.yaml


Install the default messaging channel

The in-memory channel is chosen here; other options include Kafka, Google Pub/Sub, and NATS. Run:

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.19.0/in-memory-channel.yaml


Install the default broker

The channel-based broker backed by the in-memory channel is chosen here; the other option is a Kafka-backed broker. Run:

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.19.0/mt-channel-broker.yaml


To customize which broker channel implementation is used, update the following ConfigMap to specify which configurations apply to which namespaces:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-br-defaults
  namespace: knative-eventing
data:
  default-br-config: |
    # This is the cluster-wide default broker channel.
    clusterDefault:
      brokerClass: MTChannelBasedBroker
      apiVersion: v1
      kind: ConfigMap
      name: imc-channel
      namespace: knative-eventing
    # This allows you to specify different defaults per-namespace,
    # in this case the "some-namespace" namespace will use the Kafka
    # channel ConfigMap by default (only for example, you will need
    # to install kafka also to make use of this).
    namespaceDefaults:
      some-namespace:
        brokerClass: MTChannelBasedBroker
        apiVersion: v1
        kind: ConfigMap
        name: kafka-channel
        namespace: knative-eventing

The referenced imc-channel and kafka-channel example ConfigMaps look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: imc-channel
  namespace: knative-eventing
data:
  channelTemplateSpec: |
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-channel
  namespace: knative-eventing
data:
  channelTemplateSpec: |
    apiVersion: messaging.knative.dev/v1alpha1
    kind: KafkaChannel
    spec:
      numPartitions: 3
      replicationFactor: 1

With that, Eventing is fully installed.
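As with Serving, a quick check confirms the Eventing install is healthy (a sanity check I have added, not an original step):

```shell
# The eventing-controller, eventing-webhook, imc-controller, imc-dispatcher
# and mt-broker-* pods should all report Running.
kubectl get pods --namespace knative-eventing
```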

Building the kn CLI from source

Clone the repository and run the build script:

git clone https://github.com/knative/client.git
cd client/
hack/build.sh -f

When the build finishes you will find the kn binary in the repository root.
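A quick sanity check of the freshly built binary, and moving it onto the PATH (the destination directory is my assumption; put it wherever suits your system):

```shell
# Verify the binary runs, then install it somewhere on PATH.
./kn version
sudo mv kn /usr/local/bin/
```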

Demo

Deploy the official sample with kn (I changed the image registry here):

# Create a Knative service with the Knative CLI kn
kn service create helloworld-go --image gcr.io/knative-samples/helloworld-go --env TARGET="Go Sample v1"

After the deployment completes, check the result. Note that I also changed the sidecar image address here.

No pod is deployed right away; it is only created after the first request (scale from zero). Invoke the service as follows:

# curl http://helloworld-go.default.34.83.80.117.xip.io
Hello World: Go Sample v1!

Watching the namespace, you can see the pod count change as the service scales up.

Next, the Eventing demo. First create the default broker:

kubectl create -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: event-example
EOF
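Note that the Broker above lives in the event-example namespace, which must exist before the manifest is applied; creating it (a step implied but not shown above) and confirming the broker becomes ready looks like this:

```shell
# Create the namespace the demo resources live in.
kubectl create namespace event-example

# The broker should eventually report READY=True and expose a URL.
kubectl --namespace event-example get broker default
```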

Then create the consumers:

kubectl -n event-example apply -f - << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-display
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: hello-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: event-display
          image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display

---

kind: Service
apiVersion: v1
metadata:
  name: hello-display
spec:
  selector:
    app: hello-display
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF
kubectl -n event-example apply -f - << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goodbye-display
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: goodbye-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: event-display
          # Source code: https://github.com/knative/eventing-contrib/tree/master/cmd/event_display
          image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display

---

kind: Service
apiVersion: v1
metadata:
  name: goodbye-display
spec:
  selector:
    app: goodbye-display
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF


Then define the triggers:

kubectl -n event-example apply -f - << EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: hello-display
spec:
  broker: default
  filter:
    attributes:
      type: greeting
  subscriber:
    ref:
     apiVersion: v1
     kind: Service
     name: hello-display
EOF

kubectl -n event-example apply -f - << EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: goodbye-display
spec:
  broker: default
  filter:
    attributes:
      source: sendoff
  subscriber:
    ref:
     apiVersion: v1
     kind: Service
     name: goodbye-display
EOF
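Once both triggers are applied, you can verify that the broker has reconciled them (my own verification step, not in the original walkthrough):

```shell
# Both triggers should show READY=True and reference the default broker.
kubectl --namespace event-example get triggers
```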


Then create a Pod from which to send test events:

kubectl -n event-example apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: curl
  name: curl
spec:
  containers:
    # This could be any image that we can SSH into and has curl.
  - image: radial/busyboxplus:curl
    imagePullPolicy: IfNotPresent
    name: curl
    resources: {}
    stdin: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true
EOF
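With the curl pod running, attach to it (`kubectl -n event-example attach curl -it`) and POST CloudEvents to the broker's ingress. The URL below follows the multi-tenant broker's addressing scheme, `broker-ingress.knative-eventing.svc.cluster.local/<namespace>/<broker>`; adjust it to whatever URL your broker reports:

```shell
# Send a "greeting" event -- it matches the hello-display trigger's type filter.
curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/event-example/default" \
  -X POST \
  -H "Ce-Id: say-hello" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: greeting" \
  -H "Ce-Source: not-sendoff" \
  -H "Content-Type: application/json" \
  -d '{"msg": "Hello Knative!"}'

# Send a "sendoff" event -- it matches the goodbye-display trigger's source filter.
curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/event-example/default" \
  -X POST \
  -H "Ce-Id: say-goodbye" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: not-greeting" \
  -H "Ce-Source: sendoff" \
  -H "Content-Type: application/json" \
  -d '{"msg": "Goodbye Knative!"}'
```

Each delivered event should then appear in the corresponding consumer's logs, e.g. `kubectl -n event-example logs -l app=hello-display --tail=50`.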


That completes the whole walkthrough; follow-up posts will dig into the architecture and source code of the Knative components.