This scenario covers using Kafka ACLs when your client applications are outside the Istio mesh, but in the same Kubernetes cluster as your Kafka cluster. In this scenario, the client applications must present a client certificate to authenticate themselves.

Using Kafka ACLs when your client applications are outside the Istio mesh

Prerequisites

To use Kafka ACLs with Istio mTLS, you need:

  • A Kubernetes cluster (version 1.15 or later) with:
    • at least 8 vCPU and 12 GB of memory, and
    • the capability to provision LoadBalancer Kubernetes services.
  • A Kafka cluster.

Steps

This procedure uses cert-manager to issue client certificates to represent the client application.

  1. Enable ACLs and configure an external listener using Supertubes. Complete the following steps.
    1. Verify that your deployed Kafka cluster is up and running:

      supertubes cluster get --namespace <namespace-of-your-cluster> --kafka-cluster <name-of-your-kafka-cluster> --kubeconfig <path-to-kubeconfig-file>

      Expected output:

      Namespace  Name   State           Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRunning  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
    2. Enable ACLs and configure an external listener. The deployed Kafka cluster has no ACLs, and external access is disabled by default. Enable them by applying the following changes:

      supertubes cluster update --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file> -f -<<EOF
      apiVersion: kafka.banzaicloud.io/v1beta1
      kind: KafkaCluster
      spec:
        ingressController: "istioingress"
        istioIngressConfig:
          gatewayConfig:
            mode: PASSTHROUGH
        readOnlyConfig: |
          authorizer.class.name=kafka.security.authorizer.AclAuthorizer
          allow.everyone.if.no.acl.found=false
        listenersConfig:
          externalListeners:
            - type: "plaintext"
              name: "external"
              externalStartingPort: 19090
              containerPort: 9094
      EOF
    3. The update in the previous step triggers a rolling upgrade of the Kafka cluster to apply the new configuration. Verify that this is reflected in the state of the cluster:

      supertubes cluster get --namespace kafka --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file>

      Expected output:

      Namespace  Name   State                    Image                               Alerts  Cruise Control Topic Status  Rolling Upgrade Errors  Rolling Upgrade Last Success
      kafka      kafka  ClusterRollingUpgrading  banzaicloud/kafka:2.13-2.5.0-bzc.1  0       CruiseControlTopicReady      0
    4. Wait until the reconfiguration is finished and the cluster is in the ClusterRunning state. This can take a while, as the rolling upgrade applies changes on a broker-by-broker basis.
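Waiting for the ClusterRunning state can be scripted as a simple polling loop. The following is a minimal sketch; the wait_for_running helper name and the 10-second poll interval are our choices, not part of the supertubes CLI:

```shell
# Illustrative helper (not part of the supertubes CLI): run the given
# `supertubes cluster get ...` command repeatedly until the State column
# (third whitespace-separated field, after the header row) reports
# ClusterRunning.
wait_for_running() {
  while :; do
    state=$("$@" | awk 'NR > 1 { print $3 }')
    [ "$state" = "ClusterRunning" ] && break
    echo "cluster state: $state - waiting..." >&2
    sleep 10
  done
}

# Usage sketch:
#   wait_for_running supertubes cluster get --namespace kafka \
#     --kafka-cluster kafka --kubeconfig <path-to-kubeconfig-file>
```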

  2. Install and configure cert-manager for the client. Complete the following steps.
    1. Connect to the cluster where your client application is running.

    2. Install cert-manager on the cluster. The cert-manager application will issue the client certificates for the client applications. If you already have cert-manager installed and configured on the cluster, skip this step.

      kubectl apply -f <cert-manager-installation-manifest-url>
    3. Specify a cluster issuer for cert-manager that uses the same CA or root certificate as the Istio mesh. Otherwise, the application's client certificate won't be valid for the mTLS connections enforced by Istio.

      1. Get the CA certificate used by Istio:

        kubectl get secrets -n istio-system istio-ca-secret -o yaml

        This secret stores the CA certificate and key under different field names than cert-manager expects (cert-manager looks for tls.crt and tls.key).

      2. Create a new secret from this in a format that works for cert-manager, copying the certificate and key values from the Istio CA secret into the tls.crt and tls.key fields:

        kubectl create -f - <<EOF
        apiVersion: v1
        kind: Secret
        metadata:
          name: ca-key-pair
          namespace: cert-manager
        data:
          tls.crt: <tls-crt-from-istio-ca-secret>
          tls.key: <your-tls-key-from-istio-ca-secret>
        EOF

        Then create a ClusterIssuer that references this secret:

        kubectl create -f - <<EOF
        apiVersion: cert-manager.io/v1
        kind: ClusterIssuer
        metadata:
          name: ca-issuer
          namespace: cert-manager
        spec:
          ca:
            secretName: ca-key-pair
        EOF
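Before wiring the new secret into an issuer, it can be worth sanity-checking that the certificate and key you copied out of istio-ca-secret actually belong together. A hedged sketch using openssl (the cert_key_match helper is illustrative, not a cert-manager or Istio command):

```shell
# Illustrative check: verify that a PEM certificate and private key form a
# matching pair by extracting and comparing their public keys with openssl.
cert_key_match() {
  cert_pub=$(openssl x509 -in "$1" -pubkey -noout 2>/dev/null) || return 1
  key_pub=$(openssl pkey -in "$2" -pubout 2>/dev/null) || return 1
  [ -n "$cert_pub" ] && [ "$cert_pub" = "$key_pub" ]
}

# Usage sketch, after base64-decoding the secret fields to local files:
#   cert_key_match ca.crt ca.key && echo "pair matches"
```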
    4. Create a Kafka user that the client application will use to identify itself. Grant this user access to the topics it needs.

      kubectl create -f - <<EOF
      apiVersion: kafka.banzaicloud.io/v1alpha1
      kind: KafkaUser
      metadata:
        name: external-kafkauser
        namespace: default
      spec:
        clusterRef:
          name: kafka
          namespace: kafka
        secretName: external-kafkauser-secret
        pkiBackendSpec:
          pkiBackend: "cert-manager"
          issuerRef:
            name: "ca-issuer"
            kind: "ClusterIssuer"
        topicGrants:
          - topicName: example-topic
            accessType: read
          - topicName: example-topic
            accessType: write
      EOF
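Once the KafkaUser is reconciled, the client certificate material is written to the referenced secret (external-kafkauser-secret) as base64-encoded data fields. A small sketch for pulling a single field out of a `kubectl get secret -o json` dump; the secret_field helper name and the python3 dependency are our assumptions, not part of kubectl:

```shell
# Illustrative helper (not a kubectl subcommand): decode one data field of a
# Kubernetes Secret JSON dump to stdout, using python3 for JSON parsing and
# base64 decoding.
secret_field() {
  python3 -c '
import sys, json, base64
data = json.load(sys.stdin)["data"]
sys.stdout.write(base64.b64decode(data[sys.argv[1]]).decode())
' "$1"
}

# Usage sketch:
#   kubectl get secret external-kafkauser-secret -o json | secret_field tls.crt > tls.crt
```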
  3. (Optional) Deploy your client application and test that the configuration is working properly. The following steps use the kafkacat application as a sample client application.
    1. Deploy the kafkacat client application into the default namespace, which is outside the Istio mesh.

      kubectl create -f - <<EOF
      apiVersion: v1
      kind: Pod
      metadata:
        name: external-kafka-client
        namespace: default
      spec:
        containers:
          - name: external-kafka-client
            image: solsson/kafkacat:alpine
            # Just spin & wait forever
            command: [ "/bin/bash", "-c", "--" ]
            args: [ "while true; do sleep 3000; done;" ]
            volumeMounts:
              - name: sslcerts
                mountPath: "/ssl/certs"
        volumes:
          - name: sslcerts
            secret:
              secretName: external-kafkauser-secret
      EOF
    2. From a shell inside the client pod, list the topics, providing the certificate that represents the previously created external-kafkauser Kafka user. (Without a valid client certificate, Istio automatically rejects the client application.)

      kafkacat -L -b kafka-all-broker.kafka:29092 -X security.protocol=SSL -X ssl.key.location=/ssl/certs/tls.key -X ssl.certificate.location=/ssl/certs/tls.crt -X ssl.ca.location=/ssl/certs/ca.crt

      Expected output:

      Metadata for all topics (from broker -1: ssl://kafka-all-broker.kafka:29092/bootstrap):
        2 brokers:
          broker 0 at kafka-0.kafka.svc.cluster.local:29092 (controller)
          broker 1 at kafka-1.kafka.svc.cluster.local:29092
        1 topics:
        topic "example-topic" with 3 partitions:
          partition 0, leader 0, replicas: 0,1, isrs: 0,1
          partition 1, leader 1, replicas: 1,0, isrs: 0,1
          partition 2, leader 0, replicas: 0,1, isrs: 0,1

      The client application should be able to connect to the Kafka broker and access the topics you have granted it access to.
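Beyond listing metadata, you can also exercise the read and write grants created for the external-kafkauser user. A hedged sketch, reusing the broker address, certificate paths, and example-topic name from the steps above; grouping the TLS flags into an SSL_OPTS variable is just a convenience, not a kafkacat feature:

```shell
# Collect the kafkacat TLS options for the mounted client certificate in one
# variable, so the produce and consume commands stay readable.
SSL_OPTS="-X security.protocol=SSL \
-X ssl.key.location=/ssl/certs/tls.key \
-X ssl.certificate.location=/ssl/certs/tls.crt \
-X ssl.ca.location=/ssl/certs/ca.crt"

# Produce one message (exercises the write grant), then read it back
# (exercises the read grant). Run these inside the client pod:
#   echo "test message" | kafkacat -P -b kafka-all-broker.kafka:29092 $SSL_OPTS -t example-topic
#   kafkacat -C -b kafka-all-broker.kafka:29092 $SSL_OPTS -t example-topic -c 1 -e
```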