Problems with using nginx as a proxy to forward Kafka requests

嗯٩(•̤̀ᵕ•̤́๑)ᵒᵏ posted on: 2019-10-22   Last updated: 2019-10-22

On-site environment: machine A is the producer, machine B (IP 10.2.21.33) runs Nginx to forward requests, and cluster C is a Kafka cluster deployed across 5 machines.

Machine A can reach machine B but cannot reach cluster C; machine B can reach cluster C.

I set up Nginx following the question at https://www.orchome.com/1393, but ran into some problems. The configuration is below.

Nginx configuration:

stream {
    server {
        listen 9092;
        proxy_pass kafka;
    }

    upstream kafka {
        server kafka01:9092 weight=1;
        server kafka02:9092 weight=1;
        server kafka03:9092 weight=1;
        server kafka04:9092 weight=1;
        server kafka05:9092 weight=1;
    }
}

Client hosts file:

10.2.21.33  kafka01
10.2.21.33  kafka02
10.2.21.33  kafka03
10.2.21.33  kafka04
10.2.21.33  kafka05

Hosts file on the Nginx server:

10.2.5.1  kafka01
10.2.5.9  kafka02
10.2.5.10  kafka03
10.2.5.11  kafka04
10.2.5.72  kafka05

Without Kerberos this setup works fine, but once Kerberos is enabled it stops working and throws the following error:

org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism GSSAPI
Exception in thread "Thread-0" java.lang.NullPointerException
    at producer.ProducerStarter$1.onCompletion(ProducerStarter.java:86)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:920)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:846)
    at producer.ProducerStarter.run(ProducerStarter.java:79)
    at java.lang.Thread.run(Thread.java:745)
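
For reference, here is a minimal sketch of the producer-side Kerberos settings presumably in play; the keytab path, JAAS details, and topic name are assumptions for illustration, not taken from this post. Note that the NullPointerException in the stack trace comes from the user callback (ProducerStarter$1.onCompletion), so the callback should check the exception before using the record metadata. With GSSAPI, the client requests a service ticket for kafka/&lt;hostname it connects to&gt;@REALM, which matters later in this thread.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KerberosProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The broker hostnames resolve to the nginx IP 10.2.21.33 via the client hosts file above.
        props.put("bootstrap.servers",
                "kafka01:9092,kafka02:9092,kafka03:9092,kafka04:9092,kafka05:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // SASL over plaintext with the GSSAPI (Kerberos) mechanism.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        // The client asks the KDC for a ticket for <service.name>/<connected hostname>@REALM,
        // e.g. kafka/kafka01@GREE.IO when it believes it is talking to kafka01.
        props.put("sasl.kerberos.service.name", "kafka");
        // Hypothetical keytab location; the principal 260269@GREE.IO is the one seen in the KDC logs later in the thread.
        props.put("sasl.jaas.config",
                "com.sun.security.auth.module.Krb5LoginModule required "
                + "useKeyTab=true storeKey=true "
                + "keyTab=\"/etc/security/keytabs/producer.keytab\" "
                + "principal=\"260269@GREE.IO\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "test-topic" is a placeholder topic name. Check the exception first in the callback,
            // since in the failing scenario the send completes with a SaslAuthenticationException.
            producer.send(new ProducerRecord<>("test-topic", "key", "value"), (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                    return;
                }
                System.out.println("sent to partition " + metadata.partition() + " offset " + metadata.offset());
            });
            producer.flush();
        }
    }
}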

I also tested tearing the Kafka cluster down to a single node, with Kerberos enabled, and made the following changes.

Nginx configuration:

stream {
    server {
        listen 9092;
        proxy_pass kafka;
    }

    upstream kafka {
        server kafka01:9092 weight=1;
    }
}

Client hosts file:

10.2.21.33  kafka01

Hosts file on the Nginx server:

10.2.5.1  kafka01

With the configuration above everything works normally. Does anyone know what needs to change so that nginx can act as a proxy and forward Kafka requests to a Kerberos-enabled Kafka cluster?




  • Try it without going through nginx. It looks like Kerberos between your cluster nodes isn't configured successfully; have you verified that authentication works internally?

    • This is the log seen on the backend when the error occurs:

      Oct 22 15:18:54 cdh-master01 krb5kdc[5642](info): TGS_REQ (4 etypes {18 17 16 23}) 10.2.7.140: ISSUE: authtime 1571661422, etypes {rep=18 tkt=18 ses=18}, hdfs/cdh-master01@GREE.IO for HTTP/cdh-master03@GREE.IO 
      Oct 22 15:18:54 cdh-master01 krb5kdc[5642](info): TGS_REQ (4 etypes {18 17 16 23}) 10.2.7.140: ISSUE: authtime 1571661422, etypes {rep=18 tkt=18 ses=18}, hdfs/cdh-master01@GREE.IO for HTTP/cdh-master02@GREE.IO
      

      This is the log seen on the backend when things work normally (i.e., not going through the nginx proxy); both cases are on the same 5 nodes:

      Oct 22 15:13:19 cdh-master01 krb5kdc[5642](info): AS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for krbtgt/GREE.IO@GREE.IO 
      Oct 22 15:13:19 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka01@GREE.IO 
      Oct 22 15:13:20 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka03@GREE.IO 
      Oct 22 15:13:20 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka04@GREE.IO 
      Oct 22 15:13:20 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka02@GREE.IO 
      Oct 22 15:13:20 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka05@GREE.IO
      

      If I change the nginx configuration to the following:

      stream {
          server {
              listen 30000;
              proxy_pass kafka01;
          }
          upstream kafka01 {
              server kafka01:30000 weight=1;
          }
          server {
              listen 30001;
              proxy_pass kafka02;
          }
          upstream kafka02 {
              server kafka02:30001 weight=1;
          }
          server {
              listen 30002;
              proxy_pass kafka03;
          }
          upstream kafka03 {
              server kafka03:30002 weight=1;
          }
          server {
              listen 30003;
              proxy_pass kafka04;
          }
          upstream kafka04 {
              server kafka04:30003 weight=1;
          }
          server {
              listen 30004;
              proxy_pass kafka05;
          }
          upstream kafka05 {
              server kafka05:30004 weight=1;
          }
      }

      The hosts file stays unchanged:

      10.2.21.33  kafka01
      10.2.21.33  kafka02
      10.2.21.33  kafka03
      10.2.21.33  kafka04
      10.2.21.33  kafka05
      

      With this setup it works normally. I have a guess, not sure whether it makes sense: the earlier nginx configuration listened on port 9092 and sent everything arriving on 9092 to one of the 5 machines kafka01-05 at random. So it can happen that the producer requests kafka01:9092 but nginx forwards the connection to kafka02:9092; would that cause Kerberos authentication to fail?
      Although having nginx listen on 5 separate ports works around the problem, it is still rather inconvenient. Does anyone have a better solution?
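
      For what it's worth, the guess matches how Kafka's GSSAPI client works: it requests a service ticket for kafka/&lt;the hostname it dialed&gt;@REALM (sasl.kerberos.service.name plus the hostname from the connection). So if nginx hands a connection intended for kafka01:9092 to kafka02, the ticket is issued for kafka/kafka01@GREE.IO while kafka02's keytab only holds kafka/kafka02@GREE.IO, and the handshake fails. A small illustration of how the principal name is derived (illustration only, not code from this thread; the realm is taken from the KDC logs above):

      public class SpnIllustration {
          // The GSSAPI client builds the broker's service principal from
          // sasl.kerberos.service.name plus the hostname it thinks it connected to.
          static String servicePrincipal(String serviceName, String connectedHost, String realm) {
              return serviceName + "/" + connectedHost + "@" + realm;
          }

          public static void main(String[] args) {
              // The producer dials kafka01:9092, which the hosts file resolves to the nginx IP ...
              System.out.println(servicePrincipal("kafka", "kafka01", "GREE.IO")); // kafka/kafka01@GREE.IO
              // ... but a round-robin upstream may forward the TCP stream to kafka02,
              // whose keytab only contains kafka/kafka02@GREE.IO, so the handshake is rejected.
              System.out.println(servicePrincipal("kafka", "kafka02", "GREE.IO")); // kafka/kafka02@GREE.IO
          }
      }

      In other words, with GSSAPI a single load-balanced port ties the ticket to whichever hostname the client dialed, so the one-port-per-broker layout above (or giving each broker its own proxy address) is what keeps the forwarding deterministic and the principal names consistent.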