呵呵哒

0 reputation

This user is too lazy and left nothing behind

Recent activity
  • 呵呵哒 replied to 半兽人 in "Using SASL/SCRAM authentication with Kafka":

    I think the ZooKeeper part should be added as well.

    18 days ago
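    For context on that remark: securing the broker-to-ZooKeeper link is a separate step from SASL/SCRAM between clients and brokers. A minimal sketch of what it usually involves, assuming DIGEST-MD5 between broker and ZooKeeper; the file paths and the admin/admin-secret credentials are illustrative placeholders, not values from this thread:

    ```properties
    # zookeeper_jaas.conf (ZooKeeper server side): the account brokers
    # use to authenticate to ZooKeeper (placeholder credentials)
    Server {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        user_admin="admin-secret";
    };

    # kafka_server_jaas.conf (broker side): a Client section added
    # alongside the existing KafkaServer section
    Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="admin"
        password="admin-secret";
    };

    # server.properties: have the broker set ACLs on its ZooKeeper nodes
    zookeeper.set.acl=true
    ```

    The ZooKeeper process would also need `-Djava.security.auth.login.config` pointing at its JAAS file, the same way the broker JVM does below.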
  • Liked a comment on "Problem configuring SASL_SCRAM authentication on the Kafka 2.2.1 bundled with a CDH 6.3.2 cluster"!

    20 days ago
  • 呵呵哒 replied to 半兽人 in "Problem configuring SASL_SCRAM authentication on the Kafka 2.2.1 bundled with a CDH 6.3.2 cluster":

    In practice, the config files generated by the Kafka installed with the CDH cluster are different from vanilla Kafka's in many places, and the vanilla Kafka config files simply don't work there. No idea what was modified, so I'm giving up on this and looking for another approach. Thanks for the help.

    20 days ago
  • 半兽人 replied to 呵呵哒 in "Problem configuring SASL_SCRAM authentication on the Kafka 2.2.1 bundled with a CDH 6.3.2 cluster":

    I haven't used CDH. This config file is generated by default, and its contents are simple: it just turns the options you would otherwise pass on the command line into fixed settings, so you don't have to supply all those parameters every time. You can download an official Kafka release to get it.

    The default contents of producer.properties are as follows:

    cat config/producer.properties
    
    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements.  See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License.  You may obtain a copy of the License at
    #
    #    http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # see kafka.producer.ProducerConfig for more details
    
    ############################# Producer Basics #############################
    
    # list of brokers used for bootstrapping knowledge about the rest of the cluster
    # format: host1:port1,host2:port2 ...
    bootstrap.servers=localhost:9092
    
    # specify the compression codec for all data generated: none, gzip, snappy, lz4
    compression.type=none
    
    # name of the partitioner class for partitioning events; default partition spreads data randomly
    #partitioner.class=
    
    # the maximum amount of time the client will wait for the response of a request
    #request.timeout.ms=
    
    # how long `KafkaProducer.send` and `KafkaProducer.partitionsFor` will block for
    #max.block.ms=
    
    # the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together
    #linger.ms=
    
    # the maximum size of a request in bytes
    #max.request.size=
    
    # the default batch size in bytes when batching multiple records sent to a partition
    #batch.size=
    
    # the total bytes of memory the producer can use to buffer records waiting to be sent to the server
    #buffer.memory=
    
    21 days ago
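    Note that the default file above carries no SASL settings. For SCRAM authentication, a client-side properties file would typically add lines like the following; the mechanism and the alice/alice-secret credentials are illustrative placeholders, not values from this thread:

    ```properties
    # appended to producer.properties (or kept in a separate client file)
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-256
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="alice" \
        password="alice-secret";
    ```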
  • 呵呵哒 replied to 半兽人 in "Problem configuring SASL_SCRAM authentication on the Kafka 2.2.1 bundled with a CDH 6.3.2 cluster":

    That's exactly it: I couldn't find the producer.properties file in the config directory. I looked in several places and didn't see it, which is why it seemed strange.

    21 days ago
  • 半兽人 replied to 呵呵哒 in "Problem configuring SASL_SCRAM authentication on the Kafka 2.2.1 bundled with a CDH 6.3.2 cluster":

    1. The broker_java_opts file should refer to the Java environment settings. You can append your configuration in bin/kafka-server-start.sh, for example:

    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G -Djava.security.auth.login.config=/etc/kafka/conf/kafka_server_jaas.conf"
    

    so that the JVM loads the configuration.

    2. producer.properties, in the config directory, is a config file that ships with Kafka by default. It is meant for the command-line tools, and you have to specify it explicitly, for example:

    bin/kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config config/producer.properties
    

    Reference: Kafka SASL/SCRAM in practice

    22 days ago
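  • To round out the console-producer command above: before a SCRAM client can authenticate, the credentials have to exist on the server side. A minimal sketch, assuming ZooKeeper at localhost:2181 and a placeholder user alice (names and secrets are illustrative, not from the thread; on Kafka 2.2.x, kafka-configs.sh still stores SCRAM credentials via ZooKeeper):

    ```shell
    # Create SCRAM-SHA-256 credentials for user "alice" in ZooKeeper
    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
        --add-config 'SCRAM-SHA-256=[password=alice-secret]' \
        --entity-type users --entity-name alice

    # Verify the credential was stored
    bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
        --entity-type users --entity-name alice
    ```

    After that, the producer.properties file passed via --producer.config must carry matching sasl.mechanism and sasl.jaas.config settings for the same user.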