
Kafka Download and Installation

September 17, 2022

A well-organized summary of Kafka knowledge points, really well written.

Getting started:

1. Download Kafka from the official website.

2. Follow the official quickstart (extract, start, send and receive messages).

3. Things to note:

  1. Kafka ships with a bundled ZooKeeper; start ZooKeeper first, then start Kafka (see the command sketch after this list).

  2. Kafka versions differ considerably, and commands vary between versions.

  3. To start the broker in the background: bin/kafka-server-start.sh -daemon ./config/server.properties

  4. If ZooKeeper runs on another server, point Kafka at that server's IP.

  5. Set the listener to the local machine's IP, e.g. listeners=PLAINTEXT://192.168.1.103:9092

  6. Reading the version number: for kafka_2.11-1.0.0, 2.11 is the Scala version and 1.0.0 is the Kafka version.

4. Using your own ZooKeeper server:

  a. Install and start ZooKeeper on another server.

  b. Edit the Kafka configuration file config/server.properties and point it at the ZooKeeper server's IP address:

    zookeeper.connect=192.168.1.110:2181

  c. Save the file and start Kafka.
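
For reference, a minimal command sketch for the start-up order and the external-ZooKeeper case described above. It assumes the scripts bundled with a 2.x-era Kafka distribution and uses the example paths and IP from this article:

  # Start the bundled ZooKeeper first, then Kafka (foreground):
  bin/zookeeper-server-start.sh config/zookeeper.properties
  bin/kafka-server-start.sh config/server.properties

  # Or run both as background daemons:
  bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
  bin/kafka-server-start.sh -daemon config/server.properties

  # With an external ZooKeeper, skip the zookeeper-server-start step and set
  # zookeeper.connect=192.168.1.110:2181 in config/server.properties before starting Kafka.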

 


 

Kafka Configuration Explained

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# Broker ID; every broker must have a unique integer value.
broker.id=1

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
# listeners is the main configuration item used to define the Kafka broker's listeners.
listeners=PLAINTEXT://172.16.3.177:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# If neither is set, the resolved value ends up as localhost:9092, and other machines will not be able to connect.
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
# By default every listener uses the security protocol of the same name; PLAINTEXT means unencrypted plain text.
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

# Kafka stores all of its data in log files, so the log settings are effectively the data settings.
############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/home/logs/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=10

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=2

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
# (168 hours: data kept longer than 7 days is deleted)
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=172.16.3.177:12181,172.16.3.178:12181,172.16.3.179:12181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
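
As a quick sanity check of a configuration like the one above, one option is to create and describe a test topic. A sketch assuming a Kafka release whose kafka-topics.sh still accepts the --zookeeper flag (roughly the 1.x/2.x era; newer releases use --bootstrap-server with a broker address instead); the topic name config-check is just an example:

  # Create a test topic (with the --zookeeper form, partition count and replication factor must be given explicitly;
  # num.partitions in server.properties only applies to topics Kafka auto-creates).
  # --replication-factor 2 assumes at least two brokers are running; use 1 for a single-broker test.
  bin/kafka-topics.sh --create --zookeeper 172.16.3.177:12181 --partitions 10 --replication-factor 2 --topic config-check

  # Describe it to confirm the partition count, replication factor, and partition leaders
  bin/kafka-topics.sh --describe --zookeeper 172.16.3.177:12181 --topic config-check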

  


 

Kafka Cluster Setup

  Download Kafka on each server, then edit the configuration file:

    broker.id=0  (unique broker number, similar in spirit to ZooKeeper's myid)

    listeners=PLAINTEXT://192.168.1.103:9092   (listener address, set to the local machine's IP)

    zookeeper.connect=192.168.1.110:2181    (ZooKeeper connection string)

  With these three settings in place, start the brokers and the cluster is up.
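
One way to verify the cluster is to check which broker IDs have registered in ZooKeeper and then create a replicated topic. A sketch assuming three brokers, the example addresses above, and a release where kafka-topics.sh still accepts --zookeeper; the topic name cluster-check is just an example:

  # Each running broker registers its broker.id as an ephemeral node under /brokers/ids
  # (if your version does not accept the command inline, run zookeeper-shell.sh interactively and type: ls /brokers/ids)
  bin/zookeeper-shell.sh 192.168.1.110:2181 ls /brokers/ids

  # Create a topic replicated across the brokers, then check that leaders are spread over different broker IDs
  bin/kafka-topics.sh --create --zookeeper 192.168.1.110:2181 --partitions 3 --replication-factor 3 --topic cluster-check
  bin/kafka-topics.sh --describe --zookeeper 192.168.1.110:2181 --topic cluster-check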


Author: 现世安稳
Source: https://www.cnblogs.com/hero123/p/13835590.html
