Druid Time-Series Database Upgrade Procedure
The cluster currently runs Druid 0.11.0. The new 0.12.1 release adds support for Druid SQL and Redis and brings performance improvements, so we will upgrade the cluster from 0.11.0 to 0.12.1. The steps below describe the upgrade in detail; follow them strictly in order to avoid unpredictable problems.
1. Druid upgrade packages
Download druid-0.12.1-bin.tar.gz and mysql-metadata-storage-0.12.1.tar.gz from the official Druid site.
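For reference, the downloads can be scripted. The URLs below follow the static.druid.io release-artifact pattern used for Druid releases of this era; verify them against the official download page before running.

```shell
# Fetch both release archives (URLs assumed from the usual
# static.druid.io release-artifact layout; confirm on druid.io).
wget http://static.druid.io/artifacts/releases/druid-0.12.1-bin.tar.gz
wget http://static.druid.io/artifacts/releases/mysql-metadata-storage-0.12.1.tar.gz
```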
2. Configure Druid-0.12.1
- Extract druid-0.12.1-bin.tar.gz
[work@druid]$ tar -zxvf druid-0.12.1-bin.tar.gz
[work@druid]$ rm -rf druid-0.12.1-bin.tar.gz
- Extract mysql-metadata-storage-0.12.1.tar.gz into the extensions directory
[work@druid]$ tar -zxvf mysql-metadata-storage-0.12.1.tar.gz -C druid-0.12.1/extensions/
[work@druid]$ rm -rf mysql-metadata-storage-0.12.1.tar.gz
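A quick sanity check that the MySQL metadata-storage extension landed next to the bundled extensions (path taken from the commands above):

```shell
# Should print "mysql-metadata-storage" if the -C extraction succeeded.
ls druid-0.12.1/extensions/ | grep mysql-metadata-storage
```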
3. Configure common.runtime.properties
[work@druid druid-0.12.1]$ cd conf/druid/_common
[work@druid _common]$ vi common.runtime.properties
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: http://druid.io/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-kafka-eight", "druid-hdfs-storage", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "mysql-metadata-storage"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=172.16.XXX.XXX:2181
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=localhost
#druid.metadata.storage.connector.port=1527

# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://172.16.XXX.XXX:3306/druid
druid.metadata.storage.connector.user=root
druid.metadata.storage.connector.password=123456

# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["io.druid.java.util.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info

# Storage type of double columns
# omitting this will lead to index double as float at the storage layer
druid.indexing.doubleStorage=double
4. Copy the Hadoop configuration files
[work@druid _common]$ cp core-site.xml /alidata/server/druid-0.12.1/conf/druid/_common/
[work@druid _common]$ cp hdfs-site.xml /alidata/server/druid-0.12.1/conf/druid/_common/
[work@druid _common]$ cp mapred-site.xml /alidata/server/druid-0.12.1/conf/druid/_common/
[work@druid _common]$ cp yarn-site.xml /alidata/server/druid-0.12.1/conf/druid/_common/
5. Enable Druid SQL
[work@druid broker]$ vi runtime.properties
druid.service=druid/broker
druid.host=172.16.XXX.XXX
druid.port=8082

# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=9

# Processing threads and buffers
druid.processing.buffer.sizeBytes=256000000
druid.processing.numThreads=2

# Query cache (we use a small local cache)
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true
druid.cache.type=local
druid.cache.sizeInBytes=2000000000

# Enable Druid SQL over HTTP and Avatica
druid.sql.enable=true
druid.sql.avatica.enable=true
druid.sql.http.enable=true
Note: add the new property druid.host=ipAddress to runtime.properties under each of the broker, overlord, coordinator, historical, and middleManager directories.
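Once the Broker is running on 0.12.1, Druid SQL can be smoke-tested against its HTTP endpoint. The address and port below are taken from the Broker config above; INFORMATION_SCHEMA exists on every cluster, so no datasource name needs to be assumed.

```shell
# POST a trivial SQL query to the Broker's SQL endpoint; a JSON list of
# table names indicates druid.sql.http.enable took effect.
curl -X POST http://172.16.XXX.XXX:8082/druid/v2/sql/ \
  -H 'Content-Type: application/json' \
  -d '{"query":"SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES"}'
```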
6. Update the MiddleManager task capacity
[work@druid middleManager]$ vi runtime.properties
druid.service=druid/middleManager
druid.host=172.16.XXX.XXX
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=20

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task

# HTTP server threads
druid.server.http.numThreads=9

# Processing threads and buffers on Peons
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=256000000
druid.indexer.fork.property.druid.processing.numThreads=2

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.3"]
7. Upgrade Historical
[work@druid druid-0.12.1]$ nohup java `cat conf/druid/historical/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/historical:lib/*" io.druid.cli.Main server historical >> nohuphistorical.out 2>&1 &
8. Upgrade Overlord
[work@druid druid-0.12.1]$ nohup java `cat conf/druid/overlord/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/overlord:lib/*" io.druid.cli.Main server overlord >> logs/nohupoverlord.out 2>&1 &
9. Upgrade MiddleManager
- Stop the Overlord from assigning new tasks to the target MiddleManager (POST request)
http://<MiddleManager_IP:PORT>/druid/worker/v1/disable
- List the tasks still running on that MiddleManager, and wait until the list is empty before restarting the process (GET request)
http://<MiddleManager_IP:PORT>/druid/worker/v1/tasks
- Start the MiddleManager
[work@druid druid-0.12.1]$ nohup java `cat conf/druid/middleManager/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/middleManager:lib/*" io.druid.cli.Main server middleManager >> logs/nohupmiddleManager.out 2>&1 &
- Re-enable task assignment from the Overlord to this MiddleManager (POST request)
http://<MiddleManager_IP:PORT>/druid/worker/v1/enable
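The disable/drain/enable cycle above can be driven with curl. The host variable below is a placeholder; substitute the real MiddleManager address and port (8091 in the config above).

```shell
# Placeholder MiddleManager base URL; replace with your node's address.
MM=http://172.16.XXX.XXX:8091

# 1. Stop the Overlord from assigning new tasks to this worker.
curl -X POST "$MM/druid/worker/v1/disable"

# 2. Poll until the running-task list is empty before restarting.
curl "$MM/druid/worker/v1/tasks"

# 3. After restarting the MiddleManager on 0.12.1, re-enable it.
curl -X POST "$MM/druid/worker/v1/enable"
```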
10. Upgrade Broker
[work@druid druid-0.12.1]$ nohup java `cat conf/druid/broker/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/broker:lib/*" io.druid.cli.Main server broker >> nohupbroker.out 2>&1 &
11. Upgrade Coordinator
[work@druid druid-0.12.1]$ nohup java `cat conf/druid/coordinator/jvm.config | xargs` -cp "conf/druid/_common:conf/druid/coordinator:lib/*" io.druid.cli.Main server coordinator >> nohupcoordinator.out 2>&1 &
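After each service restarts, its /status endpoint can confirm the running version (the example uses the Broker address and port from the config above; check every node the same way):

```shell
# The JSON response includes a "version" field, which should now
# read 0.12.1 on an upgraded node.
curl http://172.16.XXX.XXX:8082/status
```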
This completes the upgrade of Druid from 0.11.0 to 0.12.1.
Author: 影魂的漫漫人生路
Source: https://www.cnblogs.com/yinghun/p/9265195.html