I'm on a Mac, running Elasticsearch 6.1.2. I started the first instance by running elasticsearch directly; the second instance is a copy of the first instance's folder, so both Elasticsearch instances run on the same machine. Their configs are below; nothing else was changed.
First node:
cluster.name: my-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: my-es-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.master: true
node.data: true
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /Users/cream.ice/Documents/elasticsearch/data/my-es-1
#
# Path to log files:
#
path.logs: /Users/cream.ice/Documents/elasticsearch/log/my-es-1
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1
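Side note on this value: by the formula in the comment above, two master-eligible nodes give 2 / 2 + 1 = 2, so leaving it at 1 would allow split brain once both nodes join. A fragment of what I believe it should read (my own inference from the comment, not verified on this setup):

```yaml
# Majority of master-eligible nodes: 2 / 2 + 1 = 2
discovery.zen.minimum_master_nodes: 2
```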
Second node's config file:
cluster.name: my-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: my-es-2
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.master: true
node.data: true
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /Users/cream.ice/Documents/elasticsearch/data/my-es-2
#
# Path to log files:
#
path.logs: /Users/cream.ice/Documents/elasticsearch/log/my-es-2
http.port: 9100
transport.tcp.port: 9110
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1
Can anyone spot what's misconfigured here? Why don't the two nodes form a cluster automatically?
Startup log of the first Elasticsearch instance:
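One thing I'm not sure about: as I read the 6.x discovery docs, a unicast host entry without an explicit port falls back to the node's own transport.tcp.port, which on the second node is 9110, so each node may only ever be pinging its own transport port. Spelling out both ports might be worth trying (a sketch of the change, not verified):

```yaml
# Explicit transport ports for both local nodes
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300", "127.0.0.1:9110"]
```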
[2018-02-11T19:13:25,683][INFO ][o.e.n.Node ] [my-es-1] initializing ...
[2018-02-11T19:13:25,754][INFO ][o.e.e.NodeEnvironment ] [my-es-1] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [18.3gb], net total_space [232.6gb], types [hfs]
[2018-02-11T19:13:25,755][INFO ][o.e.e.NodeEnvironment ] [my-es-1] heap size [989.8mb], compressed ordinary object pointers [true]
[2018-02-11T19:13:25,771][INFO ][o.e.n.Node ] [my-es-1] node name [my-es-1], node ID [VTx9V2wGTz-dind_M-bhWg]
[2018-02-11T19:13:25,771][INFO ][o.e.n.Node ] [my-es-1] version[6.1.2], pid[60044], build[5b1fea5/2018-01-10T02:35:59.208Z], OS[Mac OS X/10.11.6/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_51/25.51-b03]
[2018-02-11T19:13:25,771][INFO ][o.e.n.Node ] [my-es-1] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/Users/cream.ice/Documents/software/elasticsearch-6.1.2, -Des.path.conf=/Users/cream.ice/Documents/software/elasticsearch-6.1.2/config]
[2018-02-11T19:13:26,480][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [aggs-matrix-stats]
[2018-02-11T19:13:26,481][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [analysis-common]
[2018-02-11T19:13:26,481][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [ingest-common]
[2018-02-11T19:13:26,481][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [lang-expression]
[2018-02-11T19:13:26,481][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [lang-mustache]
[2018-02-11T19:13:26,481][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [lang-painless]
[2018-02-11T19:13:26,481][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [mapper-extras]
[2018-02-11T19:13:26,481][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [parent-join]
[2018-02-11T19:13:26,481][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [percolator]
[2018-02-11T19:13:26,482][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [reindex]
[2018-02-11T19:13:26,482][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [repository-url]
[2018-02-11T19:13:26,482][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [transport-netty4]
[2018-02-11T19:13:26,482][INFO ][o.e.p.PluginsService ] [my-es-1] loaded module [tribe]
[2018-02-11T19:13:26,482][INFO ][o.e.p.PluginsService ] [my-es-1] no plugins loaded
[2018-02-11T19:13:27,492][INFO ][o.e.d.DiscoveryModule ] [my-es-1] using discovery type [zen]
[2018-02-11T19:13:27,934][INFO ][o.e.n.Node ] [my-es-1] initialized
[2018-02-11T19:13:27,934][INFO ][o.e.n.Node ] [my-es-1] starting ...
[2018-02-11T19:13:28,056][INFO ][o.e.t.TransportService ] [my-es-1] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2018-02-11T19:13:31,103][INFO ][o.e.c.s.MasterService ] [my-es-1] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {my-es-1}{VTx9V2wGTz-dind_M-bhWg}{fMn31g7cTcqXDux_1uBxmQ}{127.0.0.1}{127.0.0.1:9300}
[2018-02-11T19:13:31,107][INFO ][o.e.c.s.ClusterApplierService] [my-es-1] new_master {my-es-1}{VTx9V2wGTz-dind_M-bhWg}{fMn31g7cTcqXDux_1uBxmQ}{127.0.0.1}{127.0.0.1:9300}, reason: apply cluster state (from master [master {my-es-1}{VTx9V2wGTz-dind_M-bhWg}{fMn31g7cTcqXDux_1uBxmQ}{127.0.0.1}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-02-11T19:13:31,121][INFO ][o.e.h.n.Netty4HttpServerTransport] [my-es-1] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-02-11T19:13:31,121][INFO ][o.e.n.Node ] [my-es-1] started
[2018-02-11T19:13:31,192][INFO ][o.e.g.GatewayService ] [my-es-1] recovered [1] indices into cluster_state
[2018-02-11T19:13:31,358][INFO ][o.e.c.r.a.AllocationService] [my-es-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[customer][0]] ...]).
[2018-02-11T19:14:01,136][WARN ][o.e.c.r.a.DiskThresholdMonitor] [my-es-1] high disk watermark [90%] exceeded on [VTx9V2wGTz-dind_M-bhWg][my-es-1][/Users/cream.ice/Documents/elasticsearch/data/my-es-1/nodes/0] free: 18.3gb[7.8%], shards will be relocated away from this node
Startup log of the second instance:
[2018-02-11T19:13:46,377][INFO ][o.e.n.Node ] [my-es-2] initializing ...
[2018-02-11T19:13:46,448][INFO ][o.e.e.NodeEnvironment ] [my-es-2] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [18.3gb], net total_space [232.6gb], types [hfs]
[2018-02-11T19:13:46,448][INFO ][o.e.e.NodeEnvironment ] [my-es-2] heap size [989.8mb], compressed ordinary object pointers [true]
[2018-02-11T19:13:46,451][INFO ][o.e.n.Node ] [my-es-2] node name [my-es-2], node ID [yxjWfnTwQuy9jbl2fIlT6w]
[2018-02-11T19:13:46,452][INFO ][o.e.n.Node ] [my-es-2] version[6.1.2], pid[60094], build[5b1fea5/2018-01-10T02:35:59.208Z], OS[Mac OS X/10.11.6/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_51/25.51-b03]
[2018-02-11T19:13:46,452][INFO ][o.e.n.Node ] [my-es-2] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/Users/cream.ice/Documents/software/elasticsearch-6.1.2-1, -Des.path.conf=/Users/cream.ice/Documents/software/elasticsearch-6.1.2-1/config]
[2018-02-11T19:13:47,165][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [aggs-matrix-stats]
[2018-02-11T19:13:47,166][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [analysis-common]
[2018-02-11T19:13:47,166][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [ingest-common]
[2018-02-11T19:13:47,166][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [lang-expression]
[2018-02-11T19:13:47,166][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [lang-mustache]
[2018-02-11T19:13:47,166][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [lang-painless]
[2018-02-11T19:13:47,166][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [mapper-extras]
[2018-02-11T19:13:47,166][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [parent-join]
[2018-02-11T19:13:47,166][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [percolator]
[2018-02-11T19:13:47,167][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [reindex]
[2018-02-11T19:13:47,167][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [repository-url]
[2018-02-11T19:13:47,167][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [transport-netty4]
[2018-02-11T19:13:47,167][INFO ][o.e.p.PluginsService ] [my-es-2] loaded module [tribe]
[2018-02-11T19:13:47,167][INFO ][o.e.p.PluginsService ] [my-es-2] no plugins loaded
[2018-02-11T19:13:48,162][INFO ][o.e.d.DiscoveryModule ] [my-es-2] using discovery type [zen]
[2018-02-11T19:13:48,584][INFO ][o.e.n.Node ] [my-es-2] initialized
[2018-02-11T19:13:48,584][INFO ][o.e.n.Node ] [my-es-2] starting ...
[2018-02-11T19:13:48,698][INFO ][o.e.t.TransportService ] [my-es-2] publish_address {127.0.0.1:9110}, bound_addresses {[::1]:9110}, {127.0.0.1:9110}
[2018-02-11T19:13:51,748][INFO ][o.e.c.s.MasterService ] [my-es-2] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {my-es-2}{yxjWfnTwQuy9jbl2fIlT6w}{ChfV-K7QTGKeTrKaMLmotg}{127.0.0.1}{127.0.0.1:9110}
[2018-02-11T19:13:51,752][INFO ][o.e.c.s.ClusterApplierService] [my-es-2] new_master {my-es-2}{yxjWfnTwQuy9jbl2fIlT6w}{ChfV-K7QTGKeTrKaMLmotg}{127.0.0.1}{127.0.0.1:9110}, reason: apply cluster state (from master [master {my-es-2}{yxjWfnTwQuy9jbl2fIlT6w}{ChfV-K7QTGKeTrKaMLmotg}{127.0.0.1}{127.0.0.1:9110} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-02-11T19:13:51,765][INFO ][o.e.h.n.Netty4HttpServerTransport] [my-es-2] publish_address {127.0.0.1:9100}, bound_addresses {[::1]:9100}, {127.0.0.1:9100}
[2018-02-11T19:13:51,765][INFO ][o.e.n.Node ] [my-es-2] started
[2018-02-11T19:13:51,771][INFO ][o.e.g.GatewayService ] [my-es-2] recovered [0] indices into cluster_state
[2018-02-11T19:14:21,763][WARN ][o.e.c.r.a.DiskThresholdMonitor] [my-es-2] high disk watermark [90%] exceeded on [yxjWfnTwQuy9jbl2fIlT6w][my-es-2][/Users/cream.ice/Documents/elasticsearch/data/my-es-2/nodes/0] free: 18.3gb[7.8%], shards will be relocated away from this node
Hope the logs above help with troubleshooting.
Update: I found the problem with the allocation explain API. Elasticsearch has a disk-based shard allocation mechanism: by default, once disk usage goes above 85% it stops allocating shards to the node. My laptop's disk usage was too high, so no shards were allocated. The command I used: GET /_cluster/allocation/explain
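For anyone hitting the same watermark problem on a dev box, the thresholds can be relaxed in elasticsearch.yml (the values below are my own choice for local testing, not from the original setup):

```yaml
# Disk-based shard allocation watermarks (defaults: low 85%, high 90%)
cluster.routing.allocation.disk.watermark.low: "95%"
cluster.routing.allocation.disk.watermark.high: "97%"
# Or disable the disk threshold check entirely (local development only):
# cluster.routing.allocation.disk.threshold_enabled: false
```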