GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国国歌"
}
Running this request, the text is tokenized as expected:
{ "tokens": [ { "token": "中华人民共和国", "start_offset": 0, "end_offset": 7, "type": "CN_WORD", "position": 0 }, { "token": "中华人民", "start_offset": 0, "end_offset": 4, "type": "CN_WORD", "position": 1 }, { "token": "中华", "start_offset": 0, "end_offset": 2, "type": "CN_WORD", "position": 2 }, { "token": "华人", "start_offset": 1, "end_offset": 3, "type": "CN_WORD", "position": 3 }, { "token": "人民共和国", "start_offset": 2, "end_offset": 7, "type": "CN_WORD", "position": 4 }, { "token": "人民", "start_offset": 2, "end_offset": 4, "type": "CN_WORD", "position": 5 }, { "token": "共和国", "start_offset": 4, "end_offset": 7, "type": "CN_WORD", "position": 6 }, { "token": "共和", "start_offset": 4, "end_offset": 6, "type": "CN_WORD", "position": 7 }, { "token": "国", "start_offset": 6, "end_offset": 7, "type": "CN_CHAR", "position": 8 }, { "token": "国歌", "start_offset": 7, "end_offset": 9, "type": "CN_WORD", "position": 9 } ] }
But as soon as I run the same request against my index, tokenization fails:
GET article/_analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国国歌"
}
{ "error": { "root_cause": [ { "type": "remote_transport_exception", "reason": "[node_252.12][10.22.252.12:9300][indices:admin/analyze[s]]" } ], "type": "illegal_argument_exception", "reason": "failed to find analyzer [ik_max_word]" }, "status": 400 }
How do I configure the analyzer for this index? I want to use ik_max_word.
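For reference, this is the kind of mapping I think I need, as a sketch: it assumes a text field named content (not part of my actual mapping), and that analysis-ik is present on every node; on pre-7.x clusters the properties block would sit under a mapping type. The // comments are Kibana Dev Tools style.

PUT article
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "ik_max_word",      // index-time analyzer
        "search_analyzer": "ik_smart"   // optional coarser analyzer for queries
      }
    }
  }
}

Since article already exists, I assume I'd have to delete and recreate it (or reindex into a new index), because a field's analyzer can't be changed in place.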