Installing Elasticsearch
1. Deploying a single-node ES
1.1. Create a network
We will also deploy a kibana container, so the es and kibana containers need to talk to each other. First create a network:
docker network create es-net
1.2. Load the image
We use the elasticsearch 7.12.1 image. It is very large, close to 1 GB, so pulling it yourself is not recommended.
The course materials provide a tar archive of the image:
Upload it to the virtual machine and load it with:
# load the image
docker load -i es.tar
The kibana tar archive needs the same treatment.
1.3. Run
Run the docker command to deploy a single-node es:
docker run -d \
--name es \
-e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
-e "discovery.type=single-node" \
-v es-data:/usr/share/elasticsearch/data \
-v es-plugins:/usr/share/elasticsearch/plugins \
--privileged \
--network es-net \
-p 9200:9200 \
-p 9300:9300 \
elasticsearch:7.12.1
Explanation of the options:
-e "cluster.name=es-docker-cluster": the cluster name
-e "http.host=0.0.0.0": the listen address, allowing external access
-e "ES_JAVA_OPTS=-Xms512m -Xmx512m": the heap size
-e "discovery.type=single-node": single-node (non-cluster) mode
-v es-data:/usr/share/elasticsearch/data: named volume bound to the es data directory
-v es-logs:/usr/share/elasticsearch/logs: named volume bound to the es log directory
-v es-plugins:/usr/share/elasticsearch/plugins: named volume bound to the es plugin directory
--privileged: grant access to the volumes
--network es-net: join the network named es-net
-p 9200:9200: port mapping
Open http://192.168.150.101:9200 in a browser to see elasticsearch's response:
2. Deploying Kibana
Kibana provides a visual interface for elasticsearch, which makes it easier to learn.
2.1. Deploy
Run the docker command to deploy kibana:
docker run -d \
--name kibana \
-e ELASTICSEARCH_HOSTS=http://es:9200 \
--network=es-net \
-p 5601:5601 \
kibana:7.12.1
--network=es-net: join the network named es-net, the same network as elasticsearch
-e ELASTICSEARCH_HOSTS=http://es:9200: the elasticsearch address; kibana is on the same network, so the container name resolves directly
-p 5601:5601: port mapping
Kibana usually starts slowly, so wait a while. You can follow its logs with:
docker logs -f kibana
When you see the log line below, it has started successfully:
Then open http://192.168.150.101:5601 in a browser to see the result.
2.2.DevTools
Kibana provides a DevTools console:
Here you can write DSL to operate elasticsearch, with autocompletion for DSL statements.
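To verify the console works, you can try a couple of standard elasticsearch requests (these endpoints are part of the core API, not from the course material):

```json
# cluster info
GET /

# list all indices
GET /_cat/indices?v
```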
3. Installing the IK analyzer
3.1. Online install of the ik plugin (slower)
# enter the container (named es above)
docker exec -it es /bin/bash
# download and install online
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.12.1/elasticsearch-analysis-ik-7.12.1.zip
# exit
exit
# restart the container
docker restart es
3.2. Offline install of the ik plugin (recommended)
1) Find the volume directory
To install a plugin we need the location of elasticsearch's plugins directory. Since we mounted it as a named volume, look up where the volume lives:
docker volume inspect es-plugins
Result:
[
{
"CreatedAt": "2022-05-06T10:06:34+08:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/es-plugins/_data",
"Name": "es-plugins",
"Options": null,
"Scope": "local"
}
]
This tells us the plugins directory is mounted at /var/lib/docker/volumes/es-plugins/_data.
2) Unzip the analyzer package
Unzip the ik analyzer package and rename the directory to ik.
3) Upload it to the es container's plugin volume
That is, to /var/lib/docker/volumes/es-plugins/_data:
4) Restart the container
# restart the container
docker restart es
# follow the es logs
docker logs -f es
Difference between ik_smart and ik_max_word:
ik_smart splits at a coarser granularity, ik_max_word at a finer one: for 程序员, ik_smart may produce just 程序员, while ik_max_word produces 程序员 and 程序. ik_max_word uses a bit more memory, but searches can be a bit faster.
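The difference is easy to see with the _analyze API (the same endpoint used below); run both requests and compare the token lists:

```json
POST /_analyze
{
  "text": "程序员",
  "analyzer": "ik_smart"
}

POST /_analyze
{
  "text": "程序员",
  "analyzer": "ik_max_word"
}
```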
CRUD operations
All the GET, POST and DELETE verbs are uppercase!
Query all documents
GET _search
{
"query": {
"match_all": {}
}
}
Analyzer test
#analyze with ik_smart / ik_max_word
POST /_analyze
{
"text":"姚耕航太帅了吧,爱了",
"analyzer":"ik_smart"
}
Create an index
#type is the field type; analyzer selects the analyzer; index defaults to true, meaning the field is indexed (and analyzed when documents are added)
PUT /ygh
{
"mappings": {
"properties": {
"info":{
"type": "text",
"analyzer":"ik_smart"
},
"email":{
"type": "keyword",
"index": false
},
"name":{
"type": "object",
"properties": {
"firstName":{
"type":"keyword"
},
"lastName":{
"type":"keyword"
}
}
}
}
}
}
View the index
GET /ygh
Add a new field
PUT /ygh/_mapping
{
"properties":{
"age":{
"type":"integer"
}
}
}
Error example
#These requests fail: the type of an existing field cannot be changed. The only option is to delete the index and recreate it.
PUT /ygh/_mapping
{
"properties":{
"age":{
"type":"text"
}
}
}
POST /ygh/update/_mapping
{
"properties":{
"age":{
"type":"text"
}
}
}
Delete the index
DELETE /ygh
Insert a document
POST /ygh/_doc/1
{
"info":"姚耕航好帅",
"email":"791842566@qq.com",
"name":{
"firstName":"航",
"lastName":"姚"
}
}
Get a document by id
GET /ygh/_doc/1
Delete a document
DELETE /ygh/_doc/1
Full update
#A full update deletes the old document and adds a new one; if the id does not exist, the document is simply created (PUT followed by /index/_doc/id)
PUT /ygh/_doc/1
{
"info":"姚耕航gege好帅",
"email":"791842566@qq.com",
"name":{
"firstName":"航",
"lastName":"姚"
}
}
Partial update of document fields
Use POST, with _update after the index name:
POST /ygh/_update/1
{
"doc":{
"info":"姚耕航ge哥哥好帅"
}
}
Integrating ES with Spring Boot
1. Add the dependency:
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-high-level-client</artifactId>
<version>7.12.1</version>
</dependency>
After importing, check the version managed by the Spring Boot parent; some parents only manage the default 7.6.2. We need to pin 7.12.1 manually.
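One common way to pin the client version is to override the elasticsearch.version property in the pom, which is the property name used by spring-boot-dependencies:

```xml
<properties>
    <!-- override the version managed by the Spring Boot parent -->
    <elasticsearch.version>7.12.1</elasticsearch.version>
</properties>
```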
Basic ES operations in Java
Client creation, the operations, and teardown:
@BeforeEach
void setUp() throws Exception {
this.client = new RestHighLevelClient(RestClient.builder(
HttpHost.create("http://139.9.49.251:9200")));
}
@AfterEach
void dele() throws IOException {
this.client.close();
}
@Test
void tesdel() throws IOException {
DeleteIndexRequest request = new DeleteIndexRequest("hotel");
AcknowledgedResponse deleteIndexResponse = client.indices().delete(request, RequestOptions.DEFAULT);
if (deleteIndexResponse.isAcknowledged()) {
System.out.println("Index hotel deleted successfully");
} else {
System.out.println("Failed to delete index hotel");
}
}
@Test
void createIndex() throws IOException {
CreateIndexRequest request = new CreateIndexRequest("hotel");
request.source(MAPPING_TEMP, XContentType.JSON);
//create the index
client.indices().create(request, RequestOptions.DEFAULT);
System.out.println(client);
}
@Test
void testHotel() throws IOException {
//check whether the index exists
GetIndexRequest request = new GetIndexRequest("hotel");
boolean exists = client.indices().exists(request, RequestOptions.DEFAULT);
if (exists) {
System.out.println("Index hotel exists");
} else {
System.out.println("Index hotel does not exist");
}
}
@Test
void getDateById() throws IOException {
GetRequest getRequest = new GetRequest("hotel", "45845"); // build a Get request for index "hotel", document id "45845"
GetResponse response = client.get(getRequest, RequestOptions.DEFAULT); // execute the Get request
String json = response.getSourceAsString(); // extract the _source from the response as a string
System.out.println(json); // print it
}
// partial update of a document
@Test
void getDateupdate() throws IOException {
UpdateRequest updateRequest = new UpdateRequest("hotel", "45845"); // update request for index "hotel", document id "45845"
updateRequest.doc("age", 25, "name", "ygh"); // the fields to update, as key/value pairs
UpdateResponse update = client.update(updateRequest, RequestOptions.DEFAULT); // execute the update
GetResult result = update.getGetResult(); // the update result, if needed
}
// delete a document by id
@Test
void getDateDelete() throws IOException {
DeleteRequest deleteRequest = new DeleteRequest("hotel", "45845"); // delete request for index "hotel", document id "45845"
DeleteResponse delete = client.delete(deleteRequest, RequestOptions.DEFAULT); // execute the delete
}
// delete all documents (delete-by-query matching everything)
@Test
void getDateDeletse() throws IOException {
String indexName = "hotel"; // the index name
DeleteByQueryRequest request = new DeleteByQueryRequest(indexName); // delete-by-query request for that index
request.setQuery(QueryBuilders.matchAllQuery()); // match all documents
request.setRefresh(true); // refresh the index so the deletion is visible immediately
BulkByScrollResponse bulkByScrollResponse = client.deleteByQuery(request, RequestOptions.DEFAULT); // execute the request
System.out.println(bulkByScrollResponse.getDeleted()); // print how many documents were deleted
}
// bulk insert
@Test
void bukerFeer() throws IOException {
BulkRequest bulkRequest = new BulkRequest(); // create a bulk request
List<Hotel> list = service.list(); // fetch the Hotel list from the service
list.stream().forEach(hotel -> { // iterate over the hotels
HotelDoc hotelDoc = new HotelDoc(hotel); // convert the Hotel entity into the HotelDoc document model
bulkRequest.add(new IndexRequest("hotel") // index request for the "hotel" index
.id(hotelDoc.getId().toString()) // document id = the hotelDoc id as a string
.source(JSON.toJSONString(hotelDoc), XContentType.JSON)); // source = hotelDoc serialized as JSON
});
client.bulk(bulkRequest, RequestOptions.DEFAULT); // execute the bulk request
}
Single-field search
#search on a single field
GET /hotel/_search
{
"query":{
"match": {
"all":"外滩如家"
}
}
}
Differences between match, multi_match and term:
match analyzes the search text and queries a single field. Highlighting works directly when you query that same field; when you query the all field but highlight another field, require_field_match must be set to false.
multi_match also analyzes the text and queries several fields at once, but it is not recommended; a copy_to field with a plain match is usually more efficient.
term is an exact match with no analysis, for values such as personal or place names that should not be split.
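For reference, copy_to is declared in the mapping; a minimal sketch (index and field names here are illustrative, not from the course data):

```json
PUT /example
{
  "mappings": {
    "properties": {
      "brand": { "type": "keyword", "copy_to": "all" },
      "name":  { "type": "text", "copy_to": "all" },
      "all":   { "type": "text" }
    }
  }
}
```

A single match query on all then covers both source fields.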
Multi-field search. Not recommended: performance is worse; copy_to is faster.
GET /hotel/_search
{
"query":{
"multi_match": {
"query": "上海酒店",
"fields":["name","brand"]
}
}
}
Exact query: term
GET /hotel/_search
{
"query":{
"term": {
"city": {
"value": "北京"
}
}
}
}
Range query; the trailing e in gte/lte means equals
GET /hotel/_search
{
"query":{
"range": {
"price": {
"gte": "100",
"lte":"300"
}
}
}
}
Geo bounding-box query: documents inside the rectangle defined by two corner coordinates
GET /hotel/_search
{
"query": {
"geo_bounding_box": {
"location": {
"top_left":{
"lat":31.1,
"lon":121.5
},
"bottom_right":{
"lat":"30.9",
"lon":"121.7"
}
}
}
}
}
Geo distance query: documents within a given distance of a center point
GET /hotel/_search
{
"query": {
"geo_distance": {
"distance":"2km",
"location":"31.21,121.5"
}
}
}
Changing the weight (score) of query results
GET /hotel/_search
{
"query": {
"function_score": {
"query": {
"match": {
"all": "外滩"
}
}
, "functions": [
{
"filter": {
"term": {
"brand": "如家"
}
},
"weight":10
}
],
"boost_mode": "sum"
}
}
}
bool compound query
GET /hotel/_search
{
"query": {
"bool": {
"must": [
{"term":{"brand":"如家"}}
],
"must_not": [
{"range": {
"price": {
"gte": 400
}
}}
],
"filter": [
{"geo_distance": {
"distance":"10km",
"location":"31.21,121.5"
}
}
]
}
}
}
Sorting
Sort conditions can be written in a simplified form. These two location forms are equivalent:
"location": {
"lat": 31.034661,
"lon": 121.612282
},
"location": "31.034661,121.612282",
and so are these two price forms:
"price": {
"order": "desc"
},
"price":"desc"
GET /hotel/_search
{
"query": {
"bool": {
"must": [
{"term":{"brand":"如家"}}
],
"must_not": [
{"range": {
"price": {
"gte": 400
}
}}
],
"filter": [
{"geo_distance": {
"distance":"10km",
"location":"31.21,121.5"
}
}
]
}
}
, "sort": [
{
"price": {
"order": "desc"
}
}
]
}
#sort by price, then by score
GET /hotel/_search
{
"query": {
"match_all": {
}
}
, "sort": [
{
"price": {
"order": "desc"
},
"score":"asc"
}
]
}
#find the documents nearest to my location
GET /hotel/_search
{
"query": {
"match_all": {
}
}
, "sort": [
{
"_geo_distance": {
"location": {
"lat": 31.034661,
"lon": 121.612282
},
"order": "asc"
}
}
]
}
#Once you sort like this, relevance scoring is meaningless, so no _score value is computed
#the coordinates can also be written in this shortened form
GET /hotel/_search
{
"query": {
"match_all": {
}
}
, "sort": [
{
"_geo_distance": {
"location":
"31.034661,121.612282",
"order": "asc"
}
}
]
}
Pagination:
from + size supports random page access; the full result is computed and the requested slice is cut out
search_after takes the sort values of the last hit of the previous page and uses them to fetch the next page
scroll keeps the result set in memory, which is very memory-hungry
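A minimal sketch of search_after against the hotel index (assuming the previous page was sorted by price and its last hit had price 300; a real query would usually add a tie-breaker sort field):

```json
GET /hotel/_search
{
  "size": 10,
  "sort": [ { "price": "asc" } ],
  "search_after": [ 300 ]
}
```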
#from and size correspond to the two arguments of SQL's LIMIT
GET /hotel/_search
{
"query": {
"match_all": {
}
}
, "sort": [
{
"_geo_distance": {
"location":
"31.034661,121.612282",
"order": "asc"
}
}
],
"from": 0,
"size": 20
}
Highlighting:
#when the match field and the highlight field are the same, highlighting works directly
GET /hotel/_search
{
"query": {
"match": {
"name": "如家"
}
},
"highlight": {
"fields": {
"name": {}
}
}
}
#when the match field and the highlight field differ, add "require_field_match": "false" under the field
GET /hotel/_search
{
"query": {
"match": {
"all": "如家"
}
},
"highlight": {
"fields": {
"name": {
"require_field_match": "false"
}
}
}
}
//match query
@Test
void selectMatch() throws IOException {
//search
SearchRequest request = new SearchRequest("hotel");
request.source().query(QueryBuilders.matchQuery("all","如家"));
SearchResponse search = client.search(request, RequestOptions.DEFAULT);
for (SearchHit hit : search.getHits()) {
String json = hit.getSourceAsString();
System.out.println(json);
}
}
@Test
void selectRange() throws IOException {
//query with two conditions; note that calling query() twice overwrites
//the first condition, so combine them in a bool query
SearchRequest request = new SearchRequest("hotel");
request.source().query(QueryBuilders.boolQuery()
.must(QueryBuilders.termQuery("city","杭州"))
.filter(QueryBuilders.rangeQuery("price").gte("200").lte("500")));
SearchResponse search = client.search(request, RequestOptions.DEFAULT);
for (SearchHit hit : search.getHits()) {
String json = hit.getSourceAsString();
System.out.println(json);
}
}
//bool query with sorting and paging
@Test
void selectBool() throws IOException {
//bool query
SearchRequest request = new SearchRequest("hotel");
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
boolQueryBuilder.must(QueryBuilders.termQuery("all","北京"));
boolQueryBuilder.filter(QueryBuilders.rangeQuery("price").lte(250));
request.source().query(boolQueryBuilder).from(0).size(10).
sort("price", SortOrder.DESC).highlighter(new HighlightBuilder().field("name").requireFieldMatch(false));
SearchResponse search = client.search(request, RequestOptions.DEFAULT);
for (SearchHit hit : search.getHits()) {
String json = hit.getSourceAsString();
System.out.println(json);
}
}
//highlight query
@Test
void selectHelgith() throws IOException {
//highlight query
SearchRequest request = new SearchRequest("hotel");
request.source().query(QueryBuilders.matchQuery("name","如家")).from(0).size(10).
sort("price", SortOrder.DESC).highlighter(new HighlightBuilder().field("name").requireFieldMatch(false));
SearchResponse search = client.search(request, RequestOptions.DEFAULT);
for (SearchHit hit : search.getHits()) {
Map<String, HighlightField> highlightFields = hit.getHighlightFields();
boolean b = highlightFields.size() > 0;
if (b){
HighlightField highlightField = highlightFields.get("name");
if (highlightField!=null){
String name = highlightField.getFragments()[0].toString();
System.out.println(name);
}
}
}
}
Complex query scenarios
Implemented in code.
The client is injected directly at startup.
Method that builds the query conditions:
public BoolQueryBuilder getAllParms(RequestParams requestParams,SearchRequest request){
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
if (requestParams.getKey()!=null&&!requestParams.getKey().isEmpty()){
boolQueryBuilder.must(QueryBuilders.matchQuery("all",requestParams.getKey()));
} else {
boolQueryBuilder.must(QueryBuilders.matchAllQuery());
}
if (requestParams.getBrand()!=null&&!requestParams.getBrand().isEmpty()){
boolQueryBuilder.filter(QueryBuilders.termQuery("brand",requestParams.getBrand()));
}
if (requestParams.getCity()!=null&&!requestParams.getCity().isEmpty()){
boolQueryBuilder.filter(QueryBuilders.termQuery("city",requestParams.getCity()));
}
if (requestParams.getStarName()!=null&&!requestParams.getStarName().isEmpty()){
boolQueryBuilder.filter(QueryBuilders.termQuery("starName.keyword",requestParams.getStarName()));
}
if (requestParams.getMaxPrice()!=null){
boolQueryBuilder.filter(QueryBuilders.rangeQuery("price").lte(requestParams.getMaxPrice()));
}
if (requestParams.getMinPrice()!=null){
boolQueryBuilder.filter(QueryBuilders.rangeQuery("price").gte(requestParams.getMinPrice()));
}
if (requestParams.getLocation()!=null&&!requestParams.getLocation().isEmpty()){
request.source().sort(
SortBuilders.geoDistanceSort("location",
new GeoPoint(requestParams.getLocation()))
.order(SortOrder.ASC)
.unit(DistanceUnit.KILOMETERS)
);
}
return boolQueryBuilder;
}
The controller endpoint:
@PostMapping("/list")
public PageResult getFilter(@RequestBody RequestParams requestParams) throws IOException {
//search
SearchRequest request = new SearchRequest("hotel");
BoolQueryBuilder boolQueryBuilder = getAllParms(requestParams, request);
//function_score query: boost advertised hotels
FunctionScoreQueryBuilder queryBuilder = QueryBuilders.functionScoreQuery(boolQueryBuilder,
new FunctionScoreQueryBuilder.FilterFunctionBuilder[]{
new FunctionScoreQueryBuilder.FilterFunctionBuilder(
QueryBuilders.termQuery("isAD", true),
//multiply the score by a weight of 10
ScoreFunctionBuilders.weightFactorFunction(10)
)
});
request.source().query(queryBuilder);
Integer page=requestParams.getPage();
Integer size=requestParams.getSize();
request.source().from((page-1)*size).size(size);
SearchResponse search = client.search(request, RequestOptions.DEFAULT);
return getPageResult(search);
}
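The from/size arithmetic in the handler above, shown in isolation (purely illustrative):

```java
public class PageCalc {
    // translate a 1-based page number into the from offset used above
    static int from(int page, int size) {
        return (page - 1) * size;
    }

    public static void main(String[] args) {
        System.out.println(from(1, 10)); // page 1 starts at offset 0
        System.out.println(from(3, 10)); // page 3 starts at offset 20
    }
}
```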
How are ads marked?
Add an isAD field in es to the documents that should be advertised, then write a weighted query:
FunctionScoreQueryBuilder queryBuilder = QueryBuilders.functionScoreQuery(boolQueryBuilder,
new FunctionScoreQueryBuilder.FilterFunctionBuilder[]{
new FunctionScoreQueryBuilder.FilterFunctionBuilder(
QueryBuilders.termQuery("isAD", true),
//multiply the score by a weight of 10
ScoreFunctionBuilders.weightFactorFunction(10)
)
});
The results are then returned to the front end; the rest is front-end work.
Querying nearby data
Just add a sort to the request, ordering by geographic distance:
request.source().sort(
SortBuilders.geoDistanceSort("location",
new GeoPoint(requestParams.getLocation()))
.order(SortOrder.ASC)
.unit(DistanceUnit.KILOMETERS)
);
Aggregation queries
Syntax
#aggregations are enabled with aggs; juhe is just a custom name; a query clause can restrict the scope of the aggregation
GET /hotel/_search
{
"size": 0,
"aggs": {
"juhe": {
"terms": {
"field": "starName",
"order": {
"_count": "desc"
},
"size": 10
}
}
}
}
#nested aggregation, ordered by average score descending
GET /hotel/_search
{
"size": 0,
"aggs": {
"juhe": {
"terms": {
"field": "brand",
"order": {
"qiantaojuhe.avg": "desc"
},
"size": 10
},
"aggs": {
"qiantaojuhe": {
"stats": {
"field": "score"
}
}
}
}
}
}
Code implementation:
public void getMap(String terms,SearchRequest request,Map <String ,List<String>> map,String name,Integer num,String names) throws IOException {
List<String> list = new ArrayList<>();
getRequest(request,terms,name,num);
SearchResponse search = client.search(request, RequestOptions.DEFAULT);
Aggregations aggregations = search.getAggregations();
Terms brandTerms = aggregations.get(terms);
List<? extends Terms.Bucket> buckets = brandTerms.getBuckets();
for (Terms.Bucket bucket : buckets) {
list.add(bucket.getKeyAsString());
}
map.put(names,list);
}
public Map<String, List<String>> filterSelect(RequestParams requestParams) throws IOException {
//aggregation query
SearchRequest request = new SearchRequest("hotel");
request.source().size(0);
Map <String ,List<String>> map=new HashMap<>();
Integer num=100;
BoolQueryBuilder boolQueryBuilder = getAllParms(requestParams, request);
request.source().query(boolQueryBuilder);
getMap("cityAggs",request,map,"city",num,"city");
getMap("starAggs",request,map,"starName.keyword",num,"starName");
getMap("brandAggs",request,map,"brand",num,"brand");
return map;
}
Pinyin autocomplete:
First install the pinyin analyzer in es.
The idea is that at index time each Chinese term is additionally expanded into pinyin, so that at query time the data can also be found by pinyin.
Note
The pinyin analyzer has one problem: searches will also match homophones. We can use
, "analyzer": "my_analyzer",
"search_analyzer": "ik_smart"
so that querying and indexing take different analysis paths, which solves the problem. The DSL:
PUT /test
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "ik_max_word",
"filter": "py"
}
},
"filter": {
"py": {
"type": "pinyin",
"keep_full_pinyin": false,
"keep_joined_full_pinyin": true,
"keep_original": true,
"limit_first_letter_length": 16,
"remove_duplicated_term": true,
"none_chinese_pinyin_tokenize": false
}
}
}
},
"mappings": {
"properties": {
"name":{
"type": "text"
, "analyzer": "my_analyzer",
"search_analyzer": "ik_smart"
}
}
}
}
Recreate the index
The hotel data index. completion_analyzer uses the keyword tokenizer, so the input is not split into words at query time:
PUT /hotel
{
"settings": {
"analysis": {
"analyzer": {
"text_anlyzer": {
"tokenizer": "ik_max_word",
"filter": "py"
},
"completion_analyzer": {
"tokenizer": "keyword",
"filter": "py"
}
},
"filter": {
"py": {
"type": "pinyin",
"keep_full_pinyin": false,
"keep_joined_full_pinyin": true,
"keep_original": true,
"limit_first_letter_length": 16,
"remove_duplicated_term": true,
"none_chinese_pinyin_tokenize": false
}
}
}
},
"mappings": {
"properties": {
"id":{
"type": "keyword"
},
"name":{
"type": "text",
"analyzer": "text_anlyzer",
"search_analyzer": "ik_smart",
"copy_to": "all"
},
"address":{
"type": "keyword",
"index": false
},
"price":{
"type": "integer"
},
"score":{
"type": "integer"
},
"brand":{
"type": "keyword",
"copy_to": "all"
},
"city":{
"type": "keyword"
},
"starName":{
"type": "keyword"
},
"business":{
"type": "keyword",
"copy_to": "all"
},
"location":{
"type": "geo_point"
},
"pic":{
"type": "keyword",
"index": false
},
"all":{
"type": "text",
"analyzer": "text_anlyzer",
"search_analyzer": "ik_smart"
},
"suggestion":{
"type": "completion",
"analyzer": "completion_analyzer"
}
}
}
}
The back-end entity class also needs a suggestion field.
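A minimal sketch of such an entity (the brand/business fields and the way suggestion is filled are assumptions for illustration; the real class has more fields):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: the document entity carries a suggestion list
// built from the fields worth autocompleting.
public class HotelDoc {
    private String brand;
    private String business;
    private List<String> suggestion;

    public HotelDoc(String brand, String business) {
        this.brand = brand;
        this.business = business;
        this.suggestion = Arrays.asList(brand, business);
    }

    public List<String> getSuggestion() { return suggestion; }

    public static void main(String[] args) {
        HotelDoc doc = new HotelDoc("如家", "外滩");
        System.out.println(doc.getSuggestion().size());
    }
}
```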
How is the query implemented on the back end?
//autocomplete
@Test
void selectBuquan() throws IOException {
//completion query
SearchRequest request = new SearchRequest("hotel");
SuggestBuilder suggestBuilder = new SuggestBuilder();
//skipDuplicates skips duplicate suggestions
suggestBuilder.addSuggestion("suggestions",
SuggestBuilders.completionSuggestion("suggestion")
.prefix("r")
.skipDuplicates(true)
.size(10)
);
request.source().suggest(suggestBuilder);
SearchResponse search = client.search(request, RequestOptions.DEFAULT);
Suggest suggest = search.getSuggest();
CompletionSuggestion suggestions = suggest.getSuggestion("suggestions");
for (CompletionSuggestion.Entry.Option option : suggestions.getOptions()) {
System.out.println(option.getText().toString());
}
}
Leave a comment if you want the materials and I'll send them to you.