Elasticsearch Chinese search: Analyzers and best practices
Analysis and tokenization are crucial when indexing content in Elasticsearch, especially when dealing with non-English languages. For Chinese, the process is even more complicated because of the nature of Chinese characters and the absence of spaces between words and sentences.
This article reviews several solutions for analyzing Chinese content in Elasticsearch: the default chinese analyzer, the paoding plugin, the cjk analyzer, the smartcn analyzer, and the ICU plugin. It weighs their advantages, disadvantages, and the scenarios each one suits.
Challenges of Chinese Search
Chinese characters are logograms: each one represents a word or a morpheme (the smallest meaningful unit of a language). Combined, characters can form a new word whose meaning differs from that of its parts. Another difficulty is that there are no spaces between words and sentences, which makes it hard for a computer to know where a word starts and ends.
Even considering only Mandarin (the official language of China and the most widely spoken Chinese in the world), there are tens of thousands of Chinese characters, although writing Chinese in practice requires knowing only three to four thousand of them. For example, 火山 ("volcano") is a combination of these two characters:
- 火: fire
- 山: mountain
Our tokenizer must be smart enough to avoid splitting these two characters apart, because together they mean something different than each does alone.
Another difficulty is the coexisting writing variants, shown here for the word "calligraphy":
- Simplified Chinese: 书法
- Traditional Chinese, more complex and richer: 書法
- Pinyin, the romanized form of Mandarin: shū fǎ
Chinese Analyzers in Elasticsearch
At present, Elasticsearch provides the following solutions for Chinese:
- the default chinese analyzer, based on deprecated classes from Lucene 4;
- the paoding plugin, no longer maintained but based on a very good dictionary;
- the cjk analyzer, which turns content into bigrams;
- the smartcn analyzer, an officially supported plugin;
- the ICU plugin and its tokenizer.
These analyzers vary greatly, and we will compare how well they perform with a simple test word, 手機 ("cell phone"). It is composed of two characters meaning "hand" and "machine" respectively. The character 機 also forms many other words:
- 機票: plane ticket
- 機器人: robot
- 機關槍: machine gun
- 機遇: opportunity
Our tokenizer must not split these compounds apart, because if I search for 手機 ("cell phone"), I do not want any documents about Rambo owning a machine gun.
We will test these solutions using the powerful _analyze API:
curl -XGET 'http://localhost:9200/chinese_test/_analyze?analyzer=paoding_analyzer1' -d '手機'
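For reference, a 1.x-era _analyze response looks roughly like the sketch below (offsets and position are illustrative, not taken from a real run); a well-behaved Chinese tokenizer keeps 手機 as a single token:
{
  "tokens": [
    {
      "token": "手機",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 1
    }
  ]
}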
- Default chinese analyzer: it merely splits every character into its own token, so we get two tokens, 手 and 機. Elasticsearch's standard analyzer produces exactly the same output. chinese is therefore deprecated, will soon be replaced by standard, and should be avoided.
- paoding plugin: paoding is almost an industry standard and is considered an elegant solution. Unfortunately, the plugin for Elasticsearch is not maintained, and I could only run it on version 1.0.1 after some fixes. (Installation steps omitted; see the original article.) After installation, we get a new paoding tokenizer and two collectors, max_word_len and most_word. No public analyzer is exposed by default, so we have to declare a new one; a sketch of such a declaration follows this list. (Configuration steps omitted; see the original article.) Both configurations give good results, with clean and unique tokens. The behavior is also very good on more complex sentences.
- cjk analyzer: a very simple analyzer that merely transforms any text into bigrams. 手機 is indexed as the single token 手機, which is good, but with longer words such as 元宵節 ("Lantern Festival"), two tokens are generated: 元宵 and 宵節, meaning respectively "lantern" and "Xiao Festival".
- smartcn plugin: very easy to install. (Installation steps omitted; see the original article.) It exposes a new smartcn analyzer, as well as a smartcn_tokenizer tokenizer, using Lucene's SmartChineseAnalyzer. It finds an optimal segmentation into words with a probabilistic approach, using a Hidden Markov Model and a large amount of training text. A fairly good training dictionary is therefore embedded, and our examples are tokenized correctly.
- ICU plugin: another official plugin. (Installation steps omitted; see the original article.) If you deal with any language other than English, using this plugin is recommended. It exposes an icu_tokenizer tokenizer, as well as many powerful analysis tools such as icu_normalizer, icu_folding, icu_collation, etc. It works with Chinese and Japanese dictionaries containing word-frequency information to deduce groups of characters. On 手機 everything works as expected, but on 元宵節 two tokens are produced, 元宵 and 節, because 元宵 and 節 are more common than 元宵節.
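Here is a minimal sketch of the paoding declaration referenced above, reconstructed in place of the configuration the translation omitted; the tokenizer type and collector names come from the plugin, while the index name and the analyzer names (paoding_analyzer1, paoding_analyzer2) simply echo the earlier curl example:
curl -XPUT 'http://localhost:9200/chinese_test' -d '{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "analysis": {
      "tokenizer": {
        "paoding1": { "type": "paoding", "collector": "most_word" },
        "paoding2": { "type": "paoding", "collector": "max_word_len" }
      },
      "analyzer": {
        "paoding_analyzer1": { "type": "custom", "tokenizer": "paoding1", "filter": ["standard"] },
        "paoding_analyzer2": { "type": "custom", "tokenizer": "paoding2", "filter": ["standard"] }
      }
    }
  }
}'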
Comparison of results (table omitted; see the original article)
From my point of view, paoding and smartcn give the best results. The chinese tokenizer is very bad, and icu_tokenizer is a bit disappointing on 元宵節, but it handles traditional Chinese very well.
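The omitted comparison can be reproduced by running the same _analyze call against each solution; a sketch, assuming the plugins above are installed on a 1.x cluster and the chinese_test index exists:
curl -XGET 'http://localhost:9200/chinese_test/_analyze?analyzer=chinese' -d '手機'
curl -XGET 'http://localhost:9200/chinese_test/_analyze?analyzer=cjk' -d '元宵節'
curl -XGET 'http://localhost:9200/chinese_test/_analyze?analyzer=smartcn' -d '元宵節'
curl -XGET 'http://localhost:9200/chinese_test/_analyze?tokenizer=icu_tokenizer' -d '元宵節'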
Traditional Chinese support
You may also need to handle traditional Chinese, coming either from your documents or from users' search requests. These traditional inputs need a normalization step converting them into simplified Chinese, because plugins like smartcn or paoding do not handle them correctly.
You can do this in your application, or try the elasticsearch-analysis-stconvert plugin to handle it directly inside Elasticsearch. It can convert between traditional and simplified characters in both directions. (Installation steps omitted; see the original article.)
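As a sketch of that normalization step, assuming the plugin's char filter type stconvert and its traditional-to-simplified mode convert_type t2s (per the plugin's README), chained in front of smartcn_tokenizer; the filter and analyzer names (t2s, chinese_normalized) and the index name are illustrative:
curl -XPUT 'http://localhost:9200/chinese_st_test' -d '{
  "settings": {
    "analysis": {
      "char_filter": {
        "t2s": { "type": "stconvert", "convert_type": "t2s" }
      },
      "analyzer": {
        "chinese_normalized": {
          "type": "custom",
          "char_filter": ["t2s"],
          "tokenizer": "smartcn_tokenizer"
        }
      }
    }
  }
}'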
The last solution is to use cjk: if you cannot tokenize the input correctly, you still have a good chance of catching the documents you need, and you can then improve relevancy with icu_tokenizer, which is also quite good at traditional Chinese.
Further improvements
There is no perfect one-size-fits-all solution for Elasticsearch analysis, and Chinese is no exception. You have to combine and build your own analyzers from the information you have. For example, I use both cjk and smartcn tokenization on my search fields, with a multi-field and a multi-match query, as sketched below.
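A minimal sketch of that setup, using the 1.x-era string type; the index, type, and field names (my_index, doc, content) are illustrative:
curl -XPUT 'http://localhost:9200/my_index' -d '{
  "mappings": {
    "doc": {
      "properties": {
        "content": {
          "type": "string",
          "analyzer": "cjk",
          "fields": {
            "smartcn": { "type": "string", "analyzer": "smartcn" }
          }
        }
      }
    }
  }
}'
The multi-match query then searches both sub-fields at once:
curl -XGET 'http://localhost:9200/my_index/_search' -d '{
  "query": {
    "multi_match": {
      "query": "手機",
      "fields": [ "content", "content.smartcn" ]
    }
  }
}'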
(FAQ section omitted; see the original article.)