The Jieba tokenizer segments Chinese text using both a dictionary and statistical models. It is generally better at resolving ambiguous Chinese word boundaries than the Chinese Lindera and Chinese compatible tokenizers, but the tradeoff is that it is slower.
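For example, to build a BM25 index that tokenizes the description column with Jieba: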
CREATE INDEX search_idx ON mock_items
USING bm25 (id, (description::pdb.jieba))
WITH (key_field='id');
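Once the index exists, full-text queries against the column can use ParadeDB's @@@ operator. A minimal sketch, assuming the mock_items table from the ParadeDB quickstart contains a row whose description mentions 你好:

SELECT id, description
FROM mock_items
WHERE description @@@ '你好';

The query string is analyzed with the same Jieba configuration as the indexed column, so a multi-character Chinese word in the query matches as a whole token rather than as individual characters.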
To get a feel for this tokenizer, run the following command and replace the text with your own:
SELECT 'Hello world! 你好!'::pdb.jieba::text[];
Expected Response
              text
--------------------------------
 {hello," ",world,!," ",你好,!}
(1 row)
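The dictionary-backed segmentation matters most on purely Chinese input, where there are no spaces to fall back on. Using Jieba's canonical example sentence (the exact boundaries depend on the bundled dictionary):

SELECT '我来到北京清华大学'::pdb.jieba::text[];

With the default dictionary this typically segments into 我 (I), 来到 (come to), 北京 (Beijing), and 清华大学 (Tsinghua University), rather than one token per character.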