The literal normalized tokenizer is similar to the literal tokenizer in that it does not split the source text: all of the text is treated as a single token, regardless of how many words it contains. However, unlike the literal tokenizer, this tokenizer allows token filters to be applied. By default, the literal normalized tokenizer also lowercases the text.
CREATE INDEX search_idx ON mock_items
USING bm25 (id, (description::pdb.literal_normalized))
WITH (key_field='id');
To get a feel for this tokenizer, run the following query, replacing the text with your own:
SELECT 'Tokenize me!'::pdb.literal_normalized::text[];
Expected Response
       text
------------------
 {"tokenize me!"}
(1 row)
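Conceptually, the tokenizer's behavior can be modeled in a few lines of Python. This is a simplified sketch, not the actual implementation: it assumes only the default lowercase filter is active, and ignores any other token filters that may be configured.

```python
def literal_normalized(text: str) -> list[str]:
    # The entire input becomes a single token (no word splitting),
    # lowercased by the default lowercase filter.
    return [text.lower()]

# Mirrors the SQL example above: 'Tokenize me!' -> one lowercased token
print(literal_normalized("Tokenize me!"))  # ['tokenize me!']
```

Note that the whole string, punctuation and spaces included, survives as one token; only its case changes.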