multiple times?
I think yes
Of course. The decoder can be reused, but not concurrently. Multiple decoders can share a single tokenizer instance, but again, not concurrently. The API is designed to give you maximum efficiency and memory reuse.
So how are strings treated when the input is a []byte? In my HTTP parser I'm using one big buffer in which all strings are stored contiguously, so I can pre-allocate memory and reuse it easily. However, it forces me to use unsafe conversions.
If you're using the decoder, it will allocate dynamically and copy. Referring to the original source may be dangerous; if that's what you want, using the tokenizer to write a custom, more efficient, task-specific parser is the better but more time-consuming option.