that probably won't work for all the scattered information - somebody has to visit each page to archive it. what's needed is some sort of wholesale archive that's searchable...
i'd chip in to the effort of building some API-using scraper.
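for whoever picks this up, a minimal sketch of what the scraper loop could look like, assuming a hypothetical paginated JSON API (the `fetch_page` callable and the `items` key are invented for illustration - swap in the real endpoint's shape):

```python
import json
from typing import Callable, Iterator

def scrape_all(fetch_page: Callable[[int], dict]) -> Iterator[dict]:
    """Walk a paginated API, yielding items until an empty page comes back.

    fetch_page is injected so the loop is testable without network access;
    in practice it would wrap something like requests.get(...).json().
    """
    page = 1
    while True:
        batch = fetch_page(page)          # hypothetical: one JSON page of results
        items = batch.get("items", [])
        if not items:                     # empty page = we've scraped everything
            break
        yield from items
        page += 1

def archive(items, path: str) -> int:
    """Dump scraped items to a JSON-lines file; returns the item count."""
    count = 0
    with open(path, "w", encoding="utf-8") as f:
        for item in items:
            f.write(json.dumps(item, ensure_ascii=False) + "\n")
            count += 1
    return count
```

JSON-lines keeps the dump append-friendly and easy to grep or load into anything searchable later; rate limiting and retries would still need adding for a real run.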
Probably cheaper in man hours and energy to just pay the money, save the data, and build a clone.
there are already several clones, so it's off to a good start