I would read in the BCC corpus frequency list as a dictionary. Then, having concatenated all the news/magazine articles into plain text, I would build a dictionary of all the words in the articles up to 8 characters long, counting their number of occurrences with the help of the BCC frequency list (which tells us which combinations ...
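Something like the following minimal sketch is what I have in mind (the file name and the tab-separated "word<TAB>frequency" format of the BCC list are assumptions on my part, and the window scan simply counts every matching substring rather than doing proper segmentation):

```python
from collections import Counter

def load_bcc_list(path="bcc_freq.txt"):
    """Read the BCC frequency list into a dict (assumed format: word<TAB>count per line)."""
    bcc = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) == 2:
                word, freq = parts
                bcc[word] = int(freq)
    return bcc

def count_words(text, bcc, max_len=8):
    """Slide a window over the concatenated article text and count every
    substring of up to max_len characters that the BCC list knows as a word."""
    counts = Counter()
    for i in range(len(text)):
        for length in range(1, max_len + 1):
            candidate = text[i:i + length]
            if candidate in bcc:
                counts[candidate] += 1
    return counts
```

A proper segmenter would pick only one reading per position, so the overlapping matches counted here are just a first approximation.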
Word frequency list based on a 15 billion character corpus: BCC (BLCU ...
I guess in my case, I could go with per-corpus flashcard sets to keep the per-corpus tagging, and one user dictionary (without tags) with all the per-corpus ranking info included in one entry per term.
The BCC corpus seems to have pretty loose licensing terms. Pleco already seems to be using frequency data to sort the search results. Adding frequency data to the dictionary definitions themselves would be even better, I believe; that is something printed dictionaries can't do.
The Beijing Language and Culture University created a balanced corpus of 15 billion characters. It's based on news (人民日报 1946-2018, 人民日报海外版 2000-2018), literature (books by 472 authors, including a significant portion of non-Chinese writers), non-fiction books, blog and Weibo entries, as well as...
With a small corpus of 650 articles from People's Daily, downloaded using a Python script, I hope to start providing a more modern frequency list of media-related vocabulary. The frequency list has the following features: it uses all sections of the 人民日报 / People's Daily newspaper, including the sports section.
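Roughly, the counting step looks like this (a sketch only: it assumes the downloaded articles sit as plain-text .txt files in a folder, and it uses the jieba segmenter for word splitting, which may well differ from what the actual script does):

```python
from collections import Counter
from pathlib import Path

import jieba  # third-party segmenter; an assumption, any segmenter would do

def build_frequency_list(article_dir="renmin_ribao_articles"):
    """Segment every downloaded article and count word frequencies."""
    counts = Counter()
    for path in Path(article_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        # keep only tokens that contain at least one Chinese character
        words = [w for w in jieba.cut(text)
                 if any("\u4e00" <= ch <= "\u9fff" for ch in w)]
        counts.update(words)
    return counts

if __name__ == "__main__":
    freq = build_frequency_list()
    for word, count in freq.most_common(50):
        print(word, count)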
PyCantonese comes with one built-in corpus, the Hong Kong Cantonese Corpus (HKCanCor). For corpora other than HKCanCor, PyCantonese provides the function read_chat() to read in Cantonese data in the CHAT format. Someone with more skills than I have could try searching for 裏 in other corpora with this Python approach and see what the results look like.
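For anyone who wants to try, a minimal sketch (assuming a recent PyCantonese where hkcancor() and words() are available; the CHAT file path at the end is just a placeholder):

```python
import pycantonese

# Load the built-in Hong Kong Cantonese Corpus.
corpus = pycantonese.hkcancor()

# Count how many word tokens in HKCanCor contain the character 裏.
hits = [w for w in corpus.words() if "裏" in w]
print(len(hits), "tokens contain 裏")

# For corpora other than HKCanCor, read CHAT-format data the same way:
# other_corpus = pycantonese.read_chat("path/to/your/corpus.cha")
# hits = [w for w in other_corpus.words() if "裏" in w]
```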