# How to refine data for LLMs?

What does it mean that the data has high quality?
It's not about the data having fewer typos or fewer wrong answers. Unless you are training a trivia bot.
The power of LLMs comes from them modelling the latent processes behind the task trajectories in the data, especially when those processes contain intelligent thought.
So, when you're generating synthetic data or refining collected data, you need to make sure the refinery's output is of higher quality than its inputs.
This means you need to:
- Add intelligence. Make the new task trajectories perform deeper syntheses, pull in more relevant knowledge, take steps further. Make more complex task performances out of simpler ones. Go through more possibilities. Go to a deeper meta-level and, e.g., validate validations. Use search over alternative solutions.
- Groom out bad data. Rank, criticize, evaluate, and either improve/fix bad data or recontextualize it (see the sketch after this list).
- Collect the new data created by the refinement processes themselves.
- Add knowledge from external sources and synthesize it with the knowledge already known. Also consider the next-level implications of all the knowledge already acquired.
- Apply skills to knowledge to produce new knowledge and new skills.
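
For concreteness, here is a minimal sketch of such a refinement loop in Python. Everything in it is an assumption made for illustration: the `Trajectory` shape, the `call_llm` placeholder, the judging prompt, and the 7.0 keep-threshold are hypothetical, not a reference implementation.

```python
# A minimal sketch of a refinement loop over task trajectories.
# All names here (Trajectory, call_llm, the prompts, the 7.0 threshold)
# are illustrative assumptions, not a reference implementation.

from dataclasses import dataclass, field


@dataclass
class Trajectory:
    task: str
    steps: list[str]
    score: float = 0.0
    notes: list[str] = field(default_factory=list)


def call_llm(prompt: str) -> str:
    """Placeholder for whatever model endpoint you actually use."""
    raise NotImplementedError


def judge(traj: Trajectory) -> float:
    """Groom: rank/criticize a trajectory on a 0-10 scale."""
    reply = call_llm(
        "Rate the reasoning quality of this solution from 0 to 10, "
        "reply with a number only.\n"
        f"Task: {traj.task}\nSteps: {traj.steps}"
    )
    try:
        return float(reply.strip())
    except ValueError:
        return 0.0


def deepen(traj: Trajectory) -> Trajectory:
    """Add intelligence: push the trajectory further (more synthesis,
    more validation, more alternatives considered)."""
    improved = call_llm(
        "Rewrite this solution: validate each step, pull in relevant "
        "background knowledge, and consider at least one alternative "
        "approach before concluding.\n"
        f"Task: {traj.task}\nSteps: {traj.steps}"
    )
    return Trajectory(task=traj.task, steps=improved.splitlines(),
                      notes=traj.notes + ["deepened"])


def refine(corpus: list[Trajectory], keep_threshold: float = 7.0) -> list[Trajectory]:
    refined: list[Trajectory] = []
    for traj in corpus:
        traj.score = judge(traj)
        if traj.score >= keep_threshold:
            refined.append(traj)      # already good: keep as-is
            continue
        better = deepen(traj)         # try to raise quality, not just filter
        better.score = judge(better)
        if better.score > traj.score:
            refined.append(better)
        # Collect the refinement process itself as new data: the
        # (original, improved) pair is a trajectory worth training on too.
        refined.append(Trajectory(
            task=f"Improve this solution: {traj.task}",
            steps=traj.steps + ["---"] + better.steps,
            score=better.score,
            notes=["refinement trace"],
        ))
    return refined
```

The trace-collection step at the end is the point of the third bullet above: the refinement process itself produces new trajectories, not just a cleaner copy of the old ones.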
LLMs are data-defined. Data isn't a static thing; it needs to be looked at philosophically.