The Multi-Task Optimization Controversy

The multi-task learning paradigm, that is, the ability to train a single model on multiple tasks at the same time, has been as much a blessing as a curse.

A blessing because it allows us to build a single model where previously we would have needed several. That makes life simpler: fewer models to maintain, re-train, tune, and monitor.

A curse because it opens up an entirely new Pandora’s box of questions: which tasks should be learned together? Which tasks do we really need? What happens when tasks compete with each other? How can we make the model prioritize certain tasks over others? How can we avoid ‘task rot’, that is, the accumulation of task heads over time that eventually degrades model performance?
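
To make the ‘task heads’ in that last question concrete, here is a minimal sketch of what such a model typically looks like: one shared trunk whose parameters every task updates, plus one small head per task. All module names, task names, and dimensions below are hypothetical and chosen purely for illustration.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """A minimal multi-task sketch: one shared trunk, several task-specific heads."""

    def __init__(self, input_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # Parameters shared by every task.
        self.trunk = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # One head per task. Adding tasks over time means adding heads here,
        # which is where 'task rot' can start to creep in.
        self.heads = nn.ModuleDict({
            "sentiment": nn.Linear(hidden_dim, 3),  # e.g. 3-way classification
            "toxicity": nn.Linear(hidden_dim, 2),   # e.g. binary classification
            "rating": nn.Linear(hidden_dim, 1),     # e.g. regression
        })

    def forward(self, x: torch.Tensor) -> dict[str, torch.Tensor]:
        shared = self.trunk(x)
        # Every head sees the same shared representation.
        return {task: head(shared) for task, head in self.heads.items()}
```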

It is questions like these that spawned a new subdomain of Machine Learning known as multi-task optimization, that is, the science of how to optimize a model on multiple, sometimes competing, tasks.
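
The simplest instance of multi-task optimization, commonly used as a baseline, is a fixed weighted sum of the per-task losses: the weights are the knob for prioritizing one task over another. The sketch below reuses the hypothetical task names from the previous snippet; the weights and loss functions are illustrative, not recommendations.

```python
import torch
import torch.nn as nn

# Hypothetical per-task weights: the larger the weight, the more that task's
# gradient dominates updates to the shared trunk.
task_weights = {"sentiment": 1.0, "toxicity": 2.0, "rating": 0.5}

loss_fns = {
    "sentiment": nn.CrossEntropyLoss(),
    "toxicity": nn.CrossEntropyLoss(),
    "rating": nn.MSELoss(),
}

def multi_task_loss(outputs: dict, targets: dict) -> torch.Tensor:
    """Combine per-task losses into one scalar for a single backward pass.

    Competing tasks pull the shared parameters in different directions;
    the weighted sum decides whose pull counts more.
    """
    total = 0.0
    for task, weight in task_weights.items():
        total = total + weight * loss_fns[task](outputs[task], targets[task])
    return total
```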
