From TVD Mutual Information to Item Response Theory: A Clipped-Linear Alternative to Logistic Models
Imagine you’re trying to evaluate multiple AI agents on a battery of test items, but you can’t observe every agent-item pair due to computational constraints. How do you infer the missing performance data? We’ll review total variation distance (TVD) and mutual information (MI) and show how they connect to the Rasch model from item response theory (IRT) through the principle of maximum entropy. This sparse evaluation problem led me to a maximum-entropy solution that is naturally compatible with how TVD-MI works. We then validate its assumptions and show how it reduces the sampling requirements for estimating TVD-MI.
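To make the setup concrete, here is a minimal sketch of the sparse evaluation problem: a binary agent-by-item outcome matrix in which only a fraction of entries are observed. The sizes, the observation rate, and the use of a standard Rasch-style (logistic) generator are illustrative assumptions for this sketch, not the actual experimental setup discussed later.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_items = 5, 20   # illustrative sizes (assumption)
observe_frac = 0.3          # fraction of agent-item pairs we can afford to run

# Hypothetical ground truth: a Rasch-style logistic generator stands in
# for whatever process actually produces pass/fail outcomes.
ability = rng.normal(size=(n_agents, 1))      # per-agent skill
difficulty = rng.normal(size=(1, n_items))    # per-item difficulty
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
outcomes = rng.random((n_agents, n_items)) < p_correct

# Computational budget: we only observe a random subset of pairs;
# NaN marks the agent-item cells we never get to run.
observed = rng.random((n_agents, n_items)) < observe_frac
sparse = np.where(observed, outcomes, np.nan)

print(f"observed {observed.sum()} of {n_agents * n_items} agent-item pairs")
```

The inference question in the rest of the post is how to fill in (or integrate over) the NaN entries of `sparse` in a principled way, rather than running the full matrix.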