Regret-Optimal Federated Transfer Learning for Kernel Regression with Applications in American Option Pricing
We propose an optimal iterative scheme for federated transfer learning, where a central planner has access to datasets $D_1,\dots,D_N$ for the same learning model $f_\theta$. Our objective is to minimize the cumulative deviation of the generated parameters $\{\theta_i(t)\}_{t=0}^T$ across all $T$ iterations from the specialized parameters $\theta^*_1,\dots,\theta^*_N$, while respecting the loss function for the model $f_{\theta(T)}$ produced by the algorithm upon halting. We allow only continual communication between each of the specialized models (nodes/agents) and the central planner (server), at each iteration (round).
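A minimal sketch of such an objective, assuming quadratic deviation penalties and a trade-off weight $\lambda>0$ (both illustrative choices; the precise weighting is part of the derivation in the paper):

\[
\min_{\{\theta_i(t)\}_{i,t}} \;\; \sum_{t=0}^{T} \sum_{i=1}^{N} \bigl\|\theta_i(t) - \theta^*_i\bigr\|^2 \;+\; \lambda\, \mathcal{L}\bigl(f_{\theta(T)}\bigr),
\]

where $\mathcal{L}$ denotes the training loss evaluated at the model returned upon halting.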
For the case where the model $f_\theta$ is a finite-rank kernel regression, we derive explicit updates for the regret-optimal algorithm. By leveraging symmetries within the regret-optimal algorithm, we further develop a nearly regret-optimal heuristic that runs with $O(Np^2)$ fewer elementary operations, where $p$ is the dimension of the parameter space.
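A minimal Python sketch of this setting, assuming a tanh random-feature map, a ridge penalty, and plain server averaging (all illustrative assumptions; this is a stand-in for, not the paper's regret-optimal recursion):

import numpy as np

rng = np.random.default_rng(0)
p, d, N = 8, 3, 4                  # kernel rank p, input dimension d, N nodes
W = rng.standard_normal((p, d))    # fixed random weights defining the feature map

def phi(X):
    # Finite-rank feature map phi: R^d -> R^p, so K(x, y) = phi(x) @ phi(y).
    return np.tanh(X @ W.T)        # illustrative choice of nonlinearity

def specialized_params(X, y, lam=1e-2):
    # Ridge-regression solution theta_i* on one node's dataset D_i.
    F = phi(X)                                        # n_i x p design matrix
    return np.linalg.solve(F.T @ F + lam * np.eye(p), F.T @ y)

# Each node i holds its own dataset D_i = (X_i, y_i).
datasets = [(rng.standard_normal((50, d)), rng.standard_normal(50)) for _ in range(N)]
theta_star = [specialized_params(X, y) for X, y in datasets]

# Naive federated rounds: the server broadcasts theta(t), each node pulls it
# toward its specialized parameters, and the server averages the replies.
theta = np.zeros(p)
eta = 0.5                          # step size, an illustrative assumption
for t in range(20):
    replies = [theta + eta * (ts - theta) for ts in theta_star]
    theta = np.mean(replies, axis=0)

print(np.linalg.norm(theta - np.mean(theta_star, axis=0)))  # -> near 0

Here the server parameter converges geometrically to the average of the specialized parameters; the regret-optimal scheme instead weights each round's deviation explicitly.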
Additionally, we investigate the adversarial robustness of the regret-optimal algorithm, showing that an adversary who perturbs $q$ training pairs by at most $\varepsilon>0$, across all training sets, cannot reduce the regret-optimal algorithm's regret by more than $O(\varepsilon q \bar{N}^{1/2})$, where $\bar{N}$ is the aggregate number of training pairs. To validate our theoretical findings, we conduct numerical experiments in the context of American option pricing, utilizing a randomly generated finite-rank kernel.
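A minimal Python sketch of one least-squares Monte Carlo (Longstaff-Schwartz) step for an American put, with the continuation value regressed onto a randomly generated finite-rank kernel basis; the Black-Scholes dynamics, contract parameters, and tanh feature map are all illustrative assumptions, not the experimental setup used in the paper:

import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, dt = 100.0, 100.0, 0.05, 0.2, 1.0 / 50   # illustrative parameters
n_paths, p = 10_000, 8

W = rng.standard_normal((p, 1))     # random weights: a randomly generated
b = rng.standard_normal(p)          # finite-rank kernel K(x, y) = phi(x) @ phi(y)

def phi(s):
    # Finite-rank feature map on the normalized asset price.
    z = s[:, None] / K - 1.0
    return np.tanh(z @ W.T + b)

# Simulate asset prices at an exercise date t and at t + dt (Black-Scholes).
drift = (r - 0.5 * sigma**2) * dt
S_t = S0 * np.exp(drift + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
S_next = S_t * np.exp(drift + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))

# Regress the discounted value of holding onto the finite-rank kernel basis.
hold_value = np.exp(-r * dt) * np.maximum(K - S_next, 0.0)
F = phi(S_t)
theta, *_ = np.linalg.lstsq(F, hold_value, rcond=None)

# Exercise when the immediate payoff exceeds the estimated continuation value.
continuation = F @ theta
exercise = np.maximum(K - S_t, 0.0)
value_t = np.where(exercise > continuation, exercise, hold_value)
print(value_t.mean())               # Monte Carlo estimate of the value at t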