Sergei Vassilvitskii
@vsergei.bsky.social
Algorithms, predictions, privacy.
https://theory.stanford.edu/~sergei/
Instead of a weak learner, we assume access to models that can perfectly model an input distribution, which we call strong learners.

But instead of i.i.d. samples, we have access only to weak information about the target distribution, i.e., weak data.
February 14, 2025 at 1:48 PM
These two assumptions are enough to prove convergence bounds! More generally, this view gives a theoretical framework that unifies existing synthetic data approaches, making it easier to reason about when they might succeed or fail.
February 14, 2025 at 1:48 PM
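To make the two assumptions concrete, here is a toy sketch of my own (not the paper's actual algorithm): the "strong learner" is a Gaussian fitter that models its input distribution exactly, and the "weak data" is a noisy one-bit comparison against the target mean. Every name and parameter here (weak_signal, the step schedule, the noise scale) is an illustrative assumption.

```python
# Toy illustration of the strong-learner / weak-data setup -- NOT the paper's algorithm.
# "Strong learner": given a model family, fits the synthetic distribution exactly
# (here: a Gaussian with known std, so fitting the mean suffices).
# "Weak data": only a coarse, noisy statistic of the target distribution
# (here: a noisy sign of the gap between the synthetic and target means).
import numpy as np

rng = np.random.default_rng(0)
target_mean, target_std = 3.0, 1.0          # unknown target distribution

def weak_signal(samples):
    """Weak data: noisy one-bit comparison of the samples' mean to the target's."""
    return np.sign(target_mean - samples.mean() + rng.normal(scale=0.5))

# Start from a poor synthetic distribution and iteratively nudge it toward the
# target using only the weak signal; the strong learner re-models the synthetic
# distribution exactly at every round.
mean_hat, step = 0.0, 1.0
for t in range(50):
    synthetic = rng.normal(mean_hat, target_std, size=1000)  # strong learner's model
    mean_hat += step * weak_signal(synthetic)                # weak-data correction
    step *= 0.9                                              # decaying step size

print(f"estimated mean after 50 rounds: {mean_hat:.2f} (target {target_mean})")
```

With a decaying step size the estimate settles near the target mean, which is the flavor of convergence the two assumptions are meant to support.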
Shameless plug on DP synthetic data: research.google/blog/protect...
Protecting users with differentially private synthetic training data
December 21, 2024 at 8:00 PM
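For context, a classic baseline for differentially private synthetic data (a sketch of the general idea, not necessarily the method in the linked post) is to release a Laplace-noised histogram of the private data and then sample synthetic records from it. Since one record changes the histogram's L1 norm by at most 1, Laplace noise with scale 1/ε gives ε-DP, and everything after the noising is post-processing.

```python
# Classic DP synthetic-data baseline: noisy histogram + sampling.
# All data and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
private = rng.integers(0, 10, size=5000)        # private categorical data over {0,...,9}
epsilon = 1.0                                   # privacy budget

counts = np.bincount(private, minlength=10).astype(float)
noisy = counts + rng.laplace(scale=1.0 / epsilon, size=10)  # sensitivity-1 histogram
probs = np.clip(noisy, 0, None)                 # post-processing: clip negatives...
probs /= probs.sum()                            # ...and renormalize

synthetic = rng.choice(10, size=5000, p=probs)  # sample the DP synthetic dataset
print("true freq:     ", np.round(counts / counts.sum(), 2))
print("synthetic freq:", np.round(np.bincount(synthetic, minlength=10) / 5000, 2))
```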