epsilon-exhausted version space
machine-learning
Suppose we are in the modelling framework with training data $T \subseteq A \times B$, a hypothesis space $H \subseteq Func(A,B)$, and a probability distribution $\mathbb{D}$ on $A$. For some $0 \leq \epsilon \leq 0.5$, the version space $VS_H(T)$ is $\epsilon$-exhausted if and only if every $h \in VS_H(T)$ has true error
$$Error_{\mathbb{D}}(h) \leq \epsilon.$$
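As a minimal sketch of the definition, the following assumes a toy setup not in the text: a three-point domain $A$, a fixed target labelling, the uniform distribution as $\mathbb{D}$, and $H = Func(A, \{0,1\})$. It builds the version space from training data $T$ and checks whether every consistent hypothesis has true error at most $\epsilon$.

```python
import itertools

# Assumed toy setup: A = {0, 1, 2}, labels B = {0, 1},
# a fixed target labelling, and the uniform distribution D on A.
A = [0, 1, 2]
target = {0: 0, 1: 1, 2: 1}   # hypothetical true labelling
D = {a: 1 / 3 for a in A}     # uniform distribution on A

# Hypothesis space H = Func(A, B): every function A -> {0, 1}, encoded as a dict.
H = [dict(zip(A, labels)) for labels in itertools.product([0, 1], repeat=len(A))]

# Training data T: labelled examples consistent with the target.
T = [(0, target[0]), (1, target[1])]

# Version space VS_H(T): hypotheses consistent with every training example.
VS = [h for h in H if all(h[a] == b for a, b in T)]

def true_error(h):
    """Error_D(h): probability mass of the points where h disagrees with the target."""
    return sum(D[a] for a in A if h[a] != target[a])

def is_epsilon_exhausted(vs, eps):
    """VS is eps-exhausted iff every hypothesis in it has true error <= eps."""
    return all(true_error(h) <= eps for h in vs)

# T pins down the labels of 0 and 1, so the worst consistent hypothesis
# errs only on the point 2, giving true error 1/3.
print(is_epsilon_exhausted(VS, 0.5))  # True:  1/3 <= 0.5
print(is_epsilon_exhausted(VS, 0.2))  # False: 1/3 >  0.2
```

Note that $\epsilon$-exhaustion is a property of the whole version space: a single low-error hypothesis is not enough; the bound must hold for the worst consistent hypothesis.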