# information theory – Channel capacity of DMC with each transmission having different distribution

This question came up while reading about AVCs (arbitrarily varying channels) in Csiszár and Körner's book.

> **Corollary 12.3** The $$\epsilon$$-capacity of the AVC $$\{W : X \to Y\}$$ for the average probability of error equals, for every $$0 < \epsilon < 1$$, the corresponding $$\epsilon$$-capacity of the AVC $$\{\overline{\mathcal{W}}\}$$. It does not exceed the minimum of $$C(W)$$ as $$W$$ ranges over $$\overline{\mathcal{W}}$$, where $$C(W)$$ is the capacity of the DMC $$\{W\}$$.

Here, $$\overline{\mathcal{W}}$$ is the closure of the set of channels obtained by varying the state sequence $$\vec{s}$$. I am wondering how $$C(W)$$ can bound the capacity of the AVC, since the AVC has to be analyzed over blocks of code length $$n$$. What the author has written makes sense for the capacity *expression*; the operational capacity of the AVC need not equal that expression. It would make sense to me if the following could be proven true.

Consider a class $$\mathcal{W}$$ consisting of DMCs $$W$$. Each such $$W$$ is an ordinary DMC whose capacity is given by
$$\max_{P_X} I(X;Y).$$
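As a concrete sanity check on this formula (a standard example, not from the passage above): for a binary symmetric channel with crossover probability $$p$$, the uniform input distribution attains the maximum, giving the familiar
$$C(\mathrm{BSC}_p) = \max_{P_X} I(X;Y) = 1 - h(p), \qquad h(p) = -p\log_2 p - (1-p)\log_2(1-p).$$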
Suppose that for each symbol transmission, one of the DMCs $$W \in \mathcal{W}$$ is chosen by someone (possibly adversarially). What can we then claim about the capacity of this new channel? Intuitively, it feels like the worst case would be when all transmissions take place over the worst channel in $$\mathcal{W}$$ (in terms of capacity). In that case, the capacity would be the ordinary capacity of the worst DMC in $$\mathcal{W}$$, i.e.,
$$\max_{P_X} \min_{W \in \mathcal{W}} I(X;Y),$$
where the mutual information is evaluated with $$W$$ as the channel.
However, I am not sure whether this is true, and I do not know how to show that this is indeed the worst case. Can someone please shed some light on this?
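To make the question concrete, here is a small numerical sketch (the two binary channel matrices are made-up examples, not from the book) that evaluates the max-min expression above by a grid search over binary input distributions $$P_X = (q, 1-q)$$ and compares it with the individual capacities:

```python
from math import log2

def mutual_information(p_x, W):
    """I(X;Y) in bits for input distribution p_x and row-stochastic
    channel matrix W (rows: inputs, columns: outputs)."""
    p_y = [sum(p_x[x] * W[x][y] for x in range(len(p_x)))
           for y in range(len(W[0]))]
    # I(X;Y) = sum_{x,y} p(x) W(y|x) log2( W(y|x) / p(y) )
    return sum(p_x[x] * W[x][y] * log2(W[x][y] / p_y[y])
               for x in range(len(p_x))
               for y in range(len(W[0]))
               if p_x[x] > 0 and W[x][y] > 0)

# Two made-up binary channels: a BSC(0.1) and a Z-channel(0.3)
W1 = [[0.9, 0.1], [0.1, 0.9]]
W2 = [[1.0, 0.0], [0.3, 0.7]]
channels = [W1, W2]

qs = [i / 1000 for i in range(1, 1000)]   # grid over P_X = (q, 1-q)
caps = [max(mutual_information([q, 1 - q], W) for q in qs)
        for W in channels]
maxmin = max(min(mutual_information([q, 1 - q], W) for W in channels)
             for q in qs)

print("individual capacities:", [round(c, 4) for c in caps])
print("max_P min_W I(P, W): ", round(maxmin, 4))
```

Whatever the answer to the operational question, the expression itself always satisfies $$\max_P \min_W I \le \min_W \max_P I = \min_W C(W)$$, which the sketch confirms numerically for this pair of channels.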