How important (or useful) is branchless programming in bytecode or interpreted languages?

Branchless programming is often more efficient (provided the trick used to avoid the branch isn't too convoluted), at least in languages that compile to native code, because branch prediction failures can cost a fair number of wasted cycles.
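To make the question concrete, here is a minimal sketch of the kind of rewrite I mean (a hypothetical example of mine, not from any particular codebase): computing the maximum of two ints with and without a conditional jump.

```java
public class BranchlessDemo {
    // Branchy version: the CPU (or JIT-compiled code) must predict
    // which side of the conditional is taken.
    static int maxBranchy(int a, int b) {
        if (a > b) {
            return a;
        }
        return b;
    }

    // Branchless version: an arithmetic identity with no conditional jump.
    // Note: assumes (a - b) does not overflow for the inputs used.
    static int maxBranchless(int a, int b) {
        int diff = a - b;
        int mask = diff >> 31;      // 0 if a >= b, -1 (all ones) if a < b
        return a - (diff & mask);   // a when a >= b, otherwise a - (a - b) = b
    }

    public static void main(String[] args) {
        System.out.println(maxBranchy(3, 7));     // 7
        System.out.println(maxBranchless(3, 7));  // 7
    }
}
```

In native code, the branchless version trades a possibly mispredicted jump for a few always-executed arithmetic instructions; my question is whether that trade still pays off once a VM or interpreter sits in between.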

Does this extend to languages that run on bytecode (like Java) and to interpreted languages (like, say, PHP or JavaScript), where the VM and/or interpreter adds an extra layer of abstraction between the source code and the instructions actually executed? Do those runtimes introduce so much branching and complexity of their own that optimising for branchless source code achieves effectively nothing?