You can perform secure multi-party computation with a deck of playing cards.
This is not a metaphor. Card-based cryptographic protocols use physical shuffles to hide information — if Alice and Bob each place a card face-down, and a verifiable shuffle mixes them, neither party can determine the other's input while both can learn the computation's output. The protocols are simple enough to execute by hand, yet they implement the same logical operations as digital cryptographic circuits.
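To make this concrete, here is a small simulation modeled on the well-known four-card XOR protocol of Mizuki and Sone: each party commits a bit as an ordered pair of face-down cards, a random bisection cut (swapping the two halves of the sequence with probability 1/2) blinds both commitments with the same random bit, and opening one pair reveals only a uniformly random value while determining the output. The function and variable names here are illustrative, and real executions keep the output as a face-down commitment rather than decoding it.

```python
import random

# Assumed encoding convention: clubs-hearts = 0, hearts-clubs = 1.
ZERO, ONE = ('C', 'H'), ('H', 'C')

def commit(bit):
    """Encode a bit as an ordered pair of face-down cards."""
    return list(ONE if bit else ZERO)

def decode(pair):
    """Read a pair of cards as a bit (here: opening the commitment)."""
    return 0 if tuple(pair) == ZERO else 1

def xor_protocol(a, b, rng=random):
    """Sketch of a four-card XOR via a random bisection cut."""
    x, y = commit(a), commit(b)
    seq = [x[0], y[0], x[1], y[1]]      # interleave the two commitments
    if rng.random() < 0.5:              # random bisection cut: swap halves
        seq = seq[2:] + seq[:2]
    pair1 = [seq[0], seq[2]]            # de-interleave back into pairs
    pair2 = [seq[1], seq[3]]
    v = decode(pair1)                   # open pair1: shows a XOR r, uniform
    if v == 1:                          # if it shows 1, negate the other pair
        pair2.reverse()
    return decode(pair2)                # pair2 now encodes a XOR b

# Correctness check over all inputs and both shuffle outcomes.
for a in (0, 1):
    for b in (0, 1):
        assert all(xor_protocol(a, b) == a ^ b for _ in range(50))
```

The opened pair shows `a XOR r` for a fresh random bit `r`, so it is uniformly distributed regardless of Alice's input; that is where the hiding comes from.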
The computational power of these protocols depends on which shuffles are available. A random cut (rotating the deck by a random offset) is easy to implement physically but weak in computational terms. A uniform random permutation is powerful but hard to implement fairly — how do you convince both parties that the shuffle was truly random? Between these extremes lies a hierarchy of shuffle operations, classified by implementation complexity.
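The gap between the two extremes is easy to see by counting reachable orderings: a random cut of an n-card deck has only n possible outcomes, while a uniform random permutation has n! of them. A minimal sketch (function names are illustrative):

```python
import random

def random_cut(deck, rng=random):
    """Rotate the deck by a uniformly random offset: only len(deck) outcomes."""
    k = rng.randrange(len(deck))
    return deck[k:] + deck[:k]

def uniform_shuffle(deck, rng=random):
    """A uniform random permutation: factorial(len(deck)) possible outcomes."""
    deck = list(deck)
    rng.shuffle(deck)
    return deck

deck = list(range(4))
cuts = {tuple(random_cut(deck)) for _ in range(1000)}
perms = {tuple(uniform_shuffle(deck)) for _ in range(5000)}
print(len(cuts))   # at most 4 orderings reachable by a cut
print(len(perms))  # up to 24 = 4! for the uniform shuffle
```

The sampling counts are large enough that, with overwhelming probability, every reachable ordering actually appears in each set.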
The key finding is a set of separation results: certain shuffles at higher levels of the hierarchy provably cannot be constructed from any composition of lower-level shuffles. The hierarchy is therefore strict: adding a more complex shuffle genuinely expands the set of computable functions, not just the convenience of computing them.
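A toy illustration of why composition cannot bridge some of these gaps: a random cut is a uniformly random cyclic rotation, and rotations compose to rotations, so no number of stacked random cuts ever produces an ordering outside the n rotations, let alone a uniform permutation over all n! orderings. (Real separation proofs reason about distributions over permutations; this sketch only demonstrates the closure property.)

```python
import random

def random_cut(deck, rng=random):
    """A random cut: rotate by a uniformly random offset."""
    k = rng.randrange(len(deck))
    return deck[k:] + deck[:k]

deck = list(range(5))
reachable = set()
for _ in range(2000):
    d = list(deck)
    for _ in range(10):          # compose ten random cuts in a row
        d = random_cut(d)
    reachable.add(tuple(d))
print(len(reachable))  # still only 5: cuts compose to cuts, never 5! = 120
```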
The practical implications are clear. When designing a card-based protocol, you can now measure its complexity not just by how many cards or rounds it requires, but by the minimum shuffle level needed. A protocol requiring only random cuts is cheaper to implement fairly than one requiring pile-scramble shuffles. The complexity metric captures something that card count alone misses: the trust assumptions embedded in the physical operations.