Wi-Fi’s secret packet loss revealed

802.11 has been quietly dropping packets.

For years, Wi-Fi networks have been quietly losing packets because of a flaw in the standard, according to results from extensive network tests.

Although the problem has gone unnoticed until now, it will become more critical as voice-based applications run over Wi-Fi networks. It is unlikely to be fixable in the existing 802.11a, 802.11b and 802.11g protocols, so researchers are concentrating their efforts on the upcoming 802.11n protocol.

The basic transmission protocol used in 802.11a/b/g networks shows an “unavoidable” packet loss, according to tests carried out on behalf of Network World by wireless test specialist VeriWave and system vendor Aruba Networks, and reported in Unstrung and Wi-Fi Planet.

The 802.11 standards include techniques to spot corrupt data and ask for retransmission, a mechanism that has been assumed to be foolproof and that works well enough for most usage. “[An 802.11 network] is corruptible, lossy, but has strong instruments for re-transmission if a packet doesn’t arrive,” Eran Karoly, vice president of marketing at testing company VeriWave, told Wi-Fi Planet.
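
A toy model makes the mechanism, and its limit, easy to see. The Python sketch below is an illustration, not the real 802.11 MAC; the retry limit of 7 is an assumption, and real retry counts vary by implementation. It sends a packet, waits for an acknowledgement, retransmits on silence, and gives up once the retries are exhausted:

    import random

    RETRY_LIMIT = 7   # assumed retry cap; real 802.11 limits vary by implementation

    def send_with_retries(ack_arrives) -> bool:
        # Toy model of the 802.11 sender: transmit, wait for an ACK,
        # retransmit on silence, give up once the retry limit is exhausted.
        for _ in range(RETRY_LIMIT + 1):
            if ack_arrives():
                return True       # acknowledged: delivery confirmed
        return False              # retries exhausted: the packet is dropped

    # With a 10% chance of losing any one attempt, delivery is near-certain...
    print(send_with_retries(lambda: random.random() > 0.10))   # almost always True
    # ...but a receiver that stays deaf for the whole retry burst never answers.
    print(send_with_retries(lambda: False))                    # False: packet lost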

However, although the payload of each packet has a 32-bit cyclic redundancy check, enough to ensure that systems spot corrupt data and have it retransmitted, the error checking on the packet header is much weaker. Each packet carries a header, the physical layer convergence procedure (PLCP) header, that specifies its size and transmission rate, and this header has only a single-bit parity check.
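
The asymmetry is easy to demonstrate. In the minimal Python sketch below, the 24-bit header is made up for illustration rather than matching the real PLCP bit layout; it shows that a single-bit even-parity check passes any corruption that flips an even number of bits, while a CRC-32 of the kind protecting the payload catches the same damage:

    import zlib

    def parity_ok(bits):
        # Even parity: the count of 1 bits, parity bit included, must be even.
        return sum(bits) % 2 == 0

    # A made-up 24-bit header standing in for the PLCP size/rate fields.
    header = [1, 0, 1, 1, 0, 0, 1, 0] * 3
    frame = header + [sum(header) % 2]       # append the parity bit
    assert parity_ok(frame)

    # Flip any two bits: parity still passes, so the receiver accepts
    # a bogus size and rate without noticing.
    corrupted = list(frame)
    corrupted[3] ^= 1
    corrupted[17] ^= 1
    print(parity_ok(corrupted))              # True: corruption undetected

    # The payload's CRC-32, by contrast, catches equivalent damage.
    payload = b"a short 100-byte packet of user data"
    damaged = bytearray(payload)
    damaged[0] ^= 0b00000101                 # flip two bits in the first byte
    print(zlib.crc32(bytes(damaged)) == zlib.crc32(payload))   # False: detected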

This means that a receiving station may mistake the size and speed of an incoming packet. If it mistook a short 100-byte packet coming in at 54 Mbit/s for a much longer frame coming in at a lower bit rate, it would be “blinded” for milliseconds while it waited for the long transmission to finish. The sending station would spot the problem: receiving no acknowledgement of the packet, it would retransmit it, but it would have given up and dropped the packet by the time the receiving station stopped waiting for the phantom longer frame.
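
The scale of the stall is simple arithmetic. In the sketch below, the misread values, a 2,304-byte maximum-size frame at 1 Mbit/s, are assumptions for illustration; the exact figures depend on which header bits are corrupted:

    real_airtime = 100 * 8 / 54e6    # genuine frame: 100 bytes at 54 Mbit/s
    stall = 2304 * 8 / 1e6           # misread as a 2,304-byte frame at 1 Mbit/s

    print(f"real frame on air: {real_airtime * 1e6:.0f} microseconds")
    print(f"receiver blinded:  {stall * 1e3:.1f} milliseconds")
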
Error is small but non-zero

The error condition requires the PLCP header to be corrupted in a specific way, without altering the parity bit, and then for that corruption to trigger a condition beyond the reach of the retransmission mechanism, which adds up to a small probability. “It’s extremely small, around 0.001 percent, but it’s never zero,” VeriWave chief technology officer Tom Alexander told Unstrung. “That’s not what the protocol says; the loss should be zero.”
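
To put that probability in context, here is a rough back-of-the-envelope calculation; the 50-packets-per-second figure is an assumption, typical of a VoIP stream with 20 ms packetization:

    loss_prob = 1e-5        # "around 0.001 percent" per packet, per the tests
    pkts_per_sec = 50       # assumed VoIP stream: one packet every 20 ms
    mean_gap = 1 / (loss_prob * pkts_per_sec)     # seconds between silent drops
    print(f"about one unrecoverable loss every {mean_gap / 60:.0f} minutes")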
