Do congestion problems stem from a bit of 1987 code?
We all know that broadband providers have a tendency to blame P2P traffic for many of their woes, and much of the time this goes hand in hand with comments about illegal traffic, in the form of pirated software and copyrighted material being transmitted without permission.
There are some very good reasons why P2P traffic causes the problems it does, and anyone in a shared household will be all too aware that one person running a P2P app can easily swamp the connection; even if others try to grab some bandwidth, they find it very hard. This is largely down to the TCP stack, whose congestion control algorithms share bandwidth between individual connections rather than between users. At its simplest, this means the software that opens the most connections will get the lion's share of the available bandwidth. A blog on ZDNet titled Fixing the unfairness of TCP congestion control describes the issue at some length and proposes a potential solution.
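To see why per-connection sharing hurts the shared household, here is a minimal sketch in Python. The link capacity and flow counts are hypothetical illustrations, not measurements, and real TCP convergence is far messier than an exact equal split:

```python
# Hypothetical illustration: TCP congestion control converges towards an
# equal share of the bottleneck per *flow*, not per user, so a host
# opening many flows crowds out a host opening one.

LINK_CAPACITY_MBPS = 8.0  # example figure: an 8Mbps line shared by a household

def per_user_share(flows_per_user: dict[str, int]) -> dict[str, float]:
    """Split capacity equally across flows, then total it up per user."""
    total_flows = sum(flows_per_user.values())
    per_flow = LINK_CAPACITY_MBPS / total_flows
    return {user: flows * per_flow for user, flows in flows_per_user.items()}

# One housemate browsing (1 flow) vs one running a P2P client (50 flows)
print(per_user_share({"browser": 1, "p2p": 50}))
# -> the browser gets roughly 0.16Mbps, the P2P user roughly 7.84Mbps
```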
The Internet existed long before the web browser, and around October 1986 the network, then comprising some 30,000 computers, started to grind to a halt under what became known as congestion collapse. Van Jacobson created a patch in 1987 that resolved the problem, and it remains at the core of TCP stacks today. In the days when a computer would generally only have one TCP stream open at a time this worked well, but now that P2P, which may use 10 to 100 simultaneous connections, has become popular, the congestion control mechanism works against the average user, who by and large still runs interactive, bursty applications such as web browsing.
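The heart of Jacobson's fix is additive-increase/multiplicative-decrease (AIMD): each connection gently probes for spare capacity and backs off hard when it sees loss. A simplified sketch follows; real stacks layer slow start, retransmission timers and much more on top of this:

```python
# Simplified AIMD, the core of Jacobson's 1987 congestion control. The
# congestion window (cwnd) grows by one segment per round trip and is
# halved whenever a lost packet signals congestion.

def aimd_step(cwnd: float, packet_lost: bool) -> float:
    if packet_lost:
        return max(1.0, cwnd / 2)  # back off hard: halve the window
    return cwnd + 1.0              # probe gently: one extra segment per RTT

cwnd = 1.0
for rtt, lost in enumerate([False] * 9 + [True] + [False] * 5):
    cwnd = aimd_step(cwnd, lost)
    print(f"RTT {rtt:2d}: cwnd = {cwnd:4.1f} segments")
```

Because every connection runs this loop independently, eleven connections back off and probe eleven times over, which is exactly why the flow-hungry application wins.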
The problems in Japan, where congestion on 100Mbps connections is described as a traffic jam, will sound familiar to UK users, and perhaps surprise those who believe the Japanese market provides cheap, fast broadband. Providers have of course come up with various responses, such as usage limits, pay-as-you-go services, and traffic management. The latter can be unfair, since it often hits the occasional user as hard as those looking to fill as many hard disks as they can afford, and it can produce unexpected results, breaking some games, for example, that use P2P to distribute patches. The net neutrality debate centres mostly on concerns that a company may throttle external services such as video while prioritising its own video service, and does little to address congestion itself beyond forcing providers to upgrade networks, which, unless customers pay more, will eventually bankrupt companies.
The proposed solution of a weighted TCP algorithm, whereby a single-stream app tags its stream with a higher weight than an application running eleven streams, would seem to offer a way forward. The method sounds not unlike existing traffic tagging and control systems, in which one piece of hardware tags the traffic and a second adjusts the capacity given to each tagged stream. The advantage of embedding this into the TCP stack is that it would become part of all hardware, helping to manage traffic even at the home router level.
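One way to picture the idea, sketched below with hypothetical weights and stream counts rather than anything from the actual proposal, is that capacity would be divided by the tagged weight of each application instead of by its number of open streams:

```python
# Hypothetical sketch of weighted sharing: capacity follows the *tagged
# weight* of each application, not its stream count, so one browser flow
# at weight 1.0 matches a P2P app's 50 flows also tagged at weight 1.0.

LINK_CAPACITY_MBPS = 8.0  # same example line as before

def weighted_share(apps: dict[str, tuple[float, int]]) -> dict[str, float]:
    """apps maps name -> (weight, stream_count); the share follows weight."""
    total_weight = sum(w for w, _ in apps.values())
    return {name: LINK_CAPACITY_MBPS * w / total_weight
            for name, (w, _streams) in apps.items()}

# Stream counts no longer matter; only the advertised weight does.
print(weighted_share({"browser": (1.0, 1), "p2p": (1.0, 50)}))
# -> both get 4.0Mbps despite the 50:1 difference in streams
```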
Another solution, around since 2001, is Explicit Congestion Notification (ECN). It is already present in the Linux, Windows Vista and Windows Server 2008 TCP stacks, but disabled by default. Those who enable it may find it works well, but older routers can mangle or drop packets from ECN-capable hosts. As older kit is retired from networks and more TCP stacks support ECN, we may see the mechanism come into its own; its better signalling method boils down to marking packets rather than dropping them when a network gets busy.
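On Linux the switch is the net.ipv4.tcp_ecn sysctl, and on Vista and Server 2008 it is netsh interface tcp set global ecncapability=enabled. The signalling itself lives in the two low bits of the old IP TOS byte (RFC 3168); the short sketch below, an illustration rather than a packet parser, shows the four codepoints and what a busy router does with them:

```python
# The two ECN bits sit in the low bits of the old IP TOS byte (RFC 3168).
# A router under load sets the CE codepoint instead of dropping the packet,
# and the receiver echoes the signal back so the sender slows down.

ECN_CODEPOINTS = {
    0b00: "Not-ECT (endpoint does not support ECN)",
    0b01: "ECT(1)  (ECN-capable transport)",
    0b10: "ECT(0)  (ECN-capable transport)",
    0b11: "CE      (congestion experienced: marked, not dropped)",
}

def decode_ecn(tos_byte: int) -> str:
    """Return the meaning of the ECN codepoint in an IP TOS byte."""
    return ECN_CODEPOINTS[tos_byte & 0b11]

print(decode_ecn(0x02))  # ECT(0): the sender offered ECN
print(decode_ecn(0x03))  # CE: a busy router marked the packet instead of dropping it
```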
The P2P and network congestion debates are a minefield, with many bodies voicing only their own interests rather than looking at the larger, much more technical picture. With luck an engineering solution can be arrived at, rather than the bean-counter solutions that so often mean excess usage charges or traffic throttling.