Friday, February 27, 2009

Problems due to NATs and firewalls

Network Address Translators (NAT) and firewalls create problems for end-to-end connectivity on the Internet. This affects not only P2P-SIP but also client-server SIP. In this article I post some example numbers to illustrate the point.

These numbers are for example only: suppose there are 10% public Internet nodes, 30% nodes behind good (cone or address-restricted) NAT, 30% nodes behind bad (symmetric) NAT and 30% nodes behind UDP-blocking firewalls (F). Let's denote these as P=10%, G=30%, B=30%, F=30%. Here the public Internet nodes are typically from universities and research institutes, those behind good NAT are usually from residential DSL/cable access, those behind bad NAT are partly from residential and partly from enterprise environments, and those behind UDP-blocking firewalls are from enterprise and corporate networks. For the purpose of this probability analysis, suppose call events between pairs of nodes are independent, and nodes are equally likely to call any other node. Thus, the percentage of calls between two public Internet nodes is (10%)^2 = 0.01 = 1%.

Now let us enumerate the NAT and firewall traversal techniques available to SIP. STUN helps with good NAT, whereas a TURN relay is needed for bad NAT. ICE is used to negotiate connectivity using STUN or TURN bindings. A TCP-based relay (or even an HTTP relay) is needed for UDP-blocking and very restrictive firewalls. (What about TCP hole punching and other techniques?) A STUN server is light in terms of bandwidth utilization, whereas a TURN relay needs high network bandwidth and hence costs the service provider more money. The same is true of a TCP-based relay.

If at least one participant in a call is behind a UDP-blocking firewall (F), then the call must use a TCP relay. This amounts to 1-(1-F)^2 = 51% of calls going through a TCP relay.

If both participants in a call are behind bad NAT, then we need a TURN relay. This amounts to B^2 = 9% of the calls.

If one participant in a call is either on the public Internet or behind good NAT, and the other is on the public Internet, behind good NAT or behind bad NAT, then the media can go end-to-end using STUN bindings. This amounts to the remaining 40% of the calls.

In conclusion, the VoIP provider will need to host UDP or TCP relays for 51+9=60% of the calls. This is not a good proposition.
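The arithmetic above can be sketched in a few lines, assuming the same independence and uniform-calling assumptions as the example:

```python
# Sketch of the relay-traffic estimate above, assuming calls between node
# pairs are independent and uniformly random across categories.
def relay_fractions(P, G, B, F):
    """Return (tcp_relay, turn_relay, end_to_end) fractions of calls."""
    tcp = 1 - (1 - F) ** 2   # at least one party behind a UDP-blocking firewall
    turn = B ** 2            # both parties behind bad (symmetric) NAT
    e2e = 1 - tcp - turn     # everything else goes end-to-end via STUN/ICE
    return tcp, turn, e2e

tcp, turn, e2e = relay_fractions(P=0.10, G=0.30, B=0.30, F=0.30)
print(round(tcp, 2), round(turn, 2), round(e2e, 2))  # 0.51 0.09 0.4
```

So 51% + 9% = 60% of calls need some relay, matching the conclusion above.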

In the real world, call events are not independent of each other: the probability of a corporate user calling another corporate user within the same corporation is high, and the probability of a home user calling another home user is also high. For example, a SIP service targeted at consumers can expect to have most of its calls among residential users. Thus, the percentage of calls that can be end-to-end is much higher than 40%. Similarly, an enterprise VoIP system can expect mostly internal intra-enterprise calls, which do not need to cross the enterprise firewall. Hence the percentage of calls needing a relay is not as high as 60%. Let us analyze these two use cases separately.

Suppose, for a consumer SIP service, the distribution of nodes is P=15%, G=50%, B=30%, F=5%, i.e., fewer users are behind bad NAT or UDP-blocking firewalls. In this scenario about 20% of calls need a relay whereas 80% don't.
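Checking the ~20% figure with the same formula as before (firewall calls plus double-bad-NAT calls):

```python
# Consumer-service scenario: F=5%, B=30%.
relay = (1 - (1 - 0.05) ** 2) + 0.30 ** 2  # firewall calls + both-bad-NAT calls
print(round(relay, 4))  # 0.1875, i.e. roughly 20% of calls need a relay
```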

In an enterprise VoIP system, suppose 60% of calls are intra-office and 40% involve endpoints outside the office network; then only those 40% of calls need a relay whereas 60% don't. In a properly engineered enterprise VoIP system, appropriate ports are opened for UDP and appropriate media relays are installed in the DMZ, which facilitates a smooth media path for inter-office communication.

While we can play with these numbers as much as we want, the fact remains that a significant percentage of calls need a media relay, either UDP TURN relays or TCP relays. This puts an unnecessary burden on the VoIP service provider to install and manage relays and buy network bandwidth for them, or simply disallow calls that require a relay (in which case it may lose customers).

In a peer-to-peer system with super nodes, such as Skype, these super nodes can act as media relays and hence save a lot of bandwidth and maintenance cost for the provider. There are some things to consider though: a node on the public Internet can become a UDP as well as a TCP relay for any call, whereas a node behind good NAT can become only a UDP relay with some workaround, but not a TCP relay. This puts too much burden on nodes on the public Internet.

Let us consider the original example with P=10%, G=30%, B=30%, F=30%. In this case the 51% of calls that require a TCP relay must use one of the 10% P nodes. When acting as a relay, the bandwidth requirement at the relay is twice that of a node in a call. Suppose each node makes N calls a day and, generally speaking, needs bandwidth for N calls. A public Internet node, however, not only needs bandwidth for its own N calls, but also for relaying about 5xN calls of other users, which amounts to total bandwidth for 11xN calls. Thus, while the super-node architecture is beneficial to the provider, it heavily punishes users on the public Internet. (My guess is that the number of public nodes using VoIP is about 4-5%, which further burdens the public nodes.)
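The burden estimate can be reproduced as follows, under the assumptions stated above (51% of calls relayed, 10% public nodes, relaying costs twice the bandwidth of participating):

```python
# Relay burden on public-Internet nodes in the super-node architecture.
P = 0.10                # fraction of nodes on the public Internet
tcp_relay_calls = 0.51  # fraction of calls needing a TCP relay

# Each P node's share of relayed calls, per call it makes itself:
relayed_per_p_node = tcp_relay_calls / P
# Own call (1 unit) plus 2 units of bandwidth per relayed call:
bandwidth_units = 1 + 2 * relayed_per_p_node
print(round(relayed_per_p_node, 1), round(bandwidth_units, 1))  # 5.1 11.2
```

So a public node carries bandwidth for roughly 11 times its own calls, matching the 11xN figure above.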

A managed P2P-SIP infrastructure can be a good alternative, where corporations and universities donate hosts and bandwidth on high-speed networks to act as relays/super-nodes. Alternatively, one can have an incentive system to encourage hosts to become relays and super-nodes.

Tuesday, February 24, 2009


Adobe's RTMFP is not P2P-VoIP as exemplified by Skype. Rather, RTMFP is closer to client-server SIP or H.323, where signaling happens via a server and the media path can be end-to-end between the endpoints. When people refer to RTMFP as P2P, they really mean 'end-to-end media', similar to client-server SIP.

Why is RTMFP important? The previous Adobe protocol, RTMP, is strictly client-server even for the media path. This gives poor quality for real-time media communication because media packets go from one client to the server, over TCP at that, and are then redistributed to the other client, again over TCP. End-to-end media based VoIP systems existed before Adobe implemented RTMP. I suppose the difficulty of NAT and firewall traversal, and the lack of an interactive video communication requirement in Flash Player, resulted in RTMP. Adobe corrected this mistake in the new protocol, RTMFP, which allows NAT and firewall traversal (to some extent) and allows an end-to-end media path without going through the server. The signaling, though, still goes via the central server.

Once we understand this difference between P2P-VoIP and RTMFP, let's enumerate the differences between an RTMFP-based and a client-server SIP-based communication system.

1. RTMFP is a closed protocol, although Adobe recently opened up the previous RTMP. On the other hand, SIP is an open standard from the IETF. This means anyone can implement SIP whereas only Adobe can implement RTMFP. It also means that a bug in the RTMFP protocol or its implementation is outside the scope of public review, e.g., by security experts.

2. RTMFP is an integrated protocol that has support for signaling, encryption, media flow (flow control and congestion control), and NAT traversal. SIP, on the other hand, is just one piece of the puzzle, used in conjunction with RTP/RTCP, SDP, STUN, TURN, ICE, SRTP, etc., to build a complete system. In that regard there is more scope for interoperability problems in SIP systems. The SIP interoperability test (SIPit) events have helped in solving interoperability problems among current products for over a decade. (See the next point on why RTMFP alone may not be sufficient.)

3. Based on the available documentation, RTMFP works on UDP, whereas SIP can work on UDP as well as TCP. In an RTMFP application, the client should fall back to TCP-based RTMP if for some reason UDP is blocked for the client-server communication. This also means that the client will lose some of the benefits, such as encryption, available in RTMFP. There are other protocols, RTMPS and RTMPE, that facilitate security and encryption over TCP-based RTMP.

4. Although RTMFP works on UDP, it implements additional flow control and TCP-friendly congestion control. This helps media traffic deal with network congestion and slow receivers. On the other hand, most existing SIP systems do not implement such mechanisms in the media path. While this looks like an advantage for RTMFP, it turns out to be a problem because of the way it is implemented. In particular, the network components are disconnected from the media source components such as the camera and microphone. The rate control mechanisms are implemented in the network components, which internally slow down the media traffic by delaying or dropping the UDP media packets, while the encode quality settings on the camera and microphone components are unaffected. This results in packet drops due to congestion and hence choppy video or audio drop-outs. A good application built on top of RTMFP is supposed to get feedback from the network components and adjust the encode quality parameters (framerate, bitrate, quality) in the camera and microphone components so that packet drops are reduced. Thus, unless the application is smart enough to deal with this, the disconnected implementation of rate control and media source causes quality problems in RTMFP.
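The feedback loop such an application needs can be sketched as below. All class and method names here are illustrative stand-ins, not Adobe's actual API; the thresholds are arbitrary:

```python
# Hypothetical sketch of the application-level feedback loop described above:
# poll the network component for packet loss and adjust the media source's
# encode quality, since the rate control in the network layer does not touch
# the camera/microphone settings itself.

class Camera:                # stand-in for the media source component
    def __init__(self):
        self.quality = 1.0   # normalized encode quality (0..1)

class Stream:                # stand-in for the network component
    def __init__(self):
        self.loss = 0.0
    def packet_loss_rate(self):
        return self.loss

def adapt(camera, stream):
    """Lower encode quality under loss; restore it when the path is clean."""
    loss = stream.packet_loss_rate()
    if loss > 0.05:          # congestion: reduce the source bitrate directly
        camera.quality *= 0.8
    elif loss < 0.01:        # headroom: creep the quality back up
        camera.quality = min(1.0, camera.quality * 1.05)

cam, net = Camera(), Stream()
net.loss = 0.10              # simulate heavy loss reported by the network side
adapt(cam, net)
print(round(cam.quality, 2))  # 0.8
```

The point of the design is that the quality knob lives on the source (camera/microphone), so the encoder produces less data, instead of the network layer silently dropping packets it has already been given.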

5. Both RTMFP and SIP can use media relays to work around NATs and firewalls. However, RTMFP does not use a super-node architecture where some clients (Flash Player instances) act as relays, whereas (P2P) SIP can use existing client nodes as media relays. This means that when using RTMFP, the service provider must bear all the bandwidth cost of the relays, whereas in (P2P) SIP the cost can be distributed among the users because of the peer-to-peer nature. I analyze the cost due to NAT and firewall traversal in my next post.

Sunday, February 22, 2009

Why does client-server video conference fail?

I analyze some problems with client-server communication for multi-party video conferencing.

Audio communication differs from video in two important ways: (1) usually only one person in a conference is speaking at any time, whereas everyone's video is on; (2) audio codecs are usually fixed bit-rate, whereas video codecs adjust bit-rate based on various parameters such as available network bandwidth and desired frame-rate.

Problem 1:
In client-server mode, because video coming from one participant needs to be distributed to all the other participants, the bandwidth and processing requirements at the server can be high; unlike audio, where usually only one person is speaking. Secondly, the downstream video bandwidth requirement at the client increases with the number of participants in a conference. In an N-party conference, each client usually has one outbound audio stream, one inbound audio stream, one outbound video stream and N-1 inbound video streams. Note that this problem is worse for a peer-to-peer (P2P) video conference, where everyone is sending a video stream to everyone else: in that case there are N-1 inbound and N-1 outbound video streams at each client. For asymmetric network access (ADSL or cable), where upstream bandwidth is lower than downstream, this causes early saturation of the outbound network bandwidth. Shutting down the video stream, or reducing the video quality while a person is not speaking, saves some bandwidth, especially for speaker-mode conferences.
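The per-client stream counts above can be expressed directly:

```python
# Video stream counts per client in an N-party conference, per the analysis
# above: client-server sends one stream up and receives N-1 down; full-mesh
# P2P sends and receives N-1 each way.
def video_streams(n, p2p=False):
    outbound = (n - 1) if p2p else 1
    inbound = n - 1
    return outbound, inbound

print(video_streams(5))            # (1, 4)  client-server
print(video_streams(5, p2p=True))  # (4, 4)  full-mesh P2P
```

For a client on ADSL/cable, the jump from 1 to N-1 outbound streams is what saturates the (smaller) upstream link first.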

Problem 2:
The second point of difference is that audio is usually encoded using a fixed bit-rate codec, whereas the video bit-rate is adjusted based on several parameters such as available network bandwidth, desired quality and frame-rate. In a client-server environment most implementations use the client-to-server network quality information to decide what bit-rate to use for the client's video encoding. Consider a two-party client-server conference, where the first client is closer to the server and hence has lower latency. The first client decides to use high-quality, high-bitrate video encoding. On the other hand, the second client decides to use low-quality, low-bitrate video encoding. This asymmetry causes the first client to receive poor quality video, whereas the second client's downstream link gets congested with high-bitrate video. The problem is further aggravated if, in a multi-party conference, there is only one participant on a poor quality network. The problem is caused because the client-to-server network metric, instead of the end-to-end network metric, is used in deciding the video encoding bitrate.
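The fix implied above can be sketched as follows: pick the sender's encode bitrate from the worst end-to-end path to any receiver, not from the sender's own link to the server. The bandwidth numbers and the cap are illustrative:

```python
# Sketch: choose the video encode bitrate from end-to-end receiver capacity.
# In a forwarded conference the slowest receiver's downstream is the bottleneck.
def choose_bitrate(receiver_bandwidths_kbps, cap_kbps=1000):
    return min(min(receiver_bandwidths_kbps), cap_kbps)

# Receivers with 2000, 800 and 1500 kbps of downstream capacity:
print(choose_bitrate([2000, 800, 1500]))  # 800 -> encode at the bottleneck rate
```

This avoids both halves of the asymmetry: the fast client no longer floods the slow client's downstream, and no client encodes at a rate its receivers cannot consume.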

Problem 3:
Sometimes, the conference server imposes bitrate control to limit the traffic towards a low-bandwidth client. However, for efficiency reasons the server doesn't re-encode the video packets. Instead, it just drops non-Intra frames if there is not enough bandwidth. This yields marginal to no improvement, primarily because Intra frames are several times bigger than other frames. Secondly, it causes choppy video, which further degrades the experience. Layered (scalable) encoding in MPEG addresses this problem.

Problem 4:
Larger video packets may not traverse end-to-end over UDP. An encoded audio packet is usually small, of the order of 10-80 bytes per 20 ms. On the other hand, an Intra-frame video packet can be much larger, say 1000-10000 bytes. When media packets are sent over UDP and the packet size is large, there is a high probability that the packet gets dropped. This is because of the MTU restriction and middle-boxes (NATs and firewalls) in the media path. A UDP packet larger than the MTU (typically approx. 1300-1400 bytes) gets fragmented at the IP layer such that the fragments after the first do not carry the UDP header information (such as source and destination port numbers). A port-inspecting NAT or firewall that doesn't handle fragmentation correctly may drop such subsequent fragments, causing loss of the whole UDP packet at the receiver end. Thus, video over UDP has to take care of additional fragmentation and reassembly, and/or discovery of the path MTU, in the application layer.
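A minimal sketch of the application-layer fragmentation just described, using a conservative payload budget (the 1200-byte figure is an assumption, chosen to stay under the typical 1300-1400 byte MTUs mentioned above):

```python
# Application-layer fragmentation for large encoded video frames, so that
# each chunk fits in a single UDP datagram and IP-layer fragmentation
# (which middle-boxes may mishandle) is avoided.
MAX_PAYLOAD = 1200  # bytes; conservative budget below typical path MTUs

def fragment(frame: bytes, max_payload: int = MAX_PAYLOAD):
    """Split an encoded frame into chunks that each fit one UDP datagram."""
    return [frame[i:i + max_payload] for i in range(0, len(frame), max_payload)]

chunks = fragment(b"\x00" * 5000)  # e.g., a 5000-byte Intra frame
print(len(chunks), [len(c) for c in chunks])  # 5 [1200, 1200, 1200, 1200, 200]
```

A real implementation would also add sequence/offset headers to each chunk so the receiver can reassemble the frame and detect loss.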

Problem 5:
The server may allow video over UDP as well as TCP from the clients, typically to support NAT and firewall traversal. If some clients are on TCP and others on UDP, then the server also needs to proxy packets from one to the other. If the client on TCP assumes ordered packet delivery, then the server will also need to do buffering, packet re-ordering and delay adjustment, which further adds to the implementation complexity of the server. For audio the problem is not that visible beyond a glitch in sound, whereas for video the view may get completely corrupted until the next Intra frame.

Problem 6:
A slightly related problem arises when the conference server does audio mixing but video forwarding. In this case, the server must perform delay adjustment, packet re-ordering, and buffering for the audio path. However, for efficiency reasons it may blindly forward the video packets among the participants. Thus the synchronization information between the audio and video gets lost, and performing lip synchronization at the receiving client becomes a challenge. A correct implementation of the server should act as an RTP mixer, i.e., include the contributing source information in the mixed audio stream, and distribute RTCP information to all the participants for synchronization. (How to do this if each audio call leg is a separate RTP session?)

Some of these problems (2,3,5,6) can be solved to some extent by using peer-to-peer video conferencing.