How does Google video chat work in Gmail?

This post is just speculation (i.e., guesswork).

Google offers a video chat function within the Gmail web pages. This function is not yet available in the GTalk client. Google requires you to download a plugin that enables the video chat function from Gmail. The video is rendered using Flash Player. In this article I present my understanding of how it works.

Flash Player exposes certain audio/video functions to the (SWF) application, but it does not give the application access to the raw real-time audio/video data. The relevant ActionScript API classes are: the Camera class, which lets you capture video from your camera; the Microphone class, which lets you capture audio from your microphone; the NetConnection/NetStream classes, which let you stream audio/video from Flash Player to a remote server and vice versa; and the Video class, which renders video either captured by a Camera or received on a NetStream. Given these, to display video in Flash Player it must either be captured by a Camera object or received from a remote server on a NetStream. Luckily, ActionScript allows you to choose which camera to use for capture.
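To make these APIs concrete, here is a minimal ActionScript 3 sketch (my own illustration, not Google's code) that captures the default camera and microphone, renders the captured video locally, and publishes the stream to a server; the server URL and stream name are made up for illustration.

```actionscript
package {
    import flash.display.Sprite;
    import flash.events.NetStatusEvent;
    import flash.media.Camera;
    import flash.media.Microphone;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    // Minimal sketch: capture the default camera and microphone, render the
    // captured video locally, and publish both to a server over a NetStream.
    public class CaptureAndPublish extends Sprite {
        private var cam:Camera = Camera.getCamera();            // default capture device
        private var mic:Microphone = Microphone.getMicrophone();
        private var nc:NetConnection = new NetConnection();

        public function CaptureAndPublish() {
            var video:Video = new Video(320, 240);
            video.attachCamera(cam);                            // render locally captured video
            addChild(video);

            nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
            nc.connect("rtmp://example.com/app");               // hypothetical server URL
        }

        private function onStatus(e:NetStatusEvent):void {
            if (e.info.code == "NetConnection.Connect.Success") {
                var ns:NetStream = new NetStream(nc);
                ns.attachCamera(cam);                           // send captured video to the server
                ns.attachAudio(mic);                            // send captured audio to the server
                ns.publish("mystream");                         // hypothetical stream name
            }
        }
    }
}
```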

When the Google plugin is installed, it exposes itself as two camera devices, actually virtual device drivers, named 'Google Camera Adaptor 0' and 'Google Camera Adaptor 1', which you can see in the Flash Player settings when you right-click on the video. One of the devices is used to display the local video and the other the remote participant's video. The Google plugin also implements the full networking protocol and stack, which I think is based on the GTalk protocol. In particular, it implements XMPP with the (P2P) Jingle extension, and a UDP-based media transport for carrying real-time audio/video. The audio path is completely independent of the Flash Player. In the video path, the plugin captures video from the actual camera device installed on your PC and feeds it to the Flash Player via one of the virtual camera device drivers. It also encodes and sends the video to the remote user. In the reverse direction, it receives video (over UDP) from the remote user and feeds it to the Flash Player via the second virtual camera device driver. The SWF application running in the browser creates two Video objects and attaches them to two Camera objects, one for each of the two virtual video devices, instead of attaching them to your real camera device. This way, the SWF application can display both the local and the remote video in the Flash application.
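Based on this understanding, the SWF side of the plumbing could look roughly like the following ActionScript 3 sketch. This is my guess at the approach, not Google's actual code; the device names are the ones that appear in the Flash Player settings dialog, and the class and method names are mine.

```actionscript
package {
    import flash.display.Sprite;
    import flash.media.Camera;
    import flash.media.Video;

    // Sketch: attach the two virtual "Google Camera Adaptor" devices (exposed by
    // the plugin) to two Video objects, one for local and one for remote video.
    public class TwoVirtualCameras extends Sprite {
        public function TwoVirtualCameras() {
            showAdaptor("Google Camera Adaptor 0", 0);    // local preview
            showAdaptor("Google Camera Adaptor 1", 330);  // remote participant
        }

        private function showAdaptor(deviceName:String, xPos:Number):void {
            // Camera.getCamera() takes the device's index (as a string) in Camera.names.
            var index:int = Camera.names.indexOf(deviceName);
            if (index < 0) return;                        // plugin not installed
            var cam:Camera = Camera.getCamera(String(index));
            var video:Video = new Video(320, 240);
            video.attachCamera(cam);                      // render whatever the plugin feeds in
            video.x = xPos;
            addChild(video);
        }
    }
}
```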

What this means is that for multi-party video calls, either (1) the plugin will have to expose more virtual video devices (is there any limit on the number of devices?), or (2) somehow multiplex multiple videos into the same video stream (which is CPU expensive), or (3) show only one active remote participant in the call (which gives a bad user experience).

An open question to ask: will it be possible to use Google's plugin to build our own Flash application, and somehow use our own network application/protocol to implement video calls? Hopefully Google will make the plugin API available to the public some day.

Problems in RTMP

Adobe's RTMP, or Real-Time Messaging Protocol, was recently made available to the public as an open specification as part of Adobe's Open Screen initiative. Most of the protocol had already been implemented in third-party software such as Red5, rtmpy and rtmplite well before the specification became public. In this article I take a critical look at the protocol.

The specification has three parts: (1) RTMP chunk stream, (2) RTMP message format, and (3) RTMP command messages. At a high level there are different types of messages, such as command, data, audio and video. The third part describes the high-level RPC (remote procedure call) semantics for various commands and their responses, such as creating a network stream or publishing a stream. The actual formatting and parsing of the individual fields in a command are specified using AMF (Action Message Format), which comes in two flavors: AMF0 and AMF3. Messages that control the protocol, such as setting the window size of the lower layer or the bandwidth for the peer, are specified in the second part. Finally, the first part defines the low-level chunk format and separates the high-level message stream from the low-level transport (chunk) stream.
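To give a flavor of AMF0, here is a small ActionScript 3 sketch of my own (not taken from the specification text) that encodes the beginning of a hypothetical "connect" command body using three basic AMF0 type markers: string (0x02), number (0x00) and null (0x05).

```actionscript
package {
    import flash.display.Sprite;
    import flash.utils.ByteArray;

    // Sketch of AMF0 encoding for the start of an RTMP command message body:
    // a command name (string), a transaction id (number), and a null placeholder.
    public class Amf0Sketch extends Sprite {
        public function Amf0Sketch() {
            var body:ByteArray = new ByteArray();   // big-endian by default, as AMF0 requires
            writeString(body, "connect");           // command name
            writeNumber(body, 1);                   // transaction id
            body.writeByte(0x05);                   // null: no command object in this sketch
            trace("AMF0 payload length:", body.length);
        }

        private function writeString(b:ByteArray, s:String):void {
            b.writeByte(0x02);     // AMF0 string marker
            b.writeUTF(s);         // 16-bit length followed by UTF-8 bytes
        }

        private function writeNumber(b:ByteArray, n:Number):void {
            b.writeByte(0x00);     // AMF0 number marker
            b.writeDouble(n);      // 64-bit IEEE-754 double, big-endian
        }
    }
}
```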

The first (and worst) problem with RTMP is that it is overly complex at doing what it does. One reason is that it was poorly designed, without extensibility or competing peer protocols in mind, and was later "fixed" to support new features. As an example of the complexity, the chunk stream ID field in the first part of the specification was initially intended to go up to 63 but was later extended to 65599. For IDs 2 to 63, the first byte stores the value in its least significant 6 bits. For IDs in the range 64-319, the second byte stores the value minus 64, while the least significant 6 bits of the first byte store 0. For values in the range 64-65599, the second and third bytes store the value using a more complicated formula, while the least significant 6 bits of the first byte store 1. Another example is the timestamp field, which is 24 bits. However, the protocol supports 32-bit timestamps: if the value does not fit in 24 bits, those 24 bits are all 1s and the actual (extended) timestamp is stored after the header. What is surprising is that a binary protocol called RTP (Real-time Transport Protocol) existed before RTMP was conceived and had a well-defined and well-thought-out message layout. For example, RTP has a version field for extensibility and a 32-bit timestamp. Unfortunately, RTMP didn't learn from its peer protocol and suffers from excessive complexity.
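The following ActionScript 3 sketch illustrates my reading of the three basic-header forms for the chunk stream ID; the class and method names are mine and the code is for illustration only.

```actionscript
package {
    import flash.display.Sprite;
    import flash.utils.ByteArray;

    // Sketch of the RTMP basic-header encoding of the chunk stream id (cs id),
    // showing the three forms described in the chunk stream specification.
    public class ChunkStreamId extends Sprite {
        public function ChunkStreamId() {
            trace(writeBasicHeader(0, 5).length);      // 1 byte:  cs id 2..63
            trace(writeBasicHeader(0, 300).length);    // 2 bytes: cs id 64..319
            trace(writeBasicHeader(0, 40000).length);  // 3 bytes: cs id 64..65599
        }

        private function writeBasicHeader(fmt:uint, csid:uint):ByteArray {
            var b:ByteArray = new ByteArray();
            if (csid >= 2 && csid <= 63) {
                b.writeByte((fmt << 6) | csid);          // id stored in the low 6 bits
            } else if (csid <= 319) {
                b.writeByte(fmt << 6);                   // low 6 bits are 0 => 2-byte form
                b.writeByte(csid - 64);                  // second byte stores id minus 64
            } else if (csid <= 65599) {
                b.writeByte((fmt << 6) | 1);             // low 6 bits are 1 => 3-byte form
                b.writeByte((csid - 64) & 0xFF);         // id = third byte * 256 + second byte + 64
                b.writeByte((csid - 64) >> 8);
            }
            return b;
        }
    }
}
```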

RTMP is designed to work only on TCP and cannot work on UDP without several modifications. One well-understood conclusion of early Internet multimedia research was that UDP is better suited than TCP for real-time media transport. Yet, although RTMP calls itself real-time, it was designed to work solely on TCP. There is no sequence number to handle lost packets, so it relies on the lower layer (TCP) to provide guaranteed packet delivery. Note that the timestamp cannot be used to detect lost packets, and the header optimization does not work if packets are delivered out of order. The newer RTMFP does work over UDP, but it has its own set of problems and is not yet an open specification.

RTMP has several unnecessary elements. The chunk stream mechanism is not necessary and actually hurts the performance of real-time media transport, besides complicating the implementation. In particular, for client-server communication, where the number of connections/streams between one client-server pair is typically one, there is little advantage to using chunks. Chunking can help in server-to-server communication by avoiding head-of-line blocking of one stream by another. Secondly, the bulky initial handshake of RTMP, which I believe was intended to measure bandwidth or end-to-end latency, is not actually useful.
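For a sense of how bulky the handshake is, here is a rough ActionScript 3 sketch of the bytes a client sends before any message can flow; the layout follows my reading of the specification (a 1-byte version C0 followed by a 1536-byte C1 block), and the class name is made up.

```actionscript
package {
    import flash.display.Sprite;
    import flash.utils.ByteArray;
    import flash.utils.getTimer;

    // Sketch of the client side of the RTMP handshake: C0 (version byte) followed
    // by C1 (4-byte timestamp, 4 zero bytes, 1528 bytes of filler). The server
    // answers with similar S0/S1/S2 blocks before any real message is exchanged.
    public class HandshakeSketch extends Sprite {
        public function HandshakeSketch() {
            var c0c1:ByteArray = new ByteArray();
            c0c1.writeByte(3);                                 // C0: protocol version
            c0c1.writeUnsignedInt(getTimer());                 // C1: 4-byte timestamp
            c0c1.writeUnsignedInt(0);                          // C1: 4 zero bytes
            for (var i:int = 0; i < 1528; i++) {
                c0c1.writeByte(int(Math.random() * 256));      // C1: random filler
            }
            trace("handshake bytes sent before any message:", c0c1.length);  // 1537
        }
    }
}
```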

Media and control paths should be separate. IETF protocols such as RTSP and SIP, as well as the ITU-T protocol H.323, exhibit this separation by delegating media transport to a separate RTP stream. This has several advantages, because the control path usually travels through application servers that are CPU and memory intensive and have different scaling requirements than media servers, which are bandwidth and disk intensive. Separating the media path from the control path achieves scalability, robustness and a distributed component architecture in the system. In RTMP, on the other hand, control goes hand in hand with media. For example, the application server that handles shared objects and conference state also handles media storage and transport.

RTMP has inconsistencies. The first example is the use of some data types: the stream ID field appears at several places in the protocol in different forms, as a 32-bit little-endian integer, a 32-bit big-endian integer, and a 64-bit floating point number. The second example is incoherence between layers. The default chunk size is 128 bytes. By default, real-time audio captured from the microphone is streamed to the server as Nellymoser-encoded audio packets with two frames per packet. Each Nellymoser-encoded frame is 64 bytes, and there is a one-byte header indicating the codec type, so each packet in the default case is 129 bytes. Thus, under default operation, a Flash Player should immediately change the chunk size from 128 to 129 to fit a full audio packet in a chunk (to avoid fragmenting it, which would be inefficient). Being off by one byte indicates that something went wrong when designing the protocol for the default case.
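The off-by-one arithmetic can be spelled out in a few lines (an illustrative sketch of my own, not part of any implementation):

```actionscript
package {
    import flash.display.Sprite;

    // Worked example of the off-by-one described above: with the default chunk
    // size of 128 bytes, a default Nellymoser audio message (1-byte codec header
    // plus two 64-byte frames = 129 bytes) always spills into a second chunk.
    public class AudioChunkMath extends Sprite {
        public function AudioChunkMath() {
            var codecHeader:uint = 1;
            var frameSize:uint = 64;
            var framesPerPacket:uint = 2;
            var packet:uint = codecHeader + frameSize * framesPerPacket;   // 129 bytes

            trace("chunks at size 128:", Math.ceil(packet / 128));  // 2 (fragmented)
            trace("chunks at size 129:", Math.ceil(packet / 129));  // 1 (fits exactly)
        }
    }
}
```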

When the rest of the world was moving towards open standards such as RTP, Adobe embraced the closed and proprietary RTMP. Adobe has been a proponent of proprietary technologies, imposing sub-optimal technologies on developers and users. Another example is the RTMPE extension for encrypted RTMP communication. Readers are encouraged to read this article: "The major implication of this takedown notice is that Adobe has definitively told us that a fully-compliant free software Flash player is illegal. This is because RTMPE is part of Flash, circumventing RTMPE is illegal (in the US at least), and Adobe will never give a key to a free software project since they cannot hide the key. As a result, Flash cannot truly be a standard..."