Note: This file has badly needed updating since the 6-May-93
version. Some of the most glaring errors have been fixed here,
but more rewriting is required. It is also intended that this
text form and an HTML form will be generated from a common source.
Some other sources of info are:
http://www.research.att.com/mbone-faq.html
ftp://taurus.cs.nps.navy.mil/pub/mbmg/mbone.html
ftp://genome-ftp.stanford.edu/pub/mbone/mbone-connect
http://www.cl.cam.ac.uk/mbone/
http://www.eit.com/techinfo/mbone/mbone.html
The MBONE is a virtual network. It is layered on top of portions of the physical Internet to support routing of IP multicast packets since that function has not yet been integrated into many production routers. The network is composed of islands that can directly support IP multicast, such as multicast LANs like Ethernet, linked by virtual point-to-point links called "tunnels". The tunnel endpoints are typically workstation-class machines having operating system support for IP multicast and running the "mrouted" multicast routing daemon.
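Within one of those multicast "islands," IP multicast maps directly onto the LAN's own multicast addressing. As a small illustration (the group address is an arbitrary example), this sketch computes the Ethernet multicast address for an IPv4 group per the standard host-extensions mapping: the low-order 23 bits of the group address are placed into the fixed 01:00:5e:00:00:00 prefix.

```python
def group_to_ether(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet multicast address.

    The low-order 23 bits of the group address are copied into the
    fixed 01:00:5e:00:00:00 prefix (RFC 1112 host extensions).
    """
    b = [int(x) for x in group.split(".")]
    assert 224 <= b[0] <= 239, "not an IPv4 multicast address"
    # Bit 7 of the second octet is discarded: only 23 bits fit.
    return "01:00:5e:%02x:%02x:%02x" % (b[1] & 0x7F, b[2], b[3])

print(group_to_ether("224.2.127.254"))  # 01:00:5e:02:7f:fe
```

Because only 23 of the 28 significant group-address bits survive the mapping, 32 different groups share each Ethernet address; hosts filter the remainder in software.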
Previous versions of the IP multicast software (before March 1993) used a different method of encapsulation based on an IP Loose Source and Record Route option. This method remains an option in the new software for backward compatibility with nodes that have not been upgraded. In this mode, the multicast router modifies the packet by appending an IP LSRR option to the packet's IP header. The multicast destination address is moved into the source route, and the unicast address of the router at the far end of the tunnel is placed in the IP Destination Address field. The presence of IP options, including LSRR, may cause modern router hardware to divert the tunnel packets through a slower software processing path, causing poor performance. Therefore, use of the new software and the IP encapsulation method is strongly encouraged.
Between continents there will probably be only one or two tunnels, preferably terminating at the closest point on the MBONE mesh. In the US, this may be on the Ethernets at the two FIXes (Federal Internet eXchanges) in California and Maryland. But since the FIXes are fairly busy, it will be important to minimize the number of tunnels that cross them. This may be accomplished using IP multicast directly (rather than tunnels) to connect several multicast routers on the FIX Ethernet.
The intent is that when a new regional network wants to join in, they will make a request on the appropriate MBONE list, then the participants at "close" nodes will answer and cooperate in setting up the ends of the appropriate tunnels. To keep fanout down, sometimes this will mean breaking an existing tunnel to insert a new node, so three sites will have to work together to set up the tunnels.
To know which nodes are "close" will require knowledge of both the MBONE logical map and the underlying physical network topology, for example, the physical T3 NSFnet backbone topology map combined with the network providers' own knowledge of their local topology.
Within a regional network, the network's own staff can independently manage the tunnel fanout hierarchy in conjunction with end-user participants. New end-user networks should contact the network provider directly, rather than the MBONE list, to get connected.
Note that the design bandwidth must be multiplied by the number of tunnels passing over any given link since each tunnel carries a separate copy of each packet. This is why the fanout of each mrouted node should be no more than 5-10 and the topology should be designed so that at most 1 or 2 tunnels flow over any T1 line.
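The arithmetic behind that rule is simple enough to sketch (the stream bandwidths below are illustrative figures, not measurements):

```python
T1_KBPS = 1544          # T1 line rate
PCM_AUDIO_KBPS = 64     # one PCM audio stream
VIDEO_KBPS = 128        # a typical slow-frame-rate video stream

def link_load_kbps(stream_kbps: int, tunnels: int) -> int:
    """Aggregate load on a link crossed by `tunnels` tunnels, each
    carrying its own copy of the same stream."""
    return stream_kbps * tunnels

# Two tunnels carrying audio + video fit on a T1; ten do not.
per_tunnel = PCM_AUDIO_KBPS + VIDEO_KBPS
print(link_load_kbps(per_tunnel, 2))   # 384
print(link_load_kbps(per_tunnel, 10))  # 1920, exceeding the T1
```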
While most MBONE nodes should connect with lines of at least T1 speed, it will be possible to carry restricted traffic over slower speed lines. Each tunnel has an associated threshold against which the packet's IP time-to-live (TTL) value is compared. By convention in the IETF multicasts, higher bandwidth sources such as video transmit with a smaller TTL so they can be blocked while lower bandwidth sources such as compressed audio are allowed through.
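The threshold check can be sketched in a few lines. This is a simplified model of the forwarding decision, not mrouted's actual code; the TTL and threshold values below follow the convention tabulated later in this document.

```python
def forwards(ttl: int, threshold: int) -> bool:
    """Would a multicast router forward a packet with this incoming
    TTL over an interface or tunnel with this threshold?"""
    ttl -= 1                  # normal per-hop decrement
    # The threshold is only compared against the TTL; it is never
    # subtracted from it.
    return ttl >= threshold

# PCM audio originated at TTL 191 passes a threshold-160 tunnel but
# is blocked by a threshold-192 (GSM-audio-only) tunnel.
print(forwards(191, 160))  # True
print(forwards(191, 192))  # False
```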
It is best if the workstations can be dedicated to the multicast routing function, to avoid interference from other activities and so there will be no qualms about installing kernel patches or new code releases on short notice. Since most MBONE nodes other than endpoints will have at least three tunnels, and each tunnel carries a separate (unicast) copy of each packet, it is also useful, though not required, to have multiple network interfaces on the workstation so it can be installed in parallel with the unicast router at sites with configurations like this:
                        +----------+
                        | Backbone |
                        |   Node   |
                        +----------+
                             |
     ------------------------------------------  External DMZ Ethernet
            |                        |
       +----------+            +----------+
       |  Router  |            | mrouted  |
       +----------+            +----------+
            |                        |
     ------------------------------------------  Internal DMZ Ethernet
(The "DMZ" Ethernets borrow that military term to describe their role as interface points between networks and machines controlled by different entities.) This configuration allows the mrouted machine to connect with tunnels to other regional networks over the external DMZ and the physical backbone network, and connect with tunnels to the lower-level mrouted machines over the internal DMZ, thereby splitting the load of the replicated packets. (The mrouted machine would not do any unicast forwarding.)
Note that end-user sites may participate with as little as one workstation that runs the packet audio and video software and has a tunnel to a network-provider node.
To set up and run an mrouted machine will require the knowledge to build and install operating system kernels. If you would like to use a hardware platform other than those currently supported, then you might also contribute some software implementation skills!
We will depend on participants to read mail on the appropriate mbone mailing list and respond to requests from new networks that want to join and are "nearby" to coordinate the installation of new tunnel links. Similarly, when customers of the network provider make requests for their campus nets or end systems to be connected to the MBONE, new tunnel links will need to be added from the network provider's multicast routers to the end systems (unless the whole network runs MOSPF).
Part of the resources that should be committed to participate would be for operations staff to be aware of the role of the multicast routers and the nature of multicast traffic, and to be prepared to disable multicast forwarding if excessive traffic is found to be causing trouble. The potential problem is that any site hooked into the MBONE could transmit packets that cover the whole MBONE, so if it became popular as a "chat line", all available bandwidth could be consumed. Steve Deering plans to implement multicast route pruning so that packets only flow over those links necessary to reach active receivers; this will reduce the traffic level. This problem should be manageable through the same measures we already depend upon for stable operation of the Internet, but MBONE participants should be aware of it.
    Machines                Operating Systems        Network Interfaces
    --------                -----------------        ------------------
    Sun SPARC               SunOS 4.1.1,2,3          ie, le, lo
    Vax or Microvax         4.3+ or 4.3-tahoe        de, qe, lo
    Decstation 3100,5000    Ultrix 3.1c, 4.1, 4.2a   ln, se, lo
    Silicon Graphics        All ship with multicast

There is an interested group at DEC that may get the software running on newer DEC systems with Ultrix and OSF/1. Also, some people have asked about support for the RS-6000 and AIX or other platforms. Those interested could use the mbone list to coordinate collaboration on porting the software to these platforms!
An alternative to running mrouted is to run the experimental MOSPF software in a Proteon router (see MOSPF question below).
    ipmulti-pmax31c.tar
    ipmulti-sunos41x.tar.Z             Binaries & patches for SunOS 4.1.1,2,3
    ipmulticast-ultrix4.1.patch
    ipmulticast-ultrix4.2a-binary.tar
    ipmulticast-ultrix4.2a.patch
    ipmulticast.README                 [** Warning: out of date **]
    ipmulticast.tar.Z                  Sources for BSD

    ### Release 3.3 of the SunOS kernel multicast extensions was made in
    ### late August, but was not put on gregorio like previous releases.
    ### Instead, it is on parcftp.xerox.com, directory pub/net-research,
    ### file ipmulti3.3-sunos413x.tar.Z.  This release has multicast
    ### pruning and several other major improvements over previous
    ### releases, so upgrading is encouraged.

You don't need kernel sources to add multicast support. Included in the distributions are files (sources or binaries, depending upon the system) to modify your BSD, SunOS, or Ultrix kernel to support IP multicast, including the mrouted program and special multicast versions of ping and netstat.
Silicon Graphics includes IP multicast as a standard part of their operating system. The mrouted executable and ip_mroute kernel module are not installed by default; you must install the eoe2.sw.ipgate subsystem and "autoconfig" the kernel to be able to act as a multicast router. In the IRIX 4.0.x release, there is a bug in the kernel code that handles multicast tunnels; an unsupported fix is available via anonymous ftp from sgi.com in the sgi/ipmcast directory. See the README there for details on installing it.
IP multicast is also included in all Solaris 2 releases. Solaris 2.3 out of the box supports the non-pruning version of mrouted (mrouted version 2.0-2.2), but mrouted is not included with Solaris so you must fetch the mrouted sources from the multicast software distribution and compile them. Solaris 2.2 requires patches to run mrouted, available from ftp.uoregon.edu in the directory /pub/Solaris2.x/src/MBONE/Solaris2.x/Kernel.
IP multicast is supported in BSD 4.4.
The most common problem encountered when running this software is with hosts that respond incorrectly to IP multicasts. These responses typically take the form of ICMP network unreachable, redirect, or time-exceeded error messages, which are a waste of bandwidth and can cause an error in the packet send operation executed by a multicast source. The result may be dropouts in an audio or video stream. These responses are in violation of the current IP specification and, with luck, will disappear over time.
Multicast routing algorithms are described in the paper "Multicast Routing in Internetworks and Extended LANs" by S. Deering, in the Proceedings of the ACM SIGCOMM '88 Conference.
There is an article in the June 1992 ConneXions about the first IETF audiocast from San Diego, and a later version of that article is in the July 1992 ACM SIGCOMM CCR. A reprint of the latter article is available by anonymous FTP from venera.isi.edu in the file pub/ietf-audiocast-article.ps. There is no article yet about later IETF audio/videocasts.
    /pub/net-research/mbone-map-{big,small}.ps

The small one fits on one page but is hard to read; the big one is four pages that have to be taped together for viewing. This map was produced from topology information collected automatically from all MBONE nodes that respond to remote queries for mapping information (some are running ancient versions of mrouted that do not respond, and some are hidden by firewalls or routing boundaries). Pavel Curtis at Xerox PARC has added the mechanisms to automatically collect the map data and produce the map. (Thanks also to Paul Zawada of NCSA who manually produced an earlier map of the MBONE.)
The MBONE is now large enough that it is hard to show all nodes and links on one graph. To give a higher level view of the MBONE, there is another map that shows only the major links and nodes in a roughly geographical representation that was created manually by Steve Casner. It is available from ftp.isi.edu:
mbone/mbone-topology.ps
The advantages to linking DVMRP with MOSPF are: fewer configured tunnels, and less multicast traffic on the links inside the MOSPF domain. There are also a couple of potential drawbacks: increasing the size of DVMRP routing messages, and increasing the number of external routes in the OSPF systems. However, it should be possible to alleviate these drawbacks by configuring area address ranges and by judicious use of MOSPF default routing.
    AlterNet      mbone-request@uunet.uu.net
    CA*net        canet-eng@canet.ca
    CERFnet       mbone@cerf.net
    CICNet        mbone@cic.net
    CONCERT       mbone@concert.net
    Cornell       swb@nr-tech.cit.cornell.edu
    JvNCnet       multicast@jvnc.net
    Los Nettos    prue@isi.edu
    NCAR          mbone@ncar.ucar.edu
    NCSAnet       mbone@cic.net
    NEARnet       nearnet-eng@nic.near.net
    OARnet        oarnet-mbone@oar.net
    ONet          onet-eng@onet.on.ca
    PSCnet        pscnet-admin@psc.edu
    PSInet        mbone@nisc.psi.net
    SESQUINET     mbone@sesqui.net
    SDSCnet       mbone@sdsc.edu
    Sprintlink    mbone@sprintlink.net
    SURAnet       multicast@sura.net
    UNINETT       mbone-no@uninett.no

If you are a network provider, send a message to the -request address of the mailing list for your country to be added to that list for purposes of coordinating setup of tunnels, etc:
    Australia:     mbone-oz-request@internode.com.au
    Austria:       mbone-at-request@noc.aco.net
    Canada:        canet-mbone-request@canet.ca
    Denmark:       mbone-request@daimi.aau.dk
    France:        mbone-fr-request@inria.fr
    Germany:       mbone-de-request@informatik.uni-erlangen.de
    Italy:         mbone-it-request@nis.garr.it
    Japan:         mbone-jp-request@wide.ad.jp
    Korea:         mbone-korea-request@cosmos.kaist.ac.kr
    Netherlands:   mbone-nl-request@nic.surfnet.nl
    New Zealand:   mbone-nz-request@waikato.ac.nz
    Singapore:     mbone-sg-request@technet.sg
    UK:            mbone-uk-request@cs.ucl.ac.uk

If your country is not listed, send your request to the appropriate regional sublist:
    Europe:        mbone-eu-request@sics.se
    N. America:    mbone-na-request@isi.edu
    other:         mbone-request@isi.edu

These lists are primarily aimed at network providers who would be the top level of the MBONE organizational and topological hierarchy. The mailing list is also a hierarchy; mbone@isi.edu forwards to the regional lists, then those lists include expanders for network providers and other institutions. Mail of general interest should be sent to mbone@isi.edu, while regional topology questions should be sent to the appropriate regional list.
Individual networks may also want to set up their own lists for their customers to request connection of campus mrouted machines to the network's mrouted machines. Some that have done so were listed above.
STEP 2: Set up an mrouted machine, build a kernel with IP multicast extensions added, and install the kernel and mrouted; or, install MOSPF software in a Proteon router.
STEP 3: Send a message to the mbone list for your region asking to hook in, then coordinate with existing nodes to join the tunnel topology.
    phyint <local-addr> [disable] [metric <m>] [threshold <t>]
    tunnel <local-addr> <remote-addr> [metric <m>] [threshold <t>]

The phyint command can be used to disable multicast routing on the physical interface identified by local IP address <local-addr>, or to associate a non-default metric or threshold with that interface.

The tunnel command can be used to establish a tunnel link between local IP address <local-addr> and remote IP address <remote-addr>, and to associate a non-default metric or threshold with that tunnel.
The metric is the "cost" associated with sending a datagram on the
given interface or tunnel; it may be used to influence the choice
of routes. The metric defaults to 1. Metrics should be kept as
small as possible, because mrouted cannot route along paths with a
sum of metrics greater than 31. It is recommended that the metric
of all links be set to 1 unless you are specifically trying to
force traffic to take another path. On such a "backup tunnel",
the metric should be set to the sum of the metrics on the primary
path, plus 1.
The threshold is the minimum IP time-to-live required for a
multicast datagram to be forwarded to the given interface or
tunnel. It is used to control the scope of multicast datagrams.
(The TTL of forwarded packets is only compared to the threshold,
it is not decremented by the threshold. Every multicast router
decrements the TTL by 1.) The default threshold is 1.
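Putting the two commands together, an mrouted.conf for a small site might look like this sketch. All addresses, metrics, and thresholds here are illustrative examples, not a working configuration:

```
# /etc/mrouted.conf -- illustrative example
phyint 131.108.1.1 metric 1 threshold 1                    # local multicast Ethernet
tunnel 131.108.1.1 192.31.231.66 metric 1 threshold 64     # primary tunnel to provider
tunnel 131.108.1.1 128.102.18.13 metric 2 threshold 64     # backup: primary path metric 1, plus 1
```

The threshold of 64 on both tunnels follows the suggestion later in this section of admitting audio and video while blocking purely local traffic; the backup tunnel's metric follows the sum-of-primary-path-plus-1 rule above.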
Since the multicast routing protocol implemented by mrouted does
not yet prune the multicast delivery trees based on group
membership (it does something called "truncated broadcast", in
which it prunes only the leaf subnets off the broadcast trees), we
instead use a kludge known as "TTL thresholds" to prevent
multicasts from traveling along unwanted branches. This is NOT
the way IP multicast is supposed to work; MOSPF does it right, and
mrouted will do it right some day.
Before the November 1992 IETF we established the following
thresholds. The "TTL" column specifies the originating IP
time-to-live value to be used by each application. The "thresh"
column specifies the mrouted threshold required to permit passage
of packets from the corresponding application, as well as packets
from all applications above it in the table:

                                        TTL   thresh
                                        ---   ------
    IETF chan 1 low-rate GSM audio      255     224
    IETF chan 2 low-rate GSM audio      223     192
    IETF chan 1 PCM audio               191     160
    IETF chan 2 PCM audio               159     128
    IETF chan 1 video                   127      96
    IETF chan 2 video                    95      64
    local event audio                    63      32
    local event video                    31       1

It is suggested that a threshold of 128 be used initially, then
raised to 160 or 192 only if the 64 Kb/s PCM voice is excessive
(GSM voice is about 18 Kb/s), or lowered to 64 to allow video to
be transmitted to the tunnel.

Mrouted will not initiate execution if it has fewer than two
enabled vifs, where a vif (virtual interface) is either a physical
multicast-capable interface or a tunnel. It will log a warning if
all of its vifs are tunnels, based on the reasoning that such an
mrouted configuration would be better replaced by more direct
tunnels (i.e., eliminate the middle man). However, to create a
hierarchical fanout for the MBONE, we will have mrouted
configurations that consist only of tunnels.

Once you have edited the mrouted.conf file, you must run mrouted
as root. See ipmulticast.README for more information.

Have there been any movements towards productizing any of this?

IP multicast host extensions are being added to some vendors'
operating systems. That's one of the first steps. Proteon has
announced IP multicast support in their routers. No network
provider is offering production IP multicast service yet.

The network infrastructure will require resource management
mechanisms to provide low-delay service to real-time applications
on any significant scale. That will take a few years. Until that
time, product-level robustness won't be possible. However,
vendors are certainly interested in these applications, and
products may be targeted initially to LAN operation.