Multicast networking refers to a method of sending data from one source device to multiple destination devices simultaneously. Instead of the source sending a separate copy of the data stream to each recipient (unicast), multicast lets the network itself replicate packets along efficient distribution paths. It is one of several delivery models, alongside unicast and broadcast, used to distribute data efficiently.
This approach provides major advantages when distributing content such as audio/video streams or stock market data to many users concurrently. By leveraging multicast, source servers avoid overload and network bandwidth is conserved. However, implementing multicast requires specific network protocols and configuration.
In this guide, we will explore what exactly multicast is, its core components like addressing schemes and routing protocols, how to design and set up a basic multicast network, how to create a custom multicast application, and some concluding thoughts on further expanding your multicast deployment.
What is multicast?
Multicast networking, often referred to simply as “multicast,” is a network traffic distribution mode that enables one-to-many and many-to-many communication from data sources to multiple destinations simultaneously. It establishes point-to-multipoint connections over Layer 3 networks utilizing IP addressing.
Instead of sending copies of packets individually to each intended recipient (unicast) or transmitting packets indiscriminately to all nodes on the local network segment (broadcast), multicast sends traffic only to the registered endpoints that comprise a specific multicast group. This avoids flooding the entire network while still reaching numerous subscribers.
In multicast communications, sources send packets just once to the multicast network, labeled with special reserved IP addresses indicating the multicast group. Network routers then utilize specific multicast protocols to replicate packets and forward them along optimized distribution paths to reach all group members.
Multicast traffic forwarding is accomplished by constructing a multicast distribution tree that spans every node with receivers belonging to the applicable group. This tree creation, group registration, and packet replication functionality requires specific multicast protocols and network configuration to work properly.
Key multicast terms
- Multicast group: A set of receivers registered to receive traffic sent to a specific multicast IP address. Group members join through IGMP signaling.
- Multicast address: A dedicated IP address assigned from the designated multicast range 224.0.0.0 through 239.255.255.255. Used to label multicast traffic for group delivery.
- Distribution tree: Pathway formed by multicast-enabled routers to reach all receivers registered in a specific multicast group. Forwarding decisions are made hop-by-hop.
Advantages include efficient utilization of source server resources and available network bandwidth by avoiding duplicate unicast streams. This permits scaling to large numbers of receivers. But it does require configuration changes to implement and lacks reliability mechanisms native to TCP transport.
Now that you know broadly what multicast accomplishes and its primary characteristics, let’s explore the key components that enable devices to communicate in this manner.
Multicast components
Specialized protocols and network infrastructure adjustments enable hosts to send and receive traffic on multicast groups across routers.
These core multicast components include:
Multicast IP addresses
As mentioned earlier, multicast communications leverage dedicated IP addresses reserved exclusively for multicast group traffic. These function similarly to individual or unicast addresses but instead of pointing to a single interface, they represent group associations. IPv4 designates the range 224.0.0.0 through 239.255.255.255 for multicast, also sometimes called Class D addresses. In IPv6 networks, the range FF00::/8 performs analogous group address functionality.
Senders label packets destined for multicast groups using these addresses. Routers then replicate and forward that traffic appropriately down interfaces included in the distribution tree for that particular multicast IP address. IGMP signaling handles group joins and leaves while PIM builds a distribution tree state between multicast routers.
While we’ve focused on IPv4, it’s worth considering how IPv6 and NAT interact with multicast addressing schemes. IPv6 offers a much larger address space and eliminates the need for NAT in many scenarios, which can simplify multicast implementations.
Certain ranges within the broader multicast address blocks are earmarked for specific uses like local subnet communication. Multicast applications should avoid these reserved segments when assigning group addresses programmatically. RFC 5771 details the IANA guidelines for IPv4 multicast address assignments, while IPv6 multicast address allocation follows RFC 3307.
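To make the addressing concrete, here is a minimal Python sketch of a sender labeling traffic with a group address. The address 239.1.2.3 and port 5007 are purely illustrative placeholders chosen from the administratively scoped range, not values defined by any standard.

```python
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical administratively scoped group address
PORT = 5007           # hypothetical application port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Limit how many router hops the packets may traverse (1 = stay on the local subnet).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 16))

# The destination is the group address; multicast-enabled routers replicate
# the packet along the distribution tree for that group.
sock.sendto(b"hello, group", (GROUP, PORT))
```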
Internet group management protocol (IGMP)
IGMP gives hosts the ability to communicate group membership information to local multicast-enabled routers. Hosts send periodic IGMP membership reports that tell the routers on each subnet which multicast streams group members need to receive. Routers with IGMP functionality use these reports to forward requested groups only to the required interfaces.
There exist three IGMP versions with evolving capabilities:
- IGMPv1: Permits joining multicast groups
- IGMPv2: Adds leave messages to depart groups
- IGMPv3: Enables source-specific multicast requests
Without IGMP signaling from potential receivers informing routers of required group traffic, routers would not replicate or forward packets to the appropriate downstream interfaces. IGMP messages originate from the host seeking a particular multicast data stream and travel no further than the local router, as they are sent with a TTL of 1.
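As a rough illustration of how a host signals group membership, the following Python sketch joins a group with the standard socket options; the operating system's IP stack sends the actual IGMP membership report on the application's behalf. The group address and port are hypothetical and must match whatever the sender uses.

```python
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical group address; must match the sender
PORT = 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group makes the host's IP stack send an IGMP membership report
# to the local router; the report itself never leaves the subnet (TTL 1).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, src = sock.recvfrom(65535)
print(f"received {len(data)} bytes from {src[0]}")
```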
Protocol independent multicast (PIM)
While IGMP handles group join communications from neighboring receivers, PIM coordinates overall inter-router multicast forwarding. Multicast routing requires building distribution trees between routers leading to all group members across the network. PIM constructs and maintains this delivery infrastructure.
PIM operation modes:
- Dense mode: Floods multicast traffic everywhere, then prunes branches with no receivers
- Sparse mode: Starts with no distribution tree, then selectively builds branches where needed
- Sparse-dense hybrid mode: Runs sparse for some groups and dense for others
Dense mode PIM begins by forwarding all multicast streams throughout the network, then prunes branches where routers discover a lack of receivers through IGMP and PIM signaling. This uses bandwidth less efficiently but operates without relying on rendezvous point routers, which can be a point of failure.
On the other hand, sparse mode establishes distribution paths only where active receivers exist on each subnet, initiating no traffic flow until branches are required. This prevents excessive flooding while introducing reliance upon dedicated rendezvous points.
Rendezvous points
PIM sparse mode, including the sparse-dense hybrid implementation, uses special rendezvous point (RP) routers to collate all available multicast sources and mediate join requests from receiver populations. Instead of directly connecting senders and receivers, traffic gets routed through the RP.
PIM routers must discover the appropriate RP for each group, which may occur statically, dynamically via Auto-RP, or through other means like Bootstrap Router (BSR). Auto-RP announces RP information via the reserved multicast groups 224.0.1.39 and 224.0.1.40.
Once they learn the active RP, routers initially request streams through the RP before eventually switching to shortest-path source-based routing once the distribution trees stabilize. This RP intermediary aids initial convergence of the multicast forwarding state.
IGMP snooping & CGMP
Additional efficiency gains are possible when Layer 2 switches enable IGMP Snooping or routers communicate directly using Cisco Group Management Protocol (CGMP).
IGMP snooping allows switches to eavesdrop on IGMP communications from hosts joining and leaving multicast groups. Instead of flooding all multicast traffic on every interface, the switch forwards streams selectively only to segments actually containing group members.
Similarly, CGMP permits multicast-aware Cisco routers and Layer 2 switches to synchronize forwarding decisions through a proprietary protocol built atop Cisco Discovery Protocol (CDP). CGMP maintenance tasks get delegated to the switch infrastructure.
Primary uses for multicast networking
In IPv6 networks, there’s no such thing as broadcast. Multicast is used for everything that broadcast was previously used for, including a number of standard network infrastructure things like router discovery, address allocation, and neighbour discovery (which replaces ARP).
Multicast has a couple of fundamental characteristics that dictate how it’s used. Because a server only needs to send each packet once and will reach all of the recipients, it’s useful for situations where a large number of receivers need to receive the same data. Since the replication and distribution of these packets is done by the network rather than the head end server, it scales well to extremely large numbers of receivers.
But because multicast is one-way, any responses would need to be implemented using a separate protocol. This also means that dropped packets must either be unimportant, or the recovery mechanisms for lost data must be built separately. There are two really common places where these characteristics are great strengths.
1. Distributing AV data streams
The first is in distributing audio/visual data streams identically to a large number of users. This is the case, for example, with modern IP-based cable TV networks. The set-top box subscribes to a multicast data stream that represents a program or a channel, and the network starts forwarding that data stream to it. Change the channel and the set-top box unsubscribes from that data stream and subscribes to another. If a packet’s lost, it’s not usually even noticeable, although losing several packets in a row can cause that blocky choppy video effect we’re all familiar with.
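A channel change in such a system boils down to leaving one group and joining another. The sketch below, with made-up group addresses and port, shows roughly what a receiver does at the socket level; a real set-top box adds buffering, decoding, and error concealment on top.

```python
import socket
import struct

def mreq(group):
    # ip_mreq structure: group address + local interface (0.0.0.0 = any)
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))

def change_channel(sock, old_group, new_group):
    """Leave the old group and join the new one. The IP stack emits the
    corresponding IGMP leave and membership-report messages, and the network
    stops forwarding the old stream and starts forwarding the new one."""
    if old_group:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq(old_group))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq(new_group))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 5004))                               # hypothetical stream port
change_channel(sock, None, "239.10.0.1")            # tune to "channel 1"
change_channel(sock, "239.10.0.1", "239.10.0.2")    # switch to "channel 2"
```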
2. Providing real-time stock-market data
The second common application is real-time stock market data. In this scenario, all market participants must receive the same information at the same time to ensure the market is fair, so multicast is an ideal way of distributing the data. But in this case, lost packets are a potentially serious problem since they all contain important data about prices and transactions. So these types of data streams are accompanied by other systems that allow market participants to recover lost data over standard TCP unicast protocols.
3. Satellite networks
Another interesting application of multicast is in satellite networks, including the new low Earth orbit (LEO) constellations. Satellites and non-directional antennas tend to be broadcast links by nature, making them well-suited for multicast transmission. This can provide good bandwidth efficiencies when sending packets to multiple receivers. However, implementing multicast over satellite links often requires specialized protocols, as the standard terrestrial multicast protocols may not work optimally in these environments.
Multicast would also theoretically be useful for things like internet gaming, in which real-time game information needs to be distributed to a large number of players. However, I’m not aware of any internet services that provide multicast forwarding capabilities. The reason for this is obvious: If you could cause the network to arbitrarily replicate packets, this could be abused to create denial of service attacks.
Essential problems with multicast networking
There are two essential problems with delivering multicast data streams.
The first is how to allow receivers to subscribe to the multicast data they want and how to unsubscribe from data they no longer want to receive. The second is how to forward those packets from the server to all of the receivers so each packet is forwarded once and only once, regardless of how things are interconnected.
We solve these problems using two protocols. IGMP (Internet Group Management Protocol) handles the joining and leaving of multicast streams for individual receivers.
PIM (Protocol Independent Multicast) is the protocol that Layer 3 network devices such as routers use to build and manage the multicast delivery tree structures across the network.
1. IGMP
The first thing to mention about IGMP is, although the multicast IP addresses that appear in the “destination” field of the multicast packet header look like normal IPv4 or IPv6 addresses, they’re actually “group” addresses that refer to all of the members of the group.
A device can join a group by sending an IGMP “membership report” message to the group IP address. This packet is received by a multicast router on the segment, and the router does whatever is necessary to start forwarding the data stream to this device.
IGMP membership report packets are always sent with a TTL value of 1 so they can’t leave the current network segment.
There are three versions of IGMP: Version 1 provides the basic functionality of allowing devices to subscribe to multicast groups. Version 2 introduced the ability to also leave a multicast group. And in Version 3, devices were given the ability to request a multicast stream from a specific source device (Source Specific Multicast), instead of just generically from any sending device.
2. IGMP snooping
An important Layer 2 adjunct to IGMP is called IGMP snooping. This isn't a protocol so much as a feature. With IGMP snooping enabled, a Layer 2 device like an Ethernet switch listens to the IGMP membership reports that hosts send to the router. The switch is then able to use this information to ensure that only those devices that actually want to receive a given multicast group get the data, instead of all devices on the segment.
In many implementations of IGMP Snooping, the switch also intercepts the membership reports and keeps track of which groups are required on each VLAN. It can become a proxy “querier” on behalf of the VLAN, requesting groups that are required and delivering them only to the right end devices. At the same time, it can keep track of when no devices on the segment are still interested in each group, and send a “leave” IGMP report up to the multicast router to tell it to stop forwarding this group.
Some Cisco switches implement a protocol called CGMP (Cisco Group Management Protocol), which uses the CDP (Cisco Discovery Protocol) to communicate group membership between the switch and the router. This is only relevant if you have Cisco switches and routers deployed on your network. CGMP was sort of a “stop gap” solution to some of the shortcomings of IGMP Version 1, and it should probably be avoided in favour of standard protocols now.
3. PIM
Because IGMP membership report packets are always sent with a TTL value of 1, they can’t be used to find a source on a different network segment. For that, we use PIM (Protocol Independent Multicast). The “protocol independence” here means that it can use any IP routing protocol, including static routes if need be. It doesn’t need to distribute its own routes or maintain a separate multicast routing table.
Routers take part in PIM. Each router uses its unicast routing table to discover the shortest path back to the multicast source. This is called RPF (Reverse Path Forwarding). Using RPF, each router is able to decide which interface it should receive the multicast group through. As long as all of the routers in the network agree on the unicast routing table, this will form a reliable tree structure that, like spanning tree, will be free from loops.
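The RPF decision itself is simple to illustrate. The following Python sketch, using a hypothetical routing table and interface names, mimics the check a router performs; real routers of course do this in their forwarding planes, not in application code.

```python
import ipaddress

# Hypothetical unicast routing table: prefix -> outgoing interface.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "eth0",
    ipaddress.ip_network("10.2.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"):   "eth2",   # default route
}

def rpf_interface(source_ip):
    """Longest-prefix match back toward the source: the only interface on
    which multicast traffic from that source should be accepted."""
    addr = ipaddress.ip_address(source_ip)
    best = max((net for net in ROUTES if addr in net), key=lambda net: net.prefixlen)
    return ROUTES[best]

def rpf_check(source_ip, arrival_interface):
    # Packets arriving on any other interface are dropped, keeping the tree loop-free.
    return arrival_interface == rpf_interface(source_ip)

print(rpf_check("10.1.5.9", "eth0"))   # True: matches the route back to the source
print(rpf_check("10.1.5.9", "eth1"))   # False: fails the RPF check
```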
PIM has two operating modes: dense mode and sparse mode. I don’t really recommend using dense mode except for the very smallest and simplest of multicast networks. It doesn’t use network resources as efficiently as sparse mode.
The biggest practical difference when configuring sparse mode is the requirement for a “Rendezvous Point.”
4. Rendezvous Points
The Rendezvous Point (RP) in a PIM-sparse mode network exists to help routers find the source for a multicast group. Suppose a multicast router has received a request to forward some multicast group out through a particular interface. This could have come from a directly connected device via IGMP or from a downstream router via PIM. If the router already has this multicast stream, then it can simply start forwarding it to the destination. But if it doesn’t know anything about this group, it needs to somehow find it first.
The RP is another router on the network. When the source device starts sending its multicast packets, the first router, on the same segment as the source, picks up that packet and forwards it to the RP router. This first step, which is called “registration,” is done using unicast.
The first hop router already knows the address of the RP. There are several mechanisms for this information to be learned, but the simplest and arguably the most effective and secure is to just configure it statically in each router.
Each multicast group has an RP. It’s common to have a single device act as the RP for a large number of multicast groups. The RP has two functions: it maintains a table of all of the sources for each group, and it also receives and redistributes all of the groups that it is responsible for.
When another router on the network receives a request for a multicast group that it doesn’t already know about, it sends a PIM “join-group” message to the next upstream router in the direction of the RP. That upstream router does the same, and the process continues until the chain of routers reaches all the way to the RP. The RP then starts forwarding the group along that path, and ultimately the device that made the request using IGMP starts to receive the multicast group as well.
As soon as the first multicast packets from the RP arrive at the “first hop” router, the one that received the original IGMP join request from the receiving device, that first hop router has an important new piece of information—it now knows the source IP address for the group.
Now that it knows the real source IP address, it can get the multicast data stream directly from the source, instead of from the RP. So it uses PIM joins to create a new multicast tree that goes directly to the source IP (the “source-rooted tree”). And, if the RP-rooted multicast tree uses a different interface from the source-rooted tree, then it will tear down the RP-rooted tree to avoid receiving multiple copies of the multicast packets.
There are other various adaptations to this process such as Source-Specific Multicast and Bi-directional PIM, which use other tricks to improve the efficiency in finding the source and creating the source-rooted tree, but the ultimate goal is the same.
5. Multicast in specialized networks
While the standard protocols work well for most networks, some specialized environments like satellite and wireless mesh networks require a customized approach. The long delays and broadcast nature of satellite links mean that standard multicast protocols don’t always join groups or build forwarding trees efficiently.
In a satellite network, the entire satellite cloud often operates as one big multicasting domain, with satellite modems handling replication and forwarding internally. This presents a clean multicast interface to the ground stations and terrestrial networks. Specialized protocols tuned for satellite characteristics allow quick joins and optimal forwarding trees.
When implementing multicast across network boundaries or between administrative domains, care must be taken with IP addressing and scopes. Public or inter-domain multicast should use allocated ranges like the “ad hoc” blocks assigned by IANA or the GLOP range based on a registered ASN to prevent overlap with internal use.
It may be powerful, but multicast does have security implications. Many essential network services use multicast for discovery and communication. Unexpected multicast traffic could indicate a misconfiguration or malicious activity. If this concerns you, consult your equipment documentation or a network specialist.
For analyzing multicast traffic, a network monitoring tool with a protocol analyzer function is invaluable. Capture filters like “multicast” grab all multicast packets, while specifics like “dst host 224.1.1.1” select a single group. This helps immensely in debugging or security investigation.
How to set up a multicast network
Setting up a multicast network involves several steps, including configuring and troubleshooting multicast on various network devices.
The key steps to deploying basic multicast functionality are:
1. Enable PIM on routers
The first task is activating Protocol Independent Multicast on the routers intended to forward multicast streams between different IP subnets. This might mean enabling PIM on all routers or only on a dedicated subset planned to form the multicast routing backbone.
PIM manages the distribution tree state and replicated packet flows at the IP routing level, which serves as the foundation for the overall multicast infrastructure.
2. Choose sparse or dense PIM mode
Next, decide whether each router or routing domain should run PIM sparse mode or dense mode. The mode drives overall multicast behavior in that domain. Dense mode simply floods traffic and then prunes branches, while sparse mode constructs selective distribution trees.
Often, a hybrid configuration permitting sparse behavior normally but falling back to dense flooding under specific conditions provides flexibility during initial deployment. Remember dense mode forwards streams everywhere upfront as a baseline.
3. Configure rendezvous points
If utilizing a sparse topology or sparse-dense hybrid approach, the multicast design requires configuring dedicated rendezvous point routers. RPs centralize group state information, assisting receiver discovery and facilitating efficient join signaling.
Strategic placement balances reliability with convergence speed. Standard techniques for integrating RPs range from manual static configuration to dynamic discovery through Auto-RP or similar means.
4. Enable IGMP on routers
In addition, enabling Internet Group Management Protocol functionality allows routers to parse join and leave requests originating from host machines on their directly connected subnets. IGMP must be activated for last-hop routers to learn which receivers need which multicast groups.
5. Configure IGMP snooping on switches
Where Layer 2 switching infrastructure exists, enabling IGMP snooping permits the suppression of unnecessary multicast flooding. Switches can selectively forward only to interfaces with known receivers by snooping existing router/host IGMP exchanges. Proper switch configuration is crucial for enabling IGMP snooping and managing multicast traffic efficiently at Layer 2.
Alternatively, in networks with end-to-end Cisco routers and multilayer switches deployed, Cisco Group Management Protocol (CGMP) offers a proprietary router/switch coordination mechanism resembling an early form of IGMP snooping.
How to set up a multicast application on your network
With the multicast routing infrastructure established, the final step involves developing custom applications and services that transmit and receive flows using the network’s replication and forwarding capabilities.
A few key guidelines you should consider when architecting multicast-based applications include:
Choosing a multicast group address range
Any application publishing streams through multicast requires you to reserve a dedicated multicast IP address to label the traffic with. As mentioned previously, the IPv4 multicast block spans 224.0.0.0 through 239.255.255.255. Certain segments have predefined purposes, like 224.0.0.0/24 for routing protocols.
For privately contained applications in a self-managed network environment, leverage the administratively scoped range 239.0.0.0/8. These addresses stay localized, meaning they never route over the public Internet. Their containment resembles how RFC 1918 private unicast ranges stay segmented from external networks.
However, software meant for public visibility or multi-organization deployments should obtain multicast addresses from the IANA “ad hoc” assignment blocks or generate addresses algorithmically from your registered Autonomous System Number using the GLOP guidelines in RFC 3180. This prevents administrative domain overlap.
With any programmatic multicast source, avoid hardcoding addresses. Incorporate dynamic discovery like multicast DNS (mDNS) or datastores like Consul/Zookeeper instead.
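Whichever range you settle on, it is worth validating group addresses before your application binds to them. This Python sketch checks a configured address against the administratively scoped block and the reserved link-local range; the example addresses are hypothetical.

```python
import ipaddress

ADMIN_SCOPED = ipaddress.ip_network("239.0.0.0/8")   # stays inside your own network
LINK_LOCAL = ipaddress.ip_network("224.0.0.0/24")    # reserved for routing protocols

def validate_private_group(addr_str):
    addr = ipaddress.ip_address(addr_str)
    if not addr.is_multicast:
        raise ValueError(f"{addr} is not a multicast address")
    if addr in LINK_LOCAL:
        raise ValueError(f"{addr} is reserved for local protocol use")
    if addr not in ADMIN_SCOPED:
        raise ValueError(f"{addr} is not administratively scoped; use 239.0.0.0/8")
    return addr

validate_private_group("239.20.1.5")       # fine: hypothetical private group
try:
    validate_private_group("224.0.0.5")    # reserved (local protocol use): rejected
except ValueError as err:
    print(err)
```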
Handling packet loss
Multicast traffic is delivered over best-effort IP and UDP exclusively. The absence of the connection-oriented guarantees intrinsic to TCP means applications must handle the possibility of message loss themselves.
If preservation of all packet data matters, incorporate loss recovery by implementing forward error correction overlays that reconstruct missing elements mathematically without retransmission, or design custom acknowledgment and resend mechanisms similar to TCP.
For non-critical usage like video distribution, occasional lost packets have little perceptible effect on quality, so reliability mechanisms are typically omitted for efficiency. Assess application reliability needs accordingly when designing transports.
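One simple way to detect loss, if your application needs it, is to prefix each datagram with a sequence number so receivers can spot gaps and trigger whatever recovery mechanism you choose. This is only a sketch of the framing logic; the header format and helper names are invented for illustration.

```python
import struct

HEADER = struct.Struct("!I")   # 32-bit sequence number, network byte order

def frame(seq, payload):
    """Sender side: prefix each datagram with its sequence number."""
    return HEADER.pack(seq) + payload

def receive(expected_seq, datagram):
    """Receiver side: return (payload, next expected sequence, missing numbers)."""
    seq = HEADER.unpack_from(datagram)[0]
    missing = list(range(expected_seq, seq))   # gap = packets lost in transit
    return datagram[HEADER.size:], seq + 1, missing

# Simulate a receiver that never sees packet 2.
_, nxt, miss = receive(0, frame(0, b"a"))
_, nxt, miss = receive(nxt, frame(1, b"b"))
_, nxt, miss = receive(nxt, frame(3, b"d"))
print(miss)   # [2] -> recover via an out-of-band retransmission request or FEC
```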
Preventing network congestion
While multicast optimizes simultaneous delivery to large receiver pools, scaling constraints still apply around interface capacities and replication processing burdens. Avoid bursting streams without limit.
Implement safeguards like transmission rate thresholds, traffic shaping policies, and pacing through gradual sender ramp-up while monitoring downstream buffer measurements reported via receiver feedback. This matches loads to network realities rather than solely theoretical maximum throughput.
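As one possible pacing safeguard, a sender can meter its transmissions with a token bucket so the average rate stays under a configured cap. The sketch below is illustrative only; the class name and the 2 Mbit/s figure are arbitrary assumptions, not values from any standard.

```python
import time

class TokenBucket:
    """Allow at most rate_bps bits per second on average, with bursts
    capped at burst_bits."""
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def wait_for(self, packet_bytes):
        """Block until the bucket holds enough tokens to send this packet."""
        needed = packet_bytes * 8
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= needed:
                self.tokens -= needed
                return
            time.sleep((needed - self.tokens) / self.rate)

bucket = TokenBucket(rate_bps=2_000_000, burst_bits=16_000)   # hypothetical 2 Mbit/s cap
# Before each sock.sendto(payload, (GROUP, PORT)): bucket.wait_for(len(payload))
```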
Closely evaluate end-to-end performance under simulated workloads during development cycles and staged rollouts. Multicast bottlenecks compound non-linearly as streams are duplicated across links and routers. Modeling on paper often massively overestimates capacity.
Additional implementation concerns
With applications built and addressing sorted, creating even a modest multicast deployment presents further opportunities around access controls, monitoring systems, and ancillary infrastructure integration:
- Integrate ACLs at the IGMP/PIM layers to govern group joins and route subscription requests for security and policy.
- Tap critical links with network packet brokers to extract copies of multicast flows for inspection using aggregators and analysis tools.
- Use a network traffic monitor to observe multicast flows and ensure proper distribution.
- Bridge monitoring details into platforms like Splunk or Elasticsearch for analytics using IGMP/PIM-aware adapters.
- Interoperate multicast forwarding state with SDN environments through controllers that synchronize proprietary data planes using standards like OpFlex.
- Connect registrations and distributed queries through service meshes and orchestrators for dynamically landing new data streams.
When testing multicast functionality, take a methodical approach – segregate key functions like group joins, stream transmission, router forwarding path establishment, and leave signaling into distinct components. Validate the interoperation of each individual element first in isolation before chaining them together into complete sequences.
Throughout the process, remember the importance of network documentation to keep track of your configurations and troubleshooting steps.
Let multicast revolutionize your IT architecture
Multicast networking grants immense efficiency benefits for your one-to-many communication use cases. It establishes firm footing as video distribution, IoT sensors, and programmatic infrastructure continue rapidly expanding throughout enterprise IT environments, creating heightened expectations for flexible data routing between centralized controllers and distributed endpoints.
Mastering multicast’s suite of enabling protocols—including PIM for distribution trees, IGMP for group management, and IGMP Snooping for selective stream replication—involves hands-on practice understanding the interplay between the multicast routers, switches, and host applications exchanging messages. But conquering the learning curve permits exponentially greater scalability to concurrent receivers than brute force unicast solutions alone can achieve.
To further explore multicast’s possibilities on your own network, study the integration opportunities around augmenting multicast with bandwidth reservation platforms like RSVP, adding multicast service overlays onto existing SDN solutions like VMware NSX, and leveraging service discovery ecosystems for dynamically landing new real-time data streams between publishers and subscribers.
The programmatic flexibility unlocked by marrying optimized multipoint transport protocols like multicast with software-defined infrastructure control planes stands ready to enable the next generation of responsive, high-performance edge analytics and virtualized service architectures.
Are there ways to coordinate IPv6 multicast protocols with the Layer 1 networks? Satellites and non-directional antennas tend to be broadcast links.
I should get good bandwidth efficiencies if I can send my multicast packets down these links when there are receivers at the other end. With the new emerging LEO constellations this is a question of growing interest. I assume all the devices in a multicast group would have permanent IPv6 addresses. Finally, I assume that some IGMP packets would have to have a TTL over 1; otherwise how could they be used across other network segments?
Hi Alan.
Multicast over satellite networks is an interesting topic and one that I know very little about. You are right that standard protocols like PIM don’t work well over these links. In particular, it’s hard to keep track of when ground devices leave the group. I understand that some satellite networks have implemented special-purpose protocols for handling this problem.
I don’t think I would want to run IGMP over that satellite network, though. Instead, I would connect a switch or a router to the ground station and run PIM between them. Then the entire satellite cloud would just look like a PIM forwarding domain to the networks connected to the ground stations. How many TTL hops your packets would count as they cross the satellite depends on the provider’s implementation. They might hide it all from you by encapsulating your multicast packets in tunnels.
The other big question in such a design would be how to handle the Rendezvous Points. Again, I expect that this would depend on the provider’s implementation.
been there, seen it, done it
224.0.1.145 Satcast One 1999-08-01
224.0.1.146 Satcast Two 1999-08-01
224.0.1.147 Satcast Three 1999-08-01
Does a device that wants to receive multicast packets need to set a route? e.g. route add -net 224.0.0.0 netmask 240.0.0.0 eth0?
Great question. In general, yes, the receiving device needs to know where to send its IGMP packets. So you will usually see routes in your routing table for 224.0.0.0/4 and, equivalently in IPv6, for ff00::/8. Then the receiving device will also need a route that points to the multicast source’s IP address so that it can do a reverse path validation to ensure that it is receiving it on the right interface.
If I want to send multicast data over to an external network (with multicast enabled) across a DMZ with a proxy server behind the perimeter router facing the external network, is there a best common practice for this type of use case?
end user (multicast destination) ——– external network ——–proxy server ———-internal network ——– multicast source.
Yes, when sending multicast to external networks there are a few things that you should watch out for. Just like with external unicast networking, you should be using public multicast IP ranges. There are two main ways of getting public multicast addresses. The first is to register the range with IANA from one of the “ad hoc” address blocks. The second is to take advantage of the GLOP range, which uses your registered BGP ASN to create a unique publicly routable range. Refer to RFCs 5771 and 3180 for more information on these public multicast address ranges.
Then you need to make sure that the external parties have routes to the multicast sources and rendezvous points. So, again, these should be public registered addresses, so they can be routed. In principle, you can use multicast NAT to translate private source addresses, but multicast data is often sensitive to latency and jitter. So it’s better to avoid too much packet level processing.
For fault tolerance, it is useful to have redundant connections between the networks and use a routing protocol like BGP to advertise the source and RP addresses. Generally I like to advertise these addresses across network boundaries as both /32 and some larger prefix (such as /24) so that the external network can easily manage their traffic engineering and failover using BGP inbound route filtering.
There are protocols such as MSDP for distributing RP information between networks, but I haven’t seen them used much for this purpose. It’s more common (and simpler to configure) to use static RP configuration in the routers. Fault tolerance will take advantage of the best route to the RP changing with BGP instead. I do like to use MSDP to create multiple redundant RPs inside the source network, however. This clever trick, which is called “anycast RP”, is described in RFC 4610.
I hope this helps.
Thanks for your detailed explanation! In terms of multicast data traversing a DMZ to an external network, how does the proxy server handle the multicast stream? Since the multicast data is UDP and you can’t proxy UDP as it is a connectionless protocol… Is there a best common practice for a multicast stream to traverse a DMZ network?
Generally, since multicast packets only go from source to receiver, a proxy server probably doesn’t have a useful role to play here.
Thanks for publishing this. I found it incredibly insightful, but unfortunately for different reasons than others who posted. I recently ran “ifconfig” and to my horror (and longtime suspicion) discovered 8-9 ACTIVE configured connections on my device, each of them flagged with the characteristics “UP,POINTOPOINT,RUNNING,MULTICAST,ARP”. I did not authorize or set up any of these portals. Each connection has a different name such as “awdl0”, “ipsec0”, “pdp_ip1”, “ipsec3”, “llw0”, etc.
This article unfortunately seems mostly written for people to appreciate MULTICAST NETWORKING as a super cool and interesting new thing (and it probably is!)… But it seems to maybe have grim and far-reaching security implications for people whose data is being “multicast” without their permission or against their will.
Any advice for us?? Looking for more info on 1) what the connections mean and 2) how to **not** multicast my networking!! Would deeply appreciate any insight! Thanks.
that’s right, your every move is being broadcast to the world. That’s what you get for using a mac.
…oh, wait, those are just standard services:
llw0: a WLAN low-latency interface
awdl0: Apple Wireless Direct Link used for AirDrop, Airplay, bluetooth, etc.
pdp_ip1: used for 3g & cellular data
ipsec0: possibly used for wifi calling in FaceTime (e.g., from your phone to your laptop)
in all, this reminds me of the super deadly teddybear virus that afflicted Windows
How can I capture multicast gaming packets using something like Wireshark?
Wireshark actually makes this extremely simple. When you start a capture, you can specify a capture filter. If you want to capture all multicast packets, you can just use the capture filter keyword “multicast” and start capturing. Or, if you know the multicast address used by the application that you’re interested in, you can specify that as the destination address in a capture filter such as “dst host 239.1.1.1”.
Awesome! Thank you!
In a server which has multiple clients joining a multicast group, does the server get one message and then distribute it to every client connected and joined to the group, or does the switch send a message to every connected client on the server? Thank you in advance for your response.
Generally, the server that provides a multicast data stream doesn’t know anything at all about its clients. The server just sends the multicast packets and the first hop router (the Designated Router or DR) picks up those packets and forwards them to the Rendezvous Point (RP). Client devices send their IGMP requests to join the group, and those requests are picked up by their first hop router, which builds a tree towards the RP. Then, once the packets start to flow, it rebuilds that tree towards the DR. The important things are, first, the data packets themselves play an essential role in establishing and maintaining the distribution tree, and second, the routers do all the multicast packet replication and forwarding. When the routers stop seeing packets, the routers start tearing down the distribution trees. The server just sends data packets with no awareness of what happens to them.
What could be the use case of having multiple sources for a single mcast group? If both sources are sending traffic simultaneously, will it not be duplicated at the receiver?
Hi, thanks for the very helpful article. One question: from the perspective of a receiver of a multicast message, what is the source IP? Is the source IP the multicast IP or is the source IP the original transmitter of the message?