P2PSP (Peer-to-Peer Straightforward Protocol)

The P2PSP team

June 24, 2019

Abstract

P2PSP (https://p2psp.github.io) is an application-layer protocol that provides real-time broadcasting of media streams on the Internet. Peers collaborate to disseminate the stream generated by a single source, with a controlled latency and protocol overhead. P2PSP overlays (teams) are push-based (topology-driven) dynamic meshes. Chunks of data are forwarded without explicit requests, using a set of dynamic routes that, except when chunks are lost, exhibit a maximum bounded latency defined by the overlay users.

Notation

Italics are used the first time a P2PSP-related term/concept is introduced, and for key concepts or ideas.

Introduction

P2PSP has a modular design organized in sets of rules, where each module is specialized in implementing a different functionality.

Contents

1 LBS (Load Balancing Set)
2 DBS (Data Broadcasting Set)
 2.1 Team definition and types of peers
 2.2 Feeding the team
 2.3 Joining a team
 2.4 Buffering chunks
 2.5 Chunk flooding
 2.6 Routes discovery and topology optimization
 2.7 Leaving a team
 2.8 Free-riding control at the splitter
3 FCS (Free-riding Control Set)
4 IMS (Ip Multicast Set)
5 TAS (Topology Adaptation Set)
6 MRS (Massively-lost chunk Recovery Set)
7 ACS (Adaptive Capacity Set)
8 NTS (NAT Traversal Set)
 8.1 Traffic filtering
 8.2 Cone VS symmetric
 8.3 Port allocation
 8.4 NAT type analysis
 8.5 (Theoretical) NAT traversal performance of DBS
 8.6 A port prediction algorithm (Max’s proposal)
9 MCS (Multi-Channel Set)
10 CIS (Content Integrity Set)

1 LBS (Load Balancing Set)

Not implemented.



Figure 1: A possible data-flow in a hybrid CDN (Icecast)/P2PSP network. S represents a splitter and Pi the i-th peer.

P2PSP assumes that there is a collection of channels that are broadcast in parallel. The channels are available at one or more streaming servers, and each channel has a different URL (Uniform Resource Locator), usually expressed as a Web address with the structure:

  http://server/mount_point

Notice that a server can be serving several channels.

P2PSP does not perform data-flow control over the stream. The transmission bit-rate between P2PSP entities is controlled by the servers (Icecast servers, for example), which provide the stream to the P2PSP teams. Fig. 1 shows an example of a streaming overlay where several servers relay a set of channels generated by a set of source-clients, directly or through other servers. As can be seen, a listener (which usually plays the stream) can be replaced by a splitter, a P2PSP entity that sends the received stream (a single channel) to a set of P2PSP peers.

In a pure CDN system, users request the channels directly from the servers. Unfortunately, this simple procedure has a drawback: normally, users know neither the load of nor the distance to the servers. This problem can be solved by using a load balancer. The listeners, which know the URL of the required channel, connect first to a load balancer, which redirects them (with an HTTP 302 code) to a suitable server.

This idea can be extended to minimize the response time of hybrid CDN/P2PSP structures. When a user (who knows a URL of the channel) runs a local peer, he provides to his peer the URL of the channel (the URL pointing to a server and a mount point). Then, the peer (as any other listener does) contacts a load balancer, which in this case sends a list of splitters that are broadcasting the channel. Then, the peer tries to connect with all the splitters in parallel, and the first established connection determines the selected splitter (the rest of the connections are closed). If only those splitters with space in their teams answer the peer, this procedure should select the "nearest" splitter for the peer in terms of response time.
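
As an illustration of this selection step, the following Python sketch (not the reference implementation; the endpoints and helper names are hypothetical) connects to a list of splitters in parallel and keeps the first established connection:

  # Minimal sketch: connect to all the splitters returned by the load
  # balancer in parallel and keep the first established TCP connection.
  import socket
  from concurrent.futures import ThreadPoolExecutor, as_completed

  def connect(endpoint, timeout=3.0):
      # endpoint is an (ip, port) tuple announced by the load balancer
      return socket.create_connection(endpoint, timeout=timeout)

  def select_splitter(splitters):
      if not splitters:
          return None, None
      with ThreadPoolExecutor(max_workers=len(splitters)) as pool:
          futures = {pool.submit(connect, s): s for s in splitters}
          for future in as_completed(futures):
              try:
                  sock = future.result()
              except OSError:
                  continue                      # this splitter did not answer
              # First established connection wins; discard the others.
              for other in futures:
                  if other is not future:
                      other.cancel()
              return futures[future], sock
      return None, None

  # Example (hypothetical endpoints):
  # splitter, sock = select_splitter([("10.0.0.1", 4552), ("10.0.0.2", 4552)])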

The procedure for incorporating new splitters into the network is similar. A new splitter (which is instantiated knowing a URL of a channel) contacts the load balancer, which returns a list of servers and peers that are serving the channel. Then, the splitter tries to connect with all of them in parallel, and the first successful connection is finally selected.

Using the idea of the extended load balancer, when a player (listener) connects to it, if there is a local peer running in the same host or the same private network as the player, the balancer will redirect the player to the local peer.

Finally, it is compulsory that all the splitters associated with the same channel generate exactly the same chunks (content and header). See Section 9 for more information.

2 DBS (Data Broadcasting Set)


Parameter   Meaning

N           Maximum number of peers in a team
C           Chunk size
B           Buffer size, in chunks, at the peers
B           Size of the set of the last B peers served by the splitter
L           Maximum allowed number of lost chunks
M           Number of monitors
R           Number of rounds used to compute L

Variable    Meaning

N           Number of peers in the team
tC          Chunk time
tR          Round time
tB          Buffering time
L           Physical network latency
T           Latency experienced by the end-user

Table 1: Nomenclature used in DBS.

DBS provides ALM [2] of a media stream in unicast environments [5]. First, the media is sent by a streaming server and received by a splitter (see Sec. 1). Then, the splitter divides the stream into a sequence of chunks of data and relays them to its team using a round-robin scheme. A team is composed of peers; each peer gathers the chunks from the splitter and from the rest of the peers of the team, and sends them to at least one player.

2.1 Team definition and types of peers

A team is a set of one or more peers (referenced by their end-points) that share the same stream. By definition, in a team of size one (the corresponding splitter is considered out of the team it feeds), the only peer is known as a monitor peer, and in a team with more than one peer, at least one of them must be a monitor. Monitors are instantiated by the overlay administrator to monitor different aspects of the broadcasting, such as the expected quality of the rendered video at the peers or the expected average end-user latency.

The number of peers (normal peers and monitors) in a team has a maximum N (see Tab. 1). This parameter has an impact on the latency of the protocol (see Sec. 2.4) and is usually defined by the administrator of the overlay.

2.2 Feeding the team

The splitter divides the stream into chunks of constant length C (chunk size), and sends each chunk exclusively to a different origin peer, using a round-robin scheme. Chunks are numbered to distinguish them, and this information is transmitted as part of a chunk header.

We define a round as the process of transmitting N different chunks from the splitter to a team of N peers. Therefore, for a team of size N, the round time is tR = N tC. Notice that tR is generally variable, and depends on the current number of peers in the team (N) and on the chunk time (tC), which in turn depends on the chunk size (C) and the average bit-rate of the media stream.
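
For concreteness, a small sketch (with made-up values; C, the bit-rate and N are assumptions) of how tC and tR follow from the definitions above:

  # Illustration with made-up values: chunk time and round time in DBS.
  C = 1024 * 8          # chunk size in bits (1024 bytes, an assumption)
  bit_rate = 500_000    # average stream bit-rate in bits/second (assumption)
  N = 10                # current number of peers in the team (assumption)

  t_C = C / bit_rate    # chunk time in seconds (~0.0164 s)
  t_R = N * t_C         # round time in seconds (~0.164 s)
  print(f"t_C = {t_C:.4f} s, t_R = {t_R:.4f} s")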

In DBS, each peer of the team is the origin of a different chunk in each round.

2.3 Joining a team

After connecting with a splitter, an incoming peer requests from the splitter (using a reliable communication) the current set of peers in the team. To minimize the joining time, the peer sends a [hello] message to each other peer of the team, in parallel with the reception of the set. When a peer of the team receives a [hello], it adds the sender of the message to a table of peers called forward[] (see forward[] in peer.py). If a peer Pi has an entry forward[Pj] = Pk, then each chunk received by Pi that originated at Pj will be forwarded to Pk. When an incoming peer Pi has received the set of peers, its forwarding table has been initialized to forward[Pi] = team ∖ {Pi} (i.e., the rest of the team). Notice that, as long as the forwarding table contains this information, all chunks received from the splitter will be forwarded to the rest of the team. So, in the absence of communication constraints, the team will be organized as a fully connected overlay (see Fig. 2a).
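
A minimal sketch of the forwarding-table bookkeeping described above (simplified with respect to peer.py; the class, names and endpoints are illustrative only):

  # Illustrative sketch of the joining bookkeeping (not the code in peer.py).
  class PeerState:
      def __init__(self, my_endpoint, team):
          self.me = my_endpoint
          # Chunks whose origin is myself are forwarded to the rest of the team.
          self.forward = {self.me: [p for p in team if p != self.me]}

      def on_hello(self, sender):
          # A newcomer announced itself: forward my own chunks to it too.
          self.forward.setdefault(self.me, [])
          if sender not in self.forward[self.me]:
              self.forward[self.me].append(sender)

  # Example: a peer joins a team of three peers (hypothetical endpoints).
  team = [("10.0.0.1", 4553), ("10.0.0.2", 4553), ("10.0.0.3", 4553)]
  p = PeerState(("10.0.0.3", 4553), team)
  p.on_hello(("10.0.0.4", 4553))   # a later newcomer says [hello]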

The splitter, in an infinite loop: (1) listens for incoming peers, (2) sends to them the set of peers of the team, and (3) adds the incoming peer to the set. Notice that only those peers that are in the set of peers of the splitter are considered to be in the team served by that splitter.

Note: See destination_of_chunk[] in peer_dbs.py.

2.4 Buffering chunks

In order to hide the jitter generated by the physical network and the protocol itself, peers need to store the received chunks in a buffer for a period of time before sending them to a player. A chunk with number x is inserted in the position (x mod 2B) of the buffer, where B is the maximum number of chunks that the buffer can store. During a peer's life, B is a constant specified by the user, but it is not compulsory that all peers of a team use the same buffer size. The larger the buffer size, the higher the buffering delay, but also the lower the probability of losing chunks.

The buffer is implemented as a circular queue of 2B chunks, which is filled up to only B chunks during the buffering time tB (which is the main part of the start-up time that the users experience). Chunks with a higher number (newer chunks) are inserted at the head of the buffer. The (received) chunk pointed to by the tail of the buffer is sent to the player. This action is carried out each time a new chunk is received. Empty cells in the buffer (caused by chunks that have not been received on time) are skipped until the next cell with content is found.
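
A simplified sketch of this circular buffer (insertion at position x mod 2B and play-out from the tail); it ignores the buffering phase and other details of the real peer:

  # Simplified circular buffer of 2B cells (not the peer's actual buffer code).
  class ChunkBuffer:
      def __init__(self, B):
          self.B = B
          self.cells = [None] * (2 * B)   # None means "empty cell"
          self.tail = 0                   # number of the next chunk to play

      def insert(self, number, payload):
          self.cells[number % (2 * self.B)] = (number, payload)

      def next_to_play(self):
          # Skip empty cells (lost or late chunks) until content is found.
          for _ in range(2 * self.B):
              cell = self.cells[self.tail % (2 * self.B)]
              if cell is not None:
                  number, payload = cell
                  self.cells[self.tail % (2 * self.B)] = None
                  self.tail = number + 1
                  return payload
              self.tail += 1
          return None   # nothing playable yet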

tB determines how long the peers must wait before starting to play the chunks. In general, tB should be as small as possible, and to achieve this we can reduce tC and B. Unfortunately, these reductions generate other drawbacks. On the one hand, the overhead of the header of the transport protocol is inversely proportional to tC, and therefore tC should be selected large enough to keep this overhead under control. On the other hand, if B is too small (for example, if B < N) the peer will not have enough space to buffer all the chunks of a round and, because the probability of receiving all the chunks in order is very small, some chunks will overwrite others before they can be played. This problem can also happen even if N ≤ B < 2N, because the maximum jitter (generated by DBS) that a chunk can experience at a given peer is the sum of the maximum jitter produced by the splitter for this peer, which can be N, and the maximum jitter produced by the team, N, in the case of a fully connected mesh such as the one shown in Fig. 2a. Therefore, users should select B ≥ 2N.

For a given B, DBS peers may buffer a different number of chunks, depending on the order in which chunks are received. If x1 is the (number of the) first received chunk (the first chunk to be played), the buffering time finishes when a chunk with a number equal to or greater than x1+B is received. Let's analyze some interesting cases.

Let's suppose that the first received chunk is x1 and that the rest of the chunks of the buffer of size B are received, the chunk x1+B being the last one (this is the ideal scenario). In this case, the stream can be played without artifacts. Because the playing of the chunks starts after the buffering process, the end-latency experienced by users in the ideal case would be T = B tC + L, where L is the latency generated by the physical transmission media.

Let's imagine now one of the worst possible scenarios, in which after receiving x1 the chunk x1+B is received. In this case, the chunks x1+1, …, x1+B−1 have been lost (or delayed too much) by the physical transmission media or the transmission protocol, but again (and considering tC constant), the buffering time is tB = B tC because the chunk x1+B was generated B chunk times after x1. Therefore, in this case the end-latency is also T = B tC + L.

After considering these two extreme situations, we can conclude that the end-latency does not depend on the chunk loss ratio during the buffering time (as long as this ratio is smaller than one), but only on B, tC and L.
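
For example, continuing with the made-up numbers used earlier (all values are assumptions), T = B tC + L can be evaluated directly:

  # Worked example of the end-latency with assumed values.
  t_C = 0.0164   # chunk time in seconds (assumption)
  B = 20         # buffer size in chunks, B >= 2N with N = 10 (assumption)
  L = 0.050      # physical network latency in seconds (assumption)

  T = B * t_C + L    # end-latency experienced by the user
  print(f"T = {T:.3f} s")   # ~0.378 s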

2.5 Chunk flooding

DBS implements a push-based protocol. When a peer receives a chunk, the chunk can be retransmitted to a large number of neighbors (depending on the number of different destinations in its forwarding table). Therefore, even if the chunk rate is controlled at the servers, some kind of flow control must be performed in order to reduce network congestion while peers perform the flooding.


(a) A fully connected overlay.

(b) A star-shaped overlay.

Figure 2: In a fully connected DBS team (see Subfig. (a)), all peers receive and send the same number of chunks. In a star-shaped DBS team (Subfig. (b)), P1 should send all the chunks of the stream to the rest of the team, except those that the splitter has sent directly to the rest of the peers.

The congestion (in particular, the congestion caused by how DBS nodes use the physical links) may be avoided by means of a basic idea: only if I have received a chunk do I send a chunk (not necessarily to the sender, nor the same chunk). It is easy to see that, in a fully connected overlay (Fig. 2a), this allows the data flow to be controlled. However, in more realistic scenarios (such as those in which firewalls and symmetric NATs are used), where the physical media imposes interconnection constraints, peers cannot be "connected" with the rest of the team, and therefore, if the splitter follows a pure round-robin strategy, some peers can send more chunks than they receive (Fig. 2b). In these scenarios, the simple rule of sending a chunk for each received one does not work.

The previous idea can be adapted to handle a variable connectivity degree (also called neighborhood degree) if each peer uses a table of sets, pending[], indexed by the neighbors' end-points, where each set indicates the positions in the buffer of those chunks that must be transmitted to the corresponding neighbor the next time such a neighbor is selected in the flooding process. For example, if pending[Px] = {11, 22}, the chunks found at positions 11 and 22 of the buffer have to be sent to peer Px.

Notice that, using this procedure, more than one chunk can be sent to a neighbor in a transmission burst, which could congest the switching devices. However, except in very unbalanced overlays (Fig. 2b), the bursts are very short on average (only one chunk in most cases). As an advantage, if a burst is produced, all the chunks of the burst travel between the same two hosts, which usually increases the performance of the physical routing. In this case, chunks can be grouped in one packet, reducing the protocol overhead.
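
A compact sketch of this flooding rule (one neighbor is served per received chunk, and its pending set is flushed); it is only illustrative and omits the buffer and socket handling of the real peer:

  # Illustrative flooding with forward[] and pending[] (not the real peer code).
  from collections import defaultdict

  forward = defaultdict(list)    # origin end-point -> list of neighbor end-points
  pending = defaultdict(set)     # neighbor end-point -> buffer positions to send
  neighbors = []                 # round-robin list of neighbors
  next_neighbor = 0

  def on_chunk_received(chunk_number, origin, B, send):
      global next_neighbor
      position = chunk_number % (2 * B)
      # Schedule the chunk for every neighbor interested in this origin.
      for neighbor in forward[origin]:
          pending[neighbor].add(position)
      if not neighbors:
          return
      # "Only if I have received a chunk, I send": serve one neighbor per reception.
      neighbor = neighbors[next_neighbor % len(neighbors)]
      next_neighbor += 1
      for position in sorted(pending.pop(neighbor, ())):
          send(neighbor, position)    # send() is a placeholder for the UDP send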

An example of the temporal evolution of a team using this behavior is depicted in Figures 3, 4 and 5.



Figure 3: A team has been created with a single monitor M0 ([hello] messages are not shown). Chunks with indexes 0 and 1 (the time t is measured in chunk-times) have been transmitted from the splitter S to M0. f and p represent the forward[] and the pending[] structures, respectively. The chunks stored in the buffer are shown below the entity.



Figure 4: At t2, peer P1 joins the team (the [hello]'s are not shown). In M0, f = {M0 : [P1]} because when M0 receives the [hello] from P1, M0 is the origin peer for all chunks received from S and P1 is its neighbor. P1 includes an entry P1 : [M0] in its forwarding table because M0 is in the set of peers received from the splitter. After that, when chunk number 2 arrives at M0 from S, an entry P1 : 2 is created in p{} for that chunk, and this entry is deleted when chunk 2 is sent to P1.



Figure 5: P2 joins the team. M0 and P1 include P2 in their forwarding tables. Chunk 3 is received by P1, which decides to send it to P2. Chunk 3 remains pending to be sent to M0 (it will be sent when the next chunk is received by P1).



Figure 6: Chunk 4 is received by P2, which relays it to P1, which in turn relays chunk 3 to M0.



Figure 7: Chunk 5 is received by M0, which relays it to P2.


2.6 Routes discovery and topology optimization

Chunks can be lost under bandwidth and buffering-time constraints. A chunk is considered lost when it is time to send it to the player and the chunk has not been received. In this situation, for each lost chunk, the peer sends a [request lost_chunk_number] message to a peer of the team, selected at random among the rest of the team. When a peer Px receives a [request lost_chunk_number] from Py, Px adds Py to forward[Po], where Po is the origin peer of the chunk stored in the position (lost_chunk_number mod 2B) of its buffer.

In this situation, it can happen that some peers request redundant routes between an origin peer and themselves, and therefore some chunks could be received more than once. In this case, for each duplicate chunk, a peer Pi should send a [prune duplicate_chunk_index] message to those neighbors that have sent it the duplicate chunk (notice that the fastest neighbor to send the chunk will not receive such a pruning message). Neighbors receiving this message from peer Pi should remove Pi from forward[Po], where Po is the origin peer of the duplicate chunk.

As a consequence of these rules, the neighborhood degree of the peers can decrease or increase. A decrement is produced if the [request lost_chunk_number] is sent to a neighbor peer and a duplicate is received from another neighbor. An increment is produced if the [request lost_chunk_number] is sent to a peer that is not a neighbor and a duplicate is received from a neighbor that still keeps sending chunks from at least one different origin.

As peers select the destinations of the [request lost_chunk_number] messages at random among the team, the list of known peers of the team should be kept up to date. This list is populated in the first instance when peers join a team. Peers also add to this list the sender of a [hello] or of a chunk. Unfortunately, these rules do not guarantee that all peers know the end-points of the rest of the team, because all the [hello]'s and the chunks sent to an already incorporated peer can be lost. To solve this, peers also add to the list the origin peers of all the received chunks, except for those received from the splitter.
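
The following sketch summarizes the [request]/[prune] route maintenance described above (illustrative only; the buffer representation and helper names are assumptions):

  # Illustrative handling of [request] and [prune] messages (not the real code).
  def on_request(requester, lost_chunk_number, buffer, forward, B):
      # The requester wants to receive the chunks of this chunk's origin.
      cell = buffer[lost_chunk_number % (2 * B)]
      if cell is None:
          return                       # we do not have that chunk either
      origin = cell["origin"]
      forward.setdefault(origin, [])
      if requester not in forward[origin]:
          forward[origin].append(requester)

  def on_prune(pruner, duplicate_chunk_number, buffer, forward, B):
      # The pruner already receives this origin's chunks through a faster route.
      cell = buffer[duplicate_chunk_number % (2 * B)]
      if cell is None:
          return
      origin = cell["origin"]
      if pruner in forward.get(origin, []):
          forward[origin].remove(pruner)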

2.7 Leaving a team

An outgoing peer must: (1) say [goodbye] to the splitter and the neighbor peers (in this order), (2) relay any pending (received but not yet sent) chunks, and (3) wait for a [goodbye] from the splitter. In case of a timeout, the leaving procedure is retried a number of times.

When a peer of the team receives a [goodbye], it removes the sender from its forward[] table. The splitter removes the outgoing peer from the set of peers as soon as the [goodbye] is received.

2.8 Free-riding control at the splitter

The splitter remembers which chunk, out of the list of the last B transmitted chunks, was sent to each peer of the team. Notice that, in order to remember the chunk that was sent to each peer in each round, it must hold that B ≥ N. See destination_of_chunk[] in splitter_dbs.py.

Monitor peers (which are trusted peers) complain to their splitter with a [lost lost_chunk_number] message for each lost chunk. The splitter only considers this type of message if it comes from a monitor.
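
A sketch of the splitter-side accounting that these two paragraphs describe (illustrative only; as the note below points out, the actual implementation differs):

  # Illustrative splitter-side loss accounting (the real splitter uses a timer).
  destination_of_chunk = {}   # chunk_number % B -> peer that received the chunk
  losses = {}                 # peer -> number of reported lost chunks

  def on_chunk_sent(chunk_number, peer, B):
      destination_of_chunk[chunk_number % B] = peer

  def on_lost(sender, lost_chunk_number, monitors, team, B, L_max):
      if sender not in monitors:
          return                                  # only monitors are trusted
      peer = destination_of_chunk.get(lost_chunk_number % B)
      if peer is None:
          return
      losses[peer] = losses.get(peer, 0) + 1
      if losses[peer] > L_max and peer in team and peer not in monitors:
          team.remove(peer)                       # expel the unsupportive peer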

Note: This last functionality has not been implemented, at least, as it has been explained here. The forget() thread is controlled by a timer, not by a counter of rounds.

3 FCS (Free-riding Control Set)

DBS does not impose any control over the degree of solidarity of the peers. This means that selfish peers (or simply peers with reduced connectivity) can stay in the team thanks to the generosity of the rest of the peers, even if they never manage to deliver a chunk to any peer of the team. This set of rules precludes this behavior by imposing a minimum degree of solidarity between neighbor peers.

To track the level of solidarity between neighbor peers, each peer uses a table of chunk debts, debt[]. Every time a peer Pi sends a chunk to Pj, Pi increments debt[Pj], and, conversely, decrements debt[Pj] when Pi receives a chunk from Pj.

Peers forward chunks to their neighbors in the order in which the entries of pending[] are accessed (a round-robin scheduling in DBS). Considering this, FCS modifies this behavior as follows:

  1. To go through pending[] in the order provided by debt[], selecting first those entries with smaller debts.
  2. To reset the traversal of pending[] in each round (when a chunk is received from the splitter).
  3. If Pi realizes that debt[Pj] > L, Pi removes Pj from forward[Pk] for all Pk ∈ team ∖ {Pi}, and from pending[]. Notice that this action decreases the neighborhood degree of Pi and, sooner or later, that of Pj, which will consider Pi as unsupportive.
  4. In DBS, request messages are sent selecting the destination peer at random. In FCS, request messages are sent to those peers with a higher debt. Thus, if the insolidarity is produced by an overlay topology imbalance (an extreme example is shown in Fig. 2b), badly connected peers could have the chance of mitigating this problem by forwarding more chunks to their neighbors.

Using FCS, supportive peers will be served first, increasing the QoE of the corresponding peers. On the other hand, those peers with a higher chunk debt will tend to be unserved if not enough bandwidth is available.

Note: The prioritized round-robin neighbor selection has not yet been implemented as it has been explained here. The debt[] structure exists, but it is used for a different purpose.
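
For illustration only (and, per the note above, not matching the current implementation), a sketch of the debt bookkeeping and of the debt-ordered traversal of pending[]:

  # Illustrative FCS bookkeeping: debts and debt-ordered flooding (a sketch).
  debt = {}       # neighbor -> chunk debt
  pending = {}    # neighbor -> set of buffer positions to send
  L_max = 32      # maximum tolerated debt (an assumed value)

  def on_chunk_sent(neighbor):
      debt[neighbor] = debt.get(neighbor, 0) + 1

  def on_chunk_received(neighbor):
      debt[neighbor] = debt.get(neighbor, 0) - 1

  def neighbors_in_service_order(forward):
      # Serve the most supportive neighbors (smallest debt) first,
      # and drop those whose debt exceeds L_max.
      for neighbor in sorted(pending, key=lambda n: debt.get(n, 0)):
          if debt.get(neighbor, 0) > L_max:
              pending.pop(neighbor, None)
              for origin in forward:
                  if neighbor in forward[origin]:
                      forward[origin].remove(neighbor)
              continue
          yield neighbor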

4 IMS (Ip Multicast Set)

IP multicast (IPM) is available by default in LANs (Local Area Networks) and VLANs (Virtual LANs) [6], but not on the Internet [4]. IMS runs on top of DBS and provides efficient native IPM, where available.

All peers in the same LAN or VLAN have the same network address. When a joining peer Pi receives the list of peers from its splitter, it first checks whether there are neighbors in the same subnet. For all those peers, Pi uses the IP address 224.0.0.1 (all systems on this subnet) and the (default) port 1234 to multicast (only) the chunks received from the splitter. Therefore, all peers in the same local network communicate using this multicast group address and port. The rest of the (external) peers will be referenced using their public end-points.
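
A minimal sketch of how a peer could send and receive chunks on this multicast group using standard socket options (chunk framing and the rest of the peer logic are omitted):

  # Minimal IPM send/receive sketch for IMS (chunk framing omitted).
  import socket
  import struct

  MCAST_GRP, MCAST_PORT = "224.0.0.1", 1234

  def multicast_sender():
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay in the LAN
      return sock   # use sock.sendto(chunk, (MCAST_GRP, MCAST_PORT))

  def multicast_receiver():
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      sock.bind(("", MCAST_PORT))
      membership = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
      return sock   # use sock.recvfrom(...) to get chunks from the group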

5 TAS (Topology Adaptation Set)

In TAS, the splitter requests from each peer of the team its list of neighbors (the peers that send chunks to it directly, in one hop). This communication is reliable (TCP) and transmits the lists as collections of end-points. The number of requests per round is limited by the available bandwidth in the overlay and by the request ratio defined at the splitter. Obviously, the higher the ratio, the more accurate the description of the real connectivity of the overlay that will be obtained.

After knowing the connectivity degree of each peer, the splitter can adapt the round-robin scheduling of the origin peers by sending to each origin peer a number of chunks proportional to the inverse of its degree.
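
As a rough sketch of such a degree-weighted selection (a randomized stand-in for the adapted round-robin; the exact weighting used by the splitter is an assumption):

  # Illustrative degree-weighted selection of the next origin peer (a sketch).
  import random

  def pick_origin(degrees):
      # degrees: peer -> neighborhood degree reported through TAS (>= 1)
      weights = [(peer, 1.0 / degree) for peer, degree in degrees.items()]
      total = sum(w for _, w in weights)
      r = random.uniform(0, total)
      acc = 0.0
      for peer, weight in weights:
          acc += weight
          if r <= acc:
              return peer
      return weights[-1][0]   # fallback for floating-point rounding

  # Example: a peer with degree 1 is chosen, on average, twice as often
  # as a peer with degree 2.
  # origin = pick_origin({"P1": 1, "P2": 2, "P3": 4})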

6 MRS (Massively-lost chunk Recovery Set)

MRS extends DBS (or an extension of it) to retransmit massively-lost chunks. MRS should be implemented if error-prone communications are expected, especially if these channels are used by the splitter. MRS is based on the use of monitors (see Sec. 2.8). The idea is: the splitter will resend lost chunks to one or more of the monitors when all monitors report their loss. To increase the probability that normal peers receive the resent chunk on time, monitors halve the number of chunks in their buffers with respect to common peers. Notice that MRS only modifies the behavior of the splitters and the monitors (normal peers do not need to implement MRS or its extensions).

7 ACS (Adaptive Capacity Set)

ACS relaxes the peers' upload requirements imposed by DBS. It should be used if it is known that some peers can provide capacity that others cannot, or when we want to mix the CS and P2P models, sending more chunks from the splitter to one or more monitors controlled by the contents provider.

ACS is based on the idea of using the information that the splitter has about the number of chunks that each peer has lost (see Sec. 2.8) to send to the more reliable peers a higher number of chunks than to the others. In other words, ACS adapts the round-time of each peer to its capacity.

Notice that ACS only affects the behavior of the splitter.

8 NTS (NAT Traversal Set)

Most peers run inside "private" networks, i.e., behind NAT devices. NTS is a DBS extension which provides peer connectivity for some NAT configurations in which DBS cannot provide direct peer communication.

Peers behind the same NAT use the same external (also called "public", because in most cases there are no nested NAT configurations) IP address of the NAT. Basically, there exist two different types of NATs: (1) cone, and (2) symmetric. At the same time, NATs can implement different filtering strategies for the packets that come from the external side: (a) no filtering, (b) source IP address filtering, and (c) source end-point filtering. Finally, NATs can use several port allocation algorithms, among which the most frequent are: (i) port preservation and (ii) random port. Notice that in this discussion only UDP transmissions will be considered.

8.1 Traffic filtering

Let's suppose a team in which, for the sake of simplicity, there is only one external (public) peer Pe, and that a new internal (private) peer Pi has sent the sequence of [hello]'s (see Sec. 2.3). Let's denote Pi's NAT as A. When no filtering is used at all, A forwards to Pi any external packet that arrives at it (obviously, if it was sent to the entry in A's translation table that was created during the transmission of the sequence of [hello]'s), independently of the source end-point of the packet. In the case of source (IP) address filtering, A will forward the packets only if they come from Pe's host. When source end-point filtering is used, A also checks the source port, i.e., that the packets were originated at Pe's end-point.



Figure 8: Cone NAT port allocation.



Figure 9: Symmetric NAT port allocation.

8.2 Cone VS symmetric

Cone NATs use the same external end-point for every packet that comes from the same internal end-point, independently of the destination of the packets (see Fig. 8). For the external peer Pe, the situation is identical to the case in which the NATed peer Pi were running in a public host.

Symmetric NATs use different external end-points for different packets that come from the same internal end-point, when these packets have different destination end-points (see Fig. 9). Thus, two different external peers will see two different public end-points of Pi.

8.3 Port allocation

In the case of port preservation, if X:Y is the private end-point (IP address:port) of a UDP packet, the NAT will use the public port Y, if available (notice that Y could have been assigned to a previous communication). If Y is unavailable, the NAT will usually assign the closest free port (this is called "sequential port allocation"), usually by increasing the port value, although this behavior has not been standardized at all.

When random port allocation is implemented, the public port is assigned at random. Notice that, even in SN-PPA configurations, in most real situations (where peers must compete for the same NAT resources with the rest of the processes that use the network), some degree of randomization should always be expected during the port assignment.

8.4 NAT type analysis

An incoming peer Pi can determine its NAT behavior using the following steps:

  1. Let {A0, A1, …, AM} be the public ports used by peer Pi, whose NAT is A, to send the [hello] UDP packets towards the splitter S and the M monitor peers of the team, in this order. This data is known by Pi after receiving the acknowledgment of each [hello]. Compute

    Δk = Ak − Ak−1,   (1)

    for k = 1, 2, …, M, the port distances gathered by Pi.

  2. Determine the port step

    Δ = 0 if Δk = 0 for all k; Δ = GCD(Δ1, …, ΔM) otherwise,   (2)

    where GCD is the Greatest Common Divisor operator.

  3. If Δ = 0 (A is using the same external port for communicating Pi with the rest of the peers of the team), then Pi is behind a cone NAT. Notice that public (not NATed) peers will also be considered as being behind this type of NAT.
  4. If Δ > 0 (A is using a different external port for each external peer), then Pi is behind a symmetric NAT. In this case:
    1. If

      Δ1 = Δ2 = ⋯ = ΔM,   (3)

      then A is using sequential port allocation.

    2. If

      limM→∞ GCD(Δ1, …, ΔM) = 1,   (4)

      then A is using random port allocation.
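
A small sketch of this classification (Eqs. (1) and (2)), using Python's math.gcd; the measured port lists in the examples are made up:

  # Sketch of the NAT-type analysis from the measured public ports.
  from math import gcd
  from functools import reduce

  def port_step(public_ports):
      # public_ports = [A0, A1, ..., AM], as seen by S and the M monitors.
      deltas = [b - a for a, b in zip(public_ports, public_ports[1:])]   # Eq. (1)
      if all(d == 0 for d in deltas):
          return 0                                   # cone NAT (or public peer)
      return reduce(gcd, (abs(d) for d in deltas))   # Eq. (2)

  # Examples (made-up measurements):
  print(port_step([4553, 4553, 4553]))    # 0 -> cone NAT
  print(port_step([4553, 4555, 4557]))    # 2 -> symmetric NAT, sequential allocation
  print(port_step([4553, 17211, 30928]))  # 1 -> symmetric NAT, random allocation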

8.5 (Theoretical) NAT traversal performance of DBS


Peer1 \ Peer2   CN    CN-AF   CN-EF   SN-PPA   SN-RPA

CN              DBS   DBS     DBS     DBS      DBS
CN-AF           DBS   DBS     DBS     NTS      -
CN-EF           DBS   DBS     DBS     NTS      -
SN-PPA          DBS   NTS     NTS     NTS      -
SN-RPA          DBS   -       -       -        -

Table 2: NAT traversal success for typical NAT combinations. CN stands for Cone NAT without packet filtering (also known as "full cone NAT"). CN-AF stands for Cone NAT with source Address Filtering (also known as "restricted cone NAT"). CN-EF stands for Cone NAT with source End-point Filtering (also known as "port restricted cone NAT"). SN-PPA stands for Symmetric NAT with Port Preservation Allocation (no packet filtering considered). SN-RPA stands for Symmetric NAT with Random Port Allocation (no packet filtering considered).



Figure 10: An example that shows how it is possible to establish a connection with DBS when two peers P1 and P2 are behind cone NATs.



Figure 11: An example that shows why it is impossible to establish a connection with DBS when two peers P1 and P2 are behind symmetric NATs.



Figure 12: Timeline of an (ideal) NTS interaction between two peers P1 and P2 which are behind symmetric NATs.

Table 2 shows the theoretical traversal success of DBS (or an extension of it) for different NAT type combinations. Peer1 represents a peer that has already joined the team, and Peer2 an incoming peer. The entries labeled "DBS" are those that will be handled by DBS out of the box. An explanation of why the DBS handshake works for such configurations is shown in Fig. 10. Notice that source end-point filtering has been used in this example, although similar results should be obtained for simple source address filtering. On the other hand, the combinations labeled "-" or "NTS" will not work with DBS (see Fig. 11). In fact, only the "NTS" entries should work, in general, with NTS, depending on the port prediction algorithm and the number of tries.

Fig. 12 shows an example of an NTS (NAT traversal) success, depicting the sequence of events that happen when the new NATed peers, P1 and P2, arrive at the team.

Summarizing, NTS can provide connectivity for those peers that are behind port-preservation symmetric NATs with sequential port allocation.

8.6 A port prediction algorithm (Max’s proposal)

When both peers, Peer1 and Peer2, are behind symmetric NATs, each must predict the port that the NAT of the other peer will use to send packets towards it. Obviously, this prediction must also be performed by each already incorporated peer that is behind a symmetric NAT.

The list of predicted ports Z that a peer Px computes is determined by:

Z = A0 + x + {s ∈ {0, 1, …, N/2 − 1}};  Z += A0 + (x + {s ∈ {0, 1, …, N − 1}}) Δ,   (5)

where "+=" denotes the concatenation of lists, N is the number of guessed ports, A0 is the first external port (the port used to communicate with S) assigned to the incoming peer, and Δ is the (maximum) port step measured for the incoming peer's NAT.
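
A sketch of this port guessing, following Eq. (5) literally (the interpretation of x as an index associated with the incoming peer, and the example values, are assumptions):

  # Sketch of the predicted-ports list Z of Eq. (5); the meaning of x is assumed.
  def predicted_ports(A0, x, step, N):
      Z = [A0 + x + s for s in range(N // 2)]         # first guesses, without the step
      Z += [A0 + (x + s) * step for s in range(N)]    # guesses scaled by the port step
      return Z

  # Example (made-up values): A0 = 4553, x = 1, measured port step = 2, N = 4.
  # predicted_ports(4553, 1, 2, 4) -> [4554, 4555, 4555, 4557, 4559, 4561]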

9 MCS (Multi-Channel Set)

When using MDC [1], SVC [3], or when emulating the CS model, it can be interesting for peers to belong to more than one team. To implement MCS, peers must replicate the P2PSP modules (DBS at least) for each team (channel), except the buffer.

The use of MDC is trivial: the higher the number of received descriptions (channels), even if partially received, the higher the quality of the playback. However, when transmitting SVC media, peers should prioritize the reception of the most important layers.

When a peer belongs to more than one team and the teams broadcast exactly the same stream (the same chunks and headers), it can move between teams seamlessly (without loss of signal).

A pure CS service could be provided if the corresponding splitter announces one empty team and sends each chunk as many times as there are teams (with one peer per team).

10 CIS (Content Integrity Set)

A variety of techniques to fight pollution in P2P live streaming systems are available in the literature, including hash-based signature and data encryption techniques.

References

[1]   Pierpaolo Baccichet, Jeonghun Noh, Eric Setton, and Bernd Girod. Content-aware P2P video streaming with low latency. In Multimedia and Expo, 2007 IEEE International Conference on, pages 400–403. IEEE, 2007.

[2]   Suman Banerjee, Bobby Bhattacharjee, and Christopher Kommareddy. Scalable application layer multicast, volume 32. ACM, 2002.

[3]   Xiaowen Chu, Kaiyong Zhao, Zongpeng Li, and Anirban Mahanti. Auction-based on-demand P2P min-cost media streaming with network coding. IEEE Transactions on Parallel and Distributed Systems, 20(12):1816–1829, 2009.

[4]   Douglas E. Comer. Internetworking with TCP/IP. Principles, Protocols, and Architectures (4th Edition), volume 1. Prentice Hall, 2000.

[5]   Douglas E Comer and Ralph E Droms. Computer networks and internets. Prentice-Hall, Inc., 2003.

[6]   Lior Shabtay and Benny Rodrig. IP multicast in VLAN environment, April 12, 2011. US Patent 7,924,837.