Patent abstract:
AUTOMATED TRAFFIC ENGINEERING FOR MULTIPROTOCOL LABEL SWITCHING (MPLS) WITH THE USE OF LINK UTILIZATION AS FEEDBACK IN A DECISION MECHANISM. A method is provided in a node of a multiprotocol label switching (MPLS) network for improved load distribution, including determining a first set of one or more shortest paths between each pair of MPLS nodes, selecting at least one first shortest path by applying a common algorithm decision process, calculating a link utilization value for each link in the MPLS network, determining a second set of one or more shortest paths between each pair of MPLS nodes, generating a path utilization value for each shortest path in the second set of shortest paths based on the link utilization values corresponding to each of the shortest paths, and selecting a second shortest path from the second set of shortest paths based on the path utilization value, whereby the selection of the second subset in light of path utilization minimizes the standard deviation of the load distribution across the entire MPLS network.
Publication number: BR112013003488A2
Application number: R112013003488-2
Filing date: 2011-08-04
Publication date: 2020-08-04
Inventors: David Ian Allan; Scott Andrew Mansfield
Applicant: Telefonaktiebolaget L M Ericsson (Publ)
IPC main classification:
Patent description:

AUTOMATED TRAFFIC ENGINEERING FOR MULTIPROTOCOL LABEL SWITCHING (MPLS) WITH THE USE OF LINK UTILIZATION AS FEEDBACK IN A DECISION MECHANISM
CROSS-REFERENCE TO RELATED APPLICATION
Cross-reference is made to a co-pending patent application in the name of David Ian Allan and Scott Andrew Mansfield for "AUTOMATED TRAFFIC ENGINEERING FOR 802.1AQ BASED ON THE USE OF LINK UTILIZATION AS FEEDBACK IN THE DECISION MECHANISM", filed on the same date as the present application and of common ownership. The cross-referenced application is hereby incorporated by reference.
FIELD OF THE INVENTION
The embodiments of the present invention relate to a method and an apparatus for improving load distribution in a network. Specifically, the embodiments of the invention relate to a method for spreading load in multiprotocol label switching (MPLS) networks with multiple equal-cost paths between the nodes in the network.
BACKGROUND
Load distribution or load spreading is a method by which bandwidth is more effectively used and overall performance is improved in a network. The automated load distribution and load spreading techniques currently deployed operate with only a very local view; these techniques consider only the number of paths or next hops to a destination and do not consider the overall distribution of traffic in the network.
Equal cost multi-path (ECMP) is a common strategy for spreading unicast traffic load in routed networks. It is used where the decision of how to forward a packet toward a given destination can resolve to any one of multiple "equal cost" paths that tied in the shortest path computation over a routing database. ECMP can be used in conjunction with most unicast routing protocols and with nodes equipped with supporting data-plane hardware, since it relies on a per-hop decision that is local to a single router and assumes promiscuous reception and a complete forwarding table at every intermediate node. Using ECMP at any given node in a network, the load is divided pseudo-uniformly across the set of equal-cost next hops. This process is implemented independently at each hop of the network where there is more than one path to a given destination.
In many implementations, when the presence of multiple equal-cost next hops is found, each packet is inspected for a source of entropy, such as the Internet Protocol (IP) header, and a hash of the header information modulo the number of paths is used to select the next hop for that particular packet. For highly aggregated traffic, this method will, on average, distribute the load evenly in regular (i.e., symmetric) topologies and offers some improvement in less regular topologies.
Multiprotocol label switching (MPLS) is a combination of data-plane and control-plane technology used to forward traffic across the network using a label lookup and translation arrangement (referred to as "swapping"). Each node in the network supports MPLS by inspecting the traffic received from the network and forwarding this traffic based on its label; the label is usually translated or "swapped" at each hop. MPLS networks can improve the distribution of routed traffic on the network using per-hop ECMP to distribute or spread load across equal-cost paths. In MPLS networks, a label switched path (LSP) is prepared at each next hop across every node in the network. The forwarding path toward a given destination in the network is calculated using a shortest path first (SPF) algorithm at each node in the network and mapped to the local label bindings at the node, and the resulting connectivity appears as a multipoint-to-multipoint network. Individual nodes, when presented with traffic destined over multiple equal-cost paths, use load information as part of the path selection mechanism to maximize the evenness of the flow distribution across the set of paths. Multipoint-to-multipoint LSP setup is automatic. The label distribution protocol (LDP) or a similar protocol is used to over-provision a complete set of label bindings for all possible forwarding equivalence classes in the network, and then each label switching router (LSR) independently computes the set of next hops for each forwarding equivalence class and selects which label bindings will actually be used at any given time.
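As an illustration of the per-hop, hash-based next-hop selection described above, the following sketch shows how a flow can be mapped onto one of several equal-cost next hops. It is only a simplified illustration of common practice, not the mechanism of this disclosure; the entropy fields, the CRC32 hash and the function names are assumptions made for brevity.

```python
import zlib

def pick_next_hop(packet_headers, next_hops):
    """Per-hop ECMP: hash entropy fields, take the result modulo the path count.

    packet_headers: tuple of entropy fields, e.g. (src_ip, dst_ip, proto, sport, dport)
    next_hops: list of equal-cost next hops computed by SPF for this destination
    """
    entropy = "|".join(str(field) for field in packet_headers).encode()
    index = zlib.crc32(entropy) % len(next_hops)   # same flow -> same next hop
    return next_hops[index]

# Example: a flow keyed by the IP 5-tuple always maps to the same next hop,
# while different flows spread pseudo-uniformly across the equal-cost set.
print(pick_next_hop(("10.0.0.1", "10.0.2.9", 6, 49152, 80), ["nh-A", "nh-B", "nh-C"]))
```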
SUMMARY
A method is provided in a node of a multiprotocol label switching (MPLS) network to improve load distribution, wherein the node is one of a plurality of nodes in the MPLS network, each of which implements a common algorithm decision process to produce minimum-cost shortest path trees. The node includes a topology database of the MPLS network, wherein the topology of the MPLS network includes a plurality of nodes and links between the nodes. The method comprises the steps of: determining a first set of one or more shortest paths between each pair of MPLS nodes in the MPLS network by executing a shortest path search algorithm on the MPLS network topology stored in the topology database; selecting at least one first shortest path from the first set of shortest paths for each pair of MPLS nodes by applying the common algorithm decision process; calculating a link utilization value for each link in the MPLS network based on the count of selected shortest paths that transit each link; determining a second set of one or more shortest paths between each pair of MPLS nodes in the MPLS network by executing the shortest path search algorithm on the MPLS network topology stored in the topology database; generating a path utilization value for each shortest path in the second set of one or more shortest paths based on the link utilization values corresponding to each shortest path; selecting a second shortest path from the second set of one or more shortest paths on the basis of the path utilization value, wherein the selection uses the common algorithm decision process when multiple shortest paths having equal path utilization values are present in the set of one or more shortest paths; and storing at least the first shortest path and the second shortest path for each pair of MPLS nodes in a label information database, wherein the label information database indicates where to forward traffic incoming at the MPLS node, whereby the selection of the second subset in light of path utilization minimizes the standard deviation of the load distribution across the entire MPLS network.
A network element is also provided for improved load distribution in a multiprotocol label switching (MPLS) network that includes the network element, wherein the network element is one of a plurality of nodes in the MPLS network, and wherein a topology of the MPLS network includes a plurality of nodes and links between the nodes. The network element comprises: a topology database to store link information for each link in the MPLS network; a label information database to store label information for each port of the network element, wherein the label information database indicates where to forward each forwarding equivalence class (FEC) that enters the network element; a control processor coupled to the topology database and the label information database, the control processor configured to process data traffic, wherein the control processor comprises: an MPLS management module configured to forward data traffic over label switched paths (LSPs) across the MPLS network; a shortest path search module configured to determine at least one shortest path between each pair of MPLS nodes in the MPLS network by executing a shortest path search algorithm on the topology database, wherein the shortest path search module is configured to send, for each pair of MPLS nodes with a plurality of equal-cost shortest paths, the equal-cost shortest paths to a load distribution module; a classification module configured to order each of the pluralities of equal-cost shortest paths based on a path utilization value derived from the link utilization values associated with each path in the plurality of equal-cost shortest paths; and the load distribution module configured to select, from the received plurality of equal-cost shortest paths, a first subset of the plurality of equal-cost shortest paths for the pair of MPLS nodes to be used to share the load of data traffic between the pair of MPLS nodes, and to select, based on the path utilization value, a second subset of the plurality of equal-cost shortest paths for the pair of MPLS nodes to be used to share the data traffic load with the first subset for that pair of MPLS nodes, whereby the selection of the second subset in light of the path utilization value minimizes the standard deviation of the load distribution across the entire MPLS network.
BRIEF DESCRIPTION OF THE FIGURES
The present invention is illustrated by way of example and not by way of limitation in the appended figures, in which like references indicate similar elements. It should be noted that different references to "an" embodiment in the present description are not necessarily to the same embodiment, and such references mean at least one. Moreover, when a particular feature, structure or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such a feature, structure or characteristic in connection with other embodiments, whether or not explicitly described.
Figure 1 illustrates a diagram of an example of a network topology.
Figure 2 illustrates a diagram of an embodiment of a network element that implements automated traffic engineering for a multiprotocol label switching (MPLS) network.
Figure 3 illustrates a flow diagram of an embodiment of a load distribution process that includes automated traffic engineering incorporating the use of link utilization as feedback within a decision mechanism.
Figure 4 illustrates a flowchart of an embodiment of a process for generating a label mapping message as part of the label distribution protocol.
Figure 5 illustrates a diagram of an example of a multipoint-to-multipoint network topology.
Figure 6 illustrates a diagram of another example of a multipoint-to-multipoint network topology.
Figure 7 illustrates a diagram of an embodiment of mapping a set of pseudowires onto the underlying packet switched network to support operations, administration and maintenance (OAM) in the MPLS network.
DETAILED DESCRIPTION OF THE INVENTION
In the following description, numerous specific details are provided. However, it is understood that the embodiments of the invention can be practiced without such specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail so as not to obscure the understanding of this description. Those of ordinary skill in the art, with the included descriptions, will be able to implement the appropriate functionality without undue experimentation.
The embodiments include a basic decision process with specific properties, including that the process will always resolve to a single path, is independent of the order or direction of the computation, and has a locality property such that a tie for any portion of the path being considered can be resolved without having to consider the entire path.
The operations of the flow diagrams will be described with reference to the exemplary embodiment of Figure 2. However, it should be understood that the operations of the flow diagrams can be performed by embodiments of the invention other than those discussed with reference to Figure 2, and the embodiment discussed with reference to Figure 2 can perform operations different from those discussed with reference to the flow diagrams of Figures 3 and 4. Figures 1 and 5-7 provide examples of topologies and scenarios that illustrate the implementation of the principles and structures of Figures 2, 3 and 4.
The techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (for example, an end station, a network element, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using non-transitory machine-readable or computer-readable media, such as non-transitory machine-readable or computer-readable storage media (for example, magnetic disks; optical disks; random access memory; read-only memory; flash memory devices; and phase-change memory). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices, user input/output devices (for example, a keyboard, a touchscreen and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more buses and bridges (also known as bus controllers). The storage devices represent one or more non-transitory machine-readable or computer-readable storage and communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device.
Of course, one or more parts of an embodiment of the present invention can be implemented using different combinations of software, firmware and/or hardware.
As used herein, a network element (e.g., a router, switch, bridge, etc.) is a piece of networking equipment, including hardware and software, that communicatively interconnects other equipment on the network (e.g., other network elements, end stations, etc.). Some network elements are "multiple services network elements" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, multicast and/or subscriber management) and/or provide support for multiple application services (e.g., data, voice and video). Subscriber end stations (e.g., servers, workstations, laptops, palmtops, mobile phones, smart phones, multimedia phones, Voice over Internet Protocol (VoIP) phones, portable media players, GPS units, gaming systems, set-top boxes (STBs), etc.) access content/services provided over the Internet and/or content/services provided on virtual private networks (VPNs) overlaid on the Internet.
The content and/or services are typically provided by one or more end stations (for example, server end stations) belonging to a service or content provider, or by end stations participating in a peer-to-peer service, and may include public webpages (free content, store fronts, search services, etc.), private webpages (for example, username/password accessed webpages providing messaging services, etc.), corporate networks over VPNs, IPTV, etc.
Typically, subscriber end stations are coupled (for example, through customer premise equipment coupled to an access network, wired or wireless) to edge network elements, which are coupled (for example, through one or more core network elements) to other edge network elements, which are coupled to other end stations (for example, server end stations).
The embodiments of the present invention provide a system, network and method for avoiding the disadvantages of the prior art, including: poor performance in asymmetric topologies; lack of support for operations, administration and management (OAM) protocols; high resource requirements for packet inspection; high levels of expansion to achieve reasonable network utilization; the generation and maintenance of multiple metric sets; and the significant effort required to effect minor changes in state.
The embodiments of the invention circumvent these disadvantages by allowing dynamic traffic engineering while minimizing the number of passes through the topology database for a network.
The load distribution method incorporates dynamic traffic engineering and uses the instantiation of multiple sets of equal-cost paths in the forwarding plane, which can be aggregated into equal-cost tree sets, where the cumulative number of shortest paths that pass through each link in a path, resulting from all previous iterations of the path generation process, factors into the decision used to generate the next set of paths.
Once a node has made an initial path selection using the tie-breaking process, and has processed all pairs of nodes in the topology database, the number of shortest paths transiting each link is determined and is referred to as a link utilization value.
For each subsequent pass through the database to generate other path sets, the set of paths between any two nodes is ranked by ordering the lexicographically sorted lists of link utilization values for the links on each path being considered. If the ordering yields a single lowest utilized path, then this path is selected.
If the ordering does not yield a single lowest utilized path, then the decision process is applied to the subset of shortest paths that tied for lowest link utilization. In the load distribution method and system, the network load model is computed at each path generation iteration, taking into account the decisions of the previous iterations, in order to even out the load on links in the network.
The improved algorithm inherently favors the selection of less loaded links in each iteration after the first iteration.
The load distribution process uses a decision process with distinct properties such that a path between any two points will resolve to a single symmetric path regardless of the direction of computation, the order of computation, or the examination of any subset of the path, a property described as "any portion of the shortest path is also the shortest path". Or, to put it another way, where a tie occurs along any portion of the shortest path, the nodes involved will resolve the tie for that subset of the path with the same choice, the result being a minimum cost path tree. This is referred to herein as the "common algorithm decision" process. In the load distribution process, an initial pass through the topology database using the common algorithm decision process results in the generation of the first set of trees. This is because no load on any link has yet been recorded, so all equal-cost paths will be tied for utilization, where the definition of equal cost is the lowest cost with the least number of hops.
The initial step requires determining the shortest path between each pair of MPLS nodes in the network, where, when more than one shortest path between any two MPLS nodes is found, the common algorithm decision process is used as the tie-breaker in order to generate a unique path selection between each pair of MPLS nodes in the network and to generate one or more sets of equal-cost forwarding trees, called "ECT sets". The load distribution process can order the equal-cost paths and determine the highest and lowest ranked paths, or "bookend paths", where both paths exhibit a set of required properties.
This load distribution process can therefore select more than one path from a single "all pairs" pass through the database.
The load distribution process also computes the number of shortest paths that cross each link based on the paths selected by the previous decision procedures.
This value is referred to as the "link utilization" value, which can be used in subsequent computations.
The link utilization values can be the count of pairs of MPLS nodes whose shortest path passes through the link.
In other embodiments, more sophisticated measures of link utilization can be used, taking into account additional information in the topology database. In subsequent passes through the database to generate other sets of paths or trees, the set of shortest paths between any two MPLS nodes is first ordered by generating path utilization values, which may be the lexicographically sorted link utilization values for each of the paths or simply the sum of the utilization of each link in the path, and then ordering the resulting paths based on those utilization values.
Two or more ordering schemes can also be used because, when more than one path is selected when generating a set of equal-cost paths or trees, it is advantageous to minimize the number of times the same path is selected.
By using multiple load orderings that exhibit diversity, the number of iterations required to select multiple paths can be minimized.
When the ordering process yields a single lowest utilized path, it can be selected without further processing.
When more than one ordering (for example, a low ordering and a high ordering) is considered, then the lowest utilized path is selected from either the high ordering or the low ordering.
When there is more than one equal-cost lowest utilized path, the common algorithm decision process is applied to the set of lowest utilized paths to make the selection.
In one embodiment, more than one path can be selected at this stage.
When more than one load ordering mechanism is used (for example, ordering by sum and lexicographic sorting), it is also possible to extract multiple paths from each when ties occur.
Additional passes or iterations through the topology database can be performed. In each iteration, the link utilization value assigned to each link in a path is the cumulative measure or indication of the shortest paths that were selected through that link during all previous passes through the topology database.
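The iterative selection described above can be summarized in the following sketch. It is a minimal illustration and makes several assumptions not fixed by the description: the graph is an unweighted adjacency mapping (so "equal cost" means equal hop count), only the lowest ranked path is chosen per node pair per pass (the highest ranked "bookend" path could be chosen analogously), and the helper names are illustrative rather than taken from the disclosure.

```python
from itertools import combinations
from collections import deque

def all_shortest_paths(graph, src, dst):
    """Enumerate all equal-cost (minimum hop count) paths from src to dst via BFS."""
    dist, preds, queue = {src: 0}, {src: []}, deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v], preds[v] = dist[u] + 1, [u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:
                preds[v].append(u)
    def unwind(v):
        if v == src:
            return [[src]]
        return [p + [v] for u in preds[v] for p in unwind(u)]
    return unwind(dst)

def path_id(path):
    """Common-algorithm tie-breaker key: lexicographically sorted node identifiers."""
    return sorted(path)

def links_of(path):
    """Direction-independent link identifiers along a path."""
    return [tuple(sorted(hop)) for hop in zip(path, path[1:])]

def select_paths(graph, iterations=2):
    """Pick one path per node pair per pass, feeding link utilization back in."""
    utilization = {}                       # link -> cumulative shortest path count
    selected = {}                          # (src, dst) -> list of chosen paths
    for it in range(iterations):
        pass_counts = {}
        for src, dst in combinations(sorted(graph), 2):
            candidates = all_shortest_paths(graph, src, dst)
            if it == 0:
                # First pass: no load recorded yet, the tie-breaker alone decides.
                best = min(candidates, key=path_id)
            else:
                # Later passes: rank by lexicographically sorted link loads,
                # then break remaining ties with the common algorithm.
                def load_key(p):
                    return sorted(utilization.get(l, 0) for l in links_of(p))
                lowest = min(load_key(p) for p in candidates)
                tied = [p for p in candidates if load_key(p) == lowest]
                best = min(tied, key=path_id)
            selected.setdefault((src, dst), []).append(best)
            for link in links_of(best):
                pass_counts[link] = pass_counts.get(link, 0) + 1
        # Link utilization becomes cumulative only after the full pass completes.
        for link, n in pass_counts.items():
            utilization[link] = utilization.get(link, 0) + n
    return selected, utilization
```

With a graph supplied as {node: [neighbors]}, the second pass compares candidate paths by their lexicographically sorted link loads before falling back to the path-identifier tie-break, mirroring the ordering described above.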
Figure 1 illustrates a diagram of an embodiment of an example network topology.
The example network topology includes six nodes with corresponding node identifiers 1-6. Paths between node pairs have not yet been determined for the network topology.
A common algorithm decision process is used that lexicographically sorts path identifiers formed from the node identifiers.
Examining the set of equal cost paths between node 1 and node 4, the following ordered set of path identifiers will be generated (note that the path identifiers have been lexicographically sorted, so the node identifiers do not necessarily appear in transit order):
1-2-3-4
1-2-4-6
1-3-4-5
1-4-5-6
This initial application of the decision process will select 1-2-3-4 and 1-4-5-6 as the lowest and highest ranked paths between these nodes.
For simplicity, in this example only the pair of nodes 1 and 4 is considered in determining the path counts for the network, instead of the shortest path trees of all six nodes.
In this example, the links on the selected paths are then each assigned a path count of 1. For the next pass through the topology database, the load distribution process would yield the following lexicographic sorting of the link loads associated with each of the path IDs:
Load 0,1,1 for path 1-2-4-6
Load 0,1,1 for path 1-3-4-5
Load 1,1,1 for path 1-2-3-4
Load 1,1,1 for path 1-4-5-6
The lexicographic sorting of link loads will result in a tie between paths 1-2-4-6 and 1-3-4-5, as each is 0,1,1. Similarly, the sum of link loads will yield:
Load 2 for path 1-2-4-6
Load 2 for path 1-3-4-5
Load 3 for path 1-2-3-4
Load 3 for path 1-4-5-6
As a result, for both ordering styles, the secondary tie-break on lexicographically sorted path IDs is employed.
In both cases of this secondary tie-break, the low path (1-2-4-6) is selected.
Similarly, 1-3-4-5 can be selected as the high ordered path ID from the set of lowest loaded paths.
In one embodiment, when a high-low selection is used, two paths are selected.
These paths can be the same or have significant overlap.
For example, if path 1-3-4-5 did not exist in the list ordered above, then path 1-2-4-6 would qualify as both the low ordered path and the high ordered path with the lowest load.
In other embodiments, the primary criterion in the low path selection can be ordered based on the lexicographic sorting of the loads, and the primary criterion in the high path selection can be ordered based on the sum of the loads.
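The orderings of this example can be replayed with the small sketch below. The per-path link load lists are taken directly from the text above (the Figure 1 topology itself is not reproduced), and the variable names are illustrative.

```python
# Per-path link loads as given in the example above, keyed by path identifier.
paths = {
    "1-2-4-6": [0, 1, 1],
    "1-3-4-5": [0, 1, 1],
    "1-2-3-4": [1, 1, 1],
    "1-4-5-6": [1, 1, 1],
}

def lex_key(pid):
    return sorted(paths[pid])          # lexicographically sorted link loads

def sum_key(pid):
    return sum(paths[pid])             # total link load

for key in (lex_key, sum_key):
    lowest = min(key(p) for p in paths)
    tied = [p for p in paths if key(p) == lowest]
    # Secondary tie-break: the common algorithm ordering of the path identifiers.
    low, high = min(tied), max(tied)
    print(key.__name__, "->", low, high)   # both print: 1-2-4-6 1-3-4-5
```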
While the example only considered link utilization when examining a single pair of paths, a person skilled in the art will understand that after a single pass through the database a comprehensive view of the distribution of potential traffic exists, and that the decision steps will inherently avoid the maxima; therefore, the load is distributed through the network more evenly.
The level of modification to the load distribution decreases with each new set of paths considered, as the effect is cumulative. The number of paths selected per process iteration and the cumulative number of paths that a network is configured to use can be a function of an a priori analysis of the forwarding state versus the computational power required.
Selecting both the highest and the lowest ranked of the lowest loaded paths will minimize the amount of computing power required for a given improvement in the standard deviation of link utilization, but will require more forwarding state as a consequence, because two equal-cost sets will be generated per iteration.
Selecting a single path permutation from each iteration will require more computing power, but it will reduce the amount of forwarding database state required for a given reduction in the standard deviation of utilization, because the number of times that two paths must be selected from a single lowest utilized candidate is minimized.
The global number of paths generated is determined based on a combination of network element state and computational power considerations balanced against network efficiency.
Using multiple schemes to order path loads allows more paths to be selected from a given pass through the database, as it reduces the probability of the same path being selected more than once for a given number of path selections.
In the examples above, two methods of ordering the path load have been described which would produce consistent results when applied across the network. In other embodiments, additional or substitute methods of ordering could be used. For example, other load ordering mechanisms that also have a locality property (any portion of the lowest loaded path is also the lowest loaded path when combined with a common algorithm decision) and combinations of such mechanisms can be used.
In addition, in the example above, link utilization is represented by counting the shortest paths that transit a link. It is possible to use numerous variations to represent link utilization in greater detail and with increased precision. Within the label information and the topology database there is sufficient information such that each node in the network can determine the number of service instances that use a particular shortest path. The link utilization value can be determined based on this usage to properly weight the corresponding link. By augmenting the data stored in the label information or topology database, additional per-service bandwidth profile information is available for use in the load distribution calculations. In another embodiment, only the minimum metric of the links in the set of links in a path is used as representative of the maximum load that could be offered between the pair of nodes. In other embodiments, a similar metric or more detailed metrics can be used.
In one embodiment, all but the final pass through the topology database involve an "all pairs" computation of the shortest paths between all pairs of nodes in the network. This can be computationally costly due to its complexity. The load distribution process, however, does not require a significant number of passes through the topology database to yield measurable benefits, and as a result the load distribution process provides valuable global improvements in the allocation of network resources that justify these "all pairs" computations. In experimental examples using random graph generation, a single pass through the database after establishing the initial ECT set resulted in an approximate average reduction of 45% in the coefficient of variation of link loading, measured as the count of shortest paths that pass over each link in the network. Three subsequent passes through the topology database continued to reduce the coefficient of variation to the point where an average 75% reduction occurred, but most of the benefit in load distribution came in the first pass after the establishment of the baseline. Thus, most of the benefits in load distribution are accrued in the first two passes through the database. The number of paths through the network was doubled when the second set was explicitly placed to avoid the loading of the first set. However, the rate of improvement in the coefficient of variation drops from pass to pass much more quickly than the 1/2, 1/3, 1/4 rates that the cumulative path count might superficially suggest.
Thus, significant results can be achieved while keeping the load distribution process tractable in terms of both computation and forwarding state. Because the method is effectively connection oriented and seeks out the least loaded links, any disturbance of the traffic matrix caused by a failure tends to be isolated and local in nature.
The load distribution process will tend to direct data traffic back toward the original distribution once the constriction in the network has been bypassed. The method also works on the basis of the emerging MPLS-TP technology, as operations, administration and maintenance (OAM) protocols can be used unmodified, preserving the architecture and service guarantees of the MPLS network. The load distribution method and system also allow an administrator to "bias" a link in advance with a load factor that will have the effect of shifting some load away from a particular link. This allows more useful gradations of modification to forwarding behavior than a simple metric change, is simpler to administer than explicit routing, and obviates the need for link virtualization (such as MPLS "forwarding adjacencies" as per RFC 4206) to artificially increase the mesh density, as is done in prior load balancing systems.
For the two-stage sorting, the point at which the link bias is applied is important.
This is normally considered for the second and subsequent iterations.
In an implementation where, in the first iteration, all equal-cost paths are tied for utilization (zero), applying the bias factor would immediately tend to shift the entire load away from the biased link and toward the other paths resulting from the first iteration.
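A minimal sketch of how such an administrative link bias might be folded into the utilization values used for the ranking is shown below. The additive form of the combination and the helper names are assumptions for illustration; the description does not fix a particular formula.

```python
def effective_utilization(utilization, bias):
    """Combine measured link utilization with an administrator-configured bias.

    utilization: dict mapping link -> count of shortest paths selected so far
    bias:        dict mapping link -> operator-configured modification factor
    The combined value is what the ranking step sees, so a positively biased
    link appears "busier" and is less likely to be chosen. As noted above,
    this is normally applied only from the second iteration onward.
    """
    links = set(utilization) | set(bias)
    return {l: utilization.get(l, 0) + bias.get(l, 0) for l in links}

# Usage: rank candidate paths with the biased values instead of the raw counts,
# e.g. loads = effective_utilization(utilization, {(3, 4): 5}) and then
# key = lambda path: sorted(loads.get(l, 0) for l in links_of(path))
# with links_of() as in the earlier sketch.
```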
Figure 2 is a diagram of an embodiment of a network element that implements the load distribution method incorporating the use of link utilization as feedback within the decision mechanism.
The network element (200) can include a label information database (215), a topology database (217), an ingress module (203), an egress module (205) and a control processor (207). The ingress module (203) can handle the processing of data packets being received by the network element (200) at the physical link and data link level.
The egress module (205) handles the processing of data packets being transmitted by the network element (200) at the physical link and at the data link level.
The control processor (207) handles the routing, forwarding and higher-level processing of data traffic.
The control processor (207) can execute or include a shortest path search module (209), a load distribution module (215), a label distribution protocol (LDP) module (213), an MPLS management module (217) and a classification module (211).
The label information database (215) includes a label forwarding table that defines the way in which data packets are to be forwarded. Label forwarding entries relate labels and FECs of the underlying and virtual topologies to the interfaces of the network element (200). This information can be used by the control processor (207) to determine how a data packet is to be handled, i.e., to which network interface the data packet should be forwarded. The load distribution method and system create forwarding entries, using the label distribution protocol (LDP), that implement the load distribution as described below.
The topology database (217) stores a network model or similar representation of the topology of the network to which the network element (200) belongs. In one embodiment, the nodes in the network are each label switching routers (LSRs) and the links between the LSRs can use a number of underlying protocols and technologies. The nodes can be identified with unique node identifiers, such as loopback addresses, and the links with node-identifier pairs. Those skilled in the art will understand that this network model representation is provided as an example and that other representations of the network topology can be used with the load distribution method and system.
The shortest path search module (209) is a component of the control processor (207) or a module executed by the control processor (207). The shortest path search module (209) traverses the topology database to determine the shortest path between any two nodes in the network topology. If multiple paths exist that have the same distance or cost between two nodes in the network, these multiple equal-cost paths can be provided to the classification module (211) and the load distribution module (215) for further processing. The shortest path search module (209) can determine the shortest paths between all nodes in the network topology, referred to herein as an "all pairs" computation. The shortest path search module (209) provides a set of shortest paths for each pair of nodes, and the load distribution module (215) selects a subset of the shortest paths and updates the label information database to include an entry that implements each of the selected shortest paths that traverse the network element (200). After the first pass, the shortest path search module (209) calculates the link utilization value for each link in the network topology resulting from the first pass through the topology database.
The link utilization value is a count of the number of shortest paths that cross a given link. A separate link utilization value is calculated and recorded for each link.
These link utilization values are used to generate a path utilization value, which in turn is used to bias the path ordering for subsequent passes through the topology database, where the initial tie-break is either the ordered list of lexicographically sorted link utilization values or the sum of the link utilization values (i.e., in the form of the path utilization value), and where this results in a tie, the common algorithm decision process is used as the subsequent tie-breaker.
The classification module (211) is a component of the control processor (207) or a module executed by the control processor (207). The classification module (211) assists the load distribution module (215) by performing an initial ordering of the loaded set of equal-cost trees based on the path utilization values in the second pass and in subsequent passes. For each pair of nodes with multiple equal-cost paths, the classification module (211) generates an ordering of the equal-cost paths based on the path utilization values, and the load distribution module (215) selects at least one path from this ordering.
In other embodiments, the highest ordered and lowest ordered paths can be selected to divide the load between the corresponding node pairs.
The load distribution module (215) is a component of the control processor (207) or a module executed by the control processor (207). This process can be repeated for any number of passes or iterations, where the link utilization values are updated to be a cumulative indication of the set of shortest paths that pass through each link.
Path utilization values are also updated in line with changes in the link utilization values.
The standard deviation or variance in link loading decreases with each iteration, but as the number of path sets increases, the overall impact of each additional set proportionally decreases, indicating that the use of more than two or three passes or iterations is not advantageous given the computational effort to produce them and the forwarding state that is instantiated.
The number of passes or iterations is set by an administrator and is configured consistently across the network.
The MPLS management module (217) is a component of the control processor (207) or a module executed by the control processor (207). The MPLS management module (217) inspects incoming packets, determines the associated labels, and performs look-ups for the packets in the label information database (219) to determine the network interface through which each packet should be forwarded. The MPLS management module (217) also performs any necessary label swapping, label addition or label removal operations to effect the proper traversal of the LSP for each data packet.
The LDP module (213) is a component of the control processor (207) or a module executed by the control processor (207). The LDP module (213) generates the messages necessary to establish the forwarding equivalence class (FEC) and virtual topology to label bindings in the network used to create the LSPs that distribute the network load. The LDP module (213) generates label mapping messages that include FEC type-length-value (TLV) fields and label TLV fields, as well as virtual topology fields. The virtual topology TLV includes a topology index that indicates the iteration of the load distribution process with which the label and FEC are associated. The LDP module (213) also performs other traditional functions to implement label distribution.
Figure 3 illustrates a flowchart of an embodiment of a process for automated traffic engineering to support load distribution for multiprotocol label switching, based on the use of link utilization as feedback within a decision mechanism for equal-cost paths. In one embodiment, the process can be started at the initialization of a network element such as a label switching router, upon notification of a change in the topology of the network connected to the router, at defined intervals, or at similar events or times. A topology database is maintained at each network element in the network as a separate process from the load distribution process and is assumed to be a current representation of the true topology of the network.
The process starts by determining a set of shortest paths between a network element or MPLS node (for example, an LSR) in the network and another network element or MPLS node in the network (Block 301). The set of shortest paths can be conceived as individual paths or as a set of trees with each network element as the root of its respective tree. A check is made to determine whether there are multiple shortest paths, that is, whether there is a tie for the shortest path between the MPLS nodes (Block 303). If the pair of MPLS nodes has a single shortest path between them, the label information database is updated to reflect the shortest path (Block 306). In one embodiment, the label information database is updated to reflect each of the paths that traverse the network element that holds it. Each network element in the network performs this same calculation. The load distribution process is deterministic, so each network element will produce the same result. Further, processing those pairs of MPLS nodes with a single shortest path is unnecessary unless there is a change in topology.
If the pair of MPLS nodes does not have a single shortest path, usually measured as the lowest number of hops and the lowest cost, then the common algorithm decision process is used to allow a single shortest path or set of shortest paths to be selected (Block 305). In one embodiment, it is possible to select the first and last ordered paths. After the paths are selected, they are stored in the label information database or used to update the label information database, such that all pairs of MPLS nodes have at least one path selected between them.
After selecting the shortest path, a check is made to determine whether all pairs of MPLS nodes have a selected path (Block 307). If other pairs of MPLS nodes have not yet had a path or set of paths selected, then the process continues by selecting the next pair of nodes to process (Block 309). If all pairs of MPLS nodes have a selected shortest path, then the process continues with the second pass or iteration.
The link utilization value for each link is calculated either as a consequence of, or after, the updating of the forwarding database for all pairs of MPLS nodes has been completed (Block 310). The link utilization value is a count of the number of paths that cross each corresponding link in the network topology. The link utilization value is calculated for each link in the network. The link utilization value provides an indication of the level of use of, and potential bottlenecks in, the network, which should be avoided when additional paths are formed. For the subsequent determination of shortest paths, a tie-breaking decision is prepared by generating path utilization values, either as a lexicographically sorted list in which the path utilization values include the link utilization values, or as the sum of the link utilization values.
The process for all nodes then starts again by selecting a pair of MPLS nodes and determining a set of shortest paths between the pair of MPLS nodes (Block 311). This process includes generating path utilization values based on the link utilization values that correspond to each path (Block 313). Path utilization values can represent the overall load of each path, such as a sum of link utilization values, or can be a lexicographically sorted arrangement of link utilization values highlighting the most or least loaded links in each path, or similar representations. The shortest paths are then ordered by their path utilization values (Block 315).
A check is made to determine whether there is more than one shortest path for a given pair of MPLS nodes having equal path utilization values (Block 317). Where a uniquely lowest loaded path exists, it will be selected without further processing for all path orderings (for example, the lowest and highest). When there is more than one identically loaded shortest path (i.e., identical path utilization values), the common algorithm decision process is then used to carry out the path selection on this subset of the lowest loaded shortest paths (Block 321). The ordering takes into account the link utilization value such that the paths with the lowest or least used links are the most likely to be selected, which takes into account the overall load of the network and not just the next hop; as a result, forwarding through the network is more balanced. The label information database is then updated to reflect the selected paths (Block 318).
A check is then made to determine whether all pairs of MPLS nodes have a selected shortest path or selected shortest path set (Block 319). If not, then the process continues by selecting the next pair of MPLS nodes to process (Block 323). If all pairs of MPLS nodes have been calculated, then a check is made to determine whether additional paths are needed (Block 325). If no additional paths are needed (this can be a parameter that is set by the network administrator or similarly determined), then the load distribution process ends. If additional paths are needed, then the process continues with a third pass or iteration that is similar to the second pass, but built on the link utilization determined in the previous iterations. The process can have any number of iterations.
Figure 4 is a flowchart of an embodiment of a process for generating a label mapping message as part of the label distribution protocol. In one embodiment, the process is started in response to a topology change or a change in the label information database for the network. In another embodiment, the process is initiated by each node generating a label mapping message to be sent to each of its peers (Block 401). The label mapping message includes a number of type-length-value (TLV) fields. A separate label mapping message is generated for each forwarding equivalence class (FEC) and each topology path or tree in the topology of the network represented in the host node's label information base. For each label mapping message, the forwarding equivalence class is set in an FEC TLV field of the label mapping message (Block 403). The label TLV field of each label mapping message is also set according to the label assigned to the LSP for each of the interfaces on the path (Block 405). A topology index is also set in the label mapping message (Block 407). The topology index indicates the iteration of the LSP selection process defined by the label mapping message.
For example, if the label mapping message corresponds to the first selected tree or path, then a topology index of zero or one can be selected and inserted into the label mapping message. Similarly, if a second path or tree corresponds to the message, then one or two can be specified as the value.
Once each of the label mapping messages is defined and each of its values is specified, the label mapping message can be sent to each of the label distribution protocol peers (Block 409). In one embodiment, a topology TLV is defined for the label mapping message.
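A simplified sketch of the label mapping information described above is shown below. It models only the three fields named in the text (FEC, label and topology index) as a plain data structure; it is not the LDP wire encoding, and the field names, the send() call and the advertise() helper are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class LabelMappingMessage:
    """Illustrative container for the three TLVs named in the text (not LDP wire format)."""
    fec: str             # forwarding equivalence class (Block 403)
    label: int           # label bound to the LSP on this interface (Block 405)
    topology_index: int  # which iteration of the path selection produced it (Block 407)

def advertise(ldp_peers, bindings):
    """Send one message per (FEC, label, topology index) binding to every LDP peer.

    bindings: iterable of (fec, label, topology_index) tuples taken from the
    label information base; ldp_peers: objects with a hypothetical send() method.
    """
    for fec, label, topo_idx in bindings:
        msg = LabelMappingMessage(fec=fec, label=label, topology_index=topo_idx)
        for peer in ldp_peers:          # Block 409
            peer.send(msg)
```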
Figure 5 is a diagram of an embodiment of a multipoint-to-multipoint network including a set of label switching routers (LSRs) 1-18. The diagram shows a set of paths or trees defined by the first iteration of the process described above for the given example.
The diagram assumes that traffic entering this network can be distributed over nodes 1-4 and likewise over nodes 13-18; in other words, these LSRs are at the edge of the network and hold the end point interfaces.
In this example, in the first pass of the process, a set of unique lexicographically sorted paths for all pairs of nodes from 1-13 to 4-18 would be generated (for example, 1-5, 5-9, 9-13 and 4-8, 8-12, 12-18). From this set of unique paths, the example assumes that a low path or a high path from the ordering of these unique path identifiers is selected, which corresponds to trees 501 and 503. Figure 6 illustrates the paths or trees selected in the second iteration of the load distribution method described herein.
In this example, the load distribution method finds two paths where the lexicographic ordering of the link loads associated with each path produces a tie between the two paths, and the exemplary lexicographic ordering of the node IDs forming a path identifier is invoked to authoritatively resolve the tie.
The lowest ordered tree (605) and the highest ordered tree (607) in the second iteration also distribute traffic between nodes 1-4 and nodes 13-18 and supplement the lowest ordered tree (601) and the highest ordered tree (603) of the first iteration illustrated in Figure 5. By incorporating the link utilization value in the lexicographic sorting, the second iteration selects equal-cost paths that have the least used links, whereby there is an increase in bandwidth utilization and in the topological diversity of the selected "all pairs" paths.
Figure 7 illustrates a diagram of an embodiment of mapping a set of pseudowires onto the underlying packet switched network to support operations, administration and maintenance (OAM) in the MPLS network.
Performance monitoring can be maintained, and compatibility with the traffic engineering system maintained, by underlaying a full mesh of LSPs between the equivalent end points beneath a set of pseudowires. The packet switched network then scales on the order of (N) and fault management can scale accordingly, while the overlay retains the pairwise properties required for performance monitoring.
The pseudowire bindings are modified to bind the pseudowire FEC to a virtual topology index of the packet switched network (PSN).
As the PSN topology is logically paired between the pseudowire end points, the pseudowire label provides a means of source disambiguation for OAM counters.
Thus, a method, a system and an apparatus for load distribution in an MPLS network that takes link utilization into account have now been described. It should be understood that the above description is intended to be illustrative and not restrictive. Many other embodiments will become apparent to those skilled in the art upon reading and understanding the above description.
The scope of the present invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Claims:
Claims (18)
[1]
1. Method implemented in a node of a multi-protocol label switching (MPLS) network for improved load distribution, characterized by the fact that the node is one of a plurality of nodes in the MPLS network in which each of the nodes implements a common algorithm decision process to produce minimum-cost shortest path trees, the node including a topology database to store an MPLS network topology, where the MPLS network topology includes a plurality of nodes and links between the nodes, the method comprising the steps of: - determining a first set of one or more shortest paths between each MPLS node pair in the MPLS network by executing a shortest path search algorithm on the MPLS network topology stored in the topology database; - selecting at least one first shortest path from the first set of shortest paths for each pair of MPLS nodes, applying the common algorithm decision process; - calculating a link utilization value for each link in the MPLS network based on the count of selected shortest paths that pass through each link; - determining a second set of one or more shortest paths between each pair of MPLS nodes in the MPLS network by executing the shortest path search algorithm on the MPLS network topology stored in the topology database; - generating a path utilization value for each shortest path in the second set of one or more shortest paths based on the link utilization values corresponding to each shortest path; - selecting a second shortest path from the second set of one or more shortest paths based on the path utilization value, where the selection uses the common algorithm decision process when multiple shortest paths having equal path utilization values are present in the set of one or more shortest paths; and - storing at least the first shortest path and the second shortest path for each pair of MPLS nodes in a label information database, where the label information database indicates where to forward incoming traffic at the MPLS node, whereby the selection of the second subset in light of the path utilization minimizes the standard deviation of the load distribution across the entire MPLS network.
[2]
2. Method, according to claim 1, characterized by the fact that the step of generating the path utilization value comprises: - adding the link utilization values corresponding to each path, or - lexicographically sorting the link utilization values corresponding to each path.
[3]
3. Method, according to claim 2, characterized by the fact that it comprises the steps of: - receiving a link modification factor from an administrator; and - combining the link modification factor with the link utilization value to weight a corresponding one of the links and paths to decrease the use of the link, decreasing its selection probability by affecting the ordering of the lowest loaded path set.
[4]
4. Method, according to claim 2, characterized by the fact that it additionally comprises the steps of: - ordering each shortest path in the second set of shortest paths based on the corresponding path utilization values, in which the step of selecting at least the second shortest path additionally comprises: - selecting from the ordering a highest and a lowest ordered shortest path.
[5]
5. Method, according to claim 2, characterized by the fact that it comprises the steps of: - iteratively selecting additional shortest paths to share load distribution with the first shortest path and the second shortest path up to a managed number of paths that reflects a desire by network operators for overall improvement of the MPLS network.
[6]
6. Method, according to claim 1, characterized by the fact that the sets of shortest paths between pairs of MPLS nodes are implemented as label switched paths within the MPLS network.
[7]
7. Method, according to claim 1, characterized by the fact that it additionally comprises the steps of: - generating a label mapping message; - defining a type-length-value (TLV) FEC field in the label mapping message; - defining a label TLV field in the label mapping message; - defining a topology index for the label mapping message, where the topology index indicates an iteration in the selection steps of the first subset and the second subset; and - sending the label mapping message to each label distribution protocol peer in the MPLS network.
[8]
8. Method, according to claim 7, characterized in that the label mapping messages are sent to each LDP peer for each combination of topology index and FEC values.
[9]
9. Network element for improved load distribution in a multi-protocol label switching (MPLS) network that includes the network element, characterized by the fact that the network element is one of a plurality of nodes in the MPLS network, in which a topology of the MPLS network includes a plurality of nodes and links between the nodes, the network element comprising: - a topology database to store link information for each link in the MPLS network; - a label information database to store label information for each port of the network element, where the label information database indicates where to forward each forwarding equivalence class (FEC) that enters the network element; - a control processor coupled to the topology database and the label information database, the control processor configured to process data traffic, where the control processor comprises: - an MPLS management module configured to forward data traffic over label switched paths (LSPs); - a label distribution protocol (LDP) module configured to establish LSPs in the MPLS network; - a shortest path search module configured to determine at least one shortest path between each pair of MPLS nodes in the MPLS network by executing a shortest path search algorithm on the topology database, in which the shortest path search module is configured to send, for each of the pairs of MPLS nodes with a plurality of equal-cost shortest paths, the equal-cost shortest paths to a load distribution module; - a classification module configured to order each of the plurality of equal-cost shortest paths based on a path utilization value derived from the link utilization values associated with each path in the plurality of equal-cost shortest paths; and - the load distribution module configured to select, from the received plurality of equal-cost shortest paths, a first subset of the plurality of equal-cost shortest paths for that pair of MPLS nodes to be used to share the load of data traffic between the pair of MPLS nodes, and to select, based on the path utilization value, a second subset of the plurality of equal-cost shortest paths for that pair of MPLS nodes to be used to share the data traffic load with the first subset for that pair of MPLS nodes,
whereby, the selection of the second subset in the light of the path utilization value minimizes the standard deviation of load distribution across the entire MPLS network.
[10]
10. Network element, according to claim 9, characterized by the fact that the classification module is additionally configured to sort the link utilization values lexicographically to create an ordering of the plurality of equal-cost shortest paths.
[11]
11. Network element, according to claim 9, characterized by the fact that the shortest path search module is additionally configured to calculate the link utilization value for each link in the topology.
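One plausible way the shortest path search module could derive the per-link utilization value of claim 11 is to divide the load already routed over each link by the link's capacity; the sketch below assumes that definition and uses hypothetical demand figures.

```python
# Sketch of a per-link utilization value (claim 11), assuming
# utilization = offered load on the link / link capacity.

def link_utilization(placed_paths, demands, capacity):
    """placed_paths: {(src, dst): [node, ...]}, demands: {(src, dst): load}."""
    load = {}
    for pair, path in placed_paths.items():
        for link in zip(path, path[1:]):
            load[link] = load.get(link, 0.0) + demands.get(pair, 1.0)
    return {link: load[link] / capacity.get(link, 1.0) for link in load}

use = link_utilization({("A", "D"): ["A", "B", "D"]},
                       demands={("A", "D"): 10.0},
                       capacity={("A", "B"): 100.0, ("B", "D"): 40.0})
print(use)   # {('A', 'B'): 0.1, ('B', 'D'): 0.25}
```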
[12]
12. Network element, according to claim 9, characterized by the fact that the control processor is additionally configured to generate label switching paths (LSPs) to implement each of the selected shortest paths between pairs of nodes within the MPLS network.
[13]
13. Network element, according to claim 9, characterized by the fact that the load distribution module is additionally configured to receive a link modification factor from an administrator and to combine the link modification factor with the link utilization value to weight a corresponding link in a path, decreasing use of that link by decreasing the likelihood of selection, through the lexicographic ordering, of paths that traverse it.
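The link modification factor of claim 13 can be read as a multiplicative weight applied to a link's utilization value before the lexicographic ordering: a factor above one makes the link look more loaded, so paths through it rank worse and its use falls. The factor values below are hypothetical.

```python
# Sketch of an administrator-supplied link modification factor (claim 13).
# A factor above 1 inflates the link's apparent utilization, steering the
# lexicographic ordering away from paths that traverse it.

raw_utilization = {("A", "B"): 0.30, ("B", "D"): 0.30}
modification    = {("A", "B"): 2.0}             # operator wants to drain link A-B

weighted = {link: u * modification.get(link, 1.0)
            for link, u in raw_utilization.items()}
print(weighted)   # {('A', 'B'): 0.6, ('B', 'D'): 0.3} -> paths over A-B now rank worse
```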
[14]
14. Network element, according to claim 9, characterized in that the load distribution module is additionally configured to select the first subset from the plurality of equal cost shortest paths by selecting a highest-ranked and a lowest-ranked item in the first ordering of equal cost shortest paths.
[15]
15. Network element, according to claim 9, characterized in that the load distribution module is additionally configured to select the second subset from the plurality of equal cost shortest paths by selecting a highest-ranked and a lowest-ranked item, applying a tie-breaking decision process to the equal cost shortest paths having the lowest load.
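Claims 14 and 15 can be illustrated over an already-ranked list of equal cost shortest paths (least loaded first); the path names and the stand-in tie-break are assumptions for this sketch.

```python
# Sketch of the subset selection of claims 14-15 over a ranked list of equal
# cost shortest paths (least loaded first).  The ranking itself is assumed given.

ranked = ["A-E-D", "A-C-D", "A-F-D", "A-B-D"]     # illustrative path names

# First subset (claim 14): the highest- and lowest-ranked items of the ordering.
first_subset = [ranked[0], ranked[-1]]

# Second subset (claim 15): again a highest- and lowest-ranked item, taken from
# the least-loaded remaining paths; a deterministic rule stands in for the
# tie-breaking decision process when loads are equal.
remaining = [p for p in ranked if p not in first_subset]
second_subset = [remaining[0], remaining[-1]] if remaining else []

print(first_subset, second_subset)   # ['A-E-D', 'A-B-D'] ['A-C-D', 'A-F-D']
```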
[16]
16. Network element, according to claim 9, characterized by the fact that the classification module and the load distribution module are additionally configured to iteratively select additional subsets to share the load distribution with the first subset and the second subset.
[17]
17. Network element, according to claim 9, characterized in that the LDP module is additionally configured to generate a label mapping message that includes a forwarding equivalence class (FEC) type-length-value (TLV) field, a label TLV field, and a topology index, where the topology index indicates an iteration of the steps of selecting the first subset and the second subset, and is additionally configured to send the label mapping message to each label distribution protocol (LDP) peer in the MPLS network.
[18]
18. Network element, according to claim 17, characterized in that the LDP module is additionally configured to send label mapping messages to each LDP peer for each combination of topology index and FEC.
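The advertisement behaviour of claims 17-18 amounts to one label mapping message per LDP peer for every combination of topology index and FEC; the sketch below shows that loop with stand-in peer handling and label allocation rather than a real LDP implementation.

```python
# Sketch of the advertisement loop of claims 17-18: one label mapping message
# per LDP peer for every (topology index, FEC) combination.  Peer handling and
# label allocation are stand-ins, not a real LDP implementation.

def advertise(peers, fecs, topology_indexes, allocate_label, send):
    for peer in peers:
        for index in topology_indexes:
            for fec in fecs:
                send(peer, {"fec": fec,
                            "label": allocate_label(fec, index),
                            "topology_index": index})

labels = {}
def allocate(fec, index):
    return labels.setdefault((fec, index), 1000 + len(labels))

# Two peers, two FECs, two topology indexes -> eight messages.
advertise(["peer1", "peer2"], ["10.0.0.0/24", "10.0.1.0/24"], [1, 2],
          allocate, send=lambda peer, msg: print(peer, msg))
```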
Similar technologies:
Publication number | Publication date | Patent title
BR112013003488A2|2020-08-04|Automated traffic engineering for multi-protocol label switching (MPLS) using link as feedback in a decision mechanism
US8553584B2|2013-10-08|Automated traffic engineering for 802.1AQ based upon the use of link utilization as feedback into the tie breaking mechanism
US10541905B2|2020-01-21|Automatic optimal route reflector root address assignment to route reflector clients and fast failover in a network environment
US9210071B2|2015-12-08|Automated traffic engineering for fat tree networks
US8040906B2|2011-10-18|Utilizing betweenness to determine forwarding state in a routed network
US9479424B2|2016-10-25|Optimized approach to IS-IS LFA computation with parallel links
US9160651B2|2015-10-13|Metric biasing for bandwidth aware tie breaking
US8848509B2|2014-09-30|Three stage folded Clos optimization for 802.1aq
US20150016242A1|2015-01-15|Method and Apparatus for Optimized LFA Computations by Pruning Neighbor Shortest Path Trees
US9225629B2|2015-12-29|Efficient identification of node protection remote LFA target
US11218399B2|2022-01-04|Embedded area abstraction
WO2020186803A1|2020-09-24|Fault protection method, node, and storage medium
Patent family:
Publication number | Publication date
US20120057466A1|2012-03-08|
TW201215063A|2012-04-01|
US8553562B2|2013-10-08|
EP2614618A1|2013-07-17|
JP5985483B2|2016-09-06|
AU2011300438A1|2013-04-11|
JP2013539646A|2013-10-24|
EP2614618B1|2018-12-12|
CN103081416A|2013-05-01|
AU2011300438B2|2015-05-07|
TWI521924B|2016-02-11|
KR20130109132A|2013-10-07|
WO2012032426A1|2012-03-15|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

JP4150159B2|2000-03-01|2008-09-17|富士通株式会社|Transmission path control device, transmission path control method, and medium recording transmission path control program|
US7889681B2|2005-03-03|2011-02-15|Cisco Technology, Inc.|Methods and devices for improving the multiple spanning tree protocol|
US20070002770A1|2005-06-30|2007-01-04|Lucent Technologies Inc.|Mechanism to load balance traffic in an ethernet network|
US8687519B2|2006-06-27|2014-04-01|Telefonaktiebolaget L M Ericsson |Forced medium access control learning in bridged ethernet networks|
US20080159277A1|2006-12-15|2008-07-03|Brocade Communications Systems, Inc.|Ethernet over fibre channel|
JP4884525B2|2007-03-13|2012-02-29|富士通株式会社|Communication path control method, communication apparatus, and communication system|
JP4757826B2|2007-03-27|2011-08-24|Kddi株式会社|Communication path control device, program, and recording medium|
US8264970B2|2007-10-12|2012-09-11|Rockstar Bidco, LP|Continuity check management in a link state controlled Ethernet network|
US7990877B2|2008-05-15|2011-08-02|Telefonaktiebolaget L M Ericsson |Method and apparatus for dynamically runtime adjustable path computation|
US7733786B2|2008-05-15|2010-06-08|Telefonaktiebolaget L M Ericsson |Method and apparatus for performing a constraint shortest path first computation|
JP5004869B2|2008-05-22|2012-08-22|三菱電機株式会社|Network route selection method and communication system|
US20100157821A1|2008-12-18|2010-06-24|Morris Robert P|Methods, Systems, And Computer Program Products For Sending Data Units Based On A Measure Of Energy|
US20110194404A1|2010-02-11|2011-08-11|Nokia Siemens Networks Ethernet Solutions Ltd.|System and method for fast protection of dual-homed virtual private lan service spokes|
US8761022B2|2007-12-26|2014-06-24|Rockstar Consortium Us Lp|Tie-breaking in shortest path determination|
US7911944B2|2007-12-26|2011-03-22|Nortel Networks Limited|Tie-breaking in shortest path determination|
US9456054B2|2008-05-16|2016-09-27|Palo Alto Research Center Incorporated|Controlling the spread of interests and content in a content centric network|
US8923293B2|2009-10-21|2014-12-30|Palo Alto Research Center Incorporated|Adaptive multi-interface use for content networking|
WO2015011648A1|2013-07-24|2015-01-29|Telefonaktiebolaget L M Ericsson |Automated traffic engineering based upon the use of bandwidth and unequal cost path utilization|
US9160651B2|2013-07-24|2015-10-13|Telefonaktiebolaget L M Ericsson |Metric biasing for bandwidth aware tie breaking|
WO2013173900A1|2012-05-22|2013-11-28|Rockstar Bidco Lp|Tie-breaking in shortest path determination|
CN102857424B|2012-08-30|2015-04-15|杭州华三通信技术有限公司|Method and equipment for establishing LSPin MPLSnetwork|
US9280546B2|2012-10-31|2016-03-08|Palo Alto Research Center Incorporated|System and method for accessing digital content using a location-independent name|
US9400800B2|2012-11-19|2016-07-26|Palo Alto Research Center Incorporated|Data transport by named content synchronization|
US10430839B2|2012-12-12|2019-10-01|Cisco Technology, Inc.|Distributed advertisement insertion in content-centric networks|
US9584387B1|2013-03-15|2017-02-28|Google Inc.|Systems and methods of sending a packet in a packet-switched network through a pre-determined path to monitor network health|
US9978025B2|2013-03-20|2018-05-22|Cisco Technology, Inc.|Ordered-element naming for name-based packet forwarding|
US9935791B2|2013-05-20|2018-04-03|Cisco Technology, Inc.|Method and system for name resolution across heterogeneous architectures|
JP6217138B2|2013-05-22|2017-10-25|富士通株式会社|Packet transfer apparatus and packet transfer method|
US9185120B2|2013-05-23|2015-11-10|Palo Alto Research Center Incorporated|Method and system for mitigating interest flooding attacks in content-centric networks|
US9444722B2|2013-08-01|2016-09-13|Palo Alto Research Center Incorporated|Method and apparatus for configuring routing paths in a custodian-based routing architecture|
US20150109934A1|2013-10-23|2015-04-23|Paramasiviah HARSHAVARDHA|Internet protocol routing mehtod and associated architectures|
US9407549B2|2013-10-29|2016-08-02|Palo Alto Research Center Incorporated|System and method for hash-based forwarding of packets with hierarchically structured variable-length identifiers|
US9450858B2|2013-10-30|2016-09-20|Cisco Technology, Inc.|Standby bandwidth aware path computation|
US9282050B2|2013-10-30|2016-03-08|Palo Alto Research Center Incorporated|System and method for minimum path MTU discovery in content centric networks|
US9276840B2|2013-10-30|2016-03-01|Palo Alto Research Center Incorporated|Interest messages with a payload for a named data network|
US9401864B2|2013-10-31|2016-07-26|Palo Alto Research Center Incorporated|Express header for packets with hierarchically structured variable-length identifiers|
US9634938B2|2013-11-05|2017-04-25|International Business Machines Corporation|Adaptive scheduling of data flows in data center networks for efficient resource utilization|
US10101801B2|2013-11-13|2018-10-16|Cisco Technology, Inc.|Method and apparatus for prefetching content in a data stream|
US10129365B2|2013-11-13|2018-11-13|Cisco Technology, Inc.|Method and apparatus for pre-fetching remote content based on static and dynamic recommendations|
US9311377B2|2013-11-13|2016-04-12|Palo Alto Research Center Incorporated|Method and apparatus for performing server handoff in a name-based content distribution system|
US10089655B2|2013-11-27|2018-10-02|Cisco Technology, Inc.|Method and apparatus for scalable data broadcasting|
US9503358B2|2013-12-05|2016-11-22|Palo Alto Research Center Incorporated|Distance-based routing in an information-centric network|
US9479437B1|2013-12-20|2016-10-25|Google Inc.|Efficient updates of weighted cost multipathgroups|
US9397957B2|2013-12-23|2016-07-19|Google Inc.|Traffic engineering for large scale data center networks|
US9166887B2|2013-12-26|2015-10-20|Telefonaktiebolaget L M Ericsson |Multicast convergence|
US9379979B2|2014-01-14|2016-06-28|Palo Alto Research Center Incorporated|Method and apparatus for establishing a virtual interface for a set of mutual-listener devices|
US10098051B2|2014-01-22|2018-10-09|Cisco Technology, Inc.|Gateways and routing in software-defined manets|
US10172068B2|2014-01-22|2019-01-01|Cisco Technology, Inc.|Service-oriented routing in software-defined MANETs|
US9374304B2|2014-01-24|2016-06-21|Palo Alto Research Center Incorporated|End-to end route tracing over a named-data network|
US9531679B2|2014-02-06|2016-12-27|Palo Alto Research Center Incorporated|Content-based transport security for distributed producers|
US9954678B2|2014-02-06|2018-04-24|Cisco Technology, Inc.|Content-based transport security|
CN103825818B|2014-02-14|2018-07-31|新华三技术有限公司|A kind of more topological network retransmission methods and device|
US9678998B2|2014-02-28|2017-06-13|Cisco Technology, Inc.|Content name resolution for information centric networking|
US10089651B2|2014-03-03|2018-10-02|Cisco Technology, Inc.|Method and apparatus for streaming advertisements in a scalable data broadcasting system|
US9836540B2|2014-03-04|2017-12-05|Cisco Technology, Inc.|System and method for direct storage access in a content-centric network|
US9473405B2|2014-03-10|2016-10-18|Palo Alto Research Center Incorporated|Concurrent hashes and sub-hashes on data streams|
US9391896B2|2014-03-10|2016-07-12|Palo Alto Research Center Incorporated|System and method for packet forwarding using a conjunctive normal form strategy in a content-centric network|
US9626413B2|2014-03-10|2017-04-18|Cisco Systems, Inc.|System and method for ranking content popularity in a content-centric network|
US9407432B2|2014-03-19|2016-08-02|Palo Alto Research Center Incorporated|System and method for efficient and secure distribution of digital content|
US9916601B2|2014-03-21|2018-03-13|Cisco Technology, Inc.|Marketplace for presenting advertisements in a scalable data broadcasting system|
US9363179B2|2014-03-26|2016-06-07|Palo Alto Research Center Incorporated|Multi-publisher routing protocol for named data networks|
US9363086B2|2014-03-31|2016-06-07|Palo Alto Research Center Incorporated|Aggregate signing of data in content centric networking|
US9716622B2|2014-04-01|2017-07-25|Cisco Technology, Inc.|System and method for dynamic name configuration in content-centric networks|
US9473576B2|2014-04-07|2016-10-18|Palo Alto Research Center Incorporated|Service discovery using collection synchronization with exact names|
US10075521B2|2014-04-07|2018-09-11|Cisco Technology, Inc.|Collection synchronization using equality matched network names|
US9390289B2|2014-04-07|2016-07-12|Palo Alto Research Center Incorporated|Secure collection synchronization using matched network names|
US9451032B2|2014-04-10|2016-09-20|Palo Alto Research Center Incorporated|System and method for simple service discovery in content-centric networks|
US9413707B2|2014-04-11|2016-08-09|ACR Development, Inc.|Automated user task management|
US8942727B1|2014-04-11|2015-01-27|ACR Development, Inc.|User Location Tracking|
US9413668B2|2014-04-23|2016-08-09|Dell Products L.P.|Systems and methods for load-balancing in a data center|
US9203885B2|2014-04-28|2015-12-01|Palo Alto Research Center Incorporated|Method and apparatus for exchanging bidirectional streams over a content centric network|
US9992281B2|2014-05-01|2018-06-05|Cisco Technology, Inc.|Accountable content stores for information centric networks|
US9609014B2|2014-05-22|2017-03-28|Cisco Systems, Inc.|Method and apparatus for preventing insertion of malicious content at a named data network router|
US9455835B2|2014-05-23|2016-09-27|Palo Alto Research Center Incorporated|System and method for circular link resolution with hash-based names in content-centric networks|
US9276751B2|2014-05-28|2016-03-01|Palo Alto Research Center Incorporated|System and method for circular link resolution with computable hash-based names in content-centric networks|
US9516144B2|2014-06-19|2016-12-06|Palo Alto Research Center Incorporated|Cut-through forwarding of CCNx message fragments with IP encapsulation|
US9537719B2|2014-06-19|2017-01-03|Palo Alto Research Center Incorporated|Method and apparatus for deploying a minimal-cost CCN topology|
US9467377B2|2014-06-19|2016-10-11|Palo Alto Research Center Incorporated|Associating consumer states with interests in a content-centric network|
US9426113B2|2014-06-30|2016-08-23|Palo Alto Research Center Incorporated|System and method for managing devices over a content centric network|
US9699198B2|2014-07-07|2017-07-04|Cisco Technology, Inc.|System and method for parallel secure content bootstrapping in content-centric networks|
US9621354B2|2014-07-17|2017-04-11|Cisco Systems, Inc.|Reconstructable content objects|
US9959156B2|2014-07-17|2018-05-01|Cisco Technology, Inc.|Interest return control message|
US9590887B2|2014-07-18|2017-03-07|Cisco Systems, Inc.|Method and system for keeping interest alive in a content centric network|
US9729616B2|2014-07-18|2017-08-08|Cisco Technology, Inc.|Reputation-based strategy for forwarding and responding to interests over a content centric network|
US9535968B2|2014-07-21|2017-01-03|Palo Alto Research Center Incorporated|System for distributing nameless objects using self-certifying names|
US9882964B2|2014-08-08|2018-01-30|Cisco Technology, Inc.|Explicit strategy feedback in name-based forwarding|
US9729662B2|2014-08-11|2017-08-08|Cisco Technology, Inc.|Probabilistic lazy-forwarding technique without validation in a content centric network|
US9503365B2|2014-08-11|2016-11-22|Palo Alto Research Center Incorporated|Reputation-based instruction processing over an information centric network|
US9391777B2|2014-08-15|2016-07-12|Palo Alto Research Center Incorporated|System and method for performing key resolution over a content centric network|
US9467492B2|2014-08-19|2016-10-11|Palo Alto Research Center Incorporated|System and method for reconstructable all-in-one content stream|
US9800637B2|2014-08-19|2017-10-24|Cisco Technology, Inc.|System and method for all-in-one content stream in content-centric networks|
US9497282B2|2014-08-27|2016-11-15|Palo Alto Research Center Incorporated|Network coding for content-centric network|
US10204013B2|2014-09-03|2019-02-12|Cisco Technology, Inc.|System and method for maintaining a distributed and fault-tolerant state over an information centric network|
US9553812B2|2014-09-09|2017-01-24|Palo Alto Research Center Incorporated|Interest keep alives at intermediate routers in a CCN|
US10069933B2|2014-10-23|2018-09-04|Cisco Technology, Inc.|System and method for creating virtual interfaces based on network characteristics|
US9590948B2|2014-12-15|2017-03-07|Cisco Systems, Inc.|CCN routing using hardware-assisted hash tables|
US9536059B2|2014-12-15|2017-01-03|Palo Alto Research Center Incorporated|Method and system for verifying renamed content using manifests in a content centric network|
US10237189B2|2014-12-16|2019-03-19|Cisco Technology, Inc.|System and method for distance-based interest forwarding|
US9846881B2|2014-12-19|2017-12-19|Palo Alto Research Center Incorporated|Frugal user engagement help systems|
US10003520B2|2014-12-22|2018-06-19|Cisco Technology, Inc.|System and method for efficient name-based content routing using link-state information in information-centric networks|
US9473475B2|2014-12-22|2016-10-18|Palo Alto Research Center Incorporated|Low-cost authenticated signing delegation in content centric networking|
US9660825B2|2014-12-24|2017-05-23|Cisco Technology, Inc.|System and method for multi-source multicasting in content-centric networks|
US9832291B2|2015-01-12|2017-11-28|Cisco Technology, Inc.|Auto-configurable transport stack|
US9946743B2|2015-01-12|2018-04-17|Cisco Technology, Inc.|Order encoded manifests in a content centric network|
US9916457B2|2015-01-12|2018-03-13|Cisco Technology, Inc.|Decoupled name security binding for CCN objects|
US9954795B2|2015-01-12|2018-04-24|Cisco Technology, Inc.|Resource allocation using CCN manifests|
US9602596B2|2015-01-12|2017-03-21|Cisco Systems, Inc.|Peer-to-peer sharing in a content centric network|
US9462006B2|2015-01-21|2016-10-04|Palo Alto Research Center Incorporated|Network-layer application-specific trust model|
US9552493B2|2015-02-03|2017-01-24|Palo Alto Research Center Incorporated|Access control framework for information centric networking|
US10333840B2|2015-02-06|2019-06-25|Cisco Technology, Inc.|System and method for on-demand content exchange with adaptive naming in information-centric networks|
US10075401B2|2015-03-18|2018-09-11|Cisco Technology, Inc.|Pending interest table behavior|
US10116605B2|2015-06-22|2018-10-30|Cisco Technology, Inc.|Transport stack name scheme and identity management|
US10075402B2|2015-06-24|2018-09-11|Cisco Technology, Inc.|Flexible command and control in content centric networks|
US10701038B2|2015-07-27|2020-06-30|Cisco Technology, Inc.|Content negotiation in a content centric network|
US9986034B2|2015-08-03|2018-05-29|Cisco Technology, Inc.|Transferring state in content centric network stacks|
US10610144B2|2015-08-19|2020-04-07|Palo Alto Research Center Incorporated|Interactive remote patient monitoring and condition management intervention system|
US10530692B2|2015-09-04|2020-01-07|Arista Networks, Inc.|Software FIB ARP FEC encoding|
US9832123B2|2015-09-11|2017-11-28|Cisco Technology, Inc.|Network named fragments in a content centric network|
US10355999B2|2015-09-23|2019-07-16|Cisco Technology, Inc.|Flow control with network named fragments|
US10313227B2|2015-09-24|2019-06-04|Cisco Technology, Inc.|System and method for eliminating undetected interest looping in information-centric networks|
US9977809B2|2015-09-24|2018-05-22|Cisco Technology, Inc.|Information and data framework in a content centric network|
US10454820B2|2015-09-29|2019-10-22|Cisco Technology, Inc.|System and method for stateless information-centric networking|
US10263965B2|2015-10-16|2019-04-16|Cisco Technology, Inc.|Encrypted CCNx|
US9794238B2|2015-10-29|2017-10-17|Cisco Technology, Inc.|System for key exchange in a content centric network|
US9807205B2|2015-11-02|2017-10-31|Cisco Technology, Inc.|Header compression for CCN messages using dictionary|
US10009446B2|2015-11-02|2018-06-26|Cisco Technology, Inc.|Header compression for CCN messages using dictionary learning|
US10021222B2|2015-11-04|2018-07-10|Cisco Technology, Inc.|Bit-aligned header compression for CCN messages using dictionary|
US10097521B2|2015-11-20|2018-10-09|Cisco Technology, Inc.|Transparent encryption in a content centric network|
US9912776B2|2015-12-02|2018-03-06|Cisco Technology, Inc.|Explicit content deletion commands in a content centric network|
US10097346B2|2015-12-09|2018-10-09|Cisco Technology, Inc.|Key catalogs in a content centric network|
US10078062B2|2015-12-15|2018-09-18|Palo Alto Research Center Incorporated|Device health estimation by combining contextual information with sensor data|
CN105516328A|2015-12-18|2016-04-20|浪潮电子信息产业有限公司|Dynamic load balancing method and system, and devices used for distributed storage system|
US10257271B2|2016-01-11|2019-04-09|Cisco Technology, Inc.|Chandra-Toueg consensus in a content centric network|
US9949301B2|2016-01-20|2018-04-17|Palo Alto Research Center Incorporated|Methods for fast, secure and privacy-friendly internet connection discovery in wireless networks|
US10305864B2|2016-01-25|2019-05-28|Cisco Technology, Inc.|Method and system for interest encryption in a content centric network|
CN105721307A|2016-02-19|2016-06-29|华为技术有限公司|Multipath message forwarding method and device|
US10043016B2|2016-02-29|2018-08-07|Cisco Technology, Inc.|Method and system for name encryption agreement in a content centric network|
US10742596B2|2016-03-04|2020-08-11|Cisco Technology, Inc.|Method and system for reducing a collision probability of hash-based names using a publisher identifier|
US10038633B2|2016-03-04|2018-07-31|Cisco Technology, Inc.|Protocol to query for historical network information in a content centric network|
US10003507B2|2016-03-04|2018-06-19|Cisco Technology, Inc.|Transport session state protocol|
US10051071B2|2016-03-04|2018-08-14|Cisco Technology, Inc.|Method and system for collecting historical network information in a content centric network|
US9832116B2|2016-03-14|2017-11-28|Cisco Technology, Inc.|Adjusting entries in a forwarding information base in a content centric network|
US10212196B2|2016-03-16|2019-02-19|Cisco Technology, Inc.|Interface discovery and authentication in a name-based network|
US10067948B2|2016-03-18|2018-09-04|Cisco Technology, Inc.|Data deduping in content centric networking manifests|
US10091330B2|2016-03-23|2018-10-02|Cisco Technology, Inc.|Interest scheduling by an information and data framework in a content centric network|
US10033639B2|2016-03-25|2018-07-24|Cisco Technology, Inc.|System and method for routing packets in a content centric network using anonymous datagrams|
US10320760B2|2016-04-01|2019-06-11|Cisco Technology, Inc.|Method and system for mutating and caching content in a content centric network|
US9930146B2|2016-04-04|2018-03-27|Cisco Technology, Inc.|System and method for compressing content centric networking messages|
US10425503B2|2016-04-07|2019-09-24|Cisco Technology, Inc.|Shared pending interest table in a content centric network|
US10027578B2|2016-04-11|2018-07-17|Cisco Technology, Inc.|Method and system for routable prefix queries in a content centric network|
US10404450B2|2016-05-02|2019-09-03|Cisco Technology, Inc.|Schematized access control in a content centric network|
US10320675B2|2016-05-04|2019-06-11|Cisco Technology, Inc.|System and method for routing packets in a stateless content centric network|
US10547589B2|2016-05-09|2020-01-28|Cisco Technology, Inc.|System for implementing a small computer systems interface protocol over a content centric network|
US10084764B2|2016-05-13|2018-09-25|Cisco Technology, Inc.|System for a secure encryption proxy in a content centric network|
US10063414B2|2016-05-13|2018-08-28|Cisco Technology, Inc.|Updating a transport stack in a content centric network|
US10103989B2|2016-06-13|2018-10-16|Cisco Technology, Inc.|Content object return messages in a content centric network|
US10305865B2|2016-06-21|2019-05-28|Cisco Technology, Inc.|Permutation-based content encryption with manifests in a content centric network|
US10148572B2|2016-06-27|2018-12-04|Cisco Technology, Inc.|Method and system for interest groups in a content centric network|
US10009266B2|2016-07-05|2018-06-26|Cisco Technology, Inc.|Method and system for reference counted pending interest tables in a content centric network|
US9992097B2|2016-07-11|2018-06-05|Cisco Technology, Inc.|System and method for piggybacking routing information in interests in a content centric network|
US10122624B2|2016-07-25|2018-11-06|Cisco Technology, Inc.|System and method for ephemeral entries in a forwarding information base in a content centric network|
US10027571B2|2016-07-28|2018-07-17|Hewlett Packard Enterprise Development Lp|Load balancing|
US10069729B2|2016-08-08|2018-09-04|Cisco Technology, Inc.|System and method for throttling traffic based on a forwarding information base in a content centric network|
US10956412B2|2016-08-09|2021-03-23|Cisco Technology, Inc.|Method and system for conjunctive normal form attribute matching in a content centric network|
US10033642B2|2016-09-19|2018-07-24|Cisco Technology, Inc.|System and method for making optimal routing decisions based on device-specific parameters in a content centric network|
US10212248B2|2016-10-03|2019-02-19|Cisco Technology, Inc.|Cache management on high availability routers in a content centric network|
US10447805B2|2016-10-10|2019-10-15|Cisco Technology, Inc.|Distributed consensus in a content centric network|
US10135948B2|2016-10-31|2018-11-20|Cisco Technology, Inc.|System and method for process migration in a content centric network|
US10243851B2|2016-11-21|2019-03-26|Cisco Technology, Inc.|System and method for forwarder connection information in a content centric network|
CN111492624A|2017-10-23|2020-08-04|西门子股份公司|Method and control system for controlling and/or monitoring a device|
US10469372B2|2018-01-09|2019-11-05|Cisco Technology, Inc.|Segment-routing multiprotocol label switching end-to-end dataplane continuity|
Legal status:
2020-08-18| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2020-08-18| B15K| Others concerning applications: alteration of classification|Free format text: THE PREVIOUS CLASSIFICATION WAS: H04L 12/56 Ipc: H04L 12/729 (2013.01), H04L 12/723 (2013.01) |
2020-08-25| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-12-08| B11B| Dismissal acc. art. 36, par 1 of ipl - no reply within 90 days to fullfil the necessary requirements|
2021-11-03| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application number | Filing date | Patent title
US12/877,830|US8553562B2|2010-09-08|Automated traffic engineering for multi-protocol label switching (MPLS) with link utilization as feedback into the tie-breaking mechanism|
US12/877,830|2010-09-08|
PCT/IB2011/053493|WO2012032426A1|2011-08-04|Automated traffic engineering for multi-protocol label switching (MPLS) with link utilization as feedback into the tie-breaking mechanism|