CUSTOMIZED ROUTING PROTOCOL

 

Introduction

This document deals with the preliminary implementation details, COTS protocol suite selection details, and the items still to be discussed.

Requirements

We are considering the following requirements for the simulator and emulator development work and for porting the result into the router.

Problem Statement

Routing protocols used in commercial routers suffer from long convergence times, and they are not suited to deal with the "flapping" links that are common in the Tactical Battle Area (TBA). The objective is to overcome the limitations of the commercial routing protocol suite; the result needs to be validated on a combined simulator/emulator platform and then ported to the BEL router (the BEL router is assumed to be an indigenous router).

Scope

OSPFv2 will be selected for simulation. OSPF along with other extensions such as MDR, OSPF-TE (RFC 3630) and OSPF with Opaque LSA (RFC 5250) needs to be explored further to optimize convergence time. It has been made clear that OSPF in its current form is not suitable in a tactical environment. Nodes are all semi-static and mobile in nature, and nodes communicate with each other in static mode only. OSPFv2 has to be benchmarked against EIGRP for the given topologies.

Network convergence is the process of synchronizing network forwarding tables after a topology change. It is governed by the formula

Convergence = Failure_Detection_Time + Event_Propagation_Time + SPF_Run_Time + RIB_FIB_Update_Time
The suggested formula reflects the fact that the convergence time for a link-state protocol is the sum of the following components:
Time to detect the network failure, e.g. interface down condition.
Time to propagate the event, i.e. flood the LSA across the topology.
Time to perform SPF calculations on all routers upon reception of the new information.
Time to update the forwarding tables for all routers in the area.
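As a minimal illustration (the structure and names below are ours, for exposition; they are not taken from any router implementation), the convergence budget can be expressed directly in C:

/* Convergence time as the sum of its four components, all in milliseconds. */
struct convergence_budget {
    double failure_detection_ms;   /* e.g. interface-down or Hello dead interval */
    double event_propagation_ms;   /* LSA flooding across the topology */
    double spf_run_ms;             /* SPF calculation on every router */
    double rib_fib_update_ms;      /* RIB/FIB update in the area */
};

static double convergence_ms(const struct convergence_budget *b) {
    return b->failure_detection_ms + b->event_propagation_ms
         + b->spf_run_ms + b->rib_fib_update_ms;
}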
The convergence should not merely be "fast"; it should be "optimally fast", and it has to utilize damping or a lag/wait to accumulate multiple failures within a window of time.
The OSPF Fast Hello Packets feature allows the OSPF network to achieve faster convergence. This feature allows us to detect lost neighbors within 1 second; the feature will be explored in the simulator.
The optimal value for the Hello interval that leads to fast failure detection in the network, while keeping the false-alarm occurrence within acceptable limits, will be derived from the simulation. OSPF equal-cost load balancing will be used, and unequal-cost load balancing should also be handled. Link-flap scenarios, loss-of-link indications raised as alarms, and critical/non-critical alarms should be derived from SNMP or from directly measured parameters. Non-available links should be displayed in the NMS. Alarm levels, false alarms and panic levels will be given as input from BEL. Given the discussed failure-detection mechanisms at the physical/logical layer, the simulator has to deal with the following delays:
Standard Configurable Delays
Router-specific Delays
BW (Bandwidth), delay, load and utilization are used as configurable parameters to arrive at suitable metrics.
Monitor and display the following: the effect of network connectivity in a backbone area and the effect of network connectivity in a leaf area.
The following events need to be monitored, apart from the standard topology change events: New Node, Link Down, Link Up, Link Cost Increase, Link Cost Decrease, Aged LSA, convergence time (min, max, average) and protocol overhead.
Apart from convergence time as a goal, we also have provisions in our design to observe traffic loss and quickly map its characteristics; this is achievable through bulk statistics.
The work entails developing a simulator to validate the custom routing protocols in the following sequence:
Phase I: Simulator Development
The behavioral characteristics of the following routers (Cisco 3845, and Juniper M10i and J4350) need to be supported in both homogeneous and heterogeneous modes. Please refer to the section "Open source routing software suite" of the technical specifications. Results derived from the modeled and framed simulator (BEL has to suggest the simulator to be used: OPNET/NS-3/GloMoSim & PARSEC) need to be cross-verified with another simulator; please refer to the section "Modeling Behavioral difference" of the technical specifications.
Phase II: Simulator Outcome Optimization
Kindly refer to the section "Real packet processing times and Scenarios vs Packet Delay Variations" of the technical specifications. The metrics need to be validated in this phase.
Note 1: BEL has to suggest whether it is enough to validate the metrics by simulating the model behaviors across two different simulators and opting for the one that is closer to reality; in parallel, we should attempt statistics collection from the physical routers with the model configurations configured on them.
Note 2: We assume that no changes or modifications are required as of now to the BFD protocol (RFC5882) or to the IP Fast Reroute framework. IP-FRR is an option for optimizing network throughput.
Emulator Design
The HW emulator is an environment that can create virtual router instances; VRRP (Virtual Router Redundancy Protocol, RFC3768) can be used. This also helps to provide hardware redundancy of the L3 routing functionality. Note: BEL will clarify this at the PDR stage.
Solutions approach
OSPF packet procedure
Shown below is the procedure to check the OSPF packet flow

OPNET with emulator interface
When the emulator is started, the OPNET simulator is first initialized, and then the main OPNET processing thread essentially hands over control to the real-time control loop in the emulator. The emulator DLL should initialize its own processing threads, and from that moment on the emulator controls both the emulation and the simulation. The emulator is responsible for injecting messages into the OPNET simulation and advancing OPNET's simulation clock in discrete increments. As the simulation progresses, OPNET processes the discrete events that occur within each time increment.

OSPFd – OSPFv2 routing engine
An OSPFv2 routing engine that works with the Quagga routing suite. The approach mentioned here is suggested for implementation.
OSPF processing upon LSU receipt
Routing convergence is the time needed for all routers in the network to agree on the network topology after the topology has changed, either because of a failure or because of a planned maintenance operation. This process can be divided into three parts: failure detection, flooding, and route calculation.
Failure detection: A failure can be detected by a link-layer protocol or by the Hello protocol. In most cases failure detection should be based on link-layer mechanisms, because they are typically much faster. The Hello protocol can also be useful in a situation where the router goes down but its interfaces remain up. The Hello detection mechanism is based on frequent exchanges of Hello packets between neighbors: adjacent routers send Hello packets to each other every HelloInterval seconds (typically 10 seconds).
Flooding: A Link State Advertisement (LSA), which provides information about all subnets of the network directly connected to the advertising router, is generated. Before transmission, the LSA is encapsulated into a Link State Update (LSU) packet together with other LSAs that may be waiting for transmission. OSPF determines whether an LSA is new or a duplicate by examining the Link State Database (LSDB), which contains all previously received LSAs. It compares the received LSA with the LSAs stored in the database using the LSA's Link State ID, LS type and Advertising Router fields.
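As a rough C sketch of this duplicate check (the struct and field names are illustrative, not Quagga's), an LSA instance is identified by the three fields just mentioned; whether a matching instance is newer is then decided separately, by sequence number, checksum and age:

#include <stdint.h>

struct lsa_key {
    uint8_t  ls_type;     /* LS type */
    uint32_t ls_id;       /* Link State ID */
    uint32_t adv_router;  /* Advertising Router */
};

/* Nonzero if the received LSA refers to the same LSA as a database entry. */
int lsa_same_identity(const struct lsa_key *rx, const struct lsa_key *db) {
    return rx->ls_type == db->ls_type &&
           rx->ls_id == db->ls_id &&
           rx->adv_router == db->adv_router;
}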
Route calculation: When a new LSA is received, the router schedules an SPF calculation in order to recalculate the Shortest Path Tree (SPT), which represents the set of shortest paths to all other routers in the network. After a successful SPF calculation the router has to update its routing table, called the Routing Information Base (RIB), and install all the routes in its Forwarding Information Base (FIB).
Timers: The convergence process is additionally delayed by several vendor-specific timers. A network card driver in a router waits for the Carrier Delay before bringing the interface down and starting OSPF convergence.
Network Transport Simulation: System architecture
Simulation of network transport and its system architecture is given below.
Hardware simulators are provided to emulate the software and hardware activities executed on the simulated devices. Each hardware simulator runs a state machine for the hardware of a single device.
The network event handler is provided to act as the central event-processing unit for the network simulator. The network event handler receives events from each software emulator, each hardware simulator and each end station simulator.
The end station stub is provided for forwarding messages between an end station simulator and the network event
handler. One instance of an end station stub is provided for every instance of an end station simulator provided in
the network. The end station stub can communicate with an end station simulator through a communication link.
An error injector is provided for simulating various error conditions in the simulated network such as dropped
packets, erroneous packets, nodes failing, links failing and card removal. The error injector communicates error
requests from the GUI or scripting interface through the user event handler to the network event handler and the
topology manager.
A diagnostics module is provided to allow the user to view internal data structures in the simulator system. The
diagnostics module retrieves data maintained by other portions of the system, such as the hardware simulators, and
passes data to the GUI.
Each emulator system will be provided with a Hardware Interface Layer (HIL) interface through which all interactions with the hardware are passed; this permits interactions with the real hardware to be replaced with interactions with the simulated hardware.
OSPF FLAP DAMPING ALGORITHM
The current OSPF fixed-timer damping function is not "flap tolerant", as each flap may lead to a few seconds of lost connectivity. Moreover, the MinLSInterval and MinLSArrival timers can sometimes adversely affect network convergence. When OSPF advertises and computes SPF based upon incorrect information, there are lower chances of switching to an alternative route in the table even if one exists.
We maintain a figure of merit named penalty for each route, which is incremented each time the route flaps; this gives the degree of instability of that route. The penalty is increased by a constant DefaultPenalty when a Down event occurs for the route. For routes that are flapping, it is their availability that is kept suppressed, rather than their unavailability.
When the penalty goes beyond a configured Suppress value, the route is kept from being advertised. During a period without flaps, the penalty decays exponentially at a rate such that it is halved every configured Half-life seconds. When the penalty falls below a configured Reuse value, the route that was suppressed is advertised again and the router originates an AS-External-LSA with age set to 0.
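A minimal C sketch of this penalty bookkeeping follows; the constants and function names are illustrative defaults of ours, not values fixed by the algorithm:

#include <math.h>
#include <time.h>

#define DEFAULT_PENALTY 1000.0   /* added on each Down event */
#define SUPPRESS        2000.0   /* stop advertising above this penalty */
#define REUSE            750.0   /* re-advertise below this penalty */
#define HALF_LIFE         15.0   /* seconds for the penalty to halve */

struct route_damp {
    double penalty;
    time_t last_update;
    int    suppressed;
};

/* Exponential decay: the penalty halves every HALF_LIFE seconds without a flap. */
static void damp_decay(struct route_damp *d, time_t now) {
    d->penalty *= pow(0.5, difftime(now, d->last_update) / HALF_LIFE);
    d->last_update = now;
}

/* Called on a Down event for the route. */
void damp_on_flap(struct route_damp *d, time_t now) {
    damp_decay(d, now);
    d->penalty += DEFAULT_PENALTY;
    if (d->penalty > SUPPRESS)
        d->suppressed = 1;      /* the route's availability is withheld */
}

/* Called periodically; returns 1 when the suppressed route may be advertised
   again (the router then originates an AS-External-LSA with age 0). */
int damp_reuse_check(struct route_damp *d, time_t now) {
    damp_decay(d, now);
    if (d->suppressed && d->penalty < REUSE) {
        d->suppressed = 0;
        return 1;
    }
    return 0;
}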
Protocol Proxy scheme
In this scheme we propose a protocol proxy that emulates the OSPF protocol in the network emulator and meets our objectives. OSPF protocol emulation is achieved by combining the protocol proxy that we introduce here with an OSPF peer state manager based on existing OSPF protocol software. The protocol proxy produces OSPF packets carrying link state advertisements with customized extensions, while the OSPF peer state manager realizes neighbor establishment via the proxy.
The protocol proxy has two functions: to rewrite OSPF packets originated by the OSPF peer state manager, and to generate OSPF packets informing the TE server of the updated topology. Implementation of the customized OSPF extensions requires modification of only the protocol proxy software; the existing OSPF software for the OSPF peer state manager is not touched. This makes the implementation of the OSPF emulation easy and flexible.
The protocol proxy obtains the network topology information from the resource simulator, which is managed in a
centralized manner. This reduces the amount of processing resources required and is scalable in terms of network
size. We develop a prototype of the network emulator that includes OSPF protocol emulation via the protocol proxy
scheme. The effectiveness of the protocol proxy scheme is confirmed by an experiment on a network with 100
nodes.
Network emulation: Protocol Proxy based scheme
A device that emulates an actual network’s behavior is needed to confirm the validity and performance of the
developed TE server. This device is called a network emulator. The architecture of a network emulator for network management is presented below.
The network emulator mainly consists of three modules: a router interface module, a resource simulator, and a traffic generator. The router interface module supports several protocols, such as telnet, SNMP, OSPF. Each router
interface in the network is implemented as a virtual node in the router interface module. The resource simulator
module simulates the network resources based on requests of path setups and releases triggered by the TE server via
the router interface module; it judges if the requests can be accepted. The traffic generator creates traffic
information, which is retrieved from the TE server via SNMP, such as traffic volume passing through each link
interface. The network emulator provides an experimental environment for Network management and allows a
variety of network control actions to be examined under the various traffic characteristics expected of a large-scale
network.
The network emulator uses the same router interfaces to communicate with the TE server as the actual network, and behaves as an actual network between the interfaces. Moreover, the network emulator provides a variety of customizable traffic environments. If a test is conducted on an actual network, as in the figure shown above, its cost is excessive, and it is difficult to generate realistic traffic due to its unpredictability. The TE server uses OSPF to collect the topology and resource information. Each OSPF router exchanges information with neighboring OSPF routers. The TE server running OSPF has a neighbor relationship with at least one router in the network, from which it gets the necessary information. Note that the TE server does not need to have an OSPF neighbor relationship with all routers in the network. Moreover, it sets up or releases a path via CLI in response to a request.
As shown above, the test with the network emulator was conducted on the network emulator, which behaves like the actual network under test. The TE server has two main functions. First, it collects information on the network topology, network resources and traffic. Second, it controls elements such as network paths, links and bandwidth according to the traffic demands, traffic characteristics and quality-of-service requirements. The network emulator mainly consists of three modules, namely a router interface module, a resource simulator module and a traffic generator module, plus several databases.
The databases are one TE database (DB) for each service network, one TE DB for the optical network, and the
traffic DB. The functions of these modules for the network emulator are as follows. The router interface module
communicates with the TE server and replicates the behavior of one or more routers. This module consists of N sub-modules for Command Line Interface (CLI)/SNMP, where N is the number of routers in the emulated service network, and one sub-module for OSPF.
Sub-modules for CLI/SNMP, which are denoted as virtual nodes 1,…, N, are prepared for all routers, which
correspond to the emulated network. These sub-modules communicate with the TE server, to update the router
configurations including path setup and release information and to collect traffic information. Virtual node R uses
OSPF to exchange topology information with the TE server. If the topology information is updated, the resource
simulator notifies the update to virtual node R, which then sends it to the TE server. Since the TE server uses OSPF
to get the topology information from virtual node R, the OSPF configuration from virtual nodes 1 to N is not
required. Virtual node R behaves as a neighbor node in the actual network. Although each router runs OSPF in the actual network, it does not communicate with the TE server via OSPF; only its neighboring router does. Therefore, in the network emulator, OSPF communication between the TE server and virtual node R is enough. Using virtual node R reduces the complexity of the resource simulator. If all virtual nodes spoke OSPF, the resource simulator would have to control all virtual nodes; if the network emulator must emulate a large-scale network, this would become too complex. It is more practical if the resource simulator controls only virtual node R, since R aggregates the OSPF information.
The resource simulator module emulates the status of the emulated network, such as path setup and release and resource management, so as to ensure compliance with the protocols. When the TE server requests a path setup via CLI to the router interface module, the resource simulator module judges whether the path setup request should be accepted or rejected according to the currently available resources, the requested bandwidth, and the route. The TE databases are used by the resource simulator to keep the topology information up to date.
The traffic generator module generates traffic information to reflect various traffic characteristics with consideration
given to the unpredictability of traffic fluctuations. The information so generated is used by the resource simulator
and the TE server via SNMP. The generated traffic information is kept in the traffic DB which is managed by the
traffic generator.
Each virtual node communicates with the resource simulator module and traffic generator modules. In addition, the
resource simulator module communicates with the traffic generator module. To make the network emulator scalable,
these communications are performed via TCP/IP. Therefore, the proposed architecture allows modules/sub-modules
to be apportioned among different computers, so network emulation can be performed in a distributed manner. This
makes the network emulator scalable in terms of network size.
Requirements: OSPF EMULATION
The network emulator is mainly used to test the functioning of the TE server, as an alternative to creating an actual network as the test environment. OSPF protocol emulation must supply the network topology and resource information needed to test the functioning of the TE server. OSPF protocol emulation is provided by virtual node R in the network emulator. Virtual node R obtains the updated topology information from the resource simulator and
communicates with the TE server using OSPF. For OSPF protocol emulation there are three main requirements. First, customized OSPF extensions should be easily supported. Second, processing resources should be used efficiently to allow the emulation of large-scale networks. Third, the protocol emulation software should be implementable in a flexible and timely manner. The key OSPF functions are neighbor establishment with another OSPF peer and link state advertisement.
To emulate the OSPF protocol there are two conventional approaches: one is to modify an existing OSPF emulator, which may be a commercial one, and the other is to modify existing OSPF software. In the first approach, an existing commercial OSPF emulator, which has limited interfaces, supports various networks and some extensions through pre-defined configurations. Shown below is a schematic view of the approach. Whenever the network status changes, the configurations need to be updated. However, existing commercial OSPF emulators do not easily accept the addition of customized extensions that they do not already support. In addition, as the interfaces that existing commercial OSPF emulators make available for updating network configurations are limited, it is difficult to ensure integration with the other modules in our network emulator. In the network emulator, the resource simulator module requires virtual node R to access the configuration file via TCP/IP.
Alternate approach
In the alternate approach, an OSPF peer that uses existing OSPF software, for example Quagga, runs at each emulated router. Shown below is a schematic view of the alternate approach.
As processing resources are required in proportion to the number of nodes in the emulated network, this approach is not scalable in terms of network size. In addition, to add customized OSPF extensions we would have to modify the existing OSPF protocol software, which requires substantial development effort including software debugging.
PROTOCOL PROXY based scheme
We propose a protocol proxy based scheme that meets our requirements. The figure shown is a schematic view of the protocol proxy based scheme.
This scheme achieves OSPF protocol emulation by combining the protocol proxy that we introduced and an OSPF
peer state manager based on existing OSPF protocol software which remains unmodified. The protocol proxy is
inserted between the TE server and the OSPF peer state manager. The protocol proxy produces OSPF packets of link
state advertisements using customized extensions, while the OSPF peer state manager realizes neighbor establishment via the proxy. The protocol proxy has two main functions: the first is to rewrite OSPF packets originated by the OSPF peer, and the second is to generate OSPF packets that inform the TE server of the updated topology. The topology information is configured and updated by the resource simulator via TCP/IP. To implement the customized OSPF extensions, only the protocol proxy software is modified; the existing OSPF software for the OSPF peer state manager remains unmodified. This makes the implementation of the OSPF emulation easy and
flexible. Furthermore, the protocol proxy obtains the network topology information from the resource simulator
module, which is managed in a centralized manner.
Shown below is an example of the behaviors of the protocol proxy. The protocol proxy captures all OSPF packets
that are exchanged between the OSPF peer state manager and the TE server, and relays the captured packets to the
other side. The protocol proxy rewrites each captured packet before forwarding it to the destination. First, the OSPF protocol exchanges Hello packets between the routers. At virtual node R, a Hello packet is
generated by the OSPF peer state manager. The protocol proxy relays the hello packet between the OSPF peer state
manager and the TE server. After exchanging the hello packets, the OSPF protocol determines the adjacency
relationships. A database description (DD) packet is used to establish each adjacency relationship, i.e. neighbor establishment. The DD packet carries the neighbor information gained from already-established routers. A router
exchanges the information with its neighbors via DD packets. The protocol proxy captures the DD packets from the
OSPF peer state manager, and rewrites the topology information in the DD packet according to the resource
simulator status. The protocol proxy sends the rewritten DD packet to the TE server. The protocol proxy checks the status of the DD packet coming from the TE server and relays it to the OSPF peer state manager. After
neighbor establishment is achieved, virtual node R notifies the resource information using link state (LS)
update/request/acknowledge packets including extensions. LS Update is used to inform other routers of new information, LS Request is used to get information from other routers, and LS Acknowledge is used to confirm receipt. The protocol proxy generates LS packets, including the extensions, using the resource simulator's output.
The protocol proxy offers two advantages. First, the existing software used to implement extended OSPF functions
does not need to be modified. The protocol proxy provides the basic OSPF functions to establish adjacency relations
and extended functionality to notify the resource information. The basic OSPF functions are provided by the existing
OSPF software, but the extended functions are provided by the protocol proxy. The protocol proxy scheme can
implement the extended functions by modifying the protocol proxy. Second, the protocol proxy scheme reduces the
complexity of the resource simulator. In the existing OSPF software scheme, the resource simulator has to control the status of all OSPF nodes to pass the OSPF information to the TE server. In contrast, the protocol proxy
scheme aggregates the OSPF information of all virtual nodes at the protocol proxy. Hence the resource simulator
controls only the protocol proxy.
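The relay-and-rewrite behavior described above can be condensed into a short C sketch; the packet type constants follow RFC 2328, while the struct and helper functions are placeholders of ours, not the actual proxy code:

#include <stdint.h>

enum { OSPF_HELLO = 1, OSPF_DD = 2, OSPF_LS_REQ = 3, OSPF_LS_UPD = 4, OSPF_LS_ACK = 5 };

struct ospf_pkt { uint8_t version, type; /* header and body follow */ };

extern int  from_peer(const struct ospf_pkt *p);   /* originated by the peer state manager? */
extern void rewrite_topology(struct ospf_pkt *p);  /* patch in the resource simulator's view */
extern void forward(const struct ospf_pkt *p);     /* relay to the other side */

/* Called for every captured packet between the peer state manager and the TE server. */
void proxy_handle(struct ospf_pkt *p) {
    if (p->type == OSPF_DD && from_peer(p))
        rewrite_topology(p);   /* DD packets carry the LSDB summary: rewrite before relaying */
    forward(p);                /* Hellos and all other packets are relayed unchanged */
}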
Prototype: Network emulator
The number of emulated nodes is chosen as 40 in CRP for the trial. Three computers will be used to implement the network emulator. On each computer the software "Xen" was used to realize the VMs. Twenty virtual nodes for CLI/SNMP were installed on each of computers 1 and 2. Virtual node R, the resource simulator and the databases were installed on computer 3. Virtual node R consists of our developed protocol proxy and the OSPF peer, where Quagga/XORP is the existing OSPF software. Quagga is a routing software suite: General Public License (GPL) licensed IPv4/IPv6 routing software. In general, one machine can be dedicated as a router to implement a virtual node that speaks CLI/SNMP/OSPF.
However, this approach is not cost-effective, so we employ the virtual machine (VM) approach. VM technology
allows multiple virtual machines to run independently on one computer. We also do not take the alternative approach of integrating the resource simulator and traffic generator with the virtual nodes. This is because the
VM technology enables us to develop the network emulator more easily, a significant benefit since router
interfaces are frequently upgraded. Our approach allows the use of several available software packages as the
required protocol suites. However, using VM technology prevents the direct emulation of the propagation delay.
Because the virtual nodes work in the same computer they are unable to replicate the real delay in the network.
Path setup
The network emulator has two planes. One is the control and management plane. This plane connects to the TE
server, and provides the CLI and OSPF interface. This plane is visible to the TE server. The other one is the internal
plane. The internal plane is used for communication among the modules in the network emulator. The internal
plane is invisible to the TE server.
PP is the protocol proxy’s interface which communicates with the resource simulator. RS is the resource simulator’s
interface that communicates with the virtual nodes and the protocol proxy. OP1 and OP2 are special interfaces: they are invisible and are not connected to any plane; instead, they exist to provide the basic OSPF functions. Shown here is the path setup and release in the network. The procedure for path setup on the network emulator is as follows.
Step 1: The TE server makes the adjacency relation with the virtual node via interface C1 (10.0.0.254).
Step 2: The protocol proxy receives the topology and resource information from the resource simulator via PP
(192.168.1.50) interface and RS (192.168.1.60) interface.
Step 3: The protocol proxy informs the topology and resource information to the TE server via C1 interface.
Step 4: The TE server logs in to the virtual nodes that correspond to the ingress and egress router interfaces, such as M1 (10.0.0.1) to M40 (10.0.0.40), using CLI over telnet. The TE server sends a path-setup request to the ingress
and egress virtual nodes. This emulates the router configurations. The path-setup request information includes
several attributes such as interfaces with physical ports and IP addresses, required bandwidth, required route, and
switch type. The received information is saved at the ingress and egress virtual nodes.
Step 5: After the received information is confirmed, the virtual nodes activate the configurations by sending the
received information to the resource simulator module from I1 (192.168.1.1) to I40 (192.168.1.40) interface to RS
(192.168.1.60) interface.
Step 6: The resource simulator judges if the path setup request can be accepted by comparing the requested
attributes to the available resources in the associated TE DB.
Step 7: If the path-setup request is accepted by the resource simulator, the associated TE DBs are updated based on the newly accepted path attributes. The acceptance is notified to the ingress and egress virtual nodes by the resource simulator, and the information updated in the TE DBs is notified to virtual node R. Otherwise, the resource simulator notifies the rejection to the ingress and egress virtual nodes.
Step 8: The ingress and egress virtual nodes notify the acceptance or rejection to the TE server via M1 interface
to M40 interface.
Step 9: If network topology or resource information is changed, the resource simulator sends the updated
topology or resource information to the protocol proxy from the resource simulator via RS interface to PP interface.
Step 10: When the protocol proxy receives the information, the protocol proxy informs the topology and
resource information to the TE server by OSPF via C1 interface.
Effects: Protocol Proxy based scheme
The protocol proxy makes the implementation of the OSPF emulation easy and flexible, and reduces the amount of
processing resources required. First we will examine the effectiveness of the protocol proxy scheme in terms of the
software development time. It is difficult to quantify how much the development time is reduced; therefore, the number of lines of source code is taken as an indication of development time. There are approximately 74,000 lines in the existing OSPF software, while the source code of the protocol proxy has only about 1,000 lines. In the protocol proxy scheme we do not need to touch the Quagga/XORP source code: since the protocol proxy is what is modified to implement extended functions, modification of the Quagga/XORP source code is not required. This means that the protocol proxy scheme reduces the code needed to implement new functions. Whenever we modify the Quagga/XORP source code, we have to confirm that the modification satisfies the standard RFC.
The protocol proxy does not work by itself: to establish the adjacency relation, the protocol proxy still needs an OSPF peer state manager such as Quagga. The advantage of the protocol proxy scheme is that modification of Quagga is not required; hence, we do not need to test the functions provided by Quagga, because it is guaranteed to offer the standardized functions specified in RFC2328. Quagga/XORP is used to establish the adjacency relation. If Quagga/XORP were modified, the standardized functions provided by the modified Quagga would have to be tested.
Second, we examine the processing time on the protocol proxy. We measure the packet rewriting time and LS
packet generating time. The packet rewriting time starts from when the protocol proxy receives a packet that needs
rewriting, and stops when the modified packet is sent out. The packet generating time starts when the protocol proxy
receives the information from the resource simulator, and stops when the packet is sent out. To measure the processing time we use the getrusage() function, a system call provided by the Linux operating system; getrusage() reports the consumed CPU time with microsecond resolution.
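A small C sketch of such a measurement follows (the measured rewrite_packet() is a hypothetical stand-in for the proxy operation under test):

#include <stdio.h>
#include <sys/resource.h>

/* User + system CPU time consumed so far, in microseconds. */
static long cpu_time_us(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec) * 1000000L
         + ru.ru_utime.tv_usec + ru.ru_stime.tv_usec;
}

static void rewrite_packet(void) { /* stand-in for the operation under test */ }

int main(void) {
    long t0 = cpu_time_us();
    rewrite_packet();
    long t1 = cpu_time_us();
    printf("rewriting took %ld us of CPU time\n", t1 - t0);
    return 0;
}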
The processing time of the protocol proxy is less than one millisecond. This means that the processing time of the
protocol proxy has little impact on the network emulator. We note that the processing time of the protocol proxy is
not related to network size, because the protocol proxy supports only packet rewriting and generation. It takes less
than one millisecond from receiving the information to rewrite and generate the packet. In the protocol proxy
scheme, the resource simulator manages the OSPF information. When the network emulator emulates a large-scale network, the processing time of the resource simulator is significant. In contrast, the protocol proxy takes less than
one millisecond. This means that the protocol proxy is not a bottleneck in the network emulator.
Third, we examine the reduction in the amount of processing resources required. In the conventional approach based
on existing OSPF software, all virtual nodes have to install the OSPF software. The existing OSPF software
approach demands that the processing resources must be proportional to the number of nodes. On the other hand, the
protocol proxy scheme needs only one OSPF peer to perform neighbor establishment. Thus the superiority of the
protocol proxy scheme relative to the existing OSPF software approach increases with network size. Figure 12 compares the memory usage of the protocol proxy scheme with that of the conventional approach. In Figure 12, the memory used by the conventional approach is proportional to the number of nodes: the conventional approach requires 2.3 MB of memory per node on which OSPF processes run. On the other hand, the protocol proxy scheme requires only 2.8 MB of memory for the whole network, independent of the number of nodes (the breakdown is 2.3 MB for the OSPF peer and 0.5 MB for the protocol proxy). This result shows that the protocol proxy scheme reduces the required amount of memory. We also investigated the required CPU resources using the "pidstat" command included in "sysstat" version 8; however, the CPU usage was too small to measure. This means that CPU resources are not a bottleneck for the network emulator.
Wireless OSPF (WOSPF): RFC5820
There is a requirement that the nodes are semi-static in nature and connected through BEL radio interfaces; the architecture is given here. Deploying a legacy routing protocol defined for wired networks in a wireless environment calls for modifications and optimizations. First, wired routing protocols are not designed for operation in a multi-hop environment. Second, dissemination of routing packets in a network whose topology is rapidly changing requires intelligent and optimized techniques, unless resource-consuming pure flooding is to be used.
WOSPF interfaces should take into account the different aspects of resource-constrained WOSPF environments: bandwidth may be scarce, the topology unpredictable, and link quality poor. The wireless extensions described in this proposal aim to define an interface that can cope with these properties.
Wireless OSPF-OR Neighbors
A WOSPF-OR router is designed to interact with both OSPF and WOSPF-OR routers in a seamless fashion. Unless explicitly signaled otherwise, a WOSPF-OR router assumes that it is communicating with an OSPF router.
The WOSPF-OR neighbor data structures located in the neighbor table include information such as whether or not the neighbor is chosen to act as an AOR for the local router. This limits the number of different data structures needed for the various calculations, as the WOSPF-OR neighbor data structure maintains all the information relevant to the different calculations needed to operate in the network.
A WOSPF-OR neighbor is merely an OSPF neighbor with the ability to perform different MANET routing overhead optimizations. These optimizations require additional information to be registered on a per-neighbor basis, and this is implemented by the use of encapsulation: instead of extending the neighbor data structure already defined in OSPF, the existing OSPF neighbor data structures are, in as many cases as possible, encapsulated in new WOSPF-OR neighbor data structures in the following manner:
struct wospf_neighbor {
    struct ospf_neighbor *neighbor;  /* the encapsulated, unmodified OSPF neighbor (illustrative field name) */
    /* ... WOSPF-OR specific per-neighbor state, e.g. AOR status and SCS number ... */
};
As a consequence, no modification of the existing OSPF neighbor data structure is needed, as all WOSPF-OR specific information is added only to the WOSPF-OR neighbor data structure. These data structures are organized in a WOSPF-OR neighbor table, equivalent to the organization of the OSPF neighbor entries. This structure reflects the idea behind adding extensions to OSPF: every WOSPF-OR neighbor is an OSPF neighbor with some functionality needed in the network. WOSPF-OR also maintains a table of two-hop neighbors, the same as in OLSR. A two-hop neighbor entry does not have a link to an OSPF neighbor data structure; the entry serves the purpose of maintaining a view of the local neighborhood in order to compute the active overlapping relay set.
Signaling Non-OSPF Information
Outgoing Hello and DD messages are intercepted by a WOSPF-OR function responsible for appending an LLS data block, if needed. TLVs (Type/Length/Value) related to the signaling of AOR elections are constructed after looking up AORs by iterating over the neighbor table. The entries constituting the neighbor table carry information on the neighboring router, so by iterating over the neighbor table the router IDs of AORs and dropped AORs can be listed in "AOR TLVs" and "Dropped TLVs".
Relaying LSAs
Nodes in the non-AOR set are required to relay information if nodes in the AOR set fail to do so. In effect, these neighbors serve as backup overlapping relays. The AOR selection algorithm is based on the algorithm used in OLSR for calculating the MPR set. The AOR election is also registered with each WOSPF-OR neighbor data structure, utilizing the encapsulation design described in the previous subsection.
Shown above is part of the information flow of outgoing LS Update messages. Outgoing messages are defined as
messages that have already been processed or originated by the routing system and are to be transmitted on the interface. Received LS Update messages are either retransmitted immediately or delayed according to the mechanism outlined in the following paragraphs. As OSPF already implements numerous forwarding decisions, the "Flood filter" box represents the additional conditions that are needed to utilize the overlapping relays mechanism.
1) The Active set: When a received LSA is to be re-flooded by the local node, a decision is made on whether it should do so immediately or back off for some time. If the local node has been elected by the transmitting node to serve as an AOR (i.e. the transmitting node is registered in the local node's AOR Selector set), the local node re-floods immediately.
2) The Non-Active set: A non-AOR re-floods an LSA only if it decides this will not result in a redundant retransmission.
Obviously, a transmission is redundant if all neighbors have already received the LSA. However, to implement this decision process the router must store and update information about which LSAs have not been received by which neighbors. When a router decides it should not immediately re-flood an LSA (because it is not an AOR of the sender), it adds the LSA to a dedicated pending LSA list unique to each WOSPF-OR interface, and the LSA is scheduled to be retransmitted within a short time interval. For each pending LSA, the neighbors who have not yet received this LSA (i.e., neighbors from which the router has not received an ack or heard a reflood) are added to this list. Hence, each interface has one two-dimensional list consisting of pending LSAs and the neighbors for which the LSAs are pending. When the router registers that a neighbor has received the LSA (either by receiving a multicast LS Ack or hearing a re-flood), the neighbor is removed from the pending LSA list. When the LSA is scheduled for retransmission, if any neighbors remain listed in the pending LSA list, the LSA is re-flooded. Note that under normal circumstances this should not occur, as the router was not elected as an AOR for the sender in the first place. It is even possible that every neighbor acks the LSA before the local router receives the LSA itself. To keep track of which LS Acks have been received from which neighbors, each WOSPF-OR neighbor data structure maintains a list of LSAs that have been acked but which the router itself has not received. This list is referred to as the Acked LSA list. It makes use of the OSPF link state database data structure already implemented, and operations on the database are made through the interface implemented in the OSPF source code.
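The flood-filter decision can be sketched in C as follows; the types and helper functions are placeholders for the corresponding WOSPF-OR machinery, not its actual interfaces:

struct lsa;
struct neighbor;

extern int  in_aor_selector_set(const struct neighbor *from); /* did 'from' elect us as AOR? */
extern void reflood(struct lsa *l);
extern void add_to_pending_list(struct lsa *l);        /* per-interface pending LSA list */
extern void schedule_retransmit(struct lsa *l, int ms);

void flood_filter(struct lsa *l, const struct neighbor *from) {
    if (in_aor_selector_set(from)) {
        reflood(l);                     /* we are an AOR for the sender: re-flood immediately */
    } else {
        add_to_pending_list(l);         /* back off; track neighbors still missing the LSA */
        schedule_retransmit(l, 500);    /* illustrative back-off; re-flood only if acks are missing */
    }
}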
Incremental Hellos
Routers supporting partial neighbor information (i.e., incremental Hellos) signal this by setting a bit in the Hello message's options field. Communication with such a neighbor will then utilize the incremental Hello scheme by not including the neighbor's router ID in the following Hello messages. If the neighbor does not support this scheme, the local router will continue listing the neighbor in its Hello messages. An incremental-Hello-capable WOSPF-OR router will thus be able to communicate compatibly with routers that are not capable of this.
If one or more state changes occur, the local State Check Sequence (SCS) number is incremented and the next Hello message will contain full state. This number indicates the current state and must be signaled when using the incremental Hellos scheme. To utilize the incremental Hellos scheme, some modifications to the Hello protocol are required. As soon as a router starts forming an adjacency with an incremental-Hellos-capable neighbor, the router removes the neighbor's router ID from the neighbor list reported in its Hellos. The neighboring router, on the other hand, upon failing to find its router ID in the Hello messages, does not interpret this as a relationship failure; such information is signaled explicitly using the LLS mechanism. Neighbors that are no longer known are listed in a dedicated "Neighbor Drop" TLV. When a router finds that it is listed in such a TLV, it declares the neighbor as down and deletes all data structures associated with that neighbor.
Shown above is part of the information flow of outgoing Hello and DD messages. These are defined as messages already processed or originated by the routing system that are to be transmitted on the interface. An LLS data block is appended to Hello and DD packets; which TLVs are carried in the LLS block depends on changes in state or on the willingness to serve as an AOR. The TLV carrying the local router's SCS number is always appended to indicate the current state. The Hello module seen in the figure is a conceptual data structure maintaining all information related to the incremental Hellos scheme. In practice, this module consists of lists of dropped relays and of neighbors requesting full state.
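The SCS bookkeeping can be sketched in C as follows; the names are illustrative, not those of the actual implementation:

#include <stdint.h>

struct hello_msg;

extern void hello_list_full_state(struct hello_msg *h);            /* list all known neighbors */
extern void lls_append_scs_tlv(struct hello_msg *h, uint32_t scs);

struct wospf_iface_state {
    uint32_t scs;            /* local State Check Sequence number */
    int      state_changed;  /* set whenever neighbor or AOR state changes */
};

void build_incremental_hello(struct wospf_iface_state *s, struct hello_msg *h) {
    if (s->state_changed) {
        s->scs++;                   /* state changed: bump the SCS number ... */
        hello_list_full_state(h);   /* ... and carry full state in this Hello */
        s->state_changed = 0;
    }
    lls_append_scs_tlv(h, s->scs);  /* the SCS TLV is always appended */
}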
Design enhancement: WOSPF
Designing a software system of this size and complexity calls for manageable and smart solutions. Modifying an existing protocol implementation should be done with caution, as adding to the complexity of a protocol can produce unexpected results. Our design goal is therefore to modify the source code as little as possible without compromising functionality. On the other hand, the extension modules themselves are designed for protocol efficiency. As incremental Hello messages may not list any neighbors even though none have been dropped, these messages cannot flow through the OSPF system as usual. They are intercepted and processed by the Hello module, and the OSPF neighbor data structures are modified accordingly. An alternative would be to have the Hello module construct a "fake" Hello message listing all known neighbors and run it through the OSPF system in the same manner as traditional OSPF Hello messages. For this implementation the former solution is used, for efficiency reasons (the local router does not need to spend time and processor power constructing new "fake" Hello messages). As adjacency forming and maintenance imposes relatively high routing overhead on the network (consider, for example, scenarios where nodes move in and out of transmission range), reducing the set of neighbors with which a node forms adjacencies might be desirable.
Plugins
Extending the functionality of a protocol may increase its complexity, in addition to adding functionality that is not really part of the protocol itself. Under these circumstances plugins come in handy, as they provide the flexibility to modify or add to the functionality without altering the source code. The WOSPF-OR implementation is to support plugins by defining a plugin interface.
Implementation Issues
Quagga: The platform upon which the implementation will be built is Quagga. Quagga is a GPL-licensed routing software suite forked from GNU Zebra. Because of its active developer community, Quagga was chosen as the basis for our WOSPF-OR extensions. Both Quagga and OLSR are implemented in C, which allows for reuse of the OLSR source code. As of this writing, Quagga is still beta software, meaning that no official versions have been released.
Shown above is a high-level overview of the Quagga architecture with the WOSPF-OR framework. The strict layered architecture enables the routing protocol to be implemented independently of the underlying operating system. An alternate option is to use XORP, which is free from licensing obligations to disclose code modifications to the open-source community. Implementation on XORP is similar to that on Quagga.
GNU/Linux
The Quagga routing software provides routing protocol implementations for Unix platforms such as FreeBSD and GNU/Linux. The test bed will be made up of routers running either Debian or Ubuntu.
Emulate Mobile environment
To emulate a mobile environment where links are created and destroyed in a more or less random fashion, a script will be developed that emulates link breakage by blocking messages from chosen neighbors. Blocking messages is done by modifying the iptables rules on each machine. To emulate a link breakage between two routers A and B, a rule blocking all messages sent by B is added to A's iptables, and vice versa.
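As a minimal sketch, in C for consistency with the rest of the code in this document (the iptables rule shown is a standard invocation, but the helper function and the example address are ours), a link break towards a neighbor can be toggled like this:

#include <stdio.h>
#include <stdlib.h>

/* Emulate a link break towards neighbor 'addr' by dropping everything it sends
   us; the script installs the mirror rule on the other router. */
static void set_link(const char *addr, int up) {
    char cmd[128];
    snprintf(cmd, sizeof(cmd), "iptables -%c INPUT -s %s -j DROP",
             up ? 'D' : 'A', addr);   /* -A appends the DROP rule, -D deletes it */
    system(cmd);
}

For example, set_link("10.0.0.2", 0) breaks the link towards 10.0.0.2 and set_link("10.0.0.2", 1) restores it (the address is illustrative).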
Scalable and real-time network emulation: SynTime
Network emulation brings together the strengths of network simulation (scalability, modeling flexibility) and real-world software prototypes (realistic analysis). Unfortunately, network emulation fails if the simulation is not real-time capable, e.g., due to large scenarios or complex models. We present SynTime, a platform for scalable and accurate network emulation. It enables the precise evaluation of arbitrary networking software with event-based simulations of any complexity by relieving the network simulation of its real-time constraint. We achieve this goal by transparently matching the execution speed of the virtual machines hosting the software prototypes with that of the network simulation.
An event-based simulation modeling a computer network of choice is connected to a real-world software prototype. Traffic from the prototype is fed to the simulation and vice versa. This way, the software prototype can be evaluated in any network that can be modeled by the simulator. One fundamental issue of network emulation is the differing time representations of event-based simulations and software prototypes. Event-based simulations
consist of a series of discrete events with an associated event execution time. Once an event has been processed, the
simulation time is advanced to the execution time of the next event. By contrast, software prototypes observe a
continuously progressing wall-clock time.
Existing implementations of network emulation pin the execution of simulation events to the corresponding wall-clock time. Unfortunately, this approach is only useful if the simulation can be executed in real time; otherwise, a simulation without sufficient computational resources will lag behind and thus be unable to deliver packets in a timely manner. Such simulator overload may result from complex network simulations, for example due to a high number of simulated nodes or models of high computational complexity. Simulator overload has to be prevented, because deficient protocol behavior such as connection time-outs, unwanted retransmissions or the assumption of network congestion would be the direct consequence.
Moreover, even slight simulator overload may invalidate performance evaluations, because the network cannot be simulated within the required timing bounds. Speeding up the simulation to make it real-time capable is the first obvious option for dealing with simulation overload. This speed-up can be achieved by supplying the simulation machine with sufficient computational resources in the form of hardware, or by parallelizing the network simulation. However, we argue that this approach lacks generality, because parallel processing can only scale to the degree of parallelism possible within the simulation.
We thoroughly elaborate the design of SynTime and its underlying concept of synchronized network emulation. It eliminates the need for the network simulation to execute in real time, which enables network emulation scenarios using simulations of any complexity. We achieve this goal by synchronizing the software prototypes with the network simulation: using virtualization, we decouple the software prototypes' perceived progression of time from wall-clock time.
Our implementation of SynTime for x86 systems enables the synchronized execution of Xen-based virtualized
prototypes and ns-3 simulations with an accuracy down to 0.01 ms. SynTime delivers a high degree of accuracy and
transparency, both regarding timing and perceived network bandwidth. We further demonstrate in our evaluation of SynTime that it is run-time efficient and that the synchronization overhead stays below 10% at an accuracy of 0.5 ms.
Design of SynTime
A SynTime setup incorporates three main components: the central synchronization component (synchronizer), at least one virtual machine (VM) carrying a software prototype of choice, and an event-based network simulation. The synchronizer controls the execution of the network simulation and the software prototypes. In order to carry out such synchronization, the synchronizer must interrupt the execution of the prototype or the simulation at times to achieve precise clock alignment. To enable this suspension, the software prototypes are hosted inside virtual machines as a means of control.
Synchronization Component
The synchronization component centrally coordinates a SynTime setup. Its task is to manage the synchronous
execution of the network simulation and the attached virtual machines. It implements a synchronization algorithm to
prevent potential time drifts and clock misalignments between the virtual machines and the network simulation.
As the choice of synchronization algorithm, we consider solutions known from the research domain of parallel discrete event-based simulation. In this regard, two classes of synchronization are distinguished: optimistic synchronization schemes and conservative synchronization schemes.
In case of synchronization errors, optimistic schemes use roll-backs to restore a consistent and error-free global state. To be able to roll back to a consistent state, optimistic schemes often take regular snapshots of the synchronized peers. As the state of a virtual machine includes the memory allocated for the running operating system instance, such check-pointing is costly at the desired level of synchronization granularity. Conservative synchronization schemes, by contrast, guarantee a parallel execution without synchronization errors and hence do not require a roll-back mechanism.
Virtual Machines
The virtual machines encapsulate the software prototype to be integrated with the network simulation. We consider a
prototype to be an instance of any operating system (OS) that carries arbitrary network protocol implementations or
applications. The virtualization of OS instances hosting software prototypes disassociates their execution from the
system hardware and hence allows for obtaining full control over their run-time behavior. Therefore the execution of
the prototype can be suspended until all synchronized components have reached the end of the time slice. This
suspension avoids simulator overload by allowing the network simulator to run while the virtual machines are
waiting. However, this suspension is typically detectable by the VMs, because they are relayed information from hardware time sources. Under normal circumstances this behavior is desired, to keep the clock synchronized to wall-clock time and to make sure that timers expire at the right point in time. However, since we suspend the VMs in order to synchronize their time against each other and the simulation, we must avoid this behavior. Having full control over the VMs' perception of time, we instead provide them with a consistent and continuous logical time.
This leaves us with the possibility of transparently suspending the execution of a prototype without the
implementation noticing the actual gap in real-world time.
Event-based Network Simulation
The key task of the network simulation is to model the network that connects the virtual machines.
We distinguish between an opaque and a protocol-aware network emulation mode. In the case of opaque network
emulation, the simulator merely influences the propagation of network traffic, for example by delaying or
duplicating packets.
Implementation
The implementation of SynTime comprises three types of main components, as shown below: a synchronization component (synchronizer), the virtual machine infrastructure, and a network simulation. Two different flows of
communication are present in our system. The synchronizer delivers the synchronization information over the timing
control interface using a lightweight signaling protocol. A tunnel that carries Ethernet frames from the VMs to the
simulation and vice versa serves as our data communication interface. The VM implementation is based on the Xen
hypervisor and executes multiple instances of guest domains which host an operating system and a prototype
implementation. Our implementation uses the ns-3 network simulator to model the network to which the VMs are
connected. For this purpose we extend the existing emulation framework of ns-3 for synchronized network
emulation.
Synchronization component
The synchronizer is implemented as a user-space application. Its main purpose is to implement the timing control
interface. The synchronization component assigns discrete slices of run-time to the simulation and to the virtual
machines. In order to distribute the synchronized components across different physical systems, the synchronization
signaling is implemented on top of UDP. In addition to the synchronization coordination, the synchronizer also
manages the set of synchronized components. In particular, it allows peers to join and leave the synchronization during run-time. This allows certain tasks (e.g., booting and configuring a virtual machine and the hosted software prototype) to run outside the synchronized setup.
Timing Control Interface
One challenge is the large number of messages that need to be exchanged between the synchronized VMs and the simulation. For example, if the time slices are configured to a static logical duration of 0.1 ms, the synchronization component needs to issue 10 000 time slices to all attached VMs and the simulation for one second of logical time. An additional massive amount of messages is caused by the synchronized peers signaling the completion of every time slice individually to the synchronizer. Therefore, in order to maintain good run-time efficiency, it is vital to limit the delays and the overhead caused by synchronization signaling and message parsing. For these reasons we created a lightweight synchronization protocol based on UDP for SynTime. It provides all communication primitives of the timing control interface. The assignment of time slices to all synchronized peers is carried out using UDP broadcasts, while the remaining communication, such as signaling time-slice completion, takes place using unicast datagrams. Moreover, the UDP packets have a fixed structure and carry the synchronization information in binary form only. This keeps both the packet size and the parsing complexity at a very low level.
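A hedged C sketch of such a fixed-layout datagram is shown below; the field names, widths and type codes are assumptions for illustration, not the actual SynTime wire format:

#include <stdint.h>

enum {
    SLICE_ASSIGN = 1,   /* synchronizer -> all peers, UDP broadcast */
    SLICE_DONE   = 2    /* peer -> synchronizer, UDP unicast */
};

struct sync_msg {
    uint8_t  type;      /* SLICE_ASSIGN or SLICE_DONE */
    uint8_t  peer_id;   /* VM or simulation instance */
    uint16_t reserved;
    uint32_t slice_id;  /* monotonically increasing time-slice number */
    uint32_t slice_ns;  /* logical slice length, e.g. 100000 ns for 0.1 ms */
} __attribute__((packed));

Fixed offsets mean a receiver can interpret the datagram with a single cast instead of a parser, keeping per-message overhead minimal.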
Virtual Machines
We use the Xen hypervisor and its scheduling mechanisms as the basis of our work. Xen is a virtual machine
monitor for x86 CPUs. The hypervisor itself takes care of memory management and scheduling, while hardware
access is delegated to a special privileged virtual machine (or domain, in Xen's parlance) running a modified Linux kernel. As the first domain that is started during booting, it is often referred to as dom0. Xen supports two modes of
operation: paravirtualization mode (PVM) and hardware virtualization mode (HVM). SynTime uses Xen HVM
domains for virtualizing operating systems and software prototypes. In contrast to para-virtualization, HVM Xen
domains do not require the kernel of the guest system to be modified for virtualization.
Data Communication Interface
For the network data communication between virtual machines and simulation, it is first important to note that every
virtual machine can have one or several virtual network interfaces that look like real interfaces to the virtual
machine, and can be accessed inside dom0. We bridge the virtual interface in the dom0 with a tap device and
redirect all Ethernet traffic from the VM to the computer running the simulation. Conversely all Ethernet frames
received from the simulation over the tunnel are fed back to the virtual machine in the same way.
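The following Python sketch illustrates the dom0 side of such a tunnel, assuming a tap device tap0 bridged to the VM's virtual interface and a hypothetical simulation endpoint address; it is a simplified illustration, not the SynTime implementation.

import fcntl
import os
import socket
import struct

TUNSETIFF = 0x400454ca                # Linux ioctl to attach to a tap device
IFF_TAP, IFF_NO_PI = 0x0002, 0x1000
SIM_ENDPOINT = ("192.0.2.10", 6000)   # assumed simulation host address

# Open the tap device that is bridged to the VM's virtual interface.
tap = os.open("/dev/net/tun", os.O_RDWR)
fcntl.ioctl(tap, TUNSETIFF, struct.pack("16sH", b"tap0", IFF_TAP | IFF_NO_PI))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    frame = os.read(tap, 2048)         # Ethernet frame from the VM
    sock.sendto(frame, SIM_ENDPOINT)   # tunnel it to the simulation
    # A symmetric receive path would write frames arriving from the
    # simulation back to the VM with os.write(tap, frame).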
The Xen Synchronization Client
To keep the VMs in sync with the simulation, the synchronization component communicates with a synchronization client on the machine running Xen. Because of the potentially high number of synchronization messages (depending on the size of the chosen time slices), the performance of the synchronization clients is crucial to the overall performance of the system. For this reason, the client was implemented as a Linux kernel module. This is especially beneficial because Xen delegates hardware access to the privileged domain dom0; implementing the client in the kernel space of the privileged domain therefore saves half of the context switches otherwise necessary for communication with our VM implementation.
Since context switches (between user space, kernel space, and, in addition here, hypervisor context)
are expensive operations, halving the number of them has a very noticeable impact on the overall performance. The
client communicates with the synchronization component via UDP datagrams. It then instructs Xen's scheduler via a hypercall (the domain-hypervisor equivalent of a user-kernel system call) to start the synchronized domain for the amount of time specified by the synchronizer. The client also registers an interrupt handler for a virtual interrupt, that is, an interrupt that can be raised by the hypervisor. When the synchronized domain has finished its assigned time slice, the interrupt is raised, the client's handler is executed, and it can inform the synchronizer via UDP. This interrupt-based signaling ensures prompt processing by the involved entities.
SynTime-assisted Network Simulation
To match the real-time packet flow between the simulation and the emulation environment, the simulation is assisted through SynTime, which preserves real-time packet characteristics. The network simulator ns-3 internally represents packets as bit
vectors in network byte order resembling real-world packet formats. This removes the need of explicit message
translation mechanisms and simplifies the implementation of network emulation features. The modular design of ns-
3 facilitates the integration of the additional components as needed by SynTime. The timing control as well as the
communication interface is implemented as completely separate components whose implementation is not
intermingled with existing code.
There are some similarities between the SynTime simulation components and the emulation features
already provided by ns-3. Both have to synchronize the event execution to an external time source. For the existing
emulation implementation of ns-3 this is the wall-clock time. In the case of SynTime the synchronizer acts as
external time source. The so-called simulator implementations in ns-3 are responsible for scheduling, dequeuing and executing events. We added a third simulator implementation that connects arbitrary ns-3 simulations to the timing control interface. The simulation registers at the synchronizer before its actual run begins. Similarly, the simulation deregisters itself at the synchronizer after all events have been executed. Upon the execution of an event, our implementation checks whether its associated simulation time is in the current time slice. If this is not the case, it sends a finish message to the synchronizer and waits for the barrier to be shifted. The actual communication with
the synchronizer is encapsulated in a helper class which holds a socket, provides methods to establish and tear down
a connection and to exchange the synchronization messages.
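A minimal sketch of this slice check is given below; the names (register, current_slice_end, send_finish, wait_for_barrier) are illustrative placeholders rather than the actual ns-3/SynTime interfaces.

# Illustrative event loop of a synchronized simulator implementation.
def run_synchronized(event_queue, sync):
    sync.register()                      # join the synchronization
    while not event_queue.empty():
        event = event_queue.peek()
        # Block until the event's timestamp falls inside the slice.
        while event.time > sync.current_slice_end():
            sync.send_finish()           # signal slice completion
            sync.wait_for_barrier()      # barrier shifted -> next slice
        event_queue.pop()
        event.execute()
    sync.deregister()                    # leave after the last event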
Another modification is the provision of a method which schedules an event in the current time slice. This
is needed because the regular scheduling methods only provide the time of the last executed event, which can be wrong in the case of network packets arriving from outside the simulation. The ns-3 simulator already provides two
mechanisms for data communication with external systems. Both can be used with real-time simulations and
synchronized emulation. The emulation net device works like any ns-3 network device, but instead of being attached
to a simulated channel, it is attached to a real network device of the system running the simulation. In contrast to this
the tap bridge attaches to an existing ns-3 network device and creates a virtual tap device in the host system. With
both mechanisms, packets received on the host system are scheduled in the simulation and packets received in the
simulation are injected into the host system.
Besides supporting these existing two ways, we added a synchronized tunnel bridge. It implements the data
communication interface and connects the simulation to a remote endpoint. The endpoint is usually formed by a
VM, however the tunnel protocol could also be used to interconnect different instances of ns-3. Again the actual
communication is encapsulated in a helper class. This is not only to keep the bridge itself small, but also to reduce
the number of sockets needed. In a scenario where multiple tunnel bridges are installed inside a simulation it is
sufficient to have one instance of this helper class.
Outgoing packets are sent through its socket to a destination specified by the bridge sending the packet. Incoming
packets are dispatched by an identifier included in our tunnel protocol and then scheduled as an event in the corresponding bridge to which the sender of the packet is connected. Since incoming packets are not triggered by an event inside the simulation but can occur at any time, a separate thread runs a blocking receive call on the socket. This technique has the advantage of avoiding polling and is also used by the emulation net device and the tap bridge.
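A rough sketch of this shared-socket dispatch, with an assumed 4-byte identifier prepended to each tunneled frame, might look as follows.

import socket
import struct
import threading

class TunnelHelper:
    def __init__(self, port):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("", port))
        self.bridges = {}                # identifier -> bridge object

    def receive_loop(self):
        while True:
            data, _ = self.sock.recvfrom(2048)           # blocking, no polling
            bridge_id, = struct.unpack_from("!I", data)  # assumed header
            bridge = self.bridges.get(bridge_id)
            if bridge is not None:
                bridge.schedule_rx(data[4:])             # schedule as sim event

helper = TunnelHelper(6000)
threading.Thread(target=helper.receive_loop, daemon=True).start()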
CPU PERFORMANCE AND SYSTEM EVALUATION
In order to quantify the CPU performance of a VM we can execute CoreMark inside the synchronized VM.
CoreMark is a synthetic benchmark for CPUs and microprocessors recently made available by the Embedded
Microprocessor Benchmark Consortium (EEMBC). It performs different standard operations, such as CRC
calculations and matrix manipulations and outputs a single CPU performance score.
By concurrently executing OProfile in the control domain while CoreMark runs inside the VM, we will be able to trace internal CPU events caused by the VM. This way, we can identify whether an increased amount of L2 cache misses is causing the performance degradation.
Note:- For smaller time slices, the CPU performance of a VM decreases due to an increased amount of L2 cache
misses.
CRP Monitoring framework
CRP Monitoring framework can be envisioned as a light-weight monitoring system, which enables users to obtain
timely snapshots of network performance by initiating and executing unplanned monitoring tasks upon request.
After a distributed measurement task is finished, the raw measurement results are encapsulated in the form of tuples
which contain the values of performance metrics, as well as information on task configuration (e.g. task ID,
measurement sites etc). These measurement results are then stored and indexed in a distributed style which enables
users to issue queries to obtain measurement results of interest. The monitoring framework interfaces with the client applications and the underlying intra-domain network, such that edge-to-edge active measurement tasks can be registered on demand and triggered to be performed as scheduled; the measurement results are collected and stored, and can be queried by users. To realize these functionalities, a communication-centric overlay network needs to be constructed to propagate task configuration parameters to the network nodes that are required to participate in a measurement task, and a data-centric overlay network needs to be constructed to store and index the accumulated monitoring results, hence supporting queries on the measurement results.
CRP Monitoring framework architecture
CRP Monitoring framework provides functionalities that enable the clients of CRP Monitoring framework to
register new measurement tasks and to retrieve measurement results. The registration of new tasks can be initiated at
any edge router in the system. Measurement result retrieval takes two forms: a pull-based range query with a query condition specified, or a push-based report when a registered measurement task finishes. With the pull-based approach, a query can be issued from any node within the system, while with the push-based approach the reporting point is normally the node at which the task is registered. To release a user's thread from waiting, the RMI-based API uses an asynchronous call-back style to send users the results of registering a new measurement task or retrieving a measurement result. Inside the monitoring framework, the essential feature is that two overlay networks are constructed, namely the control overlay and the data overlay, corresponding to the communication-centric overlay and the data-centric overlay respectively, as shown on the right-hand side of the figure. Here the control overlay refers to an application-level multicast tree along which measurement task parameters are disseminated from the task initiator to the task participant nodes, while the data overlay refers to the overlay network by which measurement results are stored and indexed, according to the data's attributes and locality, so as to support distributed multiple-attribute range queries.
To do so, the coordinator layer on the top governs the coordination between the different functional modules. For
example, Measurement, Register, and Query are the three major modules, which respectively are in charge of
conducting active measurement, initiating a new measurement task, and organizing range queries on measurement
results. They all interact with the Virtual Repository, logically the storage provided by the data overlay, which physically consists of the edge routers' memory. That is, once task configuration parameters reach a participant node and the new task is successfully registered with it, the task will be executed according to the configured time schedule; once the measurement task is conducted and completed, the measurement results with the associated task information are stored in the virtual repository in the form of data records, which are tuples with multiple attributes, e.g. (task_ID, execution_time, task_results); range queries against one or more attributes can then be conducted on these data records.
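As a rough illustration, the sketch below models such data records and a simple single-attribute range query; field names and units beyond the (task_ID, execution_time, task_results) tuple mentioned above are assumptions.

from dataclasses import dataclass

@dataclass
class DataRecord:
    task_id: int            # task_ID from the tuple above
    execution_time: float   # assumed unit: seconds since task start
    task_results: float     # e.g. a measured delay value

def range_query(records, attr, low, high):
    # Return all records whose attribute falls within [low, high].
    return [r for r in records if low <= getattr(r, attr) <= high]

records = [DataRecord(1, 100.0, 12.5), DataRecord(2, 160.0, 9.1)]
hits = range_query(records, "task_results", 9.0, 10.0)   # -> task 2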
To efficiently build overlay networks, the Topo.Tracker module, shown on the left-hand side of the figure above, is designed. By passively snooping the OSPF packets processed by the edge routers, and then parsing and reusing the topology information encapsulated in those packets, overlay network construction in the CRP Monitoring framework is made more efficient, since the underlying physical topology can be used, such that circuitous and duplicate
overlay paths can be reduced or even avoided. More importantly, while minimizing the impact of the overlay networks on the underlying physical network, this performance improvement is achieved for free: it does not cause any overhead traffic. This contrasts with sending detection probes, an approach that is normally adopted to facilitate overlay construction and maintenance.
Network Topology Tracker
This section describes snooping and parsing OSPF packets to reconstruct a network topology catered for CRP Monitoring framework overlay construction.
Periodic Dumping of OSPF Configuration Files
Network topology is formed as the result of configuring routers according to a pre-defined topology graph. By
dumping configuration files available from each router, and parsing and analyzing the contained configuration data,
an entire view of the network can be reconstructed. This approach requires a thorough understanding of an IP
router's operations. When errors exist in configuration data, these errors must be identified and removed. More importantly, this approach only provides a static view of the network topology. To make it more dynamic, the
dumping frequency can be increased, but it is hard to go beyond a certain limit, due to the overheads caused by data
collection.
Participating in OSPF Packet Exchange
In this approach, the OSPF topology tracker acts as a dumb OSPF router attached to a real OSPF router via a point-to-point link. By doing so, an adjacency relationship can be fully or partially formed between the topology tracker and the router, such that LSAs are exchanged between them. Here "fully" refers to the adjacency being completely set up, with special measures taken to stop the topology tracker being used by other routers to forward IP packets (e.g. by assigning infinite OSPF weight on the links incoming to it, or setting up strict route filters on it); "partially" refers to keeping the process of setting up the adjacency from completing (e.g. by including fake LSAs in Database Description packets but never actually sending out LSAs), such that the router will not be able to include a link to the topology tracker but still keeps sending LSAs to it.
Since the topology tracker participates in LSA exchange, the advantage of this approach is that the link-state database can be reliably and quickly bootstrapped, and this mode can be used on any type of network media without constraints. However, the major problem of this approach is that it impacts the network, at least the attached router. As a result, the router might trigger SPT calculation too frequently if the topology tracker repeatedly sets up adjacency with it, and the router's memory will be wasted on sending LSAs in vain.
Passive Snooping of OSPF Packets
Since OSPF packets are encapsulated in IP packets with their own protocol number, a protocol monitor can be deployed to passively snoop OSPF packets; thus the OSPF topology can be gradually formed as packets are captured.
Due to its completely passive manner, this approach has the advantage of not introducing any network traffic. As a result, OSPF routers will not be aware of the protocol monitor's presence and will not be affected at all. However, the drawback of this approach is that the topology is formed incrementally; that is, if no network topology change happens, one must wait for routers to refresh their LS database (normally 30 min). This is often undesirable for
operators who wish to visualize a newly configured network.
As can be seen, there are pros and cons to each option discussed above. The dumping option is error-prone, complicated and only provides a relatively static topology view. For the participating option, the overhead caused to the network is not desirable. Finally, for the passive snooping option, although it is safe with low overhead, an initial delay might be introduced while the system starts to capture LSU packets. Given that the CRP Monitoring framework is to be deployed on edge routers, the places where OSPF packets are initiated, the capture of LSU packets is very reliable. With the reliability of capturing OSPF traffic, as well as no overhead being introduced, the initial delay should be tolerable. As a result, the CRP Monitoring framework chooses the method of snooping OSPF packets passively.
Overlay Construction
In overlay-based data management, the process of overlay construction is the process of building up a distributed data structure; the focus is to establish and maintain the pointers that connect to other related nodes.
Node Join
As in most other distributed overlay techniques, before a new node joins the system, it needs to know at least one node that is already part of the system. The incoming node queries this existing node and obtains state about the hubs, along with a list of representatives for each hub in the system. Then it randomly chooses a hub to join and contacts an existing node of that hub. The incoming node installs itself as a predecessor to become a part of the hub.
That is, it takes all the successors of the contacted node as its own successors and half of that node's range as its initial range; it then initiates other maintenance processes, such as long-distance link construction and successor list maintenance, as sketched below.
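A simplified sketch of this join step is given below, assuming a node's range is a half-open interval split at its midpoint; the class and method names are illustrative.

class HubNode:
    def __init__(self, low, high):
        self.low, self.high = low, high   # responsible range [low, high)
        self.successors = []

    def split_for(self, newcomer):
        # Install 'newcomer' as predecessor: it takes half of this
        # node's range and inherits the successor list.
        mid = (self.low + self.high) // 2
        newcomer.low, newcomer.high = self.low, mid
        newcomer.successors = [self] + list(self.successors)
        self.low = mid                    # this node keeps the upper half

first = HubNode(0, 200)                   # first node owns the whole range
joiner = HubNode(0, 0)
first.split_for(joiner)                   # joiner now owns [0, 100)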
Node Departure
When a node departs, other related nodes need to repair their links, particularly the successor and predecessor links that directly impact the correctness of routing. To repair successor/predecessor links within a hub, each node maintains a short list of contiguous nodes further clockwise on the ring than its successor. Each node then pings the nodes in the list periodically to update their liveness, as well as the range of information for which these nodes are responsible. When a node departs, it is its predecessor's responsibility to find the next node along the ring
as its new successor. As for long-distance links, which are important for routing optimization, they are repaired in the next round of periodic reconstruction, using the newly estimated node count. Finally, to repair broken cross-hub links, a node can either use a backup cross-hub link, query its successor or predecessor for their links to the desired hub, or query a bootstrap server (i.e. a node dedicated to processing bootstrap requests) for the address of a node participating in the desired hub.
The figure above outlines the division and interaction between the different layers, showing the major modules and interface methods of the software structure. The top layer, the RMI (Remote Method Invocation) layer, consists of one major module, the Monitoring framework RMIServer. This layer accepts monitoring tasks issued by the users (i.e. acting as the RMI clients); a monitoring task can register a new measurement task or retrieve a measurement
result. It thus interfaces with the supporting overlay layer via the rmiTaskInit() method; once the monitoring
task has been completed by the supporting overlay layer, the result is returned to users via callBack(), a callback style interface method.
The supporting overlay layer has three major modules, namely CtrOverlayMgr, TopologyMgr, and DataOverlayMgr. As their names indicate, the CtrOverlayMgr module mainly deals with the functionalities of the control overlay, including construction of ALM trees (via the alm() method) to disseminate the measurement task configurations to the appropriate participating nodes, and scheduling of the corresponding active measurements (via the schedule() method) according to the task configurations. Similarly, the DataOverlayMgr module deals with the functionalities of the data overlay, including construction of the hub-based overlay network (via the hub() method) and routing of data overlay messages (via the route() method). Both CtrOverlayMgr and DataOverlayMgr need to
interface with the network layer, to set up network connections with other overlay nodes; in addition, both of them
obtain topology information provided by the TopologyMgr module (via the topology() method), which
interfaces with the OSPFSnooper module at the network layer using an instance of the observer pattern – i.e. a
publish/subscribe relationship exists between OSPFSnooper and TopologyMgr, thus TopologyMgr is notified by
OSPFSnooper when there are any changes in the topology graph.
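A minimal sketch of this observer relationship follows; the subscribe() and topology_changed() names are illustrative, not the actual module interfaces.

class OSPFSnooper:
    # Publisher: parses snooped OSPF packets and announces changes.
    def __init__(self):
        self.subscribers = []

    def subscribe(self, observer):
        self.subscribers.append(observer)

    def on_lsa_captured(self, lsa):
        change = self.parse(lsa)          # update the topology database
        if change:
            for obs in self.subscribers:
                obs.topology_changed(change)

    def parse(self, lsa):
        return {"lsa": lsa}               # placeholder for LSA decoding

class TopologyMgr:
    # Subscriber: keeps the topology graph used by both overlays.
    def topology_changed(self, change):
        print("updating topology graph:", change)

snooper = OSPFSnooper()
snooper.subscribe(TopologyMgr())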
The network layer performs network level functionalities, including conducting active measurement tasks by
sending UDP-based probe packets (i.e. UDP_AM in the figure above); supporting the supporting overlay layer by making TCP/UDP connections to other nodes (i.e. UDP_Worker for the control overlay and TCP_Worker for the data overlay in the figure above); and providing topology information by passively capturing
OSPF packets and continuously maintaining a topology database. In addition, when an active measurement finishes,
the UDP_AM module interfaces with the overlay layer to pass the measurement results to the DataOverlayMgr
module, from which the results can be routed to a node whose range covers the values of the measurement results.
User Level Interfaces
User level interfaces refer to setting up the interfaces that allow CRP Monitoring framework users to perform
monitoring tasks. The design is driven by the desire to make CRP Monitoring framework a lightweight monitoring
system, but flexible and extensible in terms of providing a large configuration space to users. The topics discussed in
this section include the API functions that are provided to users, and the utilization of Java RMI, which not only
facilitates these interactions, but also provides an efficient means to detect and control the run-time status of a CRP
Monitoring framework node for experimental purposes.
Intra-node Structures
The major intra-node operations and interfaces of CtrOverlayMgr with other components are illustrated in the figure below.
Basically, for a Monitoring node to accommodate multiple sessions at the same time, each session is described by an ALMSessionInfo data structure (i.e. sessionInfo in the figure above), which not only encapsulates the original configuration information of the measurement task (passed from the RMI layer), but also encapsulates information regarding the multicast session, i.e. the ID of the root node (rootID) and the sequence number of the session (sessionSeq). The sessionSeq is a contiguous sequence number generated by each node when it initiates a new multicast session; by combining sessionSeq and rootID, a multicast session can therefore be uniquely identified. The activity of each multicast session is further wrapped into an ALMSession (i.e. session in the figure above), thus the Steiner tree calculation, as well as the progress of each multicast session, is guaranteed to be independent of the others.
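The sketch below illustrates the unique session identification described above; fields beyond rootID and sessionSeq are assumptions.

from dataclasses import dataclass, field
from itertools import count

_seq = count(1)   # per-node contiguous sequence generator (illustrative)

@dataclass
class ALMSessionInfo:
    root_id: int                         # rootID of the session's root node
    session_seq: int = field(default_factory=lambda: next(_seq))
    task_config: dict = field(default_factory=dict)   # assumed container

    def session_key(self):
        # (rootID, sessionSeq) uniquely identifies a multicast session.
        return (self.root_id, self.session_seq)

a = ALMSessionInfo(root_id=7)
b = ALMSessionInfo(root_id=7)
assert a.session_key() != b.session_key()   # distinct sessions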
For each Monitoring node, depending upon the role that it assumes, all of the multicast sessions in which it
participates are organized into three tables. The entries in these tables are aggressively managed using timers. At
every tick, e.g. every other minute, the session at the head of each table is examined to see whether it should be
timed out. If the table is for sessions in which the node acts as a leaf node in the multicast tree (i.e. the Leaf Table in the figure above), timing out means that the time scheduled to perform the measurement task has arrived. Otherwise, for those sessions in which the node acts as the root node or an internal node, timing out means that the multicast packets have been sent out but not all the ACK packets have been successfully received. If a node receives all the ACKs that it is waiting for, it deletes the session from the corresponding table, and either acknowledges the upstream parent if it is an internal node, or uses a call-back to notify the RMI clients if it is the root node of the session.
Message Handling
As seen in the figure above, UDP is the transport protocol used by the control overlay, primarily due to its lower latency cost and state overhead relative to TCP. The multicast content is encapsulated into a packet using the ALMPacket data structure, which is also used to carry the acknowledgement from a node to its upstream parent node. An ALMPacket is composed of a header part and a data part. The packet header format is shown below. The definition of the packet header is rather straightforward: the Packet Type field indicates whether the packet is a
multicast packet or a multicast_ack packet; the Session Sequence field is the contiguous sequence number generated
by the root node, whose ID is shown in the Root ID field; and the Sender ID field contains the ID of the sender of
this packet. Lastly, the Data Part 1 Length and Data Part 2 Length fields specify the lengths of the two segments of
the data part as explained below.
In a multicast packet, the data part consists of two segments: the measurement task's configuration parameters, and the multicast tree map calculated by the root node. Thus, their lengths in bytes are specified by the Data Part 1 Length and Data Part 2 Length fields respectively. In a multicast_ack packet, the data part consists of the acknowledging node's answer (i.e. accepting or rejecting the measurement task), as well as a list of downstream children who reject the measurement task. Thus the Data Part 1 Length field specifies the length of the local node's answer, and the Data Part 2 Length field specifies the length of the rejecting node list (it is zero if none of the downstream children rejects the measurement task).
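Based on the fields described above, a sketch of packing such a header is shown below; the field widths are assumptions, since the text does not specify byte sizes.

import struct

# Assumed widths: type (1 byte), session sequence (4), root ID (4),
# sender ID (4), data part 1 length (2), data part 2 length (2).
HEADER_FMT = "!BIIIHH"
MULTICAST, MULTICAST_ACK = 1, 2

def pack_alm_header(ptype, session_seq, root_id, sender_id, len1, len2):
    return struct.pack(HEADER_FMT, ptype, session_seq, root_id,
                       sender_id, len1, len2)

config = b"task-params"           # data part 1: task configuration
treemap = b"tree-map"             # data part 2: multicast tree map
packet = pack_alm_header(MULTICAST, 3, 7, 7,
                         len(config), len(treemap)) + config + treemap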
The Data Overlay
This section introduces the data overlay of the monitoring framework, focusing on its inter-node operations, intra-node software structures and the message exchange mechanism.
Inter-node Operations
The data overlay is coordinated mainly by DataOverlayMgr, the Java class acting as the data overlay manager; its
major functionalities include constructing and maintaining the hub-based overlay network, and routing data overlay
messages. Recall that in the data overlay, nodes are logically arranged into ring-based hubs, with each hub responsible for a contiguous range of values of the attribute that the hub represents; data items and queries can therefore be routed along the hubs and processed by all nodes that might potentially have matching values. To do so, each node maintains three types of neighboring relationships within a hub, namely successors, predecessors and long-distance neighbours; additionally, each node maintains cross-hub neighbors in other hubs to support multiple-attribute routing. Periodically, random sampling is performed so that system-level estimates can be made to improve the long-distance link construction.
Accordingly, the inter-node operations can be grouped into three categories, namely overlay construction, random sampling, and overlay routing. Given that overlay construction is complicated, in that one action (e.g. joining, leaving, or long-neighbor establishment) normally involves multiple nodes, the operations taken by the different nodes are listed below. The corresponding message exchange, including the status changes of the involved nodes, is illustrated through examples.
Intra-node Structures
The major intra-node operations and interfaces of DataOverlayMgr with other components are illustrated below. As seen, the DataOverlayMgr interfaces with the UDP_AM module once an active measurement produces a measurement result; it also interfaces with the RMI module when a range query is issued. In both cases, a routingItem is generated to wrap the measurement result or the range query as a routable item to be inserted into, or retrieved from, the system (respectively by the Insertion and Retrieval components). If the routing item is a range query and the result is retrieved from the system, the DataOverlayMgr interfaces with the RMI module again to return the queried result to users; the Insertion component stores the measurement results.
In addition, to set up connections with other nodes, the DataOverlayMgr interfaces with the network layer module
TCP_Worker through the MsgSwitch component. Essentially the TCP_Worker module provides physical network
connections with other nodes by using TCP as the transport protocol, while the MsgSwitch component provides and processes three types of interface methods for other components that need to interact with other nodes, e.g. Sampling and Construct as part of HubMgr, and Retrieval and Insertion as routing facilities. In more detail, by calling the routing() method a message is sent to the next hop that is calculated using the routing algorithm; by calling the sending() method a message is sent directly to a node without being routed; and by calling the delivery() method, messages are dispatched to different components according to the message type. In the case that a routing item is for the local node, it is immediately delivered to the responsible component without being passed onto the network. Clearly, the Retrieval and Insertion components call the routing() method most, while the Sampling and Construct components call the sending() method most. A sketch of this dispatch is shown below.
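A minimal sketch of the three interface methods, with illustrative helper names, might look as follows.

class MsgSwitch:
    def __init__(self, hub_mgr, tcp_worker, components):
        self.hub_mgr = hub_mgr          # provides next-hop information
        self.tcp_worker = tcp_worker    # physical TCP connections
        self.components = components   # message type -> component

    def routing(self, msg):
        # Send msg towards the node responsible for its key.
        if self.hub_mgr.is_local(msg.key):
            self.delivery(msg)          # local item: no network hop
        else:
            self.tcp_worker.send(self.hub_mgr.next_hop(msg.key), msg)

    def sending(self, node, msg):
        # Send msg directly to a node, bypassing routing.
        self.tcp_worker.send(node, msg)

    def delivery(self, msg):
        # Dispatch msg to the responsible component by message type.
        self.components[msg.type].handle(msg)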
The calculation of the next hop in the MsgSwitch component is based upon the information provided by the HubMgr component, which maintains a set of hubs; each hub performs independent sampling and construction. In other words, when a node is a member of multiple hubs, the sampling and construction operations are processed multiple times, once for each particular hub.
Finally, in topology-aware overlay construction, the TopologyMgr module provides topology information to the Construct component, which calculates the fitness of choosing a bootstrap node when a node joins the system. A timer is set up for periodic operations, such as random sampling, long-distance link repair and histogram construction.
Chosen Approach for CRP monitoring
Since each evaluation methodology has its strengths and weaknesses, an ideal approach is to combine them to tackle the issues of cost, scalability and realism. Such an approach can be described as follows.
Beginning with scenarios of small size, the system can first be evaluated by both simulation and emulation, with identical network conditions and run-time parameter configurations.
By comparing the results from simulation and emulation, the simulation methodology can be verified and the reliability and limits of the simulation software can be determined.
Then scenarios of large networks, or scenarios with varied configuration parameters, can be designed and set up for simulation.
Once the simulation results show that the system has the desired behaviors, the system can be implemented and deployed on a test bed or larger emulation environment for further evaluation.
Finally, the system can be deployed into a real environment.
However, most emulator tools are very hardware-specific; they normally require a dedicated environment constructed for a specific purpose. When considering the issue of scalability in emulation, i.e. the limited size of the networks that can be emulated, simulation also needs to be considered.
CRP Monitoring framework is a distributed network system in which an application-level multicast tree and a data overlay network are built upon the network layer. Since the underlying network layer supports the overlay networks, its impact cannot be ignored; performance metrics such as number of hops, link stress and stretch are of primary importance.
This is in contrast to the fact that most traditional network simulation tools, e.g. NS-2, focus more on network-level metrics such as link throughput, packet delay and packet loss.
Simulators such as NS [NSWeb] and OPNET [OpNetWeb] offer an efficient event-driven execution model by requiring the protocol under test to be rewritten according to the simulator's event-driven model. There is no actual network traffic in simulators, and the functionalities provided by the supporting modules are merely logical operations; the simulated protocol cannot be tested using real implementation code but must be refined and converted to a real implementation later. The main issue with simulation is that it only provides a conceptual network environment using a virtual timescale. Since the fully controlled simulation environment is decoupled from any external traffic or system, simplified assumptions may result in an inaccurate representation of the traffic dynamics seen in real-world environments.
Emulators such as EMPOWER, GNS3, NIST Net, Emulab and MARS solve the verification and validation
problems by directly executing unmodified real-world code in a network testbed environment. In other words, emulation can be regarded as real-time simulation that uses real computer systems and networks as the platform; the protocol modules, i.e. the modules under test, are real implementations interacting with the protocol stack of the underlying emulator host(s).
However, one of the major tasks of network emulation is to generate specific network conditions and traffic dynamics as required. Here, traffic dynamics refer to packet-level concepts, such as packet delay, packet loss, etc., whereas network conditions refer to higher-level concepts, such as a link's bandwidth or capacity, a router failure, topology changes, etc. To achieve this, typically an emulator software module has to run within the kernel of the emulator host.
Since the context switching time of processes is large and the maximum number of processes in an OS is limited, the major issue with emulation is its scalability, in terms of both its overall emulation capacity and its capability of emulating specific network parameters, such as the maximum bandwidth and packet delay of the system.
Building an Emulation Environment
This section presents an emulation environment constructed for the evaluation of CRP. It first shows that, by leveraging the virtualization technology of Xen, multiple virtual machines can be created on one physical machine and configured for emulation purposes. It then presents an emulation toolkit built upon Xen; using this toolkit, emulation networks are set up automatically and CRP can be efficiently deployed on the emulated networks.
Xen is a virtualization technology. It enables a single machine to run multiple independent guest operating systems concurrently in separate virtual machines (VMs). These VMs, also termed guest domains, are completely isolated from the others running on the same machine, which provides the illusion of an isolated physical system for each of the guest operating systems. By isolating VMs, fault tolerance is preserved between virtual machines.
For example, if one guest operating system crashes it will not take down the whole machine just its own virtual
machine.
The architecture of Xen is layered, and the lowest and most privileged layer is the Xen Virtual Machine Manager
(VMM). For network connections, each domain network interface (vif) is connected to a virtual network interface in domain 0 by a point-to-point link, which can be effectively viewed as a "virtual crossover cable".
Domain 0 takes control of each vif's access to the host machine's physical network devices, and "switches" each packet seen at the host's physical network device to the appropriate vif. Consequently, each vif in a guest domain appears as a normal network interface card (NIC) to its own domain. In order to "switch" incoming packets through the correct vifs to a guest domain, and outgoing packets from vifs to the correct physical device, domain 0 is responsible for performing proper ARP configuration for each active guest domain to generate the desired IP routing path inside the local Xen machine and across multiple Xen machines; it then handles the packets using standard Linux network utilities, such as bridging, routing, NAT, etc. Shown below is the network setup on two Xen machines. The two machines are connected by a Gigabit Ethernet switch, and on each of them two guest domains are created. In each guest domain a different number of virtual NICs is configured, and bridging is used to connect the virtual interfaces to which these NICs are attached.
Shown below is another example, where four Xen machines (i.e. host_20, host_15, host_17, and host_18) are used to set up an emulated network with four subnets connected by four routers through static, asymmetric routing. To do so, for each subnet one VM with two NICs can be created to act as a router node (i.e. VR1 – VR4); VMs with one NIC can be created as non-router nodes (e.g. 10.0.20.10 and 10.0.15.10). These four router nodes can be hosted on the Xen machines where the non-router nodes of their subnets are hosted, or they can be hosted together on a separate Xen machine (i.e. host_R) to achieve maximized routing ability by assigning dedicated hardware resources. Note that "asymmetric routing" means that the path of a ping packet from 10.0.15.10 to 10.0.20.10 goes via VR1 → VR4, while the acknowledgement from 10.0.20.10 to 10.0.15.10 goes via VR4 → VR3 → VR2 → VR1.
Xen-based Emulation Framework
In order to examine the performance of CRP under different network settings, multiple experiments must be conducted and repeated with varying network size and topology, which leads to corresponding changes to router interface configuration and OSPF daemon setup, as well as to application-level run-time parameters. All these configuration tasks are laborious, error-prone and time-consuming if done interactively through standard command line interfaces. Therefore a configuration toolkit is required. This toolkit is an interactive console programme written in the Python programming language and the Linux shell script language. The system infrastructure is shown in the figure, in which the toolkit takes a topology file (generated by a topology generator) as input and generates the configuration files and commands of the resulting emulation network accordingly. The toolkit can be functionally divided into two parts, working at different levels: one works at the Xen host level and the other works at the VM level. The major functionalities of each level, together with the NAT configuration that plays an important role in setting up the emulated network automatically, are explained in the following.
Xen Host Level Operations
At the Xen host level, the input topology file defines the topology of the network as a list of nodes and a list of edges connecting network nodes; the input host list defines the available hosts running Xen. These inputs are first parsed into a setup file in which the network nodes of the topology file are translated into a list of VMs; the edges between nodes are mapped into IP-addressed network interfaces on each VM; and these VMs are evenly assigned to the available Xen hosts as specified in the host list. Next, for each VM, configuration files to set up the guest domain on the appropriate Xen host, as well as to set up the Zebra and OSPF daemons on each VM, are generated and deployed via an SSH connection to the remote Xen host. Once the deployment of configuration files is successful, further commands can be issued, such as VM start/shutdown and displaying summary information of running VMs. A sketch of this parsing step is shown below.
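The Python sketch below illustrates this first parsing step under an assumed file format (one "node" or "edge" entry per line); it is not the actual toolkit code.

# Assumed topology file format, one entry per line:
#   node <name>
#   edge <nodeA> <nodeB>
def parse_topology(topo_path, hosts):
    vms, edges = [], []
    with open(topo_path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "node":
                vms.append(parts[1])
            elif parts and parts[0] == "edge":
                edges.append((parts[1], parts[2]))
    # Evenly assign VMs to the available Xen hosts (round robin).
    placement = {vm: hosts[i % len(hosts)] for i, vm in enumerate(vms)}
    # Each edge becomes a small subnet; illustrative addressing only.
    subnets = {e: "10.0.%d.0/30" % i for i, e in enumerate(edges)}
    return placement, subnets

placement, subnets = parse_topology("topo.txt", ["host_15", "host_17"])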
VM Level Operations
At the VM level, the input is the vmlist file generated by the deployment process at the Xen host level. The vmlist file specifies the list of all VMs; it also distinguishes core routers and edge routers, depending upon the number of network interfaces configured on each VM. Once the VMs are successfully started, CRP-relevant commands can be issued through NAT (Network Address Translation) directly to the appropriate VMs, e.g. for edge router VMs, to install and run the CRP framework and to collect and clear evaluation log files.
NAT Setup
This toolkit runs on a Linux machine, kettle, on the network with a public IP address xxx.xxx.xxx.xxx, while the emulated network is constructed on subnets xxx.xxx.xxx.xxx/xy. To access these VMs, the normal approach is via the control console of the Xen daemon on the Xen machine where the VM is hosted. This is not an efficient approach when the size of the emulated network is large. Another approach is to assign public IP addresses to emulated network nodes, but this is not practical due to a shortage of public IP addresses in the department. Therefore, to set up the emulated network nodes in a controlled and automatic manner, NAT is needed to connect the two islands of IP addresses. During the deployment process at the Xen host level, the toolkit generates the NAT configuration file automatically and sets up NAT on the first VM; NAT starts to work when the first VM is successfully started. After a short stabilization period during which the network construction is completed, further control and management commands can be issued to all other VMs.
Emulation: Control overlay
The emulation experiments that will be performed are illustrated in the figure below and are explained as follows:
With a configured network size, yielding a fixed edge router set, the multicast session consists of all edge
routers.
To make the results fair, i.e. to alleviate the issue that multicast performance is dependent upon the tree's shape, each edge router is chosen as the root node from which to initiate the multicast session.
Firstly, the multicast packet is relayed along the multicast Steiner Tree (ST).
Secondly, the unicast packet is relayed along the Shortest Path Tree (SPT) consisting of shortest paths from
root node to each receiving node.
Once a cycle of each node acting as the root node to initiate the multicast/unicast trees is finished, the performance is measured by averaging each session's result.
Each experiment is repeated twice, with topology information being obtained through OSPF snooping and
through pair-wise probing, respectively.
In the case of pair-wise probing, the estimate of the network topology is periodically constructed as follows:
1. Each edge router sends a UDP packet as a probe to each of the other edge routers and calculates the packet delay when it receives the corresponding ACK packet.
2. Each edge router sends this updated packet delay information, i.e. from itself to all other edge routers, to a
centralized aggregator. The aggregator aggregates the reported delay measurements into a V × V adjacency matrix, with each row representing a source, each column representing a destination, and each cell containing the delay information from a source to a destination.
3. The aggregator sends the summarized adjacency matrix back to each edge router, where the topology graph of the
network is constructed by taking the adjacency matrix as a fully connected graph. The graph is directed, with each cell in the matrix representing the weight of the corresponding directed edge between two vertices, i.e. the distance (by delay) from one edge router to another edge router.
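A compact sketch of the aggregator's matrix construction follows; the router names and delay reports are illustrative inputs.

# Aggregate reported delays into a V x V adjacency matrix:
# row = source edge router, column = destination, cell = delay.
def build_matrix(routers, reports):
    # reports: {(src, dst): delay_ms} collected from the probes.
    index = {r: i for i, r in enumerate(routers)}
    V = len(routers)
    matrix = [[0.0] * V for _ in range(V)]
    for (src, dst), delay in reports.items():
        matrix[index[src]][index[dst]] = delay
    return matrix

routers = ["ER1", "ER2", "ER3"]
reports = {("ER1", "ER2"): 4.2, ("ER2", "ER1"): 5.0, ("ER1", "ER3"): 7.1}
matrix = build_matrix(routers, reports)   # asymmetric delays allowed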
Note that in this pair-wise probing approach, since core routers are not involved in the probing process, the resulting
topology graph is just a partial view of the whole network topology; it reflects the end-to-end distance between a
pair of edge routers, i.e. the result of packets being routed along the shortest path between two edge routers in the
physical network. For each experimental iteration, in which one of the edge routers is chosen as the root node and all the others act as receiving nodes, the logged information is stored in separate trace files for each virtual node, and is gathered and stored locally on kettle in a hierarchical structure. The benefit of using this batch-style running is that, given an emulated network setup and a chosen mode (OSPF snooping mode or UDP probing mode) for discovering topology information, experiments can be continuously iterated; thus the system does not need to be restarted from scratch each time, and as a result does not have to wait for the OSPF configuration to stabilize. In addition, the separate archives of raw trace files ensure the repeatability and verifiability of the evaluation calculations.
Data Overlay
The evaluation of CRP data overlay will be conducted via emulation by running the
developed prototype on the Xen-based emulation framework. Unless stated otherwise, the experiments are run for one hub that represents the integer value of the measurement task ID, with the minimum value 0 and the maximum value n × K, where n is the number of nodes. Note that the setting of K will not constrain interpretation of the results, since the data to be inserted, as discussed below, are randomly generated within the range {0, n × K}. That means that, ideally, with balanced data each node is responsible for a fixed number of K units of the whole range of values, regardless of the network size. In all experiments that are run, K = 20, so that the variance in results across different network sizes can be seen. Given an emulated network, a configuration file containing a hub's basic information, such as its name, data type, and the minimum and maximum values, is sent to each node. With this file, the first node in the system is in charge of the whole range of the attribute and splits half of its range to the second node that joins the system. This splitting process continues as more nodes join the system. A full list of network nodes is also sent to each node. With this list, nodes are queued into an identical order, hence they can locate themselves in the list and randomly choose a node that is in front of them in the list as the node to contact.
Following the order of the node list, nodes join the system one by one. Once all nodes are started from the Linux machine kettle, a script on each node is triggered to run. This script runs an RMI client to inquire about the initial range that each node is assigned when it first joins the system. These initial ranges are collected by kettle to verify whether there is any overlap or fault with regard to the nodes' ranges. Once it is verified that all nodes have joined the system successfully, they are triggered to create experimental data. That is, each node generates 10 experimental data tuples of the form {taskID, timeStamp, taskResult}. In particular, the taskID attribute is randomly generated within the range {0, n × K}, and the overlay hub is formed for this attribute.
With experimental data being generated at each node, the data are inserted into the system by being routed to the node whose responsible range covers the attribute value of the data. The process of inserting data into the system is configured to repeat M = 10 times, once every 30 minutes, so as to run the experiment long enough to capture the average performance of the system. Note that in the data overlay, when a data value is generated, it is just raw; when it is inserted into the system, it is assigned a unique ID. Hence, although a node may insert the same raw data value several times, each insertion is processed as a unique data item. At run time, the logging component at each node logs information about each overlay message it sends out. The information includes the message's type and the corresponding size in bytes, as well as the destination node's IP address and port number.
When a node receives a message, and if the message is a routing message, it records the routing behavior. That is,
the hop-count field, which is encapsulated in the header of a routing message, is retrieved and increased by one; then
if the node is the destination of the message, the final hop-count value is logged for evaluation (otherwise the
message is forwarded to the next hop). Furthermore, every minute a system-level evaluation is carried out to generate statistics with regard to bandwidth consumption, memory consumption, number of messages, etc. When an experiment finishes, remote scripts are first triggered to run on each node to calculate the performance of each node, and the results are then collected and processed locally on kettle for further evaluation; separate archives of raw trace files for each node ensure the repeatability and verifiability of the evaluation calculations.
Experimental Network Setup
The environment setup for CRP can be constructed on ten blade servers which are connected by a Gigabit switch,
each with two Intel® Xeon 3.00 GHz CPUs and 2 GB memory. A number of virtual domains (i.e. domUs) are constructed on each blade server; the VMs on all blade servers collectively form an emulation network. Xen version 2.0 with Xeno-Linux version 2.6.11.10 can be installed and run on both dom0 and the domUs. Each domU acts as a (virtual) router and is configured with m MB of RAM; the memory left for dom0 is calculated as mem_dom0 = (2048 − m × n) MB, where n is the number of domUs running on the blade server.
As can be seen, the memory left for dom0 is a function of the number of domUs and the memory assigned to each domU. On the one hand, each Xen blade server is expected to run as many virtual routers as possible; on the other hand, dom0 serves an important role in switching network traffic, as well as in keeping each domU monitored and managed. Therefore, with the current hardware configuration, this balance is maintained by assigning 368 MB to dom0 and 210 MB to each domU, i.e. at most 8 domUs can be configured on each blade server.
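These numbers are consistent with the formula above, as the following one-line check shows.

m, n = 210, 8               # MB per domU, domUs per blade server
mem_dom0 = 2048 - m * n     # 2048 - 1680 = 368 MB left for dom0
assert mem_dom0 == 368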
OSPF Configuration
Once the topology generator generates the topology information, i.e. the number of nodes, the number of edges, and the connection relationships between nodes, this topological information needs to be mapped into a network setup. In this setup, each link out of a router is mapped to one of the router's network interfaces, with an IP address as well as a cost (or weight) value that is used by the OSPF routing daemons to calculate shortest paths and the corresponding routing tables. In the CRP framework, each edge between two nodes on the topology map is taken as a subnet segment; in other words, each subnet segment connects two routers. By configuring the OSPF daemon this way, the complexity of constructing such dynamically built emulation networks is mitigated while generality is not lost. The cost values of a router's interfaces reflect the output side of each router interface, i.e. each is associated either with the intra-area distance between two routers or with externally derived routing data (e.g. BGP-learned routes).
The principle of configuring the cost value is that the lower the cost, the more likely the interface is to be used to forward data traffic. Normally this cost is configured by the system administrator and computed by dividing a reference bandwidth by the configured bandwidth of the interface. For example, in practice, by default an interface of a Cisco router is assigned a weight value proportional to the inverse of the bandwidth of the associated link.
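As a concrete illustration, assuming the 100 Mbps default reference bandwidth commonly used by Cisco routers, the cost computation looks as follows.

REF_BW = 100_000_000            # assumed reference bandwidth: 100 Mbps

def ospf_cost(interface_bw_bps):
    # OSPF interface cost = reference bandwidth / interface bandwidth,
    # truncated, with a minimum of 1.
    return max(1, REF_BW // interface_bw_bps)

print(ospf_cost(10_000_000))    # 10 Mbps Ethernet -> cost 10
print(ospf_cost(1_544_000))     # 1.544 Mbps T1    -> cost 64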
Metrics
In overlay networks, network edges are direct UDP/TCP connections between pairs of nodes, and overlay packets are physically forwarded by routers, hop by hop, along the unicast path. Since overlay networks cannot control how packets are forwarded in the underlying physical network, packets might be transmitted on some of the links more than once; hence extra delay might be caused compared to native unicast or IP multicast. There often exists a trade-off between performance and overhead: multicast members periodically exchanging updated state information causes control traffic, but at the same time affords flexibility and resilience in optimal path selection and failure recovery. To evaluate these overheads, the following metrics are commonly used to measure performance.
Transmission Cost: defined as the average cost of sending a packet from one group member to the rest of the group. It can be measured by summing up the weight values from each parent to its children along the unicast paths. In other words, this includes the cost of multiple traversals of some of the network links.
Note that, in terms of the count of packets and corresponding bytes seen at the application level, the total cost of multicast and of unicast is equal; however, for the root node, the number of packets sent out in multicast is greatly reduced, hence the bottleneck issue at the root node is mitigated.
Link Stress: defined as the total number of identical copies of a packet travelling over a single physical link, i.e. the duplicate packets on the links. For network-level IP multicast, the link stress is always one; for overlay-level multicast, the multicast packets are forwarded along the unicast paths, therefore a router may receive and send data over the same network interface, causing duplicate packets to be transmitted.
Link Stretch: defined as the ratio of the length along the overlay path relative to the network-level unicast path
between two nodes. This pair-wise metric can be measured either by hop or by delay and essentially captures the
additional distance that a packet must cover relative to the unicast path. The shortest path tree has a link stretch of
one, and typically application level overlay multicast has a stretch greater than one.
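The sketch below computes link stress and a hop-based stretch for a toy overlay, assuming each overlay edge is described by the physical links its unicast path traverses.

from collections import Counter

# Each overlay edge lists the physical links its unicast path uses.
overlay_paths = {
    ("A", "B"): ["l1", "l2"],
    ("B", "C"): ["l2", "l3"],    # l2 carries two copies -> stress 2
}

stress = Counter(l for path in overlay_paths.values() for l in path)
print(max(stress.values()))      # worst-case link stress: 2

def stretch(overlay_hops, unicast_hops):
    # Hop-based link stretch between two nodes.
    return overlay_hops / unicast_hops

print(stretch(overlay_hops=4, unicast_hops=3))   # > 1 for overlay multicast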
Note:- When link stretch is measured by delay it is also referred to as RDP (Relative Delay Penalty), and the measurement result closely relates to the run-time network traffic level. In that case, if the clock on each node is well synchronized, this process assists in determining the real-world delay of overlay packets. However, even though Xen is designed to have each virtual domain (i.e. domU) synchronized with domain 0, and each domain 0 can be synchronized with a higher-level time source, bias might be introduced depending upon whether the two virtual domains are physically hosted by one machine or by two.
OSPF HITLESS RESTART
Edge and core routers are expected to deliver advanced services at higher throughput while handling large amounts
of link state advertisements with a wide range of attributes. The OSPFv2 and OSPFv3 routing emulation options
allow flooding of the DUT with large amounts of link state advertisements to simulate these stressful conditions.
The ability to deliver carrier-class reliability is a key requirement of today's high-performance routers. OSPF, combined with graceful restart and non-stop forwarding, is one of the key protocols that allow routers to withstand
network instability, dynamic changes and hitless router upgrades. The OSPFv2 and OSPFv3 Routing Emulation
Options allow measurement of the time required by routers to recalculate their routing tables, as well as propagate
LSAs while network changes occur. The amount of user data lost during the time routes are being recalculated can
also be measured.
Graceful Restart is a key capability defined in RFC 3623 that IP networks need in order to meet the carrier class
demands of mission-critical applications. By allowing OSPF peers to maintain network state during restarts, routes, and therefore data forwarding, are maintained. This limits service disruptions during router upgrades. OSPF options
allow emulation of helper router and restarting router behavior. Routers can be tested for proper restarting router and helper router behavior, which can include the ability to continue data traffic forwarding during restarts. State
information is provided during and after restarts to help users verify and diagnose behaviors.
The OSPF Hitless Restart feature adds extensions to OSPF to avoid interruptions in traffic forwarding and network-wide route churn when a control processor restarts or switches over to a redundant control processor. OSPF hitless restart is part of the high-availability project, which aims to reduce the duration of service interruptions caused by a control processor switchover to zero or near zero.
The goal of the OSPF Hitless Restart feature is to enhance the OSPF component to implement all extensions to
OSPF described in [draft-ietf-ospf-hitless-restart-04.txt].
The OSPF hitless restart feature will be introduced in two phases:
Phase 1: Support the "helper role" of OSPF hitless restart. An emulator implementing the enhancements for the "helper role" will be able to keep a "restarting router" in the forwarding path by continuing to announce an adjacency with the restarting router for a certain pre-specified time period. This assumes that the "restarting router" is capable of maintaining forwarding state across a restart of its control plane software (on the same control processor or on switchover to a redundant control processor), and that the network topology remains stable for the duration the "restarting router" takes to completely restart.
The following sections spell out the details of the changes needed to implement the "helper role".
Phase 2: Support the "restarting role" of OSPF hitless restart. When the emulator switches over from the primary EMULATOR to the secondary EMULATOR, the cards continue to forward traffic using the stale OSPF routes for a certain amount of time. Support for this phase will include both "planned" and "unplanned" hitless restart.
OSPF enhancements for hitless restart are as follows. The router attempting a hitless restart originates link-local Opaque-LSAs, called Grace-LSAs, announcing the intention to perform a hitless restart and asking for a "grace period". During the grace period its neighbors continue to announce the restarting router in their LSAs as if it were fully adjacent (i.e., OSPF neighbor state Full), but only if the network topology remains static (i.e., the contents of the LSAs in the link-state database having LS types 1-5 and 7 remain unchanged; periodic refreshes are allowed).
Two roles are played by OSPF routers during hitless restart. First there is the router that is being
restarted; then there are that router's neighbors, which must cooperate for the restart to be hitless. During hitless
restart we say that the neighbors are executing in "helper mode". The emulator will act as a helper, or perform a
hitless restart itself, only if it is configured to do so.
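As an illustration, the sketch below (Python, with hypothetical parameter names) captures the helper-mode entry check described above: the emulated neighbor helps only if configured to do so, only while the topology stays static, and only for an acceptable grace period.

def should_enter_helper_mode(helper_enabled, grace_period_s, max_grace_s, topology_changed):
    """Return True if this router may keep announcing the restarting
    neighbor as fully adjacent for the requested grace period."""
    if not helper_enabled:        # the emulator helps only if configured to
        return False
    if topology_changed:          # any change to type 1-5/7 LSA contents aborts
        return False
    return 0 < grace_period_s <= max_grace_s

# Example: configured helper, 120 s requested, 1800 s policy cap, stable topology.
print(should_enter_helper_mode(True, 120, 1800, False))   # True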
Example Scenario for the Emulator in the restarting role
Unplanned restart:
The following scenario illustrates how OSPF hitless restart works during a Switch Routing
Processor (EMULATOR) switchover from EMULATOR-1 to EMULATOR-2 due to an
unplanned restart (primary EMULATOR reload caused by a software/hardware failure).
1. Active EMULATOR-1 is running OSPF.
2. OSPF routes have been calculated and (if best) installed in the forwarding tables on the cards. Traffic is being
forwarded using these routes.
3. EMULATOR-1 goes down.
4. The cards stay up and continue to forward traffic using the most recent information in their forwarding
tables, received from EMULATOR-1 before it went down (this forwarding state starts to become
stale from this point on).
5. EMULATOR-2 notices that EMULATOR-1 has gone down and its switch fabric takes over forwarding
traffic. This switch-fabric switchover should be near instantaneous.
6. OSPF comes up on EMULATOR-2. As part of coming up, it learns that it is being restarted as a result of a
switchover (warm start), and whether forwarding state was maintained across the switchover. Since the restart was
unplanned (per its NVS), OSPF on the primary EMULATOR would not have sent out any Grace-LSAs
before going down.
7. If forwarding state was maintained across the switchover, then after its configuration is read, OSPF will send out
Grace-LSAs on all OSPF interfaces. It is mandatory that OSPF sends out Grace-LSAs before sending out any
Hellos: if Hellos are processed on an adjacent router before the Grace-LSA, the adjacent router will bring the
adjacency down (it does not see itself in the Hellos) and the restart will abort. The issue here is that Grace-LSAs
cannot be sent reliably, since the neighbors are not known to the restarting router (retransmissions to support
reliable flooding cannot be done without knowing the neighbors). One option is to send out Grace-LSAs a few
times before sending out any Hellos; another is to learn the neighbors from the cards and send the Grace-LSAs
reliably, sending Hellos only once they are acknowledged by the neighbors. (A sketch of this startup sequence
follows the list below.)
The other issue is that a neighbor router will declare the restarting router "dead" and bring the adjacency down if
it does not receive Hellos and/or Grace-LSAs before its inactivity timer fires for the restarting neighbor; if the
latency in OSPF coming up on EMULATOR-2 and sending out Grace-LSAs/Hellos is large (due to a large
configuration), the restart will not succeed.
8. Once OSPF on EMULATOR-2 receives Hellos from neighbors, it will start database exchange (as it does
normally). From the helping neighbors it will learn its own pre-restart router LSA (and network LSA, if it was
DR). It can figure out the list of active neighbors from that LSA. It will need to maintain a copy of these
self-originated learnt LSAs; in the normal course it would have flushed or reoriginated them with a higher
sequence number. Note that the list of FULL neighbors at the time of restart is not stored in NVS or
mirrored storage.
9. If it receives an LSA that is inconsistent with its self-originated pre-restart LSA, it will abort the restart. One
example of an inconsistent received LSA: the restarting EMULATOR gets a router LSA from a neighbor that has
no link back to the EMULATOR, while the restarting EMULATOR's own self-originated pre-restart LSA has a
link to this neighbor.
10. Until it is fully synced up with all neighbors, OSPF will not populate the route table with any routes.
SPF will be run only in order to bring up virtual links, if any, so as to restart with virtual neighbors.
11. Once all neighbors have been acquired (i.e., it has synced up with all the neighbors it had pre-restart), it will
flush all its received self-originated LSAs, reoriginate its LSAs based on the current state of its adjacencies, run
SPF to calculate routes, and purge all its originated Grace-LSAs. At this point the restart is complete. Based on
the SPF run, routes will be added to the route table. Routes are not downloaded to the cards until all restarting
protocols have converged; once every restarting protocol has signaled convergence, the new tree is downloaded
to the cards.
12. The purged Grace-LSAs signal the conclusion of the restart to the helping routers.
13. Exiting out of restart is bounded by a configurable timer, on whose expiry the restarting router will abort the
restart and start afresh, i.e., originate LSAs indicating the real current state of its adjacencies with its neighbors
(see the configuration section for more details). At this point OSPF can signal convergence (to the HA route
table manager component) so that the current route table can be downloaded. From here on, SPF calculations
will yield normal route updates. One option we might want to consider is that after an abort OSPF waits for a
configurable amount of time (up to the configured grace period) before signaling convergence and updating
routes. This ensures OSPF has had a chance to acquire its neighbors and has really converged, and helps ensure
that routes to BGP next hops will not change rapidly just because of the restart. The premise is that the restart
should not have any impact, which is what a successful restart would have achieved (SPF and route table update
would have happened only after OSPF was in FULL state with respect to all neighbors). This is more of an
implementation issue and involves other components too (it will be specified in more detail in the architecture
spec).
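The sketch below (Python, illustrative only; the ospf object and all of its method names are hypothetical) strings together the unplanned-restart sequence from steps 6 through 13: Grace-LSAs are flooded a few times before any Hellos, and the whole restart is bounded by a configurable timer.

import time

GRACE_RETRIES = 3         # flood Grace-LSAs a few times (flooding is unreliable here)
RESTART_TIMEOUT_S = 120   # configurable bound on exiting restart

def unplanned_restart(ospf):
    ospf.mark_fib_stale()                        # keep forwarding on stale routes
    for _ in range(GRACE_RETRIES):               # Grace-LSAs strictly before Hellos,
        ospf.flood_grace_lsas()                  # or neighbors drop the adjacency
        time.sleep(1)
    ospf.start_hellos()
    deadline = time.monotonic() + RESTART_TIMEOUT_S
    while not ospf.all_prerestart_neighbors_full():
        if time.monotonic() > deadline or ospf.saw_inconsistent_lsa():
            ospf.abort_restart()                 # reoriginate LSAs with real adjacency state
            return
        time.sleep(0.1)
    ospf.flush_received_self_originated_lsas()   # restart complete: reoriginate LSAs,
    ospf.reoriginate_lsas()                      # run SPF, then purge Grace-LSAs
    ospf.run_spf_and_update_routes()
    ospf.purge_grace_lsas()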
IP Engine
In the adopted software architecture, the IP Engine is the set of platform-dependent components supporting the IP
Stack and router. Its major functions are:
• IP interface management. The IP Engine is responsible for installing interfaces on the local line card and next-hop pointers to those interfaces on all emulator cards.
• Route distribution. The IP Engine processes route updates generated by the router and updates the Forwarding
Table on all line cards accordingly.
• Virtual routers. The IP Engine provides support for multiple virtual routers (vrouters).
• Packet transmit and receive (to us/from us). The IP Engine is responsible for transporting IP packets between
applications on the System Controller and the forwarder.
• IP Slow Path forwarding.
• IP statistics gathering and correlation. The IP Engine is responsible for gathering and accumulating statistics
across line cards, interfaces, and routers to generate the statistics called out in the various MIBs.
The IP Engine’s functions are realized by system controller (SC) and Interface Controller (IC) resident components.
An instance of the IP Engine runs on the SC for each instance of the router and IP Stack. An instance of an IP Agent
component runs on the line card for each IP Stack.
The IP Engine Architecture Specification describes the overall architecture and introduces its components and their
interfaces for the ROUTER-EMULATION platform.
Architecture Overview
The IP Engine interacts with IP Agents on the card to configure, control, and get statistics from the forwarding and
IP Slow Path. On the SC, the IP Engine is modeled on the AR1 implementation. It uses a new component, the Table
Manager Handle Allocator, which allocates handles for DDMEM structures. The IP Engine allocates its own IP
handles.
IP Interface Management
IP interfaces are configured in the IP Stack using the CLI or SNMP. An IP interface cannot be configured unless its
lower layers are already configured – dangling interfaces are not allowed.
SC Interface Hierarchy
The IP Stack manages NVRAM and configures all IP interfaces with the IP Engine using the engine's addInterface()
method. Interfaces are identified internally by UIDs (of type IpInterface), which are assigned by the IP Stack when the
interface is first configured. If the interface is successfully added, its configuration, including the assigned UID, is
saved in NVRAM. The interface stack configuration is maintained by recording the lower layer interface's UID.
On IP init(), the application binds to all potential lower layer protocols and reads its NVRAM configuration. IP
instantiates local interface objects and, for each interface, "verifies" that the complete underlying interface stack is
in place on the SC before it requests the IP Engine to add the interface and send the interface configuration
information across to its IP Agent on the "local" line card. This verification is indicated by the successful return of
the getInterfaceLocation(lower UID) call made to the lower layer. Each layer invokes (through a lowerBinding
object) the getInterfaceLocation() method of its lower layer until, at the lowest protocol layer, the UID is mapped to
a physical location (InterfaceLocation), which in the AR1 is a slot/port pair. Note that
getInterfaceLocation() will return the physical location even if the emulator card is not currently installed (in which
case a separate status indicates "hardware not present"). Once an IP interface is verified in this manner, the IP Stack
calls the IP Engine's addInterface() method to assign global resources and install the interface on the line card.
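A minimal sketch of this bottom-up verification, assuming a simple layered object model (the class and field names are illustrative, not the actual IP Stack types):

from typing import Optional, Tuple

class Layer:
    def __init__(self, name: str, lower: Optional["Layer"] = None,
                 location: Optional[Tuple[int, int]] = None):
        self.name, self.lower, self.location = name, lower, location

    def get_interface_location(self) -> Tuple[int, int]:
        if self.lower is not None:            # delegate down the interface stack
            return self.lower.get_interface_location()
        if self.location is None:
            raise LookupError(self.name + ": dangling interface")
        return self.location                  # e.g. a (slot, port) pair on the AR1

# IP over PPP over a physical port in slot 2, port 1:
ip = Layer("ip", lower=Layer("ppp", lower=Layer("ether", location=(2, 1))))
print(ip.get_interface_location())            # (2, 1): safe to call addInterface()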
OSPF conformance test suite
Conformance testing is an important tool to verify how a Device Under Test (DUT) complies with specific protocol
standards. Conformance test tools perform their tests as a dialog: they send packets to the router being tested,
receive the packets sent in response, and then analyze the response to determine the next action to take. This
methodology allows conformance test tools to test complicated scenarios much more intelligently and flexibly than
is achievable with simple packet generation and capture devices. Conformance testing also includes negative test
cases to help validate device response to killer packets.
All OSPF common routing protocol behaviors should be verified including:
• Adjacency Establishment
• Adjacency Maintenance
• Adjacency Deletion
• Designated Router Election
• Database Synchronization
• Preferred Path (or route) Hierarchical Routing
• Master/Slave during Database Exchange
Sr. No  Routing Protocol Behavior to be Tested  RFC Compliance  Tested  Evolving Standard
1 Adjacency Establishment RFC 2328
2 Adjacency Maintenance RFC 2328
3 Adjacency Deletion RFC 2328
4 Designated Router Election RFC 2328
5 Database Synchronization RFC 2328
6 Preferred Path (or route) Hierarchical Routing RFC 2328
7 Interface and Neighbor states Verification RFC 2328
8 Master/Slave during Database Exchange RFC 2328
9 Virtual links RFC 2328
10 Summary (Type 3) RFC 2328
11 Summary (Type 4) RFC 2328
12 External RFC 2328
13 Type 9-11 RFC 2328
14 Opaque RFC 2370
15 Verification of LSA fields RFC 2328
16 LS Options RFC 2328
17 Link type RFC 2328
18 Basic Metrics RFC 2328
19 LS Type RFC 2328
20 Link State ID RFC 2328
21 Advertising Router RFC 2328
22 Types of Routers RFC 2328
23 Internal Routers RFC 2328
24 Area Border Routers RFC 2328
25 Backbone Routers RFC 2328
26 AS Boundary Routers RFC 2328
27 Network Types RFC 2328
28 Point-to-point RFC 2328
29 Broadcast RFC 2328

Compliance with IETF Standards
OSPF Version 2 according to RFC 2328
OSPF for IPv6 according to RFC 2740
The OSPF Opaque LSA Option according to RFC 2370
Traffic Engineering Extensions to OSPF Version 2 according to draft-katz-yeung-ospf-traffic-10.txt
OSPF Restart Signaling draft-nguyen-ospf-restart-04.txt
OSPF Link-local Signaling draft-nguyen-ospf-lls-04.txt
OSPF Out-of-band LSDB resynchronization draft-nguyen-ospf-oob-resync-04.txt
OSPF NSSA RFC 3101
OSPF Graceful Restart RFC 3623
Note:- The certifying institute is to work out the details. We have to instruct the institute/agency to work on an
automated OSPF conformance test suite; if they are not able to carry this out, then we can help them. Scripted test
cases can be selected based on the test gear available at their campus (e.g., IIT Delhi/IISc Bangalore/xyz...). In our
case the CRP running in the emulator should be treated as the device under test, which has to be validated; after
porting to the BEL router, second-level (II-Level) automated validation has to be done.
Testing suite
Our OSPF Emulation Software is a scalable and flexible solution for testing routers or router systems for OSPF
functionality, capacity, and performance. Both OSPFv2 for IPv4 and OSPFv3 for IPv6 networks are supported.
OSPF networks can be simulated using a grid topology, making it easy to create complicated scenarios. Network
ranges can be defined using broadcast or point-to-point link types. With OSPFv2, traffic engineering can be enabled
on a per-router or per-link basis. Learned LSAs can be displayed on each interface, making it easy to verify the
router's Link State Database (LSDB). In addition, interface state descriptions can be viewed on a diagnostic port
trace to aid in debugging.
OSPF Emulation software is extremely scalable. By using multiple test ports, large network configurations can be
set up with a single emulation system. By emulating realistic configurations on our emulator through test gear we
can perform real-world stress testing of routers and networks. Multiple routing protocol emulations can be run
simultaneously on each port, and wire-speed traffic can be generated to test the data and control planes.
Emulation Flexibility
A test gear provides several configuration modes for OSPF emulation testing. The first is the router paradigm mode,
where minimal OSPF knowledge is required. Routers, interfaces, and routes are created based on IP addresses,
network masks, network type, Area ID, and route origins. This information is then used to construct and flood the
appropriate Link State Advertisements (LSAs) on the respective adjacencies. Each emulated router is capable of
advertising thousands of intra-area and external routes. The second mode, user LSAs, allows users to directly
configure LSAs. Tips, pull-downs, and checkboxes are used to facilitate data entry. LSAs can originate from one
user-defined LSA group or be segmented based on routers, areas, or test scenarios. Tcl scripts can also be used to
quickly generate thousands of user-defined LSA packets. The third mode, stream encode, provides users with further
flexibility in manipulating an existing configuration or conducting negative testing.
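The emulator's scripting interface is Tcl; purely as an illustration of bulk user-LSA generation, the Python sketch below emits a list of Type-5 (AS-External) LSA parameter sets, one per /24 prefix. The field names are hypothetical placeholders, not the tool's actual schema.

import ipaddress

def external_lsa_configs(base, count, adv_router):
    subnets = ipaddress.ip_network(base).subnets(new_prefix=24)
    for _, prefix in zip(range(count), subnets):
        yield {
            "ls_type": 5,                                   # AS-External LSA
            "link_state_id": str(prefix.network_address),
            "advertising_router": adv_router,
            "metric": 20,                                   # a common external default
        }

lsas = list(external_lsa_configs("10.0.0.0/8", 5000, "1.1.1.1"))
print(len(lsas), lsas[0])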
Scalability
Thousands of OSPF sessions and millions of routes can be supported on a single card with per-port CPU load
modules. Each emulator test interface can emulate a router grid. In addition, links and routes can be flapped to
dynamically assess how the router behaves under adverse network conditions.
Multi-Protocol Support
Emulation software can be run simultaneously on one or many ports in conjunction with line-rate traffic to simulate
realistic network scenarios. OSPF Traffic Engineering LSAs can be injected into a System Under Test (SUT) to
create realistic configurations (OSPFv2 only). In addition, custom TLV encoding can be used to create various
scenarios.
Integrated Testing
Software exposes many of the packet parameters so that complex IP packets can be created. Traffic streams to
advertised routes can be created automatically with an automated stream generation tool. Traffic statistics can be
logged and graphed. In addition, OSPF encodes are available to facilitate negative state and session testing.
Objective:- Verify the Device Under Test's (DUT's) compliance with the capabilities defined in the following
OSPF RFCs: OSPFv2 (RFC 1583, RFC 2328), OSPF Opaque LSA (RFC 2370), OSPF NSSA (RFC 1587), OSPF
Database Overflow (RFC 1765), and OSPFv3 (OSPF for IPv6, RFC 2740).
Setup:- A minimum of two network connections are required from the test tool to the DUT: one for request packets
and one for response packets. The conformance test solution is run from a Linux workstation connected either
directly to the DUT or via test hardware as shown below. The test card emulates various OSPF topologies depending
on the configuration of each test case.
Input Parameters:- Two sets of parameters are required prior to running conformance tests: one for test tool
configuration and one for DUT configuration. The test tool configuration describes the interface and protocol
configuration of the tester, while the DUT configuration describes the OSPF features of the DUT using Expect
scripts.
Parameter – Description
Test Tool Configuration – Tester test IP addresses, DUT IP address, OSPF protocol parameters (Hello interval, router priority, authentication, etc.)
DUT Configuration – OSPF features (TOS Routing, Database Exchange Timeout, Routing Table Update Timeout, etc.), via Expect scripts
Methodology:- Conformance testing proceeds as the dialog described above: the tool sends packets to the router
being tested, receives the packets sent in response, and analyzes the response to determine the next action to take.
For OSPF conformance testing, a number of test cases are run against the DUT based on the direct interpretation of
the various OSPF RFCs.
1. Enter parameters to describe both the conformance tester and DUT configuration.
2. Run the conformance tests from the user interface or in batch mode via command scripts, reconfiguring the DUT
as required between test cases to match the test setup.
Results:- Number of tests passed/failed, including reasons for failed cases. The testing emulator GUI can keep a
history of each passed or failed test case.
OSPF Route Capacity Test
Objective:- Determines the number of routes that an OSPF DUT can sustain at a single time. This scalability test is
designed to help network and test engineers evaluate devices to be purchased or used in the network, and to test
capacity and understand network limitations before actual deployment of new network elements and services.
Setup:- The test requires two tester ports, one to transmit traffic and one to receive; traffic is unidirectional. Test
port 2 is used to advertise the OSPF network topology and routes, while test port 1 sends traffic to verify the
advertised topology. During the test, tester port 2 gradually increases the number of advertised routes until the
maximum sustainable route capacity can be determined. A script application can be used to configure, control, and
execute this test; the script also provides comprehensive test results showing frame loss percentage based on the
ability to forward at maximum route capacity.
Parameter – Description
Max Rate – Rate at which frames will be sent to the advertised routes
Tolerance – Percentage of traffic loss tolerated
Route Step – Number of routes to add per iteration
Number of Routes – Number of prefixes to generate at the beginning of the test
Advertise Delay Per Route – Maximum time, in seconds, the router is allowed to absorb each advertised route; multiplied by the number of routes to calculate the "Max Wait Time"
Methodology:-
1. Test port 2 advertises the initial number of routes defined by the "Number of Routes" parameter.
2. After the "Max Wait Time" has passed (determined by "Advertise Delay Per Route"), test port 1 sends traffic
targeting each advertised route behind port 2. The traffic throughput rate is set by the "Max Rate" parameter.
3. Test port 2 verifies that packets are received within the defined loss "Tolerance".
4. Test port 2 advertises more routes, increased by the amount defined by "Route Step".
5. Repeat steps 2 through 4 until port 2 receives no packets or packet loss is above the Tolerance level.
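A sketch of this iteration loop, assuming a hypothetical tester API (advertise_routes, send_traffic, loss_percent and wait are illustrative names, not a real tool's interface):

def route_capacity(tester, number_of_routes, route_step,
                   advertise_delay_per_route, max_rate, tolerance):
    routes, max_verified = number_of_routes, 0
    while True:
        tester.port2.advertise_routes(routes)
        # "Max Wait Time" = Advertise Delay Per Route x number of routes
        tester.wait(advertise_delay_per_route * routes)
        tester.port1.send_traffic(targets=routes, rate=max_rate)
        if tester.port2.loss_percent() > tolerance:
            return max_verified          # last sustainable route count
        max_verified = routes
        routes += route_step             # advertise more routes and repeat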
Results:- When the test completes and the tolerance has been exceeded, the test results show the maximum
number of routes learned by the DUT. The results can be broken down per frame size and show the resulting
numbers for "max routes verified", "total loss percentage" and "tolerance". From the "Max Routes Verified" value
we know the maximum number of routes that could be sustained at that particular traffic rate and frame size.
This test can also be executed manually, but automation with scripting helps simplify and speed up the testing
process.
OSPF Route Convergence Test
Objective:- Verifies the ability of a router to switch between preferred and less-preferred routes when the preferred
routes are withdrawn and re-advertised. The test calculates convergence by averaging the convergence latency over
multiple topological changes.
Setup:- This test uses three test ports: one to transmit and two to receive. Both receive ports emulate OSPF
networks; traffic is unidirectional. The DUT must have three ports in use, with two enabled for OSPF. All three
ports should be configured for IP and have unique subnets in which to communicate with the tester ports. A
scripting application can be used to configure, control, and execute this test.
Parameter – Description
Max Rate – Rate at which frames are transmitted, as a percentage of the maximum theoretical frame rate
Number of Routes – Number of prefixes to generate at the start of the test
Advertised Delay Per Route – Maximum time, in seconds, to allow the router to absorb each route; multiplied by the number of routes to calculate the "Max Wait Time", the amount of time the test waits for the entire topology to stabilize
Methodology:- This methodology can be executed manually or by script. The key to determining an accurate
convergence time is understanding the DUT's capabilities and properly manipulating the test parameters.
1. Test ports 1 and 2 advertise the same OSPF topology and routes with different metrics. The path via port 1 is the
preferred route; the path via port 2 is the alternate route.
2. After the "Max Wait Time", the Tx port sends traffic targeting all advertised routes. The DUT should route the
traffic via the preferred routes to test port 1.
3. Routes are withdrawn from test port 1 (the preferred path). Traffic should reroute to arrive at test port 2 (the
alternate path).
4. Measure the timestamp T1 of the last packet targeting a specific route delivered on the preferred path.
5. Measure the timestamp T2 of the first packet targeting the same route arriving via the alternate path.
6. Calculate the convergence time for that route as T2 – T1.
7. Repeat steps 4 through 6 to obtain convergence times for all withdrawn routes, then calculate the average
convergence over all routes.
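Assuming per-route packet timestamps are captured on both receive ports, the per-route and average convergence computation reduces to the following sketch (the timestamp values are illustrative):

def convergence_times(last_on_preferred, first_on_alternate):
    """Per-route convergence = T2 - T1; inputs map route -> timestamp in seconds."""
    return {route: first_on_alternate[route] - last_on_preferred[route]
            for route in last_on_preferred if route in first_on_alternate}

t1 = {"10.0.1.0/24": 12.400, "10.0.2.0/24": 12.401}    # last packet on port 1
t2 = {"10.0.1.0/24": 12.950, "10.0.2.0/24": 13.010}    # first packet on port 2
per_route = convergence_times(t1, t2)                   # 0.550 s and 0.609 s
print(sum(per_route.values()) / len(per_route))         # average convergence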
Results:- The test results provide an average convergence time for all routes. The figure displays example results
for the automated OSPF convergence test in scripts. In addition to convergence time, this test also reports the
number of packets lost due to the convergence.
Scalability Test
Objective:- This test builds an OSPF topology and tests the DUT's capability to learn intra-area LSAs. A given
number of LSAs are generated, and traffic is sent to all advertised routes to verify them.
Setup:- The test requires at least two test ports – one to transmit and one to emulate and advertise the OSPF
intra-area topology as shown above. The OSPF tester port must be able to generate OSPF LSAs to construct
topological databases. Tester ports can be added on the Rx side to scale up the OSPF database. OSPF Routing
Protocol Emulation can be used to run this test.
Parameter – Description
Traffic rate – Rate at which traffic is sent to the destination routes
Number of ports – Number of Tx (traffic) and Rx (OSPF) ports
Number of routes – The number of routes depends on the number of emulated routers
Number of routers – The number of emulated routers dictates the number of routes, depending on whether the configuration is broadcast or point-to-point
Methodology:-
1. Configure at least two test ports – one to transmit and one to receive for OSPF.
2. The OSPF port(s) advertise Type 1 router LSAs. The LSAs mesh together to create a logical topology.
3. Verify that all OSPF neighbors on each port are in Full state on the DUT.
4. Confirm the DUT has learned all LSAs and can effectively forward traffic to all destinations within the topology.
Transmit traffic from the Tx test port to accomplish the verification; packets are counted on the Rx ports and
analyzed for missing frames.
5. If route verification is successful, the test can be scaled by adding physical ports, additional emulated routers per
port, or more LSAs per router. Traffic rates can be increased for forwarding performance measurements.
6. Continue to add ports and LSAs until the DUT can no longer forward to all destinations successfully.
Results:- A receive statistic for each packet sent to every route advertised by OSPF. This tells the tester that the
device was able to populate the forwarding table and is capable of sending traffic to each route while sustaining the
desired topology. This test can serve as the basis for building larger tests with similar parameters. Rates can be
monitored with color-coding, and specifics such as latency, data integrity, and sequence checking are available as well.
OSPF Equal Cost Path Verification Test
Objective:- This test confirms OSPF load-balancing behavior, given four equal-cost paths to the same destination.
Setup:- This test requires a minimum of three ports: one to transmit and two to receive, representing four routers
with equal-cost paths to the same destination prefix, as shown. The two OSPF Rx ports each advertise two OSPF
neighbors, each with the same route advertisement. OSPF Routing Protocol Emulation can be used to run this test.
Parameter – Description
Traffic rate – Rate at which traffic is sent to the destination network
Number of ports – Number of Tx (traffic) and Rx (OSPF) ports
Number of routes – The number of routes can be increased so that load balancing takes place over several destinations
Number of routers per port – The number of emulated routers per physical port can be varied
Methodology:-
1. Establish the number of test ports needed to advertise the number of OSPF adjacencies required.
2. Advertise LSAs from each peer for the same route. Each emulated router advertises one route path with the same
metric.
3. Confirm that the DUT has reached full state with each OSPF router and verify equal cost paths exist and are all in
the forwarding table.
4. Run continuous traffic to destination IP addresses in the advertised networks.
5. Increase the number of ports or adjacencies per port with the same route.
Results:- The test reports a rate metric for each destination port.
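A simple pass/fail check for load balancing could compare each path's receive count against the ideal 1/N share; the sketch below uses hypothetical counter values and a 10% tolerance.

def ecmp_balanced(rx_counts, tolerance=0.10):
    expected = sum(rx_counts.values()) / len(rx_counts)  # ideal 1/N share
    return all(abs(n - expected) / expected <= tolerance
               for n in rx_counts.values())

rx = {"path1": 249_100, "path2": 251_300, "path3": 250_000, "path4": 249_600}
print(ecmp_balanced(rx))   # True: traffic is spread within 10% of the ideal share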
The emulator-incorporated test bed will be tested under various configurations and scenarios based on the available
test gear. The protocol derivatives and extensions listed below will be tested.
OSPFv2 LSAs Supported – Router, Network, Summary, Summary Type 4, AS-External, Traffic Engineering Opaque LSAs, NSSA
OSPFv3 LSAs Supported – Router, Network, Inter-Area-Prefix, Inter-Area-Router, AS-External, Link, Intra-Area-Prefix
Router Types Supported – Intra-Area Routers, Area Border Routers, and Autonomous System Boundary Routers
Adjacency Timers – Hello and Dead intervals configurable
Area ID – Configurable in decimal or IP notation format
Network Types Supported – Point-to-Point and Broadcast
Messages Supported – Hello, Database Description, Link State Request, Link State Update, Link State Acknowledgement
Authentication – MD5, Password
Protocol Statistics – OSPF sessions configured, OSPF neighbors in Full state
Learned LSA Functionality (OSPFv2 only) – Summarized list of LSA headers learned per interface, including Link State ID, Advertising Router, Link Type, Sequence Number, and LSA age. The list can be filtered by Link State ID, Advertising Router, and LSA type
Learned LSA Statistics (OSPFv2 only) – Total count of LSAs learned, as well as counts per LSA type
Graceful Restart – Enable or disable emulated OSPF routers acting as Helper during the graceful restart process
DR/BDR – Enable or disable the DR/BDR election process
Statistics – Sessions Configured, Full Neighbors, Hellos TX/RX, DBD TX/RX, LS Request TX/RX, LS Update TX/RX, LS Ack TX/RX, Link State Advertisement TX/RX, Router LSA TX/RX, Network LSA TX/RX, Summary IP LSA TX/RX, Summary AS LSA TX/RX, External LSA TX/RX, NSSA LSA TX/RX, Opaque Local LSA TX/RX, Opaque Area LSA TX/RX, Opaque Domain LSA TX/RX

OSPF Graceful Restart
If the control and forwarding functions in a router can be separated, it is possible to keep the router's data
forwarding capability intact while the router's control software is restarted/reloaded. This functionality is termed
"Graceful Restart" or "Non-Stop Forwarding". The router's control software (the routing protocols and the
signalling protocols) can stop and restart for reasons such as:
a software error crashing the protocol task,
a switchover to the redundant control card, or
a planned shutdown as part of operational maintenance.
The idea behind graceful restart is to continue forwarding packets based on the snapshot of the FIB taken just
before the router restarted. The router can continue forwarding even while its routing process is inactive, at least
for a while.
Current routers have separate routing and forwarding paths: routing in software (CPU), forwarding in hardware
(switching). This separation creates the possibility of maintaining a router's data forwarding capability while the
router's control software is restarted/reloaded.
OSPF GR Working
The restarting OSPF router originates a Grace-LSA (a link-local Opaque LSA) specifying the "grace period",
thereby indicating to its neighbors the time, in seconds, for which the neighbors (the helpers) should continue to
consider this router fully adjacent. The helping neighbors enter a state known as helper mode during this period.
The onus falls on the helpers to detect a topological change during the grace period and act accordingly.
In case of a planned restart, OSPF issues a Grace-LSA to its neighbors on each restarting interface and sets the
value 1 (Software Restart) in the Graceful Restart Reason TLV. In case of an unplanned outage, the router issues a
Grace-LSA before sending out any Hellos. Most implementations transmit the Grace-LSAs multiple times, until an
acknowledgement is heard from the neighboring routers.
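For reference, a hedged sketch of how a Grace-LSA body could be encoded per RFC 3623 (TLVs in the RFC 3630 format: 2-byte type, 2-byte length, value padded to 4-byte alignment; type 1 carries the grace period in seconds, type 2 the restart reason, type 3 the interface address):

import socket
import struct

def tlv(t, value):
    pad = (-len(value)) % 4                     # pad value to 4-byte alignment
    return struct.pack("!HH", t, len(value)) + value + b"\x00" * pad

def grace_lsa_body(grace_period_s, reason, if_addr):
    return (tlv(1, struct.pack("!I", grace_period_s)) +   # grace period in seconds
            tlv(2, struct.pack("!B", reason)) +           # 1 = software restart
            tlv(3, socket.inet_aton(if_addr)))            # IP interface address

body = grace_lsa_body(120, 1, "192.0.2.1")
print(body.hex())   # carried in a type-9 (link-local) Opaque LSA, opaque type 3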
The helping routers continue advertising the restarting router in their LSAs, and other routers in the network never
come to know of the event.
Using standard OSPF procedures, the helping routers establish adjacencies with the restarting router and
synchronize their LSDBs. During the grace period, the restarting router receives its own self-generated pre-restart
LSAs. It accepts them as valid and does not originate type 1 through 5 or type 7 LSAs, even after it transitions to
the FULL state. The restarting router can run SPF, but it is not yet allowed to update the FIB.
Once the restarting router and its helpers have synchronized their databases within the grace period, the former
flushes its Grace-LSAs to signal successful completion of the graceful restart procedure. The restarting router then
reoriginates its router LSAs in all attached areas and the network LSAs on the segments where it is the DR. It then
schedules a full SPF run, calculates the routes, and updates the FIB.
The restarting router had marked all the routes in the FIB as stale before sending out the Grace-LSAs. After
graceful restart is over and it has recalculated the routes, it deletes all the routes still marked stale in the FIB. It can
then reoriginate summary LSAs, type 7 LSAs and AS-External LSAs as appropriate.
When the helpers receive the flushed Grace-LSAs, they exit helper mode and revert to normal OSPF procedures.
OSPF automatically falls back from graceful restart to a standard OSPF restart if topological changes are detected
or if one or more of the restarting router's neighbors do not support graceful restart.
Technical specifications
Category: Simulation setup and initial assumptions
Design considerations: Network area, number of nodes, mobility models, node distribution, traffic model, transmission range, bidirectional communication, capture effect, simulation type (terminating vs. steady state), protocol stack model, RF propagation model and proper variable definitions.
Recommendations: Most of these issues can be addressed by proper documentation. Try to tune parameter settings against an actual implementation if possible, or improve the abstraction level of the models used.

Category: Simulation execution
Design considerations: Protocol model validation; scenario initialization; empty caches, queues and tables; and proper statistics collection.
Recommendations: Validate protocol models against analytical models or protocol specifications. Determine the number of independent runs required. Properly seed and manage random number generators. Collect data only after discarding transient values, or eliminate transients by properly preloading routing caches, queues and tables.

Category: Output analysis
Design considerations: Single set of data, statistical analysis, autocorrelation, averages, aggregation, mean and variance, confidence level.
Recommendations: Each experiment should be run a minimum number of times. Analysis should be based on sound mathematical principles. Provide a proper confidence interval for each experiment.

Category: Modeling behavioral differences
Design considerations: How are we going to identify behavioral differences between simulators? How do we justify them? Which simulator comes closest to reality?
Recommendations: Capture the difference factors. Determine whether identical implementation of algorithms across simulators is possible. Compare simulators against real-time test beds, with traffic passing in the background and traffic passing through links.

Category: Real packet processing times
Design considerations: The emulation capability in the network simulation should allow real packet processing times.
Backbone area: When designing a network with more than one area, one area should be considered the backbone area.
Stub areas: Stub areas have restrictions on receiving routes from outside the autonomous system; they receive only
routes from within the autonomous system. Stub areas are physically connected to the backbone area. Features of
stub areas: external LSAs cannot be flooded into them, and a default route is defined into the stub area.
Totally stubby areas: A totally stubby area is physically connected to the backbone area. It receives only a default
route from an external area, which must be the backbone area, and communicates with other networks via that
default route. Totally stubby areas allow neither inter-area routes nor external routes; only the default route is
allowed in, as a summary route.
Totally Not-So-Stubby Area (Totally NSSA): Summary LSAs are not allowed; a default route is injected as a
summary route; external LSAs are not allowed.
OSPF Cost: The path cost of an interface in OSPF is called the metric, and it reflects a standard value such as link
speed. The cost of an interface is calculated on the basis of bandwidth: cost is inversely proportional to bandwidth,
so higher bandwidth yields a lower cost.
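A common convention (e.g., the default used by several commercial implementations) divides a reference bandwidth by the interface bandwidth and floors the result at 1; the 100 Mbps reference below is an assumption and can be made tunable in the simulator.

def ospf_cost(bandwidth_bps, reference_bps=100_000_000):
    # Interface cost = reference bandwidth / interface bandwidth, floored at 1.
    return max(1, reference_bps // bandwidth_bps)

print(ospf_cost(10_000_000))     # 10 Mbps link -> cost 10
print(ospf_cost(1_000_000_000))  # 1 Gbps link  -> cost 1 (floored)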
Backbone routers: Routers that have one or more interfaces in the backbone area are called backbone routers.
Backbone routers may be ABRs or internal routers of the backbone.
Area border routers: A router that connects multiple areas is called an ABR. ABRs are used to link non-backbone
areas to the backbone. Summary link advertisements are generated by ABRs.
Autonomous System boundary routers: A router that belongs to an OSPF area and has a connection to another
autonomous system (an external routing domain) is called an ASBR. It acts like a gateway.
Designated routers: The router to which all other routers within an area send Link State Advertisements is called
the DR. The designated router collects Link State Updates and reliably floods LSAs to the rest of the network.
Every OSPF broadcast network can have a DR and a Backup Designated Router (BDR); the router with the highest
priority in the OSPF area becomes the DR.
Route summarization: Multiple routes are summarized into one single route. This summarization is done by ABRs
and merges a list of multiple routes into one route. Advantages: it reduces routing table size and network overhead;
the main purpose of summarization is to reduce bandwidth consumption and processing time.
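As a quick worked example, Python's standard ipaddress module can demonstrate the effect: four contiguous /24 routes collapse into a single /22 summary (the prefixes are illustrative).

import ipaddress

routes = [ipaddress.ip_network(p) for p in
          ("10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24")]

# collapse_addresses merges contiguous networks into the fewest covering prefixes.
print(list(ipaddress.collapse_addresses(routes)))   # [IPv4Network('10.1.0.0/22')]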
Inter-area route summarization: Inter-area route summarization is done by ABRs and applies to routes from within
the autonomous system. It does not apply to external routes injected into OSPF through redistribution.
External route summarization: External route summarization applies to external routes injected into OSPF via
redistribution. The external address ranges being summarized must be contiguous.
Link State Advertisement (LSA):
Router Link (RL): Router links are generated by all routers. A router link describes the state of the router's
interfaces in each area to which it belongs. These LSAs are flooded within the router's area. Through LSA flooding,
all devices in the network gain topological awareness, and every router generates a new routing table by running
the Dijkstra algorithm.
Network Link (NL): The Designated Router (DR) is responsible for generating network links, which describe the
set of routers connected to a particular network. Network links are flooded within the area.
Summary Links (SL): Summary links are generated by ABRs and define inter-area routes. Summary links list the
other areas of the network belonging to the autonomous system. ABRs inject summary links from the backbone
into other areas and from other areas into the backbone; the backbone aggregates addresses between areas via
summary links.
External Links (EL): ASBRs inject external routes via redistribution into the autonomous system.
Packet delay
Scenario Name – Packet Delay (ms)
EIGRP_OSPF – xxxx ms
EIGRP – xxxx ms
OSPF – xxxx ms
Remarks: Average value of packet delay variation (ms); delay variation is measured as the difference in the delay of the packets.

End-to-end delay
Scenario Name – End-to-End Delay (ms)
EIGRP_OSPF – x.xx
OSPF – x.xx
EIGRP – x.xx
Remarks: End-to-end delay refers to the time taken to transmit a packet through the network from source to destination.

Video traffic
Scenario Name – Sent (bytes/sec) – Received (bytes/sec)
EIGRP_OSPF – xxx – xxx
EIGRP – xxx – xxx
OSPF – xxx – xxx
Remarks: Average value of sent and received (bytes/sec) for video traffic. Derived after simulation.

Voice conferencing (jitter)
Scenario Name – Jitter (sec)
EIGRP_OSPF – xxx
OSPF – xxx
EIGRP – xxx
Remarks: Average value of jitter for voice conferencing (sec). Derived after simulation.

Voice traffic
Scenario Name – Sent (bytes/sec) – Received (bytes/sec) – Packet Loss
OSPF – xxx – xxx
EIGRP_OSPF – xxx – xxx
EIGRP – xxx – xxx
Remarks: Voice traffic sent and received (bytes/sec). Derived after simulation.

Throughput
Scenario Name – Throughput (bits/sec)
EIGRP_OSPF – xxx
OSPF – xxx
EIGRP – xxx
Remarks: Throughput is a key parameter determining the rate at which data packets are successfully delivered through the channel in the network.
Open source routing software suites:
Quagga (http://www.quagga.net) is an open source routing software suite that supports IPv4 as well as IPv6 and
provides implementations of OSPFv2 and OSPFv3. The Quagga Routing Software Suite is provided under the
GNU General Public License (GPL).
XORP, the eXtensible Open Router Platform (http://www.xorp.org), is free, covered by a BSD-style license, and
publicly available for research, development, and use.
The BIRD (BIRD Internet Routing Daemon) project (http://bird.network.cz/) aims to develop a fully functional
dynamic IP routing daemon primarily targeted at UNIX-like systems. BIRD supports both IPv4 and IPv6, OSPF
(IPv4 only) and static routes. BIRD is distributed under the GNU General Public License.
Vyatta (http://www.vyatta.com) provides an open source router called the Vyatta OFR (Open, Flexible Router)
based on XORP. The OFR supports the same IPv4 unicast routing protocols as XORP, including OSPFv2. Different
components carry licenses defined as open source licenses, but the exact license type varies among the components.
Remarks: These implementations set the kernel to do IP forwarding, and we may have to do this manually. There is
no support for creating GRE tunnels through their CLIs; if required, we will have to work on establishing them.
Emulation behavior: Integration with test beds and virtual machines (emulation modes). Recommendation:
interconnect the test bed with the simulator stack.
Emulation: We need to employ high-speed emulation processors and wired client-server links, and to capture the
overall impact of emulation on the end-to-end network delay. Recommendation: measure the simulation lag time
(on a 1.7 GHz PC). As the emulation scales up, there will be an impact on simulation lag time, so the emulation's
validity decreases as the simulation lag time increases.
OPNET emulator: Software design of the network emulator. The OPNET emulator can be implemented as a
Microsoft Foundation Class (MFC) Dynamic Link Library (DLL), written in Microsoft Visual C++. The Windows
operating system dynamically links this DLL with the OPNET simulator to create a multi-threaded process.
Recommendation: applicable only if OPNET is chosen as the simulator with emulation support.
OSPF security perspective: Outsider attacks and insider attacks; vulnerabilities and protection of the OSPF
protocol. Recommendation: keyed MD5 can defend against outsider attacks.
Customized routing protocol porting: BEL router details are required. Recommendation: the porting document is to
be prepared after the router details are known.

Mechanism – Advantage – Disadvantage
Hardware-based failure detection – Failure discovery within tens of milliseconds – Not always available
Reduced Hello Interval – Can safely be reduced to the half-second range – Further reduction may lead to router overloads and false alarms
Bidirectional Forwarding Detection (BFD) – Protocol independent and lightweight; can be implemented in the line card's hardware/firmware; can be used in association with a reduced Hello Interval to significantly reduce the failure detection time – Cannot detect failures in the control plane
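A rough, assumption-laden comparison of the detection component for these mechanisms: hello-based detection waits for the dead interval (conventionally four times the hello interval), while BFD detects after its detect multiplier of missed packets at a millisecond-scale interval. The multipliers below follow those conventions but can be tuned.

def hello_detection_s(hello_interval_s, dead_interval_multiplier=4):
    # Hello-based detection: neighbor declared down after the dead interval.
    return hello_interval_s * dead_interval_multiplier

def bfd_detection_s(tx_interval_ms, detect_mult=3):
    # BFD: failure declared after detect_mult consecutive missed packets.
    return tx_interval_ms * detect_mult / 1000.0

print(hello_detection_s(10))    # default hellos: 40 s worst case
print(hello_detection_s(0.5))   # reduced hellos: 2 s
print(bfd_detection_s(50))      # BFD at 50 ms x 3: 0.15 s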
Topology Change Messages
Change messages inside an area:
Messages about a router: RTR UP, RTR DOWN, BECAME BORDER RTR, NO LONGER BORDER RTR, BECAME ASBR, NO LONGER ASBR
Messages about an interface on a router: INTF DOWN, INTF MASK CHANGE
Messages about an adjacency: ADJACENCY UP, ADJACENCY DOWN, ADJACENCY COST CHANGE, ALL ADJACENCIES DOWN
Messages about a host route on a router: STUB LINK UP, STUB LINK DOWN, STUB LINK COST CHANGE
Messages about the DR on a broadcast network: NEW DR, NO LONGER DR, DR CHANGE, MASK CHANGE
Change messages for remote areas:
Messages about a prefix in a remote area: TYPE-3 ROUTE ANNOUNCED, TYPE-3 ROUTE WITHDRAWN, TYPE-3 ROUTE COST CHANGE
Messages about an ASBR in a remote area: TYPE-4 ROUTE ANNOUNCED, TYPE-4 ROUTE WITHDRAWN, TYPE-4 ROUTE COST CHANGE
Change messages for external routes: TYPE-5 ROUTE ANNOUNCED, TYPE-5 ROUTE WITHDRAWN, TYPE-5 ROUTE COST CHANGE, TYPE-5 ROUTE COST_TYPE CHANGE, TYPE-5 ROUTE FORW_ADDR CHANGE
Flap messages: RTR FLAP, INTF FLAP, ADJACENCY FLAP, STUB LINK FLAP, TYPE-3/4/5 ROUTE FLAP
Messages related to anomalous behavior: NON-BORDER RTR YET TO WITHDRAW TYPE-3/4 ROUTES, NON-ASBR YET TO WITHDRAW TYPE-5 ROUTES, DUPLICATE ADJACENCY, DUPLICATE STUB LINK, TYPE-3 ROUTE FROM NON-BORDER RTR, TYPE-4 ROUTE FROM NON-BORDER RTR, TYPE-5 ROUTE FROM NON-ASBR
LSA storm messages: LSA STORM
Schedule

Commercial terms

Conclusion
A detailed technical proposal has been presented after the first level of interaction.
