Install All Required Components on a Linux Server

A comprehensive guide for manually installing the NDTwin system in a native Linux environment. This section covers system dependencies, building the NDTwin Kernel with Ninja, configuring Python environments, and running the full system.

Native-Linux Execution Environment Setup

This guide provides step-by-step instructions to manually build the Network Digital Twin (NDTwin) environment on a native Linux machine.

1. System Requirements

The system has been verified on the following configuration:

  • OS: Ubuntu 20.04 LTS or higher (Verified on Ubuntu 24.04.3 LTS).
  • Kernel: Generic Linux Kernel (x86_64).
  • Permissions: Root access (sudo) is required for Mininet and Open vSwitch operations.

2. Python Environment Setup (for Ryu)

The system uses two separate Python environments to avoid version conflicts: Ryu requires Python 3.8 because of its pinned library dependencies, while other components may use newer versions.

Prerequisite: Ensure Miniconda or Anaconda is installed.

Step 2.1: Create the Ryu Conda Environment (ryu-env)

This environment runs the SDN controller.

conda create -n ryu-env python=3.8 -y
conda activate ryu-env
python --version   # should be Python 3.8.x
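If you script against this environment, a small guard can catch an accidentally activated interpreter before Ryu's pinned dependencies start failing. This is a minimal sketch; the version tuple is the one the pins in Step 2.3 target:

```python
# Guard for scripts that must run under ryu-env: Ryu's pinned
# dependencies (eventlet 0.30.x, greenlet<3) target Python 3.8.
import sys

def running_supported_python(required=(3, 8)):
    """True when the active interpreter matches the required (major, minor)."""
    return sys.version_info[:2] == tuple(required)

if not running_supported_python():
    print(f"Warning: expected Python 3.8.x, got {sys.version.split()[0]}")
```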

Conda TOS Notice (required once): If this is your first time using the Anaconda default channels (pkgs/main, pkgs/r), you must accept the Terms of Service before creating the environment:

conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r

Step 2.2: Install System Build Dependencies

sudo apt update
sudo apt install -y build-essential python3-dev libssl-dev libffi-dev libxml2-dev libxslt1-dev

Step 2.3: Install Ryu + Compatible Python Libraries

  1. Upgrade pip / setuptools / wheel (compatible versions)
pip install --upgrade "pip<24" "setuptools<68" wheel
  2. Install Ryu (disable PEP 517)
pip install ryu --no-use-pep517
  3. Pin required libraries
# Eventlet must be <0.33 (0.30–0.31 works)
pip install eventlet==0.30.2

# Greenlet <3
pip install "greenlet<3"

# dnspython <2.3
pip install "dnspython<2.3"
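After pinning, the installed versions can be sanity-checked from Python before launching ryu-manager. A minimal sketch, to be run inside ryu-env; the parse helper is deliberately crude, and the thresholds mirror the pins above:

```python
# Sanity-check the pinned library versions inside ryu-env.
import importlib.metadata as md  # stdlib in Python 3.8+

def parse(v):
    # (major, minor) tuple; enough for these simple upper-bound checks
    return tuple(int(p) for p in v.split(".")[:2])

checks = {
    "eventlet": lambda v: parse(v) < (0, 33),
    "greenlet": lambda v: parse(v) < (3, 0),
    "dnspython": lambda v: parse(v) < (2, 3),
}

for pkg, within_pin in checks.items():
    try:
        version = md.version(pkg)
        print(pkg, version, "OK" if within_pin(version) else "TOO NEW")
    except md.PackageNotFoundError:
        print(pkg, "MISSING")
```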

Step 2.4: Verify Installation

pip list | grep -E "eventlet|greenlet|dnspython|ryu"

# Expected output:
# dnspython       1.16.0
# eventlet        0.30.2
# greenlet        2.0.2
# ryu             4.34

Step 2.5: Test Ryu

ryu-manager ryu.app.simple_switch_13


Step 2.6: Prepare the Customized Ryu Controller App

This project uses a customized Ryu (OpenFlow 1.3) controller to:

  • Install all-destination IPv4 forwarding entries during startup (proactive routing bootstrap)
  • Support static topology mode (load topology from JSON)
  • Support dynamic discovery mode if the static file is missing (topology events + host learning via packet-in/ICMP)
  • Compute paths and push flow entries to each switch once the topology is ready
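The proactive bootstrap in the controller below works destination by destination: it runs a BFS from each destination host's edge switch and records, at every other switch, the output port that leads one hop closer to that destination. A toy sketch of the idea; the three-switch topology and port numbers here are illustrative only, not taken from the project:

```python
from collections import deque

# Toy switch graph: adjacency maps switch -> {neighbor: out_port}.
adj = {
    1: {2: 1, 3: 2},
    2: {1: 1, 3: 2},
    3: {1: 1, 2: 2},
}

def ports_toward(dst_switch, host_port):
    """BFS from the destination's edge switch; record, per switch,
    the port that forwards one hop closer to the destination."""
    out_port = {dst_switch: host_port}
    queue = deque([dst_switch])
    while queue:
        cur = queue.popleft()
        for nbr in adj[cur]:
            if nbr not in out_port:
                # nbr forwards toward the destination via its port back to cur
                out_port[nbr] = adj[nbr][cur]
                queue.append(nbr)
    return out_port

print(ports_toward(1, 0))  # switch 1 hosts the destination on port 0
```

In the real controller each (switch, out_port) pair becomes an OpenFlow entry matching ipv4_dst for that host.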
  1. Create the Ryu app file in your home directory (or any folder)
nano intelligent_router.py
  2. Paste the controller code. The full implementation of intelligent_router.py is shown below:
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, DEAD_DISPATCHER
from ryu.controller.handler import set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.topology import event, switches
from ryu.topology.api import get_switch, get_link
from ryu.lib.packet import packet, ethernet, ipv4, ether_types, arp, tcp, udp, icmp
import networkx as nx
from ryu.controller import dpset
import requests
import json
from ryu.app.wsgi import ControllerBase, WSGIApplication, route
from webob import Response
from time import time
import ipaddress
import hashlib
from pathlib import Path
import threading
import random
from ryu.lib import hub

# TODO: change this path to the location of your static topology file
static_topology_file_path = Path("/home/patty/Desktop/NDTwin-Kernel/setting/StaticNetworkTopologyMininet_10Switches.json")

RYU_SERVER_INSTANCE_NAME = "ndt_ryu_app"
switch_num = 10
detecting_time = 60
is_all_dst_biased = False
all_dst_ecmp_biased_factor = 1

is_mininet = True


def normalize_sort_key(v):
    if isinstance(v, str) and "." in v:
        try:
            return (1, ipaddress.IPv4Address(v))  # host IP
        except ValueError:
            return (2, v)  # fallback for non-IP strings
    elif isinstance(v, int):
        return (0, v)  # switch ID
    else:
        return (2, str(v))  # other types as string fallback


class IntelligentRyu(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
    _CONTEXTS = {
        "dpset": dpset.DPSet,
        "topology_api_app": switches.Switches,
        "wsgi": WSGIApplication, 
        "topology": event.EventHostRequest,
    }

    def __init__(self, *args, **kwargs):
        super(IntelligentRyu, self).__init__(*args, **kwargs)
        self.topology_api_app = kwargs["topology_api_app"]
        self.is_dynamically_detect_topo = False
        self.static_net = nx.DiGraph()
        self.dynamic_net = nx.DiGraph()
        self.switches = {}
        self.ip_to_mac = {}
        self.flow_stats_reply = {}  # dpid -> latest flow stats list

        wsgi = kwargs["wsgi"]
        wsgi.register(RyuServerController, {RYU_SERVER_INSTANCE_NAME: self})

        self.install_initial_openflow_entries_completed = False
        self.all_destination_paths = []
        

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Install table-miss flow entry
        datapath = ev.msg.datapath
        parser = datapath.ofproto_parser
        ofproto = datapath.ofproto

        self.logger.info(f"Datapath ID: {datapath.id}")

        match = parser.OFPMatch()
        actions = [
            parser.OFPActionOutput(ofproto.OFPP_CONTROLLER, ofproto.OFPCML_NO_BUFFER)
        ]
        self.add_flow(datapath, 0, match, actions)

    def add_flow(self, datapath, priority, match, actions):
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(
            datapath=datapath, priority=priority, match=match, instructions=inst
        )
        datapath.send_msg(mod)

    def safe_add_or_modify_flow(self, datapath, priority, match, actions):
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]

        # Try MODIFY_STRICT first
        mod = parser.OFPFlowMod(
            datapath=datapath,
            command=ofproto.OFPFC_MODIFY_STRICT,
            priority=priority,
            match=match,
            instructions=inst,
        )
        datapath.send_msg(mod)

        # Also try ADD — if MODIFY failed (no existing flow), ADD will succeed
        mod_add = parser.OFPFlowMod(
            datapath=datapath, priority=priority, match=match, instructions=inst
        )
        datapath.send_msg(mod_add)

    @set_ev_cls(event.EventSwitchEnter)
    def get_topology_data(self, ev):
        # ------ Update topology info ------
        self.logger.info("Topology update triggered")

        start = time()
        switch_list = []
        while time() - start < 20:
            switch_list = get_switch(self.topology_api_app, None)
            if switch_list:
                break
            hub.sleep(1)

        if not switch_list:
            self.logger.warning(
                "Switch list is empty after timeout — aborting topology update"
            )
            return

        self.logger.info("Complete get_switch")
        self.switches = {sw.dp.id: sw.dp for sw in switch_list}
        
        
        for sw in switch_list:
            if not self.dynamic_net.has_node(sw.dp.id):
                self.dynamic_net.add_node(sw.dp.id)

        links_list = get_link(self.topology_api_app, None)
        self.logger.info("Complete get_link")
        
        for link in links_list:
            src, dst = link.src.dpid, link.dst.dpid
            src_port, dst_port = link.src.port_no, link.dst.port_no
            # self.logger.info(f"Add edge ({src},{src_port}) -> ({dst},{dst_port})")
            # Add forward and reverse edges
            self.dynamic_net.add_edge(src, dst, port=src_port)
            self.dynamic_net.add_edge(dst, src, port=dst_port)

        # ------ Update switch is_up state ------
        dpid = ev.switch.dp.id
        api_url = f"http://localhost:8000/ndt/inform_switch_entered?dpid={dpid}"
        self.logger.info("Switch entered: %s", dpid)

        try:
            response = requests.get(api_url)
            self.logger.info(
                "Notified NDT (switch enter), status: %s", response.status_code
            )
        except Exception as e:
            self.logger.warning("Failed to notify NDT (switch enter): %s", str(e))

        # After connecting to all switches, try to read the static topology file first; if it does not exist, fall back to detecting the topology dynamically
        self.logger.info(f"len(self.switches) {len(self.switches)}")
        if len(self.switches) >= switch_num:
            if not self.install_initial_openflow_entries_completed:
                self.load_static_topology()
                
    @set_ev_cls(ofp_event.EventOFPStateChange,
                [CONFIG_DISPATCHER, MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change_handler(self, ev):
        dp = ev.datapath
        if ev.state == MAIN_DISPATCHER:
            if dp.id is None:
                return
            self.logger.info("Switch %016x connected (EventOFPStateChange)", dp.id)
            # ------ Update switch is_up state ------
            dpid = ev.datapath.id
            api_url = f"http://localhost:8000/ndt/inform_switch_entered?dpid={dpid}"
            # self.logger.info("Switch entered: %s", dpid)
            try:
                response = requests.get(api_url)
                self.logger.info(
                    "Notified NDT (switch enter), status: %s", response.status_code
                )
            except Exception as e:
                self.logger.warning("Failed to notify NDT (switch enter): %s", str(e))
        elif ev.state == DEAD_DISPATCHER:
            if dp.id is None:
                return
            self.logger.info("Switch %016x disconnected (EventOFPStateChange)", dp.id)
    
    def _dynamic_topology_worker(self):
        self.logger.info("No static topo file, falling back to dynamic detection. Waiting 60s...")
        hub.sleep(detecting_time)  # this will NOT block the main Ryu thread

        self.print_all_hosts(self.dynamic_net)
        try:
            self.install_all_pair_paths(self.dynamic_net)
            self.install_initial_openflow_entries_completed = True
            self.logger.info("Dynamic topology initialized, all-destination paths installed.")
        except Exception as e:
            self.logger.error(f"Dynamic topology init failed: {e}")
            
    def find_target_by_src_port(self, G, src_node, src_port_attr, attr_name="port"):
        for _, v, data in G.out_edges(src_node, data=True):
            if data.get(attr_name) == src_port_attr:
                return v
        return None
    
    def int_to_mac(self, n: int) -> str:
        if not (0 <= n < (1 << 48)):
            raise ValueError("MAC int must be in [0, 2^48)")
        return ":".join(f"{(n >> (8*i)) & 0xff:02x}" for i in reversed(range(6)))

            
    def load_static_topology(self, path: Path = static_topology_file_path):
        if not path.exists():
            self.logger.info(f"Static topology file not found: {path}")
            self.is_dynamically_detect_topo = True
            self.logger.info(f"self.is_dynamically_detect_topo {self.is_dynamically_detect_topo}")

            # Start background thread instead of blocking with sleep
            t = threading.Thread(target=self._dynamic_topology_worker, daemon=True)
            t.start()

            return None

        try:
            with path.open("r") as f:
                topo = json.load(f)
            self.logger.info(f"Loaded static topology from {path}")
            
            
            # Add nodes and edges to net
            for node in topo.get("nodes", []):
                if not node: continue
                # self.logger.info(f"n {node.get('nickname', '')}")
                if node.get("vertex_type", "") == 0:    # switch
                    ecmp_groups = node.get("ecmp_groups", [])
                    self.static_net.add_node(int(node.get("dpid")), ecmp_groups=ecmp_groups)
                elif node.get("vertex_type", "") == 1: # host
                    ip_list = node.get("ip")
                    mac = node.get("mac")
                    self.static_net.add_node(self.int_to_mac(mac), ip_list=ip_list)
                    for ip in ip_list:
                        self.ip_to_mac[ip] = mac
                    
            for edge in topo.get("edges", []):
                if not edge: continue
                # self.logger.info(f"e src_dpid {edge.get('src_dpid', '')} -> dst_dpid {edge.get('dst_dpid', '')}")
                if edge.get("src_dpid") == 0:   # host to sw
                    # self.logger.info("host to sw")
                    # Look up mac from vertex
                    first_src_ip = edge.get("src_ip")[0]
                    mac = self.int_to_mac(self.ip_to_mac[first_src_ip])
                    # self.logger.info(f"src mac {mac} target dst_dpid {edge.get('dst_dpid')} port 0")
                    self.static_net.add_edge(mac, edge.get("dst_dpid"), port=0)
                elif edge.get("dst_dpid") == 0: # sw to host
                    # self.logger.info("sw to host")
                    # Look up mac from vertex
                    first_dst_ip = edge.get("dst_ip")[0]
                    mac = self.int_to_mac(self.ip_to_mac[first_dst_ip])
                    # self.logger.info(f"src src_dpid {edge.get('src_dpid')} target mac {mac} port {edge.get('src_interface')}")
                    self.static_net.add_edge(edge.get("src_dpid"), mac, port=edge.get("src_interface"))
                else:
                    # self.logger.info("sw to sw")
                    self.static_net.add_edge(edge.get("src_dpid"), edge.get("dst_dpid"), port=edge.get("src_interface"))
            # Install all-destination routing entries
            if is_mininet:
                hub.sleep(60)
            self.install_initial_openflow_entries_completed = True
            self.install_all_pair_paths(self.static_net)
            self.logger.info("Static topology initialized, all-destination paths installed.")
            
        except Exception as e:
            self.logger.error(f"Failed to load static topology file {path}: {e}")


    def print_all_hosts(self, net):
        # Sort nodes by first IP
        sorted_nodes = sorted(
            net.nodes,
            key=lambda node: (
                ipaddress.IPv4Address(net.nodes[node]["ip_list"][0])
                if "ip_list" in net.nodes[node]
                else ipaddress.IPv4Address("255.255.255.255")
            ),  # Put at the end
        )

        # Create a new graph
        ordered_net = nx.DiGraph()

        # Add nodes and edges in order
        for node in sorted_nodes:
            ordered_net.add_node(node, **net.nodes[node])

        ordered_net.add_edges_from(net.edges(data=True))

        # Replace self.net
        net = ordered_net

        all_ips_num = 0
        self.logger.info("All IPs in all hosts (sorted):")
        for node in net.nodes:
            node_data = net.nodes[node]
            if "ip_list" in node_data:
                # Sort all collected IPs
                node_data["ip_list"] = sorted(
                    node_data["ip_list"], key=lambda ip: ipaddress.IPv4Address(ip)
                )
                self.logger.info(f"{node_data['ip_list']}")
                all_ips_num += len(node_data["ip_list"])

        print(f"all_ips_num: {all_ips_num}")


    
    def find_host_by_ip(self, net, target_ip):
        for node in net.nodes:
            node_data = net.nodes[node]
            if "ip_list" in node_data:
                if target_ip in node_data["ip_list"]:
                    return node
        return None


    
    def find_connected_switch(self, net, host):
        return list(net.neighbors(host))[0]

    
    def get_host_port(self, net, host, switch):
        return net[switch][host]["port"]

    def is_switch(self, node):
        return isinstance(node, int) and node in self.switches

    def hash_dst_ip(self, s):
        # Use SHA-256 so neighbor ordering is deterministic per destination IP
        return int(hashlib.sha256(s.encode()).hexdigest(), 16)

    def debug_print_graph(self, net):
        print(f"=== NODES {len(net.nodes)} ===")
        for n, data in net.nodes(data=True):
            print(f"{n}: {data}")

        print(f"\n=== EDGES {len(net.edges)} ===")
        for u, v, data in net.edges(data=True):
            print(f"{u} -> {v}: {data}")


    def install_all_pair_paths(self, net):
        self.logger.info("install_all_pair_paths")
        self.debug_print_graph(net)
        all_hosts_ip_list = []
        all_destination_paths = []
        for node in net.nodes:
            node_data = net.nodes[node]
            if "ip_list" in node_data:
                all_hosts_ip_list.extend(node_data["ip_list"])

        for dst_ip in all_hosts_ip_list:
            dst_host = self.find_host_by_ip(net, dst_ip)
            dst_switch = self.find_connected_switch(net, dst_host)
            # self.logger.info("Installing paths toward host %s via BFS", dst_ip)
            parent_hash = {}
            parent_hash[dst_ip] = None


            # BFS traversal starting from dst_switch
            visited = set()
            queue = [(dst_switch, None)]  # (current_switch, previous_switch)

            while queue:
                current_switch, prev_switch = queue.pop(0)
                if current_switch in visited:
                    continue
                visited.add(current_switch)

                # Determine out_port toward dst_host
                if prev_switch is not None:
                    out_port = net[current_switch][prev_switch]["port"]
                    parent_hash[current_switch] = prev_switch
                else:
                    out_port = self.get_host_port(net, dst_host, current_switch)
                    parent_hash[current_switch] = dst_ip

                # Install OpenFlow entry for forwarding to dst_ip
                # self.logger.info(f"current_switch type {type(current_switch)}")
                # self.logger.info(f"current_switch {current_switch}")
                datapath = self.switches.get(current_switch)
                parser = datapath.ofproto_parser
                match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=dst_ip)
                actions = [parser.OFPActionOutput(out_port)]
                self.add_flow(datapath, priority=10, match=match, actions=actions)

                # self.logger.info(
                #     "Installing flow on switch %s: match(ipv4_dst=%s) -> output(port=%d)",
                #     current_switch,
                #     dst_ip,
                #     out_port,
                # )


                
                # Add neighbors to BFS queue randomly
                # neighbors = list(net.neighbors(current_switch))
                # print(f"neighbors {neighbors}")
                # random.shuffle(neighbors)  # Randomize neighbor order

                # Add neighbors to BFS queue deterministically
                neighbors = list(net.neighbors(current_switch))
                # self.logger.info(f"neighbors {neighbors}")

                        
                # Sort neighbors based on hash of (dst_ip + neighbor)
                neighbors.sort(key=lambda neighbor: (self.hash_dst_ip(dst_ip + str(neighbor))))
                # self.logger.info(f"sorted neighbors {neighbors}")
                
                
                if is_all_dst_biased:
                    ecmp_groups = net.nodes[current_switch]["ecmp_groups"]
                    ecmp_groups_member_in_neighbors = []
                    if ecmp_groups != []:
                        for group in ecmp_groups:
                            members = group["members"]
                            temp = [] 
                            for member in members:
                                port_id = member["port_id"]
                                target_node = self.find_target_by_src_port(net, current_switch, port_id, "port")
                                self.logger.info(f"target_node {target_node}")
                                
                                if target_node in neighbors:
                                    temp.append(target_node)
                                    
                            ecmp_groups_member_in_neighbors.append(temp)
                                
                    self.logger.info(f"ecmp_groups_member_in_neighbors {ecmp_groups_member_in_neighbors}")
                    
                    for group in ecmp_groups_member_in_neighbors:
                        r = random.random()
                        r2 = int((random.random() * 10)) % len(group)-1
                        self.logger.info(f"r {r} r2 {r2}")
                        temp = 0
                        if r <= all_dst_ecmp_biased_factor: # choose first element
                            temp = group[0]
                        else:   # choose others
                            temp = group[r2+1]
                        group.remove(temp)
                        group.append(temp)
                        
                
                    for group in ecmp_groups_member_in_neighbors:
                        for ele in group:
                            neighbors.remove(ele)
                            neighbors.insert(0,ele)
                
                    self.logger.info(f"biased neighbors {neighbors}")
                
                for neighbor in neighbors:
                    if neighbor not in visited and self.is_switch(neighbor):
                        queue.append((neighbor, current_switch))


            # Reconstruct path from any switch back to dst_switch
            for switch in parent_hash:
                path = []
                node = switch
                while node is not None:
                    if parent_hash.get(node) is not None:
                        next_hop = parent_hash[node]
                        if self.is_switch(next_hop):
                            out_port = net[node][next_hop]["port"]
                        else:
                            host = self.find_host_by_ip(net, next_hop)
                            out_port = net[node][host]["port"]
                        path.append((node, out_port))  
                    else:
                        path.append((node, 0)) 
                    node = parent_hash.get(node)


                # print(f"Flow path to {dst_ip} through switch {switch}: {' -> '.join(str(n) for n in path)}")
                full_path = []
                for src_ip in all_hosts_ip_list:
                    if src_ip == dst_ip:
                        continue
                    src_host = self.find_host_by_ip(net, src_ip)
                    src_switch = self.find_connected_switch(net, src_host)
                    out_port = net[src_switch][src_host]["port"]
                    # print(f"src out_port {out_port}")
                    if src_switch == switch:
                        full_path = [(src_ip, out_port)] + path
                        # self.logger.info("Flow path from %s to %s path %s\n\n\n\n\n", src_ip, dst_ip, full_path)
                        all_destination_paths.append(full_path)
        
        self.all_destination_paths = all_destination_paths
                        


    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        msg = ev.msg
        datapath = msg.datapath
        dpid = datapath.id
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        in_port = msg.match["in_port"]

        pkt = packet.Packet(msg.data)
        eth = pkt.get_protocol(ethernet.ethernet)

        # Ignore LLDP packets
        if eth.ethertype == ether_types.ETH_TYPE_LLDP:
            # self.logger.info("LLDP from switch %s", dpid)
            return

        # Ignore ARP packets
        arp_pkt = pkt.get_protocol(arp.arp)
        if arp_pkt:
            return

        # Ignore mDNS, SSDP, LLMNR
        if eth.ethertype == ether_types.ETH_TYPE_IP:
            ip_pkt = pkt.get_protocol(ipv4.ipv4)

            # Define a set of multicast IPs to ignore.
            # This is more efficient than multiple 'if' statements.
            multicast_ips_to_ignore = {
                "224.0.0.251",  # mDNS (Multicast DNS)
                "224.0.0.252",  # LLMNR (Link-Local Multicast Name Resolution)
                "239.255.255.250",  # SSDP (Simple Service Discovery Protocol)
            }

            # If the destination IP is in our ignore list, simply drop the packet and return.
            if ip_pkt.dst in multicast_ips_to_ignore:
                # self.logger.debug(f"Ignoring multicast packet to {ip_pkt.dst} from DPID {dpid}")
                return

        # self.logger.info("Packet in triggered")

        eth_dst = eth.dst
        eth_src = eth.src

        ip_pkt = pkt.get_protocol(ipv4.ipv4)

        if not ip_pkt:
            return  # Only process IPv4 packets

        ip_dst = ip_pkt.dst
        ip_src = ip_pkt.src

        tcp_pkt = pkt.get_protocol(tcp.tcp)
        udp_pkt = pkt.get_protocol(udp.udp)
        icmp_pkt = pkt.get_protocol(icmp.icmp)

        if icmp_pkt:  # Use ping to let Ryu detect all IPs (IP alias)
            port_no = in_port
            # print(f"ip_src {ip_src} packet in")
            host_id = eth_src
            # if self.install_initial_openflow_entries_completed == True:
            #     print(f"ip_src {ip_src} packet in")
            
            # self.logger.info(f"self.is_dynamically_detect_topo {self.is_dynamically_detect_topo}")
            if self.is_dynamically_detect_topo:
                # self.logger.info(f"packet in host_id {host_id}")
                if not self.dynamic_net.has_node(host_id):
                    # self.logger.info("self.dynamic_net.add_node")
                    self.dynamic_net.add_node(host_id, ip_list=[ip_src])
                else:
                    # self.logger.info("else self.dynamic_net.add_node")
                    ip_list = self.dynamic_net.nodes[host_id]["ip_list"]
                    if ip_src not in ip_list:
                        ip_list.append(ip_src)

                if not self.dynamic_net.has_edge(dpid, host_id):
                    self.dynamic_net.add_edge(dpid, host_id, port=port_no)

                if not self.dynamic_net.has_edge(host_id, dpid):
                    self.dynamic_net.add_edge(host_id, dpid, port=0)
           

    @set_ev_cls(event.EventLinkDelete)
    def on_link_delete(self, ev):
        self.logger.warning("Link deleted: %s", ev.link)
        link = ev.link
        src_dpid = link.src.dpid
        src_port = link.src.port_no
        dst_dpid = link.dst.dpid
        dst_port = link.dst.port_no

        # Notify NDT
        api_url = "http://localhost:8000/ndt/link_failure_detected"

        headers = {"Content-Type": "application/json"}

        data = {
            "src_dpid": src_dpid,
            "src_interface": src_port,
            "dst_dpid": dst_dpid,
            "dst_interface": dst_port,
        }

        try:
            response = requests.post(api_url, json=data, headers=headers)
            self.logger.warning("Notified NDT, status code: %s", response.status_code)
        except Exception as e:
            self.logger.warning("Failed to notify NDT: %s", str(e))

    @set_ev_cls(event.EventLinkAdd)
    def on_link_add(self, ev):
        self.logger.warning("Link added: %s", ev.link)
        link = ev.link
        src_dpid = link.src.dpid
        src_port = link.src.port_no
        dst_dpid = link.dst.dpid
        dst_port = link.dst.port_no

        # Add the edge from self.net
        if self.is_dynamically_detect_topo:
            if not self.dynamic_net.has_edge(src_dpid, dst_dpid):
                self.dynamic_net.add_edge(src_dpid, dst_dpid, port=src_port)
                self.logger.info(
                    "Added edge from net: %s %s -> %s %s",
                    src_dpid,
                    dst_dpid,
                    src_port,
                    dst_port,
                )
            # If bidirectional, Add reverse link too
            if not self.dynamic_net.has_edge(dst_dpid, src_dpid):
                self.dynamic_net.add_edge(dst_dpid, src_dpid, port=dst_port)
                self.logger.info("Added reverse edge: %s -> %s", dst_dpid, src_dpid)

        # Notify NDT link is recovered
        api_url = "http://localhost:8000/ndt/link_recovery_detected"

        headers = {"Content-Type": "application/json"}

        data = {
            "src_dpid": src_dpid,
            "src_interface": src_port,
            "dst_dpid": dst_dpid,
            "dst_interface": dst_port,
        }

        try:
            response = requests.post(api_url, json=data, headers=headers)
            self.logger.warning("Notified NDT, status code: %s", response.status_code)
        except Exception as e:
            self.logger.warning("Failed to notify NDT: %s", str(e))

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def flow_stats_reply_handler(self, ev):
        dpid = ev.msg.datapath.id
        stats = []

        for stat in ev.msg.body:
            # Safely extract match
            try:
                match = {k: v for k, v in stat.match.items()}
            except Exception as e:
                self.logger.error("Failed to extract match for DPID %s: %s", dpid, e)
                match = {}

            # Extract instructions and actions
            actions_list = []
            for instruction in stat.instructions:
                if hasattr(instruction, "actions"):
                    for action in instruction.actions:
                        action_info = {
                            "type": action.__class__.__name__,
                            "port": getattr(action, "port", None),
                            "max_len": getattr(action, "max_len", None),
                        }
                        actions_list.append(action_info)

            entry = {
                "table_id": stat.table_id,
                "priority": stat.priority,
                "match": match,
                "instructions": actions_list,
                "duration_sec": stat.duration_sec,
                "packet_count": stat.packet_count,
                "byte_count": stat.byte_count,
            }
            stats.append(entry)

        self.flow_stats_reply[dpid] = stats
        # self.logger.info(
        #     "Flow stats for DPID %s: %s", dpid, json.dumps(stats, indent=2)
        # )




# For NDT API
class RyuServerController(ControllerBase):
    # use the same key you passed to wsgi.register()
    def __init__(self, req, link, data, **config):
        super().__init__(req, link, data, **config)
        self.ndt_app = data[RYU_SERVER_INSTANCE_NAME]

    @route("ndt", "/ryu_server/all_destination_paths", methods=["GET", "POST"])
    def get_all_paths(self, req, **kwargs):
        print("all_destination_paths in")
        payload = {
            "status": "success",
            "all_destination_paths": self.ndt_app.all_destination_paths
        }
        return Response(
            content_type="application/json",
            body=json.dumps(payload).encode('utf-8')
        )
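A client (such as the NDT Kernel) can query this route over Ryu's WSGI service, which listens on port 8080 by default. A minimal sketch — the `fetch_all_paths` helper is illustrative, and the offline check below only validates the reply shape produced by the handler above:

```python
import json
from urllib.request import urlopen

def fetch_all_paths(base_url="http://127.0.0.1:8080"):
    """Query the /ryu_server/all_destination_paths route registered above."""
    with urlopen(f"{base_url}/ryu_server/all_destination_paths") as resp:
        return json.loads(resp.read().decode("utf-8"))

# Offline check of the reply shape (no running controller required):
sample = json.loads('{"status": "success", "all_destination_paths": {}}')
assert sample["status"] == "success"
assert "all_destination_paths" in sample
```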

Note: Before running the controller, update the parameters in intelligent_router.py to match your environment.

  1. Configure deployment parameters (static_topology_file_path, is_mininet, switch_num)

The controller uses these parameters to decide how to load/discover the topology and when to proactively install all-destination IPv4 flow entries.

  • static_topology_file_path: points to your topology JSON (used in static topology mode).
  • is_mininet: set true for Mininet, false for physical testbed.
  • switch_num: the controller waits until this many switches are connected before installing initial routing entries.
from pathlib import Path

# (1) Static topology JSON path (update to your local file path)
static_topology_file_path = Path("/home/<user>/Desktop/NDTwin-Kernel/setting/StaticNetworkTopology_XXX.json")

# (2) Deployment mode
is_mininet = True   # True: Mininet, False: physical testbed

# (3) Number of switches expected to connect before installing initial routing entries
switch_num = 10      # TODO: change to your switch count
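A wrong `static_topology_file_path` only surfaces as an error once the controller tries to load the topology. A small guard like the following (a hypothetical helper, not part of the repo) fails fast before startup:

```python
from pathlib import Path

def check_topology_file(path_str):
    """Raise early if the static topology JSON is missing."""
    p = Path(path_str)
    if not p.is_file():
        raise FileNotFoundError(f"Static topology JSON not found: {p}")
    return p

# Example: a missing path raises immediately instead of failing later
# inside the controller's topology-loading code.
try:
    check_topology_file("/nonexistent/StaticNetworkTopology.json")
except FileNotFoundError as e:
    print(e)
```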

Step 2.7: Install required Python libraries for the customized Ryu app

# Graph algorithms used by the controller
pip install -U networkx

# Pin requests/urllib3 to compatible versions (avoid runtime conflicts)
pip install -U "requests<2.29" "urllib3<2"
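After installing, a quick sanity check confirms that networkx can build the kind of directed, port-annotated graph the controller maintains (the edge attributes mirror the `add_edge(..., port=...)` calls in the app; the graph itself is illustrative):

```python
import networkx as nx

# Directed graph with per-edge output ports, as the controller stores them.
g = nx.DiGraph()
g.add_edge(1, 2, port=1)
g.add_edge(2, 1, port=2)
g.add_edge(2, 3, port=3)

assert nx.shortest_path(g, 1, 3) == [1, 2, 3]
assert g[1][2]["port"] == 1
```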

3. System Dependencies Installation

You need to install build tools, network analysis utilities, and specific C++ libraries required by the NDTwin Kernel and Mininet.

Step 3.1: Update & Install Build Tools

We use ninja-build for faster compilation and iperf3/wireshark for traffic generation and analysis.

sudo apt update
sudo apt install -y build-essential cmake g++ make git \
    ninja-build xterm curl wireshark iperf3

Step 3.2: Install Required Libraries & Mininet

Run the following command to install all necessary development libraries and the network emulator:

sudo apt install -y \
    libboost-all-dev \
    libfmt-dev \
    libspdlog-dev \
    libssh-dev \
    nlohmann-json3-dev \
    mininet \
    openvswitch-switch

Step 3.3: Verify Network Components

Ensure Mininet and Open vSwitch (OVS) are installed correctly.

# Check OVS version
ovs-vsctl --version

# Test Mininet installation (Pingall test)
sudo mn --test pingall

4. Download & Compile NDTwin Kernel

We use CMake and Ninja to compile the C++ core.

Step 4.1: Download Source Code

If you haven’t downloaded the project yet, clone it to your Desktop (or preferred location).

cd ~/Desktop
git clone https://github.com/ndtwin-lab/NDTwin-Kernel.git

Step 4.2: Compile with Ninja

  1. Navigate to the project directory:
cd ~/Desktop/NDTwin-Kernel
  2. Prepare the build directory:
rm -rf build  # Remove any existing build directory
mkdir build && cd build
  3. Compile. Note: we do not set the build type to "Release" yet, because the optimization flags are pending refactoring.
cmake -GNinja ..
ninja clean
ninja -j $(( $(nproc) / 2 ))
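On machines with one core, `$(nproc) / 2` evaluates to 0, which ninja interprets as "no parallelism limit" and can exhaust memory. A small guard (a sketch, assuming a POSIX shell) avoids this:

```shell
# Use half the cores for compilation, but never fewer than one job.
JOBS=$(( $(nproc) / 2 ))
if [ "$JOBS" -lt 1 ]; then JOBS=1; fi
echo "Building with $JOBS parallel jobs"
```

Then run `ninja -j "$JOBS"` instead of hard-coding the job count.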

5. Prepare Network Topology Script

We rely on a custom Python script (testbed_topo.py) to solve an sFlow routing challenge: it creates a virtual management network on the host's loopback interface, so the host can receive sFlow packets from the Mininet switches without disrupting its normal internet connectivity.

  1. Create the Mininet script file:
nano testbed_topo.py
  2. Paste the following code into the file.
#!/usr/bin/env python3


from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import RemoteController, OVSKernelSwitch
from mininet.cli import CLI
from mininet.log import setLogLevel
from mininet.link import TCLink
import os
import threading

# --- Global Configuration ---

# The number of hosts to create in the topology.
HOST_NUM = 128

# The IP address that our sFlow collector will receive packets on.
# We will add this IP as an alias to the host's loopback 'lo' interface.
COLLECTOR_IP = "192.168.123.1"

# The base of the IP address range for our sFlow management network.
# Switches will be assigned IPs from this range.
MGMT_IP_BASE = "192.168.123."


class MyTopo(Topo):
    """
    Custom topology definition.
    """

    def build(self):
        # Add switches to the topology.
        s1 = self.addSwitch("s1")
        s2 = self.addSwitch("s2")
        s3 = self.addSwitch("s3")
        s4 = self.addSwitch("s4")
        s5 = self.addSwitch("s5")
        s6 = self.addSwitch("s6")
        s7 = self.addSwitch("s7")
        s8 = self.addSwitch("s8")
        s9 = self.addSwitch("s9")
        s10 = self.addSwitch("s10")

        # Add links between switches to form a resilient core network.
        self.addLink(s1, s5, bw=1000, port1=1, port2=1)
        self.addLink(s1, s6, bw=1000, port1=2, port2=1)
        self.addLink(s2, s5, bw=1000, port1=1, port2=2)
        self.addLink(s2, s6, bw=1000, port1=2, port2=2)
        self.addLink(s3, s7, bw=1000, port1=1, port2=1)
        self.addLink(s3, s8, bw=1000, port1=2, port2=1)
        self.addLink(s4, s7, bw=1000, port1=1, port2=2)
        self.addLink(s4, s8, bw=1000, port1=2, port2=2)
        self.addLink(s5, s9, bw=10000, port1=3, port2=1)
        self.addLink(s5, s10, bw=10000, port1=4, port2=1)
        self.addLink(s6, s9, bw=10000, port1=3, port2=2)
        self.addLink(s6, s10, bw=10000, port1=4, port2=2)
        self.addLink(s7, s9, bw=10000, port1=3, port2=3)
        self.addLink(s7, s10, bw=10000, port1=4, port2=3)
        self.addLink(s8, s9, bw=10000, port1=3, port2=4)
        self.addLink(s8, s10, bw=10000, port1=4, port2=4)

        # Create and add hosts to a list.
        hosts = []
        for i in range(1, HOST_NUM + 1):
            host = self.addHost(f"h{i}")
            hosts.append(host)

        # Connect the first quarter of hosts to switch s1.
        # Assign port numbers in 3, 4, 5, 6, ... order to avoid conflicts.
        for i in range(int(HOST_NUM / 4)):
            self.addLink(hosts[i], s1, bw=1000, port1=1, port2=i + 3)

        # Connect the second quarter of hosts to switch s2.
        for i in range(int(HOST_NUM / 4), int(HOST_NUM / 2)):
            self.addLink(
                hosts[i], s2, bw=1000, port1=1, port2=i - int(HOST_NUM / 4) + 3
            )

        # Connect the third quarter of hosts to switch s3.
        for i in range(int(HOST_NUM / 2), int(3 * HOST_NUM / 4)):
            self.addLink(
                hosts[i], s3, bw=1000, port1=1, port2=i - int(HOST_NUM / 2) + 3
            )

        # Connect the last quarter of hosts to switch s4.
        for i in range(int(3 * HOST_NUM / 4), HOST_NUM):
            self.addLink(
                hosts[i], s4, bw=1000, port1=1, port2=i - int(3 * HOST_NUM / 4) + 3
            )


def find_ovs_agent_iface(switch):
    """
    Finds the correct network interface name for a given switch.
    In Mininet, the management interface for a switch (e.g., 's1') is
    named after the switch itself. This function reliably finds it.
    """
    for intf in switch.intfList():
        if not intf.name.startswith("lo") and "s" in intf.name:
            return intf.name
    return switch.name  # Fallback to the switch name.


def enable_sflow(switch, agent_iface, collector_ip, collector_port=6343):
    """
    Generates and executes the ovs-vsctl command to enable sFlow on a switch.
    Args:
        switch (str): The name of the switch (e.g., "s1").
        agent_iface (str): The network interface to use as the sFlow agent.
        collector_ip (str): The IP address of the sFlow collector.
        collector_port (int): The UDP port of the sFlow collector.
    """
    target = f"{collector_ip}:{collector_port}"
    # The 'agent' parameter tells OVS which interface's IP should be used
    # as the source IP for sFlow datagrams. This is crucial for identification.
    cmd = (
        f"ovs-vsctl -- --id=@sflow create sflow agent={agent_iface} "
        f'target=\\"{target}\\" header=128 sampling=256 polling=0 '
        f"-- set bridge {switch} sflow=@sflow"
    )
    os.system(cmd)


def ping_test(src, dst_ip):
    """
    A simple utility function to perform a single ping test and print the result.
    This is used for verifying connectivity within the Mininet topology.
    """
    print(f"Pinging from {src.name} to {dst_ip}...")
    result = src.cmd(f"ping -c 1 {dst_ip}")
    print(f"Result from {src.name} to {dst_ip}:\n{result}")


if __name__ == "__main__":
    setLogLevel("info")

    # Clean up any previous Mininet run before starting; running
    # 'sudo mn -c' in a terminal first is good practice.
    # os.system("sudo mn -c")  # Uncomment to automate the cleanup.

    topo = MyTopo()
    # Using RemoteController to connect to an external SDN controller (e.g., Ryu).
    net = Mininet(
        topo=topo,
        controller=RemoteController,
        switch=OVSKernelSwitch,
        link=TCLink,
        autoSetMacs=True,
    )

    try:
        # == STEP 1: Add the IP Alias to the Host's Loopback Interface ==
        # This is the core of the solution. We give the host machine a "mailbox"
        # in our private management network, so it can receive sFlow packets.
        # This command is safe and does not affect normal network operations.
        print(f"Adding IP alias {COLLECTOR_IP}/24 to 'lo' interface...")
        os.system(f"sudo ip addr add {COLLECTOR_IP}/24 dev lo")

        net.start()

        # == STEP 2: Configure Each Switch with a Unique IP and sFlow Target ==
        # We loop through each switch, assign it a unique management IP, and tell it
        # to send sFlow data to our special collector IP alias.
        switch_ip_start = 11  # Start from .11 to avoid colliding with the collector's .1

        switch_names = [f"s{i}" for i in range(1, 11)]  # Switch names s1 to s10
        for i, sw_name in enumerate(switch_names):
            sw = net.get(sw_name)
            iface_name = find_ovs_agent_iface(sw)

            # Assign a unique IP to the switch's management interface.
            switch_ip = f"{MGMT_IP_BASE}{switch_ip_start + i}"
            sw.cmd(f"ifconfig {iface_name} {switch_ip}/24 up")

            print(f"Configuring sFlow for {sw_name}:")
            print(f"  - Agent IP (source): {switch_ip}")
            print(f"  - Target Collector: {COLLECTOR_IP}:6343")

            # Enable sFlow, pointing to our collector's IP alias.
            enable_sflow(
                switch=sw_name, agent_iface=iface_name, collector_ip=COLLECTOR_IP
            )

        # Display the current sFlow configuration for verification.
        os.system("ovs-vsctl list sflow")

        # == Standard Mininet Host and Network Configuration ==
        # The following section sets up the IP addresses, MACs, and ARP entries
        # for the hosts within the simulation, enabling them to communicate.
        for i in range(1, HOST_NUM + 1):
            h = net.get(f"h{i}")
            ip = f"10.0.0.{i}/24"
            mac = f"00:00:00:00:00:{i:02x}"
            h.setIP(ip)
            h.setMAC(mac)

        for i in range(HOST_NUM):
            src = net.get(f"h{i+1}")
            for j in range(HOST_NUM):
                if i == j:
                    continue  # skip adding an ARP entry to itself
                dst_ip = f"10.0.0.{j+1}"
                dst_mac = f"00:00:00:00:00:{(j+1):02x}"
                src.cmd(f"arp -s {dst_ip} {dst_mac}")

        # Launch ping tests in parallel to generate some traffic.
        threads = []
        for i in range(int(HOST_NUM / 2)):
            client = net.get(f"h{i+1}")
            server_ip = f"10.0.0.{i+1+int(HOST_NUM/2)}"
            t = threading.Thread(target=ping_test, args=(client, server_ip))
            threads.append(t)
            t.start()
        for i in range(int(HOST_NUM / 2)):
            server = net.get(f"h{i+1+int(HOST_NUM/2)}")
            client_ip = f"10.0.0.{i+1}"
            t = threading.Thread(target=ping_test, args=(server, client_ip))
            threads.append(t)
            t.start()
        for t in threads:
            t.join()

        print("\n--- Final Configuration Active ---")
        print("Host internet: OK | sFlow reachability: OK | Switch identification: OK")
        print("Run 'sflowtool -p 6343' in another terminal to see the data.")
        CLI(net)

    finally:
        # == STEP 3: Clean Up Gracefully ==
        # This 'finally' block ensures that our created IP alias is removed,
        # and the Mininet network is stopped, no matter how the script exits.
        # This keeps the host system clean.
        print(f"\nCleaning up: Removing IP alias {COLLECTOR_IP} from 'lo' interface...")
        os.system(f"sudo ip addr del {COLLECTOR_IP}/24 dev lo")
        net.stop()
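The static ARP entries installed by the script only work because each host's MAC is derived from its index in lockstep with its IP. The mapping mirrors the f-strings in testbed_topo.py (the helper names here are illustrative):

```python
# Addressing scheme from testbed_topo.py: host h<i> gets IP 10.0.0.<i>
# and MAC 00:00:00:00:00:<i in hex>, so ARP entries can be computed statically.
def host_ip(i):
    return f"10.0.0.{i}"

def host_mac(i):
    return f"00:00:00:00:00:{i:02x}"

assert host_ip(1) == "10.0.0.1"
assert host_mac(15) == "00:00:00:00:00:0f"
assert host_mac(128) == "00:00:00:00:00:80"
```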

Installation Complete

You have successfully finished installing the environment!

To continue, please follow the User Manual to try launching the NDTwin Kernel and get started with your experiments.