[[TOC()]]

= Federation Users Guide =

This tutorial walks through using the federation command line tools to create and operate federated experiments. To simplify the tutorial, most of the experiments are federated within DETERLab or between DETERLab and a desktop federant; the procedures generalize to broader federation.

In order to create federated experiments you will need permission to open holes in DETER's containment. Being able to connect across experiments or from an experiment to outside hosts means that users can send traffic that appears to originate from inside DETERLab - in fact, some examples below explain how to do that fairly generally. To get this privilege, contact testbed operations.

== Creating A Federated Experiment ==

We start by creating an experiment with the layout shown here:

[[Image(federation1.png)]]

Each square is a computer and each line a link. The layout is a simple dumbbell with nodes a and c as the ends of the inner "bar" and b, d, e, and f as the leaves. Nodes a, b, and d are in one testbed and c, e, and f in another.
The following ns2 file describes such a topology:
{{{
# Simple federated topology, all on DETER
#
# SERVICE: project_export:deter/exp1::project=TIED
# SERVICE: project_export:deter/exp2::project=TIED
#
set ns [new Simulator]
source tb_compat.tcl

set a [$ns node]
set b [$ns node]
set c [$ns node]
set d [$ns node]
set e [$ns node]
set f [$ns node]

tb-set-node-testbed $a "deter/exp1"
tb-set-node-testbed $b "deter/exp1"
tb-set-node-testbed $c "deter/exp2"
tb-set-node-testbed $d "deter/exp1"
tb-set-node-testbed $e "deter/exp2"
tb-set-node-testbed $f "deter/exp2"

set link0 [ $ns duplex-link $a $b 1Gb 0ms DropTail]
set link1 [ $ns duplex-link $a $c 1Gb 0ms DropTail]
set link2 [ $ns duplex-link $a $d 1Gb 0ms DropTail]
set link3 [ $ns duplex-link $e $c 1Gb 0ms DropTail]
set link4 [ $ns duplex-link $f $c 1Gb 0ms DropTail]

$ns rtproto Static
$ns run
}}}
A [attachment:federation1.tcl copy of that file] is attached to this page.

Most of that file looks like any other simple DETERLab experiment layout, and one could swap it in on DETERLab directly and get a single experiment. The differences are:
 * The assignment of nodes to testbeds using {{{tb-set-node-testbed}}}
 * The assignment of projects within a testbed by the {{{SERVICE}}} directives in the comments
This layout creates two sub-experiments on DETERLab, because both testbed names are prefixed with the {{{deter}}} testbed name (DETERLab's [FeddAbout#TheExperimentController experiment controller] maps the {{{deter}}} testbed name to DETERLab). The ''testbed''/''sub-testbed'' syntax allows multiple instantiations of experiments on the same testbed. The commented lines including the {{{SERVICE}}} keyword configure [FeddAbout#ExperimentServices experiment services].
The format of the line is four colon-separated parameters giving:
 * The service name ('''project_export'''; other valid services are listed in the [FeddAbout#ExperimentServices experiment services description])
 * The exporter
 * The importers (comma-separated)
 * Any attributes (comma-separated name-value pairs, each joined by an equals sign)
These lines request that each sub-experiment be instantiated in the TIED project on DETER. People following this example should either remove the lines or edit them to name a project that they are a member of.

To create the federated experiment, execute the [FeddCommands#fedd_create.py fedd_create.py] command on {{{users.isi.deterlab.net}}}:
{{{
users:~$ fedd_create.py --file federation1.tcl --experiment_name fed1
}}}
The parameters are the file containing the layout ({{{federation1.tcl}}}) and the shorthand name the caller wants to use to refer to the federated experiment. The shorthand is a request: the experiment controller defines the namespace of experiments and will resolve conflicts. That is, if another user has already named an experiment {{{fed1}}} on this controller, the controller will pick a new name. Services can also be configured using the {{{--service}}} parameter to [FeddCommands#fedd_create.py fedd_create.py]; the format of the parameter is the same colon-separated list described above.

This will run for a little while, as much as a few minutes, while the system gathers rights to access remote testbeds and starts the sub-experiments on them. Shortly you will see something like:
{{{
localname: fed1
fedid: 6a4f58292d572c57ef612e3e44e5d8134196e550
status: starting
}}}
The {{{localname}}} is the name that the controller picked for the federated experiment, usually the same as the {{{experiment_name}}} parameter. If it differs, the user will need to use that shorthand in subsequent commands. The {{{fedid}}} is a unique identifier that refers to this experiment. We discuss ways to use this below.
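The colon-separated service strings used in both the {{{SERVICE}}} comments and the {{{--service}}} parameter can be checked mechanically before submitting an experiment. Here is a minimal sketch of a parser, assuming only the four-field format described above (the helper name is ours, not part of the fedd tools):

```python
# Sketch: parse a fedd service string of the form
#   name:exporter:importer1,importer2:attr1=val1,attr2=val2
# Assumes only the four-field colon-separated layout described above.

def parse_service(s):
    name, exporter, importers, attrs = s.split(":", 3)
    return {
        "name": name,
        "exporter": exporter,
        "importers": importers.split(",") if importers else [],
        "attributes": dict(a.split("=", 1) for a in attrs.split(","))
                      if attrs else {},
    }

svc = parse_service("project_export:deter/exp1::project=TIED")
print(svc["name"])        # project_export
print(svc["exporter"])    # deter/exp1
print(svc["attributes"])  # {'project': 'TIED'}
```

A script could use this, for example, to verify that every sub-testbed named in a layout has a matching {{{project_export}}} service before calling {{{fedd_create.py}}}.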
Assuming that {{{status}}} is {{{starting}}}, the experiment is being created. If there has been an error, an error message will appear as well.

At any time a user can poll the current status of their federated experiments using the [FeddCommands#fedd_multistatus.py fedd_multistatus.py] command. Run it like this on {{{users.isi.deterlab.net}}} and you should see similar output:
{{{
users:~$ fedd_multistatus.py
fed1:6a4f58292d572c57ef612e3e44e5d8134196e550:starting
}}}
The output is the experiment name, the experiment fedid, and the status, colon-separated. If an experiment creation has failed, the output will look more like:
{{{
users:~$ fedd_multistatus.py
fed1:6a4f58292d572c57ef612e3e44e5d8134196e550:starting
bad_experiment:da83eb06712a2006abeae34308c363b0ab0faa0a:failed
}}}
The experiment called {{{bad_experiment}}} has failed to be created. When fed1 has finished swapping in (both parts) in DETERLab, the output will look like:
{{{
users:~$ fedd_multistatus.py
fed1:6a4f58292d572c57ef612e3e44e5d8134196e550:active
bad_experiment:da83eb06712a2006abeae34308c363b0ab0faa0a:failed
}}}
A user can certainly use the {{{fedd_multistatus.py}}} command to poll and monitor experiment creation, but the output is fairly terse and polling is inefficient. The [FeddCommands#fedd_spewlog.py fedd_spewlog.py] command will output a debugging log from the experiment controller. If the experiment is being created or terminated, the command puts out the log so far and continues updating it until the operation succeeds or fails, so a user can follow the progress of the creation in real time.
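Because each {{{fedd_multistatus.py}}} record is a single colon-separated line, the output is easy to consume from a monitoring script. A minimal parser sketch, operating on sample output in the format shown above (the function name is ours, not part of the fedd tools; a real poller would capture the command's output, e.g. with {{{subprocess}}}, and sleep between checks until the status becomes {{{active}}} or {{{failed}}}):

```python
# Sketch: parse fedd_multistatus.py output.  Each record is one line of
# the form name:fedid:status, as described above.

def parse_multistatus(text):
    """Map experiment name -> {'fedid': ..., 'status': ...}."""
    experiments = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        name, fedid, status = line.split(":")
        experiments[name] = {"fedid": fedid, "status": status}
    return experiments

sample = """\
fed1:6a4f58292d572c57ef612e3e44e5d8134196e550:active
bad_experiment:da83eb06712a2006abeae34308c363b0ab0faa0a:failed"""

exps = parse_multistatus(sample)
print(exps["fed1"]["status"])            # active
print(exps["bad_experiment"]["status"])  # failed
```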
Here is sample output from the sample creation above: {{{ users:~$ fedd_spewlog.py --experiment_name fed1 07 May 14 11:15:34 fedd.experiment_control.fed1 Calling StartSegment at https://users.isi.deterlab.net:23231 07 May 14 11:15:34 fedd.experiment_control.fed1 Calling StartSegment at https://users.isi.deterlab.net:23231 07 May 14 11:15:34 fedd.experiment_control.fed1 Calling StartSegment at https://users.isi.deterlab.net:23233 Allocated vlan: 380807 May 14 11:16:34 fedd.experiment_control.fed1 Waiting for sub threads (it has been 1 mins) 07 May 14 11:17:34 fedd.experiment_control.fed1 Waiting for sub threads (it has been 2 mins) 07 May 14 11:15:37 fedd.access.fed1-exp1 State is swapped 07 May 14 11:15:37 fedd.access.fed1-exp1 [swap_exp]: Terminating fed1-exp1 07 May 14 11:15:41 fedd.access.fed1-exp1 [swap_exp]: Terminate succeeded 07 May 14 11:15:41 fedd.access.fed1-exp1 [make_null_experiment]: Creating experiment 07 May 14 11:16:02 fedd.access.fed1-exp1 [make_null_experiment]: Create succeeded 07 May 14 11:16:02 fedd.access.fed1-exp1 [start_segment]: creating script file 07 May 14 11:16:02 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/tmp739CnG faber@users.isi.deterlab.net:tmp739CnG 07 May 14 11:16:02 fedd.access.fed1-exp1 [ssh_cmd]: /usr/bin/ssh -n -o 'IdentitiesOnly yes' -o 'StrictHostKeyChecking no' -o 'ForwardX11 no' -i /usr/local/etc/fedd/deter/fedd_rsa faber@users.isi.deterlab.net sh -x tmp739CnG 07 May 14 11:16:03 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-P0TiQP/fedgw_rsa.pub faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp1/tmp/fedgw_rsa.pub 07 May 14 11:16:03 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa 
/tmp/access-P0TiQP/fedgw_rsa faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp1/tmp/fedgw_rsa 07 May 14 11:16:03 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-P0TiQP/hosts faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp1/tmp/hosts 07 May 14 11:16:04 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-P0TiQP/ca.pem faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp1/tmp/ca.pem 07 May 14 11:16:04 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-P0TiQP/node.pem faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp1/tmp/node.pem 07 May 14 11:16:04 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-P0TiQP/client.conf faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp1/tmp/client.conf 07 May 14 11:16:05 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-P0TiQP/experiment.tcl faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp1/tmp/experiment.tcl 07 May 14 11:16:05 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-P0TiQP/software/fedkit.tgz faber@users.isi.deterlab.net:/proj/TIED/software//fed1-exp1/fedkit.tgz 07 May 14 11:16:06 fedd.access.fed1-exp1 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-P0TiQP/software/seer-fbsd71-current.tgz 
faber@users.isi.deterlab.net:/proj/TIED/software//fed1-exp1/seer-fbsd71-current.tgz 07 May 14 11:16:06 fedd.access.fed1-exp1 [modify_exp]: Modifying fed1-exp1 07 May 14 11:16:23 fedd.access.fed1-exp1 [modify_exp]: Modify succeeded 07 May 14 11:16:23 fedd.access.fed1-exp1 [swap_exp]: Swapping fed1-exp1 in 07 May 14 11:17:49 fedd.access.fed1-exp1 [swap_exp]: Swap succeeded 07 May 14 11:17:49 fedd.access.fed1-exp1 [get_mapping] Generating mapping 07 May 14 11:17:49 fedd.access.fed1-exp1 Node mapping complete 07 May 14 11:17:49 fedd.access.fed1-exp1 Link mapping complete07 May 14 11:15:37 fedd.access.fed1-exp2 State is swapped 07 May 14 11:15:37 fedd.access.fed1-exp2 [swap_exp]: Terminating fed1-exp2 07 May 14 11:15:41 fedd.access.fed1-exp2 [swap_exp]: Terminate succeeded 07 May 14 11:15:41 fedd.access.fed1-exp2 [make_null_experiment]: Creating experiment 07 May 14 11:16:02 fedd.access.fed1-exp2 [make_null_experiment]: Create succeeded 07 May 14 11:16:02 fedd.access.fed1-exp2 [start_segment]: creating script file 07 May 14 11:16:02 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/tmp6dcyg4 faber@users.isi.deterlab.net:tmp6dcyg4 07 May 14 11:16:02 fedd.access.fed1-exp2 [ssh_cmd]: /usr/bin/ssh -n -o 'IdentitiesOnly yes' -o 'StrictHostKeyChecking no' -o 'ForwardX11 no' -i /usr/local/etc/fedd/deter/fedd_rsa faber@users.isi.deterlab.net sh -x tmp6dcyg4 07 May 14 11:16:03 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-iH3Eaj/fedgw_rsa.pub faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp2/tmp/fedgw_rsa.pub 07 May 14 11:16:03 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-iH3Eaj/fedgw_rsa 
faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp2/tmp/fedgw_rsa 07 May 14 11:16:03 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-iH3Eaj/hosts faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp2/tmp/hosts 07 May 14 11:16:04 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-iH3Eaj/ca.pem faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp2/tmp/ca.pem 07 May 14 11:16:04 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-iH3Eaj/node.pem faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp2/tmp/node.pem 07 May 14 11:16:04 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-iH3Eaj/client.conf faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp2/tmp/client.conf 07 May 14 11:16:05 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-iH3Eaj/experiment.tcl faber@users.isi.deterlab.net:/proj/TIED/exp/fed1-exp2/tmp/experiment.tcl 07 May 14 11:16:05 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-iH3Eaj/software/fedkit.tgz faber@users.isi.deterlab.net:/proj/TIED/software//fed1-exp2/fedkit.tgz 07 May 14 11:16:06 fedd.access.fed1-exp2 [scp_file]: /usr/bin/scp -o IdentitiesOnly yes -o StrictHostKeyChecking no -o ForwardX11 no -i /usr/local/etc/fedd/deter/fedd_rsa /tmp/access-iH3Eaj/software/seer-fbsd71-current.tgz faber@users.isi.deterlab.net:/proj/TIED/software//fed1-exp2/seer-fbsd71-current.tgz 07 
May 14 11:16:06 fedd.access.fed1-exp2 [modify_exp]: Modifying fed1-exp2 07 May 14 11:16:23 fedd.access.fed1-exp2 [modify_exp]: Modify succeeded 07 May 14 11:16:23 fedd.access.fed1-exp2 [swap_exp]: Swapping fed1-exp2 in 07 May 14 11:18:09 fedd.access.fed1-exp2 [swap_exp]: Swap succeeded 07 May 14 11:18:09 fedd.access.fed1-exp2 [get_mapping] Generating mapping 07 May 14 11:18:10 fedd.access.fed1-exp2 Node mapping complete 07 May 14 11:18:10 fedd.access.fed1-exp2 Link mapping complete07 May 14 11:18:11 fedd.experiment_control.fed1 [start_segment]: Experiment fed1 active active }}} Much of that is useful only for debugging but the last line indicates the final status of the experiment creation. The two choices are {{{active}}} and {{{failed}}}. == Examining the Federated Experiment == To gather detailed information about how the system has created the experiment, use the [FeddCommands#fedd_ftopo.py fedd_ftopo.py] command (ftopo is short for "federated topology"). Here's a sample invocation; again the {{{--experiment_name}}} parameter is the shorthand name returned by {{{fedd_create.py}}} or picked from {{{fedd_multistatus.py}}}. {{{ users:~$ fedd_ftopo.py --experiment_name fed1 d:bpc144.isi.deterlab.net,d.fed1-exp1.TIED.isi.deterlab.net:active::deter/exp1 e:bpc130.isi.deterlab.net,e.fed1-exp2.TIED.isi.deterlab.net:active::deter/exp2 a:bpc151.isi.deterlab.net,a.fed1-exp1.TIED.isi.deterlab.net:active::deter/exp1 f:bpc142.isi.deterlab.net,f.fed1-exp2.TIED.isi.deterlab.net:active::deter/exp2 b:bpc154.isi.deterlab.net,b.fed1-exp1.TIED.isi.deterlab.net:active::deter/exp1 c:bpc150.isi.deterlab.net,c.fed1-exp2.TIED.isi.deterlab.net:active::deter/exp2 }}} The output is colon-separated. The fields are: * The node name from the experiment layout file * The local names that the individual testbed has given to the node. These are testbed dependent. If there is more than one name assigned by the testbed, they will be comma-separated. 
 * The status of the individual node
 * Operations allowed on the node
 * The testbed name on which the node is instantiated
The first line describes node {{{d}}}. It can be accessed at {{{bpc144.isi.deterlab.net}}} and {{{d.fed1-exp1.TIED.isi.deterlab.net}}}. The node is active, defines no operations, and is instantiated on {{{deter/exp1}}}.

Simple views of the topology can be produced by the [FeddCommands#fedd_image.py fedd_image.py] command. This invocation:
{{{
users:~$ fedd_image.py --experiment_name fed1 --out fed1.png
}}}
produces this image in {{{fed1.png}}}:

[[Image(fed1.png)]]

Computers are green squares, links are lines, and shared networks are blue circles (not shown). Each node is labeled with its name, each link or network with its name, and each interface with its IP address. The labels can be removed with the {{{--no_labels}}} parameter:

[[Image(fed1-nolabels.png)]]

The nodes can be grouped by any attribute, the most useful of which is the testbed on which they are instantiated:
{{{
fedd_image.py --experiment_name fed1 --group testbed --out fed1-group.png
}}}
produces

[[Image(fed1-group.png)]]

This is very similar to the image at the top of the page. That image was also produced using {{{fedd_image.py}}}, but from the layout description directly:
{{{
users:~$ fedd_image.py --file federation1.tcl --group testbed --out fed1-groups.png
}}}
Because the specification does not include IP addresses, they do not appear in that image.

== Looking Around Inside The Federated Experiment ==

Armed with the information from {{{fedd_ftopo.py}}}, a user can log in to an experiment node and see the unified experiment. The user can log into node {{{d}}} by:
{{{
users:~$ ssh d.fed1-exp1.TIED
}}}
From there the user can ping the various nodes in both parts of the federated experiment and run any tools, etc., as though it were a single DETER experiment. The experiment is composed of two DETER experiments.
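The colon-separated {{{fedd_ftopo.py}}} records are likewise easy to consume from a script, for example to build a list of ssh targets per testbed. A minimal parser sketch, assuming only the five-field format described above (the function name is ours, not part of the fedd tools):

```python
# Sketch: parse one fedd_ftopo.py record of the form
#   node:name1,name2:status:operations:testbed
# Assumes only the five-field colon-separated layout described above.

def parse_ftopo_line(line):
    node, names, status, ops, testbed = line.split(":")
    return {
        "node": node,
        "names": names.split(","),         # testbed-assigned hostnames
        "status": status,
        "operations": ops.split(",") if ops else [],
        "testbed": testbed,
    }

rec = parse_ftopo_line(
    "d:bpc144.isi.deterlab.net,d.fed1-exp1.TIED.isi.deterlab.net"
    ":active::deter/exp1")
print(rec["node"], rec["testbed"])  # d deter/exp1
print(rec["names"][0])              # bpc144.isi.deterlab.net
```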
The federation system largely hides this composition, but it is not hard to deduce the underlying experiment names from the {{{fedd_ftopo.py}}} output. In this case the two experiments are:

[[Image(exp1.png)]]

and

[[Image(exp2.png)]]

Because each testbed only knows about the resources it has committed to the federated experiment, these representations are incomplete. Even the two virtual testbeds on DETERLab do not include information about the other testbed.

It can be instructive to see how the federation system interconnects these experiments. Logging into node {{{d}}} in the {{{deter/exp1}}} sub-experiment and inspecting the routing table shows:
{{{
d:~$ ip route
default via 192.168.1.254 dev eth1
10.0.0.0/24 dev eth2  proto kernel  scope link  src 10.0.0.1
10.0.1.0/24 via 10.0.0.2 dev eth2  proto zebra  metric 30
10.0.2.0/24 via 10.0.0.2 dev eth2  proto zebra  metric 30
10.0.3.0/24 via 10.0.0.2 dev eth2  proto zebra  metric 20
10.0.4.0/24 via 10.0.0.2 dev eth2  proto zebra  metric 20
192.168.0.0/22 dev eth1  proto kernel  scope link  src 192.168.3.144
192.168.252.0/22 via 192.168.1.254 dev eth1  proto zebra
}}}
There are routes in the table to networks that are not in the DETERLab experiment containing {{{d}}}: several of the 10.0.0.0/8 networks reached via 10.0.0.2 are not present in the {{{TIED/fed1-exp1}}} experiment at all. (The 192.168 addresses are all for accessing DETERLab infrastructure.) The federation system has done four things to make it possible and simple for the sub-experiments to communicate:
 * The [FeddAbout#TheExperimentController experiment controller] assigned consistent network (IP) addresses to the experiment nodes.
 * The controller configured and started [http://en.wikipedia.org/wiki/Open_Shortest_Path_First OSPF] routers on each machine to propagate routes between the two experiments.
 * The controller interconnected the two experiments through a local VLAN (by coordinating with a third virtual testbed).
 * Entries for the nodes in the other experiment were added to {{{/etc/hosts}}}, so symbolic names can be used to reach nodes allocated in either experiment.
Similar work can be done to present unified login information and shared filesystems. That is not done in this case because the experiments are both in the same DETER project and testbed.

== Releasing Resources ==

When the federated experiment is no longer needed, the [FeddCommands#fedd_terminate.py fedd_terminate.py] command is used to release its resources. The individual testbeds release their resources and the experiment controller purges all its data. To release the experiment above:
{{{
users:~$ fedd_terminate.py --experiment_name fed1
}}}
The command does not return until the individual testbeds have released their resources. The {{{--force}}} parameter can be used to terminate experiments that are not in the {{{active}}} state, or that have other problems.

== Generating and Using TopDL Descriptions ==

[TopDl TopDL] is the XML-encoded layout description language that the federation system uses internally. While ns2/tcl can be shorter and easier for humans to write, it can be very difficult for tools to deal with. In particular, because each ns2 description is a program in a Turing-complete interpreted language, extracting the layout for analysis or presentation requires running the program. Parsing XML is much less computationally expensive and presents fewer security concerns. The [FeddCommands#fedd_create.py fedd_create.py] command will take either a topdl or an ns2 description. To get a topdl description from an ns2 description, one can use the [FeddCommands#fedd_ns2topdl.py fedd_ns2topdl.py] command. Note that this command must contact an [FeddAbout#TheExperimentController experiment controller] to do the conversion.
DETERLab's experiment controller at !https://users.isi.deterlab.net:23235 provides this service; running {{{fedd_ns2topdl.py}}} on that machine contacts the experiment controller by default. Converting the [attachment:federation1.tcl experiment layout we used above] into topdl is done by running this command on {{{users.isi.deterlab.net}}}:
{{{
fedd_ns2topdl.py --file federation1.tcl --out federation1.xml
}}}
The contents of [attachment:federation1.xml federation1.xml], run through an XML formatter, are attached. The topdl representation is much more verbose, but encodes the same information about the layout. What it does not encode is the [FeddAbout#ExperimentServices service] information. When using the topdl representation, services have to be passed to [FeddCommands#fedd_create.py fedd_create.py] using the {{{--service}}} parameter, like so:
{{{
users:~$ fedd_create.py --service project_export:deter/exp1::project=TIED --service project_export:deter/exp2::project=TIED --file federation1.xml --experiment_name fed2
localname: fed2
fedid: 4758ff5a8b20ec1a54ea59f84e1eaaf60ee39cf9
status: starting
}}}
The two formats are generally interchangeable using the federation tools, though some external tools will prefer one or the other.

== Federating A Desktop Computer Into A DETER Experiment ==

The DETER lightweight federation system is a simple way to join your desktop with a DETER experiment: the desktop dynamically joins an experiment in DETER when the experiment is created using the federation system. The technology in general is also called desktop federation. The desktop runs a virtual machine image that DETER provides, which needs some simple configuration from the experimenter to coordinate with DETER.
In the simplest case, that configuration consists of:
 * The IP address on which DETER can reach the VM
 * The DETER user who is allowed to create experiments that talk to this desktop
The rest of this document describes how to get and use the lightweight federation technology. We also have detailed instructions about [FeddDesktop configuring a desktop controller outside a VM] and [DesktopExoGeni using desktop federation in ExoGENI].

== Getting A Lightweight Federation Image and Getting DETER Federation Rights ==

We have two lightweight federation images:
 * [https://vim.isi.edu/Linux_Federation_14.04.ova Ubuntu 14.04 based image]
 * [https://vim.isi.edu/Linux_Federation.ova Ubuntu 12.04 based image]
These are [http://en.wikipedia.org/wiki/Open_Virtualization_Format open virtualization format] files that can be imported by many virtual machine monitors (VMMs). In particular, [http://virtualbox.org virtualbox], a free, open-source VMM that runs on many operating systems, will import the files.

In addition, users will need to get their DETER accounts authorized to create federated experiments. DETER administration controls this facility because it allows users to reach outside DETER, which we normally do not allow. To get an account authorized for federation, contact [mailto:faber@isi.edu].

== Connecting a Desktop Node to a DETER Experiment ==

This section is a tutorial description of setting up a federated experiment using the VM above.

=== Configuring The Desktop ===

Download the VM image above and import it into a VMM. If using [http://virtualbox.org virtualbox], the directions are [http://grok.lsu.edu/article.aspx?articleid=13838 here]. In addition, you will need to forward external connections on port 23231 into the virtual machine. Instructions for this are [https://www.virtualbox.org/manual/ch06.html#natforward here].
Most VMMs, including virtualbox, configure the guest VM to be assigned [http://en.wikipedia.org/wiki/Private_network IP addresses on a private network]. These addresses cannot be routed in the public Internet, so the VMM translates them using a [http://en.wikipedia.org/wiki/NAT Network Address Translator (NAT)]. This is a common configuration for home networks as well.

Similarly, by default DETERLab assigns experimental nodes the same sort of private addresses. In particular, interfaces visible as part of the federated experiment are drawn from the 10.0.0.0/8 address space, and interfaces used to access and provide services to the computers in DETERLab are drawn from the 192.168.0.0/16 address space. Unfortunately virtualbox also uses the 10.0.0.0/8 address space for VMs, which can confuse the routing in DETERLab. Assign the single interface in the VM an address in the 192.168.0.0/16 address space rather than the 10.0.0.0/8 space. Directions to do that in virtualbox are [https://www.virtualbox.org/manual/ch09.html#idp59863744 here]. In particular, we recommend using the 192.168.233.0/24 address space to avoid conflicts:
{{{
$ VBoxManage modifyvm "VM name" --natnet1 "192.168.233/24"
}}}
Start the VM and log into it. The account on the VM is fedd and the password is fedd. Do not allow remote logins to this VM unless you change the account's password.

The image provides a script to set up a single node for federation. The script and the federation configuration are both stored in the {{{/usr/local/etc/fedd}}} directory. To configure the federation system, log into the node, change directory to {{{/usr/local/etc/fedd}}}, and run the {{{init_fedd}}} script there. It takes two parameters: the management IP address and the experimenter to authorize.
The following sequence will configure fedd to allow user "faber" to contact DETER using IP address 192.1.242.14:
{{{
~# cd /usr/local/etc/fedd
/usr/local/etc/fedd# ./init_fedd 192.1.242.14 faber
}}}
When that script completes, start the federation system on the VM. When debugging, we recommend leaving a window open and running the daemon as:
{{{
# fedd.py --config /usr/local/etc/fedd/desktop.conf --debug
}}}
You can also run it in the background, logging to {{{/var/log/fedd.log}}}, by:
{{{
# touch /var/log/fedd.log
# fedd.py --config /usr/local/etc/fedd/desktop.conf --logfile /var/log/fedd.log &
}}}
{{{fedd.py}}} expects the logfile to exist, hence the {{{touch}}} command. Some detailed debugging messages produced by libraries and other dependent software are visible with {{{--debug}}} but lost during normal logging.

=== Creating the Federated Experiment ===

To connect the desktop to a DETER experiment, specify a federated experiment with the desktop on testbed "desktop". Here is an example DETER experiment description of that format:
{{{
# simple DETER topology federated to a desktop
#
set ns [new Simulator]
source tb_compat.tcl

set a [$ns node]
set b [$ns node]
set c [$ns node]
set d [$ns node]
set e [$ns node]
set f [$ns node]

tb-set-node-testbed $a "deter"
tb-set-node-testbed $c "deter"
tb-set-node-testbed $d "deter"
tb-set-node-testbed $e "deter"
tb-set-node-testbed $f "deter"
tb-set-node-testbed $b "desktop"

set link0 [ $ns duplex-link $a $b 1Gb 0ms DropTail]
set link1 [ $ns duplex-link $a $c 1Gb 0ms DropTail]
set link2 [ $ns duplex-link $a $d 1Gb 0ms DropTail]
set link3 [ $ns duplex-link $e $c 1Gb 0ms DropTail]
set link4 [ $ns duplex-link $f $c 1Gb 0ms DropTail]

$ns rtproto Static
$ns run
}}}
You can [attachment:desk.tcl download a copy of that file]. That file specifies a topology that looks like this; computers are boxes, network connections are lines, and the larger blue outlines show which testbed each computer is in.
[[Image(desk.png)]]

To instantiate that topology, run the command:
{{{
fedd_create.py --file desk.tcl --experiment_name $EXPNAME --map desktop:https://$MGMT_IP:23231
}}}
where {{{$EXPNAME}}} is replaced with a short mnemonic name for the combined experiment (the example below assumes we used {{{faber-smart5}}}) and {{{$MGMT_IP}}} is the IP address of the node running the federation software. You can use its DNS name as well. When that command returns, you will see something like:
{{{
localname: faber-smart5
fedid: 2b7b6852a2db53d3e77431937e1da01d8fbf335d
status: starting
}}}
DETER is coordinating between its local federation controllers and the one running on the desktop node, allocating resources, and stitching them together. You can check the status using the same commands we used above.

=== Interacting with the Experiment ===

Once an experiment is active, it completes stitching itself together and a user can log in to the various nodes using the native testbed mechanisms. That stitching may take a minute or two after the federation system declares the experiment active. Additionally, it may take the dynamic routing some time to converge, depending on the complexity of the topology.

Inside the DETER experiment, one can log into the nodes and interact with them by node name as usual. Details are [https://trac.deterlab.net/wiki/Tutorial/UsingNodes here]. With node b on the desktop and a on DETER, as shown above, this sequence shows the transparent connection. A user logs into node a in the local experiment (experiment faber-smart5 in project detertest) and pings node b from node a:
{{{
users.isi.deterlab.net:~$ ssh a.faber-smart5.detertest
a:~$ ping b
PING b-link0 (10.0.3.2) 56(84) bytes of data.
64 bytes from b-link0 (10.0.3.2): icmp_req=1 ttl=64 time=152 ms
64 bytes from b-link0 (10.0.3.2): icmp_req=2 ttl=64 time=76.3 ms
}}}
While for all intents and purposes b (the desktop) is part of the experiment, the long ping times make it easy to identify:
{{{
a:~$ ping e
PING e-link3 (10.0.1.1) 56(84) bytes of data.
64 bytes from e-link3 (10.0.1.1): icmp_req=1 ttl=63 time=0.797 ms
64 bytes from e-link3 (10.0.1.1): icmp_req=2 ttl=63 time=0.437 ms
}}}
Similarly, one can log into the desktop VM node and see the DETER nodes by the same names:
{{{
root@server-18393:~# ping a
PING a-link2 (10.0.0.2) 56(84) bytes of data.
64 bytes from a-link2 (10.0.0.2): icmp_req=1 ttl=64 time=76.7 ms
64 bytes from a-link2 (10.0.0.2): icmp_req=2 ttl=64 time=76.5 ms
}}}
It may be surprising, but the desktop node can route to nodes throughout our multi-hop DETER topology:
{{{
root@server-18393:~# ping c
PING c-link3 (10.0.1.2) 56(84) bytes of data.
64 bytes from c-link3 (10.0.1.2): icmp_req=1 ttl=63 time=153 ms
64 bytes from c-link3 (10.0.1.2): icmp_req=2 ttl=63 time=76.7 ms
}}}
The federated experiment in DETER runs [http://en.wikipedia.org/wiki/Ospf OSPF] on each node, and fedd.py starts an OSPF daemon on the desktop VM node as well; the desktop simply learns the routing table from that connection. We will show how to exploit this connection to interconnect more interesting topologies.

=== Tearing The Experiment Down ===

To tear down the experiment, use the same {{{fedd_terminate.py}}} command as before:
{{{
users:~$ fedd_terminate.py --experiment_name $EXPNAME
}}}
This releases the DETER resources and disconnects the desktop node. If an experimenter tears down the desktop VM before the {{{fedd_terminate.py}}} command is issued, or there is some other problem, the {{{--force}}} flag can be given to make {{{fedd_terminate.py}}} purge all state that the federation system can reach.
== A More Complex Desktop Layout ==

To connect a more complex topology, we route TCP connections from the DETER experiment through the desktop VM to the local network. This example uses the ISI subnet, 128.9.0.0/16. The desktop runs the VM running fedd, and that VM will be accessible as before at hostname "b". In addition, we will make the rest of the subnet accessible throughout the DETER topology by its IP addresses.

=== Configuring the Desktop Federation VM ===

Log in to the VM and run the {{{init_fedd}}} utility as before. In addition, add the following lines to {{{/usr/local/etc/fedd/desktop.config}}}:
{{{
# Export Interfaces (interfaces to run OSPF on/export to DETER).
# Comma-separated list of interface names
export_interfaces: eth0

# Export Networks (networks to export to OSPF - these usually correspond
# to export_interfaces). Comma separated
export_networks: 128.9.0.0/16
}}}
With those settings, {{{fedd.py}}} will export any routes discovered on {{{eth0}}} and network 128.9.0.0/16 to the ospfd running in DETER. Routes to other places will not be exported, nor will routes on other interfaces. (The VM has only the eth0 interface, so the second restriction is moot here.)

Now we need to construct a route to the network we want to export. This route will be given to the [http://www.nongnu.org/quagga/docs/docs-info.html quagga routing system] and distributed throughout the experiment. To construct it, we must know the default router for the VM. To discover it, use the command:
{{{
$ ip route
default via 192.168.233.2 dev eth0 proto static
[ ... ]
}}}
That may produce many lines of output, but the important one is the default route line. If you have constructed a more complex routing layout, you will need to choose the appropriate router, but that is beyond the scope of this example.
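If you want to script this discovery step, the default router can be pulled out of the {{{ip route}}} output with a one-line filter. A small sketch, run here against the captured sample line from above so it works anywhere; on the VM you would pipe {{{ip route}}} itself instead of the printf:

```shell
# Extract the gateway from the "default via ..." line of `ip route`.
# The sample is the captured output shown above; on the VM, replace
# the printf pipeline with:  ip route | awk '/^default via/ ...'
sample='default via 192.168.233.2 dev eth0 proto static'
gw=$(printf '%s\n' "$sample" | awk '/^default via/ {print $3; exit}')
echo "default router: $gw"
```

The {{{exit}}} in the awk program stops at the first default route, which is the right behavior on a host with a single default.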
Edit {{{/usr/local/etc/fedd/external_networks}}} and put the following line in:
{{{
ip route 128.9.0.0/16 192.168.233.2
}}}
This is a route command to the [http://www.nongnu.org/quagga/docs/docs-info.html#Zebra quagga routing system] explaining that the route to the network we want to export into the federated experiment (128.9.0.0/16) can be reached via the default router of the VM (192.168.233.2). When the federation system builds an experiment, it issues those commands directly.

=== Creating the Federated Experiment and Demonstrating Connectivity ===

Start up fedd.py on the desktop VM as before and run the same {{{fedd_create.py}}} command on DETER. In addition to being able to contact the desktop VM directly, nodes in the DETER experiment now see a route to the 128.9.0.0/16 network:
{{{
a:~$ ip route
default via 192.168.1.254 dev eth0
10.0.0.0/24 dev eth3 proto kernel scope link src 10.0.0.2
10.0.1.0/24 via 10.0.4.2 dev eth1 proto zebra metric 20
10.0.2.0/24 via 10.0.4.2 dev eth1 proto zebra metric 20
10.0.3.0/24 dev eth4 proto kernel scope link src 10.0.3.1
10.0.4.0/24 dev eth1 proto kernel scope link src 10.0.4.1
128.9.0.0/16 via 10.0.3.2 dev eth4 proto zebra metric 20
192.168.0.0/22 dev eth0 proto kernel scope link src 192.168.0.81
192.168.252.0/22 via 192.168.1.254 dev eth0 proto zebra
}}}
We can collect the ISI home page from the 128.9.0.0/16 network:
{{{
a:~$ wget www.isi.edu
--2014-06-10 10:42:34--  http://www.isi.edu/
Resolving www.isi.edu (www.isi.edu)... 128.9.176.20
Connecting to www.isi.edu (www.isi.edu)|128.9.176.20|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://www.isi.edu/home [following]
--2014-06-10 10:42:34--  http://www.isi.edu/home
Reusing existing connection to www.isi.edu:80.
HTTP request sent, awaiting response...
200 OK
Cookie coming from www.isi.edu attempted to set domain to www.isi.edu
Length: unspecified [text/html]
Saving to: `index.html'

    [ <=> ] 12,294      --.-K/s   in 0.1s

2014-06-10 10:42:34 (80.7 KB/s) - `index.html' saved [12294]
}}}
This also works from nodes deeper in the layout, e.g., node "f":
{{{
f:~$ ip route
default via 192.168.1.254 dev eth0
10.0.0.0/24 via 10.0.2.2 dev eth4 proto zebra metric 30
10.0.1.0/24 via 10.0.2.2 dev eth4 proto zebra metric 20
10.0.2.0/24 dev eth4 proto kernel scope link src 10.0.2.1
10.0.3.0/24 via 10.0.2.2 dev eth4 proto zebra metric 30
10.0.4.0/24 via 10.0.2.2 dev eth4 proto zebra metric 20
128.9.0.0/16 via 10.0.2.2 dev eth4 proto zebra metric 20
192.168.0.0/22 dev eth0 proto kernel scope link src 192.168.0.87
192.168.252.0/22 via 192.168.1.254 dev eth0 proto zebra
f:~$ wget www.isi.edu
--2014-06-10 10:44:07--  http://www.isi.edu/
Resolving www.isi.edu (www.isi.edu)... 128.9.176.20
Connecting to www.isi.edu (www.isi.edu)|128.9.176.20|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://www.isi.edu/home [following]
--2014-06-10 10:44:07--  http://www.isi.edu/home
Reusing existing connection to www.isi.edu:80.
HTTP request sent, awaiting response... 200 OK
Cookie coming from www.isi.edu attempted to set domain to www.isi.edu
Length: unspecified [text/html]
Saving to: `index.html'

    [ <=> ] 12,344      --.-K/s   in 0.09s

2014-06-10 10:44:07 (128 KB/s) - `index.html' saved [12344]
}}}

=== Limitations of NATs ===

This configuration allows all the nodes in the federated experiment to make outgoing TCP connections into 128.9.0.0/16. There are actually 2 NATs manipulating packets:

[[Image(NATS.png)]]

The inner NAT converts from the DETERLab private addresses (10.0.0.0/8) into the interface address of the fedd VM in the VMM private network (192.168.233.0/24, if you are using virtualbox as configured above). The second NAT converts from the VMM private address space into the desktop's address (on the 128.9.0.0/16 net).
This limits the connectivity that one can provide. The outer NAT can only translate incoming traffic back to VM addresses for ports that have been explicitly exported, and the same is true of the fedd VM's NAT. There can be hundreds of nodes in the federated experiment, so complete translation is impractical. The trade-off is between the convenience of using the VM image to avoid the details of the federation setup and the connectivity provided. Much more complete connectivity can be achieved by running the desktop plugin directly on a desktop, avoiding the NATs entirely.
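For reference, exporting a single port through the outer (VirtualBox) NAT is a per-rule operation; the sketch below is a configuration fragment, with {{{fedd-vm}}} as a placeholder VM name and 23231 chosen to match the fedd control port used in the {{{--map}}} argument earlier. Each additional reachable (node, port) pair needs its own such rule, which is why complete translation does not scale.

```shell
# Forward host port 23231 to the fedd VM's port 23231 through the
# VirtualBox NAT ("fedd-vm" is a placeholder VM name).
VBoxManage modifyvm "fedd-vm" --natpf1 "fedd,tcp,,23231,,23231"

# A second exported service needs its own rule, e.g. ssh into the VM:
VBoxManage modifyvm "fedd-vm" --natpf1 "guestssh,tcp,,2222,,22"
```

Rule names ({{{fedd}}}, {{{guestssh}}}) are arbitrary labels; the empty fields mean "any host address" and "the guest's own address".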