From b0c66daedfa2cbf208d5c157f4103182100c8751 Mon Sep 17 00:00:00 2001
From: iondev33
Date: Wed, 13 Mar 2024 18:40:35 +0000
Subject: [PATCH] deploy: c5288f8302c034f3c97508d2463f5d2eee3af6ff
---
 ION-Deployment-Guide/index.html |  72 ++++
 search/search_index.json        |   2 +-
 sitemap.xml                     | 594 ++++++++++++++++----------------
 sitemap.xml.gz                  | Bin 1927 -> 1927 bytes
 4 files changed, 370 insertions(+), 298 deletions(-)

diff --git a/ION-Deployment-Guide/index.html b/ION-Deployment-Guide/index.html
index 3c9d7cbd0..4acaf0dac 100644
--- a/ION-Deployment-Guide/index.html
+++ b/ION-Deployment-Guide/index.html
@@ -854,6 +854,63 @@

    Memory Allocation

    acquire some block of memory. These would include the Space Management Trace features and standalone programs such as "file2sm", "sm2file" and "smlistsh".


    Testing & Known Issues


    Factors Affecting LTP Testing Over UDP


    Terrestrial testing of LTP during the prototype and initial system integration phases often relies on using the UDP protocol because it is readily available on most terrestrial computing systems. ION's udplso and udplsi programs provide the basic capability to flow LTP traffic between two hosts on the internet. To increase the fidelity of LTP testing, short of directly utilizing actual radio systems, customized software or hardware can be added to the data path. This addition aims to introduce longer delays and data corruption/loss in a controlled manner.


    However, testing LTP over UDP can yield unpredictable results due to several factors. Understanding these factors is essential for accurate analysis and troubleshooting:


    UDP's Inherent Unreliability


    UDP lacks a built-in mechanism for retransmitting lost packets. Consequently, the rate at which packets are lost can fluctuate significantly. This inherent unreliability of UDP may affect the performance and reliability tests of LTP, as LTP relies on UDP for transport.
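The practical impact of fluctuating loss can be sketched with a simple model (an illustration only, not part of ION): under independent per-segment loss, the chance that an entire LTP block arrives without any retransmission falls off exponentially with block size, so even small shifts in the loss rate produce large swings in observed LTP behavior.

```python
def p_block_complete(num_segments, loss_rate):
    """Probability that every segment of an LTP block arrives on the
    first transmission, assuming independent per-segment loss."""
    return (1.0 - loss_rate) ** num_segments

# Even modest loss rates make first-pass delivery of large blocks unlikely,
# so LTP's retransmission machinery is exercised heavily over lossy UDP paths.
for p in (0.001, 0.01, 0.05):
    print(f"loss={p:.3f}  P(100-segment block complete)={p_block_complete(100, p):.3f}")
```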


    Kernel Buffering and IP Fragment Reassembly


    The ability of the operating system kernel to buffer and reassemble IP fragments plays a critical role, especially if an LTP segment exceeds the Maximum Transmission Unit (MTU) size. The efficiency of this process can vary based on:


External testing tools, whether customized software or WAN emulators, are often used to simulate network conditions or impairments. When improperly configured, however, they can degrade the fidelity of testing by exaggerating the delay differences between traffic streams, including UDP fragments, which further complicates the interpretation of LTP performance results over UDP.
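The MTU concern above can be quantified with a small back-of-the-envelope helper (hypothetical, not an ION utility): it estimates how many IPv4 fragments a UDP datagram carrying one LTP segment produces, which is the number of packets that must all survive for the kernel to reassemble that segment.

```python
def ip_fragment_count(ltp_segment_len, mtu=1500, ip_header=20, udp_header=8):
    """Simplified IPv4 fragment count for a UDP datagram carrying an LTP
    segment of the given length (no IP options, even fragmentation)."""
    payload = udp_header + ltp_segment_len   # UDP header rides in the first fragment
    if payload <= mtu - ip_header:
        return 1                             # fits in a single packet
    per_frag = (mtu - ip_header) // 8 * 8    # fragment data is a multiple of 8 (except the last)
    return -(-payload // per_frag)           # ceiling division
```

Losing any one fragment discards the whole segment on reassembly, so large LTP segments over small MTUs multiply the effective loss rate.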

    Operation

ION is generally optimized for continuous operational use rather than research. In practice, this means that a lot more attention, both in the

diff --git a/search/search_index.json b/search/search_index.json
index 8a7716754..87b29c70d 100644
--- a/search/search_index.json
+++ b/search/search_index.json

Interplanetary Overlay Network (ION)

🛰️ ION Description

Interplanetary Overlay Network (ION) is an implementation of the DTN architecture described in Internet RFC 4838, intended to be usable both in embedded environments, including spacecraft flight computers, and in ground systems. It includes modular software packages implementing Bundle Protocol version 6 (BPv6, RFC 5050) and version 7 (BPv7, RFC 9171), Licklider Transmission Protocol (LTP), Bundle Streaming Service (BSS), DTN-based CCSDS File Delivery Protocol (CFDP), Asynchronous Message Service (AMS), and several other DTN services and prototypes. ION is currently the baseline implementation for science instruments on the International Space Station (ISS) and for the gateway node (ION Gateway) that provides relay services for command/telemetry and science data download.

    Here you will find videos of the Interplanetary Overlay Network courses and presentation materials.

DTN Development/Deployment Kit is an ISO image of an Ubuntu virtual machine, pre-configured with ION and a GUI virtualization environment. It contains a number of pre-built scenarios (network topologies) to demonstrate various features of ION software. (The DevKit is currently being upgraded to BPv7; the release date is TBD.)

📡 Application Domains of DTN and ION

📊 Performance Data

🛠️ Installation & Configuration
    1. Clone the repository:

git clone https://github.com/nasa-jpl/ION-DTN.git
2. Follow the steps in the Quick Start Guide to build, install, and run a simple two-node example.
3. A simple tutorial of ION's configuration files can be found here.
4. A set of configuration file templates for various DTN features can be found here.

📜 License

    ION is licensed under the MIT License. Please see the LICENSE file for details.

📚 Important Papers on ION and DTN

    For a list of key DTN and ION-related publications, please refer to the List-of-Papers page.

AMS Programmer's Guide

    Version 3.0

    Sky DeBaun, Jet Propulsion Laboratory, California Institute of Technology

    Document Change Log

Ver No.   Date        Affected   Description               Comments
2.2       Sept 2010
3.0       June 2023   All        Updates and Corrections

Purpose and Scope

    The Consultative Committee for Space Data Systems' (CCSDS) Asynchronous Message Service (AMS) is a communication architecture for data systems. It is designed to allow mission system modules to operate as if they were isolated, each producing and consuming mission information without explicit knowledge of the other active modules. This self-configuring communication relationship minimizes complexity in the development and operation of modular data systems.

    AMS is the foundation of a system that can be described as a 'society' of largely autonomous, interoperating modules. These modules can adapt over time in response to changing mission objectives, functional upgrades of modules, and recovery from individual module failures. The primary objective of AMS is to reduce mission cost and risk by providing a standard, reusable infrastructure for information exchange among data system modules. This infrastructure is designed to be user-friendly, highly automated, flexible, robust, scalable, and efficient.

    Notably, AMS provides a publication and subscription service for both terrestrial and extraterrestrial communications, utilizing the Interplanetary Overlay Network (ION). This service ensures a seamless and efficient communication system that can adapt dynamically to various missions.

Definitions

    Within the context of this document the following definitions apply:

A continuum is a closed set of entities that utilize AMS for purposes of communication among themselves. Each continuum is identified by a continuum name and corresponding non-negative continuum number. The continuum name that is the character string of length zero indicates "all known continua" or "any known continuum", whichever is less restrictive in the context in which this continuum name is used; the reserved continuum number zero corresponds to this continuum name.

    An application is a data system implementation, typically taking the form of a set of source code text files, that relies on AMS procedures to accomplish its purposes. Each application is identified by an application name.

    An authority is an administrative entity or persona that may have responsibility for the configuration and operation of an application. Each authority is identified by an authority name.

    A venture is an instance of an application, i.e., a functioning projection of the application -- for which some authority is responsible -- onto a set of one or more running computers.

    A message is an octet array of known size which, when copied from the memory of one module of a venture to that of another (exchanged), conveys information that can further the purposes of that venture.

    The content of a message is the array of zero or more octets embedded in the message containing the specific information that the message conveys.

A role is some part of the functionality of an application. Each role is identified by a role name and corresponding non-negative role number. The role name that is the character string of length zero indicates 'all roles' or 'any role', whichever is less restrictive in the context in which the role name is used; the reserved role number zero corresponds to this role name. The role name "RAMS" identifies Remote AMS (RAMS) gateway functionality as discussed below; the reserved role number 1 corresponds to this role name.

    A module (of some mission data system) is a communicating entity that implements some part of the functionality of some AMS venture -- that is, performs some application role -- by, among other activities, exchanging messages with other modules. Associated with each module is the name of the role it performs within the application. [Note that multiple modules may perform the same role in an application, so the role name of a module need not uniquely identify the module within its message space.] In order to accomplish AMS message exchange a module generates AMS service requests and consumes AMS service indications; the module that is the origin of a given AMS service request or the destination of a given AMS service indication is termed the operative module.

    A message space is the set of all of the modules of one AMS venture that are members of a single AMS continuum; that is, a message space is the intersection of a venture and a continuum. Each message space is uniquely identified within that continuum by the combination of the name of the application and the name of the authority that is responsible for the venture, and by a corresponding venture number greater than zero. [Note that unique naming of continua enables multiple message spaces that are in different continua but are identified by the same application and authority names to be concatenated via Remote AMS (discussed below) into a single venture.]

    A unit (i.e., a unit of organization) is an identified subset of the organizational hierarchy of the modules of one AMS venture, declared during venture configuration as specified by the responsible authority for that venture. Each unit is uniquely identified within the venture by unit name and corresponding non-negative unit number. The root unit of a venture is the unit that is coterminous with the venture itself; its unit name is the character string that is of length zero, and the reserved unit number zero corresponds to this unit name. A unit whose name is identical to the first N bytes -- where N is greater than or equal to zero -- of the name of another unit of the same message space is said to contain that other unit. The membership of a unit that is contained by another unit is a subset of the membership of the containing unit.

    A cell is the set of all modules that are members of one unit of a given venture and are also members of a given continuum; that is, it is the intersection of a unit and a continuum. Since each unit is a subset of a venture, each cell is necessarily a subset of the message space for that venture in that continuum. Each cell is uniquely identified within its message space by its unit's name and number. The root cell of a message space is coterminous with the message space itself. A cell contains some other cell only if its unit contains that other cell's unit. A cell may be an empty set; that is, in a given continuum there may be no modules that are members of the cell's unit. The registered membership of a cell is the set of all modules in the cell that are not members of any other cell which does not contain that cell^1^. [Note that the root cell contains every other cell in the message space, and every module in the message space is therefore a member -- though not necessarily a registered member -- of the root cell.]
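The prefix-based containment rule and the registered-membership definition (including the footnoted example) can be modeled in a few lines. This is an illustrative sketch, not ION code: the function names and the use of name prefixes to encode containment are assumptions.

```python
def contains(unit_a, unit_b):
    """Unit A contains unit B when A's name equals the first N bytes of
    B's name (N >= 0); the root unit ("") therefore contains every unit."""
    return unit_b.startswith(unit_a)

def members_of(cell, homes):
    """All modules in `cell`: a module is a member of every cell that
    contains the cell it registered in directly (its 'home' cell)."""
    return {m for m, h in homes.items() if contains(cell, h)}

def registered_membership(cell, homes):
    """Members of `cell` that are in no other cell which fails to contain
    `cell` -- i.e., the modules registered directly in `cell`."""
    return {m for m, h in homes.items() if h == cell}

# Footnoted example: cell A ("a") contains B ("ab") and C ("ac");
# C contains D ("acd") and E ("ace").  Prefixes encode containment.
homes = {"m1": "ac", "m2": "acd", "m3": "ab", "m4": "a"}
```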

    The domain of an AMS service request is the set of modules to which the request pertains. It comprises all of the modules that are members of the venture in which the operative module is itself a member, with the following exceptions:

    The subject number (or subject) of a message is an integer embedded in the message that indicates the general nature of the information the message conveys, in the context of the AMS venture within which the message is exchanged. A subject name is a text string that serves as the symbolic representation of some subject number.

To send a message is to cause it to be copied to the memory of a specified module. To publish a message on a specified subject is to cause it to be sent to one or more implicitly specified modules, namely, all those that have requested copies of all messages on the specified subject. To announce a message is to cause it to be sent to one or more implicitly specified modules, namely, all those modules that are located within a specified continuum (or all continua), are members of a specified unit (possibly the root unit) and that perform a specified role in the application (possibly "any role").
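The three delivery modes differ only in how the recipient set is chosen. A minimal sketch of publish versus announce selection follows; the Module fields and helper names are hypothetical, not ION's data structures.

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    role: str
    unit: str           # name of the unit the module registered in
    continuum: int
    subscriptions: set  # subject numbers the module has subscribed to

def publish_targets(modules, subject):
    """Publish: deliver to every module that subscribed to `subject`."""
    return [m for m in modules if subject in m.subscriptions]

def announce_targets(modules, continuum, unit, role):
    """Announce: deliver to modules in the given continuum (None = all
    continua), in the given unit (root unit "" matches all, via the
    name-prefix containment rule), performing the given role (None = any)."""
    return [m for m in modules
            if (continuum is None or m.continuum == continuum)
            and m.unit.startswith(unit)
            and (role is None or m.role == role)]
```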

    A subscription is a statement requesting that one copy of every message published on some specified subject by any module in the subscription's domain be sent to the subscribing module; the domain of a subscription is the domain of the AMS service request that established the subscription.

    ^1^ For example, if cell A contains cells B and C, and cell C contains cells D and E, any nodes in C that are not in either D or E are in the registered membership of cell C. Those nodes are also members of cell A, but because they are in cell C -- which does not contain cell A -- they are not in cell A's registered membership.

    An invitation is a statement of the manner in which messages on some specified subject may be sent to the inviting module by modules in the domain of the invitation; the invitation's domain is the domain of the AMS service request that established the invitation.

Overview

General
    1. Architectural Character

    A data system based on AMS has the following characteristics:

a. Any module may be introduced into the system at any time. That is, the order in which system modules commence operation is immaterial; a module never needs to establish an explicit a priori communication "connection" or "channel" to any other module in order to pass messages to it or receive messages from it.

    b. Any module may be removed from the system at any time without inhibiting the ability of any other module to continue sending and receiving messages. That is, the termination of any module, whether planned or unplanned, only causes the termination of other modules that have been specifically designed to terminate in this event.

    c. When a module must be upgraded to an improved version, it may be terminated and its replacement may be started at any time; there is no need to interrupt operations of the system as a whole.

    d. When the system as a whole must terminate, the order in which the system's modules cease operation is immaterial.

    AMS-based systems are highly robust, lacking any innate single point of failure and tolerant of unplanned module termination. At the same time, communication within an AMS-based system can be rapid and efficient:

    e. Messages are exchanged directly between modules rather than through any central message dispatching nexus.

f. Messages are automatically conveyed using the "best" (typically -- though not necessarily -- the fastest) underlying transport service to which the sending and receiving modules both have access. For example, messages between two ground system modules running in different computers on a common LAN would likely be conveyed via TCP/IP, while messages between modules running on two flight processors connected to a common bus memory board might be conveyed via a shared-memory message queue.

    g. Finally, AMS is designed to be highly scalable: partitioning message spaces into units enables a venture to comprise hundreds or thousands of cooperating modules without significant impact on application performance.

2. Message Exchange Models

AMS message exchange is fundamentally asynchronous, akin to a "postal" system. An AMS module, after sending a message, can continue its functions without waiting for a reply.

    While message exchange is asynchronous, AMS provides a mechanism for linking reply messages to their original context. This is achieved by including a context number in the original message. The reply message automatically echoes this context number, allowing the original sender to link the reply to the application activity that triggered the initial message. This creates a pseudo-synchronous communication flow. The specific mechanism for establishing this link is implementation-dependent.
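The context-number mechanism can be sketched as a table of pending activities keyed by context number. This is illustrative only; as noted above, the real linkage mechanism is implementation-dependent.

```python
import itertools

class ContextTable:
    """Pseudo-synchronous reply matching: the sender stamps each original
    message with a context number; the reply echoes that number, letting
    the sender reconnect the reply to the activity that triggered it."""

    def __init__(self):
        self._next = itertools.count(1)
        self._pending = {}            # context number -> application state

    def send(self, state):
        ctx = next(self._next)
        self._pending[ctx] = state
        return ctx                    # embed ctx in the outgoing message

    def on_reply(self, ctx):
        # Recover (and clear) the original activity, if any.
        return self._pending.pop(ctx, None)
```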

    In some cases, true message synchrony may be necessary, requiring a module to suspend operations until a reply is received. AMS supports this communication model when required.

The majority of message exchange in an AMS-based system follows a "publish-subscribe" model. A module announces its subscription to a specific subject using AMS procedures. From that point, any published message on that subject is automatically delivered to all subscribing modules. This model simplifies application development and integration, allowing each module to plug into a data "grid" and exchange data without detailed knowledge of other modules.

    However, there may be instances where a module needs to send a message privately to a specific module, such as in reply to a published message. AMS also supports this communication model when necessary.

Architectural Elements
    1. General

    The architectural elements involved in the asynchronous message service protocol are depicted as below:

    Figure 1: Architectural Elements of AMS

2. Communicating Entities

    All AMS communication is conducted among three types of communicating entities: modules (defined earlier), registrars, and configuration servers.

    A registrar is a communicating entity that catalogs information regarding the registered membership of a single unit of a message space. It responds to queries for this information, and it updates this information as changes are announced.

    A configuration server is a communicating entity that catalogs information regarding the message spaces established within some AMS continuum, specifically the locations of the registrars of all units of all message spaces. It responds to queries for this information, and it updates this information as changes are announced.

Overview of Interactions
    1. Transport Services for Application Messages

    AMS, best characterized as a messaging \"middleware\" protocol, operates between the Transport and Application layers of the OSI protocol stack model. It relies on underlying Transport-layer protocols for actual message copying from sender to receiver and for transmitting meta-AMS (or MAMS) messages for dynamic self-configuration of AMS message spaces.

    In any AMS continuum, a common transport service, termed the Primary Transport Service (PTS), is used for MAMS traffic by all entities involved in the operations of all message spaces. The PTS, being universally available, can also be used for application message exchange among all modules in a continuum. However, in some cases, performance can be improved by using Supplementary Transport Services (STSs), especially when modules share access to a convenient communication medium like a shared-memory message queue.

    Supplementary Transport Services (STSs) are performance-optimizing transport services used in the Asynchronous Message Service (AMS) for message exchange between modules that share access to a particularly convenient communication medium, such as a shared-memory message queue. While the Primary Transport Service (PTS) is universally available for message exchange, STSs can be employed to enhance application performance in certain scenarios (see CCSDS Blue Book Recommended Standard 735.1-B-1 \"Asynchronous Message Service\" for additional information).

    A module's network location for receiving messages via a given transport service is its delivery point for that service. A module may have multiple delivery points, each characterized by the same service mode. For a given service mode, the list of all delivery points providing that mode to a module, ranked in descending order of preference (typically network performance), is termed the delivery vector for that service mode, for that module.
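Selecting a transport then reduces to scanning the receiver's delivery vector, best first, for a service the sender can also use. A sketch under assumed data shapes (service-name strings, and a vector of (service, delivery point) pairs):

```python
def best_common_delivery_point(sender_services, receiver_vector):
    """Pick the receiver's most-preferred delivery point whose transport
    service the sender can also use.  `receiver_vector` is ranked in
    descending order of preference, as the text describes."""
    for service, point in receiver_vector:
        if service in sender_services:
            return service, point
    return None   # no common transport (the PTS should prevent this)
```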

    See \"Primary Transport Services\" below for additional information.

2. Registrar Registration

    Every message space in AMS always includes at least one unit, the root unit, and each module is registered within a unit. In the simplest case, all modules reside in the root unit. Each unit is served by a single registrar, which monitors the health of all registered modules and propagates six types of message space configuration changes.

    Registrars themselves register with the configuration server for the continuum containing the message space. A list of all potential network locations for the configuration server, ranked in descending order of preference, must be well-known and included in the AMS management information bases (MIBs) accessible to all registrars. Each continuum must always have an operational configuration server at one of these locations to enable registration of registrars and modules.

    All registrars and modules of the same message space must register through the same configuration server.

3. Module Registration

    Each module has a single meta-AMS delivery point (MAPD) for receiving MAMS messages. A new module joins a message space by registering within a unit, announcing its role name and MAPD to the unit's registrar. However, the module cannot have hard-coded information about the registrar's communication details, as these can change.

    Therefore, the first step in registering a new module is contacting the configuration server at one of its known network locations. These locations, listed in descending order of preference, are included in the AMS Management Information Bases (MIBs) accessible to all application modules. The configuration server then provides the new module with the contact details for its registrar.

    The module obtains a unique module number from the registrar and completes registration. The registrar ensures that all other modules in the message space learn the new module's role name, module number, and MAPD. These modules, in turn, announce their own details to the new module.

4. Monitoring Module Health

Maintaining accurate knowledge of a message space configuration is crucial for application purposes and resource efficiency. Each registrar must promptly detect the termination of modules in its unit's registered membership. While a module under application control notifies its registrar upon termination, a module that crashes or is powered off does not. To address this, each module sends a "heartbeat" message to its registrar every few seconds (see comment #3 at top of amscommon.h for additional details). The registrar interprets three consecutive missing heartbeats as a module termination.
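The three-missed-heartbeats rule can be sketched as registrar-side bookkeeping. This is an illustrative model with hypothetical names, not ION's actual implementation.

```python
class HeartbeatMonitor:
    """A module is presumed terminated after three consecutive heartbeat
    intervals with no heartbeat (the interval is ~2 s in ION; see
    amscommon.h)."""
    MISSES_ALLOWED = 3

    def __init__(self, interval=2.0):
        self.interval = interval
        self.last_seen = {}   # module name -> time of last heartbeat

    def heartbeat(self, module, now):
        self.last_seen[module] = now

    def presumed_dead(self, now):
        # Modules silent for more than three full intervals.
        return [m for m, t in self.last_seen.items()
                if now - t > self.MISSES_ALLOWED * self.interval]
```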

    Upon detecting a module's termination, either overt or imputed from heartbeat failure, the registrar informs all other modules in the unit's registered membership and, through other registrars, all modules in the message space.

    When termination is imputed from a heartbeat failure, the registrar attempts to notify the presumed terminated module. If the module is still running, it terminates immediately upon receiving this message, minimizing system confusion due to other application behavior triggered by the imputed termination.

5. Monitoring Registrar Health

    Each registrar not only monitors the heartbeats of all modules in its unit's registered membership but also issues its own heartbeats to those modules. If a module detects three consecutive missing registrar heartbeats, it assumes the registrar has crashed. The module then re-queries the configuration server to determine the new network location of the registrar and resumes exchanging heartbeats.

    This assumption is reasonable because the configuration server also monitors registrar heartbeats on a slightly shorter cycle. If the configuration server detects three consecutive missing registrar heartbeats, it takes action to restart the registrar, possibly on a different host. Therefore, by the time the registrar's modules detect its crash, it should already be running again.

    Since the module heartbeat interval is two seconds (see N4 in amscommon.h), the registrar will receive heartbeat messages from every running module in the unit's registered membership within the first six seconds after restart. This allows the registrar to accurately know the unit's configuration. This accurate configuration information must be delivered to new modules at startup, enabling them to orient a newly-restarted registrar if it crashes. Therefore, during the first six seconds after the registrar starts, it only accepts MAMS messages from modules already registered in the unit. This prevents the risk of delivering incorrect information to a new module.

6. Configuration Service Fail-over

    A configuration server, like any other component, can also fail or be rebooted. Each registrar interprets three consecutive missing configuration server heartbeats as an indication of a crash. Upon detecting such a crash, the registrar cycles through all the known network locations for the continuum's configuration server, attempting to re-establish communication after the server's restart, possibly at an alternate network location. New modules attempting to register will also cycle through network locations seeking a restarted configuration server and will be unable to contact their registrars, and therefore unable to register, until they find one. However, application message exchange and subscription management activity among existing modules and registrars are not affected by this infrastructure failure.

    Upon the configuration server's restart at one of its known network locations, all registrars will eventually find it and re-announce themselves, enabling newly registering application modules to successfully register.

In certain failure scenarios, multiple configuration servers may operate concurrently for a brief period, such as when a perceived failure is caused by a transient network connectivity issue rather than an actual server crash. To resolve this, each running configuration server periodically sends an "I am running" MAMS message to every lower-ranking configuration server network location in the known list of configuration server locations. If a configuration server receives such a message, it immediately terminates. All registrars and modules communicating with it will detect its disappearance and search again for the highest-ranking reachable configuration server, eventually restoring orderly operations in the continuum.
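This resolution converges on the highest-ranking running server, since every other running server eventually receives an "I am running" message from a higher-ranking one and terminates. A sketch of the resulting end state (hypothetical helper, not ION code):

```python
def surviving_server(running, locations):
    """Given the set of currently running configuration servers and the
    well-known location list (ranked best first), return the server that
    survives the mutual "I am running" purge: the highest-ranking one."""
    for loc in locations:
        if loc in running:
            return loc
    return None   # no configuration server is running anywhere
```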

7. Configuration Resync

    Finally, every registrar can optionally be configured to re-advertise to the entire message space the detailed configuration of its unit's registered membership (all active modules, all subscriptions and invitations) at some user-specified frequency, e.g., once per minute. This capability is referred to as configuration resync. Configuration resync of course generates additional message traffic, and it may be unnecessary in extremely simple or extremely stable operating environments. But it does ensure that every change in application message space configuration will eventually be propagated to every module in the message space, even if some MAMS messages are lost and even if an arbitrary number of registrars had crashed at the time the change occurred.

    Taken together, these measures make AMS applications relatively fault tolerant:

a. When a module crashes, its registrar detects the loss of heartbeat within three heartbeat intervals and notifies the rest of the message space. Application message transmission everywhere is unaffected.

When a registrar crashes, its configuration server detects the loss of heartbeat within three heartbeat intervals and takes action to restart the registrar. During the time that the unit has no registrar, transmission of application messages among modules of the message space is unaffected, but the heartbeat failures of crashed modules are not detected and reconfiguration messages originating in the unit's registered membership (registrations, terminations, subscription and invitation assertions, and subscription and invitation cancellations) are not propagated to any modules. However, after the registrar is restarted it will eventually detect the losses of heartbeat from all crashed modules and will issue obituaries to the message space, and if configuration resync is enabled it will eventually re-propagate the lost reconfiguration messages.

    b. When a configuration server crashes, all new registration activity will come to a standstill. But no application modules fail (at least, not because of communication failure), and on restart of the configuration server the registration of new modules eventually resumes.

8. Security

    AMS can be configured to confine service access to application modules that can prove they are authorized to participate. For this purpose, asymmetric MAMS encryption may be used as follows:

    a. The AMS MIB exposed to the configuration server contains a list of all applications for which registration service may be offered, identified by application name. Associated with each application name is the AMS public encryption key for that application.

    b. The AMS MIB exposed to every registrar in each message space contains a list of all functional role names defined for the message space's application; this list limits the role names under which modules may register in that message space. Associated with each role name is the AMS public encryption key for the application module(s) that may register in that role.

    c. The AMS MIBs exposed to all registrars and application modules in the message space contain the AMS public encryption key of the configuration server.

    d. The AMS MIBs exposed to the configuration server and to all registrars and application modules in the message space contain the private encryption keys that are relevant to those entities.

    As described later, this information is used to authenticate registrar registration and exclude spurious registrars from the message space, to authenticate module registration attempts and deny registration to unauthorized application modules, and to assure the authenticity, confidentiality, and integrity of MAMS traffic exchanged between modules and their registrars.

    In addition, the confidentiality and integrity of AMS message exchange may be protected at subject granularity. The AMS MIB exposed to each module of a given message space may contain, for any subset of the message subjects (identified by name and number) used in the message space's application:

    e. a list of the role names of all modules that are authorized senders of messages on this subject;

    f. a list of the role names of all modules that are authorized receivers of messages on this subject;

    g. encryption parameters, including a symmetric encryption key, enabling encryption of messages on this subject.

    This information may be used to support secure transmission of messages on selected subjects.

    Note, though, that the JPL implementation of AMS does *not* implement any of the cryptographic algorithms that are required to support these security features.

    1. Subject Catalog

    The structure of the content of messages on a given subject is application-specific; message content structure is not defined by the AMS protocol. However, the AMS MIB exposed to all modules of a given message space will contain, for each message subject (identified by name and number) used in the message space:

    a. a description of this message subject, discussing the semantics of this type of message;

    b. a detailed specification of the structure of the content of messages on this subject;

    c. optionally, a specification of the manner in which a correctly assembled message is marshaled for network transmission in a platform-neutral manner and, on reception, un-marshaled into a format that is suitable for processing by the application.

    When AMS is requested to send a message on a given subject, the message content that is presented for transmission is always in a format that is suitable for processing by the application. In the event that this format is not suitable for network transmission in a platform-neutral manner, as indicated by the presence in the MIB of a marshaling specification for this subject, AMS will marshal the message content as required before transmitting the message.

    When AMS receives a message on a subject for which a marshaling specification is present in the MIB, AMS will un-marshal the message content into a format that is suitable for processing by the application before delivering the message.

    Message subjects, as noted earlier, are integers with application-defined semantics. This minimizes the cost of including subject information (in effect, message type) in every message, and it makes processing simpler and faster: subscription and invitation information are recorded in arrays that are indexed by subject number.

    This implementation choice, however, requires that message management control arrays be large enough to accommodate the largest subject numbers used in the application. The use of extremely large subject numbers would therefore cause these arrays to consume significant amounts of memory. In general, it is best for an AMS application to use the smallest subject numbers possible, starting with 1.

    1. Remote AMS Message Exchange

    AMS' asynchronous message issuance model allows for a high degree of concurrency in the operations of data system modules. This means that a module can issue a message without suspending its operation until a response is received. This feature also largely insulates applications from variations in signal propagation time across the AMS continuum.

    However, some critical MAMS (Meta-AMS) communication is unavoidably synchronous. For instance, a newly registering module must wait for responses from the configuration server and the registrar before it can proceed with application activity. Therefore, the core AMS protocol is best suited for operational contexts with generally uninterrupted network connectivity and relatively small and predictable signal propagation times, such as the Internet or a stand-alone local area network. It is typically advantageous for all entities within a single AMS continuum to operate within such a "low-latency" environment.

    AMS application messages can be exchanged between modules in different AMS continua using the Remote AMS (RAMS) procedures. These procedures are executed by special-purpose application modules known as RAMS gateways. Each RAMS gateway interfaces with two communication environments: the AMS message space it serves and the RAMS network, which is a mesh or tree of mutually aware RAMS gateways. This network enables AMS messages produced in one message space to be forwarded to other message spaces within the same venture. RAMS gateways operate as follows:

    a. RAMS gateways operate by opening private RAMS network communication channels to the RAMS gateways of other message spaces within the same venture. These interconnected gateways use these communication channels to forward message petition assertions and cancellations among themselves.

    b. Each RAMS gateway subscribes locally to all subjects that are of interest in any of the linked message spaces.

    c. When a RAMS gateway receives a message on any of these subjects, it uses the RAMS network to forward the message to every other linked RAMS gateway whose message space contains at least one module that has subscribed to messages on that subject.

    d. On receiving a message the RAMS gateway module forwards the message to any subscribers in its own message space.

    The RAMS protocol allows for the free flow of published application messages across deep space links while ensuring efficient utilization of those links. Only a single copy of any message is ever transmitted on any RAMS grid communication channel, regardless of how many subscribers will receive copies when the message reaches its destination.

    RAMS operations generalize the AMS architecture as shown in Figure 2 below.

    Figure 2 General AMS application structure

    This extension of the publish/subscribe model to inter-continuum communications is invisible to application modules. Application functionality is unaffected by these details of network configuration; the only effects on behavior are those intrinsic to variability in message propagation latency.

    It's important to note that the nature of the RAMS network communication channels depends on the implementation of the RAMS network. To communicate over the RAMS network for a given venture, each RAMS gateway must know the RAMS network location, expressed as an endpoint in the protocol used to implement the RAMS network.

    Also, only AMS application messages are propagated across continuum boundaries by RAMS. Modules are never notified of registrations, subscriptions, and invitations that occur in remote continua. The purpose of RAMS is to limit traffic on the scarce link resources supporting inter-continuum communication to the minimum necessary for successful operation of the venture. MAMS message traffic within a message space is required to enable the operation of the message space, but venture-wide application message exchange can readily be provided without propagating MAMS messages to remote continua.

## The JPL Implementation

    JPL's implementation of AMS has the following components:

    The codebase, written in C, relies on a shared library, ICI. This library supports other JPL implementations, like CFDP and the DTN Bundle Protocol. ICI includes a "platform" portability layer, easing code compilation and execution in environments like Linux, VxWorks, and Interix.

    ICI also includes its own dynamic memory management system, called "PSM", which provides dynamic management of a privately allocated block of memory. This may be useful in environments such as spacecraft flight software where the dynamic management of system memory (malloc, free) cannot be tolerated. Use of PSM by AMS is optional.

    An AMS application program, linked with libams, uses the ams_register function to instantiate an AMS module registered within a specified unit of a specified message space. Once registration is accomplished, the application may commence inviting, subscribing to, publishing, announcing, sending, and replying to messages.

    This AMS implementation is multi-threaded. The process of registration starts a pair of POSIX threads, or pthreads, which manage timing and MAMS events in the background. Additionally, another pthread is started to receive MAMS messages via the primary transport service and add them to the MAMS event queue. This queue also includes MAMS message transmission requests. For each transport service that the module can receive AMS messages from, one more pthread is started. These threads receive AMS messages and add them to the AMS event queue, combining them with "discovery" events added by the MAMS event handling thread.

    The general structure of an AMS module, then, is as shown in Figure 3 below.

    Figure 3 AMS module structure

    The application program has the option to start another thread to manage AMS events. This thread automatically calls event-type-specific callback functions, leaving the main application thread free to respond to non-AMS events, such as mouse events or keyboard input. The application code can also add application-specific events to the AMS event queue, potentially with higher priority than any queued AMS messages. However, to prevent certain types of unusual application behavior, the main application thread is not allowed to receive and handle any AMS events while the background AMS event handling thread is running.

## Primary Transport Services

    As shipped, AMS currently includes support for two underlying transport services: TCP and DGR (Datagram Retransmission, a UDP-based system that includes congestion control and retransmission-based reliability). Although TCP is faster than DGR, its connection-based architecture makes it unsuitable as a primary transport service: all MAMS message traffic is conveyed via connectionless DGR.

## Installation

    AMS source is provided in the ION distribution (a gzipped tarfile containing AMS and all supporting ION packages: ici, dgr, ltp, bp, etc.).

    The following two installation methods are provided.

### Automake

    This method automatically compiles and links all required executables, installs them, and copies ION library headers to the relevant system path(s) on your system. Use the following command sequence in the unzipped ION source directory (Linux):

```shell
./configure
make
sudo make install
sudo ldconfig
```

    Note: if support for the expat XML parsing library is required, see 4.3 "Support for the Expat XML Parsing Library" below.

### Make

    This alternate installation method installs all ION packages (if run from the ION root directory), or installs individual ION packages, as follows:

    1. Before installation, first determine which environment (i.e., platform) you're going to be building for: i86-redhat, i86_64-fedora, sparc-solaris, RTEMS, etc. Note this for the following step.
    2. Move the ION .gz file to a directory in which you want to build the system, gunzip the file, and then un-tar it; a number of new directories will appear. AMS requires the following packages: ici, dgr, ltp, bp, and ams. For each, in that specific order, do the following:
        - cd to directory_name
        - modify Makefile (as needed): "PLATFORMS = environment_name"
        - modify Makefile (as needed): "OPT = desired system path"
        - make
        - sudo make install
        - cd ..
    3. For additional information, see the ION Design and Operations manual for dependency details and package build order instructions (see ION.pdf in the ION distribution tar).

    Note that for both install methods (e.g. on Linux) the default configuration used in the ION makefiles is as follows:

    If you want a different configuration, you'll need to modify the makefiles accordingly (e.g. see the OPT variable in 2b above).

### Support for the Expat XML Parsing Library

    The expat open-source XML parsing library is required by AMS only if MIBs use the XML format (see man amsxml and man amsrc for additional information).

    Note that Linux environments typically have expat built in, but for VxWorks installations it is necessary to download and install expat prior to installing AMS.

    To build ION with support for expat, use the following flag during the ./configure step of installation (see 4.1 "Automake" above):

```shell
./configure --with-expat
```

## The AMS Daemon

    The AMS daemon program amsd can function as the configuration server for a continuum, as the registrar for one cell of a specified message space, or both. To run it, enter a command of the following form at a terminal window prompt:

```shell
amsd mib_source_name eid_spec
```

    or

```shell
amsd mib_source_name eid_spec application_name authority_name unit_name
```

    The former form of the command starts amsd as a configuration server only.

    mib_source_name is as discussed in the documentation of ams_register below; it enables amsd to run.

    eid_spec is a string that specifies the IP address and port that amsd must establish in order to receive MAMS messages in its capacity as a configuration server. See man amsd for more information.

    When the latter form of the amsd command is used, the daemon is configured to function as the registrar for the indicated message space unit. If the value "." (period character) is supplied for eid_spec, then the daemon will function only as a registrar. Otherwise the daemon will function as both configuration server and registrar; this option can be useful when operating a simple, stand-alone message space, such as a demo.

## "C" Application Programming Interface

    The AMS application programming interface is defined by the header file ams.h, which must be #included at the beginning of any AMS application program source file.

    See section 9 'Application Development Guide' for compilation and linkage instructions.

### Type and Macro Definitions
```c
#define THIS_CONTINUUM  (-1)
#define ALL_CONTINUA    (0)
#define ANY_CONTINUUM   (0)
#define ALL_SUBJECTS    (0)
#define ANY_SUBJECT     (0)
#define ALL_ROLES       (0)
#define ANY_ROLE        (0)

typedef enum
{
    AmsArrivalOrder = 0,
    AmsTransmissionOrder
} AmsSequence;

typedef enum
{
    AmsBestEffort = 0,
    AmsAssured
} AmsDiligence;

typedef enum
{
    AmsMsgUnary = 0,
    AmsMsgQuery,
    AmsMsgReply,
    AmsMsgNone
} AmsMsgType;

typedef struct amssapst *AmsModule;
typedef struct amsevtst *AmsEvent;

/*  AMS event types.    */
#define AMS_MSG_EVT         1
#define TIMEOUT_EVT         2
#define NOTICE_EVT          3
#define USER_DEFINED_EVT    4

typedef enum
{
    AmsRegistrationState,
    AmsInvitationState,
    AmsSubscriptionState
} AmsStateType;

typedef enum
{
    AmsStateBegins = 1,
    AmsStateEnds
} AmsChangeType;

typedef void    (*AmsMsgHandler)(AmsModule module, void *userData,
                    AmsEvent *eventRef, int continuumNbr, int unitNbr,
                    int moduleNbr, int subjectNbr, int contentLength,
                    char *content, int context, AmsMsgType msgType,
                    int priority);

typedef void    (*AmsRegistrationHandler)(AmsModule module, void *userData,
                    AmsEvent *eventRef, int unitNbr, int moduleNbr,
                    int roleNbr);

typedef void    (*AmsUnregistrationHandler)(AmsModule module, void *userData,
                    AmsEvent *eventRef, int unitNbr, int moduleNbr);

typedef void    (*AmsInvitationHandler)(AmsModule module, void *userData,
                    AmsEvent *eventRef, int unitNbr, int moduleNbr,
                    int domainRoleNbr, int domainContinuumNbr,
                    int domainUnitNbr, int subjectNbr, int priority,
                    unsigned char flowLabel, AmsSequence sequence,
                    AmsDiligence diligence);

typedef void    (*AmsDisinvitationHandler)(AmsModule module, void *userData,
                    AmsEvent *eventRef, int unitNbr, int moduleNbr,
                    int domainRoleNbr, int domainContinuumNbr,
                    int domainUnitNbr, int subjectNbr);

typedef void    (*AmsSubscriptionHandler)(AmsModule module, void *userData,
                    AmsEvent *eventRef, int unitNbr, int moduleNbr,
                    int domainRoleNbr, int domainContinuumNbr,
                    int domainUnitNbr, int subjectNbr, int priority,
                    unsigned char flowLabel, AmsSequence sequence,
                    AmsDiligence diligence);

typedef void    (*AmsUnsubscriptionHandler)(AmsModule module, void *userData,
                    AmsEvent *eventRef, int unitNbr, int moduleNbr,
                    int domainRoleNbr, int domainContinuumNbr,
                    int domainUnitNbr, int subjectNbr);

typedef void    (*AmsUserEventHandler)(AmsModule module, void *userData,
                    AmsEvent *eventRef, int code, int dataLength,
                    char *data);

typedef void    (*AmsMgtErrHandler)(void *userData, AmsEvent *eventRef);

typedef struct
{
    AmsMsgHandler               msgHandler;
    void                        *msgHandlerUserData;
    AmsRegistrationHandler      registrationHandler;
    void                        *registrationHandlerUserData;
    AmsUnregistrationHandler    unregistrationHandler;
    void                        *unregistrationHandlerUserData;
    AmsInvitationHandler        invitationHandler;
    void                        *invitationHandlerUserData;
    AmsDisinvitationHandler     disinvitationHandler;
    void                        *disinvitationHandlerUserData;
    AmsSubscriptionHandler      subscriptionHandler;
    void                        *subscriptionHandlerUserData;
    AmsUnsubscriptionHandler    unsubscriptionHandler;
    void                        *unsubscriptionHandlerUserData;
    AmsUserEventHandler         userEventHandler;
    void                        *userEventHandlerUserData;
    AmsMgtErrHandler            errHandler;
    void                        *errHandlerUserData;
} AmsEventMgt;

/*  Predefined term values for ams_query and ams_get_event.  */
#define AMS_POLL        (0)     /*  Return immediately. */
#define AMS_BLOCKING    (-1)    /*  Wait forever.       */
```
### Module Management functions
```c
int ams_register(char *mibSource, char *tsorder, char *applicationName,
        char *authorityName, char *unitName, char *roleName,
        AmsModule *module);
```

    This function is used to initiate the application's participation as a module in the message space identified by specified application and authority names, within the local AMS continuum.

    mibSource indicates the location of the Management Information Base (MIB) information that will enable the proposed new module to participate in its chosen message space. Nominally it is the name of an XML file in the current working directory; if NULL, mibSource defaults to roleName.xml. (A future version of loadmib.c might load MIB information from an ICI "sdr" database rather than from a file.)

    tsorder is the applicable overriding transport service selection order string. This capability is not yet fully supported; for now, tsorder should always be NULL.

    applicationName identifies the AMS application within which the proposed new module is designed to function. The application must be declared in the MIB.

    authorityName, together with applicationName, identifies the message space in which the new module proposes to register. The message space must be declared in the MIB.

    unitName identifies the cell, within the specified message space, in which the new module proposes to register. The unit must be declared in the MIB for ventures containing the specified message space, and a registrar for this cell of this message space must currently be running in order for the ams_register function to succeed.

    roleName identifies the functional role that the proposed new module is designed to perform within the indicated application. The role must be declared in the MIB for that application, and its name will serve as the name of the module.

    module points to the variable in which the applicable AMS service access point will be returned upon successful registration of the new module.

    The function returns 0 on success, -1 on any error.

    The application thread that invoked ams_register is assumed by AMS to be the main application thread for the module, or "prime thread". Following successful completion of ams_register all threads of the application process may commence invoking AMS services -- inviting messages, publishing messages, etc. -- except that only the prime thread may receive AMS events, e.g., process incoming messages.

```c
int ams_get_module_nbr(AmsModule module);
```

    The function returns the unique identifying number (within its chosen cell) assigned to the indicated module as a result of successful registration.

```c
int ams_get_unit_nbr(AmsModule module);
```

    The function returns the number that uniquely (within the message space) identifies the cell in which the module registered. The combination of unit number and module number uniquely identifies the module within its message space.

```c
int ams_set_event_mgr(AmsModule module, AmsEventMgt *rules);
```

    The function starts a background \"event manager\" thread that automatically receives and processes all AMS events (messages, notices of message space configuration change, etc.) enqueued for the indicated module.

    The thread processes each event according to the indicated rules structure; any event for which a NULL callback function is provided is simply discarded. For details of the rules structure and prototype definitions for the callback functions that the rules point to, see 6.1 above. Some notes on this interface:

    While the event manager thread is running, the prime thread is prohibited from receiving any AMS events itself, i.e., ams_get_event will always fail.

    Only the prime thread may call ams_set_event_mgr. The function returns 0 on success, -1 on any error.

```c
void ams_remove_event_mgr(AmsModule module);
```

    The function stops the background event manager thread for this module, if any is running. Only the prime thread may call ams_remove_event_mgr. Following completion of this function the prime thread is once again able to receive and process AMS events.

```c
int ams_unregister(AmsModule module);
```

    The function terminates the module's registration, ending the ability of any thread of the application process to invoke any AMS services; it automatically stops the background event manager thread for this module, if any is running.

    Only the prime thread may call ams_unregister. The function returns 0 on success, -1 on any error.

### Message Subscription and Invitation

```c
int ams_invite(AmsModule module, int roleNbr, int continuumNbr,
        int unitNbr, int subjectNbr, int priority, unsigned char flowLabel,
        AmsSequence sequence, AmsDiligence diligence);
```

    This function establishes the module's willingness to accept messages on a specified subject, under specified conditions, and states the quality of service at which the module would prefer those messages to be sent. Invitations are implicitly constrained by venture number: only messages from modules registered in message spaces characterized by the same application and authority names as the message space in which the inviting module itself is registered are included in any invitation.

    module must be a valid AMS service access point as returned from ams_register.

    roleNbr identifies the role that constrains the invitation: only messages from modules registered as performing the indicated role are included in this invitation. If zero, indicates "all modules".

    continuumNbr identifies the continuum that constrains the invitation: only messages from modules operating within the indicated continuum are included in this invitation. If -1, indicates "the local continuum". If zero, indicates "all continua".

    unitNbr identifies the unit that constrains the invitation: only messages from modules registered in cells identified by the indicated number -- or in cells that are contained within such cells -- are included in this invitation. A reminder: cell zero is the "root cell", encompassing the entire message space.

    subjectNbr identifies the subject that constrains the invitation: only messages on the indicated subject are included in this invitation.

    priority indicates the level of priority (from 1 to 15, where 1 is the highest priority indicating greatest urgency) at which the inviting module prefers that messages responding to this invitation be sent.

    flowLabel specifies the flow label (a number from 1 to 255, which AMS may pass through to transport service adapters for quality-of-service specification purposes) that the inviting module asks issuing modules to cite when sending messages in response to this invitation. Flow label 0 signifies "no flow label."

    sequence indicates the minimum level of transmission order preservation that the inviting module requires for messages responding to this invitation.

    diligence indicates the minimum level of reliability (based on acknowledgement and retransmission) that the inviting module requires for messages responding to this invitation.

    The function returns 0 on success, -1 on any error. When successful, it causes the invitation to be propagated automatically to all modules in the inviting module's own message space.

```c
int ams_disinvite(AmsModule module, int roleNbr, int continuumNbr,
        int unitNbr, int subjectNbr);
```

    This function terminates the module's prior invitation for messages on a specified subject under specified conditions. roleNbr, continuumNbr, unitNbr, and subjectNbr must be identical to those that characterized the invitation that is to be terminated. The function returns 0 on success, -1 on any error. When successful, it causes cancellation of the invitation to be propagated automatically to all modules in the inviting module's own message space.

```c
int ams_subscribe(AmsModule module, int roleNbr, int continuumNbr,
        int unitNbr, int subjectNbr, int priority, unsigned char flowLabel,
        AmsSequence sequence, AmsDiligence diligence);
```

    This function establishes the module's request to receive a copy of every future message published on a specified subject, under specified conditions, and states the quality of service at which the module would prefer those messages to be sent. Subscriptions are implicitly constrained by venture number: only messages from modules registered in message spaces characterized by the same application and authority names as the message space in which the subscribing module itself is registered are included in any subscription.

    module must be a valid AMS service access point as returned from ams_register.

    roleNbr identifies the role that constrains the subscription: only messages from modules registered as performing the indicated role are included in this subscription. If zero, indicates "all modules".

    continuumNbr identifies the continuum that constrains the subscription: only messages from modules operating within the indicated continuum are included in this subscription. If -1, indicates "the local continuum". If zero, indicates "all continua".

    unitNbr identifies the unit that constrains the subscription: only messages from modules registered in cells identified by the indicated number -- or in cells that are contained within such cells -- are included in this subscription. A reminder: cell zero is the "root cell", encompassing the entire message space.

    subjectNbr identifies the subject that constrains the subscription: only messages on the indicated subject are included in this subscription. subjectNbr may be zero to indicate that messages published on all subjects are requested; in this case, continuumNbr must be -1.

    priority indicates the level of priority (from 1 to 15, where 1 is the highest priority indicating greatest urgency) at which the subscribing module prefers that messages responding to this subscription be sent.

    flowLabel specifies the flow label (a number from 1 to 255, which AMS may pass through to transport service adapters for quality-of-service specification purposes) that the subscribing module asks issuing modules to cite when publishing messages in response to this subscription. Flow label 0 signifies "no flow label."

    sequence indicates the minimum level of transmission order preservation that the subscribing module requires for messages responding to this subscription.

    diligence indicates the minimum level of reliability (based on acknowledgement and retransmission) that the subscribing module requires for messages responding to this subscription.

    The function returns 0 on success, -1 on any error. When successful, it causes the subscription to be propagated automatically to all modules in the subscribing module's own message space.

```c
int ams_unsubscribe(AmsModule module, int roleNbr, int continuumNbr,
        int unitNbr, int subjectNbr);
```

    This function terminates the module's prior subscription to messages on a specified subject under specified conditions. roleNbr, continuumNbr, unitNbr, and subjectNbr must be identical to those that characterized the subscription that is to be terminated. The function returns 0 on success, -1 on any error. When successful, it causes cancellation of the subscription to be propagated automatically to all modules in the subscribing module's own message space.

### Configuration Lookup

     int  ams_lookup_unit_nbr (AmsModule module, char\n *unitName);\n
    The function returns the unit number corresponding to the indicated unitName, in the context of the venture encompassing the message space in which the invoking module is registered. Returns -1 if this unitName is undefined in this venture.
     int  ams_lookup_role_nbr (AmsModule module, char\n *roleName);\n
    The function returns the role number corresponding to the indicated roleName, in the context of the application characterizing the message space in which the invoking module is registered. Returns -1 if this roleName is undefined in this application.
     int  ams_lookup_subject_nbr (AmsModule module, char\n *subjectName);\n
    The function returns the subject number corresponding to the indicated subjectName, in the context of the application characterizing the message space in which the invoking module is registered. Returns -1 if this subjectName is undefined in this application.
     int  ams_lookup_continuum_nbr (AmsModule module, char\n *continuumName);\n
    The function returns the continuum number corresponding to the indicated continuumName, Returns -1 if the named continuum is unknown.
     char * ams_lookup_unit_name (AmsModule module, int\n unitNbr);\n
    The function returns the unit name corresponding to the indicated unitNbr, in the context of the venture encompassing the message space in which the invoking module is registered. Returns NULL if this unitNbr is undefined in this venture.
     char * ams_lookup_role_name (AmsModule module, int\n roleNbr);\n
    The function returns the role name corresponding to the indicated roleNbr, in the context of the application characterizing the message space in which the invoking module is registered. Returns NULL if this roleNbr is undefined in this application.
     char * ams_lookup_subject_name (AmsModule module, int\n subjectNbr);\n
    The function returns the subject name corresponding to the indicated subjectNbr, in the context of the application characterizing the message space in which the invoking module is registered. Returns NULL if this subjectNbr is undefined in this application.
     char * ams_lookup_continuum_name (AmsModule module, int continuumNbr);\n
    The function returns the continuum name corresponding to the indicated continuumNbr. Returns NULL if the specified continuum is unknown.
     char * ams_get_role_name (AmsModule module, int unitNbr, int moduleNbr);\n
    The function returns the name of the role under which the module identified by unitNbr and moduleNbr is registered, within the invoking module's own message space. Returns NULL if no module identified by unitNbr and moduleNbr is known to be currently registered within this message space.
     Lyst  ams_list_msgspaces (AmsModule module);\n
    The function returns a Lyst (see the documentation for lyst) of the numbers of all AMS continua in which there is known to be another message space for the venture in which module is registered. Returns NULL if there is insufficient free memory to create this list. NOTE: be sure to use lyst_destroy to release the memory occupied by this list when you're done with it.
     int  ams_subunit_of (AmsModule module, int argUnitNbr, int refUnitNbr);\n
    The function returns 1 if the unit identified by argUnitNbr is a subset of (or is identical to) the unit identified by refUnitNbr. Otherwise it returns 0.
     int  ams_get_continuum_nbr ();\n
    The function returns the local continuum number.
     int  ams_continuum_is_neighbor (int continuumNbr);\n
    The function returns 1 if the continuum identified by continuumNbr is a neighbor (within the RAMS network) of the local continuum. Otherwise it returns 0.
     int  ams_rams_net_is_tree (AmsModule module);\n
    The function returns 1 if the RAMS network is configured as a tree. Otherwise it returns 0.

    "},{"location":"AMS-Programmer-Guide/#message-issuance","title":"Message Issuance","text":"

     int  ams_publish (AmsModule module, int subjectNbr, int priority, unsigned char flowLabel, int contentLength, char *content, int context);\n
    This function causes an AMS message to be constructed on the indicated subject, encapsulating the indicated content and characterized by the indicated processing context token, and causes one copy of that message to be sent to every module in the message space that currently asserts a subscription for messages on this subject such that the invoking module satisfies the constraints on that subscription.

    priority may be any value from 1 to 15, overriding the priority preference(s) asserted by the subscriber(s), or it may be zero indicating \"use each subscriber's preferred priority.\" flowLabel may be any value from 1 to 255, overriding the flow label preference(s) asserted by the subscriber(s), or it may be zero indicating \"use each subscriber's preferred flow label.\"

    The function returns 0 on success, -1 on any error.

     int  ams_send (AmsModule module, int continuumNbr, int unitNbr, int moduleNbr, int subjectNbr, int priority, unsigned char flowLabel, int contentLength, char *content, int context);\n
    This function causes an AMS message to be constructed on the indicated subject, encapsulating the indicated content and characterized by the indicated processing context token, and causes that message to be sent to the module identified by unitNbr and moduleNbr within the indicated continuum, provided that this module currently asserts an invitation for messages on this subject such that the invoking module satisfies the constraints on that invitation.

    If continuumNbr is -1, the local continuum is inferred.

    priority may be any value from 1 to 15, overriding the priority preference asserted by the destination module, or it may be zero indicating \"use the destination module's preferred priority.\" flowLabel may be any value from 1 to 255, overriding the flow label preference asserted by the destination module, or it may be zero indicating \"use the destination module's preferred flow label.\"

    The function returns 0 on success, -1 on any error.

    int  ams_query (AmsModule module, int continuumNbr, int unitNbr, int moduleNbr, int subjectNbr, int priority, unsigned char flowLabel, int contentLength, char *content, int context, int term, AmsEvent *event);\n
    This function is identical to ams_send in usage and effect except that, following issuance of the message, the function blocks (that is, does not return control to the invoking function) until either (a) a message that is a specific reply to this message is received or (b) the time period indicated by term -- in microseconds -- elapses. Upon return of control to the invoking function, the AMS event pointer referenced by event points to the AMS event that caused the return of control, either a reply message or a timeout or (possibly) a notice of processing error. If term is 0, the function returns control to the invoking function immediately and *event always points to a timeout event. If term is -1, the function never returns control until a reply message is received.

    The function returns 0 on success, -1 on any error.

     int  ams_reply (AmsModule module, AmsEvent msg, int subjectNbr, int priority, unsigned char flowLabel, int contentLength, char *content);\n
    This function is identical to ams_send in usage and effect except that the destination of the reply message is not stated explicitly by the invoking function; instead, the invoking function provides a pointer to the AMS message (an AmsEvent whose event type is AMS_MSG_EVT) whose sender is the destination of the reply message.

    The function returns 0 on success, -1 on any error.

     int  ams_announce (AmsModule module, int roleNbr, int continuumNbr, int unitNbr, int subjectNbr, int priority, unsigned char flowLabel, int contentLength, char *content, int context);\n
    This function causes an AMS message to be constructed on the indicated subject, encapsulating the indicated content and characterized by the indicated processing context token, and causes one copy of that message to be sent to every module in the domain of the announcement that currently asserts an invitation for messages on this subject such that the invoking module satisfies the constraints on that invitation. The domain of the announcement is the set of all modules such that:

    A continuumNbr of -1 indicates \"the local continuum\"; a value of zero indicates \"all continua\".

    priority may be any value from 1 to 15, overriding the priority preference(s) asserted by the destination module(s), or it may be zero indicating \"use each destination module's preferred priority.\" flowLabel may be any value from 1 to 255, overriding the flow label preference(s) asserted by the destination module(s), or it may be zero indicating \"use each destination module's preferred flow label.\"

    The function returns 0 on success, -1 on any error.

    "},{"location":"AMS-Programmer-Guide/#event-including-message-reception","title":"Event (Including Message) Reception","text":"

    int  ams_get_event (AmsModule module, int term, AmsEvent *event);\n
    This function acquires the next AMS event currently in the queue of AMS events that have yet to be handled by the application. The function blocks (that is, does not return control to the invoking function) until either (a) an event is available to be acquired or (b) the time period indicated by term -- in microseconds -- elapses. Upon return of control to the invoking function, the AMS event pointer referenced by event points to the AMS event that caused the return of control: a message, a notice of message space configuration change, a user-defined event, or a timeout. If term is 0, the function returns control to the invoking function immediately. If term is -1, the function never returns control until a non-timeout event can be acquired.

    The function returns 0 on success, -1 on any error. Following acquisition of an event, the application program should first determine the event's type, parse the event with the matching parse function, and finally recycle it, using the functions described below.

     int  ams_get_event_type (AmsEvent event);\n
    This function returns the event type of the indicated event, enabling the event to be properly parsed by the application program. The possible event types are AMS_MSG_EVT, TIMEOUT_EVT, NOTICE_EVT, and USER_DEFINED_EVT.

     int  ams_parse_msg (AmsEvent event, int *continuumNbr, int *unitNbr, int *moduleNbr, int *subjectNbr, int *contentLength, char **content, int *context, AmsMsgType *msgType, int *priority, unsigned char *flowLabel);\n
    This function extracts the content of an AMS event that is a received message, inserting values into the variables that the function's arguments point to. continuumNbr, unitNbr, and moduleNbr identify the module that sent the message. Returns 0 unless one or more of the arguments provided to the function are NULL, in which case the function returns -1.
    int  ams_parse_notice (AmsEvent event, AmsStateType *state, AmsChangeType *change, int *unitNbr, int *moduleNbr, int *roleNbr, int *domainContinuumNbr, int *domainUnitNbr, int *subjectNbr, int *priority, unsigned char *flowLabel, AmsSequence *sequence, AmsDiligence *diligence);\n
    This function extracts the content of an AMS event that is a notice of change in message space configuration, inserting values into the variables that the function's arguments point to.

    state and change indicate the nature of the change.

    unitNbr and moduleNbr identify the module to which the change pertains.

    roleNbr is provided in the event that the change is the registration of a new module (in which case it indicates the functional nature of the new module) or is a subscription, unsubscription, invitation, or disinvitation (in which case it indicates the role constraining the subscription or invitation).

    For a notice of subscription, unsubscription, invitation, or disinvitation, domainContinuumNbr, domainUnitNbr, and subjectNbr identify the domain and subject of the subscription or invitation.

    For a notice of subscription or invitation, priority, flowLabel, sequence, and diligence indicate the quality of service requested by the module for this subscription or invitation.

    Returns 0 unless one or more of the arguments provided to the function are NULL, in which case the function returns -1.

     int  ams_parse_user_event (AmsEvent event, int *code, int *dataLength, char **data);\n
    This function extracts the content of a user-defined AMS event, inserting values into the variables that the function's arguments point to. Returns 0 unless one or more of the arguments provided to the function are NULL, in which case the function returns -1.
     int  ams_recycle_event (AmsEvent event);\n
    This function simply releases all memory occupied by the indicated event. Returns 0 unless event is NULL, in which case the function returns -1.

    "},{"location":"AMS-Programmer-Guide/#user-event-posting","title":"User Event Posting","text":"

     int  ams_post_user_event (AmsModule module, int userEventCode, int userEventDataLength, char *userEventData, int priority);\n
    This function posts a user-defined event into the queue of AMS events that have yet to be handled by the application. userEventCode is an arbitrary, user-defined numeric value; userEventData, if not NULL, is assumed to be an arbitrary, user-defined character string of length userEventDataLength.

    priority may be any value from 0 to 16. Note that this enables the application to post an event to itself that is guaranteed to be of higher priority than any message -- assuring that it will be processed before any message that is currently enqueued or that arrives in the future -- or, alternatively, to post an event that is guaranteed to be of lower priority than any message and will therefore only be processed during a lull in message reception.

    Returns 0 on success, -1 on any error.

    "},{"location":"AMS-Programmer-Guide/#remote-ams","title":"Remote AMS","text":"

    The JPL implementation of Remote AMS comprises a library (librams.c) and a sample RAMS gateway program (ramsTest.c) that uses that library.

    "},{"location":"AMS-Programmer-Guide/#library","title":"Library","text":"

    The RAMS library implements a very simple API, which is defined by the header file rams.h and comprises just a single function:

     int  rams_run (char *mibSource, char *tsorder, char *applicationName, char *authorityName, char *unitName, char *roleName, int lifetime);\n
    This function initiates a RAMS gateway operations loop. mibSource, tsorder, applicationName, authorityName, unitName, and roleName are as discussed in the documentation of ams_register above; they are used to register the RAMS gateway process as an AMS module. lifetime is the user-specified maximum time to live for all DTN bundles issued by the RAMS gateway in the course of its communications over the RAMS network.

    Note that the priority assigned to any DTN bundle that conveys a published or privately sent AMS message over the RAMS network will be computed as a function of the flow label specified at the time the message was originally published or sent. If no overriding flow label was specified, then the bundle priority will be 1 (standard). Otherwise, the number represented by the two low-order bits of the flow label will be used as the bundle priority and two \"extended class of service\" parameters will be derived from the next two higher-order bits of the flow label: bit 5 (the 3^rd^-lowest-order bit) will be used as the value of the \"minimum latency\" flag, with a value of 1 indicating that the bundle is critical, and bit 4 (the 4^th^-lowest-order bit) will be used as the value of the \"best-effort\" flag, with a value of 1 indicating that the bundle should be sent over an unacknowledged convergence-layer protocol. All bundles issued by the RAMS gateway that don't carry AMS messages will be assigned bundle priority 1.

    This function runs indefinitely until it fails or is interrupted by a SIGINT signal. Returns -1 on any failure, 0 on normal termination.

    "},{"location":"AMS-Programmer-Guide/#ramsgate","title":"ramsgate","text":"

    The sample RAMS gateway program ramsgate provides basic RAMS gateway functionality. To run it, enter a command of the following form at a terminal window prompt:

     ramsgate *application_name authority_name lifetime* [*memory_size* [*memory_manager_name*]]\n
    application_name, authority_name, lifetime, memory_size, and memory_manager_name are as discussed in the documentation of ams_register. If not specified, memory_size defaults to 200000. The gateway process always registers in the role named \"RAMS\".

    Note that ramsgate relies on the ION implementation of DTN; be sure an ION node is operating on the local computer before starting ramsgate. (See the ION Design and Operations Guide for details.)

    To terminate operation of the gateway process, just use CTRL-C to interrupt the program.

    "},{"location":"AMS-Programmer-Guide/#management-information-base","title":"Management Information Base","text":"

    In order to operate correctly, every AMS application process -- and amsd as well -- must initially load Management Information Base (MIB) values that are universally consistent.

    Currently this is accomplished automatically during registration: MIB value declaration commands in XML format are read from a file, parsed, and processed automatically.

    "},{"location":"AMS-Programmer-Guide/#mib-file-syntax","title":"MIB file syntax","text":"

    The elements of the XML files used to load AMS MIBs are as follows:

    [ams_mib_load] : contains a series of MIB load commands.

    Attributes: none

    [ams_mib_init] : command that initializes the MIB.

    Attributes:

    continuum_nbr: the number identifying the local continuum

    ptsname: the name of the primary transport service

    [ams_mib_add] : contains a series of elements that add items to the MIB.

    Attributes: none

    [continuum] :

    Attributes:

    nbr: the number that identifies this continuum

    name: the name that identifies this continuum

    neighbor: a Boolean indication (\"1\" or \"0\") of whether or not this is a neighboring continuum [If omitted, the continuum is by default assumed to be a neighbor -- that is, an implicit neighbor=\"1\" attribute is the default.]

    desc: a brief textual description of this continuum

    [csendpoint] : configuration server endpoint specification (i.e., network location of configuration server)

    Attributes:

    epspec: PTS-specific endpoint specification string (to be more fully documented in a later edition of this [Programmer's Guide] )

    [application] : defines an application

    Attributes:

    name: name of application

    [venture] : defines a venture (an instance of an application)

    Attributes:

    [nbr]: the number that identifies this venture

    [appname]: the name of the application served by this venture

    [authname]: the name of the authority responsible for this instance of this application

    [net_config]: the configuration (\"mesh\" or \"tree\") of the RAMS network comprising all AMS continua that participate in this venture. If omitted, the RAMS network configuration is by default assumed to be a mesh.

    [gweid]: a string identifying the endpoint for the local continuum's RAMS gateway within the RAMS network for this venture; default is \"bp@ipn:local_continuum_nbr.venture_nbr\"

    root_cell_resync_period: the period (expressed as a count of registrar heartbeats) on which the configuration of the root unit of this venture will automatically be resynchronized. If omitted or set to zero, automatic resync is disabled within the root unit.

    [role] : defines a role within a venture

    Attributes:

    [nbr]: the number that identifies this role

    [name]: the name that identifies this role

    [subject] : defines a message subject within a venture

    Attributes:

    [nbr]: the number that identifies this subject

    [name]: the name that identifies this subject

    [desc]: a brief textual description of this message subject

    [element] : defines one of the elements (fields) of a message on a given subject (note that elements must be defined in the order in which they appear in the message, without omission)

    Attributes:

    [type]: a number that specifies the data type of this element (1 = long, 2 = int, 3 = short, 4 = char, 5 = string)

    [name]: the name that identifies this element

    [desc]: a brief textual description of this message field.

    [unit] : defines a unit of a venture

    Attributes:

    [nbr]: the number that identifies this unit

    name: the name that identifies this unit

    [resync_period]: the period (expressed as a count of registrar heartbeats) on which the configuration of this unit will automatically be resynchronized. If omitted or set to zero, automatic resync is disabled within this unit.

    [msgspace] : identifies a remote continuum that contains a message space that is part of this venture

    Attributes:

    [nbr]: the number that identifies the continuum containing this message space

    [gweid]: a string identifying the endpoint for the indicated continuum's RAMS gateway, within the RAMS network for this venture; default is \"bp@ipn:continuum_nbr.venture_nbr\"

    neighbor: a Boolean indication of the adjacency of the message space to the parent continuum (i.e., 1 is adjacent, 0 is non-adjacent). Note that in this context a message space is analogous to a continuum. If the neighbor attribute is omitted, the default value is 1.

    "},{"location":"AMS-Programmer-Guide/#a-sample-mib","title":"A Sample MIB","text":"
    <?xml version=\"1.0\" standalone=\"yes\"?>\n<ams_mib_load>\n<ams_mib_init continuum_nbr=\"1\" ptsname=\"dgr\"/>\n<ams_mib_add>\n<continuum nbr=\"2\" name=\"gsfc\" desc=\"Goddard Space Flight Center\"/>\n<csendpoint epspec=\"amroc.net:2502\"/>\n<application name=\"demo\"/>\n<venture nbr=\"9\" appname=\"demo\" authname=\"test\" >\n<role nbr=\"2\" name=\"shell\"/>\n<role nbr=\"3\" name=\"log\"/>\n<role nbr=\"4\" name=\"pitch\"/>\n<role nbr=\"5\" name=\"catch\"/>\n<role nbr=\"6\" name=\"benchs\"/>\n<role nbr=\"7\" name=\"benchr\"/>\n<subject nbr=\"1\" name=\"text\" desc=\"ASCII text\"/>\n<subject nbr=\"2\" name=\"noise\" desc=\"more ASCII text\"/>\n<subject nbr=\"3\" name=\"bench\" desc=\"numbered messages\"/>\n<unit nbr=\"1\" name=\"orbiters\"/>\n<unit nbr=\"2\" name=\"orbiters.near\"/>\n<unit nbr=\"3\" name=\"orbiters.far\"/>\n<msgspace nbr=\"2\" neighbor=\"1\"/>\n</venture>\n</ams_mib_add>\n</ams_mib_load>\n
    "},{"location":"AMS-Programmer-Guide/#application-development","title":"Application Development","text":"

    AMS applications (i.e. \"modules\") are custom software built using AMS' application programming interface (API). These AMS modules serve as the interfaces between AMS and mission-specific processes or applications.

    "},{"location":"AMS-Programmer-Guide/#overview_1","title":"Overview","text":"

    The required steps follow this general sequence:

    1. Include the AMS library header file (ams.h) in your application's source code
    2. Use the provided AMS library functions to implement the interface (to include functionality such as publication and subscription)
    3. Compile your application and link the compiled binary to the required library objects (i.e. ici, dgr, ltp, bp)
    "},{"location":"AMS-Programmer-Guide/#compiling-custom-ams-applications","title":"Compiling Custom AMS Applications","text":"

    Applications created with the API must be compiled with links to ION's dynamic libraries.

    The following two-step build procedure provides an example of how this works; note that it uses the default installation path for ION library files on Linux, /usr/local/lib/:

    gcc -g -Wall -Werror -Dlinux -DUDPTS -DTCPTS -DDGRTS -DNOEXPAT -fPIC -DSPACE_ORDER=3 -I../library -I../include -I../rams -I/usr/local/include -c your_module_name.c\n\ngcc -g -Wall -Werror -Dlinux -DUDPTS -DTCPTS -DDGRTS -DNOEXPAT -fPIC -DSPACE_ORDER=3 -I../library -I../include -I../rams -I/usr/local/include -o your_module_name your_module_name.o -L./lib -L/usr/local/lib -lams -ldgr -lici -lpthread -lm\n
    "},{"location":"AMS-Programmer-Guide/#running-the-application","title":"Running the Application","text":"

    Applications must self-register with an AMS registrar. To facilitate this, the application must supply the following information to the ams_register() function:

    1. unit name
    2. role name
    3. application name
    4. authority name

    Given this requirement, it may be useful (but not necessary) for the application to accept command line arguments.

    An example command to start a custom AMS module might look like the following:

    ./your_module_name ' ' your_module_mib_role_name amsdemo test\n
    Note: the empty quotes (' ') above are used to specify registration with the root unit registrar.

    "},{"location":"AMS-Programmer-Guide/#a-sample-application","title":"A Sample Application","text":"

    Following is the complete source code for 'amshello', a simple Unix-based AMS application installed alongside AMS. It is a complete distributed system comprising two AMS application modules, and it functions as follows.

    When the amshello program starts, it fork()s into two processes, a \"pitcher\" and a \"catcher\". Both processes then register as modules in the root cell of an \"amsdemo/test\" message space. The catcher invites messages on subject \"text\" and waits for the first such message to arrive. The pitcher waits for an invitation to send messages of subject \"text\" and, when it arrives, sends one such message and terminates. When the catcher receives the message it prints the text of the message and then terminates.

    To run the 'amshello' demonstration application, the following two steps are required.

    1. An AMS configuration server and registrar must be started (do this after all other ION processes have started) using the following command: amsd @ @ amsdemo test \"\" &
    2. Then run the amshello application using the following command: amshello

    Important points regarding the above amsd command (see man amsd for additional details):

    /*\namshello.c\n\"Hello world\" demonstration using AMS - Unix platform (only)\n\nCopyright (c) 2023, California Institute of Technology. \nSky DeBaun, Jet Propulsion Laboratory.\n\n\nThis program assumes the following conditions---------------\n1.) ION is running\n2.) An AMS Registrar is running\n3.) An AMS Configuration Server is running\n4.) An MIB configuration file has been created and is specified for use (see note below)\n\n*NOTE: the following command completes steps 2, 3, and 4 above (run this command after other ION processes start, then run the \u2018amshello\u2019 command from terminal to run the program):\n\namsd @ @ amsdemo test \"\" &\n\n\n*/\n\n#include \"ams.h\"\n\nstatic int  runPitcher()\n{\n    AmsModule           me;\n    AmsEvent            evt;\n    AmsStateType        state;\n    AmsChangeType       change;\n    int                 zn, nn, rn, dcn, dzn, sn, pr, textlen;\n    unsigned char       fl;\n    AmsSequence         sequence;\n    AmsDiligence        diligence;\n    char                buffer[80];\n\n    isprintf(buffer, sizeof buffer, \"Hello from process %d\", (int) getpid());\n    textlen = strlen(buffer) + 1;\n\n    //register pitch module using default in-memory MIB (i.e. using the @)\n    oK(ams_register(\"@\", NULL, \"amsdemo\", \"test\", \"\", \"pitch\", &me));\n\n    while (1)\n    {\n        if (ams_get_event(me, AMS_BLOCKING, &evt) < 0)\n        {\n            return 0;\n        }\n        else\n        {\n            ams_parse_notice(evt, &state, &change, &zn, &nn, &rn, &dcn,\n                    &dzn, &sn, &pr, &fl, &sequence, &diligence);\n            ams_recycle_event(evt);\n        }\n\n        if (state == AmsInvitationState && sn == 1)\n        {\n            printf(\"Process %d sending:  '%s'\\n\", (int) getpid(), buffer);\n            fflush(stdout);\n            ams_send(me, -1, zn, nn, 1, 0, 0, textlen, buffer, 0);\n            ams_unregister(me);\n            return 0;\n        }\n    }\n}\n\nstatic int  runCatcher()\n{\n    AmsModule           me;\n    AmsEvent            evt;\n    int                 cn, zn, nn, sn, len, ct, pr;\n    unsigned char       fl;\n    AmsMsgType          mt;\n    char                *txt;\n\n    //register catch module using default in-memory MIB (i.e. @)\n    oK(ams_register(\"@\", NULL, \"amsdemo\", \"test\", \"\", \"catch\", &me));\n    ams_invite(me, 0, 0, 0, 1, 8, 0, AmsArrivalOrder, AmsAssured);\n\n    while (1)\n    {\n        if (ams_get_event(me, AMS_BLOCKING, &evt) < 0) return 0;\n        if (ams_get_event_type(evt) == AMS_MSG_EVT) break;\n        else ams_recycle_event(evt);\n    }\n\n    ams_parse_msg(evt, &cn, &zn, &nn, &sn, &len, &txt, &ct, &mt, &pr, &fl);\n    printf(\"Process %d received: '%s'\\n\", (int) getpid(), txt); fflush(stdout);\n    ams_recycle_event(evt); ams_unregister(me); return 0;\n}\n\n\nint main(void)\n{\n    pid_t pid = fork();\n\n    if (pid == -1) {\n        fprintf(stderr, \"Failed to create child process.\\n\");\n        return EXIT_FAILURE;\n    }\n\n    if (pid == 0)\n    {\n        //child process runs transmitter----------------------\n        runPitcher();\n    }\n    else\n    {\n        //parent process runs receiver------------------------\n        runCatcher();\n    }\n\n    return 0;\n}\n
    "},{"location":"AMS-Programmer-Guide/#acknowledgment","title":"Acknowledgment","text":"

    The research described in this publication was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

    "},{"location":"BP-Service-API/","title":"BP Service API","text":"

    This tutorial goes over the basic user APIs for developing application software to take advantage of the Bundle Protocol (BP) services provided by ION.

    "},{"location":"BP-Service-API/#pre-requisite","title":"Pre-requisite","text":"

    For user-developed software to utilize the BP services provided by ION, two prerequisite conditions must be met: (1) ION services must be properly configured and already running, in a state ready to provide BP services, and (2) the BP libraries and header files must be installed and linked into the user application.

    "},{"location":"BP-Service-API/#check-ion-installation-bp-version","title":"Check ION Installation & BP Version","text":"

    A simple way to check whether ION is installed on the host is to locate the installation: executing which ionadmin shows the directory where ionadmin resides:

    $ which ionadmin\n/usr/local/bin/ionadmin\n

    If ionadmin is not found, it means either ION was not built or not properly installed in the execution path. If ionadmin is found, run it and provide the command v at the : prompt to determine the installed ION software version. For example:

    $ ionadmin\n: v\nION-OPEN-SOURCE-4.1.2\n

    Quit ionadmin with the command q. Quitting ionadmin will not terminate any ION or BP services; it simply ends ION's user interface for configuration query and management. If you see warning messages such as those shown below when quitting ionadmin, this confirms that ION is not actually running at the moment. There is no software error:

    : q\nat line 427 of ici/library/platform_sm.c, Can't get shared memory segment: Invalid argument (0)\nat line 312 of ici/library/memmgr.c, Can't open memory region.\nat line 367 of ici/sdr/sdrxn.c, Can't open SDR working memory.\nat line 513 of ici/sdr/sdrxn.c, Can't open SDR working memory.\nat line 963 of ici/library/ion.c, Can't initialize the SDR system.\nStopping ionadmin.\n

    You can also run bpversion to determine the version of Bundle Protocol built in the host:

    $ bpversion\nbpv7\n
    "},{"location":"BP-Service-API/#determine-bp-service-state","title":"Determine BP Service State","text":"

    Once it is determined that ION has been installed, a user may want to determine whether BP service is running by checking for the presence of the various BP daemons and the shared memory/semaphores created by ION for interprocess communications among the BP service daemons.

    To check whether BP service is running, list the currently running processes with the ps aux command and look for the following BP service daemons:

    You can find more details about these daemons in the manual pages. You may also see daemons related to other active modules. For example, if the LTP engine is active in the system, you will see the following daemons:

    To further verify that BP service is running, you can check for the presence of ION shared memory segments and semaphores (e.g., with the ipcs command):

    ------ Shared Memory Segments --------\nkey        shmid      owner         perms      bytes      nattch     status   \n0x0000ee02 47         userIon       666        641024     13                  \n0x0000ff00 48         userIon       666        50000000   13                  \n0x93de0005 49         userIon       666        1200002544 13                  \n0x0000ff01 50         userIon       666        500000000  13                  \n\n------ Semaphore Arrays --------\nkey        semid      owner         perms      nsems   \n0x0000ee01 23         userIon       666        1     \n0x18020001 24         userIon       666        250   \n

    In this example, the shared memory and semaphore keys for the SDR heap space (shmid 49) and the semaphore base (semid 24) are created using a random key generated from the process ID of ionadmin, and they will vary each time ION is instantiated. This behavior is specific to SVR4 semaphores, the default for ION 4.1.2. Starting with ION 4.1.3, the default will switch to POSIX semaphores and the output will differ. The other memory and semaphore keys listed in this example are typical default values, but they, too, can be changed through ionconfig files.

    "},{"location":"BP-Service-API/#if-ion-is-installed-but-not-running","title":"If ION is installed but not running","text":"

    If ION is installed but not running (either you see no shared memory or you don't see the BP service daemons), you can restart ION. Before restarting ION, run ionstop to clear any remaining orphaned processes and shared memory allocations, in case the previous instance of BP service was not properly shut down or suffered a crash. Here is an example of the output you may see:

    $ ionstop\nIONSTOP will now stop ion and clean up the node for you...\ncfdpadmin .\nStopping cfdpadmin.\nbpv7\nbpadmin .\nStopping bpadmin.\nltpadmin .\nStopping ltpadmin.\nionadmin .\nStopping ionadmin.\nbsspadmin .\nBSSP not initialized yet.\nStopping bsspadmin.\nThis is a single-ION instance configuration. Run killm.\nkillm\nSending TERM to acsadmin lt-acsadmin    acslist lt-acslist      aoslsi lt-aoslsi        aoslso lt-aoslso        bibeadmin lt-bibeadmin  bibeclo lt-bibeclo     bpadmin lt-bpadmin      bpcancel lt-bpcancel    bpchat lt-bpchat        bpclm lt-bpclm  bpclock lt-bpclock      bpcounter lt-bpcounter         bpdriver lt-bpdriver    bpecho lt-bpecho        bping lt-bping  bplist lt-bplist        bpnmtest lt-bpnmtest    bprecvfile lt-bprecvfile       bpsecadmin lt-bpsecadmin        bpsendfile lt-bpsendfile        bpsink lt-bpsink        bpsource lt-bpsource    bpstats lt-bpstats     bpstats2 lt-bpstats2    bptrace lt-bptrace      bptransit lt-bptransit  brsccla lt-brsccla      brsscla lt-brsscla      bsscounter lt-bsscounter       bssdriver lt-bssdriver  bsspadmin lt-bsspadmin  bsspcli lt-bsspcli      bsspclo lt-bsspclo      bsspclock lt-bsspclock  bssrecv lt-bssrecv     bssStreamingApp lt-bssStreamingApp      cgrfetch lt-cgrfetch    cpsd lt-cpsd    dccpcli lt-dccpcli      dccpclo lt-dccpclo    dccplsi lt-dccplsi       dccplso lt-dccplso      dgr2file lt-dgr2file    dgrcli lt-dgrcli        dgrclo lt-dgrclo        dtka lt-dtka    dtkaadmin lt-dtkaadmin         dtn2admin lt-dtn2admin  dtn2adminep lt-dtn2adminep      dtn2fw lt-dtn2fw        dtpcadmin lt-dtpcadmin  dtpcclock lt-dtpcclock         dtpcd lt-dtpcd  dtpcreceive lt-dtpcreceive      dtpcsend lt-dtpcsend    file2dgr lt-file2dgr    file2sdr lt-file2sdr    file2sm lt-file2sm     file2tcp lt-file2tcp    file2udp lt-file2udp    hmackeys lt-hmackeys    imcadmin lt-imcadmin    imcadminep lt-imcadminep      imcfw lt-imcfw   ionadmin lt-ionadmin    ionexit lt-ionexit      ionrestart lt-ionrestart     
   ionsecadmin lt-ionsecadmin      ionunlock lt-ionunlock         ionwarn lt-ionwarn      ipnadmin lt-ipnadmin    ipnadminep lt-ipnadminep        ipnd lt-ipnd    ipnfw lt-ipnfw  lgagent lt-lgagent     lgsend lt-lgsend        ltpadmin lt-ltpadmin    ltpcli lt-ltpcli        ltpclo lt-ltpclo        ltpclock lt-ltpclock    ltpcounter lt-ltpcounter       ltpdeliv lt-ltpdeliv    ltpdriver lt-ltpdriver  ltpmeter lt-ltpmeter    ltpsecadmin lt-ltpsecadmin      nm_agent lt-nm_agent  nm_mgr lt-nm_mgr         owltsim lt-owltsim      owlttb lt-owlttb        psmshell lt-psmshell    psmwatch lt-psmwatch    ramsgate lt-ramsgate  rfxclock lt-rfxclock     sdatest lt-sdatest      sdr2file lt-sdr2file    sdrmend lt-sdrmend      sdrwatch lt-sdrwatch    sm2file lt-sm2file    smlistsh lt-smlistsh     smrbtsh lt-smrbtsh      stcpcli lt-stcpcli      stcpclo lt-stcpclo      tcaadmin lt-tcaadmin    tcaboot lt-tcaboot    tcacompile lt-tcacompile tcapublish lt-tcapublish        tcarecv lt-tcarecv      tcc lt-tcc      tccadmin lt-tccadmin    tcp2file lt-tcp2file  tcpbsi lt-tcpbsi         tcpbso lt-tcpbso        tcpcli lt-tcpcli        tcpclo lt-tcpclo        udp2file lt-udp2file    udpbsi lt-udpbsi      udpbso lt-udpbso         udpcli lt-udpcli        udpclo lt-udpclo        udplsi lt-udplsi        udplso lt-udplso                amsbenchr lt-amsbenchr         amsbenchs lt-amsbenchs  amsd lt-amsd    amshello lt-amshello    amslog lt-amslog        amslogprt lt-amslogprt  amsshell lt-amsshell   amsstop lt-amsstop      bputa lt-bputa  cfdpadmin lt-cfdpadmin  cfdpclock lt-cfdpclock  cfdptest lt-cfdptest    bpcp lt-bpcp    bpcpd lt-bpcpd ...\nSending KILL to the processes...\nChecking if all processes ended...\nDeleting shared memory to remove SDR...\nKillm completed.\nION node ended. Log file: ion.log\n

    At this point run ps -aux or ipcs to verify that ION has terminated completely.

    "},{"location":"BP-Service-API/#simple-ion-installation-test","title":"Simple ION Installation Test","text":"

    When ION is not running, you can perform a simple unit test to verify that ION is built properly.

    Navigate to the root directory of the ION source code, cd into the tests folder and then execute a bping test using the command:

    ./runtest bping\n

    In the terminal output you can watch ION restart itself and execute a loopback ping. When successful, the test will indicate at the end:

    TEST PASSED!\n\npassed: 1\n    bping\n\nfailed: 0\n\nskipped: 0\n\nexcluded by OS type: 0\n\nexcluded by BP version: 0\n\nobsolete tests: 0\n
    "},{"location":"BP-Service-API/#locate-ion-libraries-and-header-files","title":"Locate ION Libraries and Header Files","text":"

    The standard ./configure; make; sudo make install; sudo ldconfig process should automatically install the BP libraries under /usr/local/lib and the relevant header files under /usr/local/include, unless ION was specifically configured for a different install location via the ./configure script. Here is a list of the libraries and header files you should find there:

    $ cd /usr/local/lib\n$ ls\nlibamp.a                 libbp.so.0.0.0    libcgr.a          libdtpc.la        libmbedcrypto.so.7   libtc.so.0.0.0      libudpcla.a\nlibamp.la                libbss.a          libcgr.la         libdtpc.so        libmbedtls.a         libtcaP.a           libudpcla.la\nlibamp.so                libbss.la         libcgr.so         libdtpc.so.0      libmbedtls.so        libtcaP.la          libudpcla.so\nlibamp.so.0              libbss.so         libcgr.so.0       libdtpc.so.0.0.0  libmbedtls.so.14     libtcaP.so          libudpcla.so.0\nlibamp.so.0.0.0          libbss.so.0       libcgr.so.0.0.0   libici.a          libmbedx509.a        libtcaP.so.0        libudpcla.so.0.0.0\nlibampAgentADM.a         libbss.so.0.0.0   libdgr.a          libici.la         libmbedx509.so       libtcaP.so.0.0.0    libudplsa.a\nlibampAgentADM.la        libbssp.a         libdgr.la         libici.so         libmbedx509.so.1     libtcc.a            libudplsa.la\nlibampAgentADM.so        libbssp.la        libdgr.so         libici.so.0       libstcpcla.a         libtcc.la           libudplsa.so\nlibampAgentADM.so.0      libbssp.so        libdgr.so.0       libici.so.0.0.0   libstcpcla.la        libtcc.so           libudplsa.so.0\nlibampAgentADM.so.0.0.0  libbssp.so.0      libdgr.so.0.0.0   libltp.a          libstcpcla.so        libtcc.so.0         libudplsa.so.0.0.0\nlibams.a                 libbssp.so.0.0.0  libdtka.a         libltp.la         libstcpcla.so.0      libtcc.so.0.0.0     libzfec.a\nlibams.la                libcfdp.a         libdtka.la        libltp.so         libstcpcla.so.0.0.0  libtcpbsa.a         libzfec.la\nlibbp.a                  libcfdp.la        libdtka.so        libltp.so.0       libtc.a              libtcpbsa.la        libzfec.so\nlibbp.la                 libcfdp.so        libdtka.so.0      libltp.so.0.0.0   libtc.la             libtcpbsa.so        libzfec.so.0\nlibbp.so                 libcfdp.so.0      libdtka.so.0.0.0  libmbedcrypto.a   
libtc.so             libtcpbsa.so.0      libzfec.so.0.0.0\nlibbp.so.0               libcfdp.so.0.0.0  libdtpc.a         libmbedcrypto.so  libtc.so.0           libtcpbsa.so.0.0.0\n\n\n$ cd /usr/local/include\n$ ls\nams.h               bpsec_instr.h  cfdpops.h  eureka.h       llcv.h    platform.h     rfc9173_utils.h  sdrhash.h    sdrxn.h    tcaP.h\nbcb_aes_gcm_sc.h    bpsec_util.h   crypto.h   icinm.h        ltp.h     platform_sm.h  rfx.h            sdrlist.h    smlist.h   tcc.h\nbib_hmac_sha2_sc.h  bss.h          dgr.h      ion.h          lyst.h    psa            sci.h            sdrmgt.h     smrbt.h    tccP.h\nbp.h                bssp.h         dtka.h     ion_test_sc.h  mbedtls   psm.h          sda.h            sdrstring.h  sptrace.h  zco.h\nbpsec_asb.h         cfdp.h         dtpc.h     ionsec.h       memmgr.h  radix.h        sdr.h            sdrtable.h   tc.h\n

    In this document, we assume that ION was built and installed via the ./configure installation process using the full open-source codebase and with the standard set of options.

    The library and header directories shown above include non-BP modules, and their contents may not match yours exactly, especially if you built ION with feature options enabled via ./configure, used a manual/custom Makefile, or built ION from the ion-core package instead.

    "},{"location":"BP-Service-API/#launch-ion-bp-services","title":"Launch ION & BP Services","text":"

    Once you are confident that ION has been properly built and installed in the system, you can start BP service by launching ION. To do this, please consult the various tutorials under Configuration.

    After launching ION, you can verify BP service status in the same manner as described in the previous section.

    "},{"location":"BP-Service-API/#bp-service-api-reference","title":"BP Service API Reference","text":""},{"location":"BP-Service-API/#header","title":"Header","text":"
    #include \"bp.h\"\n
    "},{"location":"BP-Service-API/#bp_attach","title":"bp_attach","text":"

    Function Prototype

    int bp_attach( )\n

    Parameters

    Return Value

    Example Call

    if (bp_attach() < 0)\n{\n        printf(\"Can't attach to BP.\\n\");\n        /* user inserts error handling code here */\n}\n

    Description

    Typically the bp_attach() call is made at the beginning of a user's application to attach to the BP service provided by ION on the host machine. The example above checks for a negative return value, which indicates failure.

    bp_attach() automatically calls the ICI API ion_attach() when necessary, so there is no need to call it separately. In addition to gaining access to ION's SDR, which is what ion_attach() provides, bp_attach() also gains access to the Bundle Protocol's state information and database. For user applications that interact with the Bundle Protocol, bp_attach() is the entry point to ION.

    "},{"location":"BP-Service-API/#sdr-bp_get_sdr","title":"Sdr bp_get_sdr( )","text":"

    Function Prototype

    Sdr bp_get_sdr()\n

    Parameters

    Return Value

    Example Call

    /* declare SDR handle */\nSdr sdr;\n\n/* get SDR handle */\nsdr = bp_get_sdr();\n\n/* user check sdr for NULL \n * and handle error */\n

    Description

    Returns handle for the SDR data store used for BP, to enable creation and interrogation of bundle payloads (application data units). Since the SDR handle is needed by many APIs, this function is typically executed early in the user's application in order to access other BP services.

    "},{"location":"BP-Service-API/#bp_detach","title":"bp_detach","text":"

    Function Prototype

    void bp_detach( )\n

    Parameters

    Return Value

    Description

    Terminates all access to BP functionality for the invoking process.
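    As a minimal sketch (the application logic in the middle is a placeholder), a typical attach/work/detach lifecycle looks like this:

    ```c
    #include <stdio.h>
    #include "bp.h"

    int main(void)
    {
        /* Attach to the BP service; negative return indicates failure. */
        if (bp_attach() < 0)
        {
            printf("Can't attach to BP.\n");
            return 1;
        }

        /* ... application work: open endpoints, send/receive bundles ... */

        /* Relinquish all access to BP functionality for this process. */
        bp_detach();
        return 0;
    }
    ```
    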

    "},{"location":"BP-Service-API/#bp_open","title":"bp_open","text":"

    Function Prototype

    int bp_open(char *eid, BpSAP *ionsapPtr)\n

    Parameters

    Return Value

    Example Call

    if (bp_open(ownEid, &sap) < 0)\n{\n        putErrmsg(\"bptrace can't open own endpoint.\", ownEid);\n\n        /* user's error handling function here */\n}\n

    Description

    Opens the application's access to the BP endpoint identified by the string at eid, so that the application can take delivery of bundles destined for the indicated endpoint. This SAP can also be used for sending bundles whose source is the indicated endpoint.

    Please note that all bundles sent via this SAP will be subject to immediate destruction upon transmission, i.e., no bundle addresses will be returned by bp_send for use in tracking, suspending/resuming, or cancelling transmission of these bundles.

    On success, places a value in *ionsapPtr that can be supplied to future bp function invocations.

    NOTE: To allow bp_send to return a bundle address for tracking purposes, please use bp_open_source instead.

    "},{"location":"BP-Service-API/#bp_open_source","title":"bp_open_source","text":"

    Function Prototype

    int bp_open_source(char *eid, BpSAP *ionsapPtr, int detain)\n

    Parameters

    Return Value

    Example Call

    if (bp_open_source(ownEid, &txSap, 1) < 0)\n{\n        putErrmsg(\"can't open own 'send' endpoint.\", ownEid);\n\n        /* user error handling routine here */\n}\n

    Description

    Opens the application's access to the BP endpoint identified by eid, so that the application can send bundles whose source is the indicated endpoint. If and only if the value of detain is non-zero, citing this SAP in an invocation of bp_send() will cause the address of the newly issued bundle to be returned for use in tracking, suspending/resuming, or cancelling transmission of this bundle.

    USE THIS FEATURE WITH GREAT CARE: such a bundle will continue to occupy storage resources until it is explicitly released by an invocation of bp_release() or until its time to live expires, so bundle detention increases the risk of resource exhaustion. (If the value of detain is zero, all bundles sent via this SAP will be subject to immediate destruction upon transmission.)

    On success, places a value in *ionsapPtr that can be supplied to future bp function invocations and returns 0. Returns -1 on any error.

    "},{"location":"BP-Service-API/#bp_send","title":"bp_send","text":"

    Function Prototype

    int bp_send(BpSAP sap, char *destEid, char *reportToEid, \n             int lifespan, int classOfService, BpCustodySwitch custodySwitch, \n             unsigned char srrFlags, int ackRequested, \n             BpAncillaryData *ancillaryData, Object adu, Object *newBundle)\n

    Parameters

    BP_MINIMUM_LATENCY designates the bundle as \"critical\" for the\npurposes of Contact Graph Routing.\n\nBP_BEST_EFFORT signifies that non-reliable convergence-layer protocols, as\navailable, may be used to transmit the bundle.  Notably, the bundle may be\nsent as \"green\" data rather than \"red\" data when issued via LTP.\n\nBP_DATA_LABEL_PRESENT signifies whether or not the value of _dataLabel_\nin _ancillaryData_ must be encoded into the ECOS block when the bundle is\ntransmitted.\n

    NOTE: For Bundle Protocol v7, no Extended Class of Service, or equivalent, has been standardized yet. This capability, however, has been retained from BPv6 and is available to the BPv7 implementation in ION.

    Return Value

    Example Call

    if (bp_send(sap, destEid, reportToEid, ttl, priority,\n    custodySwitch, srrFlags, 0, &ancillaryData,\n    traceZco, &newBundle) <= 0)\n{\n        putErrmsg(\"bptrace can't send file in bundle.\",\n                        fileName);\n\n        /* user error handling code goes here */\n}\n

    Description

    Sends a bundle to the endpoint identified by destEid, from the source endpoint as provided to the bp_open() call that returned sap.

    When sap is NULL, the transmitted bundle is anonymous, i.e., the source of the bundle is not identified. This is legal, but anonymous bundles cannot be uniquely identified; custody transfer and status reporting therefore cannot be requested for an anonymous bundle.

    The function returns 1 on success, 0 on user error, -1 on any system error.

    If 0 is returned, then an invalid argument value was passed to bp_send(); a message to this effect will have been written to the log file.

    If 1 is returned, then either the destination of the bundle was \"dtn:none\" (the bit bucket) or the ADU has been accepted and queued for transmission in a bundle. In the latter case, if and only if sap was a reference to a BpSAP returned by an invocation of bp_open_source() that had a non-zero value in the detain parameter, then newBundle must be non-NULL and the address of the newly created bundle within the ION database is placed in newBundle. This address can be used to track, suspend/resume, or cancel transmission of the bundle.
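    A condensed sketch of the full send path, modeled on ION's bpsource utility, shows how an application data unit is first copied into SDR heap space, wrapped in a zero-copy object, and then handed to bp_send(). The helper function name sendText and its abbreviated error handling are illustrative, not part of the API:

    ```c
    #include <string.h>
    #include "bp.h"

    /* Sketch: wrap a text buffer in an SDR ZCO and send it as one bundle.
     * Modeled on bpsource.c; error handling is abbreviated. */
    static int sendText(BpSAP sap, char *destEid, char *text)
    {
        Sdr     sdr = bp_get_sdr();
        int     length = strlen(text) + 1;
        Object  extent;
        Object  zco;

        /* Copy the application data into SDR heap space. */
        if (sdr_begin_xn(sdr) == 0) return -1;
        extent = sdr_malloc(sdr, length);
        if (extent)
        {
            sdr_write(sdr, extent, text, length);
        }
        if (sdr_end_xn(sdr) < 0 || extent == 0) return -1;

        /* Wrap the SDR object in a zero-copy object. */
        zco = ionCreateZco(ZcoSdrSource, extent, 0, length,
                BP_STD_PRIORITY, 0, ZcoOutbound, NULL);
        if (zco == 0 || zco == (Object) ERROR) return -1;

        /* Send: 300-second lifetime, standard priority, no custody,
         * no status reports, no detention (newBundle is NULL). */
        return bp_send(sap, destEid, NULL, 300, BP_STD_PRIORITY,
                NoCustodyRequested, 0, 0, NULL, zco, NULL);
    }
    ```
    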

    "},{"location":"BP-Service-API/#bp_track","title":"bp_track","text":"

    Function Prototype

    int bp_track(Object bundle, Object trackingElt)\n

    Parameters

    Return Value

    Example Call

    /* a list of bundles in SDR */\nObject bundleList;\n\n/* a bundle object in SDR */\nObject bundleObject;\n\n/* list element referencing the bundle */\nObject bundleElt;\n\nbundleElt = sdr_list_insert_last(sdr, bundleList,\n                bundleObject);\nif (bp_track(bundleObject, bundleElt) < 0)\n{\n        sdr_cancel_xn(sdr);\n        putErrmsg(\"Can't track bundle.\", NULL);\n\n        /* user error handling code goes here */\n}\n

    The bundleList is managed via the sdr_list library of APIs.

    Description

    Adds trackingElt to the list of \"tracking\" references in bundle. trackingElt must be the address of an SDR list element -- whose data is the address of this same bundle -- within some list of bundles that is privately managed by the application. Upon destruction of the bundle this list element will automatically be deleted, thus removing the bundle from the application's privately managed list of bundles. This enables the application to keep track of bundles that it is operating on without risk of inadvertently de-referencing the address of a nonexistent bundle.

    "},{"location":"BP-Service-API/#bp_untrack","title":"bp_untrack","text":"

    Function Prototype

    void bp_untrack(Object bundle, Object trackingElt)\n

    Parameters

    Return Value

    Description

    Removes trackingElt from the list of \"tracking\" references in bundle, if it is in that list. Does not delete trackingElt itself.
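    Since bp_untrack() does not delete the list element itself, an application that is done tracking a bundle typically deletes its own element afterward. A sketch, assuming bundleObject and bundleElt were set up as in the bp_track example and with abbreviated error handling:

    ```c
    #include "bp.h"

    /* Sketch: stop tracking a bundle and delete the application's own
     * list element. */
    static void stopTracking(Sdr sdr, Object bundleObject, Object bundleElt)
    {
        if (sdr_begin_xn(sdr) == 0)
        {
            return;
        }

        /* Remove the element from the bundle's tracking list. */
        bp_untrack(bundleObject, bundleElt);

        /* bp_untrack does not delete the element itself. */
        sdr_list_delete(sdr, bundleElt, NULL, NULL);
        if (sdr_end_xn(sdr) < 0)
        {
            putErrmsg("Can't untrack bundle.", NULL);
        }
    }
    ```
    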

    "},{"location":"BP-Service-API/#bp_suspend","title":"bp_suspend","text":"

    Function Prototype

    int bp_suspend(Object bundle)\n

    Parameters

    Return Value

    Description

    Suspends transmission of bundle. Has no effect if bundle is \"critical\" (i.e., has the extended class of service BP_MINIMUM_LATENCY flag set) or if the bundle is already suspended. Otherwise, reverses the enqueuing of the bundle to its selected transmission outduct and places it in the \"limbo\" queue until the suspension is lifted by calling bp_resume. Returns 0 on success, -1 on any error.

    "},{"location":"BP-Service-API/#bp_resume","title":"bp_resume","text":"

    Function Prototype

    int bp_resume(Object bundle)\n

    Parameters

    Return Value

    Description

    Terminates suspension of transmission of bundle. Has no effect if bundle is \"critical\" (i.e., has the extended class of service BP_MINIMUM_LATENCY flag set) or is not suspended. Otherwise, removes the bundle from the \"limbo\" queue and queues it for route re-computation and re-queuing. Returns 0 on success, -1 on any error.

    "},{"location":"BP-Service-API/#bp_cancel","title":"bp_cancel","text":"

    Function Prototype

    int bp_cancel(Object bundle)\n

    Parameters

    Return Value

    Description

    Cancels transmission of bundle. If the indicated bundle is currently queued for forwarding, transmission, or retransmission, it is removed from the relevant queue and destroyed exactly as if its Time To Live had expired. Returns 0 on success, -1 on any error.

    "},{"location":"BP-Service-API/#bp_release","title":"bp_release","text":"

    Function Prototype

    int bp_release(Object bundle)\n

    Parameters

    Return Value

    Description

    Releases a detained bundle for destruction when all retention constraints have been removed. After a detained bundle has been released, the application can no longer track, suspend/resume, or cancel its transmission. Returns 0 on success, -1 on any error.
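    The suspend/resume/cancel/release calls are typically used together to manage a detained bundle. The sketch below (the function name manageBundle and the linkIsDown flag are illustrative, with abbreviated error handling) assumes newBundle was returned by bp_send() via a SAP opened with bp_open_source() and a non-zero detain value:

    ```c
    #include "bp.h"

    /* Sketch: lifecycle management of a detained bundle. */
    static void manageBundle(Object newBundle, int linkIsDown)
    {
        if (linkIsDown)
        {
            /* Park the bundle in the "limbo" queue. */
            if (bp_suspend(newBundle) < 0)
            {
                putErrmsg("Can't suspend bundle.", NULL);
                return;
            }

            /* ... later, when the link recovers ... */
            if (bp_resume(newBundle) < 0)
            {
                putErrmsg("Can't resume bundle.", NULL);
                return;
            }
        }

        /* When the application no longer needs to manage the bundle,
         * release it so it can be destroyed once all retention
         * constraints are removed. */
        if (bp_release(newBundle) < 0)
        {
            putErrmsg("Can't release bundle.", NULL);
        }
    }
    ```
    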

    NOTE: bundles sent through a Bundle Protocol endpoint opened via bp_open_source with detain set to a non-zero value will not be destroyed, even after successful transmission, until their time-to-live expires or they are explicitly released via bp_release.

    "},{"location":"BP-Service-API/#bp_receive","title":"bp_receive","text":"

    Function Prototype

    int bp_receive(BpSAP sap, BpDelivery *dlvBuffer, int timeoutSeconds)\n

    Parameters

    Return Value

    Example Call

    if (bp_receive(state.sap, &dlv, BP_BLOCKING) < 0)\n{\n        putErrmsg(\"bpsink bundle reception failed.\", NULL);\n\n        /* user code to handle error or timeout*/\n}\n

    In this example, BP_BLOCKING (defined as -1) means that the call will block until a bundle is received, unless interrupted by bp_interrupt().

    Description

    Receives a bundle, or reports on some failure of bundle reception activity.

    The \"result\" field of the dlvBuffer structure will be used to indicate the outcome of the data reception activity.

    If at least one bundle destined for the endpoint for which this SAP is opened has not yet been delivered to the SAP, then the payload of the oldest such bundle will be returned in dlvBuffer->adu and dlvBuffer->result will be set to BpPayloadPresent. If there is no such bundle, bp_receive() blocks for up to timeoutSeconds while waiting for one to arrive.

    If timeoutSeconds is BP_POLL (i.e., zero) and no bundle is awaiting delivery, or if timeoutSeconds is greater than zero but no bundle arrives before timeoutSeconds have elapsed, then dlvBuffer->result will be set to BpReceptionTimedOut. If timeoutSeconds is BP_BLOCKING (i.e., -1) then bp_receive() blocks until either a bundle arrives or the function is interrupted by an invocation of bp_interrupt().

    dlvBuffer->result will be set to BpReceptionInterrupted in the event that the calling process received and handled some signal other than SIGALRM while waiting for a bundle.

    dlvBuffer->result will be set to BpEndpointStopped in the event that the operation of the indicated endpoint has been terminated.

    The application data unit delivered in the data delivery structure, if any, will be a \"zero-copy object\" reference. Use zco reception functions (see zco(3)) to read the content of the application data unit.

    Be sure to call bp_release_delivery() after every successful invocation of bp_receive().

    The function returns 0 on success, -1 on any error.
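    Putting these pieces together, a typical reception loop, modeled on ION's bpsink utility, reads each payload through a ZCO reader and always releases the delivery. The function name receiveLoop and buffer size are illustrative, and error handling is abbreviated:

    ```c
    #include "bp.h"

    #define BUFLEN  1024

    /* Sketch: receive bundles, read payloads via ZCO reception
     * functions, release each delivery. */
    static void receiveLoop(BpSAP sap)
    {
        Sdr        sdr = bp_get_sdr();
        BpDelivery dlv;
        ZcoReader  reader;
        char       buffer[BUFLEN];
        vast       len;
        int        running = 1;

        while (running)
        {
            if (bp_receive(sap, &dlv, BP_BLOCKING) < 0)
            {
                putErrmsg("Bundle reception failed.", NULL);
                return;
            }

            switch (dlv.result)
            {
            case BpPayloadPresent:
                /* Read the payload content from the ZCO. */
                zco_start_receiving(dlv.adu, &reader);
                if (sdr_begin_xn(sdr) == 0) return;
                len = zco_receive_source(sdr, &reader, BUFLEN, buffer);
                if (sdr_end_xn(sdr) < 0 || len < 0)
                {
                    putErrmsg("Can't read payload.", NULL);
                    running = 0;
                }
                break;

            case BpEndpointStopped:
                running = 0;
                break;

            default:    /* timed out or interrupted */
                break;
            }

            /* Always release delivery resources; non-zero releaseAdu
             * destroys our reference to the payload ZCO. */
            bp_release_delivery(&dlv, 1);
        }
    }
    ```
    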

    "},{"location":"BP-Service-API/#bp_interrupt","title":"bp_interrupt","text":"

    Function Prototype

    void bp_interrupt(BpSAP sap)\n

    Parameters

    Return Value

    Description

    Interrupts a bp_receive() invocation that is currently blocked. This function is designed to be called from a signal handler; for this purpose, sap may need to be obtained from a static variable.
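    A sketch of the signal-handler pattern described above; the static variable _sap and the handler name handleQuit are illustrative:

    ```c
    #include <signal.h>
    #include "bp.h"

    /* Keep the SAP in a static variable so the handler can reach it. */
    static BpSAP  _sap;

    static void handleQuit(int signum)
    {
        (void) signum;

        /* Unblock the bp_receive() call in the main loop. */
        bp_interrupt(_sap);
    }

    /* In main(), after bp_open(ownEid, &_sap):
     *     signal(SIGINT, handleQuit);
     * A blocked bp_receive(_sap, &dlv, BP_BLOCKING) will then return
     * with dlv.result set to BpReceptionInterrupted on Ctrl-C. */
    ```
    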

    "},{"location":"BP-Service-API/#bp_release_delivery","title":"bp_release_delivery","text":"

    Function Prototype

    void bp_release_delivery(BpDelivery *dlvBuffer, int releaseAdu)\n

    Parameters

    Return Value

    Description

    Releases resources allocated to the indicated delivery by dlvBuffer, which is returned by bp_receive. releaseAdu is a Boolean parameter: if non-zero, the ADU ZCO reference in dlvBuffer (if any) is destroyed, causing the ZCO itself to be destroyed if no other references to it remain.

    "},{"location":"BP-Service-API/#bp_close","title":"bp_close","text":"

    Function Prototype

    void bp_close(BpSAP sap)\n

    Parameters

    Return Value

    Description

    Terminates the application's access to the BP endpoint identified by the eid cited by the indicated service access point. The application relinquishes its ability to take delivery of bundles destined for the indicated endpoint and to send bundles whose source is the indicated endpoint.

    "},{"location":"BP-Service-API/#walk-through-of-bpsourcec","title":"Walk Through of bpsource.c","text":"

    TO BE UPDATED.

    "},{"location":"BP-Service-API/#compiling-and-linking","title":"Compiling and Linking","text":""},{"location":"BP-Service-API/#zero-copy-object-zco-types","title":"Zero-copy Object (ZCO) Types","text":"

    We have shown that the way to hand user data from an application to BP is via a zero-copy object (ZCO).

    In general there are two types of ZCO that are relevant to a user.

    "},{"location":"BP-Service-API/#sdr-zco","title":"SDR ZCO","text":""},{"location":"BP-Service-API/#file-zco","title":"File ZCO","text":""},{"location":"Basic-Configuration-File-Tutorial/","title":"Basic Configuration File Tutorial","text":""},{"location":"Basic-Configuration-File-Tutorial/#programs-in-ion","title":"Programs in ION","text":"

    The following tools are available to you after ION is built:

    Daemon and Configuration

    Simple Sending and Receiving

    Testing and Benchmarking

    "},{"location":"Basic-Configuration-File-Tutorial/#ion-logging","title":"ION Logging","text":"

    It is important to note that, by default, the administrative programs will all trigger the creation of a log file called\u00a0ion.log\u00a0in the directory where the program is called. This means that write-access in your current working directory is required. The log file itself will contain the expected log information from administrative daemons, but it will also contain error reports from simple applications such as\u00a0bpsink. This is important to note since the BP applications may not be reporting all error information to stdout or stderr.

    "},{"location":"Basic-Configuration-File-Tutorial/#starting-the-ion-daemon","title":"Starting the ION Daemon","text":"

    A script has been created which allows a more streamlined configuration and startup of an ION node. This script is called\u00a0ionstart, and it has the following syntax. Don't run it yet; we still have to configure it!

    ionstart -I <filename>

    filename: This is the name for configuration file which the script will attempt to use for the various configuration commands. The script will perform a sanity check on the file, splitting it into command sections appropriate for each of the administration programs.

    Configuration information (such as routes, connections, etc) can be specified one of two ways for any of the individual administration programs:

    1. (Recommended) Creating a configuration file and passing it to\u00a0ionadmin,\u00a0bpadmin,\u00a0ipnadmin...\u00a0either directly or via the\u00a0ionstart\u00a0helper script. 2. Manually typing configuration commands into the terminal for each administration program.

    You can find appropriate commands in the following sections.

    "},{"location":"Basic-Configuration-File-Tutorial/#configuration-files-overview","title":"Configuration Files Overview","text":"

    There are five configuration files about which you should be aware.

    The first,\u00a0ionadmin's configuration file, assigns an identity (node number) to the node, optionally configures the resources that will be made available to the node, and specifies contact bandwidths and one-way transmission times. Specifying the \"contact plan\" is important in deep-space scenarios where the bandwidth must be managed and where acknowledgments must be timed according to propagation delays. It is also vital to the function of contact-graph routing.

    The second,\u00a0ltpadmin's configuration file, specifies spans, transmission speeds, and resources for the Licklider Transfer Protocol convergence layer.

    The third,\u00a0ipnadmin's configuration file, maps endpoints at \"neighboring\" (topologically adjacent, directly reachable) nodes to convergence-layer addresses. Our examples use TCP/IP and LTP (over IP/UDP), so it maps endpoint IDs to IP addresses. This file populates the ION analogue to an ARP cache for the \"ipn\" naming scheme.

    The fourth,\u00a0bpadmin's configuration file, specifies all of the open endpoints for delivery on your local end and specifies which convergence layer protocol(s) you intend to use. With the exception of LTP, most convergence layer adapters are fully configured in this file.

    The fifth,\u00a0dtn2admin's configuration file, populates the ION analogue to an ARP cache for the \"dtn\" naming scheme.

    "},{"location":"Basic-Configuration-File-Tutorial/#the-ion-configuration-file","title":"The ION Configuration File","text":"

    Given to\u00a0ionadmin\u00a0either as a file or from the daemon command line, this file configures contacts for the ION node. We will assume that the local node's identification number is 1.

    This file specifies contact times and one-way light times between nodes. This is useful in deep-space scenarios: for instance, Mars may be 20 light-minutes away, or 8. Though only some transport protocols make use of this time (currently, only LTP), it must be specified for all links nonetheless. Times may be relative (prefixed with a + from current time) or absolute. Absolute times are in the format\u00a0yyyy/mm/dd-hh:mm:ss. By default, the contact-graph routing engine will make bundle routing decisions based on the contact information provided.

    The configuration file lines are as follows:

    1 1 ''

    This command will initialize the ion node to be node number 1.

    1\u00a0refers to this being the initialization or ''first'' command. 1\u00a0specifies the node number of this ion node. (IPN node 1). ''\u00a0specifies the name of a file of configuration commands for the node's use of shared memory and other resources (suitable defaults are applied if you leave this argument as an empty string).

    s

    This will start the ION node. It mostly functions to officially \"start\" the node in a specific instant; it causes all of ION's protocol-independent background daemons to start running.

    a contact +1 +3600 1 1 100000

    specifies a transmission opportunity for a given time duration between two connected nodes (or, in this case, a loopback transmission opportunity).

    a\u00a0adds this entry in the configuration table. contact\u00a0specifies that this entry defines a transmission opportunity. +1\u00a0is the start time for the contact (relative to when the\u00a0s\u00a0command is issued). +3600\u00a0is the end time for the contact (relative to when the\u00a0s\u00a0command is issued). 1\u00a0is the source node number. 1\u00a0is the destination node number. 100000\u00a0is the maximum rate at which data is expected to be transmitted from the source node to the destination node during this time period (here, it is 100000 bytes / second).

    a range +1 +3600 1 1 1

    specifies a distance between nodes, expressed as a number of light seconds, where each element has the following meaning:

    a\u00a0adds this entry in the configuration table. range\u00a0declares that what follows is a distance between two nodes. +1\u00a0is the earliest time at which this is expected to be the distance between these two nodes (relative to the time\u00a0s\u00a0was issued). +3600\u00a0is the latest time at which this is still expected to be the distance between these two nodes (relative to the time\u00a0s\u00a0was issued). 1\u00a0is one of the two nodes in question. 1\u00a0is the other node. 1\u00a0is the distance between the nodes, measured in light seconds, also sometimes called the \"one-way light time\" (here, one light second is the expected distance).

    m production 1000000

    specifies the maximum rate at which data will be produced by the node.

    m\u00a0specifies that this is a management command. production\u00a0declares that this command declares the maximum rate of data production at this ION node. 1000000\u00a0specifies that at most 1000000 bytes/second will be produced by this node.

    m consumption 1000000

    specifies the maximum rate at which data can be consumed by the node.

    m\u00a0specifies that this is a management command. consumption\u00a0declares that this command declares the maximum rate of data consumption at this ION node. 1000000\u00a0specifies that at most 1000000 bytes/second will be consumed by this node.

    This will make a final configuration file\u00a0host1.ionrc\u00a0which looks like this:

    1 1 ''\ns\na contact +1 +3600 1 1 100000\na range +1 +3600 1 1 1\nm production 1000000\nm consumption 1000000\n
    "},{"location":"Basic-Configuration-File-Tutorial/#the-licklider-transfer-protocol-configuration-file","title":"The Licklider Transfer Protocol Configuration File","text":"

    Given to\u00a0ltpadmin\u00a0as a file or from the command line, this file configures the LTP engine itself. We will assume the local IPN node number is 1; in ION, node numbers are used as the LTP engine numbers.

    1 32

    This command will initialize the LTP engine:

    1\u00a0refers to this being the initialization or ''first'' command. 32\u00a0is an estimate of the maximum total number of LTP ''block'' transmission sessions - for all spans - that will be concurrently active in this LTP engine. It is used to size a hash table for session lookups.

    a span 1 32 32 1400 10000 1 'udplso localhost:1113'

    This command defines an LTP engine 'span':

    a\u00a0indicates that this will add something to the engine.

    span\u00a0indicates that an LTP span will be added.

    1\u00a0is the engine number for the span, the number of the remote engine to which LTP segments will be transmitted via this span. In this case, because the span is being configured for loopback, it is the number of the local engine, i.e., the local node number. This will have to match an outduct in Section\u00a02.6.

    32\u00a0specifies the maximum number of LTP ''block'' transmission sessions that may be active on this span. The product of the mean block size and the maximum number of transmission sessions is effectively the LTP flow control ''window'' for this span: if it is less than the bandwidth-delay product for traffic between the local LTP engine and this span's remote LTP engine, then you will be under-utilizing that link. We often try to size each block to be about one second's worth of transmission, so to select a good value for this parameter you can simply divide the span's bandwidth-delay product (data rate times distance in light seconds) by your best guess at the mean block size.

    The second\u00a032\u00a0specifies the maximum number of LTP ''block'' reception sessions that may be active on this span. When data rates in both directions are the same, this is usually the same value as the maximum number of transmission sessions.

    1400\u00a0is the number of bytes in a single segment. In this case, LTP runs atop UDP/IP on Ethernet, so we account for some packet overhead and use 1400.

    10000\u00a0is the LTP aggregation size limit, in bytes. LTP will aggregate multiple bundles into blocks for transmission. This value indicates that the block currently being aggregated will be transmitted as soon as its aggregate size exceeds 10000 bytes.

    1\u00a0is the LTP aggregation time limit, in seconds. This value indicates that the block currently being aggregated will be transmitted 1 second after aggregation began, even if its aggregate size is still less than the aggregation size limit.

    'udplso localhost:1113'\u00a0is the command used to implement the link itself. The link is implemented via UDP, sending segments to the localhost Internet interface on port 1113 (the IANA default port for LTP over UDP).

    s 'udplsi localhost:1113'

    This command starts the LTP engine itself:

    s\u00a0starts the LTP engine.

    'udplsi localhost:1113'\u00a0is the link service input task. In this case, the input ''duct'' is a UDP listener on the local host using port 1113.

    This means that the entire configuration file\u00a0host1.ltprc\u00a0looks like this:

    1 32\na span 1 32 32 1400 10000 1 'udplso localhost:1113'\ns 'udplsi localhost:1113'\n
    "},{"location":"Basic-Configuration-File-Tutorial/#the-bundle-protocol-configuration-file","title":"The Bundle Protocol Configuration File","text":"

    Given to\u00a0bpadmin\u00a0either as a file or from the daemon command line, this file configures the endpoints through which this node's Bundle Protocol Agent (BPA) will communicate. We will assume the local BPA's node number is 1; as for LTP, in ION node numbers are used to identify bundle protocol agents.

    1

    This initializes the bundle protocol:

    1\u00a0refers to this being the initialization or ''first'' command.

    a scheme ipn 'ipnfw' 'ipnadminep'

    This adds support for a new Endpoint Identifier (EID) scheme:

    a\u00a0means that this command will add something.

    scheme\u00a0means that this command will add a scheme.

    ipn\u00a0is the name of the scheme to be added.

    'ipnfw'\u00a0is the name of the IPN scheme's forwarding engine daemon.

    'ipnadminep'\u00a0is the name of the IPN scheme's custody transfer management daemon.

    a endpoint ipn:1.0 q

    This command establishes this BP node's membership in a BP endpoint:

    a\u00a0means that this command will add something.

    endpoint\u00a0means that this command adds an endpoint.

    ipn\u00a0is the scheme name of the endpoint.

    1.0\u00a0is the scheme-specific part of the endpoint. For the IPN scheme the scheme-specific part always has the form\u00a0nodenumber.servicenumber. Each node must be a member of the endpoint whose node number is the node's own node number and whose service number is 0, indicating administrative traffic.

    q\u00a0means that the behavior of the engine, upon receipt of a new bundle for this endpoint, is to queue it until an application accepts the bundle. The alternative is to silently discard the bundle if no application is actively listening; this is specified by replacing\u00a0q\u00a0with\u00a0x.

    a endpoint ipn:1.1 q

    a endpoint ipn:1.2 q

    These specify two more endpoints that will be used for test traffic.

    a protocol ltp 1400 100

    This command adds support for a convergence-layer protocol:

    a\u00a0means that this command will add something.

    protocol\u00a0means that this command will add a convergence-layer protocol.

    ltp\u00a0is the name of the convergence-layer protocol.

    1400\u00a0is the estimated size of each convergence-layer protocol data unit (in bytes); in this case, the value is based on the size of a UDP/IP packet on Ethernet.

    100\u00a0is the estimated size of the protocol transmission overhead (in bytes) per convergence-layer protocol data unit sent.

    a induct ltp 1 ltpcli

    This command adds an induct, through which incoming bundles can be received from other nodes:

    a\u00a0means that this command will add something.

    induct\u00a0means that this command will add an induct.

    ltp\u00a0is the convergence layer protocol of the induct.

    1\u00a0is the identifier of the induct, in this case the ID of the local LTP engine.

    ltpcli\u00a0is the name of the daemon used to implement the induct.

    a outduct ltp 1 ltpclo

    This command adds an outduct, through which outgoing bundles can be sent to other nodes:

    a\u00a0means that this command will add something.

    outduct\u00a0means that this command will add an outduct.

    ltp\u00a0is the convergence layer protocol of the outduct.

    1\u00a0is the identifier of the outduct, the ID of the convergence-layer protocol induct of some remote node. See Section\u00a02.5\u00a0for remote LTP engine IDs.

    ltpclo\u00a0is the name of the daemon used to implement the outduct.

    s

    This command starts the bundle engine including all daemons for the inducts and outducts.

    That means that the entire configuration file\u00a0host1.bprc\u00a0looks like this:

    1\na scheme ipn 'ipnfw' 'ipnadminep'\na endpoint ipn:1.0 q\na endpoint ipn:1.1 q\na endpoint ipn:1.2 q\na protocol ltp 1400 100\na induct ltp 1 ltpcli\na outduct ltp 1 ltpclo\ns\n
    "},{"location":"Basic-Configuration-File-Tutorial/#ipn-routing-configuration","title":"IPN Routing Configuration","text":"

    As noted earlier, this file is used to build ION's analogue to an ARP cache, a table of ''egress plans.'' It specifies which outducts to use in order to forward bundles to the local node's neighbors in the network. Since we only have one outduct, for forwarding bundles to one place (the local node), we only have one egress plan.

    a plan 1 ltp/1

    This command defines an egress plan for bundles to be transmitted to the local node:

    a\u00a0means this command adds something.

    plan\u00a0means this command adds an egress plan.

    1\u00a0is the node number of the remote node. In this case, that is the local node's own node number; we're configuring for loopback.

    ltp/1\u00a0is the identifier of the outduct through which to transmit bundles in order to convey them to this ''remote'' node.

    This means that the entire configuration file\u00a0host1.ipnrc\u00a0looks like this:

    a plan 1 ltp/1

    "},{"location":"Basic-Configuration-File-Tutorial/#testing-your-connection","title":"Testing Your Connection","text":"

    Assuming no errors occurred with the configuration above, we are now ready to test loopback communications. In one terminal, run the start script mentioned earlier. Here it is again, in case you didn't write it down:

    ionstart -i host1.ionrc -l host1.ltprc -b host1.bprc -p host1.ipnrc

    This command will run the appropriate administration programs, in order, with the appropriate configuration files. Don't worry that the command is lengthy and unwieldy; we will show you how to create a cleaner, single configuration file later.

    Once the daemon is started, run:

    bpsink ipn:1.1

    This will begin listening on the endpoint ID\u00a0ipn:1.1, i.e. node number 1, service number 1, which is used for test traffic.

    Now open another terminal and run the command:

    bpsource ipn:1.1

    This will begin sending messages you type to the Endpoint ID\u00a0ipn:1.1, which is currently being listened to by\u00a0bpsink. Type messages into\u00a0bpsource, press enter, and see if they are reported by\u00a0bpsink.

    If so, you're ready for bigger and better things. If not, check the following:

    Do you have write permission for your current directory? If not, you will not be able to start the daemon, as it has to write out to the ion.log file. Are your configuration files exactly as specified, except for IP address changes? Are you running ION on one of the supported platforms?

    If you are still having problems, you can ask for help on the ION users' list or file an ION bug report.

    "},{"location":"Basic-Configuration-File-Tutorial/#stopping-the-daemon","title":"Stopping the Daemon","text":"

    As the daemon launches many ducts and helper applications, it can be complicated to shut everything down. To help with this, a script called\u00a0ionstop, similar to\u00a0ionstart, tears down the ION node in one step. You can call it like so:

    ionstop

    After stopping the daemon, it can be restarted using the same procedures as outlined above. Do remember that the ion.log file is still present, and will just keep growing as you experiment with ION.

    IMPORTANT:\u00a0The user account that runs\u00a0ionstart\u00a0must also run\u00a0ionstop. Otherwise, the shared memory vital to ION's functionality will remain occupied, and no account will be able to successfully restart the daemon.

    "},{"location":"Basic-Configuration-File-Tutorial/#more-advanced-usage","title":"More Advanced Usage","text":"

    Detailed documentation of ION and its applications is available via the man pages. It is suggested that you start with\u00a0man ion, as this is an overview man page listing all available ION packages.

    "},{"location":"Basic-Configuration-File-Tutorial/#ionscript-for-simplified-configuration-files","title":"Ionscript for Simplified Configuration Files","text":"

    The most difficult and cumbersome method of starting an ION node is to manually run the various administration programs in order, manually typing configuration commands all the way. It is much more efficient and less error-prone to place the configuration commands into a configuration file and use that as input to the administration program, but this is still cumbersome, as you must invoke each administration program in order. The\u00a0ionstart\u00a0program will automatically execute the appropriate administration programs with their respective configuration files in order. Unfortunately, as seen in the previous sections,\u00a0the command is lengthy. This is why the\u00a0ionscript\u00a0script was added to make things even easier.

    The ionscript\u00a0script concatenates the configuration files into one large file. The format of this large configuration file is simply to bookend each configuration section with the lines:\u00a0## begin PROGRAM\u00a0and\u00a0## end PROGRAM, where\u00a0PROGRAM\u00a0is the name of the administration program to which the enclosed configuration commands should be sent (such as\u00a0ionadmin, bpadmin, ipnadmin).

    To create a single file\u00a0host1.rc\u00a0out of the various configuration files defined in the previous section, run this command:

    ionscript -i host1.ionrc -p host1.ipnrc -l host1.ltprc -b host1.bprc -O host1.rc
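    Assuming the configuration files developed in the previous sections, the resulting combined file host1.rc would look like this (each section bookended for the administration program that consumes it):

```
## begin ionadmin
1 1 ''
s
a contact +1 +3600 1 1 100000
a range +1 +3600 1 1 1
m production 1000000
m consumption 1000000
## end ionadmin
## begin ltpadmin
1 32
a span 1 32 32 1400 10000 1 'udplso localhost:1113'
s 'udplsi localhost:1113'
## end ltpadmin
## begin bpadmin
1
a scheme ipn 'ipnfw' 'ipnadminep'
a endpoint ipn:1.0 q
a endpoint ipn:1.1 q
a endpoint ipn:1.2 q
a protocol ltp 1400 100
a induct ltp 1 ltpcli
a outduct ltp 1 ltpclo
s
## end bpadmin
## begin ipnadmin
a plan 1 ltp/1
## end ipnadmin
```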

    The command can also be used to split the large\u00a0host1.rc\u00a0into the individual configuration files (so long as the large file is formatted correctly). Just run this command to revert the process:

    ionscript -i host1.ionrc -p host1.ipnrc -l host1.ltprc -b host1.bprc -I host1.rc

    This isn't very practical in this specific case (as you already have the individual files) but if you start with a single configuration file, this can be helpful.

    Once you have a single configuration file, starting the ION node is a single command:

    ionstart -I host1.rc

    Note that\u00a0ionstart\u00a0and\u00a0ionscript\u00a0require\u00a0sed\u00a0and\u00a0awk, but those are almost universally available on Unix-based systems. The two scripts will always sanity-check the large configuration file to ensure that the bookend lines are interpreted correctly, and they will warn you of any errors in the file. For further help, consult each script's usage message by running it with no arguments or with the\u00a0-h\u00a0argument.

    "},{"location":"Basic-Configuration-File-Tutorial/#examples-of-network-configurations","title":"Examples of Network Configurations","text":"

    For a simple single-node ION configuration - running multiple instances of ION in the same host, see the tutorial here.

    For a two-node configuration, see the tutorial here.

    For a multi-hop and also multi-network configuration, see this page.

    "},{"location":"CLA-API/","title":"Convergence Layer Adaptor - APIs","text":"

    ION currently provides several CLAs, including LTP, TCP, UDP, and STCP. It is also possible to develop a customized CLA using ION's APIs. This document describes the basic set of APIs used to develop a customized CLA.

    "},{"location":"CLA-API/#cla-apis","title":"CLA APIs","text":""},{"location":"CLA-API/#header","title":"Header","text":"
    #include \"bpP.h\"\n
    "},{"location":"CLA-API/#bpdequeue","title":"bpDequeue","text":"

    Function Prototype

    extern int bpDequeue(VOutduct *vduct,\n                    Object *outboundZco,\n                    BpAncillaryData *ancillaryData,\n                    int stewardship);\n

    This function is invoked by a convergence-layer output adapter (outduct) daemon to get a bundle that it is to transmit to some remote convergence-layer input adapter (induct) daemon.

    The function first pops the next (only) outbound bundle from the queue of outbound bundles for the indicated duct. If no such bundle is currently waiting for transmission, it blocks until one is [or until the duct is closed, at which time the function returns zero without providing the address of an outbound bundle ZCO].

    On obtaining a bundle, bpDequeue does DEQUEUE processing on the bundle's extension blocks; if this processing determines that the bundle is corrupt, the function returns zero while providing 1 (a nonsense address) in *outboundZco as the address of the outbound bundle ZCO. The CLO should handle this result by simply calling bpDequeue again.

    bpDequeue then catenates (serializes) the BP header information (primary block and all extension blocks) in the bundle and prepends that serialized header to the source data of the bundle's payload ZCO. Then it returns the address of that ZCO in *outboundZco for transmission at the convergence layer (possibly entailing segmentation that would be invisible to BP).

    Requested quality of service for the bundle is provided in *ancillaryData so that the requested QOS can be mapped to the QOS features of the convergence-layer protocol. For example, this is where a request for custody transfer is communicated to BIBE when the outduct daemon is one that does BIBE transmission. The stewardship argument controls the disposition of the bundle following transmission. Any value other than zero indicates that the outduct daemon is one that performs \"stewardship\" procedures. An outduct daemon that performs stewardship procedures will disposition the bundle as soon as the results of transmission at the convergence layer are known, by calling one of two functions: either bpHandleXmitSuccess or else bpHandleXmitFailure. A value of zero indicates that the outduct daemon does not perform stewardship procedures and will not disposition the bundle following transmission; instead, the bpDequeue function itself will assume that transmission at the convergence layer will be successful and will disposition the bundle on that basis.

    Return Values
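    Putting the above together, a CLO daemon's main transmission loop might look roughly like the following C-style sketch. This is not a definitive implementation: vduct setup is omitted, and sendZcoOverLink() is a hypothetical placeholder for the CLA's own link transmission logic. Stewardship is set to 1, so the daemon dispositions each bundle itself via bpHandleXmitSuccess or bpHandleXmitFailure (described below).

```c
/* Sketch of a CLO transmission loop; sendZcoOverLink() is a
 * placeholder, not part of the ION API.                        */
while (running)
{
    Object          bundleZco;
    BpAncillaryData ancillaryData;

    if (bpDequeue(vduct, &bundleZco, &ancillaryData, 1) < 0)
    {
        break;          /* System failure; shut down.           */
    }

    if (bundleZco == 0) /* Duct closed; no more bundles.        */
    {
        break;
    }

    if (bundleZco == 1) /* Corrupt bundle was skipped.          */
    {
        continue;       /* Simply call bpDequeue again.         */
    }

    if (sendZcoOverLink(bundleZco) < 0)
    {
        bpHandleXmitFailure(bundleZco); /* Queue for re-forwarding. */
    }
    else
    {
        bpHandleXmitSuccess(bundleZco); /* Destroy serialized bundle. */
    }
}
```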

    "},{"location":"CLA-API/#bphandlexmitsuccess","title":"bpHandleXmitSuccess","text":"

    Function Prototype

    extern int      bpHandleXmitSuccess(Object zco);\n

    This function is invoked by a convergence-layer output adapter (an outduct) on detection of convergence- layer protocol transmission success. It causes the serialized (catenated) outbound bundle in zco to be destroyed, unless some constraint (such as local delivery of a copy of the bundle) requires that bundle destruction be deferred.

    Return Values

    "},{"location":"CLA-API/#bphandlexmitfailure","title":"bpHandleXmitFailure","text":"

    Function Prototype

    extern int bpHandleXmitFailure(Object zco);\n

    This function is invoked by a convergence-layer output adapter (an outduct) on detection of a convergence- layer protocol transmission error. It causes the serialized (catenated) outbound bundle in zco to be queued up for re-forwarding.

    Return Values

    "},{"location":"CLA-API/#bpgetacqarea","title":"bpGetAcqArea","text":"
    extern AcqWorkArea  *bpGetAcqArea(VInduct *vduct);\n

    Allocates a bundle acquisition work area for use in acquiring inbound bundles via the indicated duct. This is typically invoked just once, at the beginning of CLA process initialization.

    Return Value

    "},{"location":"CLA-API/#bpreleaseacqarea","title":"bpReleaseAcqArea","text":"
    extern void     bpReleaseAcqArea(AcqWorkArea *workArea);\n

    Releases dynamically allocated bundle acquisition work area. This should be called before shutting down a CLA process.

    Return Value

    "},{"location":"CLA-API/#bpbeginacquisition","title":"bpBeginAcquisition","text":"
    extern int  bpBeginAcq( AcqWorkArea *workArea,\n                int authentic,\n                char *senderEid);\n

    This function is invoked by a convergence-layer input adapter to initiate acquisition of a new bundle via the indicated workArea. It initializes deserialization of an array of bytes constituting a single transmitted bundle. The "authentic" Boolean and "senderEid" string are knowledge asserted by the convergence-layer input adapter invoking this function: an assertion of authenticity of the data being acquired (e.g., per knowledge that the data were received via a physically secure medium) and, if non-NULL, an EID characterizing the node that sent this inbound bundle.

    Return Values

    "},{"location":"CLA-API/#bpcontinueacq","title":"bpContinueAcq","text":"
    extern int  bpContinueAcq(  AcqWorkArea *workArea,\n                char *bytes,\n                int length,\n                ReqAttendant *attendant,\n                unsigned char priority);\n

    This function continues acquisition of a bundle as initiated by an invocation of bpBeginAcq(). To do so, it appends the indicated array of bytes, of the indicated length, to the byte array that is encapsulated in workArea.

    bpContinueAcq is an alternative to bpLoadAcq, intended for use by convergence-layer adapters that incrementally acquire portions of concatenated bundles into byte-array buffers. The function transparently creates a zero-copy object for acquisition of the bundle, if one does not already exist, and appends \"bytes\" to the source data of that ZCO.

    The behavior of bpContinueAcq when currently available space for zero-copy objects is insufficient to contain this increment of bundle source data depends on the value of "attendant". If "attendant" is NULL, then bpContinueAcq will return 0 but will flag the acquisition work area for refusal of the bundle due to resource exhaustion (congestion). Otherwise (i.e., "attendant" points to a ReqAttendant structure, which MUST have already been initialized by ionStartAttendant()), bpContinueAcq will block until sufficient space is available or the attendant is paused or the function fails, whichever occurs first.

    \"priority\" is normally zero, but for the TCPCL convergence-layer receiver threads it is very high (255) because any delay in allocating space to an extent of TCPCL data delays the processing of TCPCL control messages, potentially killing TCPCL performance.

    Return Values

    "},{"location":"CLA-API/#bpcancelacq","title":"bpCancelAcq","text":"
    extern void     bpCancelAcq(    AcqWorkArea *workArea);\n

    Cancels acquisition of a new bundle via the indicated workArea, destroying the bundle acquisition ZCO of workArea.

    "},{"location":"CLA-API/#bpendacq","title":"bpEndAcq","text":"

    extern int      bpEndAcq(   AcqWorkArea *workArea);\n
    Concludes acquisition of a new bundle via the indicated workArea. This function is invoked after the convergence-layer input adapter has invoked either bpLoadAcq() or bpContinueAcq() [perhaps invoking the latter multiple times] such that all bytes of the transmitted bundle are now included in the bundle acquisition ZCO of workArea.

    Return Value
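    As a sketch of how these acquisition calls fit together, a CLI daemon's reception loop might look roughly like the following. Names other than the bp* calls are hypothetical placeholders; for simplicity this sketch makes one bpContinueAcq call per bundle, though a real CLA may invoke it many times per bundle as increments arrive.

```c
/* Sketch of a CLI acquisition loop; receiveBytesFromLink() is a
 * placeholder for the CLA's own link reception logic.          */
AcqWorkArea *work = bpGetAcqArea(vduct);    /* Once, at startup.    */
char         buffer[1400];
int          length;

while (work != NULL && running)
{
    length = receiveBytesFromLink(buffer, sizeof buffer);
    if (length <= 0)
    {
        break;                              /* Link closed or error. */
    }

    if (bpBeginAcq(work, 0, NULL) < 0)      /* Start a new bundle.   */
    {
        break;
    }

    if (bpContinueAcq(work, buffer, length, NULL, 0) < 0)
    {
        break;                              /* Append received bytes. */
    }

    if (bpEndAcq(work) < 0)                 /* Hand bundle off to BP. */
    {
        break;
    }
}

bpReleaseAcqArea(work);                     /* Once, at shutdown.    */
```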

    "},{"location":"CLA-API/#bploadacq-suitable-for-certain-cla-types","title":"bpLoadAcq (suitable for certain CLA types)","text":"

    extern int      bpLoadAcq(  AcqWorkArea *workArea,\n                    Object zco);\n
    This function continues the acquisition of a bundle as initiated by an invocation of bpBeginAcq(). To do so, it inserts the indicated zero-copy object - containing possibly multiple whole bundles in concatenated form - into workArea.

    bpLoadAcq is an alternative to bpContinueAcq, intended for use by convergence-layer adapters that natively acquire concatenated bundles into zero-copy objects such as LTP.

    Return Value

    "},{"location":"CLA-API/#setting-up-a-custom-cla-in-ion","title":"Setting up a custom CLA in ION","text":"

    This section is under construction

    "},{"location":"Configure-Multiple-Network-Interfaces/","title":"Configure ION for Multiple Network Interfaces","text":"

    Lab testing of an ION-based DTN network often uses only a single network. However, during deployment or operational testing, an ION network must often operate over multiple networks. To clarify how to configure ION for this, we consider the following hypothetical network configuration.

    The basic topology is illustrated here:

        +-------+  protocol a     protocol b  +-------+\n    |       |                             |       |\n    |  SC1  +-----+                  +--->+  MOC1 |\n    |  21   |     |                  |    |  24   |\n    +-------+     |     +-------+    |    +-------+\n                  +---->+       +----+\n          rfnet         |  GS   |          gsnet\n                  +---->+  23   +----+\n    +-------+     |     +-------+    |    +-------+\n    |       |     |                  |    |       |\n    |  SC2  +-----+                  +--->+  MOC2 |\n    |  22   |                             |  25   |\n    +-------+                             +-------+\n\nsubnet: 192.168.100.0/24      subnet:192.168.200.0/24\n
    "},{"location":"Configure-Multiple-Network-Interfaces/#induct-and-outduct-relationship","title":"Induct and Outduct Relationship","text":"

    ION associates each neighbor with a convergence layer protocol and an outduct. With the exception of the UDP convergence layer, each outduct is associated with an induct as well.

    When there are multiple neighbors using the same convergence layer protocol, only one induct is used to 'pair' with the multiple outducts of that protocol.

    If neighbors using the same protocol are all within the same network, then the induct is associated with the IP address of the ION node on that particular network.

    If the neighbors using the same protocol are spread across multiple networks, then the induct will need to be associated with INADDR_ANY, i.e., the 0.0.0.0:port form defined for protocols such as TCP, UDP, and STCP.

    For LTP, however, multiple inducts can be defined, one per network, each using that network's IP address; such an induct is called a seat (see the manual page for ltprc).

    "},{"location":"Configure-Multiple-Network-Interfaces/#ion-configurations","title":"ION Configurations","text":""},{"location":"Configure-Multiple-Network-Interfaces/#ltptcp-example","title":"LTP/TCP Example","text":"

    In this case, SC1 and SC2 communicate with GS using LTP, while MOC1 and MOC2 communicate with GS using TCP. The port used is 4556.

    For GS, it defines TCP in this manner in the .bprc file:

    a protocol tcp\n\na induct tcp 192.168.200.23:4556 tcpcli\n\na outduct tcp 192.168.200.24:4556 tcpclo\na outduct tcp 192.168.200.25:4556 tcpclo\n\na plan ipn:24.0\na plan ipn:25.0\na planduct ipn:24.0 tcp 192.168.200.24:4556\na planduct ipn:25.0 tcp 192.168.200.25:4556\n

    There is only one induct for the two outducts. Since nodes 23, 24, and 25 are all in the 192.168.200.0/24 subnet, the induct for node 23 can simply use its statically assigned IP address, 192.168.200.23:4556.

    For MOC1, TCP is specified in this manner in the .bprc file:

    a protocol tcp\n\na induct tcp 192.168.200.24:4556 tcpcli\n\na outduct tcp 192.168.200.23:4556 tcpclo\n\na plan ipn:23.0\n\na planduct ipn:23.0 tcp 192.168.200.23:4556\n

    Since MOC1 has only one neighbor and uses TCP, the induct/outduct and egress plan definitions are essentially the standard configuration typically seen in a single-network setup.

    A similar configuration can be written for MOC2.

    For LTP, the configuration for GS is:

    # in bprc file\na protocol ltp\n\na induct ltp 23 ltpcli\n\na outduct ltp 21 ltpclo\na outduct ltp 22 ltpclo\n\na plan ipn:21.0 \na plan ipn:22.0\n\na planduct ipn:21.0 ltp 21\na planduct ipn:22.0 ltp 22\n\n# in .ltprc file\na span 21 100 100 1482 100000 1 'udplso 192.168.100.21:1113'\na span 22 100 100 1482 100000 1 'udplso 192.168.100.22:1113'\na seat 'udplsi 192.168.100.23:1113'\n\ns\n

    For LTP, a single induct is specified for the 192.168.100.0/24 subnet using the a seat (add seat) command. The older syntax, s 'udplsi 192.168.100.23:1113', works only for the case of a single network and port combination. When LTP must be extended to multiple seats (inducts), because there are multiple networks or multiple ports, the seat command offers the flexibility to support these more complex configurations.

    "},{"location":"Configure-Multiple-Network-Interfaces/#ltpstcp","title":"LTP/STCP","text":"

    The syntax for LTP/STCP is identical, except that tcp is replaced with stcp, and tcpcli and tcpclo with stcpcli and stcpclo, in the configuration files.

    "},{"location":"Configure-Multiple-Network-Interfaces/#tcp-and-stcp-across-multiple-networks","title":"TCP and STCP across multiple networks","text":"

    When running TCP or STCP over both networks, the only change is that, for the GS node, the induct definitions in .bprc are replaced by:

    a induct tcp 0.0.0.0:4556 tcpcli and a induct stcp 0.0.0.0:4556 stcpcli

    "},{"location":"Configure-Multiple-Network-Interfaces/#ltp-over-multiple-networks","title":"LTP over multiple networks","text":"

    When running LTP over both networks, the key difference is that in the .ltprc file for the GS node, two seats are defined:

    a span 21 100 100 1482 100000 1 'udplso 192.168.100.21:1113'\na span 22 100 100 1482 100000 1 'udplso 192.168.100.22:1113'\na span 24 100 100 1482 100000 1 'udplso 192.168.200.24:1113'\na span 25 100 100 1482 100000 1 'udplso 192.168.200.25:1113'\na seat 'udplsi 192.168.100.23:1113'\na seat 'udplsi 192.168.200.23:1113'\n\ns\n

    Of course, the .bprc file must also be updated to reflect the additional LTP neighbors, but that extension is straightforward, so it is not listed here.

    "},{"location":"Configure-Multiple-Network-Interfaces/#use-of-contact-graph","title":"Use of Contact Graph","text":"

    For ION, the use of a contact graph is optional when communication is limited to one hop. In that case, the data rate, normally defined in the contact graph, is provided through the plan command in the .bprc file.

    When a contact graph is present, the information in the contact graph supersedes the data rate specified in the plan command.

    If there is no contact graph and the data rate is either 0 or omitted in the plan command, then there is no bundle-level throttling of data.
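    For illustration (a sketch reusing the GS/MOC topology above; the rate value is arbitrary), a nominal transmission rate in bytes/second can be attached to a plan command in the .bprc file:

```
a plan ipn:24.0 100000
```

With no contact graph, this illustrative rate of 100000 bytes/second is what throttles bundle transmission to node 24; omitting it (or specifying 0) disables bundle-level throttling on that plan.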

    "},{"location":"Configure-Multiple-Network-Interfaces/#use-of-exit-command","title":"Use of exit command","text":"

    When no contact graph is provided, only immediate neighbors can exchange data. If relay operation is still desired, an exit command can be used. In the topology presented earlier, the node GS can act as a gateway between rfnet and gsnet, so GS can be added as an exit node for each identified source/destination pair.

    "},{"location":"Extension-Block-Interface/","title":"BP Extension Interface","text":"

    ION offers software developers a set of standard interfaces for adding extensions to the Bundle Protocol without modifying the core BP source code. This capability can be used to implement both standardized and user-specific bundle extension blocks.

    ION's interface for extending the Bundle Protocol enables the definition of external functions that insert extension blocks into outbound bundles (either before or after the payload block), parse and record extension blocks in inbound bundles, and modify extension blocks at key points in bundle processing. All extension-block handling is statically linked into ION at build time, but the addition of an extension never requires that any standard ION source code be modified.

    Standard structures for recording extension blocks -- both in transient storage memory during bundle acquisition (AcqExtBlock) and in persistent storage [the ION database] during subsequent bundle processing (ExtensionBlock) -- are defined in the bei.h header file. In each case, the extension block structure comprises a block type code, block processing flags, possibly a list of EID references, an array of bytes (the serialized form of the block, for transmission), the length of that array, optionally an extension-specific opaque object whose structure is designed to characterize the block in a manner that's convenient for the extension processing functions, and the size of that object.

    "},{"location":"Extension-Block-Interface/#extension-definition-extesniondef-extensiondefs","title":"Extension Definition: ExtensionDef & extensionDefs","text":"

    The definition of each extension is asserted in an ExtensionDef structure, also as defined in the bei.h header file.

    /**\n *  \\struct ExtensionDef\n *  \\brief Defines the callbacks used to process extension blocks.\n *\n * ExtensionDef defines the callbacks for production and acquisition\n * of a single type of extension block, identified by block type name\n * and number.\n */\ntypedef struct\n{\n    char            name[32];   /** Name of extension   */\n    BpBlockType     type;       /** Block type      */\n\n    /*  Production callbacks.                   */\n\n    BpExtBlkOfferFn     offer;      /** Offer       */\n    BpExtBlkSerializeFn serialize;  /** Serialize       */\n    BpExtBlkProcessFn   process[5]; /** Process     */\n    BpExtBlkReleaseFn   release;    /** Release         */\n    BpExtBlkCopyFn      copy;       /** Copy        */\n\n    /*  Acquisition callbacks.                  */\n\n    BpAcqExtBlkAcquireFn    acquire;    /** Acquire         */\n    BpAcqExtReviewFn    review;     /** Review      */\n    BpAcqExtBlkDecryptFn    decrypt;    /** Decrypt         */\n    BpAcqExtBlkParseFn  parse;      /** Parse       */\n    BpAcqExtBlkCheckFn  check;      /** Check       */\n    BpExtBlkRecordFn    record;     /** Record      */\n    BpAcqExtBlkClearFn  clear;      /** Clear       */\n} ExtensionDef;\n

    Each ExtensionDef must supply the extension's block name and type, together with the production and acquisition callback functions enumerated in the structure above.

    All extension definitions must be coded into an array of ExtensionDef structures named extensionDefs.

    "},{"location":"Extension-Block-Interface/#extensionspec-specification-for-producing-an-extension-block","title":"ExtensionSpec - specification for producing an extension block","text":"
    /*  ExtensionSpec provides the specification for producing an\n *  outbound extension block: block definition (identified by\n *  block type number), a formulation tag whose semantics are\n *  block-type-specific, and applicable CRC type.           */\n\ntypedef struct\n{\n    BpBlockType type;       /*  Block type      */\n    unsigned char   tag;        /*  Extension-specific  */\n    BpCrcType   crcType;    /*  Type of CRC on block    */\n} ExtensionSpec;\n

    An array of ExtensionSpec structures named extensionSpecs is also required. Each ExtensionSpec provides the specification for producing an outbound extension block:

    1. block definition (identified by block type number),
    2. a formulation tag whose semantics are block-type-specific, and
    3. CRC type indicating what type of CRC must be used to protect this extension block.

    The order of appearance of extension specifications in the extensionSpecs array determines the order in which extension blocks will be inserted into locally sourced bundles.

    "},{"location":"Extension-Block-Interface/#procedure-to-extend-the-bundle-protocol","title":"Procedure to Extend the Bundle Protocol","text":"

    The standard extensionDefs array -- which is empty -- is in the noextensions.c prototype source file. The procedure for extending the Bundle Protocol in ION is as follows:

    1. Specify -DBP_EXTENDED in the Makefile's compiler command line when building the libbpP.c library module.

    2. Create a copy of the prototype extensions file, named \"bpextensions.c\", in a directory that is made visible to the Makefile's libbpP.c compilation command line (by a -I parameter).

    3. In the \"external function declarations\" area of \"bpextensions.c\", add \"extern\" function declarations identifying the functions that will implement your extension (or extensions).

    4. Add one or more ExtensionDef structure initialization lines to the extensionDefs array, referencing those declared functions.

    5. Add one or more ExtensionSpec structure initialization lines to the extensionSpecs array, referencing those extension definitions.

    6. Develop the implementations of the extension implementation functions in one or more new source code files.

    7. Add the object file or files for the new extension implementation source file (or files) to the Makefile's command line for linking libbpP.so.
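    Steps 3 through 5 of the procedure can be sketched as a fragment of a customized "bpextensions.c". This is a hedged, non-buildable configuration fragment: the extension name ("snid"), block type number (193), callback names, process-slot position, and the NoCRC constant are illustrative assumptions; follow the array conventions of the noextensions.c prototype file exactly, including any terminating sentinel entries it uses.

    ```c
    /*  Hypothetical fragment of a customized "bpextensions.c".
     *  All names below are placeholders invented for illustration.  */

    /*  External function declarations for the extension's callbacks. */
    extern int  snid_offer(ExtensionBlock *, Bundle *);
    extern void snid_release(ExtensionBlock *);
    extern int  snid_record(ExtensionBlock *, AcqExtBlock *);
    extern int  snid_copy(ExtensionBlock *, ExtensionBlock *);
    extern int  snid_processOnDequeue(ExtensionBlock *, Bundle *, void *);
    extern int  snid_parse(AcqExtBlock *, AcqWorkArea *);
    extern int  snid_check(AcqExtBlock *, AcqWorkArea *);
    extern void snid_clear(AcqExtBlock *);

    static ExtensionDef extensionDefs[] =
    {
        {   "snid", 193,
            snid_offer,
            NULL,               /*  serialize (offer calls serializeExtBlk) */
            /*  process[5]: slot position here is illustrative; use the
             *  PROCESS_ON_* indices defined in bei.h.                    */
            { NULL, NULL, NULL, snid_processOnDequeue, NULL },
            snid_release,
            snid_copy,
            NULL,               /*  acquire (encryptable: parse instead)   */
            NULL,               /*  review                                 */
            NULL,               /*  decrypt                                */
            snid_parse,
            snid_check,
            snid_record,
            snid_clear      }
    };

    /*  Order of appearance here determines the order in which the
     *  blocks are inserted into locally sourced bundles.             */
    static ExtensionSpec extensionSpecs[] =
    {
        {   193, 0, NoCRC   }   /*  NoCRC is an assumed BpCrcType value */
    };
    ```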

    "},{"location":"Extension-Block-Interface/#extension-implementation-functions","title":"Extension Implementation Functions","text":"

    The function pointers supplied in each ExtensionDef must conform to the following specifications.

    NOTE that any function that modifies the bytes member of an ExtensionBlock or AcqExtBlock must set the corresponding length to the new length of the bytes array, if changed.

    int (*BpExtBlkOfferFn)(ExtensionBlock *blk, Bundle *bundle)\n

    Populates all fields of the indicated ExtensionBlock structure for inclusion in the indicated outbound bundle. This function is automatically called when a new bundle is locally sourced or upon acquisition of a remotely sourced bundle that does not contain an extension block of this type. The values of the extension block are typically expected to be a function of the state of the bundle, but this is extension-specific. If it is not appropriate to offer an extension block of this type as part of this bundle, then the size, length, object, and bytes members of blk must all be set to zero. If it is appropriate to offer such a block but no internal object representing the state of the block is needed, the object and size members of blk must be set to zero. The type, blkProcFlags, and dataLength members of blk must be populated by the implementation of the \"offer\" function, but the length and bytes members are typically populated by calling the BP library function serializeExtBlk(), which must be passed the block to be serialized (with type, blkProcFlags and dataLength already set), a Lyst of EID references (two list elements -- offsets -- per EID reference, if applicable; otherwise NULL), and a pointer to the extension-specific block data. The block's bytes array and object (if present) must occupy space allocated from the ION database heap. Return zero on success, -1 on any system failure.

    int (*BpExtBlkProcessFn)(ExtensionBlock *blk, Bundle *bundle, void *context)\n

    Performs some extension-specific transformation of the data encapsulated in blk based on the state of bundle. The transformation to be performed will typically vary depending on whether the identified function is the one that is automatically invoked upon forwarding the bundle, upon taking custody of the bundle, upon enqueuing the bundle for transmission, upon removing the bundle from the transmission queue, or upon transmitting the serialized bundle. The context argument may supply useful supplemental information; in particular, the context provided to the ON_DEQUEUE function will comprise the name of the protocol for the duct from which the bundle has been dequeued, together with the EID of the neighboring node endpoint to which the bundle will be directly transmitted when serialized. The block-specific data in blk is located within bytes immediately after the header of the extension block; the length of the block's header is the difference between length and dataLength. Whenever the block's blkProcFlags, EID extensions, and/or block-specific data are altered, the serializeExtBlk() function should be called again to recalculate the size of the extension block and rebuild the bytes array. Return zero on success, -1 on any system failure.
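    The header-length relationship stated above can be expressed as a one-line helper (the demo_ name is invented for illustration):

    ```c
    /* The block-specific data begins immediately after the extension
     * block's header within bytes, so the header's length is the
     * difference between the serialized length and dataLength. */
    unsigned int demo_ext_header_length(unsigned int length,
            unsigned int dataLength)
    {
        return length - dataLength;
    }
    ```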

    void (*BpExtBlkReleaseFn)(ExtensionBlock *blk)\n

    Releases all ION database space occupied by the object member of blk. This function is automatically called when a bundle is destroyed. Note that incorrect implementation of this function may result in a database space leak.

    int (*BpExtBlkCopyFn)(ExtensionBlock *newblk, ExtensionBlock *oldblk)\n

    Copies the object member of oldblk to ION database heap space and places the address of that new non-volatile object in the object member of newblk; also sets size in newblk. This function is automatically called when two copies of a bundle are needed, e.g., in the event that it must both be delivered to a local client and also forwarded to another node. Return zero on success, -1 on any system failure.

    int (*BpAcqExtBlkAcquireFn)(AcqExtBlock *acqblk, AcqWorkArea *work)\n

    Populates the indicated AcqExtBlock structure with size and object for retention as part of the indicated inbound bundle. (The type, blkProcFlags, EID references (if any), dataLength, length, and bytes values of the structure are pre-populated with data as extracted from the serialized bundle.) This function is only to be provided for extension blocks that are never encrypted; an extension block that may be encrypted should have a BpAcqExtBlkParseFn callback instead. The function is automatically called when an extension block of this type is encountered in the course of parsing and acquiring a bundle for local delivery and/or forwarding. If no internal object representing the state of the block is needed, the object member of acqblk must be set to NULL and the size member must be set to zero. If an object is needed for this block, it must occupy space that is allocated from ION working memory using MTAKE and its size must be indicated in acqblk. Return zero if the block is malformed (this will cause the bundle to be discarded), 1 if the block is successfully parsed, -1 on any system failure.

    int (*BpAcqExtBlkReviewFn)(AcqWorkArea *work)\n

    Reviews the extension blocks that have been acquired for this bundle, checking to make sure that all blocks of this type that are required by policy are present. Returns 0 if any blocks are missing, 1 if all required blocks are present, -1 on any system failure.

    int (*BpAcqExtBlkDecryptFn)(AcqExtBlock *acqblk, AcqWorkArea *work)\n

    Decrypts some other extension block that has been acquired but not yet parsed, nominally using encapsulated ciphersuite information. Return zero if the block is malformed (this will cause the bundle to be discarded), 1 if no error in decryption was encountered, -1 on any system failure.

    int (*BpAcqExtBlkParseFn)(AcqExtBlock *acqblk, AcqWorkArea *work)\n

    Populates the indicated AcqExtBlock structure with size and object for retention as part of the indicated inbound bundle. (The type, blkProcFlags, EID references (if any), dataLength, length, and bytes values of the structure are pre-populated with data as extracted from the serialized bundle.) This function is provided for extension blocks that may be encrypted; an extension block that can never be encrypted should have a BpAcqExtBlkAcquireFn callback instead. The function is automatically called when an extension block of this type is encountered in the course of parsing and acquiring a bundle for local delivery and/or forwarding. If no internal object representing the state of the block is needed, the object member of acqblk must be set to NULL and the size member must be set to zero. If an object is needed for this block, it must occupy space that is allocated from ION working memory using MTAKE and its size must be indicated in acqblk. Return zero if the block is malformed (this will cause the bundle to be discarded), 1 if the block is successfully parsed, -1 on any system failure.

    int (*BpAcqExtBlkCheckFn)(AcqExtBlock *acqblk, AcqWorkArea *work)\n

    Examines the bundle in work to determine whether or not it is authentic, in the context of the indicated extension block. Return 1 if the block is determined to be inauthentic (this will cause the bundle to be discarded), zero if no inauthenticity is detected, -1 on any system failure.

    int (*BpExtBlkRecordFn)(ExtensionBlock *blk, AcqExtBlock *acqblk)\n

    Copies the object member of acqblk to ION database heap space and places the address of that non-volatile object in the object member of blk; also sets size in blk. This function is automatically called when an acquired bundle is accepted for forwarding and/or delivery. Return zero on success, -1 on any system failure.

    void (*BpAcqExtBlkClearFn)(AcqExtBlock *acqblk)\n

    Uses MRELEASE to release all ION working memory occupied by the object member of acqblk. This function is automatically called when acquisition of a bundle is completed, whether or not the bundle is accepted. Note that incorrect implementation of this function may result in a working memory leak.

    "},{"location":"Extension-Block-Interface/#utility-functions-for-extension-processing","title":"Utility Functions for Extension Processing","text":"
    void discardExtensionBlock(AcqExtBlock *blk)\n

    Deletes this block from the bundle acquisition work area prior to the recording of the bundle in the ION database.

    void scratchExtensionBlock(ExtensionBlock *blk)\n

    Deletes this block from the bundle after the bundle has been recorded in the ION database.

    Object findExtensionBlock(Bundle *bundle, unsigned int type, unsigned char tag1, unsigned char tag2, unsigned char tag3)\n

    On success, returns the address of the ExtensionBlock in bundle for the indicated type and tag values. If no such extension block exists, returns zero.

    int serializeExtBlk(ExtensionBlock *blk, char *blockData)\n

    Constructs a BPv7-conformant serialized representation of this extension block in blk->bytes. Returns 0 on success, -1 on an unrecoverable system error.

    void suppressExtensionBlock(ExtensionBlock *blk)\n

    Causes blk to be omitted when the bundle to which it is attached is serialized for transmission. This suppression remains in effect until it is reversed by restoreExtensionBlock().

    void restoreExtensionBlock(ExtensionBlock *blk)\n

    Reverses the effect of suppressExtensionBlock(), enabling the block to be included when the bundle to which it is attached is serialized.

    "},{"location":"ICI-API/","title":"Interplanetary Communications Infrastructure (ICI) APIs","text":"

    This section will focus on a subset of ICI APIs that enables an external application to create, manipulate, and access data objects inside ION's SDR.

    "},{"location":"ICI-API/#ici-apis","title":"ICI APIs","text":""},{"location":"ICI-API/#header","title":"Header","text":"
    #include \"ion.h\"\n
    "},{"location":"ICI-API/#mtake-mrelease","title":"MTAKE & MRELEASE","text":"
    #define MTAKE(size) allocFromIonMemory(__FILE__, __LINE__, size)\n#define MRELEASE(addr)  releaseToIonMemory(__FILE__, __LINE__, addr)\n
    "},{"location":"ICI-API/#ionattach","title":"ionAttach","text":"

    Function Prototype

    extern int  ionAttach();\n

    Parameters

    Return Value

    Example Call

    if (ionAttach() < 0)\n{\n    putErrmsg(\"bpadmin can't attach to ION.\", NULL);\n\n    /* User calls error handling routine. */\n}\n

    Description

    Attaches the invoking task to the ION infrastructure as previously established by running the ionadmin utility program. After successful execution, the handle to the ION SDR can be obtained by a separate API call. putErrmsg is an ION logging API, which will be described later in this document.

    "},{"location":"ICI-API/#iondetach","title":"ionDetach","text":"

    Function Prototype

    extern void ionDetach();\n

    Parameters

    Return Value

    Example Call

    ionDetach();\n

    Description

    Detaches the invoking task from ION infrastructure. In particular, releases handle allocated for access to ION's non-volatile database.

    "},{"location":"ICI-API/#ionterminate","title":"ionTerminate","text":"

    Function Prototype

    extern void ionTerminate();\n

    Parameters

    Return Value

    Example Call

    ionTerminate();\n

    Description

    Shuts down the entire ION node, terminating all daemons. The state of the SDR will be destroyed during the termination process, even if the SDR heap is implemented in a non-volatile storage, such as a file.

    "},{"location":"ICI-API/#ionstartattendant","title":"ionStartAttendant","text":"

    Function Prototype

    extern int  ionStartAttendant(ReqAttendant *attendant);\n

    Parameters

    typedef struct\n{\n    sm_SemId    semaphore;\n} ReqAttendant;\n

    Return Value

    Example Call

    if (ionStartAttendant(&attendant))\n{\n    putErrmsg(\"Can't initialize blocking transmission.\", NULL);\n\n    /* user implemented error handling routine */\n}\n

    Description

    Initializes the semaphore in attendant so that it can be used to block on a pending ZCO space request. This is necessary to ensure that the invoking task cannot inject data into the Bundle Protocol Agent until SDR space has been allocated.

    "},{"location":"ICI-API/#ionstopattendant","title":"ionStopAttendant","text":"

    Function Prototype

    extern void ionStopAttendant(ReqAttendant *attendant);\n

    Parameters

    Return Value

    Example Call

    ionStopAttendant(&attendant);\n

    Description

    Destroys the semaphore in attendant, preventing a potential resource leak. This is typically called at the end of a BP application after all user data have been injected into the SDR.

    "},{"location":"ICI-API/#ionpauseattendent","title":"ionPauseAttendant","text":"

    Function Prototype

    void ionPauseAttendant(ReqAttendant *attendant)\n

    Parameters

    Return Value

    Example Call

    ionPauseAttendant(&attendant);\n

    Description

    \"Ends\" the semaphore in attendant so that the task blocked on taking it is interrupted and may respond to an error or shutdown condition. This may be required when trying to quit a blocked user application while acquiring ZCO space.

    "},{"location":"ICI-API/#ioncreatezco","title":"ionCreateZco","text":"

    Function Prototype

    extern Object ionCreateZco( ZcoMedium source,\n            Object location,\n            vast offset,\n            vast length,\n            unsigned char coarsePriority,\n            unsigned char finePriority,\n            ZcoAcct acct,\n            ReqAttendant *attendant);\n

    Parameters

    Source: the type of ZCO to be created. Each source data object may be either a file, a \"bulk\" item in mass storage, an object in SDR heap space (identified by heap address stored in an \"object reference\" object in SDR heap), an array of bytes in SDR heap space (identified by heap address), or another ZCO.

    typedef enum\n{\n    ZcoFileSource = 1,\n    ZcoBulkSource = 2,\n    ZcoObjSource = 3,\n    ZcoSdrSource = 4,\n    ZcoZcoSource = 5\n} ZcoMedium;\n

    Return Value

    Example Call

    SdrObject bundleZco;\n\nbundleZco = ionCreateZco(ZcoSdrSource, extent, 0, lineLength,\n        BP_STD_PRIORITY, 0, ZcoOutbound, &attendant);\nif (bundleZco == 0 || bundleZco == (Object) ERROR)\n{\n    putErrmsg(\"Can't create ZCO extent.\", NULL);\n    /* user implemented error handling routine goes here */\n}\n

    Description

    This function provides a \"blocking\" implementation of admission control in ION. Like zco_create(), it constructs a zero-copy object (see zco(3)) that contains a single extent of source data residing at a location in the source, of which the initial offset number of bytes are omitted and the subsequent length bytes are included. By providing an attendant semaphore, initialized by ionStartAttendant, ionCreateZco() can be executed as a blocking call so long as the total amount of space in the source available for new ZCO formation is less than the length. ionCreateZco() operates by calling ionRequestZcoSpace, then pending on the semaphore in attendant as necessary before creating the ZCO and then populating it with the user's data.

    "},{"location":"ICI-API/#sdr-database-heap-apis","title":"SDR Database & Heap APIs","text":"

    SDR persistent data are referenced by object and address values in the application code, simply displacements (offsets) within the SDR address space. The difference between the two is that an Object is always the address of a block of heap space returned by some call to sdr_malloc, while an Address can refer to any byte in the SDR address space. An Address is the SDR functional equivalent of a C pointer; some Addresses point to actual Objects.

    The number of SDR-related APIs is significant, and most are used by ION internally. Fortunately, there are only a few APIs that an external application will likely need to use. The following list of the most commonly used APIs is drawn from the Database I/O and the Heap Space Management API categories.

    "},{"location":"ICI-API/#header_1","title":"Header","text":"
    #include \"sdr.h\"\n
    "},{"location":"ICI-API/#sdr_malloc","title":"sdr_malloc","text":"

    Function Prototype

    Object sdr_malloc(Sdr sdr, unsigned long size)\n

    Parameters

    Return Value

    Example Call

    CHKZERO(sdr_begin_xn(sdr));\nextent = sdr_malloc(sdr, lineLength);\nif (extent)\n{\n    sdr_write(sdr, extent, text, lineLength);\n}\n\nif (sdr_end_xn(sdr) < 0)\n{\n    putErrmsg(\"No space for ZCO extent.\", NULL);\n    bp_detach();\n    return 0;\n}\n

    In this example, a 'critical section' has been implemented by API calls: sdr_begin_xn and sdr_end_xn. The critical section ensures that the SDR is not altered while the space allocation is in progress. These APIs will be explained later in this document. The sdr_write API is used to write data into the space acquired by sdr_malloc.

    It may seem strange that failure to allocate space, or to write the data into the allocated space, is detected by checking the return value of sdr_end_xn rather than those of the sdr_malloc and sdr_write calls. When sdr_end_xn returns a negative value, it indicates that the SDR transaction was already terminated, which occurs when sdr_malloc or sdr_write fails. Checking the sdr_end_xn return value is therefore a convenient way to detect the failure of either call with a single check.

    Description

    Allocates a block of space from the indicated SDR's heap. The maximum size is 1/2 of the maximum address space size (i.e., 2G for a 32-bit machine). Returns block address if successful, zero if block could not be allocated.

    "},{"location":"ICI-API/#sdr_insert","title":"sdr_insert","text":"

    Function Prototype

    Object sdr_insert(Sdr sdr, char *from, unsigned long size)\n

    Parameters

    Return Value

    Example Call

    CHKZERO(sdr_begin_xn(sdr));\nextent = sdr_insert(sdr, text, lineLength);\nif (sdr_end_xn(sdr) < 0)\n{\n    putErrmsg(\"No space for ZCO extent.\", NULL);\n    bp_detach();\n    return 0;\n}\n

    Description

    This function combines the action of sdr_malloc and sdr_write. It first uses sdr_malloc to obtain a block of space, and if this allocation is successful, it uses sdr_write to copy size bytes of data from memory into the newly allocated block.

    "},{"location":"ICI-API/#sdr_stow","title":"sdr_stow","text":"

    Function Prototype

    Object sdr_stow(sdr, variable)\n

    Parameters

    Return Value

    Description

    sdr_stow is a macro that uses sdr_insert to insert a copy of a variable into the dataspace. The size of the variable is used as the number of bytes to copy.

    "},{"location":"ICI-API/#sdr_object_length","title":"sdr_object_length","text":"

    Function Prototype

    int sdr_object_length(Sdr sdr, Object object)\n

    Parameters

    Return Value

    Description

    Returns the number of bytes of heap space allocated to the application data at object.

    "},{"location":"ICI-API/#sdr_free","title":"sdr_free","text":"

    Function Prototype

    void sdr_free(Sdr sdr, Object object)\n

    Parameters

    Return Value

    Description

    Frees the heap space occupied by the object at object. The freed space is returned to the SDR memory pool and becomes available for subsequent re-allocation.

    "},{"location":"ICI-API/#sdr_read","title":"sdr_read","text":"

    Function Prototype

    void sdr_read(Sdr sdr, char *into, Address from, int length)\n

    Parameters

    Return Value

    Description

    Copies length characters at from (a location in the indicated SDR) to the memory location given by into. The data are copied from the shared memory region in which the SDR resides, if any; otherwise, they are read from the file in which the SDR resides.

    "},{"location":"ICI-API/#sdr_stage","title":"sdr_stage","text":"

    Function Prototype

    void sdr_stage(Sdr sdr, char *into, Object from, int length)\n

    Parameters

    Return Value

    Description

    Like sdr_read, this function copies length characters at from (a location in the heap of the indicated SDR) to the memory location given by into. Unlike sdr_get, sdr_stage requires that from be the address of some allocated object, not just any location within the heap. sdr_stage, when called from within a transaction, notifies the SDR library that the indicated object may be updated later in the transaction; this enables the library to retrieve the object's size for later reference in validating attempts to write into some location within the object. If the length is zero, the object's size is privately retrieved by SDR, but none of the object's content is copied into memory.

    sdr_get is a macro that uses sdr_read to load variables from the SDR address given by heap_pointer; heap_pointer must be (or be derived from) a heap pointer as returned by sdr_pointer. The size of the variable is used as the number of bytes to copy.
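    The stage-then-write pattern described above can be sketched self-contained. The demo_ names and the in-memory demoHeap array below are stand-ins invented for illustration; in real ION code they correspond to sdr_stage() and sdr_write() operating on the SDR heap, inside an sdr_begin_xn()/sdr_end_xn() transaction.

    ```c
    #include <string.h>

    /* Illustrative stand-ins for ION's SDR heap and I/O calls, so the
     * stage-modify-write pattern can be shown in isolation. */
    typedef unsigned long Object;

    static char demoHeap[256];          /* stands in for the SDR heap */

    typedef struct
    {
        int bundleCount;                /* some persistent state */
    } DemoState;

    static void demo_stage(char *into, Object from, int length)
    {
        memcpy(into, demoHeap + from, length);   /* sdr_stage() analogue */
    }

    static void demo_write(Object into, char *from, int length)
    {
        memcpy(demoHeap + into, from, length);   /* sdr_write() analogue */
    }

    void demo_put(Object at, int value)
    {
        DemoState s;

        s.bundleCount = value;
        demo_write(at, (char *) &s, sizeof s);
    }

    int demo_get(Object at)
    {
        DemoState s;

        demo_stage((char *) &s, at, sizeof s);
        return s.bundleCount;
    }

    /* Read-modify-write: stage the object into local memory, update it,
     * then write it back to the same heap address. */
    void demo_increment_count(Object stateObj)
    {
        DemoState state;

        demo_stage((char *) &state, stateObj, sizeof state);
        state.bundleCount++;
        demo_write(stateObj, (char *) &state, sizeof state);
    }
    ```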

    "},{"location":"ICI-API/#sdr_write","title":"sdr_write","text":"

    Function Prototype

    void sdr_write(Sdr sdr, Address into, char *from, int length)\n

    Parameters

    Return Value

    Description

    Copies length characters at from (a location in memory) to the SDR heap location given by into. This is the inverse of sdr_read: data flow is from local memory into the SDR. sdr_write must be called from within a transaction; if the target location lies within an object that was staged earlier in the transaction by sdr_stage, the write can be validated against that object's size.


    "},{"location":"ICI-API/#sdr-transaction-apis","title":"SDR Transaction APIs","text":"

    The following APIs manage transactions by implementing a critical section in which SDR content cannot be modified.

    "},{"location":"ICI-API/#header_2","title":"Header","text":"
    #include \"sdrxn.h\"\n
    "},{"location":"ICI-API/#sdr_begin_xn","title":"sdr_begin_xn","text":"

    Function Prototype

    int sdr_begin_xn(Sdr sdr)\n

    Parameters

    Return Value

    Description

    Initiates a transaction. Returns 1 on success, 0 on any failure. Note that transactions are single-threaded; any task that calls sdr_begin_xn is suspended until all previously requested transactions have been ended or canceled.

    "},{"location":"ICI-API/#sdr_in_xn","title":"sdr_in_xn","text":"

    Function Prototype

    int sdr_in_xn(Sdr sdr)\n

    Parameters

    Return Value

    Description

    Returns 1 if called in the course of a transaction, 0 otherwise.

    "},{"location":"ICI-API/#sdr_exit_xn","title":"sdr_exit_xn","text":"

    Function Prototype

    void sdr_exit_xn(Sdr sdr)\n

    Parameters

    Return Value

    Description

    Simply abandons the current transaction, releasing the calling task's lock on ION. MUST NOT be used if any dataspace modifications were performed during the transaction; sdr_end_xn must be called instead to commit those modifications.

    "},{"location":"ICI-API/#sdr_cancel_xn","title":"sdr_cancel_xn","text":"

    Function Prototype

    void sdr_cancel_xn(Sdr sdr)\n

    Parameters

    Return Value

    Description

    Cancels the current transaction. If reversibility is enabled for the SDR, canceling a transaction reverses all heap modifications performed during that transaction.

    "},{"location":"ICI-API/#sdr_end_xn","title":"sdr_end_xn","text":"

    Function Prototype

    int sdr_end_xn(Sdr sdr)\n

    Parameters

    Return Value

    Description

    Ends the current transaction. Returns 0 if the transaction was completed without any error; returns -1 if any operation performed in the course of the transaction failed, in which case the transaction was automatically canceled.

    "},{"location":"ICI-API/#sdr-list-management-apis","title":"SDR List management APIs","text":"

    The SDR list management functions manage doubly-linked lists in managed SDR heap space. The functions manage two kinds of objects: lists and list elements. A list knows how many elements it contains and what its start and end elements are. An element knows what list it belongs to and the elements before and after it in the list. An element also knows its content, which is normally the SDR Address of some object in the SDR heap. A list may be sorted, which speeds the process of searching for a particular element.

    "},{"location":"ICI-API/#header_3","title":"Header","text":"
    #include \"sdr.h\"\n\ntypedef int (*SdrListCompareFn)(Sdr sdr, Address eltData, void *argData);\ntypedef void (*SdrListDeleteFn)(Sdr sdr, Object elt, void *argument);\n
    "},{"location":"ICI-API/#callback-sdrlistcomparefn","title":"Callback: SdrListCompareFn","text":""},{"location":"ICI-API/#callback-sdrlistdeletefn","title":"Callback: SdrListDeleteFn","text":"

    USAGE

    When inserting elements or searching a list, the user may optionally provide a compare function of the form:

    int user_comp_name(Sdr sdr, Address eltData, void *dataBuffer);\n

    When provided, this function is automatically called by the sdrlist function being invoked; when the function is called, it is passed the content of a list element (eltData, nominally the Address of an item in the SDR's heap space) and an argument, dataBuffer, which is nominally the address in the local memory of some other item in the same format. The user-supplied function normally compares some key values of the two data items. It returns 0 if they are equal, an integer less than 0 if eltData's key value is less than that of dataBuffer, and an integer greater than 0 if eltData's key value is greater than that of dataBuffer. These return values will produce a list in ascending order. If the user desires the list to be in descending order, the function must reverse the signs of these return values.
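    A minimal sketch of this return-value convention, using a hypothetical DemoItem record held in ordinary memory (a real SdrListCompareFn would first sdr_read() the element's data out of the SDR heap before comparing keys):

    ```c
    /* Hypothetical record type, invented for illustration. */
    typedef struct
    {
        int seqNbr;     /* the key value being compared */
    } DemoItem;

    /* Mirrors the SdrListCompareFn contract: return <0, 0, or >0, which
     * yields a list maintained in ascending order of seqNbr.  Reversing
     * the signs would yield descending order. */
    int demoCompare(const void *eltData, const void *dataBuffer)
    {
        const DemoItem *a = (const DemoItem *) eltData;
        const DemoItem *b = (const DemoItem *) dataBuffer;

        if (a->seqNbr < b->seqNbr) return -1;
        if (a->seqNbr > b->seqNbr) return 1;
        return 0;
    }
    ```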

    When deleting an element or destroying a list, the user may optionally provide a delete function of the form:

    void user_delete_name(Sdr sdr, Address eltData, void *argData)\n

    When provided, this function is automatically called by the sdrlist function being invoked; when the function is called, it is passed the content of a list element (eltData, nominally the Address of an item in the SDR's heap space) and an argument, argData, which if non-NULL is normally the address in the local memory of a data item providing context for the list element deletion. The user-supplied function performs any application-specific cleanup associated with deleting the element, such as freeing the element's content data item and/or other SDR heap space associated with the element.
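    A small sketch of the delete-callback shape, with malloc'd memory standing in for SDR heap space (a real SdrListDeleteFn would release the element's content with sdr_free) and argData carrying application context, here a hypothetical counter of freed elements:

    ```c
    #include <stdlib.h>

    /* Mirrors the SdrListDeleteFn idea: free the element's content and
     * use the caller-supplied argData for application-specific context. */
    void demoDelete(void *eltData, void *argData)
    {
        int *freedCount = (int *) argData;  /* context: count of frees */

        free(eltData);                      /* in ION: sdr_free(...)   */
        if (freedCount)
        {
            (*freedCount)++;
        }
    }
    ```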

    "},{"location":"ICI-API/#sdr_list_insert_first","title":"sdr_list_insert_first","text":""},{"location":"ICI-API/#sdr_list_insert_last","title":"sdr_list_insert_last","text":"

    Function Prototype

    Object sdr_list_insert_first(Sdr sdr, Object list, Address data)\nObject sdr_list_insert_last(Sdr sdr, Object list, Address data)\n

    Parameters

    Return Value

    Description

    Creates a new element and inserts it at the front/end of the list. This function should not be used to insert a new element into any ordered list; use sdr_list_insert() instead.

    "},{"location":"ICI-API/#sdr_list_create","title":"sdr_list_create","text":"

    Function Prototype

    Object sdr_list_create(Sdr sdr)\n

    Parameters

    Return Value

    Description

    Creates a new list object in the SDR; the new list object initially contains no list elements. Returns the address of the new list or zero on any error.

    "},{"location":"ICI-API/#sdr_list_length","title":"sdr_list_length","text":"

    Function Prototype

    int sdr_list_length(Sdr sdr, Object list)\n

    Parameters

    Return Value * number of elements in the list: on success * -1: on any error

    Description

    Returns the number of elements in the list, or -1 on any error.

    "},{"location":"ICI-API/#sdr_list_destroy","title":"sdr_list_destroy","text":"

    Function Prototype

    void sdr_list_destroy(Sdr sdr, Object list, SdrListDeleteFn fn, void *arg)\n

    Parameters

    Return Value

    Description

    Destroys a list, freeing all elements of the list. If fn is non-NULL, that function is called once for each freed element; when called, fn is passed the Address that is the element's data value and the argument pointer passed to sdr_list_destroy. See the sdrlist manual page for details on the form of the delete function.

    Do not use sdr_free to destroy an SDR list, as this would leave the elements of the list allocated yet unreferenced.

    "},{"location":"ICI-API/#sdr_list_user_data_set","title":"sdr_list_user_data_set","text":"

    Function Prototype

    void sdr_list_user_data_set(Sdr sdr, Object list, Address userData)\n

    Parameters

    Return Value

    Description

    Sets the \"user data\" word of list to userData. Note that userData is nominally an Address but can be any value that occupies a single word. It is typically used to point to an SDR object that somehow characterizes the list as a whole, such as a name.

    "},{"location":"ICI-API/#sdr_list_user_data","title":"sdr_list_user_data","text":"

    Function Prototype

    Address sdr_list_user_data(Sdr sdr, Object list)\n

    Parameters

    Return Value

    Description

    Returns the value of the \"user data\" word of list, or zero on any error.

    "},{"location":"ICI-API/#sdr_list_insert","title":"sdr_list_insert","text":"

    Function Prototype

    Object sdr_list_insert(Sdr sdr, Object list, Address data, SdrListCompareFn fn, void *dataBuffer)\n

    Parameters

    Return Value

    Description

    Creates a new list element whose data value is data and inserts that element into the list. If fn is NULL, the new list element is simply appended to the list; otherwise, the new list element is inserted after the last element in the list whose data value is \"less than or equal to\" the data value of the new element (in dataBuffer) according to the collating sequence established by fn. Returns the address of the newly created element or zero on any error.
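The insertion rule ("after the last element whose data value is less than or equal to the new element's") can be illustrated with a stand-in: an int array plays the role of the ordered SDR list, and plain integer comparison plays the role of the user compare function.

```c
#include <stddef.h>

/* Returns the index at which a new value would be inserted to keep
 * the "list" ordered: just past the last element that compares
 * less than or equal to the new value. */
static size_t orderedInsertIndex(const int *list, size_t n, int value)
{
	size_t i = 0;

	while (i < n && list[i] <= value)	/* compare(elt, new) <= 0 */
	{
		i++;
	}

	return i;	/* the new element goes here */
}
```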

    "},{"location":"ICI-API/#sdr_list_insert_before","title":"sdr_list_insert_before","text":""},{"location":"ICI-API/#sdr_list_insert_after","title":"sdr_list_insert_after","text":"

    Function Prototype

    Object sdr_list_insert_before(Sdr sdr, Object elt, Address data)\nObject sdr_list_insert_after(Sdr sdr, Object elt, Address data)\n

    Parameters

    Return Value

    Description

    Creates a new element and inserts it before/after the specified list element. This function should not be used to insert a new element into an ordered list; use sdr_list_insert instead. Returns the address of the newly created list element or zero on any error.

    "},{"location":"ICI-API/#sdr_list_delete","title":"sdr_list_delete","text":"

    Function Prototype

    void sdr_list_delete(Sdr sdr, Object elt, SdrListDeleteFn fn, void *arg)\n

    Parameters

    Return Value

    Description

    Delete elt from the list it is in. If fn is non-NULL, that function will be called upon deletion of elt; when called, that function is passed the Address that is the list element's data value and the arg pointer passed to sdr_list_delete.

    "},{"location":"ICI-API/#sdr_list_first","title":"sdr_list_first","text":""},{"location":"ICI-API/#sdr_list_last","title":"sdr_list_last","text":"

    Function Prototype

    Object sdr_list_first(Sdr sdr, Object list)\nObject sdr_list_last(Sdr sdr, Object list)\n

    Parameters

    Return Value

    Description

    Returns the address of the first/last element of the list, or zero on any error.

    "},{"location":"ICI-API/#sdr_list_next","title":"sdr_list_next","text":""},{"location":"ICI-API/#sdr_list_prev","title":"sdr_list_prev","text":"

    Function Prototype

    Object sdr_list_next(Sdr sdr, Object elt)\nObject sdr_list_prev(Sdr sdr, Object elt)\n

    Parameters

    Return Value

    Description

    Returns the address of the element following/preceding elt in that element's list or zero on any error.

    "},{"location":"ICI-API/#sdr_list_search","title":"sdr_list_search","text":"

    Function Prototype

    Object sdr_list_search(Sdr sdr, Object elt, int reverse, SdrListCompareFn fn, void *dataBuffer);\n

    Parameters

    Return Value

    Description

    Search a list for an element whose data matches the data in dataBuffer, starting at the indicated initial list element.

    If the compare function is non-NULL, the list is assumed to be sorted in the order implied by that function, and the function is automatically called once for each element of the list until it returns a value that is greater than or equal to zero (where zero indicates an exact match and a value greater than zero indicates that the list contains no matching element); each time the compare function is called, it is passed the Address that is the element's data value and the dataBuffer value passed to sdr_list_search(). If reverse is non-zero, the list is searched in reverse order (starting at the indicated initial list element), and the search ends when the compare function returns a value that is less than or equal to zero. If the compare function is NULL, the entire list is searched (in either forward or reverse order, as directed) until an element is located whose data value is equal to ((Address) dataBuffer). Returns the address of the matching element if one is found, 0 otherwise.
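The forward-search termination logic described above can be sketched with a stand-in: a sorted int array plays the role of the list and integer subtraction plays the role of the compare function.

```c
#include <stddef.h>

/* Walks a sorted "list" until the compare result is >= 0: zero is
 * an exact match; a positive result means the list holds no match.
 * Returns the matching index, or -1 if no element matches. */
static int searchForward(const int *list, size_t n, int key)
{
	size_t i;

	for (i = 0; i < n; i++)
	{
		int cmp = list[i] - key;	/* compare(eltData, dataBuffer) */

		if (cmp > 0)
		{
			return -1;	/* passed the match point: no match */
		}

		if (cmp == 0)
		{
			return (int) i;	/* exact match */
		}
	}

	return -1;	/* list exhausted */
}
```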

    "},{"location":"ICI-API/#sdr_list_list","title":"sdr_list_list","text":"

    Function Prototype

    Object sdr_list_list(Sdr sdr, Object elt)\n

    Parameters

    Return Value

    Description

    Returns the address of the list to which elt belongs, or 0 on any error.

    "},{"location":"ICI-API/#sdr_list_data","title":"sdr_list_data","text":"

    Function Prototype

    Address sdr_list_data(Sdr sdr, Object elt)\n

    Parameters

    Return Value

    Description

    Returns the Address that is the data value of elt, or 0 on any error.

    "},{"location":"ICI-API/#sdr_list_data_set","title":"sdr_list_data_set","text":"

    Function Prototype

    Address sdr_list_data_set(Sdr sdr, Object elt, Address data)\n

    Parameters

    Return Value

    Description

    Sets the data value for elt to data, replacing the original value. Returns the original data value for elt, or 0 on any error. The original data value for elt may or may not have been the address of an object in heap data space; even if it was, that object was NOT deleted.

    Warning: changing the data value of an element of an ordered list may ruin the ordering of the list.

    "},{"location":"ICI-API/#other-less-used-ici-apis","title":"Other less used ICI APIs","text":"

    There are many other less frequently used APIs. Please see the manual pages for the following:

    ion, sdr, sdrlist, platform, lyst, psm, memmgr, sdrstring, sdrtable, and smlist.

    "},{"location":"ION-Application-Service-Interface/","title":"ION Application Services","text":"

    This section covers interfaces for users to access the following four DTN application-level services provided by ION:

    "},{"location":"ION-Application-Service-Interface/#ccsds-file-delivery-protocol-cfdp-apis","title":"CCSDS File Delivery Protocol (CFDP) APIs","text":"

    The CFDP library provides functions enabling application software to use CFDP to send and receive files. It conforms to the Class 1 (Unacknowledged) service class defined in the CFDP Blue Book and includes implementations of several standard CFDP user operations.

    In the ION implementation of CFDP, the CFDP notion of entity ID is identical to the BP (CBHE) notion of DTN node number used in ION.

    CFDP entity and transaction numbers may be up to 64 bits in length. For portability to 32-bit machines, these numbers are stored in the CFDP state machine as structures of type CfdpNumber.

    To simplify the interface between CFDP and the user application without risking storage leaks, the CFDP-ION API uses MetadataList objects. A MetadataList is a specially formatted SDR list of user messages, filestore requests, or filestore responses. During the time that a MetadataList is pending processing via the CFDP APIs, but is not yet (or is no longer) reachable from any FDU object, a pointer to the list is appended to one of the lists of MetadataList objects in the CFDP non-volatile database. This assures that any unplanned termination of the CFDP daemons won't leave any SDR lists unreachable - and therefore un-recyclable - due to the absence of references to those lists. Restarting CFDP will automatically purge any unused MetadataLists from the CFDP database. The \"user data\" variable of the MetadataList itself is used to implement this feature: while the list is reachable only from the database root, its user data variable points to the database root list from which it is referenced; once the list is attached to a File Delivery Unit, its user data is NULL.

    CFDP transmits the data in a source file in fixed-sized segments by default. The user application can override this behavior at the time transmission of a file is requested by supplying a file reader callback function that reads the file - one byte at a time - until it detects the end of a \"record\" that has application significance. Each time CFDP calls the reader function, the function must return the length of one such record (not greater than 65535).
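A byte-at-a-time reader can be sketched as below. This is a self-contained stand-in (the function name is hypothetical, and the required per-byte call to cfdp_update_checksum() is omitted); it mirrors the behavior of a text-line reader such as cfdp_read_text_lines, treating each newline-terminated line as one record.

```c
#include <unistd.h>

/* Reads one byte at a time, as the reader-function contract
 * requires, until the end-of-record marker (here, a newline) or
 * the 65535-byte record-length limit is reached.  Returns the
 * length of the record read, or 0 at end of file. */
static int readOneLineRecord(int fd)
{
	unsigned char	octet;
	int		length = 0;

	while (length < 65535 && read(fd, &octet, 1) == 1)
	{
		length++;
		if (octet == '\n')
		{
			break;	/* end of record */
		}
	}

	return length;
}
```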

    When CFDP is used to transmit a file, a 32-bit checksum must be provided in the \"EOF\" PDU to enable the receiver of the file to ensure that it was not corrupted in transit. A CFDP library function is provided for this purpose; an application-specific file reader function must call it to update the computed checksum as it reads each byte of the file. Two types of file checksums are supported: a simple modular checksum or a 32-bit CRC. The checksum type must be passed through to the CFDP checksum computation function, so it must be provided by (and thus to) the file reader function.
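For the simple modular case, the byte-at-a-time accumulation can be sketched as follows: the file is treated as a sequence of 4-octet words aligned on offsets that are multiples of 4, and the words are summed modulo 2^32. This is a minimal illustration of what the library's checksum update does for the ModularChecksum type; the CRC variant is not shown, and the function name is a stand-in.

```c
#include <stdint.h>

/* Adds one file byte into the running modular checksum.  The byte's
 * position within its 4-octet word (offset % 4) determines how far
 * it is shifted before being added; uint32_t arithmetic supplies
 * the modulo-2^32 behavior. */
static void updateModularChecksum(uint8_t octet, uint64_t offset,
		uint32_t *checksum)
{
	unsigned int shift = (3 - (unsigned int)(offset % 4)) * 8;

	*checksum += (uint32_t) octet << shift;
}
```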

    The user application may provide per-segment metadata. To enable this, upon formation of each file data segment, CFDP will invoke the user-provided per-segment metadata composition callback function (if any), a function conforming to the CfdpMetadataFn type definition. The callback will be passed the offset of the segment within the file, the segment's offset within the current record (as applicable), the length of the segment, an open file descriptor for the source file (in case the data must be read to construct the metadata), and a 63-byte buffer in which to place the new metadata. The callback function must return the metadata length to attach to the file data segment PDU (may be zero) or -1 in case of a general system failure.
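An illustrative callback following the shape described above might tag each segment with its file offset and length as text. The function name is hypothetical and the integer types are portable stand-ins; the real signature is given by the CfdpMetadataFn typedef in \"cfdp.h\".

```c
#include <stdio.h>

/* Writes segment metadata into the 63-byte buffer supplied by CFDP
 * and returns the metadata length (0 attaches no metadata), or -1
 * on system failure.  The source file descriptor is unused here but
 * would let the callback read segment data if needed. */
static int composeSegmentMetadata(unsigned long long fileOffset,
		unsigned int recordOffset, unsigned int length,
		int sourceFileFD, char *buffer)
{
	int n = snprintf(buffer, 63, "off=%llu,len=%u", fileOffset, length);

	return (n < 0 || n >= 63) ? -1 : n;
}
```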

    The return value for each CFDP \"request\" function (put, cancel, suspend, resume, report) is a reference number that enables \"events\" obtained by calling cfdp_get_event() to be matched to the requests that caused them. Events with a reference number set to zero were caused by autonomous CFDP activity, e.g., the reception of a file data segment.

    #include \"cfdp.h\"\n\ntypedef enum\n{\n    CksumTypeUnknown = -1,\n    ModularChecksum = 0,\n    CRC32CChecksum = 2,\n    NullChecksum = 15\n} CfdpCksumType;\n\ntypedef int (*CfdpReaderFn)(int fd, unsigned int *checksum, CfdpCksumType ckType);\n\ntypedef int (*CfdpMetadataFn)(uvast fileOffset, unsigned int recordOffset, unsigned int length, int sourceFileFD, char *buffer);\n\ntypedef enum\n{\n    CfdpCreateFile = 0,\n    CfdpDeleteFile,\n    CfdpRenameFile,\n    CfdpAppendFile,\n    CfdpReplaceFile,\n    CfdpCreateDirectory,\n    CfdpRemoveDirectory,\n    CfdpDenyFile,\n    CfdpDenyDirectory\n} CfdpAction;\n\ntypedef enum\n{\n    CfdpNoEvent = 0,\n    CfdpTransactionInd,\n    CfdpEofSentInd,\n    CfdpTransactionFinishedInd,\n    CfdpMetadataRecvInd,\n    CfdpFileSegmentRecvInd,\n    CfdpEofRecvInd,\n    CfdpSuspendedInd,\n    CfdpResumedInd,\n    CfdpReportInd,\n    CfdpFaultInd,\n    CfdpAbandonedInd\n} CfdpEventType;\n\ntypedef struct\n{\n    char            *sourceFileName;\n    char            *destFileName;\n    MetadataList    messagesToUser;\n    MetadataList    filestoreRequests;\n    CfdpHandler     *faultHandlers;\n    int             unacknowledged;\n    unsigned int    flowLabelLength;\n    unsigned char   *flowLabel;\n    int             recordBoundsRespected;\n    int             closureRequested;\n} CfdpProxyTask;\n\ntypedef struct\n{\n    char            *directoryName;\n    char            *destFileName;\n} CfdpDirListTask;\n
    "},{"location":"ION-Application-Service-Interface/#cfdp_attach","title":"cfdp_attach","text":"
    int cfdp_attach()\n

    Attaches the application to CFDP functionality on the local computer.

    Return Value

    "},{"location":"ION-Application-Service-Interface/#cfdp_entity_is_started","title":"cfdp_entity_is_started","text":"
    int cfdp_entity_is_started()\n

    Return Value * 1: if the local CFDP entity has been started and not yet stopped * 0: otherwise

    "},{"location":"ION-Application-Service-Interface/#cfdp_detach","title":"cfdp_detach","text":"
    void cfdp_detach()\n

    Terminates all access to CFDP functionality on the local computer.

    "},{"location":"ION-Application-Service-Interface/#cfdp_compress_number","title":"cfdp_compress_number","text":"
    void cfdp_compress_number(CfdpNumber *toNbr, uvast from)\n

    Converts an unsigned vast number into a CfdpNumber structure, e.g., for use when invoking the cfdp_put() function.

    "},{"location":"ION-Application-Service-Interface/#cfdp_decompress_number","title":"cfdp_decompress_number","text":"
    void cfdp_decompress_number(uvast toNbr, CfdpNumber *from)\n

    Converts a numeric value in a CfdpNumber structure to an unsigned vast integer.

    "},{"location":"ION-Application-Service-Interface/#cfdp_update_checksum","title":"cfdp_update_checksum","text":"
    void cfdp_update_checksum(unsigned char octet, uvast *offset, unsigned int *checksum, CfdpCksumType ckType)\n

    For use by an application-specific file reader callback function, which must pass to cfdp_update_checksum() the value of each byte (octet) it reads. offset must be octet's displacement in bytes from the start of the file. The checksum pointer is provided to the reader function by CFDP.

    "},{"location":"ION-Application-Service-Interface/#cfdp_create_usrmsg_list","title":"cfdp_create_usrmsg_list","text":"
    MetadataList cfdp_create_usrmsg_list()\n

    Creates a non-volatile linked list, suitable for containing messages-to-user that are to be presented to cfdp_put().

    "},{"location":"ION-Application-Service-Interface/#cfdp_add_usrmsg","title":"cfdp_add_usrmsg","text":"
    int cfdp_add_usrmsg(MetadataList list, unsigned char *text, int length)\n

    Appends the indicated message-to-user to list.

    "},{"location":"ION-Application-Service-Interface/#cfdp_get_usrmsg","title":"cfdp_get_usrmsg","text":"
    int cfdp_get_usrmsg(MetadataList list, unsigned char *textBuf, int *length)\n

    Removes from list the first of the remaining messages-to-user contained in the list and delivers its text and length. When the last message in the list is delivered, destroys the list.

    "},{"location":"ION-Application-Service-Interface/#cfdp_destroy_usrmsg_list","title":"cfdp_destroy_usrmsg_list","text":"
    void cfdp_destroy_usrmsg_list(MetadataList *list)\n

    Removes and destroys all messages-to-user in list and destroys the list.

    "},{"location":"ION-Application-Service-Interface/#cfdp_create_fsreq_list","title":"cfdp_create_fsreq_list","text":"
    MetadataList cfdp_create_fsreq_list()\n

    Creates a non-volatile linked list, suitable for containing filestore requests that are to be presented to cfdp_put().

    "},{"location":"ION-Application-Service-Interface/#cfdp_add_fsreq","title":"cfdp_add_fsreq","text":"
    int cfdp_add_fsreq(MetadataList list, CfdpAction action, char *firstFileName, char *secondFileName)\n

    Appends the indicated filestore request to list.

    "},{"location":"ION-Application-Service-Interface/#cfdp_get_fsreq","title":"cfdp_get_fsreq","text":"
    int cfdp_get_fsreq(MetadataList list, CfdpAction *action, char *firstFileNameBuf, char *secondFileNameBuf)\n

    Removes from list the first of the remaining filestore requests contained in the list and delivers its action code and file names. When the last request in the list is delivered, destroys the list.

    "},{"location":"ION-Application-Service-Interface/#cfdp_destroy_fsreq_list","title":"cfdp_destroy_fsreq_list","text":"
    void cfdp_destroy_fsreq_list(MetadataList *list)\n

    Removes and destroys all filestore requests in list and destroys the list.

    "},{"location":"ION-Application-Service-Interface/#cfdp_get_fsresp","title":"cfdp_get_fsresp","text":"
    int cfdp_get_fsresp(MetadataList list, CfdpAction *action, int *status, char *firstFileNameBuf, char *secondFileNameBuf, char *messageBuf)\n

    Removes from list the first of the remaining filestore responses contained in the list and delivers its action code, status, file names, and message. When the last response in the list is delivered, destroys the list.

    "},{"location":"ION-Application-Service-Interface/#cfdp_destroy_fsresp_list","title":"cfdp_destroy_fsresp_list","text":"
    void cfdp_destroy_fsresp_list(MetadataList *list)\n

    Removes and destroys all filestore responses in list and destroys the list.

    "},{"location":"ION-Application-Service-Interface/#cfdp_read_space_packets","title":"cfdp_read_space_packets","text":"
    int cfdp_read_space_packets(int fd, unsigned int *checksum)\n

    This is a standard \"reader\" function that segments the source file on CCSDS space packet boundaries. Multiple small packets may be aggregated into a single file data segment.

    "},{"location":"ION-Application-Service-Interface/#cfdp_read_text_lines","title":"cfdp_read_text_lines","text":"
    int cfdp_read_text_lines(int fd, unsigned int *checksum)\n

    This is a standard \"reader\" function that segments a source file of text lines on line boundaries.

    "},{"location":"ION-Application-Service-Interface/#cfdp_put","title":"cfdp_put","text":"
    int cfdp_put(CfdpNumber *destinationEntityNbr, unsigned int utParmsLength, unsigned char *utParms, char *sourceFileName, char *destFileName, CfdpReaderFn readerFn, CfdpMetadataFn metadataFn, CfdpHandler *faultHandlers, unsigned int flowLabelLength, unsigned char *flowLabel, unsigned int closureLatency, MetadataList messagesToUser, MetadataList filestoreRequests, CfdpTransactionId *transactionId)\n

    Sends the file identified by sourceFileName to the CFDP entity identified by destinationEntityNbr. destFileName indicates the name by which the file will be catalogued upon arrival at its final destination; if NULL, the destination file name defaults to sourceFileName. If sourceFileName is NULL, the application is assumed to be requesting transmission of metadata only (as discussed below) and destFileName is ignored. Note that both sourceFileName and destFileName are interpreted as path names, i.e., directory paths may be indicated in either or both. The syntax of path names is opaque to CFDP; the syntax of sourceFileName must conform to the path naming syntax of the source entity's file system and the syntax of destFileName must conform to the path naming syntax of the destination entity's file system.

    The byte array identified by utParms, if non-NULL, is interpreted as transmission control information that is to be passed on to the UT layer. The nominal UT layer for ION's CFDP being Bundle Protocol, the utParms array is normally a pointer to a structure of type BpUtParms; see the bp man page for a discussion of the parameters in that structure.

    closureLatency is the length of time following transmission of the EOF PDU within which a responding Transaction Finish PDU is expected. If no Finish PDU is requested, this parameter value should be zero.

    messagesToUser and filestoreRequests, where non-zero, must be the addresses of non-volatile linked lists (that is, linked lists in ION's SDR database) of CfdpMsgToUser and CfdpFilestoreRequest objects identifying metadata that are intended to accompany the transmitted file. Note that this metadata may accompany a file of zero length (as when sourceFileName is NULL as noted above) -- a transmission of metadata only.

    Return Value

    "},{"location":"ION-Application-Service-Interface/#cfdp_cancel","title":"cfdp_cancel","text":"
    int cfdp_cancel(CfdpTransactionId *transactionId)\n

    Cancels transmission or reception of the indicated transaction. Note that, since the ION implementation of CFDP is Unacknowledged, cancellation of a file transmission may have little effect.

    Return Value * request number: on success * -1: on any error

    "},{"location":"ION-Application-Service-Interface/#cfdp_suspend","title":"cfdp_suspend","text":"
    int cfdp_suspend(CfdpTransactionId *transactionId)\n

    Suspends transmission of the indicated transaction. Note that, since the ION implementation of CFDP is Unacknowledged, suspension of a file transmission may have little effect.

    Return Value * request number: on success * -1: on any error

    "},{"location":"ION-Application-Service-Interface/#cfdp_resume","title":"cfdp_resume","text":"
    int cfdp_resume(CfdpTransactionId *transactionId)\n

    Resumes transmission of the indicated transaction. Note that, since the ION implementation of CFDP is Unacknowledged, resumption of a file transmission may have little effect.

    Return Value * request number: on success * -1: on any error

    "},{"location":"ION-Application-Service-Interface/#cfdp_report","title":"cfdp_report","text":"
    int cfdp_report(CfdpTransactionId *transactionId)\n

    Requests issuance of a report on the transmission or reception progress of the indicated transaction. The report takes the form of a character string that is returned in a CfdpEvent structure; use cfdp_get_event() to receive the event (which may be matched to the request by request number).

    Return Value * request number: on success * 0: if the transaction ID is unknown * -1: on any error

    "},{"location":"ION-Application-Service-Interface/#cfdp_get_event","title":"cfdp_get_event","text":"
    int cfdp_get_event(CfdpEventType *type, time_t *time, int *reqNbr, CfdpTransactionId *transactionId, char *sourceFileNameBuf, char *destFileNameBuf, uvast *fileSize, MetadataList *messagesToUser, uvast *offset, unsigned int *length, CfdpCondition *condition, uvast *progress, CfdpFileStatus *fileStatus, CfdpDeliveryCode *deliveryCode, CfdpTransactionId *originatingTransactionId, char *statusReportBuf, MetadataList *filestoreResponses);\n

    Populates return value fields with data from the oldest CFDP event not yet delivered to the application. cfdp_get_event() blocks indefinitely until a CFDP processing event is delivered or the function is interrupted by an invocation of cfdp_interrupt().

    Return Value * 0: on success; on application error, zero is likewise returned but errno is set to EINVAL * -1: on system failure

    "},{"location":"ION-Application-Service-Interface/#cfdp_interrupt","title":"cfdp_interrupt","text":"
    void cfdp_interrupt()\n

    Interrupts a cfdp_get_event() invocation. This function is designed to be called from a signal handler.

    "},{"location":"ION-Application-Service-Interface/#cfdp_rput","title":"cfdp_rput","text":"
    int cfdp_rput(CfdpNumber *respondentEntityNbr, unsigned int utParmsLength, unsigned char *utParms, char *sourceFileName, char *destFileName, CfdpReaderFn readerFn, CfdpHandler *faultHandlers, unsigned int flowLabelLength, unsigned char *flowLabel, unsigned int closureLatency, MetadataList messagesToUser, MetadataList filestoreRequests, CfdpNumber *beneficiaryEntityNbr, CfdpProxyTask *proxyTask, CfdpTransactionId *transactionId)\n

    Sends to the indicated respondent entity a \"proxy\" request to perform a file transmission. The transmission is to be subject to the configuration values in proxyTask and the destination of the file is to be the entity identified by beneficiaryEntityNbr.

    "},{"location":"ION-Application-Service-Interface/#cfdp_rput_cancel","title":"cfdp_rput_cancel","text":"
    int cfdp_rput_cancel(CfdpNumber *respondentEntityNbr, unsigned int utParmsLength, unsigned char *utParms, char *sourceFileName, char *destFileName, CfdpReaderFn readerFn, CfdpHandler *faultHandlers, unsigned int flowLabelLength, unsigned char *flowLabel, unsigned int closureLatency, MetadataList messagesToUser, MetadataList filestoreRequests, CfdpTransactionId *rputTransactionId, CfdpTransactionId *transactionId)\n

    Sends to the indicated respondent entity a request to cancel a prior \"proxy\" file transmission request as identified by rputTransactionId, which is the value of transactionId that was returned by that earlier proxy transmission request.

    "},{"location":"ION-Application-Service-Interface/#cfdp_get","title":"cfdp_get","text":"
    int cfdp_get(CfdpNumber *respondentEntityNbr, unsigned int utParmsLength, unsigned char *utParms, char *sourceFileName, char *destFileName, CfdpReaderFn readerFn, CfdpHandler *faultHandlers, unsigned int flowLabelLength, unsigned char *flowLabel, unsigned int closureLatency, MetadataList messagesToUser, MetadataList filestoreRequests, CfdpProxyTask *proxyTask, CfdpTransactionId *transactionId)\n

    Same as cfdp_rput except that beneficiaryEntityNbr is omitted; the local entity is the implicit beneficiary of the request.

    "},{"location":"ION-Application-Service-Interface/#cfdp_rls","title":"cfdp_rls","text":"
    int cfdp_rls(CfdpNumber *respondentEntityNbr, unsigned int utParmsLength, unsigned char *utParms, char *sourceFileName, char *destFileName, CfdpReaderFn readerFn, CfdpHandler *faultHandlers, unsigned int flowLabelLength, unsigned char *flowLabel, unsigned int closureLatency, MetadataList messagesToUser, MetadataList filestoreRequests, CfdpDirListTask *dirListTask, CfdpTransactionId *transactionId)\n

    Sends to the indicated respondent entity a request to prepare a directory listing, save that listing in a file, and send it to the local entity. The request is subject to the configuration values in dirListTask.

    "},{"location":"ION-Application-Service-Interface/#cfdp_preview","title":"cfdp_preview","text":"
    int cfdp_preview(CfdpTransactionId *transactionId, uvast offset, unsigned int length, char *buffer);\n

    This function enables the application to get an advance look at the content of a file that CFDP has not yet fully received. Reads length bytes, starting offset bytes from the start of the destination file of the transaction identified by transactionId, into buffer.

    Return Value * number of bytes read: on success * 0: on user error (transaction is nonexistent or is outbound, or offset is beyond the end of file) * -1: on system failure

    "},{"location":"ION-Application-Service-Interface/#cfdp_map","title":"cfdp_map","text":"
    int cfdp_map(CfdpTransactionId *transactionId, unsigned int *extentCount, CfdpExtent *extentsArray);\n

    This function enables the application to report on the portions of a partially-received file that have been received and written. Lists the continuous data extents received so far in the destination file of the transaction identified by transactionId. The extents (offset and length) are returned in the elements of extentsArray; the number of extents returned in the array is the total number of continuous extents received so far or the value passed in via extentCount, whichever is less.

    Return Value * 0: on success; the total number of extents received so far is reported through extentCount * -1: on system failure

    "},{"location":"ION-Application-Service-Interface/#cfdp-shell-test-program-cfdptest","title":"CFDP Shell Test Program: cfdptest","text":"

    ION provides a CFDP test program called cfdptest, which is installed as part of the regular ION build/install process and can be invoked from a terminal this way:

    cfdptest\n

    The shell program presents a ':' prompt for interactive commanding. You can type 'h' to see a list of available commands.

    One can also feed a sequence of commands to cfdptest non-interactively, in which case the program's stdout is not displayed. This is useful for running automated tests.

    cfdptest [file_containing_cfdptest_commands]\n

    A third way to use cfdptest is to feed it a command script while still displaying the interactive responses on stdout:

    cfdptest < [file_containing_cfdptest_commands]\n

    The cfdptest.c source code is also provided as a code example of how to write applications using the CFDP APIs. The cfdptest command set can be found in the manual pages here.

    "},{"location":"ION-Application-Service-Interface/#cfdp-application-code-example","title":"CFDP Application Code Example","text":"

    This section is a work in progress.

    "},{"location":"ION-Application-Service-Interface/#bundle-streaming-service-bss","title":"Bundle Streaming Service (BSS)","text":"

    The BSS library supports the streaming of data over delay-tolerant networking (DTN) bundles. The intent of the library is to enable an application to pass streaming data received in transmission time order (i.e., without time regressions) to an application-specific \"display\" function -- notionally for immediate real-time display -- while storing all received data (including out-of-order data) in a private database for playback under user control. The reception and real-time display of in-order data is performed by a background thread, leaving the application's main (foreground) thread free to respond to user commands controlling playback or other application-specific functions.

    The application-specific \"display\" function invoked by the background thread must conform to the RTBHandler type definition. It must return 0 on success, -1 on any error that should terminate the background thread. Only on return from this function will the background thread proceed to acquire the next BSS payload.

    All data acquired by the BSS background thread is written to a BSS database comprising three files: table, list, and data. The name of the database is the root name that is common to the three files, e.g., db3.tbl, db3.lst, db3.dat would be the three files making up the db3 BSS database. All three files of the selected BSS database must reside in the same directory of the file system.

    Several replay navigation functions in the BSS library require that the application provide a navigation state structure of type bssNav as defined in the bss.h header file. The application is not responsible for populating this structure; it is strictly for the private use of the BSS library.

    "},{"location":"ION-Application-Service-Interface/#bundle-streaming-service-bss-bundle-streaming-service-protocol-bssp-cla","title":"Bundle Streaming Service (BSS) & Bundle Streaming Service Protocol (BSSP CLA)","text":"

    The Bundle Streaming Service (BSS) and the Bundle Streaming Service Protocol (BSSP) CLA are independent modules.

    The BSSP CLA is designed to emulate a connection between two neighboring DTN nodes characterized by two delivery mechanisms: (a) a minimal-delay, unreliable channel (physical or logical), and (b) a potentially delayed, but reliable channel. The minimal-delay channel is emulated by UDP transport (with a timer mechanism added) and the reliable channel is emulated via TCP transport.

    A DTN user mission may decide to use a single CCSDS AOS or TM downlink with LTP CLA running on top as its reliability mechanism. In that case, it can directly use the LTP CLA in ION and interface it with the CCSDS framing protocol which could be implemented by the mission's avionic system or the radio.

    However, it is also possible that a mission may utilize different types of transports, for example, multiple downlinks via S-, X-, Ka-band, or optical links, each with its own reliability mechanism (or none). Alternatively, a flight system may use commercial communications services with differentiated delays and levels of reliability. In such a case, BSSP can be used to approximate that configuration in a lab environment for prototyping and testing the impact on streaming data delivery, until the actual CLAs are implemented and tested.

    The Bundle Streaming Service, on the other hand, is an application-level service that can be used with any underlying CLAs to handle both realtime and delayed, in-order playback of streaming data including video, audio, and telemetry. When the user scenario is appropriate, BSS can certainly be used over BSSP CLA, but that is not a requirement.

    "},{"location":"ION-Application-Service-Interface/#bss-apis","title":"BSS APIs","text":"

    The following section describes the BSS library APIs.

    "},{"location":"ION-Application-Service-Interface/#bssopen","title":"bssOpen","text":"

    int bssOpen(char *bssName, char *path, char *eid)\n
    Opens access to a BSS database, to enable data playback. bssName identifies the specific BSS database that is to be opened. path identifies the directory in which the database resides. eid is ignored. On any failure, returns -1. On success, returns zero.

    "},{"location":"ION-Application-Service-Interface/#bssstart","title":"bssStart","text":"
    int bssStart(char *bssName, char *path, char *eid, char *buffer, int bufLen, RTBHandler handler)\n

    Starts a BSS data acquisition background thread. bssName identifies the BSS database into which data will be acquired. path identifies the directory in which that database resides. eid is used to open the BP endpoint at which the delivered BSS bundle payload contents will be acquired. buffer identifies a data acquisition buffer, which must be provided by the application, and bufLen indicates the length of that buffer; received bundle payloads in excess of this length will be discarded.

    handler identifies the display function to which each in-order bundle payload will be passed. The time and count parameters passed to this function identify the received bundle, indicating the bundle's creation timestamp time (in seconds) and counter value. The buffer and bufLength parameters indicate the location into which the bundle's payload was acquired and the length of the acquired payload. handler must return -1 on any unrecoverable system error, 0 otherwise. A return value of -1 from handler will terminate the BSS data acquisition background thread.

    On any failure, returns -1. On success, returns zero.

    "},{"location":"ION-Application-Service-Interface/#bssrun","title":"bssRun","text":"
    int bssRun(char *bssName, char *path, char *eid, char *buffer, int bufLen, RTBHandler handler)\n

    A convenience function that performs both bssOpen() and bssStart(). On any failure, returns -1. On success, returns zero.

    "},{"location":"ION-Application-Service-Interface/#bssclose","title":"bssClose","text":"

    void bssClose()\n
    Terminates data playback access to the most recently opened BSS database.

    "},{"location":"ION-Application-Service-Interface/#bssstop","title":"bssStop","text":"
    void bssStop()\n

    Terminates the most recently initiated BSS data acquisition background thread.

    "},{"location":"ION-Application-Service-Interface/#bssexit","title":"bssExit","text":"
    void bssExit()\n

    A convenience function that performs both bssClose() and bssStop().

    "},{"location":"ION-Application-Service-Interface/#bssread","title":"bssRead","text":"
    long bssRead(bssNav nav, char *data, int dataLen)\n

    Copies the data at the current playback position in the database, as indicated by nav, into data; if the length of the data is in excess of dataLen then an error condition is asserted (i.e., -1 is returned). Note that bssRead() cannot be successfully called until nav has been populated, nominally by a preceding call to bssSeek(), bssNext(), or bssPrev(). Returns the length of data read, or -1 on any error.

    "},{"location":"ION-Application-Service-Interface/#bssseek","title":"bssSeek","text":"
    long bssSeek(bssNav *nav, time_t time, time_t *curTime, unsigned long *count)\n

    Sets the current playback position in the database, in nav, to the data received in the bundle with the earliest creation time that was greater than or equal to time. Populates nav and also returns the creation time and bundle ID count of that bundle in curTime and count. Returns the length of data at this location, or -1 on any error.

    "},{"location":"ION-Application-Service-Interface/#bssseek_read","title":"bssSeek_read","text":"
    long bssSeek_read(bssNav *nav, time_t time, time_t *curTime, unsigned long *count, char *data, int dataLen)\n

    A convenience function that performs bssSeek() followed by an immediate bssRead() to return the data at the new playback position. Returns the length of data read, or -1 on any error.

    "},{"location":"ION-Application-Service-Interface/#bssnext","title":"bssNext","text":"
    long bssNext(bssNav *nav, time_t *curTime, unsigned long *count)\n

    Sets the playback position in the database, in nav, to the data received in the bundle with the earliest creation time and ID count greater than that of the bundle at the current playback position. Populates nav and also returns the creation time and bundle ID count of that bundle in curTime and count. Returns the length of data at this location (if any), -2 on reaching end of list, or -1 on any error.

    "},{"location":"ION-Application-Service-Interface/#bssnext_read","title":"bssNext_read","text":"
    long bssNext_read(bssNav *nav, time_t *curTime, unsigned long *count, char *data, int dataLen)\n

    A convenience function that performs bssNext() followed by an immediate bssRead() to return the data at the new playback position. Returns the length of data read, -2 on reaching end of list, or -1 on any error.

    "},{"location":"ION-Application-Service-Interface/#bssprev","title":"bssPrev","text":"
    long bssPrev(bssNav *nav, time_t *curTime, unsigned long *count)\n

    Sets the playback position in the database, in nav, to the data received in the bundle with the latest creation time and ID count earlier than that of the bundle at the current playback position. Populates nav and also returns the creation time and bundle ID count of that bundle in curTime and count. Returns the length of data at this location (if any), -2 on reaching end of list, or -1 on any error.

    "},{"location":"ION-Application-Service-Interface/#bssprev_read","title":"bssPrev_read","text":"
    long bssPrev_read(bssNav *nav, time_t *curTime, unsigned long *count, char *data, int dataLen)\n

    A convenience function that performs bssPrev() followed by an immediate bssRead() to return the data at the new playback position. Returns the length of data read, -2 on reaching end of list, or -1 on any error.

    "},{"location":"ION-Application-Service-Interface/#asynchronous-messaging-service-ams-apis","title":"Asynchronous Messaging Service (AMS) APIs","text":"

    This section is under construction.

    "},{"location":"ION-Application-Service-Interface/#delay-tolerant-payload-conditioning-dtpc-communications-library","title":"Delay-Tolerant Payload Conditioning (DTPC) communications library","text":""},{"location":"ION-Application-Service-Interface/#description","title":"Description","text":"

    The dtpc library provides functions enabling application software to use Delay-Tolerant Payload Conditioning (DTPC) when exchanging information over a delay-tolerant network. DTPC is an application service protocol, running in a layer immediately above Bundle Protocol, that offers delay-tolerant support for several end-to-end services to applications that may require them. These services include delivery of application data items in transmission (rather than reception) order; detection of reception gaps in the sequence of transmitted application data items, with end-to-end negative acknowledgment of the missing data; end-to-end positive acknowledgment of successfully received data; end-to-end retransmission of missing data, driven either by negative acknowledgment or timer expiration; suppression of duplicate application data items; aggregation of small application data items into large bundle payloads, to reduce bundle protocol overhead; and application-controlled elision of redundant data items in aggregated payloads, to improve link utilization.

    "},{"location":"ION-Application-Service-Interface/#dtpc-apis","title":"DTPC APIs","text":"

    int dtpc_attach( )\n
    Attaches the application to DTPC functionality on the local computer. Returns 0 on success, -1 on any error.

    void dtpc_detach( )\n

    Terminates all access to DTPC functionality on the local computer.

    int dtpc_entity_is_started( )\n
    Returns 1 if the local DTPC entity has been started and not yet stopped, 0 otherwise.

    int dtpc_open(unsigned int topicID, DtpcElisionFn elisionFn, DtpcSAP *dtpcsapPtr)\n

    Establishes the application as the sole authorized client for posting and receiving application data items on topic topicID within the local BP node. On success, the service access point for posting and receiving such data items is placed in *dtpcsapPtr, the elision callback function elisionFn (if not NULL) is associated with this topic, and 0 is returned. Returns -1 on any error.

    int dtpc_send(unsigned int profileID, DtpcSAP sap, char *destEid, unsigned int maxRtx, unsigned int aggrSizeLimit, unsigned int aggrTimeLimit, int lifespan, BpAncillaryData *ancillaryData, unsigned char srrFlags, BpCustodySwitch custodySwitch, char *reportToEid, int classOfService, Object item, unsigned int length)\n
    Inserts an application data item into an outbound DTPC application data unit destined for destEid.

    Transmission of that outbound ADU will be subject to the profile identified by profileID, as asserted by dtpcadmin(1), if profileID is non-zero. In that case, maxRtx, aggrSizeLimit, aggrTimeLimit, lifespan, ancillaryData, srrFlags, custodySwitch, reportToEid, and classOfService are ignored.

    If profileID is zero then the profile asserted by dtpcadmin(1) that matches maxRtx, aggrSizeLimit, aggrTimeLimit, lifespan, ancillaryData, srrFlags, custodySwitch, reportToEid, and classOfService will govern transmission of the ADU, unless no such profile has been asserted, in which case dtpc_send() returns 0 indicating user error.

    maxRtx is the maximum number of times any single DTPC ADU transmitted subject to the indicated profile may be retransmitted by the DTPC entity. If maxRtx is zero, then the DTPC transport service features (in-order delivery, end-to-end acknowledgment, etc.) are disabled for this profile.

    aggrSizeLimit is the size threshold for concluding aggregation of an outbound ADU and requesting transmission of that ADU. If aggrSizeLimit is zero, then the DTPC transmission optimization features (aggregation and elision) are disabled for this profile.

    aggrTimeLimit is the time threshold for concluding aggregation of an outbound ADU and requesting transmission of that ADU. If aggrTimeLimit is zero, then the DTPC transmission optimization features (aggregation and elision) are disabled for this profile.

    lifespan, ancillaryData, srrFlags, custodySwitch, reportToEid, and classOfService are as defined for bp_send (see bp(3)).

    item must be an object allocated within ION's SDR \"heap\", and length must be the length of that object. The item will be inserted into the outbound ADU's list of data items posted for the topic associated with sap, and the elision callback function declared for sap (if any, and if the applicable profile does not disable transmission optimization features) will be invoked immediately after insertion of the application data item but before DTPC makes any decision on whether or not to initiate transmission of the outbound ADU.

    The function returns 1 on success, 0 on any user application error, -1 on any system error.

    int dtpc_receive(DtpcSAP sap, DtpcDelivery *dlvBuffer, int timeoutSeconds)\n
    Receives a single DTPC application data item, or reports on some failure of DTPC reception activity.

    The \"result\" field of the dlvBuffer structure will be used to indicate the outcome of the data reception activity.

    If at least one application data item on the topic associated with sap has not yet been delivered to the SAP, then the payload of the oldest such item will be returned in dlvBuffer->item and dlvBuffer->result will be set to PayloadPresent. If there is no such item, dtpc_receive() blocks for up to timeoutSeconds while waiting for one to arrive.

    If timeoutSeconds is DTPC_POLL (i.e., zero) and no application data item is awaiting delivery, or if timeoutSeconds is greater than zero but no item arrives before timeoutSeconds have elapsed, then dlvBuffer->result will be set to ReceptionTimedOut. If timeoutSeconds is DTPC_BLOCKING (i.e., -1) then dtpc_receive() blocks until either an item arrives or the function is interrupted by an invocation of dtpc_interrupt().

    dlvBuffer->result will be set to ReceptionInterrupted in the event that the calling process received and handled some signal other than SIGALRM while waiting for a bundle.

    dlvBuffer->result will be set to DtpcServiceStopped in the event that DTPC service has been terminated on the local node.

    The application data item delivered in the DTPC delivery structure, if any, will be an object allocated within ION's SDR \"heap\"; the length of that object will likewise be provided in the DtpcDelivery structure.

    Be sure to call dtpc_release_delivery() after every successful invocation of dtpc_receive().

    The function returns 0 on success, -1 on any error.

    void dtpc_interrupt(DtpcSAP sap)\n
    Interrupts a dtpc_receive() invocation that is currently blocked. This function is designed to be called from a signal handler; for this purpose, sap may need to be obtained from a static variable.

    void dtpc_release_delivery(DtpcDelivery *dlvBuffer)\n
    Releases resources allocated to the indicated DTPC delivery.

    void dtpc_close(DtpcSAP sap)\n
    Removes the application as the sole authorized client for posting and receiving application data items on the topic indicated in sap within the local BP node. The application relinquishes its ability to send and receive application data items on the indicated topic.

    "},{"location":"ION-Config-File-Templates/","title":"Available Configuration File Templates","text":"

    The following configurations can be downloaded (see file attachment)

    "},{"location":"ION-Config-File-Templates/#usage-notes","title":"Usage Notes","text":"

    These configuration files are provided to give you a basic functional setup; they may not be sufficient to support all of the features and throughput performance you want to achieve for your network. Please use them as templates and apply updates as necessary.

    "},{"location":"ION-Deployment-Guide/","title":"ION Deployment Guide","text":"

    Version 4.1.3

    Jay Gao, Jet Propulsion Laboratory, California Institute of Technology

    Sky DeBaun, Jet Propulsion Laboratory, California Institute of Technology

    Document Change Log

    Ver No. Date Description Note V4.1.3 11/6/2023 Add LTP Performance Test Converted to markdown V4.1.2 1/5/2023 Added notes on SDR file and CGRM"},{"location":"ION-Deployment-Guide/#overview","title":"Overview","text":"

    The effort required to deploy the Interplanetary Overlay Network (ION) software in an operational setting may vary widely depending on the scope of the deployment and the degree to which the required ION functionality coincides with the capability provided by default in the software as distributed. This effort will be expended in two general phases: initial infusion and ongoing operation.

    "},{"location":"ION-Deployment-Guide/#infusion","title":"Infusion","text":"

    Even in the best case, some minimal degree of configuration will be required. Many elements of ION behavior are managed at run time by decisions recorded in ION's protocol state databases, as populated by a variety of administration utility programs. Others are managed at compile time by means of compiler command-line switches selected when the software is built. These compile-time configuration options are described in the Configuration section below.

    In some cases, mission-specific behavior that goes beyond the options built into ION must be enabled during ION deployment. The intent of the ION design is to minimize -- to eliminate, if possible -- any need to modify ION source code in order to enable mission-specific behavior. Two general strategies are adopted for this purpose.

    First, ION includes a number of conditionally defined functions that can be cleanly replaced with mission-specific alternative source code by setting a compiler command-line switch at build time. Setting such a switch causes the mission-specific source code, written in C, to be simply included within the standard ION source code at the time of compilation.

    Second, more generally it is always possible to add new application executables, new startup/shutdown/monitor/control utilities or scripts, and even entirely new route computation systems, BP convergence-layer adapters, and/or LTP link service adapters without ever altering the distributed ION source code. A few rough guidelines for making these kinds of modifications are described in the Adaptation section below.

    Finally, in rare cases it may be necessary to execute ION in an operating-system environment to which it has not yet been ported. Guidance for porting ION to new platforms will be provided in a future edition of this Deployment Guide.

    "},{"location":"ION-Deployment-Guide/#operation","title":"Operation","text":"

    On an ongoing basis, an ION deployment may require reconfiguration from time to time and/or may require troubleshooting to resolve performance or stability problems. Some suggestions for reconfiguration and troubleshooting procedures are offered in the Operation section below.

    "},{"location":"ION-Deployment-Guide/#configuration","title":"Configuration","text":""},{"location":"ION-Deployment-Guide/#configuring-the-ici-module","title":"Configuring the \"ici\" module","text":"

    Declaring values for the following variables, by setting parameters that are provided to the C compiler (for example, -DFSWSOURCE or -DSM_SEMBASEKEY=0xff13), will alter the functionality of ION as noted below.

    PRIVATE_SYMTAB

    This option causes ION to be built for VxWorks 5.4 or RTEMS with reliance on a small private local symbol table that is accessed by means of a function named sm_FindFunction. Both the table and the function definition are, by default, provided by the symtab.c source file, which is automatically included within the platform_sm.c source when this option is set. The table provides the address of the top-level function to be executed when a task for the indicated symbol (name) is to be spawned, together with the priority at which that task is to execute and the amount of stack space to be allocated to that task.

    PRIVATE_SYMTAB is defined by default for RTEMS but not for VxWorks 5.4.

    Absent this option, ION on VxWorks 5.4 must successfully execute the VxWorks symFindByName function in order to spawn a new task. For this purpose the entire VxWorks symbol table for the compiled image must be included in the image, and task priority and stack space allocation must be explicitly specified when tasks are spawned.

    FSWLOGGER

    This option causes the standard ION logging function, which simply writes all ION status messages to a file named ion.log in the current working directory, to be replaced (by #include) with code in the source file fswlogger.c. A file of this name must be in the inclusion path for the compiler, as defined by -Ixxxx compiler option parameters.

    FSWCLOCK

    This option causes the invocation of the standard time function within getUTCTime (in ion.c) to be replaced (by #include) with code in the source file fswutc.c, which might for example invoke a mission-specific function to read a value from the spacecraft clock. A file of this name must be in the inclusion path for the compiler.

    FSWWDNAME

    This option causes the invocation of the standard getcwd function within cfdpInit (in libcfdpP.c) to be replaced (by #include) with code in the source file wdname.c, which must in some way cause the mission-specific value of current working directory name to be copied into cfdpdbBuf.workingDirectoryName. A file of this name must be in the inclusion path for the compiler.

    FSWSYMTAB

    If the PRIVATE_SYMTAB option is also set, then the FSWSYMTAB option causes the code in source file mysymtab.c to be included in platform_sm.c in place of the default symbol table access implementation in symtab.c. A file named mysymtab.c must be in the inclusion path for the compiler.

    FSWSOURCE

    This option simply causes FSWLOGGER, FSWCLOCK, FSWWDNAME, and FSWSYMTAB all to be set.

    GDSLOGGER

    This option causes the standard ION logging function, which simply writes all ION status messages to a file named ion.log in the current working directory, to be replaced (by #include) with code in the source file gdslogger.c. A file of this name must be in the inclusion path for the compiler, as defined by -Ixxxx compiler option parameters.

    GDSSOURCE

    This option simply causes GDSLOGGER to be set.

    TRACKRFXEVENTS

    This option causes user-written code, in a file named rfxtracker.c, to be executed every time the rfxclock daemon dispatches a scheduled RFX event such as the start or end of a transmission contact. A file of this name must be in the inclusion path for the compiler, as defined by -Ixxxx compiler option parameters.

    ION_OPS_ALLOC=*xx*

    This option specifies the percentage of the total non-volatile storage space allocated to ION that is reserved for protocol operational state information, i.e., is not available for the storage of bundles or LTP segments. The default value is 40.

    ION_SDR_MARGIN=*xx*

    This option specifies the percentage of the total non-volatile storage space allocated to ION that is reserved simply as margin, for contingency use. The default value is 20.

    The sum of ION_OPS_ALLOC and ION_SDR_MARGIN defines the amount of non-volatile storage space that is sequestered at the time ION operations are initiated: for purposes of congestion forecasting and prevention of resource oversubscription, this sum is subtracted from the total size of the SDR \"heap\" to determine the maximum volume of space available for bundles and LTP segments. Data reception and origination activities fail whenever they would cause the total amount of data store space occupied by bundles and segments to exceed this limit.

    HEAP_PTRS

    This is an optimization option for the SDR non-volatile data management system: when set, it enables the value of any variable in the SDR heap to be accessed directly by means of a pointer into the dynamic memory that is used as the data store storage medium, rather than by reading the variable into a location in local stack memory. Note that this option must not be enabled if the data store is configured for file storage only, i.e., if the SDR_IN_DRAM flag was set to zero at the time the data store was created by calling sdr_load_profile. See the ionconfig(5) man page in Appendix A for more information.

    NO_SDR_TRACE

    This option causes non-volatile storage utilization tracing functions to be omitted from ION when the SDR system is built. It disables a useful debugging option but reduces the size of the executable software.

    NO_PSM_TRACE

    This option causes memory utilization tracing functions to be omitted from ION when the PSM system is built. It disables a useful debugging option but reduces the size of the executable software.

    IN_FLIGHT

    This option controls the behavior of ION when an unrecoverable error is encountered.

    If it is set, then when an unrecoverable error is encountered the status message \"Unrecoverable SDR error\" is logged and the SDR non-volatile storage management system is globally disabled: the current data store access transaction is ended and (provided transaction reversibility is enabled) rolled back, and all ION tasks terminate.

    Otherwise, the ION task that encountered the error is simply aborted, causing a core dump to be produced to support debugging.

    SM_SEMKEY=0x*XXXX*

    This option overrides the default value (0xee01) of the identifying \"key\" used in creating and locating the global ION shared-memory system mutex.

    SVR4_SHM

    This option causes ION to be built using svr4 shared memory as the pervasive shared-memory management mechanism. svr4 shared memory is selected by default when ION is built for any platform other than MinGW (for which File Mapping objects are used), VxWorks 5.4, or RTEMS. (For the latter two operating systems all memory is shared anyway, due to the absence of a protected-memory mode.)

    POSIX1B_SEMAPHORES

    This option causes ION to be built using POSIX semaphores as the pervasive semaphore mechanism. POSIX semaphores are selected by default when ION is built for RTEMS but are otherwise not used or supported; this option enables the default to be overridden.

    SVR4_SEMAPHORES

    This option causes ION to be built using svr4 semaphores as the pervasive semaphore mechanism. svr4 semaphores are selected by default when ION is built for any platform other than MinGW (for which Windows event objects are used), VxWorks 5.4 (for which VxWorks native semaphores are the default choice), or RTEMS (for which POSIX semaphores are the default choice).

    SM_SEMBASEKEY=0x*XXXX*

    This option overrides the default value (0xee02) of the identifying \"key\" used in creating and locating the global ION shared-memory semaphore database, in the event that svr4 semaphores are used.

    SEMMNI=*xxx*

    This option declares to ION the total number of svr4 semaphore sets provided by the operating system, in the event that svr4 semaphores are used. It overrides the default value, which is 128. (Changing this value typically entails rebuilding the O/S kernel.)

    SEMMSL=*xxx*

    This option declares to ION the maximum number of semaphores in each svr4 semaphore set, in the event that svr4 semaphores are used. It overrides the default value, which is 250. (Changing this value typically entails rebuilding the O/S kernel.)

    SEMMNS=*xxx*

    This option declares to ION the total number of svr4 semaphores that the operating system can support; the maximum possible value is SEMMNI x SEMMSL. It overrides the default value, which is 32000. (Changing this value typically entails rebuilding the O/S kernel.)

    Note that this option is also supported in the MinGW (Windows) port of ION, with the same default value; changing this value does not involve an operating system modification.

    ION_NO_DNS

    This option causes the implementation of a number of Internet socket I/O operations to be omitted for ION. This prevents ION software from being able to operate over Internet connections, but it prevents link errors when ION is loaded on a spacecraft where the operating system does not include support for these functions.

    ERRMSGS_BUFSIZE=*xxxx*

    This option sets the size of the buffer in which ION status messages are constructed prior to logging. The default value is 4 KB.

    SPACE_ORDER=*x*

    This option declares the word size of the computer on which the compiled ION software will be running: it is the base-2 log of the number of bytes in an address. The default value is 2, i.e., the size of an address is 2^2 = 4 bytes. For a 64-bit machine, SPACE_ORDER must be declared to be 3, i.e., the size of an address is 2^3 = 8 bytes.

    NO_SDRMGT

    This option enables the SDR system to be used as a data access transaction system only, without doing any dynamic management of non-volatile data. With the NO_SDRMGT option set, the SDR system library can (and in fact must) be built from the sdrxn.c source file alone.

    DOS_PATH_DELIMITER

    This option causes ION_PATH_DELIMITER to be set to '\\' (backslash), for use in the construction of path names. The default value of ION_PATH_DELIMITER is '/' (forward slash, as is used in Unix-like operating systems).

    "},{"location":"ION-Deployment-Guide/#configuring-the-ltp-module","title":"Configuring the \"ltp\" module","text":"

    Declaring values for the following variables, by setting parameters that are provided to the C compiler (for example, -DUDP_MULTISEND), will alter the functionality of LTP as noted below.

    UDP_MULTISEND

    The UDP_MULTISEND option can improve LTP performance by sharply reducing system call overhead: multiple LTP segments encapsulated in UDP datagrams may be transmitted with a single sendmmsg() call rather than multiple sendmsg() calls. This reduces the cost of sending LTP blocks in small segments, which in turn can limit IP fragmentation for LTP traffic.

    Note that sendmmsg() has no built-in rate control and offers no opportunity to exercise the rate control algorithm that minimizes UDP congestion loss in non-MULTISEND LTP. In order to achieve a similar reduction in UDP congestion loss, a node that receives data sent by sendmmsg() may need to be configured for larger socket buffers. The sysctl operating system utility may be used for this purpose, setting new values for net.core.rmem_max, net.core.rmem_default, net.core.wmem_max, and net.core.wmem_default.

    Note also that not all operating systems support the sendmmsg() system call. ION currently enables UDP_MULTISEND only for flavors of Linux other than bionic.

    MULTISEND_SEGMENT_SIZE

    By default, ION LTP in UDP_MULTISEND mode will always limit LTP segment size to 1450 so that every segment may be encapsulated in an IP packet whose size does not exceed the standard Ethernet frame size. For networks in which the MTU is known to be larger, this parameter may be overridden at compile time.

    MULTISEND_BATCH_LIMIT

    By default, the maximum number of UDP datagrams that ION LTP in UDP_MULTISEND mode will send in a single sendmmsg() call is automatically computed as the block aggregation size threshold divided by the maximum segment size; that is, normally the amount of data sent per sendmmsg() call is about one LTP block. This parameter may be overridden at compile time.

    MULTIRECV_BUFFER_COUNT

    In UDP_MULTISEND mode, ION LTP will also use recvmmsg() to receive multiple LTP segments (encapsulated in UDP datagrams) in a single system call. By default, 127 segment reception buffers (each one large enough to receive a single LTP segment of maximum size) are reserved for this purpose. This parameter may be overridden at compile time.

    "},{"location":"ION-Deployment-Guide/#configuring-the-bp-module","title":"Configuring the \"bp\" module","text":"

    Declaring values for the following variables, by setting parameters that are provided to the C compiler (for example, -DION_NOSTATS or -DBRSTERM=60), will alter the functionality of BP as noted below.

    TargetFFS

    Setting this option adapts BP for use with the TargetFFS flash file system on the VxWorks operating system. TargetFFS apparently locks one or more system semaphores so long as a file is kept open. When a BP task keeps a file open for a sustained interval, subsequent file system access may cause a high-priority non-BP task to attempt to lock the affected semaphore and therefore block; in this event, the priority of the BP task may automatically be elevated by the inversion safety mechanisms of VxWorks. This \"priority inheritance\" can result in preferential scheduling for the BP task -- which does not need it -- at the expense of normally higher-priority tasks, and can thereby introduce runtime anomalies. BP tasks should therefore close files immediately after each access when running on a VxWorks platform that uses the TargetFFS flash file system. The TargetFFS compile-time option ensures that they do so.

    MULTIDUCTS

    It is possible for multiple outducts to be attached to a single egress plan, enabling some bundles to be forwarded to a neighboring node using one outduct while others are forwarded using another. Selection of the outduct to use for the forwarding of a given bundle is a function of the bpclm \"convergence-layer manager\" daemon; each of a given node's egress plans is managed by a single bpclm task. The default outduct selection algorithm exercised by bpclm can be overridden by means of the MULTIDUCTS compile-time configuration option. Setting the -DMULTIDUCTS switch causes the standard outduct configuration logic in the outductSelected() function of bpclm.c to be replaced (by #include) with code in the source file named selectcla.c. A file of this name must be in the inclusion path for the compiler, as defined by -Ixxxx compiler option parameters.

    The implementation of outductSelected() in the ION bpv7 implementation differs somewhat from that in the bpv6 implementation. The content of a very simple selectcla.c file for a node deploying bpv7 might look like this:

    if (bundle->destination.ssp.ipn.serviceNbr == 99)\n{\n    if (strcmp(protocol->name, \"bssp\") == 0)\n    {\n        return 1; /* Use a BSSP outduct for this bundle. */\n    }\n}\n

    Note that any element of the state of the bundle may be used to select an outduct based on any element of the state of the outduct. The intent is for ION to be able to accommodate virtually any mission-defined algorithm for selecting among communication channels between topologically adjacent BP nodes.

    BRSTERM=*xx*

    This option sets the maximum number of seconds by which the current time at the BRS server may exceed the time tag in a BRS authentication message from a client; if this interval is exceeded, the authentication message is presumed to be a replay attack and is rejected. Small values of BRSTERM are safer than large ones, but they require that clocks be more closely synchronized. The default value is 5.

    ION_NOSTATS

    Setting this option prevents the logging of bundle processing statistics in status messages.

    KEEPALIVE_PERIOD=*xx*

    This option sets the number of seconds between transmission of keep-alive messages over any TCP or BRS convergence-layer protocol connection. The default value is 15.

    ION_BANDWIDTH_RESERVED

    Setting this option overrides strict priority order in bundle transmission, which is the default. Instead, bandwidth is shared between the priority-1 and priority-0 queues on a 2:1 ratio whenever there is no priority-2 traffic.

    "},{"location":"ION-Deployment-Guide/#configuring-the-ams-module","title":"Configuring the \"ams\" module","text":"

    Defining the following macros, by setting parameters that are provided to the C compiler (for example, -DNOEXPAT or -DAMS_INDUSTRIAL), will alter the functionality of AMS as noted below.

    NOEXPAT

    Setting this option adapts AMS to expect MIB information to be presented to it in \"amsrc\" syntax (see the amsrc(5) man page) rather than in XML syntax (as described in the amsxml(5) man page), normally because the expat XML interpretation system is not installed. Note that the default syntax for AMS MIB information is now amsrc syntax so the -DNOEXPAT switch is rarely needed.

    AMS_INDUSTRIAL

    Setting this option adapts AMS to an \"industrial\" rather than safety-critical model for memory management. By default, the memory acquired for message transmission and reception buffers in AMS is allocated from limited ION working memory, which is fixed at ION start-up time; this limits the rate at which AMS messages may be originated and acquired. When -DAMS_INDUSTRIAL is set at compile time, the memory acquired for message transmission and reception buffers in AMS is allocated from system memory, using the familiar malloc() and free() functions; this enables much higher message traffic rates on machines with abundant system memory where flight software constraints on dynamic system memory allocation are not applicable.

    "},{"location":"ION-Deployment-Guide/#configuring-the-cfdp-module","title":"Configuring the \"cfdp\" module","text":"

    Defining the following macro, by setting a parameter that is provided to the C compiler (i.e., -DTargetFFS), will alter the functionality of CFDP as noted below.

    TargetFFS

    Setting this option adapts CFDP for use with the TargetFFS flash file system on the VxWorks operating system. TargetFFS apparently locks one or more system semaphores so long as a file is kept open. When a CFDP task keeps a file open for a sustained interval, subsequent file system access may cause a high-priority non-CFDP task to attempt to lock the affected semaphore and therefore block; in this event, the priority of the CFDP task may automatically be elevated by the inversion safety mechanisms of VxWorks. This \"priority inheritance\" can result in preferential scheduling for the CFDP task -- which does not need it -- at the expense of normally higher-priority tasks, and can thereby introduce runtime anomalies. CFDP tasks should therefore close files immediately after each access when running on a VxWorks platform that uses the TargetFFS flash file system. The TargetFFS compile-time option assures that they do so.

    "},{"location":"ION-Deployment-Guide/#initialization","title":"Initialization","text":"

    ION requires several runtime configuration settings to be defined at the time a node is initialized. Most notable are the settings for the Admin functions of ION. ION provides a variety of administration utilities including ionadmin, ionsecadmin, ltpadmin, bsspadmin, bpadmin, ipnadmin, and cfdpadmin. Each of the corresponding modules that is to be used at runtime will need to be configured. The commands that perform these configuration tasks are normally presented to the admin utility in an admin configuration file.

    In the Linux environment, two different styles of configuration files are possible. Both styles are accepted by the \"ionstart\" program (an AWK program) that installs as part of the official release. The first style requires that all configuration commands for all in-use admins be stored in one file. This single file is sectioned off internally to separate the commands of each admin. The ionstart program accepts this single configuration file's name as a parameter, parses the file looking for the sections belonging to each possible admin function, and then uses the commands within these sections to configure the corresponding modules.

    The other style requires that each admin have its own distinct configuration file. The ionstart program consumes these files as guided by command-line switches and parameters identifying each configuration file.

    "},{"location":"ION-Deployment-Guide/#runtime-parameters","title":"Runtime Parameters","text":"

    Some ION configuration parameters are declared only at node initialization time; they cannot later be revised. In particular, the ionadmin \"1\" (the numeral one) initialization command must be executed just once, before any other configuration command is processed. The first parameter to this command is required and is a numeric value that indicates the node number of the DTN node being configured. The second parameter to this command is optional; if present, it must provide the full pathname of a local file of immutable configuration parameter values:

    wmKey (integer)\nwmSize (integer)\nwmAddress (integer)\nsdrName (string)\n\nsdrWmSize (integer)\n# bit pattern in integer form, e.g., 3 for 00000011\nconfigFlags 3\nheapWords (integer)\nheapKey (integer)\npathName (string)\n

    This path name should NOT be enclosed in any type of quotation marks. The file is a text file with 2 fields per line; lines are processed in sequence. The first field on each line holds one of the parameter identifier text strings as shown above. The second field holds the value that will be placed into the identified parameter. Make sure that the data type specified in the second field matches the type expected.

    For documentation on these parameters, see the ionconfig(5) man page.
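    As a sketch, a minimal immutable-parameters file might look like the following; the values are illustrative only, and the authoritative field meanings and defaults are in ionconfig(5):

    ```
    configFlags 5
    heapWords 250000
    heapKey -1
    pathName /tmp/ion
    wmSize 5000000
    wmAddress 0
    ```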

    "},{"location":"ION-Deployment-Guide/#configflags","title":"configFlags","text":"

    The configFlags entry controls several features of the Simple Data Recorder (SDR). There are several flags of interest:

    #define SDR_IN_DRAM     1        /*  Write to & read from memory.  */\n#define SDR_IN_FILE     2        /*  Write file; read file if nec. */\n#define SDR_REVERSIBLE  4        /*  Transactions may be reversed. */\n

    SDR_IN_DRAM is required for normal ION operation and should virtually always be specified.
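    The configFlags value is the bitwise OR of the selected flags; a small sketch of the arithmetic:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Flag values as given in the text above. */
    #define SDR_IN_DRAM     1
    #define SDR_IN_FILE     2
    #define SDR_REVERSIBLE  4

    int main(void)
    {
        /* A common operational setting: heap in DRAM with reversible
           transactions -> configFlags 5. Adding SDR_IN_FILE gives 7. */
        int configFlags = SDR_IN_DRAM | SDR_REVERSIBLE;
        printf("configFlags %d\n", configFlags);
        assert(configFlags == 5);
        assert((SDR_IN_DRAM | SDR_IN_FILE | SDR_REVERSIBLE) == 7);
        return 0;
    }
    ```
    
    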

    When SDR_REVERSIBLE is specified, SDR transactions that fail (e.g., due to memory allocation failure) are rolled back, allowing transactions to fail gracefully without corrupting the ION databases. If the flag is not supplied, failed transactions cause an immediate task failure and, where supported, a core dump; that behavior is intended only as an aid to debugging, and in operations ION should normally be configured with reversible transactions. When transaction reversibility is enabled, ION creates and manages a log file in the directory named by \"pathName\", which must be writable by ION; the log tracks SDR changes and supports rollback to the last consistent state. The filesystem for this directory should be high-performance; a ramdisk is usually ideal. The maximum size of the log file depends on the largest transaction in the SDR, and is therefore of a size on the same order of magnitude as the largest bundle. NOTE that if the directory named by \"pathName\" does not exist then transaction reversibility will be disabled automatically; a message to this effect will be written to the ION log file.

    When SDR_IN_FILE is specified, ION creates a file in the \"pathName\" directory, which is maintained as a copy of the SDR heap in DRAM; whenever the SDR heap in memory is modified, the changes are also written to the SDR heap file. Thus the heap file is always the same size as the in-memory heap. Again, if the directory named by \"pathName\" does not exist then retention of the ION SDR heap in a file will be disabled automatically; a message to this effect will be written to the ION log file. NOTE that

    1. The use of SDR_IN_FILE may slow down all SDR transactions, which can significantly impact ION's transmission, relay, and reception speed. Users should conduct performance testing to ensure that keeping the SDR in a file still achieves the expected operational performance.
    2. The advantage of the SDR_IN_FILE option is that after an ION shutdown due to a power reset, in which the state of the SDR is not corrupted, it is possible to restart ION from the SDR file and resume data transfer operations such as LTP transactions. However, if ION shuts down due to an internal error, it is not recommended to keep the SDR file when restarting ION, as the SDR state is not certain to be free of corruption.
    3. As a general rule, always remove or move the SDR file away from the specified path between operations. It should be left in place only if users are intentionally attempting to resume the operations that were interrupted just prior to ION shutdown.
    "},{"location":"ION-Deployment-Guide/#allocating-working-memory","title":"Allocating Working Memory","text":"

    When ION stores a bundle, it typically holds part of the bundle in memory (heap), as determined by the maxheap parameter in bprc; the default value is about 650 bytes. The rest of the payload is placed in a file and tracked by a file reference. Also, before transmission, a bundle's header and extension blocks are kept in a data structure for quick lookup and manipulation; the bundle is serialized into a contiguous sequence of octets, according to the standard, just prior to transmission. Therefore, when a bundle is stored in an ION node, part of its footprint is in the 'heap' and part of it is in the 'working memory.'

    Testing shows that, leaving the maxHeap parameter at its default value, a bundle uses about 1.5KB of space in the heap and about 100-150 bytes in working memory. Adding a 200% margin, we recommend the following relationship between heapWords and wmSize:

    wmSize = 3 x heapWords x 8 x 0.4 / 10\n

    where 3 is the margin, 8 is the number of octets per word, 0.4 accounts for the fact that inbound and outbound heap space is only 40% of the heap, and 10 accounts for the empirically estimated 10:1 ratio between heap and working memory footprints per bundle.
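    A worked example of the sizing rule, using an illustrative heapWords value (computed in integer form to keep the arithmetic exact, with 0.4 written as 4/10):

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* wmSize = 3 x heapWords x 8 x 0.4 / 10, per the rule above;
           the heapWords value here is illustrative only. */
        long heapWords = 250000;
        long wmSize = 3 * heapWords * 8 * 4 / 100;   /* x 0.4 / 10 */
        printf("heapWords %ld -> wmSize %ld bytes\n", heapWords, wmSize);
        assert(wmSize == 240000);
        return 0;
    }
    ```
    
    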

    Many ION runtime configuration parameters can be declared at node initialization and later revised dynamically, either by passing supplementary configuration files to the admin utilities or by exercising the admin utilities in interactive mode.

    For documentation on the admin commands, see the man pages. The man page names are in the form of <xxx>rc, where <xxx> gets replaced by the specific module name (bp, dtn2, ion, ionsec, ipn, ltp, bssp, cfdp). The directories in which to find these files are: ./ici/doc/pod5, ./ltp/doc/pod5, ./bssp/doc/pod5, ./bp/doc/pod5, and ./cfdp/doc/pod5.

    "},{"location":"ION-Deployment-Guide/#multi-node-operation","title":"Multi-node Operation","text":"

    Normally the instantiation of ION on a given computer establishes a single ION node on that computer, for which hard-coded values of wmKey and sdrName (see ionconfig(5)) are used in common by all executables to assure that all elements of the system operate within the same state space. For some purposes, however, it may be desirable to establish multiple ION nodes on a single workstation. (For example, constructing an entire self-contained DTN network on a single machine may simplify some kinds of regression testing.) ION supports this configuration option as follows:

    "},{"location":"ION-Deployment-Guide/#shutdown","title":"Shutdown","text":"

    Included in the root directory of the ION distribution is a bash script named ionstop. This script executes each of the \"admin\" utilities and instructs each subsystem to exit, by supplying the dummy command file name \".\". Once all of the utilities have exited, the script calls another script named killm (likewise located in the root directory of ion-open-source). The killm script first tries to kill all ION processes by name, then tries to destroy all of the shared-memory resources allocated to ION at the time the node was created.

    There are also many \"local\" versions of the ionstop script, stored in test subdirectories out of which one or more ION instances were launched on the same host. These local versions differ from the ionstop script in the root directory in that (a) they usually contain additional, customized commands to clean up test artifacts such as ion.log, the SDR file, received test files, and temporary acquisition files for LTP and BP that remain after a test is completed, and (b) they generally do not execute killm, which would kill all ION processes, not just the ones related to the ION instance being terminated.

    If you invoke the ionstop script in the ION root directory, it does not clean up test artifacts or other products created during operation, and if it detects multiple ION instances running on the same host, it will NOT execute killm. In that case, the user is advised to always check that all processes are terminated properly and that shared memory is cleared appropriately.

    When running ionstop, the various administrative programs process a dummy command file \".\" that signals shutdown. Each program first checks the value of the environment variable ION_NODE_WDNAME, defined in the current shell, to determine which instance of ION must be taken down. The ION instance that is shut down does not depend on the shell's current directory, so it is possible to use either the ionstop script provided in ION's root directory or a local, customized version to shut down an individual ION instance.

    If you are having trouble shutting an ION node down, see the notes on \"Destroying a Node\" later in this Guide.

    It has been pointed out that if you are running ION in a Docker container inside a Kubernetes pod, the system is likely to assign process ID 1 to one of the ION processes at startup; since process 1 cannot be killed, the ionstop script can't complete and your node will not be cleanly destroyed. One solution seems to be to use dumb-init for the Docker container.

    To make this work, you may have to override your entry point in the manifest file used by the Kubectl \"apply\" command.

    "},{"location":"ION-Deployment-Guide/#example-configuration-files","title":"Example Configuration Files","text":""},{"location":"ION-Deployment-Guide/#ion-node-number-cross-reference","title":"ION Node Number Cross-reference","text":"

    When you define a DTN node, you do so using ionadmin and its Initialize command (using the token '1'). This node is then referenced by its node number throughout the rest of the configuration file.

    ## begin ionadmin  \n1   1  /home/spwdev/cstl/ion-configs/23/badajoz/3node-udp-ltp/badajoz.ionconfig\ns\n\na  contact  +1  +86400    25    25  50000000\na  contact  +1  +84600    25   101  50000000\na  contact  +1  +84600    25     1  50000000\n\na  contact  +1  +86400   101    25  50000000\na  contact  +1  +86400   101   101  50000000\na  contact  +1  +86400   101     1  50000000\n\na  contact  +1  +86400     1    25  50000000\na  contact  +1  +86400     1   101  50000000\na  contact  +1  +86400     1     1  50000000\n\n\na  range  +1  +86400    25    25  1\na  range  +1  +86400    25   101  10\na  range  +1  +86400    25     1  10\n\na  range  +1  +86400   101    25  10\na  range  +1  +86400   101   101  1\na  range  +1  +86400   101     1  10\n\na  range  +1  +86400     1    25  10\na  range  +1  +86400     1   101  10\na  range  +1  +86400     1     1  1\n\nm  production   50000000\nm  consumption  50000000\n\n## end ionadmin\n##########################################################################\n## begin ltpadmin  \n1  32\n\na  span   25  1  1  1400  1400  1  'udplso 192.168.1.25:1113 1000000000'\na  span  101  1  1  1400  1400  1  'udplso 192.168.1.101:1113 1000000000'\na  span    1  1  1  1400  1400  1  'udplso 192.168.1.1:1113 1000000000'\n\ns  'udplsi 192.168.1.1:1113'\n\n## end ltpadmin  \n##########################################################################\n## begin bpadmin  \n1\n\na  scheme  ipn  'ipnfw'    'ipnadminep'\n\na  endpoint  ipn:1.0  q\na  endpoint  ipn:1.1  q\na  endpoint  ipn:1.2  q\n\na  protocol  ltp  1400  100\na  protocol  tcp  1400  100\n\na  outduct  ltp   25              ltpclo\na  outduct  ltp  101              ltpclo\na  outduct  ltp    1              ltpclo\n\na  induct   ltp   1              ltpcli\n\ns\n## end bpadmin\n##########################################################################\n##  begin  ipnadmin\n\na  plan   25  ltp/25\na  plan  101  ltp/101\na  plan    1  ltp/1\n\n## end  ipnadmin  \n
    "},{"location":"ION-Deployment-Guide/#ipn-parameters-cross-reference","title":"IPN Parameters Cross-reference","text":"

    The \"ipn\" scheme for URI formation is generally used to form the endpoint IDs of endpoints in an ION deployment. Any transmission using endpoints formed in the \"ipn\" scheme will have endpoint IDs of this form:

    ipn:nodenumber.servicenumber

    The Add Scheme command on line 51 below specifies that the \"ipn\" endpoint naming scheme is supported; the names of three endpoints formed in this scheme are shown in lines 53 thru 55.

    The two remaining parameters on this command are used to define the software functions that will act as data forwarder and administrative data receiver.
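    As a sketch of the naming convention, an \"ipn\" EID can be split into its node and service numbers like so (illustrative code, not part of ION):

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* An EID of the form ipn:nodenumber.servicenumber, as above. */
        const char *eid = "ipn:1.2";
        unsigned long node, service;
        int n = sscanf(eid, "ipn:%lu.%lu", &node, &service);
        assert(n == 2 && node == 1 && service == 2);
        printf("node %lu, service %lu\n", node, service);
        return 0;
    }
    ```
    
    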

    "},{"location":"ION-Deployment-Guide/#the-bpadmin-add-scheme-command","title":"The bpadmin Add Scheme command","text":"

    a scheme scheme_name forwarder_command admin_app_command

    The add scheme command. This command declares an endpoint naming scheme for use in endpoint IDs, which are structured as URIs: scheme_name:scheme-specific_part. forwarder_command will be executed when the scheme is started on this node, to initiate operation of a forwarding daemon for this scheme. admin_app_command will also be executed when the scheme is started on this node, to initiate operation of a daemon that opens an administrative endpoint identified within this scheme so that it can receive and process custody signals and bundle status reports.

    Starting at line 71, the egress plans are defined. These determine the outducts by which data are sent to nodes that are topologically adjacent to the current node in the DTN-based network.

    "},{"location":"ION-Deployment-Guide/#the-ipnadmin-add-plan-command","title":"The ipnadmin Add Plan command","text":"

    a plan node_nbr default_duct_expression

    The add plan command. This command establishes an egress plan for the bundles that must be transmitted to the neighboring node identified by node_nbr. Each duct expression is a string of the form

    protocol_name outduct_name

    signifying that the bundle is to be queued for transmission via the indicated convergence layer protocol outduct.

    "},{"location":"ION-Deployment-Guide/#ltp-parameters-cross-reference","title":"LTP Parameters Cross-reference","text":"

    The ltpadmin utility makes the features of the LTP protocol available. For details of the LTP protocol, see RFC 5326.

    The first command that must be issued to ltpadmin is the Initialize command (see line number 38 below, the command token is the '1' (one)). The sole parameter passed to this command is est_max_export_sessions.

    "},{"location":"ION-Deployment-Guide/#the-ltpadmin-initialize-command","title":"The ltpadmin Initialize command","text":"

    This command uses est_max_export_sessions to configure the hash table it will use to manage access to export transmission sessions that are currently in progress. (For optimum performance, est_max_export_sessions should normally equal or exceed the summation of max_export_sessions over all spans as discussed below.)

    Appropriate values for this parameter and for the parameters configuring each span of potential LTP data exchange between the local LTP and neighboring engines are non-trivial to determine. See the ION LTP configuration spreadsheet and accompanying documentation for details.

    - Essentially, the \"max export sessions\" must be >= the total number of export sessions on all the spans. If it is expected that new spans will be added during an ION session, then the max export sessions figure should be large enough to cover the maximum number of sessions possible.

    - Next to be defined are the Spans. They define the interconnection between two LTP engines. There are many parameters associated with the Spans.
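    The sizing rule above can be sketched numerically, using the three-span example configuration file shown earlier in this Guide (each span declares max_export_sessions of 1, and ltpadmin is initialized with 32):

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* max_export_sessions per span, from the example configuration. */
        int spanSessions[] = { 1, 1, 1 };
        int sum = 0;
        for (int i = 0; i < 3; i++)
            sum += spanSessions[i];

        /* est_max_export_sessions from the "1 32" ltpadmin command;
           the rule of thumb requires it to be >= the sum over spans. */
        int estMaxExportSessions = 32;
        printf("sum over spans %d, est %d\n", sum, estMaxExportSessions);
        assert(estMaxExportSessions >= sum);
        return 0;
    }
    ```
    
    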

    "},{"location":"ION-Deployment-Guide/#the-ltpadmin-add-span-command","title":"The ltpadmin Add Span command","text":"

    a span peer_engine_nbr max_export_sessions max_import_sessions max_segment_size aggregation_size_threshold aggregation_time_limit 'LSO_command' [queuing_latency]

    The \"add span\" command. This command declares that a span of potential LTP data interchange exists between the local LTP engine and the indicated (neighboring) LTP engine.

    The max_segment_size and the aggregation_size_threshold are expressed as numbers of bytes of data. max_segment_size limits the size of each of the segments into which each outbound data block will be divided; in effect, it specifies the largest LTP segment that this span will produce. Typically this limit will be the maximum number of bytes that can be encapsulated within a single transmission frame of the underlying link service.

    aggregation_size_threshold limits the aggregate size of the LTP service data units (e.g., bundles) that can be aggregated into a single block: when the sum of the sizes of all service data units aggregated into a block exceeds this limit, aggregation into this block must cease and the block must be segmented and transmitted. When numerous small bundles are outbound, they are aggregated into a block of at least this size instead of being sent individually.

    aggregation_time_limit alternatively limits the number of seconds that any single export session block for this span will await aggregation before it is segmented and transmitted, regardless of size. The aggregation time limit prevents undue delay before the transmission of data during periods of low activity. When a small number of small bundles are outbound, they are collected until this time limit is met, whereupon the aggregated quantity is sent as a single, larger block.

    max_export_sessions constitutes the size of the local LTP engine's retransmission window for this span. The retransmission windows of the spans impose flow control on LTP transmission, preventing the allocation of all available space in the ION node's data store to LTP transmission sessions.

    The max_import_sessions parameter is simply the neighboring engine's own value for the corresponding export session parameter.

    LSO_command is script text that will be executed when LTP is started on this node, to initiate operation of a link service output task for this span. Note that peer_engine_nbr will automatically be appended to LSO_command by ltpadmin before the command is executed, so only the link-service-specific portion of the command should be provided in the LSO_command string itself.

    queuing_latency is the estimated number of seconds that we expect to lapse between reception of a segment at this node and transmission of an acknowledging segment, due to processing delay in the node. (See the 'm ownqtime' command below.) The default value is 1.

    If queuing_latency is a negative number, the absolute value of this number is used as the actual queuing latency and session purging is enabled; otherwise session purging is disabled. If session purging is enabled for a span then at the end of any period of transmission over this span all of the span's export sessions that are currently in progress are automatically canceled. Notionally this forces re-forwarding of the DTN bundles in each session's block, to avoid having to wait for the restart of transmission on this span before those bundles can be successfully transmitted.

    Additional notes:

    - A \"session block\" is filled by outbound bundles until its aggregation size threshold is reached, or its aggregation time limit is reached, whereupon it is output as a series of segments (of size bounded by max_segment_size). This series of segments is reliably transferred via an LTP protocol session with the remote node, one session per block. By adjusting the size of the session block, the rate of arrival of response segments from the remote node can be controlled. Assuming a bundle rate sufficient to fill the session block, a large session block size means a lot of LTP segments per session (good for a high-rate return, low-rate forward link situation). A small session block size means the number of segments per session is smaller and the LTP protocol will complete the block transfer more quickly because the number of segment retries is generally smaller.

    - A good starting point for a configuration is to set the aggregation size threshold to the number of bytes that will typically be transmitted in one second, so that blocks are typically clocked out about once per second. The maximum number of export sessions then should be at least the total number of seconds in the round-trip time for traffic on this LTP span, to prevent transmission from being blocked due to inability to start another session while waiting for the LTP acknowledgment that can end one of the current sessions.

    - The multiplicity of session blocks permits bundles to stream; while one session block is being transmitted, a second can be filled (and itself transmitted) before the first is completed. By increasing the number of blocks, high latency links can be filled to capacity (provided there is adequate bandwidth available in the return direction for the LTP acknowledgments). But it is desirable to reduce the max_export_sessions to a value where \"most\" of the sessions are employed because each session allocates an increment of buffer memory from the SDR whether it is used or not.

    - When a session block is transmitted, it is emitted as a series of back-to-back LTP segments that are simply queued for transmission; LTP does not meter segment issuance in any way. The underlying link layer is expected to pop segments from the queue and transmit them at the current rate as indicated in the contact plan. The udplso task does limit the task's rate of segment transmission over UDP/IP to the transmission rate declared in the contact plan, reducing the incidence of UDP congestion loss.

    - Note that an LTP session can only be concluded (enabling space occupied by the block to be recycled) when all segments have been successfully received -- or retransmission limits have been reached and the session is canceled. High bit error rates on the link correlate to high rates of data loss when segments are large and/or blocks comprise large numbers of segments; this typically results in larger numbers of NACK/retransmit cycles, retarding session completion. When bit error rates are high, LTP performance can be improved by reducing segment size and/or aggregation size threshold.
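    The starting-point guidance in the notes above can be sketched as a small calculation; the link rate and round-trip time used here are purely illustrative:

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative link parameters (not from any real deployment). */
        double rateBps = 50000000.0 / 8;   /* 50 Mbps -> bytes per second */
        double rttSec  = 8.0;              /* round-trip time in seconds */

        /* Starting point: aggregation size threshold ~ one second of
           transmission, so blocks are clocked out about once per second. */
        double aggregationSizeThreshold = rateBps;

        /* Starting point: at least one export session per second of RTT,
           so transmission is never blocked waiting for acknowledgments. */
        int maxExportSessions = (int) (rttSec + 0.5);

        printf("threshold %.0f bytes, sessions %d\n",
                aggregationSizeThreshold, maxExportSessions);
        assert(maxExportSessions >= rttSec);
        return 0;
    }
    ```
    
    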

    "},{"location":"ION-Deployment-Guide/#the-ltpadmin-start-command","title":"The ltpadmin Start command","text":"

    s 'LSI command'

    This command starts link service output tasks for all LTP spans (to remote engines) from the local LTP engine, and it starts the link service input task for the local engine.

    The sole command on line number 44 below starts two main operations within LTP. The first of these operations starts all of the link service output tasks, the ones defined for each LTP span (see the LSO_command parameter of the Add Span command). In this example, each task instantiates the same function (named 'udplso'). Each 'udplso' needs a destination for its transmissions and these are defined as hostname or IP Address (192.168.1.1) and port number (nominally 1113, the pre-defined default port number for all LTP traffic).

    The second operation started by this command is to instantiate the link service input task. In this instance, the task is named \"udplsi\". It is through this task that all LTP input traffic will be received. Similar to the output tasks, the input task also needs definition of the interface on which LTP traffic will arrive, namely hostname or IP address (192.168.1.1) and port number (1113). If it is necessary for udplsi to listen on multiple network interfaces simultaneously, \'udplsi 0.0.0.0[:port]\' can be invoked. This instructs udplsi to listen on the wildcard address, which accepts traffic arriving on all available network interfaces, including localhost.
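    The wildcard-address behavior can be demonstrated with a self-contained sketch (not ION code): a socket bound to 0.0.0.0, as with 'udplsi 0.0.0.0[:port]', receives datagrams sent to the loopback interface. An ephemeral port is used here rather than 1113 so the example never collides with a running node:

    ```c
    #include <arpa/inet.h>
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Receiver bound to the wildcard address (0.0.0.0). */
        int rx = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in any;
        memset(&any, 0, sizeof any);
        any.sin_family = AF_INET;
        any.sin_addr.s_addr = htonl(INADDR_ANY);   /* 0.0.0.0 */
        any.sin_port = 0;                          /* ephemeral port */
        assert(bind(rx, (struct sockaddr *) &any, sizeof any) == 0);
        socklen_t len = sizeof any;
        assert(getsockname(rx, (struct sockaddr *) &any, &len) == 0);

        /* Sender targets the loopback interface on that port. */
        int tx = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in to = any;
        to.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   /* 127.0.0.1 */
        assert(sendto(tx, "hi", 2, 0, (struct sockaddr *) &to,
                sizeof to) == 2);

        char buf[8];
        assert(recv(rx, buf, sizeof buf, 0) == 2);
        printf("received via wildcard bind\n");
        close(tx);
        close(rx);
        return 0;
    }
    ```
    
    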

    Once the LTP engine has been defined, initialized and started, we need a definition of how data gets routed to the Convergence Layer Adapters. Defining a protocol via bpadmin is the first step in that process.

    "},{"location":"ION-Deployment-Guide/#the-bpadmin-add-protocol-command","title":"The bpadmin Add Protocol command","text":"

    a protocol protocol_name payload_bytes_per_frame overhead_bytes_per_frame

    The \"add protocol\" command. This command establishes access to the named convergence layer protocol at the local node. As noted earlier, the payload_bytes_per_frame and overhead_bytes_per_frame arguments were previously used in calculating the estimated transmission capacity consumption of each bundle, to aid in route computation and congestion forecasting; in later versions of ION they are not needed and may be omitted.

    Once the protocol has been defined, it can be used to define ducts, both inducts and outducts, as seen in lines 76 thru 80 below. The Add \"duct\" commands associate a protocol (in this case, LTP) with individual node numbers (in this case, 25, 101 and 1) and a task designed to handle the appropriate Convergence Layer output operations. A similar scenario applies for the induct where the LTP protocol and node number 13 get connected with \"ltpcli\" as the input Convergence Layer function.

    "},{"location":"ION-Deployment-Guide/#the-bpadmin-add-outduct-and-add-induct-commands","title":"The bpadmin Add Outduct and Add Induct commands","text":"

    a outduct protocol_name duct_name 'CLO_command' [max_payload_length]

    The \"add outduct\" command. This command establishes a duct for transmission of bundles via the indicated CL protocol. The duct's data transmission structure is serviced by the outduct task whose operation is initiated by CLO_command at the time the duct is started. max_payload_length, if specified, causes ION to fragment bundles issued via this outduct (as necessary) to ensure that all such bundles have payloads that are no larger than max_payload_length.

    a induct protocol_name duct_name 'CLI_command'

    The \"add induct\" command. This command establishes a duct for reception of bundles via the indicated CL protocol. The duct's data acquisition structure is used and populated by the induct task whose operation is initiated by CLI_command at the time the duct is started.

    Note that only a single induct is needed for all bundle reception via any single protocol at any single node, and in fact ION may operate poorly if multiple inducts are established for any single protocol. For any induct whose duct name includes an IP address, use IP address 0.0.0.0 (INADDR_ANY) if the machine on which the node resides is multihomed and you want the node to be reachable via all of the machine's network interfaces.
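A hedged bprc sketch of these commands for the scenario described above (node and engine numbers follow the example; for LTP, duct names are engine numbers):

```
a protocol ltp
## one induct per protocol; duct name is the local engine number
a induct ltp 1 ltpcli
## one outduct per neighbor; duct name is the remote engine number
a outduct ltp 101 ltpclo
```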

    Once all of this has been defined, the last piece needed is the egress plan -- namely how do packets get transmitted to DTN nodes that are the local node's \"neighbors\" in the topology of the network.

    As you can see from line numbers 6 thru 29, the only network neighbor to node 1 is node 101. Node 25 has not been defined (because the commands in lines 8, 14, 21 and 27 have been commented out). In line numbers 15 and 16, we see that the only destinations for data originating at node 1 are nodes 101 and 1 (a loopback, as such). Therefore, in order to get data from node 1 to node 25, our only choice is to send data to node 101. Our best hope of reaching node 25 is that the configuration for node 101 defines a connection to node 25 (either a one-hop direct connection or a multi-hop path). This is where egress plans come into play.

    On line numbers 87 thru 89, this configuration defines the only choices that can be made regarding destinations. For a destination of node 25, which is not a neighbor, all node 1 can do is pass the data to its only neighbor, namely node 101; the \"exit\" command enables this operation. For destinations of nodes 101 and 1, the scenario is pretty simple.

    "},{"location":"ION-Deployment-Guide/#the-ipnadmin-add-exit-command","title":"The ipnadmin Add Exit command","text":"

    a exit first_node_nbr last_node_nbr gateway_endpoint_ID

    The \"add exit\" command. This command establishes an \"exit\" for static routing. An exit is an association of some defined routing behavior with some range of node numbers identifying a set of nodes. Whenever a bundle is to be forwarded to a node whose number is in the exit's node number range and it has not been possible to compute a dynamic route to that node from the contact schedules that have been provided to the local node and that node is not a neighbor to which the bundle can be directly transmitted, BP will forward the bundle to the gateway node associated with this exit.

    "},{"location":"ION-Deployment-Guide/#the-ipnadmin-add-plan-command_1","title":"The ipnadmin Add Plan command","text":"

    a plan node_nbr duct_expression [nominal_data_rate]

    The \"add plan\" command. This command establishes an egress plan for the bundles that must be transmitted to the neighboring node identified by node_nbr.

    Each duct expression is a string of the form

    protocol_name outduct_name

    signifying that the bundle is to be queued for transmission via the indicated convergence layer protocol outduct.

    The duct expression used in these examples has \"ltp\" being the protocol name and 101 and 1 being the outduct names.
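Putting the routing pieces together for this example topology, an illustrative ipnrc fragment might read (node numbers are taken from the narrative above):

```
## egress plans for the two reachable destinations
a plan 101 ltp/101
a plan 1 ltp/1
## node 25 is not a neighbor: statically route via gateway node 101
a exit 25 25 ipn:101.0
```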

    "},{"location":"ION-Deployment-Guide/#ipnadmins-plan-commands-have-been-superseded-by-bpadmin","title":"ipnadmin's \"plan\" commands have been superseded by bpadmin","text":"

    As of ION 4.1.0, bprc's \"plan\" and \"planduct\" commands supersede and generalize the egress plan commands documented in the ipnrc(5) and dtn2rc(5) man pages, which are retained for backward compatibility. The syntax of the egress plan commands consumed by bpadmin is DIFFERENT from that of the commands consumed by ipnadmin and dtn2admin. Please see the bprc(5) man page for details.

    "},{"location":"ION-Deployment-Guide/#bundle-in-bundle-encapsulation","title":"Bundle-in-Bundle Encapsulation","text":"

    For some purposes it may be helpful to encapsulate a bundle inside another bundle -- that is, to let the serialized representation of a bundle be part of the payload of another bundle. This mechanism is called \"Bundle-in-Bundle Encapsulation\" (BIBE) and is defined in Internet Draft draft-burleigh-dtn-bibect-00.txt (which will likely be renamed at some point and ideally will become an IETF standards-track Request For Comments in due course).

    "},{"location":"ION-Deployment-Guide/#introduction-to-bibe","title":"Introduction to BIBE","text":"

    By way of overview, here is an excerpt from that document:

    Each BP node that conforms to the BIBE specification provides a BIBE convergence-layer adapter (CLA) that is implemented within the administrative element of the BP node's application agent. Like any convergence-layer adapter, the BIBE CLA provides:

    The BIBE CLA performs these services by:

    Bundle-in-bundle encapsulation may have broad utility, but the principal motivating use case is the deployment of \"cross domain solutions\" in secure communications. Under some circumstances a bundle may arrive at a node that is on the frontier of a region of network topology in which augmented security is required, from which the bundle must egress at some other designated node. In that case, the bundle may be encapsulated within a bundle to which the requisite additional BP Security (BPSEC) extension block(s) can be attached, whose source is the point of entry into the insecure region (the \"security source\") and whose destination is the point of egress from the insecure region (the \"security destination\").

    Note that:

    The protocol includes a mechanism for recovery from loss of an encapsulating bundle, called \"custody transfer\". This mechanism is adapted from the custody transfer procedures described in the experimental Bundle Protocol specification developed by the Delay-Tolerant Networking Research group of the Internet Research Task Force and documented in RFC 5050. Custody transfer is a convention by which the loss or corruption of BIBE encapsulating bundles can be mitigated by the exchange of other bundles, which are termed \"custody signals\".

    BIBE is implemented in ION, but configuring ION nodes to employ BIBE is not as simple as one might think. That is because BIBE functions as both a BP application and a convergence-layer adapter; coercing the Bundle Protocol to function in both capacities, offering services to itself at two different layers of the protocol stack, requires careful configuration.

    "},{"location":"ION-Deployment-Guide/#configuring-bibe-in-ion","title":"Configuring BIBE in ION","text":"

    Like any convergence-layer protocol, BIBE is used to copy a bundle from one BP node (the sending node) to another node (the receiving node), over one segment of the end-to-end path from the bundle's source node to its destination node. Somewhat confusingly, in BIBE the copying of the bundle is accomplished by issuing a second encapsulating bundle, which has its own source node and destination node:

    Each pair of sending and receiving nodes can be thought of as a \"tunnel\" which requires specific configuration. These tunnels constitute the communication relationships that must be implemented as \"outducts\" in ION.

    "},{"location":"ION-Deployment-Guide/#bclas","title":"BCLAs","text":"

    While the node IDs of the source and destination nodes of encapsulating bundles are necessary parameters for BIBE transmission, they are not sufficient: encapsulating bundles are characterized by quality of service, lifetime, etc., just like other bundles. For this purpose we use an additional BIBE administration utility program -- bibeadmin -- that consumes a file of biberc commands; these commands add, revise, and delete BIBE convergence layer adapter objects (bclas) that are managed in a BIBE database. For example:

    a bcla ipn:3.0 20 20 300 2 128

    This command adds a bcla identified by \"ipn:3.0\" -- the ID of the destination node of all encapsulating bundles formed according to this bcla -- which asserts that the expected latency for each encapsulating bundle to reach this destination node is 20 seconds, the expected latency for a responding custody signal bundle is likewise 20 seconds, the encapsulating bundle's time-to-live is 300 seconds, its class of service is 2 (expedited), and its ordinal sub-priority is 128.

    Note that other configuration elements may also be implicitly associated with this bcla. For example, BPSEC security rules may map this BIBE source/destination node pair to security block configurations that will pertain to all encapsulating bundles formed according to this bcla.

    "},{"location":"ION-Deployment-Guide/#ducts","title":"Ducts","text":"

    Since BIBE is a convergence-layer protocol, each BIBE tunnel must be configured by means of BP administration (bpadmin) using .bprc commands; BIBE must be added as a protocol, the local node must be added as the BIBE induct, and each supported BIBE tunnel must be added as a BIBE outduct. For example:

    a protocol bibe\na induct bibe \\* ''\na outduct bibe ipn:4.0 'bibeclo ipn:3.0'\n

    The \"a outduct\" command states that the BIBE outduct (tunnel) identified by node ID \"ipn:4.0\" (the receiving node) is serviced by a BIBE convergence-layer output daemon operating according to the bcla identified by \"ipn:3.0\" as described above. The destination node ipn:3.0 is responsible for forwarding each extracted (encapsulated) bundle to the receiving node ipn:4.0. The sending node and the source node of the encapsulating bundles are both, implicitly, the local node.

    Note that for most convergence-layer adapters the node ID of the receiving node for a given outduct is implicit; for example, an stcp outduct explicitly identifies only the socket address of the receiving node's socket -- that is, the convergence-layer protocol endpoint ID -- not the node ID of the receiving node. BIBE differs only in that the convergence-layer protocol endpoint ID is, explicitly, the node ID of the receiving node, simply because BP is being used as the convergence-layer protocol.

    "},{"location":"ION-Deployment-Guide/#plans","title":"Plans","text":"

    In order to cause bundles to be conveyed to a specified receiving node via a BIBE outduct, that outduct must be associated with that node in an egress plan. For example, in the .ipnrc file:

    a plan ipn:4.0 bibe/ipn:4.0\na plan ipn:3.0 stcp/91.7.31.134:4546\n

    The first command asserts that all bundles destined for node \"ipn:4.0\" are to be forwarded using BIBE outduct \"ipn:4.0\". The second asserts that all bundles destined for node \"ipn:3.0\" (here, all BIBE encapsulating bundles formed according to the bcla identified by \"ipn:3.0\") are to be forwarded using the stcp outduct connected to TCP socket \"91.7.31.134:4546\".

    "},{"location":"ION-Deployment-Guide/#contacts","title":"Contacts","text":"

    Finally, in order for data to flow to receiving node ipn:4.0 via the bibe/ipn:4.0 outduct, a contact object must be added to the contact plan enabling the transmissions:

    a contact +0 +1000000000 2 4 100000

    This command states that data flow from node 2 (here, the local node) to node 4 (the receiving node) is continuously enabled, but the rate of transmission is limited to 100,000 bytes per second.

    "},{"location":"ION-Deployment-Guide/#overrides","title":"Overrides","text":"

    Under some circumstances, successful forwarding of BIBE bundles requires that outduct overrides be applied. See the biberc(5) man page for details.

    "},{"location":"ION-Deployment-Guide/#adaptations","title":"Adaptations","text":""},{"location":"ION-Deployment-Guide/#error-logging","title":"Error Logging","text":"

    ION contains a flexible system that allows its code to display errors in several different ways. At the core of this system is a typedef that defines a data type named \"Logger\" (with upper case \"L\") that is a function variable that accepts a character pointer (string) parameter and returns a value of type void.

    typedef void (* Logger)(char *);

    In ION, there is one variable defined to be of this type. Its identifier is \"logger\" (with a lower case \"l\") and it is initialized to the value \"logToStdout\". The function \"logToStdout\" simply prints its string parameter to the stdout device. Therefore, any call through the function variable \"logger\" has the same effect as a call to the function \"logToStdout\".

    However, remember that \"logger\" is a variable and is allowed to change its value to that of other functions that accept string parameters and return void. This is how ION allows for flexibility in logging errors.

    At startup, ION makes a call to \"ionRedirectMemos\". This function makes a call to \"setLogger\" which eventually changes the value of the \"logger\" variable. The new value of the variable named \"logger\" is \"writeMemoToIonLog\". This function writes strings to a file named \"ion.log\".

    It is through this mechanism that any calls to the functions \"writeMemo\", \"writeMemoNote\" or \"writeErrMemo\" eventually pass their parameters to the function \"writeMemoToIonLog\". This is how Linux-based ION deployments operate.

    Check out the FSWLOGGER macro option as documented in section 2.1.1 of the Design Guide.

    "},{"location":"ION-Deployment-Guide/#memory-allocation","title":"Memory Allocation","text":"

    What types of memory does ION use and how is memory allocated/controlled?

    For an introductory description of the memory resources used by ION, see Section 1.5 of the ION Design and Operation guide entitled \"Resource Management in ION\".

    Section 1.5 of the Design and Operation guide makes reference to parameters called \"wmSize\" and \"heapWords\". Discussion on these and all of the parameters can be found in this document under the section entitled \"Runtime Parameters\".

    ION allocates its large blocks of memory via calls to malloc. Should the need ever arise to place these large blocks of memory at known, fixed addresses, it would be possible to modify the function memalign, in the file platform.c. A better approach would be to create a shared-memory segment for each pre-allocated memory block (possibly using ION's sm_ShmAttach() function to do this) and pass the applicable shared-memory key values to ION at startup, in the \"heapKey\" and/or \"wmKey\" runtime parameters.
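A hedged ionconfig sketch of that approach (the key values are arbitrary illustrations; see ionconfig(5) for the authoritative parameter list):

```
## attach to pre-created shared-memory segments rather than
## letting ION malloc its own blocks
wmKey 65281
heapKey 65282
```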

    Any code that references the function \"sm_ShmAttach\" will be looking to acquire some block of memory. These would include the Space Management Trace features and standalone programs such as \"file2sm\", \"sm2file\" and \"smlistsh\".

    "},{"location":"ION-Deployment-Guide/#operation_1","title":"Operation","text":"

    ION is generally optimized for continuous operational use rather than research. In practice, this means that a lot more attention, both in the code and in the documentation, has been paid to the care and feeding of an existing ION-based network than to the problem of setting up a new network in the first place. (The unspoken expectation is that you're only going to do it once anyway.)

    Unfortunately this can make ION somewhat painful for new users to work with. The notes in this section are aimed at reducing this pain, at least a little.

    "},{"location":"ION-Deployment-Guide/#wrong-profile-for-this-sdr","title":"\"Wrong profile for this SDR\"","text":"

    ION is based on shared access to a common data store in memory (and/or in a file), and the objects in that data store are intended to persist across multiple restarts of network activity in a continuously operational network. That's okay for Space Station operations, but it's not helpful while you're still struggling to get the network running in the first place. For this purpose you are probably creating and destroying one or more nodes repetitively.

    A key concept:

    Each time you run the standard ionstart script provided with ION, you are creating a new network from scratch. To minimize confusion, be sure to clear out the old data store first.

    If you don't wipe out the old system before trying to start the new one, then either you will pick up where you left off in testing the old system (and any endpoints, ducts, etc. you try to add will be rejected as duplicates) or -- in the event that you have changed something fundamental in the configuration, or are using an entirely different configuration file -- you'll see the \"Wrong profile for this SDR\" message and won't be able to continue at all.

    "},{"location":"ION-Deployment-Guide/#destroying-a-node","title":"Destroying a node","text":"

    In most cases the ionstop script should terminate the node for you. Invoke it once for every node of your network. To verify that you're starting from a clean slate, run the ipcs command after ionstop: the list of Semaphore Arrays should be empty. If it's not, you've got one or more leftover processes from the previous network still running; use ps ax to find them and kill -9 to get rid of them. The process names to look for are:

    Then run the killm script again to make sure the node's shared-memory resources have been released; run ipcs again to verify, and review your leftover processes again if those resources still haven't been released.

    An additional wrinkle: if you configure ION to manage your ION data store in a file as well as (or instead of) managing it in shared memory, then in addition to calling killm to destroy the semaphores and the copy of the data store that resides in shared memory, you also need to delete the data store file; this destroys the copy of the data store that resides in the file system. If the data store isn't deleted, then when you restart ION using your standard configuration file the file-system copy of the data store will automatically be reloaded into shared memory and all the config file commands that create new schemes, endpoints, etc. will fail, because they're still in the data store that you were using before.
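The teardown verification above can be sketched as a short shell routine. The daemon names listed here are an assumed, partial set -- confirm the full list with ps ax on your own system:

```shell
# assumed, partial list of ION daemon names; confirm with ps ax
ION_DAEMONS="rfxclock ltpclock ltpmeter udplsi udplso bpclock ipnfw ipnadminep"

for daemon in $ION_DAEMONS; do
    # find any leftover instances from the previous network and kill them
    for pid in $(pgrep -x "$daemon" 2>/dev/null); do
        kill -9 "$pid"
    done
done

# the list of Semaphore Arrays printed here should be empty
ipcs -s
```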

    Another habit that can be helpful: whenever you restart ION from scratch, delete all the ion.log files in all of the directories in which you're configuring your ION nodes. This isn't mandatory -- ION will happily append new log messages to existing log files, and the messages are time-tagged anyway, so it's always possible to work out what happened when. But starting fresh with new log files removes a lot of clutter so that it's easy to see exactly what's happening in this particular iteration of your network research. ION will create new log files automatically if they don't exist; if there's something particularly interesting in the log from a prior system, copy that log file with a different name so you can come back to it if you need to.

    "},{"location":"ION-Deployment-Guide/#no-such-directory-disabling-heap-residence-in-file","title":"\"No such directory; disabling heap residence in file...\"","text":"

    This message just means that the directory whose name you've provided as the value of pathName in the ION configuration file does not exist, and therefore the ION operations that rely on being able to write files in that directory are disabled. It's strictly informative; nearly everything in ION will work just fine even if this message is printed every time you run.

    But if you do care about transaction reversibility, for example, or if you just want to get rid of the annoying message, simply create the directory that is named in pathName (it can be any path name you like) and make sure it's world-writable. The ionconfig(5) man page discusses this parameter and others that affect the fundamental character of the system you're configuring.
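For example (the path here is illustrative; use whatever value you gave pathName):

```shell
# create the directory named by pathName in the ION config file
mkdir -p /tmp/ion
# make it world-writable so every ION process can write files there
chmod 1777 /tmp/ion
```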

    "},{"location":"ION-Deployment-Guide/#cant-find-ion-security-database","title":"\"Can't find ION security database\"","text":"

    These messages are just warnings, but they are annoying. We're still struggling to work out a way to support bundle security protocol as fully and readily as possible but still let people run ION without it, if they want, without too much hassle.

    For now, the best answer might be to insert the following lines into each host.rc file immediately after the \"##end ionadmin\" line. They should create an empty ION security database on each host, which should shut down all those warnings:

    ## begin ionsecadmin\n1\n## end ionsecadmin\n
    "},{"location":"ION-Deployment-Guide/#clock-sync","title":"Clock sync","text":"

    Several key elements of ION (notably LTP transmission and bundle expiration) rely on the clocks of all nodes in the network being synchronized to within a few seconds. NTP is a good way to accomplish this, if you've got access to an NTP server. If you can't get your clocks synchronized, stick to the TCP or UDP convergence-layer adapters, don't count on using contact graph routing, and use long lifetimes on all bundles to prevent premature bundle expiration.

    "},{"location":"ION-Deployment-Guide/#node-numbers","title":"Node numbers","text":"

    In ION we always use the same numeric value for LTP (and BSSP) engine number and BP node number -- and for CFDP entity number and AMS continuum number as well. The idea is that a given ION node has a single identifying number, which by convention we use wherever a protocol endpoint identifier is needed for any local protocol agent. This is not a DTN or CCSDS requirement, but it doesn't violate any of the protocol specifications and it does marginally simplify both implementation and configuration.

    "},{"location":"ION-Deployment-Guide/#duct-names","title":"Duct names","text":"

    The bprc(5) man page explains the general format of the commands for adding convergence-layer inducts and outducts, but it doesn't provide the syntax for duct names, since duct name syntax is different for different CL protocols. Here's a summary of duct name syntax for the CL protocols supported as of ION 3.6.1:

    "},{"location":"ION-Deployment-Guide/#config-file-pitfalls-to-watch-for","title":"Config file pitfalls to watch for","text":"

    Here are some other points to bear in mind as you debug your ION node configuration:

    "},{"location":"ION-Deployment-Guide/#ion-hard-reset","title":"ION hard reset","text":""},{"location":"ION-Deployment-Guide/#ion-and-ltp-state-recovery","title":"ION and LTP State Recovery","text":""},{"location":"ION-Deployment-Guide/#configuring-loopback-contact","title":"Configuring \"Loopback\" Contact","text":""},{"location":"ION-Deployment-Guide/#ltp-performance-assessment-ion-412","title":"LTP Performance Assessment (ION 4.1.2)","text":"

    In this section, we present LTP throughput measurements collected on different computing platforms. The goal of these tests is to provide a set of data points that give ION users a sense of the achievable LTP throughput for a given level of computing resources, ranging from single-board computers (SBC) to medium-level or high-end servers connected via 10Gbps Ethernet. We made no attempt to match any particular user's computing environment in these tests. Users must exercise their own good engineering sense when generalizing these data points to predict the performance of their own systems, and they are encouraged to install ION on the target platform - applying some of the configuration recommendations in this report - when conducting their own tests.

    Since our focus is to explore the speed limitation caused by software processing within ION, we tried to eliminate external factors that can slow down throughput, such as a poor network connection or other processing-intensive software running concurrently on the host machine and competing for CPU cycles. We also eliminated the impact of round-trip delay and packet error by testing LTP over a high-speed, low-error, direct Ethernet connection between two LTP peers.

    "},{"location":"ION-Deployment-Guide/#considerations-for-configuration","title":"Considerations for Configuration","text":"

    Given that LTP is designed for space links, not terrestrial links, LTP segment sizes much larger than typical terrestrial network MTUs (nominally 1,500 to 9,000 bytes) are considered in our testing. For LTP configuration in space, the CCSDS Packet Encapsulation service enables LTP segments of variable sizes to be transmitted over different CCSDS space link protocols.

    Most of our testing was conducted with the SDR in DRAM (configuration 1) to achieve higher data processing speed. However, we did collect data on several cases where reversibility and SDR object boundness checks were turned on.

    In the interest of efficiency, we also favor selecting larger bundles, potentially much larger than the LTP aggregation block size. In previous TCPCL testing, it was observed that a larger bundle size improves throughput since more data can be transferred per logical operation. For the same reason, we believe that a larger bundle size will improve LTP performance.

    ION performs bundle-level metering to throttle the speed with which data is presented to LTP engines for transmission. The throttle rate is set by the contact plan and should not exceed the line rate of the physical Ethernet connection. In many cases, we configure ION with a contact plan rate that is lower than the Ethernet line rate to allow BP/LTP to operate as fast as possible without creating a destructive level of congestion. To facilitate testing, we also use the bpdriver utility program with the 'i' option to control the source data injection rate. For some tests, we found data metering unnecessary: ION can buffer and handle local congestion and deliver the maximum possible throughput.
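An illustrative bpdriver invocation with rate control might look like the following. The endpoint IDs, bundle count, payload size, and rate value are all assumptions; consult bpdriver(1) for the authoritative argument list and the units of the 'i' option:

```
## send 1000 bundles of 1,000,000 bytes each from ipn:1.1 to ipn:101.1,
## throttling the source injection rate with the i option
bpdriver 1000 ipn:1.1 ipn:101.1 1000000 i8000000
```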

    NOTE: The results presented here are based on System V semaphores. Recent upgrades and testing of a POSIX semaphore approach indicated a substantial performance increase for ION; that result will be published in the next release of this document.

    As stated earlier, our goal is to test the ION processing rate limitation, not the host system's memory availability. Therefore, we configure the ION SDR with a generous amount of heap and working memory to ensure that data storage is not a limiting factor.

    Now, we present our test cases.

    "},{"location":"ION-Deployment-Guide/#test-case-1-mid-grade-linux-server-with-direct-ethernet-connection","title":"Test Case 1: Mid-grade Linux Server with Direct Ethernet Connection","text":"

    B = byte

    b = bit

    M = mega

    K = kilo

    G = giga

    ION Configuration:

    Hardware Specification and Operating System:

    Throughput Measured:

    "},{"location":"ION-Deployment-Guide/#test-case-2-arm64-raspberry-pi-4b","title":"Test Case 2: ARM64 Raspberry Pi 4B","text":"

    ION Configuration:

    Hardware Specification and Operating System:

    Throughput Measured:

    "},{"location":"ION-Deployment-Guide/#test-case-3-sdr-configuration-study-2015-xeon-skylake","title":"Test Case 3: SDR configuration Study (2015 Xeon Skylake)","text":"

    In this test case, we considered several SDR configuration combinations and assessed their impact.

    We do not include the \"SDR in file\" configuration, or any combination involving it, since file operations slow down performance significantly.

    Base ION Memory Configuration

    BP/LTP Configuration

    Contact Plan Data Rate (1 Gb/sec)

    Hardware Specification and Operating System:

    Throughput Measured

    The general observation is that SDR boundedness checks (each write operation must ensure that the location being written is occupied by an object of the same size as the write operation) introduce about 11% throughput degradation. Adding reversibility slows down the system substantially, since reversibility, by default, saves a record of transaction operations in a file until the transaction is complete or until it is canceled and must be reversed. Although it is possible to store the transaction record in ION's working memory, we did not consider this case in our testing due to time constraints.

    "},{"location":"ION-Deployment-Guide/#test-case-4-10gbps-physical-ethernet-study-with-2012-xeon-sandy-bridge","title":"Test Case 4: 10Gbps Physical Ethernet Study (with 2012 Xeon Sandy Bridge)","text":"

    In this 10Gbps case study, we measured LTP performance between two machines physically connected by a 10Gbps Ethernet switch. Initial testing with iperf showed that although the physical connection was 10Gbps, the actual throughput maxed out at 2.5Gbps. Improved throughput was attained by increasing the kernel buffer sizes to 8MB. Additionally, increasing the MTU (Maximum Transmission Unit) size from 1500 to 9600 resolved some caching issues seen at the receiving node.

    UDP Configuration Details

    The following kernel buffer size settings were used to enable full utilization of the 10Gbps Ethernet on the host machine. These are provided for your reference. Depending on your host system's configuration, you may not need to adjust any parameters to achieve the full capacity of the Ethernet connection; even where such adjustments are necessary, the actual parameter values may differ.

    To resolve the caching issue, and to allow the LTP engine to clean up quickly after each test, we set the MTU to 9600 bytes instead of the typical 1500. This is not strictly required, but we observed improved LTP session cleanup times with the larger MTU. After applying these updates, iperf testing showed 9.9Gbps throughput on the Ethernet connection between the two hosts.
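For reference, a sketch of the kernel settings involved (the sysctl names are standard Linux parameters; the interface name is an assumption for your system):

```
## /etc/sysctl.d/ fragment: 8 MB socket buffers, per the text above
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608

## raise the MTU on the 10GbE interface (run as root; "eth0" is illustrative):
##   ip link set dev eth0 mtu 9600
```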

    Test Network Details

    The test network consists of two host machines physically connected via 10Gb network interface cards.

    Hardware

    ION Memory Configuration Details

    LTP Configuration Details

    Throughput Measurement

    The first series of tests provided some insight into the impact of bundle size on throughput. In general, a larger bundle size allows ION to transfer more data per logical operation, since the overhead of a bundle is relatively fixed regardless of payload size. In our tests, we controlled the size of all bundles injected into ION for LTP transfer. In real operations bundle size will vary, but for bulk data transfer the user is generally able to dictate the size of the bundles they send. To avoid processing smaller bundles individually (as would occur in real operations), we turned on LTP block aggregation and set the aggregation size to 64KB.
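    For reference, the LTP block aggregation size limit is one of the fields of the span definition in the ltprc configuration file. A hedged fragment follows; the peer engine number, session counts, segment size, and LSO endpoint are illustrative assumptions, not recommendations:

    ```
    # ltprc span definition (illustrative values only):
    # a span <peer_engine> <max_export_sessions> <max_import_sessions> \
    #        <max_segment_size> <aggregation_size_limit> <aggregation_time_limit> \
    #        '<LSO command>'
    a span 2 64 64 64000 64000 1 'udplso 10.0.0.2:1113'
    ```

    Here the fifth numeric field (64000) is the 64KB aggregation size limit discussed above; LTP accumulates bundles into a block until this size or the aggregation time limit is reached.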

    Figure 1: LTP Throughput as a Function of Bundle Size

    In Figure 1, we can immediately observe that bundle size has a significant impact on LTP throughput. This is because the bundle is the basic unit of an LTP block. When LTP block aggregation is turned on, a block may consist of one or multiple bundles. When LTP block aggregation is not applied, each block is one bundle. When the bundle size is less than the aggregation size, LTP will accumulate several bundles before creating a block. While this will limit LTP overhead, the use of small bundles still has an impact on the bundle protocol level processing, both before LTP transmission and during post-LTP-reception reconstruction. Therefore, as we can see, when the bundle size is dropped below the LTP aggregation threshold, the throughput is still impacted by bundle size.

    While it may seem that the aggregation size limit does not have a strong impact on throughput, it does for space links with long delay-bandwidth products, where it dictates the maximum number of import/export sessions that ION must support simultaneously. That will be an investigation for a future study; for now, we focus solely on testing the limit of ION's data processing speed in a low-latency lab environment.

    We also conducted a second series of tests to look at the impact of LTP segment sizes on throughput. The results are in Figure 2 below.

    Figure 2: Impact of LTP Segment Size on Throughput

    In this test, we looked at bundle sizes of 1MB or lower, with segment sizes ranging from 64KB down to 1,500 bytes. Again, we observed that segment size has a stronger impact on throughput when it is less than 10% of the bundle size; above 10%, the impact is noticeably diminished. This is because each segment incurs a minimum amount of processing, so using a larger segment size helps reduce LTP overhead. However, since the segment is LTP's standard protocol data unit and determines the exposure to data loss (a larger segment exposes more data to loss from a single corruption), it is not advisable to arbitrarily increase the segment size in a real flight environment with a substantial data loss probability. The key point is that the choice of segment size affects the processing overhead and speed of LTP.

    "},{"location":"ION-Deployment-Guide/#impact-of-variable-bundles-size","title":"Impact of variable bundles size","text":"

    In real-life operations, we expect users to generate a wide mixture of large and small bundles. Although there is no commonly agreed \"profile\" of how a typical DTN user generates bundles, it is nonetheless valuable to get a sense of how BP/LTP in ION performs when handling bundles of random sizes.

    For a quick study, we used the same 2.1GHz Xeon Sandy Bridge configuration with a 1MB LTP aggregation limit and injected bundles whose payload size was a uniformly distributed random value between 1,024 and 62,464 bytes. We found that throughput was approximately 260Mbps for a segment size of 9,600 bytes and 300Mbps when the segment size was increased to 64,000 bytes. For a second test, we widened the bundle size range to between 1KB and 1MB; the measured throughput was 2.08Gbps.

    This performance is higher than we expected. To deliver the same amount of data, using an average of 31KB per bundle (uniform between 1KB and 62KB) instead of 1MB bundles increases bundle-processing overhead by a factor of roughly 32. Yet, holding all other parameters constant, the 300Mbps throughput is only a factor of about 9.6 lower than the 2.9Gbps measured with 1MB bundles: the 32-fold increase in bundle overhead did not produce a 32-fold reduction in speed. We believe this better-than-expected result is due to the use of LTP block aggregation. Similarly, in the second test we increased the average bundle overhead by a factor of 2, yet the data rate dropped by only about 29 percent. Keeping block aggregation at 1MB keeps the number of LTP sessions and the handshaking overhead low, which mitigates some of the impact of the smaller bundles.
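    The overhead arithmetic above can be checked directly (a small sanity-check script, not part of ION; the quoted figures of 9.6x and 29% are rounded, so the computed values differ slightly):

    ```python
    # Mixed-bundle test: payloads uniform between 1,024 and 62,464 bytes
    avg_small = (1024 + 62464) / 2          # average payload ~31KB
    overhead_factor = 1_000_000 / avg_small # bundles per MB vs. one 1MB bundle
    print(round(overhead_factor))           # ~32x more bundle-processing work

    # Throughput only dropped from 2.9Gbps (1MB bundles) to 300Mbps (mixed)
    throughput_ratio = 2.9e9 / 300e6
    print(round(throughput_ratio, 1))       # ~9.7x slower, not 32x

    # Second test: payloads uniform between 1KB and 1MB
    avg_large = (1024 + 1_000_000) / 2
    print(round(1_000_000 / avg_large))     # bundle overhead roughly doubles

    reduction = 1 - 2.08e9 / 2.9e9          # throughput drop vs. 1MB-bundle case
    print(round(reduction * 100))           # ~28%, close to the quoted 29 percent
    ```
    
    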

    Our initial assessment is that the mixed use of larger and smaller bundles will reduce throughput but not as substantially as one would expect based on a linear interpolation of the bundle processing overhead. The use of LTP block aggregation can maintain a higher efficiency under such circumstances. Additional investigation in this area will be conducted and reported in the near future.

    "},{"location":"ION-Deployment-Guide/#summary-of-ltp-throughput-test-results","title":"Summary of LTP Throughput Test Results","text":"

    We conducted a series of tests, documenting the performance of BP/LTP for a range of hardware and ION configuration options. At the lower end, we tested two stock Raspberry Pi 4B single-board computers running ION 4.1.2 and achieved 60 Mbps one-way data transfer without any hardware or OS optimization. At the higher end of our tests, we measured ION performance between two Linux servers (see spec in Test Case 4; 2012 era Xeon Sandy Bridge Processors) and showed that ION's BP/LTP implementation can support up to 3.7Gbps throughput over a 10Gbps Ethernet physical connection. We also presented a discussion on the performance trades regarding various LTP configuration parameters.

    We hope that these data points give users a sense of how to configure ION, BP, and LTP to achieve the highest possible throughput on their own systems. We acknowledge that these tests focus on exploring the performance envelope of ION's data processing speed; they do not emulate specific flight configurations, nor do they cover long round-trip delays and high-error-rate space links. To compute recommended LTP settings for specific link conditions, please consult the LTP Configuration Tool spreadsheet provided with each ION open-source package.

    "},{"location":"ION-Deployment-Guide/#acknowledgment","title":"Acknowledgment","text":"

    Some of the technology described in this Deployment Guide was developed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

    Copyright \u00a9 2021 California Institute of Technology

    The ION team would like to acknowledge the following individuals for contributing to earlier versions of this Guide: Jane Marquart, NASA; Greg Menke, Columbus; Larry Shackelford, Microtel LLC; Scott Burleigh (retired), Jet Propulsion Laboratory, California Institute of Technology.

    "},{"location":"ION-Design-and-API-Overview/","title":"ION Design and API Overview","text":""},{"location":"ION-Design-and-API-Overview/#basic-philosophy","title":"Basic Philosophy","text":"

    The development of ION began in the early 2000s, focusing on flight systems running a real-time operating system (RTOS) with minimal resources under strict control. While these constraints may be somewhat relaxed for modern embedded systems, ION's lightweight, modular, and portable design remains desirable for both flight and ground systems today:

    Hard Memory Allocation Limits: ION operates within a host-specified memory allocation, managing dynamic allocation internally via a private memory management system. This approach ensures efficient use of the allocated memory resources.

    Modular and Robust Operation: ION's design allows individual modules to be started, stopped, rebuilt, or even replaced independently. This modular structure is implemented through separate daemons and libraries, enhancing system resilience. In the event of a process crash, data in the process's queues/buffers can be preserved in the non-volatile SDR, preventing data loss.

    Efficient Resource Utilization: ION is optimized for environments with limited memory, storage, and processing resources. It avoids duplicate data copies during multi-stage processing by utilizing Zero-Copy Objects (ZCO) in shared memory space for fast hand-off between modules. This method, while more complex, ensures rapid data handling. Additionally, BP and CLA services operate as background daemons to minimize competition with critical spacecraft functions during nominal, high-stress, and off-nominal events.

    Independence from Native IP Socket Support: ION employs software abstraction to decouple socket-based programming from its core functionalities. This allows ION to interface the Bundle Protocol and CLAs with various underlying communication systems, such as CCSDS space links, radio communications systems, or customized processing chains that are not IP-based.

    Portability and Minimal Footprint for Static Linking: ION prioritizes portability and minimal resource footprint by building its function libraries. This approach supports static linking through the ION-core package for a specific set of modules. It reduces dependency on external libraries, thereby mitigating the risk of interference from unexercised or non-required code segments that cannot be removed from the libraries. This design also avoids potential compatibility issues between the target system\u2019s build environment and those of externally sourced libraries.

    "},{"location":"ION-Design-and-API-Overview/#ion-modules","title":"ION Modules","text":"

    The BP Service API document shows the default installation location of various libraries and daemons. Interactions with these daemons rely on various APIs made available through the libraries. The following diagram shows ION's modular architecture:

    ION provides four application-layer services that utilize the underlying DTN protocols. These services are:

    1. AMS: Asynchronous Message Service
    2. DTPC: Delay-Tolerant Payload Conditioning
    3. CFDP: CCSDS File Delivery Protocol
    4. BSS: Bundle Streaming Service

    ION provides BP services based on Bundle Protocol v6 and Bundle Protocol v7, BPSec (Bundle Protocol Security), and the Interplanetary Internet (IPN) naming scheme. In addition, it offers several standardized convergence layer adaptors, namely:

    1. LTP: Licklider Transmission Protocol
    2. TCPCL: TCP Convergence Layer version 3
    3. UDPCL: UDP Convergence Layer
    4. STCP: Simplified TCP Convergence Layer
    5. DGR: Datagram Retransmission Convergence Layer

    ION also provides UDP-based Underlying Communication Protocol (UCP) to support testing of the LTP CLA in terrestrial systems.

    ION also supports the Asynchronous Management Architecture (AMA) by implementing both an Asynchronous Management Protocol (AMP) Agent and Manager and the associated Application Data Model (ADM) that describes both common and ION-specific DTN network management state information and commands.

    The entire ION software suite operates within a prescribed memory space. It is privately managed by ION's ICI infrastructure library functions for space allocation/deallocation, data I/O, and linked list and zero-copy object (ZCO) management. There are two types of data storage: working memory to facilitate data processing and heap in the SDR designed to store state information and data that should persist through a power cycle when implemented on a non-volatile storage medium. ION's APIs are exposed to the user through a set of C header files associated with each module's library.

    "},{"location":"ION-Design-and-API-Overview/#modular-packaging-ion-core","title":"Modular Packaging - ION Core","text":"

    Due to the highly modular design of ION, it is possible to build a streamlined package containing only the modules required for a specific system, maximizing resource efficiency and reducing V&V costs. The ION-Core 4.1.2b package offers the ability to selectively build different sets of CLAs and bundle extension blocks, targeting either 32-bit or 64-bit operating systems.

    "},{"location":"ION-Design-and-API-Overview/#ion-apis","title":"ION APIs","text":"

    For software development, ION provides several sets of APIs for interacting with services/daemons of the underlying DTN protocols, as shown below:

    ION APIs can be roughly categorized as follows:

    1. BP Service API: This set of APIs enables an external application to interact with the BP service daemons to transmit and receive bundles through end-points using the IPN naming scheme.
    2. Convergence Layer API: This set of APIs enables developers to add custom convergence layer adaptors that can interact with BP Services to transmit and receive bundles between neighboring DTN nodes.
    3. Underlying Communications Protocol API: This set of APIs allows external software to transmit and receive data on behalf of LTP CLA.
    4. BP Extension Interface: This set of library functions provides a standard framework to add additional BP extension blocks to ION without modifying the core BP source code.
    5. Application Service API: These APIs are provided by the AMS, CFDP, BSS, and DTPC modules of ION to deliver advanced capabilities to DTN applications, such as messaging, file transfer, real-time streaming with off-line playback, and in-order end-to-end delivery.
    6. DTN Network Management API: These APIs enable external applications to interact with the AMP Managers to control and monitor local and remote ION nodes.
    7. Interplanetary Communication Infrastructure (ICI) API: This set of APIs performs basic administration, SDR & private memory management, and platform portability translation for ION. This is the core set of APIs that all software connected to ION utilizes to maximize portability across OS and CPU architectures.
    "},{"location":"ION-Guide/","title":"Interplanetary Overlay Network (ION) Design and Operation's Guide","text":"

    Version 4.1.3 JPL D-48259

    Document Change Log

    | Ver No. | Date | Description | Note |
    | --- | --- | --- | --- |
    | V4.1.3 | 12/08/2023 | converted to MarkDown | |
    | V4.0.1 | 11/20/2020 | ION 4.0.1 | |
    | V3.6.2 | 11/19/2018 | ION 3.6.2 release features | Skipped V3.6.1. |
    | V3.6 | 12/31/2017 | ION 3.6 release features | Skipped V3.5. |
    | V3.4 | 3/28/2016 | ION 3.4 release features | |
    | V3.3 | 3/4/2015 | ION 3.3 release features | |
    | V3.2 | 12/17/2013 | ION 3.2 release features | |
    | V3.1 | 9/28/2012 | ION 3.1 release features | |
    | V3.0 | 3/22/2012 | Align with ION 3.0 release | |
    | V1.13 | 10/13/2011 | Updates for Source Forge Release | |
    | V1.12 | 6/11/2010 | Updates for second open source release (2.2) | |
    | V1.11 | 12/11/2009 | BRS updates, multi-node config | |
    | V1.10 | 10/23/2009 | Final additions prior to DINET 2 experiment | |
    | V1.9 | 6/29/2009 | Add updates for DINET 2, including CFDP, ionsec | |
    | V1.8 | 2/6/2009 | Update discussion of Contact Graph Routing; document status msg formats | |
    | V1.7 | 12/1/2008 | Add documentation for OWLT simulator, BP extension | |
    | V1.6 | 10/03/2008 | Add documentation of sm_SemUnend | |
    | V1.5 | 09/20/2008 | Revisions requested SQA | |
    | V1.4 | 07/31/2008 | Add a section on optimizing ION-based network; tuning | |
    | V1.3 | 07/08/2008 | Revised some details of CGR | |
    | V1.2 | 05/24/2008 | Revised man pages for bptrace, ltprc, bprc. | |
    | V1.1 | 05/18/2008 | Some additional diagrams | |
    | V1.0 | 04/28/2008 | Initial version of ION design and ops manual | |
    "},{"location":"ION-Guide/#design","title":"Design","text":"

    The Interplanetary Overlay Network (ION) software distribution is an implementation of Delay-Tolerant Networking (DTN) architecture as described in Internet RFC 4838. It is designed to enable inexpensive insertion of DTN functionality into embedded systems such as robotic spacecraft. The intent of ION deployment in space flight mission systems is to reduce cost and risk in mission communications by simplifying the construction and operation of automated digital data communication networks spanning space links, planetary surface links, and terrestrial links.

    A comprehensive overview of DTN is beyond the scope of this document. Very briefly, though, DTN is a digital communication networking technology that enables data to be conveyed between two communicating entities automatically and reliably even if one or more of the network links in the end-to-end path between those entities is subject to very long signal propagation latency and/or prolonged intervals of unavailability.

    The DTN architecture is much like the architecture of the Internet, except that it is one layer higher in the familiar ISO protocol \"stack\". The DTN analog to the Internet Protocol (IP), called \"Bundle Protocol\" (BP), is designed to function as an \"overlay\" network protocol that interconnects \"internets\" -- including both Internet-structured networks and also data paths that utilize only space communication links as defined by the Consultative Committee for Space Data Systems (CCSDS) -- in much the same way that IP interconnects \"subnets\" such as those built on Ethernet, SONET, etc. By implementing the DTN architecture, ION provides communication software configured as a protocol stack that looks like this:

    Figure 1 DTN protocol stack

    Data traversing a DTN are conveyed in DTN bundles -- which are functionally analogous to IP packets -- between BP endpoints which are functionally analogous to sockets. Multiple BP endpoints may be accessed at a single DTN node -- functionally analogous to a network interface card -- and multiple nodes may reside on the same computer just as a single computer (host or router) in the Internet may have multiple network interface cards.

    BP endpoints are identified by Uniform Resource Identifiers (URIs), which are ASCII text strings of the general form:

    scheme_name:scheme_specific_part

    For example:

    dtn://topquark.caltech.edu/mail

    But for space flight communications this general textual representation might impose more transmission overhead than missions can afford. For this reason, ION is optimized for networks of endpoints whose IDs conform more narrowly to the following scheme:

    ipn:node_number.service_number

    This enables them to be abbreviated to pairs of unsigned binary integers via a technique called Compressed Bundle Header Encoding (CBHE). CBHE-conformant BP endpoint IDs (EIDs) are not only functionally similar to Internet socket addresses but also structurally similar: node numbers are roughly analogous to Internet node numbers (IP addresses), in that they typically identify the flight or ground data system computers on which network software executes, and service numbers are roughly analogous to TCP and UDP port numbers.
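    As an illustration of this structural similarity, an ipn-scheme EID decomposes into its two unsigned integers as follows (a conceptual sketch, not ION's C implementation; the function name is hypothetical):

    ```python
    # Decompose a CBHE-conformant "ipn" endpoint ID into (node, service) numbers,
    # analogous to splitting an Internet socket address into host and port.
    def parse_ipn_eid(eid: str) -> tuple[int, int]:
        scheme, _, ssp = eid.partition(":")
        if scheme != "ipn":
            raise ValueError(f"not an ipn-scheme EID: {eid}")
        node_number, service_number = ssp.split(".")
        return int(node_number), int(service_number)

    print(parse_ipn_eid("ipn:23.7"))  # (23, 7): node 23, service 7
    ```
    
    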

    More generally, the node numbers in CBHE-conformant BP endpoint IDs are one manifestation of the fundamental ION notion of network node number: in the ION architecture there is a natural one-to-one mapping not only between node numbers and BP endpoint node numbers but also between node numbers and:

    Starting with version 3.1 of ION, this endpoint naming rule is experimentally extended to accommodate bundle multicast, i.e., the delivery of copies of a single transmitted bundle to multiple nodes at which interest in that bundle's payload has been expressed. Multicast in ION -- \"Interplanetary Multicast\" (IMC) -- is accomplished by simply issuing a bundle whose destination endpoint ID conforms to the following scheme:

    imc:group_number.service_number

    A copy of the bundle will automatically be delivered at every node that has registered in the destination endpoint.

    (Note: for now, the operational significance of a given group number must be privately negotiated among ION users. If this multicast mechanism proves useful, IANA may at some point establish a registry for IMC group numbers. Also note that a new mechanism for bundle multicast is introduced in ION 4.0.1, along with support for Bundle Protocol version 7. This new mechanism vastly simplifies bundle multicast; chiefly, the imcadmin utility is deprecated.)

    "},{"location":"ION-Guide/#structure-and-function","title":"Structure and function","text":"

    The ION distribution comprises the following software packages:

    Taken together, the packages included in the ION software distribution constitute a communication capability characterized by the following operational features:

    "},{"location":"ION-Guide/#constraints-on-the-design","title":"Constraints on the Design","text":"

    A DTN implementation intended to function in an interplanetary network environment -- specifically, aboard interplanetary research spacecraft separated from Earth and from one another by vast distances -- must operate successfully within two general classes of design constraints: link constraints and processor constraints.

    1. Link constraints

    All communications among interplanetary spacecraft are, obviously, wireless. Less obviously, those wireless links are generally slow and are usually asymmetric.

    The electrical power provided to on-board radios is limited and antennae are relatively small, so signals are weak. This limits the speed at which data can be transmitted intelligibly from an interplanetary spacecraft to Earth, usually to some rate on the order of 256 Kbps to 6 Mbps.

    The electrical power provided to transmitters on Earth is certainly much greater, but the sensitivity of receivers on spacecraft is again constrained by limited power and antenna mass allowances. Because historically the volume of command traffic that had to be sent to spacecraft was far less than the volume of telemetry the spacecraft were expected to return, spacecraft receivers have historically been engineered for even lower data rates from Earth to the spacecraft, on the order of 1 to 2 Kbps.

    As a result, the cost per octet of data transmission or reception is high and the links are heavily subscribed. Economical use of transmission and reception opportunities is therefore important, and transmission is designed to enable useful information to be obtained from brief communication opportunities: units of transmission are typically small, and the immediate delivery of even a small part (carefully delimited) of a large data object may be preferable to deferring delivery of the entire object until all parts have been acquired.

    1. Processor constraints

    The computing capability aboard a robotic interplanetary spacecraft is typically quite different from that provided by an engineering workstation on Earth. In part this is due, again, to the limited available electrical power and limited mass allowance within which a flight computer must operate. But these factors are exacerbated by the often intense radiation environment of deep space. In order to minimize errors in computation and storage, flight processors must be radiation-hardened and both dynamic memory and non-volatile storage (typically flash memory) must be radiation-tolerant. The additional engineering required for these adaptations takes time and is not inexpensive, and the market for radiation-hardened spacecraft computers is relatively small; for these reasons, the latest advances in processing technology are typically not available for use on interplanetary spacecraft, so flight computers are invariably slower than their Earth-bound counterparts. As a result, the cost per processing cycle is high and processors are heavily subscribed; economical use of processing resources is very important.

    The nature of interplanetary spacecraft operations imposes a further constraint. These spacecraft are wholly robotic and are far beyond the reach of mission technicians; hands-on repairs are out of the question. Therefore the processing performed by the flight computer must be highly reliable, which in turn generally means that it must be highly predictable. Flight software is typically required to meet \"hard\" real-time processing deadlines, for which purpose it must be run within a hard real-time operating system (RTOS).

    One other implication of the requirement for high reliability in flight software is that the dynamic allocation of system memory may be prohibited except in certain well-understood states, such as at system start-up. Unrestrained dynamic allocation of system memory introduces a degree of unpredictability into the overall flight system that can threaten the reliability of the computing environment and jeopardize the health of the vehicle.

    "},{"location":"ION-Guide/#design-principles","title":"Design Principles","text":"

    The design of the ION software distribution reflects several core principles that are intended to address these constraints.

    Figure 2 ION inter-task communication

    1. Shared memory

    Since ION must run on flight processors, it had to be designed to function successfully within an RTOS. Many real-time operating systems improve processing determinism by omitting the support for protected-memory models that is provided by Unix-like operating systems: all tasks have direct access to all regions of system memory. (In effect, all tasks operate in kernel mode rather than in user mode.) ION therefore had to be designed with no expectation of memory protection.

    But universally shared access to all memory can be viewed not only as a hazard but also as an opportunity. Placing a data object in shared memory is an extremely efficient means of passing data from one software task to another.

    ION is designed to exploit this opportunity as fully as possible. In particular, virtually all inter-task data interchange in ION follows the model shown in Figure 2:

    Semaphore operations are typically extremely fast, as is the storage and retrieval of data in memory, so this inter-task data interchange model is suitably efficient for flight software.
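    The linked-list-plus-semaphore handoff described above can be sketched conceptually in Python, with threads standing in for ION tasks; this is an analogy to the model, not ION code:

    ```python
    import threading
    from collections import deque

    shared_list = deque()                 # stands in for a linked list in shared memory
    item_ready = threading.Semaphore(0)   # signals the consumer that data is available

    def producer():
        for n in range(3):
            shared_list.append(f"object-{n}")  # 1. append (a pointer to) a data object
            item_ready.release()               # 2. give the semaphore to wake the consumer

    results = []
    def consumer():
        for _ in range(3):
            item_ready.acquire()                   # 3. block until an object is available
            results.append(shared_list.popleft())  # 4. retrieve and process the object

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t2.start(); t1.start()
    t1.join(); t2.join()
    print(results)
    ```
    
    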

    1. Zero-copy procedures

    Given ION's orientation toward the shared memory model, a further strategy for processing efficiency offers itself: if the data item appended to a linked list is merely a pointer to a large data object, rather than a copy, then we can further reduce processing overhead by eliminating the cost of byte-for-byte copying of large objects.

    Moreover, in the event that multiple software elements need to access the same large object at the same time, we can provide each such software element with a pointer to the object rather than its own copy (maintaining a count of references to assure that the object is not destroyed until all elements have relinquished their pointers). This serves to reduce somewhat the amount of memory needed for ION operations.
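    The reference-counting idea can be sketched as follows; the class and method names here are illustrative, not ION's ZCO API:

    ```python
    # Conceptual model of a shared, reference-counted object: multiple tasks hold
    # pointers to one copy of the data, and the object is destroyed only when the
    # last reference is relinquished.
    class SharedObject:
        def __init__(self, data: bytes):
            self.data = data
            self.refs = 0
            self.destroyed = False

        def add_reference(self):
            self.refs += 1
            return self          # hand out a "pointer", not a copy

        def release(self):
            self.refs -= 1
            if self.refs == 0:
                self.destroyed = True  # safe: no task still points at the data

    obj = SharedObject(b"x" * 1_000_000)
    reader1 = obj.add_reference()   # two tasks access the object...
    reader2 = obj.add_reference()   # ...without any byte-for-byte copying
    obj.release()
    print(obj.destroyed)            # False: one reference remains
    obj.release()
    print(obj.destroyed)            # True: last reference relinquished
    ```
    
    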

    1. Highly distributed processing

    The efficiency of inter-task communications based on shared memory makes it practical to distribute ION processing among multiple relatively simple pipelined tasks rather than localize it in a single, somewhat more complex daemon. This strategy has a number of advantages:

    Designs based on these kinds of principles are foreign to many software developers, who may be far more comfortable in development environments supported by protected memory. It is typically much easier, for example, to develop software in a Linux environment than in VxWorks 5.4. However, the Linux environment is not the only one in which ION software must ultimately run.

    Consequently, ION has been designed for easy portability. POSIX\u2122 API functions are widely used, and differences in operating system support that are not concealed within the POSIX abstractions are mostly encapsulated in two small modules of platform-sensitive ION code. The bulk of the ION software runs, without any source code modification whatsoever, equally well in Linux\u2122 (Red Hat\u00ae, Fedora\u2122, and Ubuntu\u2122, so far), FreeBSD\u00ae, Solaris\u00ae 9, Microsoft Windows (the MinGW environment), OS/X\u00ae, VxWorks\u00ae 5.4, and RTEMS\u2122, on both 32-bit and 64-bit processors. Developers may compile and test ION modules in whatever environment they find most convenient.

    "},{"location":"ION-Guide/#organizational-overview","title":"Organizational Overview","text":"

    Two broad overviews of the organization of ION may be helpful at this point. First, here is a summary view of the main functional dependencies among ION software elements:

    Figure 3 ION software functional dependencies

    That is, BP and LTP invoke functions provided by the sdr, zco, psm, and platform elements of the ici package, in addition to functions provided by the operating system itself; the zco functions themselves also invoke sdr, psm, and platform functions; and so on.

    Second, here is a summary view of the main line of data flow in ION's DTN protocol implementations:

    Figure 4 Main line of ION data flow

    Note that data objects residing in shared memory, many of them in a nominally non-volatile SDR data store, constitute the central organizing principle of the design. Here as in other diagrams showing data flow in this document:

    A few notes on this main line data flow:

    Finally, note that the data flow shown here represents the sustained operational configuration of a node that has been successfully instantiated on a suitable computer. The sequence of operations performed to reach this configuration is not shown. That startup sequence will necessarily vary depending on the nature of the computing platform and the supporting link services. Broadly, the first step normally is to run the ionadmin utility program to initialize the data management infrastructure required by all elements of ION. Following this initialization, the next steps normally are (a) any necessary initialization of link service protocols, (b) any necessary initialization of convergence-layer protocols (e.g., LTP -- the ltpadmin utility program), and finally (c) initialization of the Bundle Protocol by means of the bpadmin utility program. BP applications should not try to commence operation until BP has been initialized.

    "},{"location":"ION-Guide/#resource-management-in-ion","title":"Resource Management in ION","text":"

    Successful Delay-Tolerant Networking relies on retention of bundle protocol agent state information -- including protocol traffic that is awaiting a transmission opportunity -- for potentially lengthy intervals. The nature of that state information will fluctuate rapidly as the protocol agent passes through different phases of operation, so efficient management of the storage resources allocated to state information is a key consideration in the design of ION.

    Two general classes of storage resources are managed by ION: volatile \"working memory\" and non-volatile \"heap\".

    1. Working Memory

    ION's \"working memory\" is a fixed-size pool of shared memory (dynamic RAM) that is allocated from system RAM at the time the bundle protocol agent commences operation. Working memory is used by ION tasks to store temporary data of all kinds: linked lists, red-black trees, transient buffers, volatile databases, etc. All intermediate data products and temporary data structures that ought not to be retained in the event of a system power cycle are written to working memory.

    Data structures residing in working memory may be shared among ION tasks or may be created and managed privately by individual ION tasks. The dynamic allocation of working memory to ION tasks is accomplished by the Personal Space Management (PSM) service, described later. All of the working memory for any single ION bundle protocol agent is managed as a single PSM \"partition\". The size of the partition is specified in the wmSize parameter of the ionconfig file supplied at the time ION is initialized.
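    For orientation, both the working-memory and heap sizes are declared in the ionconfig file supplied at initialization. A hedged fragment follows; the sizes are illustrative assumptions, not recommendations, and should be tuned per deployment:

    ```
    # ionconfig fragment (illustrative values only)
    wmSize 5000000         # size of the PSM working-memory partition, in bytes
    configFlags 1          # 1 = SDR in DRAM; add 2 for reversibility, 4 for file copy
    heapWords 250000000    # size of the SDR heap, in words
    pathName /tmp/ion      # directory for the SDR heap file, when file-based
    ```

    Note that heapWords is expressed in words (4 or 8 bytes, depending on whether the platform is 32-bit or 64-bit), while wmSize is in bytes.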

    1. Heap

    ION's \"heap\" is a fixed-size pool of notionally non-volatile storage that is likewise allocated at the time the bundle protocol agent commences operation. This notionally non-volatile space may occupy a fixed-size pool of shared memory (dynamic RAM, which might or might not be battery-backed), or it may occupy only a single fixed-size file in the file system, or it may occupy both. In the latter case, all heap data are written both to memory and to the file but are read only from memory; this configuration offers the reliable non-volatility of file storage coupled with the high performance of retrieval from dynamic RAM.

    We characterize ION's heap storage as \"notionally\" non-volatile because the heap may be configured to reside only in memory (or, for that matter, in a file that resides in the file system of a RAM disk). When the heap resides only in memory, its contents are truly non-volatile only if that memory is battery-backed. Otherwise heap storage is in reality as volatile as working memory: heap contents will be lost upon a system power cycle (which may in fact be the preferred behavior for any given deployment of ION). However, the heap should not be thought of as \"memory\" even when it in fact resides only in DRAM, just as a disk device should not be thought of as \"memory\" even when it is in fact a RAM disk.


    Figure 5 ION heap space use

    The ION heap is used for storage of data that (in at least some deployments) would have to be retained in the event of a system power cycle to ensure the correct continued operation of the node. For example, all queues of bundles awaiting route computation, transmission, or delivery reside in the node's heap. So do the non-volatile databases for all of the protocols implemented within ION, together with all of the node's persistent configuration parameters.

    The dynamic allocation of heap space to ION tasks is accomplished by the Simple Data Recorder (SDR) service, described later. The entire heap for any single ION bundle protocol agent is managed as a single SDR \"data store\".

    Space within the ION heap is apportioned as shown in Figure 5. The total number of bytes of storage space in the heap is computed as the product of the size of a \"word\" on the deployment platform (normally the size of a pointer) multiplied by the value of the heapWords parameter of the ionconfig file supplied at the time ION is initialized. Of this total, 20% is normally reserved as margin and another 40% is normally reserved for various infrastructure operations. (Both of these percentages are macros that may be overridden at compile time.) The remainder is available for storage of protocol state data in the form of \"zero-copy objects\", described later. At any given moment, the data encapsulated in a zero-copy object may \"belong\" to any one of the protocols in the ION stack (AMS, CFDP, BP, LTP), depending on processing state; the available heap space is a single common resource to which all of the protocols share concurrent access.
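The apportionment arithmetic above can be sketched directly. This is an illustrative calculation only: the 20% margin and 40% infrastructure reservations are the compile-time defaults described in the text and may be overridden in a real build, and the function name is hypothetical.

```python
def heap_apportionment(heap_words, word_size=8,
                       margin_pct=0.20, infrastructure_pct=0.40):
    """Apportion the ION heap per the default percentages described above.

    heap_words: value of the heapWords parameter in the ionconfig file.
    word_size:  bytes per word on the deployment platform (pointer size).
    """
    total = heap_words * word_size
    margin = int(total * margin_pct)
    infrastructure = int(total * infrastructure_pct)
    # The remainder is available for protocol state data (zero-copy objects).
    available = total - margin - infrastructure
    return {"total": total, "margin": margin,
            "infrastructure": infrastructure, "available": available}

# e.g., heapWords = 250000 on a 64-bit platform -> a 2,000,000-byte heap
sizes = heap_apportionment(250_000)
```

With these defaults, only 40% of the configured heap is actually available for zero-copy objects, which is worth remembering when sizing heapWords for a deployment.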

    Because the heap is used to store queues of bundles awaiting processing, blocks of LTP data awaiting transmission or reassembly, etc., the heap for any single ION node must be large enough to contain the maximum volume of such data that the node will be required to retain during operations. Demand for heap space is substantially mitigated if most of the application data units passed to ION for transmission are file-resident, as the file contents themselves need not be copied into the heap. In general, however, computing the optimum ION heap size for a given deployment remains a research topic.

    "},{"location":"ION-Guide/#package-overviews","title":"Package Overviews","text":""},{"location":"ION-Guide/#interplanetary-communication-infrastructure-ici","title":"Interplanetary Communication Infrastructure (ICI)","text":"

    The ICI package in ION provides a number of core services that, from ION's point of view, implement what amounts to an extended POSIX-based operating system. ICI services include the following:

    1. Platform

    The platform system contains operating-system-sensitive code that enables ICI to present a single, consistent programming interface to those common operating system services that multiple ION modules utilize. For example, the platform system implements a standard semaphore abstraction that may invisibly be mapped to underlying POSIX semaphores, SVR4 IPC semaphores, Windows Events, or VxWorks semaphores, depending on which operating system the package is compiled for. The platform system also implements a standard shared-memory abstraction, enabling software running on operating systems both with and without memory protection to participate readily in ION's shared-memory-based computing environment.

    2. Personal Space Management (PSM)

    Although sound flight software design may prohibit the uncontrolled dynamic management of system memory, private management of assigned, fixed blocks of system memory is standard practice. Often that private management amounts to merely controlling the reuse of fixed-size rows in static tables, but such techniques can be awkward and may not make the most efficient use of available memory. The ICI package provides an alternative, called PSM, which performs high-speed dynamic allocation and recovery of variable-size memory objects within an assigned memory block of fixed size. A given PSM-managed memory block may be either private or shared memory.

    3. Memmgr

    The static allocation of privately-managed blocks of system memory for different purposes implies the need for multiple memory management regimes, and in some cases a program that interacts with multiple software elements may need to participate in the private shared-memory management regimes of each. ICI's memmgr system enables multiple memory managers -- for multiple privately-managed blocks of system memory -- to coexist within ION and be concurrently available to ION software elements.

    4. Lyst

    The lyst system is a comprehensive, powerful, and efficient system for managing doubly-linked lists in private memory. It is the model for a number of other list management systems supported by ICI; as noted earlier, linked lists are heavily used in ION inter-task communication.

    5. Llcv

    The llcv (Linked-List Condition Variables) system is an inter-thread communication abstraction that integrates POSIX thread condition variables (vice semaphores) with doubly-linked lists in private memory.

    6. Smlist

    Smlist is another doubly-linked list management service. It differs from lyst in that the lists it manages reside in shared (rather than private) DRAM, so operations on them must be semaphore-protected to prevent race conditions.

    7. SmRbt

    The SmRbt service provides mechanisms for populating and navigating \"red/black trees\" (RBTs) residing in shared DRAM. RBTs offer an alternative to linked lists: like linked lists they can be navigated as queues, but locating a single element of an RBT by its \"key\" value can be much quicker than the equivalent search through an ordered linked list.

    8. Simple Data Recorder (SDR)

    SDR is a system for managing non-volatile storage, built on exactly the same model as PSM. Put another way, SDR is a small and simple \"persistent object\" system or \"object database\" management system. It enables straightforward management of linked lists (and other data structures of arbitrary complexity) in non-volatile storage, notionally within a single file whose size is pre-defined and fixed.

    SDR includes a transaction mechanism that protects database integrity by ensuring that the failure of any database operation will cause all other operations undertaken within the same transaction to be backed out. The intent of the system is to assure retention of coherent protocol engine state even in the event of an unplanned flight computer reboot in the midst of communication activity.
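The all-or-nothing transaction semantics can be illustrated with a minimal undo-journal sketch. This is toy Python, not the actual SDR C API; the class and method names are invented for illustration.

```python
class MiniDataStore:
    """Toy persistent store illustrating SDR-style transactions."""
    def __init__(self):
        self.cells = {}        # committed state
        self.journal = None    # undo log for the open transaction

    def begin_xn(self):
        self.journal = []

    def write(self, key, value):
        # Journal the prior value so the operation can be backed out.
        self.journal.append((key, self.cells.get(key)))
        self.cells[key] = value

    def end_xn(self):
        self.journal = None    # commit: discard the undo log

    def cancel_xn(self):
        # Failure: back out every journaled operation, newest first.
        for key, old in reversed(self.journal):
            if old is None:
                self.cells.pop(key, None)
            else:
                self.cells[key] = old
        self.journal = None

store = MiniDataStore()
store.begin_xn()
store.write("bundles_queued", 3)
store.end_xn()                   # committed

store.begin_xn()
store.write("bundles_queued", 99)
store.cancel_xn()                # operation fails: backed out to 3
```

The real SDR additionally journals to non-volatile storage, which is what lets a node recover a coherent protocol state after an unplanned reboot.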

    9. Sptrace

    The sptrace system is an embedded diagnostic facility that monitors the performance of the PSM and SDR space management systems. It can be used, for example, to detect memory \"leaks\" and other memory management errors.

    10. Zco

    ION's zco (zero-copy objects) system leverages the SDR system's storage flexibility to enable user application data to be encapsulated in any number of layers of protocol without copying the successively augmented protocol data unit from one layer to the next. It also implements a reference counting system that enables protocol data to be processed safely by multiple software elements concurrently -- e.g., a bundle may be both delivered to a local endpoint and, at the same time, queued for forwarding to another node -- without requiring that distinct copies of the data be provided to each element.
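The reference-counting idea can be sketched as follows. This is an illustrative toy, not the actual zco API: the object lives on as long as any software element holds a reference, so the delivery and forwarding paths can share one copy of the data safely.

```python
class ZeroCopyObject:
    """Toy reference-counted data object in the spirit of ION's ZCO."""
    def __init__(self, payload):
        self.payload = payload
        self.refs = 0
        self.destroyed = False

    def add_reference(self):
        self.refs += 1
        return self

    def release(self):
        self.refs -= 1
        if self.refs == 0:
            self.destroyed = True  # storage reclaimed only when last user is done

bundle_data = ZeroCopyObject(b"application data unit")

# The same object is both delivered locally and queued for forwarding:
delivery_ref = bundle_data.add_reference()
forwarding_ref = bundle_data.add_reference()

delivery_ref.release()             # local delivery finishes first...
assert not bundle_data.destroyed   # ...but the forwarding task is still safe
forwarding_ref.release()           # last reference gone -> storage reclaimed
```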

    11. Rfx

    The ION rfx (R/F Contacts) system manages lists of scheduled communication opportunities in support of a number of LTP and BP functions.

    12. Ionsec

    The IONSEC (ION security) system manages information that supports the implementation of security mechanisms in the other packages: security policy rules and computation keys.

    "},{"location":"ION-Guide/#licklider-transmission-protocol-ltp","title":"Licklider Transmission Protocol (LTP)","text":"

    The ION implementation of LTP conforms fully to RFC 5326, but it also provides two additional features that enhance functionality without affecting interoperability with other implementations:

    In the ION stack, LTP serves effectively the same role that is performed by an LLC protocol (such as IEEE 802.2) in the Internet architecture, providing flow control and retransmission-based reliability between topologically adjacent bundle protocol agents.

    All LTP session state is safely retained in the ION heap for rapid recovery from a spacecraft or software fault.

    "},{"location":"ION-Guide/#bundle-protocol-bp","title":"Bundle Protocol (BP)","text":"

    The ION implementation of BP conforms fully to RFC 5050, including support for the following standard capabilities:

    The system also provides three additional features that enhance functionality without affecting interoperability with other implementations:

    In addition, ION BP includes a system for computing dynamic routes through time-varying network topology assembled from scheduled, bounded communication opportunities. This system, called \"Contact Graph Routing,\" is described later in this Guide.

    In short, BP serves effectively the same role that is performed by IP in the Internet architecture, providing route computation, forwarding, congestion avoidance, and control over quality of service.

    All bundle transmission state is safely retained in the ION heap for rapid recovery from a spacecraft or software fault.

    "},{"location":"ION-Guide/#asynchronous-message-service-ams","title":"Asynchronous Message Service (AMS)","text":"

    The ION implementation of the CCSDS AMS standard conforms fully to CCSDS 735.0-B-1. AMS is a data system communications architecture under which the modules of mission systems may be designed as if they were to operate in isolation, each one producing and consuming mission information without explicit awareness of which other modules are currently operating. Communication relationships among such modules are self-configuring; this tends to minimize complexity in the development and operations of modular data systems.

    A system built on this model is a \"society\" of generally autonomous inter-operating modules that may fluctuate freely over time in response to changing mission objectives, modules' functional upgrades, and recovery from individual module failure. The purpose of AMS, then, is to reduce mission cost and risk by providing standard, reusable infrastructure for the exchange of information among data system modules in a manner that is simple to use, highly automated, flexible, robust, scalable, and efficient.

    A detailed discussion of AMS is beyond the scope of this Design Guide. For more information, please see the AMS Programmer's Guide.

    "},{"location":"ION-Guide/#datagram-retransmission-dgr","title":"Datagram Retransmission (DGR)","text":"

    The DGR package in ION is an alternative implementation of LTP that is designed to operate responsibly -- i.e., with built-in congestion control -- in the Internet or other IP-based networks. It is provided as a candidate \"primary transfer service\" in support of AMS operations in an Internet-like (non-delay-tolerant) environment. The DGR design combines LTP's concept of concurrent transmission transactions with congestion control and timeout interval computation algorithms adapted from TCP.

    "},{"location":"ION-Guide/#ccsds-file-delivery-protocol-cfdp","title":"CCSDS File Delivery Protocol (CFDP)","text":"

    The ION implementation of CFDP conforms fully to Service Class 1 (Unreliable Transfer) of CCSDS 727.0-B-4, including support for the following standard capabilities:

    All CFDP transaction state is safely retained in the ION heap for rapid recovery from a spacecraft or software fault.

    "},{"location":"ION-Guide/#bundle-streaming-service-bss","title":"Bundle Streaming Service (BSS)","text":"

    The BSS service provided in ION enables a stream of video, audio, or other continuously generated application data units, transmitted over a delay-tolerant network, to be presented to a destination application in two useful modes concurrently:

    "},{"location":"ION-Guide/#trusted-collective-tc","title":"Trusted Collective (TC)","text":"

    The TC service provided in ION enables critical but non-confidential information (such as public keys, for asymmetric cryptography) to be provided in a delay-tolerant, trustworthy manner. An instance of TC comprises:

    "},{"location":"ION-Guide/#acronyms","title":"Acronyms","text":"Acronyms Description BP Bundle Protocol BSP Bundle Security Protocol BSS Bundle Streaming Service CCSDS Consultative Committee for Space Data Systems CFDP CCSDS File Delivery Protocol CGR Contact Graph Routing CL convergence layer CLI convergence layer input CLO convergence layer output DTKA Delay-Tolerant Key Administration DTN Delay-Tolerant Networking ICI Interplanetary Communication Infrastructure ION Interplanetary Overlay Network LSI link service input LSO link service output LTP Licklider Transmission Protocol OWLT one-way light time RFC request for comments RFX Radio (R/F) Contacts RTT round-trip time TC Trusted Collective TTL time to live"},{"location":"ION-Guide/#network-operation-concepts","title":"Network Operation Concepts","text":"

    A small number of network operation design elements -- fragmentation and reassembly, bandwidth management, and delivery assurance (retransmission) -- can potentially be addressed at multiple layers of the protocol stack, possibly in different ways for different reasons. In stack design it's important to allocate this functionality carefully so that the effects at lower layers complement, rather than subvert, the effects imposed at higher layers of the stack. This allocation of functionality is discussed below, together with a discussion of several related key concepts in the ION design.

    "},{"location":"ION-Guide/#fragmentation-and-reassembly","title":"Fragmentation and Reassembly","text":"

    To minimize transmission overhead and accommodate asymmetric links (i.e., limited \"uplink\" data rate from a ground data system to a spacecraft) in an interplanetary network, we ideally want to send \"downlink\" data in the largest possible aggregations -- coarse-grained transmission.

    But to minimize head-of-line blocking (i.e., delay in transmission of a newly presented high-priority item) and minimize data delivery latency by using parallel paths (i.e., to provide fine-grained partial data delivery, and to minimize the impact of unexpected link termination), we want to send \"downlink\" data in the smallest possible aggregations -- fine-grained transmission.

    We reconcile these impulses by doing both, but at different layers of the ION protocol stack.

    First, at the application service layer (AMS and CFDP) we present relatively small application data units (ADUs) -- on the order of 64 KB -- to BP for encapsulation in bundles. This establishes an upper bound on head-of-line blocking when bundles are de-queued for transmission, and it provides perforations in the data stream at which forwarding can readily be switched from one link (route) to another, enabling partial data delivery at relatively fine, application-appropriate granularity.

    (Alternatively, large application data units may be presented to BP and the resulting large bundles may be proactively fragmented at the time they are presented to the convergence-layer manager. This capability is meant to accommodate environments in which the convergence-layer manager has better information than the application as to the optimal bundle size, such as when the residual capacity of a contact is known to be less than the size of the bundle.)

    Then, at the BP/LTP convergence layer adapter lower in the stack, we aggregate these small bundles into blocks for presentation to LTP:

    Any continuous sequence of bundles that are to be shipped to the same LTP engine and all require assured delivery may be aggregated into a single block, to reduce overhead and minimize report traffic.

    However, this aggregation is constrained by an aggregation size limit rule: aggregation must stop and the block must be transmitted as soon as the sum of the sizes of all bundles aggregated into the block exceeds the block aggregation threshold value declared for the applicable span (the relationship between the local node's LTP engine and the receiving LTP engine) during LTP protocol configuration via ltpadmin.

    Given a preferred block acknowledgment period -- e.g., a preferred acknowledgement traffic rate of one report per second -- the nominal block aggregation threshold is notionally computed as the amount of data that can be sent over the link to the receiving LTP engine in a single block acknowledgment period at the planned outbound data rate to that engine.
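For instance, the nominal threshold under the one-report-per-second example works out as follows (the rate and period figures are assumed for illustration; the function name is hypothetical):

```python
def block_aggregation_threshold(outbound_rate_bps, ack_period_s=1.0):
    """Nominal LTP block aggregation threshold, in bytes: the volume of
    data sendable to the receiving engine in one acknowledgment period."""
    return int(outbound_rate_bps * ack_period_s / 8)

# A 1 Mbps span with a 1-second preferred acknowledgment period:
threshold = block_aggregation_threshold(1_000_000)  # 125,000 bytes (~128 KB)
```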

    Taken together, application-level fragmentation (or BP proactive fragmentation) and LTP aggregation place an upper limit on the amount of data that would need to be re-transmitted over a given link at next contact in the event of an unexpected link termination that caused delivery of an entire block to fail. For example, if the data rate is 1 Mbps and the nominal block size is 128 KB (equivalent to 1 second of transmission time), we would prefer to avoid the risk of having wasted five minutes of downlink in sending a 37.5 MB file that fails on transmission of the last kilobyte, forcing retransmission of the entire 37.5 MB. We therefore divide the file into, say, 1200 bundles of 32 KB each, which are aggregated into blocks of 128 KB each: if transmission fails, only a single block is affected, so only that block (containing just 4 bundles) needs to be retransmitted. The cost of this retransmission is only 1 second of link time rather than 5 minutes. By controlling the cost of convergence-layer protocol failure in this way, we avoid the overhead and complexity of \"reactive fragmentation\" in the BP implementation.
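The arithmetic of that example can be checked directly (all figures taken from the text above):

```python
RATE_BPS = 1_000_000       # link data rate, 1 Mbps
FILE_BYTES = 37_500_000    # 37.5 MB file (~5 minutes of downlink)
BLOCK_BYTES = 128 * 1024   # nominal block, ~1 s of transmission time
BUNDLE_BYTES = 32 * 1024   # application-layer bundle size

file_transmission_s = FILE_BYTES * 8 / RATE_BPS    # ~300 s (5 minutes)
bundles_per_block = BLOCK_BYTES // BUNDLE_BYTES    # 4 bundles per block
block_retransmit_s = BLOCK_BYTES * 8 / RATE_BPS    # ~1 s per lost block

# Losing the last kilobyte without aggregation costs the whole 5 minutes;
# with 128 KB blocks it costs roughly one second of link time.
```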

    Finally, within LTP itself we fragment the block as necessary to accommodate the Maximum Transfer Unit (MTU) size of the underlying link service, typically the transfer frame size of the applicable CCSDS link protocol.

    "},{"location":"ION-Guide/#bandwidth-management","title":"Bandwidth Management","text":"

    The allocation of bandwidth (transmission opportunity) to application data is requested by the application task that's passing data to DTN, but it is necessarily accomplished only at the lowest layer of the stack at which bandwidth allocation decisions can be made -- and then always in the context of node policy decisions that have global effect.

    The transmission queue interface to a given neighbor in the network is actually three queues of outbound bundles rather than one: one queue for each of the defined levels of priority (\"class of service\") supported by BP. When an application presents an ADU to BP for encapsulation in a bundle, it indicates its own assessment of the ADU's priority. Upon selection of a proximate forwarding destination node for that bundle, the bundle is appended to whichever of the queues corresponds to the ADU's priority.

    Normally the convergence-layer manager (CLM) task servicing a given proximate node extracts bundles in strict priority order from the heads of the three queues. That is, the bundle at the head of the highest-priority non-empty queue is always extracted.
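The default strict-priority extraction can be sketched minimally (illustrative only; ION's actual CLM is C code operating on shared-memory queues):

```python
from collections import deque

# One outbound queue per BP priority level: 2 = expedited, 1 = standard, 0 = bulk.
queues = {2: deque(), 1: deque(), 0: deque()}

def enqueue(bundle, priority):
    queues[priority].append(bundle)

def extract_next():
    """Strict-priority extraction: always take the head of the
    highest-priority non-empty queue."""
    for priority in (2, 1, 0):
        if queues[priority]:
            return queues[priority].popleft()
    return None  # nothing to transmit

enqueue("bulk-1", 0)
enqueue("std-1", 1)
enqueue("exp-1", 2)
order = [extract_next(), extract_next(), extract_next()]
# extraction order: expedited, then standard, then bulk
```

Note that under strict priority a steady stream of expedited traffic can starve the lower queues, which is exactly the behavior the ION_BANDWIDTH_RESERVED interleaving option is meant to mitigate.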

    However, if the ION_BANDWIDTH_RESERVED compiler option is selected at the time ION is built, the convergence-layer manager task servicing a given proximate node extracts bundles in interleaved fashion from the heads of the node's three queues:

    Following insertion of the extracted bundles into transmission buffers, CLO tasks other than ltpclo simply segment the buffered bundles as necessary and transmit them using the underlying convergence-layer protocols. In the case of ltpclo, the output task aggregates the buffered bundles into blocks as described earlier and a second daemon task named ltpmeter waits for aggregated blocks to be completed; ltpmeter, rather than the CLO task itself, segments each completed block as necessary and passes the segments to the link service protocol that underlies LTP. Either way, the transmission ordering requested by application tasks is preserved.

    "},{"location":"ION-Guide/#contact-plans","title":"Contact Plans","text":"

    In the Internet, protocol operations can be largely driven by currently effective information that is discovered opportunistically and immediately, at the time it is needed, because the latency in communicating this information over the network is negligible: distances between communicating entities are small and connectivity is continuous. In a DTN-based network, however, ad-hoc information discovery would in many cases take so much time that it could not be completed before the information lost currency and effectiveness. Instead, protocol operations must be largely driven by information that is pre-placed at the network nodes and tagged with the dates and times at which it becomes effective. This information takes the form of contact plans that are managed by the R/F Contacts (rfx) services of ION's ici package.

    Figure 6 RFX services in ION

    The structure of ION's RFX (contact plan) database, the rfx system elements that populate and use that data, and affected portions of the BP and LTP protocol state databases are shown in Figure 6. (For additional details of BP and LTP database management, see the BP/LTP discussion later in this document.)

    To clarify the notation of this diagram, which is also used in other database structure diagrams in this document:

    A contact is here defined as an interval during which it is expected that data will be transmitted by DTN node A (the contact's transmitting node) and most or all of the transmitted data will be received by node B (the contact's receiving node). Implicitly, the transmitting node will utilize some \"convergence-layer\" protocol underneath the Bundle Protocol to effect this direct transmission of data to the receiving node. Each contact is characterized by its start time, its end time, the identities of the transmitting and receiving nodes, and the rate at which data are expected to be transmitted by the transmitting node throughout the indicated time period.

    (Note that a contact is specifically not an episode of activity on a link. Episodes of activity on different links -- e.g., different radio transponders operating on the same spacecraft -- may well overlap, but contacts by definition cannot; they are bounded time intervals and as such are innately \"tiled\". For example, suppose transmission on link X from node A to node B, at data rate RX, begins at time T1 and ends at time T2; also, transmission on link Y from node A to node B, at data rate RY, begins at time T3 and ends at time T4. If T1 = T3 and T2 = T4, then there is a single contact from time T1 to time T2 at data rate RX + RY. If T1 \\< T3 and T2 = T4, then there are two contiguous contacts: one from T1 to T3 at data rate RX, then one from T3 to T2 at data rate RX + RY. If T1 \\< T3 and T3 \\< T2 \\< T4, then there are three contiguous contacts: one from T1 to T3 at data rate RX, then one from T3 to T2 at data rate RX + RY, then one from T2 to T4 at data rate RY. And so on.)
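The tiling rule in that parenthetical can be expressed as a small procedure: collect all episode boundaries, then emit one contact per interval between consecutive boundaries, with a rate equal to the sum of the rates of the episodes active there. This is an illustrative sketch, not ION's rfx code.

```python
def tile_contacts(episodes):
    """episodes: (start, end, rate) link-activity episodes between one
    transmitting node and one receiving node, possibly overlapping.
    Returns non-overlapping, tiled contacts as (start, end, total_rate)."""
    boundaries = sorted({t for s, e, _ in episodes for t in (s, e)})
    contacts = []
    for lo, hi in zip(boundaries, boundaries[1:]):
        # Sum the rates of all episodes that span this entire interval.
        rate = sum(r for s, e, r in episodes if s <= lo and e >= hi)
        if rate > 0:
            contacts.append((lo, hi, rate))
    return contacts

# Links X and Y from the example, with T1=0, T3=10, T2=20, T4=30:
contacts = tile_contacts([(0, 20, 100), (10, 30, 50)])
# -> [(0, 10, 100), (10, 20, 150), (20, 30, 50)]
```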

    A range interval is a period of time during which the displacement between two nodes A and B is expected to vary by less than 1 light second from a stated anticipated distance. (We expect this information to be readily computable from the known orbital elements of all nodes.) Each range interval is characterized by its start time, its end time, the identities of the two nodes to which it pertains, and the anticipated approximate distance between those nodes throughout the indicated time period, to the nearest light second.
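Converting an anticipated distance to the nearest light second is simple arithmetic; the Earth-Mars distance used below is an assumed figure for illustration.

```python
C_KM_PER_S = 299_792.458   # speed of light in km/s

def range_in_light_seconds(distance_km):
    """Anticipated OWLT between two nodes, to the nearest light second,
    as would be recorded in a Range entry (illustrative helper)."""
    return round(distance_km / C_KM_PER_S)

# e.g., Earth and Mars at roughly 225 million km apart:
owlt = range_in_light_seconds(225_000_000)  # about 12.5 minutes one way
```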

    The topology timeline at each node in the network is a time-ordered list of scheduled or anticipated changes in the topology of the network. Entries in this list are of two types:

    \u2022 Contact entries characterize scheduled contacts.

    \u2022 Range entries characterize anticipated range intervals.

    Each node to which, according to the RFX database, the local node transmits data directly via some convergence-layer protocol at some time is termed a neighbor of the local node. Each neighbor is associated with one or more outducts for the applicable BP convergence-layer (CL) protocol adapter(s), so bundles that are to be transmitted directly to this neighbor can simply be queued for transmission by outduct (as discussed in the Bandwidth Management notes above).

    At startup, and at any time while the system is running, ionadmin inserts and removes Contact and Range entries in the topology timeline of the RFX database. Inserting or removing a Contact or Range entry will cause routing tables to be recomputed for the destination nodes of all subsequently forwarded bundles, as described in the discussion of Contact Graph Routing below.

    Once per second, the rfxclock task (which appears in multiple locations on the diagram to simplify the geometry) applies all topology timeline events (Contact and Range start, stop, purge) with effective time in the past. Applying a Contact event that cites a neighboring node revises the transmission or reception data rate between the local node and that neighbor. Applying a Range event that cites a neighboring node revises the OWLT between the local node and that neighbor. Setting data rate or OWLT for a node with which the local node will at some time be in direct communication may entail creation of a Neighbor object.

    "},{"location":"ION-Guide/#route-computation","title":"Route Computation","text":"

    ION's computation of a route for a given bundle with a given destination endpoint is accomplished by one of several methods, depending on the destination. In every case, the result of successful routing is the insertion of the bundle into an outbound transmission queue (selected according to the bundle's priority) for one or more neighboring nodes.

    But before discussing these methods it will be helpful to establish some terminology:

    Egress plans

    ION can only forward bundles to a neighboring node by queuing them on some explicitly specified transmission queue. Specifications that associate neighboring nodes with outducts are termed egress plans. They are retained in ION's unicast forwarding database.

    Static routes

    ION can be configured to forward to some specified node all bundles that are destined for a given node to which no dynamic route can be discovered from an examination of the contact graph, as described later. Static routing is implemented by means of the \"exit\" mechanism described below.

    Unicast

    When the destination of a bundle is a single node that is registered within a known \"singleton endpoint\" (that is, an endpoint that is known to have exactly one member), then transmission of that bundle is termed unicast. For this purpose, the destination endpoint ID must be a URI formed in either the \"dtn\" scheme (e.g., dtn://bobsmac/mail) or the \"ipn\" scheme (e.g., ipn:913.11).

    Exits

    When unicast routes must be computed to nodes for which no contact plan information is known (e.g., the size of the network makes it impractical to distribute all Contact and Range information for all nodes to every node, or the destination nodes don't participate in Contact Graph Routing at all), the job of computing routes to all nodes may be partitioned among multiple exit nodes. Each exit is responsible for managing routing information (for example, a comprehensive contact graph) for some subset of the total network population -- a group comprising all nodes whose node numbers fall within the range of node numbers assigned to the exit. A bundle destined for a node for which no dynamic route can be computed from the local node's contact graph may be routed to the exit node for the group within whose range the destination's node number falls. Exits are defined in ION's unicast forwarding database. (Note that the exit implements static routes in ION in addition to improving scalability.)

    Multicast

    When the destination of a bundle is all nodes that are registered within a known \"multicast endpoint\" (that is, an endpoint that is not known to have exactly one member), then transmission of that bundle is termed multicast. For this purpose (in ION), the destination endpoint ID must be a URI formed in the \"imc\" scheme (e.g., imc:913.11).

    Multicast Groups

    A multicast group is the set of all nodes in the network that are members of a given multicast endpoint. Forwarding a bundle to all members of its destination multicast endpoint is the responsibility of all of the multicast-aware nodes of the network. These nodes are additionally configured to be nodes of a single multicast spanning tree overlaid onto the dtnet. A single multicast tree serves to forward bundles to all multicast groups: each node of the tree manages petitions indicating which of its \"relatives\" (parent and children) are currently interested in bundles destined for each multicast endpoint, either natively (due to membership in the indicated group) or on behalf of more distant relatives.

    "},{"location":"ION-Guide/#unicast","title":"Unicast","text":"

    We begin unicast route computation by attempting to compute a dynamic route to the bundle's final destination node. The details of this algorithm are described in the section on Contact Graph Routing, below.

    If no dynamic route can be computed, but the final destination node is a \"neighboring\" node that is directly reachable, then we assume that taking this direct route is the best strategy unless transmission to that neighbor is flagged as \"blocked\" for network operations purposes.

    Otherwise we must look for a static route. If the bundle's destination node number is in one of the ranges of node numbers assigned to exit nodes, then we forward the bundle to the exit node for the smallest such range. (If the exit node is a neighbor and transmission to that neighbor is not blocked, we simply queue the bundle for transmission to that neighbor; otherwise we similarly look up the static route for the exit node until eventually we resolve to some egress plan.)

    If we can determine neither a dynamic route nor a static route for this bundle, but the reason for this failure was transmission blockage that might be resolved in the future, then the bundle is placed in a \"limbo\" list for future re-forwarding when transmission to some node is \"unblocked.\"

    Otherwise, the bundle cannot be forwarded. If custody transfer is requested for the bundle, we send a custody refusal to the bundle's current custodian; in any case, we discard the bundle.

    "},{"location":"ION-Guide/#multicast","title":"Multicast","text":"

    Multicast route computation is much simpler.

    "},{"location":"ION-Guide/#delivery-assurance","title":"Delivery Assurance","text":"

    End-to-end delivery of data can fail in many ways, at different layers of the stack. When delivery fails, we can either accept the communication failure or retransmit the data structure that was transmitted at the stack layer at which the failure was detected. ION is designed to enable retransmission at multiple layers of the stack, depending on the preference of the end user application.

    At the lowest stack layer that is visible to ION, the convergence-layer protocol, failure to deliver one or more segments due to segment loss or corruption will trigger segment retransmission if a \"reliable\" convergence-layer protocol is in use: LTP \"red-part\" transmission or TCP (including Bundle Relay Service, which is based on TCP)1.

    Segment loss may be detected and signaled via NAK by the receiving entity, or it may only be detected at the sending entity by expiration of a timer prior to reception of an ACK. Timer interval computation is well understood in a TCP environment, but it can be a difficult problem in an environment of scheduled contacts as served by LTP. The round-trip time for an acknowledgment dialogue may be simply twice the one-way light time (OWLT) between sender and receiver at one moment, but it may be hours or days longer at the next moment due to cessation of scheduled contact until a future contact opportunity. To account for this timer interval variability in retransmission, the ltpclock task infers the initiation and cessation of LTP transmission, to and from the local node, from changes in the current xmit and recv data rates in the corresponding Neighbor objects. This controls the dequeuing of LTP segments for transmission by underlying link service adapter(s) and it also controls suspension and resumption of timers, removing the effects of contact interruption from the retransmission regime. For a further discussion of this mechanism, see the section below on LTP Timeout Intervals.
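
    The timer suspension described above can be sketched in a few lines. This is a hypothetical Python model, not ION's actual ltpclock code (which does this bookkeeping in C, driven by changes in Neighbor xmit/recv rates): the countdown advances only while the link is active, so a scheduled contact gap of any length adds nothing toward timer expiration.

```python
# Illustrative model of an LTP retransmission timer whose countdown is
# suspended during scheduled contact gaps (hypothetical sketch; ION's
# ltpclock implements this against Neighbor xmit/recv rate changes).

class SuspendableTimer:
    def __init__(self, interval):
        self.remaining = interval    # seconds of *active contact* remaining
        self.suspended = False

    def tick(self):
        """Per-second bookkeeping; returns True when the timer expires."""
        if not self.suspended:
            self.remaining -= 1
        return self.remaining <= 0   # expired => retransmit the segment

    def suspend(self):               # contact ended: freeze the countdown
        self.suspended = True

    def resume(self):                # contact resumed: thaw the countdown
        self.suspended = False

timer = SuspendableTimer(interval=5)
expired = [timer.tick() for _ in range(3)]     # 3 s of active contact
timer.suspend()
expired += [timer.tick() for _ in range(100)]  # 100 s outage: no countdown
timer.resume()
expired += [timer.tick() for _ in range(2)]    # 2 more active seconds
```

    Only the final tick expires the timer; the 100-second outage contributes nothing, which is precisely the effect of removing contact interruption from the retransmission regime.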

    Note that the current OWLT in Neighbor objects is also used in the computation of the nominal expiration times of timers and that ltpclock is additionally the agent for LTP segment retransmission based on timer expiration.

    It is, of course, possible for the nominally reliable convergence-layer protocol to fail altogether: a TCP connection might be abruptly terminated, or an LTP transmission might be canceled due to excessive retransmission activity (again possibly due to an unexpected loss of connectivity). In this event, BP itself detects the CL protocol failure and re-forwards all bundles whose acquisition by the receiving entity is presumed to have been aborted by the failure. This re-forwarding is initiated in different ways for different CL protocols, as implemented in the CL input and output adapter tasks. If immediate re-forwarding is impossible because transmission to all potentially viable neighbors is blocked, the affected bundles are placed in the limbo list for future re-forwarding when transmission to some node is unblocked.

    In addition to the implicit forwarding failure detected when a CL protocol fails, the forwarding of a bundle may be explicitly refused by the receiving entity, provided the bundle is flagged for custody transfer service. A receiving node's refusal to take custody of a bundle may have any of a variety of causes: typically the receiving node either (a) has insufficient resources to store and forward the bundle, (b) has no route to the destination, or (c) will have no contact with the next hop on the route before the bundle's TTL has expired. In any case, a \"custody refusal signal\" (packaged in a bundle) is sent back to the sending node, which must re-forward the bundle in hopes of finding a more suitable route.

    Alternatively, failure to receive a custody acceptance signal within some convergence-layer-specified or application-specified time interval may also be taken as an implicit indication of forwarding failure. Here again, when BP detects such a failure it attempts to re-forward the affected bundle, placing the bundle in the limbo list if re-forwarding is currently impossible.

    In the worst case, the combined efforts of all the retransmission mechanisms in ION are not enough to ensure delivery of a given bundle, even when custody transfer is requested. In that event, the bundle's \"time to live\" will eventually expire while the bundle is still in custody at some node: the bpclock task will send a bundle status report to the bundle's report-to endpoint, noting the TTL expiration, and destroy the bundle. The report-to endpoint, upon receiving this report, may be able to initiate application-layer retransmission of the original application data unit in some way. This final retransmission mechanism is wholly application-specific, however.

    "},{"location":"ION-Guide/#rate-control","title":"Rate Control","text":"

    In the Internet, the rate of transmission at a node can be dynamically negotiated in response to changes in level of activity on the link, to minimize congestion. On deep space links, signal propagation delays (distances) may be too great to enable effective dynamic negotiation of transmission rates. Fortunately, deep space links are operationally reserved for use by designated pairs of communicating entities over pre-planned periods of time at pre-planned rates. Provided there is no congestion inherent in the contact plan, congestion in the network can be avoided merely by adhering to the planned contact periods and data rates. Rate control in ION serves this purpose.

    While the system is running, transmission and reception of bundles is constrained by the current capacity in the throttle of each convergence-layer manager. Completed bundle transmission activity reduces the current capacity of the applicable throttle by the capacity consumption computed for that bundle. This reduction may cause the throttle's current capacity to become negative. Once the current capacity of the applicable throttle goes negative, activity is blocked until non-negative capacity has been restored by bpclock.

    Once per second, the bpclock task increases the current capacity of each throttle by one second's worth of traffic at the nominal data rate for transmission to that node, thus enabling some possibly blocked bundle transmission and reception to proceed.

    bpclock revises all throttles' nominal data rates once per second in accord with the current data rates in the corresponding Neighbor objects, as adjusted by rfxclock per the contact plan.
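
    This throttle scheme can be sketched as follows (a hypothetical Python model, not ION's bpclock C code): completed transmissions debit current capacity, which may go negative; each clock tick credits one second's worth of the nominal rate, and activity is blocked only while capacity is negative.

```python
# Illustrative rate-control throttle (names are hypothetical; ION
# implements this in bpclock and the convergence-layer managers).

class Throttle:
    def __init__(self, nominal_rate):
        self.nominal_rate = nominal_rate  # bytes/sec from the contact plan
        self.capacity = nominal_rate      # current capacity; may go negative

    def can_transmit(self):
        return self.capacity >= 0         # blocked while capacity is negative

    def charge(self, bundle_size):
        self.capacity -= bundle_size      # debit on completed transmission

    def clock_tick(self):
        self.capacity += self.nominal_rate  # one second's worth of traffic

throttle = Throttle(nominal_rate=1000)
throttle.charge(2500)                     # 2500-byte bundle: capacity -1500
blocked = not throttle.can_transmit()
throttle.clock_tick()                     # capacity -500: still blocked
still_blocked = not throttle.can_transmit()
throttle.clock_tick()                     # capacity +500: unblocked
unblocked = throttle.can_transmit()
```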

    Note that this means that, for any neighboring node for which there are planned contacts, ION's rate control system will enable data flow only while contacts are active.

    "},{"location":"ION-Guide/#flow-control","title":"Flow Control","text":"

    A further constraint on rates of data transmission in an ION-based network is LTP flow control. LTP is designed to enable multiple block transmission sessions to be in various stages of completion concurrently, to maximize link utilization: there is no requirement to wait for one session to complete before starting the next one. However, if unchecked this design principle could in theory result in the allocation of all memory in the system to incomplete LTP transmission sessions. To prevent complete storage resource exhaustion, we set a firm upper limit on the total number of outbound blocks that can be concurrently in transit at any given time. These limits are established by ltpadmin at node initialization time.

    The maximum number of transmission sessions that may be concurrently managed by LTP therefore constitutes a transmission \"window\" -- the basis for a delay-tolerant, non-conversational flow control service over interplanetary links. Once the maximum number of sessions are in flight, no new block transmission session can be initiated -- regardless of how much outduct transmission capacity is provided by rate control -- until some existing session completes or is canceled.
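
    The session window can be modeled with a counting semaphore (a hypothetical Python sketch; ION's real limits are set by ltpadmin at node initialization and enforced in C): once the configured number of sessions is in flight, starting another block transmission must wait for a session to complete or be canceled.

```python
# Illustrative LTP transmission "window": a fixed pool of session slots;
# starting a new block transmission blocks until a slot frees up.

import threading

class SessionWindow:
    def __init__(self, max_sessions):
        self._slots = threading.Semaphore(max_sessions)

    def start_session(self, timeout=None):
        # True if a session slot was acquired; False if the window is
        # full and the wait timed out.
        return self._slots.acquire(timeout=timeout)

    def end_session(self):               # session completed or canceled
        self._slots.release()

window = SessionWindow(max_sessions=2)
a = window.start_session(timeout=0.01)   # slot 1 acquired
b = window.start_session(timeout=0.01)   # slot 2 acquired
c = window.start_session(timeout=0.01)   # window full: fails
window.end_session()                     # one session completes
d = window.start_session(timeout=0.01)   # now succeeds
```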

    Note that this consideration emphasizes the importance of configuring the aggregation size limits and session count limits of spans during LTP initialization to be consistent with the maximum data rates scheduled for contacts over those spans.

    "},{"location":"ION-Guide/#storage-management","title":"Storage Management","text":"

    Congestion in a dtnet is the imbalance between data enqueuing and dequeuing rates that results in exhaustion of queuing (storage) resources at a node, preventing continued operation of the protocols at that node.

    In ION, the affected queuing resources are allocated from notionally non-volatile storage space in the SDR data store and/or file system. The design of ION is required to prevent resource exhaustion by simply refusing to enqueue additional data that would cause it.

    However, a BP router's refusal to enqueue received data for forwarding could result in costly retransmission, data loss, and/or the \"upstream\" propagation of resource exhaustion to other nodes. Therefore the ION design additionally attempts to prevent potential resource exhaustion by forecasting levels of queuing resource occupancy and reporting on any congestion that is predicted. Network operators, upon reviewing these forecasts, may revise contact plans to avert the anticipated resource exhaustion.

    The non-volatile storage used by ION serves several purposes: it contains queues of bundles awaiting forwarding, transmission, and delivery; it contains LTP transmission and reception sessions, including the blocks of data that are being transmitted and received; it contains queues of LTP segments awaiting radiation; it may contain CFDP transactions in various stages of completion; and it contains protocol operational state information, such as configuration parameters, static routes, the contact graph, etc.

    Effective utilization of non-volatile storage is a complex problem. Static pre-allocation of storage resources is in general less efficient (and also more labor-intensive to configure) than storage resource pooling and automatic, adaptive allocation: trying to predict a reasonable maximum size for every data storage structure and then rigidly enforcing that limit typically results in underutilization of storage resources and underperformance of the system as a whole. However, static pre-allocation is mandatory for safety-critical resources, where certainty of resource availability is more important than efficient resource utilization.

    The tension between the two approaches is analogous to the tension between circuit switching and packet switching in a network: circuit switching results in underutilization of link resources and underperformance of the network as a whole (some peaks of activity can never be accommodated, even while some resources lie idle much of the time), but dedicated circuits are still required for some kinds of safety-critical communication.

    So the ION data management design combines these two approaches (see 1.5 above for additional discussion of this topic):

    The maximum projected occupancy of the node is the result of computing a congestion forecast for the node, by adding to the current occupancy all anticipated net increases and decreases from now until some future time, termed the horizon for the forecast.

    The forecast horizon is indefinite -- that is, \"forever\" -- unless explicitly declared by network management via the ionadmin utility program. The difference between the horizon and the current time is termed the interval of the forecast.

    Net occupancy increases and decreases are of four types:

    1. Bundles that are originated locally by some application on the node, which are enqueued for forwarding to some other node.
    2. Bundles that are received from some other node, which are enqueued either for forwarding to some other node or for local delivery to an application.
    3. Bundles that are transmitted to some other node, which are dequeued from some forwarding queue.
    4. Bundles that are delivered locally to an application, which are dequeued from some delivery queue.

    The type-1 anticipated net increase (total data origination) is computed by multiplying the node's projected rate of local data production, as declared via an ionadmin command, by the interval of the forecast. Similarly, the type-4 anticipated net decrease (total data delivery) is computed by multiplying the node's projected rate of local data consumption, as declared via an ionadmin command, by the interval of the forecast. Net changes of types 2 and 3 are computed by multiplying inbound and outbound data rates, respectively, by the durations of all periods of planned communication contact that begin and/or end within the interval of the forecast.
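
    As a simplified illustration of this forecast arithmetic (hypothetical numbers; ION's ionwarn computes a time-stepped maximum over the interval rather than a single end-of-interval tally):

```python
# Simplified end-of-interval occupancy tally for the four net changes
# described above (illustrative only; not the ionwarn algorithm).

def forecast_occupancy(current, production_rate, consumption_rate,
                       interval, contacts_in, contacts_out):
    """contacts_in / contacts_out: (rate_bytes_per_sec, duration_sec)
    for contact periods falling within the forecast interval."""
    increase = production_rate * interval            # type 1: local origination
    increase += sum(r * d for r, d in contacts_in)   # type 2: reception
    decrease = sum(r * d for r, d in contacts_out)   # type 3: transmission
    decrease += consumption_rate * interval          # type 4: local delivery
    return current + increase - decrease

projected = forecast_occupancy(
    current=1_000_000, production_rate=100, consumption_rate=50,
    interval=3600, contacts_in=[(10_000, 600)], contacts_out=[(5_000, 600)])
# Compare 'projected' against the total protocol traffic allocation:
# congestion is forecast only if projected occupancy reaches that limit.
```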

    Congestion forecasting is performed by the ionwarn utility program. ionwarn may be run independently at any time; in addition, the ionadmin utility program automatically runs ionwarn immediately before exiting if it executed any change in the contact plan, the forecast horizon, or the node's projected rates of local data production or consumption. Moreover, the rfxclock daemon program also runs ionwarn automatically whenever any of the scheduled reconfiguration events it dispatches result in contact state changes that might alter the congestion forecast.

    If the final result of the forecast computation -- the maximum projected occupancy of the node over the forecast interval -- is less than the total protocol traffic allocation, then no congestion is forecast. Otherwise, a congestion forecast status message is logged noting the time at which maximum projected occupancy is expected to equal the total protocol traffic allocation.

    Congestion control in ION, then, has two components:

    First, ION's congestion detection is anticipatory (via congestion forecasting) rather than reactive as in the Internet.

    Anticipatory congestion detection is important because the second component -- congestion mitigation -- must also be anticipatory: it is the adjustment of communication contact plans by network management, via the propagation of revised schedules for future contacts.

    (Congestion mitigation in an ION-based network is likely to remain mostly manual for many years to come, because communication contact planning involves much more than orbital dynamics: science operations plans, thermal and power constraints, etc. It will, however, rely on the automated rate control features of ION, discussed above, which ensure that actual network operations conform to established contact plans.)

    Rate control in ION is augmented by admission control. ION tracks the sum of the sizes of all zero-copy objects currently residing in the heap and file system at any moment. Whenever any protocol implementation attempts to create or extend a ZCO in such a way that total heap or file occupancy would exceed an upper limit asserted for the node, that attempt is either blocked until ZCO space becomes available or else rejected altogether.

    "},{"location":"ION-Guide/#optimizing-an-ion-based-network","title":"Optimizing an ION-based network","text":"

    ION is designed to deliver critical data to its final destination with as much certainty as possible (and optionally as soon as possible), but otherwise to try to maximize link utilization. The delivery of critical data is expedited by contact graph routing and bundle prioritization as described elsewhere. Optimizing link utilization, however, is a more complex problem.

    If the volume of data traffic offered to the network for transmission is less than the capacity of the network, then all offered data should be successfully delivered3. But in that case the users of the network are paying the opportunity cost of whatever portion of the network capacity was not used.

    Offering a data traffic volume that is exactly equal to the capacity of the network is in practice infeasible. TCP in the Internet can usually achieve this balance because it exercises end-to-end flow control: essentially, the original source of data is blocked from offering a message until notified by the final destination that transmission of this message can be accommodated given the current negotiated data rate over the end-to-end path (as determined by TCP's congestion control mechanisms). In a delay-tolerant network no such end-to-end negotiated data rate may exist, much less be knowable, so such precise control of data flow is impossible.4

    The only alternative: the volume of traffic offered by the data source must be greater than the capacity of the network and the network must automatically discard excess traffic, shedding lower-priority data in preference to high-priority messages on the same path.

    ION discards excess traffic proactively when possible and reactively when necessary.

    Proactive data triage occurs when ION determines that it cannot compute a route that will deliver a given bundle to its final destination prior to expiration of the bundle's Time To Live (TTL). That is, a bundle may be discarded simply because its TTL is too short, but more commonly it will be discarded because the planned contacts to whichever neighboring node is first on the path to the destination are already fully subscribed: the queue of bundles awaiting transmission to that neighbor is already so long as to consume the entire capacity of all announced opportunities to transmit to it. Proactive data triage causes the bundle to be immediately destroyed as one for which there is \"No known route to destination from here.\"

    The determination of the degree to which a contact is subscribed is based not only on the aggregate size of the queued bundles but also on the estimated aggregate size of the overhead imposed by all the convergence-layer (CL) protocol data units -- at all layers of the underlying stack -- that encapsulate those bundles: packet headers, frame headers, etc. This means that the accuracy of this overhead estimate will affect the aggressiveness of ION's proactive data triage:

    Essentially, all reactive data triage -- the destruction of bundles due to TTL expiration prior to successful delivery to the final destination -- occurs when the network conveys bundles at lower net rates than were projected during route computation. These performance shortfalls can have a variety of causes:

    Some level of data triage is essential to cost-effective network utilization, and proactive triage is preferable because its effects can be communicated immediately to users, improving user control over the use of the network. Optimizing an ION-based network therefore amounts to managing for a modicum of proactive data triage and as little reactive data triage as possible. It entails the following:

    1. Estimating convergence-layer protocol overhead as accurately as possible, erring (if necessary) on the side of optimism -- that is, underestimating a little.

    As an example, suppose the local node uses LTP over CCSDS Telemetry to send bundles. The immediate convergence-layer protocol is LTP, but the total overhead per CL \"frame\" (in this case, per LTP segment) will include not only the size of the LTP header (nominally 5 bytes) but also the size of the encapsulating space packet header (nominally 6 bytes) and the overhead imposed by the outer encapsulating TM frame.

    Suppose each LTP segment is to be wrapped in a single space packet, which is in turn wrapped in a single TM frame, and Reed-Solomon encoding is applied. An efficient TM frame size is 1115 bytes, with an additional 160 bytes of trailing Reed-Solomon encoding and another 4 bytes of leading pseudo-noise code. The frame would contain a 6-byte TM frame header, a 6-byte space packet header, a 5-byte LTP segment header, and 1098 bytes of some LTP transmission block.

    So the number of \"payload bytes per frame\" in this case would be 1098 and the number of \"overhead bytes per frame\" would be 4 + 6 + 6 + 5 + 160 = 181. Nominal total transmission overhead on the link would be 181 / 1279, or about 14%.

    2. Synchronizing nodes' clocks as accurately as possible, so that timing margins configured to accommodate clock error can be kept as close to zero as possible.

    3. Setting the LTP session limit and block size limit as generously as possible (whenever LTP is at the convergence layer), to assure that LTP flow control does not constrain data flow to rates below those supported by BP rate control.

    4. Setting ranges (one-way light times) and queuing delays as accurately as possible, to prevent unnecessary retransmission. Err on the side of pessimism -- that is, overestimate a little.

    5. Communicating changes in configuration -- especially contact plans -- to all nodes as far in advance of the time they take effect as possible.

    6. Providing all nodes with as much storage capacity as possible for queues of bundles awaiting transmission.
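
    The frame-overhead arithmetic in the LTP-over-TM example can be checked directly (numbers taken from the example above):

```python
# Checking the frame-overhead arithmetic from the example above.
payload = 1098                     # LTP block bytes carried per TM frame
overhead = 4 + 6 + 6 + 5 + 160     # PN code + TM hdr + pkt hdr + LTP hdr + R-S
frame_total = payload + overhead   # bytes on the wire per frame
overhead_pct = 100 * overhead / frame_total
```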

    "},{"location":"ION-Guide/#bpltp-detail-how-they-work","title":"BP/LTP Detail -- How They Work","text":"

    Although the operation of BP/LTP in ION is complex in some ways, virtually the entire system can be represented in a single diagram. The interactions among all of the concurrent tasks that make up the node -- plus a Remote AMS task or CFDP UT-layer task, acting as the application at the top of the stack -- are shown below. (The notation is as used earlier but with semaphores added. Semaphores are shown as small circles, with arrows pointing into them signifying that the semaphores are being given and arrows pointing out of them signifying that the semaphores are being taken.)

    Figure 7 ION node functional overview

    Further details of the BP/LTP data structures and flow of control and data appear on the following pages. (For specific details of the operation of the BP and LTP protocols as implemented by the ION tasks, such as the nature of report-initiated retransmission in LTP, please see the protocol specifications. The BP specification is documented in Internet RFC 5050, while the LTP specification is documented in Internet RFC 5326.)

    "},{"location":"ION-Guide/#databases","title":"Databases","text":"

    Figure 8: Bundle protocol database

    Figure 9: Licklider transmission protocol database

    "},{"location":"ION-Guide/#control-and-data-flow","title":"Control and data flow","text":""},{"location":"ION-Guide/#bundle-protocol","title":"Bundle Protocol","text":"

    Figure 10 BP forwarder

    Figure 11 BP convergence layer output

    "},{"location":"ION-Guide/#ltp","title":"LTP","text":"

    Figure 12 LTP transmission metering

    Figure 13 LTP link service output

    Figure 14 LTP link service input

    "},{"location":"ION-Guide/#contact-graph-routing-cgr","title":"Contact Graph Routing (CGR)","text":"

    CGR is a dynamic routing system that computes routes through a time-varying topology of scheduled communication contacts in a DTN network. It is designed to support operations in a space network based on DTN, but it also could be used in terrestrial applications where operation according to a predefined schedule is preferable to opportunistic communication, as in a low-power sensor network.

    The basic strategy of CGR is to take advantage of the fact that, since communication operations are planned in detail, the communication routes between any pair of \"bundle agents\" in a population of nodes that have all been informed of one another's plans can be inferred from those plans rather than discovered via dialogue (which is impractical over long-one-way-light-time space links).

    "},{"location":"ION-Guide/#contact-plan-messages","title":"Contact Plan Messages","text":"

    CGR relies on accurate contact plan information provided in the form of contact plan messages that currently are only read from ionrc files and processed by ionadmin, which retains them in a non-volatile contact plan in the RFX database, in ION's SDR data store.

    Contact plan messages are of two types: contact messages and range messages.

    Each contact message has the following content:

    Each range message has the following content:

    Note that range messages may be used to declare that the \"distance\" in light seconds between nodes A and B is different in the B→A direction from the distance in the A→B direction. While direct radio communication between A and B will not be subject to such asymmetry, it's possible for connectivity established using other convergence-layer technologies to take different physical paths in different directions, with different signal propagation delays.

    "},{"location":"ION-Guide/#routing-tables","title":"Routing Tables","text":"

    Each node uses Range and Contact messages in the contact plan to build a \"routing table\" data structure.

    The routing table constructed locally by each node in the network is a list of entry node lists, one entry node list for every other node D in the network that is cited in any Contact or Range in the contact plan. Entry node lists are computed as they are needed, and the maximum number of entry node lists resident at a given time is the number of nodes that are cited in any Contacts or Ranges in the contact plan. Each entry in the entry node list for node D identifies one of the neighbors of local node X; included with each entry is a list of one or more routes to D through the indicated neighbor, termed a route list.

    Each route in the route list for node D identifies a path to destination node D, from the local node, that begins with transmission to one of the local node's neighbors in the network -- the initial receiving node for the route, termed the route's entry node.

    For any given route, the contact from the local node to the entry node constitutes the initial transmission segment of the end-to-end path to the destination node. Additionally noted in each route object are all of the other contacts that constitute the remaining segments of the route's end-to-end path.

    Each route object also notes the forwarding cost for a bundle that is forwarded along this route. In this version of ION, CGR is configured to deliver bundles as early as possible, so best-case final delivery time is used as the cost of a route. Other metrics might be substituted for final delivery time in other CGR implementations. NOTE, however, that if different metrics are used at different nodes along a bundle's end-to-end path it becomes impossible to prevent routing loops that can result in non-delivery of the data.

    Finally, each route object also notes the route's termination time, the time after which the route will become moot due to the termination of the earliest-ending contact in the route.

    "},{"location":"ION-Guide/#key-concepts","title":"Key Concepts","text":""},{"location":"ION-Guide/#expiration-time","title":"Expiration time","text":"

    Every bundle transmitted via DTN has a time-to-live (TTL), the length of time after which the bundle is subject to destruction if it has not yet been delivered to its destination. The expiration time of a bundle is computed as its creation time plus its TTL. When computing the next-hop destination for a bundle that the local bundle agent is required to forward, there is no point in selecting a route that can't get the bundle to its final destination prior to the bundle's expiration time.

    "},{"location":"ION-Guide/#owlt-margin","title":"OWLT margin","text":"

    One-way light time (OWLT) -- that is, distance -- is obviously a factor in delivering a bundle to a node prior to a given time. OWLT can actually change during the time a bundle is en route, but route computation becomes intractably complex if we can't assume an OWLT \"safety margin\" -- a maximum delta by which OWLT between any pair of nodes can change during the time a bundle is in transit between them.

    We assume that the maximum rate of change in distance between any two nodes in the network is about 150,000 miles per hour, which is about 40 miles per second. (This was the speed of the Helios spacecraft, the fastest man-made object launched to date.)

    At this speed, the distance between any two nodes that are initially separated by a distance of N light seconds will increase by a maximum of 80 miles for each second of transit (in the event that they are moving in opposite directions). This will result in data arrival no later than roughly (N + 2Q) seconds after transmission -- where the \"OWLT margin\" value Q is (40 * N) divided by 186,000 -- rather than just N seconds after transmission as would be the case if the two nodes were stationary relative to each other. When computing the expected time of arrival of a transmitted bundle we simply use N + 2Q, the most pessimistic case, as the anticipated total in-transit time.
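
    The OWLT-margin rule reduces to a one-line computation (function name is illustrative):

```python
# The OWLT-margin rule described above: Q = (40 * N) / 186000, and the
# pessimistic anticipated in-transit time is N + 2Q (N in light seconds).

def anticipated_transit_time(owlt_seconds):
    q = (40 * owlt_seconds) / 186_000   # max OWLT drift during transit
    return owlt_seconds + 2 * q

# e.g. for nodes 930 light seconds apart:
transit = anticipated_transit_time(930)   # 930.4 seconds
```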

    "},{"location":"ION-Guide/#capacity","title":"Capacity","text":"

    The capacity of a contact is the product of its data transmission rate (in bytes per second) and its duration (stop time minus start time, in seconds).

    "},{"location":"ION-Guide/#estimated-capacity-consumption","title":"Estimated capacity consumption","text":"

    The size of a bundle is the sum of its payload size and its header size5, but bundle size is not the only lien on the capacity of a contact. The total estimated capacity consumption (or \"ECC\") for a bundle is the sum of the sizes of the bundle's payload and header and the estimated convergence-layer overhead. For a bundle whose header is of size M and whose payload is of size N, the estimated convergence-layer overhead is defined as 3% of (M+N), or 100 bytes, whichever is larger.
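
    The per-bundle lien on contact capacity follows directly from this rule (the function name is illustrative, not an ION API):

```python
# Bundle size plus estimated convergence-layer overhead, per the rule
# above: 3% of (M + N) or 100 bytes, whichever is larger.

def estimated_capacity_consumption(header_size, payload_size):
    bundle_size = header_size + payload_size
    cl_overhead = max(0.03 * bundle_size, 100)
    return bundle_size + cl_overhead

small = estimated_capacity_consumption(50, 1000)    # 1050 + 100  = 1150
large = estimated_capacity_consumption(100, 10000)  # 10100 + 303 = 10403
```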

    "},{"location":"ION-Guide/#residual-capacity","title":"Residual capacity","text":"

    The residual capacity of a given contact between the local node and one of its neighbors, as computed for a given bundle, is the sum of the capacities of that contact and all prior scheduled contacts between the local node and that neighbor, less the sum of the ECCs of all bundles with priority equal to or higher than the priority of the subject bundle that are currently queued on the outduct for transmission to that neighbor.
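
    This computation can be sketched as follows (hypothetical data and names; here a higher priority number means higher priority):

```python
# Illustrative residual-capacity computation for a contact with a
# neighbor, as described above.

def residual_capacity(contacts, queued_bundles, bundle_priority):
    """contacts: (rate, duration) for this contact and all prior scheduled
    contacts with the neighbor; queued_bundles: (priority, ecc) pairs
    already enqueued on the outduct to that neighbor."""
    capacity = sum(rate * duration for rate, duration in contacts)
    liens = sum(ecc for priority, ecc in queued_bundles
                if priority >= bundle_priority)
    return capacity - liens

residual = residual_capacity(
    contacts=[(10_000, 100), (10_000, 50)],   # 1,500,000 bytes total
    queued_bundles=[(2, 400_000), (1, 300_000), (0, 200_000)],
    bundle_priority=1)                        # only liens of priority >= 1 count
```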

    "},{"location":"ION-Guide/#excluded-neighbors","title":"Excluded neighbors","text":"

    A neighboring node C that refuses custody of a bundle destined for some remote node D is termed an excluded neighbor for (that is, with respect to computing routes to) D. So long as C remains an excluded neighbor for D, no bundles destined for D will be forwarded to C -- except that occasionally (once per lapse of the RTT between the local node and C) a custodial bundle destined for D will be forwarded to C as a \"probe bundle\". C ceases to be an excluded neighbor for D as soon as it accepts custody of a bundle destined for D.

    "},{"location":"ION-Guide/#critical-bundles","title":"Critical bundles","text":"

    A Critical bundle is one that absolutely has got to reach its destination and, moreover, has got to reach that destination as soon as is physically possible6.

    For an ordinary non-Critical bundle, the CGR dynamic route computation algorithm uses the routing table to select a single neighboring node to forward the bundle through. It is possible, though, that due to some unforeseen delay the selected neighbor may prove to be a sub-optimal forwarder: the bundle might arrive later than it would have if another neighbor had been selected, or it might not even arrive at all.

    For Critical bundles, the CGR dynamic route computation algorithm causes the bundle to be inserted into the outbound transmission queues for transmission to all neighboring nodes that can plausibly forward the bundle to its final destination. The bundle is therefore guaranteed to travel over the most successful route, as well as over all other plausible routes. Note that this may result in multiple copies of a Critical bundle arriving at the final destination.

    "},{"location":"ION-Guide/#dynamic-route-selection-algorithm","title":"Dynamic Route Selection Algorithm","text":"

    Given a bundle whose destination is node D, we proceed as follows.

    First, if no contacts in the contact plan identify transmission to node D, then we cannot use CGR to find a route for this bundle; CGR route selection is abandoned.

    Next, if the contact plan has been modified in any way since routes were computed for any nodes, we discard all routes for all nodes and authorize route recomputation. (The contact plan changes may have invalidated any or all of those earlier computations.)

    We create an empty list of Proximate Nodes (network neighbors) to send the bundle to.

    We create a list of Excluded Nodes, i.e., nodes through which we will not compute a route for this bundle. The list of Excluded Nodes is initially populated with:

    If all routes computed for node D have been discarded due to contact plan modification, then we must compute a new list of all routes from the local node to D. To do so:

    We next examine all of the routes that are currently computed for transmission of bundles to node D.

    For each route that is not ignored, the route's entry node is added to the list of Proximate Nodes for this bundle. Associated with the entry node number in this list entry are the best-case final delivery time of the route, the total number of \"hops\" in the route's end-to-end path, and the forfeit time for transmission to this node. Forfeit time is the route's termination time, the time by which the bundle must have been transmitted to this node in order to have any chance of being forwarded on this route.

    If, at the end of this procedure, the Proximate Nodes list is empty, then we have been unable to use CGR to find a route for this bundle; CGR route selection is abandoned.

    Otherwise:

    "},{"location":"ION-Guide/#exception-handling","title":"Exception Handling","text":"

    Conveyance of a bundle from source to destination through a DTN can fail in a number of ways, many of which are best addressed by means of the Delivery Assurance mechanisms described earlier. Failures in Contact Graph Routing, specifically, occur when the expectations on which routing decisions are based prove to be false. These failures of information fall into two general categories: contact failure and custody refusal.

    "},{"location":"ION-Guide/#contact-failure","title":"Contact Failure","text":"

    A scheduled contact between some node and its neighbor on the end-to-end route may be initiated later than the originally scheduled start time, or be terminated earlier than the originally scheduled stop time, or be canceled altogether. Alternatively, the available capacity for a contact might be overestimated due to, for example, diminished link quality resulting in unexpectedly heavy retransmission at the convergence layer. In each of these cases, the anticipated transmission of a given bundle during the affected contact may not occur as planned: the bundle might expire before the contact's start time, or the contact's stop time might be reached before the bundle has been transmitted.

    For a non-Critical bundle, we handle this sort of failure by means of a timeout: if the bundle is not transmitted prior to the forfeit time for the selected Proximate Node, then the bundle is removed from its outbound transmission queue and the Dynamic Route Computation Algorithm is re-applied to the bundle so that an alternate route can be computed.

    "},{"location":"ION-Guide/#custody-refusal","title":"Custody refusal","text":"

    A node that receives a bundle may find it impossible to forward it, for any of several reasons: it may not have enough storage capacity to hold the bundle, it may be unable to compute a forward route (static, dynamic, or default) for the bundle, etc. Such bundles are simply discarded, but discarding any such bundle that is marked for custody transfer will cause a custody refusal signal to be returned to the bundle's current custodian.

    When the affected bundle is non-Critical, the node that receives the custody refusal re-applies the Dynamic Route Computation Algorithm to the bundle so that an alternate route can be computed -- except that in this event the node from which the bundle was originally directly received is omitted from the initial list of Excluded Nodes. This enables a bundle that has reached a dead end in the routing tree to be sent back to a point at which an altogether different branch may be selected.

    For a Critical bundle no mitigation of either sort of failure is required or indeed possible: the bundle has already been queued for transmission on all plausible routes, so no mechanism that entails re-application of CGR's Dynamic Route Computation Algorithm could improve its prospects for successful delivery to the final destination. However, in some environments it may be advisable to re-apply the Dynamic Route Computation Algorithm to all Critical bundles that are still in local custody whenever a new Contact is added to the contact graph: the new contact may open an additional forwarding opportunity for one or more of those bundles.

    "},{"location":"ION-Guide/#remarks","title":"Remarks","text":"

    The CGR routing procedures respond dynamically to the changes in network topology that the nodes are able to know about, i.e., those changes that are subject to mission operations control and are known in advance rather than discovered in real time. This dynamic responsiveness in route computation should be significantly more effective and less expensive than static routing, increasing total data return while at the same time reducing mission operations cost and risk.

    Note that the non-Critical forwarding load across multiple parallel paths should be balanced automatically:

    Although the route computation procedures are relatively complex, they are not computationally difficult. The impact on computation resources at the vehicles should be modest.

    "},{"location":"ION-Guide/#ltp-timeout-intervals","title":"LTP Timeout Intervals","text":"

    Suppose we've got Earth ground station ES that is currently in view of Mars but will be rotating out of view (\"Mars-set\") at some time T1 and rotating back into view (\"Mars-rise\") at time T3. Suppose we've also got Mars orbiter MS that is currently out of the shadow of Mars but will move behind Mars at time T2, emerging at time T4. Let's also suppose that ES and MS are 4 light-minutes apart (Mars is at its closest approach to Earth). Finally, for simplicity, let's suppose that both ES and MS want to be communicating at every possible moment (maximum link utilization) but never want to waste any electricity.

    Neither ES nor MS wants to be wasting power on either transmitting or receiving at a time when either Earth or Mars will block the signal.

    ES will therefore stop transmitting at either T1 or (T2 - 4 minutes), whichever is earlier; call this time Tet0. It will stop receiving -- that is, power off the receiver -- at either T1 or (T2 + 4 minutes), whichever is earlier; call this time Ter0. It will resume transmitting at either T3 or (T4 - 4 minutes), whichever is later, and it will resume reception at either T3 or (T4 + 4 minutes), whichever is later; call these times Tet1 and Ter1.

    Similarly, MS will stop transmitting at either T2 or (T1 - 4 minutes), whichever is earlier; call this time Tmt0. It will stop receiving -- that is, power off the receiver -- at either T2 or (T1 + 4 minutes), whichever is earlier; call this time Tmr0. It will resume transmitting at either T4 or (T3 - 4 minutes), whichever is later, and it will resume reception at either T4 or (T3 + 4 minutes), whichever is later; call these times Tmt1 and Tmr1.

    By making sure that we don't transmit when the signal would be blocked, we guarantee that anything that is transmitted will arrive at a time when it can be received. Any reception failure is due to data corruption en route.

    So the moment of transmission of an acknowledgment to any message is always equal to the moment the original message was sent plus some imputed outbound queuing delay QO1 at the sending node, plus 4 minutes, plus some imputed inbound and outbound queuing delay QI1 + QO2 at the receiving node. The nominally expected moment of reception of this acknowledgment is that moment of transmission plus 4 minutes, plus some imputed inbound queuing delay QI2 at the original sending node. That is, the timeout interval is 8 minutes + QO1 + QI1 + QO2 + QI2 -- unless this moment of acknowledgment transmission falls during an interval when the receiving node is not transmitting, for whatever reason. In this latter case, we want to suspend the acknowledgment timer during any interval in which we know the remote node will not be transmitting. More precisely, we want to add to the timeout interval the time difference between the moment of message arrival and the earliest moment at which the acknowledgment could be sent, i.e., the moment at which transmission is resumed7.

    So the timeout interval Z computed at ES for a message sent to MS at time TX is given by:

    Z = QO1 + 8 + QI1 + (((TA = TX + 4) > Tmt0 && TA < Tmt1) ?\nTmt1 - TA : 0) + QI2 + QO2\n

    This can actually be computed in advance (at time TX) if T1, T2, T3, and T4 are known and are exposed to the protocol engine.

    If they are not exposed, then Z must initially be estimated to be (2 * the one-way light time) + QI + QO. The timer for Z must be dynamically suspended at time Tmt0 in response to a state change as noted by ltpclock. Finally, the timer must be resumed at time Tmt1 (in response to another state change as noted by ltpclock), at which moment the correct value for Z can be computed.

    "},{"location":"ION-Guide/#cfdp","title":"CFDP","text":"

    The ION implementation of CFDP is very simple, because only Class-1 (Unacknowledged) functionality is implemented: the store-and-forward routing performed by Bundle Protocol makes the CFDP Extended Procedures unnecessary and the inter-node reliability provided by the CL protocol underneath BP -- in particular, by LTP -- makes the CFDP Acknowledged Procedures unnecessary. All that CFDP is required to do is segment and reassemble files, interact with the underlying Unitdata Transfer layer -- BP/LTP -- to effect the transmission and reception of file data segments, and handle CFDP metadata including filestore requests. CFDP-ION does all this, including support for cancellation of a file transfer transaction by cancellation of the transmission of the bundles encapsulating the transaction's protocol data units.

    Note that all CFDP data transmission is \"by reference\", via the ZCO system, rather than \"by value\": the retransmission buffer for a bundle containing CFDP file data is an extent of the original file itself, not a copy retained in the ION database, and data received in bundles containing CFDP PDUs is written immediately to the appropriate location in the reconstituted file rather than stored in the ION database. This minimizes the space needed for the database. In general, file transmission via CFDP is the most memory-efficient way to use ION in flight operations.

    Figure 15 A CFDP-ION entity

    "},{"location":"ION-Guide/#list-data-structures-lyst-sdrlist-smlist","title":"List data structures (lyst, sdrlist, smlist)","text":"

    Figure 16 ION list data structures

    "},{"location":"ION-Guide/#psm-partition-structure","title":"PSM Partition Structure","text":"

    Figure 17 psm partition structure

    "},{"location":"ION-Guide/#psm-and-sdr-block-structures","title":"PSM and SDR Block Structures","text":"

    Figure 18 psm and sdr block structures

    "},{"location":"ION-Guide/#sdr-heap-structure","title":"SDR Heap Structure","text":"

    Figure 19 sdr heap structure

    "},{"location":"ION-Guide/#operation","title":"Operation","text":"

    The ION source distribution contains a README.TXT file with details on building ION from source. For installations starting from the open source distribution ION-DTN, following the standard configuration and build sequence described there will build ION and install it under /usr/local.

    Users building from a clone of the repository need to use the command

    before starting the installation.

    The \"Build\" instructions shown in the following sections for each package are the instructions for building each package individually, for ION development purposes. The default installation target for the individual package build commands is /opt.

    One compile-time option is applicable to all ION packages: the platform selection parameters -DVXWORKS and -DRTEMS affect the manner in which most task instantiation functions are compiled. For VXWORKS and RTEMS, these functions are compiled as library functions that must be identified by name in the platform's symbol table, while for Unix-like platforms they are compiled as main() functions.

    "},{"location":"ION-Guide/#interplanetary-communication-infrastructure-ici_1","title":"Interplanetary Communication Infrastructure (ICI)","text":""},{"location":"ION-Guide/#compile-time-options","title":"Compile-time options","text":"

    Declaring values for the following variables, by setting parameters that are provided to the C compiler (for example, -DFSWSOURCE or -DSM_SEMBASEKEY=0xff13), will alter the functionality of ION as noted below.

    PRIVATE_SYMTAB

    This option causes ION to be built for VxWorks 5.4 or RTEMS with reliance on a small private local symbol table that is accessed by means of a function named sm_FindFunction. Both the table and the function definition are, by default, provided by the symtab.c source file, which is automatically included within the platform_sm.c source when this option is set. The table provides the address of the top-level function to be executed when a task for the indicated symbol (name) is to be spawned, together with the priority at which that task is to execute and the amount of stack space to be allocated to that task.

    PRIVATE_SYMTAB is defined by default for RTEMS but not for VxWorks 5.4.

    Absent this option, ION on VxWorks 5.4 must successfully execute the VxWorks symFindByName function in order to spawn a new task. For this purpose the entire VxWorks symbol table for the compiled image must be included in the image, and task priority and stack space allocation must be explicitly specified when tasks are spawned.

    FSWLOGGER

    This option causes the standard ION logging function, which simply writes all ION status messages to a file named ion.log in the current working directory, to be replaced (by #include) with code in the source file fswlogger.c. A file of this name must be in the inclusion path for the compiler, as defined by -Ixxxx compiler option parameters.

    FSWCLOCK

    This option causes the invocation of the standard time function within getUTCTime (in ion.c) to be replaced (by #include) with code in the source file fswutc.c, which might for example invoke a mission-specific function to read a value from the spacecraft clock. A file of this name must be in the inclusion path for the compiler.

    FSWWDNAME

    This option causes the invocation of the standard getcwd function within cfdpInit (in libcfdpP.c) to be replaced (by #include) with code in the source file wdname.c, which must in some way cause the mission-specific value of the current working directory name to be copied into cfdpdbBuf.workingDirectoryName. A file of this name must be in the inclusion path for the compiler.

    FSWSYMTAB

    If the PRIVATE_SYMTAB option is also set, then the FSWSYMTAB option causes the code in source file mysymtab.c to be included in platform_sm.c in place of the default symbol table access implementation in symtab.c. A file named mysymtab.c must be in the inclusion path for the compiler.

    FSWSOURCE

    This option simply causes FSWLOGGER, FSWCLOCK, FSWWDNAME, and FSWSYMTAB all to be set.

    GDSLOGGER

    This option causes the standard ION logging function, which simply writes all ION status messages to a file named ion.log in the current working directory, to be replaced (by #include) with code in the source file gdslogger.c. A file of this name must be in the inclusion path for the compiler, as defined by -Ixxxx compiler option parameters.

    GDSSOURCE

    This option simply causes GDSLOGGER to be set.

    ION_OPS_ALLOC=*xx*

    This option specifies the percentage of the total non-volatile storage space allocated to ION that is reserved for protocol operational state information, i.e., is not available for the storage of bundles or LTP segments. The default value is 20.

    ION_SDR_MARGIN=*xx*

    This option specifies the percentage of the total non-volatile storage space allocated to ION that is reserved simply as margin, for contingency use. The default value is 20.

    The sum of ION_OPS_ALLOC and ION_SDR_MARGIN defines the amount of non-volatile storage space that is sequestered at the time ION operations are initiated: for purposes of congestion forecasting and prevention of resource oversubscription, this sum is subtracted from the total size of the SDR \"heap\" to determine the maximum volume of space available for bundles and LTP segments. Data reception and origination activities fail whenever they would cause the total amount of data store space occupied by bundles and segments to exceed this limit.

    USING_SDR_POINTERS

    This is an optimization option for the SDR non-volatile data management system: when set, it enables the value of any variable in the SDR data store to be accessed directly by means of a pointer into the dynamic memory that is used as the data store storage medium, rather than by reading the variable into a location in local stack memory. Note that this option must not be enabled if the data store is configured for file storage only, i.e., if the SDR_IN_DRAM flag was set to zero at the time the data store was created by calling sdr_load_profile. See the ionconfig(5) man page in Appendix A for more information.

    NO_SDR_TRACE

    This option causes non-volatile storage utilization tracing functions to be omitted from ION when the SDR system is built. It disables a useful debugging option but reduces the size of the executable software.

    NO_PSM_TRACE

    This option causes memory utilization tracing functions to be omitted from ION when the PSM system is built. It disables a useful debugging option but reduces the size of the executable software.

    IN_FLIGHT

    This option controls the behavior of ION when an unrecoverable error is encountered.

    If it is set, then the status message \"Unrecoverable SDR error\" is logged and the SDR non-volatile storage management system is globally disabled: the current database access transaction is ended and (provided transaction reversibility is enabled) rolled back, and all ION tasks terminate.

    Otherwise, the ION task that encountered the error is simply aborted, causing a core dump to be produced to support debugging.

    SM_SEMKEY=0x*XXXX*

    This option overrides the default value (0xee01) of the identifying \"key\" used in creating and locating the global ION shared-memory system mutex.

    SVR4_SHM

    This option causes ION to be built using svr4 shared memory as the pervasive shared-memory management mechanism. svr4 shared memory is selected by default when ION is built for any platform other than MinGW, VxWorks 5.4, or RTEMS. (For these latter operating systems all memory is shared anyway, due to the absence of a protected-memory mode.)

    POSIX1B_SEMAPHORES

    This option causes ION to be built using POSIX semaphores as the pervasive semaphore mechanism. POSIX semaphores are selected by default when ION is built for RTEMS but are otherwise not used or supported; this option enables the default to be overridden.

    SVR4_SEMAPHORES

    This option causes ION to be built using svr4 semaphores as the pervasive semaphore mechanism. svr4 semaphores are selected by default when ION is built for any platform other than MinGW (for which Windows event objects are used), VxWorks 5.4 (for which VxWorks native semaphores are the default choice), or RTEMS (for which POSIX semaphores are the default choice).

    SM_SEMBASEKEY=0x*XXXX*

    This option overrides the default value (0xee02) of the identifying \"key\" used in creating and locating the global ION shared-memory semaphore database, in the event that svr4 semaphores are used.

    SEMMNI=*xxx*

    This option declares to ION the total number of svr4 semaphore sets provided by the operating system, in the event that svr4 semaphores are used. It overrides the default value, which is 10 for Cygwin and 128 otherwise. (Changing this value typically entails rebuilding the O/S kernel.)

    SEMMSL=*xxx*

    This option declares to ION the maximum number of semaphores in each svr4 semaphore set, in the event that svr4 semaphores are used. It overrides the default value, which is 6 for Cygwin and 250 otherwise. (Changing this value typically entails rebuilding the O/S kernel.)

    SEMMNS=*xxx*

    This option declares to ION the total number of svr4 semaphores that the operating system can support; the maximum possible value is SEMMNI x SEMMSL. It overrides the default value, which is 60 for Cygwin and 32000 otherwise. (Changing this value typically entails rebuilding the O/S kernel.)

    ION_NO_DNS

    This option causes the implementation of a number of Internet socket I/O operations to be omitted for ION. This prevents ION software from being able to operate over Internet connections, but it avoids link errors when ION is loaded on a spacecraft where the operating system does not include support for these functions.

    ERRMSGS_BUFSIZE=*xxxx*

    This option sets the size of the buffer in which ION status messages are constructed prior to logging. The default value is 4 KB.

    SPACE_ORDER=*x*

    This option declares the word size of the computer on which the compiled ION software will be running: it is the base-2 log of the number of bytes in an address. The default value is 2, i.e., the size of an address is 2^2 = 4 bytes. For a 64-bit machine, SPACE_ORDER must be declared to be 3, i.e., the size of an address is 2^3 = 8 bytes.

    NO_SDRMGT

    This option enables the SDR system to be used as a data access transaction system only, without doing any dynamic management of non-volatile data. With the NO_SDRMGT option set, the SDR system library can (and in fact must) be built from the sdrxn.c source file alone.

    DOS_PATH_DELIMITER

    This option causes ION_PATH_DELIMITER to be set to '\\' (backslash), for use in constructing path names. The default value of ION_PATH_DELIMITER is '/' (forward slash, as is used in Unix-like operating systems).

    "},{"location":"ION-Guide/#build","title":"Build","text":"

    To build ICI for a given deployment platform:

    1. Decide where you want ION's executables, libraries, header files, etc. to be installed. The ION makefiles all install their build products to subdirectories (named bin, lib, include, man, man/man1, man/man3, man/man5) of an ION root directory, which by default is the directory named /opt. If you wish to use the default build configuration, be sure that the default directories (/opt/bin, etc.) exist; if not, select another ION root directory name -- this document will refer to it as $OPT -- and create the subdirectories as needed. In any case, make sure that you have read, write, and execute permission for all of the ION installation directories and that:

    2. The directory /$OPT/bin is in your execution path.

    3. The directory /$OPT/lib is in your $LD_LIBRARY_PATH.
    4. Edit the Makefile in ion/ici:

    5. Make sure PLATFORMS is set to the appropriate platform name, e.g., x86-redhat, sparc-sol9, etc.

    6. Set OPT to the directory where you want to install the ici packages you build, if other than \"/opt\" (for example: /usr/local).

    7. Then:

    cd ion/ici\nsudo make\nsudo make install\n
    "},{"location":"ION-Guide/#configure","title":"Configure","text":"

    Three types of files are used to provide the information needed to perform global configuration of the ION protocol stack: the ION system configuration (or ionconfig) file, the ION administration command (ionrc) file, and the ION security configuration (ionsecrc) file. For details, see the man pages for ionconfig(5), ionrc(5), and ionsecrc(5) in Appendix A.

    Normally the instantiation of ION on a given computer establishes a single ION node on that computer, for which hard-coded values of wmKey and sdrName (see ionconfig(5)) are used in common by all executables to assure that all elements of the system operate within the same state space. For some purposes, however, it may be desirable to establish multiple ION nodes on a single workstation. (For example, constructing an entire self-contained DTN network on a single machine may simplify some kinds of regression testing.) ION supports this configuration option as follows:

    "},{"location":"ION-Guide/#run","title":"Run","text":"

    The executable programs used in operation of the ici component of ION include:

    Each time it is executed, ionadmin computes a new congestion forecast and, if a congestion collapse is predicted, invokes the node's congestion alarm script (if any). ionadmin also establishes the node number for the local node and starts/stops the rfxclock task, among other functions. For further details, see the man pages for ionadmin(1), ionsecadmin(1), rfxclock(1), sdrmend(1), sdrwatch(1), and psmwatch(1) in Appendix A.

    "},{"location":"ION-Guide/#test","title":"Test","text":"

    Six test executables are provided to support testing and debugging of the ICI component of ION:

    For details, see the man pages for file2sdr(1), sdr2file(1), psmshell(1), file2sm(1), sm2file(1), and smlistsh(1) in Appendix A.

    "},{"location":"ION-Guide/#licklider-transmission-protocol-ltp_1","title":"Licklider Transmission Protocol (LTP)","text":""},{"location":"ION-Guide/#build_1","title":"Build","text":"

    To build LTP:

    1. Make sure that the \"ici\" component of ION has been built for the platform on which you plan to run LTP.
    2. Edit the Makefile in ion/ltp:

    3. As for ici, make sure PLATFORMS is set to the name of the platform on which you plan to run LTP.

    4. Set OPT to the directory containing the bin, lib, include, etc. directories where the ici package is installed (for example: /usr/local).

    5. Then:

    cd ion/ltp\nmake\nsudo make install\n
    "},{"location":"ION-Guide/#configure_1","title":"Configure","text":"

    The LTP administration command (ltprc) file provides the information needed to configure LTP on a given ION node. For details, see the man page for ltprc(5) in Appendix A.

    "},{"location":"ION-Guide/#run_1","title":"Run","text":"

    The executable programs used in operation of the ltp component of ION include:

    ltpadmin starts/stops the ltpclock and ltpmeter tasks and, as mandated by configuration, the udplsi and udplso tasks.

    For details, see the man pages for ltpadmin(1), ltpclock(1), ltpmeter(1), udplsi(1), and udplso(1) in Appendix A.

    "},{"location":"ION-Guide/#test_1","title":"Test","text":"

    Two test executables are provided to support testing and debugging of the LTP component of ION:

    For details, see the man pages for ltpdriver(1) and ltpcounter(1) in Appendix A.

    "},{"location":"ION-Guide/#bundle-streaming-service-protocol-bssp","title":"Bundle Streaming Service Protocol (BSSP)","text":""},{"location":"ION-Guide/#build_2","title":"Build","text":"

    To build BSSP:

    1. Make sure that the \"ici\" component of ION has been built for the platform on which you plan to run BSSP.
    2. Edit the Makefile in ion/bssp:

    3. As for ici, make sure PLATFORMS is set to the name of the platform on which you plan to run BSSP.

    4. Set OPT to the directory containing the bin, lib, include, etc. directories where the ici package is installed (for example: /usr/local).

    5. Then:

    cd ion/bssp\nmake\nsudo make install\n
    "},{"location":"ION-Guide/#configure_2","title":"Configure","text":"

    The BSSP administration command (bssprc) file provides the information needed to configure BSSP on a given ION node. For details, see the man page for bssprc(5) in Appendix A.

    The bssprc file has a command option specifying max_block_size. This limit prevents retransmission inefficiency when the block size of stream data is too large. The unit of retransmission in BSSP is the block, so if blocks are too large, retransmission is very expensive for the network. If bulk data transfer is needed, rather than streaming, BP over reliable LTP should be used instead of BSSP. If udpbso and udpbsi are used as the underlying convergence layer, then the max_block_size parameter for bssprc cannot be larger than 65507 bytes, because each UDP datagram can carry at most 65507 bytes of payload: 65507 (payload) + 20 (IP header) + 8 (UDP header) = 65535 bytes.

    "},{"location":"ION-Guide/#run_2","title":"Run","text":"

    The executable programs used in operation of the bssp component of ION include:

    bsspadmin starts/stops the bsspclock task and, as mandated by configuration, the udpbsi and udpbso tasks.

    For details, see the man pages for bsspadmin(1), bsspclock(1), bsspmeter(1), udpbsi(1), and udpbso(1) in Appendix A.

    "},{"location":"ION-Guide/#bundle-protocol-bp_1","title":"Bundle Protocol (BP)","text":""},{"location":"ION-Guide/#compile-time-options_1","title":"Compile-time options","text":"

    Declaring values for the following variables, by setting parameters that are provided to the C compiler (for example, -DION_NOSTATS or -DBRSTERM=60), will alter the functionality of BP as noted below.

    "},{"location":"ION-Guide/#targetffs","title":"TargetFFS","text":"

    Setting this option adapts BP for use with the TargetFFS flash file system on the VxWorks operating system. TargetFFS apparently locks one or more system semaphores so long as a file is kept open. When a BP task keeps a file open for a sustained interval, subsequent file system access may cause a high-priority non-BP task to attempt to lock the affected semaphore and therefore block; in this event, the priority of the BP task may automatically be elevated by the inversion safety mechanisms of VxWorks. This \"priority inheritance\" can result in preferential scheduling for the BP task -- which does not need it -- at the expense of normally higher-priority tasks, and can thereby introduce runtime anomalies. BP tasks should therefore close files immediately after each access when running on a VxWorks platform that uses the TargetFFS flash file system. The TargetFFS compile-time option ensures that they do so.

    "},{"location":"ION-Guide/#brstermxx","title":"BRSTERM=xx","text":"

    This option sets the maximum number of seconds by which the current time at the BRS server may exceed the time tag in a BRS authentication message from a client; if this interval is exceeded, the authentication message is presumed to be a replay attack and is rejected. Small values of BRSTERM are safer than large ones, but they require that clocks be more closely synchronized. The default value is 5.

    "},{"location":"ION-Guide/#ion_nostats","title":"ION_NOSTATS","text":"

    Setting this option prevents the logging of bundle processing statistics in status messages.

    "},{"location":"ION-Guide/#keepalive_periodxx","title":"KEEPALIVE_PERIOD=xx","text":"

    This option sets the number of seconds between transmission of keep-alive messages over any TCP or BRS convergence-layer protocol connection. The default value is 15.

    "},{"location":"ION-Guide/#ion_bandwidth_reserved","title":"ION_BANDWIDTH_RESERVED","text":"

    Setting this option overrides strict priority order in bundle transmission, which is the default. Instead, bandwidth is shared between the priority-1 and priority-0 queues on a 2:1 ratio whenever there is no priority-2 traffic.

    "},{"location":"ION-Guide/#enable_bpacs","title":"ENABLE_BPACS","text":"

    This option causes Aggregate Custody Signaling source code to be included in the build. ACS is an alternative custody transfer signaling mechanism that sharply reduces the volume of custody acknowledgment traffic.

    "},{"location":"ION-Guide/#enable_imc","title":"ENABLE_IMC","text":"

    This option causes IPN Multicast source code to be included in the build. IMC is discussed in section 1.8.4 above.

    "},{"location":"ION-Guide/#build_3","title":"Build","text":"

    To build BP:

    1. Make sure that the \"ici\", \"ltp\", \"dgr\", and \"bssp\" components of ION have been built for the platform on which you plan to run BP.
    2. Edit the Makefile in ion/bp:

    3. As for ici, make sure PLATFORMS is set to the name of the platform on which you plan to run BP.

    4. Set OPT to the directory containing the bin, lib, include, etc. directories where the ici package is installed (for example: /usr/local).

    5. Then:

    cd ion/bp\nmake\nsudo make install\n
    "},{"location":"ION-Guide/#configure_3","title":"Configure","text":"

    The BP administration command (bprc) file provides the information needed to configure generic BP on a given ION node. The IPN scheme administration command (ipnrc) file provides information that configures static and default routes for endpoints whose IDs conform to the \"ipn\" scheme. The DTN scheme administration command (dtn2rc) file provides information that configures static and default routes for endpoints whose IDs conform to the \"dtn\" scheme, as supported by the DTN2 reference implementation. For details, see the man pages for bprc(5), ipnrc(5), and dtn2rc(5) in Appendix A.

    "},{"location":"ION-Guide/#run_3","title":"Run","text":"

    The executable programs used in operation of the bp component of ION include:

    bpadmin starts/stops the bpclock task and, as mandated by configuration, the ipnfw, dtn2fw, ipnadminep, dtn2adminep, bpclm, brsscla, brsccla, tcpcli, stcpcli, stcpclo, udpcli, udpclo, ltpcli, ltpclo, and dgrcla tasks.

    For details, see the man pages for bpadmin(1),ipnadmin(1), dtn2admin(1), bpclock(1), bpclm(1), ipnfw(1), dtn2fw(1), ipnadminep(1), dtn2adminep(1), brsscla(1), brsccla(1),tcpcli(1), stcpcli(1), stcpclo(1), udpcli(1), udpclo(1), ltpcli(1), ltpclo(1), dgrcla(1), bpsendfile(1), bpstats(1), bptrace(1), lgsend(1), lgagent(1), and hmackeys(1) in Appendix A.

    "},{"location":"ION-Guide/#test_2","title":"Test","text":"

    Five test executables are provided to support testing and debugging of the BP component of ION:

    For details, see the man pages for bpdriver(1), bpcounter(1), bpecho(1), bpsource(1), and bpsink(1) in Appendix A.

    "},{"location":"ION-Guide/#datagram-retransmission-dgr_1","title":"Datagram Retransmission (DGR)","text":""},{"location":"ION-Guide/#build_4","title":"Build","text":"

    To build DGR:

    1. Make sure that the \"ici\" component of ION has been built for the platform on which you plan to run DGR.
    2. Edit the Makefile in ion/dgr:

    3. As for ici, make sure PLATFORMS is set to the name of the platform on which you plan to run DGR.

    4. Set OPT to the directory containing the bin, lib, include, etc. directories where the ici package is installed (for example: /usr/local).

    5. Then:

    cd ion/dgr\nmake\nsudo make install\n
    "},{"location":"ION-Guide/#configure_4","title":"Configure","text":"

    No additional configuration files are required for the operation of the DGR component of ION.

    "},{"location":"ION-Guide/#run_4","title":"Run","text":"

    No runtime executables are required for the operation of the DGR component of ION.

    "},{"location":"ION-Guide/#test_3","title":"Test","text":"

    Two test executables are provided to support testing and debugging of the DGR component of ION:

    For details, see the man pages for file2dgr(1) and dgr2file(1) in Appendix A.

    "},{"location":"ION-Guide/#asynchronous-message-service-ams_1","title":"Asynchronous Message Service (AMS)","text":""},{"location":"ION-Guide/#compile-time-options_2","title":"Compile-time options","text":"

    Note that, by default, the syntax by which AMS MIB information is presented to AMS is as documented in the \"amsrc\" man page. Alternatively it is possible to use an XML-based syntax as documented in the \"amsxml\" man page. To use the XML-based syntax instead, be sure that the \"expat\" XML interpretation system is installed and pass the argument \"--with-expat\" to \"./configure\" when building ION.

    Defining the following macros, by setting parameters that are provided to the C compiler (for example, -DAMS_INDUSTRIAL), will alter the functionality of AMS as noted below.

    AMS_INDUSTRIAL

    Setting this option adapts AMS to an \"industrial\" rather than safety-critical model for memory management. By default, the memory acquired for message transmission and reception buffers in AMS is allocated from limited ION working memory, which is fixed at ION start-up time; this limits the rate at which AMS messages may be originated and acquired. When -DAMS_INDUSTRIAL is set at compile time, the memory acquired for message transmission and reception buffers in AMS is allocated from system memory, using the familiar malloc() and free() functions; this enables much higher message traffic rates on machines with abundant system memory.

    "},{"location":"ION-Guide/#build_5","title":"Build","text":"

    To build AMS:

    1. Make sure that the \"bp\" component of ION has been built for the platform on which you plan to run AMS.
    2. Edit the Makefile in ion/ams:

    3. Just as for bp, make sure PLATFORMS is set to the name of the platform on which you plan to run AMS.

    4. Set OPT to the directory containing the bin, lib, include, etc. directories where the ici package is installed (for example: /usr/local).

    5. Then:

    cd ion/ams\nmake\nsudo make install\n
    "},{"location":"ION-Guide/#configure_5","title":"Configure","text":"

    There is no central configuration of AMS; each AMS entity (configuration server, registrar, or application module) is individually configured at the time its initial MIB is loaded at startup. Note that a single MIB may be shared between multiple AMS entities without issue.

    For details of MIB file syntax, see the man pages for amsrc(5) and amsxml(5) in Appendix A.

    "},{"location":"ION-Guide/#run_5","title":"Run","text":"

    The executable programs used in operation of the AMS component of ION include:

    For details, see the man pages for amsd(1), ramsgate(1), amsstop(1), and amsmib(1) in Appendix A.

    "},{"location":"ION-Guide/#test_4","title":"Test","text":"

    Seven test executables are provided to support testing and debugging of the AMS component of ION:

    For details, see the man pages for amsbenchs(1), amsbenchr(1), amshello(1), amsshell(1), amslog(1), amslogprt(1), amspub(1), and amssub(1) in Appendix A.

    For further operational details of the AMS system, please see sections 4 and 5 of the AMS Programmer's Guide.

    "},{"location":"ION-Guide/#ccsds-file-delivery-protocol-cfdp_1","title":"CCSDS File Delivery Protocol (CFDP)","text":""},{"location":"ION-Guide/#compile-time-options_3","title":"Compile-time options","text":"

    Defining the following macro, by setting a parameter that is provided to the C compiler (i.e., -DTargetFFS), will alter the functionality of CFDP as noted below.

    "},{"location":"ION-Guide/#targetffs_1","title":"TargetFFS","text":"

    Setting this option adapts CFDP for use with the TargetFFS flash file system on the VxWorks operating system. TargetFFS apparently locks one or more system semaphores so long as a file is kept open. When a CFDP task keeps a file open for a sustained interval, subsequent file system access may cause a high-priority non-CFDP task to attempt to lock the affected semaphore and therefore block; in this event, the priority of the CFDP task may automatically be elevated by the inversion safety mechanisms of VxWorks. This \"priority inheritance\" can result in preferential scheduling for the CFDP task -- which does not need it -- at the expense of normally higher-priority tasks, and can thereby introduce runtime anomalies. CFDP tasks should therefore close files immediately after each access when running on a VxWorks platform that uses the TargetFFS flash file system. The TargetFFS compile-time option ensures that they do so.

    "},{"location":"ION-Guide/#build_6","title":"Build","text":"

    To build CFDP:

    1. Make sure that the \"bp\" component of ION has been built for the platform on which you plan to run CFDP.
    2. Edit the Makefile in ion/cfdp:

    3. Just as for bp, make sure PLATFORMS is set to the name of the platform on which you plan to run CFDP.

    4. Set OPT to the directory containing the bin, lib, include, etc. directories where the ici package is installed.

    5. Then:

    cd ion/cfdp\n\nmake\n\nsudo make install\n
    "},{"location":"ION-Guide/#configure_6","title":"Configure","text":"

    The CFDP administration command (cfdprc) file provides the information needed to configure CFDP on a given ION node. For details, see the man page for cfdprc(5) in Appendix A.

    "},{"location":"ION-Guide/#run_6","title":"Run","text":"

    The executable programs used in operation of the CFDP component of ION include:

    cfdpadmin starts/stops the cfdpclock task and, as mandated by configuration, the bputa task.

    For details, see the man pages for cfdpadmin(1), cfdpclock(1), and bputa(1) in Appendix A.

    "},{"location":"ION-Guide/#test_5","title":"Test","text":"

    A single executable, cfdptest, is provided to support testing and debugging of the CFDP component of ION. For details, see the man page for cfdptest(1) in Appendix A.

    "},{"location":"ION-Guide/#bundle-streaming-service-bss_1","title":"Bundle Streaming Service (BSS)","text":""},{"location":"ION-Guide/#compile-time-options_4","title":"Compile-time options","text":"

    Defining the following macro, by setting a parameter that is provided to the C compiler (e.g., -DWINDOW=10000), will alter the functionality of BSS as noted below.

    "},{"location":"ION-Guide/#windowxx","title":"WINDOW=xx","text":"

    Setting this option changes the maximum number of seconds by which the BSS database for a BSS application may be \"rewound\" for replay. The default value is 86400 seconds, which is 24 hours.

    "},{"location":"ION-Guide/#build_7","title":"Build","text":"

    To build BSS:

    cd ion/bss\n\nmake\n\nsudo make install\n
    "},{"location":"ION-Guide/#configure_7","title":"Configure","text":"

    No additional configuration files are required for the operation of the BSS component of ION.

    "},{"location":"ION-Guide/#run_7","title":"Run","text":"

    No runtime executables are required for the operation of the BSS component of ION.

    "},{"location":"ION-Guide/#test_6","title":"Test","text":"

    Four test executables are provided to support testing and debugging of the BSS component of ION:

    For details, see the man pages for bssdriver(1), bsscounter(1), bssStreamingApp(1), and bssrecv(1) in Appendix A.

    1. In ION, reliable convergence-layer protocols (where available) are by default used for every bundle. The application can instead mandate selection of \"best-effort\" service at the convergence layer by setting the BP_BEST_EFFORT flag in the \"extended class of service flags\" parameter, but this feature is an ION extension that is not supported by other BP implementations at the time of this writing.\u00a0\u21a9

    2. Note that, in all occupancy figures, ION data management accounts not only for the sizes of the payloads of all queued bundles but also for the sizes of their headers.\u00a0\u21a9

    3. Barring data loss or corruption for which the various retransmission mechanisms in ION cannot compensate.\u00a0\u21a9

    4. Note that ION may indeed block the offering of a message to the network, but this is local admission control -- assuring that the node's local buffer space for queuing outbound bundles is not oversubscribed -- rather than end-to-end flow control. It is always possible for there to be ample local buffer space yet insufficient network capacity to convey the offered data to their final destination, and vice versa.\u00a0\u21a9

    5. The minimum size of an ION bundle header is 26 bytes. Adding extension blocks (such as those that effect the Bundle Security Protocol) will increase this figure.\u00a0\u21a9

    6. In ION, all bundles are by default non-critical. The application can indicate that data should be sent in a Critical bundle by setting the BP_MINIMUM_LATENCY flag in the \"extended class of service\" parameter, but this feature is an ION extension that is not supported by other BP implementations at the time of this writing.\u00a0\u21a9

    7. If we wanted to be extremely accurate we could also subtract from the timeout interval the imputed inbound queuing delay QI, since inbound queuing would presumably be completed during the interval in which transmission was suspended. But since we're guessing at the queuing delays anyway, this adjustment doesn't make a lot of sense.\u00a0\u21a9

    "},{"location":"ION-Launcher/","title":"ION Launcher","text":"

    Last Updated: 12/27/2023

    Previous versions of ION required a good understanding of the different ION administrative programs, how to write RC files for them, and what the different configuration commands mean.

    The ionlauncher was developed to ease the user into ION configuration by taking a few parameters that a mission designer would likely know when planning a network. Using those parameters, captured in a simple JSON format, an entire ION network with defined configuration files can be created and started rather quickly.

    "},{"location":"ION-Launcher/#simple-network-model-syntax","title":"Simple Network Model Syntax","text":"

    This section will outline the necessary parameters needed to create a simple model for ionlauncher.

    "},{"location":"ION-Launcher/#model-parameters","title":"Model Parameters","text":"

    There are seven parameters that are needed to define a simple network model. They are as follows:

    NAME: serves as the key for the other parameters and naming start scripts\nIP ADDRESS: Host IP address or domain name the node will be running on\nNODE: assigned node number, will be used for addressing with neighbor(s)\nSERVICES: Applications running on the node, currently supports CFDP, AMS, & AMP\nDEST: node's neighbor(s)\nPROTOCOL: convergence layer used to reach a neighbor. Currently supported options include LTP, TCP, UDP, and STCP. \n            (untested options: BSSP & DCCP)\nRATE: Data rate used to communicate with neighbor(s), in bytes/s\n
    "},{"location":"ION-Launcher/#example-model","title":"Example Model","text":"

    There are a few example models included with the ionlauncher prototype under example_models/. This section shows one of them and explains how it works.

    {\n    \"SC\": {\n        \"IP\": \"192.168.1.115\",\n        \"NODE\": 21,\n        \"SERVICES\": [],\n        \"DEST\": [\n            \"Relay\"\n        ],\n        \"PROTOCOL\": [\n            \"ltp\"\n        ],\n        \"RATE\": [\n            100000\n        ]\n    },\n    \"Relay\": {\n        \"IP\": \"192.168.1.114\",\n        \"NODE\": 22,\n        \"SERVICES\": [],\n        \"DEST\": [\n            \"SC\",\n            \"GS\"\n        ],\n        \"PROTOCOL\": [\n            \"ltp\",\n            \"tcp\"\n        ],\n        \"RATE\": [\n            10000,\n            2500000\n        ]\n    },\n    \"GS\": {\n        \"IP\": \"192.168.1.113\",\n        \"NODE\": 23,\n        \"SERVICES\": [],\n        \"DEST\": [\n            \"Relay\"\n        ],\n        \"PROTOCOL\": [\n            \"tcp\"\n        ],\n        \"RATE\": [\n            2500000\n        ]\n    }\n}\n

    This is an example of a three-node setup in which Relay serves as a DTN relay between SC and GS. Order is important in the DEST, PROTOCOL, and RATE lists: elements at the same position in each list correspond to one another. For example, Relay communicates with SC via LTP at 10,000 bytes/s and with GS via TCP at 2,500,000 bytes/s.
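The positional correspondence among the three lists can be checked mechanically. A small sketch, using the Relay node from the example above (the validation logic is illustrative, not part of ionlauncher):

```python
import json

# Element i of DEST, PROTOCOL, and RATE together describe one link,
# so a node entry is malformed if the three lists differ in length.
model = json.loads("""
{
  "Relay": {
    "IP": "192.168.1.114",
    "NODE": 22,
    "SERVICES": [],
    "DEST": ["SC", "GS"],
    "PROTOCOL": ["ltp", "tcp"],
    "RATE": [10000, 2500000]
  }
}
""")

def links(node):
    """Pair up each neighbor with its convergence layer and data rate."""
    assert len(node["DEST"]) == len(node["PROTOCOL"]) == len(node["RATE"])
    return list(zip(node["DEST"], node["PROTOCOL"], node["RATE"]))
```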

    There are two other examples included with ionlauncher. The first is a simple two-node setup over TCP. The second is a four-node scenario in which SC can only uplink to Relay1 and downlink from Relay2, while GS has continuous coverage of the two relays.

    "},{"location":"ION-Launcher/#prototype-ion-413","title":"Prototype - ION 4.1.3","text":"

    Ionlauncher is currently a prototype and may not be installed globally. Please make sure that both ionlauncher and net_model_gen in the demo folder have been copied to the execution path of ION, which is typically /usr/local/bin or the path specified as part of the ./configure script during the initial ION installation.

    Ionlauncher's simple network model file currently does not handle multi-network-interface configurations - this will be updated for ION 4.1.4.

    "},{"location":"ION-Launcher/#usage","title":"Usage","text":"

    This section will outline how to run ionlauncher and what the different parameters mean. It is assumed that ionlauncher will be run on each host independently with the same simple model file; the only parameter that changes is the node name.

    ionlauncher [-h] -n <node name> -m <simple model file> [-d <ionconfig cli directory>]

    -h: display help\n-n: node name that will be used to start ION on the host\n-m: path to simple model file, ionlauncher assumes the file \n    is in the same directory or a directory below\n-d: optional parameter that defines the path to the ION Config \n    tool CLI scripts, default is /home/$USER/ionconfig-4.8.1/cli/bin\n

    Once the ION configuration files have been generated, ION will be started using the configuration files for the node passed via -n. Stopping the node is done via ionstop; if that hangs or errors out, killm can be used to force-stop ION processes. To restart ION, ionlauncher can be used again, but this will overwrite the configuration files and wipe out any customization added to the set of configuration files generated by the previous run. If you did not add any customization, it is perfectly fine to launch ION again in the same way. If you did make changes, then it is recommended that you use the start script in the node's working directory, ./start_{node_name}.sh, to start ION.

    "},{"location":"ION-Launcher/#directory-structure","title":"Directory Structure","text":"

    The ionlauncher and associated net_model_gen python scripts will be installed in the same install path as ION, therefore making them available for use from any working directory.

    For example, say the 3-node simple model file is called 3node.json and it is stored in directory $WKDIR. After cd'ing into the working directory and executing ionlauncher, a new directory $WKDIR/3node-ion will be created, containing the following:

    After the initial ionlauncher run, the ION configuration files are generated for you based on the simple network model description and a set of default settings. To activate additional features, optimize parameter settings, and refine protocol behaviors, you will need to edit the ION config files individually. For those changes to take effect, you need to stop ION and restart it using the start script in each node's working folder.

    NOTE: If you run ionlauncher again, the ION configuration files will be regenerated, overwriting your custom changes. It is therefore recommended that you make a copy of, or rename, the configuration files to avoid this situation.

    "},{"location":"ION-Launcher/#dependency","title":"Dependency","text":"

    The ionlauncher script requires installation of the ION Config Tool, which is publicly accessible (starting January 2024) on GitHub. The companion tool, ION Network Model, is also available for download, although it is not needed for using ionlauncher.

    Download the latest release of the ION Config Tool and note the directory of the CLI (command-line interface) executables. For example, if it is /home/$USER/ionconfig-4.8.1/cli/bin, then you don't need to provide the -d option to ionlauncher. If it is somewhere else, then you should provide the -d option.

    You also need to install node.js and make sure that Python version 3.6 or higher is available on your system.

    "},{"location":"ION-Quick-Start-Guide/","title":"ION Quick Start Guide","text":""},{"location":"ION-Quick-Start-Guide/#installing-ion-on-linux-macos-solaris","title":"Installing ION on Linux, MacOS, Solaris","text":"

    To build and install the entire ION system on a Linux, MacOS, or Solaris platform, cd into ion-open-source and enter the following commands:

    ./configure

    If configure is not present, first run: autoreconf -fi

    make

    sudo make install

    sudo ldconfig

    For MacOS, the ldconfig command is not present and is not necessary to run.

    "},{"location":"ION-Quick-Start-Guide/#compile-time-switches","title":"Compile Time Switches","text":"

    If you want to set overriding compile-time switches for a build, the place to do this is in the ./configure command. For details, run:

    ./configure -h

    By default, Bundle Protocol V7 will be built and installed, but BPv6 source code is still available. The BPv6 implementation is essentially the same as that of ION 3.7.4, with only critical bugs being fixed going forward. All users are encouraged to switch to BPv7.

    To build BPv6, run

    ./configure --enable-bpv6

    To clean up compilation artifacts such as object files and shared libraries stored within the ION open-source directory, cd to the ION open-source directory and run:

    make clean

    To remove executables and shared libraries installed in the system, run:

    sudo make uninstall

    "},{"location":"ION-Quick-Start-Guide/#windows","title":"Windows","text":"

    To install ION for Windows, please download the Windows installer.

    "},{"location":"ION-Quick-Start-Guide/#build-individual-packages","title":"Build Individual Packages","text":"

    It's also possible to build the individual packages of ION, using platform-specific Makefiles in the package subdirectories. Currently the only actively maintained platform-specific Makefile is for 64-bit Linux, under the \"i86_48-fedora\" folder. If you choose this option, be aware of the dependencies among the packages:

    For more detailed instruction on building ION, see section 2 of the \"ION Design and Operation Guide\" document that is distributed with this package.

    Also, be aware that these Makefiles install everything into subdirectories of /usr/local. To override this behavior, change the value of OPT in the top-level Makefile of each package.

    Additional details are provided in the README.txt files in the root directories of some of the subsystems.

    Note that all Makefiles are for gmake; on a FreeBSD platform, be sure to install gmake before trying to build ION.

    "},{"location":"ION-Quick-Start-Guide/#running-ion","title":"Running ION","text":""},{"location":"ION-Quick-Start-Guide/#check-installed-bp-and-ion-versions","title":"Check Installed BP and ION versions","text":"

    Before running ION, let's confirm which version of Bundle Protocol is installed by running:

    bpversion

    You will see a simple string in the terminal window indicating either \"bpv6\" or \"bpv7\".

    Also check the ION version installed by running:

    ionadmin

    At the \":\" prompt, please enter the single character command 'v' and you should see a response like this:

     $ ionadmin\n: v\nION-OPEN-SOURCE-4.1.2\n

    Then type 'q' to quit ionadmin. While ionadmin quits, it may display certain error messages like this:

    at line 427 of ici/library/platform_sm.c, Can't get shared memory segment: Invalid argument (0)\nat line 312 of ici/library/memmgr.c, Can't open memory region.\nat line 367 of ici/sdr/sdrxn.c, Can't open SDR working memory.\nat line 513 of ici/sdr/sdrxn.c, Can't open SDR working memory.\nat line 963 of ici/library/ion.c, Can't initialize the SDR system.\nStopping ionadmin.\n

    This is normal, since ION has not been launched yet.

    "},{"location":"ION-Quick-Start-Guide/#try-the-bping-test","title":"Try the 'bping' test","text":"

    The tests directory contains regression tests used by the system integrators to check ION before issuing each new release. To make sure ION is operating properly after installation, you can also manually run the bping test:

    First enter the test directory: cd tests

    Enter the command: ./runtests bping/

    This command invokes one of the simplest tests, in which two ION instances are created, a ping message is sent from one to the other, and an echo is returned to the sender of the ping.

    During the test, ION will display the configuration files used, clean the system of existing ION instances, relaunch ION according to the test configuration files, execute the bping actions, display text indicating which actions are being executed in real time, then shut down ION and display the final test status message, which looks like this:

    ION node ended. Log file: ion.log\nTEST PASSED!\n\npassed: 1\n    bping\n\nfailed: 0\n\nskipped: 0\n\nexcluded by OS type: 0\n\nexcluded by BP version: 0\n\nobsolete tests: 0\n

    In this case, the test script confirms that ION is able to execute a bping function properly.

    "},{"location":"ION-Quick-Start-Guide/#try-to-setup-a-udp-session","title":"Try to Setup a UDP Session","text":"

    Under the demos folder of the ION code directory, there are benchmark tests for various ION configurations. These tests also provide a template of how to configure ION.

    Take the example of the bench-udp demo:

    Go into the demos/bench-udp/ folder; you will see two subfolders, 2.bench.udp and 3.bench.udp. These folders configure two ION nodes, with node numbers 2 and 3 respectively.

    Looking inside the 2.bench.udp folder, you will see specific files used to configure ION. These include:

    bench.bprc \nbench.ionconfig  \nbench.ionrc  \nbench.ionsecrc  \nbench.ipnrc  \nionstart  \nionstop\n

    Note that the ION distribution comes with separate, global ionstart and ionstop scripts installed in /usr/local/bin that can launch and stop ION. The advantage of using the local scripts is that they allow you to customize the way you launch and stop ION, for example by adding helpful text prompts or performing additional checks and cleanup activities.

    To run this demo test, first go into the test directory bench-udp, then run the dotest script:

    ./dotest

    You can also study the test script to understand better what is happening.

    "},{"location":"ION-Quick-Start-Guide/#running-multiple-ion-instances-on-a-single-host","title":"Running multiple ION instances on a single host","text":"

    If you study the test scripts under the \"tests\" and \"demos\" folders, you will notice that these tests often launch 2 or 3 ION nodes on the same host to conduct the necessary tests. While this simplifies and automates regression testing for ION development and integration, it is not the typical, recommended configuration for new users.

    In order to run multiple ION instances on one host, a different set of IPC keys must be used for each instance, and several variables must be set properly in the shell environment. Please see the ION Deployment Guide (included with the ION distribution) for more information on how to do that.

    We recommend that most users run each ION instance on a separate host or VM, unless specific constraints require running multiple ION instances on one host.

    "},{"location":"ION-Quick-Start-Guide/#setup-udp-configuration-on-two-hosts","title":"Setup UDP Configuration on Two Hosts","text":"

    Once you have studied these scripts, you can try running them on two different machines running ION.

    First, install ION in host A with an IP address of, for example, 192.168.0.2, and host B with an IP address of 192.168.0.3. Verify your installation based on earlier instructions.

    Copy the 2.bench.udp folder into host A and the 3.bench.udp folder into host B.

    Also copy the file global.ionrc from the bench-udp folder into the same folder where you placed 2.bench.udp and 3.bench.udp.

    Then you need to modify the IP addresses in the UDP demo configuration files to match the IP addresses of hosts A and B.

    For example, the bprc file copied into host A is:

    1\na scheme ipn 'ipnfw' 'ipnadminep'\na endpoint ipn:2.0 x\na endpoint ipn:2.1 x\na endpoint ipn:2.2 x\na endpoint ipn:2.64 x\na endpoint ipn:2.65 x\na protocol udp 1400 100\na induct udp 127.0.0.1:2113 udpcli\na outduct udp 127.0.0.1:3113 'udpclo 1'\nr 'ipnadmin bench.ipnrc'\ns\n

    To make it work for host A, you need to change the induct IP address from 127.0.0.1:2113 to 192.168.0.2:2113 - this is where host A's ION will receive incoming UDP traffic.

    Similarly for the outduct, you want to change the IP address from 127.0.0.1:3113 to 192.168.0.3:3113 - this is where UDP traffic will go out to host B.
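The two edits amount to simple string substitutions on the induct and outduct lines (an illustrative sketch; in practice you would make these changes in a text editor):

```python
# Rewrite host A's induct/outduct lines from loopback to the real addresses:
# the induct gets host A's own address, the outduct gets host B's address.
bprc = (
    "a induct udp 127.0.0.1:2113 udpcli\n"
    "a outduct udp 127.0.0.1:3113 'udpclo 1'\n"
)
bprc = bprc.replace("induct udp 127.0.0.1:2113", "induct udp 192.168.0.2:2113")
bprc = bprc.replace("outduct udp 127.0.0.1:3113", "outduct udp 192.168.0.3:3113")
```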

    You can make similar modifications to the ipnrc file as well.

    In the ionconfig file, you want to comment out or delete the wmKey and sdrName entries. Since we are running these two nodes on different hosts, we always let ION use the default values for these parameters.

    If you don\u2019t do this you get an error on startup.
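For instance, a minimal ionconfig file for host A might then look like this (the parameter values are illustrative only; keep whatever other entries your file already has):

```
configFlags 1
heapWords 100000000
pathName /tmp
```

With wmKey and sdrName absent, each host's ION instance uses the default shared-memory key and SDR name, which is fine because only one instance runs per host.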

    Repeat the same updates for host B, substituting the old IP addresses with those of the new hosts as appropriate.

    "},{"location":"ION-Quick-Start-Guide/#launch-ion-on-two-separate-hosts","title":"Launch ION on two separate hosts","text":"

    After updating the configuration files on hosts A and B to reflect the new IP addresses and using the default wmKey (by not specifying any), we are now ready to try launching ION.

    Before you try to launch ION, it is recommended that you:

    1. Use netcat or iperf to test the connection between hosts A and B and make sure it is working properly: a sufficiently high data rate and a low loss rate (a low single-digit percentage, or a fraction of a percent, should not be a concern).
    2. If iperf tests show that the data rate between the two hosts is at or above 800 megabits per second in both directions, and the UDP loss rate is no more than a few percent, then you are good to go.
    3. If not, reduce the data rate in the global.ionrc file: change the data rates in the 'a contact' commands down to something close to your connection speed. Remember, the unit in the global.ionrc file is bytes per second, not the bits per second typically used in iperf reports.
    4. If the error rate is high, you may want to check both the physical connection and the kernel buffer settings.
    5. Checking firewall and MTU settings may also help you narrow down problems.
    6. Using Wireshark can also be helpful, both for the initial connection check and during ION testing.
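The unit conversion mentioned in step 3 is just a division by 8 (illustrative helper, not an ION utility):

```python
def iperf_to_contact_rate(bits_per_second):
    """Convert an iperf throughput figure (bits/s) to the bytes/s unit
    used by the 'a contact' command in global.ionrc."""
    return bits_per_second // 8

# An 800 Mbit/s iperf measurement corresponds to a
# 100,000,000 bytes/s contact rate.
```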

    Once you are ready to launch ION on both host A and B, open a terminal and go to the directory where the configuration files are stored, and run the local ionstart script:

    ./ionstart

    Note: do not run ionstart without the ./ prefix, since that will trigger the global script in the execution PATH

    You should see some standard output confirming that ION launch has completed. For example you might see something like this:

    Starting ION...\nwmSize:          5000000\nwmAddress:       0\nsdrName:        'ion2'\nsdrWmSize:       0\nconfigFlags:     1\nheapWords:       100000000\nheapKey:         -1\nlogSize:         0\nlogKey:          -1\npathName:       '/tmp'\nStopping ionadmin.\nStopping ionadmin.\nStopping ionsecadmin.\nStopping ltpadmin.\nStopping ipnadmin.\nStopping bpadmin.\n

    You can also see additional status information in the ion.log file in the same directory.

    Launch ION on both host A and B.

    "},{"location":"ION-Quick-Start-Guide/#run-a-bpdriver-bpcounter-test","title":"Run a bpdriver-bpcounter test","text":"

    Now that we have launched ION on both host A and B, it's time to send some data.

    We can repeat the bping test at this point. But since you have already seen that before, let's try something different.

    Let's use the bpdriver-bpcounter test utilities. This pair of utility programs simply sends a number of bundles from one node to another and provides a measurement of the throughput.

    On host B, run this command:

    bpcounter ipn:3.2 3

    This command tells ION node number 3 to be ready to receive three bundles on the endpoint ID ipn:3.2, which was specified in the .bprc file.

    After host B has launched bpcounter, then on host A, run this command:

    bpdriver 3 ipn:2.2 ipn:3.2 -10000

    This command tells ION running on host A to send 3 bundles from EID ipn:2.2 to EID ipn:3.2, which is waiting for data (per the bpcounter command). Each bundle will be 10,000 bytes in size.

    Why use the \"-\" sign in front of the size parameter? It's not a typo. The \"-\" indicates that bpdriver should keep sending bundles without waiting for any response from the receiver. The feature whereby bpdriver waits for the receiver is available in BPv6 but is no longer part of BPv7.

    When the test completes, you should see output indicating that all the data were sent, how many bundles were transmitted/received, and at what rate.

    Please note that on the sending side the transmission may appear to be almost instantaneous. That is because bpdriver, as an application, is pushing data into the bundle protocol agent, which can buffer the data for rate-controlled transmission. As soon as the bpdriver application pushes all data into the local bundle protocol agent, it considers the transmission completed and will report a very high throughput value, one that may be far above the contact graph's data rate limit. This is not an error; it simply reports the throughput as experienced by the sending application, even though the data has not yet been fully delivered to the destination.

    Throughput reported by bpcounter, on the other hand, is quite accurate if a large number of bundles are sent. To accurately measure the time it takes to send the bundles, the bpdriver program sends a \"pilot\" bundle just before sending the test data, signaling the bpcounter program to start its throughput calculation timer. This allows the user to launch bpcounter ahead of time, without having to worry about immediately sending all the bundles in order to produce an accurate throughput measurement.

    If you want to emulate a constant-rate source, instead of having bpdriver push all data as fast as possible, you can use the 'i' option to specify a data rate throttle in bits per second.

    If you want to know more about how bpdriver and bpcounter work, look up their man pages for details on syntax and command line options. Other useful ION test utility commands include bpecho, bping, bpsource, bpsink, bpsendfile, bprecvfile, etc.

    "},{"location":"ION-Quick-Start-Guide/#check-the-ionlog","title":"Check the ion.log","text":"

    To confirm whether ION is running properly or has experienced an error, the first thing to do is to check the ion.log, which is a file created in the directory from which ION was launched. If an ion.log file exists when ION starts, it will simply append additional log entries into that file. Each entry has a timestamp to help you determine the time and the relative order in which events occurred.

    When a serious error occurs, ion.log will contain detailed messages that can pinpoint the source file name and line number where the error was reported or triggered.

    "},{"location":"ION-Quick-Start-Guide/#bpacq-and-ltpacq-files","title":"bpacq and ltpacq files","text":"

    Sometimes after operating ION for a while, you will notice a number of files with names such as \"bpacq\" or \"ltpacq\" followed by a number. These are temporary files created by ION to stage bundles or LTP blocks during reception and processing. Once a bundle or LTP block is completely constructed, delivered, or cancelled properly, these temporary files are automatically removed by ION. But if ION experiences an anomalous shutdown, then these files may remain and accumulate in the local directory.

    It is generally safe to remove these files between ION runs. Their presence does not automatically imply issues with ION but can indicate that ION operations were interrupted for some reason. Their creation timestamps can provide clues as to when these interruptions occurred. At present there is no ION utility program to parse them: these files are essentially bit buckets, with no internal markers or structure that would allow processes outside the bundle agents that created them to parse them or extract information.

    "},{"location":"ION-Quick-Start-Guide/#forced-shutdown-of-ion","title":"Forced Shutdown of ION","text":"

    Sometimes shutting down ION does not go smoothly and you can't seem to relaunch ION properly. In that case, you can use the global ionstop script (or the killm script) to kill all ION processes that the local ionstop script failed to terminate. The global ionstop and killm scripts also clear out the IPC shared-memory and semaphore allocations that were held by ION processes and would not otherwise be released.

    "},{"location":"ION-Quick-Start-Guide/#additional-tutorials","title":"Additional Tutorials","text":""},{"location":"ION-Quick-Start-Guide/#ion-configuration-file-tutorial","title":"ION Configuration File Tutorial","text":"

    To learn about the configuration files and the basic set of command syntax and functions: ION Config File Tutorial

    "},{"location":"ION-Quick-Start-Guide/#ion-configuration-file-template","title":"ION Configuration File Template","text":"

    ION Config File Template

    "},{"location":"ION-Quick-Start-Guide/#ion-nasa-course","title":"ION NASA Course","text":"

    To learn more about the design principle of ION and how to use it, a complete series of tutorials is available here: NASA ION Course

    The ION Dev Kit mentioned in the NASA ION Course has been deprecated. However, some additional helpful files needed to complete the examples can be found here: Additional DevKit Files

    "},{"location":"ION-Quick-Start-Guide/#accessing-ion-open-source-code-repository","title":"Accessing ION Open-Source Code Repository","text":""},{"location":"ION-Quick-Start-Guide/#releases","title":"Releases","text":"

    Use the Summary or the Files tab to download point releases

    "},{"location":"ION-Quick-Start-Guide/#using-the-code-repository","title":"Using the code repository","text":""},{"location":"ION-Quick-Start-Guide/#contributing-code-to-ion","title":"Contributing Code to ION","text":""},{"location":"ION-Quick-Start-Guide/#expectations","title":"Expectations","text":"

    If you plan to contribute to the ION project, please keep these in mind:

    "},{"location":"ION-Quick-Start-Guide/#if-you-want-to-contribute-code-to-ion","title":"If you want to contribute code to ION","text":"
    1. Fork this repository
    2. Starting with the \"current\" branch, create a named feature or bugfix branch and develop/test your code in this branch
    3. Generate a pull request (called a Merge Request on SourceForge) with:

      • Your feature or bugfix branch as the source branch

      • \"current\" as the destination branch
    "},{"location":"ION-TestSet-Readme/","title":"Running the ION test set","text":""},{"location":"ION-TestSet-Readme/#directory-layout","title":"Directory layout","text":"

    The tests directory under ION's root folder contains the test suite. Each test lives in its own subdirectory of this directory and is conducted by a script, $TESTNAME/dotest. Another directory that contains ION tests is the demos directory, which includes examples of ION configurations using different convergence layers. This document focuses on the usage of the tests directory.

    "},{"location":"ION-TestSet-Readme/#exclude-files","title":"Exclude files","text":"

    Exclude files are hidden files that allow tests to be disabled based on certain conditions that may cause a test not to run correctly. If an exclude file exists, it should contain a short message about why the test has been excluded.

    Exclude files can exist in any of the following formats:

    "},{"location":"ION-TestSet-Readme/#running-the-tests","title":"Running the tests","text":"

    The tests are run by running make test-all in the top-level directory, or by running runtests in this directory.

    An individual test can also be run: ./runtests <test_name>

    A file defining a set of tests can be run with runtestset. The arguments to runtestset are files that contain globs of tests to run, for example: ./runtestset quicktests.

    "},{"location":"ION-TestSet-Readme/#writing-new-tests","title":"Writing new tests","text":"

    A test directory must contain an executable file named dotest. If a directory does not contain this file, the test will be ignored. The dotest program should execute the test, possibly reporting runtime information on stdout and stderr, and indicate the result of the test by its return value, as follows:

    0: Success\n1: Failure\n2: Skip this test\n

    The test program starts without the ION stack running. The test program is responsible for starting ION in the way that is appropriate for the test.

    The test program must stop the ION protocol stack before returning.

    "},{"location":"ION-TestSet-Readme/#the-test-environment","title":"The test environment","text":"

    The dotest scripts are run in their test directory. The following environment variables are set as part of the test environment:

    "},{"location":"ION-TestSet-Readme/#for-413-and-later","title":"For 4.1.3 and later","text":"

    The runtests script maintains a file called tests/progress that gives the start time, finish time, and final result for each test.

    If the environment variable RUNTESTS_OUTPUTDIR is set, as in, export RUNTESTS_OUTPUTDIR=\"/tmp\", then the output from each test will be stored in /tmp/results, which makes it much easier to find particular text or results when debugging.

    "},{"location":"ION-Utilities/","title":"ION Utility Programs","text":"

    Here is a short list of utility programs that come with ION and are frequently used to launch, stop, and query ION/BP operation status:

    Normally, when ION is shut down by calling ionstop, by issuing the '.' command to ionadmin, or by using the killm script, the SDR is modified or destroyed in the process. Calling ionexit with the argument 'keep' preserves the SDR state, as it was just prior to the execution of ionexit, in non-volatile storage, such as a file if ION was configured to use a file for the SDR.

    "},{"location":"ION-Watch-Characters/","title":"ION Watch Characters","text":"

    ION Version: 4.1.3

    Bundle Protocol Version:7

    Watch characters, when activated, provide immediate feedback on ION operations by printing various characters to standard output (terminal). By examining the watch characters, and the order in which they appear, operators can quickly confirm proper operation or detect configuration or run-time errors.

    This document will list all watch characters currently supported by ION.

    "},{"location":"ION-Watch-Characters/#enhanced-watch-characters-ion-413-or-later","title":"Enhanced Watch Characters (ION 4.1.3 or later)","text":"

    Enhanced watch characters were added in ION 4.1.3 to provide detailed state information at the Bundle Protocol (BP) and LTP levels; they can be activated at compile time by:

    ./configure --enable-ewchar\nor \n./configure CFLAGS=-DEWCHAR\n

    Enhanced watch characters prepend additional state information to the standard watch characters inside a pair of parentheses. In this document, we use the following notation for enhanced watch character information.

    Each field can be longer or shorter than 3 digits/characters.

    "},{"location":"ION-Watch-Characters/#logging-and-processing-of-watch-characters","title":"Logging and Processing of Watch Characters","text":"

    Besides real-time monitoring of the watch characters on standard output, ION can redirect the watch characters to customized user applications for network monitoring purposes. Prior to and including ION 4.1.2, watch characters were single characters. Starting with the ION 4.1.3 release, a watch character is generalized to a string of type char*.

    To activate customized processing, there are two steps:

    1. Create a C code file gdswatcher.c that defines a function to process watch characters and passes that function to ION to handle watch characters:
    static void processWatchChar(char* token)\n{\n    //your code goes here\n} \n\nstatic void ionRedirectWatchCharacters()\n{ \n    setWatcher(processWatchChar);\n}\n
    1. Then use the following compiler flag to build ION:
    ./configure CFLAGS=\"-DGDSWATCHER -I/<path to the folder holding the gdswatcher.c file>\"\n
    "},{"location":"ION-Watch-Characters/#bundle-protocol-watch-character","title":"Bundle Protocol Watch Character","text":"

    a - new bundle is queued for forwarding; (nnn,sss,tttt,cccc)a

    b - bundle is queued for transmission; (nnn,sss,ccc)b

    c - bundle is popped from its transmission queue; (nnn,sss,ccc)c

    m - custody acceptance signal is received

    w - custody of bundle is accepted

    x - custody of bundle is refused

    y - bundle is accepted upon arrival; (nnn,sss,ccc)y

    z - bundle is queued for delivery to an application; (nnn,sss,ccc)z

    ~ - bundle is abandoned (discarded) on attempt to forward it; (nnn,sss,ccc)~

    ! - bundle is destroyed due to TTL expiration; (nnn,sss,ccc)!

    & - custody refusal signal is received

    # - bundle is queued for re-forwarding due to CL protocol failure; (nnn,sss,ccc)#

    j - bundle is placed in \\\"limbo\\\" for possible future re-forwarding; (nnn,sss,ccc)j

    k - bundle is removed from \\\"limbo\\\" and queued for re-forwarding; (nnn,sss,ccc)k

    "},{"location":"ION-Watch-Characters/#ltp-watch-characters","title":"LTP Watch Characters","text":"

    d - bundle appended to block for next session

    e - segment of block is queued for transmission

    f - block has been fully segmented for transmission; (xxxx)f

    g - segment popped from transmission queue;

    h - positive ACK received for block, session ended; (xxx)h

    s - segment received

    t - block has been fully received

    @ - negative ACK received for block, segments retransmitted; (xxx)@

    = - unacknowledged checkpoint was retransmitted; (xxx)=

    + - unacknowledged report segment was retransmitted; (xxx)+

    { - export session canceled locally (by sender)

    } - import session canceled by remote sender

    [ - import session canceled locally (by receiver)

    ] - export session canceled by remote receiver

    "},{"location":"ION-Watch-Characters/#bibect-watch-characters","title":"BIBECT Watch Characters","text":"

    w - custody request is accepted (by receiving entity)

    m - custody acceptance signal is received (by requester)

    x - custody of bundle has been refused

    & - custody refusal signal is received (by requester)

    $ - bundle retransmitted due to expiration of custody request timer

    "},{"location":"ION-Watch-Characters/#bssp-watch-characters","title":"BSSP Watch Characters","text":"

    D - bssp send completed

    E - bssp block constructed for issuance

    F - bssp block issued

    G - bssp block popped from best-efforts transmission queue

    H - positive ACK received for bssp block, session ended

    S - bssp block received

    T - bssp block popped from reliable transmission queue

    - - unacknowledged best-efforts block requeued for reliable transmission

    * - session canceled locally by sender

    "},{"location":"LTP-UComm-API/","title":"LTP Underlying Communications API","text":"

    In the Licklider Transmission Protocol (LTP) Specification issued by CCSDS (CCSDS 734.1-B-1), the elements of an LTP architecture are shown as follows:

    The LTP Engine and MIB are implemented and configured by ION, and the Client Service Instance is either BPv6 or BPv7. The storage is provided by the host system through the ICI APIs.

    The Underlying Communication Protocol element is responsible for data and control message exchanges between two peer LTP Engines. It is not responsible for flow control, error correction/detection, or in-order delivery.

    For a spacecraft, the LTP Engine executes the LTP protocol logic and hands the LTP segments to the underlying communication service, provided in the form of a simple UDP socket or a radio-frequency/optical telecommunication system. In ION, the standard underlying communications protocol is UDP, since it is widely available on terrestrial computer systems. In actual deployments, the UDP protocol may need to be substituted by a different ground-based or flight communications system.

    In this document we describe a few essential APIs that allow any externally implemented underlying communication protocol to interface with the LTP engine and perform the most basic tasks of (a) submitting received LTP segments to the LTP Engine for processing and (b) acquiring LTP segments from the LTP Engine for transmission to the peer.

    "},{"location":"LTP-UComm-API/#connecting-to-the-ltp-engine","title":"Connecting to the LTP Engine","text":"

    There are several steps for an external application to connect to LTP:

    1. The ltp service must be running on the host system. The ltp service is started by the ION system and is configured by the .ltprc file processed by ltpadmin. See the Configuration File Tutorial to understand how BP and LTP services are instantiated.
      • Typically, to ensure that the ltp service is running before the communication protocol tries to connect to it, the underlying communication protocol service is invoked as part of LTP instantiation. See the manual page for ltprc for more details.
    2. The external application must make sure LTP is initialized by calling the ltpInit() API.
    3. Once the ltpInit call returns successfully, the application must obtain access to the ION SDR and determine the associated LTP span (based on a peer engine number) for which communication service will be provisioned. This is done using the findSpan() API. A span defines the communication parameters between two LTP engine peers.
    4. Acquire the semaphore used by the associated LTP engine - for the span - to indicate the availability of a segment for transmission. The presence of a valid semaphore also indicates that the span is currently active.
    5. Use the ltpDequeueOutboundSegment API to acquire each available segment from the LTP Engine for transmission to the peer entity.

    In the following section we will describe the private APIs used by the underlying communication protocols. There are other APIs for external processes to use LTP as a reliable point-to-point data transmission service, but they are not described in this document; they are available in the manual pages.

    "},{"location":"LTP-UComm-API/#ltp-data-structure","title":"LTP Data Structure","text":"

    Here is a diagram of the major LTP data structures and their relationships.

    +----------------------------------+----------------------------------+\n|                                  |                                  |\n| non-volatile (SDR heap)          |    volatile (working memory ION) |\n|                                  |                                  |\n|                                  |                                  |\n| LtpDB                            |    LtpVdb                        |\n|   +      (list)                  |     +       (list)               |\n|   +---> spans +--+               |     +-----+ spans+------+        |\n|   |              |               |     |                   |        |\n|   +---> seats +---------+        |     +-----+ seats+---+  |        |\n|                  |      |        |                      |  |        |\n|                  |      |        |                      |  |        |\n| LtpSpan <--------+      |        |     LtpVspan <----------+        |\n|   +                     |        |       +              |           |\n|   +---> importSessions+----+     |       +-> importSessions+--+     |\n|   |       (list)        |  |     |             (list)   |     |     |\n|   +---> exportSessions+------+   |                      |     |     |\n|                         |  | |   |     LtpVseat <-------+     |     |\n| LtpSeat <---------------+  | |   |                            |     |\n|                            | |   |                            |     |\n|                            | |   |                            |     |\n| LtpImportSession <---------+ |   |     LtpVImportSession<-----+     |\n|                              |   |                                  |\n|                              |   |                                  |\n| LtpExportSession <-----------+   |                                  |\n|                                  |                                  |\n+----------------------------------+----------------------------------+\n
    "},{"location":"LTP-UComm-API/#ltp-apis-for-implementation-of-underlying-communication-protocol","title":"LTP APIs for implementation of underlying communication protocol","text":""},{"location":"LTP-UComm-API/#header","title":"Header","text":"
    #include \"ltpP.h\"\n
    "},{"location":"LTP-UComm-API/#ltpinit","title":"ltpInit","text":"

    Function Prototype

    extern int  ltpInit(int estMaxExportSessions);\n

    Parameters

    Return Value

    Example Call

    /*  Note that ltpadmin must be run before the first\n *  invocation of ltplso, to initialize the LTP database\n *  (as necessary) and dynamic database.*/\n\nif (ltpInit(0) < 0)\n{\n    putErrmsg(\"aoslso can't initialize LTP.\", NULL);\n\n    /* user error handling routine here */\n}\n

    Description

    This call attaches to ION and either initializes a new LTP database or loads the LTP database of an existing service. If the value of estMaxExportSessions is positive and no existing LTP service is found, then the LTP service will be initialized with the specified maximum number of export sessions. If the value of estMaxExportSessions is zero or negative, then ltpInit will load the LTP database, or quit if no existing LTP service is found. NOTE: for an underlying communication protocol implementation, calling ltpInit(0) is appropriate, since the intention is only to load an existing LTP service.

    Once an LTP service is either found or initialized, ltpInit loads the address of the LTP database object, defined by LtpDB in ltpP.h.

    "},{"location":"LTP-UComm-API/#findspan","title":"findSpan","text":"

    Function Prototype

    void findSpan(uvast engineId, LtpVspan **vspan, PsmAddress *vspanElt);\n

    Parameters

    Return Value

    Example Code

    sdr = getIonsdr();\nCHKZERO(sdr_begin_xn(sdr)); /*  Lock SDR.   */\nfindSpan(remoteEngineId, &vspan, &vspanElt);\nif (vspanElt == 0)\n{\n    sdr_exit_xn(sdr);\n    putErrmsg(\"No such engine in database.\", itoa(remoteEngineId));\n    /* user error handling routine here */\n}\n\nif (vspan->lsoPid != ERROR && vspan->lsoPid != sm_TaskIdSelf())\n{\n    sdr_exit_xn(sdr);\n    putErrmsg(\"LSO task is already started for this span.\",\n        itoa(vspan->lsoPid));\n    /* user error handling routine here */\n}\n\n/* unlock the SDR */\nsdr_exit_xn(sdr);\n

    Description

    This function searches the volatile database for the span that corresponds to the specified engine number. If the span is found, then a pointer to the span object is stored in the vspan parameter and the address of the span object's element in the volatile database's list of spans is stored in the vspanElt parameter. If the span is not found, then the vspanElt parameter is set to 0.

    Note: in addition to checking the value of vspanElt, one can also check the process ID of the span's LSO task (the LTP output process, i.e., the underlying communication protocol) to verify that the span is not already being serviced by another protocol implementation.

    "},{"location":"LTP-UComm-API/#ltpdequeueoutboundsegment","title":"ltpDequeueOutboundSegment","text":"

    Function Prototype

    extern int ltpDequeueOutboundSegment(LtpVspan *vspan, char **buf);\n

    Parameters

    Return Value

    Example Code

    segmentLength = ltpDequeueOutboundSegment(vspan, &segment);\nif (segmentLength < 0)\n{\n    /* handle error */\n}\n\nif (segmentLength == 0)\n{\n    /* session is closed, take appropriate action */\n\n}\n\n/* transmit the segment */\n

    Description:

    This function dequeues an LTP segment, gated by the segSemaphore in the vspan object, into a buffer for the calling task to process for transmission. The returned value is the length of the dequeued LTP segment; 0 if the segment belongs to a session that has already closed (therefore no action is required); and -1 if an error occurred.

    If this call is implemented in a loop, then it is suggested that the loop monitor the segSemaphore in vspan to detect termination of the semaphore, using the sm_SemEnded(vspan->segSemaphore) call. If the semaphore has ended, the span associated with this underlying communication protocol instance has ended; this is the right time to end the task itself.

    After each successful iteration of the loop, it is recommended that you call sm_TaskYield() to give other tasks a chance to run. A good code example to read is the udplso.c program.
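Putting these recommendations together, the overall shape of an LSO transmission loop can be sketched as follows (C-style pseudocode modeled on udplso.c; it assumes the ION headers and a prior findSpan() call, transmitSegment() is a placeholder for the protocol-specific send routine, and detailed error handling is elided):

```
/* Sketch only -- not compilable standalone. */
while (1)
{
    segmentLength = ltpDequeueOutboundSegment(vspan, &segment);
    if (segmentLength < 0)
    {
        break;                  /* error; stop the task */
    }

    if (segmentLength == 0)     /* session closed; nothing to send */
    {
        continue;
    }

    transmitSegment(segment, segmentLength);    /* protocol-specific */

    if (sm_SemEnded(vspan->segSemaphore))
    {
        break;                  /* span has ended; end this task too */
    }

    sm_TaskYield();             /* give other tasks a chance to run */
}
```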

    "},{"location":"LTP-UComm-API/#ltphandleinboundsegment","title":"ltpHandleInboundSegment","text":"

    Function Prototype

    int ltpHandleInboundSegment(char *buf, int length)\n

    Parameters

    Return Value

    Example Code

    if (ltpHandleInboundSegment(buffer, segmentLength) < 0)\n{\n    putErrmsg(\"Can't handle inbound segment.\", NULL);\n    /* handle error here */\n}\n

    Description

    This function submits received LTP segments to the LTP engine for processing. The return value is 0 if the segment is successfully handled and -1 if an error occurred. A successfully handled segment includes cases where the segment is ignored for any of several possible non-critical, non-fatal discrepancies, such as a wrong LTP version number, a closed session number, or a session under cancellation (in which case the segment is not processed) - conditions that may occur under nominal operation.

    To develop your own underlying communication protocol implementation to support LTP, the udplsi.c and udplso.c programs are good templates to use.

    "},{"location":"License/","title":"License","text":"

    NO WARRANTY:

    DISCLAIMER

    THE SOFTWARE AND/OR RELATED MATERIALS ARE PROVIDED \"AS-IS\" WITHOUT WARRANTY OF ANY KIND INCLUDING ANY WARRANTIES OF PERFORMANCE OR MERCHANTABILITY OR FITNESS FOR A PARTICULAR USE OR PURPOSE (AS SET FORTH IN UCC 2312-2313) OR FOR ANY PURPOSE WHATSOEVER, FOR THE LICENSED PRODUCT, HOWEVER USED.

    IN NO EVENT SHALL CALTECH/JPL BE LIABLE FOR ANY DAMAGES AND/OR COSTS, INCLUDING BUT NOT LIMITED TO INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND, INCLUDING ECONOMIC DAMAGE OR INJURY TO PROPERTY AND LOST PROFITS, REGARDLESS OF WHETHER CALTECH/JPL SHALL BE ADVISED, HAVE REASON TO KNOW, OR IN FACT SHALL KNOW OF THE POSSIBILITY.

    USER BEARS ALL RISK RELATING TO QUALITY AND PERFORMANCE OF THE SOFTWARE AND/OR RELATED MATERIALS.

    Copyright 2002-2013, by the California Institute of Technology. ALL RIGHTS RESERVED. U.S. Government Sponsorship acknowledged.

    This software and/or related materials may be subject to U.S. export control laws. By accepting this software and related materials, the user agrees to comply with all applicable U.S. export laws and regulations. User has the responsibility to obtain export licenses or other export authority as may be required before exporting the software or related materials to foreign countries or providing access to foreign persons.

    The QCBOR code included is distributed with the following condition

    Copyright (c) 2016-2018, The Linux Foundation. Copyright (c) 2018-2019, Laurence Lundblade. All rights reserved.

    Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of The Linux Foundation nor the names of its contributors, nor the name \"Laurence Lundblade\" may be used to endorse or promote products derived from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    "},{"location":"List-of-Papers/","title":"List of Papers","text":""},{"location":"Platform-Macros-Error-Reporting/","title":"Platform Macros & Error Reporting","text":"

    From manual page for \"platform\"

    "},{"location":"Platform-Macros-Error-Reporting/#platform-compatibility","title":"Platform Compatibility","text":"

    The platform library \"patches\" the APIs of supported OS's to guarantee that all of the following items may be utilized by application software:

    The strchr(), strrchr(), strcasecmp(), and strncasecmp() functions.\n\nThe unlink(), getpid(), and gettimeofday() functions.\n\nThe select() function.\n\nThe FD_BITMAP macro (used by select()).\n\nThe MAXHOSTNAMELEN macro.\n\nThe NULL macro.\n\nThe timer_t type definition.\n
    "},{"location":"Platform-Macros-Error-Reporting/#platform-generic-macros-functions","title":"Platform Generic Macros & Functions","text":"

    The generic macros and functions in this section may be used in place of comparable O/S-specific functions, to enhance the portability of code. (The implementations of these macros and functions are no-ops in environments in which they are inapplicable, so they're always safe to call.)

    "},{"location":"Platform-Macros-Error-Reporting/#fdtable_size","title":"FDTABLE_SIZE","text":"

    The FDTABLE_SIZE macro returns the total number of file descriptors defined for the process (or VxWorks target).

    "},{"location":"Platform-Macros-Error-Reporting/#ion_path_delimiter","title":"ION_PATH_DELIMITER","text":"

    The ION_PATH_DELIMITER macro returns the ASCII character -- either '/' or '\\' -- that is used as a directory name delimiter in path names for the file system used by the local platform.

    "},{"location":"Platform-Macros-Error-Reporting/#ok","title":"oK","text":"

    oK(expression)\n
    The oK macro simply casts the value of expression to void, a way of handling function return codes that are not meaningful in this context.

    "},{"location":"Platform-Macros-Error-Reporting/#chkerr","title":"CHKERR","text":"
    CHKERR(condition)\n

    The CHKERR macro is an \"assert\" mechanism. It causes the calling function to return -1 immediately if condition is false.

    "},{"location":"Platform-Macros-Error-Reporting/#chkzero","title":"CHKZERO","text":"
    CHKZERO(condition)\n

    The CHKZERO macro is an \"assert\" mechanism. It causes the calling function to return 0 immediately if condition is false.

    "},{"location":"Platform-Macros-Error-Reporting/#chknull","title":"CHKNULL","text":"

    CHKNULL(condition)\n
    The CHKNULL macro is an \"assert\" mechanism. It causes the calling function to return NULL immediately if condition is false.

    "},{"location":"Platform-Macros-Error-Reporting/#chkvoid","title":"CHKVOID","text":"
    CHKVOID(condition)\n

    The CHKVOID macro is an \"assert\" mechanism. It causes the calling function to return immediately if condition is false.

    "},{"location":"Platform-Macros-Error-Reporting/#snooze","title":"snooze","text":"

    void snooze(unsigned int seconds)\n
    Suspends execution of the invoking task or process for the indicated number of seconds.

    "},{"location":"Platform-Macros-Error-Reporting/#microsnooze","title":"microsnooze","text":"
    void microsnooze(unsigned int microseconds)\n

    Suspends execution of the invoking task or process for the indicated number of microseconds.

    "},{"location":"Platform-Macros-Error-Reporting/#getcurrenttime","title":"getCurrentTime","text":"
    void getCurrentTime(struct timeval *time)\n

    Returns the current local time (ctime, i.e., Unix epoch time) in a timeval structure (see gettimeofday(3C)).

    "},{"location":"Platform-Macros-Error-Reporting/#isprintf","title":"isprintf","text":"
    void isprintf(char *buffer, int bufSize, char *format, ...)\n

    isprintf() is a safe, portable implementation of snprintf(); see the snprintf(P) man page for details. isprintf() differs from snprintf() in that it always NULL-terminates the string in buffer, even if the length of the composed string would equal or exceed bufSize. Buffer overruns are reported by log message; unlike snprintf(), isprintf() returns void.

    "},{"location":"Platform-Macros-Error-Reporting/#istrlen","title":"istrlen","text":"
    size_t istrlen(const char *sourceString, size_t maxlen)\n

    istrlen() is a safe implementation of strlen(); see the strlen(3) man page for details. istrlen() differs from strlen() in that it takes a second argument, the maximum valid length of sourceString. The function returns the number of non-NULL characters in sourceString preceding the first NULL character in sourceString, provided that a NULL character appears somewhere within the first maxlen characters of sourceString; otherwise it returns maxlen.

    "},{"location":"Platform-Macros-Error-Reporting/#istrcpy","title":"istrcpy","text":"
    char *istrcpy(char *buffer, char *sourceString, int bufSize)\n

    istrcpy() is a safe implementation of strcpy(); see the strcpy(3) man page for details. istrcpy() differs from strcpy() in that it takes a third argument, the total size of the buffer into which sourceString is to be copied. istrcpy() always NULL-terminates the string in buffer, even if the length of sourceString string would equal or exceed bufSize (in which case sourceString is truncated to fit within the buffer).

    "},{"location":"Platform-Macros-Error-Reporting/#istrcat","title":"istrcat","text":"
    char *istrcat(char *buffer, char *sourceString, int bufSize)\n

    istrcat() is a safe implementation of strcat(); see the strcat(3) man page for details. istrcat() differs from strcat() in that it takes a third argument, the total size of the buffer for the string that is being aggregated. istrcat() always NULL-terminates the string in buffer, even if the length of sourceString string would equal or exceed the sum of bufSize and the length of the string currently occupying the buffer (in which case sourceString is truncated to fit within the buffer).

    "},{"location":"Platform-Macros-Error-Reporting/#igetcwd","title":"igetcwd","text":"
    char *igetcwd(char *buf, size_t size)\n

    igetcwd() is normally just a wrapper around getcwd(3). It differs from getcwd(3) only when FSWWDNAME is defined, in which case the implementation of igetcwd() must be supplied in an included file named \"wdname.c\"; this adaptation option accommodates flight software environments in which the current working directory name must be configured rather than discovered at run time.

    "},{"location":"Platform-Macros-Error-Reporting/#isignal","title":"isignal","text":"
    void isignal(int signbr, void (*handler)(int))\n

    isignal() is a portable, simplified interface to signal handling that is functionally indistinguishable from signal(P). It assures that reception of the indicated signal will interrupt system calls in SVR4 fashion, even when running on a FreeBSD platform.

    "},{"location":"Platform-Macros-Error-Reporting/#iblock","title":"iblock","text":"
    void iblock(int signbr)\n

    iblock() simply prevents reception of the indicated signal by the calling thread. It provides a means of controlling which of the threads in a process will receive the signal cited in an invocation of isignal().

    "},{"location":"Platform-Macros-Error-Reporting/#ifopen","title":"ifopen","text":"
    int ifopen(const char *fileName, int flags, int pmode)\n

    ifopen() is a portable function for opening \"regular\" files. It operates in exactly the same way as open() except that it fails (returning -1) if fileName does not identify a regular file, i.e., it's a directory, a named pipe, etc.

    NOTE that ION also provides iopen() which is nothing more than a portable wrapper for open(). iopen() can be used to open a directory, for example.

    "},{"location":"Platform-Macros-Error-Reporting/#igets","title":"igets","text":"
    char *igets(int fd, char *buffer, int buflen, int *lineLen)\n

    igets() reads a line of text, delimited by a newline character, from fd into buffer and writes a NULL character at the end of the string. The newline character itself is omitted from the NULL-terminated text line in buffer; if the newline is immediately preceded by a carriage return character (i.e., the line is from a DOS text file), then the carriage return character is likewise omitted from the NULL-terminated text line in buffer. End of file is interpreted as an implicit newline, terminating the line. If the number of characters preceding the newline is greater than or equal to buflen, only the first (buflen - 1) characters of the line are written into buffer. On error the function sets lineLen to -1 and returns NULL. On reading end-of-file, the function sets lineLen to zero and returns NULL. Otherwise the function sets *lineLen to the length of the text line in buffer, as if from strlen(3), and returns buffer.

    "},{"location":"Platform-Macros-Error-Reporting/#iputs","title":"iputs","text":"
    int iputs(int fd, char *string)\n

    iputs() writes to fd the NULL-terminated character string at string. No terminating newline character is appended to string by iputs(). On error the function returns -1; otherwise the function returns the length of the character string written to fd, as if from strlen(3).

    "},{"location":"Platform-Macros-Error-Reporting/#strtovast","title":"strtovast","text":"
    vast strtovast(char *string)\n

    Converts the leading characters of string, skipping leading white space and ending at the first subsequent character that can't be interpreted as contributing to a numeric value, to a vast integer and returns that integer.

    "},{"location":"Platform-Macros-Error-Reporting/#strtouvast","title":"strtouvast","text":"
    uvast strtouvast(char *string)\n

    Same as strtovast() except the result is an unsigned vast integer value.

    "},{"location":"Platform-Macros-Error-Reporting/#findtoken","title":"findToken","text":"
    void findToken(char **cursorPtr, char **token)\n

    Locates the next non-whitespace lexical token in a character array, starting at cursorPtr. The function NULL-terminates that token within the array and places a pointer to the token in token. Also accommodates tokens enclosed within matching single quotes, which may contain embedded spaces and escaped single-quote characters. If no token is found, *token contains NULL on return from this function.

    "},{"location":"Platform-Macros-Error-Reporting/#acquiresystemmemory","title":"acquireSystemMemory","text":"
    void *acquireSystemMemory(size_t size)\n

    Uses memalign() to allocate a block of system memory of length size, starting at an address that is guaranteed to be an integral multiple of the size of a pointer to void, and initializes the entire block to binary zeroes. Returns the starting address of the allocated block on success; returns NULL on any error.

    "},{"location":"Platform-Macros-Error-Reporting/#createfile","title":"createFile","text":"
    int createFile(const char *name, int flags)\n

    Creates a file of the indicated name, using the indicated file creation flags. This function provides common file creation functionality across VxWorks and Unix platforms, invoking creat() under VxWorks and open() elsewhere. For return values, see creat(2) and open(2).

    "},{"location":"Platform-Macros-Error-Reporting/#getinternetaddress","title":"getInternetAddress","text":"
    unsigned int getInternetAddress(char *hostName)\n

    Returns the IP address of the indicated host machine, or zero if the address cannot be determined.

    "},{"location":"Platform-Macros-Error-Reporting/#getinternethostname","title":"getInternetHostName","text":"
    char *getInternetHostName(unsigned int hostNbr, char *buffer)\n

    Writes the host name of the indicated host machine into buffer and returns buffer, or returns NULL on any error. The size of buffer should be (MAXHOSTNAMELEN + 1).

    "},{"location":"Platform-Macros-Error-Reporting/#getnameofhost","title":"getNameOfHost","text":"
    int getNameOfHost(char *buffer, int bufferLength)\n

    Writes the first (bufferLength - 1) characters of the host name of the local machine into buffer. Returns 0 on success, -1 on any error.

    "},{"location":"Platform-Macros-Error-Reporting/#getaddressofhost","title":"getAddressOfHost","text":"
    unsigned int getAddressOfHost()\n

    Returns the IP address for the host name of the local machine, or 0 on any error.

    "},{"location":"Platform-Macros-Error-Reporting/#parsesocketspec","title":"parseSocketSpec","text":"
    void parseSocketSpec(char *socketSpec, unsigned short *portNbr, unsigned int *hostNbr)\n

Parses socketSpec, extracting host number (IP address) and port number from the string. socketSpec is expected to be of the form \"{ @ | hostname }[:portNbr]\", where @ signifies \"the host name of the local machine\". If host number can be determined, writes it into hostNbr; otherwise writes 0 into hostNbr. If port number is supplied and is in the range 1024 to 65535, writes it into portNbr; otherwise writes 0 into portNbr."},{"location":"Platform-Macros-Error-Reporting/#printdottedstring","title":"printDottedString","text":"

    void printDottedString(unsigned int hostNbr, char *buffer)\n

    Composes a dotted-string (xxx.xxx.xxx.xxx) representation of the IPv4 address in hostNbr and writes that string into buffer. The length of buffer must be at least 16.

    "},{"location":"Platform-Macros-Error-Reporting/#getnameofuser","title":"getNameOfUser","text":"
    char *getNameOfUser(char *buffer)\n

Writes the user name of the invoking task or process into buffer and returns buffer. The size of buffer must be at least L_cuserid, a constant defined in the stdio.h header file.

    "},{"location":"Platform-Macros-Error-Reporting/#reuseaddress","title":"reUseAddress","text":"
    int reUseAddress(int fd)\n

    Makes the address that is bound to the socket identified by fd reusable, so that the socket can be closed and immediately reopened and re-bound to the same port number. Returns 0 on success, -1 on any error.

    "},{"location":"Platform-Macros-Error-Reporting/#makeiononblocking","title":"makeIoNonBlocking","text":"
    int makeIoNonBlocking(int fd)\n

    Makes I/O on the socket identified by fd non-blocking; returns -1 on failure. An attempt to read on a non-blocking socket when no data are pending, or to write on it when its output buffer is full, will not block; it will instead return -1 and cause errno to be set to EWOULDBLOCK.

    "},{"location":"Platform-Macros-Error-Reporting/#watchsocket","title":"watchSocket","text":"
    int watchSocket(int fd)\n

    Turns on the \"linger\" and \"keepalive\" options for the socket identified by fd. See socket(2) for details. Returns 0 on success, -1 on any failure.

    "},{"location":"Platform-Macros-Error-Reporting/#closeonexec","title":"closeOnExec","text":"
    void closeOnExec(int fd)\n

    Ensures that fd will NOT be open in any child process fork()ed from the invoking process. Has no effect on a VxWorks platform.

    "},{"location":"Platform-Macros-Error-Reporting/#exception-reporting","title":"Exception Reporting","text":"

    The functions in this section offer platform-independent capabilities for reporting on processing exceptions.

The underlying mechanism for ICI's exception reporting is a pair of functions that record error messages in a privately managed pool of static memory. These functions -- postErrmsg() and postSysErrmsg() -- are designed to return very rapidly with no possibility of failing themselves. Nonetheless they are not safe to call from an interrupt service routine (ISR). Although each merely copies its text to the next available location in the error message memory pool, that pool is protected by a mutex; multiple processes might be queued up to take that mutex, so the total time to execute the function is non-deterministic.

    Built on top of postErrmsg() and postSysErrmsg() are the putErrmsg() and putSysErrmsg() functions, which may take longer to return. Each one simply calls the corresponding \"post\" function but then calls the writeErrmsgMemos() function, which calls writeMemo() to print (or otherwise deliver) each message currently posted to the pool and then destroys all of those posted messages, emptying the pool.

    Recommended general policy on using the ICI exception reporting functions (which the functions in the ION distribution libraries are supposed to adhere to) is as follows:

In the implementation of any ION library function or any ION\ntask's top-level driver function, any condition that prevents\nthe function from continuing execution toward producing the\neffect it is designed to produce is considered an \"error\".\n\nDetection of an error should result in the printing of an\nerror message and, normally, the immediate return of whatever\nreturn value is used to indicate the failure of the function\nin which the error was detected.  By convention this value\nis usually -1, but both zero and NULL are appropriate\nfailure indications under some circumstances such as object\ncreation.\n\nThe CHKERR, CHKZERO, CHKNULL, and CHKVOID macros are used to\nimplement this behavior in a standard and lexically terse\nmanner.  Use of these macros offers an additional feature:\nfor debugging purposes, they can easily be configured to\ncall sm_Abort() to terminate immediately with a core dump\ninstead of returning an error indication.  This option is\nenabled by setting the compiler parameter CORE_FILE_NEEDED\nto 1 at compilation time.\n\nIn the absence of any error, the function returns a\nvalue that indicates nominal completion.  By convention this\nvalue is usually zero, but under some circumstances other\nvalues (such as pointers or addresses) are appropriate\nindications of nominal completion.  
Any additional information\nproduced by the function, such as an indication of \"success\",\nis usually returned as the value of a reference argument.\n[Note, though, that database management functions and the\nSDR hash table management functions deviate from this rule:\nmost return 0 to indicate nominal completion but functional\nfailure (e.g., duplicate key or object not found) and return\n1 to indicate functional success.]\n\nSo when returning a value that indicates nominal completion\nof the function -- even if the result might be interpreted\nas a failure at a higher level (e.g., an object identified\nby a given string is not found, through no failure of the\nsearch function) -- do NOT invoke putErrmsg().\n\nUse putErrmsg() and putSysErrmsg() only when functions are\nunable to proceed to nominal completion.  Use writeMemo()\nor writeMemoNote() if you just want to log a message.\n\nWhenever returning a value that indicates an error:\n\n        If the failure is due to the failure of a system call\n        or some other non-ION function, assume that errno\n        has already been set by the function at the lowest\n        layer of the call stack; use putSysErrmsg (or\n        postSysErrmsg if in a hurry) to describe the nature\n        of the activity that failed.  The text of the error\n        message should normally start with a capital letter\n        and should NOT end with a period.\n\n        Otherwise -- i.e., the failure is due to a condition\n        that was detected within ION -- use putErrmsg (or\n        postErrmsg if pressed for time) to describe the nature\n        of the failure condition.  This will aid in tracing\n        the failure through the function stack in which the\n        failure was detected.  
The text of the error message\n        should normally start with a capital letter and should\n        end with a period.\n\nWhen a failure in a called function is reported to \"driver\"\ncode in an application program, before continuing or exiting\nuse writeErrmsgMemos() to empty the message pool and print a\nsimple stack trace identifying the failure.\n
    "},{"location":"Platform-Macros-Error-Reporting/#system_error_msg","title":"system_error_msg()","text":"
    char *system_error_msg( )\n

    Returns a brief text string describing the current system error, as identified by the current value of errno.

    "},{"location":"Platform-Macros-Error-Reporting/#setlogger","title":"setLogger","text":"
    void setLogger(Logger usersLoggerName)\n

    Sets the user function to be used for writing messages to a user-defined \"log\" medium. The logger function's calling sequence must match the following prototype:

    void    usersLoggerName(char *msg);\n

    The default Logger function simply writes the message to standard output.

    "},{"location":"Platform-Macros-Error-Reporting/#writememo","title":"writeMemo","text":"
    void writeMemo(char *msg)\n

    Writes one log message, using the currently defined message logging function. To construct a more complex string, it is customary and safer to use the isprintf function to build a message string first, and then pass that string as an argument to writeMemo.

    "},{"location":"Platform-Macros-Error-Reporting/#writememonote","title":"writeMemoNote","text":"
    void writeMemoNote(char *msg, char *note)\n

Writes a log message like writeMemo(), accompanied by the user-supplied context-specific text string in note. The text string can also be built separately using isprintf().

    "},{"location":"Platform-Macros-Error-Reporting/#writeerrmemo","title":"writeErrMemo","text":"
    void writeErrMemo(char *msg)\n

    Writes a log message like writeMemo(), accompanied by text describing the current system error.

    "},{"location":"Platform-Macros-Error-Reporting/#itoa","title":"itoa","text":"
    char *itoa(int value)\n

    Returns a string representation of the signed integer in value, nominally for immediate use as an argument to putErrmsg(). [Note that the string is constructed in a static buffer; this function is not thread-safe.]

    "},{"location":"Platform-Macros-Error-Reporting/#utoa","title":"utoa","text":"
    char *utoa(unsigned int value)\n

    Returns a string representation of the unsigned integer in value, nominally for immediate use as an argument to putErrmsg(). [Note that the string is constructed in a static buffer; this function is not thread-safe.]

    "},{"location":"Platform-Macros-Error-Reporting/#posterrmsg","title":"postErrmsg","text":"
    void postErrmsg(char *text, char *argument)\n

    Constructs an error message noting the name of the source file containing the line at which this function was called, the line number, the text of the message, and -- if not NULL -- a single textual argument that can be used to give more specific information about the nature of the reported failure (such as the value of one of the arguments to the failed function). The error message is appended to the list of messages in a privately managed pool of static memory, ERRMSGS_BUFSIZE bytes in length.

    If text is NULL or is a string of zero length or begins with a newline character (i.e., *text == '\\0' or '\\n'), the function returns immediately and no error message is recorded.

    The errmsgs pool is designed to be large enough to contain error messages from all levels of the calling stack at the time that an error is encountered. If the remaining unused space in the pool is less than the size of the new error message, however, the error message is silently omitted. In this case, provided at least two bytes of unused space remain in the pool, a message comprising a single newline character is appended to the list to indicate that a message was omitted due to excessive length.

    "},{"location":"Platform-Macros-Error-Reporting/#postsyserrmsg","title":"postSysErrmsg","text":"
    void postSysErrmsg(char *text, char *arg)\n

    Like postErrmsg() except that the error message constructed by the function additionally contains text describing the current system error. text is truncated as necessary to assure that the sum of its length and that of the description of the current system error does not exceed 1021 bytes.

    "},{"location":"Platform-Macros-Error-Reporting/#geterrmsg","title":"getErrmsg","text":"
    int getErrmsg(char *buffer)\n

    Copies the oldest error message in the message pool into buffer and removes that message from the pool, making room for new messages. Returns zero if the message pool cannot be locked for update or there are no more messages in the pool; otherwise returns the length of the message copied into buffer. Note that, for safety, the size of buffer should be ERRMSGS_BUFSIZE.

    Note that a returned error message comprising only a single newline character always signifies an error message that was silently omitted because there wasn't enough space left on the message pool to contain it.

    "},{"location":"Platform-Macros-Error-Reporting/#writeerrmsgmemos","title":"writeErrmsgMemos","text":"
    void writeErrmsgMemos( )\n

    Calls getErrmsg() repeatedly until the message pool is empty, using writeMemo() to log all the messages in the pool. Messages that were omitted due to excessive length are indicated by logged lines of the form \"[message omitted due to excessive length]\".

    "},{"location":"Platform-Macros-Error-Reporting/#puterrmsg","title":"putErrmsg","text":"
    void putErrmsg(char *text, char *argument)\n

    The putErrmsg() function merely calls postErrmsg() and then writeErrmsgMemos().

    "},{"location":"Platform-Macros-Error-Reporting/#putsyserrmsg","title":"putSysErrmsg","text":"
    void putSysErrmsg(char *text, char *arg)\n

    The putSysErrmsg() function merely calls postSysErrmsg() and then writeErrmsgMemos().

    "},{"location":"Platform-Macros-Error-Reporting/#discarderrmsgs","title":"discardErrmsgs","text":"
    void discardErrmsgs( )\n

    Calls getErrmsg() repeatedly until the message pool is empty, discarding all of the messages.

    "},{"location":"Platform-Macros-Error-Reporting/#printstacktrace","title":"printStackTrace","text":"
    void printStackTrace( )\n

    On Linux machines only, uses writeMemo() to print a trace of the process's current execution stack, starting with the lowest level of the stack and proceeding to the main() function of the executable.

    Note that (a) printStackTrace() is only implemented for Linux platforms at this time; (b) symbolic names of functions can only be printed if the -rdynamic flag was enabled when the executable was linked; (c) only the names of non-static functions will appear in the stack trace.

    For more complete information about the state of the executable at the time the stack trace snapshot was taken, use the Linux addr2line tool. To do this, cd into a directory in which the executable file resides (such as /opt/bin) and submit an addr2line command as follows:

addr2line -e name_of_executable stack_frame_address\n
where both name_of_executable and stack_frame_address are taken from one of the lines of the printed stack trace. addr2line will print the source file name and line number for that stack frame.

    "},{"location":"Use-Cases/","title":"Use Cases for ION","text":""},{"location":"Use-Cases/#current-deployment-of-ion-ion-integrated-systems","title":"\ud83d\ude80 Current Deployment of ION & ION-Integrated Systems","text":""},{"location":"Using-LTP-Config-Tool/","title":"A Guide to Configuring LTP in ION","text":"

    Scott Burleigh, Jay Gao, and Leigh Torgerson

    Jet Propulsion Laboratory, California Institute of Technology

    Version 4.1.3

    "},{"location":"Using-LTP-Config-Tool/#introduction","title":"Introduction","text":"

    ION open source comes with an Excel spreadsheet to help users configure the LTP protocol to optimize performance based on each user's unique use case.

    ION's implementation of LTP is challenging to configure: there are a lot of configuration parameters to set, because the design is intended to support a very wide variety of deployment scenarios that are optimized for a variety of different figures of merit (utility metrics).

    LTP-ION is managed as a collection of \"spans\", that is, transmission/reception relationships between the local LTP engine (the engine -- or DTN \"node\" -- that you are configuring) and each other LTP engine with which the local engine can exchange LTP protocol segments. Spans are managed using functions defined in libltpP.c that are offered to the operator by the ltpadmin program.

    ltpadmin can be used to add a span, update an existing span, delete a span, provide current information on a specified span, or list all spans. The span configuration parameters that must be set when you add or update a span are as follows:

    In addition, at the time you initialize LTP (normally at the start of the ltpadmin configuration file) you must set one further configuration parameter:

    In many cases, the best values for these configuration parameters will not be obvious to the DTN network administrator. To simplify this task, an LTP Configuration Worksheet has been developed.

    "},{"location":"Using-LTP-Config-Tool/#worksheet-overview","title":"Worksheet overview","text":"

    The LTP configuration worksheet is designed to aid in the configuration of a single span -- that is, the worksheet for the span between engines X and Y will provide configuration parameter values for use in commanding ltpadmin on both engine X and engine Y.

    The cells of the worksheet are of two general types, Input Cells and Calculated Cells.

    Some of these cells are used as span configuration parameters or are figures of merit for network administrators:

    Note: Configuration parameters that are described in detail in this document are numbered. To ease cross referencing between this document and the worksheet, the parameter numbers are placed next to the title cells in the worksheet.*

    "},{"location":"Using-LTP-Config-Tool/#input-parameters","title":"Input Parameters","text":"

    This section provides guidance on the values that must be supplied by the network administrator. Global parameters affect calculated values and configuration file parameters for all spans involving the local LTP engine.

    "},{"location":"Using-LTP-Config-Tool/#global-parameters","title":"Global Parameters","text":"

Maximum bit error rate is the maximum bit error rate that LTP should provide for in computing the maximum number of transmission efforts to initiate in the course of transmitting a given block. (Note that this computation is also sensitive to data segment size and to the size of the block that is to be transmitted.) The default value is .000001, i.e., 10^-6^, one uncorrected (but detected) bit error per million bits transmitted.

Report segment size is the estimated size of an LTP report segment in bytes; it may vary slightly depending on the sizes of the session numbers in use. 25 bytes is a reasonable estimate.

    "},{"location":"Using-LTP-Config-Tool/#basic-input-parameters","title":"Basic input Parameters","text":"

    Values for the following parameters must be provided by the network administrator in order for the worksheet to guide the configuration. Values must be provided for both engine \"X\" and engine \"Y\".

    1. The OWLT between engines (sec) is the maximum one-way light time over this span, i.e., the distance between the engines. (Note that this value is assumed to be symmetrical.)
    2. A unique engine number for each engine.
    3. The IP address of each engine. (Assuming udplso will be used as the link service output daemon.)
    4. The LTP reception port number for each engine. (Again assuming udplso will be used as the link service output daemon.)
    5. An estimate of the mean size of the LTP service data units (nominally bundles) sent from this engine over this span.
    6. Link service overhead. The expected number of bytes of link service protocol header information per LTP segment.
    7. Aggregation size limit - this is the service data unit aggregation size limit for LTP. Note that a suggested value for this parameter is automatically computed as described below, based on available return channel capacity.
    8. The scheduled transmission rate (in bytes per second) at which this engine will transmit data over this span when the two engines are in contact.
9. Maximum percentage of channel capacity that may be consumed by LTP report segments. A warning will be displayed if other configuration parameters cause this limit to be breached. There is no actual mechanism in ION to enforce this limit; it is set only so that the estimated report traffic for the current configuration can be checked, and is provided as an aid to the LTP link designer.
    10. An estimate of the percentage of all data sent over this span that will be red data, i.e., will be subject to positive and negative LTP acknowledgment.
    11. Aggregation time limit. The minimum value is 1 second. Increasing this limit can marginally reduce the number of blocks transmitted, and hence protocol overhead, at times of low communication activity. However, it reduces the \"responsiveness\" of the protocol, increasing the maximum possible delay before transmission of any given service data unit. (This delay is referred to as \"data aggregation latency\".)

      • Low communication activity is defined as a rate of presentation of service data to LTP that is less than the aggregation size limit divided by the aggregation time limit.
12. LTP segment size (bytes) is the maximum LTP segment size sent over this span by this engine. Typically, this is the maximum permitted size of the payloads of link-layer protocol data units (frames).
13. The maximum number of export sessions. This implements a form of flow control by placing a limit on the number of concurrent LTP sessions used to transmit blocks. Smaller numbers will result in slower transmission, while higher numbers increase storage resource occupancy. Note that a suggested value for this parameter is automatically computed as described below, based on transmission rate and one-way light time.
    "},{"location":"Using-LTP-Config-Tool/#further-guidance","title":"Further Guidance","text":"

    This section provides further information on the methods used to compute the Calculated Cells and also guidance for Input Cell values.

    "},{"location":"Using-LTP-Config-Tool/#first-order-computed-parameters","title":"First-order Computed Parameters","text":"

    The following parameters are automatically computed based on the values of the basic input parameters.

    1. Estimated \"red\" data transmission rate (bytes/sec) is simply the scheduled transmission rate multiplied by the estimated \"red\" data percentage.
    2. Maximum export data in transit (bytes) is the product of the estimated red data transmission rate and the round-trip light time (which is twice the one-way light time between the engines). This is the maximum amount of red data that cannot yet have been positively acknowledged by the remote engine and therefore must be retained in storage for possible retransmission.
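As a worked example of these two first-order computations (all numbers hypothetical):

```python
def red_data_rate(tx_rate, red_fraction):
    """Estimated 'red' data transmission rate: scheduled rate times red percentage."""
    return tx_rate * red_fraction

def max_export_in_transit(red_rate, owlt_sec):
    """Maximum red data awaiting positive acknowledgment:
    red rate times round-trip light time (twice the one-way light time)."""
    return red_rate * 2 * owlt_sec

rate = red_data_rate(100000, 0.8)             # 100,000 bytes/sec span, 80% red data
print(int(rate))                              # 80000
print(int(max_export_in_transit(rate, 600)))  # 10-minute OWLT -> 96000000
```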
    "},{"location":"Using-LTP-Config-Tool/#configuration-decision-parameters","title":"Configuration decision parameters","text":"

    Values for the following parameters must be chosen by the network administrator on the basis of (a) known project requirements or preferences, (b) the first-order computed parameters, and (c) the computed values of figures of merit that result from tentative parameter value selections, as noted.

    "},{"location":"Using-LTP-Config-Tool/#ltp-initialization-parameters","title":"LTP Initialization Parameters","text":"

    Finally, the remaining LTP initialization parameter can be computed when all span configuration decisions have been made.

    1. Maximum number of import sessions is automatically taken from the remote engine's maximum number of export sessions.

    This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

    "},{"location":"Using-LTP-Config-Tool/#updated-features-may-2021","title":"Updated Features - May 2021","text":"

    This section describes the following features added to the configuration tool as of May 2021:

    1. A \"link\" worksheet has been added to set space link parameters such as frame size and error rate, to compute derived parameters such as maxBer, and to support laboratory Ethernet-based frame-loss simulation.
    2. Conditional formatting has been added to a few entries in the main worksheet to provide visual cues for out-of-range parameters and warning messages to guide parameter selection.
    3. A simple model to estimate the minimum required heapWord size for a one-hop LTP link.
    "},{"location":"Using-LTP-Config-Tool/#link-worksheet","title":"Link Worksheet","text":"

    The recommended workflow for using the LTP configuration tool is to first establish the space link configuration using the link worksheet before attempting to generate an LTP configuration under the main worksheet. The link worksheet has the following input and computed cells:

    "},{"location":"Using-LTP-Config-Tool/#enhancements","title":"Enhancements","text":"

    In the main worksheet described in Section 3, we made the following enhancements:

    "},{"location":"Using-LTP-Config-Tool/#heapword-size-estimate","title":"HeapWord Size Estimate","text":"

    A simple HeapWord size estimate calculation is added to the main worksheet, based on the following assumptions:

    1. The only traffic flows in the system are those between node X and Y using LTP.
    2. The heap is sized to support at least one contact session.
    3. Each contact starts with a clean slate: at the beginning of a contact, the heap space is not occupied by bundles/blocks/segments left over from a previous contact or from other unfinished processes.
    4. Source user data is file-resident. Most ION test utility programs, such as bpsendfile and bpdriver, will keep source data (or create simulated data) in a file for as long as possible, until just prior to transmission by the underlying convergence layer adapter, when pieces of user data are copied into each outgoing convergence layer PDU. Please check how your software uses the BP API to determine how source data is handled. If in doubt, you may need to increase the heap space allocation to hold the user's source data.
    5. Aggregated LTP blocks are size-limited (not time-limited).
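Under these assumptions, a back-of-the-envelope heap estimate can be sketched as the red data in flight during one round-trip light time plus a margin for BP/LTP bookkeeping. The formula and the 1.5x overhead factor below are illustrative assumptions, not the worksheet's actual model:

```python
def heap_estimate_bytes(tx_rate, red_fraction, owlt_sec, overhead_factor=1.5):
    """Rough lower bound on heap space for one contact (assumed model)."""
    # Red data that may await acknowledgment over one round-trip light time.
    in_flight = tx_rate * red_fraction * 2 * owlt_sec
    # Inflate by an assumed factor for LTP session and BP bundle structures.
    return int(in_flight * overhead_factor)

print(heap_estimate_bytes(100000, 0.8, 600))  # 144000000
```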
    "},{"location":"Using-LTP-Config-Tool/#user-input","title":"User Input:","text":""},{"location":"Using-LTP-Config-Tool/#model-output","title":"Model Output:","text":""},{"location":"Using-LTP-Config-Tool/#appendix","title":"Appendix","text":""},{"location":"Using-LTP-Config-Tool/#bpltp-memory-usage-analysis-summary","title":"BP/LTP Memory Usage Analysis Summary","text":"

    In this section, we summarize the findings documented in a PowerPoint presentation titled \"ION DTN/LTP Configuration and ION Memory Usage Analysis\", dated January 2021, which is used as the basis for estimating the heap space required for BP/LTP operation in ION:

    "},{"location":"Using-LTP-Config-Tool/#acknowledgements","title":"Acknowledgements","text":"

    Nik Ansell co-authored/contributed to the 2016 version of this document, which has been updated and revised in 2021.

    \u00a9 2016 California Institute of Technology. Government sponsorship acknowledged.

    "},{"location":"community/Contributing-Code-to-ION/","title":"Contributing Code to ION","text":""},{"location":"community/Contributing-Code-to-ION/#expectations","title":"Expectations","text":"

    If you plan to contribute to the ION project, please keep these in mind:

    "},{"location":"community/Contributing-Code-to-ION/#if-you-want-to-contribute","title":"If you want to contribute...","text":"
    1. Fork this repository
    2. Starting with the \"current\" branch, create a named feature or bugfix branch and develop/test your code in this branch
    3. Generate a pull request
    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/","title":"Running DTN on Cloud VM using a Two-Node Ring","text":"

    This project has been developed by Dr Lara Suzuki, a Visiting Researcher at NASA JPL.

    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#introduction","title":"Introduction","text":"

    In this project we demonstrate how to run DTN on two nodes on cloud VMs using NASA's implementation of the Bundle Protocol, ION.

    Two-Node Topology

    The ION (Interplanetary Overlay Network) software is a suite of communication protocol implementations designed to support mission operation communications across an end-to-end interplanetary network, which might include on-board (flight) subnets, in-situ planetary or lunar networks, proximity links, deep space links, and terrestrial internets.

    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#dtn-on-cloud-linux-vms-101","title":"DTN on Cloud Linux VMs 101","text":"

    We strongly recommend that you first become familiar with the loopback communication of ION running on a single node on Google Cloud Platform.

    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#getting-started-with-two-linux-cloud-vms","title":"Getting Started with Two Linux Cloud VMs","text":"

    On your preferred cloud provider's dashboard, create a Linux VM instance (e.g., Debian). In this tutorial we have created one instance named Goldstone in zone us-central1 and another instance named Madrid in zone europe-west2-c. The diagram below illustrates the two-node communication that we will be developing in this tutorial.

    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#the-configuration-files","title":"The configuration files","text":"

    In this section we will walk you through the creation of the host1.rc file. Follow the same steps to create the same file for host2.rc.

    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#the-ionadmin-configuration","title":"The ionadmin configuration","text":"

    The ionadmin configuration assigns an identity (node number) to the node, optionally configures the resources that will be made available to the node, and specifies contact bandwidths and one-way transmission times.

    ## begin ionadmin \n# Initialization command (command 1). \n# Set this node to be node 1 (as in ipn:1).\n# Use default sdr configuration (empty configuration file name '').\n1 1 ''\n\n# Start ion node\ns\n\n# Add a contact.\n# It will start at +1 seconds from now, ending +3600 seconds from now.\n# It will connect node 1 to itself.\n# It will transmit 100000 bytes/second.\na contact +1 +3600 1 1 100000\n\n# Add more contacts.\n# The network goes 1--2\n# Note that contacts are unidirectional, so order matters.\na contact +1 +3600 1 2 100000\na contact +1 +3600 2 1 100000\na contact +1 +3600 2 2 100000\n\n# Add a range. This is the physical distance between nodes.\n# It will start at +1 seconds from now, ending +3600 seconds from now.\n# It will connect node 1 to itself.\n# Data on the link is expected to take 1 second to reach the other\n# end (One Way Light Time).\na range +1 +3600 1 1 1\n\n# Add more ranges.\n# We will assume every range is one second.\n# Note that ranges cover both directions, so you \n#only need define one range for any combination of nodes.\na range +1 +3600 1 2 1\na range +1 +3600 2 2 1\n\n# Set this node to consume and produce a mean of 1000000 bytes/second.\nm production 1000000\nm consumption 1000000\n## end ionadmin \n
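Because contacts are unidirectional, a full mesh between two nodes takes four `a contact` lines. A small helper (hypothetical, not part of ION) can generate them for pasting into an ionrc file:

```python
def contact_lines(nodes, start='+1', stop='+3600', rate=100000):
    """Emit one unidirectional 'a contact' command per ordered node pair (self included)."""
    return ['a contact {} {} {} {} {}'.format(start, stop, a, b, rate)
            for a in nodes for b in nodes]

for line in contact_lines([1, 2]):
    print(line)
# a contact +1 +3600 1 1 100000
# a contact +1 +3600 1 2 100000
# a contact +1 +3600 2 1 100000
# a contact +1 +3600 2 2 100000
```

Ranges, by contrast, cover both directions, so only one `a range` line is needed per node pair.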

    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#the-ltpadmin-configuration","title":"The ltpadmin configuration","text":"

    The ltpadmin configuration specifies spans, transmission speeds, and resources for the Licklider Transmission Protocol convergence layer.

    # Initialization command (command 1).\n1 32\n\n# Add a span. (a connection)\na span 1 10 10 1400 10000 1 'udplso `external_IP_of_node_1`:1113'\n\n# Add another span. (to host2) \n# Identify the span as engine number 2.\n# Use the command 'udplso 10.1.1.2:1113' to implement the link itself.  \na span 2 10 10 1400 10000 1 'udplso `external_IP_of_node_2`:1113'\n\n# Start command.\n# This command actually runs the link service output commands.\n# It also starts the link service INPUT task 'udplsi `internal_IP_of_node_1`:1113' \n# to listen locally on UDP port 1113 for incoming LTP traffic.\ns 'udplsi `internal_IP_of_node_1`:1113'\n## end ltpadmin \n
    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#the-bpadmin-configuration","title":"The bpadmin configuration","text":"

    The bpadmin configuration specifies all of the open endpoints for delivery on your local end and specifies which convergence layer protocol(s) you intend to use.

    ## begin bpadmin \n# Initialization command (command 1).\n1\n\n# Add an EID scheme.\n# The scheme's name is ipn.\n# This scheme's forwarding engine is handled by the program 'ipnfw.'\n# This scheme's administration program (acting as the custodian\n# daemon) is 'ipnadminep.'\na scheme ipn 'ipnfw' 'ipnadminep'\n\n# Add endpoints.\n# Establish endpoints ipn:1.0, ipn:1.1, and ipn:1.2 on the local node.\n# ipn:1.0 is expected for custodian traffic.  The rest are usually\n# used for specific applications (such as bpsink).\n# The behavior for receiving a bundle when there is no application\n# currently accepting bundles, is to queue them 'q', as opposed to\n# immediately and silently discarding them (use 'x' instead of 'q' to\n# discard).\na endpoint ipn:1.0 q\na endpoint ipn:1.1 q\na endpoint ipn:1.2 q\n\n# Add a protocol. \n# Add the protocol named ltp.\n# Estimate transmission capacity assuming 1400 bytes of each frame (in\n# this case, udp on ethernet) for payload, and 100 bytes for overhead.\na protocol ltp 1400 100\n\n# Add an induct. (listen)\n# Add an induct to accept bundles using the ltp protocol.\n# The duct's name is 1 (this is for future changing/deletion of the\n# induct). \n# The induct itself is implemented by the 'ltpcli' command.\na induct ltp 1 ltpcli\n\n# Add an outduct (send to yourself).\n# Add an outduct to send bundles using the ltp protocol.\na outduct ltp 1 ltpclo\n\n# Add an outduct. (send to host2)\n# Add an outduct to send bundles using the ltp protocol.\na outduct ltp 2 ltpclo\n\n# Start bundle protocol engine, also running all of the induct, outduct,\n# and administration programs defined above\ns\n## end bpadmin \n
    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#the-ipnadmin-configuration","title":"The ipnadmin configuration","text":"

    The ipnadmin configuration maps endpoints at \"neighboring\" (topologically adjacent, directly reachable) nodes to convergence-layer addresses.

    ## begin ipnadmin \n# ipnrc configuration file for host1 in a 2node ltp test. \n# Essentially, this is the IPN scheme's routing table.\n\n# Add an egress plan.\n# Bundles to be transmitted to node number 1 (that is, yourself).\n# The plan is to queue for transmission on protocol 'ltp' using\n# the outduct identified as '1.'\na plan 1 ltp/1\n\n# Add other egress plans.\n# Bundles for node 2 can be transmitted directly to host2 using\n# the ltp outduct identified as '2.' \na plan 2 ltp/2\n## end ipnadmin\n

    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#the-ionsecadmin-configuration","title":"The ionsecadmin configuration","text":"

    The ionsecadmin configuration enables bundle security (and avoids error messages in ion.log).

    ## begin ionsecadmin\n# Enable bundle security and avoid error messages in ion.log\n1\n## end ionsecadmin\n

    "},{"location":"community/dtn-gcp-2nodes/ION-Two-Node-on-Cloud-Linux-VMs/#executing-the-configuration-files","title":"Executing the configuration files","text":"

    On the terminal of host 1 execute the command

    $ ionstart -I host1.rc\n
    Similarly, on the terminal of host 2, execute the command
    $ ionstart -I host2.rc\n
    To send a message from host 1 to host 2, you must firstly start bpsink in host 2 by executing the command below
    $ bpsink ipn:2.1 &\n
    On the terminal of host 1, enter the following command and hit enter
    $ echo \"hi\" | bpsource ipn:2.1\n
    After the execution of the command above you should see in the terminal of host 2 the following message
    $ ION event: Payload delivered.\n$   payload length is 2.\n$   'hi'\n
    The image below illustrates the above scenario plus host 2 sending a hello message to host 1.

    "},{"location":"community/dtn-gcp-2nodes/rc_files/host1-start-script-2node/","title":"ION Start Script Example","text":"

    Note: place this in a file named host1.rc

    ## begin ionadmin \n1 1 ''\ns \n\na contact +1 +3600 1 1 100000\na contact +1 +3600 1 2 100000\na contact +1 +3600 2 1 100000\na contact +1 +3600 2 2 100000\n\na range +1 +3600 1 1 1\na range +1 +3600 1 2 1\na range +1 +3600 2 2 1\n\nm production 1000000\nm consumption 1000000\n## end ionadmin \n\n## begin ltpadmin \n1 32\n\na span 1 10 10 1400 10000 1 'udplso `external_IP_of_node_1`:1113'\na span 2 10 10 1400 10000 1 'udplso `external_IP_of_node_2`:1113'\ns 'udplsi `internal_IP_of_node_1`:1113'\n## end ltpadmin \n\n## begin bpadmin \n1\na scheme ipn 'ipnfw' 'ipnadminep'\n\na endpoint ipn:1.0 q\na endpoint ipn:1.1 q\na endpoint ipn:1.2 q\n\na protocol ltp 1400 100\na induct ltp 1 ltpcli\na outduct ltp 1 ltpclo\na outduct ltp 2 ltpclo\n\ns\n## end bpadmin \n\n## begin ipnadmin \na plan 1 ltp/1\na plan 2 ltp/2\n## end ipnadmin\n\n## begin ionsecadmin\n1\n## end ionsecadmin\n
    "},{"location":"community/dtn-gcp-2nodes/rc_files/host2-start-script-2node/","title":"ION Start Script Example","text":"

    Note: place this in a file named host2.rc

    ## begin ionadmin \n1 2 ''\ns\n\na contact +1 +3600 1 1 100000\na contact +1 +3600 1 2 100000\na contact +1 +3600 2 1 100000\na contact +1 +3600 2 2 100000 \n\na range +1 +3600 1 1 1\na range +1 +3600 1 2 1\na range +1 +3600 2 2 1\n\nm production 1000000\nm consumption 1000000\n## end ionadmin \n\n## begin ltpadmin \n1 32\n\na span 1 10 10 1400 10000 1 'udplso `external_IP_of_node_1`:1113'\na span 2 10 10 1400 10000 1 'udplso `external_IP_of_node_2`:1113'\ns 'udplsi `internal_IP_of_node_2`:1113'\n## end ltpadmin \n\n## begin bpadmin \n1\na scheme ipn 'ipnfw' 'ipnadminep'\n\na endpoint ipn:2.0 q\na endpoint ipn:2.1 q\na endpoint ipn:2.2 q\n\na protocol ltp 1400 100\na induct ltp 2 ltpcli\na outduct ltp 2 ltpclo\na outduct ltp 1 ltpclo\n\ns\n## end bpadmin \n\n## begin ipnadmin \na plan 1 ltp/1\na plan 2 ltp/2\n## end ipnadmin\n\n## begin ionsecadmin\n1\n## end ionsecadmin\n
    "},{"location":"community/dtn-gcp-iot-main/ION-and-IOT-on-Linux-VMs-running-on-Cloud-Computing/","title":"Telemetry Data on Cloud VMs using Pub/Sub and DTN","text":"

    This project has been developed by Dr Lara Suzuki, a Visiting Researcher at NASA JPL.

    In this tutorial we will demonstrate how to connect a Raspberry Pi and Sense Hat to Google Cloud using Cloud Pub/Sub on host 1 and serve the messages over DTN to host 2. This tutorial follows the [Running DTN on Google Cloud using a Two-Node Ring] tutorial and uses the host 1 and host 2 configurations described there.

    "},{"location":"community/dtn-gcp-iot-main/ION-and-IOT-on-Linux-VMs-running-on-Cloud-Computing/#setting-up-raspbberry-pi-and-the-sense-hat","title":"Setting up Raspberry Pi and the Sense Hat","text":"

    In this tutorial we use Raspberry Pi 4 model B (2018) and Sense Hat Version 1.0.

    The first step is to make sure your Pi can connect to the Internet. You can either plug in an Ethernet cable or, if you're using WiFi, scan for networks your Pi can see. Plug your Pi into a monitor; when it starts, you will find the WiFi logo at the top right corner. Select the network you want to connect to. Once connected, open your browser to check whether you can access the Internet.

    "},{"location":"community/dtn-gcp-iot-main/ION-and-IOT-on-Linux-VMs-running-on-Cloud-Computing/#library-dependency-setup","title":"Library dependency setup","text":"

    The first thing to do is to make sure that the package sources from which the Raspberry Pi gets its libraries are current. To do so, run the following command on your Pi's terminal:

    $ sudo apt-get update\n
    The next step is to securely connect to a Pub/Sub service running locally or on a cloud provider. For this we will use JWT to handle authentication (library pyjwt). The meta-model for communication used by cloud Pub/Sub is based on publish/subscribe messaging technology provided by the MQTT (MQ Telemetry Transport) protocol (library paho-mqtt). MQTT is a topic-based publish/subscribe communications protocol that is designed to be open, simple, lightweight, easy to implement, and efficient in terms of processor, memory, and network resources.

    On your Pi's terminal run the following commands

    $ sudo apt-get install build-essential\n$ sudo apt-get install libssl-dev\n$ sudo apt-get install python-dev\n$ sudo apt-get install libffi-dev\n$ sudo pip install paho-mqtt\n
    For encryption, install the pyjwt library and its dependency, the cryptography library.
    $ sudo pip install pyjwt\n$ sudo pip install cryptography\n
    For telemetry data we are using the Sense Hat, which includes telemetry sensors for temperature, acceleration, humidity, and pressure. To install the library for the Sense Hat, run the command:
    $ sudo apt-get install sense-hat\n

    "},{"location":"community/dtn-gcp-iot-main/ION-and-IOT-on-Linux-VMs-running-on-Cloud-Computing/#ssl-certificate-rsa-with-x509-wrapper","title":"SSL Certificate - RSA with X509 wrapper","text":"

    In order to authenticate in Google Cloud IoT Core, we need a SSL certificate. We will create an RSA with X509 wrapper. For this, execute the following command on your Pi's terminal:

    $ openssl req -x509 -newkey rsa:2048 -keyout sensing_private.pem -nodes -out demo.pub -subj \"/CN=unused\"\n
    "},{"location":"community/dtn-gcp-iot-main/ION-and-IOT-on-Linux-VMs-running-on-Cloud-Computing/#setting-up-a-pubsub-servide-on-cloud","title":"Setting up a Pub/Sub service on Cloud","text":"

    Once your Raspberry Pi is fully set up, follow your cloud provider's instructions to create a Registry for your new Pub/Sub service. For your Pub/Sub topic, create a topic named sensing.

    To connect your device to your cloud provider, you will likely need to use an authentication method. In our case we use authentication with a public key in the RS256_X509 format.

    To copy the content of your Pi's public key, on the Pi's terminal run:

    $ cat demo.pub\n

    Copy everything, including the tags, between

    -----BEGIN PUBLIC KEY-----\n-----END PUBLIC KEY-----\n
    and paste it in the Public Key Value textbox.

    "},{"location":"community/dtn-gcp-iot-main/ION-and-IOT-on-Linux-VMs-running-on-Cloud-Computing/#create-a-subscription-to-listen-to-the-pubsub-topic","title":"Create a Subscription to listen to the Pub/Sub Topic","text":"

    On your cloud provider's console, create a 'Subscription' to listen to the sensing topic we created in the previous steps. Now you should have all the pieces needed to send telemetry data from your Pi to a Pub/Sub service on a VM instance running on the cloud!

    "},{"location":"community/dtn-gcp-iot-main/ION-and-IOT-on-Linux-VMs-running-on-Cloud-Computing/#send-telemetry-data-from-raspberry-pi-to-linux-vm-on-the-cloud","title":"Send telemetry data from Raspberry Pi to Linux VM on the Cloud","text":"

    The code in this repository named sense.py is based on the implementation by GabeWeiss.

    In the code, edit the following fields:

    ssl_private_key_filepath = '/home/pi/sensing_private.pem'\nssl_algorithm = 'RS256'\nroot_cert_filepath = '/home/pi/roots.pem'\nproject_id = 'if you have to use a project ID identifier in your cloud service'\nregistry_id = 'name of your registry'\ndevice_id = 'name of your device'\n
    Once you have configured the above parameters in the file sense.py, on your Raspberry Pi run the command:
    $ python3 sense.py\n

    "},{"location":"community/dtn-gcp-iot-main/ION-and-IOT-on-Linux-VMs-running-on-Cloud-Computing/#send-telemetry-data-to-from-host-1-to-host-2-via-dtn","title":"Send telemetry data from host 1 to host 2 via DTN","text":"

    Log into the host 1 VM. In the VM, go to the base directory of ION and create a folder named dtn

    $ mkdir dtn\n
    cd into the dtn directory and clone the file named iot.py. In this file, configure the following parameters:
    subscription_path = subscriber.subscription_path(\n  'ID_OF_YOUR_CLOUD_PROJECT', 'ID_OF_YOUR_SUBSCRIPTION')\n
    And add host 2 as the receiver of the telemetry data:
    os.system(f'echo \"{value}\" | bpsource ipn:2.1')\n
    On the terminal of host 1 and host 2, start ion:
    $ ionstart -I hostX.rc #where X is the number of the host\n
    On the terminal of host 2, start bpsink
    $ bpsink ipn:2.1 &\n
    On the terminal of host 1, start iot.py
    $ python3 iot.py\n
    On the terminal of host 1 you should see the print out of the telemetry data received as below:

    On the terminal of host 2 you should see the payloads delivered. Please note that messages beyond 80 characters are not shown on bpsink:

    "},{"location":"community/dtn-gcp-iot-main/iot-python-script/","title":"ION Start Script Example","text":"

    Note: place this in a file named iot.py

    ````
    import os
    import time
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()

    # The subscription_path method creates a fully qualified identifier
    # in the form projects/{project_id}/subscriptions/{subscription_name}
    subscription_path = subscriber.subscription_path(
        'ID_OF_YOUR_GOOGLE_CLOUD_PROJECT', 'ID_OF_YOUR_SUBSCRIPTION')

    def callback(message):
        value = message.data
        os.system(f'echo \"{value}\" | bpsource ipn:2.1')
        print('Received message: {}'.format(value))
        message.ack()  # Acknowledge receipt and remove the message from the topic queue

    subscriber.subscribe(subscription_path, callback=callback)

    # The subscriber is non-blocking. We must keep the main thread from
    # exiting to allow it to process messages asynchronously in the background.
    print('Listening for messages on {}'.format(subscription_path))
    while True:
        time.sleep(60)
    ````

    "},{"location":"community/dtn-gcp-iot-main/sense-python-script/","title":"ION Start Script Example","text":"

    Note: place this in a file named sense.py

    ````
    from sense_hat import SenseHat
    import Adafruit_DHT
    import time
    import datetime
    import jwt
    import paho.mqtt.client as mqtt

    # Define some project-based variables to be used below. This should be the only
    # block of variables that you need to edit in order to run this script.
    ssl_private_key_filepath = ''  # The .pem file of your Pi
    ssl_algorithm = 'RS256'
    root_cert_filepath = ''  # The .pem file of Google
    project_id = ''  # The project ID on Google Cloud
    gcp_location = ''  # The zone where your project is deployed
    registry_id = ''  # The ID of your registry on Google IoT Core
    device_id = ''  # The ID of your Pi as set up on Google IoT Core

    cur_time = datetime.datetime.utcnow()

    DHT_SENSOR = Adafruit_DHT.DHT11
    DHT_PIN = 4

    def create_jwt():
        token = {
            'iat': cur_time,
            'exp': cur_time + datetime.timedelta(minutes=60),
            'aud': project_id
        }
        with open(ssl_private_key_filepath, 'r') as f:
            private_key = f.read()
        return jwt.encode(token, private_key, ssl_algorithm)

    _CLIENT_ID = 'projects/{}/locations/{}/registries/{}/devices/{}'.format(project_id, gcp_location, registry_id, device_id)
    _MQTT_TOPIC = '/devices/{}/events'.format(device_id)

    client = mqtt.Client(client_id=_CLIENT_ID)
    client.username_pw_set(username='unused', password=create_jwt())

    def error_str(rc):
        return '{}: {}'.format(rc, mqtt.error_string(rc))

    def on_connect(unused_client, unused_userdata, unused_flags, rc):
        print('on_connect', error_str(rc))

    def on_publish(unused_client, unused_userdata, unused_mid):
        print('on_publish')

    client.on_connect = on_connect
    client.on_publish = on_publish

    client.tls_set(ca_certs=root_cert_filepath)
    client.connect('mqtt.googleapis.com', 8883)
    client.loop_start()

    # Could set this granularity to whatever we want based on device, monitoring needs, etc.
    temperature = 0
    humidity = 0
    pressure = 0

    sense = SenseHat()

    while True:
        cur_temp = sense.get_temperature()
        cur_pressure = sense.get_pressure()
        cur_humidity = sense.get_humidity()
        if cur_temp == temperature and cur_humidity == humidity and cur_pressure == pressure:
            time.sleep(1)
            continue
        temperature = cur_temp
        pressure = cur_pressure
        humidity = cur_humidity

        payload = '{{ \"ts\": {}, \"temperature\": {}, \"pressure\": {}, \"humidity\": {} }}'.format(cur_time, \"%.1f C\" % temperature, \"%.2f Millibars\" % pressure, \"%.2f %%\" % humidity)

        client.publish(_MQTT_TOPIC, payload, qos=1)
        print(\"{}\\n\".format(payload))

        sense.set_rotation(180)  # Set LED matrix to scroll from right to left
        sense.show_message(\"%.1f C\" % temperature, scroll_speed=0.10, text_colour=[0, 255, 0])  # Show the temperature on the LED matrix
        time.sleep(10)
    ````

    "},{"location":"community/dtn-gcp-ltp-tcp-main/ION-LTP-TCP-on-Azure/","title":"Three-Node Network communication via DTN on Google Cloud Platform and Windows Azure","text":"

    This project has been developed by Dr Lara Suzuki, a Visiting Researcher at NASA JPL.

    "},{"location":"community/dtn-gcp-ltp-tcp-main/ION-LTP-TCP-on-Azure/#introduction","title":"Introduction","text":"

    This is the third tutorial in a series of DTN on Google Cloud tutorials. In this tutorial we will introduce you to Windows Azure and show how to configure a 3-node network using ION. The figure below shows the network we will be building. Note that this example network uses two different convergence layers: TCP and LTP. This illustrates the case of a terrestrial connection with two interplanetary internet nodes.

    "},{"location":"community/dtn-gcp-ltp-tcp-main/ION-LTP-TCP-on-Azure/#getting-started-on-windows-azure","title":"Getting Started on Windows Azure","text":"

    Sign up for a free account on Windows Azure. Once your free account is set up, log into the Azure Portal. Under Azure Services, click Virtual Machines. In the Virtual Machines window, click Add and follow the steps below.
    1. Click Add, then Virtual Machine
    2. For Subscription, select free trial
    3. In Resource Group, select Create New and name it dtn
    4. Give the Virtual Machine a name. In our example it is named Canberra
    5. In Region, select the region closest to you or of your choice. In our example it is Australia Central
    6. In Image, select Debian 10 \"Buster\" Gen 1
    7. Leave Size as Standard
    8. Under Administrator Account, select either the use of an SSH public key or a password
    9. For Select inbound ports, leave SSH (22)
    10. Click Review and Create, then click Create

    To get ION working you must enable inbound traffic to port 1113 and port 4556, the IANA-assigned default DTN LTP and TCP convergence-layer ports respectively. To enable inbound traffic on those ports, at the top right of your window, click Home, then Virtual Machines. Click on the virtual machine you have just created. On the newly loaded page, under Settings, click Networking as shown in the image below.
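Before starting ION, it can help to verify that the firewall rules took effect. The helper below is a hypothetical check, not part of ION: 4556/TCP can be probed with a connection attempt, while 1113/UDP is connectionless, so only a local bind test is meaningful:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds (e.g. the tcpcli induct)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def udp_port_bindable(port):
    """Return True if the local UDP port is free to bind (nothing listening on it yet)."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind(('0.0.0.0', port))
            return True
    except OSError:
        return False
```

Run `tcp_port_open('<VM external IP>', 4556)` from the remote host once ION is up; a False result while ION is running usually points at the inbound port rule rather than the ION configuration.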

    On the networking page, click Add inbound port rule. Select Source as Any, on Source port ranges add the port numbers you want to allow inbound traffic, select the Protocol, the Action (Allow), and add a high Priority (e.g. 380). Give it a name and hit Add. You now can execute ION and the induct and outducts will work.

    "},{"location":"community/dtn-gcp-ltp-tcp-main/ION-LTP-TCP-on-Azure/#three-node-network","title":"Three-Node Network","text":"

    In this section, we assume that host 3 has an IP address of 10.0.0.3. Please modify this for your own setup. Please note that in this tutorial we are not covering routing; therefore, host 2 cannot communicate with host 3. The routing tutorial can be found here.

    This network is created by running the following command on host 1

    ionstart -I host1.rc\n
    This command is run on host 2:
    ionstart -I host2.rc\n
    Finally, this command is run on host 3
    ionstart -I host3.rc\n

    "},{"location":"community/dtn-gcp-ltp-tcp-main/ION-LTP-TCP-on-Azure/#the-host3rc-configuration-file-tcp","title":"The host3.rc configuration file - TCP","text":"

    For the configuration files of host 1 and host 2, follow the examples given in the tutorial Running DTN on Google Cloud using a Two-Node Ring. Remember to add a contact, range, span, outduct, and plan for host 3. Below is the configuration file host3.rc.

    The ionadmin configuration uses tcp from host 2 to host 3

    ## begin ionadmin \n# ionrc configuration file for host3 in a 3node tcp/ltp test.\n# This uses tcp from 1 to 3.\n# \n# Initialization command (command 1). \n# Set this node to be node 3 (as in ipn:3).\n# Use default sdr configuration (empty configuration file name '').\n1 3 ''\n# start ion node\ns\n# Add a contact.\n# It will start at +1 seconds from now, ending +3600 seconds from now.\n# It will connect node 3 to itself\n# It will transmit 100000 bytes/second.\na contact +1 +3600 3 3 100000\n\n# Add more contacts.\n# They will connect 2 to 3, 3 to 2, and 3 to itself\n# Note that contacts are unidirectional, so order matters.\na contact +1 +3600 3 2 100000\na contact +1 +3600 2 3 100000\na contact +1 +3600 2 2 100000\n\n# Add a range. This is the physical distance between nodes.\na range +1 +3600 3 3 1\n\n# Add more ranges.\na range +1 +3600 2 2 1\na range +1 +3600 2 3 1\n\n# set this node to consume and produce a mean of 1000000 bytes/second.\nm production 1000000\nm consumption 1000000\n## end ionadmin \n

    The bpadmin configuration adds the endpoints and the protocol tcp. In the protocol section, it estimates transmission capacity assuming 1400 bytes of each frame (in this case, tcp on ethernet) for payload and 100 bytes for overhead. The induct and outduct will listen on port 4556, the IANA-assigned default DTN TCP convergence layer port. The induct is implemented by the tcpcli command and the outduct by the tcpclo command.
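    As an aside, the link efficiency implied by those two numbers can be estimated with a quick calculation. The helper below is illustrative only (it is not part of ION); the 1400/100 values come from the `a protocol tcp 1400 100` command above:

    ```python
    # Sketch: estimate convergence-layer payload efficiency from the
    # bpadmin protocol parameters (payload bytes per frame, overhead
    # bytes per frame), e.g. "a protocol tcp 1400 100".
    def cl_efficiency(payload_bytes: int, overhead_bytes: int) -> float:
        """Fraction of each frame that carries bundle payload."""
        return payload_bytes / (payload_bytes + overhead_bytes)

    if __name__ == "__main__":
        # With 1400 payload bytes and 100 overhead bytes per frame,
        # roughly 93% of each frame carries payload.
        print(f"{cl_efficiency(1400, 100):.3f}")
    ```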

    ## begin bpadmin \n# bprc configuration file for host3 in a 3node test.\n# Initialization command (command 1).\n1\n\n# Add an EID scheme.\na scheme ipn 'ipnfw' 'ipnadminep'\n\n# Add endpoints.\na endpoint ipn:3.0 q\na endpoint ipn:3.1 q\na endpoint ipn:3.2 q\n\n# Add a protocol. \n# Add the protocol named tcp.\na protocol tcp 1400 100\n\n# Add an induct. (listen)\na induct tcp 10.0.0.3:4556 tcpcli\n\n# Add an outduct (send to yourself).\na outduct tcp 10.0.0.3:4556 tcpclo\n\n# Add an outduct. (send to host2)\na outduct tcp external_ip_of_host_2:4556 tcpclo\n\n# Start bundle protocol engine, also running all of the induct, outduct,\n# and administration programs defined above.\ns\n## end bpadmin \n

    The ipnadmin configuration adds the egress plans (to host 3 itself and to host 2) using tcp.

    ## begin ipnadmin \n# ipnrc configuration file for host3 in the 3node tcp network.\n# Add an egress plan (to yourself).\na plan 3 tcp/10.0.0.3:4556\n# Add an egress plan (to the host 2).\na plan 2 tcp/external_IP_of_node_2:4556\n## end ipnadmin\n

    The ionsecadmin configuration enables bundle security

    ## begin ionsecadmin\n1\n## end ionsecadmin\n

    "},{"location":"community/dtn-gcp-ltp-tcp-main/host3-start-script/","title":"ION Start Script Example","text":"

    Note: place this in a file named host3.rc

    ## begin ionadmin \n# ionrc configuration file for host3 in a 3node tcp/ltp test.\n# This uses tcp from 2 to 3.\n# \n# Initialization command (command 1). \n# Set this node to be node 3 (as in ipn:3).\n# Use default sdr configuration (empty configuration file name '').\n1 3 ''\n# start ion node\ns\n# Add a contact.\n# It will start at +1 seconds from now, ending +3600 seconds from now.\n# It will connect node 3 to itself\n# It will transmit 100000 bytes/second.\na contact +1 +3600 3 3 100000\n\n# Add more contacts.\n# They will connect 3 to 2, 2 to 3, and 2 to itself\n# Note that contacts are unidirectional, so order matters.\na contact +1 +3600 3 2 100000\na contact +1 +3600 2 3 100000\na contact +1 +3600 2 2 100000\n\n# Add a range. This is the physical distance between nodes.\na range +1 +3600 3 3 1\n\n# Add more ranges.\na range +1 +3600 2 2 1\na range +1 +3600 2 3 1\n\n# set this node to consume and produce a mean of 1000000 bytes/second.\nm production 1000000\nm consumption 1000000\n## end ionadmin \n\n## begin bpadmin \n# bprc configuration file for host3 in a 3node test.\n# Initialization command (command 1).\n1\n\n# Add an EID scheme.\na scheme ipn 'ipnfw' 'ipnadminep'\n\n# Add endpoints.\na endpoint ipn:3.0 q\na endpoint ipn:3.1 q\na endpoint ipn:3.2 q\n\n# Add a protocol. \n# Add the protocol named tcp.\na protocol tcp 1400 100\n\n# Add an induct. (listen)\na induct tcp 10.0.0.3:4556 tcpcli\n\n# Add an outduct (send to yourself).\na outduct tcp 10.0.0.3:4556 tcpclo\n\n# Add an outduct. 
    (send to host2)\na outduct tcp external_ip_of_host_2:4556 tcpclo\n\n# Start bundle protocol engine, also running all of the induct, outduct,\n# and administration programs defined above.\ns\n## end bpadmin \n\n## begin ipnadmin \n# ipnrc configuration file for host3 in the 3node tcp network.\n# Add an egress plan (to yourself).\na plan 3 tcp/10.0.0.3:4556\n# Add an egress plan (to the host 2).\na plan 2 tcp/external_IP_of_node_2:4556\n## end ipnadmin\n\n## begin ionsecadmin\n1\n## end ionsecadmin\n
    "},{"location":"community/dtn-gcp-main/ION-One-Node-on-Cloud-Linux-VM/","title":"DTN 101 - Running the Interplanetary Internet on Cloud VM","text":"

    This project has been developed by Dr Lara Suzuki, a visiting researcher at NASA JPL.

    "},{"location":"community/dtn-gcp-main/ION-One-Node-on-Cloud-Linux-VM/#introduction","title":"Introduction","text":"

    In this project we demonstrate how to run DTN on a cloud VM using NASA's implementation of the bundle protocol - ION. DTN stands for delay-tolerant and disruption-tolerant networks.

    \"It is an evolution of the architecture originally designed for the Interplanetary Internet, a communication system envisioned to provide Internet-like services across interplanetary distances in support of deep space exploration\" Cerf et al, 2007.

    The ION (interplanetary overlay network) software is a suite of communication protocol implementations designed to support mission operation communications across an end-to-end interplanetary network, which might include on-board (flight) subnets, in-situ planetary or lunar networks, proximity links, deep space links, and terrestrial internets.

    "},{"location":"community/dtn-gcp-main/ION-One-Node-on-Cloud-Linux-VM/#getting-started-with-cloud-linux-vms","title":"Getting Started with Cloud Linux VMs","text":"

    On your preferred Cloud provider dashboard, create a Linux VM (e.g. Debian).

    When prompted, select the region closest to you. If you are prompted to select the machine type, select the configuration that suits your needs. I have selected a machine with 2 virtual CPUs and 4 GB of memory.

    For the boot disk, in this tutorial we are using Debian GNU/Linux 10 (buster). In the firewall configuration, I have selected Allow HTTP and Allow HTTPS.

    Once the VM is started you can SSH directly into the VM.

    "},{"location":"community/dtn-gcp-main/ION-One-Node-on-Cloud-Linux-VM/#ssh-in-the-cloud-linux-vm-instance","title":"SSH in the Cloud Linux VM Instance","text":"

    Mac and Linux support SSH connection natively. You just need to generate an SSH key pair (public key/private key) to connect securely to the virtual machine.

    To generate the SSH key pair, follow these steps:

    1. Enter the following command in Terminal: ssh-keygen -t rsa .
    2. It will start the key generation process.
    3. You will be prompted to choose the location to store the SSH key pair.
    4. Press ENTER to accept the default location
    5. Now run the following command: cat ~/.ssh/id_rsa.pub .
    6. It will display the public key in the terminal.
    7. Highlight and copy this key

    Back in the Cloud VM tools, follow your provider's directions on how to SSH into the VM. If you are requested to provide your SSH keys, locate the SSH key file on your computer and provide it there.

    Now you can just open your terminal on your Mac or Linux machine and type ssh IP.IP.IP.IP and you will be on the VM (IP.IP.IP.IP is the external IP of the VM).

    "},{"location":"community/dtn-gcp-main/ION-One-Node-on-Cloud-Linux-VM/#getting-started-with-ion","title":"Getting Started with ION","text":"

    This example uses ION version 4.0.1, which can be downloaded here. ION 4.0.1 uses version 7 of the Bundle Protocol.

    On your VM execute the following commands

    $ sudo apt update\n$ sudo apt install build-essential -y\n$ sudo apt-get install wget -y\n$ wget https://sourceforge.net/projects/ion-dtn/files/ion-open-source-4.0.1.tar.gz/download\n$ tar xzvf download\n
    "},{"location":"community/dtn-gcp-main/ION-One-Node-on-Cloud-Linux-VM/#compiling-ion-using-autotools","title":"Compiling ION using autotools","text":"

    Follow the standard autoconf method for compiling the project. In the base ION directory run:

    $ ./configure\n
    Then compile with:
    $ make\n
    Finally, install (requires root privileges):
    $ sudo make install\n

    For Linux-based systems, you may need to run sudo ldconfig with no arguments after installation.

    "},{"location":"community/dtn-gcp-main/ION-One-Node-on-Cloud-Linux-VM/#programs-in-ion","title":"Programs in ION","text":"

    The following tools are a few examples of programs available to you after ION is built:

    1. Daemon and Configuration: - ionadmin is the administration and configuration interface for the local ION node; it manages contacts and the shared memory resources used by ION. - ltpadmin is the administration and configuration interface for LTP operations on the local ION node. - bsspadmin is the administrative interface for operations of the Bundle Streaming Service Protocol on the local ION node. - bpadmin is the administrative interface for bundle protocol operations on the local ION node. - ipnadmin is the administration and configuration interface for the IPN addressing system and routing on the ION node. (ipn:) - ionstart is a script which completely configures an ION node with the proper configuration file(s). - ionstop is a script which cleanly shuts down all of the daemon processes. - killm is a script which releases all of the shared-memory resources that had been allocated to the state of the node. This actually destroys the node and enables a subsequent clean new start (the \u201cionstart\u201d script) to succeed. - ionscript is a script which aids in the creation and management of configuration files to be used with ionstart.

    2. Simple Sending and Receiving: - bpsource and bpsink are for testing basic connectivity between endpoints. bpsink listens for and then displays messages sent by bpsource. - bpsendfile and bprecvfile are used to send files between ION nodes.

    3. Testing and Benchmarking: - bpdriver benchmarks a connection by sending bundles in two modes: request-response and streaming. - bpecho issues responses to bpdriver in request-response mode. - bpcounter acts as receiver for streaming mode, outputting markers on receipt of data from bpdriver and computing throughput metrics.

    4. Logging: - By default, the administrative programs will all trigger the creation of a log file called ion.log in the directory where the program is called. This means that write-access in your current working directory is required. The log file itself will contain the expected log information from administrative daemons, but it will also contain error reports from simple applications such as bpsink.

    "},{"location":"community/dtn-gcp-main/ION-One-Node-on-Cloud-Linux-VM/#the-configuration-files","title":"The Configuration Files","text":"

    Below we present the configuration files that you should be aware of and configure for ION to execute correctly.

    1. ionadmin's configuration file, assigns an identity (node number) to the node, optionally configures the resources that will be made available to the node, and specifies contact bandwidths and one-way transmission times. Specifying the \"contact plan\" is important in deep-space scenarios where the bandwidth must be managed and where acknowledgments must be timed according to propagation delays. It is also vital to the function of contact-graph routing - How To

    2. ltpadmin's configuration file, specifies spans, transmission speeds, and resources for the Licklider Transfer Protocol convergence layer - How To

    3. bpadmin's configuration file, specifies all of the open endpoints for delivery on your local end and specifies which convergence layer protocol(s) you intend to use. With the exception of LTP, most convergence layer adapters are fully configured in this file - How To

    4. ipnadmin's configuration file, maps endpoints at \"neighboring\" (topologically adjacent, directly reachable) nodes to convergence-layer addresses. This file populates the ION analogue to an ARP cache for the \"ipn\" naming scheme - How To

    5. ionsecadmin's configuration file, enables bundle security to avoid error messages in ion.log - How To

    "},{"location":"community/dtn-gcp-main/ION-One-Node-on-Cloud-Linux-VM/#testing-and-stopping-your-connection","title":"Testing and Stopping your Connection","text":"

    Assuming no errors occur with the configuration files above, we are now ready to test loopback communications, and also learn how to properly stop ION nodes. The below items are covered in this How To page.

    1. Testing your connection
    2. Stopping the Daemon
    3. Creating a single configuration file
    "},{"location":"community/dtn-gcp-main/Running-ION/","title":"Testing your connection","text":"

    A script has been created which allows a more streamlined configuration and startup of an ION node. This script is called ionstart, and it has the following syntax. Don't run it yet; we still have to configure it!

    ionstart -I <rc filename>\n
    "},{"location":"community/dtn-gcp-main/Running-ION/#loopback-communication","title":"Loopback communication","text":"

    Assuming no errors occur with the configuration above, we are now ready to test loopback communications. In one terminal, we have to run the start script alongside the configuration files.

    ionstart -i host1.ionrc -l host1.ltprc -b host1.bprc -p host1.ipnrc\n

    This command will run the appropriate administration programs, in order, with the appropriate configuration files. Don't worry that the command is lengthy and unwieldy; we will show you how to create a cleaner single configuration file later. The image below illustrates the start of the administration programs.

    Once the daemon is started, run:

    bpsink ipn:1.1 &\n

    This will begin constantly listening on the Endpoint ID with node number 1 and service number 1, which is used for testing.

    Now run the command:

    bpsource ipn:1.1\n

    This will begin sending messages you type to the Endpoint ID ipn:1.1, which is currently being listened to by bpsink. Type messages into bpsource, press enter, and see if they are reported by bpsink. In the example below I am using the Endpoint ID ipn:2.1.

    "},{"location":"community/dtn-gcp-main/Running-ION/#stopping-the-daemon","title":"Stopping the Daemon","text":"

    As the daemon launches many ducts and helper applications, it can be complicated to turn it all off. A script similar to ionstart, called ionstop, tears down the ION node in one step. You can call it like so:

    ionstop\n
    The commands run by the ionstop script are shown below.

    #!/bin/bash\n# shell script to stop node\nbpadmin         .\nsleep 1\nltpadmin        .\nsleep 1\nionadmin        .\n

    After stopping the daemon, you can start fresh with a brand-new node. To do that, you first need to run the killm script (to destroy all of the persistent state objects in shared memory); after that, you can run your ionstart script again, whether with changes or not. Do remember that the ion.log file is still present, and will just keep growing as you experiment with ION. You can of course periodically delete entries out of the ion.log file.

    "},{"location":"community/dtn-gcp-main/Running-ION/#creating-a-single-configuration-file","title":"Creating a single configuration file","text":"

    To create a single file host1.rc out of the various configuration files defined in the previous section, run this command:

    ionscript -i host1.ionrc -p host1.ipnrc -l host1.ltprc -b host1.bprc -O host1.rc\n

    Once you have a single configuration file, starting the ION node is a single command:

    ionstart -I host1.rc\n
    "},{"location":"community/dtn-gcp-main/Running-ION/#loopback-testing-using-ltp","title":"Loopback testing using LTP","text":"

    Assuming no errors occur with the configuration files above, we are now ready to test a Loopback communication, and also learn how to properly stop the ION node. The single rc file for host 1 can be found here.

    ION on the host should be started using the command

    $ ionstart -I host1.rc\n

    The image below illustrates the loopback communication using bpsink and bpsource.

    To stop ION in the VM instance, use the command

    $ ionstop\n
    "},{"location":"community/dtn-gcp-main/bp-config/","title":"The Bundle Protocol Configuration File","text":"

    Given to bpadmin either as a file or from the daemon command line, this file configures the endpoints through which this node's Bundle Protocol Agent (BPA) will communicate. We will assume the local BPA's node number is 1; as for LTP, in ION node numbers are used to identify bundle protocol agents.

    "},{"location":"community/dtn-gcp-main/bp-config/#initialise-the-bundle-protocol","title":"Initialise the bundle protocol","text":"

    1\n\n`1` refers to this being the initialization or ''first'' command.\n\n## Add support for a new Endpoint Identifier (EID) scheme\n
    a scheme ipn 'ipnfw' 'ipnadminep'
    `a` means that this command will add something.\n\n`scheme` means that this command will add a scheme.\n\n`ipn` is the name of the scheme to be added.\n\n`'ipnfw'` is the name of the IPN scheme's forwarding engine daemon.\n\n`'ipnadminep'` is the name of the IPN scheme's custody transfer management daemon.\n\n## Establishes the BP node's membership in a BP endpoint\n
    a endpoint ipn:1.0 q
    `a` means that this command will add something.\n\n`endpoint` means that this command adds an endpoint.\n\n`ipn` is the scheme name of the endpoint.\n\n`1.0` is the scheme-specific part of the endpoint. For the IPN scheme the scheme-specific part always has the form nodenumber.servicenumber. Each node must be a member of the endpoint whose node number is the node's own node number and whose service number is 0, indicating administrative traffic.\n\n`q` means that the behavior of the engine, upon receipt of a new bundle for this endpoint, is to queue it until an application accepts the bundle. The alternative is to silently discard the bundle if no application is actively listening; this is specified by replacing q with x.\n\n\n## Specify two more endpoints that will be used for test traffic\n
    a endpoint ipn:1.1 q\na endpoint ipn:1.2 q
    ## Add support for a convergence-layer protocol\n
    a protocol ltp 1400 100
    `a` means that this command will add something.\n\n`protocol` means that this command will add a convergence-layer protocol.\n\n`ltp` is the name of the convergence-layer protocol.\n\n`1400` is the estimated size of each convergence-layer protocol data unit (in bytes); in this case, the value is based on the size of a UDP/IP packet on Ethernet.\n\n`100` is the estimated size of the protocol transmission overhead (in bytes) per convergence-layer protocol data unit sent.\n\n\n## Add an induct, through which incoming bundles can be received from other nodes\n
    a induct ltp 1 ltpcli
    `a` means that this command will add something.\n\n`induct` means that this command will add an induct.\n\n`ltp` is the convergence layer protocol of the induct.\n\n`1` is the identifier of the induct, in this case the ID of the local LTP engine.\n\n`ltpcli` is the name of the daemon used to implement the induct.\n\n\n\n## Add an outduct, through which outgoing bundles can be sent to other nodes\n
    a outduct ltp 1 ltpclo
    `a` means that this command will add something.\n\n`outduct` means that this command will add an outduct.\n\n`ltp` is the convergence layer protocol of the outduct.\n\n`1` is the identifier of the outduct, the ID of the convergence-layer protocol induct of some remote node. \n\n`ltpclo` is the name of the daemon used to implement the outduct.\n\n\n## Start the bundle engine including all daemons for the inducts and outducts\n
    s
    ## Final configuration file - `host1.bprc`\n

    "},{"location":"community/dtn-gcp-main/bp-config/#begin-bpadmin","title":"begin bpadmin","text":"

    1\na scheme ipn 'ipnfw' 'ipnadminep'\na endpoint ipn:1.0 q\na endpoint ipn:1.1 q\na endpoint ipn:1.2 q\na protocol ltp 1400 100\na induct ltp 1 ltpcli\na outduct ltp 1 ltpclo\ns

    "},{"location":"community/dtn-gcp-main/bp-config/#end-bpadmin","title":"end bpadmin","text":"


    "},{"location":"community/dtn-gcp-main/host1-start-script/","title":"ION Start Script Example","text":"

    Note: place this in a file named host1.rc

    ## begin ionadmin\n1 1 ''\ns\n# Define contact plan\na contact +1 +3600 1 1 100000\n\n# Define 1sec OWLT between nodes\na range +1 +3600 1 1 1\nm production 1000000\nm consumption 1000000\n## end ionadmin\n\n## begin ltpadmin\n1 32\na span 1 32 32 1400 10000 1 'udplso 127.0.0.1:1113' 300\n# Start listening for incoming LTP traffic - assigned to the IP internal\ns 'udplsi 127.0.0.1:1113'\n## end ltpadmin\n\n## begin bpadmin\n1\n# Use the ipn eid naming scheme\na scheme ipn 'ipnfw' 'ipnadminep'\n# Create a endpoints\na endpoint ipn:1.0 q\na endpoint ipn:1.1 q\na endpoint ipn:1.2 q\n# Define ltp as the protocol used\na protocol ltp 1400 100\n# Listen \na induct ltp 1 ltpcli\n# Send to yourself\na outduct ltp 1 ltpclo\ns\n## end bpadmin\n\n## begin ipnadmin\n# Send to yourself\na plan 1 ltp/1\n## end ipnadmin\n\n## begin ionsecadmin\n# Enable bundle security to avoid error messages in ion.log\n1\n## end ionsecadmin\n
    "},{"location":"community/dtn-gcp-main/ion-config/","title":"The ION Configuration File","text":"

    Given to ionadmin either as a file or from the daemon command line, this file configures contacts for the ION node. We will assume that the local node's identification number is 1.

    This file specifies contact times and one-way light times between nodes. This is useful in deep-space scenarios: for instance, Mars may be 20 light-minutes away, or 8. Though only some transport protocols make use of this time (currently, only LTP), it must be specified for all links nonetheless. Times may be relative (prefixed with a + from current time) or absolute. Absolute times are in the format yyyy/mm/dd-hh:mm:ss. By default, the contact-graph routing engine will make bundle routing decisions based on the contact information provided.

    The configuration file lines are as follows:

    "},{"location":"community/dtn-gcp-main/ion-config/#initialize-the-ion-node-to-be-node-number-1","title":"Initialize the ion node to be node number 1","text":"
    1 1 ''\n

    1 refers to this being the initialization or first command.

    1 specifies the node number of this ion node. (IPN node 1).

    '' specifies the name of a file of configuration commands for the node's use of shared memory and other resources (suitable defaults are applied if you leave this argument as an empty string).

    "},{"location":"community/dtn-gcp-main/ion-config/#start-the-ion-node","title":"Start the ION node","text":"

    s

    This will start the ION node. It mostly functions to officially \"start\" the node at a specific instant; it causes all of ION's protocol-independent background daemons to start running.

    "},{"location":"community/dtn-gcp-main/ion-config/#specify-a-transmission-opportunity","title":"Specify a transmission opportunity","text":"
    a contact +1 +3600 1 1 100000\n

    specifies a transmission opportunity for a given time duration between two connected nodes (or, in this case, a loopback transmission opportunity).

    a adds this entry in the configuration table.

    contact specifies that this entry defines a transmission opportunity.

    +1 is the start time for the contact (relative to when the s command is issued).

    +3600 is the end time for the contact (relative to when the s command is issued).

    1 is the source node number.

    1 is the destination node number.

    100000 is the maximum rate at which data is expected to be transmitted from the source node to the destination node during this time period (here, it is 100000 bytes / second).
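    Taken together, the start time, end time, and data rate bound the total data volume of the contact. The small sketch below makes the arithmetic explicit (the helper function is made up for this illustration; it is not part of ION):

    ```python
    # Sketch: maximum data volume of a contact, from the ionadmin
    # parameters in "a contact +1 +3600 1 1 100000".
    def contact_volume(start_s: int, end_s: int, rate_bytes_per_s: int) -> int:
        """Maximum bytes transferable during the contact window."""
        return (end_s - start_s) * rate_bytes_per_s

    # A contact from +1 to +3600 at 100000 bytes/second can carry at
    # most (3600 - 1) * 100000 = 359,900,000 bytes.
    print(contact_volume(1, 3600, 100000))
    ```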

    "},{"location":"community/dtn-gcp-main/ion-config/#specify-a-distance-between-nodes","title":"Specify a distance between nodes","text":"
    a range +1 +3600 1 1 1\n

    specifies a distance between nodes, expressed as a number of light seconds, where each element has the following meaning:

    a adds this entry in the configuration table.

    range declares that what follows is a distance between two nodes.

    +1 is the earliest time at which this is expected to be the distance between these two nodes (relative to the time s was issued).

    +3600 is the latest time at which this is still expected to be the distance between these two nodes (relative to the time s was issued).

    1 is one of the two nodes in question.

    1 is the other node.

    1 is the distance between the nodes, measured in light seconds, also sometimes called the \"one-way light time\" (here, one light second is the expected distance).
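    The light-second figure is simply the physical distance divided by the speed of light. A hypothetical helper (not part of ION) shows the conversion:

    ```python
    # Sketch: convert a physical distance to the light-seconds value
    # used by ionadmin "a range" commands.
    C_M_PER_S = 299_792_458  # speed of light in metres per second

    def range_light_seconds(distance_m: float) -> float:
        """One-way light time, in seconds, for a given distance."""
        return distance_m / C_M_PER_S

    # The Earth-Moon distance (~384,400 km) is a bit over one light
    # second, so a lunar link would use a range value of about 1.3.
    print(round(range_light_seconds(384_400_000), 2))
    ```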

    "},{"location":"community/dtn-gcp-main/ion-config/#specify-the-maximum-rate-at-which-data-will-be-produced-by-the-node","title":"Specify the maximum rate at which data will be produced by the node","text":"
    m production 1000000\n

    m specifies that this is a management command.

    production declares that this command sets the maximum rate of data production at this ION node.

    1000000 specifies that at most 1000000 bytes/second will be produced by this node.

    "},{"location":"community/dtn-gcp-main/ion-config/#specify-the-maximum-rate-at-which-data-can-be-consumed-by-the-node","title":"Specify the maximum rate at which data can be consumed by the node","text":"
    m consumption 1000000\n

    m specifies that this is a management command.

    consumption declares that this command sets the maximum rate of data consumption at this ION node.

    1000000 specifies that at most 1000000 bytes/second will be consumed by this node.

    "},{"location":"community/dtn-gcp-main/ion-config/#final-configuration-file-host1ionrc","title":"Final configuration file - host1.ionrc","text":"
    ## begin ionadmin\n1 1 ''\ns\na contact +1 +3600 1 1 100000\na range +1 +3600 1 1 1\nm production 1000000\nm consumption 1000000\n## end ionadmin\n
    "},{"location":"community/dtn-gcp-main/ionsec-config/","title":"ION Security Admin Configuration File","text":"

    The ionsecadmin section is used to enable bundle security. Adding it will also avoid error messages in ion.log.

    "},{"location":"community/dtn-gcp-main/ionsec-config/#enable-the-security-of-the-bundle","title":"Enable the security of the bundle","text":"

    1 is the initialization command; it enables bundle security.

    "},{"location":"community/dtn-gcp-main/ionsec-config/#final-configuration-file-ionsecrc","title":"Final configuration file - ionsec.rc","text":"
    ## begin ionsecadmin\n1\n## end ionsecadmin\n
    "},{"location":"community/dtn-gcp-main/ipn-config/","title":"IPN Routing Configuration","text":"

    As noted earlier, this file is used to build ION's analogue to an ARP cache, a table of egress plans. It specifies which outducts to use in order to forward bundles to the local node's neighbors in the network. Since we only have one outduct, for forwarding bundles to one place (the local node), we only have one egress plan.

    "},{"location":"community/dtn-gcp-main/ipn-config/#define-an-egress-plan-for-bundles-to-be-transmitted-to-the-local-node","title":"Define an egress plan for bundles to be transmitted to the local node:","text":"
    a plan 1 ltp/1\n

    a means this command adds something. plan means this command adds an egress plan. 1 is the node number of the remote node. In this case, that is the local node's own node number; we're configuring for loopback. ltp/1 is the identifier of the outduct through which to transmit bundles in order to convey them to this ''remote'' node.

    "},{"location":"community/dtn-gcp-main/ipn-config/#final-configuration-file-host1ipnrc","title":"Final configuration file - host1.ipnrc","text":"
    ## begin ipnadmin\na plan 1 ltp/1\n## end ipnadmin\n
    "},{"location":"community/dtn-gcp-main/ltp-config/","title":"The Licklider Transfer Protocol Configuration File","text":"

    Given to ltpadmin as a file or from the command line, this file configures the LTP engine itself. We will assume the local IPN node number is 1; in ION, node numbers are used as the LTP engine numbers.

    "},{"location":"community/dtn-gcp-main/ltp-config/#initialize-the-ltp-engine","title":"Initialize the LTP engine","text":"
    1 32    \n

    1 refers to this being the initialization or ''first'' command.

    32 is an estimate of the maximum total number of LTP ''block'' transmission sessions - for all spans - that will be concurrently active in this LTP engine. It is used to size a hash table for session lookups.

    "},{"location":"community/dtn-gcp-main/ltp-config/#defines-an-ltp-engine-span","title":"Defines an LTP engine 'span'","text":"
    a span 1 32 32 1400 10000 1 'udplso localhost:1113'\n

    a indicates that this will add something to the engine.

    span indicates that an LTP span will be added.

    1 is the engine number for the span, the number of the remote engine to which LTP segments will be transmitted via this span. In this case, because the span is being configured for loopback, it is the number of the local engine, i.e., the local node number.

    32 specifies the maximum number of LTP ''block'' transmission sessions that may be active on this span. The product of the mean block size and the maximum number of transmission sessions is effectively the LTP flow control ''window'' for this span: if it's less than the bandwidth delay product for traffic between the local LTP engine and this span's remote LTP engine then you'll be under-utilizing that link. We often try to size each block to be about one second's worth of transmission, so to select a good value for this parameter you can simply divide the span's bandwidth delay product (data rate times distance in light seconds) by your best guess at the mean block size.
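    That sizing rule can be sketched as a small calculation. The function below is illustrative only (the names and the example figures are made up for this sketch, not taken from ION):

    ```python
    # Sketch of the session-count sizing rule: sessions needed to fill
    # the link is roughly bandwidth-delay product / mean block size.
    def max_tx_sessions(data_rate_bps: float, owlt_s: float,
                        mean_block_bytes: float) -> int:
        """Estimate LTP transmission sessions needed to keep a span busy."""
        bdp_bytes = data_rate_bps / 8 * owlt_s  # bandwidth-delay product
        return max(1, round(bdp_bytes / mean_block_bytes))

    # Example: a 1 Mbps link with a 100-second one-way light time and
    # ~100 KB mean blocks needs on the order of 125 concurrent sessions.
    print(max_tx_sessions(1_000_000, 100, 100_000))
    ```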

    The second 32 specifies the maximum number of LTP ''block'' reception sessions that may be active on this span. When data rates in both directions are the same, this is usually the same value as the maximum number of transmission sessions.

    1400 is the number of bytes in a single segment. In this case, LTP runs atop UDP/IP on ethernet, so we account for some packet overhead and use 1400.

    10000 is the LTP aggregation size limit, in bytes. LTP will aggregate multiple bundles into blocks for transmission. This value indicates that the block currently being aggregated will be transmitted as soon as its aggregate size exceeds 10000 bytes.

    1 is the LTP aggregation time limit, in seconds. This value indicates that the block currently being aggregated will be transmitted 1 second after aggregation began, even if its aggregate size is still less than the aggregation size limit.
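    The interaction of the two limits can be sketched as follows (a hypothetical helper for illustration; ION's actual aggregation logic is implemented in C inside the LTP engine):

    ```python
    # Sketch: an aggregated LTP block is transmitted when either the
    # size limit (10000 bytes here) is exceeded or the time limit
    # (1 second here) expires, whichever comes first.
    def block_ready(aggregate_bytes: int, elapsed_s: float,
                    size_limit: int = 10000, time_limit: float = 1.0) -> bool:
        """True when the block being aggregated should be transmitted."""
        return aggregate_bytes > size_limit or elapsed_s >= time_limit

    print(block_ready(12000, 0.2))  # size limit exceeded: transmit
    print(block_ready(3000, 1.0))   # time limit reached: transmit
    print(block_ready(3000, 0.4))   # neither limit hit: keep aggregating
    ```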

    'udplso localhost:1113' is the command used to implement the link itself. The link is implemented via UDP, sending segments to the localhost Internet interface on port 1113 (the IANA default port for LTP over UDP).

    "},{"location":"community/dtn-gcp-main/ltp-config/#starts-the-ltp-engine-itself","title":"Starts the ltp engine itself","text":"
    s 'udplsi localhost:1113'\n

    s starts the ltp engine.

    'udplsi localhost:1113' is the link service input task. In this case, the input ''duct'' is a UDP listener on the local host using port 1113.

    "},{"location":"community/dtn-gcp-main/ltp-config/#the-final-configuration-file-host1ltprc","title":"The final configuration file - host1.ltprc","text":"
    ## begin ltpadmin\n1 32\na span 1 32 32 1400 10000 1 'udplso localhost:1113'\ns 'udplsi localhost:1113'\n## end ltpadmin\n
    "},{"location":"community/dtn-multicasting-main/Multicasting-over-ION/","title":"Multicasting over the Interplanetary Internet :rocket:","text":"

    This project has been developed by Dr Lara Suzuki, a visiting Researcher at NASA JPL.

    NOTE to reader: After ION 4.1, the imc multicasting implementation has changed its configuration and no longer requires the imcadmin program. Please see the ION 4.1.2 man pages for more information. This tutorial is based on an earlier version of ION's imc multicasting. Adaptation to the latest ION version is left as an exercise for the reader.

    "},{"location":"community/dtn-multicasting-main/Multicasting-over-ION/#multicasting-using-ion","title":"Multicasting using ION","text":"

    Multicasting over the Interplanetary Internet uses version 7 of the Bundle Protocol. In a multicasting scenario, we send messages to a multicasting end point and the messages are propagated across the nodes of the network, removing the need to send data to individual nodes one at a time.

    Use multicasting when the messages you are sending should be delivered to all the known nodes of the network.

    "},{"location":"community/dtn-multicasting-main/Multicasting-over-ION/#executing-multicasting","title":"Executing Multicasting","text":"

    This tutorial presents the configurations of one host. To create additional hosts, you just need to copy the same configurations and alter the configuration documents with the appropriate data as explained in our first tutorial.

    To execute multicasting:

    1. Execute the \"execute\" file to get ION started

      $ ./execute\n

    2. When performing multicasting, the EID naming scheme is imc, added in the bprc file with: a scheme imc 'imcfw' 'imcadminep'

    3. Once ION is started you can run the commands to open bpsink to listen for bundles sent over DTN. Simply run the command below in your terminal; it will leave bpsink running in the background. In our configuration file host.bprc.2 we define the interplanetary multicasting EID imc:19.0. Messages sent to this EID will be delivered to all the hosts running the same configuration.

      $ bpsink imc:19.0 &\n
    Messages sent on bpsource imc:19.0 will be delivered to all endpoints registered in the interplanetary multicasting EID. Note that you can also use bpsendfile and bprecvfile to send and receive images and videos over multicasting.

    $ bpsendfile ipn:1.1 imc:19.0 image.jpeg\n$ bprecvfile imc:19.0 1\n
    "},{"location":"community/dtn-multicasting-main/Multicasting-over-ION/#why-ion-digital-communication-in-interplanetary-scenarios","title":"Why ION? Digital Communication in Interplanetary Scenarios","text":"

    In this section I will give a brief overview of the basic concepts of NASA's Interplanetary Overlay Network.

    Digital communication between interplanetary spacecraft and space flight control centres on Earth is subject to constraints that differ in some ways from those that characterize terrestrial communications.

    "},{"location":"community/dtn-multicasting-main/Multicasting-over-ION/#basic-concepts-of-interplanetary-overlay-network","title":"Basic Concepts of Interplanetary Overlay Network","text":"

    Delay-Disruption Tolerant Networking (DTN) is NASA\u2019s solution for reliable, automated network communications in space missions. The DTN2 reference implementation of the Delay-Tolerant Networking (DTN) RFC 4838 and the current version of the Bundle Protocol (BP) is the foundation for a wide range of current DTN industrial and research applications.

    The Jet Propulsion Laboratory (JPL) has developed an alternative implementation of BP, named \u201cInterplanetary Overlay Network\u201d (ION). ION addresses those constraints and enables delay-tolerant network communications in interplanetary mission operations.

    "},{"location":"community/dtn-multicasting-main/Multicasting-over-ION/#space-communication-challenges","title":"Space communication challenges","text":"

    To give you a sense of signal propagation delay - the round-trip time (RTT) to send a message and receive a response: 1. A typical round-trip time between two points on the internet is 100 ms to 300 ms 2. Distance to the ISS (through TDRS - Tracking and Data Relay Satellite) is approx 71322 km - RTT is approx 1200 ms on the Ku-band link 3. Distance to the Moon is approx 384400 km - RTT approx 2560 ms (about 2.5 seconds) 4. Minimum distance to Mars: approx 54.6 million km - RTT of approx 6 min 5. Average distance to Mars: approx 225 million km - RTT of approx 25 min 6. Farthest distance to Mars: approx 401 million km - RTT of approx 44.6 min
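The interplanetary figures above are simply two-way light-travel time. A minimal sketch that reproduces them from the quoted distances (propagation only; relay hops and processing delays are ignored):

```python
# Round-trip light time for the distances quoted above.
C_KM_S = 299_792.458  # speed of light, km/s

def rtt_seconds(distance_km: float) -> float:
    """Two-way signal propagation time in seconds."""
    return 2 * distance_km / C_KM_S

for name, km in [("Moon", 384_400), ("Mars (closest)", 54.6e6),
                 ("Mars (average)", 225e6), ("Mars (farthest)", 401e6)]:
    print(f"{name}: {rtt_seconds(km):.1f} s")
```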

    The internet architecture is based on the assumption that network nodes are continuously connected. The following assumptions are valid for the terrestrial Internet architecture: - Networks have short signal propagation delays - Data links are symmetric and bidirectional - Bit error rates are low

    These assumptions are not valid in the space environment - a new sort of network is needed. In a space environment: - Connections can be routinely interrupted - Interplanetary distances impose delays - Link data rates are often asymmetric and some links are simplex - Bit error rates can be very high

    To communicate across these vast distances, NASA manages three communication networks consisting of distributed ground stations and space relay satellites for data transmission and reception that support both NASA and non-NASA missions. These are: - the Deep Space Network (DSN) - the Near Earth Network (NEN) - the Space Network (SN).

    Communication opportunities are scheduled, based on orbit dynamics and operations plans. Sometimes a spacecraft is on the far side of a planet and you cannot communicate with it. Transmission and reception episodes are individually configured, started, and ended by command. Reliability over deep space links is achieved by management: on loss of data, retransmission is commanded. More recently, for Mars missions, forwarding has been managed through relay points, so that data from surface vehicles is relayed through Odyssey and other orbiting Mars vehicles.

    "},{"location":"community/dtn-multicasting-main/Host1/execute/","title":"Execute","text":"
    #!/usr/bin/bash\necho \"starting ION...\"\nionadmin host.ionrc\n./ionstart\n
    "},{"location":"community/dtn-multicasting-main/Host1/host.bprc1/","title":"Host.bprc1","text":"
    ## begin bpadmin\n1\n#       Use the ipn eid naming scheme\na scheme ipn 'ipnfw' 'ipnadminep'\n#       Create a endpoints\na endpoint ipn:1.0 q\na endpoint ipn:1.1 q\na endpoint ipn:1.2 q\n#       Define ltp as the protocol used\na scheme imc 'imcfw' 'imcadminep'\n#a endpoint imc:19.0 q\na protocol ltp 1400 100\n#       Listen\na induct ltp 1 ltpcli\n#       Send to yourself\na outduct ltp 1 ltpclo\n#       Send to server2\na outduct ltp 2 ltpclo\na outduct ltp 3 ltpclo\nw 1\ns\n## end bpadmin\n
    "},{"location":"community/dtn-multicasting-main/Host1/host.bprc2/","title":"Host.bprc2","text":"
    a endpoint imc:19.0 q\n
    "},{"location":"community/dtn-multicasting-main/Host1/host.ionrc/","title":"Host.ionrc","text":"
    ## begin ionadmin\n1 1 ''\ns\n#       Define contact plan\na contact +1 +3600 1 1 100000\na contact +1 +3600 1 2 100000\na contact +1 +3600 2 1 100000\na contact +1 +3600 2 2 100000\n\n#       Define 1sec OWLT between nodes\na range +1 +3600 1 1 1\na range +1 +3600 1 2 1\na range +1 +3600 2 1 1\na range +1 +3600 2 2 1\nm production 1000000\nm consumption 1000000\n## end ionadmin\n
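The "a contact" lines in host.ionrc above each declare a transmission opportunity. A small parser makes the field layout explicit (the helper is ours, not an ION tool; per ionrc conventions the fields are relative start/stop times in seconds, transmitting node, receiving node, and nominal rate in bytes per second):

```python
# Parse one "a contact" line, e.g. "a contact +1 +3600 1 2 100000":
# from 1 second from now until 3600 seconds from now, node 1 can send
# to node 2 at 100000 bytes/second.
def parse_contact(line: str) -> dict:
    _, _, start, stop, frm, to, rate = line.split()
    return {"start": int(start), "stop": int(stop),
            "from": int(frm), "to": int(to), "rate": int(rate)}

print(parse_contact("a contact +1 +3600 1 2 100000"))
```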
    "},{"location":"community/dtn-multicasting-main/Host1/host.ionsecrc/","title":"Host.ionsecrc","text":"
    1\n
    "},{"location":"community/dtn-multicasting-main/Host1/host.ipnrc/","title":"Host.ipnrc","text":"
    ## begin ipnadmin\n#       Send to yourself#\na plan 1 ltp/1\n#       Send to server 2\na plan 2 ltp/2\n## end ipnadmin\n
    "},{"location":"community/dtn-multicasting-main/Host1/host.ltprc/","title":"Host.ltprc","text":"
    1 32\n#a span peer_engine_nbr\n#       [queuing_latency]\n#       Create a span to tx\na span 1 32 32 1400 10000 1 'udplso external_ip_host1:1113' 300\na span 2 32 32 1400 10000 1 'udplso external_ip_host2:1113' 300\n#       Start listening for incoming LTP traffic - assigned to the IP internal\ns 'udplsi internal_ip:1113'\n## end ltpadmin\n
    "},{"location":"community/dtn-multicasting-main/Host1/ionstart/","title":"Ionstart","text":"
    #!/bin/bash\n# shell script to get node running\n\nionsecadmin     host.ionsecrc\nsleep 1\nltpadmin        host.ltprc\nsleep 1\nbpadmin         host.bprc.1\nsleep 1\nipnadmin        host.ipnrc\nsleep 2\nbpadmin         host.bprc.2\necho \"Started Host 1.\"\n
    "},{"location":"community/dtn-multicasting-main/Host1/ionstop/","title":"Ionstop","text":"
    #!/bin/bash\n#\n# ionstop\n# David Young\n# Aug 20, 2008\n#\n# will quickly and completely stop an ion node.\n\nION_OPEN_SOURCE=1\n\necho \"IONSTOP will now stop ion and clean up the node for you...\"\nbpversion\nif [ $? -eq 6 ]; then\necho \"acsadmin .\"\nacsadmin .\nsleep 1\nfi\necho \"bpadmin .\"\nbpadmin .\nsleep 1\nif [ \"$ION_OPEN_SOURCE\" == \"1\" ]; then\necho \"cfdpadmin .\"\ncfdpadmin .\nsleep 1\nfi\necho \"ltpadmin .\"\nltpadmin .\nsleep 1\necho \"ionadmin .\"\nionadmin .\nsleep 1\necho \"killm\"\nkillm\necho \"ION node ended. Log file: ion.log\"\n
    "},{"location":"community/dtn-video-intelligence-main/Video-Intelligence/","title":"Using GCP Video Intelligence over the Interplanetary Internet","text":"

    This project has been developed by Dr Lara Suzuki, a visiting researcher at NASA JPL.

    In this tutorial I will demonstrate how to use GCP Video Intelligence over the Interplanetary Internet.

    "},{"location":"community/dtn-video-intelligence-main/Video-Intelligence/#setting-up-the-interplanetary-internet","title":"Setting up the Interplanetary Internet","text":"

    Please follow tutorial Multicasting over the Interplanetary Internet to set up your nodes.

    As noted in the multicasting tutorial, current ION imc multicasting operation has been updated and adaptation is required to make this tutorial work.

    Once the hosts are configured you can run the execute script to get ION started:

    $ ./execute\n
    "},{"location":"community/dtn-video-intelligence-main/Video-Intelligence/#google-cloud-video-intelligence","title":"Google Cloud Video Intelligence","text":"

    Cloud Video Intelligence API allows you to process frames of videos with a simple API call from anywhere. With GCP Video Intelligence you can:

    1. Quickly understand video content by encapsulating powerful machine learning models in an easy to use REST API.

    2. Accurately annotate videos stored in Google Cloud Storage with video and frame-level (1 fps) contextual information.

    3. Make sense of large amounts of video files in a very short amount of time.

    4. Utilise the technology via an easy to use REST API to analyze videos stored anywhere, or integrate with your image storage on Google Cloud Storage.

    "},{"location":"community/dtn-video-intelligence-main/Video-Intelligence/#executing-google-cloud-video-intelligence","title":"Executing Google Cloud Video Intelligence","text":"

    On the interplanetary host you choose for running the Video Intelligence API, install the library:

    $ pip install --upgrade google-cloud-videointelligence\n

    To use the Video Intelligence API you must be authenticated. To do that, please follow the instructions in this GCP tutorial. After you've created your service account, provide authentication credentials to your application code by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS.

    $ export GOOGLE_APPLICATION_CREDENTIALS=\"/home/user/Downloads/my-key.json\"\n

    On the host that will run the Video Intelligence, execute:

    python3 video_processing.py\n
    "},{"location":"community/dtn-video-intelligence-main/video-processing-script/","title":"Video processing script","text":"
    import argparse\nfrom google.cloud import videointelligence\n\nvideo_client = videointelligence.VideoIntelligenceServiceClient()\nfeatures = [videointelligence.Feature.LABEL_DETECTION]\noperation = video_client.annotate_video(\nrequest={\"features\": features, \"input_uri\":'gs://test_bucket_pi/video1_pi.mp4'})\nprint(\"Processing video for label annotations:\")\n\nresult = operation.result(timeout=90)\nprint(\"\\nFinished processing.\")\n\nsegment_labels = result.annotation_results[0].segment_label_annotations\nfor i, segment_label in enumerate(segment_labels):\n  print(\"Video label description: {}\".format(segment_label.entity.description))\n  for category_entity in segment_label.category_entities:\n    print( \"\\tLabel category description: {}\".format(category_entity.description)\n            )\n\n  for i, segment in enumerate(segment_label.segments):\n    start_time = (\n      segment.segment.start_time_offset.seconds\n      + segment.segment.start_time_offset.microseconds / 1e6\n      )\n    end_time = (\n      segment.segment.end_time_offset.seconds\n      + segment.segment.end_time_offset.microseconds / 1e6\n      )\n    positions = \"{}s to {}s\".format(start_time, end_time)\n    confidence = segment.confidence\n    print(\"\\tSegment {}: {}\".format(i, positions))\n    print(\"\\tConfidence: {}\".format(confidence))\n    print(\"\\n\")\n
    "},{"location":"man/ams/","title":"Index of Man Pages","text":""},{"location":"man/ams/ams/","title":"NAME","text":"

    ams - CCSDS Asynchronous Message Service(AMS) communications library

    "},{"location":"man/ams/ams/#synopsis","title":"SYNOPSIS","text":"
    #include \"ams.h\"\n\ntypedef void                (*AmsMsgHandler)(AmsModule module,\n                                    void *userData,\n                                    AmsEvent *eventRef,\n                                    int continuumNbr,\n                                    int unitNbr,\n                                    int moduleNbr,\n                                    int subjectNbr,\n                                    int contentLength,\n                                    char *content,\n                                    int context,\n                                    AmsMsgType msgType,\n                                    int priority,\n                                    unsigned char flowLabel);\n\ntypedef void                (*AmsRegistrationHandler)(AmsModule module,\n                                    void *userData,\n                                    AmsEvent *eventRef,\n                                    int unitNbr,\n                                    int moduleNbr,\n                                    int roleNbr);\n\ntypedef void                (*AmsUnregistrationHandler)(AmsModule module,\n                                    void *userData,\n                                    AmsEvent *eventRef,\n                                    int unitNbr,\n                                    int moduleNbr);\n\ntypedef void                (*AmsInvitationHandler)(AmsModule module,\n                                    void *userData,\n                                    AmsEvent *eventRef,\n                                    int unitNbr,\n                                    int moduleNbr,\n                                    int domainRoleNbr,\n                                    int domainContinuumNbr,\n                                    int domainUnitNbr,\n                                    int subjectNbr,\n                                    int priority,\n                                    unsigned char flowLabel,\n         
                           AmsSequence sequence,\n                                    AmsDiligence diligence);\n\ntypedef void                (*AmsDisinvitationHandler)(AmsModule module,\n                                    void *userData,\n                                    AmsEvent *eventRef,\n                                    int unitNbr,\n                                    int moduleNbr,\n                                    int domainRoleNbr,\n                                    int domainContinuumNbr,\n                                    int domainUnitNbr,\n                                    int subjectNbr);\n\ntypedef void                (*AmsSubscriptionHandler)(AmsModule module,\n                                    void *userData,\n                                    AmsEvent *eventRef,\n                                    int unitNbr,\n                                    int moduleNbr,\n                                    int domainRoleNbr,\n                                    int domainContinuumNbr,\n                                    int domainUnitNbr,\n                                    int subjectNbr,\n                                    int priority,\n                                    unsigned char flowLabel,\n                                    AmsSequence sequence,\n                                    AmsDiligence diligence);\n\ntypedef void                (*AmsUnsubscriptionHandler)(AmsModule module,\n                                    void *userData,\n                                    AmsEvent *eventRef,\n                                    int unitNbr,\n                                    int moduleNbr,\n                                    int domainRoleNbr,\n                                    int domainContinuumNbr,\n                                    int domainUnitNbr,\n                                    int subjectNbr);\n\ntypedef void                (*AmsUserEventHandler)(AmsModule module,\n                                    
void *userData,\n                                    AmsEvent *eventRef,\n                                    int code,\n                                    int dataLength,\n                                    char *data);\n\ntypedef void                (*AmsMgtErrHandler)(void *userData,\n                                    AmsEvent *eventRef);\n\ntypedef struct\n{\n    AmsMsgHandler                   msgHandler;\n    void                            *msgHandlerUserData;\n    AmsRegistrationHandler          registrationHandler;\n    void                            *registrationHandlerUserData;\n    AmsUnregistrationHandler        unregistrationHandler;\n    void                            *unregistrationHandlerUserData;\n    AmsInvitationHandler            invitationHandler;\n    void                            *invitationHandlerUserData;\n    AmsDisinvitationHandler         disinvitationHandler;\n    void                            *disinvitationHandlerUserData;\n    AmsSubscriptionHandler          subscriptionHandler;\n    void                            *subscriptionHandlerUserData;\n    AmsUnsubscriptionHandler        unsubscriptionHandler;\n    void                            *unsubscriptionHandlerUserData;\n    AmsUserEventHandler             userEventHandler;\n    void                            *userEventHandlerUserData;\n    AmsMgtErrHandler                errHandler;\n    void                            *errHandlerUserData;\n} AmsEventMgt;\n\ntypedef enum\n{\n    AmsArrivalOrder = 0,\n    AmsTransmissionOrder\n} AmsSequence;\n\ntypedef enum\n{\n    AmsBestEffort = 0,\n    AmsAssured\n} AmsDiligence;\n\ntypedef enum\n{\n    AmsRegistrationState,\n    AmsInvitationState,\n    AmsSubscriptionState\n} AmsStateType;\n\ntypedef enum\n{\n    AmsStateBegins = 1,\n    AmsStateEnds\n} AmsChangeType;\n\ntypedef enum\n{\n    AmsMsgUnary = 0,\n    AmsMsgQuery,\n    AmsMsgReply,\n    AmsMsgNone\n} AmsMsgType;\n\n[see description for available functions]\n
    "},{"location":"man/ams/ams/#description","title":"DESCRIPTION","text":"

    The ams library provides functions enabling application software to use AMS to send and receive brief messages, up to 65000 bytes in length. It conforms to AMS Blue Book, including support for Remote AMS (RAMS).

    In the ION implementation of RAMS, the \"RAMS network protocol\" may be either the DTN Bundle Protocol (RFC 5050) or -- mainly for testing purposes -- the User Datagram Protocol (RFC 768). BP is the default. When BP is used as the RAMS network protocol, endpoints are by default assumed to conform to the \"ipn\" endpoint identifier scheme with node number set to the AMS continuum number and service number set to the AMS venture number.
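Under those defaults, the "ipn" endpoint ID used by RAMS can be formed mechanically from the AMS continuum and venture numbers. A minimal sketch (the helper name is ours, not part of the ams library):

```python
# Default RAMS endpoint: node number = AMS continuum number,
# service number = AMS venture number.
def default_rams_eid(continuum_nbr: int, venture_nbr: int) -> str:
    return f"ipn:{continuum_nbr}.{venture_nbr}"

print(default_rams_eid(1, 9))  # -> ipn:1.9
```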

    Note that RAMS functionality is enabled by instantiating a ramsgate daemon, which is simply an AMS application program that acts as a gateway between the local AMS message space and the RAMS network.

    AMS differs from other ION packages in that there is no utilization of non-volatile storage (aside from the BP functionality in RAMS, if applicable). Since there is no non-volatile AMS database, there is no AMS administration program and there are no library functions for attaching to or detaching from such a database. AMS is instantiated by commencing operation of the AMS real-time daemon amsd; once amsd is running, AMS application programs (\"modules\") can be started. All management of AMS operational state is performed automatically in real time.

    However, amsd and the AMS application programs all require access to a common store of configuration data at startup in order to load their Management Information Bases. This configuration data must reside in a readable file, which may take either of two forms: a file of XML statements conforming to the scheme described in the amsxml(5) man page, or a file of simple but less powerful configuration statements as described in the amsrc(5) man page. The amsxml alternative requires that the expat XML parsing system be installed; the amsrc alternative was developed to simplify deployment of AMS in environments in which expat is not readily supported. Selection of the configuration file format is a compile-time decision, implemented by setting (or not setting) the -DNOEXPAT compiler option.

    The path name of the applicable configuration file may be passed as a command-line parameter to amsd and as a registration function parameter by any AMS application program. Note, though, that ramsgate and the AMS test and utility programs included in ION always assume that the configuration file resides in the current working directory and that it is named \"mib.amsrc\" if AMS was built with -DNOEXPAT, \"amsmib.xml\" otherwise.

    The transport services that are made available to AMS communicating entities are declared by the transportServiceLoaders array in the libams.c source file. This array is fixed at compile time. The order of preference of the transport services in the array is hard-coded, but the inclusion or omission of individual transport services is controlled by setting compiler options. The \"udp\" transport service -- nominally the most preferred because it imposes the least processing and transmission overhead -- is included by setting the -DUDPTS option. The \"dgr\" service is included by setting the -DDGRTS option. The \"vmq\" (VxWorks message queue) service, supported only on VxWorks, is included by setting the -DVMQTS option. The \"tcp\" transport service -- selected only when its quality of service is required -- is included by setting the -DTCPTS option.

    The operating state of any single AMS application program is managed in an opaque AmsModule object. This object is returned when the application begins AMS operations (that is, registers) and must be provided as an argument to most AMS functions.

    "},{"location":"man/ams/ams/#see-also","title":"SEE ALSO","text":"

    amsd(1), ramsgate(1), amsxml(5), amsrc(5)

    "},{"location":"man/ams/amsbenchr/","title":"NAME","text":"

    amsbenchr - Asynchronous Message Service (AMS) benchmarking meter

    "},{"location":"man/ams/amsbenchr/#synopsis","title":"SYNOPSIS","text":"

    amsbenchr

    "},{"location":"man/ams/amsbenchr/#description","title":"DESCRIPTION","text":"

    amsbenchr is a test program that simply subscribes to subject \"bench\" and receives messages published by amsbenchs until all messages in the test - as indicated by the count of remaining messages in the first four bytes of each message - have been received. Then it stops receiving messages, calculates and prints performance statistics, and terminates.

    amsbenchr will register as an application module in the root unit of the venture identified by application name \"amsdemo\" and authority name \"test\". A configuration server for the local continuum and a registrar for the root unit of that venture (which may both be instantiated in a single amsd daemon task) must be running in order for amsbenchr to commence operations.

    "},{"location":"man/ams/amsbenchr/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amsbenchr/#files","title":"FILES","text":"

    A MIB initialization file with the applicable default name (see amsrc(5)) must be present.

    "},{"location":"man/ams/amsbenchr/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amsbenchr/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/ams/amsbenchr/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amsbenchr/#see-also","title":"SEE ALSO","text":"

    amsrc(5)

    "},{"location":"man/ams/amsbenchs/","title":"NAME","text":"

    amsbenchs - Asynchronous Message Service (AMS) benchmarking driver

    "},{"location":"man/ams/amsbenchs/#synopsis","title":"SYNOPSIS","text":"

    amsbenchs count size

    "},{"location":"man/ams/amsbenchs/#description","title":"DESCRIPTION","text":"

    amsbenchs is a test program that simply publishes count messages of size bytes each on subject \"bench\", then waits while all published messages are transmitted, terminating when the user uses ^C to interrupt the program. The remaining number of messages to be published in the test is written into the first four octets of each message.
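The count-prefixed message layout that amsbenchs and amsbenchr share can be sketched as follows. The helpers and the big-endian byte order are our assumptions for illustration; the actual programs write a C integer in the host's native representation:

```python
import struct

# Each test message carries the remaining-message count in its first
# four octets, followed by filler up to the requested message size.
def make_message(remaining: int, size: int) -> bytes:
    body = b"\x00" * max(0, size - 4)
    return struct.pack("!I", remaining) + body  # "!I": big-endian uint32

def read_remaining(message: bytes) -> int:
    return struct.unpack("!I", message[:4])[0]

msg = make_message(41, 1000)
print(len(msg), read_remaining(msg))  # 1000 41
```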

    amsbenchs will register as an application module in the root unit of the venture identified by application name \"amsdemo\" and authority name \"test\". A configuration server for the local continuum and a registrar for the root unit of that venture (which may both be instantiated in a single amsd daemon task) must be running in order for amsbenchs to commence operations.

    "},{"location":"man/ams/amsbenchs/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amsbenchs/#files","title":"FILES","text":"

    A MIB initialization file with the applicable default name (see amsrc(5)) must be present.

    "},{"location":"man/ams/amsbenchs/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amsbenchs/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/ams/amsbenchs/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amsbenchs/#see-also","title":"SEE ALSO","text":"

    amsrc(5)

    "},{"location":"man/ams/amsd/","title":"NAME","text":"

    amsd - AMS configuration server and/or registrar daemon

    "},{"location":"man/ams/amsd/#synopsis","title":"SYNOPSIS","text":"

    amsd { @ | MIB_source_name } { . | @ | config_server_endpoint_spec } [application_name authority_name registrar_unit_name]

    "},{"location":"man/ams/amsd/#description","title":"DESCRIPTION","text":"

    amsd is a background \"daemon\" task that functions as an AMS \"configuration server\" in the local continuum, as an AMS \"registrar\" in a specified cell, or both.

    If MIB_source_name is specified, it must name a MIB initialization file in the correct format for amsd, either amsrc(5) or amsxml(5), depending on whether or not -DNOEXPAT was set at compile time. Otherwise @ is required; in this case, the built-in default MIB is loaded.

    If this amsd task is NOT to run as a configuration server then the second command-line argument must be a '.' character. Otherwise the second command-line argument must be either '@' or config_server_endpoint_spec. If '@' then the endpoint specification for this configuration server is automatically computed as the default endpoint specification for the primary transport service as noted in the MIB: \"hostname:2357\".

    If an AMS module is NOT to be run in a background thread for this daemon (enabling shutdown by amsstop(1) and/or runtime MIB update by amsmib(1)), then either the last three command-line arguments must be omitted or else the \"amsd\" role must not be defined in the MIB loaded for this daemon. Otherwise the application_name and authority_name arguments are required and the \"amsd\" role must be defined in the MIB.

    If this amsd task is NOT to run as a registrar then the last command-line argument must be omitted. Otherwise the last three command-line arguments are required and they must identify a unit in an AMS venture for the indicated application and authority that is known to operate in the local continuum, as noted in the MIB. Note that the unit name for the \"root unit\" of a venture is the zero-length string \"\".

    "},{"location":"man/ams/amsd/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amsd/#files","title":"FILES","text":"

    If MIB source name is specified, then a file of this name must be present. Otherwise a MIB initialization file with the applicable default name (see amsrc(5)) must be present.

    "},{"location":"man/ams/amsd/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amsd/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/ams/amsd/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amsd/#see-also","title":"SEE ALSO","text":"

    amsmib(1), amsstop(1), amsrc(5), amsxml(5)

    "},{"location":"man/ams/amshello/","title":"NAME","text":"

    amshello - Asynchronous Message Service (AMS) demo program for UNIX

    "},{"location":"man/ams/amshello/#synopsis","title":"SYNOPSIS","text":"

    amshello

    "},{"location":"man/ams/amshello/#description","title":"DESCRIPTION","text":"

    amshello is a sample program designed to demonstrate that an entire (very simple) distributed AMS application can be written in just a few lines of C code. When started, amshello forks a second process and initiates transmission of a \"Hello\" text message from one process to the other, after which both processes unregister and terminate.

    The amshello processes will register as application modules in the root unit of the venture identified by application name \"amsdemo\" and authority name \"test\". A configuration server for the local continuum and a registrar for the root unit of that venture (which may both be instantiated in a single amsd daemon task) must be running in order for the amshello processes to run.

    "},{"location":"man/ams/amshello/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amshello/#files","title":"FILES","text":"

    A MIB initialization file with the applicable default name (see amsrc(5)) must be present.

    "},{"location":"man/ams/amshello/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amshello/#diagnostics","title":"DIAGNOSTICS","text":"

    No diagnostics apply.

    "},{"location":"man/ams/amshello/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amshello/#see-also","title":"SEE ALSO","text":"

    amsrc(5)

    "},{"location":"man/ams/amslog/","title":"NAME","text":"

    amslog - Asynchronous Message Service (AMS) test message receiver

    "},{"location":"man/ams/amslog/#synopsis","title":"SYNOPSIS","text":"

    amslog unit_name role_name application_name authority_name [{ s | i }]

    "},{"location":"man/ams/amslog/#description","title":"DESCRIPTION","text":"

    amslog is a message reception program designed to test AMS functionality.

    When amslog is started, it registers as an application module in the unit identified by unit_name of the venture identified by application_name and authority_name; the role in which it registers must be indicated in role_name. A configuration server for the local continuum and a registrar for the indicated unit of the indicated venture (which may both be instantiated in a single amsd daemon task) must be running in order for amslog to run.

    amslog runs as two threads: a background thread that receives AMS messages and logs them to standard output, and a foreground thread that acquires operating parameters in lines of console input to control the flow of messages to the background thread.

    When the first character of a line of input from stdin to the amslog foreground thread is '.' (period), amslog immediately terminates. Otherwise, the first character of each line of input from stdin must be either '+' indicating assertion of interest in a message subject or '-' indicating cessation of interest in a subject. In each case, the name of the subject in question must begin in the second character of the input line. Note that \"everything\" is a valid subject name.

    By default, amslog runs in \"subscribe\" mode: when interest in a message subject is asserted, amslog subscribes to that subject; when interest in a message subject is rescinded, amslog unsubscribes to that subject. This behavior can be overridden by providing a fifth command-line argument to amslog - a \"mode\" indicator. When mode is 'i', amslog runs in \"invite\" mode. In \"invite\" mode, when interest in a message subject is asserted, amslog invites messages on that subject; when interest in a message subject is rescinded, amslog cancels its invitation for messages on that subject.

    The \"domain\" of a subscription or invitation can optionally be specified immediately after the subject name, on the same line of console input:

    Domain continuum name may be specified, or the place-holder domain continuum name \"_\" may be specified to indicate \"all continua\".

    If domain continuum name (\"_\" or otherwise) is specified, then domain unit name may be specified or the place-holder domain unit name \"_\" may be specified to indicate \"the root unit\" (i.e., the entire venture).

    If domain unit name (\"_\" or otherwise) is specified, then domain role name may be specified.
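    For illustration, a session's console input to amslog might look like the following (the subject and unit names here are hypothetical, borrowed from the example in amsrc(5)):

    ```text
    +text
    +bench _ orbiters
    -text
    .
    ```

    The first line asserts interest in subject \"text\" with no domain restriction; the second asserts interest in subject \"bench\" in unit \"orbiters\" of all continua; the third rescinds interest in \"text\"; the final '.' terminates amslog.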

    When amslog runs in VxWorks or RTEMS, the subject and content of each message are simply written to standard output in a text line for display on the console. When amslog runs in a UNIX environment, the subject name length (a binary integer), subject name (ASCII text), content length (a binary integer), and content (ASCII text) are written to standard output for redirection either to a file or to a pipe to amslogprt.

    Whenever a received message is flagged as a Query, amslog returns a reply message whose content is the string \"Got \" followed by the first 128 bytes of the content of the Query message, enclosed in single quote marks and followed by a period.

    "},{"location":"man/ams/amslog/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amslog/#files","title":"FILES","text":"

    A MIB initialization file with the applicable default name (see amsrc(5)) must be present.

    "},{"location":"man/ams/amslog/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amslog/#diagnostics","title":"DIAGNOSTICS","text":""},{"location":"man/ams/amslog/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amslog/#see-also","title":"SEE ALSO","text":"

    amsshell(1), amslogprt(1), amsrc(5)

    "},{"location":"man/ams/amslogprt/","title":"NAME","text":"

    amslogprt - UNIX utility program for printing AMS log messages from amslog

    "},{"location":"man/ams/amslogprt/#synopsis","title":"SYNOPSIS","text":"

    amslogprt

    "},{"location":"man/ams/amslogprt/#description","title":"DESCRIPTION","text":"

    amslogprt simply reads AMS activity log messages from standard input (nominally written by amslog) and prints them. When the content of a logged message is judged not to be an ASCII text string, the content is printed in hexadecimal.

    amslogprt terminates at the end of input.

    "},{"location":"man/ams/amslogprt/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amslogprt/#files","title":"FILES","text":"

    No files are needed by amslogprt.

    "},{"location":"man/ams/amslogprt/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amslogprt/#diagnostics","title":"DIAGNOSTICS","text":"

    None.

    "},{"location":"man/ams/amslogprt/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amslogprt/#see-also","title":"SEE ALSO","text":"

    amsrc(5)

    "},{"location":"man/ams/amsmib/","title":"NAME","text":"

    amsmib - Asynchronous Message Service (AMS) MIB update utility

    "},{"location":"man/ams/amsmib/#synopsis","title":"SYNOPSIS","text":"

    amsmib application_name authority_name role_name continuum_name unit_name file_name

    "},{"location":"man/ams/amsmib/#description","title":"DESCRIPTION","text":"

    amsmib is a utility program that announces relatively brief Management Information Base (MIB) updates to a select population of AMS modules. Because amsd processes may run AMS modules in background threads, and because a single MIB is shared in common among all threads of any process, amsmib may update the MIBs used by registrars and/or configuration servers as well.

    MIB updates can only be propagated to modules for which the subject \"amsmib\" was defined in the MIB initialization files cited at module registration time. All ION AMS modules implicitly invite messages on subject \"amsmib\" (from all modules registered in role \"amsmib\" in all continua of the same venture) at registration time if subject \"amsmib\" and role \"amsmib\" are defined in the MIB.

    amsmib registers in the root cell of the message space identified by application_name and authority_name, within the local continuum. It registers in the role \"amsmib\"; if this role is not defined in the (initial) MIB loaded by amsmib at registration time, then registration fails and amsmib terminates.

    amsmib then reads into a memory buffer up to 4095 bytes of MIB update text from the file identified by file_name. The MIB update text must conform to amsxml(5) or amsrc(5) syntax, depending on whether or not the intended recipient modules were compiled with the -DNOEXPAT option.

    amsmib then \"announces\" (see ams_announce() in ams(3)) the contents of the memory buffer to all modules of this same venture (identified by application_name and authority_name) that registered in the indicated role, in the indicated unit of the indicated continuum. If continuum_name is \"\" then the message will be sent to modules in all continua. If role_name is \"\" then all modules will be eligible to receive the message, regardless of the role in which they registered. If unit_name is \"\" (the root unit) then all modules will be eligible to receive the message, regardless of the unit in which they registered.
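    As a concrete illustration (the venture and file names here are hypothetical, drawn from the example in amsrc(5)), the following command would announce the MIB updates in a file named update.amsrc to all modules of the \"amsdemo\"/\"test\" venture, regardless of role, unit, or continuum:

    ```text
    amsmib amsdemo test "" "" "" update.amsrc
    ```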

    Upon reception of the announced message, each destination module will apply all of the MIB updates in the content of the message, in exactly the same way that its original MIB was loaded from the MIB initialization file when the module started running.

    If multiple modules are running in the same memory space (e.g., in different threads of the same process, or in different tasks on the same VxWorks target) then the updates will be applied multiple times, because all modules in the same memory space share a single MIB. MIB updates are idempotent, so this is harmless (though some diagnostics may be printed).

    Moreover, an amsd daemon will have a relevant \"MIB update\" module running in a background thread if application_name and authority_name were cited on the command line that started the daemon (provided the role \"amsd\" was defined in the initial MIB loaded at the time amsd began running). The MIB exposed to the configuration server and/or registrar running in that daemon will likewise be updated upon reception of the announced message.

    The name of the subject of the announced mib update message is \"amsmib\"; if this subject is not defined in the (initial) MIB loaded by amsmib then the message cannot be announced. Nor can any potential recipient module receive the message if subject \"amsmib\" is not defined in that module's MIB.

    "},{"location":"man/ams/amsmib/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amsmib/#files","title":"FILES","text":"

    A MIB initialization file with the applicable default name (see amsrc(5) and amsxml(5)) must be present.

    "},{"location":"man/ams/amsmib/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amsmib/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/ams/amsmib/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amsmib/#see-also","title":"SEE ALSO","text":"

    amsd(1), ams(3), amsrc(5), amsxml(5)

    "},{"location":"man/ams/amspub/","title":"NAME","text":"

    amspub - Asynchronous Message Service (AMS) test driver for VxWorks

    "},{"location":"man/ams/amspub/#synopsis","title":"SYNOPSIS","text":"

    amspub \"application_name\", \"authority_name\", \"subject_name\", \"message_text\"

    "},{"location":"man/ams/amspub/#description","title":"DESCRIPTION","text":"

    amspub is a message publication program designed to test AMS functionality in a VxWorks environment. When an amspub task is started, it registers as an application module in the root unit of the venture identified by application_name and authority_name, looks up the subject number for subject_name, publishes a single message with content message_text on that subject, unregisters, and terminates.

    A configuration server for the local continuum and a registrar for the root unit of the indicated venture (which may both be instantiated in a single amsd daemon task) must be running in order for amspub to run.
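    For example (hypothetical values, borrowed from the example in amsrc(5)), the following VxWorks shell command would publish a single message with content \"Hello\" on subject \"text\" in the \"amsdemo\"/\"test\" venture:

    ```text
    amspub "amsdemo", "test", "text", "Hello"
    ```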

    "},{"location":"man/ams/amspub/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amspub/#files","title":"FILES","text":"

    The amspub source code is in the amspubsub.c source file.

    A MIB initialization file with the applicable default name (see amsrc(5)) must be present.

    "},{"location":"man/ams/amspub/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amspub/#diagnostics","title":"DIAGNOSTICS","text":""},{"location":"man/ams/amspub/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amspub/#see-also","title":"SEE ALSO","text":"

    amssub(1), amsrc(5)

    "},{"location":"man/ams/amsrc/","title":"NAME","text":"

    amsrc - CCSDS Asynchronous Message Service MIB initialization file

    "},{"location":"man/ams/amsrc/#description","title":"DESCRIPTION","text":"

    The Management Information Base (MIB) for an AMS communicating entity (either amsd or an AMS application module) must contain enough information to enable the entity to initiate participation in AMS message exchange, such as the network location of the configuration server and the roles and message subjects defined for some venture.

    AMS entities automatically load their MIBs from initialization files at startup. When AMS is built with the -DNOEXPAT compiler option set, the MIB initialization file must conform to the amsrc syntax described below; otherwise the expat XML parsing library must be linked into the AMS executable and the MIB initialization file must conform to the amsxml syntax described in amsxml(5).

    The MIB initialization file lists elements of MIB update information, each of which may have one or more attributes. An element may also have sub-elements that are listed within the declaration of the parent element, and so on.

    The declaration of an element may occupy a single line of text in the MIB initialization file or may extend across multiple lines. A single-line element declaration is indicated by a '*' in the first character of the line. The beginning of a multi-line element declaration is indicated by a '+' in the first character of the line, while the end of that declaration is indicated by a '-' in the first character of the line. In every case, the type of element must be indicated by an element-type name beginning in the second character of the line and terminated by whitespace. Every start-of-element line must be matched by a subsequent end-of-element line that precedes the start of any other element that is not a nested sub-element of this element.

    Attributes are represented by whitespace-terminated <name>=<value> expressions immediately following the element-type name on a '*' or '+' line. An attribute value that contains whitespace must be enclosed within a pair of single-quote (') characters.

    Two types of elements are recognized in the MIB initialization file: control elements and configuration elements. A control element establishes the update context within which the configuration elements nested within it are processed, while a configuration element declares values for one or more items of AMS configuration information in the MIB.

    Note that an aggregate configuration element (i.e., one which may contain other interior configuration elements; venture, for example) may be presented outside of any control element, simply to establish the context in which subsequent control elements are to be interpreted.

    "},{"location":"man/ams/amsrc/#control-elements","title":"CONTROL ELEMENTS","text":""},{"location":"man/ams/amsrc/#configuration-elements","title":"CONFIGURATION ELEMENTS","text":""},{"location":"man/ams/amsrc/#example","title":"EXAMPLE","text":"

    *ams_mib_init continuum_nbr=2 ptsname=dgr

    +ams_mib_add

    *continuum nbr=1 name=apl desc=APL

    *csendpoint epspec=beaumont.stepsoncats.com:2357

    *application name=amsdemo

    +venture nbr=1 appname=amsdemo authname=test

    *role nbr=2 name=shell

    *role nbr=3 name=log

    *role nbr=4 name=pitch

    *role nbr=5 name=catch

    *role nbr=6 name=benchs

    *role nbr=7 name=benchr

    *role nbr=96 name=amsd

    *role nbr=97 name=amsmib

    *role nbr=98 name=amsstop

    *subject nbr=1 name=text desc='ASCII text'

    *subject nbr=2 name=noise desc='more ASCII text'

    *subject nbr=3 name=bench desc='numbered msgs'

    *subject nbr=97 name=amsmib desc='MIB updates'

    *subject nbr=98 name=amsstop desc='shutdown'

    *unit nbr=1 name=orbiters

    *unit nbr=2 name=orbiters.near

    *unit nbr=3 name=orbiters.far

    *msgspace nbr=2

    -venture

    -ams_mib_add

    "},{"location":"man/ams/amsrc/#see-also","title":"SEE ALSO","text":"

    amsxml(5)

    "},{"location":"man/ams/amsshell/","title":"NAME","text":"

    amsshell - Asynchronous Message Service (AMS) test message sender (UNIX)

    "},{"location":"man/ams/amsshell/#synopsis","title":"SYNOPSIS","text":"

    amsshell unit_name role_name application_name authority_name [{ p | s | q | a }]

    "},{"location":"man/ams/amsshell/#description","title":"DESCRIPTION","text":"

    amsshell is a message issuance program designed to test AMS functionality.

    When amsshell is started, it registers as an application module in the unit identified by unit_name of the venture identified by application_name and authority_name; the role in which it registers must be indicated in role_name. A configuration server for the local continuum and a registrar for the indicated unit of the indicated venture (which may both be instantiated in a single amsd daemon task) must be running in order for amsshell to run.
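    For example (hypothetical names, borrowed from the example in amsrc(5)), the following command would start amsshell registered in the root unit of the \"amsdemo\"/\"test\" venture, in role \"shell\":

    ```text
    amsshell "" shell amsdemo test
    ```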

    amsshell runs as two threads: a background thread that watches for AMS configuration events (including shutdown), and a foreground thread that acquires operating parameters and message content in lines of console input to control the issuance of messages.

    The first character of each line of input from stdin to the amsshell indicates the significance of that line:

    When the first character of a line of input from stdin is none of the above, the entire line is taken to be the text of a message that is to be issued immediately, on the previously specified subject, to the previously specified module (if applicable), and within the previously specified domain (if applicable).

    By default, amsshell runs in \"publish\" mode: when a message is to be issued, it is simply published. This behavior can be overridden by providing a fifth command-line argument to amsshell - a \"mode\" indicator. The supported modes are as follows:

    "},{"location":"man/ams/amsshell/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amsshell/#files","title":"FILES","text":"

    A MIB initialization file with the applicable default name (see amsrc(5)) must be present.

    "},{"location":"man/ams/amsshell/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amsshell/#diagnostics","title":"DIAGNOSTICS","text":""},{"location":"man/ams/amsshell/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amsshell/#see-also","title":"SEE ALSO","text":"

    amslog(1), amsrc(5)

    "},{"location":"man/ams/amsstop/","title":"NAME","text":"

    amsstop - Asynchronous Message Service (AMS) message space shutdown utility

    "},{"location":"man/ams/amsstop/#synopsis","title":"SYNOPSIS","text":"

    amsstop application_name authority_name

    "},{"location":"man/ams/amsstop/#description","title":"DESCRIPTION","text":"

    amsstop is a utility program that terminates the operation of all registrars and all application modules running in the message space which is that portion of the indicated AMS venture that is operating in the local continuum. If one of the amsd tasks that are functioning as registrars for this venture is also functioning as the configuration server for the local continuum, then that configuration server is also terminated.

    application_name and authority_name must identify an AMS venture that is known to operate in the local continuum, as noted in the MIB for the amsstop application module.

    A message space can only be shut down by amsstop if the subject \"amsstop\" is defined in the MIBs of all modules in the message space.

    "},{"location":"man/ams/amsstop/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amsstop/#files","title":"FILES","text":"

    A MIB initialization file with the applicable default name (see amsrc(5) and amsxml(5)) must be present.

    "},{"location":"man/ams/amsstop/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amsstop/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/ams/amsstop/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amsstop/#see-also","title":"SEE ALSO","text":"

    amsrc(5)

    "},{"location":"man/ams/amssub/","title":"NAME","text":"

    amssub - Asynchronous Message Service (AMS) test message receiver for VxWorks

    "},{"location":"man/ams/amssub/#synopsis","title":"SYNOPSIS","text":"

    amssub \"application_name\", \"authority_name\", \"subject_name\"

    "},{"location":"man/ams/amssub/#description","title":"DESCRIPTION","text":"

    amssub is a message reception program designed to test AMS functionality in a VxWorks environment. When an amssub task is started, it registers as an application module in the root unit of the venture identified by application_name and authority_name, looks up the subject number for subject_name, subscribes to that subject, and begins receiving and printing messages on that subject until terminated by amsstop.

    A configuration server for the local continuum and a registrar for the root unit of the indicated venture (which may both be instantiated in a single amsd daemon task) must be running in order for amssub to run.

    "},{"location":"man/ams/amssub/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/amssub/#files","title":"FILES","text":"

    The amssub source code is in the amspubsub.c source file.

    A MIB initialization file with the applicable default name (see amsrc(5)) must be present.

    "},{"location":"man/ams/amssub/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/amssub/#diagnostics","title":"DIAGNOSTICS","text":""},{"location":"man/ams/amssub/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/amssub/#see-also","title":"SEE ALSO","text":"

    amspub(1), amsrc(5)

    "},{"location":"man/ams/amsxml/","title":"NAME","text":"

    amsxml - CCSDS Asynchronous Message Service MIB initialization XML file

    "},{"location":"man/ams/amsxml/#description","title":"DESCRIPTION","text":"

    The Management Information Base (MIB) for an AMS communicating entity (either amsd or an AMS application module) must contain enough information to enable the entity to initiate participation in AMS message exchange, such as the network location of the configuration server and the roles and message subjects defined for some venture.

    AMS entities automatically load their MIBs from initialization files at startup. When AMS is built with the -DNOEXPAT compiler option set, the MIB initialization file must conform to the amsrc syntax described in amsrc(5); otherwise the expat XML parsing library must be linked into the AMS executable and the MIB initialization file must conform to the amsxml syntax described below.

    The XML statements in the MIB initialization file constitute elements of MIB update information, each of which may have one or more attributes. An element may also have sub-elements that are listed within the declaration of the parent element, and so on.

    Two types of elements are recognized in the MIB initialization file: control elements and configuration elements. A control element establishes the update context within which the configuration elements nested within it are processed, while a configuration element declares values for one or more items of AMS configuration information in the MIB.

    For a discussion of the recognized control elements and configuration elements of the MIB initialization file, see the amsrc(5) man page. NOTE, though, that all elements of an XML-based MIB initialization file must be sub-elements of a single sub-element of the XML extension type ams_load_mib in order for the file to be parsed successfully by expat.

    "},{"location":"man/ams/amsxml/#example","title":"EXAMPLE","text":"

    <?xml version=\"1.0\" standalone=\"yes\"?>

    <ams_mib_load>

        <ams_mib_init continuum_nbr=\"2\" ptsname=\"dgr\"/>\n\n    <ams_mib_add>\n\n            <continuum nbr=\"1\" name=\"apl\" desc=\"APL\"/>\n\n            <csendpoint epspec=\"beaumont.stepsoncats.com:2357\"/>\n\n            <application name=\"amsdemo\" />\n\n            <venture nbr=\"1\" appname=\"amsdemo\" authname=\"test\">\n\n                    <role nbr=\"2\" name=\"shell\"/>\n\n                    <role nbr=\"3\" name=\"log\"/>\n\n                    <role nbr=\"4\" name=\"pitch\"/>\n\n                    <role nbr=\"5\" name=\"catch\"/>\n\n                    <role nbr=\"6\" name=\"benchs\"/>\n\n                    <role nbr=\"7\" name=\"benchr\"/>\n\n                    <role nbr=\"96\" name=\"amsd\"/>\n\n                    <role nbr=\"97\" name=\"amsmib\"/>\n\n                    <role nbr=\"98\" name=\"amsstop\"/>\n\n                    <subject nbr=\"1\" name=\"text\" desc=\"ASCII text\"/>\n\n                    <subject nbr=\"2\" name=\"noise\" desc=\"more ASCII text\"/>\n\n                    <subject nbr=\"3\" name=\"bench\" desc=\"numbered msgs\"/>\n\n                    <subject nbr=\"97\" name=\"amsmib\" desc=\"MIB updates\"/>\n\n                    <subject nbr=\"98\" name=\"amsstop\" desc=\"shutdown\"/>\n\n                    <unit nbr=\"1\" name=\"orbiters\"/>\n\n                    <unit nbr=\"2\" name=\"orbiters.near\"/>\n\n                    <unit nbr=\"3\" name=\"orbiters.far\"/>\n\n                    <msgspace nbr=\"2\"/>\n\n            </venture>\n\n    </ams_mib_add>\n

    </ams_mib_load>

    "},{"location":"man/ams/amsxml/#see-also","title":"SEE ALSO","text":"

    amsrc(5)

    "},{"location":"man/ams/petition_log/","title":"NAME","text":"

    petition.log - Remote AMS petition log

    "},{"location":"man/ams/petition_log/#description","title":"DESCRIPTION","text":"

    The Remote AMS daemon ramsgate records all \"petitions\" (requests for data on behalf of AMS modules in other continua) in a file named petition.log. At startup, the ramsgate daemon automatically reads and processes all petitions in the petition.log file just as if they were received in real time, to re-establish the petition state that was in effect at the time the ramsgate daemon shut down. Note that this means that you can cause petitions to be, in effect, \"pre-received\" by simply editing this file prior to startup. This can be an especially effective way to configure a RAMS network in which long signal propagation times would otherwise retard real-time petitioning and thus delay the onset of fully functional message exchange.

    Entries in petition.log are simple ASCII text lines, with parameters separated by spaces. Each line of petition.log has the following parameters:

    "},{"location":"man/ams/petition_log/#see-also","title":"SEE ALSO","text":"

    ramsgate(1), ams(3)

    "},{"location":"man/ams/ramsgate/","title":"NAME","text":"

    ramsgate - Remote AMS gateway daemon

    "},{"location":"man/ams/ramsgate/#synopsis","title":"SYNOPSIS","text":"

    ramsgate application_name authority_name [bundles_TTL]

    "},{"location":"man/ams/ramsgate/#description","title":"DESCRIPTION","text":"

    ramsgate is a background \"daemon\" task that functions as a Remote AMS gateway. application_name and authority_name must identify an AMS venture that is known to operate in the local continuum, as noted in the MIB for the ramsgate application module.

    ramsgate will register as an application module in the root unit of the indicated venture, so a configuration server for the local continuum and a registrar for the root unit of the indicated venture (which may both be instantiated in a single amsd daemon task) must be running in order for ramsgate to commence operations.

    ramsgate will communicate with other RAMS gateway modules in other continua by means of the RAMS network protocol noted in the RAMS gateway endpoint ID for the local continuum, as identified (explicitly or implicitly) in the MIB.

    If the RAMS network protocol is \"bp\" (i.e., the DTN Bundle Protocol), then an ION Bundle Protocol node must be operating on the local computer and that node must be registered in the BP endpoint identified by the RAMS gateway endpoint ID for the local continuum. Moreover, in this case the value of bundles_TTL - if specified - will be taken as the lifetime in seconds that is to be declared for all \"bundles\" issued by ramsgate; bundles_TTL defaults to 86400 seconds (one day) if omitted.
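    For example (the venture names here are hypothetical, borrowed from the example in amsrc(5)), the following command would start a RAMS gateway for the \"amsdemo\"/\"test\" venture with a declared bundle lifetime of one hour:

    ```text
    ramsgate amsdemo test 3600
    ```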

    "},{"location":"man/ams/ramsgate/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/ams/ramsgate/#files","title":"FILES","text":"

    A MIB initialization file with the applicable default name (see amsrc(5)) must be present.

    ramsgate records all \"petitions\" (requests for data on behalf of AMS modules in other continua) in a file named \"petition.log\". At startup, the ramsgate daemon automatically reads and processes all petitions in the petition.log file just as if they were received in real time. Note that this means that you can cause petitions to be, in effect, \"pre-received\" by simply editing this file prior to startup. This can be an especially effective way to configure a RAMS network in which long signal propagation times would otherwise retard real-time petitioning and thus delay the onset of fully functional message exchange.

    "},{"location":"man/ams/ramsgate/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/ams/ramsgate/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/ams/ramsgate/#bugs","title":"BUGS","text":"

    Note that the AMS design principle of receiving messages immediately and enqueuing them for eventual ingestion by the application module - rather than imposing application-layer flow control on AMS message traffic - enables high performance but makes ramsgate vulnerable to message spikes. Since production and transmission of bundles is typically slower than AMS message reception over TCP service, the ION working memory and/or heap space available for AMS event insertion and/or bundle production can be quickly exhausted if a high rate of application message production is sustained for a long enough time. Mechanisms for defending against this sort of failure are under study, but for now the best mitigations are simply to (a) build with compiler option -DAMS_INDUSTRIAL=1, (b) allocate as much space as possible to ION working memory and SDR heap (see ionconfig(5)) and (c) limit the rate of AMS message issuance.

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/ams/ramsgate/#see-also","title":"SEE ALSO","text":"

    amsrc(5), petition_log(5)

    "},{"location":"man/bpv6/","title":"Index of Man Pages","text":""},{"location":"man/bpv6/acsadmin/","title":"NAME","text":"

    acsadmin - ION Aggregate Custody Signal (ACS) administration interface

    "},{"location":"man/bpv6/acsadmin/#synopsis","title":"SYNOPSIS","text":"

    acsadmin [ commands_filename ]

    "},{"location":"man/bpv6/acsadmin/#description","title":"DESCRIPTION","text":"

    acsadmin configures aggregate custody signal behavior for the local ION node.

    It operates in response to ACS configuration commands found in the file commands_filename, if provided; if not, acsadmin prints a simple prompt (:) so that the user may type commands directly into standard input.

    The format of commands for commands_filename can be queried from acsadmin with the 'h' or '?' commands at the prompt. The commands are documented in acsrc(5).

    "},{"location":"man/bpv6/acsadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/acsadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/acsadmin/#files","title":"FILES","text":"

    See acsrc(5) for details of the ACS configuration commands.

    "},{"location":"man/bpv6/acsadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/acsadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the acsrc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to acsadmin. Otherwise acsadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause acsadmin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see acsrc(5) for details.

    "},{"location":"man/bpv6/acsadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/acsadmin/#see-also","title":"SEE ALSO","text":"

    ionadmin(1), bpadmin(1), acsrc(5)

    "},{"location":"man/bpv6/acslist/","title":"NAME","text":"

    acslist - Aggregate Custody Signals (ACS) utility for checking custody IDs.

    "},{"location":"man/bpv6/acslist/#synopsis","title":"SYNOPSIS","text":"

    acslist [-s|--stdout]

    "},{"location":"man/bpv6/acslist/#description","title":"DESCRIPTION","text":"

    acslist is a utility program that lists all mappings from bundle ID to custody ID currently in the local bundle agent's ACS ID database, in no specific order. A bundle ID (defined in RFC5050) is the tuple of (source EID, creation time, creation count, fragment offset, fragment length). A custody ID (defined in draft-jenkins-aggregate-custody-signals) is an integer that the local bundle agent will be able to map to a bundle ID for the purposes of aggregating and compressing custody signals.

    The format for mappings is:

    (ipn:13.1,333823688,95,0,0)->(26)

    While listing, acslist also checks the custody ID database for self-consistency, and if it detects any errors it will print a line starting with \"Mismatch:\" and describing the error.

    -s|--stdout tells acslist to print results to stdout, rather than to the ION log.
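For scripted checks, the documented mapping format can be parsed with standard tools. A minimal sketch, assuming acslist was run with -s and its stdout captured; the sample line is the one shown above:

```shell
# Extract the custody ID from an acslist mapping line of the form
# (source EID,creation time,creation count,offset,length)->(custody ID)
line='(ipn:13.1,333823688,95,0,0)->(26)'
custody_id=$(printf '%s\n' "$line" | sed 's/.*->(\([0-9]*\))/\1/')
echo "$custody_id"    # prints 26
```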

    "},{"location":"man/bpv6/acslist/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/acslist/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/acslist/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/acslist/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued:

    "},{"location":"man/bpv6/acslist/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/acslist/#see-also","title":"SEE ALSO","text":"

    acsadmin(1), bplist(1)

    "},{"location":"man/bpv6/acsrc/","title":"NAME","text":"

    acsrc - Aggregate Custody Signal management commands file

    "},{"location":"man/bpv6/acsrc/#description","title":"DESCRIPTION","text":"

    Aggregate Custody Signal management commands are passed to acsadmin either in a file of text lines or interactively at acsadmin's command prompt (:). Commands are interpreted line-by-line, with exactly one command per line. The formats and effects of the Aggregate Custody Signal management commands are described below.

    "},{"location":"man/bpv6/acsrc/#general-commands","title":"GENERAL COMMANDS","text":""},{"location":"man/bpv6/acsrc/#custodian-commands","title":"CUSTODIAN COMMANDS","text":""},{"location":"man/bpv6/acsrc/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/acsrc/#see-also","title":"SEE ALSO","text":"

    acsadmin(1)

    "},{"location":"man/bpv6/bibeclo/","title":"NAME","text":"

    bibeclo - BP convergence layer output task using bundle-in-bundle encapsulation

    "},{"location":"man/bpv6/bibeclo/#synopsis","title":"SYNOPSIS","text":"

    bibeclo peer_node_eid destination_node_eid

    "},{"location":"man/bpv6/bibeclo/#description","title":"DESCRIPTION","text":"

    bibeclo is a background \"daemon\" task that extracts bundles from the queues of bundles ready for transmission to destination_node_eid via bundle-in-bundle encapsulation (BIBE), encapsulates them in BP administrative records of (non-standard) record type 7 (BP_ENCAPSULATED_BUNDLE), and sends those administrative records to the DTN node identified by peer_node_eid. The receiving node is expected to process these received administrative records by simply dispatching the encapsulated bundles as if they had been received from neighboring nodes in the normal course of operations.

    bibeclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. bibeclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the BIBE convergence layer protocol.

    "},{"location":"man/bpv6/bibeclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bibeclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bibeclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bibeclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bibeclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bibeclo/#see-also","title":"SEE ALSO","text":"

    bibeadmin(1), bp(3), biberc(5)

    "},{"location":"man/bpv6/bp/","title":"NAME","text":"

    bp - Bundle Protocol communications library

    "},{"location":"man/bpv6/bp/#synopsis","title":"SYNOPSIS","text":"
    #include \"bp.h\"\n\n[see description for available functions]\n
    "},{"location":"man/bpv6/bp/#description","title":"DESCRIPTION","text":"

    The bp library provides functions enabling application software to use Bundle Protocol to send and receive information over a delay-tolerant network. It conforms to the Bundle Protocol specification as documented in Internet RFC 5050.

    "},{"location":"man/bpv6/bp/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), lgsend(1), lgagent(1), bpextensions(3), bprc(5), lgfile(5)

    "},{"location":"man/bpv6/bpadmin/","title":"NAME","text":"

    bpadmin - ION Bundle Protocol (BP) administration interface

    "},{"location":"man/bpv6/bpadmin/#synopsis","title":"SYNOPSIS","text":"

    bpadmin [ commands_filename | . | ! ]

    "},{"location":"man/bpv6/bpadmin/#description","title":"DESCRIPTION","text":"

    bpadmin configures, starts, manages, and stops bundle protocol operations for the local ION node.

    It operates in response to BP configuration commands found in the file commands_filename, if provided; if not, bpadmin prints a simple prompt (:) so that the user may type commands directly into standard input. If commands_filename is a period (.), the effect is the same as if a command file containing the single command 'x' were passed to bpadmin -- that is, the ION node's bpclock task, forwarder tasks, and convergence layer adapter tasks are stopped. If commands_filename is an exclamation point (!), that effect is reversed: the ION node's bpclock task, forwarder tasks, and convergence layer adapter tasks are restarted.
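The three invocation forms can be summarized as follows. The commands themselves require a running ION node, so only the preparation of a commands file is shown live (the file name bp.rc is hypothetical):

```shell
# bpadmin bp.rc   # execute the BP commands in the file bp.rc
# bpadmin .       # same as a command file containing only 'x': stop BP tasks
# bpadmin !       # restart bpclock, forwarder, and convergence layer adapter tasks

# A minimal NL-delimited commands file holding only the documented help command:
printf 'h\n' > bp.rc
```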

    The format of commands for commands_filename can be queried from bpadmin with the 'h' or '?' commands at the prompt. The commands are documented in bprc(5).

    "},{"location":"man/bpv6/bpadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/bpadmin/#files","title":"FILES","text":"

    See bprc(5) for details of the BP configuration commands.

    "},{"location":"man/bpv6/bpadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the bprc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to bpadmin. Otherwise bpadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause bpadmin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see bprc(5) for details.

    "},{"location":"man/bpv6/bpadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpadmin/#see-also","title":"SEE ALSO","text":"

    ionadmin(1), bprc(5), ipnadmin(1), ipnrc(5), dtnadmin(1), dtnrc(5)

    "},{"location":"man/bpv6/bpcancel/","title":"NAME","text":"

    bpcancel - Bundle Protocol (BP) bundle cancellation utility

    "},{"location":"man/bpv6/bpcancel/#synopsis","title":"SYNOPSIS","text":"

    bpcancel source_EID creation_seconds [creation_count [fragment_offset [fragment_length]]]

    "},{"location":"man/bpv6/bpcancel/#description","title":"DESCRIPTION","text":"

    bpcancel attempts to locate the bundle identified by the command-line parameter values and cancel transmission of this bundle. Bundles for which multiple copies have been queued for transmission can't be canceled, because one or more of those copies might already have been transmitted. Transmission of a bundle that has never been cloned and that is still in local bundle storage is canceled by simulation of an immediate time-to-live expiration.

    "},{"location":"man/bpv6/bpcancel/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpcancel/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpcancel/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpcancel/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bpcancel/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpcancel/#see-also","title":"SEE ALSO","text":"

    bplist(1)

    "},{"location":"man/bpv6/bpchat/","title":"NAME","text":"

    bpchat - Bundle Protocol chat test program

    "},{"location":"man/bpv6/bpchat/#synopsis","title":"SYNOPSIS","text":"

    bpchat sourceEID destEID [ct]

    "},{"location":"man/bpv6/bpchat/#description","title":"DESCRIPTION","text":"

    bpchat uses Bundle Protocol to send input text in bundles, and display the payload of received bundles as output. It is similar to the talk utility, but operates over the Bundle Protocol. It operates like a combination of the bpsource and bpsink utilities in one program (unlike bpsource, bpchat emits bundles with a sourceEID).

    If the sourceEID and destEID are both bpchat applications, then two users can chat with each other over the Bundle Protocol: lines that one user types on the keyboard will be transported over the network in bundles and displayed on the screen of the other user (and the reverse).

    bpchat terminates upon receiving the SIGQUIT signal, i.e., ^C from the keyboard.

    "},{"location":"man/bpv6/bpchat/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpchat/#options","title":"OPTIONS","text":""},{"location":"man/bpv6/bpchat/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpchat/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpchat/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpchat are written to the ION log file ion.log.

    "},{"location":"man/bpv6/bpchat/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpchat/#see-also","title":"SEE ALSO","text":"

    bpecho(1), bpsource(1), bpsink(1), bp(3)

    "},{"location":"man/bpv6/bpclm/","title":"NAME","text":"

    bpclm - DTN bundle protocol convergence layer management daemon

    "},{"location":"man/bpv6/bpclm/#synopsis","title":"SYNOPSIS","text":"

    bpclm neighboring_node_ID

    "},{"location":"man/bpv6/bpclm/#description","title":"DESCRIPTION","text":"

    bpclm is a background \"daemon\" task that manages the transmission of bundles to a single designated neighboring node (as constrained by an \"egress plan\" data structure for that node) by one or more convergence-layer (CL) adapter output daemons (via buffer structures called \"outducts\").

    bpclm is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. bpclm can also be spawned and terminated in response to commands that START and STOP the corresponding node's egress plan.

    "},{"location":"man/bpv6/bpclm/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpclm/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpclm/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpclm/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bpclm/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpclm/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv6/bpclock/","title":"NAME","text":"

    bpclock - Bundle Protocol (BP) daemon task for managing scheduled events

    "},{"location":"man/bpv6/bpclock/#synopsis","title":"SYNOPSIS","text":"

    bpclock

    "},{"location":"man/bpv6/bpclock/#description","title":"DESCRIPTION","text":"

    bpclock is a background \"daemon\" task that periodically performs scheduled Bundle Protocol activities. It is spawned automatically by bpadmin in response to the 's' command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command.

    Once per second, bpclock takes the following action:

    First it (a) destroys all bundles whose TTLs have expired, (b) enqueues for re-forwarding all bundles that were expected to have been transmitted (by convergence-layer output tasks) by now but are still stuck in their assigned transmission queues, and (c) enqueues for re-forwarding all bundles for which custody has not yet been taken that were expected to have been received and acknowledged by now (as noted by invocation of the bpMemo() function by some convergence-layer adapter that had CL-specific insight into the appropriate interval to wait for custody acceptance).

    Then bpclock adjusts the transmission and reception \"throttles\" that control rates of LTP transmission to and reception from neighboring nodes, in response to data rate changes as noted in the RFX database by rfxclock.

    bpclock then checks for bundle origination activity that has been blocked due to insufficient allocated space for BP traffic in the ION data store: if space for bundle origination is now available, bpclock gives the bundle production throttle semaphore to unblock that activity.

    Finally, bpclock applies rate control to all convergence-layer protocol inducts and outducts:

    For each induct, bpclock increases the current capacity of the duct by the applicable nominal data reception rate. If the revised current capacity is greater than zero, bpclock gives the throttle's semaphore to unblock data acquisition (which correspondingly reduces the current capacity of the duct) by the associated convergence layer input task.

    For each outduct, bpclock increases the current capacity of the duct by the applicable nominal data transmission rate. If the revised current capacity is greater than zero, bpclock gives the throttle's semaphore to unblock data transmission (which correspondingly reduces the current capacity of the duct) by the associated convergence layer output task.

    "},{"location":"man/bpv6/bpclock/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpclock/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpclock/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpclock/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bpclock/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpclock/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), rfxclock(1)

    "},{"location":"man/bpv6/bpcounter/","title":"NAME","text":"

    bpcounter - Bundle Protocol reception test program

    "},{"location":"man/bpv6/bpcounter/#synopsis","title":"SYNOPSIS","text":"

    bpcounter ownEndpointId [maxCount]

    "},{"location":"man/bpv6/bpcounter/#description","title":"DESCRIPTION","text":"

    bpcounter uses Bundle Protocol to receive application data units from a remote bpdriver application task. When the total number of application data units it has received exceeds maxCount, it terminates and prints its reception count. If maxCount is omitted, the default limit is 2 billion application data units.

    "},{"location":"man/bpv6/bpcounter/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpcounter/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpcounter/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpcounter/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpcounter are written to the ION log file ion.log.

    "},{"location":"man/bpv6/bpcounter/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpcounter/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpdriver(1), bpecho(1), bp(3)

    "},{"location":"man/bpv6/bpdriver/","title":"NAME","text":"

    bpdriver - Bundle Protocol transmission test program

    "},{"location":"man/bpv6/bpdriver/#synopsis","title":"SYNOPSIS","text":"

    bpdriver nbrOfCycles ownEndpointId destinationEndpointId [length] [tTTL]

    "},{"location":"man/bpv6/bpdriver/#description","title":"DESCRIPTION","text":"

    bpdriver uses Bundle Protocol to send nbrOfCycles application data units of length indicated by length, to a counterpart application task that has opened the BP endpoint identified by destinationEndpointId.

    If omitted, length defaults to 60000.

    TTL indicates the number of seconds the bundles may remain in the network, undelivered, before they are automatically destroyed. If omitted, TTL defaults to 300 seconds.

    bpdriver normally runs in \"echo\" mode: after sending each bundle it waits for an acknowledgment bundle before sending the next one. For this purpose, the counterpart application task should be bpecho.

    Alternatively bpdriver can run in \"streaming\" mode, i.e., without expecting or receiving acknowledgments. Streaming mode is enabled when length is specified as a negative number, in which case the additive inverse of length is used as the effective value of length. For this purpose, the counterpart application task should be bpcounter.

    If the effective value of length is 1, the sizes of the transmitted service data units will be randomly selected multiples of 1024 in the range 1024 to 62464.

    bpdriver normally runs with custody transfer disabled. To request custody transfer for all bundles sent by bpdriver, specify nbrOfCycles as a negative number; the additive inverse of nbrOfCycles will be used as its effective value in this case.
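The two sign conventions above can be sketched in shell arithmetic; the values below are illustrative:

```shell
# A negative length selects streaming mode; a negative nbrOfCycles requests
# custody transfer. In both cases the additive inverse is the effective value.
length=-60000
nbrOfCycles=-100
streaming=no; custody=no
if [ $length -lt 0 ]; then streaming=yes; length=$(( -length )); fi
if [ $nbrOfCycles -lt 0 ]; then custody=yes; nbrOfCycles=$(( -nbrOfCycles )); fi
echo $streaming $length $custody $nbrOfCycles    # prints yes 60000 yes 100
```

So a streaming-mode run with custody transfer against a bpcounter peer might be invoked as bpdriver -100 ipn:1.1 ipn:2.1 -60000 (hypothetical endpoint IDs).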

    When all copies of the file have been sent, bpdriver prints a performance report.

    "},{"location":"man/bpv6/bpdriver/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpdriver/#files","title":"FILES","text":"

    The service data units transmitted by bpdriver are sequences of text obtained from a file in the current working directory named \"bpdriverAduFile\", which bpdriver creates automatically.

    "},{"location":"man/bpv6/bpdriver/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpdriver/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpdriver are written to the ION log file ion.log.

    "},{"location":"man/bpv6/bpdriver/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpdriver/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpcounter(1), bpecho(1), bp(3)

    "},{"location":"man/bpv6/bpecho/","title":"NAME","text":"

    bpecho - Bundle Protocol reception test program

    "},{"location":"man/bpv6/bpecho/#synopsis","title":"SYNOPSIS","text":"

    bpecho ownEndpointId

    "},{"location":"man/bpv6/bpecho/#description","title":"DESCRIPTION","text":"

    bpecho uses Bundle Protocol to receive application data units from a remote bpdriver application task. In response to each received application data unit it sends back an \"echo\" application data unit of length 2, the NULL-terminated string \"x\".

    bpecho terminates upon receiving the SIGQUIT signal, i.e., ^C from the keyboard.

    "},{"location":"man/bpv6/bpecho/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpecho/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpecho/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpecho/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpecho are written to the ION log file ion.log.

    "},{"location":"man/bpv6/bpecho/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpecho/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpdriver(1), bpcounter(1), bp(3)

    "},{"location":"man/bpv6/bpextensions/","title":"NAME","text":"

    bpextensions - interface for adding extensions to Bundle Protocol

    "},{"location":"man/bpv6/bpextensions/#synopsis","title":"SYNOPSIS","text":"
    #include \"bpextensions.c\"\n
    "},{"location":"man/bpv6/bpextensions/#description","title":"DESCRIPTION","text":"

    ION's interface for extending the Bundle Protocol enables the definition of external functions that insert extension blocks into outbound bundles (either before or after the payload block), parse and record extension blocks in inbound bundles, and modify extension blocks at key points in bundle processing. All extension-block handling is statically linked into ION at build time, but the addition of an extension never requires that any standard ION source code be modified.

    Standard structures for recording extension blocks -- both in transient storage [memory] during bundle acquisition (AcqExtBlock) and in persistent storage [the ION database] during subsequent bundle processing (ExtensionBlock) -- are defined in the bei.h header file. In each case, the extension block structure comprises a block type code, block processing flags, possibly a list of EID references, an array of bytes (the serialized form of the block, for transmission), the length of that array, optionally an extension-specific opaque object whose structure is designed to characterize the block in a manner that's convenient for the extension processing functions, and the size of that object.

    The definition of each extension is asserted in an ExtensionDef structure, also as defined in the bei.h header file. Each ExtensionDef must supply:

    The name of the extension. (Used in some diagnostic messages.)

    The extension's block type code.

    A pointer to an offer function.

    A pointer to a function to be called when forwarding a bundle containing this sort of block.

    A pointer to a function to be called when taking custody of a bundle containing this sort of block.

    A pointer to a function to be called when enqueuing for transmission a bundle containing this sort of block.

    A pointer to a function to be called when a convergence-layer adapter dequeues a bundle containing this sort of block, before serializing it.

    A pointer to a function to be called immediately before a convergence-layer adapter transmits a bundle containing this sort of block, after the bundle has been serialized.

    A pointer to a release function.

    A pointer to a copy function.

    A pointer to an acquire function.

    A pointer to a decrypt function.

    A pointer to a parse function.

    A pointer to a check function.

    A pointer to a record function.

    A pointer to a clear function.

    All extension definitions must be coded into an array of ExtensionDef structures named extensionDefs.

    An array of ExtensionSpec structures named extensionSpecs is also required. Each ExtensionSpec provides the specification for producing an outbound extension block: block definition (identified by block type number), three discriminator tags whose semantics are block-type-specific, and a list index value indicating whether the extension block is to be inserted before or after the Payload block. The order of appearance of extension specifications in the extensionSpecs array determines the order in which extension blocks will be inserted into locally sourced bundles.

    The standard extensionDefs array -- which is empty -- is in the noextensions.c prototype source file. The procedure for extending the Bundle Protocol in ION is as follows:

    1. Specify -DBP_EXTENDED in the Makefile's compiler command line when building the libbpP.c library module.

    2. Create a copy of the prototype extensions file, named \"bpextensions.c\", in a directory that is made visible to the Makefile's libbpP.c compilation command line (by a -I parameter).

    3. In the \"external function declarations\" area of \"bpextensions.c\", add \"extern\" function declarations identifying the functions that will implement your extension (or extensions).

    4. Add one or more ExtensionDef structure initialization lines to the extensionDefs array, referencing those declared functions.

    5. Add one or more ExtensionSpec structure initialization lines to the extensionSpecs array, referencing those extension definitions.

    6. Develop the implementations of the extension implementation functions in one or more new source code files.

    7. Add the object file or files for the new extension implementation source file (or files) to the Makefile's command line for linking libbpP.so.

    The function pointers supplied in each ExtensionDef must conform to the following specifications. NOTE that any function that modifies the bytes member of an ExtensionBlock or AcqExtBlock must set the corresponding length to the new length of the bytes array, if changed.

    "},{"location":"man/bpv6/bpextensions/#utility-functions-for-extension-processing","title":"UTILITY FUNCTIONS FOR EXTENSION PROCESSING","text":""},{"location":"man/bpv6/bpextensions/#see-also","title":"SEE ALSO","text":"

    bp(3)

    "},{"location":"man/bpv6/bping/","title":"NAME","text":"

    bping - Send and receive Bundle Protocol echo bundles.

    "},{"location":"man/bpv6/bping/#synopsis","title":"SYNOPSIS","text":"

    bping [-c count] [-i interval] [-p priority] [-q wait] [-r flags] [-t ttl] srcEID destEID [reporttoEID]

    "},{"location":"man/bpv6/bping/#description","title":"DESCRIPTION","text":"

    bping sends bundles from srcEID to destEID. If the destEID echoes the bundles back (for instance, it is a bpecho endpoint), bping will print the round-trip time. When complete, bping will print statistics before exiting. It is very similar to ping, except it works with the bundle protocol.

    bping terminates when one of the following happens: it receives the SIGINT signal (Ctrl+C), it receives responses to all of the bundles it sent, or it has sent all count of its bundles and waited wait seconds.

    When bping is executed in a VxWorks or RTEMS environment, its runtime arguments are presented positionally rather than by keyword, in this order: count, interval, priority, wait, flags, TTL, verbosity (a Boolean, defaulting to zero), source EID, destination EID, report-to EID.

    Source EID and destination EID are always required.

    "},{"location":"man/bpv6/bping/#exit-status","title":"EXIT STATUS","text":"

    These exit statuses are taken from ping.

    "},{"location":"man/bpv6/bping/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bping/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bping/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bping are written to the ION log file ion.log and printed to standard error. Diagnostic messages that don't cause bping to terminate indicate a failure parsing an echo response bundle. This means that destEID isn't an echo endpoint: it's responding with some other bundle message of an unexpected format.

    "},{"location":"man/bpv6/bping/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bping/#see-also","title":"SEE ALSO","text":"

    bpecho(1), bptrace(1), bpadmin(1), bp(3), ping(8)

    "},{"location":"man/bpv6/bplist/","title":"NAME","text":"

    bplist - Bundle Protocol (BP) utility for listing queued bundles

    "},{"location":"man/bpv6/bplist/#synopsis","title":"SYNOPSIS","text":"

    bplist [{count | detail} [destination_EID[/priority]]]

    "},{"location":"man/bpv6/bplist/#description","title":"DESCRIPTION","text":"

    bplist is a utility program that reports on bundles that currently reside in the local node, as identified by entries in the local bundle agent's \"timeline\" list.

    Either a count of bundles or a detailed list of bundles (noting primary block information together with hex and ASCII dumps of the payload and all extension blocks, in expiration-time sequence) may be requested.

    Either all bundles or just a subset of bundles - restricted to bundles for a single destination endpoint, or to bundles of a given level of priority that are all destined for some specified endpoint - may be included in the report.

    By default, bplist prints a detailed list of all bundles residing in the local node.

    "},{"location":"man/bpv6/bplist/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bplist/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bplist/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bplist/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bplist/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bplist/#see-also","title":"SEE ALSO","text":"

    bpclock(1)

    "},{"location":"man/bpv6/bpnmtest/","title":"NAME","text":"

    bpnmtest - Bundle Protocol (BP) network management statistics test

    "},{"location":"man/bpv6/bpnmtest/#synopsis","title":"SYNOPSIS","text":"

    bpnmtest

    "},{"location":"man/bpv6/bpnmtest/#description","title":"DESCRIPTION","text":"

    bpnmtest simply prints to stdout messages containing the current values of all BP network management tallies, then terminates.

    "},{"location":"man/bpv6/bpnmtest/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpnmtest/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpnmtest/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpnmtest/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bpnmtest/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bprc/","title":"NAME","text":"

    bprc - Bundle Protocol management commands file

    "},{"location":"man/bpv6/bprc/#description","title":"DESCRIPTION","text":"

    Bundle Protocol management commands are passed to bpadmin either in a file of text lines or interactively at bpadmin's command prompt (:). Commands are interpreted line-by-line, with exactly one command per line. The formats and effects of the Bundle Protocol management commands are described below.

    "},{"location":"man/bpv6/bprc/#general-commands","title":"GENERAL COMMANDS","text":""},{"location":"man/bpv6/bprc/#scheme-commands","title":"SCHEME COMMANDS","text":""},{"location":"man/bpv6/bprc/#endpoint-commands","title":"ENDPOINT COMMANDS","text":""},{"location":"man/bpv6/bprc/#protocol-commands","title":"PROTOCOL COMMANDS","text":""},{"location":"man/bpv6/bprc/#induct-commands","title":"INDUCT COMMANDS","text":""},{"location":"man/bpv6/bprc/#outduct-commands","title":"OUTDUCT COMMANDS","text":""},{"location":"man/bpv6/bprc/#egress-plan-commands","title":"EGRESS PLAN COMMANDS","text":""},{"location":"man/bpv6/bprc/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/bprc/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), ipnadmin(1), dtn2admin(1)

    "},{"location":"man/bpv6/bprecvfile/","title":"NAME","text":"

    bprecvfile - Bundle Protocol (BP) file reception utility

    "},{"location":"man/bpv6/bprecvfile/#synopsis","title":"SYNOPSIS","text":"

    bprecvfile own_endpoint_ID [max_files]

    "},{"location":"man/bpv6/bprecvfile/#description","title":"DESCRIPTION","text":"

    bprecvfile is intended to serve as the counterpart to bpsendfile. It uses bp_receive() to receive bundles containing file content. The content of each bundle is simply written to a file named \"testfileN\" where N is the total number of bundles received since the program began running.

    If a max_files value of N (where N > 0) is provided, the program will terminate automatically upon completing its Nth file reception. Otherwise it will run indefinitely; use ^C to terminate the program.

    "},{"location":"man/bpv6/bprecvfile/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bprecvfile/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bprecvfile/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bprecvfile/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bprecvfile/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bprecvfile/#see-also","title":"SEE ALSO","text":"

    bpsendfile(1), bp(3)

    "},{"location":"man/bpv6/bpsecadmin/","title":"NAME","text":"

    bpsecadmin - BP security policy administration interface

    "},{"location":"man/bpv6/bpsecadmin/#synopsis","title":"SYNOPSIS","text":"

    bpsecadmin [ commands_filename ]

    "},{"location":"man/bpv6/bpsecadmin/#description","title":"DESCRIPTION","text":"

    bpsecadmin configures and manages BP security policy on the local computer.

    It does so in response to BP security policy management commands found in commands_filename, if provided; if not, bpsecadmin prints a simple prompt (:) so that the user may type commands directly into standard input.

    The format of commands for commands_filename can be queried from bpsecadmin by entering the command 'h' or '?' at the prompt. The commands are documented in bpsecrc(5).

    "},{"location":"man/bpv6/bpsecadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpsecadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/bpsecadmin/#files","title":"FILES","text":"

    Status and diagnostic messages from bpsecadmin and from other software that utilizes the ION node are nominally written to a log file in the current working directory within which bpsecadmin was run. The log file is typically named ion.log.

    See also bpsecrc(5).

    "},{"location":"man/bpv6/bpsecadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpsecadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the bpsecrc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to bpsecadmin. Otherwise bpsecadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the log file:

    Various errors that don't cause bpsecadmin to fail but are noted in the log file may be caused by improperly formatted commands given at the prompt or in the commands_filename. Please see bpsecrc(5) for details.

    "},{"location":"man/bpv6/bpsecadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpsecadmin/#see-also","title":"SEE ALSO","text":"

    bpsecrc(5)

    "},{"location":"man/bpv6/bpsecrc/","title":"NAME","text":"

    bpsecrc - BP security policy management commands file

    "},{"location":"man/bpv6/bpsecrc/#description","title":"DESCRIPTION","text":"

    BP security policy management commands are passed to bpsecadmin either in a file of text lines or interactively at bpsecadmin's command prompt (:). Commands are interpreted line-by-line, with exactly one command per line. The formats and effects of the BP security policy management commands are described below.

    A parameter identified as an eid_expr is an \"endpoint ID expression.\" For all commands, whenever the last character of an endpoint ID expression is the wild-card character '*', an applicable endpoint ID \"matches\" this EID expression if all characters of the endpoint ID expression prior to the last one are equal to the corresponding characters of that endpoint ID. Otherwise an applicable endpoint ID \"matches\" the EID expression only when all characters of the EID and EID expression are identical.
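    The matching rule above can be sketched as a small helper function (eid_matches is a hypothetical name for illustration, not part of ION's API):

    ```python
    def eid_matches(eid: str, eid_expr: str) -> bool:
        """Apply the bpsecrc EID-expression matching rule.

        If the expression ends in the wild-card '*', the EID matches when
        it begins with everything before the '*'; otherwise the EID must
        be identical to the expression.
        """
        if eid_expr.endswith("*"):
            return eid.startswith(eid_expr[:-1])
        return eid == eid_expr
    ```

    For example, \"ipn:5.1\" matches the expression \"ipn:5.*\" but not \"ipn:5.2\".
    
    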

    ION supports the proposed \"streamlined\" Bundle Security Protocol (currently posted as CCSDS Red Book 734.5-R-1) in place of the standard Bundle Security Protocol (RFC 6257). Since SBSP is not yet a published standard, ION's Bundle Protocol security mechanisms will not necessarily interoperate with those of other BP implementations. This is unfortunate but (we hope) temporary, as SBSP represents a major improvement in bundle security. It is possible that the SBSP specification will change somewhat between now and the time SBSP is published as a CCSDS standard and eventually an RFC, and ION will be revised as necessary to conform to those changes, but in the meantime we believe that the advantages of SBSP make it more suitable than RFC 6257 as a foundation for the development and deployment of secure DTN applications.

    "},{"location":"man/bpv6/bpsecrc/#commands","title":"COMMANDS","text":""},{"location":"man/bpv6/bpsecrc/#see-also","title":"SEE ALSO","text":"

    bpsecadmin(1)

    "},{"location":"man/bpv6/bpsendfile/","title":"NAME","text":"

    bpsendfile - Bundle Protocol (BP) file transmission utility

    "},{"location":"man/bpv6/bpsendfile/#synopsis","title":"SYNOPSIS","text":"

    bpsendfile own_endpoint_ID destination_endpoint_ID file_name [class_of_service [time_to_live (seconds) ]]

    "},{"location":"man/bpv6/bpsendfile/#description","title":"DESCRIPTION","text":"

    bpsendfile uses bp_send() to issue a single bundle to a designated destination endpoint, containing the contents of the file identified by file_name, then terminates. The bundle is sent with no custody transfer requested. When class_of_service is omitted, the bundle is sent at standard priority; for details of the class_of_service parameter, see bptrace(1). time_to_live, if not specified, defaults to 300 seconds (5 minutes). NOTE that time_to_live is specified AFTER class_of_service, rather than before it as in bptrace(1).

    "},{"location":"man/bpv6/bpsendfile/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpsendfile/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpsendfile/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpsendfile/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bpsendfile/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpsendfile/#see-also","title":"SEE ALSO","text":"

    bprecvfile(1), bp(3)

    "},{"location":"man/bpv6/bpsink/","title":"NAME","text":"

    bpsink - Bundle Protocol reception test program

    "},{"location":"man/bpv6/bpsink/#synopsis","title":"SYNOPSIS","text":"

    bpsink ownEndpointId

    "},{"location":"man/bpv6/bpsink/#description","title":"DESCRIPTION","text":"

    bpsink uses Bundle Protocol to receive application data units from a remote bpsource application task. For each application data unit it receives, it prints the ADU's length and -- if length is less than 80 -- its text.

    bpsink terminates upon receiving the SIGQUIT signal, i.e., ^C from the keyboard.

    "},{"location":"man/bpv6/bpsink/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpsink/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpsink/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpsink/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpsink are written to the ION log file ion.log.

    "},{"location":"man/bpv6/bpsink/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpsink/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpsource(1), bp(3)

    "},{"location":"man/bpv6/bpsource/","title":"NAME","text":"

    bpsource - Bundle Protocol transmission test shell

    "},{"location":"man/bpv6/bpsource/#synopsis","title":"SYNOPSIS","text":"

    bpsource destinationEndpointId [\"text\"] [-t TTL]

    "},{"location":"man/bpv6/bpsource/#description","title":"DESCRIPTION","text":"

    When text is supplied, bpsource simply uses Bundle Protocol to send text to a counterpart bpsink application task that has opened the BP endpoint identified by destinationEndpointId, then terminates.

    Otherwise, bpsource offers the user an interactive \"shell\" for testing Bundle Protocol data transmission. bpsource prints a prompt string (\": \") to stdout, accepts a string of text from stdin, uses Bundle Protocol to send the string to a counterpart bpsink application task that has opened the BP endpoint identified by destinationEndpointId, then prints another prompt string and so on. To terminate the program, enter a string consisting of a single exclamation point (!) character.

    TTL indicates the number of seconds the bundles may remain in the network, undelivered, before they are automatically destroyed. If omitted, TTL defaults to 300 seconds.

    The source endpoint ID for each bundle sent by bpsource is the null endpoint ID, i.e., the bundles are anonymous. All bundles are sent standard priority with no custody transfer and no status reports requested.

    "},{"location":"man/bpv6/bpsource/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpsource/#files","title":"FILES","text":"

    The service data units transmitted by bpsource are sequences of text obtained from a file in the current working directory named \"bpsourceAduFile\", which bpsource creates automatically.

    "},{"location":"man/bpv6/bpsource/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpsource/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpsource are written to the ION log file ion.log.

    "},{"location":"man/bpv6/bpsource/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpsource/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpsink(1), bp(3)

    "},{"location":"man/bpv6/bpstats/","title":"NAME","text":"

    bpstats - Bundle Protocol (BP) processing statistics query utility

    "},{"location":"man/bpv6/bpstats/#synopsis","title":"SYNOPSIS","text":"

    bpstats

    "},{"location":"man/bpv6/bpstats/#description","title":"DESCRIPTION","text":"

    bpstats simply logs messages containing the current values of all BP processing statistics accumulators, then terminates.

    "},{"location":"man/bpv6/bpstats/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpstats/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpstats/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpstats/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bpstats/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpstats/#see-also","title":"SEE ALSO","text":"

    ion(3)

    "},{"location":"man/bpv6/bpstats2/","title":"NAME","text":"

    bpstats2 - Bundle Protocol (BP) processing statistics query utility via bundles

    "},{"location":"man/bpv6/bpstats2/#synopsis","title":"SYNOPSIS","text":"

    bpstats2 sourceEID [default destEID] [ct]

    "},{"location":"man/bpv6/bpstats2/#description","title":"DESCRIPTION","text":"

    bpstats2 creates bundles containing the current values of all BP processing statistics accumulators. It creates these bundles when:

    "},{"location":"man/bpv6/bpstats2/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bpstats2/#options","title":"OPTIONS","text":""},{"location":"man/bpv6/bpstats2/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bpstats2/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bpstats2/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bpstats2/#notes","title":"NOTES","text":"

    bpchat can serve as a very simple interrogator: it can repeatedly interrogate bpstats2 simply by striking the Enter key.

    "},{"location":"man/bpv6/bpstats2/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bpstats2/#see-also","title":"SEE ALSO","text":"

    bpstats(1), bpchat(1)

    "},{"location":"man/bpv6/bptrace/","title":"NAME","text":"

    bptrace - Bundle Protocol (BP) network trace utility

    "},{"location":"man/bpv6/bptrace/#synopsis","title":"SYNOPSIS","text":"

    bptrace own_endpoint_ID destination_endpoint_ID report-to_endpoint_ID TTL class_of_service \"trace_text\" [status_report_flags]

    "},{"location":"man/bpv6/bptrace/#description","title":"DESCRIPTION","text":"

    bptrace uses bp_send() to issue a single bundle to a designated destination endpoint, with status reporting options enabled as selected by the user, then terminates. The status reports returned as the bundle makes its way through the network provide a view of the operation of the network as currently configured.

    TTL indicates the number of seconds the trace bundle may remain in the network, undelivered, before it is automatically destroyed.

    class_of_service is custody-requested.priority[.ordinal[.unreliable.critical[.data-label]]], where custody-requested must be 0 or 1 (Boolean), priority must be 0 (bulk) or 1 (standard) or 2 (expedited), ordinal must be 0-254, unreliable must be 0 or 1 (Boolean), critical must also be 0 or 1 (Boolean), and data-label may be any unsigned integer. ordinal is ignored if priority is not 2. Setting class_of_service to \"0.2.254\" or \"1.2.254\" gives a bundle the highest possible priority. Setting unreliable to 1 causes BP to forego retransmission in the event of data loss, both at the BP layer and at the convergence layer. Setting critical to 1 causes contact graph routing to forward the bundle on all plausible routes rather than just the \"best\" route it computes; this may result in multiple copies of the bundle arriving at the destination endpoint, but when used in conjunction with priority 2.254 it ensures that the bundle will be delivered as soon as physically possible.
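    The class_of_service syntax above can be illustrated with a small parser (an illustrative sketch only, performing just the range checks described here; it is not ION's own parsing code):

    ```python
    def parse_class_of_service(cos: str) -> dict:
        """Split a bptrace class_of_service string into named fields.

        Format: custody-requested.priority[.ordinal[.unreliable.critical[.data-label]]]
        """
        names = ["custody", "priority", "ordinal",
                 "unreliable", "critical", "data_label"]
        fields = [int(f) for f in cos.split(".")]
        result = dict(zip(names, fields))
        # Range checks as stated in the man page text.
        assert result["custody"] in (0, 1)          # Boolean
        assert result["priority"] in (0, 1, 2)      # bulk / standard / expedited
        if "ordinal" in result:
            assert 0 <= result["ordinal"] <= 254    # ignored unless priority is 2
        return result
    ```

    For example, parse_class_of_service(\"0.2.254\") yields priority 2 with ordinal 254, the highest possible priority without custody transfer.
    
    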

    trace_text can be any string of ASCII text; alternatively, if we want to send a file, it can be \"@\" followed by the file name.

    status_report_flags must be a sequence of status report flags, separated by commas, with no embedded whitespace. Each status report flag must be one of the following: rcv, ct, fwd, dlv, del.

    "},{"location":"man/bpv6/bptrace/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bptrace/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bptrace/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bptrace/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bptrace/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bptrace/#see-also","title":"SEE ALSO","text":"

    bp(3)

    "},{"location":"man/bpv6/bptransit/","title":"NAME","text":"

    bptransit - Bundle Protocol (BP) daemon task for forwarding received bundles

    "},{"location":"man/bpv6/bptransit/#synopsis","title":"SYNOPSIS","text":"

    bptransit

    "},{"location":"man/bpv6/bptransit/#description","title":"DESCRIPTION","text":"

    bptransit is a background \"daemon\" task that is responsible for presenting to ION's forwarding daemons any bundles that were received from other nodes (i.e., bundles whose payloads reside in Inbound ZCO space) and are destined for yet other nodes. In doing so, it migrates these bundles from Inbound buffer space to Outbound buffer space on the same prioritized basis as the insertion of locally sourced outbound bundles.

    Management of the bptransit daemon is automatic. It is spawned automatically by bpadmin in response to the 's' command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command.

    Whenever a received bundle is determined to have a destination other than the local node, a pointer to that bundle is appended to one of two queues of \"in-transit\" bundles, one for bundles whose forwarding is provisional (depending on the availability of Outbound ZCO buffer space; bundles in this queue are potentially subject to congestion loss) and one for bundles whose forwarding is confirmed. Bundles received via convergence-layer adapters that can sustain flow control, such as STCP, are appended to the \"confirmed\" queue, while those from CLAs that cannot sustain flow control (such as LTP) are appended to the \"provisional\" queue.

    bptransit comprises two threads, one for each in-transit queue. The confirmed in-transit thread dequeues bundles from the \"confirmed\" queue and moves them from Inbound to Outbound ZCO buffer space, blocking (if necessary) until space becomes available. The provisional in-transit queue dequeues bundles from the \"provisional\" queue and moves them from Inbound to Outbound ZCO buffer space if Outbound space is available, discarding (\"abandoning\") them if it is not.

    "},{"location":"man/bpv6/bptransit/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/bptransit/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/bptransit/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/bptransit/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/bptransit/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/bptransit/#see-also","title":"SEE ALSO","text":"

    bpadmin(1)

    "},{"location":"man/bpv6/brsccla/","title":"NAME","text":"

    brsccla - BRSC-based BP convergence layer adapter (input and output) task

    "},{"location":"man/bpv6/brsccla/#synopsis","title":"SYNOPSIS","text":"

    brsccla server_hostname[:server_port_nbr] own_node_nbr

    "},{"location":"man/bpv6/brsccla/#description","title":"DESCRIPTION","text":"

    BRSC is the \"client\" side of the Bundle Relay Service (BRS) convergence layer protocol for BP. It is complemented by BRSS, the \"server\" side of the BRS convergence layer protocol for BP. BRS clients send bundles directly only to the server, regardless of their final destinations, and the server forwards them to other clients as necessary.

    brsccla is a background \"daemon\" task comprising three threads: one that connects to the BRS server, spawns the other threads, and then handles BRSC protocol output by transmitting bundles over the connected socket to the BRS server; one that simply sends periodic \"keepalive\" messages over the connected socket to the server (to assure that local inactivity doesn't cause the connection to be lost); and one that handles BRSC protocol input from the connected server.

    The output thread connects to the server's TCP socket at server_hostname and server_port_nbr, sends over the connected socket the client's own_node_nbr (in SDNV representation) followed by a 32-bit time tag and a 160-bit HMAC-SHA1 digest of that time tag, to authenticate itself; checks the authenticity of the 160-bit countersign returned by the server; spawns the keepalive and receiver threads; and then begins extracting bundles from the queues of bundles ready for transmission via BRSC and transmitting those bundles over the connected socket to the server. Each transmitted bundle is preceded by its length, a 32-bit unsigned integer in network byte order. The default value for server_port_nbr, if omitted, is 80.

    The reception thread receives bundles over the connected socket and passes them to the bundle protocol agent on the local ION node. Each bundle received on the connection is preceded by its length, a 32-bit unsigned integer in network byte order.

    The keepalive thread simply sends a \"bundle length\" value of zero (a 32-bit unsigned integer in network byte order) to the server once every 15 seconds.
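    The length-prefix framing used on the BRS connection, and the zero-length keepalive, can be sketched as follows (an illustrative sketch of the framing described above, not ION's own code):

    ```python
    import struct

    def frame_bundle(bundle: bytes) -> bytes:
        """Prefix a bundle with its length as a 32-bit unsigned integer
        in network byte order, as BRS clients and servers do on the
        connected socket."""
        return struct.pack("!I", len(bundle)) + bundle

    # A keepalive is simply a "bundle length" of zero, with no bundle
    # payload following it.
    KEEPALIVE = struct.pack("!I", 0)
    ```
    
    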

    brsccla is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. brsccla can also be spawned and terminated in response to START and STOP commands that pertain specifically to the BRSC convergence layer protocol.

    "},{"location":"man/bpv6/brsccla/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/brsccla/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/brsccla/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/brsccla/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/brsccla/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/brsccla/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), brsscla(1)

    "},{"location":"man/bpv6/brsscla/","title":"NAME","text":"

    brsscla - BRSS-based BP convergence layer adapter (input and output) task

    "},{"location":"man/bpv6/brsscla/#synopsis","title":"SYNOPSIS","text":"

    brsscla local_hostname[:local_port_nbr]

    "},{"location":"man/bpv6/brsscla/#description","title":"DESCRIPTION","text":"

    BRSS is the \"server\" side of the Bundle Relay Service (BRS) convergence layer protocol for BP. It is complemented by BRSC, the \"client\" side of the BRS convergence layer protocol for BP.

    brsscla is a background \"daemon\" task that spawns 2*N threads: one that handles BRSS client connections and spawns sockets for continued data interchange with connected clients; one that handles BRSS protocol output by transmitting over those spawned sockets to the associated clients; and two threads for each spawned socket, an input thread to handle BRSS protocol input from the associated connected client and an output thread to forward BRSS protocol output to the associated connected client.

    The connection thread simply accepts connections on a TCP socket bound to local_hostname and local_port_nbr and spawns reception threads. The default value for local_port_nbr, if omitted, is 80.

    Each reception thread receives over the socket connection the node number of the connecting client (in SDNV representation), followed by a 32-bit time tag and a 160-bit HMAC-SHA1 digest of that time tag. The receiving thread checks the time tag, requiring that it differ from the current time by no more than BRSTERM (default value 5) seconds. It then recomputes the digest value using the HMAC-SHA1 key named \"node_number.brs\" as recorded in the ION security database (see ionsecrc(5)), requiring that the supplied and computed digests be identical. If all registration conditions are met, the receiving thread sends the client a countersign -- a similarly computed HMAC-SHA1 digest, for the time tag that is 1 second later than the provided time tag -- to assure the client of its own authenticity, then commences receiving bundles over the connected socket. Each bundle received on the connection is preceded by its length, a 32-bit unsigned integer in network byte order. The received bundles are passed to the bundle protocol agent on the local ION node.
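    The digest exchange described above can be sketched in a few lines, assuming a 32-bit big-endian encoding of the time tag (the actual wire layout is defined by ION; this is illustrative only):

    ```python
    import hashlib
    import hmac
    import struct

    def sign_time_tag(key: bytes, time_tag: int) -> bytes:
        """Compute the 160-bit (20-byte) HMAC-SHA1 digest of a 32-bit
        time tag, as exchanged during BRS registration."""
        return hmac.new(key, struct.pack("!I", time_tag), hashlib.sha1).digest()

    def countersign(key: bytes, time_tag: int) -> bytes:
        """The server's countersign is the digest computed for the time
        tag that is 1 second later than the one the client supplied."""
        return sign_time_tag(key, time_tag + 1)
    ```

    The server recomputes sign_time_tag() with the key named \"node_number.brs\" from the ION security database and requires the result to match the client's digest before replying with the countersign.
    
    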

    Each output thread extracts bundles from the queues of bundles ready for transmission via BRSS to the corresponding connected client and transmits the bundles over the socket to that client. Each transmitted bundle is preceded by its length, a 32-bit unsigned integer in network byte order.

    brsscla is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. brsscla can also be spawned and terminated in response to START and STOP commands that pertain specifically to the BRSS convergence layer protocol.

    "},{"location":"man/bpv6/brsscla/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/brsscla/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/brsscla/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/brsscla/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/brsscla/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/brsscla/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), brsccla(1)

    "},{"location":"man/bpv6/cgrfetch/","title":"NAME","text":"

    cgrfetch - Visualize CGR simulations

    "},{"location":"man/bpv6/cgrfetch/#synopsis","title":"SYNOPSIS","text":"

    cgrfetch [OPTIONS] DEST-NODE

    "},{"location":"man/bpv6/cgrfetch/#description","title":"DESCRIPTION","text":"

    cgrfetch uses CGR to simulate sending a bundle from the local node to DEST-NODE. It traces the execution of CGR to generate graphs of the routes that were considered and the routes that were ultimately chosen for forwarding. No bundle is sent during the simulation.

    A JSON representation of the simulation is output to OUTPUT-FILE. The representation includes parameters of the simulation and a structure for each considered route, which in turn includes calculated parameters for the route and an image of the contact graph.

    The dot(1) tool from the Graphviz package is used to generate the contact graph images and is required for cgrfetch(1). The base64(1) tool from coreutils is used to embed the images in the JSON and is also required.

    Note that a trace of the route computation logic performed by CGR is printed to stderr; there is currently no cgrfetch option for redirecting this output to a file.

    "},{"location":"man/bpv6/cgrfetch/#options","title":"OPTIONS","text":""},{"location":"man/bpv6/cgrfetch/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/cgrfetch/#see-also","title":"SEE ALSO","text":"

    dot(1), base64(1)

    "},{"location":"man/bpv6/dccpcli/","title":"NAME","text":"

    dccpcli - DCCP-based BP convergence layer input task

    "},{"location":"man/bpv6/dccpcli/#synopsis","title":"SYNOPSIS","text":"

    dccpcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv6/dccpcli/#description","title":"DESCRIPTION","text":"

    dccpcli is a background \"daemon\" task that receives DCCP datagrams via a DCCP socket bound to local_hostname and local_port_nbr, extracts bundles from those datagrams, and passes them to the bundle protocol agent on the local ION node.

    If not specified, local_port_nbr defaults to 4556.

    Note that dccpcli has no fragmentation support at all. Therefore, the largest bundle that can be sent via this convergence layer is limited to just under the link's MTU (typically 1500 bytes).

    The convergence layer input task is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"dccp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. dccpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the DCCP convergence layer protocol.

    "},{"location":"man/bpv6/dccpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/dccpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/dccpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/dccpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/dccpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/dccpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), dccpclo(1)

    "},{"location":"man/bpv6/dccpclo/","title":"NAME","text":"

    dccpclo - DCCP-based BP convergence layer output task

    "},{"location":"man/bpv6/dccpclo/#synopsis","title":"SYNOPSIS","text":"

    dccpclo remote_hostname[:remote_port_nbr]

    "},{"location":"man/bpv6/dccpclo/#description","title":"DESCRIPTION","text":"

    dccpclo is a background \"daemon\" task that connects to a remote node's DCCP socket at remote_hostname and remote_port_nbr. It then begins extracting bundles from the queues of bundles ready for transmission via DCCP to this remote bundle protocol agent and transmitting those bundles as DCCP datagrams to the remote host.

    If not specified, remote_port_nbr defaults to 4556.

    Note that dccpclo has no fragmentation support at all. Therefore, the largest bundle that can be sent via this convergence layer is limited to just under the link's MTU (typically 1500 bytes).

    dccpclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. dccpclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the DCCP convergence layer protocol.

    "},{"location":"man/bpv6/dccpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/dccpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/dccpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/dccpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/dccpclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/dccpclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), dccpcli(1)

    "},{"location":"man/bpv6/dgrcli/","title":"NAME","text":"

    dgrcli - DGR-based BP convergence layer reception task

    "},{"location":"man/bpv6/dgrcli/#synopsis","title":"SYNOPSIS","text":"

    dgrcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv6/dgrcli/#description","title":"DESCRIPTION","text":"

    dgrcli is a background \"daemon\" task that handles DGR convergence layer protocol input.

    The daemon receives DGR messages via a UDP socket bound to local_hostname and local_port_nbr, extracts bundles from those messages, and passes them to the bundle protocol agent on the local ION node. (local_port_nbr defaults to 1113 if not specified.)

    dgrcli is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. dgrcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the DGR convergence layer protocol.

    "},{"location":"man/bpv6/dgrcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/dgrcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/dgrcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/dgrcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/dgrcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/dgrcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv6/dgrclo/","title":"NAME","text":"

    dgrclo - DGR-based BP convergence layer transmission task

    "},{"location":"man/bpv6/dgrclo/#synopsis","title":"SYNOPSIS","text":"

    dgrclo remote_hostname[:remote_port_nbr]

    "},{"location":"man/bpv6/dgrclo/#description","title":"DESCRIPTION","text":"

    dgrclo is a background \"daemon\" task that spawns two threads, one that handles DGR convergence layer protocol input (positive and negative acknowledgments) and a second that handles DGR convergence layer protocol output.

    The output thread extracts bundles from the queues of bundles ready for transmission via DGR to a remote bundle protocol agent, encapsulates them in DGR messages, and uses a randomly configured local UDP socket to send those messages to the remote UDP socket bound to remote_hostname and remote_port_nbr. (remote_port_nbr defaults to 1113 if not specified.)

    The input thread receives DGR messages via the same local UDP socket and uses them to manage DGR retransmission of transmitted datagrams.

    dgrclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. dgrclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the DGR convergence layer protocol.

    "},{"location":"man/bpv6/dgrclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/dgrclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/dgrclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/dgrclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/dgrclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/dgrclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv6/dtn2admin/","title":"NAME","text":"

    dtn2admin - baseline \"dtn\" scheme administration interface

    "},{"location":"man/bpv6/dtn2admin/#synopsis","title":"SYNOPSIS","text":"

    dtn2admin [ commands_filename ]

    "},{"location":"man/bpv6/dtn2admin/#description","title":"DESCRIPTION","text":"

    dtn2admin configures the local ION node's routing of bundles to endpoints whose IDs conform to the dtn endpoint ID scheme. dtn is a non-CBHE-conformant scheme. The structure of dtn endpoint IDs remains somewhat in flux at the time of this writing, but endpoint IDs in the dtn scheme historically have been strings of the form \"dtn://node_name[/demux_token]\", where node_name normally identifies a computer somewhere on the network and demux_token normally identifies a specific application processing point. Although the dtn endpoint ID scheme imposes more transmission overhead than the ipn scheme, ION provides support for dtn endpoint IDs to enable interoperation with other implementations of Bundle Protocol.

    dtn2admin operates in response to \"dtn\" scheme configuration commands found in the file commands_filename, if provided; if not, dtn2admin prints a simple prompt (:) so that the user may type commands directly into standard input.

    The format of commands for commands_filename can be queried from dtn2admin with the 'h' or '?' commands at the prompt. The commands are documented in dtn2rc(5).

    "},{"location":"man/bpv6/dtn2admin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/dtn2admin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/dtn2admin/#files","title":"FILES","text":"

    See dtn2rc(5) for details of the DTN scheme configuration commands.

    "},{"location":"man/bpv6/dtn2admin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/dtn2admin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the dtn2rc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to dtn2admin. Otherwise dtn2admin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause dtn2admin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see dtn2rc(5) for details.

    "},{"location":"man/bpv6/dtn2admin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/dtn2admin/#see-also","title":"SEE ALSO","text":"

    dtn2rc(5)

    "},{"location":"man/bpv6/dtn2adminep/","title":"NAME","text":"

    dtn2adminep - administrative endpoint task for the \"dtn\" scheme

    "},{"location":"man/bpv6/dtn2adminep/#synopsis","title":"SYNOPSIS","text":"

    dtn2adminep

    "},{"location":"man/bpv6/dtn2adminep/#description","title":"DESCRIPTION","text":"

    dtn2adminep is a background \"daemon\" task that receives and processes administrative bundles (all custody signals and, nominally, all bundle status reports) that are sent to the \"dtn\"-scheme administrative endpoint on the local ION node, if and only if such an endpoint was established by bpadmin. It is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. dtn2adminep can also be spawned and terminated in response to START and STOP commands that pertain specifically to the \"dtn\" scheme.

    dtn2adminep responds to custody signals as specified in the Bundle Protocol specification, RFC 5050. It responds to bundle status reports by logging ASCII text messages describing the reported activity.

    "},{"location":"man/bpv6/dtn2adminep/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/dtn2adminep/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/dtn2adminep/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/dtn2adminep/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/dtn2adminep/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/dtn2adminep/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), dtn2admin(1).

    "},{"location":"man/bpv6/dtn2fw/","title":"NAME","text":"

    dtn2fw - bundle route computation task for the \"dtn\" scheme

    "},{"location":"man/bpv6/dtn2fw/#synopsis","title":"SYNOPSIS","text":"

    dtn2fw

    "},{"location":"man/bpv6/dtn2fw/#description","title":"DESCRIPTION","text":"

    dtn2fw is a background \"daemon\" task that pops bundles from the queue of bundles destined for \"dtn\"-scheme endpoints, computes proximate destinations for those bundles, and appends those bundles to the appropriate queues of bundles pending transmission to those computed proximate destinations.

    For each possible proximate destination (that is, neighboring node) there is a separate queue for each possible level of bundle priority: 0, 1, 2. Each outbound bundle is appended to the queue matching the bundle's designated priority.

    Proximate destination computation is affected by static routes as configured by dtn2admin(1).

    dtn2fw is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. dtn2fw can also be spawned and terminated in response to START and STOP commands that pertain specifically to the \"dtn\" scheme.

    "},{"location":"man/bpv6/dtn2fw/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/dtn2fw/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/dtn2fw/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/dtn2fw/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/dtn2fw/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/dtn2fw/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), dtn2admin(1), bprc(5), dtn2rc(5).

    "},{"location":"man/bpv6/dtn2rc/","title":"NAME","text":"

    dtn2rc - \"dtn\" scheme configuration commands file

    "},{"location":"man/bpv6/dtn2rc/#description","title":"DESCRIPTION","text":"

    \"dtn\" scheme configuration commands are passed to dtn2admin either in a file of text lines or interactively at dtn2admin's command prompt (:). Commands are interpreted line-by line, with exactly one command per line.

    \"dtn\" scheme configuration commands establish static routing rules for forwarding bundles to \"dtn\"-scheme destination endpoints, identified by node ID. (Each node ID is simply a BP endpoint ID.)

    Static routes are expressed as plans in the \"dtn\"-scheme routing database. A plan that is established for a given node name associates a routing directive with the named node. Each directive is a string of one of two possible forms:

    f endpoint_ID

    ...or...

    x protocol_name/outduct_name

    The former form signifies that the bundle is to be forwarded to the indicated endpoint, requiring that it be re-queued for processing by the forwarder for that endpoint (which might, but need not, be identified by another \"dtn\"-scheme endpoint ID). The latter form signifies that the bundle is to be queued for transmission via the indicated convergence layer protocol outduct.

    The node IDs cited in dtn2rc plans may be \"wild-carded\". That is, when the last character of a node ID is either '*' or '~' (these two wild-card characters are equivalent for this purpose), the plan applies to all nodes whose IDs are identical to the wild-carded node name up to the wild-card character. For example, a bundle whose destination EID name is \"dtn://foghorn\" would be routed by plans citing the following node IDs: \"dtn://foghorn\", \"dtn://fogh*\", \"dtn://fog~\", \"//*\". When multiple plans are all applicable to the same destination EID, the one citing the longest (i.e., most narrowly targeted) node ID will be applied.
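    As an illustrative sketch (the node names and the outduct cited here are hypothetical; see dtn2rc(5) below for the authoritative command syntax), one plan of each directive form might be declared as:

    a plan dtn://gatekeeper f dtn://gateway

    a plan dtn://foghorn x tcp/10.0.0.5:4556

    The first plan re-queues matching bundles for processing by the forwarder for the endpoint dtn://gateway; the second queues bundles destined for dtn://foghorn directly on the indicated TCP convergence layer outduct.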

    The formats and effects of the DTN scheme configuration commands are described below.

    "},{"location":"man/bpv6/dtn2rc/#general-commands","title":"GENERAL COMMANDS","text":""},{"location":"man/bpv6/dtn2rc/#plan-commands","title":"PLAN COMMANDS","text":""},{"location":"man/bpv6/dtn2rc/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/dtn2rc/#see-also","title":"SEE ALSO","text":"

    dtn2admin(1)

    "},{"location":"man/bpv6/hmackeys/","title":"NAME","text":"

    hmackeys - utility program for generating good HMAC-SHA1 keys

    "},{"location":"man/bpv6/hmackeys/#synopsis","title":"SYNOPSIS","text":"

    hmackeys [ keynames_filename ]

    "},{"location":"man/bpv6/hmackeys/#description","title":"DESCRIPTION","text":"

    hmackeys writes files containing randomized 160-bit key values suitable for use by HMAC-SHA1 in support of Bundle Authentication Block processing, Bundle Relay Service connections, or other functions for which symmetric hash computation is applicable. One file is written for each key name presented to hmackeys; the content of each file is 20 consecutive randomly selected 8-bit integer values, and the name given to each file is simply \"keyname.hmk\".

    hmackeys operates in response to the key names found in the file keynames_filename, one name per text line, if provided; if not, hmackeys prints a simple prompt (:) so that the user may type key names directly into standard input.

    When the program is run in interactive mode, either enter 'q' or press ^C to terminate.
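    For example (a sketch of an interactive session; the key name shown is hypothetical), entering a key name at the prompt writes a 20-byte randomized key file into the current directory:

    hmackeys

    : node1key

    : q

    This session produces a single file named node1key.hmk, suitable for use as a symmetric HMAC-SHA1 key.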

    "},{"location":"man/bpv6/hmackeys/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/hmackeys/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/hmackeys/#files","title":"FILES","text":"

    No other files are used in the operation of hmackeys.

    "},{"location":"man/bpv6/hmackeys/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/hmackeys/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the logfile ion.log:

    "},{"location":"man/bpv6/hmackeys/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/hmackeys/#see-also","title":"SEE ALSO","text":"

    brsscla(1), ionsecadmin(1)

    "},{"location":"man/bpv6/imcadmin/","title":"NAME","text":"

    imcadmin - Interplanetary Multicast (IMC) scheme administration interface

    "},{"location":"man/bpv6/imcadmin/#synopsis","title":"SYNOPSIS","text":"

    imcadmin [ commands_filename ]

    "},{"location":"man/bpv6/imcadmin/#description","title":"DESCRIPTION","text":"

    imcadmin configures the local ION node's routing of bundles to endpoints whose IDs conform to the imc endpoint ID scheme. imc is a CBHE-conformant scheme; that is, every endpoint ID in the imc scheme is a string of the form \"imc:group_number.service_number\" where group_number (an IMC multicast group number) serves as a CBHE \"node number\" and service_number identifies a specific application processing point.

    imcadmin operates in response to IMC scheme configuration commands found in the file commands_filename, if provided; if not, imcadmin prints a simple prompt (:) so that the user may type commands directly into standard input.

    The format of commands for commands_filename can be queried from imcadmin with the 'h' or '?' commands at the prompt. The commands are documented in imcrc(5).

    "},{"location":"man/bpv6/imcadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/imcadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/imcadmin/#files","title":"FILES","text":"

    See imcrc(5) for details of the IMC scheme configuration commands.

    "},{"location":"man/bpv6/imcadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/imcadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the imcrc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to imcadmin. Otherwise imcadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause imcadmin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see imcrc(5) for details.

    "},{"location":"man/bpv6/imcadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/imcadmin/#see-also","title":"SEE ALSO","text":"

    imcrc(5)

    "},{"location":"man/bpv6/imcfw/","title":"NAME","text":"

    imcfw - bundle route computation task for the IMC scheme

    "},{"location":"man/bpv6/imcfw/#synopsis","title":"SYNOPSIS","text":"

    imcfw

    "},{"location":"man/bpv6/imcfw/#description","title":"DESCRIPTION","text":"

    imcfw is a background \"daemon\" task that pops bundles from the queue of bundles destined for IMC-scheme (Interplanetary Multicast) endpoints, determines which \"relatives\" on the IMC multicast tree to forward the bundles to, and appends those bundles to the appropriate queues of bundles pending transmission to those proximate destinations.

    For each possible proximate destination (that is, neighboring node) there is a separate queue for each possible level of bundle priority: 0, 1, 2. Each outbound bundle is appended to the queue matching the bundle's designated priority.

    Proximate destination computation is determined by multicast group membership, which results from nodes' registration in multicast endpoints and is governed by the multicast tree structure configured by imcadmin(1).

    imcfw is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. imcfw can also be spawned and terminated in response to START and STOP commands that pertain specifically to the IMC scheme.

    "},{"location":"man/bpv6/imcfw/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/imcfw/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/imcfw/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/imcfw/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/imcfw/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/imcfw/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), imcadmin(1), bprc(5), imcrc(5)

    "},{"location":"man/bpv6/imcrc/","title":"NAME","text":"

    imcrc - IMC scheme configuration commands file

    "},{"location":"man/bpv6/imcrc/#description","title":"DESCRIPTION","text":"

    IMC scheme configuration commands are passed to imcadmin either in a file of text lines or interactively at imcadmin's command prompt (:). Commands are interpreted line by line, with exactly one command per line.

    IMC scheme configuration commands simply establish which nodes are the local node's parents and children within a single IMC multicast tree. This single spanning tree, an overlay on a single BP-based network, is used to convey all multicast group membership assertions and cancellations in the network, for all groups. Each node privately tracks which of its immediate \"relatives\" in the tree are members of which multicast groups and on this basis selectively forwards -- directly, to all (and only) interested relatives -- the bundles destined for the members of each group.

    Note that all of a node's immediate relatives in the multicast tree must be among its immediate neighbors in the underlying network. This is because multicast bundles can be correctly forwarded within the tree only if each forwarding node knows the identity of the relative that passed the bundle to it, so that the bundle is not passed back to that relative, creating a routing loop. That identity can be known only if the prior forwarding node was a neighbor, because no prior forwarding node (aside from the source) other than the immediate proximate (neighboring) sender of a received bundle is ever known.

    IMC group IDs are unsigned integers, just as IPN node IDs are unsigned integers. The members of a group are nodes identified by node number, and the multicast tree parent and children of a node are neighboring nodes identified by node number.

    The formats and effects of the IMC scheme configuration commands are described below.

    "},{"location":"man/bpv6/imcrc/#general-commands","title":"GENERAL COMMANDS","text":""},{"location":"man/bpv6/imcrc/#kinship-commands","title":"KINSHIP COMMANDS","text":""},{"location":"man/bpv6/imcrc/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/imcrc/#see-also","title":"SEE ALSO","text":"

    imcadmin(1)

    "},{"location":"man/bpv6/ipnadmin/","title":"NAME","text":"

    ipnadmin - Interplanetary Internet (IPN) scheme administration interface

    "},{"location":"man/bpv6/ipnadmin/#synopsis","title":"SYNOPSIS","text":"

    ipnadmin [ commands_filename ]

    "},{"location":"man/bpv6/ipnadmin/#description","title":"DESCRIPTION","text":"

    ipnadmin configures the local ION node's routing of bundles to endpoints whose IDs conform to the ipn endpoint ID scheme. ipn is a CBHE-conformant scheme; that is, every endpoint ID in the ipn scheme is a string of the form \"ipn:node_number.service_number\" where node_number is a CBHE \"node number\" and service_number identifies a specific application processing point.

    ipnadmin operates in response to IPN scheme configuration commands found in the file commands_filename, if provided; if not, ipnadmin prints a simple prompt (:) so that the user may type commands directly into standard input.

    The format of commands for commands_filename can be queried from ipnadmin with the 'h' or '?' commands at the prompt. The commands are documented in ipnrc(5).

    "},{"location":"man/bpv6/ipnadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/ipnadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/ipnadmin/#files","title":"FILES","text":"

    See ipnrc(5) for details of the IPN scheme configuration commands.

    "},{"location":"man/bpv6/ipnadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/ipnadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the ipnrc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to ipnadmin. Otherwise ipnadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause ipnadmin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see ipnrc(5) for details.

    "},{"location":"man/bpv6/ipnadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/ipnadmin/#see-also","title":"SEE ALSO","text":"

    ipnrc(5)

    "},{"location":"man/bpv6/ipnadminep/","title":"NAME","text":"

    ipnadminep - administrative endpoint task for the IPN scheme

    "},{"location":"man/bpv6/ipnadminep/#synopsis","title":"SYNOPSIS","text":"

    ipnadminep

    "},{"location":"man/bpv6/ipnadminep/#description","title":"DESCRIPTION","text":"

    ipnadminep is a background \"daemon\" task that receives and processes administrative bundles (all custody signals and, nominally, all bundle status reports) that are sent to the IPN-scheme administrative endpoint on the local ION node, if and only if such an endpoint was established by bpadmin. It is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. ipnadminep can also be spawned and terminated in response to START and STOP commands that pertain specifically to the IPN scheme.

    ipnadminep responds to custody signals as specified in the Bundle Protocol specification, RFC 5050. It responds to bundle status reports by logging ASCII text messages describing the reported activity.

    "},{"location":"man/bpv6/ipnadminep/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/ipnadminep/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/ipnadminep/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/ipnadminep/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/ipnadminep/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/ipnadminep/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), ipnadmin(1), bprc(5).

    "},{"location":"man/bpv6/ipnd/","title":"NAME","text":"

    ipnd - ION IPND module

    "},{"location":"man/bpv6/ipnd/#description","title":"DESCRIPTION","text":"

    The ipnd daemon is the ION implementation of DTN IP Neighbor Discovery. This module allows the node to send and receive beacon messages using unicast, multicast or broadcast IP addresses. Beacons are used for the discovery of neighbors and may be used to advertise services that are present and available on nodes, such as routing algorithms or CLAs.

    The ION IPND module is configured using a *.rc configuration file. The name of the configuration file must be passed as the sole command-line argument to ipnd when the daemon is started. Commands are interpreted line by line, with exactly one command per line. The formats and effects of the ION ipnd management commands are described below.

    "},{"location":"man/bpv6/ipnd/#usage","title":"USAGE","text":"

    ipnd config_file_name

    "},{"location":"man/bpv6/ipnd/#commands","title":"COMMANDS","text":""},{"location":"man/bpv6/ipnd/#examples","title":"EXAMPLES","text":"

    m svcdef 128 FooRouter Seed:SeedVal BaseWeight:WeightVal RootHash:bytes

    Defines a new service called FooRouter comprising three elements. SeedVal and WeightVal are user-defined services that must already be defined.

    m svcdef 129 SeedVal Value:fixed16

    m svcdef 130 WeightVal Value:fixed16

    m svcdef 128 FooRouter Seed:SeedVal BaseWeight:WeightVal RootHash:bytes

    m svcdef 150 FixedValuesList F16:fixed16 F32:fixed32 F64:fixed64

    m svcdef 131 VariableValuesList U64:uint64 S64:sint64

    m svcdef 132 BooleanValues B:boolean

    m svcdef 133 FloatValuesList F:float D:double

    m svcdef 135 IntegersList FixedValues:FixedValuesList VariableValues:VariableValuesList

    m svcdef 136 NumbersList Integers:IntegersList Floats:FloatValuesList

    m svcdef 140 HugeService CLAv4:CLA-TCP-v4 Booleans:BooleanValues Numbers:NumbersList FR:FooRouter

    a svcadv HugeService CLAv4:IP:10.1.0.10 CLAv4:Port:4444 Booleans:true FR:Seed:0x5432 FR:BaseWeight:13 FR:RootHash:BEEF Numbers:Integers:FixedValues:F16:0x16 Numbers:Integers:FixedValues:F32:0x32 Numbers:Integers:FixedValues:F64:0x1234567890ABCDEF Numbers:Floats:F:0.32 Numbers:Floats:D:-1e-6 Numbers:Integers:VariableValues:U64:18446744073704783380 Numbers:Integers:VariableValues:S64:-4611686018422619668

    This shows how to define multiple nested services and how to advertise them.

    "},{"location":"man/bpv6/ipnd/#see-also","title":"SEE ALSO","text":"

    ion(3)

    "},{"location":"man/bpv6/ipnfw/","title":"NAME","text":"

    ipnfw - bundle route computation task for the IPN scheme

    "},{"location":"man/bpv6/ipnfw/#synopsis","title":"SYNOPSIS","text":"

    ipnfw

    "},{"location":"man/bpv6/ipnfw/#description","title":"DESCRIPTION","text":"

    ipnfw is a background \"daemon\" task that pops bundles from the queue of bundles destined for IPN-scheme endpoints, computes proximate destinations for those bundles, and appends those bundles to the appropriate queues of bundles pending transmission to those computed proximate destinations.

    For each possible proximate destination (that is, neighboring node) there is a separate queue for each possible level of bundle priority: 0, 1, 2. Each outbound bundle is appended to the queue matching the bundle's designated priority.

    Proximate destination computation is affected by static and default routes as configured by ipnadmin(1) and by contact graphs as managed by ionadmin(1) and rfxclock(1).

    ipnfw is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. ipnfw can also be spawned and terminated in response to START and STOP commands that pertain specifically to the IPN scheme.

    "},{"location":"man/bpv6/ipnfw/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/ipnfw/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/ipnfw/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/ipnfw/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/ipnfw/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/ipnfw/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), ipnadmin(1), bprc(5), ipnrc(5)

    "},{"location":"man/bpv6/ipnrc/","title":"NAME","text":"

    ipnrc - IPN scheme configuration commands file

    "},{"location":"man/bpv6/ipnrc/#description","title":"DESCRIPTION","text":"

    IPN scheme configuration commands are passed to ipnadmin either in a file of text lines or interactively at ipnadmin's command prompt (:). Commands are interpreted line by line, with exactly one command per line.

    IPN scheme configuration commands (a) establish egress plans for direct transmission to neighboring nodes that are members of endpoints identified in the \"ipn\" URI scheme and (b) establish static default routing rules for forwarding bundles to specified destination nodes.

    The egress plan established for a given node associates a duct expression with that node. Each duct expression is a string of the form \"protocol_name/outduct_name\" signifying that the bundle is to be queued for transmission via the indicated convergence layer protocol outduct.

    Note that egress plans must be established for all neighboring nodes, regardless of whether or not contact graph routing is used for computing dynamic routes to distant nodes. This is by definition: if there isn't an egress plan to a node, it can't be considered a neighbor.

    Static default routes are declared as exits in the ipn-scheme routing database. An exit is a range of node numbers identifying a set of nodes for which defined default routing behavior is established. Whenever a bundle is to be forwarded to a node whose number is in the exit's node number range and it has not been possible to compute a dynamic route to that node from the contact schedules that have been provided to the local node and that node is not a neighbor to which the bundle can be directly transmitted, BP will forward the bundle to the gateway node associated with this exit. The gateway node for any exit is identified by an endpoint ID, which might or might not be an ipn-scheme EID; regardless, directing a bundle to the gateway for an exit causes the bundle to be re-forwarded to that intermediate destination endpoint. Multiple exits may encompass the same node number, in which case the gateway associated with the most restrictive exit (the one with the smallest range) is always selected.
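
    For illustration, a minimal ipnrc fragment combining an egress plan and an exit might look like the following; the node numbers, the \"ltp/2\" duct expression, and the gateway EID are hypothetical values, and the exact command forms are documented in ipnrc(5):

    ```
    # Egress plan: neighbor node 2 is reachable via LTP outduct 2.
    a plan 2 ltp/2

    # Exit: bundles for nodes 10 through 20 with no computed route
    # are re-forwarded to the gateway at ipn:5.0.
    a exit 10 20 ipn:5.0
    ```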

    Note that \"exits\" were termed \"groups\" in earlier versions of ION. The term \"exit\" has been adopted instead, to minimize any possible confusion with multicast groups. To protect backward compatibility, the keyword \"group\" continues to be accepted by ipnadmin as an alias for the new keyword \"exit\", but the older terminology is deprecated.

    The formats and effects of the IPN scheme configuration commands are described below.

    "},{"location":"man/bpv6/ipnrc/#general-commands","title":"GENERAL COMMANDS","text":""},{"location":"man/bpv6/ipnrc/#plan-commands","title":"PLAN COMMANDS","text":""},{"location":"man/bpv6/ipnrc/#exit-commands","title":"EXIT COMMANDS","text":""},{"location":"man/bpv6/ipnrc/#override-commands","title":"OVERRIDE COMMANDS","text":""},{"location":"man/bpv6/ipnrc/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv6/ipnrc/#see-also","title":"SEE ALSO","text":"

    ipnadmin(1)

    "},{"location":"man/bpv6/lgagent/","title":"NAME","text":"

    lgagent - ION Load/Go remote agent program

    "},{"location":"man/bpv6/lgagent/#synopsis","title":"SYNOPSIS","text":"

    lgagent own_endpoint_ID

    "},{"location":"man/bpv6/lgagent/#description","title":"DESCRIPTION","text":"

    ION Load/Go is a system for management of an ION-based network, enabling the execution of ION administrative programs at remote nodes. The system comprises two programs, lgsend and lgagent.

    The lgagent task on a given node opens the indicated ION endpoint for bundle reception, receives the extracted payloads of Load/Go bundles sent to it by lgsend as run on one or more remote nodes, and processes those payloads, which are the text of Load/Go source files.

    Load/Go source file content is limited to newline-terminated lines of ASCII characters. More specifically, the text of any Load/Go source file is a sequence of line sets of two types: file capsules and directives. Any Load/Go source file may contain any number of file capsules and any number of directives, freely intermingled in any order, but the typical structure of a Load/Go source file is simply a single file capsule followed by a single directive.

    When lgagent identifies a file capsule, it copies all of the capsule's text lines to a new file that it creates in the current working directory. When lgagent identifies a directive, it executes the directive by passing the text of the directive to the pseudoshell() function (see platform(3)). lgagent processes the line sets of a Load/Go source file in the order in which they appear in the file, so the text of a directive may reference a file that was created as the result of processing a prior file capsule in the same source file.

    "},{"location":"man/bpv6/lgagent/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/lgagent/#files","title":"FILES","text":"

    lgfile contains the Load/Go file capsules and directives that are to be processed.

    "},{"location":"man/bpv6/lgagent/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/lgagent/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    A variety of other diagnostics noting source file parsing problems may also be reported. These errors are non-fatal but they terminate the processing of the source file content from the most recently received bundle.

    "},{"location":"man/bpv6/lgagent/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/lgagent/#see-also","title":"SEE ALSO","text":"

    lgsend(1), lgfile(5)

    "},{"location":"man/bpv6/lgfile/","title":"NAME","text":"

    lgfile - ION Load/Go source file

    "},{"location":"man/bpv6/lgfile/#description","title":"DESCRIPTION","text":"

    The ION Load/Go system enables the execution of ION administrative programs at remote nodes:

    The lgsend program reads a Load/Go source file from a local file system, encapsulates the text of that source file in a bundle, and sends the bundle to a designated DTN endpoint on the remote node.

    An lgagent task running on the remote node, which has opened that DTN endpoint for bundle reception, receives the extracted payload of the bundle -- the text of the Load/Go source file -- and processes it.

    Load/Go source file content is limited to newline-terminated lines of ASCII characters. More specifically, the text of any Load/Go source file is a sequence of line sets of two types: file capsules and directives. Any Load/Go source file may contain any number of file capsules and any number of directives, freely intermingled in any order, but the typical structure of a Load/Go source file is simply a single file capsule followed by a single directive.

    Each file capsule is structured as a single start-of-capsule line, followed by zero or more capsule text lines, followed by a single end-of-capsule line. Each start-of-capsule line is of this form:

    [file_name

    Each capsule text line can be any line of ASCII text that does not begin with an opening ([) or closing (]) bracket character.

    A text line that begins with a closing bracket character (]) is interpreted as an end-of-capsule line.

    A directive is any line of text that is not one of the lines of a file capsule and that is of this form:

    !directive_text

    When lgagent identifies a file capsule, it copies all of the capsule's text lines to a new file named file_name that it creates in the current working directory. When lgagent identifies a directive, it executes the directive by passing directive_text to the pseudoshell() function (see platform(3)). lgagent processes the line sets of a Load/Go source file in the order in which they appear in the file, so the directive_text of a directive may reference a file that was created as the result of processing a prior file capsule line set in the same source file.

    Note that lgfile directives are passed to pseudoshell(), which on a VxWorks platform will always spawn a new task; the first argument in directive_text must be a symbol that VxWorks can resolve to a function, not a shell command. Also note that the arguments in directive_text will be actual task arguments, not shell command-line arguments, so they should never be enclosed in double-quote characters (\"). However, any argument that contains embedded whitespace must be enclosed in single-quote characters (') so that pseudoshell() can parse it correctly.
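
    The capsule and directive line-set rules above can be sketched as a small classifier. This is an illustrative sketch of the file format only, not ION's actual parser; the function name and return shape are invented for the example:

    ```python
    def parse_lg_source(lines):
        # Classify newline-stripped Load/Go source lines into
        # file capsules and directives, preserving order of directives.
        capsules = {}      # file_name -> list of capsule text lines
        directives = []    # directive_text strings
        current = None     # name of the capsule being collected, if any
        for line in lines:
            if current is not None:
                if line.startswith(']'):      # end-of-capsule line
                    current = None
                else:
                    capsules[current].append(line)
            elif line.startswith('['):        # start-of-capsule: [file_name
                current = line[1:]
                capsules[current] = []
            elif line.startswith('!'):        # directive: !directive_text
                directives.append(line[1:])
        return capsules, directives
    ```

    Feeding this sketch the four-line example from the EXAMPLES section below yields one capsule named cmd33.bprc and one directive for bpadmin.
    
    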

    "},{"location":"man/bpv6/lgfile/#examples","title":"EXAMPLES","text":"

    Presenting the following lines of source file text to lgsend:

    [cmd33.bprc

    x protocol ltp

    ]

    !bpadmin cmd33.bprc

    should cause the receiving node to halt the operation of the LTP convergence-layer protocol.

    "},{"location":"man/bpv6/lgfile/#see-also","title":"SEE ALSO","text":"

    lgsend(1), lgagent(1), platform(3)

    "},{"location":"man/bpv6/lgsend/","title":"NAME","text":"

    lgsend - ION Load/Go command program

    "},{"location":"man/bpv6/lgsend/#synopsis","title":"SYNOPSIS","text":"

    lgsend command_file_name own_endpoint_ID destination_endpoint_ID

    "},{"location":"man/bpv6/lgsend/#description","title":"DESCRIPTION","text":"

    ION Load/Go is a system for management of an ION-based network, enabling the execution of ION administrative programs at remote nodes. The system comprises two programs, lgsend and lgagent.

    The lgsend program reads a Load/Go source file from a local file system, encapsulates the text of that source file in a bundle, and sends the bundle to an lgagent task that is waiting for data at a designated DTN endpoint on the remote node.

    To do so, it first reads all lines of the Load/Go source file identified by command_file_name into a temporary buffer in ION's SDR data store, concatenating the lines of the file and retaining all newline characters. Then it invokes the bp_send() function to create and send a bundle whose payload is this temporary buffer, whose destination is destination_endpoint_ID, and whose source endpoint ID is own_endpoint_ID. Then it terminates.

    "},{"location":"man/bpv6/lgsend/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/lgsend/#files","title":"FILES","text":"

    lgfile contains the Load/Go file capsules and directives that are to be sent to the remote node.

    "},{"location":"man/bpv6/lgsend/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/lgsend/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/lgsend/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/lgsend/#see-also","title":"SEE ALSO","text":"

    lgagent(1), lgfile(5)

    "},{"location":"man/bpv6/ltpcli/","title":"NAME","text":"

    ltpcli - LTP-based BP convergence layer input task

    "},{"location":"man/bpv6/ltpcli/#synopsis","title":"SYNOPSIS","text":"

    ltpcli local_node_nbr

    "},{"location":"man/bpv6/ltpcli/#description","title":"DESCRIPTION","text":"

    ltpcli is a background \"daemon\" task that receives LTP data transmission blocks, extracts bundles from the received blocks, and passes them to the bundle protocol agent on the local ION node.

    ltpcli is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"ltp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. ltpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the LTP convergence layer protocol.

    "},{"location":"man/bpv6/ltpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/ltpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/ltpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/ltpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/ltpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/ltpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), ltpadmin(1), ltprc(5), ltpclo(1)

    "},{"location":"man/bpv6/ltpclo/","title":"NAME","text":"

    ltpclo - LTP-based BP convergence layer adapter output task

    "},{"location":"man/bpv6/ltpclo/#synopsis","title":"SYNOPSIS","text":"

    ltpclo remote_node_nbr

    "},{"location":"man/bpv6/ltpclo/#description","title":"DESCRIPTION","text":"

    ltpclo is a background \"daemon\" task that extracts bundles from the queues of bundles ready for transmission via LTP to the remote bundle protocol agent identified by remote_node_nbr and passes them to the local LTP engine for aggregation, segmentation, and transmission to the remote node.

    ltpclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. ltpclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the LTP convergence layer protocol.

    "},{"location":"man/bpv6/ltpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/ltpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/ltpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/ltpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/ltpclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/ltpclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), ltpadmin(1), ltprc(5), ltpcli(1)

    "},{"location":"man/bpv6/stcpcli/","title":"NAME","text":"

    stcpcli - DTN simple TCP convergence layer input task

    "},{"location":"man/bpv6/stcpcli/#synopsis","title":"SYNOPSIS","text":"

    stcpcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv6/stcpcli/#description","title":"DESCRIPTION","text":"

    stcpcli is a background \"daemon\" task comprising 1 + N threads: one that handles TCP connections from remote stcpclo tasks, spawning sockets for data reception from those tasks, plus one input thread for each spawned socket to handle data reception over that socket.

    The connection thread simply accepts connections on a TCP socket bound to local_hostname and local_port_nbr and spawns reception threads. The default value for local_port_nbr, if omitted, is 4556.

    Each reception thread receives bundles over the associated connected socket. Each bundle received on the connection is preceded by a 32-bit unsigned integer in network byte order indicating the length of the bundle. The received bundles are passed to the bundle protocol agent on the local ION node.
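
    The stcp framing rule described above (a 32-bit unsigned length in network byte order preceding each bundle) can be sketched as follows. This is an illustrative sketch of the wire format, not ION's implementation; the function names are invented for the example:

    ```python
    import struct

    def stcp_frame(bundle):
        # Prefix the serialized bundle with its length as a 32-bit
        # unsigned integer in network byte order ('!I'), as stcpclo does.
        return struct.pack('!I', len(bundle)) + bundle

    def stcp_deframe(data):
        # Recover one bundle from the front of a received byte stream,
        # as a stcpcli reception thread would; returns (bundle, remainder).
        (length,) = struct.unpack('!I', data[:4])
        return data[4:4 + length], data[4 + length:]
    ```

    Because the length prefix is fixed-size and byte-order-neutral, a receiver can delimit bundles on a TCP stream without inspecting bundle contents.
    
    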

    stcpcli is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"stcp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. stcpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the STCP convergence layer protocol.

    "},{"location":"man/bpv6/stcpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/stcpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/stcpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/stcpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/stcpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/stcpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), stcpclo(1)

    "},{"location":"man/bpv6/stcpclo/","title":"NAME","text":"

    stcpclo - DTN simple TCP convergence layer adapter output task

    "},{"location":"man/bpv6/stcpclo/#synopsis","title":"SYNOPSIS","text":"

    stcpclo remote_hostname[:remote_port_nbr]

    "},{"location":"man/bpv6/stcpclo/#description","title":"DESCRIPTION","text":"

    stcpclo is a background \"daemon\" task that connects to a remote node's TCP socket at remote_hostname and remote_port_nbr. It then begins extracting bundles from the queues of bundles ready for transmission via TCP to this remote bundle protocol agent and transmitting those bundles over the connected socket to that node. Each transmitted bundle is preceded by a 32-bit integer in network byte order indicating the length of the bundle.

    If not specified, remote_port_nbr defaults to 4556.

    stcpclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. stcpclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the STCP convergence layer protocol.

    "},{"location":"man/bpv6/stcpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/stcpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/stcpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/stcpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/stcpclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/stcpclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), stcpcli(1)

    "},{"location":"man/bpv6/tcpcli/","title":"NAME","text":"

    tcpcli - DTN TCPCL-compliant convergence layer input task

    "},{"location":"man/bpv6/tcpcli/#synopsis","title":"SYNOPSIS","text":"

    tcpcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv6/tcpcli/#description","title":"DESCRIPTION","text":"

    tcpcli is a background \"daemon\" task comprising 3 + 2*N threads: an executive thread; a clock thread that periodically attempts to connect to remote TCPCL entities as identified by the tcp outducts enumerated in the bprc(5) file (each of which must specify the hostname[:port_nbr] to connect to); a thread that handles TCP connections from remote TCPCL entities, spawning sockets for data reception from those tasks; plus one input thread and one output thread for each connection, to handle data reception and transmission over that socket.

    The connection thread simply accepts connections on a TCP socket bound to local_hostname and local_port_nbr and spawns reception threads. The default value for local_port_nbr, if omitted, is 4556.

    Each time a connection is established, the entities will first exchange contact headers, because connection parameters need to be negotiated. tcpcli records the acknowledgement flags, reactive fragmentation flag, and negative acknowledgements flag in the contact header it receives from its peer TCPCL entity.

    Each reception thread receives bundles over the associated connected socket. Each bundle received on the connection is preceded by message type, fragmentation flags, and size represented as an SDNV. The received bundles are passed to the bundle protocol agent on the local ION node.

    Similarly, each transmission thread obtains outbound bundles from the local ION node, encapsulates them as noted above, and transmits them over the associated connected socket.

    tcpcli is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"tcp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. tcpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the TCP convergence layer protocol.

    "},{"location":"man/bpv6/tcpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/tcpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/tcpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/tcpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/tcpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/tcpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv6/tcpclo/","title":"NAME","text":"

    tcpclo - DTN TCPCL-compliant convergence layer adapter output task

    "},{"location":"man/bpv6/tcpclo/#synopsis","title":"SYNOPSIS","text":"

    tcpclo remote_hostname[:remote_port_nbr]

    "},{"location":"man/bpv6/tcpclo/#description","title":"DESCRIPTION","text":"

    tcpclo is a background \"daemon\" task that connects to a remote node's TCP socket at remote_hostname and remote_port_nbr. It sends a contact header, and it records the acknowledgement flag, reactive fragmentation flag and negative acknowledgements flag in the contact header it receives from its peer tcpcli task. It then begins extracting bundles from the queues of bundles ready for transmission via TCP to this remote bundle protocol agent and transmitting those bundles over the connected socket to that node. Each transmitted bundle is preceded by message type, segmentation flags, and an SDNV indicating the size of the bundle (in bytes).

    If not specified, remote_port_nbr defaults to 4556.

    tcpclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. tcpclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the TCP convergence layer protocol.

    "},{"location":"man/bpv6/tcpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/tcpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/tcpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/tcpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/tcpclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/tcpclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), tcpcli(1)

    "},{"location":"man/bpv6/udpcli/","title":"NAME","text":"

    udpcli - UDP-based BP convergence layer input task

    "},{"location":"man/bpv6/udpcli/#synopsis","title":"SYNOPSIS","text":"

    udpcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv6/udpcli/#description","title":"DESCRIPTION","text":"

    udpcli is a background \"daemon\" task that receives UDP datagrams via a UDP socket bound to local_hostname and local_port_nbr, extracts bundles from those datagrams, and passes them to the bundle protocol agent on the local ION node.

    If not specified, local_port_nbr defaults to 4556.

    The convergence layer input task is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"udp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. udpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the UDP convergence layer protocol.

    "},{"location":"man/bpv6/udpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/udpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/udpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/udpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/udpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/udpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), udpclo(1)

    "},{"location":"man/bpv6/udpclo/","title":"NAME","text":"

    udpclo - UDP-based BP convergence layer output task

    "},{"location":"man/bpv6/udpclo/#synopsis","title":"SYNOPSIS","text":"

    udpclo round_trip_time remote_hostname[:remote_port_nbr]

    "},{"location":"man/bpv6/udpclo/#description","title":"DESCRIPTION","text":"

    udpclo is a background \"daemon\" task that extracts bundles from the queues of bundles ready for transmission via UDP to a remote node's UDP socket at remote_hostname and remote_port_nbr, encapsulates those bundles in UDP datagrams, and sends those datagrams to that remote UDP socket.

    Because UDP is not itself a \"reliable\" transmission protocol (i.e., it performs no retransmission of lost data), it may be used in conjunction with BP custodial retransmission. BP custodial retransmission is triggered only by expiration of a timer whose interval is nominally the round-trip time between the sending BP node and the next node in the bundle's end-to-end path that is expected to take custody of the bundle; notionally, if no custody signal citing the transmitted bundle has been received before the end of this interval it can be assumed that either the bundle or the custody signal was lost in transmission and therefore the bundle should be retransmitted. The value of the custodial retransmission timer interval (the expected round-trip time between the sending node and the anticipated next custodian) must be provided as a run-time argument to udpclo. If the value of this parameter is zero, custodial retransmission is disabled.
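
    The custodial retransmission rule described above reduces to a simple timer check. This is an illustrative sketch of the decision logic only, with an invented function name, not ION's bpclock implementation:

    ```python
    def custody_timer_due(send_time, round_trip_time, acked, now):
        # Decide whether a bundle sent at send_time should be re-forwarded:
        # retransmission is triggered only when no custody signal citing the
        # bundle has been received (acked is False) within one expected
        # round-trip time of sending. A round_trip_time of zero disables
        # custodial retransmission entirely, as for udpclo.
        if round_trip_time == 0 or acked:
            return False
        return now - send_time >= round_trip_time
    ```

    Note that the timer interval is the expected round-trip time to the anticipated next custodian, not to the bundle's final destination.
    
    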

    udpclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. udpclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the UDP convergence layer protocol.

    "},{"location":"man/bpv6/udpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv6/udpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv6/udpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv6/udpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv6/udpclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv6/udpclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), udpcli(1)

    "},{"location":"man/bpv7/","title":"Index of Man Pages","text":""},{"location":"man/bpv7/bibeadmin/","title":"NAME","text":"

    bibeadmin - bundle-in-bundle encapsulation database administration interface

    "},{"location":"man/bpv7/bibeadmin/#synopsis","title":"SYNOPSIS","text":"

    bibeadmin [ commands_filename ]

    "},{"location":"man/bpv7/bibeadmin/#description","title":"DESCRIPTION","text":"

    bibeadmin configures the local ION node's database of parameters governing the forwarding of BIBE PDUs to specified remote nodes.

    bibeadmin operates in response to BIBE configuration commands found in the file commands_filename, if provided; if not, bibeadmin prints a simple prompt (:) so that the user may type commands directly into standard input.

    The format of commands for commands_filename can be queried from bibeadmin with the 'h' or '?' commands at the prompt. The commands are documented in biberc(5).

    "},{"location":"man/bpv7/bibeadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bibeadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/bibeadmin/#files","title":"FILES","text":"

    See biberc(5) for details of the BIBE configuration commands.

    "},{"location":"man/bpv7/bibeadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bibeadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the biberc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to bibeadmin. Otherwise bibeadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause bibeadmin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see biberc(5) for details.

    "},{"location":"man/bpv7/bibeadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bibeadmin/#see-also","title":"SEE ALSO","text":"

    bibeclo(1), biberc(5)

    "},{"location":"man/bpv7/bibeclo/","title":"NAME","text":"

    bibeclo - BP convergence layer output task using bundle-in-bundle encapsulation

    "},{"location":"man/bpv7/bibeclo/#synopsis","title":"SYNOPSIS","text":"

    bibeclo peer_EID destination_EID

    "},{"location":"man/bpv7/bibeclo/#description","title":"DESCRIPTION","text":"

    bibeclo is a background \"daemon\" task that extracts bundles from the queues of bundles destined for destination_EID that are ready for transmission via bundle-in-bundle encapsulation (BIBE) to peer_EID, encapsulates them in BP administrative records of (non-standard) record type 7 (BP_BIBE_PDU), and sends those administrative records in encapsulating bundles destined for peer_EID. The forwarding of encapsulated bundles for which custodial acknowledgment is requested causes bibeclo to post custodial re-forwarding timers to the node's timeline. Parameters governing the forwarding of BIBE PDUs to peer_EID are stipulated in the corresponding BIBE convergence-layer adapter (bcla) structure residing in the BIBE database, as managed by bibeadmin.

    The receiving node is expected to process received BIBE PDUs by simply dispatching the encapsulated bundles - whose destination is the node identified by destination_EID - as if they had been received from neighboring nodes in the normal course of operations; BIBE PDUs for which custodial acknowledgment was requested cause the received bundles to be noted in custody signals that are being aggregated by the receiving node.

    bibeclo additionally sends aggregated custody signals in BP administrative records of (non-standard) record type 8 (BP_BIBE_SIGNAL) as the deadlines for custody signal transmission arrive.

    Note that the reception and processing of both encapsulated bundles and custody signals is performed by the scheme-specific administration endpoint daemon(s) at the receiving nodes. Reception of a custody signal terminates the custodial re-forwarding timers for all bundles acknowledged in that signal; the re-forwarding of bundles upon custodial re-forwarding timer expiration is initiated by the bpclock daemon.

    bibeclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. bibeclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the BIBE convergence layer protocol.

    "},{"location":"man/bpv7/bibeclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bibeclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bibeclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bibeclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bibeclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bibeclo/#see-also","title":"SEE ALSO","text":"

    biberc(5), bibeadmin(1)

    "},{"location":"man/bpv7/biberc/","title":"NAME","text":"

    biberc - BIBE configuration commands file

    "},{"location":"man/bpv7/biberc/#description","title":"DESCRIPTION","text":"

    BIBE configuration commands are passed to bibeadmin either in a file of text lines or interactively at bibeadmin's command prompt (:). Commands are interpreted line-by-line, with exactly one command per line.

    BIBE configuration commands establish the parameters governing transmission of BIBE PDUs to specified peer nodes: anticipated delivery latency in the forward direction, anticipated delivery latency in the return direction, TTL for BIBE PDUs, priority for BIBE PDUs, ordinal priority for BIBE PDUs in the event that priority is Expedited, and (optionally) data label for BIBE PDUs. As such, they configure BIBE convergence-layer adapter (bcla) structures.

    The formats and effects of the BIBE configuration commands are described below.

    NOTE: in order to cause bundles to be transmitted via BIBE:

    "},{"location":"man/bpv7/biberc/#general-commands","title":"GENERAL COMMANDS","text":""},{"location":"man/bpv7/biberc/#bcla-commands","title":"BCLA COMMANDS","text":""},{"location":"man/bpv7/biberc/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/biberc/#see-also","title":"SEE ALSO","text":"

    bibeadmin(1), bibeclo(1)

    "},{"location":"man/bpv7/bp/","title":"NAME","text":"

    bp - Bundle Protocol communications library

    "},{"location":"man/bpv7/bp/#synopsis","title":"SYNOPSIS","text":"
    #include \"bp.h\"\n\n[see description for available functions]\n
    "},{"location":"man/bpv7/bp/#description","title":"DESCRIPTION","text":"

    The bp library provides functions enabling application software to use Bundle Protocol to send and receive information over a delay-tolerant network. It conforms to the Bundle Protocol version 7 specification as documented in Internet RFC 9171.

    "},{"location":"man/bpv7/bp/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), lgsend(1), lgagent(1), bpextensions(3), bprc(5), lgfile(5)

    "},{"location":"man/bpv7/bpadmin/","title":"NAME","text":"

    bpadmin - ION Bundle Protocol (BP) administration interface

    "},{"location":"man/bpv7/bpadmin/#synopsis","title":"SYNOPSIS","text":"

    bpadmin [ commands_filename | . | ! ]

    "},{"location":"man/bpv7/bpadmin/#description","title":"DESCRIPTION","text":"

    bpadmin configures, starts, manages, and stops bundle protocol operations for the local ION node.

    It operates in response to BP configuration commands found in the file commands_filename, if provided; if not, bpadmin prints a simple prompt (:) so that the user may type commands directly into standard input. If commands_filename is a period (.), the effect is the same as if a command file containing the single command 'x' were passed to bpadmin -- that is, the ION node's bpclock task, forwarder tasks, and convergence layer adapter tasks are stopped. If commands_filename is an exclamation point (!), that effect is reversed: the ION node's bpclock task, forwarder tasks, and convergence layer adapter tasks are restarted.

    The format of commands for commands_filename can be queried from bpadmin with the 'h' or '?' commands at the prompt. The commands are documented in bprc(5).

    "},{"location":"man/bpv7/bpadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/bpadmin/#files","title":"FILES","text":"

    See bprc(5) for details of the BP configuration commands.

    "},{"location":"man/bpv7/bpadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the bprc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to bpadmin. Otherwise bpadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors noted in the ion.log log file that do not cause bpadmin to fail may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see bprc(5) for details.

    "},{"location":"man/bpv7/bpadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpadmin/#see-also","title":"SEE ALSO","text":"

    ionadmin(1), bprc(5), ipnadmin(1), ipnrc(5), dtnadmin(1), dtnrc(5)

    "},{"location":"man/bpv7/bpcancel/","title":"NAME","text":"

    bpcancel - Bundle Protocol (BP) bundle cancellation utility

    "},{"location":"man/bpv7/bpcancel/#synopsis","title":"SYNOPSIS","text":"

    bpcancel source_EID creation_seconds [creation_count [fragment_offset [fragment_length]]]

    "},{"location":"man/bpv7/bpcancel/#description","title":"DESCRIPTION","text":"

    bpcancel attempts to locate the bundle identified by the command-line parameter values and cancel transmission of this bundle. Bundles for which multiple copies have been queued for transmission can't be canceled, because one or more of those copies might already have been transmitted. Transmission of a bundle that has never been cloned and that is still in local bundle storage is canceled by simulation of an immediate time-to-live expiration.

    "},{"location":"man/bpv7/bpcancel/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpcancel/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpcancel/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpcancel/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bpcancel/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpcancel/#see-also","title":"SEE ALSO","text":"

    bplist(1)

    "},{"location":"man/bpv7/bpchat/","title":"NAME","text":"

    bpchat - Bundle Protocol chat test program

    "},{"location":"man/bpv7/bpchat/#synopsis","title":"SYNOPSIS","text":"

    bpchat sourceEID destEID [ct]

    "},{"location":"man/bpv7/bpchat/#description","title":"DESCRIPTION","text":"

    bpchat uses Bundle Protocol to send input text in bundles, and display the payload of received bundles as output. It is similar to the talk utility, but operates over the Bundle Protocol. It operates like a combination of the bpsource and bpsink utilities in one program (unlike bpsource, bpchat emits bundles with a sourceEID).

    If the sourceEID and destEID are both bpchat applications, then two users can chat with each other over the Bundle Protocol: lines that one user types on the keyboard will be transported over the network in bundles and displayed on the screen of the other user (and the reverse).

    bpchat terminates upon receiving the SIGQUIT signal, i.e., ^C from the keyboard.

    "},{"location":"man/bpv7/bpchat/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpchat/#options","title":"OPTIONS","text":""},{"location":"man/bpv7/bpchat/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpchat/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpchat/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpchat are written to the ION log file ion.log.

    "},{"location":"man/bpv7/bpchat/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpchat/#see-also","title":"SEE ALSO","text":"

    bpecho(1), bpsource(1), bpsink(1), bp(3)

    "},{"location":"man/bpv7/bpclm/","title":"NAME","text":"

    bpclm - DTN bundle protocol convergence layer management daemon

    "},{"location":"man/bpv7/bpclm/#synopsis","title":"SYNOPSIS","text":"

    bpclm neighboring_node_ID

    "},{"location":"man/bpv7/bpclm/#description","title":"DESCRIPTION","text":"

    bpclm is a background \"daemon\" task that manages the transmission of bundles to a single designated neighboring node (as constrained by an \"egress plan\" data structure for that node) by one or more convergence-layer (CL) adapter output daemons (via buffer structures called \"outducts\").

    bpclm is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. bpclm can also be spawned and terminated in response to commands that START and STOP the corresponding node's egress plan.

    "},{"location":"man/bpv7/bpclm/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpclm/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpclm/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpclm/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bpclm/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpclm/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv7/bpclock/","title":"NAME","text":"

    bpclock - Bundle Protocol (BP) daemon task for managing scheduled events

    "},{"location":"man/bpv7/bpclock/#synopsis","title":"SYNOPSIS","text":"

    bpclock

    "},{"location":"man/bpv7/bpclock/#description","title":"DESCRIPTION","text":"

    bpclock is a background \"daemon\" task that periodically performs scheduled Bundle Protocol activities. It is spawned automatically by bpadmin in response to the 's' command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command.

    Once per second, bpclock takes the following action:

    First it (a) destroys all bundles whose TTLs have expired, (b) enqueues for re-forwarding all bundles that were expected to have been transmitted (by convergence-layer output tasks) by now but are still stuck in their assigned transmission queues, and (c) enqueues for re-forwarding all bundles for which custody has not yet been taken that were expected to have been received and acknowledged by now (as noted by invocation of the bpMemo() function by some convergence-layer adapter that had CL-specific insight into the appropriate interval to wait for custody acceptance).

    Then bpclock adjusts the transmission and reception \"throttles\" that control rates of LTP transmission to and reception from neighboring nodes, in response to data rate changes as noted in the RFX database by rfxclock.

    bpclock then checks for bundle origination activity that has been blocked due to insufficient allocated space for BP traffic in the ION data store: if space for bundle origination is now available, bpclock gives the bundle production throttle semaphore to unblock that activity.

    Finally, bpclock applies rate control to all convergence-layer protocol inducts and outducts:

    For each induct, bpclock increases the current capacity of the duct by the applicable nominal data reception rate. If the revised current capacity is greater than zero, bpclock gives the throttle's semaphore to unblock data acquisition (which correspondingly reduces the current capacity of the duct) by the associated convergence layer input task.

    For each outduct, bpclock increases the current capacity of the duct by the applicable nominal data transmission rate. If the revised current capacity is greater than zero, bpclock gives the throttle's semaphore to unblock data transmission (which correspondingly reduces the current capacity of the duct) by the associated convergence layer output task.
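    The per-duct rate-control step described above amounts to a once-per-second capacity refill. The following is a minimal sketch under assumed names (none of these identifiers are ION's actual internals):

    ```c
    /* Hypothetical sketch of bpclock's rate-control step as described
     * above; names and types here are illustrative, not ION's. */
    typedef struct
    {
        long capacity;    /* current duct capacity, in bytes (may be negative) */
        int  unblocked;   /* 1 once the throttle's semaphore has been given */
    } DuctThrottle;

    /* Once per second: add one second's worth of the duct's nominal rate
     * (bytes/sec) to its current capacity; if capacity is now positive,
     * give the semaphore so the associated CL input or output task may
     * proceed (which in turn reduces the capacity again). */
    static void applyRateControl(DuctThrottle *throttle, long nominalRate)
    {
        throttle->capacity += nominalRate;
        if (throttle->capacity > 0)
        {
            throttle->unblocked = 1;   /* stands in for giving the semaphore */
        }
    }
    ```

    A duct that has overdrawn its capacity thus stays blocked for as many one-second refills as it takes for the balance to go positive again.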

    "},{"location":"man/bpv7/bpclock/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpclock/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpclock/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpclock/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bpclock/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpclock/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), rfxclock(1)

    "},{"location":"man/bpv7/bpcounter/","title":"NAME","text":"

    bpcounter - Bundle Protocol reception test program

    "},{"location":"man/bpv7/bpcounter/#synopsis","title":"SYNOPSIS","text":"

    bpcounter ownEndpointId [maxCount]

    "},{"location":"man/bpv7/bpcounter/#description","title":"DESCRIPTION","text":"

    bpcounter uses Bundle Protocol to receive application data units from a remote bpdriver application task. When the total number of application data units it has received exceeds maxCount, it terminates and prints its reception count. If maxCount is omitted, the default limit is 2 billion application data units.

    "},{"location":"man/bpv7/bpcounter/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpcounter/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpcounter/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpcounter/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpcounter are written to the ION log file ion.log.

    "},{"location":"man/bpv7/bpcounter/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpcounter/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpdriver(1), bpecho(1), bp(3)

    "},{"location":"man/bpv7/bpdriver/","title":"NAME","text":"

    bpdriver - Bundle Protocol transmission test program

    "},{"location":"man/bpv7/bpdriver/#synopsis","title":"SYNOPSIS","text":"

    bpdriver nbrOfCycles ownEndpointId destinationEndpointId [length] [tTTL] [iInjectionRate]

    "},{"location":"man/bpv7/bpdriver/#description","title":"DESCRIPTION","text":"

    bpdriver uses Bundle Protocol to send nbrOfCycles application data units of length indicated by length, to a counterpart application task that has opened the BP endpoint identified by destinationEndpointId.

    If omitted, length defaults to 60000.

    TTL indicates the number of seconds the bundles may remain in the network, undelivered, before they are automatically destroyed. If omitted, TTL defaults to 300 seconds.

    bpdriver normally runs in \"echo\" mode: after sending each bundle it waits for an acknowledgment bundle before sending the next one. For this purpose, the counterpart application task should be bpecho.

    Alternatively bpdriver can run in \"streaming\" mode, i.e., without expecting or receiving acknowledgments. Streaming mode is enabled when length is specified as a negative number, in which case the additive inverse of length is used as the effective value of length. For this purpose, the counterpart application task should be bpcounter.

    If the effective value of length is 1, the sizes of the transmitted service data units will be randomly selected multiples of 1024 in the range 1024 to 62464.

    Injection Rate specifies, in bits per second, the equivalent average rate at which bpdriver will send bundles into the network. A rate value of zero or less turns off injection rate control. By default, bpdriver will inject bundles as fast as ION can handle them unless a positive injection rate value is provided.

    bpdriver normally runs with custody transfer disabled. To request custody transfer for all bundles sent by bpdriver, specify nbrOfCycles as a negative number; the additive inverse of nbrOfCycles will be used as its effective value in this case.
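    The sign conventions above can be sketched as follows (hypothetical helper names, not bpdriver's actual code): a negative length selects streaming mode, a negative nbrOfCycles requests custody transfer, and an effective length of 1 yields randomized ADU sizes.

    ```c
    #include <stdlib.h>

    /* Decode bpdriver's negative-parameter convention as described above:
     * a negative value sets the associated mode flag, and its additive
     * inverse becomes the effective value. (Illustrative only.) */
    static int effectiveValue(int parm, int *modeFlag)
    {
        if (parm < 0)
        {
            *modeFlag = 1;      /* streaming mode, or custody transfer */
            return -parm;
        }

        *modeFlag = 0;
        return parm;
    }

    /* When the effective length is 1, each ADU size is a randomly selected
     * multiple of 1024 in the range 1024 to 62464 (i.e., 1024 * 1..61). */
    static int randomAduLength(void)
    {
        return (1 + rand() % 61) * 1024;
    }
    ```

    For example, invoking bpdriver with a length of -60000 would be decoded as streaming mode with an effective length of 60000.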

    When all application data units have been sent, bpdriver prints a performance report.

    "},{"location":"man/bpv7/bpdriver/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpdriver/#files","title":"FILES","text":"

    The service data units transmitted by bpdriver are sequences of text obtained from a file in the current working directory named \"bpdriverAduFile\", which bpdriver creates automatically.

    "},{"location":"man/bpv7/bpdriver/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpdriver/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpdriver are written to the ION log file ion.log.

    "},{"location":"man/bpv7/bpdriver/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpdriver/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpcounter(1), bpecho(1), bp(3)

    "},{"location":"man/bpv7/bpecho/","title":"NAME","text":"

    bpecho - Bundle Protocol reception test program

    "},{"location":"man/bpv7/bpecho/#synopsis","title":"SYNOPSIS","text":"

    bpecho ownEndpointId

    "},{"location":"man/bpv7/bpecho/#description","title":"DESCRIPTION","text":"

    bpecho uses Bundle Protocol to receive application data units from a remote bpdriver application task. In response to each received application data unit it sends back an \"echo\" application data unit of length 2, the NULL-terminated string \"x\".
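    The echo payload described above can be shown concretely; the string literal occupies exactly two bytes, the character x plus the terminating NUL:

    ```c
    /* bpecho's reply payload as described above: the NULL-terminated
     * string "x", which occupies exactly 2 bytes. */
    static const char echoAdu[] = "x";
    ```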

    bpecho terminates upon receiving the SIGQUIT signal, i.e., ^C from the keyboard.

    "},{"location":"man/bpv7/bpecho/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpecho/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpecho/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpecho/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpecho are written to the ION log file ion.log.

    "},{"location":"man/bpv7/bpecho/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpecho/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpdriver(1), bpcounter(1), bp(3)

    "},{"location":"man/bpv7/bpextensions/","title":"NAME","text":"

    bpextensions - interface for adding extensions to Bundle Protocol

    "},{"location":"man/bpv7/bpextensions/#synopsis","title":"SYNOPSIS","text":"
    #include \"bpextensions.c\"\n
    "},{"location":"man/bpv7/bpextensions/#description","title":"DESCRIPTION","text":"

    ION's interface for extending the Bundle Protocol enables the definition of external functions that insert extension blocks into outbound bundles (either before or after the payload block), parse and record extension blocks in inbound bundles, and modify extension blocks at key points in bundle processing. All extension-block handling is statically linked into ION at build time, but the addition of an extension never requires that any standard ION source code be modified.

    Standard structures for recording extension blocks -- both in transient storage [memory] during bundle acquisition (AcqExtBlock) and in persistent storage [the ION database] during subsequent bundle processing (ExtensionBlock) -- are defined in the bei.h header file. In each case, the extension block structure comprises a block type code, block processing flags, possibly a list of EID references, an array of bytes (the serialized form of the block, for transmission), the length of that array, optionally an extension-specific opaque object whose structure is designed to characterize the block in a manner that's convenient for the extension processing functions, and the size of that object.
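    For illustration only, the fields enumerated above can be mirrored one-for-one in a struct. The names and types below are hypothetical; the authoritative AcqExtBlock and ExtensionBlock definitions are in the bei.h header file:

    ```c
    /* Illustrative mirror of the extension-block fields described above;
     * not the actual AcqExtBlock/ExtensionBlock definitions from bei.h. */
    typedef struct
    {
        unsigned char   type;          /* block type code */
        unsigned int    blkProcFlags;  /* block processing flags */
        void           *eidReferences; /* optional list of EID references */
        unsigned char  *bytes;         /* serialized form, for transmission */
        unsigned int    length;        /* length of the bytes array */
        void           *object;        /* optional extension-specific object */
        unsigned int    size;          /* size of that object */
    } SketchExtensionBlock;
    ```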

    The definition of each extension is asserted in an ExtensionDef structure, also as defined in the bei.h header file. Each ExtensionDef must supply:

    The name of the extension. (Used in some diagnostic messages.)

    The extension's block type code.

    A pointer to an offer function.

    A pointer to a function to be called when forwarding a bundle containing this sort of block.

    A pointer to a function to be called when taking custody of a bundle containing this sort of block.

    A pointer to a function to be called when enqueuing for transmission a bundle containing this sort of block.

    A pointer to a function to be called when a convergence-layer adapter dequeues a bundle containing this sort of block, before serializing it.

    A pointer to a function to be called immediately before a convergence-layer adapter transmits a bundle containing this sort of block, after the bundle has been serialized.

    A pointer to a release function.

    A pointer to a copy function.

    A pointer to an acquire function.

    A pointer to a review function.

    A pointer to a decrypt function.

    A pointer to a parse function.

    A pointer to a check function.

    A pointer to a record function.

    A pointer to a clear function.

    All extension definitions must be coded into an array of ExtensionDef structures named extensionDefs.

    An array of ExtensionSpec structures named extensionSpecs is also required. Each ExtensionSpec provides the specification for producing an outbound extension block: block definition (identified by block type number), three discriminator tags whose semantics are block-type-specific, and CRC type indicating what type of CRC must be used to protect this extension block. The order of appearance of extension specifications in the extensionSpecs array determines the order in which extension blocks will be inserted into locally sourced bundles.

    The standard extensionDefs array -- which is empty -- is in the noextensions.c prototype source file. The procedure for extending the Bundle Protocol in ION is as follows:

    1. Specify -DBP_EXTENDED in the Makefile's compiler command line when building the libbpP.c library module.

    2. Create a copy of the prototype extensions file, named \"bpextensions.c\", in a directory that is made visible to the Makefile's libbpP.c compilation command line (by a -I parameter).

    3. In the \"external function declarations\" area of \"bpextensions.c\", add \"extern\" function declarations identifying the functions that will implement your extension (or extensions).

    4. Add one or more ExtensionDef structure initialization lines to the extensionDefs array, referencing those declared functions.

    5. Add one or more ExtensionSpec structure initialization lines to the extensionSpecs array, referencing those extension definitions.

    6. Develop the implementations of the extension implementation functions in one or more new source code files.

    7. Add the object file or files for the new extension implementation source file (or files) to the Makefile's command line for linking libbpP.so.

    The function pointers supplied in each ExtensionDef must conform to the following specifications. NOTE that any function that modifies the bytes member of an ExtensionBlock or AcqExtBlock must set the corresponding length to the new length of the bytes array, if changed.

    "},{"location":"man/bpv7/bpextensions/#utility-functions-for-extension-processing","title":"UTILITY FUNCTIONS FOR EXTENSION PROCESSING","text":""},{"location":"man/bpv7/bpextensions/#see-also","title":"SEE ALSO","text":"

    bp(3)

    "},{"location":"man/bpv7/bping/","title":"NAME","text":"

    bping - Send and receive Bundle Protocol echo bundles.

    "},{"location":"man/bpv7/bping/#synopsis","title":"SYNOPSIS","text":"

    bping [-c count] [-i interval] [-p priority] [-q wait] [-r flags] [-t ttl] srcEID destEID [reporttoEID]

    "},{"location":"man/bpv7/bping/#description","title":"DESCRIPTION","text":"

    bping sends bundles from srcEID to destEID. If the destEID echoes the bundles back (for instance, it is a bpecho endpoint), bping will print the round-trip time. When complete, bping will print statistics before exiting. It is very similar to ping, except it works with the Bundle Protocol.

    bping terminates when one of the following happens: it receives the SIGINT signal (Ctrl+C), it receives responses to all of the bundles it sent, or it has sent all count of its bundles and waited wait seconds.

    When bping is executed in a VxWorks or RTEMS environment, its runtime arguments are presented positionally rather than by keyword, in this order: count, interval, priority, wait, flags, TTL, verbosity (a Boolean, defaulting to zero), source EID, destination EID, report-to EID.

    Source EID and destination EID are always required.

    "},{"location":"man/bpv7/bping/#exit-status","title":"EXIT STATUS","text":"

    These exit statuses are taken from ping.

    "},{"location":"man/bpv7/bping/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bping/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bping/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bping are written to the ION log file ion.log and printed to standard error. Diagnostic messages that don't cause bping to terminate indicate a failure parsing an echo response bundle. This means that destEID isn't an echo endpoint: it's responding with some other bundle message of an unexpected format.

    "},{"location":"man/bpv7/bping/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bping/#see-also","title":"SEE ALSO","text":"

    bpecho(1), bptrace(1), bpadmin(1), bp(3), ping(8)

    "},{"location":"man/bpv7/bplist/","title":"NAME","text":"

    bplist - Bundle Protocol (BP) utility for listing queued bundles

    "},{"location":"man/bpv7/bplist/#synopsis","title":"SYNOPSIS","text":"

    bplist [{count | detail} [destination_EID[/priority]]]

    "},{"location":"man/bpv7/bplist/#description","title":"DESCRIPTION","text":"

    bplist is a utility program that reports on bundles that currently reside in the local node, as identified by entries in the local bundle agent's \"timeline\" list.

    Either a count of bundles or a detailed list of bundles (noting primary block information together with hex and ASCII dumps of the payload and all extension blocks, in expiration-time sequence) may be requested.

    Either all bundles or just a subset of bundles - restricted to bundles for a single destination endpoint, or to bundles of a given level of priority that are all destined for some specified endpoint - may be included in the report.

    By default, bplist prints a detailed list of all bundles residing in the local node.

    "},{"location":"man/bpv7/bplist/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bplist/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bplist/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bplist/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bplist/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bplist/#see-also","title":"SEE ALSO","text":"

    bpclock(1)

    "},{"location":"man/bpv7/bpnmtest/","title":"NAME","text":"

    bpnmtest - Bundle Protocol (BP) network management statistics test

    "},{"location":"man/bpv7/bpnmtest/#synopsis","title":"SYNOPSIS","text":"

    bpnmtest

    "},{"location":"man/bpv7/bpnmtest/#description","title":"DESCRIPTION","text":"

    bpnmtest simply prints to stdout messages containing the current values of all BP network management tallies, then terminates.

    "},{"location":"man/bpv7/bpnmtest/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpnmtest/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpnmtest/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpnmtest/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bpnmtest/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bprc/","title":"NAME","text":"

    bprc - Bundle Protocol management commands file

    "},{"location":"man/bpv7/bprc/#description","title":"DESCRIPTION","text":"

    Bundle Protocol management commands are passed to bpadmin either in a file of text lines or interactively at bpadmin's command prompt (:). Commands are interpreted line-by-line, with exactly one command per line. The formats and effects of the Bundle Protocol management commands are described below.

    "},{"location":"man/bpv7/bprc/#general-commands","title":"GENERAL COMMANDS","text":""},{"location":"man/bpv7/bprc/#scheme-commands","title":"SCHEME COMMANDS","text":""},{"location":"man/bpv7/bprc/#endpoint-commands","title":"ENDPOINT COMMANDS","text":""},{"location":"man/bpv7/bprc/#protocol-commands","title":"PROTOCOL COMMANDS","text":""},{"location":"man/bpv7/bprc/#induct-commands","title":"INDUCT COMMANDS","text":""},{"location":"man/bpv7/bprc/#outduct-commands","title":"OUTDUCT COMMANDS","text":""},{"location":"man/bpv7/bprc/#egress-plan-commands","title":"EGRESS PLAN COMMANDS","text":""},{"location":"man/bpv7/bprc/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/bprc/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), ipnadmin(1), dtn2admin(1)

    "},{"location":"man/bpv7/bprecvfile/","title":"NAME","text":"

    bprecvfile - Bundle Protocol (BP) file reception utility

    "},{"location":"man/bpv7/bprecvfile/#synopsis","title":"SYNOPSIS","text":"

    bprecvfile own_endpoint_ID [max_files]

    "},{"location":"man/bpv7/bprecvfile/#description","title":"DESCRIPTION","text":"

    bprecvfile is intended to serve as the counterpart to bpsendfile. It uses bp_receive() to receive bundles containing file content. The content of each bundle is simply written to a file named \"testfileN\" where N is the total number of bundles received since the program began running.

    If a max_files value of N (where N > 0) is provided, the program will terminate automatically upon completing its Nth file reception. Otherwise it will run indefinitely; use ^C to terminate the program.

    "},{"location":"man/bpv7/bprecvfile/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bprecvfile/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bprecvfile/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bprecvfile/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bprecvfile/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bprecvfile/#see-also","title":"SEE ALSO","text":"

    bpsendfile(1), bp(3)

    "},{"location":"man/bpv7/bprecvfile2/","title":"NAME","text":"

    bprecvfile2 - Bundle Protocol (BP) file reception utility

    "},{"location":"man/bpv7/bprecvfile2/#synopsis","title":"SYNOPSIS","text":"

    bprecvfile2 own_endpoint_ID [filename]

    "},{"location":"man/bpv7/bprecvfile2/#description","title":"DESCRIPTION","text":"

    This is an updated version of the original bprecvfile utility.

    bprecvfile2 is intended to serve as the counterpart to bpsendfile. It uses bp_receive() to receive bundles containing file content. The content of each bundle is simply written to a file named \"filename\". If the filename is not provided on the command line, bundles are written to stdout. Use of UNIX pipes is allowed. Note: If filename exists, data will be appended to that file. If filename does not exist, it will be created. Use ^C to terminate the program.

    "},{"location":"man/bpv7/bprecvfile2/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bprecvfile2/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bprecvfile2/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bprecvfile2/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bprecvfile2/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bprecvfile2/#see-also","title":"SEE ALSO","text":"

    bpsendfile(1), bp(3)

    "},{"location":"man/bpv7/bpsecadmin/","title":"NAME","text":"

    bpsecadmin - BP security policy administration interface

    "},{"location":"man/bpv7/bpsecadmin/#synopsis","title":"SYNOPSIS","text":"

    bpsecadmin [ commands_filename ]

    "},{"location":"man/bpv7/bpsecadmin/#description","title":"DESCRIPTION","text":"

    bpsecadmin configures and manages BP security policy on the local computer.

    It configures and manages BP security policy on the local computer in response to BP configuration commands found in commands_filename, if provided; if not, bpsecadmin prints a simple prompt (:) so that the user may type commands directly into standard input.

    The format of commands for commands_filename can be queried from bpsecadmin by entering the command 'h' or '?' at the prompt. The commands are documented in bpsecrc(5).

    "},{"location":"man/bpv7/bpsecadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpsecadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/bpsecadmin/#files","title":"FILES","text":"

    Status and diagnostic messages from bpsecadmin and from other software that utilizes the ION node are nominally written to a log file in the current working directory within which bpsecadmin was run. The log file is typically named ion.log.

    See also bpsecrc(5).

    "},{"location":"man/bpv7/bpsecadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpsecadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the bpsecrc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to bpsecadmin. Otherwise bpsecadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the log file:

    Various errors that don't cause bpsecadmin to fail but are noted in the log file may be caused by improperly formatted commands given at the prompt or in the commands_filename. Please see bpsecrc(5) for details.

    "},{"location":"man/bpv7/bpsecadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpsecadmin/#see-also","title":"SEE ALSO","text":"

    bpsecrc(5)

    "},{"location":"man/bpv7/bpsecrc/","title":"NAME","text":"

    bpsecrc - BP security policy management commands file

    "},{"location":"man/bpv7/bpsecrc/#description","title":"DESCRIPTION","text":"

    BP security policy management commands are passed to bpsecadmin either in a file of text lines or interactively at bpsecadmin's command prompt (:). Commands are interpreted line-by-line, with exactly one command per line. JSON commands may span multiple lines when provided as part of a config file. The formats and effects of the BP security policy management commands are described below.

    A parameter identified as an eid_expr is an \"endpoint ID expression.\" For all commands, whenever the last character of an endpoint ID expression is the wild-card character '*', an applicable endpoint ID \"matches\" this EID expression if all characters of the endpoint ID expression prior to the last one are equal to the corresponding characters of that endpoint ID. Otherwise an applicable endpoint ID \"matches\" the EID expression only when all characters of the EID and EID expression are identical.

    At present, ION supports a subset of the proposed \"BPSec\" security protocol specification currently under consideration by the Internet Engineering Steering Group. Since BPSec is not yet a published standard, ION's Bundle Protocol security mechanisms will not necessarily interoperate with those of other BP implementations. This is unfortunate but (we hope) temporary, as BPSec represents a major improvement in bundle security. Future releases of ION will implement the entire BPSec specification.

    "},{"location":"man/bpv7/bpsecrc/#commands","title":"COMMANDS","text":""},{"location":"man/bpv7/bpsecrc/#see-also","title":"SEE ALSO","text":"

    bpsecadmin(1)

    "},{"location":"man/bpv7/bpsendfile/","title":"NAME","text":"

    bpsendfile - Bundle Protocol (BP) file transmission utility

    "},{"location":"man/bpv7/bpsendfile/#synopsis","title":"SYNOPSIS","text":"

    bpsendfile own_endpoint_ID destination_endpoint_ID file_name [class_of_service [time_to_live (seconds)]]

    "},{"location":"man/bpv7/bpsendfile/#description","title":"DESCRIPTION","text":"

    bpsendfile uses bp_send() to issue a single bundle to a designated destination endpoint, containing the contents of the file identified by file_name, then terminates. The bundle is sent with no custody transfer requested. When class_of_service is omitted, the bundle is sent at standard priority; for details of the class_of_service parameter, see bptrace(1). time_to_live, if not specified, defaults to 300 seconds (5 minutes). NOTE that time_to_live is specified AFTER class_of_service, rather than before it as in bptrace(1).

    "},{"location":"man/bpv7/bpsendfile/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpsendfile/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpsendfile/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpsendfile/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bpsendfile/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpsendfile/#see-also","title":"SEE ALSO","text":"

    bprecvfile(1), bp(3)

    "},{"location":"man/bpv7/bpsink/","title":"NAME","text":"

    bpsink - Bundle Protocol reception test program

    "},{"location":"man/bpv7/bpsink/#synopsis","title":"SYNOPSIS","text":"

    bpsink ownEndpointId

    "},{"location":"man/bpv7/bpsink/#description","title":"DESCRIPTION","text":"

    bpsink uses Bundle Protocol to receive application data units from a remote bpsource application task. For each application data unit it receives, it prints the ADU's length and -- if length is less than 80 -- its text.

    bpsink terminates upon receiving the SIGQUIT signal, i.e., ^C from the keyboard.

    "},{"location":"man/bpv7/bpsink/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpsink/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpsink/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpsink/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpsink are written to the ION log file ion.log.

    "},{"location":"man/bpv7/bpsink/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpsink/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpsource(1), bp(3)

    "},{"location":"man/bpv7/bpsource/","title":"NAME","text":"

    bpsource - Bundle Protocol transmission test shell

    "},{"location":"man/bpv7/bpsource/#synopsis","title":"SYNOPSIS","text":"

    bpsource destinationEndpointId [\"text\"] [-t TTL]

    "},{"location":"man/bpv7/bpsource/#description","title":"DESCRIPTION","text":"

    When text is supplied, bpsource simply uses Bundle Protocol to send text to a counterpart bpsink application task that has opened the BP endpoint identified by destinationEndpointId, then terminates.

    Otherwise, bpsource offers the user an interactive \"shell\" for testing Bundle Protocol data transmission. bpsource prints a prompt string (\": \") to stdout, accepts a string of text from stdin, uses Bundle Protocol to send the string to a counterpart bpsink application task that has opened the BP endpoint identified by destinationEndpointId, then prints another prompt string and so on. To terminate the program, enter a string consisting of a single exclamation point (!) character.

    TTL indicates the number of seconds the bundles may remain in the network, undelivered, before they are automatically destroyed. If omitted, TTL defaults to 300 seconds.

    The source endpoint ID for each bundle sent by bpsource is the null endpoint ID, i.e., the bundles are anonymous. All bundles are sent standard priority with no custody transfer and no status reports requested.

    "},{"location":"man/bpv7/bpsource/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpsource/#files","title":"FILES","text":"

    The service data units transmitted by bpsource are sequences of text obtained from a file in the current working directory named \"bpsourceAduFile\", which bpsource creates automatically.

    "},{"location":"man/bpv7/bpsource/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpsource/#diagnostics","title":"DIAGNOSTICS","text":"

    Diagnostic messages produced by bpsource are written to the ION log file ion.log.

    "},{"location":"man/bpv7/bpsource/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpsource/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bpsink(1), bp(3)

    "},{"location":"man/bpv7/bpstats/","title":"NAME","text":"

    bpstats - Bundle Protocol (BP) processing statistics query utility

    "},{"location":"man/bpv7/bpstats/#synopsis","title":"SYNOPSIS","text":"

    bpstats

    "},{"location":"man/bpv7/bpstats/#description","title":"DESCRIPTION","text":"

    bpstats simply logs messages containing the current values of all BP processing statistics accumulators, then terminates.

    "},{"location":"man/bpv7/bpstats/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpstats/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpstats/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpstats/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bpstats/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpstats/#see-also","title":"SEE ALSO","text":"

    ion(3)

    "},{"location":"man/bpv7/bpstats2/","title":"NAME","text":"

    bpstats2 - Bundle Protocol (BP) processing statistics query utility via bundles

    "},{"location":"man/bpv7/bpstats2/#synopsis","title":"SYNOPSIS","text":"

    bpstats2 sourceEID [default destEID] [ct]

    "},{"location":"man/bpv7/bpstats2/#description","title":"DESCRIPTION","text":"

    bpstats2 creates bundles containing the current values of all BP processing statistics accumulators. It creates these bundles when:

    "},{"location":"man/bpv7/bpstats2/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bpstats2/#options","title":"OPTIONS","text":""},{"location":"man/bpv7/bpstats2/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bpstats2/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bpstats2/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bpstats2/#notes","title":"NOTES","text":"

    A very simple interrogator is bpchat, which can repeatedly interrogate bpstats2 simply by striking the enter key.

    "},{"location":"man/bpv7/bpstats2/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bpstats2/#see-also","title":"SEE ALSO","text":"

    bpstats(1), bpchat(1)

    "},{"location":"man/bpv7/bptrace/","title":"NAME","text":"

    bptrace - Bundle Protocol (BP) network trace utility

    "},{"location":"man/bpv7/bptrace/#synopsis","title":"SYNOPSIS","text":"

    bptrace own_endpoint_ID destination_endpoint_ID report-to_endpoint_ID TTL class_of_service \"trace_text\" [status_report_flags]

    "},{"location":"man/bpv7/bptrace/#description","title":"DESCRIPTION","text":"

    bptrace uses bp_send() to issue a single bundle to a designated destination endpoint, with status reporting options enabled as selected by the user, then terminates. The status reports returned as the bundle makes its way through the network provide a view of the operation of the network as currently configured.

    TTL indicates the number of seconds the trace bundle may remain in the network, undelivered, before it is automatically destroyed.

    class_of_service is custody-requested.priority[.ordinal[.unreliable.critical[.data-label]]], where custody-requested must be 0 or 1 (Boolean), priority must be 0 (bulk) or 1 (standard) or 2 (expedited), ordinal must be 0-254, unreliable must be 0 or 1 (Boolean), critical must also be 0 or 1 (Boolean), and data-label may be any unsigned integer. custody-requested is passed in with the bundle transmission request, but if set to 1 it serves only to request the use of reliable convergence-layer protocols; this will have the effect of enabling custody transfer whenever the applicable convergence-layer protocol is bundle-in-bundle encapsulation (BIBE). ordinal is ignored if priority is not 2. Setting class_of_service to \"0.2.254\" or \"1.2.254\" gives a bundle the highest possible priority. Setting unreliable to 1 causes BP to forego convergence-layer retransmission in the event of data loss. Setting critical to 1 causes contact graph routing to forward the bundle on all plausible routes rather than just the \"best\" route it computes; this may result in multiple copies of the bundle arriving at the destination endpoint, but when used in conjunction with priority 2.254 it ensures that the bundle will be delivered as soon as physically possible.

    trace_text can be any string of ASCII text; alternatively, if we want to send a file, it can be \"@\" followed by the name of the file.

    status_report_flags must be a sequence of status report flags, separated by commas, with no embedded whitespace. Each status report flag must be one of the following: rcv, fwd, dlv, del.

    "},{"location":"man/bpv7/bptrace/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bptrace/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bptrace/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bptrace/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bptrace/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bptrace/#see-also","title":"SEE ALSO","text":"

    bp(3)

    "},{"location":"man/bpv7/bptransit/","title":"NAME","text":"

    bptransit - Bundle Protocol (BP) daemon task for forwarding received bundles

    "},{"location":"man/bpv7/bptransit/#synopsis","title":"SYNOPSIS","text":"

    bptransit

    "},{"location":"man/bpv7/bptransit/#description","title":"DESCRIPTION","text":"

    bptransit is a background \"daemon\" task that is responsible for presenting to ION's forwarding daemons any bundles that were received from other nodes (i.e., bundles whose payloads reside in Inbound ZCO space) and are destined for yet other nodes. In doing so, it migrates these bundles from Inbound buffer space to Outbound buffer space on the same prioritized basis as the insertion of locally sourced outbound bundles.

    Management of the bptransit daemon is automatic. It is spawned automatically by bpadmin in response to the 's' command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command.

    Whenever a received bundle is determined to have a destination other than the local node, a pointer to that bundle is appended to one of two queues of \"in-transit\" bundles, one for bundles whose forwarding is provisional (depending on the availability of Outbound ZCO buffer space; bundles in this queue are potentially subject to congestion loss) and one for bundles whose forwarding is confirmed. Bundles received via convergence-layer adapters that can sustain flow control, such as STCP, are appended to the \"confirmed\" queue, while those from CLAs that cannot sustain flow control (such as LTP) are appended to the \"provisional\" queue.

    bptransit comprises two threads, one for each in-transit queue. The confirmed in-transit thread dequeues bundles from the \"confirmed\" queue and moves them from Inbound to Outbound ZCO buffer space, blocking (if necessary) until space becomes available. The provisional in-transit thread dequeues bundles from the \"provisional\" queue and moves them from Inbound to Outbound ZCO buffer space if Outbound space is available, discarding (\"abandoning\") them if it is not.

    "},{"location":"man/bpv7/bptransit/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/bptransit/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/bptransit/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/bptransit/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/bptransit/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/bptransit/#see-also","title":"SEE ALSO","text":"

    bpadmin(1)

    "},{"location":"man/bpv7/brsccla/","title":"NAME","text":"

    brsccla - BRSC-based BP convergence layer adapter (input and output) task

    "},{"location":"man/bpv7/brsccla/#synopsis","title":"SYNOPSIS","text":"

    brsccla server_hostname[:server_port_nbr] own_node_nbr

    "},{"location":"man/bpv7/brsccla/#description","title":"DESCRIPTION","text":"

    BRSC is the \"client\" side of the Bundle Relay Service (BRS) convergence layer protocol for BP. It is complemented by BRSS, the \"server\" side of the BRS convergence layer protocol for BP. BRS clients send bundles directly only to the server, regardless of their final destinations, and the server forwards them to other clients as necessary.

    brsccla is a background \"daemon\" task comprising three threads: one that connects to the BRS server, spawns the other threads, and then handles BRSC protocol output by transmitting bundles over the connected socket to the BRS server; one that simply sends periodic \"keepalive\" messages over the connected socket to the server (to assure that local inactivity doesn't cause the connection to be lost); and one that handles BRSC protocol input from the connected server.

    The output thread connects to the server's TCP socket at server_hostname and server_port_nbr, sends over the connected socket the client's own_node_nbr (in SDNV representation) followed by a 32-bit time tag and a 160-bit HMAC-SHA1 digest of that time tag, to authenticate itself; checks the authenticity of the 160-bit countersign returned by the server; spawns the keepalive and receiver threads; and then begins extracting bundles from the queues of bundles ready for transmission via BRSC and transmitting those bundles over the connected socket to the server. Each transmitted bundle is preceded by its length, a 32-bit unsigned integer in network byte order. The default value for server_port_nbr, if omitted, is 80.

    The reception thread receives bundles over the connected socket and passes them to the bundle protocol agent on the local ION node. Each bundle received on the connection is preceded by its length, a 32-bit unsigned integer in network byte order.

    The keepalive thread simply sends a \"bundle length\" value of zero (a 32-bit unsigned integer in network byte order) to the server once every 15 seconds.

    brsccla is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. brsccla can also be spawned and terminated in response to START and STOP commands that pertain specifically to the BRSC convergence layer protocol.

    "},{"location":"man/bpv7/brsccla/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/brsccla/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/brsccla/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/brsccla/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/brsccla/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/brsccla/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), brsscla(1)

    "},{"location":"man/bpv7/brsscla/","title":"NAME","text":"

    brsscla - BRSS-based BP convergence layer adapter (input and output) task

    "},{"location":"man/bpv7/brsscla/#synopsis","title":"SYNOPSIS","text":"

    brsscla local_hostname[:local_port_nbr]

    "},{"location":"man/bpv7/brsscla/#description","title":"DESCRIPTION","text":"

    BRSS is the \"server\" side of the Bundle Relay Service (BRS) convergence layer protocol for BP. It is complemented by BRSC, the \"client\" side of the BRS convergence layer protocol for BP.

    brsscla is a background \"daemon\" task that spawns multiple threads: one that handles BRSS client connections and spawns sockets for continued data interchange with connected clients, plus two threads for each spawned socket, an input thread to handle BRSS protocol input from the associated connected client and an output thread to forward BRSS protocol output to the associated connected client.

    The connection thread simply accepts connections on a TCP socket bound to local_hostname and local_port_nbr and spawns reception threads. The default value for local_port_nbr, if omitted, is 80.

    Each reception thread receives over the socket connection the node number of the connecting client (in SDNV representation), followed by a 32-bit time tag and a 160-bit HMAC-SHA1 digest of that time tag. The receiving thread checks the time tag, requiring that it differ from the current time by no more than BRSTERM (default value 5) seconds. It then recomputes the digest value using the HMAC-SHA1 key named \"node_number.brs\" as recorded in the ION security database (see ionsecrc(5)), requiring that the supplied and computed digests be identical. If all registration conditions are met, the receiving thread sends the client a countersign -- a similarly computed HMAC-SHA1 digest, for the time tag that is 1 second later than the provided time tag -- to assure the client of its own authenticity, then commences receiving bundles over the connected socket. Each bundle received on the connection is preceded by its length, a 32-bit unsigned integer in network byte order. The received bundles are passed to the bundle protocol agent on the local ION node.

    Each output thread extracts bundles from the queues of bundles ready for transmission via BRSS to the corresponding connected client and transmits the bundles over the socket to that client. Each transmitted bundle is preceded by its length, a 32-bit unsigned integer in network byte order.

    brsscla is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. brsscla can also be spawned and terminated in response to START and STOP commands that pertain specifically to the BRSS convergence layer protocol.

    "},{"location":"man/bpv7/brsscla/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/brsscla/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/brsscla/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/brsscla/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/brsscla/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/brsscla/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), brsccla(1)

    "},{"location":"man/bpv7/cgrfetch/","title":"NAME","text":"

    cgrfetch - Visualize CGR simulations

    "},{"location":"man/bpv7/cgrfetch/#synopsis","title":"SYNOPSIS","text":"

    cgrfetch [OPTIONS] DEST-NODE

    "},{"location":"man/bpv7/cgrfetch/#description","title":"DESCRIPTION","text":"

    cgrfetch uses CGR to simulate sending a bundle from the local node to DEST-NODE. It traces the execution of CGR to generate graphs of the routes that were considered and the routes that were ultimately chosen for forwarding. No bundle is sent during the simulation.

    A JSON representation of the simulation is output to OUTPUT-FILE. The representation includes parameters of the simulation and a structure for each considered route, which in turn includes calculated parameters for the route and an image of the contact graph.

    The dot(1) tool from the Graphviz package is used to generate the contact graph images and is required for cgrfetch(1). The base64(1) tool from coreutils is used to embed the images in the JSON and is also required.

    Note that a trace of the route computation logic performed by CGR is printed to stderr; there is currently no cgrfetch option for redirecting this output to a file.

    "},{"location":"man/bpv7/cgrfetch/#options","title":"OPTIONS","text":""},{"location":"man/bpv7/cgrfetch/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/cgrfetch/#see-also","title":"SEE ALSO","text":"

    dot(1), base64(1)

    "},{"location":"man/bpv7/dccpcli/","title":"NAME","text":"

    dccpcli - DCCP-based BP convergence layer input task

    "},{"location":"man/bpv7/dccpcli/#synopsis","title":"SYNOPSIS","text":"

    dccpcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv7/dccpcli/#description","title":"DESCRIPTION","text":"

    dccpcli is a background \"daemon\" task that receives DCCP datagrams via a DCCP socket bound to local_hostname and local_port_nbr, extracts bundles from those datagrams, and passes them to the bundle protocol agent on the local ION node.

    If not specified, local_port_nbr defaults to 4556.

    Note that dccpcli has no fragmentation support at all. Therefore, the largest bundle that can be sent via this convergence layer is limited to just under the link's MTU (typically 1500 bytes).

    The convergence layer input task is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"dccp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. dccpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the DCCP convergence layer protocol.

    "},{"location":"man/bpv7/dccpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/dccpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/dccpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/dccpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/dccpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/dccpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), dccpclo(1)

    "},{"location":"man/bpv7/dccpclo/","title":"NAME","text":"

    dccpclo - DCCP-based BP convergence layer output task

    "},{"location":"man/bpv7/dccpclo/#synopsis","title":"SYNOPSIS","text":"

    dccpclo remote_hostname[:remote_port_nbr]

    "},{"location":"man/bpv7/dccpclo/#description","title":"DESCRIPTION","text":"

    dccpclo is a background \"daemon\" task that connects to a remote node's DCCP socket at remote_hostname and remote_port_nbr. It then begins extracting bundles from the queues of bundles ready for transmission via DCCP to this remote bundle protocol agent and transmitting those bundles as DCCP datagrams to the remote host.

    If not specified, remote_port_nbr defaults to 4556.

    Note that dccpclo has no fragmentation support at all. Therefore, the largest bundle that can be sent via this convergence layer is limited to just under the link's MTU (typically 1500 bytes).

    dccpclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. dccpclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the DCCP convergence layer protocol.

    "},{"location":"man/bpv7/dccpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/dccpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/dccpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/dccpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/dccpclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/dccpclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), dccpcli(1)

    "},{"location":"man/bpv7/dgrcli/","title":"NAME","text":"

    dgrcli - DGR-based BP convergence layer reception task

    "},{"location":"man/bpv7/dgrcli/#synopsis","title":"SYNOPSIS","text":"

    dgrcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv7/dgrcli/#description","title":"DESCRIPTION","text":"

    dgrcli is a background \"daemon\" task that handles DGR convergence layer protocol input.

    The daemon receives DGR messages via a UDP socket bound to local_hostname and local_port_nbr, extracts bundles from those messages, and passes them to the bundle protocol agent on the local ION node. (local_port_nbr defaults to 1113 if not specified.)

    dgrcli is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. dgrcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the DGR convergence layer protocol.

    "},{"location":"man/bpv7/dgrcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/dgrcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/dgrcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/dgrcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/dgrcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/dgrcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv7/dgrclo/","title":"NAME","text":"

    dgrclo - DGR-based BP convergence layer transmission task

    "},{"location":"man/bpv7/dgrclo/#synopsis","title":"SYNOPSIS","text":"

    dgrclo remote_hostname[:remote_port_nbr]

    "},{"location":"man/bpv7/dgrclo/#description","title":"DESCRIPTION","text":"

    dgrclo is a background \"daemon\" task that spawns two threads, one that handles DGR convergence layer protocol input (positive and negative acknowledgments) and a second that handles DGR convergence layer protocol output.

    The output thread extracts bundles from the queues of bundles ready for transmission via DGR to a remote bundle protocol agent, encapsulates them in DGR messages, and uses a randomly configured local UDP socket to send those messages to the remote UDP socket bound to remote_hostname and remote_port_nbr. (remote_port_nbr defaults to 1113 if not specified.)

    The input thread receives DGR messages via the same local UDP socket and uses them to manage DGR retransmission of transmitted datagrams.

    dgrclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. dgrclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the DGR convergence layer protocol.

    "},{"location":"man/bpv7/dgrclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/dgrclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/dgrclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/dgrclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/dgrclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/dgrclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv7/dtn2admin/","title":"NAME","text":"

    dtn2admin - baseline \"dtn\" scheme administration interface

    "},{"location":"man/bpv7/dtn2admin/#synopsis","title":"SYNOPSIS","text":"

    dtn2admin [ commands_filename ]

    "},{"location":"man/bpv7/dtn2admin/#description","title":"DESCRIPTION","text":"

    dtn2admin configures the local ION node's routing of bundles to endpoints whose IDs conform to the dtn endpoint ID scheme. Endpoint IDs in the dtn scheme are strings of the form \"dtn://node_name/[[~]demux_token]\", where node_name identifies a BP node (often this is the DNS name of the computer on which the node resides) and demux_token normally identifies a specific application processing point. When and only when the terminating demux string (everything after the final '/') does NOT begin with '~', the endpoint ID identifies a singleton endpoint; when the terminating demux string is omitted, the endpoint ID constitutes a node ID. Although the dtn endpoint ID scheme imposes more transmission overhead than the ipn scheme, ION provides support for dtn endpoint IDs to enable interoperation with other implementations of Bundle Protocol.

    dtn2admin operates in response to \"dtn\" scheme configuration commands found in the file commands_filename, if provided; if not, dtn2admin prints a simple prompt (:) so that the user may type commands directly into standard input.

    The format of commands for commands_filename can be queried from dtn2admin with the 'h' or '?' commands at the prompt. The commands are documented in dtn2rc(5).

    "},{"location":"man/bpv7/dtn2admin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/dtn2admin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/dtn2admin/#files","title":"FILES","text":"

    See dtn2rc(5) for details of the DTN scheme configuration commands.

    "},{"location":"man/bpv7/dtn2admin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/dtn2admin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the dtn2rc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to dtn2admin. Otherwise dtn2admin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause dtn2admin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see dtn2rc(5) for details.

    "},{"location":"man/bpv7/dtn2admin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/dtn2admin/#see-also","title":"SEE ALSO","text":"

    dtn2rc(5)

    "},{"location":"man/bpv7/dtn2adminep/","title":"NAME","text":"

    dtn2adminep - administrative endpoint task for the \"dtn\" scheme

    "},{"location":"man/bpv7/dtn2adminep/#synopsis","title":"SYNOPSIS","text":"

    dtn2adminep

    "},{"location":"man/bpv7/dtn2adminep/#description","title":"DESCRIPTION","text":"

    dtn2adminep is a background \"daemon\" task that receives and processes administrative bundles (minimally, all bundle status reports) that are sent to the \"dtn\"-scheme administrative endpoint on the local ION node, if and only if such an endpoint was established by bpadmin. It is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. dtn2adminep can also be spawned and terminated in response to START and STOP commands that pertain specifically to the \"dtn\" scheme.

    dtn2adminep responds to bundle status reports by logging ASCII text messages describing the reported activity.

    "},{"location":"man/bpv7/dtn2adminep/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/dtn2adminep/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/dtn2adminep/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/dtn2adminep/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/dtn2adminep/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/dtn2adminep/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), dtn2admin(1).

    "},{"location":"man/bpv7/dtn2fw/","title":"NAME","text":"

    dtn2fw - bundle route computation task for the \"dtn\" scheme

    "},{"location":"man/bpv7/dtn2fw/#synopsis","title":"SYNOPSIS","text":"

    dtn2fw

    "},{"location":"man/bpv7/dtn2fw/#description","title":"DESCRIPTION","text":"

    dtn2fw is a background \"daemon\" task that pops bundles from the queue of bundles destined for \"dtn\"-scheme endpoints, computes proximate destinations for those bundles, and appends those bundles to the appropriate queues of bundles pending transmission to those computed proximate destinations.

    For each possible proximate destination (that is, neighboring node) there is a separate queue for each possible level of bundle priority: 0, 1, 2. Each outbound bundle is appended to the queue matching the bundle's designated priority.
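    The per-neighbor priority queuing described above can be sketched as follows. This is a minimal illustration, not ION source code; the drain order shown (highest priority first) is an assumption for the sketch, reflecting that higher-numbered priorities carry more urgent traffic.

    ```python
    # Illustrative sketch (not ION source): one transmission queue per
    # bundle priority (0, 1, 2) for each proximate destination.
    from collections import defaultdict, deque

    class OutboundQueues:
        def __init__(self):
            # neighbor -> [queue for priority 0, queue for 1, queue for 2]
            self.queues = defaultdict(lambda: [deque(), deque(), deque()])

        def enqueue(self, neighbor, bundle, priority):
            # Each outbound bundle is appended to the queue matching
            # the bundle's designated priority.
            self.queues[neighbor][priority].append(bundle)

        def dequeue(self, neighbor):
            # Assumed drain order: priority 2 first, then 1, then 0.
            for q in reversed(self.queues[neighbor]):
                if q:
                    return q.popleft()
            return None
    ```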

    Proximate destination computation is affected by static routes as configured by dtn2admin(1).

    dtn2fw is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. dtn2fw can also be spawned and terminated in response to START and STOP commands that pertain specifically to the \"dtn\" scheme.

    "},{"location":"man/bpv7/dtn2fw/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/dtn2fw/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/dtn2fw/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/dtn2fw/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/dtn2fw/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/dtn2fw/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), dtn2admin(1), bprc(5), dtn2rc(5).

    "},{"location":"man/bpv7/dtn2rc/","title":"NAME","text":"

    dtn2rc - \"dtn\" scheme configuration commands file

    "},{"location":"man/bpv7/dtn2rc/#description","title":"DESCRIPTION","text":"

    \"dtn\" scheme configuration commands are passed to dtn2admin either in a file of text lines or interactively at dtn2admin's command prompt (:). Commands are interpreted line-by line, with exactly one command per line.

    \"dtn\" scheme configuration commands establish static routing rules for forwarding bundles to nodes identified by \"dtn\"-scheme destination endpoints.

    Static routes are expressed as plans in the \"dtn\"-scheme routing database. A plan that is established for a given node name associates a routing directive with the named node. Each directive is a string of one of two possible forms:

    f endpoint_ID

    ...or...

    x protocol_name/outduct_name

    The former form signifies that the bundle is to be forwarded to the indicated endpoint, requiring that it be re-queued for processing by the forwarder for that endpoint (which might, but need not, be identified by another \"dtn\"-scheme endpoint ID). The latter form signifies that the bundle is to be queued for transmission via the indicated convergence layer protocol outduct.

    The node names cited in dtn2rc plans may be \"wild-carded\". That is, when the last character of a plan's node name is either '*' or '~' (these two wild-card characters are equivalent for this purpose), the plan applies to all nodes whose names are identical to the wild-carded node name up to the wild-card character. For example, a bundle whose destination EID is \"dtn://foghorn/x\" would be routed by plans citing the following node names: \"foghorn\", \"fogh*\", \"fog~\", \"*\". When multiple plans are all applicable to the same destination EID, the one citing the longest (i.e., most narrowly targeted) node name will be applied.
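    The wild-card matching and longest-match selection rule above can be sketched as follows (a minimal illustration, not ION source; function names are invented for the sketch):

    ```python
    # Illustrative sketch (not ION source): selecting among wild-carded
    # dtn2rc plan node names. A plan name ending in '*' or '~' matches
    # any node name that shares the prefix before the wild-card.

    def plan_matches(plan_name, node_name):
        if plan_name and plan_name[-1] in '*~':
            return node_name.startswith(plan_name[:-1])
        return plan_name == node_name

    def select_plan(plan_names, node_name):
        # When multiple plans apply, the one citing the longest
        # (most narrowly targeted) node name is applied.
        applicable = [p for p in plan_names if plan_matches(p, node_name)]
        return max(applicable, key=len) if applicable else None
    ```

    For a bundle destined for dtn://foghorn/x (node name foghorn), all of the plan names foghorn, fogh*, fog~, and * apply, and the exact name foghorn wins.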

    The formats and effects of the DTN scheme configuration commands are described below.

    "},{"location":"man/bpv7/dtn2rc/#general-commands","title":"GENERAL COMMANDS","text":""},{"location":"man/bpv7/dtn2rc/#plan-commands","title":"PLAN COMMANDS","text":""},{"location":"man/bpv7/dtn2rc/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/dtn2rc/#see-also","title":"SEE ALSO","text":"

    dtn2admin(1)

    "},{"location":"man/bpv7/hmackeys/","title":"NAME","text":"

    hmackeys - utility program for generating good HMAC-SHA1 keys

    "},{"location":"man/bpv7/hmackeys/#synopsis","title":"SYNOPSIS","text":"

    hmackeys [ keynames_filename ]

    "},{"location":"man/bpv7/hmackeys/#description","title":"DESCRIPTION","text":"

    hmackeys writes files containing randomized 160-bit key values suitable for use by HMAC-SHA1 in support of Bundle Authentication Block processing, Bundle Relay Service connections, or other functions for which symmetric hash computation is applicable. One file is written for each key name presented to hmackeys; the content of each file is 20 consecutive randomly selected 8-bit integer values, and the name given to each file is simply \"keyname.hmk\".
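    The key-file format described above (20 random 8-bit values in a file named keyname.hmk) can be sketched as follows. This is an illustrative stand-in, not the hmackeys source; the function name and directory parameter are invented for the sketch.

    ```python
    # Illustrative sketch (not hmackeys source): writing a 160-bit key
    # file suitable for HMAC-SHA1, named 'keyname.hmk'.
    import os

    def write_hmac_key(keyname, directory='.'):
        key = os.urandom(20)  # 20 random 8-bit values = 160 bits
        path = os.path.join(directory, keyname + '.hmk')
        with open(path, 'wb') as f:
            f.write(key)
        return path
    ```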

    hmackeys operates in response to the key names found in the file keynames_filename, one key name per text line, if provided; if not, hmackeys prints a simple prompt (:) so that the user may type key names directly into standard input.

    When the program is run in interactive mode, either enter 'q' or press ^C to terminate.

    "},{"location":"man/bpv7/hmackeys/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/hmackeys/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/hmackeys/#files","title":"FILES","text":"

    No other files are used in the operation of hmackeys.

    "},{"location":"man/bpv7/hmackeys/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/hmackeys/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the logfile ion.log:

    "},{"location":"man/bpv7/hmackeys/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/hmackeys/#see-also","title":"SEE ALSO","text":"

    brsscla(1), ionsecadmin(1)

    "},{"location":"man/bpv7/imcadminep/","title":"NAME","text":"

    imcadminep - administrative endpoint task for the IMC (multicast) scheme

    "},{"location":"man/bpv7/imcadminep/#synopsis","title":"SYNOPSIS","text":"

    imcadminep

    "},{"location":"man/bpv7/imcadminep/#description","title":"DESCRIPTION","text":"

    imcadminep is a background \"daemon\" task that receives and processes administrative bundles (multicast group petitions) that are sent to the IMC-scheme administrative endpoint on the local ION node, if and only if such an endpoint was established by bpadmin. It is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. imcadminep can also be spawned and terminated in response to START and STOP commands that pertain specifically to the IMC scheme.

    imcadminep responds to multicast group \"join\" and \"leave\" petitions by managing entries in the node's database of multicast groups and their members.

    "},{"location":"man/bpv7/imcadminep/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/imcadminep/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/imcadminep/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/imcadminep/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/imcadminep/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/imcadminep/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv7/imcfw/","title":"NAME","text":"

    imcfw - bundle route computation task for the IMC scheme

    "},{"location":"man/bpv7/imcfw/#synopsis","title":"SYNOPSIS","text":"

    imcfw

    "},{"location":"man/bpv7/imcfw/#description","title":"DESCRIPTION","text":"

    imcfw is a background \"daemon\" task that pops bundles from the queue of bundles destined for IMC-scheme (Interplanetary Multicast) endpoints, determines which \"relatives\" on the IMC multicast tree to forward the bundles to, and appends those bundles to the appropriate queues of bundles pending transmission to those proximate destinations.

    For each possible proximate destination (that is, neighboring node) there is a separate queue for each possible level of bundle priority: 0, 1, 2. Each outbound bundle is appended to the queue matching the bundle's designated priority.

    Proximate destination computation is determined by multicast group membership resulting from nodes' registration in multicast endpoints (accomplished simply by adding the appropriate endpoint, as discussed in bprc(5)).

    imcfw is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. imcfw can also be spawned and terminated in response to START and STOP commands that pertain specifically to the IMC scheme.

    "},{"location":"man/bpv7/imcfw/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/imcfw/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/imcfw/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/imcfw/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/imcfw/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/imcfw/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv7/ipnadmin/","title":"NAME","text":"

    ipnadmin - Interplanetary Internet (IPN) scheme administration interface

    "},{"location":"man/bpv7/ipnadmin/#synopsis","title":"SYNOPSIS","text":"

    ipnadmin [ commands_filename ]

    "},{"location":"man/bpv7/ipnadmin/#description","title":"DESCRIPTION","text":"

    ipnadmin configures the local ION node's routing of bundles to endpoints whose IDs conform to the ipn endpoint ID scheme. Every endpoint ID in the ipn scheme is a string of the form \"ipn:node_number.service_number\" where node_number is a CBHE \"node number\" and service_number identifies a specific application processing point. When service_number is zero, the endpoint ID constitutes a node ID. All endpoint IDs formed in the ipn scheme identify singleton endpoints.
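    The ipn endpoint ID structure described above can be sketched as a small parser (a minimal illustration, not ION source; function names are invented for the sketch):

    ```python
    # Illustrative sketch (not ION source): parsing an ipn-scheme
    # endpoint ID of the form 'ipn:node_number.service_number'.
    # A service number of zero marks the EID as a node ID.

    def parse_ipn_eid(eid):
        if not eid.startswith('ipn:'):
            raise ValueError('not an ipn-scheme EID: ' + eid)
        node, service = eid[4:].split('.', 1)
        return int(node), int(service)

    def is_node_id(eid):
        return parse_ipn_eid(eid)[1] == 0
    ```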

    ipnadmin operates in response to IPN scheme configuration commands found in the file commands_filename, if provided; if not, ipnadmin prints a simple prompt (:) so that the user may type commands directly into standard input.

    The format of commands for commands_filename can be queried from ipnadmin with the 'h' or '?' commands at the prompt. The commands are documented in ipnrc(5).

    "},{"location":"man/bpv7/ipnadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/ipnadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/ipnadmin/#files","title":"FILES","text":"

    See ipnrc(5) for details of the IPN scheme configuration commands.

    "},{"location":"man/bpv7/ipnadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/ipnadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the ipnrc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to ipnadmin. Otherwise ipnadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause ipnadmin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see ipnrc(5) for details.

    "},{"location":"man/bpv7/ipnadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/ipnadmin/#see-also","title":"SEE ALSO","text":"

    ipnrc(5)

    "},{"location":"man/bpv7/ipnadminep/","title":"NAME","text":"

    ipnadminep - administrative endpoint task for the IPN scheme

    "},{"location":"man/bpv7/ipnadminep/#synopsis","title":"SYNOPSIS","text":"

    ipnadminep

    "},{"location":"man/bpv7/ipnadminep/#description","title":"DESCRIPTION","text":"

    ipnadminep is a background \"daemon\" task that receives and processes administrative bundles (nominally, all bundle status reports) that are sent to the IPN-scheme administrative endpoint on the local ION node, if and only if such an endpoint was established by bpadmin. It is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. ipnadminep can also be spawned and terminated in response to START and STOP commands that pertain specifically to the IPN scheme.

    ipnadminep responds to bundle status reports by logging ASCII text messages describing the reported activity.

    "},{"location":"man/bpv7/ipnadminep/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/ipnadminep/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/ipnadminep/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/ipnadminep/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/ipnadminep/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/ipnadminep/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), ipnadmin(1), bprc(5)

    "},{"location":"man/bpv7/ipnd/","title":"NAME","text":"

    ipnd - ION IPND module

    "},{"location":"man/bpv7/ipnd/#description","title":"DESCRIPTION","text":"

    The ipnd daemon is the ION implementation of DTN IP Neighbor Discovery. This module allows the node to send and receive beacon messages using unicast, multicast or broadcast IP addresses. Beacons are used for the discovery of neighbors and may be used to advertise services that are present and available on nodes, such as routing algorithms or CLAs.

    The ION IPND module is configured using a *.rc configuration file. The name of the configuration file must be passed as the sole command-line argument to ipnd when the daemon is started. Commands are interpreted line by line, with exactly one command per line. The formats and effects of the ION ipnd management commands are described below.

    "},{"location":"man/bpv7/ipnd/#usage","title":"USAGE","text":"

    ipnd config_file_name

    "},{"location":"man/bpv7/ipnd/#commands","title":"COMMANDS","text":""},{"location":"man/bpv7/ipnd/#examples","title":"EXAMPLES","text":"

    m svcdef 128 FooRouter Seed:SeedVal BaseWeight:WeightVal RootHash:bytes

    Defines a new service called FooRouter comprising three elements. SeedVal and WeightVal are user-defined services that must already be defined.

    m svcdef 129 SeedVal Value:fixed16

    m svcdef 130 WeightVal Value:fixed16

    m svcdef 128 FooRouter Seed:SeedVal BaseWeight:WeightVal RootHash:bytes

    m svcdef 150 FixedValuesList F16:fixed16 F32:fixed32 F64:fixed64

    m svcdef 131 VariableValuesList U64:uint64 S64:sint64

    m svcdef 132 BooleanValues B:boolean

    m svcdef 133 FloatValuesList F:float D:double

    m svcdef 135 IntegersList FixedValues:FixedValuesList VariableValues:VariableValuesList

    m svcdef 136 NumbersList Integers:IntegersList Floats:FloatValuesList

    m svcdef 140 HugeService CLAv4:CLA-TCP-v4 Booleans:BooleanValues Numbers:NumbersList FR:FooRouter

    a svcadv HugeService CLAv4:IP:10.1.0.10 CLAv4:Port:4444 Booleans:true FR:Seed:0x5432 FR:BaseWeight:13 FR:RootHash:BEEF Numbers:Integers:FixedValues:F16:0x16 Numbers:Integers:FixedValues:F32:0x32 Numbers:Integers:FixedValues:F64:0x1234567890ABCDEF Numbers:Floats:F:0.32 Numbers:Floats:D:-1e-6 Numbers:Integers:VariableValues:U64:18446744073704783380 Numbers:Integers:VariableValues:S64:-4611686018422619668

    This shows how to define multiple nested services and how to advertise them.

    "},{"location":"man/bpv7/ipnd/#see-also","title":"SEE ALSO","text":"

    ion(3)

    "},{"location":"man/bpv7/ipnfw/","title":"NAME","text":"

    ipnfw - bundle route computation task for the IPN scheme

    "},{"location":"man/bpv7/ipnfw/#synopsis","title":"SYNOPSIS","text":"

    ipnfw

    "},{"location":"man/bpv7/ipnfw/#description","title":"DESCRIPTION","text":"

    ipnfw is a background \"daemon\" task that pops bundles from the queue of bundles destined for IPN-scheme endpoints, computes proximate destinations for those bundles, and appends those bundles to the appropriate queues of bundles pending transmission to those computed proximate destinations.

    For each possible proximate destination (that is, neighboring node) there is a separate queue for each possible level of bundle priority: 0, 1, 2. Each outbound bundle is appended to the queue matching the bundle's designated priority.

    Proximate destination computation is affected by static and default routes as configured by ipnadmin(1) and by contact graphs as managed by ionadmin(1) and rfxclock(1).

    ipnfw is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of Bundle Protocol on the local ION node, and it is terminated by bpadmin in response to an 'x' (STOP) command. ipnfw can also be spawned and terminated in response to START and STOP commands that pertain specifically to the IPN scheme.

    "},{"location":"man/bpv7/ipnfw/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/ipnfw/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/ipnfw/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/ipnfw/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/ipnfw/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/ipnfw/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), ipnadmin(1), bprc(5), ipnrc(5)

    "},{"location":"man/bpv7/ipnrc/","title":"NAME","text":"

    ipnrc - IPN scheme configuration commands file

    "},{"location":"man/bpv7/ipnrc/#description","title":"DESCRIPTION","text":"

    IPN scheme configuration commands are passed to ipnadmin either in a file of text lines or interactively at ipnadmin's command prompt (:). Commands are interpreted line-by-line, with exactly one command per line.

    IPN scheme configuration commands (a) establish egress plans for direct transmission to neighboring nodes that are members of endpoints identified in the \"ipn\" URI scheme and (b) establish static default routing rules for forwarding bundles to specified destination nodes.

    The egress plan established for a given node associates a duct expression with that node. Each duct expression is a string of the form \"protocol_name/outduct_name\" signifying that the bundle is to be queued for transmission via the indicated convergence layer protocol outduct.

    Note that egress plans must be established for all neighboring nodes, regardless of whether or not contact graph routing is used for computing dynamic routes to distant nodes. This is by definition: if there isn't an egress plan to a node, it can't be considered a neighbor.

    Static default routes are declared as exits in the ipn-scheme routing database. An exit is a range of node numbers identifying a set of nodes for which defined default routing behavior is established. Whenever a bundle is to be forwarded to a node whose number is within an exit's node number range, but no dynamic route to that node can be computed from the contact schedules provided to the local node and that node is not a neighbor to which the bundle can be directly transmitted, BP will forward the bundle to the gateway node associated with that exit. The gateway node for any exit is identified by an endpoint ID, which might or might not be an ipn-scheme EID; regardless, directing a bundle to the gateway for an exit causes the bundle to be re-forwarded to that intermediate destination endpoint. Multiple exits may encompass the same node number, in which case the gateway associated with the most restrictive exit (the one with the smallest range) is always selected.
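    The smallest-range selection rule for overlapping exits can be sketched as follows (a minimal illustration, not ION source; the tuple representation and function name are invented for the sketch):

    ```python
    # Illustrative sketch (not ION source): choosing among overlapping
    # ipn-scheme exits. Each exit covers a node-number range and names
    # a gateway EID; the most restrictive exit (the one with the
    # smallest range) containing the destination node is selected.

    def select_exit(exits, node_nbr):
        # exits: list of (first_node, last_node, gateway_eid) tuples
        covering = [e for e in exits if e[0] <= node_nbr <= e[1]]
        if not covering:
            return None
        return min(covering, key=lambda e: e[1] - e[0])[2]
    ```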

    Note that \"exits\" were termed \"groups\" in earlier versions of ION. The term \"exit\" has been adopted instead, to minimize any possible confusion with multicast groups. To protect backward compatibility, the keyword \"group\" continues to be accepted by ipnadmin as an alias for the new keyword \"exit\", but the older terminology is deprecated.

    Routing and class-of-service overrides may also be managed:

    A routing override declares a neighboring node to which all bundles must be forwarded that meet specified criteria. This override is strictly local, affecting only forwarding from the local node, and it is applied before any route computed by CGR or IRR is considered.

    A class-of-service override declares the class of service (priority, ordinal, and optionally quality-of-service flags) that will condition, in terms of order and outduct selection, the forwarding of all bundles that meet specified criteria. Again this override is strictly local, affecting only forwarding from the local node.

    The formats and effects of the IPN scheme configuration commands are described below.

    "},{"location":"man/bpv7/ipnrc/#general-commands","title":"GENERAL COMMANDS","text":""},{"location":"man/bpv7/ipnrc/#plan-commands","title":"PLAN COMMANDS","text":""},{"location":"man/bpv7/ipnrc/#exit-commands","title":"EXIT COMMANDS","text":""},{"location":"man/bpv7/ipnrc/#override-commands","title":"OVERRIDE COMMANDS","text":""},{"location":"man/bpv7/ipnrc/#examples","title":"EXAMPLES","text":""},{"location":"man/bpv7/ipnrc/#see-also","title":"SEE ALSO","text":"

    ipnadmin(1)

    "},{"location":"man/bpv7/lgagent/","title":"NAME","text":"

    lgagent - ION Load/Go remote agent program

    "},{"location":"man/bpv7/lgagent/#synopsis","title":"SYNOPSIS","text":"

    lgagent own_endpoint_ID

    "},{"location":"man/bpv7/lgagent/#description","title":"DESCRIPTION","text":"

    ION Load/Go is a system for management of an ION-based network, enabling the execution of ION administrative programs at remote nodes. The system comprises two programs, lgsend and lgagent.

    The lgagent task on a given node opens the indicated ION endpoint for bundle reception, receives the extracted payloads of Load/Go bundles sent to it by lgsend as run on one or more remote nodes, and processes those payloads, which are the text of Load/Go source files.

    Load/Go source file content is limited to newline-terminated lines of ASCII characters. More specifically, the text of any Load/Go source file is a sequence of line sets of two types: file capsules and directives. Any Load/Go source file may contain any number of file capsules and any number of directives, freely intermingled in any order, but the typical structure of a Load/Go source file is simply a single file capsule followed by a single directive.

    When lgagent identifies a file capsule, it copies all of the capsule's text lines to a new file that it creates in the current working directory. When lgagent identifies a directive, it executes the directive by passing the text of the directive to the pseudoshell() function (see platform(3)). lgagent processes the line sets of a Load/Go source file in the order in which they appear in the file, so the text of a directive may reference a file that was created as the result of processing a prior file capsule in the same source file.

    "},{"location":"man/bpv7/lgagent/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/lgagent/#files","title":"FILES","text":"

    lgfile contains the Load/Go file capsules and directives that are to be processed.

    "},{"location":"man/bpv7/lgagent/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/lgagent/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    A variety of other diagnostics noting source file parsing problems may also be reported. These errors are non-fatal but they terminate the processing of the source file content from the most recently received bundle.

    "},{"location":"man/bpv7/lgagent/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/lgagent/#see-also","title":"SEE ALSO","text":"

    lgsend(1), lgfile(5)

    "},{"location":"man/bpv7/lgfile/","title":"NAME","text":"

    lgfile - ION Load/Go source file

    "},{"location":"man/bpv7/lgfile/#description","title":"DESCRIPTION","text":"

    The ION Load/Go system enables the execution of ION administrative programs at remote nodes:

    The lgsend program reads a Load/Go source file from a local file system, encapsulates the text of that source file in a bundle, and sends the bundle to a designated DTN endpoint on the remote node.

    An lgagent task running on the remote node, which has opened that DTN endpoint for bundle reception, receives the extracted payload of the bundle -- the text of the Load/Go source file -- and processes it.

    Load/Go source file content is limited to newline-terminated lines of ASCII characters. More specifically, the text of any Load/Go source file is a sequence of line sets of two types: file capsules and directives. Any Load/Go source file may contain any number of file capsules and any number of directives, freely intermingled in any order, but the typical structure of a Load/Go source file is simply a single file capsule followed by a single directive.

    Each file capsule is structured as a single start-of-capsule line, followed by zero or more capsule text lines, followed by a single end-of-capsule line. Each start-of-capsule line is of this form:

    [file_name

    Each capsule text line can be any line of ASCII text that does not begin with an opening ([) or closing (]) bracket character.

    A text line that begins with a closing bracket character (]) is interpreted as an end-of-capsule line.

    A directive is any line of text that is not one of the lines of a file capsule and that is of this form:

    !directive_text

    When lgagent identifies a file capsule, it copies all of the capsule's text lines to a new file named file_name that it creates in the current working directory. When lgagent identifies a directive, it executes the directive by passing directive_text to the pseudoshell() function (see platform(3)). lgagent processes the line sets of a Load/Go source file in the order in which they appear in the file, so the directive_text of a directive may reference a file that was created as the result of processing a prior file capsule line set in the same source file.
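    Because each line type is identified by its first character, a receiving agent can classify lines with a simple check. A minimal sketch in C (not ION code; the enum and function names are illustrative, and capsule state is ignored for brevity -- inside a capsule, a line beginning with ! is ordinary capsule text):

```c
typedef enum
{
	LG_CAPSULE_START,	/* [file_name		*/
	LG_CAPSULE_END,		/* ]			*/
	LG_DIRECTIVE,		/* !directive_text	*/
	LG_TEXT			/* any other text line	*/
} LgLineType;

/* Classify one newline-terminated line of a Load/Go source file,
 * ignoring capsule state for brevity. */
LgLineType lg_classify(const char *line)
{
	if (line[0] == '[') return LG_CAPSULE_START;
	if (line[0] == ']') return LG_CAPSULE_END;
	if (line[0] == '!') return LG_DIRECTIVE;
	return LG_TEXT;
}
```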

    Note that lgfile directives are passed to pseudoshell(), which on a VxWorks platform will always spawn a new task; the first argument in directive_text must be a symbol that VxWorks can resolve to a function, not a shell command. Also note that the arguments in directive_text will be actual task arguments, not shell command-line arguments, so they should never be enclosed in double-quote characters (\"). However, any argument that contains embedded whitespace must be enclosed in single-quote characters (') so that pseudoshell() can parse it correctly.

    "},{"location":"man/bpv7/lgfile/#examples","title":"EXAMPLES","text":"

    Presenting the following lines of source file text to lgsend:

    [cmd33.bprc

    x protocol ltp

    ]

    !bpadmin cmd33.bprc

    should cause the receiving node to halt the operation of the LTP convergence-layer protocol.

    "},{"location":"man/bpv7/lgfile/#see-also","title":"SEE ALSO","text":"

    lgsend(1), lgagent(1), platform(3)

    "},{"location":"man/bpv7/lgsend/","title":"NAME","text":"

    lgsend - ION Load/Go command program

    "},{"location":"man/bpv7/lgsend/#synopsis","title":"SYNOPSIS","text":"

    lgsend command_file_name own_endpoint_ID destination_endpoint_ID

    "},{"location":"man/bpv7/lgsend/#description","title":"DESCRIPTION","text":"

    ION Load/Go is a system for management of an ION-based network, enabling the execution of ION administrative programs at remote nodes. The system comprises two programs, lgsend and lgagent.

    The lgsend program reads a Load/Go source file from a local file system, encapsulates the text of that source file in a bundle, and sends the bundle to an lgagent task that is waiting for data at a designated DTN endpoint on the remote node.

    To do so, it first reads all lines of the Load/Go source file identified by command_file_name into a temporary buffer in ION's SDR data store, concatenating the lines of the file and retaining all newline characters. Then it invokes the bp_send() function to create and send a bundle whose payload is this temporary buffer, whose destination is destination_endpoint_ID, and whose source endpoint ID is own_endpoint_ID. Then it terminates.

    "},{"location":"man/bpv7/lgsend/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/lgsend/#files","title":"FILES","text":"

    lgfile contains the Load/Go file capsules and directive that are to be sent to the remote node.

    "},{"location":"man/bpv7/lgsend/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/lgsend/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/lgsend/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/lgsend/#see-also","title":"SEE ALSO","text":"

    lgagent(1), lgfile(5)

    "},{"location":"man/bpv7/ltpcli/","title":"NAME","text":"

    ltpcli - LTP-based BP convergence layer input task

    "},{"location":"man/bpv7/ltpcli/#synopsis","title":"SYNOPSIS","text":"

    ltpcli local_node_nbr

    "},{"location":"man/bpv7/ltpcli/#description","title":"DESCRIPTION","text":"

    ltpcli is a background \"daemon\" task that receives LTP data transmission blocks, extracts bundles from the received blocks, and passes them to the bundle protocol agent on the local ION node.

    ltpcli is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"ltp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. ltpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the LTP convergence layer protocol.

    "},{"location":"man/bpv7/ltpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/ltpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/ltpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/ltpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/ltpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/ltpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), ltpadmin(1), ltprc(5), ltpclo(1)

    "},{"location":"man/bpv7/ltpclo/","title":"NAME","text":"

    ltpclo - LTP-based BP convergence layer adapter output task

    "},{"location":"man/bpv7/ltpclo/#synopsis","title":"SYNOPSIS","text":"

    ltpclo remote_node_nbr

    "},{"location":"man/bpv7/ltpclo/#description","title":"DESCRIPTION","text":"

    ltpclo is a background \"daemon\" task that extracts bundles from the queues of segments ready for transmission via LTP to the remote bundle protocol agent identified by remote_node_nbr and passes them to the local LTP engine for aggregation, segmentation, and transmission to the remote node.

    ltpclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. ltpclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the LTP convergence layer protocol.

    "},{"location":"man/bpv7/ltpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/ltpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/ltpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/ltpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/ltpclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/ltpclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), ltpadmin(1), ltprc(5), ltpcli(1)

    "},{"location":"man/bpv7/stcpcli/","title":"NAME","text":"

    stcpcli - DTN simple TCP convergence layer input task

    "},{"location":"man/bpv7/stcpcli/#synopsis","title":"SYNOPSIS","text":"

    stcpcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv7/stcpcli/#description","title":"DESCRIPTION","text":"

    stcpcli is a background \"daemon\" task comprising 1 + N threads: one that handles TCP connections from remote stcpclo tasks, spawning sockets for data reception from those tasks, plus one input thread for each spawned socket to handle data reception over that socket.

    The connection thread simply accepts connections on a TCP socket bound to local_hostname and local_port_nbr and spawns reception threads. The default value for local_port_nbr, if omitted, is 4556.

    Each reception thread receives bundles over the associated connected socket. Each bundle received on the connection is preceded by a 32-bit unsigned integer in network byte order indicating the length of the bundle. The received bundles are passed to the bundle protocol agent on the local ION node.
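    Decoding that length prefix is a straightforward network-to-host byte order conversion. A sketch in C (the function name is illustrative, not part of ION):

```c
#include <stdint.h>

/* Decode the 32-bit, network-byte-order (big-endian) bundle length
 * that precedes each bundle on an stcp connection. */
uint32_t stcp_bundle_length(const unsigned char prefix[4])
{
	return ((uint32_t) prefix[0] << 24)
	     | ((uint32_t) prefix[1] << 16)
	     | ((uint32_t) prefix[2] << 8)
	     |  (uint32_t) prefix[3];
}
```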

    stcpcli is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"stcp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. stcpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the STCP convergence layer protocol.

    "},{"location":"man/bpv7/stcpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/stcpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/stcpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/stcpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/stcpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/stcpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), stcpclo(1)

    "},{"location":"man/bpv7/stcpclo/","title":"NAME","text":"

    stcpclo - DTN simple TCP convergence layer adapter output task

    "},{"location":"man/bpv7/stcpclo/#synopsis","title":"SYNOPSIS","text":"

    stcpclo remote_hostname[:remote_port_nbr]

    "},{"location":"man/bpv7/stcpclo/#description","title":"DESCRIPTION","text":"

    stcpclo is a background \"daemon\" task that connects to a remote node's TCP socket at remote_hostname and remote_port_nbr. It then begins extracting bundles from the queues of bundles ready for transmission via TCP to this remote bundle protocol agent and transmitting those bundles over the connected socket to that node. Each transmitted bundle is preceded by a 32-bit unsigned integer in network byte order indicating the length of the bundle.

    If not specified, remote_port_nbr defaults to 4556.

    stcpclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. stcpclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the STCP convergence layer protocol.

    "},{"location":"man/bpv7/stcpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/stcpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/stcpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/stcpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/stcpclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/stcpclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), stcpcli(1)

    "},{"location":"man/bpv7/tcpcli/","title":"NAME","text":"

    tcpcli - DTN TCPCL-compliant convergence layer input task

    "},{"location":"man/bpv7/tcpcli/#synopsis","title":"SYNOPSIS","text":"

    tcpcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv7/tcpcli/#description","title":"DESCRIPTION","text":"

    tcpcli is a background \"daemon\" task comprising 3 + 2*N threads: an executive thread; a clock thread that periodically attempts to connect to remote TCPCL entities as identified by the tcp outducts enumerated in the bprc(5) file (each of which must specify the hostname[:port_nbr] to connect to); a thread that handles TCP connections from remote TCPCL entities, spawning sockets for data reception from those tasks; plus one input thread and one output thread for each connection, to handle data reception and transmission over that socket.

    The connection thread simply accepts connections on a TCP socket bound to local_hostname and local_port_nbr and spawns reception threads. The default value for local_port_nbr, if omitted, is 4556.

    Each time a connection is established, the entities will first exchange contact headers, because connection parameters need to be negotiated. tcpcli records the acknowledgement flags, reactive fragmentation flag, and negative acknowledgements flag in the contact header it receives from its peer TCPCL entity.

    Each reception thread receives bundles over the associated connected socket. Each bundle received on the connection is preceded by message type, fragmentation flags, and size represented as an SDNV. The received bundles are passed to the bundle protocol agent on the local ION node.
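    An SDNV packs 7 bits of the value into each byte, most-significant group first, with the high bit set on every byte except the last. A minimal decoder sketch in C (illustrative only; ION supplies its own SDNV routines):

```c
#include <stdint.h>
#include <stddef.h>

/* Decode an SDNV: 7 value bits per byte, most-significant group
 * first; the high bit is 1 on every byte except the final one.
 * Returns the number of bytes consumed, or 0 on truncation or
 * overflow of the 32-bit result. */
size_t sdnv_decode(const unsigned char *buf, size_t len, uint32_t *value)
{
	uint32_t v = 0;
	size_t i;

	for (i = 0; i < len && i < 5; i++)
	{
		v = (v << 7) | (buf[i] & 0x7f);
		if ((buf[i] & 0x80) == 0)
		{
			*value = v;
			return i + 1;
		}
	}

	return 0;
}
```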

    Similarly, each transmission thread obtains outbound bundles from the local ION node, encapsulates them as noted above, and transmits them over the associated connected socket.

    tcpcli is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"tcp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. tcpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the TCP convergence layer protocol.

    "},{"location":"man/bpv7/tcpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/tcpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/tcpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/tcpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/tcpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/tcpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5)

    "},{"location":"man/bpv7/tcpclo/","title":"NAME","text":"

    tcpclo - TCPCL-compliant convergence layer adapter output task [DEPRECATED]

    "},{"location":"man/bpv7/tcpclo/#synopsis","title":"SYNOPSIS","text":"

    tcpclo

    "},{"location":"man/bpv7/tcpclo/#description","title":"DESCRIPTION","text":"

    tcpclo is deprecated. The outducts for the \"tcp\" convergence-layer adapter are now drained by threads managed within tcpcli.

    "},{"location":"man/bpv7/tcpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/tcpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/tcpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/tcpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    No diagnostics apply.

    "},{"location":"man/bpv7/tcpclo/#see-also","title":"SEE ALSO","text":"

    tcpcli(1)

    "},{"location":"man/bpv7/udpcli/","title":"NAME","text":"

    udpcli - UDP-based BP convergence layer input task

    "},{"location":"man/bpv7/udpcli/#synopsis","title":"SYNOPSIS","text":"

    udpcli local_hostname[:local_port_nbr]

    "},{"location":"man/bpv7/udpcli/#description","title":"DESCRIPTION","text":"

    udpcli is a background \"daemon\" task that receives UDP datagrams via a UDP socket bound to local_hostname and local_port_nbr, extracts bundles from those datagrams, and passes them to the bundle protocol agent on the local ION node.

    If not specified, local_port_nbr defaults to 4556.
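    The local_hostname[:local_port_nbr] form in the SYNOPSIS can be split mechanically, falling back to 4556 when no port is given. A sketch in C (illustrative; ION's actual parsing may differ in detail):

```c
#include <stdlib.h>
#include <string.h>

#define BP_DEFAULT_PORT 4556	/* default convergence-layer port */

/* Split a hostname[:port] specification, applying the default port
 * when none is given.  Returns 0 on success, -1 if host is too long. */
int parse_endpoint_spec(const char *spec, char *host, size_t hostlen,
		unsigned short *port)
{
	const char *colon = strrchr(spec, ':');
	size_t n = colon ? (size_t) (colon - spec) : strlen(spec);

	if (n >= hostlen) return -1;
	memcpy(host, spec, n);
	host[n] = '\0';
	*port = colon ? (unsigned short) atoi(colon + 1) : BP_DEFAULT_PORT;
	return 0;
}
```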

    The convergence layer input task is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol; the text of the command that is used to spawn the task must be provided at the time the \"udp\" convergence layer protocol is added to the BP database. The convergence layer input task is terminated by bpadmin in response to an 'x' (STOP) command. udpcli can also be spawned and terminated in response to START and STOP commands that pertain specifically to the UDP convergence layer protocol.

    "},{"location":"man/bpv7/udpcli/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/udpcli/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/udpcli/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/udpcli/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/udpcli/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/udpcli/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), udpclo(1)

    "},{"location":"man/bpv7/udpclo/","title":"NAME","text":"

    udpclo - UDP-based BP convergence layer output task

    "},{"location":"man/bpv7/udpclo/#synopsis","title":"SYNOPSIS","text":"

    udpclo remote_hostname[:remote_port_nbr]

    "},{"location":"man/bpv7/udpclo/#description","title":"DESCRIPTION","text":"

    udpclo is a background \"daemon\" task that extracts bundles from the queues of bundles ready for transmission via UDP to a remote node's UDP socket at remote_hostname and remote_port_nbr, encapsulates those bundles in UDP datagrams, and sends those datagrams to that remote UDP socket.

    udpclo is spawned automatically by bpadmin in response to the 's' (START) command that starts operation of the Bundle Protocol, and it is terminated by bpadmin in response to an 'x' (STOP) command. udpclo can also be spawned and terminated in response to START and STOP commands that pertain specifically to the UDP convergence layer protocol.

    "},{"location":"man/bpv7/udpclo/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bpv7/udpclo/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bpv7/udpclo/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bpv7/udpclo/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bpv7/udpclo/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bpv7/udpclo/#see-also","title":"SEE ALSO","text":"

    bpadmin(1), bprc(5), udpcli(1)

    "},{"location":"man/bss/","title":"Index of Man Pages","text":""},{"location":"man/bss/bss/","title":"NAME","text":"

    bss - Bundle Streaming Service library

    "},{"location":"man/bss/bss/#synopsis","title":"SYNOPSIS","text":"
    #include \"bss.h\"\n\ntypedef int (*RTBHandler)(time_t time, unsigned long count, char *buffer, int bufLength);\n\n[see description for available functions]\n
    "},{"location":"man/bss/bss/#description","title":"DESCRIPTION","text":"

    The BSS library supports the streaming of data over delay-tolerant networking (DTN) bundles. The intent of the library is to enable an application to pass streaming data received in transmission time order (i.e., without time regressions) to an application-specific \"display\" function -- notionally for immediate real-time display -- while storing all received data (including out-of-order data) in a private database for playback under user control. The reception and real-time display of in-order data is performed by a background thread, leaving the application's main (foreground) thread free to respond to user commands controlling playback or other application-specific functions.

    The application-specific \"display\" function invoked by the background thread must conform to the RTBHandler type definition. It must return 0 on success, -1 on any error that should terminate the background thread. Only on return from this function will the background thread proceed to acquire the next BSS payload.

    All data acquired by the BSS background thread is written to a BSS database comprising three files: table, list, and data. The name of the database is the root name that is common to the three files, e.g., db3.tbl, db3.lst, db3.dat would be the three files making up the db3 BSS database. All three files of the selected BSS database must reside in the same directory of the file system.
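    The three file names are derived mechanically from the database root name. A sketch in C (illustrative only; not a BSS library function):

```c
#include <stdio.h>
#include <string.h>

/* Build the table, list, and data file names for a BSS database
 * from its root name, e.g. db3 -> db3.tbl, db3.lst, db3.dat. */
void bss_file_names(const char *root, char *tbl, char *lst, char *dat,
		size_t len)
{
	snprintf(tbl, len, "%s.tbl", root);
	snprintf(lst, len, "%s.lst", root);
	snprintf(dat, len, "%s.dat", root);
}
```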

    Several replay navigation functions in the BSS library require that the application provide a navigation state structure of type bssNav as defined in the bss.h header file. The application is not responsible for populating this structure; it's strictly for the private use of the BSS library.

    "},{"location":"man/bss/bss/#see-also","title":"SEE ALSO","text":"

    bp(3)

    "},{"location":"man/bss/bssStreamingApp/","title":"NAME","text":"

    bssStreamingApp - Bundle Streaming Service transmission test program

    "},{"location":"man/bss/bssStreamingApp/#synopsis","title":"SYNOPSIS","text":"

    bssStreamingApp own_endpoint_ID destination_endpoint_ID [class_of_service]

    "},{"location":"man/bss/bssStreamingApp/#description","title":"DESCRIPTION","text":"

    bssStreamingApp uses BSS to send streaming data over BP from own_endpoint_ID to bssrecv listening at destination_endpoint_ID. class_of_service is as specified for bptrace(1); if omitted, bundles are sent at BP's standard priority (1).

    The bundles issued by bssStreamingApp all have 65000-byte payloads, where the ASCII representation of a positive integer (increasing monotonically from 0, by 1, throughout the operation of the program) appears at the start of each payload. All bundles are sent with custody transfer requested, with time-to-live set to 1 day. The application meters output by sleeping for 12800 microseconds after issuing each bundle.
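    This metering yields a nominal payload rate of 65000 bytes per 12800 microseconds -- about 5.08 MB/s, or roughly 40.6 Mbps -- ignoring bundle overhead and actual transmission time. A quick check in C:

```c
/* Nominal payload rate of bssStreamingApp: one 65000-byte payload
 * per 12800-microsecond metering interval (bundle overhead and
 * transmission time ignored). */
long long bss_payload_rate(void)
{
	const long long payload_bytes = 65000;
	const long long interval_usec = 12800;

	return payload_bytes * 1000000LL / interval_usec;	/* bytes/second */
}
```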

    Use CTRL-C to terminate the program.

    "},{"location":"man/bss/bssStreamingApp/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bss/bssStreamingApp/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bss/bssStreamingApp/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bss/bssStreamingApp/#see-also","title":"SEE ALSO","text":"

    bssrecv(1), bss(3)

    "},{"location":"man/bss/bssrecv/","title":"NAME","text":"

    bssrecv - Bundle Streaming Service reception test program

    "},{"location":"man/bss/bssrecv/#synopsis","title":"SYNOPSIS","text":"

    bssrecv

    "},{"location":"man/bss/bssrecv/#description","title":"DESCRIPTION","text":"

    bssrecv uses BSS to acquire streaming data from bssStreamingApp.

    bssrecv is a menu-driven interactive test program, run from the operating system shell prompt. The program enables the user to begin and end a session of BSS data acquisition from bssStreamingApp, displaying the data as it arrives in real time; to replay data acquired during the current session; and to replay data acquired during a prior session.

    The user must provide values for three parameters in order to initiate the acquisition or replay of data from bssStreamingApp:

    bssrecv offers the following menu options:

    "},{"location":"man/bss/bssrecv/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bss/bssrecv/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bss/bssrecv/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bss/bssrecv/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bss/bssrecv/#see-also","title":"SEE ALSO","text":"

    bssStreamingApp(1), bss(3)

    "},{"location":"man/bssp/","title":"Index of Man Pages","text":""},{"location":"man/bssp/bssp/","title":"NAME","text":"

    bssp - Bundle Streaming Service Protocol (BSSP) communications library

    "},{"location":"man/bssp/bssp/#synopsis","title":"SYNOPSIS","text":"
    #include \"bssp.h\"\n\ntypedef enum\n{\n    BsspNoNotice = 0,\n    BsspXmitSuccess,\n    BsspXmitFailure,\n    BsspRecvSuccess\n} BsspNoticeType;\n\n[see description for available functions]\n
    "},{"location":"man/bssp/bssp/#description","title":"DESCRIPTION","text":"

    The bssp library provides functions enabling application software to use BSSP to send and receive streaming data in bundles.

    BSSP is designed to forward streaming data in original transmission order wherever possible but to retransmit data as necessary to ensure that the entire stream is available for playback eventually. To this end, BSSP uses not one but two underlying \"link service\" channels: (a) an unreliable \"best efforts\" channel, for data items that are successfully received upon initial transmission over every extent of the end-to-end path, and (b) a \"reliable\" channel, for data items that were lost at some point, had to be retransmitted, and therefore are now out of order. The BSS library at the destination node supports immediate \"real-time\" display of all data received on the \"best efforts\" channel in transmission order, together with database retention of all data eventually received on the \"reliable\" channel.

    The BSSP notion of engine ID corresponds closely to the Internet notion of a host, and in ION engine IDs are normally indistinguishable from node numbers, including the node numbers in Bundle Protocol endpoint IDs conforming to the \"ipn\" scheme.

    The BSSP notion of client ID corresponds closely to the Internet notion of \"protocol number\" as used in the Internet Protocol. It enables data from multiple applications -- clients -- to be multiplexed over a single reliable link. However, for ION operations we normally use BSSP exclusively for the transmission of Bundle Protocol data, identified by client ID = 1.

    "},{"location":"man/bssp/bssp/#see-also","title":"SEE ALSO","text":"

    bsspadmin(1), bssprc(5), zco(3)

    "},{"location":"man/bssp/bsspadmin/","title":"NAME","text":"

    bsspadmin - Bundle Streaming Service Protocol (BSSP) administration interface

    "},{"location":"man/bssp/bsspadmin/#synopsis","title":"SYNOPSIS","text":"

    bsspadmin [ commands_filename | . ]

    "},{"location":"man/bssp/bsspadmin/#description","title":"DESCRIPTION","text":"

    bsspadmin configures, starts, manages, and stops BSSP operations for the local ION node.

    It operates in response to BSSP configuration commands found in the file commands_filename, if provided; if not, bsspadmin prints a simple prompt (:) so that the user may type commands directly into standard input. If commands_filename is a period (.), the effect is the same as if a command file containing the single command 'x' were passed to bsspadmin -- that is, the ION node's bsspclock task and link service adapter tasks are stopped.

    The format of commands for commands_filename can be queried from bsspadmin with the 'h' or '?' commands at the prompt. The commands are documented in bssprc(5).

    "},{"location":"man/bssp/bsspadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bssp/bsspadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/bssp/bsspadmin/#files","title":"FILES","text":"

    See bssprc(5) for details of the BSSP configuration commands.

    "},{"location":"man/bssp/bsspadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bssp/bsspadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the bssprc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to bsspadmin. Otherwise bsspadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause bsspadmin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see bssprc(5) for details.

    "},{"location":"man/bssp/bsspadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bssp/bsspadmin/#see-also","title":"SEE ALSO","text":"

    bssprc(5)

    "},{"location":"man/bssp/bsspclock/","title":"NAME","text":"

    bsspclock - BSSP daemon task for managing scheduled events

    "},{"location":"man/bssp/bsspclock/#synopsis","title":"SYNOPSIS","text":"

    bsspclock

    "},{"location":"man/bssp/bsspclock/#description","title":"DESCRIPTION","text":"

    bsspclock is a background \"daemon\" task that periodically performs scheduled BSSP activities. It is spawned automatically by bsspadmin in response to the 's' command that starts operation of the BSSP protocol, and it is terminated by bsspadmin in response to an 'x' (STOP) command.

    Once per second, bsspclock takes the following action:

    First it manages the current state of all links (\"spans\"). Specifically, it infers link state changes (\"link cues\") from data rate changes as noted in the RFX database by rfxclock:

    If the rate of transmission to a neighbor was zero but is now non-zero, then transmission to that neighbor resumes. The applicable \"buffer empty\" semaphore is given (enabling start of a new transmission session) and the best-efforts and/or reliable \"PDUs ready\" semaphores are given if the corresponding outbound PDU queues are non-empty (enabling transmission of PDUs by the link service output task).

    If the rate of transmission to a neighbor was non-zero but is now zero, then transmission to that neighbor is suspended -- i.e., the semaphores triggering transmission will no longer be given.

    If the imputed rate of transmission from a neighbor was non-zero but is now zero, then all best-efforts transmission acknowledgment timers affecting transmission to that neighbor are suspended. This has the effect of extending the interval of each affected timer by the length of time that the timers remain suspended.

    If the imputed rate of transmission from a neighbor was zero but is now non-zero, then all best-efforts transmission acknowledgment timers affecting transmission to that neighbor are resumed.

    Then bsspclock enqueues for reliable transmission all blocks for which the best-efforts transmission acknowledgment timeout interval has now expired but no acknowledgment has yet been received.

    "},{"location":"man/bssp/bsspclock/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bssp/bsspclock/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bssp/bsspclock/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bssp/bsspclock/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bssp/bsspclock/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bssp/bsspclock/#see-also","title":"SEE ALSO","text":"

    bsspadmin(1), rfxclock(1)

    "},{"location":"man/bssp/bssprc/","title":"NAME","text":"

    bssprc - Bundle Streaming Service Protocol management commands file

    "},{"location":"man/bssp/bssprc/#description","title":"DESCRIPTION","text":"

    BSSP management commands are passed to bsspadmin either in a file of text lines or interactively at bsspadmin's command prompt (:). Commands are interpreted line-by-line, with exactly one command per line. The formats and effects of the BSSP management commands are described below.

    "},{"location":"man/bssp/bssprc/#commands","title":"COMMANDS","text":""},{"location":"man/bssp/bssprc/#examples","title":"EXAMPLES","text":""},{"location":"man/bssp/bssprc/#see-also","title":"SEE ALSO","text":"

    bsspadmin(1), udpbsi(1), udpbso(1), tcpbsi(1), tcpbso(1)

    "},{"location":"man/bssp/tcpbsi/","title":"NAME","text":"

    tcpbsi - TCP-based reliable link service input task for BSSP

    "},{"location":"man/bssp/tcpbsi/#synopsis","title":"SYNOPSIS","text":"

    tcpbsi {local_hostname | @}[:local_port_nbr]

    "},{"location":"man/bssp/tcpbsi/#description","title":"DESCRIPTION","text":"

    tcpbsi is a background \"daemon\" task that receives TCP stream data via a TCP socket bound to local_hostname and local_port_nbr, extracts BSSP blocks from that stream, and passes them to the local BSSP engine. Host name \"@\" signifies that the host name returned by hostname(1) is to be used as the socket's host name. If not specified, port number defaults to 4556.

    The link service input task is spawned automatically by bsspadmin in response to the 's' command that starts operation of the BSSP protocol; the text of the command that is used to spawn the task must be provided as a parameter to the 's' command. The link service input task is terminated by bsspadmin in response to an 'x' (STOP) command.

    "},{"location":"man/bssp/tcpbsi/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bssp/tcpbsi/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bssp/tcpbsi/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bssp/tcpbsi/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bssp/tcpbsi/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bssp/tcpbsi/#see-also","title":"SEE ALSO","text":"

    bsspadmin(1), tcpbso(1), udpbsi(1)

    "},{"location":"man/bssp/tcpbso/","title":"NAME","text":"

    tcpbso - TCP-based reliable link service output task for BSSP

    "},{"location":"man/bssp/tcpbso/#synopsis","title":"SYNOPSIS","text":"

    tcpbso {remote_engine_hostname | @}[:remote_port_nbr] remote_engine_nbr

    "},{"location":"man/bssp/tcpbso/#description","title":"DESCRIPTION","text":"

    tcpbso is a background \"daemon\" task that extracts BSSP blocks from the queue of blocks bound for the indicated remote BSSP engine and uses a TCP socket to send them to the indicated TCP port on the indicated host. If not specified, port number defaults to 4556.

    Each \"span\" of BSSP data interchange between the local BSSP engine and a neighboring BSSP engine requires its own best-effort and reliable link service output tasks. All link service output tasks are spawned automatically by bsspadmin in response to the 's' command that starts operation of the BSSP protocol, and they are all terminated by bsspadmin in response to an 'x' (STOP) command.

    "},{"location":"man/bssp/tcpbso/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bssp/tcpbso/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bssp/tcpbso/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bssp/tcpbso/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bssp/tcpbso/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bssp/tcpbso/#see-abso","title":"SEE ABSO","text":"

    bsspadmin(1), tcpbsi(1), udpbso(1)

    "},{"location":"man/bssp/udpbsi/","title":"NAME","text":"

    udpbsi - UDP-based best-effort link service input task for BSSP

    "},{"location":"man/bssp/udpbsi/#synopsis","title":"SYNOPSIS","text":"

    udpbsi {local_hostname | @}[:local_port_nbr]

    "},{"location":"man/bssp/udpbsi/#description","title":"DESCRIPTION","text":"

    udpbsi is a background \"daemon\" task that receives UDP datagrams via a UDP socket bound to local_hostname and local_port_nbr, extracts BSSP PDUs from those datagrams, and passes them to the local BSSP engine. Host name \"@\" signifies that the host name returned by hostname(1) is to be used as the socket's host name. If not specified, port number defaults to 6001.

    The link service input task is spawned automatically by bsspadmin in response to the 's' command that starts operation of the BSSP protocol; the text of the command that is used to spawn the task must be provided as a parameter to the 's' command. The link service input task is terminated by bsspadmin in response to an 'x' (STOP) command.

    "},{"location":"man/bssp/udpbsi/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bssp/udpbsi/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bssp/udpbsi/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bssp/udpbsi/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bssp/udpbsi/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bssp/udpbsi/#see-also","title":"SEE ALSO","text":"

    bsspadmin(1), tcpbsi(1), udpbso(1)

    "},{"location":"man/bssp/udpbso/","title":"NAME","text":"

    udpbso - UDP-based best-effort link service output task for BSSP

    "},{"location":"man/bssp/udpbso/#synopsis","title":"SYNOPSIS","text":"

    udpbso {remote_engine_hostname | @}[:remote_port_nbr] txbps remote_engine_nbr

    "},{"location":"man/bssp/udpbso/#description","title":"DESCRIPTION","text":"

    udpbso is a background \"daemon\" task that extracts BSSP PDUs from the queue of PDUs bound for the indicated remote BSSP engine, encapsulates them in UDP datagrams, and sends those datagrams to the indicated UDP port on the indicated host. If not specified, port number defaults to 6001.

    The parameter txbps is optional and is retained only for backward compatibility with older configuration files. If it is included, its value is ignored. For context, txbps (transmission rate in bits per second) was formerly used for congestion control, but udpbso now derives its data rate from the contact graph.

    When invoking udpbso through bsspadmin using the start or add seat command, the remote_engine_nbr and txbps should be omitted; the BSSP admin daemon will automatically provide the remote_engine_nbr.

    Each \"span\" of BSSP data interchange between the local BSSP engine and a neighboring BSSP engine requires its own best-effort and reliable link service output tasks. All link service output tasks are spawned automatically by bsspadmin in response to the 's' command that starts operation of the BSSP protocol, and they are all terminated by bsspadmin in response to an 'x' (STOP) command.

    "},{"location":"man/bssp/udpbso/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/bssp/udpbso/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/bssp/udpbso/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/bssp/udpbso/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/bssp/udpbso/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/bssp/udpbso/#see-abso","title":"SEE ABSO","text":"

    bsspadmin(1), tcpbso(1), udpbsi(1)

    "},{"location":"man/cfdp/","title":"Index of Man Pages","text":""},{"location":"man/cfdp/bpcp/","title":"NAME","text":"

    bpcp - A remote copy utility for delay tolerant networks utilizing NASA JPL's Interplanetary Overlay Network (ION)

    "},{"location":"man/cfdp/bpcp/#synopsis","title":"SYNOPSIS","text":"

    bpcp [-dqr | -v] [-L bundle_lifetime] [-C custody_on/off] [-S class_of_service] [host1:]file1 ... [host2:]file2

    "},{"location":"man/cfdp/bpcp/#description","title":"DESCRIPTION","text":"

    bpcp copies files between hosts utilizing NASA JPL's Interplanetary Overlay Network (ION) to provide a delay tolerant network. File copies from local to remote, remote to local, or remote to remote are permitted. bpcp depends on ION to do any authentication or encryption of file transfers. All convergence layers over which bpcp runs MUST be reliable.

    The options are as follows:

    bpcp utilizes CFDP to perform the actual file transfers. This has several important implications. First, ION's CFDP implementation requires that reliable convergence layers be used to transfer the data. Second, file permissions are not transferred. Files will be made executable on copy. Third, symbolic links are ignored for local to remote transfers and their target is copied for remote transfers. Fourth, all hosts must be specified using ION's IPN naming scheme.

    In order to perform remote to local transfers or remote to remote transfers, bpcpd must be running on the remote hosts. However, bpcp should NOT be run simultaneously with bpcpd or cfdptest.

    "},{"location":"man/cfdp/bpcp/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/cfdp/bpcp/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/cfdp/bpcp/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/cfdp/bpcp/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/cfdp/bpcp/#see-also","title":"SEE ALSO","text":"

    bpcpd(1), ion(3), cfdptest(1)

    "},{"location":"man/cfdp/bpcpd/","title":"NAME","text":"

    bpcpd - ION Delay Tolerant Networking remote file copy daemon

    "},{"location":"man/cfdp/bpcpd/#synopsis","title":"SYNOPSIS","text":"

    bpcpd [-d | -v]

    "},{"location":"man/cfdp/bpcpd/#description","title":"DESCRIPTION","text":"

    bpcpd is the daemon for bpcp. Together these programs copy files between hosts utilizing NASA JPL's Interplanetary Overlay Network (ION) to provide a delay tolerant network.

    The options are as follows:

    **-d** Debug output. Repeat for increased verbosity.

    **-v** Display version information.

    bpcpd must be running in order to copy files from this host to another host (i.e. remote to local). Copies in the other direction (local to remote) do not require bpcpd. Further, bpcpd should NOT be run simultaneously with bpcp or cfdptest.

    "},{"location":"man/cfdp/bpcpd/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/cfdp/bpcpd/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/cfdp/bpcpd/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/cfdp/bpcpd/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/cfdp/bpcpd/#see-also","title":"SEE ALSO","text":"

    bpcp(1), ion(3), cfdptest(1)

    "},{"location":"man/cfdp/bputa/","title":"NAME","text":"

    bputa - BP-based CFDP UT-layer adapter

    "},{"location":"man/cfdp/bputa/#synopsis","title":"SYNOPSIS","text":"

    bputa

    "},{"location":"man/cfdp/bputa/#description","title":"DESCRIPTION","text":"

    bputa is a background \"daemon\" task that sends and receives CFDP PDUs encapsulated in DTN bundles.

    The task is spawned automatically by cfdpadmin in response to the 's' command that starts operation of the CFDP protocol; the text of the command that is used to spawn the task must be provided as a parameter to the 's' command. The UT-layer daemon is terminated by cfdpadmin in response to an 'x' (STOP) command.

    "},{"location":"man/cfdp/bputa/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/cfdp/bputa/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/cfdp/bputa/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/cfdp/bputa/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/cfdp/bputa/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/cfdp/bputa/#see-also","title":"SEE ALSO","text":"

    cfdpadmin(1), bpadmin(1)

    "},{"location":"man/cfdp/cfdp/","title":"NAME","text":"

    cfdp - CCSDS File Delivery Protocol (CFDP) communications library

    "},{"location":"man/cfdp/cfdp/#synopsis","title":"SYNOPSIS","text":"
    #include \"cfdp.h\"\n\ntypedef enum\n{\n    CksumTypeUnknown = -1,\n    ModularChecksum = 0,\n    CRC32CChecksum = 2,\n    NullChecksum = 15\n} CfdpCksumType;\n\ntypedef int (*CfdpReaderFn)(int fd, unsigned int *checksum, CfdpCksumType ckType);\n\ntypedef int (*CfdpMetadataFn)(uvast fileOffset, unsigned int recordOffset, unsigned int length, int sourceFileFD, char *buffer);\n\ntypedef enum\n{\n    CfdpCreateFile = 0,\n    CfdpDeleteFile,\n    CfdpRenameFile,\n    CfdpAppendFile,\n    CfdpReplaceFile,\n    CfdpCreateDirectory,\n    CfdpRemoveDirectory,\n    CfdpDenyFile,\n    CfdpDenyDirectory\n} CfdpAction;\n\ntypedef enum\n{\n    CfdpNoEvent = 0,\n    CfdpTransactionInd,\n    CfdpEofSentInd,\n    CfdpTransactionFinishedInd,\n    CfdpMetadataRecvInd,\n    CfdpFileSegmentRecvInd,\n    CfdpEofRecvInd,\n    CfdpSuspendedInd,\n    CfdpResumedInd,\n    CfdpReportInd,\n    CfdpFaultInd,\n    CfdpAbandonedInd\n} CfdpEventType;\n\ntypedef struct\n{\n    char            *sourceFileName;\n    char            *destFileName;\n    MetadataList    messagesToUser;\n    MetadataList    filestoreRequests;\n    CfdpHandler     *faultHandlers;\n    int             unacknowledged;\n    unsigned int    flowLabelLength;\n    unsigned char   *flowLabel;\n    int             recordBoundsRespected;\n    int             closureRequested;\n} CfdpProxyTask;\n\ntypedef struct\n{\n    char            *directoryName;\n    char            *destFileName;\n} CfdpDirListTask;\n\n[see description for available functions]\n
    "},{"location":"man/cfdp/cfdp/#description","title":"DESCRIPTION","text":"

    The cfdp library provides functions enabling application software to use CFDP to send and receive files. It conforms to the Class 1 (Unacknowledged) service class defined in the CFDP Blue Book and includes implementations of several standard CFDP user operations.

    In the ION implementation of CFDP, the CFDP notion of entity ID is taken to be identical to the BP (CBHE) notion of DTN node number.

    CFDP entity and transaction numbers may be up to 64 bits in length. For portability to 32-bit machines, these numbers are stored in the CFDP state machine as structures of type CfdpNumber.

    To simplify the interface between CFDP and the user application without risking storage leaks, the CFDP-ION API uses MetadataList objects. A MetadataList is a specially formatted SDR list of user messages, filestore requests, or filestore responses. During the time that a MetadataList is pending processing via the CFDP API, but is not yet (or is no longer) reachable from any FDU object, a pointer to the list is appended to one of the lists of MetadataList objects in the CFDP non-volatile database. This assures that any unplanned termination of the CFDP daemons won't leave any SDR lists unreachable -- and therefore un-recyclable -- due to the absence of references to those lists. Restarting CFDP automatically purges any unused MetadataLists from the CFDP database. The \"user data\" variable of the MetadataList itself is used to implement this feature: while the list is reachable only from the database root, its user data variable points to the database root list from which it is referenced; while the list is attached to a File Delivery Unit, its user data is null.

    By default, CFDP transmits the data in a source file in segments of fixed size. The user application can override this behavior at the time transmission of a file is requested, by supplying a file reader callback function that reads the file -- one byte at a time -- until it detects the end of a \"record\" that has application significance. Each time CFDP calls the reader function, the function must return the length of one such record (which must be no greater than 65535).

    When CFDP is used to transmit a file, a 32-bit checksum must be provided in the \"EOF\" PDU to enable the receiver of the file to assure that it was not corrupted in transit. When an application-specific file reader function is supplied, that function is responsible for updating the computed checksum as it reads each byte of the file; a CFDP library function is provided for this purpose. Two types of file checksums are supported: a simple modular checksum or a 32-bit CRC. The checksum type must be passed through to the CFDP checksum computation function, so it must be provided by (and thus to) the file reader function.
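    As an illustration of the two paragraphs above, here is a minimal sketch of a record-oriented file reader callback conforming to the CfdpReaderFn typedef. The function name readLineRecord is hypothetical, the CfdpCksumType enum is redeclared locally so the sketch stands alone, and update_checksum is a simplified local stand-in for the checksum-update function that the CFDP library actually provides (that function's name is not shown in this page):

    ```c
    #include <assert.h>
    #include <unistd.h>

    /* Redeclared locally from cfdp.h so this sketch is self-contained. */
    typedef enum
    {
        CksumTypeUnknown = -1,
        ModularChecksum = 0,
        CRC32CChecksum = 2,
        NullChecksum = 15
    } CfdpCksumType;

    /* Simplified stand-in for the CFDP library's checksum-update function;
     * only the modular checksum is sketched here, as a running byte sum. */
    static void update_checksum(unsigned char octet, unsigned int *checksum,
            CfdpCksumType ckType)
    {
        if (ckType == ModularChecksum)
        {
            *checksum += octet;
        }
    }

    /* Conforms to CfdpReaderFn: reads the file one byte at a time until the
     * end of one newline-delimited "record", updating the checksum for each
     * byte read.  Returns the record length (at most 65535), 0 at end of
     * file, or -1 on a read error. */
    int readLineRecord(int fd, unsigned int *checksum, CfdpCksumType ckType)
    {
        unsigned char   octet;
        int             length = 0;
        ssize_t         result = 0;

        while ((result = read(fd, &octet, 1)) == 1)
        {
            update_checksum(octet, checksum, ckType);
            length++;
            if (octet == '\n' || length == 65535)
            {
                break;  /* End of record, or maximum record length. */
            }
        }

        return (result < 0 ? -1 : length);
    }
    ```

    A real reader would pass each byte through the library's own checksum routine rather than this stand-in, so that both modular and CRC32C checksums are computed correctly for the checksum type in use.
    
    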

    Per-segment metadata may be provided by the user application. To enable this, upon formation of each file data segment, CFDP will invoke the user-provided per-segment metadata composition callback function (if any), a function conforming to the CfdpMetadataFn type definition. The callback will be passed the offset of the segment within the file, the segment's offset within the current record (as applicable), the length of the segment, an open file descriptor for the source file (in case the data must be read in order to construct the metadata), and a 63-byte buffer in which to place the new metadata. The callback function must return the length of metadata to attach to the file data segment PDU (may be zero) or -1 in the event of a general system failure.
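    A minimal sketch of such a per-segment metadata composition callback, conforming to the CfdpMetadataFn typedef shown in the SYNOPSIS, might simply label each segment with its file offset and length. The function name segmentLabel is hypothetical, and the local uvast typedef assumes ION's uvast is an unsigned 64-bit integer:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t uvast;     /* Assumption: ION's uvast is unsigned 64-bit. */

    /* Conforms to CfdpMetadataFn: writes a short ASCII label for the segment
     * into the 63-byte buffer.  Returns the metadata length (may be zero) or
     * -1 on failure, as the description requires. */
    int segmentLabel(uvast fileOffset, unsigned int recordOffset,
            unsigned int length, int sourceFileFD, char *buffer)
    {
        int     n;

        (void) recordOffset;    /* Not used by this simple label. */
        (void) sourceFileFD;    /* Segment data not needed for the label. */
        n = snprintf(buffer, 63, "seg@%llu+%u",
                (unsigned long long) fileOffset, length);
        return (n < 0 || n >= 63) ? -1 : n;
    }
    ```

    A callback that needed the segment's content (e.g., to tag records by type) could instead seek to fileOffset on sourceFileFD and read up to length bytes before composing the metadata.
    
    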

    The return value for each CFDP \"request\" function (put, cancel, suspend, resume, report) is a reference number that enables \"events\" obtained by calling cfdp_get_event() to be matched to the requests that caused them. Events with reference number set to zero are events that were caused by autonomous CFDP activity, e.g., the reception of a file data segment.

    "},{"location":"man/cfdp/cfdp/#see-also","title":"SEE ALSO","text":"

    cfdpadmin(1), cfdprc(5)

    "},{"location":"man/cfdp/cfdpadmin/","title":"NAME","text":"

    cfdpadmin - ION's CCSDS File Delivery Protocol (CFDP) administration interface

    "},{"location":"man/cfdp/cfdpadmin/#synopsis","title":"SYNOPSIS","text":"

    cfdpadmin [ commands_filename | . | ! ]

    "},{"location":"man/cfdp/cfdpadmin/#description","title":"DESCRIPTION","text":"

    cfdpadmin configures, starts, manages, and stops CFDP operations for the local ION node.

    It operates in response to CFDP configuration commands found in the file commands_filename, if provided; if not, cfdpadmin prints a simple prompt (:) so that the user may type commands directly into standard input. If commands_filename is a period (.), the effect is the same as if a command file containing the single command 'x' were passed to cfdpadmin -- that is, the ION node's cfdpclock task and UT layer service task (nominally bputa) are stopped. If commands_filename is an exclamation point (!), that effect is reversed: the ION node's cfdpclock task and UT layer service task (nominally bputa) are restarted.

    The format of commands for commands_filename can be queried from cfdpadmin with the 'h' or '?' commands at the prompt. The commands are documented in cfdprc(5).

    "},{"location":"man/cfdp/cfdpadmin/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/cfdp/cfdpadmin/#examples","title":"EXAMPLES","text":""},{"location":"man/cfdp/cfdpadmin/#files","title":"FILES","text":"

    See cfdprc(5) for details of the CFDP configuration commands.

    "},{"location":"man/cfdp/cfdpadmin/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/cfdp/cfdpadmin/#diagnostics","title":"DIAGNOSTICS","text":"

    Note: all ION administration utilities expect source file input to be lines of ASCII text that are NL-delimited. If you edit the cfdprc file on a Windows machine, be sure to use dos2unix to convert it to Unix text format before presenting it to cfdpadmin. Otherwise cfdpadmin will detect syntax errors and will not function satisfactorily.

    The following diagnostics may be issued to the logfile ion.log:

    Various errors that don't cause cfdpadmin to fail but are noted in the ion.log log file may be caused by improperly formatted commands given at the prompt or in the commands_filename file. Please see cfdprc(5) for details.

    "},{"location":"man/cfdp/cfdpadmin/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/cfdp/cfdpadmin/#see-also","title":"SEE ALSO","text":"

    cfdprc(5)

    "},{"location":"man/cfdp/cfdpclock/","title":"NAME","text":"

    cfdpclock - CFDP daemon task for managing scheduled events

    "},{"location":"man/cfdp/cfdpclock/#synopsis","title":"SYNOPSIS","text":"

    cfdpclock

    "},{"location":"man/cfdp/cfdpclock/#description","title":"DESCRIPTION","text":"

    cfdpclock is a background \"daemon\" task that periodically performs scheduled CFDP activities. It is spawned automatically by cfdpadmin in response to the 's' command that starts operation of the CFDP protocol, and it is terminated by cfdpadmin in response to an 'x' (STOP) command.

    Once per second, cfdpclock takes the following action:

    First it scans all inbound file delivery units (FDUs). For each one whose check timeout deadline has passed, it increments the check timeout count and resets the check timeout deadline. For each one whose check timeout count exceeds the limit configured for this node, it invokes the Check Limit Reached fault handling procedure.

    Then it scans all outbound FDUs. For each one that has been Canceled, it cancels all extant PDU bundles and sets transmission progress to the size of the file, simulating the completion of transmission. It destroys each outbound FDU whose transmission is completed.

    "},{"location":"man/cfdp/cfdpclock/#exit-status","title":"EXIT STATUS","text":""},{"location":"man/cfdp/cfdpclock/#files","title":"FILES","text":"

    No configuration files are needed.

    "},{"location":"man/cfdp/cfdpclock/#environment","title":"ENVIRONMENT","text":"

    No environment variables apply.

    "},{"location":"man/cfdp/cfdpclock/#diagnostics","title":"DIAGNOSTICS","text":"

    The following diagnostics may be issued to the ion.log log file:

    "},{"location":"man/cfdp/cfdpclock/#bugs","title":"BUGS","text":"

    Report bugs to ion-dtn-support@lists.sourceforge.net

    "},{"location":"man/cfdp/cfdpclock/#see-also","title":"SEE ALSO","text":"

    cfdpadmin(1)

    "},{"location":"man/cfdp/cfdprc/","title":"NAME","text":"

    cfdprc - CCSDS File Delivery Protocol management commands file

    "},{"location":"man/cfdp/cfdprc/#description","title":"DESCRIPTION","text":"

    CFDP management commands are passed to cfdpadmin either in a file of text lines or interactively at cfdpadmin's command prompt (:). Commands are interpreted line-by-line, with exactly one command per line. The formats and effects of the CFDP management commands are described below.

    "},{"location":"man/cfdp/cfdprc/#commands","title":"COMMANDS","text":"