The following are the major changes in each version.
On user request, this release contains improvements to the enhancements for scheduled traffic (Time-Aware Shaper), enabling transmissions from traffic class queues in Ethernet Ports to be scheduled relative to a known timescale:
Behind the scenes, three state machines control the execution of the gate operations in an Ethernet Port’s gate control list, opening and closing the transmission gates that can be associated with each traffic class queue. These state machines are the Cycle Timer, the List Execute and the List Config state machines. Up until now, the last of these, the List Config state machine, has been missing, with the result that it has only been possible to define a single gate control list per Ethernet Port, and that this control list has been active during the whole course of a simulation.
This release remedies this shortcoming by including a simulation model of the List Config state machine. The image below shows the new and improved Edit Gate Control List dialog. As should be evident, this dialog now allows constructing a series of gate control lists, with each control list presented in a tab of its own. During a simulation, every control list becomes operational at its specified base time, as seen in the title of each tab.
During a running simulation, the List Config state machine awaits the specified base time of the next control list pending configuration (the AdminControlList) and, when the time is right, interrupts the other two state machines, which are responsible for executing the currently operational schedule (the OperControlList). After reconfiguration, the old AdminControlList has become the new OperControlList, and possibly a new AdminControlList is awaiting its time to be configured, and so forth.
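The Admin-to-Oper promotion described above can be illustrated with a minimal Python sketch. All class and attribute names here are illustrative, not the tool's API; real 802.1Qbv state machines also coordinate with the Cycle Timer and List Execute machines, which this sketch abstracts into a single `tick` call.

```python
# Hypothetical sketch of a List Config state machine promoting the next
# pending AdminControlList to OperControlList at its configured base time.

class GateControlList:
    def __init__(self, base_time, entries):
        self.base_time = base_time      # time at which this list becomes operational
        self.entries = entries          # e.g. [(gate_states_bitmask, interval_ns), ...]

class ListConfig:
    def __init__(self, admin_lists):
        # pending AdminControlLists, ordered by their base times
        self.pending = sorted(admin_lists, key=lambda l: l.base_time)
        self.oper = None                # currently operational list (OperControlList)

    def tick(self, now):
        """Promote the next pending list once its base time has been reached."""
        while self.pending and now >= self.pending[0].base_time:
            # In the real state machine, this is where the Cycle Timer and
            # List Execute machines are interrupted and reconfigured.
            self.oper = self.pending.pop(0)
        return self.oper

# Two schedules: the second takes over at t = 1 ms.
gcl_a = GateControlList(0.0, [(0b11, 500)])
gcl_b = GateControlList(0.001, [(0b01, 250)])
lc = ListConfig([gcl_b, gcl_a])
```

Calling `lc.tick(0.0)` installs the first schedule; once `tick` is called with a time at or past 1 ms, the second schedule becomes operational, mirroring the reconfiguration shown in the Gantt chart.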
In a Gantt chart resulting from a simulation, a GCL reconfiguration, as well as the operation of the different state machines, is now evident. In the figure below, a gate-schedule reconfiguration occurs at time t = 1 ms, after which the cyclic behaviour of the transmission gates clearly changes.
We refer the interested reader to the tool manual for a complete description of all the parameters available in the Edit Gate Control List dialog.
Behind the scenes, there have also been some further improvements to the simulation model of a 5G protocol stack, which will allow simulation of 5G wireless links. Furthermore, our attention has been drawn to work done by 3GPP that has been included in their recent standards releases. Quite recent additions explain how 5G can be integrated with Time-Sensitive Networking and, more precisely, how a 5G system (5GS) will be able to function as one or more virtual or logical TSN bridges. This will foster the integration of a 5GS into, for example, industrial networks and support various industrial use cases by providing improved communication services. We will likely work towards including simulation models of these additions and, when completed, this will be the focus of a future release.
5G and eventually 6G will have an important role to play when it comes to supporting new services for e.g. automotive active safety via V2X, edge computing etc. Therefore, and after having received a market request, we have taken the first steps towards supporting simulation of 5G NR links.
Besides including a more detailed simulation model of a 5G protocol stack, we will also support using a probabilistic model at a higher protocol-stack level. Such a model will be based on latencies computed from captured, time-stamped packets sent between a 5G User Equipment (UE), e.g. a mobile phone or a vehicle, and a 5G base station (gNB). The latter work is currently being performed as part of a research project together with an academic research group that has access to a commercial stand-alone 5G network, facilitating collaboration between academia and industry.
This is still very much work in progress, and the model will be improved in future releases. However, we wanted to give you a heads up and, if you have an interest in this area, we would love to hear from you!
In this version, we have enhanced the ViewMode logic to provide much more detailed information. This will enable users to gain a better understanding of the flow of information, especially when the model grows more complex.
For example, in the Frame Replication and Elimination for Reliability (FRER) flow below, you can now see how the different paths flow and where recovery points etc. are placed.
Legend: FRER Port | Generate | Recovery/Terminate | Recovery/Continue
We are excited to announce that our application is now officially available for the Linux x64 platform. To download the application, please visit our website and follow the instructions provided there. We hope you find this new addition useful and look forward to hearing your feedback!
If you are interested in builds for other platforms/architectures, please contact support.
In this release, we take a first step towards supporting the specific use case of sanity-checking a physical network with the aid of a network digital twin constructed in TCN’s tool, by letting the twin inspect packet trace files captured in its physical network counterpart. More specifically, we have added a first set of Ethernet packet counters that inspect the payloads encapsulated in the Ethernet packets found in a PCAP file, look for a specific datagram or message, and increment a counter each time a matching payload is found. Currently, the PCAP file to be examined should be associated with the port of a Sensor node that corresponds to the device port in the physical network at which the packets were captured.
The screenshot below shows how to activate the different packet counters supported in this first release. More types will be added in future versions. With each counter, two separate watchdogs can also be associated that will check that the final counter value lies within a user specified lower and upper limit, respectively.
As should be evident from the different packet types in the list above, a packet inspection can iteratively continue further down through the different frame and datagram payload layers until the innermost message is reached, for example, a specific PTP message. The inspection can however also stop at a shallower level; if one, for example, is content with verifying that some PTP messages are being exchanged on a specific link in the network, it is sufficient to activate the PTP Message counter. This counter will trigger on any Ethernet frame that contains a PTP message of some sort.
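A shallow PTP Message counter of the kind described above can be sketched in a few lines of Python. This sketch only checks each raw Ethernet frame's EtherType for PTP over Ethernet (0x88F7, per IEEE 1588 Annex F) and ignores VLAN-tagged frames for brevity; deeper counters would keep parsing into the payload to match, say, only Announce messages. The function name is illustrative, not the tool's API.

```python
# Count raw Ethernet frames whose EtherType indicates a PTP message.
PTP_ETHERTYPE = 0x88F7  # PTP over Ethernet (IEEE 1588 Annex F)

def count_ptp_frames(frames):
    count = 0
    for frame in frames:                 # each frame: raw bytes of one Ethernet frame
        if len(frame) < 14:
            continue                     # too short to hold an Ethernet header
        ethertype = int.from_bytes(frame[12:14], "big")
        if ethertype == PTP_ETHERTYPE:   # any PTP message type triggers the counter
            count += 1
    return count
```

A deeper counter would inspect byte 14 onward (the PTP header) to discriminate between Sync, Announce and other message types before incrementing.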
As is evident from the following figure, the final packet counter value and the results from applying any associated watchdog limits will be shown together with the results of all other active watchdogs in the Overview tab of the results window. In this example, the user expected Switch1.Port1 to receive between 10 and 15 PTP Announce Messages; however, only 5 were observed, resulting in the lower-limit watchdog red-flagging this result.
Finally, it should be noted that the different Packet Counters are intended to function also when running a traditional simulation that does not include injecting packets from any PCAP file.
To get the exact same simulation results regardless of the target platform used, we have chosen to include a new pseudorandom number generator in the simulation core of TCN TimeAnalysis. The selected generator is the fast, open-source version of the Mersenne Twister (MT), a 623-dimensionally equidistributed uniform pseudorandom number generator. Thus, given the same seed value, the very same series of simulation events and results will now occur, irrespective of which platform a simulation is run on. For more information, please visit the Mersenne Twister Home Page.
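The reproducibility property is easy to demonstrate, since Python's own `random.Random` happens to be an MT19937 Mersenne Twister as well: two generators seeded identically produce identical sequences, on any platform.

```python
import random

# Python's random.Random is itself a Mersenne Twister (MT19937), so it
# illustrates the seed-determinism the release notes describe.
rng_a = random.Random(42)
rng_b = random.Random(42)

seq_a = [rng_a.random() for _ in range(5)]
seq_b = [rng_b.random() for _ in range(5)]

assert seq_a == seq_b  # identical seed -> identical sequence of draws
```

In a discrete-event simulator, every random draw (task offsets, jitter, sizes) comes from such a generator, so fixing the seed fixes the entire sequence of simulation events.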
To simplify working with capture files (PCAP/HEA etc.), it is now possible to configure Capture Paths. The user can specify a number of directories which will be used when resolving capture files.
When specifying capture files, the user can now directly choose from files contained in Capture Paths.
It is now possible to perform an upgrade when importing results data - the simulated model will be updated to the latest version and resimulated with the setup of the imported file.
In this version, we introduce Watchdogs, which are entities with the single purpose of observing whether or not certain user-specified requirements are met during a simulation. For example, as is evident from the Edit Timer dialog below, it is now possible to activate watchdogs that inspect the latency and jitter samples registered by Timers and warn if any sample exceeds the upper limit given by the user. In this figure, the watchdog that ensures all computed jitter samples stay within the specified upper bound has been activated by checking the associated box, and the user has then specified that the jitter measured and computed by Timer1 should never exceed 0.3 milliseconds.
As the number of watchdogs added to, for example, an advanced electrical architecture could potentially become quite large, a new table has been introduced that allows quickly inspecting the results of all the watchdogs that were active during a simulation. The Watchdogs table is found in the Overview tab of the Simulation results window, which opens after a simulation terminates. In this example, besides the above-mentioned jitter and latency watchdogs, the user has also specified upper limits on the maximum utilization of the transmitter (Tx) in a switch port, as well as on the utilization of Ethernet-frame memory buffers in some selected ports of the switch. As can be seen, the watchdogs use the color codes green and red to flag whether or not the associated requirements were met during the simulation.
However, it is also possible to inspect the individual results of, for example, the latencies measured by a specific Timer and quickly get an appreciation of how well the requirement of an associated watchdog was kept during a simulation. In the latency frequency plot of Timer1 below, we can see that a large portion of the latencies registered by Timer1 actually exceeded the specified upper limit of 1 millisecond, as is evident from the rightmost histogram bars colored red. In this case, the designer should consider relaxing the requirement, reconfiguring some parameters, or redesigning the system.
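At its core, an upper-limit watchdog like the ones above is just a predicate over the recorded samples. The following sketch (hypothetical names, not the tool's API) shows the check and how the violating samples would be the ones colored red in the histogram.

```python
def check_watchdog(samples, upper_limit):
    """Simple upper-limit watchdog: returns (passed, violations).

    `violations` holds the samples exceeding the limit - in the frequency
    plot, these are the samples that end up in the red histogram bars.
    """
    violations = [s for s in samples if s > upper_limit]
    return (len(violations) == 0, violations)

# Latency samples in milliseconds against a 1.0 ms upper limit.
passed, violations = check_watchdog([0.2, 0.9, 1.4], upper_limit=1.0)
```

Here `passed` is False and `violations` contains the single 1.4 ms sample, which is what would flip the watchdog's flag from green to red in the Overview tab.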
For simulations with a lot of task objects, viewing of the Gantt chart could be very slow and unresponsive. This issue has now been addressed - all chart interactions are now instantaneous.
The application no longer unpacks executable files to/from the temp folder - all files are now unpacked correctly during installation. Furthermore, the registry is no longer used to store data. Except for the installation directory, the only folders written to are:
This version of TCN TimeAnalysis adds basic support for IEEE 802.1Qci Per-Stream Filtering and Policing (PSFP). This mechanism allows applying filtering and bandwidth profiling to the Ethernet frames identified as belonging to specific traffic streams. This can be useful to avoid networks being overloaded with unwanted traffic resulting, for example, from a babbling-idiot failure or a malicious DoS attack.
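The bandwidth-profiling side of PSFP can be pictured as a per-stream token bucket: frames of a stream are dropped once the stream exceeds its committed rate, which is how a babbling node is contained. The sketch below is a toy model with hypothetical names; real 802.1Qci flow meters, stream gates and filters are considerably richer.

```python
# Toy per-stream token-bucket policer, loosely in the spirit of the
# flow-metering part of IEEE 802.1Qci PSFP (names are illustrative).

class StreamPolicer:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps            # committed information rate
        self.capacity = burst_bits      # committed burst size
        self.tokens = burst_bits        # bucket starts full
        self.last = 0.0                 # time of the previous frame

    def allow(self, now, frame_bits):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_bits <= self.tokens:
            self.tokens -= frame_bits   # frame conforms: spend tokens
            return True
        return False                    # non-conforming frame is dropped
```

A stream that bursts beyond its budget, such as a babbling-idiot node, has its excess frames rejected, while conforming streams pass unaffected.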
Besides PSFP, this release also includes improvements to the gPTP model added in version 3.0. For example, new configuration parameters have been added to give the user more fine grained control of the timing of Sync messages, internal switch processing delay etc.
Version 3.0 of TCN TimeAnalysis introduces initial support for the IEEE 802.1AS generalized precision time protocol (gPTP). Based on IEEE 1588, this standard is better optimized for time-sensitive applications. Currently, an Ethernet network comprising a number of Switches together with the Hosts connected to them can be defined as a gPTP domain.
One of the Hosts (Host1 in the figure above) is selected as containing the grandmaster clock and the time of this clock is then distributed across the network by the gPTP protocol to all the slave Hosts in the network. Thereby time synchronization is achieved between these nodes.
This version also adds support for simulating data communication over FlexRay buses. The FlexRay bus was designed to handle a large variety of frames and provides both time-triggered communication, allowing deterministic data transfers that arrive at predictable times down to the microsecond, and dynamic, event-driven data transfers inspired by CAN.
Switch Ports that connect to Host Ports are now allowed to belong to multiple VLANs. This also makes the link partner Host Port aware of these VLANs and opens up for different traffic flows originating in the same Host to be sent directly onto any of the different VLANs.
This version also includes some improvements to Frame Replication and Elimination for Reliability (FRER):
One objective of the designers of FRER was to maximise opportunities for backwards compatibility, that is, it should be possible to use existing end stations and switches that are unaware of FRER together with new devices that do support FRER. Therefore, this version now includes the opportunity to apply different strategies that, for example, influence whether FRER functions are automatically instantiated in Host Ports or Switch Ports.
Furthermore, when Sequence recovery functions are instantiated, it is now possible to employ either the MatchRecoveryAlgorithm or the VectorRecoveryAlgorithm.
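The match-style recovery mentioned above can be sketched as follows: a frame is forwarded only if its sequence number differs from the last one accepted, so the duplicate arriving over the redundant path is discarded. This is a loose, simplified model of 802.1CB's MatchRecoveryAlgorithm (the real algorithm also handles reset timeouts and counter bookkeeping, and the VectorRecoveryAlgorithm instead keeps a history window of recently seen sequence numbers).

```python
# Simplified match-style sequence recovery in the spirit of IEEE 802.1CB.

class MatchRecovery:
    def __init__(self):
        self.last_seq = None            # sequence number of the last accepted frame

    def accept(self, seq):
        if seq == self.last_seq:
            return False                # duplicate from the redundant path: discard
        self.last_seq = seq             # new sequence number: forward and remember
        return True
```

Feeding it the interleaved arrivals from two paths, e.g. sequence numbers 0, 0, 1, 1, 2, forwards exactly one copy of each frame.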
This release contains a couple of improvements related to Frame Replication and Elimination for Reliability (FRER). First, to make the benefits of FRER more obvious, it is now possible to unplug an Ethernet cable during a running simulation:
In the Gantt chart above, each vertical bar is an Ethernet frame that belongs to a FRER multicast flow.
The second Gantt chart shows an enlargement of two of the boxes. At t = 5 ms, the cable connected to Switch1.Port2 is unplugged. As can be expected, this effectively blocks further transmission of the Ethernet packets. Up until then, the Sequence Recovery Function in the last switch port connected to the flow’s destination, Host2, has been discarding redundant packets. However, thanks to duplicate packets being transmitted via an alternative, redundant path, Host2 continues receiving Ethernet frames despite the unplugged cable.
Which Links are to be disabled and at what specific times, or whether they are to be unplugged at a random time during the simulation, can be specified in the Start simulation dialog:
Version 2.2 of TCN TimeAnalysis supported the Null Stream identification function for identifying the packets belonging to a specific FRER flow (Stream). Now, support for Source MAC and VLAN Stream identification has also been added. The tool will try to automatically choose the most suitable identification method for a specific FRER flow.
If a simulation is run using, for example, random task offsets, every new simulation will differ slightly as the internal Random Number Generator (RNG) will produce a different series of random numbers each time. Upon request, it is now possible to explicitly specify the seed value used to initialize the RNG. This will make it produce exactly the same series of random numbers for each simulation and thus also the exact same simulation and simulation results, which can be of interest in certain circumstances.
This version adds initial support for IEEE Standard 802.1CB-2017 - Frame Replication and Elimination for Reliability (FRER). By identifying packets belonging to specific flows, duplicating them and forwarding them along different paths to the same end Host(s), this standard allows redundancy which increases the likelihood that Ethernet packets containing time-sensitive or critical data reach their designated end Host(s) despite link or node failure along the original path.
After marking a specific flow as a FRER flow and specifying redundant paths to target Hosts, algorithms will automatically instantiate FRER functionality for generating, duplicating and eliminating packets in the required ports.
The Path/VLAN visualization framework has been revamped into a more powerful View Mode solution - all configurations are easily accessible and the graphs carry richer information.
For an IPDU flow, one graph shows how the IPDU is forwarded all the way from the source to each target destination, with subgraphs for the different frames that carry the IPDU over different networks.
When working with complex networks with numerous connections, there will inevitably be a number of links that overlap in the layout. This can make it hard to grasp exactly how things are connected, especially in sections with a large number of links.
To mitigate this problem, it is now possible to visualize these crossings using a number of different styles.
In response to customer requests, it is now possible to run simulations from the command line - please run the simulation program without parameters to get a full list of available options.
It is now possible to specify the length of a cable, which will affect the propagation time. The speed at which signals propagate along a cable is assumed to be approximately one third of the speed of light in vacuum.
Version 2.0 of TCN TimeAnalysis introduces data messages or IPDUs (Interaction layer Protocol Data Units) and the PDU Router available in each Host. The PDU Router in the source Host of an IPDU can multicast the IPDU via several different egress ports of different types, so the IPDU can be transmitted on, for example, LIN, CAN and Ethernet networks more or less simultaneously.
When received on an ingress port of an intermediate Host, the PDU Router can also gateway the IPDU to a set of other CAN, LIN and Ethernet ports on that Host.
The figure below shows an example where the blue colored links indicate the fan-out of a certain IPDU; in this case the IPDU originates in Host1 where it is broadcasted on CANBus1 to Host2, Host3 and Host4. Host3 works as a gateway and transmits the IPDU on its Ethernet port where it is multicasted to Host6 and Host7.
Furthermore, IPDU Timers from Host1 to Host6 and Host7 (indicated by highlighted borders) have been applied to measure the forwarding latency required to transmit the IPDU from its source to the selected target Hosts.
When the user defines the different CAN, LIN and Ethernet frames that carry the IPDU, the tool prevents any loop from occurring in the fan-out of the IPDU; that is, no Host will ever be allowed to receive the same IPDU over several ingress Ports.
The user can now define explicit Ethernet Frames that carry IPDUs smaller than the Link’s MTU directly in their payload. Like before, UDP Datagrams can be used to send larger IPDUs that require fragmentation.
By running several consecutive simulations in one batch and randomly updating the offsets of the Tasks that produce the IPDUs between each run, a larger variety of resource-contention situations and queuing delays will be encountered. This translates into the latency and jitter histograms of Frames, Datagrams and IPDUs showing increased variance and thus also an increased likelihood of finding, for example, the worst-case latency that could ever occur for a particular IPDU.
An Inspector that displays the attributes of the selected element(s) in detail has been added.
To increase run time performance and reduce memory footprint, simulation results are now stored using a SQLite database. Furthermore, the database logic has been heavily optimized to minimize the time spent writing to the database.
Depending on the number of nodes in the topology and the amount of traffic forwarded through networks and buses, which result in different numbers of database operations, simulation speedups of up to four times have been observed. Besides the performance gains from the database optimizations, several other bottlenecks have been identified and reworked to increase run-time performance.
It is now possible to specify that the data size of UDP datagrams sent periodically from Hosts should vary randomly: by checking the Random size box as in the figure below, the data bit size of each new datagram will be selected with uniform probability in the specified interval.
The simulation results chart with the above title now correctly shows the utilization of each Switch Port frame buffer, that is one individual bar for the high and the low frame buffer, respectively. Earlier versions of the tool only showed a single utilization bar per Switch Port, which made the results harder to interpret.
It is now possible to generate PDF reports for simulation results.
Additionally, if the result contains jitter/latencies, these can be exported to CSV files.
Earlier versions of TCN TimeAnalysis have only allowed a single Ethernet port per Host. However, it is now possible to add two Ethernet ports to a single Host and then send different Data Frames through both ports.
A Stream reads and releases Ethernet frames from the capture file specified by an associated Sensor. This new Stream attribute allows creating a gap in time between the last and the first Ethernet frame found in the capture file when wrap around occurs.
It is now also possible to associate a Stream with a priority in the range 0 to 7. Before transmission, this priority will be written to the IP ToS bits of the IPv4 header of any recorded Stream Ethernet frame containing an IPv4 packet.
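As an illustration of where such a priority lands in the packet, the sketch below writes a 0-7 value into the top three bits of the IPv4 ToS byte (the classic IP Precedence field). This is an assumption about the encoding for illustration only; the tool may map the priority onto the ToS bits differently, and a real implementation would also have to update the IPv4 header checksum, which is omitted here.

```python
def set_tos_priority(ipv4_header, priority):
    """Write a 0-7 priority into the IP Precedence bits of the ToS byte.

    Assumes the priority occupies the top 3 bits of byte 1 of the IPv4
    header (hypothetical mapping; checksum update intentionally omitted).
    """
    assert 0 <= priority <= 7
    hdr = bytearray(ipv4_header)
    # ToS is the second byte of the IPv4 header; keep the low 5 bits intact.
    hdr[1] = (priority << 5) | (hdr[1] & 0x1F)
    return hdr
```

For example, applying priority 5 to a header leaves the lower ToS bits (e.g. ECN) untouched while setting the precedence bits to 101.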
Earlier versions of TCN TimeAnalysis have supported the enhancements for scheduled traffic, or Time-Aware Shaper (TAS). Version 1.5 now also adds basic support for frame preemption (FP), which is another mechanism for protecting time-sensitive traffic from interference caused by Ethernet packets that have less stringent timing requirements.
FP can be activated on each Host and Switch Port and allows express frames to interrupt an ongoing transmission of a preemptable frame. When FP is active, a frame preemption status is assigned to each value of priority via a frame preemption status table.
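The status table described above can be pictured as a simple mapping from each of the eight priority values to either express or preemptable, with an express frame allowed to interrupt an ongoing preemptable transmission. Names and the chosen priorities below are illustrative, not the tool's defaults.

```python
# Hypothetical frame preemption status table: every priority value is
# assigned a status, here protecting priorities 6 and 7 as express.
fp_status = {p: "preemptable" for p in range(8)}
fp_status[6] = "express"
fp_status[7] = "express"

def may_preempt(arriving_priority, ongoing_priority):
    """An express frame may interrupt an ongoing preemptable transmission."""
    return (fp_status[arriving_priority] == "express"
            and fp_status[ongoing_priority] == "preemptable")
```

So an arriving priority-7 frame interrupts an ongoing priority-0 transmission, but never another express transmission, and a preemptable frame never interrupts anything.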
Currently, TAS and FP can only be used in isolation, however, a future release will allow them to be used in combination.
Besides timing Ethernet frames, it is now possible to let Timers measure latency on UDP frames. As large UDP frames might not fit into a single IPv4 packet and Ethernet frame, fragmentation might have to be applied. It will then take several Ethernet frames to deliver a full UDP frame, and only after fragment reassembly in the receiver will the Timer register a stop time for the delivery of the UDP frame.
All flow dialogs now list all paths to all reachable target Hosts from the source of the flow - now it is only a matter of checking the desired path(s) in the list.
If the network originating from the source contains cycles in the VLAN, broadcast is not allowed. Furthermore, only one Path for a specific target Host can be selected.
Cycle detection is enforced at all times - if you have a broadcast flow and manipulate the network in such a way that cycles emerge in the network covered by the flow, the flow will be removed.
Bit rates for transmission over CAN and LIN buses can now be specified directly on the bus.
The manual is now bundled with the application.
We have given TCN TimeAnalysis a new look and feel to make it fresher, more modern and more powerful.