
DOCUMENT N.             CS-20070604.1.b1.41
CREATION DATE           04/06/2007
LAST MOD. DATE          25/03/2009, 12/05/09, 03/07/09, 06/07/09, 15/03/10, 21/07/10, 21/10/10
REPLACES DOCUMENT N.
SEE ALSO DOCUMENT N.    -
NR. OF PAGES
AUTHORS                 A. Stecchi, F. Galletti

Abstract

At the present time, the DAFNE Control System employs a common memory space for exchanging information. Commands, data and error reports are written to and read from this space.

During the design phase, this choice was adopted in order to gain bandwidth with respect to a network-based approach, a convincing decision if we recall that at that time the standard network technology was Ethernet 10Base2 at 10 Mbps.

We decided to redesign the DAFNE Control System data flow for three main reasons:

  • the experience gained along 12 years of operation shows that, even though the idea of using a common memory space was right, the hardware adopted to implement it has been giving trouble for all this period;
  • the hardware adopted is also obsolete and out of production, a risky condition for a system that has to guarantee continuous uptime;
  • the present architecture relies entirely on the VME bus, which prevents the adoption of any processor other than VME embedded boards.

This paper describes the new data flow system from a software point of view and reports the results of the measurements of its performance.

System Data Flow

The system data flow has been redirected from VME-to-VME links to the Ethernet network.
The commands from Level I to Level III, instead of passing through Level II, now go straight from the consoles to the DEVILs via TCP/IP. At Level III, TCP receive buffers and LabVIEW queues replace the memory mailboxes to accept and dispatch commands.
A MySQL database, running on a dedicated server, implements the command, error and warning log services.

Client/Server transactions

The communications within the system are based on transactions over TCP, established between a Client (any control window on a console) and a Server (any DEVIL).
When a Client opens a connection with a Server, the latter launches a dedicated program (a Servlet) that handles all the requests coming from that Client.
Many Clients can open simultaneous connections with the same DEVIL; in this case the DEVIL launches, and holds, as many Servlet instances, which operate concurrently.

The connection relies on the native TCP/IP transaction protocol, within a LabVIEW process, with no application handshake:

  • the Server keeps running a TCP listener;
  • the Client opens a TCP connection;
  • the TCP listener on the Server accepts the TCP connection.

The TCP connection terminates automatically when the Client stops, that is, when the control window closes or the LabVIEW process quits (or even crashes). Should any of these conditions occur, the corresponding Servlet quits by itself.
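
As an illustration only, this listener/Servlet scheme maps onto the classic one-handler-per-client TCP pattern. The following Python sketch (assumed names, not the actual LabVIEW code) shows the idea:

Connection handling (sketch)

import socket
import threading

def serve(port: int, handle_client) -> None:
    # The Server keeps a TCP listener running...
    listener = socket.create_server(("", port))
    while True:
        # ...accepts each incoming Client connection...
        conn, _addr = listener.accept()
        # ...and spawns a dedicated handler per Client (the Servlet).
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

When handle_client returns (e.g. because the Client closed the connection), its thread ends, mirroring the Servlet that quits by itself.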

The TCP_DCS application protocol

Once the connection has been established, the Client asks the Server to do something or to return some data by means of a specific application protocol named TCP_DCS.
The Client always sends Command packets and the Server answers with Ok packets, Result packets or Error packets.
Both the commands from a Client and the answers from the Server always start with a Header packet.
The Header packet will not be shown in the following description; think of it as always present. Logically, the Header packet precedes the packet rather than being included in it (see Fig. 1).

Header packet


Fig. 1 - Structure of a command packet.

Bytes              Name
-----              ----
4                  packet length
4                  transaction ID
4                  unit ID
  • packet length: the length, in bytes, of the packet that follows the Header packet.
  • transaction ID: a number that can be used to validate the transaction.
  • unit ID: a number that can be used to select some action at the application level.
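
As an aside, the Header packet maps onto three 4-byte integers. A minimal Python sketch, assuming big-endian (network) byte order, since LabVIEW flattens numeric data big-endian by default:

Header packing (sketch)

import struct

def make_header(packet_length: int, transaction_id: int, unit_id: int) -> bytes:
    # Three 4-byte unsigned integers; ">" selects big-endian order
    # (an assumption based on LabVIEW's default flattening).
    return struct.pack(">III", packet_length, transaction_id, unit_id)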
Command packet

Bytes              Name
-----              ----
4                  opcode
n                  data (alias command arguments)

  • opcode: a unique number associated to the specific service.
  • arguments: any data (string or binary stream) needed for the execution of the command (e.g. element name, set value, etc.). The arguments size can be calculated as: arguments size = (packet length) - 4.
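
Building on the header sketch above, a Client-side send could look like the following (illustrative only). Note that the packet length field counts the opcode plus the arguments, not the 12 header bytes:

Command sending (sketch)

import struct

def send_command(sock, opcode: int, args: bytes = b"",
                 transaction_id: int = 0, unit_id: int = 0) -> None:
    # make_header is the sketch defined above.
    payload = struct.pack(">I", opcode) + args          # opcode + arguments
    sock.sendall(make_header(len(payload), transaction_id, unit_id) + payload)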
Command

Opcode        Name               Associated function and parameters
------        ----               -----------------------------------
0x00000000    DO_NOT_USE         must NOT be used
0x00000001    FETCH              fetch the STA or DYN fork: <elementName>,STA|DYN
0x00000002    SEND CMD 1 TO 3    send a command from the console to the DEVIL: <command_string>
0x00000003    FETCH_BUFFER       fetch n bytes from a buffer global variable: <#OfBytes>
0x00000004    ECHO               send n bytes and get the same n bytes back: <byte_string>
0x00000005    FETCH_BLOCK        fetch the STA or DYN fork (full array): <elementName>,STA|DYN
                                 (the elementName is used by the servlet just to recognize the
                                 proper class and can be any elementName belonging to that class
                                 in that DEVIL)
0x00000006    GET_ALIVE_COUNT    read the DEVIL alive counter: no arguments
0x000000xx    to be defined      not implemented, must NOT be used
0x000000ff    DO_NOT_USE         must NOT be used
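
For illustration, the table maps directly onto a set of named constants; a hypothetical Python mirror:

Opcode constants (sketch)

from enum import IntEnum

class Opcode(IntEnum):
    # 0x00000000 and 0x000000ff are reserved and must NOT be used.
    FETCH           = 0x00000001
    SEND_CMD_1_TO_3 = 0x00000002
    FETCH_BUFFER    = 0x00000003
    ECHO            = 0x00000004
    FETCH_BLOCK     = 0x00000005
    GET_ALIVE_COUNT = 0x00000006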


Ok packet

Bytes              Name
-----              ----
4                  packet code, always = 0x00000000


Error packet

Bytes              Name
-----              ----
4                  packet code, always = 0x000000ff


Result packet

Bytes              Name
-----              ----
4                  packet code
n                  raw data

  • packet code: the Server returns in this field the command number (as from the Client request).
  • raw data: any data (string or binary stream) as requested by the command (e.g. element dynamic record, data buffer, etc.). The raw data size can be calculated from the packet length as: raw data size = (packet length) - 4.
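
Since a Result packet echoes the command opcode in its packet code field, reserving the opcodes 0x00000000 and 0x000000ff presumably guarantees that a Result can never be mistaken for an Ok or an Error packet. A Client-side parsing sketch (assumed names, big-endian as above):

Answer parsing (sketch)

import struct

def recv_exact(sock, n: int) -> bytes:
    # Loop until exactly n bytes have arrived (TCP may deliver fewer per read).
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def read_answer(sock):
    packet_length, transaction_id, unit_id = struct.unpack(">III", recv_exact(sock, 12))
    packet = recv_exact(sock, packet_length)
    code = struct.unpack(">I", packet[:4])[0]
    if code == 0x00000000:
        return "ok", b""             # Ok packet
    if code == 0x000000ff:
        return "error", b""          # Error packet
    return "result", packet[4:]      # Result packet: raw data follows the code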

LabVIEW VIs

The TCP_DCS application protocol is implemented by a set of LabVIEW VIs. There are VIs to be used at Level III by the DEVILs and other VIs to be used at Level I by the control windows.

Third level VIs

TCP_DCS_readPacket.vi

Reads the packet coming from Level I (e.g. from a user window). This VI is set as reentrant: a reentrant VI has a separate memory space allocated for each instance, so that multiple instances can execute in parallel without interfering with each other.

Path: /u2/dcs/source_linux/3_hell/TCP_DCS_readPacket.vi (reentrant)

 

The VI reads the packet through two separate TCP Reads: the first one gets the 12 bytes of the header, while the second one gets the 4 bytes of the command to be executed plus the parameters (if any).
The first TCP Read timeout is set to -1 in order to allow an indefinite wait for the header arrival.
The second TCP Read timeout is set to 3000 ms, because it would make no sense to wait indefinitely after the reception of the header.
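
The same two-read logic, as a Python sketch (reusing recv_exact from the answer-parsing sketch; the names are illustrative):

Two-read packet reception (sketch)

import struct

def read_command(sock):
    # First read: wait indefinitely for the 12-byte header
    # (timeout None, the analogue of the VI's -1 timeout).
    sock.settimeout(None)
    packet_length, transaction_id, unit_id = struct.unpack(">III", recv_exact(sock, 12))
    # Second read: 3000 ms timeout; once the header has arrived,
    # the body is expected to follow promptly.
    sock.settimeout(3.0)
    packet = recv_exact(sock, packet_length)
    return struct.unpack(">I", packet[:4])[0], packet[4:]   # opcode, arguments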

TCP_DCS_sendPacket.vi


Sends a TCP_DCS answer packet to level 1 (e.g. to a user window).


 Path: /u2/dcs/source_linux/3_hell/TCP_DCS_sendPacket.vi (reentrant)

Reconstructs the packet to be sent through a single TCP Write, appending the requested information to the header. The byteNumber is calculated including the header length.
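
A corresponding Server-side sketch (illustrative only; sendall issues a single write, so header and payload go out together):

Single-write answer (sketch)

import struct

def send_answer(sock, code: int, raw_data: bytes = b"",
                transaction_id: int = 0, unit_id: int = 0) -> None:
    # make_header is the sketch defined earlier.
    payload = struct.pack(">I", code) + raw_data
    # The total byte count written (the byteNumber) includes the 12 header bytes.
    sock.sendall(make_header(len(payload), transaction_id, unit_id) + payload)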

servlet_eth_x.x.vit


Path: /u2/dcs/source_linux/devils/DEVIL_eth/servlet_eth_3.x.vit  
(reentrant, called by reference as a VIT instance)

The VI operates in three modes managed by the DEVIL process:
  • mode = 0: the Servlet recovers the references associated with the typeDefs which the DEVIL has to deal with;
  • mode = 1: the Servlet receives, interprets and executes the commands according to the TCP_DCS protocol, each command being interpreted by the sub-VI TCP_DCS_query.vi (described below).
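
The mode-1 behaviour, sketched in Python with the helpers above (dispatch stands in for the command interpreter and is purely illustrative):

Servlet loop (sketch)

def servlet_loop(sock, dispatch) -> None:
    while True:
        try:
            opcode, args = read_command(sock)
        except (ConnectionError, OSError):
            break                                  # Client gone: the Servlet quits by itself
        code, raw_data = dispatch(opcode, args)    # interpret and execute the command
        send_answer(sock, code, raw_data)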

TCP_DCS_query.vi


Path: /u2/dcs/source_solaris/common/ethernet/TCP_DCS_query.vi (reentrant)

First level VIs

TCP_DCS_testConn.vi


Path: /u2/dcs/source_solaris/common/ethernet/TCP_DCS_testConn.vi
(WARNING: decide whether this VI must be reentrant or not!)

GConnList1_eth.vi


Path: /u2/dcs/source_solaris/globals/GConnList1_eth.vi

This global VI keeps a list of the DEVILs (with the TCP_DCS protocol) invoked by each window, with the associated connection IDs. The GConnList1_eth VI provides by itself all the services needed to manage the data.

The services (with their required inputs) are:

  • init (no inputs): initializes the array of clusters to an empty array.
  • register (windowName): appends a new component to the array, with windowName set to the given value and with empty DEVIL and connID arrays.
  • search (windowName): searches for the windowName string in all the array components. If there is a match, the component and its index are returned; if there is no match, an empty component and -1 are returned instead.
  • get (index): if the index is in range (0, array dimension), the corresponding component is returned; if the index is not in range, an empty component and -1 are returned instead.
  • update (windowName, element): replaces the component corresponding to windowName with the element value.
  • remove (windowName): removes the component corresponding to windowName.

The GConnList1_eth is initialized at startup time by the DANTE top-bar.
The GConnList1_eth holds an array of clusters. Each cluster consists of a string and two arrays of integers.
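
An illustrative Python analogue of this data structure and of two of its services (the real implementation is a LabVIEW global VI; all names here are assumptions):

GConnList data structure (sketch)

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ConnEntry:
    window_name: str                                    # the control window
    devils: List[int] = field(default_factory=list)     # DEVILs reached via TCP_DCS
    conn_ids: List[int] = field(default_factory=list)   # associated connection IDs

conn_list: List[ConnEntry] = []          # "init": an empty array of clusters

def register(window_name: str) -> None:
    # "register": new component with empty DEVIL and connID arrays
    conn_list.append(ConnEntry(window_name))

def search(window_name: str) -> Tuple[ConnEntry, int]:
    # "search": return the matching component and its index,
    # or an empty component and -1 when there is no match
    for i, entry in enumerate(conn_list):
        if entry.window_name == window_name:
            return entry, i
    return ConnEntry(""), -1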

connectionManager_eth_2.0.vi

Manages the connections with the DEVILs in charge of the control of the elements in "elementList".


Path: /u2/dcs/source_solaris/1_paradise/util/connectionManager_eth_2.0.vi

 

As said above, this VI performs a complete management of the connections with the DEVILs involved in the control of a given set of elements.

First, the VI identifies which elements require a VME-to-VME connection and which ones require an Ethernet connection, and returns this information in the boolean array VME/ETH (FALSE = VME, TRUE = ETH).
Then it opens the required TCP connections (if any). In case of window reconfiguration, as for the MagTerminal when a new operating area is selected, it closes the TCP connections no longer needed.
The connectionManager_eth takes care of updating the global VI GConnList1_eth.

For the elements that require a VME-to-VME connection, the connectionManager_eth gets the RTDBAddresses for the static and the dynamic forks by calling the usual getRTDBAddress routine.
As an example of the use of the connectionManager_eth, let us suppose we pass the following elementList array to the VI:

elementList
-----------
QUATM004
QUATM005
CHHTB101
CHHTB102
...

The VI returns 4 arrays, aligned with one another and with elementList:

VME/ETH    RTDBStaAdd    RTDBDynAdd    classID
-------    ----------    ----------    -------
T          A0FF0000      A0FF0000      21
F          F4A05000      F4A06000      21
F          F4DE1000      F4DE2000      15
T          A0FF0000      A0FF0000      15
...

Looking at the above example, we see that the first element QUATM004 is controlled by a DEVIL with an Ethernet connection (VME/ETH = T). For this element, the two corresponding components of the RTDBStaAdd and RTDBDynAdd arrays are equal to each other and contain the same TCP_DCS connection ID.

The second element QUATM005 is controlled by a DEVIL with a VME-to-VME connection (VME/ETH = F), so the two corresponding components of the RTDBStaAdd and RTDBDynAdd arrays contain the actual VME addresses to be used by the static and dynamic fetch routines.

The connectionManager_eth also returns, for each element, the class ID (21 = MG1, 15 = CHN).
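
To make the dual addressing concrete, here is a hypothetical caller-side sketch; tcp_dcs_fetch and vme_fetch are placeholders, not actual routines of the system:

Dual fetch path (sketch)

def fetch_dynamic(i, vme_eth, rtdb_dyn_add, element_list):
    if vme_eth[i]:
        # Ethernet element: RTDBDynAdd holds a TCP_DCS connection ID;
        # fetch the DYN fork via the FETCH service (opcode 0x00000001).
        return tcp_dcs_fetch(rtdb_dyn_add[i], element_list[i], "DYN")
    else:
        # VME element: RTDBDynAdd holds the actual VME address.
        return vme_fetch(rtdb_dyn_add[i])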