Commands from level 1 to level 3 go straight from the consoles to the DEVILs via TCP/IP. At level 3, TCP receive buffers and LabVIEW queues accept and dispatch the commands.
A MySQL database — running on a dedicated server — implements the command, error and warning log services.
Client/Server transactions
The communications within the system are based on transactions over TCP, established between a Client (any control window on a console) and a Server (any DEVIL).
When a Client opens a connection with a Server, the Server launches a dedicated program (Servlet) that handles all requests coming from that Client.
Many Clients can open simultaneous connections with the same DEVIL; in this case the DEVIL launches, and keeps alive, many Servlet instances that operate concurrently.
The connection relies on the native TCP/IP transaction protocol (within a LabVIEW process), with no application-level handshake:
- the Server keeps running a TCP listener;
- the Client opens a TCP connection;
- the TCP listener on the Server accepts the TCP connections.
The TCP connection terminates automatically when the Client stops, that is, when the control window closes or the LabVIEW process quits (or even crashes). Should any of these conditions occur, the corresponding Servlet quits by itself.
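The pattern can be sketched in ordinary socket code. The following Python fragment only illustrates the listener/Servlet scheme described above; the real system uses LabVIEW VIs, and the port number here is an arbitrary assumption:

```python
import socket
import threading

TCP_DCS_PORT = 9999  # assumption: the actual port number is not given here

def servlet(conn: socket.socket) -> None:
    """Dedicated handler (one Servlet instance per Client)."""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:      # the control window closed or the process quit:
                break         # the Servlet quits by itself
            # ... interpret and execute TCP_DCS commands here ...

def server() -> None:
    """The Server keeps a TCP listener running and accepts each Client."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.bind(("", TCP_DCS_PORT))
        listener.listen()
        while True:
            conn, _addr = listener.accept()
            threading.Thread(target=servlet, args=(conn,), daemon=True).start()
```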
The TCP_DCS application protocol
Once the connection has been established, the Client asks the Server to do something or to return some data by means of a specific application protocol named TCP_DCS.
The Client always sends Command packets, and the Server answers with Ok packets, Result packets or Error packets.
Both the commands from a Client and the answers from the Server always start with a Header packet.
The Header packet will not be shown in the following description; think of it as always present. Logically, the Header packet "precedes the packet" rather than "being included in the packet" (see Fig. 1).
Fig. 1 - Structure of a command packet.
Header packet
Bytes | Name |
4 | packet length |
4 | Transaction ID |
4 | Unit ID |
- packet length: the length, in bytes, of the packet that follows the Header packet.
- Transaction ID: a number that can be used to validate the transaction.
- Unit ID: a number that can be used to select some action at the application level.
Command packet
Bytes | Name |
4 | opcode |
n | data (alias command arguments) |
- opcode: a unique number associated with the specific service.
- arguments: any data (string or binary stream) needed for the execution of the command (e.g. element name, set value, etc.). The arguments size can be calculated as: arguments size = (packet length) - 4
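As an illustration, a command packet could be assembled as follows. This Python sketch assumes big-endian (network) byte order, which is LabVIEW's default for flattened data; the function name is illustrative:

```python
import struct

def build_command(transaction_id: int, unit_id: int,
                  opcode: int, arguments: bytes = b"") -> bytes:
    """Assemble Header packet + command packet (big-endian assumed)."""
    body = struct.pack(">I", opcode) + arguments       # opcode + data
    header = struct.pack(">III",
                         len(body),        # packet length: bytes after the header
                         transaction_id,   # validates the transaction
                         unit_id)          # selects an action at application level
    return header + body

# arguments size = (packet length) - 4:
packet = build_command(1, 0, 0x00000001, b"QUATM004,DYN")
length = struct.unpack(">I", packet[:4])[0]
assert length - 4 == len(b"QUATM004,DYN")
```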
Opcode | Name | Associated function and parameters |
0x00000000 | DO_NOT_USE | must NOT be used |
0x00000001 | FETCH | fetch the STA or DYN fork: <elementName>,STA|DYN |
0x00000002 | SEND CMD 1 TO 3 | send a command from the console to the DEVIL: <command_string> |
0x00000003 | FETCH_BUFFER | fetch n bytes from a buffer global variable: <#OfBytes> |
0x00000004 | ECHO | send n bytes and get the same n bytes back: <byte_string> |
0x00000005 | FETCH_BLOCK | fetch STA or DYN fork (full array): <elementName>,STA|DYN (the elementName is used by the servlet just to recognize the proper class and can be any elementName belonging to that class in that DEVIL) |
0x00000006 | GET_ALIVE_COUNT | read the DEVIL alive counter: no arguments |
0x000000xx | not implemented | must NOT be used |
0x000000ff | DO_NOT_USE | must NOT be used |
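For illustration, the opcodes above can be collected as symbolic constants (values from the table, names adapted to Python identifiers; build_command is the hypothetical helper from the sketch above):

```python
# Values from the opcode table; names adapted to Python identifiers.
FETCH           = 0x00000001  # <elementName>,STA|DYN
SEND_CMD_1_TO_3 = 0x00000002  # <command_string>
FETCH_BUFFER    = 0x00000003  # <#OfBytes>
ECHO            = 0x00000004  # <byte_string>
FETCH_BLOCK     = 0x00000005  # <elementName>,STA|DYN
GET_ALIVE_COUNT = 0x00000006  # no arguments

# e.g. an ECHO request, using build_command from the sketch above:
echo_packet = build_command(transaction_id=42, unit_id=0,
                            opcode=ECHO, arguments=b"ping")
```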
Ok packet
Bytes | Name |
4 | packet code, always = 0x00000000 |

Error packet
Bytes | Name |
4 | packet code, always = 0x000000ff |

Result packet
Bytes | Name |
4 | packet code |
n | raw data |
- packet code: the Server returns in this field the command number (as from the Client request).
- raw data: any data (string or binary stream) as requested by the command (e.g. element dynamic record, data buffer, etc.)
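Putting the three answer types together, a Client could classify a Server answer as in the following sketch (same big-endian assumption as above; the Ok and Error codes are taken from the tables above):

```python
import struct

OK_CODE    = 0x00000000   # Ok packet
ERROR_CODE = 0x000000ff   # Error packet

def parse_answer(body: bytes, sent_opcode: int) -> tuple[str, bytes]:
    """Classify a Server answer (the 12-byte header has already been stripped)."""
    (code,) = struct.unpack(">I", body[:4])
    if code == OK_CODE:
        return "ok", b""
    if code == ERROR_CODE:
        return "error", b""
    if code == sent_opcode:            # Result packet echoes the command number
        return "result", body[4:]      # raw data requested by the command
    raise ValueError(f"unexpected packet code 0x{code:08x}")
```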
LabVIEW VIs
The TCP_DCS application protocol is implemented by a set of LabVIEW VIs. There are VIs to be used at level 3 by the DEVILs and other VIs to be used at level 1 by the control windows.
Third level VIs
TCP_DCS_readPacket.vi
Reads the packet coming from level 1 (e.g. from a user window).
Path: /u2/dcs/source_linux/3_hell/TCP_DCS_readPacket.vi (reentrant)
The VI executes two TCP reads: the first one gets the 12 bytes of the header, whilst the second one gets the 4 bytes of the command to be executed plus the parameters (if any).
The first TCP Read timeout is set to -1 in order to allow an infinite wait for the header arrival. The second TCP Read timeout is set to 3000 ms, because it would make no sense to wait for an indefinite time after the header has been received.
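In ordinary socket terms, this two-read logic can be sketched as follows (Python, big-endian assumed; recv_exactly is an illustrative helper, since a raw recv may return fewer bytes than requested):

```python
import socket
import struct

def recv_exactly(conn: socket.socket, n: int) -> bytes:
    """Loop until exactly n bytes arrive (a raw recv may return fewer)."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def read_packet(conn: socket.socket) -> tuple[int, int, bytes]:
    conn.settimeout(None)              # first read, timeout -1: wait forever
    header = recv_exactly(conn, 12)    # packet length, Transaction ID, Unit ID
    length, transaction_id, unit_id = struct.unpack(">III", header)
    conn.settimeout(3.0)               # second read, 3000 ms: the body must follow
    body = recv_exactly(conn, length)  # 4-byte opcode + parameters (if any)
    return transaction_id, unit_id, body
```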
TCP_DCS_sendPacket.vi
Sends a TCP_DCS answer packet to level 1 (e.g. to a user window).
Path: /u2/dcs/source_linux/3_hell/TCP_DCS_sendPacket.vi (reentrant)
Assembles the packet and sends it through a single TCP Write, appending the requested information to the header. The byteNumber is calculated including the header length.
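The corresponding write logic is straightforward; a Python sketch under the same byte-order assumption:

```python
import struct

def send_packet(conn, transaction_id: int, unit_id: int, body: bytes) -> int:
    """Prefix the header and send everything with a single TCP Write."""
    packet = struct.pack(">III", len(body), transaction_id, unit_id) + body
    conn.sendall(packet)
    return len(packet)    # byteNumber: includes the 12-byte header
```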
servlet_eth_x.x.vit
Path: /u2/dcs/source_linux/devils/DEVIL_eth/servlet_eth_3.x.vit (reentrant, called by reference as a VIT instance)
The VI operates in three modes managed by the DEVIL process:
mode = 0: the Servlet recovers the references associated with the typeDefs that the DEVIL has to deal with.
mode = 1: the Servlet receives, interprets and executes the commands, according to the TCP_DCS protocol.
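The Servlet behaviour can be sketched as a loop built on the read/send sketches above. This is only an illustration: the real servlet_eth is a LabVIEW VIT instance, and execute_command is a hypothetical stand-in for the DEVIL's command handling:

```python
import struct

def execute_command(opcode: int, arguments: bytes, typedef_refs) -> bytes:
    """Hypothetical dispatcher: a real DEVIL would act on its database here.
    Returning packet code 0x00000000 corresponds to an Ok answer."""
    return struct.pack(">I", 0x00000000)

def servlet_loop(conn, typedef_refs) -> None:
    # mode = 0: the references to the typeDefs handled by this DEVIL have been
    # recovered beforehand (represented here by the typedef_refs argument).
    while True:
        # mode = 1: receive, interpret and execute TCP_DCS commands.
        transaction_id, unit_id, body = read_packet(conn)    # sketch above
        (opcode,) = struct.unpack(">I", body[:4])
        arguments = body[4:]
        reply = execute_command(opcode, arguments, typedef_refs)
        send_packet(conn, transaction_id, unit_id, reply)    # sketch above
```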
TCP_DCS_query.vi
Path: /u2/dcs/source_solaris/common/ethernet/TCP_DCS_query.vi (reentrant)
First level VIs
TCP_DCS_testConn.vi
Path: /u2/dcs/source_solaris/common/ethernet/TCP_DCS_testConn.vi
GConnList1_eth.vi
Path: /u2/dcs/source_solaris/globals/GConnList1_eth.vi
This global VI keeps a list of the DEVILs (reached through the TCP_DCS protocol) invoked by each window, together with the associated connection IDs. The GConnList1_eth VI itself provides all the services needed to manage the data.
service | required inputs | description |
init | none | initializes the array of clusters to an empty array |
register | windowName | appends a new component to the array (windowName = the windowName value; DEVIL array = empty; connID array = empty) |
search | windowName | searches for the windowName string in all the array components. If there is a match, the component and its index are returned; if there is no match, an empty component and -1 are returned instead |
get | index | if the index is in range (0, array dimension), the corresponding component is returned; if the index is out of range, an empty component and -1 are returned instead |
update | windowName, element | replaces the component corresponding to windowName with the element value |
remove | windowName | removes the component corresponding to windowName |
The GConnList1_eth is initialized at startup time by the DANTE top-bar.
When a window starts up (or reconfigures itself with different elements), it must call the connectionManager_eth VI (see below), which takes care of registering the window in the global, finding the DEVILs involved in the control of its elements, opening the TCP_DCS connections with those DEVILs and closing the connections no longer needed.
When a window closes, it must remove itself from the GConnList1_eth by calling it in remove mode.
In this way, the GConnList1_eth keeps an always up-to-date list of window names, each one associated with an array of DEVIL numbers and an array of ethernet connection IDs.
The GConnList1_eth holds an array of clusters; each cluster consists of a string and two arrays of integers.
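For illustration, the same structure and services can be sketched in Python (names mirror the service table above; this is not the LabVIEW implementation):

```python
from dataclasses import dataclass, field

@dataclass
class ConnEntry:
    """One cluster: a window name plus two integer arrays."""
    window_name: str
    devils: list[int] = field(default_factory=list)    # DEVIL array
    conn_ids: list[int] = field(default_factory=list)  # connID array

conn_list: list[ConnEntry] = []          # init: an empty array of clusters

def register(window_name: str) -> None:
    """Append a new component with empty DEVIL and connID arrays."""
    conn_list.append(ConnEntry(window_name))

def search(window_name: str) -> tuple[ConnEntry | None, int]:
    """Return the matching component and its index, or (None, -1)."""
    for index, entry in enumerate(conn_list):
        if entry.window_name == window_name:
            return entry, index
    return None, -1

def remove(window_name: str) -> None:
    """Drop the component corresponding to windowName."""
    _entry, index = search(window_name)
    if index >= 0:
        conn_list.pop(index)
```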
connectionManager_eth_2.0.vi
Manages the connections with the DEVILs in charge of controlling the elements listed in "elementList".
Path: /u2/dcs/source_solaris/1_paradise/util/connectionManager_eth_2.0.vi
First, the VI identifies which elements require a VME-to-VME connection and which require an ethernet connection, and returns this information in the boolean array VME/ETH (FALSE = VME, TRUE = ETH).
Then it opens the required TCP connections (if any). In case of window reconfiguration, as for the MagTerminal when a new operating area is selected, it closes the TCP connections no longer needed.
The connectionManager_eth takes care of updating the global VI GConnList1_eth.
For the elements that require a VME-to-VME connection, the connectionManager_eth gets the RTDBAddresses for the static and the dynamic forks by calling the usual getRTDBAddress routine.
An example of use of the connectionManager_eth: let us suppose we pass the following elementList array to the VI:
elementList | QUATM004 | QUATM005 | CHHTB101 | CHHTB102 | ... |
the VI returns 4 arrays aligned with one another:
VME/ETH | T | F | F | T | ... |
RTDBStaAdd | A0FF0000 | F4A05000 | F4DE1000 | A0FF0000 | ... |
RTDBDynAdd | A0FF0000 | F4A06000 | F4DE2000 | A0FF0000 | ... |
classID | 21 | 21 | 15 | 15 | ... |
Looking at the above example, we see that the first element QUATM004 is controlled by a DEVIL with ethernet connection (VME/ETH = T). For this element, the two corresponding components of the RTDBStaAdd and RTDBDynAdd arrays are equal and contain the same TCP_DCS connection ID.
The second element QUATM005 is controlled by a DEVIL with VME-to-VME connection (VME/ETH = F) so that the two corresponding components of the RTDBStaAdd and RTDBDynAdd arrays contain the actual VME addresses to use for the dynamic and static fetch routines.
The connectionManager_eth also returns, for each element, the class ID (21 = MG1, 15 = CHN).
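The way a window consumes these aligned arrays can be sketched as follows; all names are illustrative, and the example values come from the table above:

```python
def dyn_fetch_target(i: int, vme_eth: list[bool], rtdb_dyn_add: list[int]):
    """Return which fetch path to use for element i, and the value to pass it."""
    if vme_eth[i]:
        # TRUE = ETH: the array component holds the TCP_DCS connection ID
        return "TCP_DCS", rtdb_dyn_add[i]
    # FALSE = VME: the component is the actual VME address for the fetch routine
    return "VME", rtdb_dyn_add[i]

# With the example arrays above (addresses written as integers):
vme_eth = [True, False, False, True]
rtdb_dyn_add = [0xA0FF0000, 0xF4A06000, 0xF4DE2000, 0xA0FF0000]
print(dyn_fetch_target(0, vme_eth, rtdb_dyn_add))   # ETH: ('TCP_DCS', <connection ID>)
print(dyn_fetch_target(1, vme_eth, rtdb_dyn_add))   # VME: ('VME', <VME address>)
```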