Fast data transmission technologies (optical in particular) are being deployed on an unprecedented scale in the LHC-era experiments. Most of the resulting optical links are used to connect the detector front-ends for both data readout and detector control, while a smaller number are used for data transmission between cards, crates and racks in the counting rooms.
Forthcoming experiments require a new generation of data acquisition systems that are cost-effective and maximize the use of commercially available components. Using standard networking protocols and hardware will ensure compatibility between the different components of the detectors, whilst allowing seamless incremental upgrades of individual systems.
Some new experiments (e.g. ILC) will take data without the use of a hardware trigger. Consequently, to minimize dead time, all experimental data must be read from the detector in the 200 ms gaps between beam bunch trains. This semi-continuous data stream must be routed to DAQ computers, currently assumed to be PCs, processed and then sent to offline storage.
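To make the bandwidth implication concrete, a back-of-the-envelope estimate helps; the per-train data volume used below is purely an assumption for illustration, not an ILC parameter. If a concentrator accumulates a volume $D$ of data per bunch train and a new train arrives every 200 ms, the sustained output bandwidth needed to drain it in time is

\[
B = \frac{D}{200\ \mathrm{ms}}, \qquad \text{e.g.}\ D = 100\ \mathrm{MB} \;\Rightarrow\; B = \frac{100 \times 8\ \mathrm{Mbit}}{0.2\ \mathrm{s}} = 4\ \mathrm{Gb/s},
\]

a rate that would saturate several 1 Gigabit Ethernet links but fits comfortably within a single 10 Gigabit Ethernet link.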
The front-end electronics are connected to concentrators over custom high-speed links. The concentrators, which may perform some reformatting or zero suppression, then feed the data over a network to compute nodes, where the data are processed and stored. Whilst the use of some application-specific devices is inevitable, it is highly desirable to use commercial networking devices and protocols as much as possible. One possible option would be the use of 1 and 10 Gigabit Ethernet.
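As an illustration of the commodity approach, the C sketch below shows the kind of data injection a concentrator (or its emulator) could perform over a standard TCP/IP stack: it opens a stream to a compute node and pushes fixed-size event fragments. The node address, port and fragment size are hypothetical values chosen for the example.

    /* Concentrator-style data injector: streams fixed-size event fragments
     * to a compute node over TCP. Address, port and fragment size are
     * illustrative assumptions. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define FRAG_SIZE 4096                 /* assumed fragment size (bytes) */

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in node = {0};
        node.sin_family = AF_INET;
        node.sin_port   = htons(5000);                      /* hypothetical port */
        inet_pton(AF_INET, "192.168.1.10", &node.sin_addr); /* hypothetical node */

        if (connect(fd, (struct sockaddr *)&node, sizeof node) < 0) {
            perror("connect"); close(fd); return 1;
        }

        char frag[FRAG_SIZE];
        memset(frag, 0xAB, sizeof frag);                    /* dummy payload */

        for (int i = 0; i < 1000; i++) {                    /* 1000 fragments */
            ssize_t sent = 0;
            while (sent < (ssize_t)sizeof frag) {           /* handle short writes */
                ssize_t n = write(fd, frag + sent, sizeof frag - sent);
                if (n < 0) { perror("write"); close(fd); return 1; }
                sent += n;
            }
        }
        close(fd);
        return 0;
    }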
In particular, since much of the front-end electronics is already, and is likely to remain, controlled by Field Programmable Gate Arrays (FPGAs), it is vital to assess the suitability of FPGAs for driving network traffic directly and to optimize the performance of the hardware and protocols used to send the data over the network.
Without some form of traffic shaping between the concentrators and the destination PCs, the classic bottleneck problem would arise at the egress of the Ethernet switch, with data queuing for transmission to the processing node and the possibility of packet loss. The growing convergence of storage protocols (iSCSI, Fibre Channel over Ethernet, iWARP, RDMA) around the 10 Gigabit Ethernet standard makes it attractive for deployment in new data acquisition systems.
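A minimal software model of such traffic shaping is a token bucket: each sender paces its output so that the aggregate arriving at the switch egress never exceeds the destination link rate. The sketch below (rate and burst parameters are assumptions) shows the bookkeeping involved; a shaper in the concentrator FPGA would realize the same logic with hardware counters.

    /* Token-bucket pacer: shaper_wait() blocks until `len` bytes of credit
     * have accumulated at the configured rate. Note that `len` must not
     * exceed the bucket depth, or the call never returns. */
    #include <stddef.h>
    #include <time.h>

    typedef struct {
        double rate_Bps;     /* sustained rate, bytes per second */
        double burst_bytes;  /* bucket depth: largest allowed burst */
        double tokens;       /* current credit, in bytes */
        struct timespec last;
    } shaper_t;

    static void shaper_init(shaper_t *s, double rate_Bps, double burst_bytes)
    {
        s->rate_Bps = rate_Bps;
        s->burst_bytes = burst_bytes;
        s->tokens = burst_bytes;
        clock_gettime(CLOCK_MONOTONIC, &s->last);
    }

    static void shaper_wait(shaper_t *s, size_t len)
    {
        for (;;) {
            struct timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);
            double dt = (now.tv_sec - s->last.tv_sec)
                      + (now.tv_nsec - s->last.tv_nsec) * 1e-9;
            s->last = now;
            s->tokens += dt * s->rate_Bps;            /* accrue credit */
            if (s->tokens > s->burst_bytes)
                s->tokens = s->burst_bytes;           /* cap at bucket depth */
            if (s->tokens >= (double)len) {
                s->tokens -= (double)len;             /* spend and transmit */
                return;
            }
            double need = ((double)len - s->tokens) / s->rate_Bps;
            struct timespec ts = { (time_t)need,
                                   (long)((need - (double)(time_t)need) * 1e9) };
            nanosleep(&ts, NULL);                     /* wait for more credit */
        }
    }

In the injector sketch above, a call such as shaper_wait(&s, FRAG_SIZE) placed immediately before each write would cap that sender's share of the egress link.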
The proposed activity envisages the design and construction of a board or adapter carrying a suitable FPGA together with 10 Gigabit Ethernet MAC and PHY chips, to be used as a front-end readout emulator.
The emulator lends itself to testing different approaches to data injection; besides the complex programming model of RDMA over a full-featured TCP transport, it makes sense to investigate a custom-developed TCP stack embedded in the FPGA.
In the context of front-end readout, it should be noted that a stream-oriented, reliable protocol is needed.
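To indicate the minimum that such an embedded stack must keep per connection, the following C sketch models the transmit-side bookkeeping of a deliberately stripped-down TCP sender: sequence and acknowledgement tracking, a single fixed window and go-back-N retransmission. These simplifications are assumptions chosen to keep the logic small enough for an FPGA, not a description of any existing core.

    /* Per-connection transmit state for a minimal reliable-stream sender. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t snd_una;   /* oldest unacknowledged sequence number */
        uint32_t snd_nxt;   /* next sequence number to transmit */
        uint32_t snd_wnd;   /* receiver-advertised window, in bytes */
        uint32_t rto_ticks; /* retransmission timeout, in clock ticks */
        uint32_t timer;     /* ticks since oldest unacked byte was sent */
    } tcp_tx_state_t;

    /* May `len` more payload bytes be placed on the wire? */
    static bool tx_window_open(const tcp_tx_state_t *s, uint32_t len)
    {
        return (s->snd_nxt - s->snd_una) + len <= s->snd_wnd;
    }

    /* Called on every incoming ACK: slide the window forward. */
    static void on_ack(tcp_tx_state_t *s, uint32_t ack)
    {
        if ((int32_t)(ack - s->snd_una) > 0) {  /* new data acknowledged */
            s->snd_una = ack;
            s->timer = 0;
        }
    }

    /* Called once per clock tick; returns true when the caller must
     * retransmit everything from snd_una (go-back-N). */
    static bool on_tick(tcp_tx_state_t *s)
    {
        if (s->snd_una != s->snd_nxt && ++s->timer >= s->rto_ticks) {
            s->snd_nxt = s->snd_una;
            s->timer = 0;
            return true;
        }
        return false;
    }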
The effectiveness of this custom-built TCP stack in an FPGA is to be tested both on its own and in combination with a traffic-shaping mechanism. The test-bed will allow investigations of latency, throughput, buffering schemes and global event-building bandwidth, and also the evaluation of novel traffic-shaping schemes based on the new IEEE extensions covering virtual links and hardware-driven per-link congestion signalling.
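On the receiving side of the test-bed, throughput can be measured with nothing more elaborate than a socket that drains the incoming stream and timestamps the transfer, as in the sketch below (port and buffer size are again illustrative):

    /* Throughput probe: accepts one TCP stream, drains it and reports
     * the achieved rate in Gb/s. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        if (lfd < 0) { perror("socket"); return 1; }

        int one = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);        /* hypothetical port */
        if (bind(lfd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(lfd, 1) < 0) {
            perror("bind/listen"); return 1;
        }

        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0) { perror("accept"); return 1; }

        char buf[1 << 16];                         /* 64 KiB read buffer */
        unsigned long long total = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        ssize_t n;
        while ((n = read(cfd, buf, sizeof buf)) > 0)
            total += (unsigned long long)n;

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        printf("%llu bytes in %.3f s = %.2f Gb/s\n",
               total, secs, total * 8.0 / secs / 1e9);
        close(cfd);
        close(lfd);
        return 0;
    }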
The proposed R&D activity is geared towards the adoption of 10 Gigabit Ethernet technology in forthcoming experiments. A good candidate is the SuperB project, in which the proponents are involved in the specification of the readout electronics and data acquisition system. Finally, an expression of interest to collaborate in this development has been made by the CERN CMS DAQ group in view of the forthcoming SLHC upgrade.