NetCDF based data archiving system applied to ITER Fast Plant System Control prototype

Fusion Engineering and Design 87 (2012) 2223–2228

R. Castro a,∗, J. Vega a, M. Ruiz b, G. De Arcas b, E. Barrera b, J.M. López b, D. Sanz b, B. Gonçalves c, B. Santos c, N. Utzel d, P. Makijarvi d

a Asociación EURATOM/CIEMAT para Fusión, Madrid, Spain
b Grupo de Investigación en Instrumentación y Acústica Aplicada, UPM, Madrid, Spain
c Associação EURATOM/IST, IPFN – Laboratório Associado, IST, Lisboa, Portugal
d ITER Organization, St. Paul lez Durance Cedex, France

∗ Corresponding author. Tel.: +34 913466419. E-mail address: [email protected] (R. Castro).

Highlights

• Implementation of a data archiving solution for a Fast Plant System Controller (FPSC) for ITER CODAC.
• Data archiving solution based on the scientific NetCDF-4 file format and Lustre storage clustering.
• EPICS-based control solution.
• Test results and detailed analysis of using NetCDF-4 and clustering technologies for fast acquisition data archiving.

Article history: Available online 1 June 2012

Keywords: Control; Archiving; Cluster; NetCDF; EPICS

Abstract

EURATOM/CIEMAT and the Technical University of Madrid (UPM) have been involved in the development of a FPSC [1] (Fast Plant System Control) prototype for ITER, based on PXIe (PCI eXtensions for Instrumentation). One of the main focuses of this project has been data acquisition and all the related issues, including scientific data archiving. Additionally, a new data archiving solution has been developed to demonstrate the obtainable performance and possible bottlenecks of scientific data archiving in Fast Plant System Control.

The presented system implements a fault-tolerant architecture over a Gigabit Ethernet network, where FPSC data are reliably archived on a remote system while remaining accessible for redistribution within the duration of a pulse. The storing service is supported by a clustering solution to guarantee scalability, so that FPSC management and configuration may be simplified and a unique view of all archived data provided. All the involved components have been integrated under EPICS [2] (Experimental Physics and Industrial Control System), implementing in each case the necessary extensions, state machines and configuration process variables. The prototyped solution is based on the NetCDF-4 [3,4] (Network Common Data Format) file format in order to incorporate important features such as scientific data model support, huge file size management, platform-independent encoding, and single-writer/multiple-readers concurrency.

In this contribution, a complete description of the above-mentioned solution is presented, together with the most relevant results of the tests performed, focusing on the benefits and limitations of the applied technologies.

1. Introduction

CIEMAT and UPM have been involved in an ITER project for the development of a FPSC [1] (Fast Plant System Control) prototype based on PXIe [5] technology during the last two years. The project has focused on data acquisition aspects inside a fast control system with two main objectives. The first one has been to develop a working system that fulfills the set of requirements of an ITER plant fast control system. The second one has been to develop a system that is compatible with the ITER CODAC Core System distribution technologies and, more specifically, a system based on EPICS [2] as the control technology.

One of the facets of the project has been the implementation of a data archiving solution for the FPSC. In this sense, an approach based on the NetCDF-4 [3,4] format and storage clustering technologies has been implemented, and a set of tests has been run in order to analyze the performance of the system. This paper includes a detailed description of the solution, a summary of the performance measures obtained from the tests, and the most relevant conclusions about the implemented system and the use of these archiving technologies in fast control.

1.1. FPSC

ITER IO started in 2009 a project for developing a prototype of the FPSC using PXIe technologies. Apart from other constraints, there have been three specific requirements that have driven the development of the system:

• The FPSC had to be able to manage high-rate data acquisition, including a highly efficient data distribution among different functional elements for processing, monitoring and archiving data during the length of the pulse. It had to be able to acquire and manage data at a sampling rate of (at least) 1 MSample/s per channel and, additionally (as a secondary objective), to saturate the 1 Gb/s network connection between the FPSC and the archiving system.
• All the data acquired during the long pulse (about 1800 s) had to be archived remotely.
• The FPSC had to run on a RHEL (Red Hat Enterprise Linux) system, and its control system had to be based on EPICS and, more specifically, on "asynDriver" [6] technology.

1.2. EPICS

EPICS is an open source technology that includes a wide set of tools and applications to implement distributed control systems. Using Client/Server and Publish/Subscribe techniques to communicate between computers, EPICS is not only able to manage local control tasks but also to publish and monitor control parameters. Additionally, asynDriver is an extension for EPICS that simplifies and homogenizes the interfaces of the different devices to be controlled.
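As a minimal illustration of the Client/Server access pattern mentioned above, the following C sketch reads a process variable over Channel Access using the standard cadef.h client API. The PV name FPSC:AI:RATE is a hypothetical example of ours, not a record defined by the prototype.

/* Minimal Channel Access read, illustrating the EPICS client/server model.
 * Build against EPICS base (libca). The PV name below is a hypothetical example. */
#include <stdio.h>
#include <cadef.h>

int main(void)
{
    chid channel;
    double value;

    SEVCHK(ca_context_create(ca_disable_preemptive_callback), "ca_context_create");
    /* Ask the CA client library to resolve the PV on the network (client/server). */
    SEVCHK(ca_create_channel("FPSC:AI:RATE", NULL, NULL, 10, &channel), "ca_create_channel");
    SEVCHK(ca_pend_io(5.0), "connection timeout");

    /* Fetch the current value published by the IOC that owns the record. */
    SEVCHK(ca_get(DBR_DOUBLE, channel, &value), "ca_get");
    SEVCHK(ca_pend_io(5.0), "read timeout");
    printf("FPSC:AI:RATE = %f\n", value);

    ca_clear_channel(channel);
    ca_context_destroy();
    return 0;
}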

1.3. Highly efficient data distribution

EPICS has an important limitation as regards managing massive data, as obtained from high-rate data acquisition devices, and transferring them among the local tasks of a system. The classic links between EPICS records are not "quick" enough for this goal, and therefore a new technology has been implemented to manage data block links among local concurrent tasks, with the general objectives of optimizing data throughput, minimizing system interrupts and minimizing CPU consumption. The solution is named DPD (Data Process and Distribution), and it works on top of the standard EPICS and asynDriver technology. It provides EPICS with the data block link technology mentioned above and with some additional routing commands and performance measures. DPD is not described in detail here, as it is not the main topic of this paper, but it is important to take it into account, since it is the technological basis on which the FPSC prototype has been implemented.

From the design point of view, the FPSC is composed of several asynDriver software devices connected by data block links. Basically, DPD components (acquisition, data monitoring, archiving, etc.) are implemented as threads directly managed by EPICS.
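Although DPD itself is not detailed in this paper, the producer/consumer coupling that a data block link provides between two local threads can be illustrated with a minimal C sketch. The following fragment is an assumption of ours, not the actual DPD implementation: a bounded queue of data blocks shared between an acquisition thread and an archiving thread, protected with a mutex and condition variables.

/* Illustrative producer/consumer data block link (not the actual DPD code). */
#include <pthread.h>
#include <string.h>

#define LINK_DEPTH    64      /* number of blocks buffered in the link */
#define BLOCK_SAMPLES 4096    /* samples per data block (assumed)      */

typedef struct {
    unsigned int samples[BLOCK_SAMPLES];
    unsigned long long timestamp_ns;
} data_block_t;

typedef struct {
    data_block_t blocks[LINK_DEPTH];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
} block_link_t;

void link_init(block_link_t *l)
{
    memset(l, 0, sizeof(*l));
    pthread_mutex_init(&l->lock, NULL);
    pthread_cond_init(&l->not_empty, NULL);
    pthread_cond_init(&l->not_full, NULL);
}

/* Called by the acquisition thread: copies one block into the link. */
void link_put(block_link_t *l, const data_block_t *b)
{
    pthread_mutex_lock(&l->lock);
    while (l->count == LINK_DEPTH)          /* link full: wait (or spill to a backup buffer) */
        pthread_cond_wait(&l->not_full, &l->lock);
    l->blocks[l->head] = *b;
    l->head = (l->head + 1) % LINK_DEPTH;
    l->count++;
    pthread_cond_signal(&l->not_empty);
    pthread_mutex_unlock(&l->lock);
}

/* Called by the archiving/monitoring thread: removes one block from the link. */
void link_get(block_link_t *l, data_block_t *out)
{
    pthread_mutex_lock(&l->lock);
    while (l->count == 0)
        pthread_cond_wait(&l->not_empty, &l->lock);
    *out = l->blocks[l->tail];
    l->tail = (l->tail + 1) % LINK_DEPTH;
    l->count--;
    pthread_cond_signal(&l->not_full);
    pthread_mutex_unlock(&l->lock);
}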

2. Archiving solution

One of the main parts of the FPSC prototype project has been the development of an archiving solution. In this case, the main requirements have been:


• Compatibility with Ethernet networks: The solution should provide remote data storage over a standard Gigabit Ethernet network.
• Reliability: For scientific data archiving, the main requirement is to guarantee the reliability of the storage mechanism.
• Long pulse: The solution must be compatible with long pulse operation (currently defined as 1800 s). From this requirement a set of properties is derived:
  ◦ Continuous archiving within the pulse.
  ◦ Huge amounts of data managed.
  ◦ Archived data can be read during the pulse.
  ◦ The FPSC has to be able to start/restart during the pulse.
• Scalable: The archiving solution must be able to scale in capacity and performance.
• Valid for different data types: The solution must be flexible enough to easily incorporate new types of data.
• Integrated in the EPICS control system: The control solution adopted by ITER.
• Fault tolerance: The solution must be fault tolerant in order to minimize scientific data loss.

Although data archiving solutions in fusion have been mainly oriented to short pulse experiments, these new approaches are closer to ITER requirements [7,8].

2.1. Architecture

The solution consists of a main storage cluster, to which clients (FPSC nodes and first-level data distribution and monitoring systems) can connect and use it as a high-capacity common storage unit. Some more detailed considerations about clustering storage are discussed later (Section 2.4).

The nature of the archived data, coming from continuous data flows, as well as the nature of the data accesses, based on simple requests for specific time intervals of a signal, has been considered. Finally, the archiving schema was implemented using NetCDF-4 files, and the storage unit was defined on the basis of one file per signal and per shot. A more detailed explanation of the NetCDF-4 technology and the storage schemas used can be found in Section 2.3.

The decision of using a file per pulse and per signal implies managing a large number of files per shot. This fact generates two main problems. The first one concerns information management organization. To solve it, complementary metadata systems based on large database technologies should be implemented in order to provide efficient access to the data. Traditional database-based storage solutions have not proved totally efficient for massive data storage, although they have been optimized to provide large data indexing solutions and flexible search engines. The second main problem derives from operating system limitations as regards file systems. To solve this inconvenience, the present solution is based on the Lustre technology (described in Section 2.4), which has been designed to operate with thousands of nodes, where each node can store millions of files per directory, in order to achieve a correct distribution of files. Even so, the presented work is focused on a first-level tier storage solution, closer to the data acquisition nodes. At a second-level tier (with different performance requirements), an optimization of the number of files, or even an alternative data storage technology, should be taken into account.

2.2. The FPSC

On the client side, the present FPSC node is running on a system with RHEL 5.5, EPICS 3.14.12 and asynDriver 4.16. The FPSC has been implemented using the DPD technology and, as shown in Fig. 1, it is formed by: an asynDriver FlexRIO acquisition module (responsible for acquiring analog input data), an EPICS waveform monitoring module (responsible for distributing EPICS monitoring data), and three remote data archiving modules (responsible for archiving data blocks in the remote archiving system). From the hardware point of view, an FPGA (Field Programmable Gate Array) has been programmed to implement a data acquisition card with 3 DMA (Direct Memory Access) ports that group 12, 12 and 8 analog input channels respectively. From the software point of view, every DMA requires an independent acquisition thread. This is why three different block links with their corresponding remote archiving modules have been configured, one per data input thread.

Fig. 1. FPSC configuration in archiving tests.

From the fault tolerance perspective, the links to the archiving modules are configured in backup mode. This is an option that has been specifically developed to prevent data link interruptions in archiving modules, but it can be activated in any DPD data block link. This feature provides different buffers at different levels and uses different storage devices. The block link activates a new buffer one level below the current buffer when the current one is full for writing (usually because there is an error that blocks data consumption at the other side of the link). If the consumer starts to read very quickly, the buffers are sequentially emptied until the normal behavior of the block link has been restored up to the buffer of the highest level. The objective of this type of backup block link is to prevent data being discarded when the receptor is not quick enough to consume it; for example, when the archiving module detects an error in the connection with the remote storage service.

Continuing with the FPSC development, the archiving module implements a set of primitives that cover all the required archiving functionality (a hypothetical C sketch of this interface is given after the list):

• initChannel: This primitive is used to initialize all the variables and connections required for archiving.
• openSource: This primitive has to be used either when a new data source appears for the current pulse or when data from a new pulse appears. It is responsible for creating (or opening, if it was already created) a new file to store the data.
• storeBlock: This primitive is used every time a data block needs to be stored.
• closeSource: This primitive corresponds to the last data block of a signal and a pulse. It is used to close the corresponding store file.
• closeChannel: It is used to close all pending connections and all pending store files.
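The exact signatures of these primitives are not given in the paper; the following C header is a hypothetical sketch of how such an archiving-module interface could look, with the time stamp split into seconds and nanoseconds as in the storeBlock message described in Section 3.1.

/* Hypothetical C interface for the archiving primitives (signatures assumed). */
#ifndef FPSC_ARCHIVING_H
#define FPSC_ARCHIVING_H

#include <stddef.h>
#include <stdint.h>

typedef struct arch_channel arch_channel_t;   /* opaque per-link archiving context */

/* Initialize the variables and connections needed by the archiving module. */
arch_channel_t *initChannel(const char *client_id, const char *time_ref /* "UTC" or "TAI" */);

/* Create (or reopen) the storage file for a data source within a pulse. */
int openSource(arch_channel_t *ch, const char *source_id, const char *pulse_id,
               int data_type, const void *init_header, size_t header_len);

/* Store one acquired data block, time stamped with seconds and nanoseconds. */
int storeBlock(arch_channel_t *ch, const char *source_id,
               uint32_t ts_sec, uint32_t ts_nsec,
               const void *payload, size_t payload_len);

/* Close the file associated with a data source at the end of the pulse. */
int closeSource(arch_channel_t *ch, const char *source_id);

/* Close all pending connections and all pending store files. */
int closeChannel(arch_channel_t *ch);

#endif /* FPSC_ARCHIVING_H */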

2.3. NetCDF-4

NetCDF is a scientifically oriented file format with a complete set of libraries and management tools (supported by a wide scientific community), whose main objective is to make data platform independent and portable. In its latest version, 4, the NetCDF storage layer is completely based on the HDF-5 [9] technology, and its libraries work almost as a wrapper for the HDF-5 libraries.

There is a list of important advantages that support the use of NetCDF-4 as a FPSC data archiving format. The following list includes the most important ones:

• Self-describing data management: Any client can completely manage and know the format of the data stored in a file. This feature also helps to keep version compatibility.
• Huge file size support: With no size limits apart from those of the operating system or file system.
• Optimized for data segment access: It perfectly fits the data access requirements for scientific data archiving in long pulse experiments.
• Data appending: It is possible to append new data to a file without copies or redefinitions.
• Concurrent single writer/multiple readers: It gives the possibility of accessing archived data during the pulse.

In the case of the FPSC, for high data acquisition rates, data block management is mandatory. It prevents the inadmissible overhead of including metadata per sample (a time stamp, for example). NetCDF-4 not only fully supports data blocks, but has also demonstrated its ability, simplicity and precision for defining new data types during the project.
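As an example of the data-appending and block-oriented access just described, the following C sketch creates one NetCDF-4 file for a signal of a pulse, defines an unlimited sample dimension and appends data blocks to it with the standard netcdf.h API. The file name, variable name and attribute are illustrative assumptions of ours, not the layout used by the prototype.

/* Append acquired data blocks to a per-signal, per-pulse NetCDF-4 file.
 * Names and layout are illustrative assumptions; build with -lnetcdf. */
#include <netcdf.h>
#include <stdio.h>
#include <stdlib.h>

#define CHECK(e) do { int _s = (e); if (_s != NC_NOERR) { \
    fprintf(stderr, "NetCDF error: %s\n", nc_strerror(_s)); exit(1); } } while (0)

int main(void)
{
    int ncid, dim_sample, var_raw;
    size_t start[1] = {0}, count[1];
    unsigned int block[4096] = {0};          /* one acquired data block (dummy content) */

    /* One file per signal and per pulse, as in the archiving schema of Section 2.1. */
    CHECK(nc_create("PULSE0001_SIG01.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
    CHECK(nc_def_dim(ncid, "sample", NC_UNLIMITED, &dim_sample));
    CHECK(nc_def_var(ncid, "raw", NC_UINT, 1, &dim_sample, &var_raw));
    CHECK(nc_put_att_text(ncid, NC_GLOBAL, "time_ref", 3, "TAI"));
    CHECK(nc_enddef(ncid));

    /* Append blocks along the unlimited dimension: no copies or redefinitions needed. */
    for (int b = 0; b < 10; b++) {
        count[0] = 4096;
        CHECK(nc_put_vara_uint(ncid, var_raw, start, count, block));
        start[0] += count[0];
    }
    CHECK(nc_close(ncid));
    return 0;
}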

2.4. Clustering storage

The clustering storage technology is based on the aggregation of different elements to create a common storage space. Its main characteristic is its ability to scale in size and performance, achieved by aggregating new components (nodes) to the system.

In addition, the clustering storage solution should have the following features:

• A unique file system view: All clients (writers and readers) should perceive this space as a unique and common file system.
• Fault tolerance capacity: In order to guarantee no data loss in case of node failure.

Within the FPSC project, some additional requirements were outlined for the clustering storage solution:

• RHEL and GEthernet compatibility: The technology must be compatible with RHEL systems and Gigabit Ethernet networks.
• Optimized for writing: FPSC systems require archiving all acquired data with no losses. The high data acquisition rates in FPSC systems will demand high writing data throughput.
• Concurrent (read/write): Long pulse operation demands access to archived data during the pulse. The cluster must be able to manage locks.

Finally, a Lustre-based solution [10] was implemented as a test bed. Apart from fulfilling all the previously described requirements, Lustre is a widely deployed and proven solution in many scientific installations, with a large supporting community behind it. Additionally, other important features can be remarked, such as fault tolerance in all its components, file striping (distribution) capacity over 160 nodes, support for clusters of up to 100,000 clients, and storage and performance scalability limited only by disk and network capacity.

The Lustre cluster configuration used in the tests is shown in Fig. 2. The cluster structure consists of one node running as a metadata service (MDS/MDT) and two nodes running as object storage services (OSS), with RAID0 local disk partitions as object storage targets (OST). The present implementation does not include a fault-tolerant Lustre configuration, as this would have implied duplicating the metadata nodes and sharing the storage targets between two storage service nodes. Instead, the main goals this work was concerned with were performance and the general validity of the solution, rather than testing fault tolerance features (thoroughly described in the Lustre documentation). From the network point of view, Lustre uses a proprietary protocol called LNET; in the case of Ethernet-based systems, this protocol works over standard TCP/IP connections.

Fig. 2. Lustre storage cluster configuration for the archiving tests.

3. Test scenarios

To test the validity of the archiving solution, three test scenarios have been implemented (a diagram of each of them is presented in Fig. 3). In all of them, the network used is a 1 Gigabit Ethernet, and the internal configuration of the FPSC is almost the same except for the archiving module, which is different for each scenario and depends on the archiving mechanism used. The common FPSC configuration contains:

• An FPGA FlexRIO acquisition module to acquire and distribute data to the rest of the system.
• An EPICS monitoring module to monitor a heavily decimated version of the acquired data.

3.1. First test scenario

In the first test scenario, the FPSC streams acquired data, using a TCP/IP socket, toward a remote archiving server. This server has the storage cluster mounted as a local file system and uses it like any other local storage device. The server is implemented using the multiprocessing "fork" technique, and a new process is run for every new received connection. Every piece of data received in the archiving server is stored in its local cluster file system using a NetCDF-4 file. This means having one file per data source (signal) and per pulse.

In this scenario, the FPSC uses the "stream archiving" module, specifically implemented for archiving data through a remote archiving server. This module, as any other archiving module, implements the primitives described in Section 2.2. Each of these primitives has a corresponding message in an archiving network protocol that has been developed ad hoc for this project, with the following elements (a hypothetical C sketch of these message headers is given after the list):

• initChannel message: Used for any new connection between the FPSC and the remote archiving server. It contains: a "1" integer to check the endianness of the client (the FPSC system), plus the client identifier and the time reference ("UTC" or "TAI").
• openSource: Used to tell the server that a new data source (a signal, for example) has been created. This message contains: the source identifier, the pulse identifier, and the type of data to be stored, together with a specific initialization header for it.
• storeBlock: This message is sent for every data block to store. It contains the seconds and nanoseconds fields of the time stamp, and the payload.

• closeSource: Used to mark the last data block of the data source in the current pulse.

• closeChannel: Used to indicate the end of the connection.
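The wire layout of these messages is not specified in the paper; the following C structures are a hypothetical sketch of how the initChannel, openSource and storeBlock messages could be laid out, with the leading integer "1" used by the server to detect the client's endianness. Field sizes and fixed-length identifiers are assumptions of ours.

/* Hypothetical layout of the ad hoc archiving protocol messages (assumed, not the
 * actual wire format). Sent over the TCP/IP socket between FPSC and archiving server. */
#include <stdint.h>

#define ARCH_TIME_REF_LEN 4     /* "UTC" or "TAI" plus terminator */
#define ARCH_ID_LEN       32    /* fixed-size identifiers (assumption) */

typedef struct {
    uint32_t endian_check;              /* always 1: server detects client endianness */
    char     client_id[ARCH_ID_LEN];    /* FPSC client identifier */
    char     time_ref[ARCH_TIME_REF_LEN];
} arch_init_channel_msg;

typedef struct {
    char     source_id[ARCH_ID_LEN];    /* signal identifier */
    char     pulse_id[ARCH_ID_LEN];     /* pulse identifier */
    uint32_t data_type;                 /* type of data to be stored */
    uint32_t header_len;                /* length of the type-specific init header that follows */
} arch_open_source_msg;

typedef struct {
    uint32_t ts_sec;                    /* time stamp: seconds */
    uint32_t ts_nsec;                   /* time stamp: nanoseconds */
    uint32_t payload_len;               /* length of the data block that follows */
} arch_store_block_msg;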

3.2. Second test scenario

In the second test scenario, the remote cluster is mounted on the FPSC as a local file system, and the "NetCDF archiving" module is used for archiving. In the implementation of this module, the received data blocks are stored in the cluster, as if it were a local device, using NetCDF-4 files. The "initChannel" and "closeChannel" primitives, described in Section 2.2, are used for state variable initialization, and the other functions ("openSource", "storeBlock" and "closeSource") correspond to the creation of a new NetCDF-4 file, the storage of a new data block, and the closing of the NetCDF-4 file, respectively.

3.3. Third test scenario

In the third scenario, the remote cluster is mounted on the FPSC as a local file system, and the archiving component (the "RAW archiving" module) manages ad hoc binary files. The format of these ad hoc files includes a general metadata header at the beginning of the file, after which data blocks are sequentially written as soon as they are received. The objective of this "raw" format is to minimize data processing in the archiving operation. In the implementation of the "raw archiving" module, received data blocks are stored in the cluster, as if it were a local device, using the previously described "raw" binary files. The "initChannel" and "closeChannel" primitives are used for state variable initialization, and the rest of the primitives ("openSource", "storeBlock" and "closeSource") correspond to the standard file system functions "open", "write" and "close".
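In this raw scenario the primitives map almost directly onto POSIX file calls on the Lustre mount. The sketch below shows the openSource/storeBlock/closeSource sequence as open, write and close on an ad hoc binary file; the mount point, file naming and header contents are illustrative assumptions, not the prototype's actual layout.

/* Raw archiving sketch: primitives mapped onto open/write/close on the Lustre mount.
 * The mount point, file naming and header layout are illustrative assumptions. */
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>

/* openSource: create the ad hoc binary file and write its general metadata header. */
static int raw_open_source(const char *path, const void *header, size_t header_len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd >= 0 && write(fd, header, header_len) != (ssize_t)header_len) {
        close(fd);
        return -1;
    }
    return fd;
}

/* storeBlock: append one received data block as soon as it arrives. */
static int raw_store_block(int fd, const void *block, size_t len)
{
    return write(fd, block, len) == (ssize_t)len ? 0 : -1;
}

int main(void)
{
    uint32_t header[2] = {1u, 0u};           /* dummy metadata header */
    unsigned int block[4096] = {0};          /* dummy acquired data block */

    int fd = raw_open_source("/mnt/lustre/PULSE0001_SIG01.raw", header, sizeof header);
    if (fd < 0)
        return 1;
    raw_store_block(fd, block, sizeof block);
    close(fd);                               /* closeSource */
    return 0;
}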

3.4. Test results

The results of the tests for the three different scenarios are presented in Table 1.

The poorest archiving throughput (24 MB/s) is obtained in Scenario 2. The reasons are:

1. Early saturation of the CPU of the FPSC system (96% on average) because of the intensive use of the NetCDF-4 libraries for archiving all the received data.
2. Poor thread management by the NetCDF-4 (HDF-5) libraries. These libraries currently implement a workaround for this problem which basically serializes all file operations.

Compared with Scenario 2, Scenario 1 improves the archiving throughput over NetCDF files (40 MB/s). In Scenario 1 the NetCDF libraries are run in a multi-process context, whereas in Scenario 2 they are run in a multi-thread context. As previously explained, this is one of the main problems with the NetCDF (HDF-5) libraries. Additionally, in Scenario 1 a CPU saturation problem is also detected in the remote archiving server when an intensive use of the NetCDF libraries is required.

The best archiving throughput is obtained in Scenario 3 (98 MB/s). On the one hand, something close to network saturation (for TCP connections) is observed, although no bottleneck is detected apart from that due to the network configuration. On the other hand, a very reasonable CPU usage of about 35% is observed, which preserves a good load margin for other tasks in the FPSC. In this scenario the system was able to archive 24 channels acquired at a sampling rate of 1 MS/s during more than 1800 s. Every sample was archived with a resolution of 32 bits (unsigned integer).

Fig. 3. Test scenarios for FPSC archiving system.

Table 1
Comparison of the measured archiving results.

          Archiving throughput    Av. CPU usage (before archiving)    Av. CPU usage (in archiving)
Scen. 1   40 MB/s                 20%                                 28%
Scen. 2   24 MB/s                 20%                                 96%
Scen. 3   98 MB/s                 20%                                 35%

4. Conclusions

Integrated in the FPSC prototype development project, a fast control archiving solution has been developed. The archiving solution has included NetCDF-4 files for data archiving and a storage system based on the Lustre clustering technology. A set of tests based on three different scenarios has revealed the strengths and weaknesses of each technology.

From the application of NetCDF-4 (completely based on HDF-5) to FPSC archiving, the most interesting conclusions are:

• The application of NetCDF-4 makes it quite easy to store new data types, completely and clearly defined in the NetCDF-4 self-description language. It also makes version control and compatibility simpler.


• NetCDF-4 is very well supported for many different programming languages, with very simple and powerful data access.
• NetCDF-4 is compatible with data block archiving.
• NetCDF-4 has a high CPU demand for high-rate data archiving processes.
• Multi-threading is not well managed by the NetCDF-4 libraries.

About the use of a clustering storage solution based on Lustre for FPSC data archiving, the most interesting conclusions are:

• Lustre provides a complete fault-tolerant solution.
• Lustre is simple to scale in size and performance.
• Lustre is flexible (new archiving formats can be incorporated in a simple way).
• Lustre is only valid for Linux clients, even though it is compatible with CODAC Core System.

A final conclusion from the presented results is that NetCDF-4 (and possibly HDF-5), as a FPSC data archiving format, is not recommendable at a first level in scientific archiving, because archiving very high rate data using the NetCDF-4 libraries saturates the CPUs. On the other hand, both are very interesting formats for storing data at a second level, where complex data accesses to intervals of the file would be required. As regards the use of a clustering storage technology, it incorporates clear and demanded features into the solution. In the case of using Lustre as a local file system, complex data formatting tasks could saturate the FPSC CPU. Incorporating remote archiving servers into the cluster would provide compatibility and low CPU demand for specific clients.

References

[1] A. Wallander, L. Abadie, H. Dave, F. Di Maio, et al., ITER instrumentation and control - status and plans, Fusion Engineering and Design 85 (3–4) (2010).
[2] EPICS home web page. http://www.aps.anl.gov/epics.
[3] R. Rew, G. Davis, NetCDF: an interface for scientific data access, IEEE Computer Graphics and Applications 10 (4) (1990) 76–82.
[4] NetCDF home web page. http://www.unidata.ucar.edu/software/netcdf/.
[5] G. Caesar, M. Wetzel, PXI Express: extending backplanes to 6 Gbyte/s while maintaining backwards compatibility, in: Autotestcon, IEEE, 2005, pp. 231–234.
[6] asynDriver home web page. http://www.aps.anl.gov/epics/modules/soft/asyn/.
[7] H. Nakanishi, M. Ohsuna, M. Kojima, S. Imazu, et al., Data acquisition system for steady-state experiments at multiple sites, Nuclear Fusion 51 (11) (2011) 113014.
[8] A. Luchetta, G. Manduchi, A. Barbalace, A. Soppelsa, C. Taliercio, Data acquisition in the ITER Ion Source experiment, in: Real Time Conference (RT), 17th IEEE-NPSS, 24–28 May 2010, pp. 1–5.
[9] HDF-5 home web page. http://www.hdfgroup.org/HDF5/.
[10] Lustre home web page. http://wiki.lustre.org/index.php/Main_Page.